| question | answer | context |
|---|---|---|
Why does it hail in warm weather?
|
Sleet is just normal rain which freezes prior to hitting the ground. Freezing rain is also just normal rain, but it freezes on contact with a surface. Hail, on the other hand, is formed by a completely different mechanism. The stones are created deep inside strong updrafts in powerful thunderstorms, very high up in the atmosphere where the air temperature is still below freezing (even if it is 80 degrees on the ground).
For more on how hailstones form, see [this page](_URL_0_).
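To get a feel for why the air aloft can be below freezing on an 80-degree day, here is a minimal back-of-the-envelope sketch. The 6.5 °C/km average environmental lapse rate and the round numbers are illustrative assumptions, not values from the answer above; real soundings vary from storm to storm.

```python
# Rough estimate of where the freezing level sits when it is 80 F at the ground.
# The 6.5 C/km average environmental lapse rate is an assumed typical value.
surface_temp_c = (80 - 32) * 5 / 9     # 80 F is about 26.7 C
lapse_rate_c_per_km = 6.5              # assumed average environmental lapse rate

freezing_level_km = surface_temp_c / lapse_rate_c_per_km
print(f"Freezing level roughly {freezing_level_km:.1f} km up")  # ~4.1 km
# Strong thunderstorm updrafts commonly extend to 10-15 km, well above this.
```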
|
[
"Hail fog sometimes occurs in the vicinity of significant hail accumulations due to decreased temperature and increased moisture leading to saturation in a very shallow layer near the surface. It most often occurs when there is a warm, humid layer atop the hail and when wind is light. This ground fog tends to be localized but can be extremely dense and abrupt. It may form shortly after the hail falls; when the hail has had time to cool the air and as it absorbs heat when melting and evaporating.\n",
"In the summer months, there are warm to hot humid conditions with frequent lake breezes to cool things off. Thunderstorms are regular occurrences in the summer and can sometimes be severe enough to cause tornadoes. Temperatures this time of year can range from 11*C to 26*C. The humidity will often make it feel much hotter than the actual temperature and can even make it feel like the mid 40 °C. \n",
"Hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Movement of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling which lowers the freezing level of thunderstorm clouds giving hail a larger volume to grow in. Accordingly, hail is less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater altitude. Hail in the tropics occurs mainly at higher elevations.\n",
"Hail forms in strong thunderstorm clouds, particularly those with intense updrafts, high liquid water content, great vertical extent, large water droplets, and where a good portion of the cloud layer is below freezing . Hail-producing clouds are often identifiable by their green coloration. The growth rate is maximized at about , and becomes vanishingly small much below as supercooled water droplets become rare. For this reason, hail is most common within continental interiors of the mid-latitudes, as hail formation is considerably more likely when the freezing level is below the altitude of . Entrainment of dry air into strong thunderstorms over continents can increase the frequency of hail by promoting evaporational cooling which lowers the freezing level of thunderstorm clouds giving hail a larger volume to grow in. Accordingly, hail is actually less common in the tropics despite a much higher frequency of thunderstorms than in the mid-latitudes because the atmosphere over the tropics tends to be warmer over a much greater depth. Hail in the tropics occurs mainly at higher elevations.\n",
"Strong winds often develop around dry thunderstorms as the evaporating precipitation causes excessive cooling of the air beneath the storm, which increases its density and thereby its weight relative to the surrounding air. This cool air then descends rapidly and fans out upon impacting the ground, an event often described as a dry microburst. As the gusty winds expand outward from the storm, dry soil and sand are often picked up by the strong winds, creating dust and sand storms known as haboobs.\n",
"Unstable spring weather may occur more often when warm air begins to invade from lower latitudes, while cold air is still pushing from the Polar regions. Flooding is also most common in and near mountainous areas during this time of year, because of snow-melt which is accelerated by warm rains. In North America, Tornado Alley is most active at this time of year, especially since the Rocky Mountains prevent the surging hot and cold air masses from spreading eastward, and instead force them into direct conflict. Besides tornadoes, supercell thunderstorms can also produce dangerously large hail and very high winds, for which a severe thunderstorm warning or tornado warning is usually issued. Even more so than in winter, the jet streams play an important role in unstable and severe Northern Hemisphere weather in springtime. \n",
"Fronts are the principal cause of significant weather. \"Convective precipitation\" (showers, thundershowers, and related unstable weather) is caused by air being lifted and condensing into clouds by the movement of the cold front or cold occlusion under a mass of warmer, moist air. If the temperature differences of the two air masses involved are large and the turbulence is extreme because of wind shear and the presence of a strong jet stream, \"roll clouds\" and tornadoes may occur.\n"
] |
How are vaccination schedules determined?
|
There are many things going on when deciding on vaccine schedules. Off the top of my head, there are:
**Maternal antibodies** (the mother will transfer some antibodies to the infant that last for a couple of months and might interfere with the vaccine response)
**Risk of side effects** of the vaccine versus severity of the disease should infection occur (it is common for medical personnel to delay vaccination of preterm babies - this is often counterproductive, since the consequences of e.g. whooping cough are especially severe in preterms)
**When** the disease that is vaccinated against tends to occur.
**Burden to the parents**. Asking the parents to vaccinate their children too often results in a drop in compliance and acceptance, which is a big issue. Therefore multiple vaccinations tend to be pooled at a single point in time that is a good compromise between optimal time periods for the individual vaccines.
**Ease of administration.** Ideally, you want to time the vaccinations so that they come jointly with other routine contacts with the health care, such as follow-up visits after the child is born, school entry and so on. This factor can make schedules in different countries diverge quite a bit.
**Whether the vaccine is good enough** that a single dose gives good protection, or whether several doses are needed. This touches on your other question regarding booster necessity. Basically, decisions on boosters are based on research on what level of antibody presence is deemed protective, together with trial-and-error clinical trials; you first conduct a clinical trial with a certain schedule set up to the best of your knowledge, and if the vaccine is not quite protective enough you may try and add a booster. In the case of late boosters (e.g. three doses at 3, 5 and 12 months of age, and then a 5-year booster), you take into account immunological and epidemiological research regarding the decline of vaccine effectiveness over time.
Regarding the duration between doses, there can be a bigger risk in taking the doses too close in time than in spacing them out more. If they are taken close enough, the immune system might just ignore the extra provocation that the new dose represents, and you won't get any benefit at all from the booster. Waiting a bit longer just means that you have a longer period of not-quite-optimal protection.
[CDC](_URL_1_) has a nice overview that at least partly talks about what influences various decisions. You might also be interested in reading the English summary on page 36ff of [the Swedish report on revision of vaccine schedules](_URL_0_) that was delivered in 2010.
Disclaimer: I'm a statistician who has been working on epidemiological analysis of vaccine schedules for B. pertussis, not an MD. There are probably several simplifications and mistakes in the above.
|
[
"An alternative vaccination schedule is a vaccination schedule differing from the schedule endorsed by the Advisory Committee on Immunization Practices (ACIP). These schedules may be either written or \"ad hoc\", and have not been tested for their safety or efficacy. Proponents of such schedules aim to reduce the risk of adverse effects they believe to be caused by vaccine components, such as \"immune system overload\" that is argued to be caused by exposure to multiple antigens. Parents who adopt these schedules tend to do so because they are concerned about the potential risks of vaccination, rather than because they are unaware of the significance of vaccination's benefits. However, use of alternative vaccination schedules is associated with an increased risk of vaccine-preventable disease.\n",
"The schedule of childhood immunizations in the United States is given by the Centers for Disease Control and Prevention (CDC). The vaccination schedule is broken down by age: birth to six years of age, seven to eighteen, and adults nineteen and older.\n",
"In Germany, a vaccination schedule is developed by the Standing Committee on Vaccination (STIKO), which operates as part of the Robert Koch Institute. The recommendations are generally adopted by the Federal Joint Committee.\n",
"The vaccination requirement includes the following vaccinations: Mumps, Measles, Rubella, Tetanus, diphtheria, Meningococcal disease, Pneumococcal disease, Haemophilus influenzae type B, Rotavirus, Varicella, Influenza, Hepatitis A and B, Pertussis, and Polio. These requirements are established by the Advisory Committee on Immunization Practices (ACIP).\n",
"The World Health Organization monitors vaccination schedules across the world, noting what vaccines are included in each country's program, the coverage rates achieved and various auditing measures. The table below shows the types of vaccines given in example countries. The WHO publishes on its website current vaccination schedules for all WHO member states.\n",
"In these studies, data on childhood vaccinations were typically collected in periodic surveys, and the information on vaccinations, which occurred between successive home visits, was updated at the time of the second visit. The person-time at risk in unvaccinated and vaccinated states was then divided up according to the date of vaccination during the time interval between visits. This method opens up a potential bias, insofar as the updating of person time at risk from unvaccinated to vaccinated is only possible for children who survive to the second follow-up. Those who die between visits typically do not have vaccinations between the first visit and death recorded, and thus they will tend to be allocated as deaths in unvaccinated children – thus incorrectly inflating the mortality rate among unvaccinated children.\n",
"The vaccination requirement was added with the Illegal Immigration Reform and Immigrant Responsibility Act of 1996. Vaccinations were required for “nine ‘vaccine-preventable diseases’” including: “mumps, measles, rubella, polio, tetanus and diphtheria toxoids, pertussis, influenza type B and hepatitis B”.\n"
] |
Does soap attack or dissolve phospholipid bilayers? (cell membranes and such)
|
Since there's already a thread discussing question 1, I'll start one for discussing question 2.
Phospholipids *do* form micelles.
Amphiphilic molecules like phospholipids and soaps are lyotropic liquid crystalline molecules. A liquid crystalline molecule exists in a phase of matter that is intermediate between a solid and a liquid. A lyotropic LC, specifically, has phase behavior that changes as its concentration increases. By contrast, the other major type, thermotropic liquid crystals, changes phase based on temperature. The LCs in your television, for example, are of the thermotropic type. Most biological molecules that have liquid crystalline phase behavior are lyotropic. Phospholipids, lysophospholipids, and cholesterol all have these phase behaviors and are the most well studied, but in recent years concentration-dependent formation of LC phases has been found in DNA and certain filament-forming proteins. Anyway, I digress.
As I mentioned, the phase behaviors of soaps and phospholipids are dependent upon concentration. Micelles and bilayers are simply different aspects of this same basic phenomenon. Refer to this image:
_URL_0_
As you can see in this image, you can construct a phase diagram (much like the phase diagrams that every 1st year chemistry student sees) of the different phase subtypes. Note that the axes are not temperature and pressure but temperature and amphiphile concentration.
Below a certain concentration -- called the critical micelle concentration or CMC -- amphiphilic molecules are dispersed in solution. This is intuitive but still contrary to what most intro bio and intro chemistry students are taught. Above the CMC for a given temperature, entropy effects drive amphiphilic molecules to aggregate into micelles. As the concentration continues to rise, amphiphilic molecules move through a variety of exotic LC phases before finally forming the lamellar phase at high concentration, relatively speaking.
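If it helps to put a number on the CMC idea, here is a minimal sketch using the simple pseudo-phase approximation for a nonionic amphiphile. The 8 mM CMC input is a made-up illustrative value, not a property of any specific soap or phospholipid discussed here, and the model ignores counterion effects that matter for ionic surfactants.

```python
import math

R = 8.314              # J / (mol K), gas constant
T = 298.15             # K, room temperature
WATER_MOLARITY = 55.5  # mol/L, approximate molarity of pure water

def micellization_free_energy(cmc_molar: float) -> float:
    """Pseudo-phase estimate of the standard free energy of micellization
    per mole of monomer for a nonionic amphiphile:
        dG ~ R * T * ln(x_CMC)
    where x_CMC is the CMC expressed as a mole fraction."""
    x_cmc = cmc_molar / WATER_MOLARITY
    return R * T * math.log(x_cmc)

# Hypothetical amphiphile with a CMC of 8 mM (illustrative only).
dG = micellization_free_energy(0.008)
print(f"Estimated dG_mic ~ {dG / 1000:.1f} kJ/mol")  # negative => aggregation favorable
```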
This phase diagram will be different for soaps/surfactants than for phospholipids, and different again for cholesterol, etc.
To the point made by Hmmhowaboutthis, geometry is important but not determinative. Surfactants do still form lamellar phases, and phospholipids do still form micelles; however, the geometry of surfactant molecules tends to stabilize micellar phases, while the geometry of phospholipids stabilizes lamellar phases.
|
[
"The surface charge of endothelial cells at points of diffusivity can determine which type of molecule can diffuse through the capillary walls. If the surface is hydrophilic, it will allow water and charged molecules to pass through. If it is hydrophobic, non-charged and lipophilic molecules will be able to diffuse through. These intermolecular screening forces are also known as Van der Waals forces, which is determined by the Keesom, Debye and London Dispersion forces. The lipid bilayer of an endothelial cell membrane is a hydrophobic surface. The non-polar lipids lead to a very high electrical resistivity, given by:\n",
"The lipoprotein particles have hydrophilic groups of phospholipids, cholesterol, and apoproteins directed outward. Such characteristics make them soluble in the salt water-based blood pool. Triglyceride-fats and cholesteryl esters are carried internally, shielded from the water by the phospholipid monolayer and the apoproteins.\n",
"Extravasation of intravenous sodium bicarbonate has been reported to cause chemical cellulitis because of its alkalinity, resulting in tissue necrosis, ulceration and/or sloughing at the site of infiltration. This condition is managed by prompt elevation of the part, warmth and local injection of lidocaine or hyaluronidase.\n",
"A compound that has two immiscible hydrophilic and hydrophobic parts within the same molecule is called an amphiphilic molecule. Many amphiphilic molecules show lyotropic liquid-crystalline phase sequences depending on the volume balances between the hydrophilic part and hydrophobic part. These structures are formed through the micro-phase segregation of two incompatible components on a nanometer scale. Soap is an everyday example of a lyotropic liquid crystal.\n",
"Characteristic of soaps, sodium stearate has both hydrophilic and hydrophobic parts, the carboxylate and the long hydrocarbon chain, respectively. These two chemically different components induce the formation of micelles, which present the hydrophilic heads outwards and their hydrophobic (hydrocarbon) tails inwards, providing a lipophilic environment for hydrophobic compounds.The tail part dissolves the grease (or) dirt and forms the micelle. It is also used in the pharmaceutical industry as a surfactant to aid the solubility of hydrophobic compounds in the production of various mouth foams.\n",
"Polyhexamethylene biguanide hydrochloride is a fast-acting, broad-spectrum synthetic compound that binds to the cell envelope of both Gram-positive and Gram-negative bacteria, disrupting the bacterial cell membrane and enabling seepage of ions. PHMB has a long history of use as a contact lens cleanser, mouthwash and more recently in wound care.\n",
"As a phospholipid bilayer, the lipid portion of the outer membrane is largely impermeable to all charged molecules. However, channels called porins are present in the outer membrane that allow for passive transport of many ions, sugars and amino acids across the outer membrane. These molecules are therefore present in the periplasm, the region between the plasma membrane and outer membrane. The periplasm contains the peptidoglycan layer and many proteins responsible for substrate binding or hydrolysis and reception of extracellular signals. The periplasm is thought to exist as a gel-like state rather than a liquid due to the high concentration of proteins and peptidoglycan found within it. Because of its location between the cytoplasmic and outer membranes, signals received and substrates bound are available to be transported across the cytoplasmic membrane using transport and signalling proteins imbedded there.\n"
] |
How do Americans get 115V?
|
Most of us do get 240V to the home, as a single phase, but that phase is "split": the transformer that supplies the house is center-tapped, and that center tap is pinned to ground, so there's effectively a +120 and a -120. Obviously not really positive and negative since it's AC, but they're 180° out of phase. So some circuits get one polarity, some get the other, and if we need the full 240 (for an air conditioner, the stove, etc.) we use both.
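A minimal numeric sketch of that split-phase arrangement, assuming ideal 60 Hz sine waves and round 120 V legs (values chosen purely for illustration):

```python
import numpy as np

F = 60.0           # Hz, US line frequency
V_RMS_LEG = 120.0  # RMS voltage of each leg relative to the grounded center tap

t = np.linspace(0, 1 / F, 1000, endpoint=False)  # one full cycle
leg_a = V_RMS_LEG * np.sqrt(2) * np.sin(2 * np.pi * F * t)          # "+120" leg
leg_b = V_RMS_LEG * np.sqrt(2) * np.sin(2 * np.pi * F * t + np.pi)  # "-120" leg, 180 deg shifted

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

print(f"leg A to neutral: {rms(leg_a):.0f} V RMS")          # ~120 V
print(f"leg B to neutral: {rms(leg_b):.0f} V RMS")          # ~120 V
print(f"leg A to leg B:   {rms(leg_a - leg_b):.0f} V RMS")  # ~240 V
```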
|
[
"Illinois 115 is a north–south highway from Perdueville to Buckingham; at Buckingham, it turns east but is still marked north–south. South of Kankakee, Illinois 115 turns north again on its way into Kankakee.\n",
"Route 115 is a highway in the St. Louis, Missouri area. Its western terminus is at exit 237 of Interstate 70 (I-70) in Berkeley near Lambert-St. Louis International Airport. Route 115's eastern terminus is also at I-70, at exit 248A, in St. Louis, near the McKinley Bridge. The road is locally known as Natural Bridge Road, Natural Bridge Avenue, and Salisbury Street. It is one of two Missouri Highways that has an odd-numbered designation, yet runs in an east–west direction.\n",
"TxDOT is moving forward with designating I-14 along US 190 from Copperas Cove to I-35 in Belton. The American Association of State Highway and Transportation Officials (AASHTO) originally denied approval of TxDOT's request for the number at their May 24, 2016, meeting of the Special Committee on U.S. Route Numbering, their body responsible for approving designations in the United States Numbered Highway System and Interstate Highway System. The FHWA and AASHTO subsequently approved the I-14 designation. The Texas Transportation Commission made the I-14 number official on January 26, 2017. The official signage ceremony was held April 22, 2017 in Killeen, Texas on the Central Texas College campus. More I-14 signs went up over the next few weeks.\n",
"U.S. Route 150 (abbreviated Route 150 or U.S. 150) is a 571-mile (919 km) long northwest-southeast United States highway, signed as east–west. It runs from U.S. Route 6 outside of Moline, Illinois to U.S. Route 25 in Mount Vernon, Kentucky .\n",
"On May 13, 2010, Mayor Joseph DuPar and the Village Board approved renaming 127th Street as Obama Drive, in honor of the 44th President of the United States. On August 21, 2010, State Senator Emil Jones III read a proclamation of the Illinois Senate in honor of the dedication on the same date. This road became the first Obama Drive in the country and the first road named after President Barack Obama in his home state of Illinois. \n",
"I-35 splits again into I-35W and I-35E in the Minneapolis/Saint Paul, Minnesota area. The mile- and exit-numbering sequence continues along I-35E. At one sharp turn in I-35W near the junction with I-94, drivers are advised to slow to 35 mph (55 km/h) (although many drivers are able to maintain the speed limit of 55 mph (90 km/h)). It is not possible to go from westbound I-94 to northbound I-35W, from southbound I-35W to eastbound I-94, and vice versa, without resorting to surface streets.\n",
"Though the name is not used among locals, the entire portion of U.S. 14 in Illinois is given the honorary name Ronald Reagan Highway, which is not to be confused with the Ronald Reagan Memorial Tollway. U.S. 14 in Illinois is in length.\n"
] |
Why does the trait of skin color mix together into a shade rather than being one or the other, like eye color?
|
Most interesting traits are polygenic - they are controlled by many different genes, not just one or two.
When we’re taught introductory genetics, we’re taught inheritance with Mendelian diagrams showing how one dominant/recessive gene set is inherited. Later maybe we’re taught how two or three genes get inherited, and maybe given a few words on co-dominant and more complicated combinations.
This is all true, and it does apply to a handful of traits - eye color is one of the few (though even there, real life is more complicated than the simplistic versions we’re taught). But more interesting inherited traits, like height and skin color and propensity to heart disease and so on, are influenced by dozens, hundreds, or thousands of genes, all interacting together.
If you think about the apparently complex inheritance patterns you get with, say, three genes, imagine what happens with a thousand different genes. Everything ends up blending together, and instead of a tidy on/off appearance, you get what looks like a normal distribution of possible outcomes.
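As a rough illustration of that blending effect, here is a minimal simulation sketch. The equal-effect additive model, the 0.5 allele frequency, and the gene counts are simplifying assumptions chosen for illustration; real skin-colour genetics involves unequal effects, interactions, and environment.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PEOPLE = 10_000

def simulate_trait(n_genes: int) -> np.ndarray:
    """Toy additive model: each gene contributes 0, 1, or 2 'dark' alleles,
    every allele adding equally to the trait value (illustrative only)."""
    alleles = rng.binomial(2, 0.5, size=(N_PEOPLE, n_genes))
    return alleles.sum(axis=1) / (2 * n_genes)  # normalise to the 0..1 range

one_gene = simulate_trait(1)       # only 3 possible values -> looks "either/or"
many_genes = simulate_trait(1000)  # smooth, roughly normal spread of shades

for name, trait in [("1 gene", one_gene), ("1000 genes", many_genes)]:
    print(f"{name}: {len(np.unique(trait))} distinct values, std = {trait.std():.3f}")
```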
Historically, this polygenic inheritance led to lots of confusion. Darwin never figured out inheritance properly, while Mendel did. Is that because Mendel did the experiments and Darwin didn't? In fact, Darwin did exactly the right experiments, almost identical to Mendel's. The difference is that Mendel was lucky, and looked at plants (peas) and traits (wrinkled/smooth, etc.) that have simple one-gene, two-allele inheritance: the unusual, exceptional situation that allows you to figure out what's going on. Darwin looked at primroses ([Darwin, C. R., 1877. The Different Forms of Flowers on Plants of the Same Species. John Murray, London.](_URL_1_)), where such simple traits are not as obvious, and he never realized that what he was seeing was not blending inheritance.
(I’ve read that there are simple inheritance models in primroses and that Darwin could have spotted them if he’d been primed for it, so Mendel still gets credit for understanding what he was seeing in his model.)
Polygenic inheritance looks like [blending inheritance](_URL_0_), which was the main explanation for inheritance in Darwin's time and until Mendel was rediscovered. The difference is that (as Darwin understood) evolution by natural selection cannot work with blending inheritance; it needs Mendelian (particulate) inheritance for his theories to work.
|
[
"Skin colour is a polygenic trait, which means that several different genes are involved in determining a specific phenotype. Many genes work together in complex, additive, and non-additive combinations to determine the skin colour of an individual. The skin colour variations are normally distributed from light to dark, as it is usual for polygenic traits.\n",
"The genetic mechanism behind human skin color is mainly regulated by the enzyme tyrosinase, which creates the color of the skin, eyes, and hair shades. Differences in skin color are also attributed to differences in size and distribution of melanosomes in the skin. Melanocytes produce two types of melanin. The most common form of biological melanin is eumelanin, a brown-black polymer of dihydroxyindole carboxylic acids, and their reduced forms. Most are derived from the amino acid tyrosine. Eumelanin is found in hair, areola, and skin, and the hair colors gray, black, blond, and brown. In humans, it is more abundant in people with dark skin. Pheomelanin, a pink to red hue is found in particularly large quantities in red hair, the lips, nipples, glans of the penis, and vagina.\n",
"Kentner emphasizes that it is skin color rather than hair or eye color that serves as the base from which a color analysis must start. The color of a person's skin determines whether that individual should be classified as a Summer, a Winter, a Spring, or an Autumn. This can cause confusion, because the color of the hair may be the first thing that strikes the observer's eye (particularly if the hair color is dramatic). Thus, \"even though [one palette of] colors work best for [a particular person's] complexion, the individual may look like another Season because of haircoloring...I call this their secondary Season.\" The color of the hair and eyes serve to heighten the appeal of certain color choices for clothing and makeup, and to rule out certain other choices, but all such choices must be made from within the palette that is compatible with the shade of the skin.\n",
"The two-gene model does not account for all possible shades of brown, blond, or red (for example, platinum blond versus dark blond/light brown), nor does it explain why hair color sometimes darkens as a person ages. Several gene pairs control the light versus dark hair color in a cumulative effect. A person's genotype for a multifactorial trait can interact with the environment to produce varying phenotypes (see quantitative trait locus).\n",
"Approximately 10% of the variance in skin color occurs within regions, and approximately 90% occurs between regions. Because skin color has been under strong selective pressure, similar skin colors can result from convergent adaptation rather than from genetic relatedness; populations with similar pigmentation may be genetically no more similar than other widely separated groups. Furthermore, in some parts of the world where people from different regions have mixed extensively, the connection between skin color and ancestry has substantially weakened. In Brazil, for example, skin color is not closely associated with the percentage of recent African ancestors a person has, as estimated from an analysis of genetic variants differing in frequency among continent groups.\n",
"On average, and after the end of puberty, males have darker hair than females and according to most studies they also have darker skin—male skin is also redder, but this is due to greater blood volume rather than melanin). Male eyes are also more likely to be one of the darker eye colors. Conversely, women are lighter-skinned than men in all human populations. The differences in color are mainly caused by higher levels of melanin in the skin, hair and eyes in males. In one study, almost twice as many females as males had red or auburn hair. A higher proportion of females were also found to have blond hair, whereas males were more likely to have black or dark brown hair. Another study found green eyes, which are a result of lower melanin levels, to be much more common in women than in men, at least by a factor of two. However, one more recent study found that while women indeed tend to have a lower frequency of black hair, men on the other hand had a higher frequency of platinum blond hair, blue eyes and lighter skin. According to this one theory the cause for this is a higher frequency of genetic recombination in women than in men, possibly due to sex-linked genes, and as a result women tend to show less phenotypical variation in any given population.\n",
"The actual skin color of different humans is affected by many substances, although the single most important substance is the pigment melanin. Melanin is produced within the skin in cells called melanocytes and it is the main determinant of the skin color of darker-skinned humans. The skin color of people with light skin is determined mainly by the bluish-white connective tissue under the dermis and by the hemoglobin circulating in the veins of the dermis. The red color underlying the skin becomes more visible, especially in the face, when, as consequence of physical exercise or the stimulation of the nervous system (anger, fear), arterioles dilate. Color is not entirely uniform across an individual's skin; for example, the skin of the palm and the sole is lighter than most other skin, and this is especially noticeable in darker-skinned people.\n"
] |
how do i perceive myself as a single entity, when i'm actually composed of a group of cells that are each self replicating blocks of life?
|
You just asked "what is the nature of consciousness?" People have been debating and researching that for millenia.
|
[
"The individuality of a being is a certain intricate form, not an enduring substance. In order to understand an organism, it must be thought of as a pattern which maintains itself through homeostasis – life continues by maintaining an internal balance of various factors such as temperature and molecular structure. While the material substances that compose a living being may be constantly replaced by nearly identical ones, an organism continues functioning with the same identity as long as the pattern is kept sufficiently intact. Since patterns can be transmitted, modified, or duplicated, they are therefore a kind of information. Based on this, Wiener suggests it should be theoretically possible to transmit the entirety of a living person as a message (which is practically indistinguishable from the concept of physical teleportation) – although he admits that the obstacles to such a process would be great, because of the enormous amount of information embodied in a person, and the difficulty of reading or writing it.\n",
"Cells occasionally discover one another and band together for strength and mutual support. When multiple cells get together in a region, the organization often acquires an independent identity, a group structure known as a compact.\n",
"Cells in many ways can be seen as their own form of naturally occurring wetware, similar to the concept that the human brain is the preexisting model system for complex wetware. In his book \"Wetware: A Computer in Every Living Cell\" (2009) Dennis Bray explains his theory that cells, which are the most basic form of life, are just a highly complex computational structure, like a computer. To simplify one of his arguments a cell can be seen as a type of computer, utilizing its own structured architecture. In this architecture, much like a traditional computer many smaller components operate in tandem to receive input, process the information, and compute an output. In an overly simplified, and non-technical analysis cellular function can be broken into the following components. Information and instructions for execution are stored as DNA in the cell, RNA acts as a source for distinctly encoded input which processed by ribosomes and other transcription factors to access and process the DNA and to output a protein. Bray's argument in favor of viewing cells and cellular structures as models of natural computational devices is important when considering the more applied theories of wetware in relation to biorobotics.\n",
"The term \"cell group\" is derived from biology: the cell is the basic unit of life in a body. In a metaphorical sense, just as a body is made up of many cells that give it life, the cell church is made of cell groups that give it life. \n",
"Organisms that are highly complex execute the function of telling self from other with the brain and the immune system. To fulfill the function of recognizing self from other, the brain uses past experiences and genetic inheritance (e.g. survival, reproduction). The self is defined by these functions that distinguish an organisms from other organisms, which allow them to act as one whole entity in social and physical environments. Simply put, the theory revolves around the idea that the brain constitutes the self, which represents itself in a variety of internal states.\n",
"When scaled in the opposite direction, Hughes-Jones makes the argument that \"social groups that fight each other are self‐sustaining, self‐replicating wholes containing interdependent parts\" indicating that the group as a whole can have self-preservation with the individuals acting as the cells.\n",
"In order to have coherent thoughts, I must have an \"I\" that is not changing and that thinks the changing thoughts. Yet we cannot prove that there is a permanent soul or an undying \"I\" that constitutes my person. I only know that I am one person during the time that I am conscious. As a subject who observes my own experiences, I attribute a certain identity to myself, but, to another observing subject, I am an object of his experience. He may attribute a different persisting identity to me. In the third paralogism, the \"I\" is a self-conscious person in a time continuum, which is the same as saying that personal identity is the result of an immaterial soul. The third paralogism mistakes the \"I\", as unit of apperception being the same all the time, with the everlasting soul. According to Kant, the thought of \"I\" accompanies every personal thought and it is this that gives the illusion of a permanent I. However, the permanence of \"I\" in the unity of apperception is not the permanence of substance. For Kant, permanence is a schema, the conceptual means of bringing intuitions under a category. The paralogism confuses the permanence of an object seen from without with the permanence of the \"I\" in a unity of apperception seen from within. From the oneness of the apperceptive \"I\" nothing may be deduced. The \"I\" itself shall always remain unknown. The only ground for knowledge is the intuition, the basis of sense experience.\n"
] |
What were the attitudes of the labour movements in New Zealand and Australia towards the indigenous populations?
|
This question is made for /u/w2red and /u/Algernon_Asimov but I can answer for one union in particular...
The North Australian Workers’ Union (AWU) was founded in the Northern Territory around 1911. Their history is covered in this doctoral thesis by Bernie Brian:
_URL_0_
To quote from there:
> For the most part the union movement was not interested in the plight of Aboriginal workers except for when they competed with ‘white’ union members for jobs. The only exception to this was when members of the Communist Party were leading the union in the period immediately after the Second World War.
This, however, puts it a little mildly. The AWU clashed frequently with the administration of the NT, with several rounds of strikes, and eventually succeeded in getting the administrator (Gilruth) removed in what is now known as the Darwin Rebellion; this also led to the Territory getting direct representation in federal parliament...
One of the many issues they clashed over was that Gilruth was employing Chinese and Aboriginal labour in state hotels, and occasionally paying them equal wages, which outraged the AWU. As Brian puts it:
> many members of the NAWU and its predecessors were aggressive proponents of the racist white Australia policy and callously disregarded the plight of Aboriginal workers.
Social relations in Darwin at the time are, in most sources, typified along these lines:
> a leading member of the Communist Party, remembers the main past-time in Darwin as drinking, gambling and ‘chasing gins’ (slang term for Aboriginal women).
This changed post-war. Jack McGuinness, who was head of the union for a while, did quite a lot for indigenous rights, as did many other unions Australia-wide...
|
[
"New Zealand's claims to be a classless society were dealt a fatal blow in the 1980s and 1990s by the economic reforms of the fourth Labour government and its successor, the fourth National government. A cultural shift also took place due to the economic and social impact of international capital, commerce and advertising. New Zealanders were exposed to a previously unknown array of consumer goods and franchises. Aided by overseas programming, commercial radio and TV stations enjoyed rapid growth. Local manufacturing suffered from cheap imports, with many jobs lost. These reforms led to a dramatic increase in the gap between the richest and poorest New Zealanders, and an increase in the numbers living in poverty. Recent appreciation of real estate values increased the wealth of a generation of landowners while making housing unaffordable for many. Some are concerned that a New Zealand property bubble may burst, potentially wiping out considerable wealth.\n",
"Labour's progressive social and cultural policies, which encouraged biculturalism and the growth of Māori culture, may have caused a backlash amongst working class Pakeha, who had traditionally supported Labour. The cancellation of the proposed Springbok Tour was particularly unpopular. Many disliked and distrusted what Kirk's government was doing, but found Muldoon's style and message strongly appealing. This shift, along with the appeal the government's policies had for many middle class intellectuals, helped to change the culture of both parties, in Labour's case permanently. Under Muldoon, National had much more working class support than previously or since. The third Labour government's policies attracted a large university-educated liberal contingent to Labour, transforming the party from its working class, trade union roots. This shift in party culture explains how the fourth Labour government's policies differed so dramatically from those of its predecessors. As a result of this change, Kirk was to be the second to last Labour leader (the last being Mike Moore) to come from a working class and union background rather than be university-educated.\n",
"The Australian Labour Movement laid the blame for the poor social conditions of the 1930s in the capital cities squarely at the feet of the private landlords as well as the State and Commonwealth banks, which were popularly understood to also have played a critical role in the 1890s economic crash. From the political right, there was slowly amounting pressure to reclaim the slum sites (often formed in seaside areas in Australian capital cities) for more economically productive activities. At the same time, proponents of the emerging town planning movement in Australia began actively arguing for State and national government involvement in developing town plans as a means of slum eradication. Inspired by the garden city movement in the United Kingdom, it was thought that the rationalization of urban development via town planning could not only improve the urban environment, but also social behavior. In the case of Victoria, Methodist social reformer Frederick Oswald Barnett during this period was particularly influential in creating wider popular concern (as well as voyeuristic interest) in the awful condition of the slums in Melbourne.\n",
"New Zealand social policy has tended to oscillate between social progressiveness and conservatism. Social reforms pioneered by New Zealand include women's suffrage, the welfare state, and respect for indigenous peoples (through the Treaty of Waitangi and the Waitangi Tribunal). Having led the (non-communist) world in economic regulation from the 1930s, in the 1980s and 1990s the reforms of the Labour Government led the world in economic de-regulation. New Zealand was the first country to have an openly transgender mayor, and later member of parliament, Georgina Beyer. Same-sex marriage has been legal in New Zealand since 19 August 2013.\n",
"New Zealand's claims to be a classless society were seriously undermined in the 1980s and 1990s by the economic reforms of the fourth Labour government and its successor, the fourth National government. The reforms (sometimes called \"Rogernomics\") made by these governments severely weakened the power of unions, removed a lot of protection from workers, cut social welfare benefits and made state housing less affordable. After these reforms, the gap between rich and poor New Zealanders was increased dramatically, with the incomes of the richest 10% of New Zealanders advancing while the other 90% stayed largely static. In addition the number of New Zealanders living in poverty is much higher than in the 1970s. In an article entitled \"Countries with the Biggest Gaps Between Rich and Poor\", BusinessWeek ranked New Zealand at 6th in the world:\n",
"Historically, Labor and its affiliated unions were strong opponents of non-British immigration, expressed as the White Australia policy which barred all non-European migration to Australia. Besides the 19th century pseudo-scientific theories about \"racial purity\", the main labour concern was the fear of economic competition from immigrants prepared to accept low-wage, views which were shared by the vast majority of Australians and all major political parties. In practice the labour movement opposed all migration, on the grounds that immigrants competed with Australian workers and drove down wages. This objection continued until after World War II, when the Chifley Government launched a major immigration program. The party's opposition to non-European immigration did not change until after the retirement of Arthur Calwell as leader in 1967. Subsequently, Labor has become an advocate of multiculturalism, although some of its trade union base and some of its members continued to oppose high immigration levels.\n",
"The post war years saw the Australian labour movement support Indigenous Australians in their fight for human rights, cultural rights and native title, through supporting the 1946 Pilbara strike, The Gurindji Strike at Wave Hill in the Northern Territory, equal pay for aborigines and Torres Strait Islanders, and support for the Noonkanbah people in their land rights dispute with the Western Australian Government over mining companies disturbing sacred sites.\n"
] |
how do seasons work around the world? is it summer everywhere, or is it just summer on a part of the world?
|
When it is summer in the northern hemisphere, it is winter in the southern hemisphere. When it is hot in the US, it is cold in Australia. When the days are long in the US, they are short in Australia. This is due to the tilted axis of the earth, resulting in the different hemispheres getting different amounts of sunlight as we travel around the sun.
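As a rough way to see the effect in numbers, here is a minimal sketch using the textbook sunrise equation. The simple cosine fit for solar declination and the two chosen latitudes are illustrative assumptions, and it ignores atmospheric refraction.

```python
import math

def day_length_hours(latitude_deg: float, day_of_year: int) -> float:
    """Approximate daylight hours from the standard sunrise equation,
    with a simple cosine approximation for solar declination."""
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))  # clamp for polar day / polar night
    return 2.0 * math.degrees(math.acos(x)) / 15.0

# Around the June solstice (day ~172): long days up north, short days down south.
print(f"New York (41 N): {day_length_hours(41, 172):.1f} h")   # ~15 h
print(f"Sydney   (34 S): {day_length_hours(-34, 172):.1f} h")  # ~10 h
```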
|
[
"A season is a division of the year marked by changes in weather, ecology, and amount of daylight. On Earth, seasons result from Earth's orbit around the Sun and Earth's axial tilt relative to the ecliptic plane. In temperate and polar regions, the seasons are marked by changes in the intensity of sunlight that reaches the Earth's surface, variations of which may cause animals to undergo hibernation or to migrate, and plants to be dormant. Various cultures define the number and nature of seasons based on regional variations.\n",
"Seasons result from the tilt of the Earth's axis compared to the plane of its revolution around the Sun. Throughout the year the northern and southern hemispheres are alternately turned either toward or away from the sun depending on Earth's position in its orbit. The hemisphere turned toward the sun receives more sunlight and is in summer, while the other hemisphere receives less sun and is in winter (see solstice).\n",
"The \"Seasons\" are a continuation of Poussin's mythological landscapes, depicting the power and grandeur of nature, \"benign in Spring, rich in Summer, sombre yet fruitful in Autumn, and cruel in Winter.\" The series also represents successive times of the day: early morning for Spring, midday for Summer, evening for Autumn and a moonlit night for Winter. For both stoic philosophers and for early Christians the seasons represented the harmony of nature; but for Christians the seasons, often depicted personified surrounding the Good Shepherd, and the succession of night and day also symbolized the death and resurrection of Christ and the salvation of man (1 Clement 9: 4-18, 11: 16-20 ).\n",
"Spring and \"springtime\" refer to the season, and also to ideas of rebirth, rejuvenation, renewal, resurrection and regrowth. Subtropical and tropical areas have climates better described in terms of other seasons, e.g. dry or wet, monsoonal or cyclonic. Cultures may have local names for seasons which have little equivalence to the terms originating in Europe.\n",
"Ecologically speaking, a season is a period of the year in which only certain types of floral and animal events happen (e.g.: flowers bloom—spring; hedgehogs hibernate—winter). So, if we can observe a change in daily floral/animal events, the season is changing. In this sense, ecological seasons are defined in absolute terms, unlike calendar-based methods in which the seasons are relative. If specific conditions associated with a particular ecological season don't normally occur in a particular region, then that area cannot be said to experience that season on a regular basis.\n",
"BULLET::::- Seasons are not caused by the Earth being closer to the Sun in the summer than in the winter, but by the Earth's 23.4-degree axial tilt. Each Hemisphere is tilted towards the Sun in its respective summer (July in the Northern Hemisphere and January in the Southern Hemisphere), resulting in longer days and more direct sunlight, with the opposite being true in the winter.\n",
"Since prehistory, the summer solstice has been seen as a significant time of year in many cultures, and has been marked by festivals and rituals. Traditionally, in many temperate regions (especially Europe), the summer solstice is seen as the middle of summer and referred to as \"midsummer\". Today, however, in some countries and calendars it is seen as the beginning of summer.\n"
] |
how can we recall memories and imagine scenarios and see them visually, while also seeing and observing the current environment?
|
Your brain does lots of things simultaneously; if it didn't, your heart would stop when you needed to take a breath, or when you thought about a math problem.
Needless to say, the thing you're seeing in actuality is based on signals coming from your optic nerve. That path only takes inputs from your eye (normally), whereas the input from your imagined scenarios comes from elsewhere in the brain. They may have some overlap in terms of where they are processed in the brain (giving you the sense you are "seeing" a memory), but that portion of the brain is pretty good at doing things simultaneously.
|
[
"Spatial view cells are used by primates for storing an episodic memory that helps with remembering where a particular object was in the environment. Imaging studies have shown that the hippocampus plays an important role in spatial navigation and episodic memories. Also, spatial view cells enable them to recall locations of objects even if they are not physically present in the environment. The neurons associated with remembering the location and object are often found in the primate hippocampus. These spatial view cells do not only recall specific locations, but they also remember distances between other landmarks around the place in order to gain a better understanding of where the places are spatially.\n",
"Visual memory describes the relationship between perceptual processing and the encoding, storage and retrieval of the resulting neural representations. Visual memory occurs over a broad time range spanning from eye movements to years in order to visually navigate to a previously visited location. Visual memory is a form of memory which preserves some characteristics of our senses pertaining to visual experience. We are able to place in memory visual information which resembles objects, places, animals or people in a mental image. The experience of visual memory is also referred to as the mind's eye through which we can retrieve from our memory a mental image of original objects, places, animals or people. Visual memory is one of several cognitive systems, which are all interconnected parts that combine to form the human memory. Types of palinopsia, the persistence or recurrence of a visual image after the stimulus has been removed, is a dysfunction of visual memory.\n",
"People use explicit memory throughout the day, such as remembering the time of an appointment or recollecting an event from years ago. Explicit memory involves conscious recollection, compared with implicit memory which is an unconscious, unintentional form of memory. Remembering a specific driving lesson is an example of explicit memory, while improved driving skill as a result of the lesson is an example of implicit memory.\n",
"Mental representations (or mental imagery) enable representing things that have never been experienced as well as things that do not exist. Think of yourself traveling to a place you have never visited before, or having a third arm. These things have either never happened or are impossible and do not exist, yet our brain and mental imagery allows us to imagine them. Although visual imagery is more likely to be recalled, mental imagery may involve representations in any of the sensory modalities, such as hearing, smell, or taste. Stephen Kosslyn proposes that images are used to help solve certain types of problems. We are able to visualize the objects in question and mentally represent the images to solve it.\n",
"Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and the interrelationship between cognition and memory. One example of this could be, what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates between the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) and recall (retrieving a memory, as in \"fill-in-the-blank\")?\n",
"However, visuo-spatial short-term memory can retain visual and/or spatial information over brief periods of time. When this memory is in use, individuals are able to momentarily create and revisit a mental image that can be manipulated in complex or difficult tasks of spatial orientation.There are some who have disparities in the areas of the brain that allow for this to happen from different types of brain damage. There can also be a misunderstanding here in the differences between transient memories such as the visual sensory memory. A transient memory is merely a fleeting type of sensory memory. Therefore, as the visual sensory memory is a type of sensory memory, there is a store for the information, but the store last for only a second or so. A common effect of the visual sensory memory is that individuals may remember seeing things that weren't really there or not remembering particular things that were in their line of sight. The memory is only momentary, and if it isn't attended to within a matter of seconds, it is gone.\n",
"In real world applications, monkeys remember where they saw ripe fruit with the aid of spatial view cells. Humans use spatial view cells when they try to recall where they may have seen a person or where they left their keys. Primates' highly developed visual and eye movement control systems enables them to explore and remember information about what's present at places in the environment without having to physically visit those places. These sorts of memories would be useful for spatial navigation in which the primates visualize everything in an allocentric, or worldly manner that allows them to convey directions to others without physically going through the entire route. These cells are used by primates in regular day-to-day lives.\n"
] |
It seems like every modern President of the US was a golfer. Were there any who weren't?
|
Since McKinley, who introduced golf to the White House, Teddy Roosevelt, Hoover, Truman, and Carter were the only US presidents not to play. [Source](_URL_1_)
[FDR considered himself a golfer even though he was physically unable to play.](_URL_0_) I'll leave it up to you whether or not to include him in the list.
|
[
"Harry Watkey Easterly Jr. (1922-2005) served as president of the United States Golf Association, one of the World's two ruling bodies of Golf (the other being the Royal and Ancient Golf Club of St Andrews), in 1976 and 1977 and later as its first Executive Director.\n",
"Charles Robert Coe (October 26, 1923 – May 16, 2001) was an American amateur golfer who is considered by many to be one of the greatest American amateurs in history. A two-time U.S. Amateur winner, Coe never turned professional either because, as he stated in 1998, \"When I was growing up, golf was a gentleman's game,\" or because his wife said, \"if I thought I was going to raise three children out of a suitcase, I was crazy\". He had a successful career in the oil business.\n",
"Francis DeSales Ouimet (May 8, 1893 – September 2, 1967) was an American amateur golfer who is frequently referred to as the \"father of amateur golf\" in the United States. He won the U.S. Open in 1913 and was the first non-Briton elected Captain of the Royal and Ancient Golf Club of St Andrews. He was inducted into the World Golf Hall of Fame in 1974.\n",
"BULLET::::- Gerald Ford became the first incumbent U.S. President to play in a PGA golf tournament, as an amateur in a pro-am event, the Jackie Gleason-Inverrary Classic. A crowd of 41,720 (largest for a single day on a PGA Tour event) watched as the President shot 100 on 18 holes, in partnership with Jack Nicklaus, Jackie Gleason, Bob Hope and New York businessman Elliot Kahn.\n",
"Player is one of the most successful golfers in history, ranking third (behind Roberto de Vicenzo and Sam Snead) in total professional wins, with at least 164, and tied for fourth in major championship victories with nine. Along with Arnold Palmer and Jack Nicklaus he is often referred to as one of \"The Big Three\" golfers of his era – from the late 1950s through the late 1970s – when golf boomed in the United States and around the world and was greatly encouraged by expanded television coverage. Along with Gene Sarazen, Ben Hogan, Jack Nicklaus, and Tiger Woods, he is one of only five players to win golf’s \"career Grand Slam\". He completed the Grand Slam in 1965 at the age of twenty-nine. Player was the second multiple majors winner from South Africa, following Bobby Locke, then was followed by Ernie Els, and Retief Goosen.\n",
"Many of the leading figures in the history of golf have been U.S. Amateur Champion, including Bobby Jones five times, Jerome Travers four times, Jack Nicklaus twice and Tiger Woods three times (all consecutive; the only player to win three in a row). Woods' first win, as an 18-year-old in 1994, made him the youngest winner of the event, breaking the previous record of 19 years 5 months set by Robert A. Gardner in 1909. In 2008, New Zealander Danny Lee became the youngest ever winner, only to be eclipsed by 17-year-old An Byeong-hun the following year. Before the professional game became dominant, the event was regarded as one of the majors. This is no longer the case, but the champion still receives an automatic invitation to play in all of the majors except the PGA Championship. In addition, the runner-up also receives an invitation to play in the Masters and the U.S. Open. However, the golfers must maintain their amateur status at the time the events are held (unless they qualify for the tournaments by other means).\n",
"Throughout its history, the club has long been known as a golf club for the corporate elite. In 1947, the club's members included Joseph P. Kennedy, Henry Ford II, Jack Chrysler, Paul Mellon, Phillip Armour, John Pillsbury and Robert Vanderbilt. The club has also hosted kings and presidents: President Dwight D. Eisenhower was an honorary member, Presidents Gerald Ford and John F. Kennedy played it often; and the Duke of Windsor was a member. Henry Picard, winner of the 1938 Masters Tournament, was the professional at Seminole for 26 years and Ben Hogan spent a significant amount of time here playing and practicing.\n"
] |
Is there a timeline of the history of women's rights in ancient Greece between the 5th and 1st century BC?
|
Is this a homework question? It says in our [rules](_URL_1_):
Our users aren't here to do your homework for you, but they might be willing to help. Remember: AskHistorians helps those who help themselves. Don't just give us your essay/assignment topic and ask us for ideas. Do some research of your own, then come to us with questions about what you've learned. This is explained further [in this [META] thread](_URL_0_).
You can also consider asking the helpful people at /r/HomeworkHelp.
|
[
"Although most women lacked political and equal rights in the city states of ancient Greece, they enjoyed a certain freedom of movement until the Archaic age. Records also exist of women in ancient Delphi, Gortyn, Thessaly, Megara and Sparta owning land, the most prestigious form of private property at the time. However, after the Archaic age, legislators began to enact laws enforcing gender segregation, resulting in decreased rights for women.\n",
"Although mostly women lacked political and equal rights in ancient Greece, they enjoyed a certain freedom of movement until the Archaic age. Records also exist of women in ancient Delphi, Gortyn, Thessaly, Megara and Sparta owning land, the most prestigious form of private property at the time. However, after the Archaic age, women's status got worse, and laws on gender segregation were implemented.\n",
"The Areopagite constitution is the modern name for a period in ancient Athens described by Aristotle in his Constitution of the Athenians. According to that work, the Athenian political scene was dominated, between the ostracism of Themistocles in the late 470s BC and the reforms of Ephialtes in 462 BC, by the Areopagus, a traditional court composed of former archons. Modern scholars have debated the existence of this phenomenon, with some concluding that Aristotle and his contemporaries invented it to explain Ephialtes' need to limit the Areopagus' powers, and arguing that the lack of concrete measures establishing the Areopagus' dominance shows that the Areopagite constitution is \"palpably unhistorical\". Other scholars, such as Donald Kagan, have countered that no concrete measures were necessary, as the Areopagus' dominance was established not through actual changes in the laws but through the prestige of its leading members. Aristotle specifically cites the Areopagites' distribution of money to the public as the citizen body prepared to abandon Athens in the face of the advancing Persian army. \n",
"Gould Davis claimed that Greek women possessed rights that are presently denied by the Catholic, Orthodox, and conservative Protestant churches, such as the rights to abortion and divorce. She cited many well-known historians to support these claims. She also argued that women participated in almost all aspects of ancient Greek and Roman society, including government, learning and sport. In the following chapter, \"The Celts\", she argued that similar rights prevailed until the collapse of the Roman Empire, for a matrilineal system of monarchical descent, and for Celtic women being the major preservers of learning during the early Middle Ages.\n",
"Until the 1980s, scholars of women in classical Athens were primarily interested in the status of women and how they were viewed by men. Early feminist scholarship aimed to assert that women were significant in ancient history and to demonstrate how they had been oppressed. Early scholars held that Athenian women had an \"ignoble\" place, but in 1925 this position was challenged by Arnold Wycombe Gomme. According to Gomme, women had high social status despite their limited legal rights; his view has reinforced that position ever since. Pomeroy attributes the variety of viewpoints to the types of evidence prioritised by scholars, with those arguing for the high status of Athenian women predominantly citing tragedy and those arguing against it emphasising oratory.\n",
"During the past decades, the position of women in Greek society has changed dramatically. Efharis Petridou was the first female lawyer in Greece; in 1925 she joined the Athens Bar Association. In 1955, women were first allowed to become judges in Greece.In 1983, a new family law was passed, which provided for gender equality in marriage, and abolished dowry and provided for equal rights for \"illegitimate\" children. Adultery was also decriminalised in 1983. The new family law provided for civil marriage and liberalised the divorce law. In 2006, Greece enacted Law 3500/2006 -\"For combating domestic violence\"- which criminalised domestic violence, including marital rape.\n",
"The legal rights of women refers to the social and human rights of women. One of the first women's rights declarations was the \"Declaration of Sentiments\". The dependent position of women in early law is proved by the evidence of most ancient systems.\n"
] |
Why can I not see the exhaust from a bus, but I can see its shadow?
|
It's a [shadowgraph](_URL_0_) caused by the difference in index of refraction between the hot exhaust and the surrounding cold air.
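As a bit of added background (not part of the original answer): the refractive index of a gas scales with its density, so hot, less dense exhaust bends light slightly differently than the surrounding air. The Gladstone-Dale relation is a standard way to express this, with K a gas-dependent constant:

```latex
% Gladstone-Dale relation (illustrative): hot exhaust has lower density \rho,
% hence a slightly lower refractive index n, and the resulting deflection of
% light is what shows up as the shadowgraph pattern on the ground.
n - 1 = K\rho
```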
|
[
"“I mean for example the tailgate lights they aren’t electronic, they are just reflectors, like the sort of reflectors they have on bicycle wheels, so that’s on the back on the bus. Very simple lighting. They are not air-conditioned, you always see them driving around with the windows open,” Rowan Beard from Young Pioneer Tours told \"NK News\".\n",
"During the 1950s, some of the buses were fitted with visors above the windscreens to reduce glare when travelling into the sun, but it is not clear which units had this modification. It is uncertain whether No.1 was one of the buses which received this addition; there are no photos showing it, there is no evidence of such a fitting, and it is not fitted now.\n",
"The ascending bus breaks out of the rain clouds into a clear, pre-dawn sky, and as it rises its occupants bodies change from being normal and solid into being transparent, faint, and vapor-like. When it reaches its destination the passengers on the bus – including the narrator – are gradually revealed to be ghosts. Although the country they disembark into is the most beautiful they have ever seen, every feature of the landscape, including streams of water and blades of grass, is unyieldingly solid compared to themselves: It causes them immense pain to walk on the grass, whose blades pierce their shadowy feet, and even a single leaf is far too heavy for any to lift.\n",
"A window stretches to become a door when a person would like to board it to travel. With its multiple caterpillar-like legs, it runs, flies, bounces, and hops across forests and lakes to reach its destination, making whole rice fields sway in its wake. Its eyes shine a yellow light brightly like headlamps to guide it. Mice with glowing eyes suspended next to its destination sign on its back and from its rear serve as tail lights. The Catbus is seemingly able to take its passengers to any destination they desire, even if the passenger (or the bus itself) lacks the knowledge how to get there; as is the case when Satsuki needed to find her sister, Mei.\n",
"By way of comparison, in \"Automat\" the window dominates the painting, and yet \"allows nothing of the street, or whatever else is outside, to be seen.\" The complete blackness outside is a departure both from Hopper's usual techniques, and from realism, since a New York street at night is full of light from cars and street lamps. This complete emptiness allows the reflections from the interior to stand out more dramatically, and intensifies the viewer's focus upon the woman.\n",
"The exhaust exited through rectangular outlets under the rear lights, at the center of the car. The windshield was very steeply raked, and the hood had inlet vents. The deck had a small access door at the extreme rear. Headlights are hidden.\n",
"During its production, the exterior of the Phantom saw little change; an exception was the modernization of its destination sign (switching from a rollsign to an LED display). Dual or quad headlights were offered (with the latter becoming the most common on transit buses). While a mandatory design feature on the Phantom School Bus, a rear window was a rare option for transit/suburban Phantoms; Monterey-Salinas Transit and King County Metro are the only two transit authorities known to have ordered Phantoms with a rear window. On transit/suburban versions, several window configurations were offered; fixed side windows were a rarely ordered option.\n"
] |
How dangerous are dog bites?
|
It depends on the severity and how it is treated.
As a veterinarian, I get bitten often. Most bites heal fine with soap and water, but one started to cause intense pain and swelling that spread through my entire finger after 24 hours. Without antibiotics, there is a real chance I would have lost my finger, or worse.
An elderly client of mine broke up a fight between his chihuahuas with his hands. Apparently, the bite wounds were not all that extensive, but he didn't get treatment, and an infection started that eventually went systemic and killed him.
In short, if you are asking this because you've been bit, go get treatment. And see if you can find out the dog's rabies vaccine status.
As for the bacteria becoming resistant from one bite to the next, I'm calling bullshit on that unless there is some sort of odd circumstance you can elaborate on.
|
[
"The study defined dog attacks as \"a human death caused by trauma from a dog bite\". Excluded from the study were deaths by disease caused by dog bites, strangulation on a scarf or leash pulled by a dog, heart attacks or traffic accident, and falling injury or fire ant bites from being pushed down by a dog. The study also excluded four deaths by trauma from dog bites by police dogs or guard dogs employed by the government.\n",
"Fatal dog attacks are rare when compared to other causes of death, however they represent the extreme result of dog bite incidents, which do occur quite frequently. The study of fatal dog attacks can lead to prevention techniques which can help to reduce all dog bite injuries, not only fatalities. Since dog bites are a very high percentage of emergency room visits, it's worth investigating how to reduce those numbers. Dog bites and attacks can result in pain, bruising, wounds, bleeding, soft tissue injury, broken bones, loss of limbs, scalping, disfigurement, life-threatening injuries, and death.\n",
"Fatal dog attacks in the United States are rare, although non-fatal dog bites are not unusual. Typically, between 30 and 50 people in the US die from dog bites each year, and the number of deaths from dog attacks appear to be increasing. Around 4.5 million Americans are bitten by dogs every year, resulting in the hospitalization of 6,000 to 13,000 people each year in the United States (2005). Dog bites can cause pain, injury, infection, and even death. About 1 in 5 dog bites requires medical attention.\n",
"Currently there is no uniform reporting requirement in the United States for dog bites, attacks, injuries or fatalities. Using media reports and fact-checking to gather information for their statistics, DogsBite.org documents and publishes accounts of each fatality caused by dog attacks.\n",
"According to the Centers for Disease Control and Prevention, between 1979 and 1998, the Doberman Pinscher was involved in attacks on humans resulting in fatalities less frequently than several other dog breeds such as German Shepherd Dogs, Rottweilers, Husky-type dogs, wolf-dog hybrids and Alaskan Malamutes. According to this Center for Disease Control and Prevention study, one of the most important factors contributing to dog bites is the level of responsibility exercised by dog owners.\n",
"In addition to causing pain, injury, or nerve damage, almost one out of five bites becomes infected, placing the bite victim at risk for illness or death. Those who work and live around dogs should be aware of the risk and take precautions. Rabies is a particular risk associated with dog bites. In the United States between 16,000–39,000 people come in contact with potentially rabid dogs and other animals and receive rabies pre- and postexposure prophylaxis against the rabies virus each year. Because anyone who is bitten by an unvaccinated dog is at risk of getting rabies, local animal control agencies or police are sometimes able to capture the animal and determine whether or not it is infected with rabies.\n",
"There is considerable debate on whether or not certain breeds of dogs are inherently more prone to commit attacks causing serious injury (i.e., so driven by instinct and breeding that, under certain circumstances, they are exceedingly likely to attempt or commit dangerous attacks). Regardless of the breed of the dog, it is recognized that the risk of dangerous dog attacks can be greatly increased by human actions (such as neglect or fight training) or inactions (as carelessness in confinement and control).\n"
] |
Iceland was one of the poorer countries in Europe in 1980. How did it grow so quickly after that to become so wealthy today?
|
I would like to preface this post by saying that I am not an economist, nor do I specialize in economic history. I am, however, somewhat acquainted with the economic development of Iceland, as well as the historiography of this phenomenon, since I am an Icelandic historian.
It is true that Iceland endured some economic hardships during the 1980s. These included both catch failures for cod and capelin and massive inflation. Indeed, inflation rose to heights of more than 100% during 1983 and consistently stayed above 50% from 1980 to 1983. The government responded by devaluing the currency multiple times in a bid to boost the exports of the main economic sector - the fishing industry. Obviously, such measures were incredibly unpopular with the average worker, whose purchasing power simultaneously decreased. This period can best be described as a time of monetary policy failures: price levels fluctuated wildly and inflation was a chronic ill. The exchange rate of the US dollar against the króna rose by almost 600% from 1980 to 1986.
This was followed by a short period of economic expansion from 1985 to 1988, mostly due to an increase in fishing catches as well as a boost in exports. However, this prosperity proved to be short-lived, as fishing catches (mainly cod) failed again in 1988. During the late 1980s the Icelandic economy can best be described as stagnant. Although inflation fell, unemployment rose at the same time and Iceland's competitive position in the global economy deteriorated. In 1990, however, the cornerstone of economic stability is sometimes said to have been laid when the government managed to strike a tripartite deal with employers as well as labour unions. This deal is generally referred to as 'The National Agreement' (Þjóðarsáttin) and is considered a notable economic achievement for a few reasons. Firstly, the vicious cycle of wage increases followed by price level increases (and inflation) was ended: the labour unions agreed to relatively modest wage increases over the next few years in exchange for more stable price levels, which was made possible thanks to a fixed exchange rate of the króna. Secondly, this is one of the few instances in Icelandic history where collective bargaining between the government and the labour movement was possible. Historically speaking, this inability to strike collective bargains has mainly been explained in terms of a very left-wing labour movement in conjunction with mainly right-wing governments. In 1988, however, a left-wing government had been formed, which facilitated this deal. Lastly, the National Agreement of 1990 is generally seen as a prerequisite to the economic changes that Iceland underwent during the 1990s. These changes include economic liberalization, privatization of state-owned banks and utility companies, as well as Iceland's membership of the EEA, which was approved in 1992. Taxes were lowered and a period of economic prosperity began. However, these changes are also seen by many as a major cause of the 2008 Icelandic financial crisis, in which the entire financial system of Iceland collapsed spectacularly.
I would also like to add that although Iceland struggled economically during the 1980s, it can hardly be classified as a poor country. Indeed, if we look at indicators such as GDP per capita, we can see that Iceland was on par with, if not ahead of, most Western European states during this time. For instance, Iceland's GDP per capita was $12,057 in 1984, compared with West Germany's $9,277 and France's $9,432. It is important to note that Iceland's economy is extremely volatile and its recessions are deeper and more frequent than in other European states. This is due to a number of factors. Firstly, there is the small size of the Icelandic economy, as well as its undiversified nature and heavy reliance on fish exports. Secondly, there are natural factors to consider, such as the weather and natural disasters (like volcanic eruptions and perhaps even avalanches). Thirdly and lastly, we can name the very pro-cyclical economic and monetary policies of successive Icelandic governments, which generally exacerbated these crises with massive devaluations of the currency. Iceland can thus truly be called an economy of instability.
**Sources:**
Árni H. Kristjánsson. *Þjóðarsáttin 1990: Forsagan og goðsögnin*. BA-thesis. University of Iceland, 2008.
Guðmundur Jónsson and Magnús S. Magnússon, ed. *Icelandic historical statistics*. Reykjavík: Statistics Iceland, 1997.
Palle S. Andersen and Már Guðmundsson. *Inflation and Disinflation in Iceland*. Reykjavík: Central Bank of Iceland, 1998.
Sigurður Snævarr. *Haglýsing Íslands*. Reykjavík: Heimskringla, 1993.
|
[
"Until the 20th century, Iceland relied largely on subsistence fishing and agriculture. Industrialisation of the fisheries and Marshall Plan aid following World War II brought prosperity and Iceland became one of the wealthiest and most developed nations in the world. In 1994, it became a part of the European Economic Area, which further diversified the economy into sectors such as finance, biotechnology, and manufacturing.\n",
"Iceland joined the European Economic Area in 1994, after which the economy was greatly diversified and liberalised. International economic relations increased further after 2001, when Iceland's newly deregulated banks began to raise massive amounts of external debt, contributing to a 32% increase in Iceland's gross national income between 2002 and 2007.\n",
"Until the 20th century, Iceland was a fairly poor country. Currently, it remains one of the most developed countries in the world. Strong economic growth had led Iceland to be ranked first in the United Nations' Human Development Index report for 2007/2008, although in 2011 its HDI rating had fallen to 14th place as a result of the economic crisis. Nevertheless, according to the Economist Intelligence Index of 2011, Iceland has the 2nd highest quality of life in the world. Based on the Gini coefficient, Iceland also has one of the lowest rates of income inequality in the world, and when adjusted for inequality, its HDI ranking is 6th. Iceland's unemployment rate has declined consistently since the crisis, with 4.8% of the labour force being unemployed , compared to 6% in 2011 and 8.1% in 2010.\n",
"By mid-2012 Iceland was regarded as one of Europe's recovery success stories. It has had two years of economic growth. Unemployment was down to 6.3% and Iceland was attracting immigrants to fill jobs. Currency devaluation effectively reduced wages by 50% making exports more competitive and imports more expensive. Ten-year government bonds were issued below 6%, lower than some of the PIIGS nations in the EU (Portugal, Italy, Ireland, Greece, and Spain). Tryggvi Thor Herbertsson, a member of parliament, noted that adjustments via currency devaluations are less painful than government labor policies and negotiations. Nevertheless, while EU fervor has cooled the government continued to pursue membership.\n",
"Iceland had among the lowest GDP per capita in Western Europe at the start of the 20th century. According to one assessment by Central Bank of Iceland economists, Post-World War II economic growth has been both significantly higher and more volatile than in other OECD countries. The average annual growth rate of GDP from 1945 to 2007 was about 4%. Studies have shown that the Icelandic business cycle has been largely independent of the business cycle in other industrialised countries. This can be explained by the natural resource-based export sector and external supply shocks. However, the volatility of growth declined markedly towards the end of the century, which may be attributed to the rising share of the services sector, diversifi cation of exports, more solid economic policies, and increased participation in the global economy.According to Reuters, Iceland has had \"over 20 financial crises since 1875\".\n",
"Iceland began implementing neoliberal economic policies beginning in the late 1980s. As measured by the Economic Freedom of the World, it had the 53rd \"freest economy\" in 1975 and it was one of the poorest countries in Europe. In 2004, it had the 9th freest economy and it was one of the richest. However, by 2009, the country was facing severe financial problems, a consequence that a number of observers have attributed to Iceland's extensive deregulation.\n",
"Icelandic post-World War I prosperity came to an end with the outbreak of the Great Depression, a severe worldwide economic crash. The depression hit Iceland hard as the value of exports plummeted. The total value of Icelandic exports fell from 74 million kronur in 1929 to 48 million kronur in 1932, and did not rise again to the pre-1930 level until after 1939. Government interference in the economy increased: \"Imports were regulated, trade with foreign currency was monopolized by state-owned banks, and loan capital was largely distributed by state-regulated funds\". The outbreak of the Spanish Civil War cut Iceland's exports of saltfish by half, and the depression lasted in Iceland until the outbreak of World War II, when prices for fish exports soared.\n"
] |
Putin's government in Russia and the quality of democracy that exists there.
|
Well, I lived in Russia during Putin's prime. And in all honesty, 10 years ago he was the best thing that had happened to Russia. He made the country great and became very popular; most people loved him. Then, when he reached his term limit, he declined to rewrite the law (which would have let him stay in power for longer consecutively), and the whole country praised it as an honorable act. Then Medvedev took over, and the people seemed to take a liking to him as well... eventually, which seemed to upset Putin. Putin got jealous, overreacted, and made sure he won the next election... and now he's setting dumb policies in place.
So how is life there now?
Well, the media always has to be careful about criticizing the government, people still get beaten up for holding even peaceful protests, and there is still a ton of corruption (which will take decades to go away). The mindset of the Russian people is what makes the difference. I personally believe it's great that every person in the US has the ability to buy a weapon, but I feel the opposite about the same issue in Russia... God forbid that Russians get access to guns (technically it's a gunpowder ban, I believe). Russia is nowhere near developed enough to trust its people to that extent. The current anti-democracy measures are a bit much, but I think there needs to be a balance between government control and people's freedom in the Russian Federation.
|
[
"Sovereign Democracy in Russia was realised in the form of a dominant-party system which was put into place in 2007 when as a result of the Russian legislative election of 2007 the political party United Russia, headed by president Vladimir Putin, without forming a government, formally became the leading and guiding force in Russian society.\n",
"Russian historian Andranik Migranyan saw the Putin regime as restoring what he viewed as the natural functions of a government after period of the 1990s, when oligopolies expressing only their own narrow interests allegedly ruled Russia. Migranyan said: \"If democracy is the rule by a majority and the protection of the rights and opportunities of a minority, the current political regime can be described as democratic, at least formally. A multiparty political system exists in Russia, while several parties, most of them representing the opposition, have seats in the State Duma\".\n",
"Russian politician Boris Nemtsov and commentator Kara-Murza define Putinism in Russia as \"a one party system, censorship, a puppet parliament, ending of an independent judiciary, firm centralization of power and finances, and hypertrophied role of special services and bureaucracy, in particular in relation to business\".\n",
"Russian politician Boris Nemtsov and commentator Kara-Murza define Putinism in Russia as \"a one party system, censorship, a puppet parliament, ending of an independent judiciary, firm centralization of power and finances, and hypertrophied role of special services and bureaucracy, in particular in relation to business\".\n",
"The political system under Putin has been described as incorporating some elements of economic liberalism, a lack of transparency in governance, cronyism, nepotism and pervasive corruption. This view has been supported by many, but it has also been characterized as \"a systemic and institutionalized form\" by others, notably Boris Nemtsov. Between 1999 and 2008, the Russian economy grew at a steady pace, which some experts attribute to the sharp rouble devaluation of 1998, Boris Yeltsin-era structural reforms, rising oil prices and cheap credit from Western banks. In former Ambassador Michael McFaul's opinion (June 2004), Russia's \"impressive\" short-term economic growth \"came simultaneously with the destruction of free media, threats to civil society and an unmitigated corruption of justice\".\n",
"In September 2007, American economist Richard W. Rahn called Putinism \"a Russian nationalistic authoritarian form of government that pretends to be a free market democracy\" and which \"owes more of its lineage to fascism than communism\", noting that \"Putinism depended on the Russian economy growing rapidly enough that most people had rising standards of living and, in exchange, were willing to put up with the existing soft repression\". He predicted that \"as Russia's economic fortunes changed, Putinism was likely to become more repressive\". After Rahn's remarks Putin took actions to lessen democracy, promote conservative beliefs and values; and silence opposition to his policies and administration.\n",
"According to Stephen White, Russia under the presidency of Putin made it clear that it had no intention of establishing a \"second edition\" of the American or British political system, but rather a system that was closer to Russia's own traditions and circumstances. Putin's administration has often been described as a \"sovereign democracy\". First proposed by Vladislav Surkov in February 2006, the term quickly gained currency within Russia and arguably unified various political elites around it. According to its proponents, the government's actions and policies ought above all to enjoy popular support within Russia itself and not be determined from outside the country.\n"
] |
How much did scientists know about the makeup of other planets in our solar system prior to spectroscopy?
|
Even though I don't have a clue, I'll take a stab at answering, since no one else has. I'm going to take it as sort of a running narrative of how I'm researching it... No particular reason why.
[Wikipedia](_URL_4_) states that astronomical spectroscopy dates back to the (rather unhelpful) "early 1800s", but also states that there were retail devices available by at least 1884. Of course, it also took a while for spectral lines to be made sense of and mapped. To begin with, spectroscopy informed us more about the nature of light and matter itself than about the planets or the Sun. So... hopefully 1884 is just as good as any other date.
So, if we look at a [timeline for astronomy](_URL_0_) and work backwards from 1884... Well, aside from spectroscopy, there's not a whole lot.
* 1846: the discovery of Neptune is credited to Johann Gottfried Galle, though there's evidence Galileo may have observed it much earlier without recognizing it as a planet.
* 1801: the first asteroid, Ceres, is discovered by Giuseppe Piazzi; William Herschel later coined the term "asteroid" for such objects.
* 1781: Uranus is discovered by William Herschel (busy guy!).
etc.
Doesn't help as much as I thought... too vague to properly answer your question. So, we can take a look at what we knew about *specifically Jupiter*. It turns out the [red spot](_URL_3_) has been observed continuously since 1830, but was observed earlier by individuals. The first time it was depicted as being "red" was in a painting by Donato Creti in 1711, though (based just on the Wiki description) that may have been artistic license, since the next description of it as red doesn't come until the era of spectroscopy.
How about moons...? Well, aside from the [Galilean moons](_URL_1_), a few were discovered prior to the mid-to-late 1800s. Specifically, 20 were known prior to 1884 (not counting our own "moon" - that was way early).
Seems we didn't know a lot about the planets: some idea of color, and a good handle on the math to calculate basic orbits. Based on Wikipedia's articles on the discovery of the moons during the 1800s, though, the moons weren't first predicted mathematically (though many of the articles are vague on the issue). [Ceres](_URL_2_) is something of an exception, so astronomers did have some basic rules of orbits and such.
Hope this helps you get started... I'd dig further, but I'm out of time.
|
[
"Most of our direct information on the composition of the giant planets is from spectroscopy. Since the 1930s, Jupiter was known to contain hydrogen, methane and ammonium. In the 1960s, interferometry greatly increased the resolution and sensitivity of spectral analysis, allowing the identification of a much greater collection of molecules including ethane, acetylene, water and carbon monoxide. However, Earth-based spectroscopy becomes increasingly difficult with more remote planets, since the reflected light of the Sun is much dimmer; and spectroscopic analysis of light from the planets can only be used to detect vibrations of molecules, which are in the infrared frequency range. This constrains the abundances of the elements H, C and N. Two other elements are detected: phosphorus in the gas phosphine (PH) and germanium in germane (GeH). \n",
"Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. This method was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect, but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers.\n",
"Working with Dale Frail, Wolszczan carried out astronomical observations from the Arecibo Observatory in Puerto Rico that led them to the discovery of the pulsar PSR B1257+12 in 1990. In 1992 they showed that the pulsar was orbited by two planets, whose masses were initially assessed at 3.4 and 2.8 times Earth's mass. The radii of their orbits are 0.36 and 0.47 AU respectively. This was the first confirmed discovery of planets outside the Solar System (as of 6 October 2017, 3,529 such planets were known). Wolszczan announced his findings in 1992 during the Meeting of the American Astronomical Society in Atlanta. Two years later he published the results of his discovery and was chosen by the journal \"Nature\" as the author of one of 15 fundamental discoveries in the field of physics. Despite some initial misgivings by several experts, today his discovery is regarded as fully substantiated. Astronomer Bohdan Paczynski (Princeton University) called it \"the greatest discovery by a Polish astronomer since Copernicus.\" In 1998, \"Astronomy\" magazine included his discovery among The 25 Greatest Astronomical Findings of All Time. At the Arecibo Observatory, Wolszczan also collaborated with Joseph H. Taylor Jr and conducted research on millisecond pulsars.\n",
"The study of the outer planets has since been revolutionized by the use of unmanned space probes. The arrival of the \"Voyager\" spacecraft at Saturn in 1980–1981 resulted in the discovery of three additional moons – Atlas, Prometheus and Pandora, bringing the total to 17. In addition, Epimetheus was confirmed as distinct from Janus. In 1990, Pan was discovered in archival \"Voyager\" images.\n",
"Follow-up observations were conducted by a European and American science team at the 1.2 m Leonhard Euler Telescope at La Silla Observatory in Chile, which further raised the possibility of the existence of a planet in WASP-15's orbit; use of the CORALIE spectrograph on the Euler Telescope between March 6, 2008 and July 17, 2008 revealed that the variations in radial velocity measurements were not because of an eclipsing binary star system.\n",
"BULLET::::3. Planet studies: At this stage, detailed study of individual planets would take place. With a low noise level and a modest signal, spectroscopy and photometry can be performed. Spectroscopy allows scientists to perform chemical analysis of atmospheres and surfaces, which might hold clues to the existence of life elsewhere in the universe. Photometry will show variation in color and intensity as surface features rotate in and out of the field of view, allowing for the detection of oceans, continents, polar caps and clouds.\n",
"The planets of the Solar System can only be observed in their current state, but observations of different planetary systems of varying ages allows us to observe planets at different stages of evolution. Available observations range from young proto-planetary disks where planets are still forming to planetary systems of over 10 Gyr old. When planets form in a gaseous protoplanetary disk, they accrete hydrogen/helium envelopes. These envelopes cool and contract over time and, depending on the mass of the planet, some or all of the hydrogen/helium is eventually lost to space. This means that even terrestrial planets may start off with large radii if they form early enough. An example is Kepler-51b which has only about twice the mass of Earth but is almost the size of Saturn which is a hundred times the mass of Earth. Kepler-51b is quite young at a few hundred million years old.\n"
] |
What keeps a bowling lane from getting warped by thrown balls?
|
Wood lanes are usually shaved down (resurfaced) every two years. Also, a lane is not one continuous piece of wood; it is built in sections. The front part of the lane (where the balls land) is made of a harder wood.
Most lanes today are made of a synthetic material, also built in sections that can be replaced.
|
[
"As bowling balls are quite heavy to throw, some alleys provide portable slides from the top of which the ball is pushed down rather than thrown. Use of these slides is often combined with the use of bumpers. These slides are used by children and the disabled to assist their throw. They are also referred to as \"ramps\".\n",
"Lofting (by a bowler) in bowling is throwing a bowling ball more than a short distance down the lane. This is usually done with the bounce pass technique, but can also be done with a straight ball. Lofting is looked down upon by the bowling community and bowling alley employees because of the damage to the ball and lanes. Many bowling alleys that use wood for their lanes will either have signs that tell the bowlers not to loft, or an employee will tell the bowlers not to do so. Lofting the ball before the arrows in some bowling alleys is not against the rules. Some professional bowlers do loft a considerable amount under certain lane conditions. Crankers and other high-rev players may be forced to loft under dry conditions in order to delay the ball's reaction and prevent it from overhooking. Lofting over the gutter is known as \"popping the cap\" and is done when a bowler hooks the whole lane.\n",
"Bowling takes place at Neon Lanes. Players are required to reach to their left or right to take up a ball before swinging their arm forwards to bowl, exaggerating the arm motion to add spin if required. Single player, local multiplayer and online multiplayer game modes are available. Bowling mini games include One Bowl Roll, in which the player must clear as many pin setups as possible before running out of chances, and Pin Rush, where the player is challenged to knock over as many pins as possible within a time limit.\n",
"In recreational bowling alleys, a strike usually triggers a special animation to be played on the electronic scoring system, depicting bowling pins being knocked down (often in an exaggerated, cartoonish style) and an \"X\" or \"STRIKE!\" appearing on the screen afterwards.\n",
"A separate elevator next to the turntable transports the balls to the ball return system, which has a near-vertical ramp that the balls roll down to gain enough momentum to roll through either an above-lane or submerged trough back up the alley, entering the ball return rack next to the approach area where players can grab them. Bowl Mor pinsetters are stocked with 24 to 27 pins, and are deemed substantially more reliable than typical Ten-pin bowling pinsetters. Due to the playing rules of candlepin bowling allowing fallen \"dead wood\" pins to \"remain\" on the lane between each ball's roll, no provision has ever been made for \"spotting cells\" in a candlepin pinsetter's spotting table, simplifying the machines' design. Most parts of the machine are driven by chains – especially the sweep board's drive system, on two L-shaped tracks on either side of the unit – or belts. A Bowl Mor unit weighs approximate , and draws 24 amperes at 110 volts from three-wire 110-220 volt service mains. The ICBA lists the cost of a refurbished Bowl Mor unit at approximately $5000.\n",
"The various techniques of fast bowling lend themselves to three ways of getting the batsman out. They may be bowled or caught LBW either by speed, the yorker or by seam or swing causing the ball to move in toward them, in which case placement of fielders is irrelevant. Swing or seam may be employed to move the ball away from the batsman in which case the ball strikes the outside edge of the bat and may be caught in the slips. A badly-played bouncer will either fly off the outside edge as above or may result in a mistimed shot that can be caught near the boundary.\n",
"Lane conditions are created by cleaning the lane surface and then applying lane oil in a pattern via a lane oiling machine. Lane oil is designed to both protect the surface and influence bowling ball hook. Ball hook is the product of the its surface material (cover stock), balance (core), direction of travel, speed of delivery, and spin (angular momentum). As the ball travels towards the pins it interacts with the lane surface and reacts to friction. As the ball encounters friction its angular momentum is consumed changing its trajectory. In areas of the lane where there is less oil the ball will change direction (hook) and in areas of the lane where there is more oil the ball will not change direction (skid).\n"
] |
I am looking for a site with a database of news clippings from English newspapers, 1900-1950?
|
[Proquest Historical Newspapers](_URL_0_) has the Guardian and the Observer. If you are member of a uni, library or other institution with a subscription, it's free.
|
[
"The British Library has already digitised two separate collections of newspapers: British newspapers 1800-1900 and the Burney collection of British 18th century newspapers. This project added another 1m pages of historical newspapers to the platform\n",
"This is a list of defunct newspapers of the United States. Only notable names among the thousands of such newspapers are listed, primarily major metropolitan dailies which published for ten years or more.\n",
"The major newspapers were served by Agenzia Stefani (1853–1945). It was a News agency that collected news and feature items, and distributed them to subscribing newspapers by telegraph or by mail. It had exchange agreements with Reuters in London and Havas in Paris, and provided a steady flow of domestic and international news and features.\n",
"In May 2010 a ten-year programme of digitization of the newspaper archives with commercial partner DC Thomson subsidiary Brightsolid began. In November 2011, BBC News announced the launch of the British Newspaper Archive, an initiative to facilitate online access to over one million pages of pre-20th century newspapers. The same newspapers from this partnership have also been made available to view on Findmypast and Genes Reunited.\n",
"The British Library Newspapers section was based in Colindale in North London, until 2013, and is now divided between the St Pancras and Boston Spa sites. The Library has an almost complete collection of British and Irish newspapers since 1840. This is partly because of the legal deposit legislation of 1869, which required newspapers to supply a copy of each edition of a newspaper to the library. London editions of national daily and Sunday newspapers are complete back to 1801. In total the collection consists of 660,000 bound volumes and 370,000 reels of microfilm containing tens of millions of newspapers with 52,000 titles on 45 km of shelves.\n",
"On 25 July 2008 the \"Australian Newspapers Beta\" service was released to the public as a standalone website and a year later became a fully integrated part of the newly launched Trove. The service contains millions of articles from 1803 onwards, with more content being added regularly. The website was the public face of the Australian Newspapers Digitisation Project, a coordination of major libraries in Australia to convert historic newspapers to text-searchable digital files. The Australian Newspapers website allowed users to search the database of digitised newspapers from 1803 to 1954 which are now in the public domain.\n",
"The stories were transmitted back to \"Lloyd’s List\" in London where the compilation of the newspaper took place by \"Lloyd’s List\" production staff. The completed editorial pages of the newspaper were then transmitted back to Westonprint in Australia where advertisements were manually inserted and production completed before printing and distribution took place.\n"
] |
Is there a reason why all or most IP addresses begin with 192.168..?
|
192.168.x.x is part of the 'private' range of addresses set aside by the Internet standards bodies (defined by the IETF in RFC 1918 and administered by IANA). Whilst these are valid addresses, they are specifically designed not to be routed across the wider Internet. There are actually three such ranges (carved out of the old Class A, B and C spaces).
These are:
A: 10.0.0.0 to 10.255.255.255.
B: 172.16.0.0 to 172.31.255.255.
C: 192.168.0.0 to 192.168.255.255.
The technical term is '[non-routable IP addresses](_URL_0_)'.
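As a quick illustrative aside (not part of the original answer), Python's standard `ipaddress` module can check whether an address falls inside one of these private ranges:

```python
import ipaddress

# RFC 1918 private ranges, written in CIDR notation
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # the old "Class A" private block
    ipaddress.ip_network("172.16.0.0/12"),   # the old "Class B" private block
    ipaddress.ip_network("192.168.0.0/16"),  # the old "Class C" private block
]

def is_rfc1918_private(addr: str) -> bool:
    """Return True if the address falls inside one of the RFC 1918 ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_rfc1918_private("192.168.1.1"))  # True  - a typical home router address
print(is_rfc1918_private("8.8.8.8"))      # False - a public, routable address
```

The module also exposes a built-in `is_private` property on address objects, which covers these ranges plus a few other reserved blocks (loopback, link-local, and so on).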
|
[
"For example, the global IPv4 address has the corresponding 6to4 prefix . This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.\n",
"Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. Another deprecated format for IPv4-compatible IPv6 addresses is ::192.0.2.128.\n",
"Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the Internet Engineering Task Force (IETF) had formalized the successor protocol. IPv6 uses a 128-bit address, theoretically allowing 2, or approximately addresses. The actual number is slightly smaller, as multiple ranges are reserved for special use or completely excluded from use. The total number of possible IPv6 addresses is more than times as many as IPv4, which uses 32-bit addresses and provides approximately 4.3 billion addresses. The two protocols are not designed to be interoperable, complicating the transition to IPv6. However, several IPv6 transition mechanisms have been devised to permit communication between IPv4 and IPv6 hosts.\n",
"Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. However, because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP (IPv6), using 128 bits for the IP address, was developed in 1995, and standardized in . In , a final definition of the protocol was published. IPv6 deployment has been ongoing since the mid-2000s.\n",
"The 128 highest addresses within each subnet prefix are reserved to be used as anycast addresses. These addresses usually have the 57 first bits of the interface identifier set to 1, followed by the 7-bit anycast ID. Prefixes for the network, including subnets, are required to have a length of 64 bits, in which case the universal/local bit must be set to 0 to indicate the address is not globally unique. The address with value 0x7e in the 7 least-significant bits is defined as a mobile IPv6 home agents anycast address. The address with value 0x7f (all bits 1) is reserved and may not be used. No more assignments from this range are made, so values 0x00 through 0x7d are reserved as well.\n",
"An IP address is interpreted as composed of two parts: a network-identifying prefix followed by a host identifier within that network. In the previous classful network architecture, IP address allocations were based on the bit boundaries of the four octets of an IP address. An address was considered to be the combination of an 8, 16, or 24-bit network prefix along with a 24, 16, or 8-bit host identifier respectively. Thus, the smallest allocation and routing block contained only 256 addresses—too small for most enterprises, and the next larger block contained addresses—too large to be used efficiently even by large organizations. This led to inefficiencies in address use as well as inefficiencies in routing, because it required a large number of allocated class-C networks with individual route announcements, being geographically dispersed with little opportunity for route aggregation.\n",
"An IPv4 address has a size of 32 bits, which limits the address space to (2) addresses. Of this number, some addresses are reserved for special purposes such as private networks (~18 million addresses) and multicast addressing (~270 million addresses).\n"
] |
Why are Americans so obsessed with Halloween?
|
Because Halloween is fun. The little kiddies dressed up in their costumes out getting candy. The harmless pranks people can play on others. Great movies on T.V. Everything about Halloween is just fun.
|
[
"While not traditionally a part of Australian culture, non-religious celebrations of Halloween modeled on North American festivities are growing increasingly popular in Australia, in spite of seasonal differences and the transition from spring to summer. Criticism stems largely from the fact that Halloween has little relevance to Australian culture. It is also considered, by some Australians, to be an unwanted American influence; as although Halloween does have Celtic/European origins, its increasing popularity in Australia is largely as a result of American pop-culture influence. Supporters of the event claim that the critics fail to see that the event is not entirely American, but rather Celtic and is no different to embracing other cultural traditions such as Saint Patrick's Day.\n",
"In New Zealand, as in neighbouring Australia, Halloween is not celebrated to the same extent as in North America, although in recent years the non-religious celebrations have been achieving some popularity especially among young children. Trick-or-treat has become increasingly popular with minors in New Zealand over the years, despite being not a \"British or Kiwi event\" that purely is only influenced by American globalization. Critics of Halloween in New Zealand believe that commercialization of Halloween by the popular store The Warehouse has pushed the popularity of Halloween into an unofficial national holiday.\n",
"Halloween is thought to have evolved from the ancient Celtic/Gaelic festival of Samhain, which was introduced in the American colonies by Irish settlers. It has become a holiday that is celebrated by children and teens who traditionally dress up in costumes and go door to door trick-or-treating for candy. It also brings about an emphasis on eerie and frightening urban legends and movies.\n",
"The traditions and importance of Halloween vary greatly among countries that observe it. In Scotland and Ireland, traditional Halloween customs include children dressing up in costume going \"guising\", holding parties, while other practices in Ireland include lighting bonfires, and having firework displays. In Brittany children would play practical jokes by setting candles inside skulls in graveyards to frighten visitors. Mass transatlantic immigration in the 19th century popularized Halloween in North America, and celebration in the United States and Canada has had a significant impact on how the event is observed in other nations. This larger North American influence, particularly in iconic and commercial elements, has extended to places such as Ecuador, Chile, Australia, New Zealand, (most) continental Europe, Japan, and other parts of East Asia. In the Philippines, during Halloween, Filipinos return to their hometowns and purchase candles and flowers, in preparation for the following All Saints Day (\"Araw ng mga Patay\") on 1 November and All Souls Day – though it falls on 2 November, most of them observe it on the day before. In Mexico and Latin American in general, it is referred to as \" Día de Muertos \" which translates in English to \"Day of the dead\". Most of the people from Latin America construct altars in their homes to honor their deceased relatives and they decorate them with flowers and candies and other offerings.\n",
"American historian and author Ruth Edna Kelley of Massachusetts wrote the first book-length history of Halloween in the US; \"\" (1919), and references souling in the chapter \"Hallowe'en in America\". In her book, Kelley touches on customs that arrived from across the Atlantic; \"Americans have fostered them, and are making this an occasion something like what it must have been in its best days overseas. All Halloween customs in the United States are borrowed directly or adapted from those of other countries\".\n",
"Some Christians feel concerned about the modern celebration of Halloween because they feel it trivializes – or celebrates – paganism, the occult, or other practices and cultural phenomena deemed incompatible with their beliefs. Father Gabriele Amorth, an exorcist in Rome, has said, \"if English and American children like to dress up as witches and devils on one night of the year that is not a problem. If it is just a game, there is no harm in that.\" In more recent years, the Roman Catholic Archdiocese of Boston has organized a \"Saint Fest\" on Halloween. Similarly, many contemporary Protestant churches view Halloween as a fun event for children, holding events in their churches where children and their parents can dress up, play games, and get candy for free. To these Christians, Halloween holds no threat to the spiritual lives of children: being taught about death and mortality, and the ways of the Celtic ancestors actually being a valuable life lesson and a part of many of their parishioners' heritage. Christian minister Sam Portaro wrote that Halloween is about using \"humor and ridicule to confront the power of death\".\n",
"Halloween is now the United States' second most popular holiday (after Christmas) for decorating; the sale of candy and costumes is also extremely common during the holiday, which is marketed to children and adults alike. The National Confectioners Association (NCA) reported in 2005 that 80% of American adults planned to give out candy to trick-or-treaters.\n"
] |
Can you layer sun protection products, and is their sun protection factor (SPF) cumulative?
|
You can layer, but you will only get the protection of the higher-rated product; SPF 15 + SPF 15 does not equal SPF 30.
Sunscreens are tested at a rate of coverage that works out to about 1/4 teaspoon for the average male face. It's a good idea to measure it out for a while to be sure you are applying enough.
Sunscreen doesn't last all day, so reapply about every 2 hours, or after profuse sweating, swimming, or rubbing.
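To put numbers on that (an added sketch, not part of the original answer): SPF is roughly the ratio of the UV dose needed to burn protected skin versus unprotected skin, so an SPF 15 product lets through about 1/15 of the burning UV. Under the rule of thumb above, layering behaves like taking the higher rating, not the sum:

```python
def transmitted_fraction(spf: float) -> float:
    """Approximate fraction of sunburn-causing UV that reaches the skin."""
    return 1.0 / spf

def layered_spf(spf_a: float, spf_b: float) -> float:
    """Rule of thumb from the answer above (an assumption, not a lab model):
    layering two products gives roughly the higher SPF, not the sum."""
    return max(spf_a, spf_b)

print(f"SPF 15 transmits ~{transmitted_fraction(15):.1%} of burning UV")  # ~6.7%
print(f"SPF 30 transmits ~{transmitted_fraction(30):.1%} of burning UV")  # ~3.3%
print(f"SPF 15 over SPF 15 acts like SPF {layered_spf(15, 15):g}, not SPF 30")
```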
|
[
"A relatively new rating designation for sun protective textiles and clothing is UPF (Ultraviolet Protection Factor), which represents the ratio of sunburn-causing UV measured without and with the protection of the fabric. For example, a fabric rated UPF 30 means that, if 30 units of UV fall on the fabric, only 1 unit will pass through to the skin. A UPF 30 fabric that blocks 29 out of 30 units of UV is therefore blocking 96.7%. Unlike SPF (Sun Protection Factor) measurements that traditionally use human sunburn testing, UPF is measured using a laboratory instrument (spectrophotometer or spectroradiometer) and an artificial light source, and then applying a sunburn weighting curve (erythemal action spectrum) across the relevant UV wavelengths. Theoretically, human SPF testing and instrument UPF testing both generate comparable measurements of a product's ability to protect against sunburn.\n",
"UPF (Ultraviolet Protection Factor) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to SPF (Sun Protection Factor) ratings for sunscreen. While standard summer fabrics have UPF ~6, sun protective clothing typically has UPF ~30, which means that only 1 out of ~30 units of UV will pass through (~3%).\n",
"The ultraviolet protection factor (UPF) is a similar scale developed for rating fabrics for sun protective clothing. According to recent testing by \"Consumer Reports\", UPF ~30+ is typical for protective fabrics, while UPF ~20 is typical for standard summer fabrics.\n",
"One field of application is in sunscreens. The traditional chemical UV protection approach suffers from its poor long-term stability. A sunscreen based on mineral nanoparticles such as titanium oxide offer several advantages. Titanium oxide nanoparticles have a comparable UV protection property as the bulk material, but lose the cosmetically undesirable whitening as the particle size is decreased.\n",
"Sunscreens are commonly rated and labeled with a sun protection factor (SPF) that measures the fraction of sunburn-producing UV rays that reach the skin. For example, \"SPF 15\" means that of the burning radiation reaches the skin through the recommended thickness of sunscreen. Other rating systems indicate the degree of protection from non-burning UVA radiation.\n",
"In 1992, the FDA reviewed clothing that was being marketed with claims of sun protection (SPF, % UV blockage, or skin cancer prevention). Only one brand of sun protective clothing, Solumbra, was cleared under medical device regulations. The FDA initially regulated sun protective clothing as a medical device, but later transferred oversight for general sun protective clothing to the FTC. The UPF rating system may eventually be adopted by interested apparel/textile/fabric manufacturers as a \"value added\" program for consumer safety and awareness. Before UPF standards were in place (which directly measure a fabric's ability to block UV radiation), clothing was previously rated using SPF standards (which measure how long a person's skin takes to redden).\n",
"For clothing, the Ultraviolet Protection Factor (UPF) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to SPF (Sun Protection Factor) ratings for sunscreen. Standard summer fabrics have UPF of approximately 6, which means that about 20% of UV will pass through.\n"
] |
Which books of the Bible attributed to a single author (ex: first six books of Old Testament to Moses, Luke/Acts to Luke, John/Revelation to John, various letters by Peter and Paul) do scholars agree were really written by the same person?
|
You might want to post this in /r/academicbiblical as well.
|
[
"Irenaeus (died c. 202) quotes and cites 21 books that would end up as part of the New Testament, but does not use Philemon, Hebrews, James, 2 Peter, 3 John and Jude. By the early 3rd century Origen of Alexandria may have been using the same 27 books as in the modern New Testament, though there were still disputes over the canonicity of Hebrews, James, 2 Peter, 2 and 3 John, and Revelation (see also Antilegomena). Likewise by 200 the Muratorian fragment shows that there existed a set of Christian writings somewhat similar to what is now the New Testament, which included four gospels and argued against objections to them. Thus, while there was plenty of discussion in the Early Church over the New Testament canon, the \"major\" writings were accepted by almost all Christian authorities by the middle of the second century.\n",
"There may have been a single author for the gospel and the three epistles. Tradition attributes all the books to John the Apostle. Most scholars agree that all three letters are written by the same author, although there is debate on who that author is. Although some scholars conclude the author of the epistles was different from that of the gospel, all four works probably originated from the same community, traditionally and plausibly attributed to Ephesus, \"c.\" 90-110, but perhaps, according to some scholars, from Syria.\n",
"The Epistle to the Hebrews of the Christian Bible is one of the New Testament books whose canonicity was disputed. Traditionally, Paul the Apostle was thought to be the author. However, since the third century this has been questioned, and the consensus among most modern scholars is that the author is unknown.\n",
"It is generally agreed that the Gospel of Luke and the Acts of the Apostles were both written by the same author, and they are often referred to as a single work called Luke-Acts. The most direct evidence comes from the prefaces of each book. Both prefaces were addressed to Theophilus, and Acts of the Apostles (1:1–2) says in reference to the Gospel of Luke, \"In my former book, Theophilus, I wrote about all that Jesus began to do and teach until the day He was taken up to heaven, after giving instructions through the Holy Spirit to the apostles He had chosen.\" (NIV) Furthermore, there are linguistic and theological similarities between the two works, suggesting that they have a common author. Both books also contain common interests.\n",
"The authorship of the Gospel of Luke and the Acts of the Apostles, collectively known as Luke–Acts, is an important issue for biblical exegetes who are attempting to produce critical scholarship on the origins of the New Testament. Traditionally, the text is believed to have been written by Luke the companion of Paul (named in Colossians ). However, the earliest manuscripts are anonymous, and the traditional view has been challenged by many modern scholars.\n",
"All other books with similar titles or narratives have not been included into the Bible, and belong to the New Testament apocrypha. These acts narratives tend to be later, legendary accounts about the early apostles written in the 2nd and 3rd century CE. The books normally do not claim to be written by apostles, but are anonymous, and thus they are not considered pseudepigrapha and forgeries. Unlike the canonical Book of Acts, they tend to focus on the exploits of individual apostles.\n",
"The epistles of Paul are generally regarded as the oldest extant Christian writings. These mention Jesus' mother (without naming her), but do not refer to his father. The Book of Mark, believed to be the first gospel to be written and with a date about two decades after Paul, also does not mention Jesus' father. Joseph first appears in the Gospels of Matthew and Luke, both dating from around 80–90 AD. The issue of reconciling the two accounts has been the subject of debate.\n"
] |
why are some people great at abstract thinking but terrible at algebra which involves it?
|
Math requires abstract thinking to understand why you're supposed to do certain things, but not to literally do them. You need abstract thinking to understand why you can divide both sides of 2x=4 by two, but not to actually do it.
The people who are good at math before calculus are people who can do well in a system that requires logical, step-by-step thinking. It isn't really until calc and beyond that you need to understand why you can/cannot do certain things.
|
[
"Abstract mathematical problems arise in all fields of mathematics. While mathematicians usually study them for their own sake, by doing so results may be obtained that find application outside the realm of mathematics. Theoretical physics has historically been, and remains, a rich source of inspiration.\n",
"By its great generality, abstract algebra can often be applied to seemingly unrelated problems; for instance a number of ancient problems concerning compass and straightedge constructions were finally solved using Galois theory, which involves field theory and group theory. Another example of an algebraic theory is linear algebra, which is the general study of vector spaces, whose elements called vectors have both quantity and direction, and can be used to model (relations between) points in space. This is one example of the phenomenon that the originally unrelated areas of geometry and algebra have very strong interactions in modern mathematics. Combinatorics studies ways of enumerating the number of objects that fit a given structure.\n",
"Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called \"elementary algebra\". The distinction is rarely made in more recent writings.\n",
"Mathematics and geometry describe abstract objects that sometimes correspond to familiar shapes, and sometimes do not. Circles, triangles, rectangles, and so forth describe two-dimensional shapes that are often found in the real world. However, mathematical formulas do not describe individual physical circles, triangles, or rectangles. They describe ideal shapes that are objects of the mind. The incredible precision of mathematical expression permits a vast applicability of mental abstractions to real life situations.\n",
"Because of its generality, abstract algebra is used in many fields of mathematics and science. For instance, algebraic topology uses algebraic objects to study topologies. The Poincaré conjecture, proved in 2003, asserts that the fundamental group of a manifold, which encodes information about connectedness, can be used to determine whether a manifold is a sphere or not. Algebraic number theory studies various number rings that generalize the set of integers. Using tools of algebraic number theory, Andrew Wiles proved Fermat's Last Theorem.\n",
"Although the algebra exists as a purely abstract construction, it can be most easily visualised in terms of operations on the edges and vertices of a dodecahedron. Hamilton himself used a flattened dodecahedron as the basis for his instructional game.\n",
"Algebra being a fundamental tool in any area amenable to mathematical treatment, these considerations combine to make the algebra of two values of fundamental importance to computer hardware, mathematical logic, and set theory.\n"
] |
In fantasy it is common to read about colored or painted armor. Is there any historical basis for this?
|
There was a style that became popular in 16th century Germany referred to as "black and white" armor. Decorative patterns were created by selectively polishing certain areas. [Here's a good example.](_URL_1_) As you might expect, the higher one's status, the fancier the pattern one could afford. [This](_URL_0_), by contrast, is a relatively "budget" example.
|
[
"The clear and detailed depiction of the costumes of the figures in the tinted drawings has been discussed and copied in works on the history of costume since the late 18th century; in particular the sleeveless open-seam surcoat worn over chain mail of the kneeling knight is often used as an example of this innovation from the Islamic world.\n",
"A report in 231 AD mentions the capture of 5,000 suits of \"dark armour\" (\"xuan kai\" or \"xuan jia\" 玄鎧/玄甲) and 3,100 crossbows. Dark armour appears in Han texts as well, but only as the attire worn by honor guards at funeral processions. The only known trait about dark armour is that it reflected the sun's rays. This probably means dark armour was made of high quality steel, which was often associated with black ferrous material.\n",
"The oldest extant depiction of a coloured armoury can be seen on the tomb of Geoffrey Plantagenet, Count of Anjou, who died in 1151. An enamel, probably commissioned by Geoffrey's widow between 1155 and 1160, depicts him carrying a blue shield decorated with six golden lions rampant. He wears a blue helmet adorned with another lion, and his cloak is lined in vair. A medieval chronicle states that Geoffrey was given a shield of this description when he was knighted by his father-in-law, Henry I, in 1128; but this account probably dates to about 1175. The earliest evidence of the association of lions with the English crown is a seal bearing two lions passant, used by the future King John during the lifetime of his father, Henry II, who died in 1189. Since Henry was the son of Geoffrey Plantagenet, it seems reasonable to suppose that the adoption of lions as an heraldic emblem by Henry or his sons might have been inspired by Geoffrey's shield. John's elder brother, Richard the Lionheart, who succeeded his father on the throne, is believed to have been the first to have borne the arms of three lions passant-guardant, still the arms of England, having earlier used two lions rampant combatant, which arms may also have belonged to his father. Richard is also credited with having originated the English crest of a lion statant (now statant-guardant).\n",
"A report in 231 AD mentions the capture of 5,000 suits of \"dark armour\" (\"xuan kai\" or \"xuan jia\" 玄鎧/玄甲) and 3,100 crossbows. Dark armour appears in Han texts as well, but only as the attire worn by honor guards at funeral processions. The only trait known about dark armour is that it reflected the sun's rays. This probably means dark armour was made of high quality steel, which was often associated with black ferrous material.\n",
"Maurice Denis cited the Talisman in his famous 1914 essay, \"New Theories of Modern Art and on Sacred Art\": \"Remember that a painting, before it is a horse in battle, a nude woman or a sort of anecdote, is essentially a flat surface covered with colors assembled in a certain order.\" \n",
"Conrad created art while he was in active duty during the Civil War. While there were several artists on the Union side who captured the war in painting, which were also active, this was not the case on the Confederate side. His works may be the only set of battle subjects painted by a Confederate army artist during the war.\n",
"The paintings are classified largely in two groups, one as depiction of hunters and food gatherers, while other one as fighters, riding on horses and elephant carrying metal weapons. the first group of paintings dates to prehistoric times while second one dates to historic times. Most of the paintings from historic period depicts battles between the rulers carrying swords, spears, bows and arrows.\n"
] |
the yahoo-alibaba spin-off
|
Yahoo is doing terribly and its core business is pretty much worthless. However, Yahoo owns a lot of shares in a company called Alibaba, which is actually very profitable. So they want to get rid of their actual business and make money on the shares instead.
|
[
"In June 2011, Alibaba Group Executive Chairman and former CEO Jack Ma announced that Taobao would split into three different companies: Taobao Marketplace (a C2C platform), Tmall.com (a B2C platform; then called Taobao Mall), and eTao (a search engine for online shopping). The move was said to be necessary for Taobao to “meet competitive threats that emerged in the past two years during which the Internet and e-commerce landscape has changed dramatically.”\n",
"In June 2011, Alibaba Group Chairman and CEO Jack Ma announced a major restructuring of Taobao through an internal email. It was reorganized into three separate companies. As a result, Tmall.com became an independent business under Alibaba Group. The other two businesses that resulted from the reorganization are Taobao Marketplace (a C2C marketplace) and eTao (a shopping search engine). The move was said to be necessary for Taobao to \"meet competitive threats that emerged in the past two years during which the Internet and e-commerce landscape has changed dramatically\".\n",
"Altaba Inc. is a non-diversified, closed-end management investment company based in New York City that was formed from the remains of Yahoo! Inc. after Verizon acquired Yahoo's Internet business. The company that remained after the purchase changed its name to Altaba Inc. on June 16, 2017. Verizon completed its acquisition of Yahoo!'s core internet business on June 13, 2017, and put the assets under a new subsidiary named Yahoo! Holdings within its newly created division, Oath. The only Yahoo!-branded interest held by Altaba was its stake in the joint venture Yahoo! Japan but this stake has since been sold to SoftBank Group.\n",
"On June 16, 2017, the company that remained after Verizon Communications purchased the core Internet businesses of Yahoo! Inc. was renamed Altaba Inc. The new company, listed by the Securities and Exchange Commission as a \"non-diversified, closed-end management investment company,\" immediately began trading on NASDAQ under the ticker symbol AABA.\n",
"Alibaba's main founder Jack Ma is the executive chairman of the Alibaba Group since its creation. Joseph Tsai is Alibaba's executive vice-chairman since 2013. Daniel Zhang is Alibaba's CEO since 2015. J. Michael Evans is Alibaba's president since 2015. The board of directors of Alibaba includes top management Jack Ma, Joseph Tsai, Daniel Zhang, and J. Michael Evans, directors Eric Jing and Masayoshi Son (founder and CEO of SoftBank), and independent directors such as Chee Hwa Tung, Walter Kwauk, Börje E. Ekholm, and Wan Ling Martello, as well as Yahoo! co-founder and former CEO Jerry Yang. Besides Ma, Tsai, Zhang, and Evans, senior management also includes Maggie Wu (CFO) Judy Tong (CPO), Jeff Zhang (CTO and President of Alibaba Cloud Intelligence), Sophie Wu (CCO), Tim Steinert (General Counsel and Secretary), Jessie Zheng (CRO and CRGO/Chief Platform Governance Officer), Angel Zhao (Head of Alibaba Globalization Leadership Group), Chris Tung (CMO), Trudy Dai (President of Wholesale Marketplaces), Fan Jiang (President of Taobao.com), and Jet Jing (President of Tmall.com).\n",
"In October 1999 and January 2000, Alibaba twice won a total of a $25 million foreign venture capital investment. The program was expected to improve the domestic e-commerce market and perfect an e-commerce platform for Chinese enterprises, especially small and medium-sized enterprises (SMEs), to address World Trade Organization (WTO) challenges. Ma wanted to improve the global e-commerce system and from 2003 he founded Taobao Marketplace, Alipay, Ali Mama and Lynx. After the rapid rise of Taobao, eBay offered to purchase the company. However, Ma rejected their offer, instead garnering support from Yahoo co-founder Jerry Yang with a $1 billion investment.\n",
"On June 16, 2017, parts of the original Yahoo Inc, which were not purchased by Verizon Communications, were renamed Altaba Inc. On the United States Securities and Exchange Commission's website, they listed the new company as a \"non-diversified, closed-end management investment company.\"\n"
] |
the bill of rights
|
Most of them are pretty self-explanatory.
1. There can be no laws against what you say, or what religion you follow, or who you associate with.
2. We need a military force, so you're allowed to own guns.
3. You cannot be forced to house troops.
4. Your home can't be searched without an OK from a judge.
5. You can't be tried twice for the same offense. You can't be forced to testify against yourself.
6. You have to have a fair public trial with witnesses you can cross-examine.
7. If you want a jury trial, you are entitled to have it.
8. The punishment must fit the crime.
9. This is not an all-encompassing list. You may have other rights too.
10. If it's not specifically a federal issue, then it's automatically a state issue. [There's more, but I can't really describe it LY5.]
|
[
"The Second Bill of Rights is a list of rights that was proposed by United States President Franklin D. Roosevelt during his . In his address, Roosevelt suggested that the nation had come to recognize and should now implement, a second \"bill of rights\". Roosevelt's argument was that the \"political rights\" guaranteed by the Constitution and the Bill of Rights had \"proved inadequate to assure us equality in the pursuit of happiness\". His remedy was to declare an \"economic bill of rights\" to guarantee these specific rights:\n",
"The United States Bill of Rights is the first ten amendments to the United States Constitution. Proposed following the oftentimes bitter 1787–88 battle over ratification of the United States Constitution, and crafted to address the objections raised by Anti-Federalists, the Bill of Rights amendments add to the Constitution specific guarantees of personal freedoms and rights, clear limitations on the government's power in judicial and other proceedings, and explicit declarations that all powers not specifically delegated to Congress by the Constitution are reserved for the states or the people. The concepts codified in these amendments are built upon those found in several earlier documents, including the Virginia Declaration of Rights and the English Bill of Rights 1689, along with earlier documents such as Magna Carta (1215). Although James Madison's proposed amendments included a provision to extend the protection of some of the Bill of Rights to the states, the amendments that were finally submitted for ratification applied only to the federal government.\n",
"The United States Bill of Rights comprises the first ten amendments to the United States Constitution. Proposed following the often bitter 1787–88 debate over the ratification of the Constitution, and written to address the objections raised by Anti-Federalists, the Bill of Rights amendments add to the Constitution specific guarantees of personal freedoms and rights, clear limitations on the government's power in judicial and other proceedings, and explicit declarations that all powers not specifically granted to the U.S. Congress by the Constitution are reserved for the states or the people. The concepts codified in these amendments are built upon those found in earlier documents, especially the Virginia Declaration of Rights (1776), as well as the English Bill of Rights (1689) and the Magna Carta (1215).\n",
"The United States Bill of Rights consists of 10 amendments added to the Constitution in 1791, as supporters of the Constitution had promised critics during the debates of 1788. The English Bill of Rights (1689) was an inspiration for the American Bill of Rights. Both require jury trials, contain a right to keep and bear arms, prohibit excessive bail and forbid \"cruel and unusual punishments\". Many liberties protected by state constitutions and the Virginia Declaration of Rights were incorporated into the Bill of Rights.\n",
"The Bill of Rights is commonly dated in legal contexts to 1688. This convention arises from the legal fiction (prior to the passage of the Acts of Parliament (Commencement) Act 1793) that an Act of Parliament came into force on the first day of the session in which it was passed. The Bill was therefore deemed to be effective from 13 February 1689 (New Style), or, under the Old Style calendar in use at the time, 13 February 1688.\n",
"A bill of rights, sometimes called a declaration of rights or a charter of rights, is a list of the most important rights to the citizens of a country. The purpose is to protect those rights against infringement from public officials and private citizens. \n",
"The Bill of Rights was originally proposed to assuage Anti-Federalist opposition to Constitutional ratification. Initially, the First Amendment applied only to laws enacted by the Congress, and many of its provisions were interpreted more narrowly than they are today. Beginning with \"Gitlow v. New York\" (1925), the Supreme Court applied the First Amendment to states—a process known as incorporation—through the Due Process Clause of the Fourteenth Amendment.\n"
] |
the philosophical concept of epiphenomenal qualia and jackson's "mary" thought experiment.
|
It's not open and shut because it's not demonstrable that Mary *has* learned anything new. [Mary's room](_URL_1_) has been argued by greater (or at least more singularly focused) minds than either of ours, and yet they still disagree.
This is how I see it: Suppose Mary *and Martha* work together in the black & white room, studying color vision. After a thoroughly complete study, and a development in both researchers of a complete-as-possible understanding of the phenomenon, Mary leaves the room and experiences color vision for the first time. When she returns, Martha asks her what she has learned. Can Mary tell Martha anything that will expand Martha's understanding of color vision? I think it is obvious that she cannot, so Mary really hasn't gained any new *knowledge*, even though she may perceive that she has.
Similarly, a person using "magic mushrooms" or LSD may perceive subjectively that they have expanded their consciousness and gained knowledge far beyond what their tiny minds could have held before. This is subjective, however; the knowledge is "useful" only within the tripper's own psyche.
Lastly, take a look at the problem from another angle used in the study of artificial intelligence. It is obvious that while in the black & white room, Mary does not experience color vision. However, the *system* which is composed of Mary, the room, the monitor, and all connected cameras and sensing equipment, *does* experience color vision. That *system* can differentiate a red apple from a green one just as readily as any person who can perceive color naturally. That person, after all, is a system of optics, sensory apparatus and neural tissue that can perceive color, even though the actual sensors (the [cone cells](_URL_2_) of the [retina](_URL_0_)) only register relative light intensity.
|
[
"Jackson believed in the explanatory completeness of physiology, that all behaviour is caused by physical forces of some kind. And the thought experiment seems to prove the existence of qualia, a non-physical part of the mind. Jackson argued that if both of these theses are true, then epiphenomenalism is true—the view that mental states are caused by physical states, but have no causal effects on the physical world.\n",
"This thought experiment has two purposes. First, it is intended to show that qualia exist. If one agrees with the thought experiment, we believe that Mary gains something after she leaves the room—that she acquires knowledge of a particular thing that she did not possess before. That knowledge, Jackson argues, is knowledge of the quale that corresponds to the experience of seeing red, and it must thus be conceded that qualia are real properties, since there is a difference between a person who has access to a particular quale and one who does not.\n",
"Nearing 1840, William Whewell thought that the inductive sciences, so called, were not so simple after all, and asked recognition of \"superinduction\", an explanatory scope or principle invented by the mind to unite facts, but not present \"in\" the facts. Mill would have none of hypotheticodeductivism, posed by Whewell as science's method, which Whewell believed to sometimes, via other considerations upon the evidence, render scientific theories of known metaphysical truth. By 1880, C S Peirce had clarified the basis of deductive inference and, although recognizing induction, proposed a third type of inference that Peirce called \"abduction\", now otherwise termed \"inference to the best explanation\" (IBE).\n",
"Adams learned of the irregularities while still an undergraduate and became convinced of the \"perturbation\" hypothesis. Adams believed, in the face of anything that had been attempted before, that he could use the observed data on Uranus, and utilising nothing more than Newton's law of gravitation, deduce the mass, position and orbit of the perturbing body.\n",
"First Jackson argued that qualia are epiphenomenal: not causally efficacious with respect to the physical world. Jackson does not give a positive justification for this claim—rather, he seems to assert it simply because it defends qualia against the classic problem of dualism. Our natural assumption would be that qualia must be causally efficacious in the physical world, but some would ask how we could argue for their existence if they did not affect our brains. If qualia are to be non-physical properties (which they must be in order to constitute an argument against physicalism), some argue that it is almost impossible to imagine how they could have a causal effect on the physical world. By redefining qualia as epiphenomenal, Jackson attempts to protect them from the demand of playing a causal role.\n",
"Nemirow and Lewis present the \"ability hypothesis\", and Conee argues for the \"acquaintance hypothesis\". Both approaches attempt to demonstrate that Mary \"gains no new knowledge, but instead gains something else\". If she in fact gains no new propositional knowledge, they contend, then what she does gain may be accounted for within the physicalist framework. These are the two most notable objections to Jackson's thought experiment, and the claim it sets out to make.\n",
"The original neuropsychological theory of hypnotic suggestion was based upon the ideomotor reflex response that William B. Carpenter declared, in 1852, was the principle through which James Braid's hypnotic phenomena were produced.\n"
] |
Today I was in a 15 story building during an earthquake. If the building collapsed, would I have been safer on the first floor, 15th floor or somewhere in between?
|
Structural engineer here. Everything is coming straight down in a total collapse. Think building demo. People have this idea of buildings falling way sideways like a Jenga block and it doesn't work that way at these scales.
But what you should do during an earthquake is different from where you'd ideally be if the building were actually going to collapse. Remember, engineers design buildings in earthquake zones to survive earthquakes that statistics indicate are the likely worst case. The same is true for hurricanes along the Gulf Coast.
If I knew for a fact the building was coming all the way down I'd want to be the hell outta there. If I had to be in the building I'd pick the basement, beneath the biggest columns and girders I could find. Buildings tend to be pretty close to free fall once total collapse is induced, so the idea of riding the roof down isn't too different from jumping. In the middle you get crushed. At the bottom you probably get crushed too, but you may be able to pick a spot that doesn't get pancaked and hope to be dug out.
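To put very rough numbers on the "close to free fall" point, here's a back-of-the-envelope sketch (my own illustration, not an engineering calculation: it assumes ~3.5 m per story and literally unresisted free fall, which real collapses only approximate):

```python
# Rough free-fall numbers for a 15-story building.
# Assumptions (illustrative only): ~3.5 m per story, g = 9.81 m/s^2,
# and unresisted free fall -- real collapses are somewhat slower.
import math

g = 9.81              # m/s^2
story_height = 3.5    # m, assumed
stories = 15

height = stories * story_height          # ~52.5 m
fall_time = math.sqrt(2 * height / g)    # t = sqrt(2h/g)
impact_speed = g * fall_time             # v = g * t

print(f"Height: {height:.1f} m")
print(f"Free-fall time: ~{fall_time:.1f} s")
print(f"Impact speed: ~{impact_speed:.0f} m/s (~{impact_speed * 3.6:.0f} km/h)")
```

That works out to roughly three seconds and an impact speed north of 100 km/h, which is why riding the roof down isn't meaningfully better than jumping.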
Odds are good the building (in America or Japan, don't know about other places) will survive an earthquake, but with partial damage.
In an actual earthquake *you should not assume a full collapse*.
Get to the inside (away from exterior windows), stay out of the elevator, and stand underneath a doorway if you can. The main structure should hold; anything *in* it is likely to move or collapse.
|
[
"Of 14 hotels that had already been built under false pretenses when the problem first came to light, two of them were shown to fail Japan's earthquake resistance standards. Because the concrete had insufficient reinforcing steel, there was a fear that an earthquake of magnitude 5 on the Japanese Shindo scale could cause the buildings to collapse. None of the buildings in question have yet collapsed, but the safety of high-rise buildings in earthquake-prone Japan has been called into question, particularly in the light of the 1995 Sampoong Department Store collapse in Korea.\n",
"The building collapsed in the February 2011 Christchurch earthquake, with only the north shear wall that included the lift shaft left still standing. One survivor was quoted as running out of the ground floor during the shaking. When she had reached the other side of the 14 meter wide road, she looked back and \"the building was down.\" Within minutes, a fire broke out. Most of the deaths were caused by the collapse, but it is assumed that some of the victims suffered fatal burns, and some may have even drowned during the efforts of putting the fire out.\n",
"A survey by the government of the damage done found that few buildings from one to five stories suffered serious damage; the same was true for buildings over fifteen stories. When the buildings were built seemed to have an effect as well. Before the 1957 earthquake, there were no building codes with respect to earthquake resistance. Some regulations were passed in that year and more in 1976 after another, stronger earthquake shook the city. However, none of these regulations had an event like 1985's in mind when passed. Most of the seriously damaged buildings were built between 1957 and 1976, when the city was starting to build upwards, in the six-to-fifteen floor range. In second place were buildings from before 1957, possibly because they were weakened by the earlier earthquakes. Structures built between 1976 and 1985 suffered the least damage.\n",
"Traditional seismic design assumes that the lower stories of a building are stronger than the upper stories; where this is not the case—if the lower story is less strong than the upper structure—the structure will not respond to earthquakes in the expected fashion. Using modern design methods, it is possible to take a weak lower story into account. Several failures of this type in one large apartment complex caused most of the fatalities in the 1994 Northridge earthquake.\n",
"Many buildings did not hold up to the shaking of the earthquake and those that did collapse often lacked any survival space, but lack of effective medical care and poor planning also contributed to the substantial scope of the disaster. Buildings that didn't collapse featured well-maintained masonry and skeletal components that were joined together adequately in a way that allowed for the building to resist seismic waves. Most bridges and tunnels and other public infrastructure withstood the earthquake but hospitals did not fare well. Most collapsed, killing two-thirds of their doctors, destroying equipment and medicine, and reducing the capacity to handle the critical medical needs in the region.\n",
"Among the buildings collapsed by the earthquake was the six-story Bayram Hotel, which was hosting journalists and rescue workers. A cameraman for the Cihan News Agency had left the hotel just prior to the earthquake and said that some journalists trapped in the rubble in the building had sent text messages to colleagues asking to be rescued. The building, which had been renovated the previous year, was forty years old. A Japanese aid worker who had traveled to Turkey for the October earthquake relief work was reported to have been pulled alive from the rubble of the Bayram hotel but later died of his injuries at a hospital. Residents were angry with authorities for not having closed the building after it suffered cracks and a damaged elevator in the large earthquake a month earlier.\n",
"BULLET::::- In the 1998 disaster television movie Earthquake in New York, during the earthquake the Empire State Building begins to crack and the windows begin to shatter, parts of the building collapse and fall but the building remains standing once the earthquake finishes.\n"
] |
the lake effect, as in what happened in buffalo.
|
Basically, the lake effect is what happens when cold air moves over warmer water, picks up water vapor, and then dumps that moisture as snow once the air moves downwind over land. Since Buffalo sits right next to Lakes Erie and Ontario, there is plenty of open water for the cold air to pass over. Additionally, since Buffalo is at a higher elevation than the lakes, the air is forced upward as it comes ashore, which wrings out the moisture and produces very intense snowbands.
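For a sense of scale on the "cold air over warmer water" part: warm water can evaporate far more vapor into the air than cold air can hold, so when that moistened air cools downwind, the excess has to come out as snow. A minimal sketch using the standard Magnus/Bolton approximation for saturation vapor pressure (the lake and air temperatures are assumed for illustration, not taken from the explanation above):

```python
# Illustrative comparison: vapor capacity of warm lake-surface air vs. cold arctic air.
import math

def saturation_vapor_pressure_hpa(temp_c: float) -> float:
    """Bolton (1980) / Magnus approximation, reasonable for roughly -30 C to +35 C."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

lake_surface_c = 8.0    # assumed early-winter Lake Erie surface temperature
cold_air_c = -10.0      # assumed arctic air mass temperature

e_lake = saturation_vapor_pressure_hpa(lake_surface_c)
e_cold = saturation_vapor_pressure_hpa(cold_air_c)

print(f"Saturation vapor pressure at lake surface: {e_lake:.1f} hPa")
print(f"Saturation vapor pressure of the cold air: {e_cold:.1f} hPa")
print(f"Roughly {e_lake / e_cold:.1f}x more vapor available than the cold air can keep as vapor")
```

With those assumed temperatures the ratio comes out around 3-4x; that surplus is the moisture that falls out as snow once the air moves over land and rises.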
|
[
"Buffalo was a major port on Lake Erie and felt the force of the storm as water from the lake forced ships onto the piers and shoreline of the city. The creek rose 20 feet as the wind and the harbor front were swept away.\n",
"In Buffalo, New York, another winter storm triggered a strong lake-effect band, which impacted the city and its immediate southern suburbs from November 17–19, 2014, with a second wave hitting November 20 before shifting southward and weakening. As much as 65 inches fell in Cheektowaga. Snow fell at rates as high as five inches per hour. However, nearby regions of Buffalo only received between one and six inches from the storm. Once the band dissipated, the risk of flooding became a significant concern, as temperatures were forecast to rise sharply and rain was forecast to enter the area beginning November 23, causing the snowpack to melt rapidly.\n",
"Buffalo's water system is operated by Veolia Water. To reduce large-scale ice blockage in the Niagara River—with resultant flooding, ice damage to docks and other waterfront structures, as well as blockage of the water intakes for the hydro-electric power plants at Niagara Falls—the New York Power Authority and Ontario Power Generation have jointly operated the Lake Erie-Niagara River Ice Boom since 1964. The boom is installed on December 16, or when the water temperature reaches , whichever happens first. The boom is opened on April 1 unless there is more than of ice remaining in Eastern Lake Erie. When in place, the boom stretches from the outer breakwall at Buffalo Harbor almost to the Canadian shore near the ruins of the pier at Erie Beach in Fort Erie. The boom was originally made of wooden timbers, but these have been replaced by steel pontoons.\n",
"The October 2006 Buffalo storm was an unusual early-season lake effect snow storm that hit the Buffalo, New York area and other surrounding areas of the United States and Canada, from the afternoon of Thursday, October 12 through the morning of Friday, October 13, 2006. It was called Lake Storm \"Aphid\" by the National Weather Service office in Buffalo in accordance with their naming scheme of lake effect snow storms for that year, which related to insects, though locals never used that terminology and have simply referred to it as the October Surprise or the October Storm or Arborgeddon.\n",
"The water buffalo incident was a controversy at the University of Pennsylvania in 1993, in which a Jewish student, Eden Jacobowitz, was charged with violating the university's racial harassment policy. The incident received widespread publicity as part of the increasing debate about political correctness in the United States in the 1990s.\n",
"The new snow associated with the cold front and the snow that had accumulated on land and frozen Lake Erie were all blown by the strong winds and created drifts of over in metropolitan Buffalo. During the blizzard, about of \"new\" snow fell, and much of this was thought to be from snow that had been in the snowpack on Lake Erie.\n",
"The effect is similar to a storm surge like that caused by hurricanes along ocean coasts, but the seiche effect can cause oscillation back and forth across the lake for some time. In 1954, the remnants of Hurricane Hazel piled up water along the northwestern Lake Ontario shoreline near Toronto, causing extensive flooding, and established a seiche that subsequently caused flooding along the south shore.\n"
] |
Is there any other way for life to develop besides cells?
|
It seems like this question gets into the philosophical definition of "what is life." Scientists are still debating whether or not viruses constitute a form of life, so realistically we haven't even nailed down a definition of life on earth.
That being said, I don't see why life on other planets would have to adhere to any of the rules that life on earth follows.
|
[
"The German pathologist Rudolf Virchow brought forward the idea that not only does life arise from cells, but every cell comes from another cell; \"\"Omnis cellula e cellula\"\". Until now, most attempts to create an artificial cell have only created a package that can mimic certain tasks of the cell. Advances in cell-free transcription and translation reactions allow the expression of many genes, but these efforts are far from producing a fully operational cell.\n",
" \"The First Cell arose in the previously pre-biotic world with the coming together of several entities that gave a single vesicle the unique chance to carry out three essential and quite different life processes. These were: (a) to copy informational macromolecules, (b) to carry out specific catalytic functions, and (c) to couple energy from the environment into usable chemical forms. These would foster subsequent cellular evolution and metabolism. Each of these three essential processes probably originated and was lost many times prior to The First Cell, but only when these three occurred together was life jump-started and Darwinian evolution of organisms began.\" (Koch and Silver, 2005) \n",
"In cellular biology, labile cells are cells that multiply constantly throughout life. The cells are alive for only a short period of time. Due to this,they can end up reproducing new stem cells and replace functional cells. Especially if the cells become injured through a process called necrosis, or even if the cells go through apoptosis. The way these cells regenerate and replace themselves is quite unique. While going through cell division, one of the two daughter cells actually becomes a new stem cell. This occurs so then that daughter cell can end up restoring the population of the stem cells that were lost. The other daughter cell separates itself into a functional cell in order to replace the lost, or injured cells during this process. Labile cells are one type of the cells that are involved in the division of cells. The other two types that are involved include stable cells and permanent cells. \n",
"Although roughly cellular in appearance, microspheres in and of themselves are not alive. Although they do reproduce asexually by budding, they do not pass on any type of genetic material. However they may have been important in the development of life, providing a membrane-enclosed volume which is similar to that of a cell. Microspheres, like cells, can grow and contain a double membrane which undergoes diffusion of materials and osmosis. Sidney Fox postulated that as these microspheres became more complex, they would carry on more lifelike functions. They would become heterotrophs, organisms with the ability to absorb nutrients from the environment for energy and growth. As the amount of nutrients in the environment decreased at that period, competition for those precious resources increased. Heterotrophs with more complex biochemical reactions would have an advantage in this competition. Over time, organisms would evolve that used photosynthesis to produce energy.\n",
"A living \"artificial cell\" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Nobody has been able to create such a cell.\n",
"Multicellular life is made possible by the coordination of physically and temporally distinct processes, most prominently through hormones. Hormones mediate critical activities in vertebrates, including ontogeny, somatic and reproductive physiology, sexual development, performance and behaviour. \n",
"In the area of synthetic biology, a \"living\" artificial cell has been defined as a completely synthetically made cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Such a cell is not technically feasible yet, but a variation of an artificial cell has been created in which a completely synthetic genome was introduced to genomically emptied host cells. Although not completely artificial because the cytoplasmic components as well as the membrane from the host cell are kept, the engineered cell is under control of a synthetic genome and is able to replicate.\n"
] |
How are electrical signals traveling on neurons directed to its target?
|
What /u/unia_7 said. Each nerve carries a large number of individual axons. Each axon either goes to one group of muscle fibers, or from one sensory receptor.
That said, an axon CAN split in two and make synapses onto multiple separate target neurons. But it can't 'route' electrical signals down one branch vs. another. A branched axon duplicates the information it transmits; it can't selectively route information.
|
[
"A neuron receives signals from neighboring cells through branched, cellular extensions called dendrites. The neuron then propagates an electrical signal down a specialized axon extension to the synapse, where neurotransmitters are released to propagate the signal to another neuron or effector cell (e.g., muscle or gland). The polarity of the neuron thus facilitates the directional flow of information, which is required for communication between neurons and effector cells.\n",
"In the mammalian central nervous system, signal transmission is carried out by interconnected networks of nerve cells, or neurons. For the basic pyramidal neuron, the input signal is carried by the axon, which releases neurotransmitter chemicals into the synapse which is picked up by the dendrites of the next neuron, which can then generate an action potential which is analogous to the output signal in the computational case.\n",
"The signaling process is partly electrical and partly chemical. Neurons are electrically excitable, due to maintenance of voltage gradients across their membranes. If the voltage changes by a large enough amount over a short interval, the neuron generates an all-or-nothing electrochemical pulse called an action potential. This potential travels rapidly along the axon, and activates synaptic connections as it reaches them. Synaptic signals may be excitatory or inhibitory, increasing or reducing the net voltage that reaches the soma.\n",
"Jack studies how nerve cells, or neurons, communicate with one another in the nervous system. He is also interested in understanding how chemical and electrical signals move through neural networks, such as the spinal cord or cerebral cortex. Although neurons form large networks, these cells do not actually touch each other. Instead, when the end of a nerve is activated it releases ions or chemicals known as neurotransmitters. Subsequently, these move across the gap, or synapse, between the neuron and the adjacent cell in the network, activating its receptors and perpetuating the signal. Jack applies theoretical and experimental approaches to research this process of synaptic transmission. This includes the use of neurophysiology methods to record bioelectrical activity and mathematical models to analyse the central and peripheral nervous systems. His work on neurotransmission is offering insight into disorders of the nervous system, such as Alzheimer’s disease and multiple sclerosis, and has the potential to improve their diagnosis.\n",
"Most neurons receive signals via the dendrites and soma and send out signals down the axon. At the majority of synapses, signals cross from the axon of one neuron to a dendrite of another. However, synapses can connect an axon to another axon or a dendrite to another dendrite.\n",
"Neurons communicate with one another via synapses. Synapses are specialized junctions between two cells in close apposition to one another. In a synapse, the neuron that sends the signal is the presynaptic neuron and the target cell receives that signal is the postsynaptic neuron or cell. Synapses can be either electrical or chemical. Electrical synapses are characterized by the formation of gap junctions that allow ions and other organic compound to instantaneously pass from one cell to another. Chemical synapses are characterized by the presynaptic release of neurotransmitters that diffuse across a synaptic cleft to bind with postsynaptic receptors. A neurotransmitter is a chemical messenger that is synthesized within neurons themselves and released by these same neurons to communicate with their postsynaptic target cells. A receptor is a transmembrane protein molecule that a neurotransmitter or drug binds. Chemical synapses are slower than electrical synapses.\n",
"Neurotransmission can also occur through electrical synapses. Due to the direct connection between excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated transmission. Rectifying channels ensure that action potentials move only in one direction through an electrical synapse. Electrical synapses are found in all nervous systems, including the human brain, although they are a distinct minority.\n"
] |
what are the blue and orange/yellow lines that i see on the edges of everything when i have my glasses on?
|
You are nearsighted, your eyeglass lenses are concave, and the outer edges act like prisms that split up light into its component colors. A white object, seen through the edge of your lenses, will appear to have a reddish halo that oozes out towards the outer edge of your lens, with a corresponding bluish halo that oozes in towards the center of your lens - and this happens no matter if you are looking through the left, right, top, or bottom edges.
At night, you can enjoy superhuman vision skills. Go outside and glance at a distant street light through the outer edge of one of your lenses. You will see a truncated rainbow - just three or four colors instead of a full wash - and you will be able to tell whether the streetlight is a sodium-vapor lamp (a heavy orange halo) or mercury-vapor lamp (a blue halo plus a green halo) just from the spectrum that only you are able to see because of the lenses that give you mutant powers.
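If you want a feel for how wide that colored fringe actually is, here's a rough sketch. It treats the edge of a strong concave lens as a thin prism; the glass values are for common BK7 crown glass, but the wedge angle is an assumed, illustrative number rather than anything from a real prescription:

```python
# Estimate of the blue-to-red fringe ("lateral chromatic aberration") at a lens edge,
# using the thin-prism approximation: deviation ~ (n - 1) * wedge_angle.
n_d = 1.5168            # BK7 refractive index at 587 nm (yellow)
abbe_v = 64.2           # BK7 Abbe number; (n_F - n_C) = (n_d - 1) / V
wedge_angle_deg = 10.0  # effective wedge angle near the lens edge, assumed

mean_deviation_deg = (n_d - 1) * wedge_angle_deg
color_spread_deg = (n_d - 1) / abbe_v * wedge_angle_deg

print(f"Mean deviation at the lens edge: ~{mean_deviation_deg:.2f} degrees")
print(f"Blue-to-red spread (the fringe): ~{color_spread_deg:.3f} degrees "
      f"(~{color_spread_deg * 60:.0f} arcminutes)")
```

A spread of a few arcminutes is small, but it's comfortably above what the eye can resolve on a bright point source like a streetlight, which is why the separated colors jump out at night.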
|
[
"Glass containing two or more phases with different refractive indices shows coloring based on the Tyndall effect and explained by the Mie theory, if the dimensions of the phases are similar or larger than the wavelength of visible light. The scattered light is blue and violet as seen in the image, while the transmitted light is yellow and red.\n",
"A pair of glasses, with filters of opposing colors, is worn to view an anaglyphic photo image. A red filter lens over the left eye allows graduations of red to cyan from within the anaglyph to be perceived as graduations of bright to dark. The cyan (blue/green) filter over the right eye conversely allows graduations of cyan to red from within the anaglyph to be perceived as graduations of bright to dark. Red and cyan color fringes in the anaglyph display represent the red and cyan color channels of the parallax-displaced left and right images. The viewing filters each cancel out opposing colored areas, including graduations of less pure opposing colored areas, to each reveal an image from within its color channel. Thus the filters enable each eye to see only its intended view from color channels within the single anaglyphic image.\n",
"It is based on the Young-Helmholtz theory that the normal human eye sees color because its inner surface is covered with millions of intermingled cone cells of three types: in theory, one type is most sensitive to the end of the spectrum we call \"red\", another is more sensitive to the middle or \"green\" region, and a third which is most strongly stimulated by \"blue\". The named colors are somewhat arbitrary divisions imposed on the continuous spectrum of visible light, and the theory is not an entirely accurate description of cone sensitivity. But the simple description of these three colors coincides enough with the sensations experienced by the eye that when these three colors are used the three cones types are adequately and unequally stimulated to form the illusion of various intermediate wavelengths of light.\n",
"The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. As far as the responses of the receptors in the eye are concerned, there is no such thing as \"brown\" or \"gray\" light. The latter color names refer to orange and white light respectively, with an intensity that is lower than the light from surrounding areas. One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background, even though the \"black\" has in fact not become darker than the white screen on which it is projected before the projector was turned on. The \"black\" areas have not actually become darker but appear \"black\" relative to the higher intensity \"white\" projected onto the screen around it. See also color constancy.\n",
"Blue light (optimal wavelength: 430 nm) is absorbed by the red blood cells that fill the capillaries. The eye and brain \"edit out\" the shadow lines of the capillaries, partially by dark adaptation of the photoreceptors lying beneath the capillaries. The white blood cells, which are much rarer than the red ones and do not absorb blue light, create gaps in the blood column, and these gaps appear as bright dots. The gaps are elongated because a spherical white blood cell is too wide for the capillary. Red blood cells pile up behind the white blood cell, showing up like a dark tail. This behavior of the blood cells in the capillaries of the retina has been directly observed in human subjects by adaptive optics scanning laser ophthalmoscopy, a real time imaging technique for examining retinal blood flow.\n",
"The white blood cells in the capillaries in front of the photoreceptors can be perceived as tiny bright moving dots when looking into blue light. This is known as the blue field entoptic phenomenon (or Scheerer's phenomenon).\n",
"The color of these two bands of light will vary depending on where the particle and liquid match in refractive index, the location of λo. If the match is near the blue end of the spectrum then the Becke' Line moving into the particle will contain nearly all of the visible wavelengths except blue and will appear as a pale yellow. The Becke` Line moving out will appear a very dark blue. If the match is near the red end of the spectrum then the Becke` Line moving into the particle will appear dark red and the Becke` Line moving out will appear pale blue. If the λo is near the middle of the visible wavelengths then the Becke` Line moving into the particle will be orange and the Becke` Line moving out will be sky blue. The colors seen (see Chart 1) can be used to very precisely determine the refractive index of the unknown or confirm the identity of the unknown, as in the case of asbestos identification. Examples of this type of dispersion staining and the colors shown for different λo's can be seen at http://microlabgallery.com/gallery-dsbecke.aspx. The presence of two colors helps to bracket the wavelength at which the refractive index matches for the two materials.\n"
] |
Why is New York City named the same as the state? IE why do the city and state share the same name? Do any other places on earth do this?
|
> Do any other places on earth do this?
There's one example in Germany: Bremen, Bremen. Bremen (the city) is part of Bremen (the state), together with another city called Bremerhaven (which roughly translates to *Port of Bremen*). This is the only such example in Germany; for Hamburg and Berlin, there is no distinction between the state and the city.
|
[
"New York City is frequently shortened to simply \"New York\", \"NY\", or \"NYC\". New York City is also known as \"The City\" or \"The Big City\" in much of the Western hemisphere. Other monikers have taken the form of \"Hong Kong on the Hudson\" or \"Baghdad on the Subway\", references in different cases to the city's prominence or its immigrant groups.\n",
"City names are treated as foreign words (London), except when part of the name itself is a regular noun or adjective: Nov-York (\"Nov\" for \"nova\", or \"new\", but the place name York is not changed as in Esperanto \"Nov-Jorko\"). This is not a hard and fast rule, however, and \"New York\" is also acceptable, which is similar to writing \"Köln\" in English for the city of Cologne in Germany. South Carolina becomes Sud-Karolina, much in the same way that a river called the \"Schwarz River\" is not transcribed as the \"Black River\" in English even though \"schwarz\" is the German word for \"black\". However, less well-known place names are generally left alone, so a small town by the name of \"Battle River\" for example would be written the same way, and not transcribed as \"Batalio-rivero\". This is because transcribing a little-known place name would make it nearly impossible to find in the original language.\n",
"The city of New York is a special case. The state legislature reorganized government in the area in the 1890s in an effort to consolidate. Other cities, villages, and towns were annexed to become the \"City of Greater New York\", (an unofficial term, the new city retained the name of New York), a process basically completed in 1898. At the time of consolidation, Queens County was split. Its western towns joined the city, leaving three towns that were never part of the consolidation plan as part of Queens County but not part of the new Borough of Queens. (A small portion of the Town of Hempstead was itself annexed, also.) The next year (1899), the three eastern towns of Queens County separated to become Nassau County. The city today consists of the entire area of five counties (named New York, Kings, Queens, Bronx, and Richmond). While these counties have no county government, boroughs — with boundaries coterminous with the county boundaries — each have a Borough Board made up of the Borough President, the borough's district council members, and the chairpersons of the borough's community boards. A mayor serves as the city's chief executive officer.\n",
"The city is based on New York City, and similarly, has five sections, corresponding with the five boroughs of New York: Isola (both the name of the downtown section, fulfilling the role of Manhattan, and the city overall), Bethtown (Staten Island), Calm's Point (Brooklyn), Majesta (Queens), and Riverhead (Bronx). It has two major rivers, the Harb and the Dix, which inexplicably flow in a westerly direction despite the fact that Isola is on the East Coast.\n",
"New York City is located on one of the world's largest natural harbors, and the boroughs of Manhattan and Staten Island are (primarily) coterminous with islands of the same names, while Queens and Brooklyn are located at the west end of the larger Long Island, and The Bronx is located at the southern tip of New York State's mainland. This situation of boroughs separated by water led to the development of an extensive infrastructure of well-known bridges and tunnels.\n",
"New York City for example has many originally Dutch street and place names which range from Coney Island and Brooklyn to Wall Street and Broadway. And up the river in New York State Piermont, Orangeburg, Blauvelt and Haverstraw, just to name a few places. In the Hudson Valley region there are many places and waterways whose names incorporate the word \"-kill\", Dutch for \"stream\" or \"riverbed\", including the Catskill Mountains, Peekskill, and the Kill van Kull.\n",
"This partial list of city nicknames in New York compiles the aliases, sobriquets, and slogans that cities in the U.S. state of New York are known by (or have been known by historically), officially and unofficially, to municipal governments, local people, outsiders, or the cities' tourism boards or chambers of commerce. City nicknames can help in establishing a civic identity, helping outsiders recognize a community or attracting people to a community because of its nickname; promote civic pride; and build community unity. Nicknames and slogans that successfully create a new community \"ideology or myth\" are also believed to have economic value. Their economic value is difficult to measure, but there are anecdotal reports of cities that have achieved substantial economic benefits by \"branding\" themselves by adopting new slogans.\n"
] |
does having a "will to live" help you overcome a severe illness or injury, and if so, how?
|
Definitely. Someone who has given up on everything is less likely to think about their well-being, whereas someone with a will to live will check up on symptoms and have less anxiety. Some cures in life really are the pure basics, like sleeping and eating well.
|
[
"In psychology, the will to live is the drive for self-preservation, usually coupled with expectations for future improvement in one's state in life. The will to live is an important concept when attempting to understand and comprehend why we do what we do in order to stay alive, and for as long as we can. This can be related to either one's push for survival on the brink of death, or someone who is just trying to find a meaning to continuing their life. Some researchers say that people who have a reason or purpose in life during such dreadful and horrific experiences will often appear to fare better than those that may find such experiences overwhelming. Everyday, people undergo countless types of negative experiences, some to which may be demoralizing, hurtful, or tragic. An ongoing question continues to be what keeps the will to live in these situations. Some people that claim to have experienced instances of the will to live, have many different explanations behind it.\n",
"There are significant correlations between the will to live and existential, psychological, social, and physical sources of distress. The concept of the will to live can be seen as directly impacted by hope. Many, who overcome near-death experiences with no explanation, have described the will to live as a direct component of their survival. The difference between the wish to die versus the wish to live is also a unique risk factor for suicide.\n",
"The mind and its processes are critical to survival. The will to live in a life-and-death situation often separates those that live and those that do not. Stories of heroic feats of survival by regular people with little or no training but a strong will to live are not uncommon. Among them is Juliane Koepcke, who was the sole survivor among the 93 passengers when her plane crashed in the jungle of Peru. Situations can be stressful to the level that even trained experts may be mentally affected. One should be mentally and physically tough during a disaster.\n",
"Many studies have been conducted on the theory of the will to live. Among these studies are subject to the difference in gender and the elderly and also in the terminally ill. One study focused on a simple question that asked about rating one’s will to live and presented the findings that elderly participants reporting a stronger will to live and strengthened or stable will to live survived longer in comparison to those with a weak will to live. This study found that women were able to cope with life-threatening situations, but suggested that the participants could not have been stable and requires future replication.\n",
"“Existential, psychiatric, social, and, to a lesser degree, physical variables are highly correlated with the will to live”. Existential issues found to correlate significantly include hopelessness, the desire for death, sense of dignity, and burden to others. Psychiatric issues found to be strongly associated are such as depression, anxiety, and lack of concentration. Physical issues that showed the strongest associations were appetite and appearance which did not show the same consistent degree of correlation. The four main predictor variables of the will to live changing over time are anxiety, shortness of breath, depression, and sense of well-being which correlate with the other variable predictors as well. Social variables and quality of life measures are shown to correlate significantly with the will to live such as support and satisfaction with support from family, friends, and health care providers. Findings on the will to live have suggested that psychological variables are replaced by physical mediators of variation as death draws nearer. The will to live has also proven to be highly unstable.\n",
"3. We should enhance support for families caring for those with terminal illnesses to assist them in understanding what is happening and likely to happen, how to manage stress and grief, and how to build positive relationships with the dying, remembering that dying can be an important last phase of life in resolving conflicts of the past and establishing new relationships of love and care.\n",
"Other accounts of the will to live exist in many extreme medical cases, where patients have overcome extraordinary odds to survive. The Holocaust has provided many instances of this phenomenon, and is a good example of this as well. A proposed mechanism for the will to live is the idea that positive mental thinking tends to lower one’s risk for disease and health complications. One study showed that women who thought positively were more likely to carry more antibodies against certain strains of the flu, thus having a stronger immune system than those who were told to think negative thoughts.\n"
] |
when nuclear weapons were added to the us arsenal, why was the ability to launch them given to the president of the united states and not congress?
|
Constitutionally (although not really in practice any more), Congress is the sole authority on declarations of war or authorizations of the use of military force, but the President is the commander in chief of the armed forces. Congress can't tell generals what to do, and it can't give or veto military orders. A nuclear attack is a military order, and as such only the President has the authority to initiate it.
On a more practical note, if nuclear missiles have already been launched at the US, there would be only a few minutes to authorize a retaliatory strike, and even a functional Congress is not capable of acting that quickly. No country in the world with nuclear weapons gives that authority to its legislature; in every case it rests with the head of government or the head of state.
|
[
"Since World War II, the President of the United States has had sole authority to launch U.S. nuclear weapons, whether as a first strike or nuclear retaliation. This arrangement was seen as necessary during the Cold War to present a credible nuclear deterrent; if an attack was detected, the United States would have only minutes to launch a counterstrike before its nuclear capability was severely damaged, or national leaders killed. If the President has been killed, command authority follows the presidential line of succession. Changes to this policy have been proposed, but currently the only way to countermand such an order before the strike was launched would be for the Vice President and the majority of the Cabinet to relieve the President under Section 4 of the Twenty-fifth Amendment to the United States Constitution.\n",
"BULLET::::- U.S. President Eisenhower announced at a news conference that the United States should be able to make nuclear weapons available to its allies. Eisenhower urged that the Atomic Energy Act be amended in order to permit the U.S. to transfer weapons to the arsenals of other nations.\n",
"At a press conference on 30 November 1950, Truman was asked about the use of nuclear weapons: The implication was that the authority to use atomic weapons now rested in the hands of MacArthur. Truman's White House issued a clarification, noting that \"only the President can authorize the use of the atom bomb, and no such authorization has been given,\" yet the comment still caused a domestic and international stir. Truman had touched upon one of the most sensitive issues in civil-military relations in the post-World War II period: civilian control of nuclear weapons, which was enshrined in the Atomic Energy Act of 1946.\n",
"By 1943, both the United States and the Confederate States, along with other countries, have initiated programs to develop atomic weapons. While no power has developed a weapon yet, it appears that the United States and German programs are ahead of the C.S. one, with Germany the closest to completion, due to the participation of Albert Einstein. Around the turn of the New Year in 1943, the U.S. achieves its first sustaining chain reaction at its plant in Hanford, Washington. The British and the French are also rumored to be working on atomic weapons.\n",
"Only the president can direct the use of nuclear weapons by U.S. armed forces, through plans like OPLAN 8010-12. The president has unilateral authority as commander-in-chief to order that nuclear weapons be used for any reason at any time.\n",
"BULLET::::- 1981 – October – President Ronald Reagan announces an update of the U.S. nuclear arsenal, including increased numbers of bombers and missiles and development of new projects such as the Rockwell B-1 Lancer, the MX missile, and the MGM-134 Midgetman missile.\n",
"Rockets became extremely important militarily as modern intercontinental ballistic missiles (ICBMs) when it was realized that nuclear weapons carried on a rocket vehicle were essentially impossible for existing defense systems to stop once launched, and launch vehicles such as the R-7, Atlas, and Titan became delivery platforms for these weapons.\n"
] |
why are the baby boomers considered the worst generation?
|
Generally because they inherited a hard-won and prosperous welfare state but then proceeded to dismantle it in the name of short-term profit, at the expense of their children. The current generation is the first that, on average, will be worse off than their parents, and the policies of deregulation and privatisation pursued in the 1980s are directly responsible for that.
|
[
"This population is sometimes referred to as Generation Jones, and less commonly as Tweeners. These cuspers were not as financially successful as older Baby Boomers. They experienced a recession like many Generation Xers but had a much more difficult time finding jobs than Generation X did. While they learned to be IT-savvy, they didn't have computers until after high school but were some of the first to purchase them for their homes. They were among some of the first to take an interest in video games. They get along well with Baby Boomers, but share different values. While they are comfortable in office environments, they are more relaxed at home. They're less interested in advancing their careers than Baby Boomers and more interested in quality of life.\n",
"In Western Europe and North America, boomers are widely associated with privilege, as many grew up during a period of increasing affluence due in part to widespread post-war government subsidies in housing and education. As a group, baby boomers were wealthier, more active and more physically fit than any preceding generation and were the first to grow up genuinely expecting the world to improve with time. They were also the generation that reached peak levels of income in the workplace and could, therefore, enjoy the benefits of abundant food, clothing, retirement programs, and even \"midlife-crisis\" products. But, this generation also has been criticized often for its increases in consumerism which others saw as excessive.\n",
"The generation is noted for coming of age after a huge swath of their older brothers and sisters in the earlier portion of the baby boomer population had come immediately preceding them; thus, many complain that there was a paucity of resources and privileges available to them that were seemingly abundant to older boomers. Therefore, there is a certain level of bitterness and \"jonesing\" for the level of freedom and affluence granted to older boomers but denied to them.\n",
"The boomers have tended to think of themselves as a special generation, very different from preceding and subsequent generations. In the 1960s and 1970s, as a relatively large number of young people entered their late teens—the oldest turned 18 in 1964—they, and those around them, created a very specific rhetoric around their cohort and the changes brought about by their size in numbers. This rhetoric had an important impact in the self-perceptions of the boomers, as well as their tendency to define the world in terms of generations, which was a relatively new phenomenon. The baby boom has been described variously as a \"shockwave\" and as \"the pig in the python\".\n",
"Baby Boomers, born approximately between 1946 and 1964 were brought up in a healthy post war economy and saw the world revolving around them as the largest generation of the century. Their lifestyle is to live for work and they often expect the same level of dedication and work ethics from the next generations. They are said to prefer face to face communication, are interactive team players and attain personal fulfilment from work. Baby Boomers are often branded workaholics leaving little to no work-life balance which has inevitably led to a breakdown in family values which has influenced the next generation. They are said to be loyal to their organisations, enjoy the notion of lifetime employment and prefer to be valued or needed as opposed to rewarded with recognition or money. An article by Emma Simon in the \"Daily Telegraph\" describes them as the 'post war generation' who have enjoyed an \"unbroken run of good-luck\".\n",
"BULLET::::- In the United States, many Millennials and late Generation X also belong to the Boomerang Generation which live with their parents after they would normally be considered old enough to live on their own. This social phenomenon is mainly caused by high unemployment rates coupled with various economic downturns, and in turn, many Boomerang children postpone romance and marriage due to economic hardship.\n",
"Economic instability is the primary justification for this phenomenon, as articulated in Kimberly Palmer's 2007 \"U.S. News & World Report\" article \"The New Parent Trap: More Boomers Help Adult Kids out Financially\". In particular, the term Boomeranger has been used to draw reference to those Gen-Xers and Gen-Yers of the Boomerang Generation who have either returned to an earlier, more modest lifestyle or have simply moved back home with parents and other loved ones, in response to the Great Recession. Where the young person and his/her parents can tolerate the arrangement, it provides tremendous financial relief to the young person. Such co-residence can be a valuable form of insurance, particularly for youths from poorer families. It may also provide non-negligible income to the parents, though in many cultures, the boomeranger retains all or nearly all of their disposable income for discretionary income purchases.\n"
] |
why my car windows do this and how i can prevent it? mostly happens in rain.
|
It's condensation, caused by the temperature and humidity difference between the cabin of your vehicle and the outside air.
Anti-fogging coatings are available (Rain-X makes one, for instance), but for immediate relief, use the defrost setting with your air conditioner running.
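For the curious, the physics boils down to dew point: the glass fogs whenever its surface is colder than the dew point of the cabin air. Here's a rough sketch using the common Magnus approximation; the constants are the usual textbook values and the example numbers are made up for illustration, not measurements.

```python
# Rough sketch: fogging happens when the window is colder than the dew point
# of the cabin air. Constants are the common Magnus-formula values; the
# temperatures and humidity below are illustrative only.

import math

def dew_point_c(temp_c, relative_humidity_pct):
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.27, 237.7
    alpha = (a * temp_c) / (b + temp_c) + math.log(relative_humidity_pct / 100.0)
    return (b * alpha) / (a - alpha)

cabin_temp = 22.0        # deg C inside the car (assumed)
cabin_humidity = 70.0    # % RH after wet passengers climb in (assumed)
glass_temp = 12.0        # deg C, roughly the outside temperature on a rainy day (assumed)

if glass_temp < dew_point_c(cabin_temp, cabin_humidity):
    print("Window fogs: the glass is below the cabin air's dew point.")
else:
    print("No fog: the glass is warmer than the dew point.")
```

With those illustrative numbers the glass (12 °C) sits below the cabin air's dew point (about 16 °C), which is exactly the rainy-day fogging scenario; the defrost setting works by heating the glass above that dew point and drying the air blown onto it.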
|
[
"Stone damage can be dangerous in many ways. Stone damage can cause small cracks in the windshield that can refract or reflect normally unharmful light such that it can distract or blind the driver. Stone damage can also cause large cracks in the windshield – this usually happens during the winter period because of the large temperature differences, but can also happen if the vehicle is exposed to vibrations or bumps.\n",
"Windshields protect the vehicle's occupants from wind and flying debris such as dust, insects, and rocks, and provide an aerodynamically formed window towards the front. UV coating may be applied to screen out harmful ultraviolet radiation. However, this is usually unnecessary since most auto windshields are made from laminated safety glass. The majority of UV-B is absorbed by the glass itself, and any remaining UV-B together with most of the UV-A is absorbed by the PVB bonding layer.\n",
"The windshield glass itself blocks most of the UV light and some of the infrared radiation. But it can't protect from the visible light that mostly penetrates through it and gets absorbed by the objects inside the car. The visible light that passes into the interior through the windshield is converted into the infrared light which, in its turn, is blocked by the glass and gets trapped inside, heating up the interior. Windshield sun shades have a reflective surface to bounce the light back, reducing the interior temperature\n",
"Vehicle glass includes windscreens, side and rear windows, and glass panel roofs on a vehicle. Side windows can be either fixed or raised and lowered by depressing a button (power window) or switch or using a hand-turned crank. The power moonroof, a transparent, retractable sunroof, may be considered as an extension of the power window concept. Some vehicles include sun blinds for rear and rear side windows. The windshield of a car is appropriate for safety and protection of debris on the road. The majority of vehicle glass is held in place by glass run channels, which also serve to contain any fragments of glass if the glass breaks.\n",
"A factor that contributes to stone damage on the windshield is the fact that modern cars normally use quite thin windshields to save weight. Modern tires also contribute to stone damage since they have more tracks in which stones can get stuck.\n",
"Modern, glued-in windshields contribute to the vehicle's rigidity, but the main force for innovation has historically been the need to prevent injury from sharp glass fragments. Almost all nations now require windshields to stay in one piece even if broken, except if pierced by a strong force. Properly installed automobile windshields are also essential to safety; along with the roof of the car, they provide protection to the vehicle's occupants in the case of a roll-over accident .\n",
"A window deflector is mounted above the doors of some automobiles, to protect the inside of the car from rain or other precipitation in case of slightly opened windows. Additionally, it may help to prevent precipitation entering the interior in case of an opened door, e.g. dropping from the roof or directly from the air. Deflectors are also fitted to sunroofs to deviate wind.\n"
] |
do those pedestrian button things at traffic lights actually do anything? how do they work?
|
It will depend on where you are (including which country you're in) and even what time of day it is. It may be that some of those buttons are just there to make people feel they have some sort of control, but many of them -- I can attest from personal experience -- really do work.
I have encountered pedestrian crossings, for example, where you have to push the button or the lights really won't change. This is always true when it is just a light-controlled pedestrian crossing and not also an intersection. And I have encountered pedestrian crossings at intersections, where at busy times the lights change whether or not you push the button, while at less busy times they only change if you do push the button.
As for how they work, they send a signal to the software controlling the lights. On simple pedestrian crossings they change the traffic lights to red and then, after a short pause, the crossing lights to green; if the lights have recently been operated in this way, the software waits before changing them again, to ensure that cars aren't backed up forever as pedestrian after pedestrian pushes the button.
At an intersection, the software instead waits until the traffic lights, going through their normal sequence, reach the point where pedestrians can cross, and only then changes the crossing lights to green.
The box with the button may look like it's simply bolted on, but where it meets the post there is a hole through which the wiring runs down to the signal controller.
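To make the "register a call, serve it when the timing allows" idea concrete, here is a minimal sketch of such a controller loop. All names, states and timings are illustrative assumptions, not any real controller's interface:

```python
# Minimal sketch of a pedestrian call button feeding a signal controller.
# Names, states, and timings are illustrative assumptions only.

MIN_GAP = 60        # seconds that must pass between pedestrian phases (assumed)
WALK_TIME = 20      # seconds of "walk" indication (assumed)

class PedestrianCrossing:
    def __init__(self):
        self.call_waiting = False        # set when someone presses the button
        self.last_walk_ended = -MIN_GAP  # so the first request is served at once

    def press_button(self):
        # The button itself does nothing but register a request (a "call").
        self.call_waiting = True

    def tick(self, now):
        # Called repeatedly by the controller's main loop.
        if self.call_waiting and (now - self.last_walk_ended) >= MIN_GAP:
            print(f"[t={now}] traffic lights -> red, crossing -> walk")
            self.last_walk_ended = now + WALK_TIME
            self.call_waiting = False    # request served
        elif self.call_waiting:
            print(f"[t={now}] call registered, waiting for the minimum gap")

crossing = PedestrianCrossing()
crossing.press_button()
crossing.tick(now=0)      # served immediately: traffic goes red, walk comes up
crossing.press_button()
crossing.tick(now=30)     # too soon after the last walk phase, so it waits
crossing.tick(now=90)     # gap has elapsed, the call is served now
```

On a simple crossing that's essentially the whole story; at a full intersection the same call is instead folded into the signal's normal phase sequence, which is why the button sometimes appears to do nothing at busy times.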
|
[
"Call buttons are installed at traffic lights with a dedicated pedestrian signal, and are used to bring up the pedestrian \"walk\" indication in locations where they function correctly. In the majority of locations where call buttons are installed, pushing the button does not light up the pedestrian walk sign immediately. One Portland State University researcher notes of call buttons in the US, \"Most [call] buttons don’t provide any feedback to the pedestrian that the traffic signal has received the input. It may appear at many locations that nothing happens.\" However, there are some locations where call buttons do provide confirmation feedback. At such locations, pedestrians are more likely to wait for the \"walk\" indications.\n",
"They have two sensors on top of the traffic lights (PCD pedestrian crossing detector and PKD pedestrian kerb detector). These sensors detect if pedestrians are crossing slowly and can hold the red traffic light longer if needed. If a pedestrian presses the button but then walks off, the PKD will cancel the request making the lights more efficient.\n",
"With a passive system, the pedestrian activates the device merely by walking up to the crosswalk. This is accomplished by using one of several motion detection devices. These include microwave, motion sensors, video detection, pressure plates, or a light trip beam. With an active system, the device is usually activated by a button that a pedestrian pushes in order to cross. These active systems are generally similar to lighted pedestrian signs at traffic intersections. Because many pedestrians may not realize that they need to press a button to activate the system, it is generally recommended to install a passive system.\n",
"BULLET::::- A HAWK beacon, used with a standard pedestrian crossing signal, stops traffic when a pedestrian pushes a button to cross, but goes dark unless activated. It was allowed in experimental applications in the United States until 2009 and is now a standard option for transportation engineers.\n",
"Traffic lights, also known as traffic signals, traffic lamps, traffic semaphore, signal lights, stop lights, robots (in South Africa and most of Africa), and traffic control signals (in technical parlance), are signalling devices positioned at road intersections, pedestrian crossings, and other locations to control flows of traffic.\n",
"The normal function of traffic lights requires more than slight control and coordination to ensure that traffic and pedestrians move as smoothly, and safely as possible. A variety of different control systems are used to accomplish this, ranging from simple clockwork mechanisms to sophisticated computerized control and coordination systems that self-adjust to minimize delay to people using the junction.\n",
"Pedestrians are especially in a difficult situation. In cities such as Beijing, new \"self-service\" traffic lights provide pedestrians with easy access across the road—just push a button, wait, and go when the light changes. Unfortunately, unless these traffic lights come with supervising cameras connected to the police, some drivers are likely to pass through these as well, making the pedestrian buttons rather pointless.\n"
] |
how does the common signature hold so much power confirming identity? anyone could copy it and there are much better tools available.
|
Signatures don't confirm identity; they affirm it. When you sign something, you're making a promise that you're the person named in the document.
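Digital signatures are a different story: they can actually be checked mathematically, so a forged signature or an altered document is detectable. A minimal sketch, assuming the third-party Python `cryptography` package is installed; the sample document and the key handling are simplified for illustration:

```python
# Sketch of signing and verifying with an Ed25519 digital signature.
# Assumes the third-party "cryptography" package; document text is made up.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept secret by the signer
public_key = private_key.public_key()        # shared with anyone who verifies

document = b"I agree to the terms of this contract."
signature = private_key.sign(document)       # only the key holder can produce this

try:
    public_key.verify(signature, document)   # raises if the document or signature changed
    print("Signature is valid for this exact document.")
except InvalidSignature:
    print("Signature does not match: forged, or the document was altered.")
```

Because the private key never leaves the signer, a valid digital signature is strong evidence that the named key holder produced exactly this document, whereas a handwritten signature relies on the social and legal weight of the promise rather than on being hard to copy.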
|
[
"A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature, where the prerequisites are satisfied, gives a recipient very strong reason to believe that the message was created by a known sender (authentication), and that the message was not altered in transit (integrity).\n",
"The ARX digital signature products are based on public key infrastructure (PKI) technology, with the digital signatures resulting from a cryptographic operation that creates a ‘fingerprint’ unique to both the signer and the content, so that they cannot be copied, forged or tampered with. This process provides proof of signer identity, data integrity and the non-repudiation of signed documents, all of which can be verified without the need for proprietary verification software.\n",
"Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some such have not reflected this actuality.\n",
"Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, con, and in other cases where it is important to detect forgery or tampering.\n",
"Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message is truly sent from an authorized source, acting on such a request could be a grave mistake.\n",
"While an advanced electronic signature is legally binding under eIDAS, a qualified electronic signature which has been created by a qualified trust service provider carries a higher probative value when used as evidence in court. Because the signature’s authorship is considered non-repudiable, the authenticity of the signature cannot be easily challenged. EU member states are obligated to accept qualified electronic signatures that have been created with qualified certificate from other Member states as valid. According to the eIDAS Regulation, i.e. Article 24 (2), a signature created with a qualified certificate has the same legal value as a handwritten signature in court.\n",
"In 1989, he (with Hans van Antwerpen) introduced undeniable signatures. This form of digital signature uses a verification process that is interactive, so that the signatory can limit who can verify the signature. Since signers may refuse to participate in the verification process, signatures are considered valid unless a signer specifically uses a disavowal protocol to prove that a given signature was not authentic.\n"
] |
copyright / trademarks.
|
**Copyright:** You wrote something, and nobody else gets to make copies unless you say they can. Also they don't get to make a movie of your book, or a performance of your play, unless you say they can. There are some exceptions for people who are talking about your work, teaching, writing reviews ("fair use"). For music, you can't stop people from singing or playing the tune you wrote, but they have to pay you ("compulsory licensing").
**Trademark:** You made up a name for something you're selling. You use that name on your product and your ads, so that people can recognize it. Competitors aren't allowed to call their products by the same name or one that's confusingly similar, because that would trick the customer. But you don't get to stop people from using the name of your product when they write reviews or articles about your product. And if you *let* your competitors call their products by the same name, or customers just decide that your product's name is the name of the whole general concept (like "xeroxing" or "kleenexes"), then you can't go after them any more.
|
[
"Current federal trademark law follows the Lanham Act, otherwise known as the Trademark Act of 1946. Under the Lanham Act, a trademark is \"any word, term, name, symbol, or device, or any combination thereof\" used in commerce to identify a service or good. Under this definition, it is possible for the names and likenesses of television, film and book characters, fictional accounts, settings, or other elements of entertainment products to act as trademarks. Unlike copyright, however, trademark rights are not automatic. To establish a right in trademark, the rights-seeker must establish that his/her mark acts as a distinctive \"source identifier\" for a particular type of good or service. Thus, trademark rights may arise when a fictional character’s name or likeness may serve to identify the source of an entertainment product or related good. For example, the use of Mickey Mouse’s name or likeness may serve to identify a particular book or toy as originating from Disney. One way to establish that a mark acts as a distinctive source identifier is to establish that the relevant purchasing public has developed a strong association between the mark and its originating source. In legal terms, this is known as \"secondary meaning.\"\n",
"A trademark is a word, phrase, or logo that identifies the source of goods or services. Trademark law protects a business' commercial identity or brand by discouraging other businesses from adopting a name or logo that is \"confusingly similar\" to an existing trademark. The goal is to allow consumers to easily identify the producers of goods and services and avoid confusion.\n",
"The essential function of a trademark is to exclusively identify the commercial source or origin of products or services, such that a trademark, properly called, indicates source or serves as a badge of origin. The use of a trademark in this way is known as trademark use. Certain exclusive rights attach to a registered mark, which can be enforced by way of an action for trademark infringement, while unregistered trademark rights may be enforced pursuant to the common law tort of passing off.\n",
"The essential function of a trademark is to exclusively identify the commercial source or origin of products or services, so a trademark, properly called, \"indicates source\" or serves as a \"badge of origin\". In other words, trademarks serve to identify a particular business as the source of goods or services. The use of a trademark in this way is known as \"trademark use\". Certain exclusive rights attach to a registered mark.\n",
"A Trademark in computer security is a contract between code that verifies security properties of an object and code that requires that an object have certain security properties. As such it is useful in ensuring secure information flow. In object-oriented languages, trademarking is analogous to signing of data but can often be implemented without cryptography.\n",
"A trademark (also written trade mark or trade-mark) is a type of intellectual property consisting of a recognizable sign, design, or expression which identifies products or services of a particular source from those of others, although trademarks used to identify services are usually called service marks. The trademark owner can be an individual, business organization, or any legal entity. A trademark may be located on a package, a label, a voucher, or on the product itself. For the sake of corporate identity, trademarks are often displayed on company buildings. It is legally recognized as a type of intellectual property.\n",
"Licensing means the trademark owner (the licensor) grants a permit to a third party (the licensee) in order to commercially use the trademark legally. It is a contract between the two, containing the scope of content and policy. The essential provisions to a trademark license identify the trademark owner and the licensee, in addition to the policy and the goods or services agreed to be licensed.\n"
] |
why don't big companies get hitmen to off people who successfully sue them?
|
It's not particularly easy to commit murder without leaving a trail to follow. It's too much of a risk for the company.
|
[
"Serious torts and fatal injuries occur as a result of actions by company employees, have increasingly been subject to criminal sanctions. All torts committed by employees in the course of employment will attribute liability to their company even if acting wholly outside authority, so long as there is some temporal and close connection to work. It is also clear that acts by directors become acts of the company, as they are \"the very ego and centre of the personality of the corporation.\" But despite strict liability in tort, civil remedies are in some instances insufficient to provide a deterrent to a company pursuing business practices that could seriously injure the life, health and environment of other people. Even with additional regulation by government bodies, such as the Health and Safety Executive or the Environment Agency, companies may still have a collective incentive to ignore the rules in the knowledge that the costs and likelihood of enforcement is weaker than potential profits. Criminal sanctions remain problematic, for instance if a company director had no intention to harm anyone, no \"mens rea\", and managers in the corporate hierarchy had systems to prevent employees committing offences. One step toward reform is found in the Corporate Manslaughter and Corporate Homicide Act 2007. This creates a criminal offence for manslaughter, meaning a penal fine of up to 10 per cent of turnover against companies whose managers conduct business in a grossly negligent fashion, resulting in deaths. Without lifting the veil there remains, however, no personal liability for directors or employees acting in the course of employment, for corporate manslaughter or otherwise. The quality of a company's accountability to a broader public and the conscientiousness of its behaviour must rely also, in great measure, on its governance.\n",
"Tort reform supporters argue that this precisely describes the problem: lawsuits over socially beneficial practices increase the costs of those practices, and thus improperly deter innovation and other economically desirable activity. They further suggest that small businesses are hurt worse by the threat of litigation than large corporations are, because the legal expenses from a single lawsuit can bankrupt a small businessperson.\n",
"The legislation gives private companies the authority to go on the counter-offensive against hackers, meaning a company that was hacked could perform more assertive defensive measures than are currently allowed under the law. However, companies would not be allowed to hack back into other systems or manipulate systems for which they do not have consent to control.\n",
"BULLET::::- severely restricting the right of corporations to sue for defamation (see e.g. \"Defamation Act 2005\" (Vic), s 9). Corporations may, however, still sue for the tort of injurious falsehood, where the burden of proof is greater than for mere defamation, because the plaintiff must show that the defamation was made with malice and resulted in economic loss.\n",
"Business litigation often involves the use of conspiracy lawsuits against two or more corporations. Often joined in the lawsuit as defendants are the officers of the companies and outside accountants, attorneys, and similar fiduciaries. In many states, officers and directors of a corporation cannot engage in a conspiracy with the corporation unless acting for their private benefit independent of any benefit to the corporation.\n",
"Although workers compensation laws deny workers the right to sue their employer, it is possible for them to sue a related third party. Within a week of the explosion Williams Bailey, a law firm based in Houston, Texas, placed half-page advertisements in a local newspaper. The adverts read \"Were you seriously injured in last week's explosion?\", and directed potential clients to the company website. The advert also claims that the firm has extensive experience in explosion-related cases. Fran Deisinger, director of the Milwaukee Bar Association, said of the ad \"It's a little disconcerting because it's such a terrible situation here that I think it probably rubs everybody a little wrong\", adding that although he believed the ad to be in poor taste, it didn't breach any rules for lawyer advertising. Although it is unclear whether any workers contacted Williams Bailey, it is known that at least one injured man, and the families of the deceased, have hired personal injury lawyer Bob Habush to represent them, who once before worked on a high-profile industrial case when three people died as a result of a crane collapse in 1991. On February 7, 2007, he launched a suit against Brennan based on these allegations. J.M. Brennan responded with the following statement: \"We are proud of our employees and the response they took in response to the explosion. And we're confident that the results of the official investigation will show that J.M. Brennan's work was reasonable and did not contribute to the cause of the explosion.\" The suit was settled out of court.\n",
"A variation on the term refers to the special subtype of frivolous litigation where plaintiffs target wealthy or corporate defendants for little other reason than them having high resources. These cases involve plaintiffs who have suffered genuine damages, but the true culpability lies squarely with an individual or small entity who has very little money that could be collected if the suit was won. Instead, the plaintiff targets the nearest marginally related large corporation or wealthy defendant, often with a weak accusation of negligence. A popular example is a person being shot by a criminal, and suing the manufacturer of the firearm instead of their attacker. Sometimes legislation is passed to prevent such lawsuits, such as the Protection of Lawful Commerce in Arms Act.\n"
] |
Why didn't the Roman Empire expand into Africa more then it did?
|
Because of a little something called the Sahara Desert. The only real routes of expansion into Africa were along the southern Moroccan coast (which was desolate and sparsely inhabited all the way up to the late 18th century, when the Moroccan government began to encourage irrigation projects there) and via the Nile Valley. However, the rough terrain and relative poverty of the Sudanese and Ethiopian highlands meant that it was not really worth challenging the Kushite and Axumite lords of the upper Nile region for control. Instead, a far more economical policy was to maintain a web of buffer vassal states to hold off the more powerful kings in the southeast.
In other words, both the paths into Africa were impractical to use for any sizeable party in ancient times, much less a host of men, animals, and camp followers the size and scope of several Roman legions. The regions were, for lack of a better term, nigh on unconquerable in ancient conditions. Even a thousand years later, the Ottoman Empire found it extremely difficult to extend its rule south of the Wadi Halfa, and by then the introduction of the camel had led to the establishment of proper roads due to the trade explosion.
Though the lack of true oceangoing ships prevented any Roman colonisation of the West Coast of Africa, in theory it would have been possible for them to preempt the Arab colonisation of Eastern Africa even with galleys, establishing trading posts around the Horn of Africa. However, the only really suitable port, Baranis on the Red Sea (Berenice Troglodytica), had a fatal flaw: simply put, it did not have any trees, at least not of sufficient size and tensile strength to build proper vessels. It functioned as an emporium for the Red Sea trade, but could not construct vessels of its own. In other words, for a Roman living in Alexandria to set up a trading post, e.g. near Zanzibar, he would first have to travel down the Nile with hundreds of settlers, cross the desert mountains filled with Tuaregs, Berbers, and probably bandits, to get to Baranis, and there *buy* enough ships to take his entire party and supplies on a practically blind four-month journey into uncharted waters. Once there, he would have to find trade goods worth exporting all the way back to Baranis, then set up and fortify his colony while establishing farmland and regular trade routes with Baranis, all the while probably fending off hostile natives... It was simply not worth it, when spices and ivory could be obtained far more cheaply from Indian, Arab and Ethiopian merchants.
If you want a more detailed answer, I can recommend some books and articles.
|
[
"North Africa remained a part of the Roman Empire, which produced many notable citizens such as Augustine of Hippo, until incompetent leadership from Roman commanders in the early fifth century allowed the Germanic peoples, the Vandals, to cross the Strait of Gibraltar, whereupon they overcame the fickle Roman defense. The loss of North Africa is considered a pinnacle point in the fall of the Western Roman Empire as Africa had previously been an important grain province that maintained Roman prosperity despite the barbarian incursions, and the wealth required to create new armies. The issue of regaining North Africa became paramount to the Western Empire, but was frustrated by Vandal victories. The focus of Roman energy had to be on the emerging threat of the Huns. In 468 AD, the Romans made one last serious attempt to invade North Africa but were repelled. This perhaps marks the point of terminal decline for the Western Roman Empire. The last Roman emperor was deposed in 476 by the Heruli general Odoacer. Trade routes between Europe and North Africa remained intact until the coming of Islam. Some Berbers were members of the Early African Church (but evolved their own Donatist doctrine), some were Berber Jews, and some adhered to traditional Berber religion. African pope Victor I served during the reign of Roman emperor Septimius Severus\n",
"When the Roman Empire began to collapse, North Africa was spared much of the disruption until the Vandal invasion of 429 AD. The Vandals ruled in North Africa until the territories were regained by Justinian of the Eastern Empire in the 6th century. Egypt was never invaded by the Vandals because there was a thousand-mile buffer of desert and because the Eastern Roman Empire was better defended.\n",
"Rome lost parts of Africa to the Vandals in the 5th century. The Byzantine Empire finally lost all control of Africa as the region fell to the Umayyad conquest of North Africa by the close of the 7th century.\n",
"The Eastern Romans, known also as the Byzantine Empire, eventually recaptured Rome's Africa province during the Vandalic War in 534, when led by their celebrated general Belisarius. The Byzantines rebuilt fortifications and border defenses (the \"limes\"), and entered into treaties with the Berbers. Nevertheless, for many decades security and prosperity were precarious and were never fully restored. Direct Byzantine rule didn't extend far beyond the coastal cities. The African interior remained under the control of various Berber tribal confederacies, e.g., the Byzantines contested against the Berber Kingdom of Garmules.\n",
"At the greatest extent of the Empire, the southern border lay along the deserts of Arabia in the Middle East and the Sahara in North Africa, which represented a natural barrier against expansion. The Empire controlled the Mediterranean shores and the mountain ranges further inland. The Romans attempted twice to occupy the Siwa Oasis and finally used Siwa as a place of banishment. However Romans controlled the Nile many miles into Africa up to the modern border between Egypt and Sudan.\n",
"Following the conquest of North Africa's Mediterranean coastline by the Roman Empire, the area was integrated economically and culturally into the Roman system. Roman settlement occurred in modern Tunisia and elsewhere along the coast. The first Roman emperor native to North Africa was Septimius Severus, born in Leptis Magna in present-day Libya—his mother was Italian Roman and his father was Punic.\n",
"Rome had, in the earlier Punic Wars, gained large tracts of territory in Africa, which they consolidated in the following centuries. Much of that land had been granted to the kingdom of Numidia, a kingdom on the north African coast approximating to modern Algeria, in return for its past military assistance. The Jugurthine War of 111–104 BC was fought between Rome and Jugurtha of Numidia and constituted the final Roman pacification of Northern Africa, after which Rome largely ceased expansion on the continent after reaching natural barriers of desert and mountain. In response to Jugurtha's usurpation of the Numidian throne, a loyal ally of Rome since the Punic Wars, Rome intervened. Jugurtha impudently bribed the Romans into accepting his usurpation and was granted half the kingdom. Following further aggression and further bribery attempts, the Romans sent an army to depose him. The Romans were defeated at the Battle of Suthul but fared better at the Battle of the Muthul and finally defeated Jugurtha at the Battle of Thala, the Battle of Mulucha, and the Battle of Cirta (104 BC). Jugurtha was finally captured not in battle but by treachery, ending the war.\n"
] |
the void
|
Instructions were not clear. I hurt my back trying to shake my 27 inch CRT monitor.
|
[
"\"A Void\"'s plot follows a group of individuals looking for a missing companion, Anton Vowl. It is in part a parody of \"noir\" and horror fiction, with many stylistic tricks, gags, plot twists, and a grim conclusion. On many occasions it implicitly talks about its own lipogrammatic limitation, highlighting its unusual syntax. \"A Void\"'s protagonists finally work out which symbol is missing, but find it a hazardous topic to discuss, as any who try to bypass this story's constraint risk dying. Philip Howard, writing a lipogrammatic appraisal of \"A Void\" in his column \"Lost Words\", said \"This is a story chock-full of plots and sub-plots, of loops within loops, of trails in pursuit of trails, all of which allow its author an opportunity to display his customary virtuosity as an avant-gardist magician, acrobat and clown.\"\n",
"A Void, translated from the original French La Disparition (literally, \"The Disappearance\"), is a 300-page French lipogrammatic novel, written in 1969 by Georges Perec, entirely without using the letter \"e\" (except for the author's name), following Oulipo constraints.\n",
"The Void is the philosophical concept of nothingness manifested. The notion of the Void is relevant to several realms of metaphysics. The Void is also prevalent in numerous facets of psychology, notably logotherapy.\n",
"The Null Void is a pocket dimension, created by the Galvans as a penal colony. It is filled with floating rocks, with flying creatures known as the Null Guardians who protect powerless people in the Null Void from the more violent inmates. The Null Void also served as a base of operations for the Rooters.\n",
"BULLET::::- The Void is the name given by the Time Lords to the infinite nothingness between dimensions, where even time does not exist. According to the Doctor, in \"Army of Ghosts\", Eternals call it the Howling, and some others call it Hell. It is only traversable using a void ship, and prior to the Time War, by a TARDIS. Various inhabitants of a parallel version of Earth-most notably the Cybermen-were also able to travel across the void to the Earth of the main universe due to the damage caused by the Cult of Skaro's Void ship. The Tenth Doctor later sealed the Void by reversing a process previously used to open it, drawing millions of Cybermen and Daleks into the Void in the process. If successfully detonated, the Reality Bomb created by Davros and the Daleks, seen in \"Journey's End\", would also have destroyed the void. The breaking down of barriers caused by this event allowed Rose Tyler and others who had relocated to the parallel Earth to return to the main universe, and the Tenth Doctor was able to travel there to return Rose, her mother Jackie Tyler and the Meta-Crisis Tenth Doctor.\n",
"Several reasons for the existence of the Void have been given: the innate division between good and evil in any nominally normal person; a \"mind virus\" put into place by the mutant Mastermind by order of the crazed General; the idea that the Void is in fact the true personality of Rob Reynolds and the Sentry is the false one; as mentioned above, the result of covering up his past; and, according to Norman Osborn the Sentry's superhumanity eroded his humanity, leading to a 'void' in his life. During the Siege storyline, the Void exhibits a more demonic form, capable of nearly slaughtering Thor, bringing down the entire city of Asgard, and striking down every immortal and mortal hero set against it simultaneously, killing the Norn Stone-powered Loki in seconds, and even tearing the god of war, Ares, in half. Norman Osborn claims that it is the Angel of Death, tying into an earlier prelude which showed the Void's presence in biblical times.\n",
"The Void was created by the Firstlife—the first beings to have existed in the galaxy—to reach the state of post-physical and fulfilment. It is where Makkathran is situated in. The people in the Void have psychic abilities such as \"farsight\" and the \"third hand\". The Void requires a tremendous amount of energy to sustain itself and the abilities it offers, which it acquires by expanding, consuming planets and star systems and converting them into energy.\n"
] |
How do plants get the material to grow so much from just a tiny seed?
|
Carbon dioxide out of the air, water and nutrients out of the ground, and energy from the sun. The carbon from the carbon dioxide, together with the water and nutrients, is built into the plant's cell structures. The water also helps move everything around and keeps the cells inflated. The sun provides the energy to do all of it.
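Roughly speaking, the bookkeeping behind that growth is the overall photosynthesis reaction (simplified):

```
6 CO2 + 6 H2O + light energy -> C6H12O6 (glucose) + 6 O2
```

Most of a plant's dry mass is built from the carbon in that glucose, which means it is pulled out of the air rather than out of the soil; the seed only needs to supply enough stored food to get the first leaves and roots going.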
|
[
"Plants are readily propagated from seed. As seed is surrounded by irritating hairs within the pod, extraction requires care. Stem cuttings of semi-mature growth can be taken in late summer and require the application of rooting hormones and bottom heat.\n",
"To grow new plants by seed, the seed capsule should be removed from the stem, before it is ripe. Then it should be left to dry for a few days, before removing the seed (from the capsule) and sowing in trays or pots.\n",
"If propagating by seed. Seeds are collected from the ripe brown capsules after flowering. They should be sown on acid or slightly acidic soil. The process of seedling to flowering plant, can take up to 3 years to mature and flower.\n",
"A single plant may produce up to 2.7 million tiny seeds annually. Easily carried by wind and water, the seeds germinate in moist soils after overwintering. The plant can also sprout anew from pieces of root left in the soil or water. Once established, loosestrife stands are difficult and costly to remove by mechanical and chemical means.\n",
"It is not easy to establish via seed; if the seeds are planted more than 1.9 centimeters deep the seedlings do not emerge in large numbers. The seedlings are weak. Once it has established, however, it is tough and competes well for water and nutrients. It is tolerant of fire because the dense clumpiness of the stems protects the axillary buds, which can produce tillers and resprout after destruction by fire.\n",
"The plant produces copious seed, up to 227 pounds per acre in dense stands. The pointed fruit is purple-tinged when young and has an awn up to 10 centimeters long which is twisted and bent twice. The shape of the seed helps it self-bury.\n",
"For a better dissemination It is recommendable to use a mix of soil and sand in 2:1 proportion. Once seedlings reached 4 cm of height we can move it to a bag. After this process, it is advisable to leave the plants under shade and reduce it gradually. When seedlings reach 25 cm of height and have been hardened at least a little, we can consider that they are ready to be planted definitively at the field.\n"
] |
How do Historians use other Social Science disciplines in their research?
|
I use sociology, LGBT studies, queer theory, gender studies, and even dance studies in my work. In my case it is largely out of necessity, as there are no true histories of the AIDS crisis and still fairly few about gay and lesbian history. Other disciplines can offer a different perspective when you are dealing with a well-researched area of history, and are sometimes your only recourse when dealing with a less known area.
|
[
"Social Science History is a quarterly, peer-reviewed academic journal. It is the official journal of the Social Science History Association. Its articles bring an analytic, theoretical, and often quantitative approach to historical evidence. The journal's founders intended to \"improve the quality of historical explanation\" with \"theories and methods from the social science disciplines\" and to make generalizations across historical cases. The first issue came out in the fall of 1976. The journal's articles that are most-accessed and cited through JSTOR are about social and political movements and associated narratives.\n",
"Social scientists employ a range of methods in order to analyse a vast breadth of social phenomena: from census survey data derived from millions of individuals, to the in-depth analysis of a single agent's social experiences; from monitoring what is happening on contemporary streets, to the investigation of ancient historical documents. Methods rooted in classical sociology and statistics have formed the basis for research in other disciplines, such as political science, media studies, program evaluation and market research.\n",
"In contemporary usage, \"social research\" is a relatively autonomous term, encompassing the work of practitioners from various disciplines that share in its aims and methods. Social scientists employ a range of methods in order to analyse a vast breadth of social phenomena; from census survey data derived from millions of individuals, to the in-depth analysis of a single agent's social experiences; from monitoring what is happening on contemporary streets, to the investigation of ancient historical documents. The methods originally rooted in classical sociology and statistical mathematics have formed the basis for research in other disciplines, such as political science, media studies, and marketing and market research.\n",
"The Social Science History Association was formed in 1976 to bring together scholars from numerous disciplines interested in social history. It is still active and publishes \"Social Science History\" quarterly. The field is also the specialty of the \"Journal of Social History\", edited since 1967 by Peter Stearns It covers such topics as gender relations; race in American history; the history of personal relationships; consumerism; sexuality; the social history of politics; crime and punishment, and history of the senses. Most of the major historical journals have coverage as well.\n",
"The Social Science History Association, formed in 1976, brings together scholars from numerous disciplines interested in social history and publishes \"Social Science History\" quarterly. The field is also the specialty of the \"Journal of Social History\", edited since 1967 by Peter Stearns It covers such topics as gender relations; race in American history; the history of personal relationships; consumerism; sexuality; the social history of politics; crime and punishment, and history of the senses. Most of the major historical journals have coverage as well.\n",
"Positivist social scientists use methods resembling those of the natural sciences as tools for understanding society, and so define science in its stricter modern sense. Interpretivist social scientists, by contrast, may use social critique or symbolic interpretation rather than constructing empirically falsifiable theories, and thus treat science in its broader sense. In modern academic practice, researchers are often eclectic, using multiple methodologies (for instance, by combining both quantitative and qualitative research). The term \"social research\" has also acquired a degree of autonomy as practitioners from various disciplines share in its aims and methods.\n",
"Social Science History is a quarterly peer-reviewed academic journal. It is the official journal of the Social Science History Association. Its articles bring an analytic, theoretical, and often quantitative approach to historical evidence. Its editor-in-chief is Anne McCants (Massachusetts Institute of Technology).\n"
] |
In 17th Century Europe, how were coffee and coffeehouses viewed?
|
I'll have to make this brief as I'm about to go out (but I can explain (a lot!) more, if necessary).
To answer those three questions in a go, let's establish the 17th century coffee-house in general (funnily enough, I have a chapter more or less dedicated to this in my PhD):
During the latter half of the seventeenth century, coffee-houses were becoming increasingly popular, across a fairly diverse social scale. For the inexpensive price of a dish of coffee (about a penny, according to John Spurr - see ref below), an individual got the opportunity to see whatever published paper the coffeehouse subscribed to (most likely the *London Gazette* throughout the 1670s until licensing lapsed in '79), as well as whatever manuscript newsletters were available.
The discussion, therefore, often concerned a composite of foreign and domestic news, which is a pretty significant thing - Jurgen Habermas and others (more recently, Mark Knights, John Sommerville, Steve Pincus) have even seen this as the birth of 'public opinion' - that is, a shared and widespread perception of 'current events', inspired by a communal experience - in this case, news reception and mutual interpretation.
It's precisely for this reason that official authorities tended to mistrust coffeehouses so severely (see John Sommerville's 'The News Revolution in England'). A place where the ordinary folk could get together and discuss the actions of their superiors? Never!
Sir Roger L'Estrange (Surveyor of the Press to 1679) perhaps best summed up the official view in the mid-1660s:
'[Coffee-house News] makes the Multitude too Familiar with the Actions and Counsels of their Superiours; too Pragmatical and Censorious, and gives them, not only an Itch, but a kind of Colourable Right, and License, to be Meddling with the Government.' (this is from his *Intelligencer* in the early 1660s)
So mistrusted were these businesses, in fact, that in the 1670s all coffeehouses were briefly barred from trading in London, though the government relented very shortly afterwards. There was quite an outcry!
Much work has been done on this recently - Steve Pincus' article 'Coffee Politicians Does Create' is very good, as is Sommerville's work noted above. For more info, these are good places to start (Sommerville's especially).
|
[
"In the 17th century, coffee appeared for the first time in Europe outside the Ottoman Empire, and coffeehouses were established and quickly became popular. The first coffeehouses in Western Europe appeared in Venice, as a result of the traffic between La Serenissima and the Ottomans; the very first one is recorded in 1645. The first coffeehouse in England was set up in Oxford in 1650 by a Jewish man named Jacob in the building now known as \"The Grand Cafe\". A plaque on the wall still commemorates this and the cafe is now a cocktail bar. By 1675, there were more than 3,000 coffeehouses in England.\n",
"In the 17th century, coffee appeared for the first time in Europe outside the Ottoman Empire, and coffeehouses were established, soon becoming increasingly popular. The first coffeehouses appeared in Venice in 1629, due to the traffic between the Republic of Venice and the Ottomans; the very first one is recorded in 1645. The first coffeehouse in England was set up in Oxford in 1650 by a Jewish man named Jacobs at the Angel in the parish of St Peter in the East. A building on the same site now houses a cafe-bar called The Grand Cafe. Oxford's Queen's Lane Coffee House, established in 1654, is also still in existence today. The first coffeehouse in London was opened in 1652 in St Michael's Alley, Cornhill. The proprietor was Pasqua Rosée, the servant of a trader in Turkish goods named Daniel Edwards, who imported the coffee and assisted Rosée in setting up the establishment there.\n",
"From 1670 to 1685, the number of London coffee-houses began to increase, and they also began to gain political importance due to their popularity as places of debate. English coffeehouses in the 17th and 18th centuries were significant meeting places, particularly in London. By 1675, there were more than 3,000 coffeehouses in England. Pasqua Rosée also established the first coffeehouse in Paris in 1672 and held a citywide coffee monopoly until Procopio Cutò opened the Café Procope in 1686. This coffeehouse still exists today and was a popular meeting place of the French Enlightenment; Voltaire, Rousseau, and Denis Diderot frequented it, and it is arguably the birthplace of the \"Encyclopédie\", the first modern encyclopedia. In 1667, Kara Hamie, a former Ottoman Janissary from Constantinople, opened the first coffee shop in Bucharest (then the capital of the Principality of Wallachia), in the center of the city, where today sits the main building of the National Bank of Romania. America had its first coffeehouse in Boston, in 1676.\n",
"The culture surrounding coffee and coffeehouses dates back to 14th century Turkey. Coffee houses in Western Europe and the Eastern Mediterranean were not only social hubs, but also artistic and intellectual centers. \"Les Deux Magots\" in Paris, now a popular tourist attraction, was once associated with the intellectuals Jean-Paul Sartre and Simone de Beauvoir. In the late 17th and 18th centuries, coffeehouses in London became popular meeting places for artists, writers, and socialites, as well as centers for political and commercial activity.\n",
"The Oxford-style coffeehouses, which acted as a centre for social intercourse, gossip, and scholastic interest, spread quickly to London, where English coffeehouses became popularised and embedded within the English popular and political culture. Pasqua Rosée, the Greek servant of a Levant Company merchant named Daniel Edwards, established the first London coffeehouse in 1652. London's second coffeehouse was named the Temple Bar, established by James Farr in 1656. Initially, there was little evidence to suggest that London coffeehouses were popular and largely frequented, due to the nature of the unwelcome competition felt by other London businesses. When Harrington's Rota Club began to meet in another established London coffeehouse known as the Turk's Head, to debate \"matters of politics and philosophy\", English coffeehouse popularity began to rise. This club was also a \"free and open academy unto all comers\" whose raison d'être was the art of debate, characterised as \"contentious but civil, learned but not didactic.\" According to Cowan, despite the Rota's banishment after the Restoration of the monarchy, the discursive framework they established while meeting in coffeehouses set the tone for coffeehouse conversation throughout the rest of the 17th century.\n",
"English coffeehouses in the 17th and 18th centuries were public social places where men would meet for conversation and commerce. For the price of a penny, customers purchased a cup of coffee and admission. Travellers introduced coffee as a beverage to England during the mid-17th century; previously it had been consumed mainly for its supposed medicinal properties. Coffeehouses also served tea and hot chocolate as well as a light meal.\n",
"By the early 1650s, an English merchant who had been trading in the Ottoman Levant returned to London with a Turkish servant who introduced the making of Turkish coffee. By 1652 the first coffee house had opened in London and within a decade more than 80 establishments flourished in the city.\n"
] |
why does symmetry make people look more attractive?
|
Not 100% sure about this but I think that it is in our genes.
Symmetry is associated with health. Back when it was a matter of "only the strongest survive", symmetry was (and still is) a sign of a healthy individual with good genes.
If you have a healthy partner, the two of you are more likely to create a strong, healthy descendant who survives long enough to pass your genes on to the next generation.
|
[
"Some physical features are attractive in both men and women, particularly bodily and facial symmetry, although one contrary report suggests that \"absolute flawlessness\" with perfect symmetry can be \"disturbing\". Symmetry may be evolutionarily beneficial as a sign of health because asymmetry \"signals past illness or injury\". One study suggested people were able to \"gauge beauty at a subliminal level\" by seeing only a glimpse of a picture for one-hundredth of a second. Other important factors include youthfulness, skin clarity and smoothness of skin; and \"vivid color\" in the eyes and hair. However, there are numerous differences based on gender.\n",
"Symmetry and order may produce a calming effect to one’s sight. The construction of symmetry can produce a sense of balance, harmony and perfect proportion. Symmetry is usually perceived to be more attractive than asymmetry. For instance, a beautiful symmetrical face can be comforting and pleasing to look toward. Lack of order can evoke confusion. Extensive research has demonstrated that the lack of sunlight can produce disorders such as depression, substance abuse, and suicidal ideation and intent.\n",
"The relationship of symmetry to aesthetics is complex. Humans find bilateral symmetry in faces physically attractive; it indicates health and genetic fitness. Opposed to this is the tendency for excessive symmetry to be perceived as boring or uninteresting. People prefer shapes that have some symmetry, but enough complexity to make them interesting.\n",
"Symmetry and beauty have a strong biological link that influences aesthetic preferences. It has been shown that humans tend to prefer art that contains symmetry, deeming it more beautiful. Furthermore, symmetry directly correlates to the understanding of a face or artwork as beautiful. The greater the symmetry within the work or the face, generally the more beautiful it appears to be. Research on aesthetic preference for geometric forms and the fluent processing of symmetry sheds light on the role that symmetry plays in the overall aesthetic judgment and experience.\n",
"According to the theory of sexual selection, facial symmetry plays a large role in what we perceive as attractiveness. However there is question whether changes in either symmetry or averageness alone increase facial attractiveness. Experiments show that symmetry and averageness make independent contributions to attractiveness, but when the contribution of symmetry is excluded, averageness remains a significant predictor of attractiveness. In addition, research has shown that when one couples facial symmetry with averageness, attractiveness ratings rise, suggesting these two influences are invariably linked.\n",
"Facial symmetry is one specific measure of bodily asymmetry. Along with traits such as averageness and youthfulness it influences judgments of aesthetic traits of physical attractiveness and beauty. For instance, in mate selection, people have been shown to have a preference for symmetry. This is because it is seen an indicator of health and genetic fitness, but also as holding adaptation qualities; reflecting the ability to withstand the changes in their environments.\n",
"Having symmetrical features may indicate that an individual possesses high-quality genes related to health, and that they developed in a stable environment with little disease or trauma. Studies have found that women rate faces of more symmetrical men as more attractive during high fertility, especially when evaluating them as short-term partners. It has also been demonstrated that women at high fertility are more attracted to the body odors of men with more facial and bodily symmetry. Although many studies and one meta-analysis have shown that fertility-moderated shifts in attraction to facial and bodily symmetry occur robustly, other reviews have concluded that the effect is small or non-existent.\n"
] |
what exactly happened in the olympic boycotts in 1980/84?
|
There were 4 significant Olympic boycotts, two in 1976 and one each in 1980/84. I mention the 1976 boycotts because they inform the subsequent ones:
**Taiwanese boycott**: Taiwan (officially the Republic of China) withdrew from the 1976 Montreal Games after the host, Canada, which recognized the People's Republic of China, refused to let its team compete under the name "Republic of China". The underlying dispute was over which government was the sole legitimate government of all of China; the PRC itself remained outside the Olympic movement at the time for the same reason.
**Congolese-led boycott**: In July 1976 the New Zealand men's rugby team was touring South Africa, playing against all-white teams. In response to this "approval" of apartheid, 26 African and Middle Eastern nations boycotted the Olympic Games that started that month. Most athletes were already in Montreal when they learned of the boycott and had to return home without competing.
Thus, in this climate the Soviet Union was set to host its Olympics in 1980.
**US-led boycott**: President Jimmy Carter announced that the US would not participate in the 1980 Moscow Olympics in response to the 1979 Soviet invasion of Afghanistan, which much of the Western world saw as Soviet expansionism at the expense of the Afghan people. 65 countries joined the United States in boycotting the games, and several others staged ceremonial boycotts, skipping the opening ceremony or letting their athletes compete under the Olympic flag rather than their national one.
**Soviet-led boycott**: The Soviet Union and 14 other countries did not participate in the 1984 Los Angeles games, in protest of what they called "chauvinistic and anti-Soviet" attitudes in the United States. It is widely understood that this boycott was retaliation for the 1980 boycott.
|
[
"The Olympic Boycott Games of 1980 were held at the University of Pennsylvania in response to Moscow's hosting of the 1980 Summer Olympics following the Soviet incursion in Afghanistan. Twenty-nine of the boycotting nations participated in the Boycott Games.\n",
"1956 was the first time in history that several countries decided to boycott the Olympics. The boycott that influenced the sailing the most was probably that of The Netherlands, Spain, and Switzerland. They withdrew to protest against the Soviet Union invasion of Hungary during the 1956 Hungarian Revolution and Soviet presence at the Games. At that time The Netherlands dominated at the International competition in the 12m Sharpie.\n",
"The boycott of the 1984 Summer Olympics in Los Angeles followed four years after the U.S.-led boycott of the 1980 Summer Olympics in Moscow. The boycott involved 14 Eastern Bloc countries and allies, led by the Soviet Union, which initiated the boycott on May 8, 1984. Boycotting countries organized another major event, called the Friendship Games, in July and August 1984. Although the boycott led by the Soviet Union affected a number of Olympic events that were normally dominated by the absent countries, 140 nations still took part in the games, which was a record at the time.\n",
"Jimmy Carter declared that the United States would boycott the 1980 Olympic Games in Moscow, with 65 other countries joining the boycott. This was the largest Olympic games boycott ever. In 1984, three months before the start of the 1984 Summer Olympics in Los Angeles, the Soviet Union declared it would \"not participate\" in the Games. The Soviets cited a number of reasons, namely the commercialization of the games which, in their opinion, went against the principles of the Olympic movement (indeed the XXIII Olympiad ended up being the first Olympics since 1932 to make a profit by a host country, due to the high cost of the Olympic Games) and a claimed lack of security for their athletes. The issue of commercialization did gather some criticism from foreign delegations, who were unfamiliar with this trend in the Olympic movement. However, the IOC later declared the Games \"a model for future Olympics\" due to a surplus of USD 223 million for the hosts and relying on existing venues. The majority still viewed the boycott as more of a retaliatory move by the Soviets.\n",
"The 1980 Summer Olympics boycott, initiated by the United States to protest against the Soviet–Afghan War, saw many countries pull out of the Games and only 16 nations appeared at the opening ceremony. Prime Minister Charles Haughey declared his support for the boycott but the Olympic Council of Ireland still chose to send their team to Moscow. Ken Ryan, manager of the Olympic team, said that they supported the government but wanted to participate in the games \"purely from the sporting point of view\". At the opening ceremony Ryan was the sole representative of the team and marched under a white flag with bearing the Olympic rings. The Soviet cameramen avoided the protesting marchers and few Soviet commentators mentioned it. Only one comment was recorded: \"There is the clumsy plot that you all can see, against the traditions of the Olympic movement.\"\n",
"As a member of the Olympic Project for Human Rights (OPHR) Smith originally advocated a boycott of the 1968 Mexico City Olympic Games unless four conditions were met: South Africa and Rhodesia uninvited from the Olympics, the restoration of Muhammad Ali's world heavyweight boxing title, Avery Brundage to step down as president of the IOC, and the hiring of more African-American assistant coaches. As the boycott failed to achieve support after the IOC withdrew invitations for South Africa and Rhodesia, he decided, together with Carlos, to not only wear their gloves but also go barefoot to protest poverty, wear beads to protest lynchings, and wear buttons that said OPHR.\n",
"In 1980, the United States had boycotted the Summer Olympics held in Moscow in protest at the Soviet invasion of Afghanistan. The following 1984 Summer Olympics were due to be held in Los Angeles, California. On 8 May 1984, under Chernenko's leadership, the USSR announced its intention not to participate, citing security concerns and \"chauvinistic sentiments and an anti-Soviet hysteria being whipped up in the United States\". The boycott was joined by 14 Eastern Bloc countries and allies, including Cuba (but not Romania). The action was widely seen as revenge for the U.S. boycott of the Moscow Games. The boycotting countries organised their own \"Friendship Games\" in the summer of 1984.\n"
] |
the who announcement regarding processed meats.
|
So first off you really need to understand what the numbers look like here. We're talking about maybe 34,000 cases worldwide, while almost 13 million cases of cancer are diagnosed every year. So even if we take this announcement at face value, you're looking at roughly 0.26% of all cancer. Over the course of your life you have about a 40% chance of getting some kind of cancer (much more likely very late in life), so if you live to be around 80 you're looking at roughly a 0.1% chance that any cancer you get was triggered by your bacon intake, assuming this is correct.
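To make that back-of-the-envelope arithmetic explicit, here is a minimal sketch in Python; the inputs are just the rough figures quoted above, not authoritative epidemiological statistics.

```python
# Rough figures quoted in this answer, not official WHO/IARC numbers.
attributable_cases = 34_000       # cancers linked to processed meat per year (rough)
annual_diagnoses = 13_000_000     # cancers diagnosed worldwide per year (rough)
lifetime_cancer_risk = 0.40       # rough lifetime chance of getting some cancer

share_of_all_cancer = attributable_cases / annual_diagnoses
lifetime_share = share_of_all_cancer * lifetime_cancer_risk

print(f"share of all cancers:          {share_of_all_cancer:.2%}")  # ~0.26%
print(f"rough lifetime risk from meat: {lifetime_share:.2%}")       # ~0.10%
```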
Here's the thing: almost any sort of cooking that alters the food a lot might be carcinogenic. A good char on your steak? Probably a little bit carcinogenic. The same applies to the char on your tofu, too. This sort of added risk is less about meat (and the processing of it) than about the methods used to prepare things.
Also, you need to be aware that when they did this study they basically used the crappiest bacon you can find. A crap bacon made of a miserable pig full of nitrates and nitrites really is not the same thing as a traditionally smoked and cured slice of bacon from a healthy and properly raised pig. When we're talking about odds this small, those things matter a lot.
But basically this is because these things have nitrates and nitrites in them, and that's neither news nor new. Nitrites degrade into nitrosamines in high-acid or high-heat environments, and nitrosamines are carcinogenic. Even "uncured" meats have nitrites in them, as they are cured with celery juice, which contains high amounts of natural nitrates, instead of a chemical curing agent.
People seem to have it in their heads that the idea of living is to never die, but it's not. You evolved to make other humans by the time you hit middle age. After that, there are no promises. Something will kill you. If it's not the roughly 0.1% chance that it's bacon, then there's the much larger chance that it's liver failure from alcohol consumption or heart disease or the massive environmental stress put on a body by a lifetime of not enough sleep, too much work, and weird exposure to electronics 24/7. All of it's got a chance of being the thing that does you in, but only one of them gets to win and ultimately one of them will. That's what this means. "Of the people that have cancer, a really small number of them had it triggered by the carcinogens that came from nitrates in processed meats, as opposed to the carcinogens that came from just about every other aspect of their life or from the free radicals that they generated themselves."
EDIT: TL;DR: life is a cost-benefit analysis in action. Everything you do might have a negative consequence somewhere. Some things have a better chance of hurting you than others. Eating bacon is absurdly safe compared to most everything else you do.
|
[
"The 2013 meat adulteration scandal started when German authorities detected horse meat in prepared food products including frozen lasagna, where it was declared fraudulently as beef. The mislabeling prompted EU authorities to speed up publication of European Commission recommendations for labeling the origin of all processed meat.\n",
"In June 2017, the company revealed that it had been secretly working on cultured meat for a year and aimed to make its first commercial sale of a \"clean meat\" product by the end of 2018. In August 2017, the company said it had begun early talks with at least 10 global meat and feed companies across South America, Europe, and Southeast Asia to bring industrialized production efficiency to lab-grown meat.\n",
"It contains much of its original equipment which has the potential to provide information on the way in which meat was processed for public sale and the equipment available to butchers, such as sausage machines, brine pumps, presses and mincers.\n",
"Hans Hallén, a former quality control manager for ICA, revealed that the company knew that meat was being illegally repackaged as early as 2003. Hallén, who monitored ICA stores in southern Sweden from 2003-2005, said he had informed the company's managers of the exact practices that were exposed in the documentary program. According to Hallén, many stores engaged in practices such as repackaging meat in order to change the 'best before' date, saying that \"they even re-minced meat that had already been out on the shelves, before repackaging it and putting it back out on the shelves\". Sausage meats that had become old and sticky were also repackaged after rinsing, he said.\n",
"Processed meat is considered to be any meat which has been modified in order either to improve its taste or to extend its shelf life. Methods of meat processing include salting, curing, fermentation, and smoking. Processed meat is usually composed of pork or beef, but also poultry, while it can also contain offal or meat by-products such as blood. Processed meat products include bacon, ham, sausages, salami, corned beef, jerky, canned meat and meat-based sauces. Meat processing includes all the processes that change fresh meat with the exception of simple mechanical processes such as cutting, grinding or mixing.\n",
"In 2015, the International Agency for Research on Cancer of the World Health Organization classified processed meat, that is, meat that has undergone salting, curing, fermenting, or smoking, as \"carcinogenic to humans\".\n",
"On December 28, 2006, the U.S. Food and Drug Administration (FDA) approved the consumption of meat and other products from cloned animals. Cloned-animal products were said to be indistinguishable from the non-cloned animals. Furthermore, companies would not be required to provide labels informing the consumer that the meat comes from a cloned animal. In 2007, some meat and dairy producers did propose a system to track all cloned animals as they move through the food chain, suggesting that a national database system integrated into the National Animal Identification System could eventually allow food labeling. However, as of 2013 no tracking system exists, and products from cloned animals are sold for human consumption in the United States.\n"
] |
when i eat apples my face sweats, why reddit
|
Are you allergic to apples? I've never had this happen or heard of it happening to anyone, but it might be a form of gustatory sweating (as in Frey's syndrome), where eating certain foods triggers sweating on the face or scalp.
|
[
"The skin of the fruit is a delicately waxy yellow-green with crimson spots and reddish lines, but the apple may also occur in a classically red variation. These red apples, known as Red Gravensteins, are sports, which are genetically similar to Gravenstein, so they are not good pollinators for it, and nor is it for them. The flesh is juicy, finely grained, and light yellow.\n",
"BULLET::::- In the episode \"The Lost Art of Forehead Sweat\" of the TV show \"The X-Files\", Agent Scully remembers having it at family celebrations, but misremembers it as \"Goop-O A-B-C\" due to the Mandela effect.\n",
"Ohba said that he always mentioned apples in the thumbnails because he wished to use \"the dying message that Shinigami only eat apples\" and therefore he needed Ryuk to hold apples and that \"There's no other reason.\" Ohba also said that he specifically chose apples as the red \"goes well\" with Ryuk's black body and that the apples \"fit well\" with Ryuk's \"big\" mouth. When Obata informed Ohba that apples held religious and psychological significance and that a person could \"read a lot\" into the inclusion of apples and that he assumed that was the reason why Ohba included the apples, Ohba said that he did not \"think about that at all\" and that he believes that \"apples are cool... that's it. [\"laughs\"]\" Ohba added that he felt including aspects that could become later plot points was beneficial, and the apples were used as a point when Light asked Ryuk to search for the cameras in exchange for apples.\n",
"The apple skin is a yellow, flushed orange, streaked red with russet at the base and apex. The yellow flesh is firm, fine-grained, and sweet with a pear taste. Irregularly shaped and sometimes lopsided, the apple is usually round to conical in shape and flattened at the base with distinct ribbing. Weather conditions during ripening cause a marbling or water coring of the flesh, and in very hot weather, the fruit will ripen prematurely.\n",
"Apples are a rich source of various phytochemicals including flavonoids (e.g., catechins, flavanols, and quercetin) and other phenolic compounds (e.g., epicatechin and procyanidins) found in the skin, core, and pulp of the apple; they have unknown health value in humans.\n",
"\"Sweat\" is a song by American rock band The All-American Rejects, released as the lead single from their forthcoming fifth studio album on July 7, 2017. \"Sweat\" was released alongside another song, \"Close Your Eyes\", in addition to an accompanying 11-minute music video.\n",
"Gustatory hyperhidrosis is excessive sweating that certain individuals regularly experience on the forehead (scalp), upper lip, perioral region, or sternum a few moments after eating spicy foods, tomato sauce, chocolate, coffee, tea, or hot soups. This type of sweating is classified under focal hyperhidrosis, that is, it is restricted to certain regions of the body. A common cause is trauma or damage to the nerve that passes through the parotid gland, which can be due to surgery of the parotid gland (parotidectomy). This type of sweating is known as Frey's syndrome. Gustatory hyperhidrosis has been observed in diabetics with autonomic neuropathy, and a variant of this disorder has been reported following surgical sympathectomy. One of the more effective treatments is oral or topically applied glycopyrrolate. \n"
] |
what happens to water when it goes stale?
|
After about 12 hours, tap water starts to go "flat" as carbon dioxide from the air dissolves into the water in the glass, forming a small amount of carbonic acid; this lowers the water's pH slightly and gives it an off taste.
|
[
"In the United Kingdom, plumbers refer to waste water as 'bad water'. This is under the premise that the water they are moving from one area to another via the use of a drain is not needed and can be removed from the area, like a 'bad apple' being removed from a fruit bowl.\n",
"BULLET::::- Various pathogens, including fungal spores, may accumulate in the water, particularly due to its stagnancy. Unlike in distilled water production, the water is not boiled, which would kill pathogens (including bacteria).\n",
"A substance is anhydrous if it contains no water. Many processes in chemistry can be impeded by the presence of water, therefore, it is important that water-free reagents and techniques are used. In practice, however, it is very difficult to achieve perfect dryness; anhydrous compounds gradually absorb water from the atmosphere so they must be stored carefully.\n",
"Annual storage can be negative during dry years with high water use and positive during wet years with relatively low water use. A long-term negative imbalance between recharge and discharge in an aquifer may lead to the depletion of the available water in the aquifer.\n",
"Fresh produce continues to lose water after harvest. Water loss causes shrinkage and loss of weight. The rate at which water is lost varies according to the product. Leafy vegetables lose water quickly because they have a thin skin with many pores. Potatoes, on the other hand, have a thick skin with few pores. But whatever the product, to extend shelf or storage life the rate of water loss must be minimal. The most significant factor is the ratio of the surface area of the fruit or vegetable to its volume. The greater the ratio the more rapid will be the loss of water. The rate of loss is related to the difference between the water vapour pressure inside the produce and in the air. Produce must therefore be kept in a moist atmosphere.\n",
"The scarcity of fresh water resources is an issue in arid regions around the world, but is becoming more common due to overcommitment of resources. In the case of physical water scarcity, there is not enough water to meet demand. Dry regions do not have access to fresh water in lakes or rivers while access to groundwater is sometimes limited. Regions most affected by this type of water scarcity are Mexico, Northern and Southern Africa, the Middle East, India, and Northern China. \n",
"Most of the water lost this way is stored underground which can change the original hydrology of local aquifers considerably. Many aquifers cannot absorb and transport these quantities of water, and so the water table rises leading to waterlogging.\n"
] |
why hasn't a car company come up with a new better performing, more efficient air cooled engine?
|
Ironic username is ironic.
I think engineers like water cooling because it solves a number of problems: it quiets the engine, provides a reliable source of heat for the HVAC system, and keeps the engine's temperature within a narrower range, which lets it run more efficiently.
That said, I long ago promised myself that my next car would have an air-cooled boxer six in the rear. Still waiting, however.
|
[
"It can be seen that since formula_10 is fixed by the environment, the only way for a designer to increase the Carnot efficiency of an engine is to increase formula_9, the temperature at which the heat is added to the engine. The efficiency of ordinary heat engines also generally increases with operating temperature, and advanced structural materials that allow engines to operate at higher temperatures is an active area of research.\n",
"The new Pratt & Whitney engine should yield 12 percent better fuel economy than existing jets while being quieter, with further efficiency stemming from enhanced aerodynamics and lightweight materials. Together, the engines and high use of composite materials, like the wide-body Boeing 787 Dreamliner and Airbus A350 XWB contribute to the aforementioned 12-15% better cash operating costs.\n",
"Much like aerospace, lighter and stronger materials would be useful for creating vehicles that are both faster and safer. Combustion engines might also benefit from parts that are more hard-wearing and more heat-resistant.\n",
"Extremely efficient vehicle designs capable of achieving 100MPG+ (such as the VW 1l) do not have substantially greater engine efficiency, but instead focus on better aerodynamics, reduced vehicle weight, and using energy that would otherwise be dissipated as heat during braking.\n",
"BULLET::::- In the early 1980s, Toyota researched production of an adiabatic ceramic engine which can run at a temperature of over 6000 °F (3300 °C). Ceramic engines do not require a cooling system and hence allow a major weight reduction and therefore greater fuel efficiency. Fuel efficiency of the engine is also higher at high temperature, as shown by Carnot's theorem. In a conventional metallic engine, much of the energy released from the fuel must be dissipated as waste heat in order to prevent a meltdown of the metallic parts. Despite all of these desirable properties, such engines are not in production because the manufacturing of ceramic parts in the requisite precision and durability is difficult. Imperfection in the ceramic leads to cracks, which can lead to potentially dangerous equipment failure. Such engines are possible in laboratory settings, but mass-production is not feasible with current technology.\n",
"Some high-efficiency engines run without explicit cooling and with only incidental heat loss, a design called adiabatic. Such engines can achieve high efficiency but compromise power output, duty cycle, engine weight, durability, and emissions.\n",
"This time corresponds with significantly increased oversight of the automotive industry with regard to fuel economy and exhaust emissions by the United States Environmental Protection Agency. The resulting vehicles were significantly less powerful and slower due to new emissions restrictions being applied to older, heavier vehicle designs which also often included aggressively detuning the existing large engine designs to meet regulations.\n"
] |
Do blind people have better short-term memory for auditory input?
|
I might just be spitballin' here, but I'm pretty sure that when someone is blind or deaf, they compensate for it through neuroplasticity. When parts of the brain are lacking input and just taking up space (i.e. neurons in the visual cortex that are not in use because of blindness), the brain develops connections with neighboring subsystems based on the stimuli you need to get around in the environment (i.e. better auditory perception to compensate for blindness).
Back to the question: I think you are talking more about working memory, the memory system that bridges short-term and long-term memory. Working memory is a constant feedback loop in which auditory and visual inputs work together to encode information into long-term memory from short-term inputs. If one of those inputs isn't available (for blind people, visual input just isn't a thing), then I believe that, through neuroplasticity, blind people won't necessarily store information longer, but they will be able to attend to it and work with it longer thanks to the additional connections the brain made to compensate for being blind.
For deaf people, just flip everything I said the other way around.
Hope this helps with your understanding. And I hope the reddit community will correct any mistakes I may have made, but I think this is the best explanation for your speculations.
|
[
"It has been suggested that blind individuals have an enhanced ability to hear and recall auditory information in order to compensate for a lack of vision. However, whilst blind adults' neural systems demonstrate heightened excitability and activity compared to sighted adults, it is still not exactly clear to what extent this compensatory hypothesis is accurate. Nevertheless, many studies have found that there appears to be a high activation of certain visual brain areas in blind individuals when they perform non-visual tasks. This suggests that in blind individuals' brains, a reorganization of what are normally visual areas has occurred in order for them to process non-visual input. This supports a compensatory hypothesis in the blind.\n",
"Neuronal plasticity, or the capability of the brain to adapt to new requirements, is a prime example of plasticity stressing that the individual’s ability to change is a lifelong process. Recently, researchers have been analyzing how the spared senses compensate for the loss of vision. Without visual input, blind humans have demonstrated that tactile and auditory functions still fully develop. A superiority of the blind has even been observed when they are presented with tactile and auditory tasks. This superiority may suggest that the specific sensory experiences of the blind may influence the development of certain sensory functions, namely tactile and auditory. One experiment was designed by Röder and colleagues to clarify the auditory localization skills of the blind in comparison to the sighted. They examined both blind human adults’ and sighted human adults’ abilities to locate sounds presented either central or peripheral (lateral) to them. Both congenitally blind adults and sighted adults could locate a sound presented in front of them with precision but the blind were clearly superior in locating sounds presented laterally. Currently, brain-imaging studies have revealed that the sensory cortices in the brain are reorganized after visual deprivation. These findings suggest that when vision is absent in development, the auditory cortices in the brain recruit areas that are normally devoted to vision, thus becoming further refined.\n",
"Research regarding the human brain has also become a major focus at the university. The Brain and Mind Institute focuses on research in cognitive neuroscience at Western. and the Institute recently discovered the blind may echolocate by using the visual cortex of the brain. Another recent study at Western has suggested people deaf from birth may be able to reassign the area of their brain used for hearing to boost their sight.\n",
"New studies have adapted this hypothesis to explain cross modal plasticity which seems to occur in blind people. This is the fact that other senses in blind people seem to be heightened as a result of the loss of vision. Since blind patients are not exposed to the novel function of visual reading, the cortical area normally devoted to this function will be used for a different function. For example, scientists have found that the neural networks devoted to detecting moving sounds in blind people seem to be recruited by the area of the visual cortex responsible for visual movement in the sighted. This supports the theory that novel functions must find a neuronal niche, with existing cortical areas capable of supporting the function. This idea might also be applied to explain cross modality in deaf patients, and other cross modal phenomena such as synethesia and the McGurk effect.\n",
"In April 2006, a team from the UCL Institute of Cognitive Neuroscience published research showing that individuals with a skill for learning other languages could have more \"white brain matter\" in a part of the brain which processes sound. In August 2006, a team led by Dr Emrah Duzel of the UCL Institute of Cognitive Neuroscience published research showing that exposure to new experiences can boost the memory of the human brain. In January 2007 Professor van der Lely of the UCL Centre for Developmental Language Disorders and Cognitive Neuroscience published details of a 10-minute screening test capable of identifying pre-school children who might be dyslexic.\n",
"Recent research indicates that in attention-based tasks such as object tracking and enumeration, deaf subjects perform no better than hearing subjects. Improvement in visual processing is still observed, even when a deaf subject is not paying attention to the direct stimulus. A study published in 2011 found that congenitally deaf subjects had significantly larger neuroretinal rim areas than hearing subjects, which suggests that deaf subjects may have a greater concentration of retinal ganglion cells.\n",
"Auditory display enables eyes-free usage for blind users (via a screen reader) as well as sighted users who are using their eyes for other tasks. A rapid detection of acoustic signals and the omnidirectional feature of the sense of hearing can contribute to the effectiveness of an auditory display even when vision is available.\n"
] |
time signatures
|
Simply, a time signature is the number of beats in a measure.
Listen to a piece of popular modern music, and count the beats. You'll most likely notice that it goes BOOM dot dot dot BOOM dot dot dot or some variant on that. The most significant beat happens every fourth beat.
In music, these four beats define a measure, which is an organizational unit. The time signature tells you how many beats there are in a measure. A time signature of 4/4 means there are 4 beats (top 4), and each is a quarter note (which is the bottom 4). The quarter note part is only important for reading the music, as it identifies which note is the beat.
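Purely as an illustrative aside (a minimal sketch of the arithmetic, not anything a musician would actually use), you can read the two numbers as "how many beats per measure" and "which note value counts as one beat", so a measure spans top/bottom of a whole note. The function name and examples below are made up for illustration:

```python
from fractions import Fraction

def measure_span(signature: str) -> Fraction:
    """Length of one measure as a fraction of a whole note.

    The top number is the beat count; the bottom number is the note
    value that carries the beat (4 = quarter note, 8 = eighth note, ...).
    """
    beats, beat_unit = (int(part) for part in signature.split("/"))
    return beats * Fraction(1, beat_unit)

# 4/4 -> 1 whole note, 3/4 -> 3/4, 3/8 -> 3/8 (counted in three either way),
# 2/2 -> 1 whole note (looks like 4/4 on paper but felt in two), 6/8 -> 3/4.
for sig in ["4/4", "3/4", "3/8", "2/2", "6/8"]:
    print(sig, "=", measure_span(sig), "of a whole note")
```

The sketch just shows that the bottom number affects how the music is written and counted: a 4/4 measure and a 2/2 measure are the same length on paper, and a 3/4 waltz and a 3/8 waltz can sound identical even though their measures are notated differently.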
Now listen to a waltz. You'll notice that the main beat (aka the downbeat) comes every three beats: oom-pah pah, oom-pah pah, and so on. (Note: fast waltzes tend to sound like there's only one beat, but they are subdivided into three.) This is a time signature of 3/4: 3 beats, and the quarter note is the beat. Some waltzes are written in 3/8, which means the eighth note has the beat. This sounds identical, but the music is written differently (usually, this is done to make it easier to read).
Some pieces are in two. These often include marches (which are otherwise in 4) and polkas. Think of the oom-pah, oom-pah tuba beat that underlines a stereotypical march. This is written as 2/4 and sometimes 2/2 (which means the half note has the beat; this looks like a 4/4 measure but has 2 beats. This is done for clarity reasons too).
The next most common signature is 6/8. This means there are 6 eighth notes to a measure, usually felt as 2 groups of 3 notes (though it can also be subdivided as 3 groups of 2). This signature is used for some marches and dances (for example, a march where three notes can happen per beat instead of two). [Example](_URL_5_)
These are the most common time signatures: others exist, but aren't used as frequently.
Some music is in 5, which means there are 5 beats to a measure. This can be 1, 2, 3, 4, 5; or it can be subdivided into 1-2-3, 1-2 or 1-2, 1-2-3. [This piece](_URL_6_), which is in 5/8, mostly alternates between the latter two. [This one](_URL_2_) is in 5/4, and goes 1-2-3,1-2. This is more common for dances; a march isn't fun if you have to take an odd number of steps.
Some pieces are in 7/4 or 7/8, which can be subdivided in several ways ([here's](_URL_3_) an example). Time signatures like 9/8 and 12/8 can be used to divide each beat of a 3/4 or 4/4 measure into threes (compound meter).
There are also pieces which switch time signature, which makes things even more fun. This can be done for a variety of reasons: the music switches from a march to a dance, for example, but sometimes, the melody doesn't really fit into a time signature, so assigning it one that isn't awkward creates some odd signatures. [This](_URL_4_) piece alternates between 5/4 and 6/4, because the melody is free-flowing and assigning any one signature would have interfered with the structure. Composers can use changes for notational reasons: [this](_URL_0_) section of this piece is in 7, but the composer wrote it in measures of 2/2, 2/2, and 3/2 alternating so it is easier to read. (If you listen further into this piece, you'll see the time signature change because the nature of the piece changes.)
These changes can get strange; [this piece](_URL_1_) starts out in "free time," where the conductor marks each chord, and then moves between 1/8, 2/4, 3/4, 4/4, 5/4, 3/8, 4/8, 5/8, 1.5/4, and 2.5/4. (Basically, this is because the composer, Grainger, was really weird.)
The odd and obscure time signatures are just that; you won't see them often, and the first few are the ones worth knowing.
|
[
"The time signature is written as a horizontal fraction: codice_6, codice_7, codice_8, codice_9, etc. It is usually placed after the key signature. Change of time signature within the piece of music may be marked in-line or above the line of music. Some pieces that start with cadenza passages are not marked with time signatures until the end of that passage, even if the passage uses dotted barlines (in which case time is usually implied).\n",
"The time signature (also known as meter signature, metre signature, or measure signature) is a notational convention used in Western musical notation to specify how many beats (pulses) are contained in each measure (bar), and which note value is equivalent to a beat.\n",
"Time signatures define the meter of the music. Music is \"marked off\" in uniform sections called bars or measures, and time signatures establish the number of beats in each. This does not necessarily indicate which beats to emphasize, however, so a time signature that conveys information about the way the piece actually sounds is thus chosen. Time signatures tend to \"suggest\" prevailing groupings of beats or pulses.\n",
"There are various types of time signatures, depending on whether the music follows regular (or symmetrical) beat patterns, including simple (e.g., and ), and compound (e.g., and ); or involves shifting beat patterns, including complex (e.g., or ), mixed (e.g., & or & ), additive (e.g., ), fractional (e.g., ), and irrational meters (e.g., or ).\n",
"Time signatures are defined by how they divide the measure (in , complex triple time, each measure is divided in three, each of which is divided into three eighth notes: 3×3=9). In \"common\" time, often considered , each level is divided in two (simple duple time: 2×2=4). In a common-time rock drum pattern each measure (a whole note) is divided in two by the bass drum (half note), each half is divided in two by the snare drum (quarter note, collectively the bass and snare divide the measure into four), and each quarter note is divided in two by a ride pattern (eighth note). \"Half\"-time refers to halving this division (divide each measure into quarter notes with the ride pattern), while \"double\"-time refers to doubling this division (divide each measure into sixteenth notes with the ride pattern).\n",
"This is a list of musical compositions or pieces of music that have unusual time signatures. \"Unusual\" is here defined to be any time signature other than simple time signatures with top numerals of 2, 3, or 4 and bottom numerals of 2, 4, or 8, and compound time signatures with top numerals of 6, 9, or 12 and bottom numerals 4, 8, or 16.\n",
"Music educator Carl Orff proposed replacing the lower number of the time signature with an actual note image, as shown at right. This system eliminates the need for compound time signatures, which are confusing to beginners. While this notation has not been adopted by music publishers generally (except in Orff's own compositions), it is used extensively in music education textbooks. Similarly, American composers George Crumb and Joseph Schwantner, among others, have used this system in many of their works.\n"
] |
Why was Alfred the great viewed positively by later kings when they only came to power by taking the country from his dynasty?
|
Alfred the Great was extremely popular. He established a model of kingship in England that was built upon by his successors, and copied by later conquerors. Alfred established fantastical roots both for his kingship, and for the Anglo-Saxon people, tying them together with Christianity as glue in the Anglo-Saxon Chronicle. He rallied all the petty kings in England under his own West Saxon aegis, and became the first king of the English. He began a campaign of literacy and literary production to bring England back to the glory days of 8th century scholarship, when England led the world in learning. He and his successors adopted Carolingian models both for kingship and for monasticism (i.e. the Benedictine Reform), tying politics and religion together into a theocratic form of rule. He was a genius, and his work was so far reaching that he set a course for what England would be that in some ways even withstood the flood of Norman politics and Anglo-Norman culture that would later come to dominate the island.
People tend to forget that Cnut the Great took the kingship before William the Conqueror. There were two instances of domination by an outside force, not one, and they came in relatively rapid succession. Cnut and William the Conqueror had very different ways of ruling the country though. Cnut modeled himself after the English kings, and prime among them was Alfred. He wrote laws in a form similar to what came before him, and styled himself after English kings in many ways. See [Cnut's Letter to the English](_URL_0_) to see how he reaches out to the English and connects back to Alfred's English lineage by saying that he will uphold the laws of Edgar. Cnut knew that Alfred and his successors were extremely popular, or at least some of them were, like Edgar, who in many ways carried on Alfred's program of politics and culture better than any other English king, so it's no wonder that Cnut singled him out as his model lawgiver.
William handled things quite differently. He dominated the English from afar and sent barons in his stead to rule the country. He built churches in a very different style overall, and changed the laws of the country dramatically. There are poems in the AS Chronicle that complain about his style of domination and note that he changed the landscape entirely by building new buildings everywhere. This is an instance of one kind of popular sentiment finding its way into the history books.
So why then would later kings take up Alfred as a positive figure? In my view, it is because they had no choice and it was advantageous for them. Alfred had long since become a figure in the popular imagination, and tales were told about him. He was a big enough figure that there are four MSS from the 13th century, in early Middle English, that preserve a text called *The Proverbs of Alfred*. He was such an important figure that he was still seen in England as a font of gnomic proverbs 250-300 years after his death. That's a tide you can't push back, no matter what. So later kings incorporated Alfred into their political and social programs.
Take a look at David Pratt's *The Political Thought of King Alfred the Great* for details about Alfred's political program, with special attention to the role of learning in his model of kingship. Elaine Treharne's *Living Through Conquest: The Politics of Early English* looks at the period of the conquests using the status of English to focus her discussion of the history of the period.
|
[
"Alfred the Great is dying, Rivals for his succession are poised to tear the kingdom apart. The country that Alfred had worked for thirty years to build is likely to disintegrate. Uhtred, a Saxon born warrior, who has been raised by the Danes, wants more than anything else to go and fight to reclaim his stolen Northumbrian inheritance. But he knows that if he deserts the King's cause, Alfred's dream - and the very future of the English nation - might vanish immediately.\n",
"Alfred the Great was an Anglo-Saxon King, who ruled Wessex 871 – 899. His reign is usually regarded as the first in the lists of English monarchs. He has inspired many artistic and cultural works noticeably from the sixteenth century onwards, with a height in the Victorian Period.\n",
"Alfred's reign has been regarded as pivotal in the eventual unification of England, after he famously defended Wessex and southern England against the overwhelming Vikings invasions. Apart from his military success, he was also noted for his translations of Latin texts, education reforms (including advocacy of education in the English language rather than in Latin), improving his kingdom's legal system and civic defense. Alfred's positive image may have been accentuated by Bishop Asser, who was commissioned by Alfred to write his biography. Asser presented Alfred 'as the embodiment of the ideal, but practical, Christian ruler'. Later medieval historians William of Malmesbury, Gaimar Matthew Paris and Geoffrey of Monmouth further reinforced Alfred's favourable image.\n",
"Alfred also assisted his own son by promoting men who could be relied on to support him, and by giving him opportunities for command in battle once he was old enough. In the view of Barbara Yorke, the compilation of the \"Anglo-Saxon Chronicle\", which magnified Alfred's achievements, may have been partly intended to strengthen the case for the succession of his own descendants. However, Yorke also argues that Æthelwold's position was not fatally undermined by Alfred's will. His mother had witnessed a charter as \"regina\", whereas Alfred followed West Saxon tradition in refusing to have his wife consecrated as queen, and Æthelwold's status as the son of a queen may have given him an advantage over Edward. Æthelwold was still the senior ætheling, and the only surviving charter he witnessed shows both him and Edward as \"filius regis\" (son of a king), but lists Æthelwold above Edward, implying that he ranked above him.\n",
"Alfred the Great (, , 'Elf-counsel' or 'Wise-elf'; between 847 and 849 – 26 October 899) was King of Wessex from 871 to and King of the Anglo-Saxons from to 899. He was the youngest son of King Æthelwulf of Wessex. His father died when he was young and three of Alfred's brothers, Æthelbald, Æthelberht and Æthelred, reigned in turn.\n",
"Ann Williams comments: \"Æthelred virtually disinherits his children in favour of Alfred's in the event of his own previous death, at least in respect of the lion's share of the inheritance and therefore the kingship. This is in fact exactly what happened, and Æthelred's sons were not pleased at the outcome.\" In his \"Life\" of Alfred, written in 893, Asser states three times that Alfred was Æthelred's \"secundarius\" (heir apparent), an emphasis that in Ryan Lavelle's view \"reflects sensitivity on the subject of Alfred's succession\".\n",
"Alfred had a reputation as a learned and merciful man of a gracious and level-headed nature who encouraged education, proposing that primary education be conducted in English rather than Latin and improving the legal system, military structure and his people's quality of life. He was given the epithet \"the Great\" during and after the Reformation in the sixteenth century. The only other king of England given this epithet is Cnut the Great.\n"
] |
how come i wake up a minute or two before my bus stop more or less like clockwork when i fall asleep on the bus?
|
It's probably a combination of a bunch of subtle clues that you subconsciously pick up on. It could be a pattern in the stops right before yours, subtle changes in the scent of the air near your stop, specific sounds that you only hear near your stop, the absence of the voice of a person who gets off right before your stop, or a combination of any or all of these and more. There are literally millions of variables that your brain can subconsciously pick up on, and wake you up at the perfect time to get off the bus. Having ridden the same route for 6 years, your brain will be excellent at picking up on subconscious clues that you may not even be able to notice consciously.
|
[
"Buses when late may experience a problem known as bus bunching. On some bus lines with a more frequent service, if one bus falls behind schedule passenger numbers waiting at bus stops may grow, required a longer layover time. One or more subsequent buses on the published schedule may pass these already cleared stops and have a nearly empty run, and may actually jump ahead of their scheduled time to the point that two or more buses are within close sight of one another. In some cases, one bus is able to pass another. This phenomenon is sometimes known as \"clumping\" or \"bunching\". When this occurs, the even spacing of buses on the schedule may be severely disrupted, leading to extremely long waits for those attempting to catch a bus, and multiple buses arriving at once. Bus bunching serves to reduce the effectiveness of buses as a transport mode.\n",
"Their paper concludes that it is usually mathematically quicker to wait for the bus, at least for a little while. But once made, the decision to walk should be final, as opposed to waiting again at subsequent stops.\n",
"There is a common cliché that people “wait all day, and then three come along at once”, in relation to a phenomenon where evenly timetabled bus services can develop a gap in service followed by buses turning up almost simultaneously. This occurs when the rush hour begins and numbers of passengers at a stop increases, increasing the loading time, and thus delay scheduled service. The following bus then catches up because it begins to be delayed less at stops due to fewer passengers waiting. This is called bus bunching. This is prevented in some cities such as Berlin by assigning every stop arrival times where scheduled buses should arrive no earlier than specified.\n",
"Before boarding the bus, the driver chatted with a bus fan for some time. It was reported that the bus departed the racecourse 10 minutes late, leading some passengers to scold and quarrel with the driver. Passengers said the driver then became frustrated, and \"drove really fast as if he was throwing a tantrum\". They said he drove \"very, very fast\" the whole time, without slowing for turns.\n",
"The buses being required to back out of the bays at the existing terminal eat up roughly 10 minutes before all buses can start their trips. This can cause buses to end up running behind schedule later on in the day, as the arrival/depart time at the terminal generally falls farther and farther behind the more trips that are made.\n",
"A bus that is running slightly late will, in addition to its normal load, pick up passengers who would have taken the next bus if the first bus had not been late. These extra passengers delay the first bus even further. In contrast, the bus behind the late bus has a lighter passenger load than it otherwise would have, and may therefore run ahead of schedule. The classical theory causal model for irregular intervals is based on the observation that a late bus tends to get later and later as it completes its run, while the bus following it tends to get earlier and earlier. Eventually these buses form a pair, one right after another, and the service deteriorates as the headway degrades from its nominal value. The buses that are stuck together are called a bus \"bunch\" or \"banana bus\"; this may also involve more than two buses. This effect is often theorised to be the primary cause of reliability problems on bus and metro systems.\n",
"After being told that the bus is experiencing engine trouble, the narrator has no choice but to spend the night in a musty hotel, the Gilman House. While attempting to sleep, he hears noises at his door as if someone is trying to enter. Wasting no time, he escapes out a window and through the streets while a town-wide hunt for him occurs, forcing him at times to imitate the peculiar walk of the Innsmouth locals as he walks past search parties in the darkness. Eventually, he makes his way towards railroad tracks and hears a procession of Deep Ones passing in the road before him. Against his judgment, he opens his eyes to see the creatures and faints at his hiding spot. He wakes up unharmed. Over the years that pass, he researches his family tree and discovers that he is a descendant of Obed Marsh, and realizes that he is changing into one of the Deep Ones. As the story ends, the narrator is accepting his fate and feels he will be happy living with the Deep Ones. He plans to break out his cousin from an asylum, who is even further transformed than he, and take him to the Deep Ones' city beneath the sea.\n"
] |
How much urbanisation was there in South America, besides the Inca cities?
|
Hey there! Are you asking about urbanization in *regions* other than that which the Inca controlled, or are you interested in cultures that inhabited the same areas before them and whose own urbanisms developed into the Inca's?
|
[
"Industrial cities, such as Concepción and Talcahuano, began as colonial centers in the 1600s. Most of the large cities in Chile began as settlement locations for Spanish colonists living in homes constructed from adobe. They have grown to be the densely populated urban locations they are known for today.\n",
"In recent decades, urban growth is largely due to Trujillo population increase of migrant origin, the main contributors of population (1993 census), the interior provinces of La Libertad as Otuzco (15.8%), Santiago de Chuco (9.3%), Ascope (9%) and Sánchez Carrión (5.2%), while 16% contributed Cajamarca and Ancash with 5%;\n",
"During the past three decades, the city of Temuco has had the highest rate of growth in the nation. According to the census of 1970, about 88,000 inhabitants lived in Temuco. In the census of 2000, 30 years later, the population had tripled to 250,000. The resort town of Villarrica, on Lago Villarrica, has expanded rapidly. It is located next to the fast-growing resort of Pucon, now one of the four largest tourist destinations of Chile. According to the 2002 census, the most populated cities are: Temuco (260,783, includes Padre Las Casas), Villarrica (45,531), Angol (43,801), Victoria (23,977), Lautaro (18,808), New Imperial (14,980), Collipulli (14,240), Loncoche (14,191), and Traiguén (14,140).\n",
"As these inhabitants became sedentary, farming allowed them to build settlements and new societies emerged along the coast and in the Andean mountains. The first known city in the Americas was Caral, located in the Supe Valley 200 km north of Lima. It was built in approximately 2500 BC.\n",
"In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization (also Caral or Caral-Supe civilization), Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th century BC and the 18th century BC. Mesoamerica saw the rise of early urbanism in several cultural regions, including the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec drew on these earlier urban traditions.\n",
"Movement from rural to urban areas was very heavy in the middle of the twentieth century, but has since tapered off. The urban population increased from 31% of the total population in 1938, to 57% in 1951 and about 70% by 1990. Currently the figure is about 77%. Thirty cities have a population of 100,000 or more. The nine eastern lowlands departments, constituting about 54% of Colombia's area, have less than 3% of the population and a density of less than one person per square kilometer (two people per sq. mi.).\n",
"In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization, Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north-central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th century BC and the 18th century BC. Mesoamerica saw the rise of early urbanism in several cultural regions, beginning with the Olmec and spreading to the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec drew on these earlier urban traditions.\n"
] |
why is brown rice so much drier than white rice?
|
Brown rice is just white rice that hasn't been processed, so it still has a coating called the bran layer.
The bran layer is a tough, fibrous coating. It isn't very absorbent, and it surrounds the outside of the rice grain, so when you feel it in your mouth it seems drier and more coarse than white rice, which has that tough part removed so only the softer, more absorbent middle is left.
|
[
"Red rice is a variety of rice that is colored red by its anthocyanin content. It is usually eaten unhulled or partially hulled, and has a red husk, rather than the more common brown. Red rice has a nutty flavor. Compared to polished rice, it has the highest nutritional value of rices eaten with the germ intact.\n",
"Brown rice is a whole-grain rice with the inedible outer hull removed, white rice is the same grain with the hull, bran layer and cereal germ removed. Red rice, gold rice, and black rice (also called purple rice) are all whole rices, but with differently pigmented outer layers.\n",
"Brown rice and white rice have similar amounts of calories and carbohydrates. Brown rice is a whole grain and a good source of magnesium, phosphorus, selenium, thiamine, niacin, vitamin B6, and manganese, and is high in fiber. White rice, unlike brown rice, has the bran and germ removed, and therefore has different nutritional content.\n",
"Black rice has a deep black color and usually turns deep purple when cooked. Its dark purple color is primarily due to its anthocyanin content, which is higher by weight than that of other colored grains. It is suitable for creating porridge, dessert, traditional Chinese black rice cake, bread, and noodles.\n",
"On the basis of processing type, rice can be divided into the two broad categories of brown and white. Brown rice is whole grain, with only the inedible hull of the seed removed, while white (milled) rice additionally has the bran and germ removed through the process of milling. Milled rice may not necessarily actually be white in color; there are also purple, black, and red variants of rice, which can be eaten whole grain or milled.\n",
"White rice is milled rice that has had its husk, bran, and germ removed. This alters the flavor, texture and appearance of the rice and helps prevent spoilage and extend its storage life. After milling, the rice is polished, resulting in a seed with a bright, white, shiny appearance.\n",
"Red rice is a good source of thiamin (vitamin B), riboflavin (vitamin B), fibre, iron and calcium. The flavor of cooked red cargo rice is generally more sweet and nutty, and the texture is more chewy than standard white polished rice. Red rice takes longer to cook than white rice, but not as long as brown rice. Soaking the rice in water for at least 30 minutes before cooking produces a softer texture.\n"
] |
how do glow sticks light up when broken
|
The outside tube is made of flexible plastic and contains a solution of phenyl oxalate and fluorescent dye. Inside that is a glass tube containing hydrogen peroxide. When you bend the stick far enough, the glass tube breaks and the two chemical solutions can mix. The reaction between phenyl oxalate and hydrogen peroxide produces energy, which goes into the fluorescent dye. The dye then emits that energy as light.
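In scheme form (a simplified sketch based on the diphenyl oxalate ester used in typical glow sticks; the exact ester and dye vary by product):

$$\text{diphenyl oxalate} + \mathrm{H_2O_2} \rightarrow 2\,\text{phenol} + \text{1,2-dioxetanedione}$$

$$\text{1,2-dioxetanedione} \rightarrow 2\,\mathrm{CO_2} + \text{energy}$$

$$\text{dye} \xrightarrow{\text{energy}} \text{dye}^{*} \rightarrow \text{dye} + h\nu\ (\text{light})$$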
|
[
"Glow sticks emit light when two chemicals are mixed. The reaction between the two chemicals is catalyzed by a base, usually sodium salicylate. The sticks consist of a tiny, brittle container within a flexible outer container. Each container holds a different solution. When the outer container is flexed, the inner container breaks, allowing the solutions to combine, causing the necessary chemical reaction. After breaking, the tube is shaken to thoroughly mix the components.\n",
"In 1974, the glow was explained by R. J. van Zee and A. U. Khan. A reaction with oxygen takes place at the surface of the solid (or liquid) phosphorus, forming the short-lived molecules HPO and that both emit visible light. The reaction is slow and only very little of the intermediates are required to produce the luminescence, hence the extended time the glow continues in a stoppered jar.\n",
"A glow stick is a self-contained, short-term light-source. It consists of a translucent plastic tube containing isolated substances that, when combined, make light through chemiluminescence, so it does not require an external energy source. The light cannot be turned off and can only be used once. Glow sticks are often used for recreation, but may also be relied upon for light during military, police, fire, or emergency medical services operations. They are also used by military and police to mark ‘clear’ areas.\n",
"In glow sticks, phenol is produced as a byproduct. It is advisable to keep the mixture away from skin and to prevent accidental ingestion if the glow stick case splits or breaks. If spilled on skin, the chemicals could cause slight skin irritation, swelling, or, in extreme circumstances, vomiting and nausea. Some of the chemicals used in older glow sticks were thought to be potential carcinogens. The sensitizers used are polynuclear aromatic hydrocarbons, a class of compounds known for their carcinogenic properties.\n",
"Some examples of glow-in-the-dark materials do not glow by phosphorescence. For example, glow sticks glow due to a chemiluminescent process which is commonly mistaken for phosphorescence. In chemiluminescence, an excited state is created via a chemical reaction. The light emission tracks the kinetic progress of the underlying chemical reaction. The excited state will then transfer to a dye molecule, also known as a sensitizer or fluorophor, and subsequently fluoresce back to the ground state.\n",
"The glow stick contains two chemicals, a base catalyst, and a suitable dye (sensitizer, or fluorophor). This creates an exothermic reaction. The chemicals inside the plastic tube are a mixture of the dye, the base catalyst, and diphenyl oxalate. The chemical in the glass vial is hydrogen peroxide. By mixing the peroxide with the phenyl oxalate ester, a chemical reaction takes place, yielding two moles of phenol and one mole of peroxyacid ester (1,2-dioxetanedione). The peroxyacid decomposes spontaneously to carbon dioxide, releasing energy that excites the dye, which then relaxes by releasing a photon. The wavelength of the photon—the color of the emitted light—depends on the structure of the dye. The reaction releases energy mostly as light, with very little heat. The reason for this is that the reverse [2 + 2] photocycloadditions of 1,2-dioxetanedione is a forbidden transition (it violates Woodward–Hoffmann rules) and cannot proceed through a regular thermal mechanism.\n",
"A glow discharge is a plasma formed by the passage of electric current through a gas. It is often created by applying a voltage between two electrodes in a glass tube containing a low-pressure gas. When the voltage exceeds a value called the striking voltage, the gas ionization becomes self-sustaining, and the tube glows with a colored light. The color depends on the gas used.\n"
] |
What kind of programs or movements existed to assist the homeless between the 19th century and the Great Depression?
|
Unions, churches, and ethnic clubs/general community organizations.
Before Roosevelt, there was a strong mainstream belief that the poor should not rely on a government dole in order to get by. If they needed some help, then the community could help somebody when they were down. But this was temporary, and it relied totally on the community for support. If the entire community was generally down (like during the Depression), then this help could dry up completely.
Further, union membership could help a person (usually a white man, but non-Chinese minorities and women were accepted into the Knights of Labor, and eventually this would happen in other unions as well) get jobs. Organizations like the First International Workingmen's Association (a European Marxist organization) would also often give money to people who were striking in order to prolong the strike. But that was only if your strike was important enough, and if the government (even the US!) didn't put you down first.
This doesn't really change until the New Deal, when Roosevelt created work programs like the WPA and the CCC which paid people to work. Note: it paid people *to work*. It wasn't until Roosevelt's second term that he passed the Social Security Act which, in addition to giving money to grannies, created unemployment benefits and disability payments. This was essentially a government dole: you got paid for "nothing". But until the New Deal this idea of free money was looked down on as socialist (and even look at what people say about welfare and people on long-term unemployment). Even employing people directly (as opposed to hiring companies) was generally avoided by the government.
|
[
"In the early 1900s, the organization began an expansive philanthropic program that included employment bureaus, co-operative stores, medical dispensaries, distribution of clothes, women's sewing classes, Thanksgiving meals, reading rooms, fresh air camps and other establishments. During the advent of the Great Depression in the 1930s, Volunteers of America mobilized to assist the millions of people who were unemployed, hungry and homeless. Relief efforts included employment bureaus, wood yards, soup kitchens and \"Penny Pantries\" where every food item cost one cent.\n",
"Homelessness was present before the Great Depression, and was a common sight before 1929. Most large cities built municipal lodging houses for the homeless, but the Depression exponentially increased demand. The homeless clustered in shanty towns close to free soup kitchens. These settlements were often trespassing on private lands, but they were frequently tolerated or ignored out of necessity. The New Deal enacted special relief programs aimed at the homeless under the Federal Transient Service (FTS), which operated from 1933–1935.\n",
"The growing movement toward social concern sparked the development of rescue missions, such as the U.S. first rescue mission, the New York City Rescue Mission, founded in 1872 by Jerry and Maria McAuley. In smaller towns, there were hobos, who temporarily lived near train tracks and hopped onto trains to various destinations. Especially following the American Civil War, a large number of homeless men formed part of a counterculture known as \"hobohemia\" all over the United States. This phenomenon re-surged in the 1930s during and after the Great Depression.\n",
"Aberhart instituted a variety of relief programs to help people out of poverty, as well as public works programs and a debt relief program that froze some debt collections and mortgage foreclosures. This, like Tommy Douglas' similar program in Saskatchewan, was later overturned in the mid-1940s by the Supreme Court, although it aided people for a number of years during and (for a short time) after the Great Depression.\n",
"The growing movement toward social concern sparked the development of rescue missions, such as America's first rescue mission, the New York City Rescue Mission, founded in 1872 by Jerry and Maria McAuley. In smaller towns, there were hobos, who temporarily lived near train tracks and hopped onto trains to various destinations. Especially following the American Civil War, a large number of homeless men formed part of a counterculture known as \"hobohemia\" all over America.\n",
"The Community Mental Health Act of 1963 was a predisposing factor in setting the stage for homelessness in the United States. Long-term psychiatric patients were released from state hospitals into SROs and supposed to be sent to community mental health centers for treatment and follow-up, but it never quite worked out properly. The community mental health centers mostly did not materialize, and this population largely was found living in the streets soon thereafter with no sustainable support system.\n",
"AFDC (originally called Aid to Dependent Children) was created during the Great Depression to alleviate the burden of poverty for families with children and allow widowed mothers to maintain their households. The New Deal employment program such as the Works Progress Administration primarily served men. Prior to the New Deal, anti-poverty programs were primarily operated by private charities or state or local governments; however, these programs were overwhelmed by the depth of need during the Depression. The United States has no national program of cash assistance for non-disabled poor individuals who are not raising children.\n"
] |
how do they change wedding ring sizes without cutting?
|
Hey, goldsmith here. The picture you linked is, like another user said, a device we call a "ring stretcher". There are various forms of it, and it usually includes a compressor as already mentioned. This works only for rings of one color that are the same all the way around the band. Jewelers will not use this for a ring that has any stones in it, or rings that are two-toned, or that have areas thinner than others, because it causes "unexpected and/or undesired results".

As you have already guessed, rings that are compressed or stretched maintain their original weight; it is all the same material, but you are either stretching it out thinner to cover more area or compressing it to cover a smaller area. If you compress a ring a single ring size, it will not suddenly gain a whole millimeter's worth of thickness; the change is much smaller and not very noticeable. You would need to stretch or compress a ring about 5 sizes or so to see a noticeable change. In rings that are going down many sizes and are not very thick, you will usually see a concave depression form on the inside of the band.

If you stretch a ring that is two-toned, or white gold and yellow gold, you may cause the parts to separate, because the alloys used to make karat yellow gold and karat white gold are different and stretch and compress differently. If you stretch a ring with stones set in it, the areas removed to accommodate the stones will stretch faster, which weakens the settings and makes the metal stretch irregularly; the same goes for rings with thin and thick design areas.

Additionally, rings that have a plain bottom shank and elaborate crown and shoulder areas can also sometimes be stretched, but usually at the cost of the thickness of the shank. This can be done by placing the ring on a steel ring mandrel and "tapping" on the shank gently with a brass or steel hammer. There are many limitations to this, but it can generally be done safely on a ring that has never been cut-sized and has been properly annealed, for about 1/2 to 3/4 of a single size up.
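To put rough numbers on the "same metal, bigger circle" idea, here's a toy Python model. It assumes a plain, uniform band whose metal volume is conserved; the diameter-change-per-size figure and the example dimensions are illustrative assumptions, not a goldsmith's sizing chart.

```python
import math

# Toy model (assumptions, not a goldsmith's chart): a plain, uniform band
# whose metal volume stays constant, so circumference * width * thickness
# is conserved. Real rings with stones, two-tone metal, or thin/thick areas
# don't stretch this evenly, which is exactly the problem described above.

MM_PER_FULL_SIZE = 0.83  # rough diameter change per US full size (assumed)

def stretched_thickness(inner_diameter_mm, thickness_mm, sizes_up):
    old_circumference = math.pi * inner_diameter_mm
    new_circumference = math.pi * (inner_diameter_mm + sizes_up * MM_PER_FULL_SIZE)
    # width is unchanged in this model, so thickness scales with 1/circumference
    return thickness_mm * old_circumference / new_circumference

# A roughly size-7 band (about 17.3 mm inside diameter), 2.0 mm thick:
print(round(stretched_thickness(17.3, 2.0, 1), 2))  # ~1.91 mm after one size up
print(round(stretched_thickness(17.3, 2.0, 5), 2))  # ~1.61 mm after five sizes up
```

One size up shaves only a few hundredths of a millimeter off the wall, which is why a single-size stretch is usually invisible while a five-size stretch is not.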
|
[
"As of 2015, princess cut diamonds were the second most popular choice for an engagement ring. Approximately 30% of engagement rings use princess cut diamonds, behind round diamonds (50%) and ahead of cushions (8%). It saw its popularity at its peak in the 80s and 90s. The princess cut experienced a rise in popularity from the early 2000s to the mid 2000s. In the 2000s, the most popular engagement ring featured a princess cut diamond surrounded by round brilliant-cut diamonds. Disney in conjunction with Zales created a series of Disney Princess rings, with some of them, such as Aurora's, Fa Mulan's, Snow White's, and Tinker Bell's featuring princess cuts.\n",
"In the modern era, some women's wedding rings are made into two separate pieces. One part is given to her to wear as an engagement ring when she accepts the marriage proposal and the other during the wedding ceremony. When worn together, the two rings look like one piece of jewelry. The engagement ring is not worn during the wedding ceremony, when the wedding ring is put by the groom on the finger of the bride, and sometimes by the bride onto the groom's finger. After the wedding, the engagement ring is put back on, and is usually worn on the outside of the wedding ring.\n",
"The princess cut (technical name 'square modified brilliant') is a diamond cut shape often used in engagement rings. The name dates back to the 1960s, while the princess cut as it exists was created by Betazel Ambar and Israel Itzkowitz in 1980. The cut has a square or rectangular shape when viewed from above, and from the side is similar to that of an inverted pyramid with four beveled sides. Its popularity was at its highest in the 80s and 90s, though its popularity was high in the 2000s as well. It is the second most popular diamond cut, below round and above cushion.\n",
"In the United States, where engagement rings are worn by women, diamonds have been widely featured in engagement rings since the middle of the 20th century. Solitaire rings have one diamond. The most common setting for engagement rings is the solitaire prong setting, which was popularized by Tiffany & Co. in 1886 and its six-claw prong setting design sold under the \"Tiffany setting\" trademark. The modern favorite cut for an engagement ring is the brilliant cut, which provides the maximum amount of sparkle to the gemstone. The traditional engagement rings may have different prong settings and bands. Another major category is engagement rings with side stones. Rings with a larger diamond set in the middle and smaller diamonds on the side fit under this category. Three-stone diamond engagement rings, sometimes called \"trinity rings\" or \"trilogy rings\", are rings with three matching diamonds set horizontally in a row with the bigger stone placed in the center. The three diamonds on the ring are typically said to represent the couple's past, present, and future, but other people give religious significance to the arrangement.\n",
"Wrap rings can serve several purposes. They can be used in addition to a wedding band or can be used as the wedding band itself, creating a coordinated wedding ring set. Wrap rings are sometimes chosen as anniversary gifts or used when renewing wedding vows. In some instances, enhancers have been chosen to wrap around an heirloom solitaire diamond ring, such as a mother’s engagement ring. Generally, these rings are used simply to enhance the appearance of a basic, diamond solitaire ring.\n",
"The diameter of the ring is 15 \"shaku\" (4.55 meters), which increased from 13 \"shaku\" (3.94 meters) in 1931. The rice-straw bales (\"tawara\" (俵)) which form the ring are one third standard size and are partially buried in the clay of the \"dohyō\". Four of the \"tawara\" are placed slightly outside the line of the circle at the four cardinal directions, these are called privileged bales (\"tokudawara\"). Originally, this was to allow rain to run off the surface, when sumo tournaments were held outdoors in the open. Today, a wrestler under pressure at the edge of the ring will often try to move himself round to one of these points to gain leverage in order to push back more effectively against the opponent who is trying to force him out.\n",
"The princess cut is a relatively new diamond cut, having been created in the 1970s. The cut is sometimes referred to as a \"square modified brilliant\", as it combines the brilliance of a round cut with an overall square or rectangular appearance. Today, the princess cut is the second most popular diamond shape, second only to rounds.\n"
] |
why do modern phones lack the soap opera effect of modern tvs?
|
I think they're referring to that hyper-realistic quality in the picture. I don't like it; it reminds me of a soap opera. I thought I was high the first time I saw it on someone's TV.
|
[
"BULLET::::- The visual quality of a soap opera is usually lower than prime time U.S. television drama series due to the lower budgets and quicker production times. This is also because soap operas are recorded on videotape using a multi-camera setup, unlike primetime productions that are usually shot on film and frequently use the single camera shooting style. Because of the lower resolution of video images, and also because of the emotional situations portrayed in soap operas, daytime serials make heavy use of close-up shots. Programs in the United States did not make the full conversion to high definition broadcasting until September 2011, when \"The Bold and the Beautiful\" became the last soap to convert to the format; \"One Life to Live\" was an exception to this, as it continued to be produced and broadcast in standard definition – albeit in the aspect ratio – until the end of its run on ABC in January 2012.\n",
"As women increasingly worked outside of the home, daytime television viewing declined. New generations of potential viewers were not raised watching soap operas with their mothers, leaving the shows' long and complex storylines foreign to younger audiences. Now, as viewers age, ratings continue to drop among young adult women, the demographic group that soap opera advertisers pay the most for. Those who might watch in workplace breakrooms are not counted, as Nielsen does not track television viewing outside the home. The rise of cable and the internet has also provided new sources of entertainment during the day. The genre's decline has additionally been attributed to reality television displacing soap operas as TV's dominant form of melodrama. An early term for the reality TV genre was \"docu-soap\". A precursor to reality TV, the televised 1994–95 O. J. Simpson murder case, both preempted and competed with an entire season of soaps, transforming viewing habits and leaving soap operas with 10 percent fewer viewers after the trial ended.\n",
"With the advent of internet television and mobile phones, several soap operas have also been produced specifically for these platforms, including \"\", a spin-off of the established \"EastEnders\". For those produced only for the mobile phone, episodes may generally consist of about six or seven pictures and accompanying text.\n",
"The Soap Opera has been a staple of American television programming for over 50 years. However, in the late 2000s a number of these daytime dramas began to face major budget cuts and cancellations. Producer Matthew D'Amato and his crew sat with actors, producers, writers, and fans in over 70 interviews to provide an insider view of the world of soap operas as told by the people who live it in an attempt to uncover the reasons behind the sudden decline of the daytime soaps.\n",
"In the United Kingdom, United States, Canada, and Australia, talk shows (hosted by a single personality or a larger panel) are a significant part of this timeslot, as well as, to a lesser extent, game shows and soap operas. In the U.S., the Big Three television networks all provide some degree of daytime programming, but the once-popular genre of soap operas have declined; although a few remain active, they have been largely replaced by less-expensive programming such as talk shows (including \"Strahan and Sara\", \"The Talk\", and \"Today with Hoda & Jenna\", which fill timeslots once filled by \"One Life to Live\", \"As the World Turns\", and \"Passions\" respectively). Game shows were also common in U.S. daytime lineups, but by the 1990's, only CBS's long-running \"The Price Is Right\" remained (which was later joined in 2009 by a revival of \"Let's Make a Deal\", which replaced the cancelled soap \"Guiding Light\").\n",
"Audiences were already accustomed to continuing story serials from the radio, but this was the first attempt on television. The term \"soap opera\" came about since many soap and cleaning product companies started trying to reach the many housewives who were home to listen these programs which normally aired in the afternoons. The \"Oxford English Dictionary\"s citation for the phrase dates its first appearance in print to 1938.\n",
"Another popular medium in U.S. television moving into the 1970s was the soap opera, which moved from being a genre watched exclusively by housewives to having a sizable audience of men (who largely watched \"The Edge of Night\") and college students; the latter audience helped \"All My Children\" gain a devoted following, as it was on during many universities' traditional \"lunch period.\" In a \"TIME\" article written about the genre in 1976, it was estimated that as many as 35 million households tuned into at least one soap opera each afternoon, the most popular being \"As the World Turns\", which routinely grabbed viewing figures of twelve million or higher each day.\n"
] |
how do companies like primerica make you money?
|
Don't use Primerica. It's a scam, or at least as close as you can legally get to a scam. Their fees are insanely high, and their products are way too expensive.
_URL_0_
_URL_1_
_URL_9_
_URL_2_
_URL_8_
_URL_3_
_URL_7_
_URL_5_
_URL_6_
_URL_4_
If you really want advice on things like retirement saving, IRAs etc. come ask at /r/personalfinance. They have had quite a large number of detailed discussions on these things if you search back, and the community is generally willing to help everyone out.
|
[
"In Primerica's eleven-tiered multilevel-marketing system, the company's sales representatives receive a commission for selling financial products, while a portion of the sale is also paid to the representative's recruiter, the recruiter's recruiter, and so on, up to eleven levels. Sales agents are tasked with selling the company's products to a \"warm market” including their family and friends. As of 2010, Primerica required its sales representatives to pay a $25 monthly fee, and it is estimated that these fees netted the company revenue of $2.5 million per month from its own employees. In 2010, Primerica was reported to have over 100,000 representatives selling the company's financial products, with individual earnings averaging $5,156 per year. More than 190,000 new sales recruits paid a fee to sign up for Primerica in 2014, but the company only boosted its total licensed sales force by 3,700 that year, and each member of the sales team earned an average of $6,030. As of September 2017, the company reported having 124,436 independent representatives.\n",
"Primerica, Inc. is an insurance and financial services company that uses multi-level marketing to sell financial products and services. Headquartered in unincorporated Gwinnett County, Georgia, Primerica spun off from its former parent company Citigroup through an initial public offering on April 1, 2010. \n",
"Primerica reported having 120,000 independent representatives in 2017, through Primerica's securities broker-dealer affiliate PFS Investments, Inc. in the United States, and through PFSL Investments Canada Ltd. in Canada. The company primarily sells term life insurance, as well as mutual funds, annuities, segregated funds, managed accounts, long-term care insurance, pre-paid legal services, auto insurance, home insurance, credit monitoring and debt management plans. Primerica was listed by \"Forbes\" as one of \"Americas 50 Most Trustworthy Financial Companies\" in 2015.\n",
"The business model of the company was created to help Mexican entrepreneurs find new and more modern ways to finance start ups, as well as provide investors alternatives to investing abroad. The company believes that a weak peso against the dollar is good for them as it makes U.S. investments less attractive, leading investors to look for alternatives.\n",
"Primer Group of Companies is a Philippine company engaged in the retail sale and distribution of consumer brands and products. The company carries international brands mostly lifestyle products. The Primer Group also operates its own lifestyle boutique which includes Res/Toe/Run, The Travel Club, Ladybag, Flight001, Bratpack, GRND, General and R.O.X .In addition to the retail and distribution, The Primer Group recently opened its first retail and merchandising academy - APEX.\n",
"In 2012, Primerica was the target of multiple lawsuits alleging that the company's representatives sought to profit by earning commissions after convincing Florida firefighters, teachers and other public workers to divest from safe government-secured retirement investments to inappropriate high-risk retirement products offered by Primerica. In January 2014, the company set aside $15.4 million to settle allegations involving 238 cases.\n",
"AMORC is a worldwide organization, established in the United States as a nonprofit 501(c)(3) public benefit corporation, with the specific and primary purpose of advancing the knowledge of its history, principles, and teachings for charitable, educational, and scientific purposes. It is financed mainly through fees paid by its members. Income is used by the organization to pay expenses, develop new programs, expand services, and carry out educational work.\n"
] |
For a peasant/farmer, how onerous was Roman taxation? Was it good value for the services provided?
|
It really depended on where you were and when. Even in modern times there have been areas that could resist taxation with remoteness, despite modern states having considerably more coercive power. In general, however, taxation was collected by local authorities and folded into rent payment. This could be ruinous, or it could be light. Historical sources mention instances of peasant farmers successfully negotiating an advantageous payment, and others being ground down by extraction.
|
[
"Tax farming was originally a Roman practice whereby the burden of tax collection was reassigned by the Roman State to private individuals or groups. In essence, these individuals or groups paid the taxes for a certain area and for a certain period of time and then attempted to cover their outlay by collecting money or saleable goods from the people within that area. The system was set up by Gaius Gracchus in 123 BC primarily to increase the efficiency of tax collection within Rome itself but the system quickly spread to the Provinces. Within the Roman Empire, these private individuals and groups which collected taxes in lieu of the bid (i.e. rent) they had paid to the state were known as \"publicani\", of whom the best known is the disciple Matthew, a \"publicanus\" in the village of Capernaum in the province of Galilee. The system was widely abused, and reforms were enacted by Augustus and Diocletian. Tax farming practices are believed to have contributed to the fall of the Western Roman Empire in Western Europe.\n",
"In the early days of the Roman Republic, public taxes consisted of modest assessments on owned wealth and property. The tax rate under normal circumstances was 1% and sometimes would climb as high as 3% in situations such as war. These modest taxes were levied against land, homes and other real estate, slaves, animals, personal items and monetary wealth. The more a person had in property, the more tax they paid. Taxes were collected from individuals.\n",
"In the early days of the Roman Republic, public taxes consisted of assessments on owned wealth and property. The tax rate under normal circumstances was 1% of property value, and could sometimes climb as high as 3% in situations such as war. These taxes were levied against land, homes and other real estate, slaves, animals, personal items and monetary wealth. By 167 BC, Rome no longer needed to levy a tax against its citizens in the Italian peninsula, due to the riches acquired from conquered provinces. After considerable Roman expansion in the 1st century, Augustus Caesar introduced a wealth tax of about 1% and a flat poll tax on each adult, this made the tax system less progressive (as it no longer only taxed wealth).\n",
"The earliest and most widespread forms of taxation were the corvée and tithe, both of which can be traced back to the beginning of civilization. The corvée was state-imposed forced labour on peasants too poor to pay other forms of taxation (\"labour\" in ancient Egyptian is a synonym for taxes). Low taxes helped the Roman aristocracy increase their wealth, which equalled or exceeded the revenues of the central government. An emperor sometimes replenished his treasury by confiscating the estates of the \"super-rich\", but in the later period, the resistance of the wealthy to paying taxes was one of the factors contributing to the collapse of the Empire.\n",
"Merchants and peasant farmers paid property and poll taxes in coin cash and land taxes with a portion of their crop yield. Peasants obtained coinage by working as hired laborers for rich landowners, in businesses like breweries or by selling agricultural goods and homemade wares at urban markets. The Han government may have found collecting taxes in coin the easiest method because the transportation of taxed goods would have been unnecessary.\n",
"Tax farming was an important step in the history of economic development by providing a method for collecting taxes across a large area without the need for a tax-collecting bureaucracy, or during periods when such a bureaucracy is unworkable or impossible to maintain. Systems of tax farming similar to the Roman model were used in Ptolemaic Egypt, various medieval Western European countries, the Ottoman and Mughal empires, and in Qing Dynasty China. As states become stronger, buoyed up by revenues brought in by tax farming, the practice was discontinued in favour of centralized tax collection systems. In part this was because tax farming systems tended to rely on wealthy individuals outside the state machinery, gangs, and secret societies.\n",
"At the height of the Republic's era of provincial expansion (roughly 146 BC until the end of the Republic in 27 BC) the Roman tax farming system was very profitable for the publicani. The right to collect taxes for a particular region would be auctioned every few years for a value that (in theory) approximated the tax available for collection in that region. The payment to Rome was treated as a loan and the publicani would receive interest on their payment at the end of the collection period. In addition, any excess (over their bid) tax collected would be pure profit for the publicani. The principal risk to the publicani was that the tax collected would be less than the sum bid.\n"
] |
If mitochondria exist as almost separate entities from the cell, do they always divide perfectly during mitosis?
|
In mammals, mitochondria don't necessarily divide during mitosis; instead, they divide when the host cell requires more energy and combine or die out when less energy is needed. During mitosis, the mitochondria present are split up between the two daughter cells. In single-celled eukaryotes, mitochondrial division is coordinated with the cell cycle to ensure each daughter cell receives mitochondria. That being said, mitochondria divide by [binary fission](_URL_2_), much like prokaryotes do. This process is simple compared to mitotic division and is about as close to perfect as possible, except that the two halves might not be the same size.
Mitochondria are essentially cells of their own. They have their own DNA and produce their own energy, but they must acquire nutrients from the host cell. In return, the host cell gets a healthy supply of energy. This symbiotic relationship means that one cannot survive without the other: the mitochondria will starve from lack of nutrients, and the host cell will eventually die without a good source of energy. The [Endosymbiotic theory](_URL_1_) can help with understanding how this relationship could have arisen. Not all eukaryotic cells require mitochondria, though (these are called amitochondrial); a [RBC; Red Blood Cell](_URL_0_) is an example of this.
> If so, how frequent does this happen?
I'm not sure how frequently this happens, but I would assume it isn't very often, or that such cells don't last long enough for us to observe it happening much.
Source: Biotechnology student.
TL;DR: Mitochondria divide pretty damn close to perfectly, except sometimes for size. I assume it would be hard to gauge how often cells end up with no mitochondria. Someone else may be able to help with this point.
|
[
"Most cells only have one centrosome for most of their cell cycle, however, right before mitosis, the centrosome duplicates, and the cell contains two centrosomes. Some of the microtubules that radiate from the centrosome grow directly away from the sister centrosome. These microtubules are called astral microtubules. With the help of these astral microtubules the centrosomes move away from each other towards opposite sides of the cell. Once there, other types of microtubules necessary for mitosis, including interpolar microtubules and K-fibers can begin to form.\n",
"Although mitochondria are commonly depicted as singular oval-shaped structures, it has been known for at least a century that they form a highly dynamic network within most cells where they constantly undergo fission and fusion. Mitochondria can divide by prokaryotic binary fission and since they require mitochondrial DNA for their function, fission is coordinated with DNA replication. Some of the proteins that are involved in mitochondrial fission have been identified and some of them are associated with mitochondrial diseases. Mitochondrial fission has significant implications in stress response and apoptosis.\n",
"The mitosis process in the cells of eukaryotic organisms follow a similar pattern, but with variations in three main details. \"Closed\" and \"open\" mitosis can be distinguished on the basis of nuclear envelope remaining intact or breaking down. An intermediate form with partial degradation of the nuclear envelope is called a \"semiopen\" mitosis. With respesct to the symmetry of the spindle apparatus during metaphase, an approximately axially symmetric (centered) shape is called as \"orthomitosis\", distinguished from the eccentric spindles of \"pleuromitosis\", in which mitotic apparatus has a bilateral symmetry. Finally, a third criterion is the location of the central spindle in case of closed pleuromitosis: \"extranuclear\" (spindle located in the cytoplasm) or \"intranuclear\" (in the nucleus).\n",
"The mitochondrion (plural mitochondria) is a double-membrane-bound organelle found in most eukaryotic organisms. Some cells in some multicellular organisms may, however, lack them (for example, mature mammalian red blood cells). A number of unicellular organisms, such as microsporidia, parabasalids, and diplomonads, have also reduced or transformed their mitochondria into other structures. To date, only one eukaryote, \"Monocercomonoides\", is known to have completely lost its mitochondria. The word mitochondrion comes from the Greek , , \"thread\", and , , \"granule\" or \"grain-like\". Mitochondria generate most of the cell's supply of adenosine triphosphate (ATP), used as a source of chemical energy. A mitochondrion is thus termed the \"powerhouse\" of the cell. \n",
"At a certain point during the cell cycle in open mitosis, the cell divides to form two cells. In order for this process to be possible, each of the new daughter cells must have a full set of genes, a process requiring replication of the chromosomes as well as segregation of the separate sets. This occurs by the replicated chromosomes, the sister chromatids, attaching to microtubules, which in turn are attached to different centrosomes. The sister chromatids can then be pulled to separate locations in the cell. In many cells, the centrosome is located in the cytoplasm, outside the nucleus; the microtubules would be unable to attach to the chromatids in the presence of the nuclear envelope. Therefore, the early stages in the cell cycle, beginning in prophase and until around prometaphase, the nuclear membrane is dismantled. Likewise, during the same period, the nuclear lamina is also disassembled, a process regulated by phosphorylation of the lamins by protein kinases such as the CDC2 protein kinase. Towards the end of the cell cycle, the nuclear membrane is reformed, and around the same time, the nuclear lamina are reassembled by dephosphorylating the lamins.\n",
"Cell division in \"Tetraspora\" species has been described. It is noted that prior to mitosis beginning, cells become immotile and the basal bodies located at the surface of cells start to retreat in. This causes the preprophase nucleus to migrate toward retreating basal body complex, around which microtubules start to gather. The basal body complex arranges itself to be closely associated with one pole of the cell, creating a mitotic spindle known as open polar fenestrae. Furthermore, it is speculated that the spindle itself may also be unicentric. Eventually microtubules extend from the spindle, and during anaphase they penetrate through the fenestrae and split the nucleus. Subsequently to telophase, the nucleus reforms, but a phycoplast forms. In addition, a protoplast is found inside the cell wall and is noted to rotate within the wall during cleavage; a process known to occur by the cell undergoing furrowing.\n",
"When mitosis is completed, the cell plate and new cell wall form starting from the center along the plane occupied by the phragmosome. The cell plate grows outwards until it fuses with the cell wall of the dividing cell at exactly the spots predicted by the preprophase band.\n"
] |
why are fiber-optic connections faster? don't electrical signals move at the speed of light anyway, or close to it?
|
Individual signals inside both fiber and electrical cables do travel at similar speeds.
But you can send way more signals down a fiber cable at the same time as you can an electrical cable.
Think of each cable as a multi-lane road. Electrical cable is like a 5-lane highway.
Fiber cable is like a 200 lane highway.
So cars on both highways travel at 65 mph, but on the fiber highway you can send way more cars.
If you're trying to send a bunch of people from A to B, each carload of people will get there at the same speed, but you'll get everyone from A to B in less overall time on the fiber highway than on the electrical highway, because you can send way more carloads at the same time.
**Bonus Info** This is the actual meaning of the term *bandwidth*. It's commonly used to describe the speed of an internet connection, but it actually refers to the range of frequencies being used for a communications channel. A group of sequential frequencies is called a *band*. One way to describe a communications channel is to talk about how wide its band of frequencies is, otherwise called its *bandwidth*. The wider your band is, the more data you can send at the same time, and so the faster your *overall* transfer speed is.
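Here is a quick back-of-the-envelope Python sketch of the highway analogy. All the numbers in it (link length, bandwidths, file size, signal speed) are made-up assumptions for illustration, not real measurements of any particular link.

```python
# Back-of-the-envelope comparison (all numbers are illustrative assumptions):
# a single bit takes about the same time to cross either medium, but the
# higher-bandwidth link finishes moving the whole file much sooner.

def transfer_time(file_bits, bandwidth_bps, distance_m, signal_speed_mps=2e8):
    propagation_delay = distance_m / signal_speed_mps   # how long one "car" takes
    serialization_delay = file_bits / bandwidth_bps      # how long to send all the "cars"
    return propagation_delay + serialization_delay

file_bits = 8e9        # a 1 GB file
distance_m = 100_000   # a 100 km link

copper_s = transfer_time(file_bits, bandwidth_bps=1e9, distance_m=distance_m)    # ~1 Gbit/s
fiber_s = transfer_time(file_bits, bandwidth_bps=100e9, distance_m=distance_m)   # ~100 Gbit/s

print(f"copper: {copper_s:.4f} s, fiber: {fiber_s:.4f} s")
# copper: ~8.0005 s, fiber: ~0.0805 s — the first bit arrives after ~0.5 ms
# on both links; the fiber link just carries far more bits per second.
```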
**EDIT**
**COMMENTS** Many other contributors have pointed out that there is a lot more complexity just below the surface of my ELI5 explanation. The reason *why* fiber can have more lanes than electrical cables is an interesting albeit challenging topic and I encourage all of you to dig into the replies and other comments for a deeper understanding of this subject.
|
[
"The main benefits of fiber are its exceptionally low loss (allowing long distances between amplifiers/repeaters), its absence of ground currents and other parasite signal and power issues common to long parallel electric conductor runs (due to its reliance on light rather than electricity for transmission, and the dielectric nature of fiber optic), and its inherently high data-carrying capacity. Thousands of electrical links would be required to replace a single high bandwidth fiber cable. Another benefit of fibers is that even when run alongside each other for long distances, fiber cables experience effectively no crosstalk, in contrast to some types of electrical transmission lines. Fiber can be installed in areas with high electromagnetic interference (EMI), such as alongside utility lines, power lines, and railroad tracks. Nonmetallic all-dielectric cables are also ideal for areas of high lightning-strike incidence.\n",
"Optical fiber is used as a medium for telecommunication and computer networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with much lower attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.\n",
"Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled into cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.\n",
"For short-distance applications, such as a network in an office building (see FTTO), fiber-optic cabling can save space in cable ducts. This is because a single fiber can carry much more data than electrical cables such as standard category 5 Ethernet cabling, which typically runs at 100 Mbit/s or 1 Gbit/s speeds. Fiber is also immune to electrical interference; there is no cross-talk between signals in different cables, and no pickup of environmental noise. Non-armored fiber cables do not conduct electricity, which makes fiber a good solution for protecting communications equipment in high voltage environments, such as power generation facilities, or metal communication structures prone to lightning strikes. They can also be used in environments where explosive fumes are present, without danger of ignition. Wiretapping (in this case, fiber tapping) is more difficult compared to electrical connections, and there are concentric dual-core fibers that are said to be tap-proof.\n",
"The LED light sources sometimes used with multi-mode fiber produce a range of wavelengths and these each propagate at different speeds. This chromatic dispersion is another limit to the useful length for multi-mode fiber optic cable. In contrast, the lasers used to drive single-mode fibers produce coherent light of a single wavelength. Because of the modal dispersion, multi-mode fiber has higher pulse spreading rates than single mode fiber, limiting multi-mode fiber’s information transmission capacity.\n",
"The transmission distance of a fiber-optic communication system has traditionally been limited by fiber attenuation and by fiber distortion. By using opto-electronic repeaters, these problems have been eliminated. These repeaters convert the signal into an electrical signal, and then use a transmitter to send the signal again at a higher intensity than was received, thus counteracting the loss incurred in the previous segment. Because of the high complexity with modern wavelength-division multiplexed signals (including the fact that they had to be installed about once every 20 km), the cost of these repeaters is very high.\n",
"The capacity of fiber optic networks has increased in part due to improvements in components, such as optical amplifiers and optical filters that can separate light waves into frequencies with less than 50 GHz difference, fitting more channels into a fiber. The erbium-doped optical amplifier was developed by David Payne at the University of Southampton in 1986 using atoms of the rare earth erbium that are distributed through a length of optical fiber. A pump laser excites the atoms, which emit light, thus boosting the optical signal.\n"
] |
Has the privilege of knights to make knights ever been abolished?
|
Alternatively I'd question the premise. Did common knights (as opposed to Barons, Earls, or other titled nobility) ever have the right to bestow knighthood upon others - particularly with regard to England, Great Britain, or the UK? If not, where did this idea arise?
Today all knighthoods are bestowed by the Queen or by a member of the Royal family on her behalf. I have no idea with regard to the past.
|
[
"The Knights as a group were governed by the General Directorate (\"Generaldirektorium\"). This exercised the \"jus retractus\", the right to buy back any land sold to a non-knight for the original price within three years, and the \"just collectandi\", the right to collect taxes for the upkeep of the knightly order, even on estates that had been sold to non-knights. The knights also had the right to tax their subjects directly, and also possessed the feudal rights to the \"corvée\" and the \"bannum\". The knights' reputation for heavy taxes (the maligned \"Rittersteuer\") and high judicial fines rendered them an anachronism in the eyes of imperial reformers.\n",
"In the Peace of Westphalia, the privileges of the Imperial Knights were confirmed. The knights paid their own tax (voluntary) to the Emperor, possessed limited sovereignty (rights of legislation, taxation, civil jurisdiction, police, coin, tariff, hunt; certain forms of justice), and the \"ius reformandi\" (the right to establish an official Christian denomination in their territories). The knightly families had the right of house legislation, subject to the Emperor's approval, and so could control such things as the marriage of members and set the terms of the inheritance of family property. Imperial knights did not, however, have access to the Imperial Diet.\n",
"Knights of the new Order were appointed right up to the end of the First Empire in 1814. On their initial restoration in 1814 the Bourbons neither abolished nor awarded the Order of the Reunion and Napoleon awarded it during the Hundred Days. On 28 July 1815 Louis XVIII of France abolished it, asking its knights to return their gold and silver badges to the chancellory of the Legion d’Honneur. Those returned included few from the Netherlands since the cross was the replacement for the Order of the Union and the Dutch – having seen their country looted and drained of manpower for so long by the French – were unwilling to send their gold and silver awards back to Paris.\n",
"The 13th century witnessed the trend of monarchs, beginning with Emperor Frederick II (as King of Sicily) in 1231, retaining the right of \"fons honorum\" as a royal prerogative, gradually abrogating the right of knights to elevate their esquires to knighthood. After the end of feudalism and the rise of the nation-state, orders and knighthoods, along with titles of nobility (in the case of monarchies), became the domain of the monarchs (heads of state) to reward their loyal subjects (citizens) – in other words, the heads of state became their nations' \"\"fountains of honour\"\".\n",
"Villani states that there were only 75 full-dress knights in his day and not 250 knights as in the previous government of Florence, because the popular second government denied the magnates much of their authority and status, \"hence few persons were knighted.\" In 1293, new city ordinances were passed that stated anyone who did not belong to a guild or a council of the captain of the people were to be barred from serving as priors, standard-bearers of justice, or judges. This effectively excluded the powerful magnates of the city from holding important offices, while a prison for magnates was built in 1294, and Giovanni Villani writes that the first magnates punished for failing to adhere to these ordinances were the Galli.\n",
"The immediate status of the Imperial Knights was recognized at the Peace of Westphalia. They never gained access to the Imperial Diet, the parliament of lords, and were not considered Hochadel, the high nobility, belonging to the Lower Nobility. \n",
"The Knights of the Royal Oak was an intended order of knighthood. It was proposed in 1660 at the time of the restoration of Charles II of England, known as the English Restoration. It was to be a reward to those Englishmen who faithfully & actively supported him during his exile in France. The knights so created were to be called \"Knights of the Royal Oak\", and bestowed with a silver medal, on a ribbon, depicting the king in the Royal oak tree, a reference to the oak tree at Boscobel House, then called the \"Oak of Boscobel\", in which King Charles II hid to escape the Roundheads after the Battle of Worcester in 1651. Men were selected from all the counties of England and Wales, with the number from each county being in proportion to the population. William Dugdale in 1681 noted 687 names, each with a valuation of their estate in pounds per year. The estates of 18 men were valued at more than £3,000 per year. The names of the recipients are also listed in the baronetages, published in five volumes, 1741.\n"
] |
why do so many superhero origin stories involve parents dying?
|
It is a very common trope in fiction. It is frequently used because it can accomplish several things:
- It gives the protagonist a motive. Why are they a superhero / chasing this villain / in this job? Easy answer: their parents died.
- If the crime was committed by the main antagonist, it's an easy way to establish that they are bad news. They are obviously evil because they will kill a kid's parents (or at least try to). It can also show how incompetent the law agencies / other people in the story are if the villain isn't caught for the crime until our hero gets involved.
- It clears the way for the protagonist to do things parents would normally stop them from doing.
- It gives the protagonist a source of angst (which, to be honest, some writers mistake for a fully-functioning personality) and helps create a dark and edgy tone for the work.
|
[
"However, baby May and her parents were never reunited in Marvel's main continuity. Editors repeatedly stated that the baby died, or at the very least would never be seen again; the child was considered a major factor in the aging of the characters. In \"Marvel Knights Spider-Man\" issue No. 9, Mac Gargan, while speaking of Norman Osborn, states \"He kills your unborn child, you kill his son\". To date, this is the most conclusive evidence of the infant's fate.\n",
"The heroes and heroines of most Disney movies come from unstable family backgrounds; most are either orphaned or have no mothers. Few, if any, have only single-parent mothers. In other instances, mothers are presented as \"bad surrogates\" eventually \"punished for their misdeeds.\" There is much debate about the reasoning behind this phenomenon. It is notable that the phenomenon, while present since the beginning of the Disney canon in the presence of the Evil Queen in \"Snow White and the Seven Dwarfs\", became more prevalent upon the death of Walt Disney's mother Flora in a tragic accident where she asphyxiated in the home that her son had bought her. Two of the most remembered and notable examples occurred directly after she died, these being the long imprisonment of Mrs. Jumbo in \"Dumbo\" and the dramatic death of Bambi's mother in \"Bambi\".\n",
"Death is a frequently used dramatic device in comic book fiction, and in particular superhero fiction. Unlike stories in television or film, character deaths are rarely by unforeseen behind-the-scenes events, as there is no analogous situation to having actors portraying characters. Instead, characters are typically killed off as part of the story, or occasionally by editorial mandate to generate publicity for a title. Teasers may hint at characters' deaths for an extended period. A number of factors often mean that these changes are not permanent. Due to extremely long print runs, the popularity of these characters (with writers and fans) and occasionally rights issues for using the character in licensed adaptations, characters are often brought back to life by later writers. This can happen either as a depiction of their literal resurrection or by retcon, a revision which changes earlier continuity and establishes the character not to have died in the first place. This phenomenon is known as the comic book death. Killing off a main character such as Superman, Batman or Captain America can often lead to an uptick in publicity for a comic book, as well as high sales for the story in which they are inevitably brought back to life.\n",
"Death in superhero fiction is rarely permanent, as characters who die are often brought back to life through supernatural means or via retcons (retroactive changes to the continuity), the alteration of previously established facts in the continuity of a fictional work. Fans have termed the practice of bringing back dead characters \"comic book death\".\n",
"Because of the brutal manner in which Alex is killed, and because of the different 'significant others' of superheroes that are constantly in danger of being killed, women in the series who are killed in a particularly violent manner, to further a male hero's story, are said to have suffered from the Woman in Refrigerator syndrome.\n",
"Because death in american super-hero comics is so often temporary, readers rarely take the death of a character seriously—when a character dies, the reader feels very little sense of loss, and simply left wondering how long it will be before they return to life.\n",
"Orphans are especially common as characters in comic books. Almost all the most popular heroes are orphans: Superman, Batman, Spider-Man, Robin, The Flash, Captain Marvel, Captain America, and Green Arrow were all orphaned. Orphans are also very common among villains: Bane, Catwoman, and Magneto are examples. Lex Luthor, Deadpool, and Carnage can also be included on this list, though they killed one or both of their parents. Supporting characters befriended by the heroes are also often orphans, including the Newsboy Legion and Rick Jones.\n"
] |
what's the difference between cables that send power and cables that send data? how come phone cables can do both?
|
At each end of every cable is a connector. Typically cables have a male connector that plugs into the female connector on a device. On every connector there are multiple "pins" or connection points. A wire is soldered to each pin, and the wires are bundled together and encased in one cable.
For example, on an instrument/old telephone connector there are 2 pins, labeled signal and ground. Inside the cable are two wires of different colors, one soldered to signal and the other soldered to ground.
Now a USB connector has 4 pins: Vcc (+5V supply), D- (Data -), D+ (Data +) and Ground. There is also a more or less standard color coding for the wires inside the cable: Vcc is red, Ground is black, D- is white, and D+ is green. If you cut open an iPod charging cable you can see those four individual wires.
In some applications you need multiple pins on a connector for supplying power and signal, or multiple signals. In other applications you can send power and the signal on the same line (like telephones), or send multiple signals together using a technique called multiplexing (which is how TV cables work). You generally need a different connector and cable for each application because of differences in power requirements, number of signals, size of the device, etc.
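To make the "power and signal on the same line" idea concrete, here is a minimal sketch (in Python, with illustrative numbers rather than exact telephone-line specifications) of how a plain phone line carries both at once: the exchange puts a steady DC voltage on the wire pair to power the phone, and the audio rides on top of it as a small AC signal that the phone separates back out.

```python
import math

# Illustrative numbers only; real analogue phone lines nominally run around
# -48 V DC from the exchange, with the voice superimposed as a small AC signal.
DC_SUPPLY_VOLTS = -48.0       # steady voltage that powers the phone
AUDIO_AMPLITUDE_VOLTS = 1.0   # small AC swing standing in for speech
AUDIO_FREQ_HZ = 1000.0        # a 1 kHz test tone

def line_voltage(t_seconds: float) -> float:
    """Instantaneous voltage on the wire pair: DC power plus the AC audio."""
    return DC_SUPPLY_VOLTS + AUDIO_AMPLITUDE_VOLTS * math.sin(
        2 * math.pi * AUDIO_FREQ_HZ * t_seconds
    )

# The receiving end splits the two again: a capacitor blocks the DC and passes
# the audio on to the speaker, while the DC component runs the electronics.
samples = [round(line_voltage(n / 8000.0), 3) for n in range(8)]
print(samples)
```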
Hope this helps!
|
[
"Electrical cables are used to connect two or more devices, enabling the transfer of electrical signals or power from one device to the other. Cables are used for a wide range of purposes, and each must be tailored for that purpose. Cables are used extensively in electronic devices for power and signal circuits. Long-distance communication takes place over undersea cables. Power cables are used for bulk transmission of alternating and direct current power, especially using high-voltage cable. Electrical cables are extensively used in building wiring for lighting, power and control circuits permanently installed in buildings. Since all the circuit conductors required can be installed in a cable at one time, installation labor is saved compared to certain other wiring methods.\n",
"Practically all long-distance communication transmits data one bit at a time, rather than in parallel, because it reduces the cost of the cable. The cables that carry this data (other than \"the\" serial cable) and the computer ports they plug into are usually referred to with a more specific name, to reduce confusion.\n",
"Telecommunications cables are a type of guided transmission mediums. Cables are usually known to transmit electric energy (AC/DC); however, cables in telecommunications fields are used to transmit electromagnetic waves; they are called \"electromagnetic wave guides\".\n",
"A telco cable, also known as a Telecom cable or Amphenol cable, is a thick cable used for connecting multiple voice or data lines for LANs or telecommunications. The ends use 25 pairs of polarized pins (50 pins total). This cable handles up to 25 data channels or phone lines. The name Amphenol comes from the company that first manufactured it.\n",
"A serial cable is a cable used to transfer information between two devices using a serial communication protocol. The form of connectors depends on the particular serial port used. A cable wired for connecting two DTEs directly is known as a null modem cable.\n",
"Coaxial cable is a type of transmission line, used to carry high frequency electrical signals with low losses. It is used in such applications as telephone trunklines, broadband internet networking cables, high speed computer data busses, carrying cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line.\n",
"Coaxial cable is a type of transmission line, used to carry high frequency electrical signals with low losses. It is used in such applications as telephone trunklines, broadband internet networking cables, high speed computer data busses, carrying cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line.\n"
] |
Are artistic traits, such as being able to draw exceptionally or excelling in the musical arts, dependent on one's genes?
|
'Artistic' is probably not a genetic trait. But if you break it down into its component parts, then each of the following is at least PARTLY genetic: hand-eye coordination, fine motor movement, gross motor movement, depth perception, color perception, tonal perception, dexterity, muscle composition, flexibility, memory, etc.
If you have a genetic predisposition to a large number of those sub-traits then you may be well on your way to being genetically 'artistic.'
|
[
"A relationship between music and the strengthening of math, dance, reading, creative thinking and visual arts skills has also been reported in literature. (Winner, Hetland, Sanni, as reported in \"The Arts and Academic Achievement - What the Evidence Shows\", 2000) However recent findings by Dr. Levitin of McGill University in Montreal, Canada, undermines the suggested connection between musical ability and higher math skills. In a study conducted on patients with Williams syndrome (a genetic disorder causing low intelligence), he found that even though their intelligence was that of young children, they still possessed an unusually high level of musical ability.\n",
"Musical aptitude refers to a person's innate ability to acquire skills and knowledge required for musical activity, and may influence the speed at which learning can take place and the level that may be achieved. Study in this area focuses on whether aptitude can be broken into subsets or represented as a single construct, whether aptitude can be measured prior to significant achievement, whether high aptitude can predict achievement, to what extent aptitude is inherited, and what implications questions of aptitude have on educational principles.\n",
"Research shows that participation in the arts plays a vital role in influencing brain development and performance. Arts which are considered enrichment in education programs, may in fact be central to the way humans neurologically process and learn. In 1999, The President's Committee on the Arts and Humanities teamed up with the Arts Education Partnership to publish a comprehensive study on the inclusion of the arts in education.\n",
"Bloom argues that there are 1 to 5 percent of students who have special talent for learning a subject (especially music and foreign languages) and there are also around five percent of students who have special disability for learning a subject. For other 90% of students, aptitude is merely an indicator of the rate of learning.\n",
"The comparisons between talent and genius are explored in regard to time and degree. Leonardo da Vinci and Raphael are provided as examples; to debate between who is more talented is a moot point. Both were extremely talented artists but the other artists of the time \"came late when the feast was over through no fault of their own\" (7). The ideas of 'nature vs. nurture' in regard to the inheritable traits of genius; Kubler concludes that genius is indeed a result of 'nurture' as learning is not a biological concern.\n",
"Some researchers have taken a social-personality approach to the measurement of creativity. In these studies, personality traits such as independence of judgement, self-confidence, attraction to complexity, aesthetic orientation, and risk-taking are used as measures of the creativity of individuals. A meta-analysis by Gregory Feist showed that creative people tend to be \"more open to new experiences, less conventional and less conscientious, more self-confident, self-accepting, driven, ambitious, dominant, hostile, and impulsive.\" Openness, conscientiousness, self-acceptance, hostility, and impulsivity had the strongest effects of the traits listed. Within the framework of the Big Five model of personality, some consistent traits have emerged. Openness to experience has been shown to be consistently related to a whole host of different assessments of creativity. Among the other Big Five traits, research has demonstrated subtle differences between different domains of creativity. Compared to non-artists, artists tend to have higher levels of openness to experience and lower levels of conscientiousness, while scientists are more open to experience, conscientious, and higher in the confidence-dominance facets of extraversion compared to non-scientists.\n",
"Students that are high performing or low performing are no more likely to be more genetically exceptional than an average student and same genes influence performance all across the distribution of performance. In other words, a math professor and a student struggling with mathematics are using the same genes when they perform mathematical tasks. There are many genes with small effect which are working together in an interplay with many environmental experiences and the same genes can have allelic differences. This is why turning on or off genes is unlikely to have large effects.\n"
] |
A question from my 4yo son
|
All from Wikipedia - _URL_0_
The core (25% of the radius) is up to 150 times the density of water.
The next layer, the radiative zone (from 25% to 70% of the radius), has a density ranging from 20 times the density of water (similar to gold) down to one fifth the density of water (similar to cork).
The convective zone (from 70% to the visible surface) has a density that drops to 1/6,000th the density of air.
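To put those ratios into absolute numbers, here is a quick back-of-the-envelope conversion (a sketch only, assuming water at 1,000 kg/m^3 and sea-level air at about 1.2 kg/m^3; the ratios are the ones quoted above):

```python
WATER_DENSITY = 1000.0   # kg/m^3
AIR_DENSITY = 1.2        # kg/m^3, approximate sea-level value

layers = {
    "core (up to 150x water)":              150 * WATER_DENSITY,
    "radiative zone, bottom (20x water)":    20 * WATER_DENSITY,
    "radiative zone, top (1/5 of water)":    WATER_DENSITY / 5,
    "convective zone, top (1/6000 of air)":  AIR_DENSITY / 6000,
}

for name, rho in layers.items():
    print(f"{name}: ~{rho:g} kg/m^3")
```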
|
[
"Sek-Lung, or Sekky the third son, is the youngest child of the Chen family and the second child born to Father and Stepmother. Because he tended to be sickly, he becomes close to Poh-Poh, who spends most of her time taking care of him. When she passes away, he becomes obsessed with the war games that have emerged with the impending Second World War and finds his world increasingly confusing when his babysitter, Meiying, begins an illicit relationship with a Japanese boy.\n",
"Kehinde (Short for Omokehinde) is a given name of Yoruba origin meaning \"the second-born of the twins\" or the one who comes after Taiwo. Though Taiwo is the firstborn, it is believed that Kehinde is the elder twin, sending Taiwo into the world first to determine if it is time to be born.\n",
"Her eldest son, , is 28 years old, who took care of his younger siblings in Yukie's absence. Her second eldest son, , is a 27 years old martial artist and a Comrade with the ability to manipulate electricity. is the 16 years old daughter and a computer wiz. She controls the technical aspects of their operations. is the second oldest daughter in the family and is 10 years old. She is Yuta's classmate and a Comrade with the ability to run at high speeds. The last 3 kids are Osomatsu, Karamatsu and Choromatsu (nicknames), and 5, 4 and 3 years old, respectively.\n",
"\"Middle Child\" (stylized in all caps) is a song by American rapper J. Cole. The song was released on January 23, 2019, through Dreamville Records, Roc Nation and Interscope Records, as the first single from Dreamville's 2019 compilation album, \"Revenge of the Dreamers III\". The song was written by J. Cole, Allan Felder, Norman Harris, and Tyler Williams and produced by T-Minus and Cole. It was serviced to rhythmic and urban contemporary radio on February 5, 2019. The track contains a sample from \"Wake Up to Me\", written by Felder and Harris, as performed by First Choice. On the song, J. Cole explores \"his place between the old and new generations of hip hop, making him the 'middle child' of rap.\"\n",
"The previous life of Rin from \"Please Save My Earth\". In the events after the first series, Shion has lived in peace with Mokuren and watching over Rin, Alice, and Ren as spirits. Shion is fond of Ren and treats the boy as his own son due to his and Mokuren's inability to have children. When Ren's psychic abilities awaken and he has difficulty controlling them, Shion tells him that he is the one meant to protect the Earth, which inspires Ren to promise him and Mokuren that he will do his best. when Alice becomes pregnant with her and Rin's second child, Shion subconsciously sent his powers to the child, allowing it to take on his physical appearance, even though the child turns out to be a girl. Mokuren speculates that the child, whom Rin names Chimako, is the daughter that Shion has always dreamed of having. On a side note, he is fond of Japanese culture, particularly manga and video games. He also once took over Rin's body periodically to spend more time with Ren, with Rin not noticing and even having no memory of the incidents and becoming angry with Shion when he found out. Rin forbids Shion from seeing Ren once Ren starts losing control of his psychic powers and develops a spoilt personality as a result of experiencing his father's old memories but promises Shion he can see Ren when he grows up and learns to control his powers.\n",
" is the Fourth Child. Toji is a tough boy, the stereotypical \"jock,\" and is in constant conflict with Asuka. His younger sister was injured in the battle between the third Angel, Sachiel, and Unit 01, and he beats Shinji up soon after for being indirectly responsible. However, after witnessing firsthand the suffering Shinji experiences piloting Unit 01 in the battle against Shamshel, Toji comes to respect Shinji, apologizes to Shinji for beating him up (his sister was angry at him for beating him up and demanded he apologize), and the two eventually become friends. When the Tokyo branch of NERV gains custody of Unit-03, Toji is identified as the Fourth Child and approached to become an EVA pilot; he agrees in order for his sister to receive better medical treatment. During the activation test for Unit-03, the EVA is possessed by the Angel Bardiel and the unit is destroyed by Unit-01 under the control of the Dummy Plug system when Shinji refuses to fight against the possessed EVA out of concern for the pilot. Unit-01 savagely destroys the possessed Evangelion under extreme protest of Shinji, and crushes the pilot's cockpit capsule, causing Toji to lose his leg. In the manga, the incident results in Toji's death. In \"Rebuild of Evangelion\", Toji does not become a pilot; instead, Asuka pilots Unit-03. His sister's name is also revealed as Sakura.\n",
"Ayobamidele, meaning 'my joy has followed me home', is the last of three siblings. He lost his father at the age of 13, thereafter relying on his mother and relatives for support. Dele was taught by his late mother, who died on 18 May 2007, not to despair even when times are tough. This was exemplified by her continual support of him even when others had written him off. She had given him up to a third chance at passing his WAEC (Senior Secondary Exams). Although his mother's source of income was from petty trading, and she had two older children Dr. OlaDele B. Ajayi and Debbie Ajayi to care for, she labored hard to sustain her family, and in the words of Momodu, \"She didn't give up on me.\"\n"
] |
how do public policy think-tanks work?
|
They get paid to think about problems and write reports of their conclusions. The analysts that work in these places combine expertise in the subject matter (economics, foreign policy, ...) with critical thinking and analysis skills to explore the problem, the solution space, and possible courses of action.
|
[
"A think tank or policy institute is a research institute which performs research and advocacy concerning topics such as social policy, political strategy, economics, military, technology, and culture. Most policy institutes are non-profit organisations, which some countries such as the United States and Canada provide with tax exempt status. Other think tanks are funded by governments, advocacy groups, or corporations, and derive revenue from consulting or research work related to their projects.\n",
"The Think Tanks are typically held at a university where sustainable tourism is taught and researched. They consist of research presentations, keynote speakers, a research agenda, and curriculum development sessions. Each year the Think Tank has a particular theme and seeks to provide vision and cutting-edge insight to the topic at hand. Past themes have included:\n",
"Think tanks vary by ideological perspectives, sources of funding, topical emphasis and prospective consumers. Some think tanks, such as The Heritage Foundation, which promotes conservative principles, and the Center for American Progress are more partisan in purpose. Others, including the Tellus Institute, which emphasizes social and environmental topics, are more issue-oriented groups.\n",
"Government think tanks are also important in the United States, particularly in the security and defense field. These include the Center for Technology and National Security Policy at the National Defense University, the Center for Naval Warfare Studies at the Naval War College, and the Strategic Studies Institute at the U.S. Army War College.\n",
"As mentioned above, Tim Groseclose of UCLA and Jeff Milyo of the University of Missouri at Columbia use think tank quotes, in order to estimate the relative position of mass media outlets in the political spectrum. The idea is to trace out which think tanks are quoted by various mass media outlets within news stories, and to match these think tanks with the political position of members of the U.S. Congress who quote them in a non-negative way. Using this procedure, Groseclose and Milyo obtain the stark result that all sampled news providers -except Fox News' Special Report and the Washington Times- are located to the left of the average Congress member, i.e. there are signs of a liberal bias in the US news media. \n",
"According to the National Institute for Research Advancement, a Japanese policy institute, think tanks are \"one of the main policy actors in democratic societies ..., assuring a pluralistic, open and accountable process of policy analysis, research, decision-making and evaluation\". A study in early 2009 found a total of 5,465 think tanks worldwide. Of that number, 1,777 were based in the United States and approximately 350 in Washington DC alone.\n",
"Think tanks such as the Center for Strategic and International Studies, the Carnegie Endowment for International Peace, the Carter Center, the Council on Foreign Relations, the Brookings Institution, and the Africa Center for Strategic Studies, utilize ACLED data for research and to inform policy recommendations. \n"
] |
Were the Persians and the Chinese empires aware of each other's existence and did they frequently interact?
|
The first known interaction was reported by Zhang Qian, a Chinese explorer. He wrote that the Persians were an advanced civilization, and commented on their currency, wine, cultivation, and walled cities, as well as the sheer number of cities. This was in 126 BCE, and afterwards the two peoples began exchanging embassies and missions and had a peaceful relationship.
As Persia entered the Sassanid period, the two civilizations benefited greatly from trade on the Silk Road and were very close. Both of them worked together to guard the trade routes, and they continued to send missions to each other. The Persians were noted for sending great entertainers to the Chinese courts.
After Persia was conquered by the Islamic caliphates, relations continued. However, in 751 the Abbasid Caliphate and the Chinese had a border dispute and fought the Battle of Talas, in the region of the Syr Darya. The Abbasid side won a decisive victory, and after the battle relations returned to normal, with the two sides again exchanging envoys, goods, and missions and working together.
|
[
"Sino-Persian relations (Chinese: 中国–波斯关系, Persian: [same as above]) refer to the historic diplomatic, cultural and economic relations between the cultures of China proper and Greater Iran, dating back to ancient times, since at least 200 B.C. The Parthians and Sassanid empires (occupying much of present Iran and Central Asia) had various contacts with the Han, Tang, Song, Yuan, Ming. For millennia the two ancient civilizations of Asia were further connected both economically and culturally via the Silk Road. The two were also briefly unified under the Mongol Empire.\n",
"There have been various periods in the history of China where a number of Arabs, Persians and Turks from the Western Regions (Central Asia and West Asia) migrated to China. Persians intermarried around the time of Manichaeism's spread to China before the Great Anti-Buddhist Persecution. Moreover, Persians brought Buddhism to China and there is evidence of close relationship during its pre-Islamic times (see An Shigao).\n",
"Han dynasty emperors and their successors maintained commercial and diplomatic ties with various South and Southeast Asian kingdoms. Han dynasty ships traveled as far as India, expanding the horizon for new foreign markets for Chinese goods and services through maritime trade within the orbit of the Indian Ocean. Trade relationships were also established between China and foreign empires through the conquered territories. Trade connected China with the Indian Mauryan, Sātavāhana and Shunga Empires, the Persian Parthian Empire, and the European Roman Mediterranean. Roman dancers and entertainers were sent to Luoyang as a gift to China from a Burmese kingdom in 120. A kingdom referred to in the \"Book of Han\" as Huangzhi delivered a rhinoceros in the year 2 AD as a tribute. An Indian embassy arrived in China between 89 and 105. Roman merchants from the province of Syria visited Nanyue in 166, Nanjing in 226, and Luoyang in 284. Foreign products have been found at archaeological sites excavating tombs in southern China. Originating with the overseas demand for Chinese silk, the ancient Silk Road trade routes were responsible for the transmission of goods and services as well as ideas between ancient Europe, the Near East, and China.\n",
"On various occasions, Sassanian kings sent their most talented Persian musicians and dancers to the Chinese imperial court. Both empires benefited from trade along the Silk Road, and shared a common interest in preserving and protecting that trade. They cooperated in guarding the trade routes through central Asia, and both built outposts in border areas to keep caravans safe from nomadic tribes and bandits.\n",
"Sino-Roman relations comprised the mostly indirect contact, flow of trade goods, information, and occasional travellers between the Roman Empire and Han Empire of China, as well as between the later Eastern Roman Empire and various Chinese dynasties. These empires inched progressively closer in the course of the Roman expansion into the ancient Near East and simultaneous Han Chinese military incursions into Central Asia. Mutual awareness remained low, and firm knowledge about each other was limited. Only a few attempts at direct contact are known from records. Intermediate empires such as the Parthians and Kushans, seeking to maintain lucrative control over the silk trade, inhibited direct contact between these two Eurasian powers. In 97 AD, the Chinese general Ban Chao tried to send his envoy Gan Ying to Rome, but Gan was dissuaded by Parthians from venturing beyond the Persian Gulf. Several alleged Roman emissaries to China were recorded by ancient Chinese historians. The first one on record, supposedly from either the Roman emperor Antoninus Pius or his adopted son Marcus Aurelius, arrived in 166 AD. Others are recorded as arriving in 226 and 284 AD, with a long absence until the first recorded Byzantine embassy in 643 AD.\n",
"Lower-level cultural interchanges also took place between India and Persia during this period. For example, Persians imported the early form of chess, the \"chaturanga\" (Middle Persian: \"chatrang\") from India. In exchange, Persians introduced backgammon (\"Nēw-Ardašēr\") to India.\n",
"At the beginning of the invasion, it is clear that the Persians held most advantages. Regardless of its actual size, it is clear that the Persians had brought an overwhelming number of troops and ships to Greece. The Persians had a unified command system, and everyone was answerable to the king. They had a hugely efficient bureaucracy, which allowed them to undertake remarkable feats of planning. The Persian generals had significant experience of warfare over the 80 years in which the Persian empire had been established. Furthermore, the Persians excelled in the use of intelligence and diplomacy in warfare, as shown by their (nearly successful) attempts to divide-and-conquer the Greeks. The Greeks, by comparison, were fragmented, with only 30 or so city-states actively opposing the Persian invasion; even those were prone to quarrel with each other. They had little experience of large-scale warfare, being largely restricted to small-scale local warfare, and their commanders were chosen primarily on the basis of the political and social standing, rather than because of any experience or expertise. As Lazenby therefore asks: \"\"So why did the Persians fail?\"\"\n"
] |
When and how did pop culture associate all things Nuclear with the color green?
|
Radioactive materials are often portrayed as green because back in the mid 20th century there was a trend of making glowing watch faces with radioactive paint: radium mixed with a phosphor, a material that glows when struck by radiation. As you say, radioactivity itself is impossible to see with the naked eye, and when radiation does produce visible light, as Cherenkov radiation, it's blue. It is the phosphor, excited by the radioactive decay, that makes the green glow. As time went on, the glow of these dials became associated with radiation itself.
These sorts of watches were pretty popular and had to be hand painted. Sadly, the women who painted them suffered horribly from cancers caused by working with the radioactive paint.
Source: _URL_0_
|
[
"\"Green\" was released on November 7, 1988, in the United Kingdom, and the following day in the United States. R.E.M. chose the American release date to coincide with the 1988 presidential election, and used its increased profile during the period to criticize Republican candidate George H. W. Bush while praising Democratic candidate Michael Dukakis. With warm critical reaction and the conversion of many new fans, \"Green\" ultimately went double-platinum in the US, reaching number 12, and peaked at number 27 in the UK. \"Orange Crush\" became R.E.M.'s first American number one single. It was the band's first gold album in the UK, making it the quartet's European breakthrough. \"What I love about it is the immensely unlikely lyrics,\" remarked Neil Hannon, frontman of The Divine Comedy, \"and, in the mandolin on 'You Are The Everything' and 'The Wrong Child', it's got a bit of what comes later but in a much purer way. It's so small and intense, it's amazing.\"\n",
"In the 1980s green became a political symbol, the color of the Green Party in Germany and in many other European countries. It symbolized the environmental movement, and also a new politics of the left which rejected traditional socialism and communism. (See section below.)\n",
"The first Green Party to achieve national prominence was the German Green Party, famous for their opposition to nuclear power, as well as an expression of anti-centralist and pacifist values traditional to greens. They were founded in 1980 and have been in coalition governments at state level for some years. They were in federal government with the Social Democratic Party of Germany in a so-called Red-Green alliance from 1998 to 2005. In 2001, they reached an agreement to end reliance on nuclear power in Germany, and agreed to remain in coalition and support the government of Chancellor Gerhard Schröder in the 2001 Afghan War. This put them at odds with many Greens worldwide.\n",
"In the 1980s the green parties that were created a decade before began to have some political success.. In 1986, there was a nuclear accident in Chernobyl, Ukraine. The end of the 1980s and start of the 1990s saw the fall of communism across central and Eastern Europe, the fall of the [Berlin Wall], and the Union of East and West Germany. In 1992 there was a UN summit held in Rio de Janeiro where Agenda 21 was adopted. The Kyoto Protocol was created in 1997 which set specific targets and deadlines to reduce global greenhouse gas emissions. In the early 2000s activists believed that environmental policy concerns were overshadowed by energy security, globalism, and terrorism.\n",
" 1985 was a time of political change in the UK. After the formation of the Social Democratic Party (SDP), there were noises being made that the UK needed a \"green\" party. In response to the rumours that a group of Liberal Party activists were about to launch a UK 'Green Party', HELP (the Hackney Local Ecology Party) registered the name \"The Green Party,\" with a green circle, designed by Steve O’Brien, as its logo. The first public meeting, chaired by David Fitzpatrick (then an Ecology Party speaker), was 13 June 1985 in Hackney Town Hall. Paul Ekins (then co-chair of the Ecology Party) spoke on the subject of Green politics and the inner city. Hackney Green Party put a formal proposal to the Ecology Party Autumn Conference in Dover that year to change to the Green Party, which was supported by the majority of attendees, including John Abineri, formerly an actor in the BBC series \"Survivors\" who supported adding \"Green\" to the name to fall in line with other environmental parties in Europe.\n",
"The Brilliant Green take much of their influence from Western music, most predominantly The Beatles, with over half their songs including English lyrics. Their break came in 1998 when their third single, \"There Will Be Love There,\" was chosen as the theme song for the popular Japanese Drama \"Love Again\" and, as a result, went straight to the top of the charts. After another number one hit with \"Tsumetai Hana\" they released their self-titled debut album which sold over one million copies in just two days. On the back of this success their first national tour, cleverly titled \"There Will Be Live There,\" sold out across Japan in only three minutes.\n",
"Green also provided significant input to the National Defense Education Act of 1958, intended to keep the United States ahead of the Soviet Union during the space race after the launch of \"Sputnik 1\".\n"
] |
why do you need to pay to get a divorce and pay to get married?
|
Why does it cost money to get divorced?
Because it's worth it!
|
[
"In other jurisdictions such as United Kingdom and Singapore, divorce is granted on the basis of an irretrievable breakdown of marriage. Under current divorce law in England and Wales, a person has to prove in court that the marriage has broken down; there are five reasons for which a marriage can be considered to have broken down: adultery, unreasonable behaviour, desertion after two years, two years' separation with consent or five years' separation without consent.\n",
"After divorce, one spouse may have to pay alimony. Laws concerning divorce and the ease with which a divorce can be obtained vary widely around the world. After a divorce or an annulment, the people concerned are free to remarry (or marry).\n",
"Divorce is not a taboo in this culture, and divorced women are not ostracised from society. However, if the woman comes back to the parents' home after a divorce, the family must pay back the bride price to the man's family. If the woman divorces her husband to marry another man, the second man must pay bride price to the first man's family..But over the years this practice is followed by a few masses.\n",
"A divorce in England and Wales is only possible for marriages of more than one year and when the marriage has irretrievably broken down. Whilst it is possible to defend a divorce, the vast majority proceed on an undefended basis. A decree of divorce is initially granted 'nisi', i.e. (unless cause is later shown), before it is made 'absolute'.\n",
"Divorce procedures differ by gender, with divorces being more freely granted to men. A man can divorce his wife by saying \"you are divorced\" three times. The proceeding is then formalized within 30 days by registering the divorce with a notary. Women are then entitled to financial maintenance for up to two years. Some women, when negotiating with their husbands for divorce, are willing to forfeit the financial assistance in exchange for him initiating the divorce. Women sometimes choose this option because of the legal red tape that is involved in wife-initiated divorce.\n",
"For couples to Conservative or Orthodox Jewish law (which by Israeli civil law includes all Jews in Israel), the husband must grant his wife a divorce through a document called a \"get\". Granting the 'get' obligates him to pay the woman a significant sum of money(10,000-20,000$) as stated on the religious prenuptial contract, which can be in addition to whatever prior settlement he had reached as far as continuous child support and funds he had to pay by court order in the civil divorce. If the man refuses, (and agreeing on condition he won't have to pay the money is still called refusing), the woman can appeal to a court or the community to pressure the husband. A woman whose husband refuses to grant the get or a woman whose husband is missing without sufficient knowledge that he died, is called an agunah, is still married, and therefore cannot remarry. Under Orthodox law, children of an extramarital affair involving a married Jewish woman are considered \"mamzerim\" (illegitimate) and cannot marry non-\"mamzerim\".\n",
"In some other countries, when the spouses agree to divorce and to the terms of the divorce, it can be certified by a non-judiciary administrative entity. The effect of a divorce is that both parties are free to marry again if a filing in an appellate court does not overturn the decision.\n"
] |
How common was the surname 'Hitler' in Germany/Austria prior to the 1930s? Did people later drop it because of its connotations?
|
This is some anecdotal evidence.
The name "Adolf" was a relatively common one in Belgium prior to World War II. One of the most famous 19th century Belgian politicians was [Adolf Daens](_URL_1_). After World War 2, the name died out. Parents stopped naming their babies Adolf, but I don't know if existing "Adolfs" changed their name. Nowadays the frst name is non-existant here.
Sidenote: post-WW2, a lot of Belgian names were based on American/English names such as Danny, Willy, Ronny, Michael, Daisy, and Betty, but these are uncommon for newborns these days.
Regarding the name "Hitler". His father, Alois Hitler, was actually born Alois Schicklgruber. He started (officially) using his stepfather's name, Hiedler, in the 1870's. That name got registered as Hitler for unknown reasons. So, Hitler's father was actually the first person to take on the name of Hitler. From there, it's relatively easy to track Hitler's (male) relatives after World War II. [William Patrick Hitler](_URL_0_). He was a British nephew of Hitler and joined the US Navy in 1941. He later changed his name to Stuart-Houston.
Hitler's half-nephew, Heinz Hitler, died in World War II leaving no children behind.
I couldn't find anything else on other male relatives of Hitler, so I think the name naturally went extinct after World War II, with the exception of William Patrick Hitler.
I don't know about the name Schicklgruber however.
|
[
"\"Schicklgruber\" is the surname Adolf Hitler's father, Alois Hitler carried for the first 40 years of his life, until he took the name Hitler (Hiedler) from his stepfather. While Adolf Hitler himself never carried the surname, the British made use of it for propaganda purposes since even to Germans, the name is laughable. The Stooges used it numerous times as the only name by which they would refer to Hitler.\n",
"Before the birth of Adolf Hitler the family surname had many variations that were often used almost interchangeably. Some of the common variances were Hitler, Hiedler, Hüttler, Hytler, and Hittler. Alois Schicklgruber (Adolf's father) changed his name on 7 January 1877 to \"Hitler\", which was the only form of the last name that Adolf used.\n",
"German surnames are also quite common in the Czech Republic; the country was part of the Austrian Empire before 1918 and had a large German population until World War II. Some of them got phonetically normalized and transcribed to Czech (\"Müller\" (miller) as well as \"Miler\"; \"Stein\" (Stone) as well as \"Štajn\", \"Schmied\" (Smith) as well as \"Šmíd\" (or \"Šmýd\"), Fritsch (Frič), Schlessinger (Šlesingr), etc. Some of them retain their original German surnames e. g. : Gottwald, Feiersinger, Dienstbier, Berger, Koller, Klaus, Franz, Forman, Ebermann, Lendl, Ulihrach, Gebauer, Kaberle, Vogelstanz, etc.\n",
"German names were regularly Anglicized with immigration. Surnames were often translated, so in this case, Zimmerman would become Carpenter. Later generations also altered their original family names frequently after being in the United States many years.\n",
"German names of immigrants were also anglicised (such as Bürger to Burger, Schneider to Snyder) in the course of German immigration waves during times of political and economic instability in the late 19th and early 20th century. A somewhat different case was the politically motivated change of dynasty name in 1917 by the royal family of the United Kingdom from the House of Saxe-Coburg and Gotha to the House of Windsor. Incidentally, Saxe-Coburg was already an anglicisation of the German original .\n",
"In 1938, the Nazi government (1933–1945) changed thousands of toponyms (especially names of cities and villages) of Old Prussian and Polish origin to newly created German names; about 50% of the existing names were changed in 1938 alone, despite resistance by the Prussian people, who continued to use their traditional place names.\n",
"After his father's assassination attempt against Adolf Hitler failed on 20 July 1944, Stauffenberg was sent to a foster home in Bad Sachsa and given the new surname of \"Meister\", as the Nazis viewed the name of Stauffenberg unacceptable, due to the prominence of that name in the assassination attempt. Franz-Ludwig's mother, two older brothers, and younger sister Valerie, as well as other relatives, were arrested under Nazi \"Sippenhaft\" (blood guilt) laws. He was educated at the Schule Schloss Salem and then qualified as a lawyer after passing his \"staatsexamen\".\n"
] |
how can we drive for minutes, maybe even hours, without really paying attention to the road? i have daydreamed and come back and not known what has happened the past 15 minutes.
|
Not remembering what happened for the last 15 minutes doesn't mean you weren't reasonably alert during that period; it just means that you weren't forming long-term memories of what happened. It is possible for that part of your brain to take a break because whatever is happening is so irrelevant that there is no need to store it beyond the short term. The result is that you have no memory of that period, but if something unexpected happened you would still be able to react to it.
|
[
"BULLET::::- Driving time is between 8:00 and 17:00 (from 8 a.m. to 5 p.m.). In order to select a suitable place for the overnight stop (alongside the highway) it is possible to extend the driving period for a maximum of 10 minutes, which extra driving time will be compensated by a starting time delay the next day.\n",
"This story concerns a lone taxi driver making his way along a road at night. Legend goes that a person will suddenly appear from the darkness and hail the taxi. The person will sit in the back of the car and will ask to be taken to a place the driver has never heard of. When the driver mentions this, he is assured that he will be given directions. The passenger then feeds the driver increasingly complex directions which leads them down streets and alleys, through many towns and even in some instances all the way from the city to the countryside. After traveling this distance and still seeming no closer to any destination, the driver becomes uneasy. He turns around to the back seat to ask the passenger exactly where they are – but he is shocked to find that the passenger has vanished. The taxi driver turns back to the steering wheel, only to drive off the edge of a cliff and die.\n",
"Upon getting out of the vehicle, the driver and passenger(s) often will experience a blank period and amnesia (see Missing Time), after which they will find themselves again standing in front of, or driving their car. While they frequently will not consciously remember the experience, either subsequent nightmares or hypnosis will reveal events interpreted as having occurred during the period lacking explicit memory.\n",
"In order to meet the real world needs of older adults, the ITN model must maintain flexibility. Most drivers, for example, do not give up driving all at once. Often times night driving is the first place where people notice difficulty, so they arrange their schedule to drive only during daylight hours. A driver might be perfectly safe at noontime but have trouble seeing after sunset. The ITN model is built to accommodate such transitions, even if they unfold over years. The driver who sees well in daytime but struggles in low light might volunteer as a driver during the day and be an ITN rider in the evening. Alternatively, an ITN volunteer might use the service for have a medical appointment that necessitates a ride home after. The ITN model allow people to be earning their ride miles even as they use the service, being both part of the transportation solution and a beneficiary of it.\n",
"“You ever been lost?” he asked. “Like when you make a turn and you know you’re not on the right road, but you keep driving anyway? You keep driving because you’re thinking that it’s going to turn out to be the right road...That’s where I am. I’m on that road.”\n",
"\"Drive Home\" is based on a suggestion from illustrator Hajo Mueller. It is \"about a couple driving along in a car at night, very much in love; the guy is driving, and his partner – his wife or girlfriend or whoever she is – is in the passenger seat, and the next minute she’s gone.\" The ghost of the man's partner eventually returns, \"saying, ‘I’m going to remind you now what happened that night.’ There was a terrible car accident, and she died, etcetera, etcetera – again, the idea of trauma leading to a missing part of this guy’s life. He can’t deal with the reality of what happened, so he blocks it out – like taking a piece of tape and editing a big chunk out of it.\"\n",
"Whereas most driving is done below , \"maintaining a blanket 5 or 6 seconds of travel time to the edge of visibility\" (), \"will keep drivers in compliance with the ACDA rule in most simple highway driving conditions – day or night\" – with growing error towards safety at lower speeds.\n"
] |
Would rain droplets on a lower gravity planet be larger, on average, compared to our own planet?
|
I'm by no means an expert, but my understanding as a mechanical engineer is that the shape and size of rain droplets depend on 3 sets of parameters:
1. the acceleration due to gravity (9.81 m/s^2 on Earth)
2. the fluid dynamic properties of the atmosphere (namely the density and viscosity of the air)
3. the fluid dynamic properties of the rain water (again density and viscosity, plus other effects such as surface tension)
The size and shape of rain drops represents an equilibrium in the interaction between all of these parameters, so changing any of them will alter the result.
Now someone with more fluids experience than me can explain what would happen to the drops if they have less acceleration.
[edited for formatting]
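One way to make the gravity dependence concrete is the capillary length, the scale at which surface tension and gravity balance, which is often used as a rough estimate of the largest stable drop size. This is only a sketch of that scaling, assuming water-like surface tension and density and ignoring the atmospheric effects in points 2 and 3:

```python
import math

SIGMA = 0.072   # N/m, surface tension of water at roughly room temperature
RHO = 1000.0    # kg/m^3, density of liquid water

def capillary_length_mm(g: float) -> float:
    """Capillary length l_c = sqrt(sigma / (rho * g)), returned in millimetres."""
    return math.sqrt(SIGMA / (RHO * g)) * 1000.0

for label, g in [("Earth", 9.81), ("half-g planet", 4.905), ("Moon", 1.62)]:
    print(f"{label}: g = {g:5.2f} m/s^2 -> l_c ~ {capillary_length_mm(g):.1f} mm")
```

On this scaling alone the characteristic drop size grows as 1/sqrt(g), so halving gravity would allow drops roughly 40% larger, all other parameters being equal; the atmospheric drag and drop break-up effects mentioned in points 2 and 3 would of course also differ on another planet.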
|
[
"That latter statement necessitates, as Clement states in the story, that the surface pressure on Tenebra be 800 atmospheres, not 218. At 800 atmospheres of pressure the surface atmosphere, with its load of dissolved oxygen and sulphur oxides, is compressed to a density a little less than the density of liquid water (1182 kilograms per cubic meter). Only when the density of the planet's air is a little less than the density of liquid water will the raindrops descend slowly; otherwise, they would come down like meteors and make life on Tenebra impossible.\n",
"Raindrops have sizes ranging from mean diameter but develop a tendency to break up at larger sizes. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Large rain drops become increasingly flattened on the bottom, like hamburger buns; very large ones are shaped like parachutes. Contrary to popular belief, their shape does not resemble a teardrop. The biggest raindrops on Earth were recorded over Brazil and the Marshall Islands in 2004 — some of them were as large as . The large size is explained by condensation on large smoke particles or by collisions between drops in small regions with particularly high content of liquid water.\n",
"This is the smaller world, with 0.76 Earth masses at 94% of its diameter, and 85% of its surface gravity. The mean surface temperature is only +5 °C. Because most of this chilly planet's water is locked in permafrost and glaciers, the atmosphere (which has a sea-level pressure of 0.7 bar, equivalent to about 3,650 m altitude on Earth) is not only thin but also uncomfortably dry. From the hemisphere where the companion world Genji can be seen it hangs in the sky as a globe of 6° 20' diameter, and when full gives over 320 times the light of the full Moon as seen from Earth.\n",
"The typical atmospheric pressure at the top of Olympus Mons is 72 pascals, about 12% of the average Martian surface pressure of 600 pascals. Both are exceedingly low by terrestrial standards; by comparison, the atmospheric pressure at the summit of Mount Everest is 32,000 pascals, or about 32% of Earth's sea level pressure. Even so, high-altitude orographic clouds frequently drift over the Olympus Mons summit, and airborne Martian dust is still present. Although the average Martian surface atmospheric pressure is less than one percent of Earth's, the much lower gravity of Mars increases the atmosphere's scale height; in other words, Mars's atmosphere is expansive and does not drop off in density with height as sharply as Earth's.\n",
"Scientists traditionally thought that the variation in the size of raindrops was due to collisions on the way down to the ground. In 2009 French researchers succeeded in showing that the distribution of sizes is due to the drops' interaction with air, which deforms larger drops and causes them to fragment into smaller drops, effectively limiting the largest raindrops to about 6 mm diameter. However, drops up to 10 mm (equivalent in volume to a sphere of radius 4.5 mm) are theoretically stable and could be levitated in a wind tunnel.\n",
"The atmospheric pressure on Mars is about 150 times less than that of Earth. In such a thin atmosphere, a balloon with a volume of 5,000 to 10,000 cubic meters (178,500 to 357,000 cubic feet) could carry a payload of 20 kilograms (44 pounds), while a balloon with a volume of 100,000 cubic meters (3,600,000 cubic feet) could carry 200 kilograms (440 pounds).\n",
"The fact that many large celestial objects are approximately spheres makes it easier to calculate their surface gravity. The gravitational force outside a spherically symmetric body is the same as if its entire mass were concentrated in the center, as was established by Sir Isaac Newton. Therefore, the surface gravity of a planet or star with a given mass will be approximately inversely proportional to the square of its radius, and the surface gravity of a planet or star with a given average density will be approximately proportional to its radius. For example, the recently discovered planet, Gliese 581 c, has at least 5 times the mass of Earth, but is unlikely to have 5 times its surface gravity. If its mass is no more than 5 times that of the Earth, as is expected, and if it is a rocky planet with a large iron core, it should have a radius approximately 50% larger than that of Earth. Gravity on such a planet's surface would be approximately 2.2 times as strong as on Earth. If it is an icy or watery planet, its radius might be as large as twice the Earth's, in which case its surface gravity might be no more than 1.25 times as strong as the Earth's.\n"
] |
Since DNA is an acid, is there such thing as DNA salts?
|
Not only can DNA exist as a salt, but this is pretty much its standard form in the solid state. In an aqueous medium, DNA exists as a conjugate base with negatively charged phosphate groups stabilized by [counterions](_URL_1_) floating around in the solution, such as Na^(+). When you precipitate the DNA (e.g. as is most commonly done through the [addition of ethanol](_URL_0_)), these cations bind to the phosphate groups and the DNA precipitates out as a salt.
|
[
"A further explanation of how DNA binds to silica is based on the action of guanidinium HCl (GuHCl), which acts as a chaotrope. A chaotrope denatures biomolecules by disrupting the shell of hydration around them. This allows positively charged ions to form a salt bridge between the negatively charged silica and the negatively charged DNA backbone in high salt concentration. The DNA can then be washed with high salt and ethanol, and ultimately eluted with low salt.\n",
"Deoxyribonucleic acid (; DNA) is a molecule composed of two chains that coil around each other to form a double helix carrying genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.\n",
"Working in the 19th century, biochemists initially isolated DNA and RNA (mixed together) from cell nuclei. They were relatively quick to appreciate the polymeric nature of their \"nucleic acid\" isolates, but realized only later that nucleotides were of two types—one containing ribose and the other deoxyribose. It was this subsequent discovery that led to the identification and naming of DNA as a substance distinct from RNA.\n",
"Even prior to the nucleic acid methods employed today it was known that in the presence of chaotropic agents, such as sodium iodide or sodium perchlorate, DNA binds to silica, glass particles or to unicellular algae called diatoms which shield their cell walls with silica. This property was used to purify nucleic acid using glass powder or silica beads under alkaline conditions. This was later improved using guanidinium thiocyanate or guanidinium hydrochloride as the chaotropic agent. The use of glass beads was later changed to silica gel.\n",
"\"Desoxyribonucleic acid\" and \"desoxyribonucleate\" are archaic terms for DNA, deoxyribonucleic acid, and its salts, respectively. The terms are used in this sense in various classic papers in genetics, such as Avery, MacLeod, and McCarty (1944).\n",
"Both RNA and DNA contain two major purine bases, adenine (A) and guanine (G), and two major pyrimidines. In both DNA and RNA, one of the pyrimdines is cytosine (C). However, DNA and RNA differ in the second major pyrimidine. DNA contains thymine (T) while RNA contains uracil (U). There are some rare cases where thymine does occur in RNA and uracil in DNA.\n",
"When first studied in the early 1900s, the chemical and biological differences between RNA and DNA were not apparent, and they were named after the materials from which they were isolated; RNA was initially known as \"yeast nucleic acid\" and DNA was \"thymus nucleic acid\". Using diagnostic chemical tests, carbohydrate chemists showed that the two nucleic acids contained different sugars, whereupon the common name for RNA became \"ribose nucleic acid\". Other early biochemical studies showed that RNA was readily broken down at high pH, while DNA was stable (although denatured) in alkali. Nucleoside composition analysis showed first that RNA contained similar nucleobases to DNA, with uracil instead of thymine, and that RNA contained a number of minor nucleobase components, e.g. small amounts of pseudouridine and dimethylguanine.\n"
] |
why is it impossible to stop thinking?
|
Some kinds of meditation are essentially the practice of silencing your thoughts. Through willpower and training you can quiet the internal monologue. Of course your brain is still functioning and processing information; it does that even when you're asleep.
|
[
"Thought stopping is a cognitive intervention technique prescribed by therapists (psychologists and psychiatrists) with the goal of interrupting, removing, and replacing problematic recurring thoughts. It is considered a core cognitive intervention method that is distinct for the absence of analysis in the treatment of negative thoughts. It is often employed as a standalone or auxiliary treatment to address depression, panic, anxiety, and addiction, among other afflictions that involve obsessive thought.\n",
"Research has also shown that doing difficult counting tasks at the same time as a think/no think task leads to less forgetting in the no think condition, which suggests that suppression takes active mental energy to be successful. Furthermore, the most forgetting during the no think phase occurs when there is a medium amount of brain activation while learning the words. The words are never learned if there is too little activation, and the association between the two words is too strong to be suppressed during the no think phase if there is too much activation. However, with medium activation, the word pairs are learned but able to be suppressed during the no think phase.\n",
"Stopping thought is a term in Zen referring to the achievement of the mental state of samādhi, where the normal mental chatter slows and then stops for brief or longer periods, allowing the practitioner to experience the peace of liberation. This is normally first done during zazen meditation, but should ideally be mastered, so that it can be done regularly.\n",
"Thought suppression is a method in which people protect themselves by blocking the recall of these anxiety-arousing memories. For example, if something reminds a person of an unpleasant event, his or her mind may steer towards unrelated topics. This could induce forgetting without being generated by an intention to forget, making it a motivated action. There are two main classes of motivated forgetting: \"psychological repression\" is an unconscious act, while \"thought suppression\" is a conscious form of excluding thoughts and memories from awareness.\n",
"\"What psychologists and brain scientists tell us about interruptions is that they have a fairly profound effect on the way we think. It becomes much harder to sustain attention, to think about one thing for a long period of time, and to think deeply when new stimuli are pouring at you all day long. I argue that the price we pay for being constantly inundated with information is a loss of our ability to be contemplative and to engage in the kind of deep thinking that requires you to concentrate on one thing.\"\n",
"Thought blocking, also referred to as \"thought withdrawal\", refers to an abrupt stop in the middle of a train of thought; the individual might or might not be unable to continue the idea. This is type of formal thought disorder that can be seen in schizophrenia.\n",
"In order to suppress a thought, one must (a) plan to suppress the thought and (b) carry out that plan by suppressing all other manifestations of the thought, including the original plan. Thought suppression seems to entail a state of knowing and not knowing all at once. It can be assumed that thought suppression is a difficult and even time consuming task. Even when thoughts are suppressed, they can return to consciousness with minimal prompting. This is why suppression has also been associated with obsessive-compulsive disorder.\n"
] |
During the American Civil War, why weren't bayonet charges used often?
|
It appears to come down to the inexperience of these armies. At the beginning of the war, the armies grew to many times their normal peacetime strength, so you had inexperienced leaders at every level and large numbers of raw recruits. Frank Vizetelly, an English war correspondent, made note of this; his observations are mentioned in a National Geographic article from April 1961 (the 100th-anniversary issue).
I seem to recall that as the war went on, bayonet charges were used more often, at Spotsylvania, the Crater, Ft. Pillow, etc. These were all in 1864.
|
[
"During the American Civil War (1861–1865) the bayonet was found to be responsible for less than 1% of battlefield casualties, a hallmark of modern warfare. The use of bayonet charges to force the enemy to retreat was very successful in numerous small unit engagements at short range in the American Civil War, as most troops would retreat when charged while reloading (which could take up to a minute with loose powder even for trained troops). Although such charges inflicted few casualties, they often decided short engagements, and tactical possession of important defensive ground features. Additionally, bayonet drill could be used to rally men temporarily discomfited by enemy fire.\n",
"During 1968 revisions to the United States Army Field Manuals there was a move by the United States Secretary of the Army to eliminate the description of the bayonet as a crowd control weapon; however, senior Army leadership resisted the change. A study conducted that year by the Human Resources Research Organization concluded that the bayonet \"is highly valuable as a riot control weapon\" with a survey of personnel involved in military peacekeeping operations reporting its most valuable attribute was its psychological effect on a crowd. A compromise was ultimately reached whereby use of the bayonet was permitted in cases of violent mobs but not in routine civil operations. Two years later, in the protests that led to the Kent State shootings of 1970, two persons were injured after being bayoneted by soldiers of the Ohio National Guard.\n",
"The development of the bayonet in the late 17th century led to the bayonet charge becoming the main infantry tactic through the 19th century and into the 20th. As early as the 19th century, military scholars were already noting that most bayonet charges did not result in close combat. Instead, one side usually fled before actual bayonet fighting ensued. The act of fixing bayonets has been held to be primarily connected to morale, the making of a clear signal to friend and foe of a willingness to kill at close quarters.\n",
"The development of the bayonet in the late 17th century led to the bayonet charge becoming the main infantry charge tactic through the 19th century and into the 20th. As early as the 19th century, tactical scholars were already noting that most bayonet charges did not result in close combat. Instead, one side usually fled before actual bayonet fighting ensued. The act of fixing bayonets has been held to be primarily connected to morale, the making of a clear signal to friend and foe of a willingness to kill at close quarters.\n",
"The New York Draft Riots of 1863 saw the use of bayonet charges by the U.S. Army against unruly mobs in New York City. During lumber protests in Tacoma, Washington in 1935, the Washington National Guard advanced on picketers with fixed bayonets, causing them to move away from the Federal Building where they had gathered.\n",
"Bayonets also became of wide usage to infantry soldiers. Bayonet is named after Bayonne, France where it was first manufactured in the 16th century. It is used often in infantry charges to fight in hand-to-hand combat. General Jean Martinet introduced the bayonet to the French army. They were used heavily in the American Civil War, and continued to be used in modern wars like the Invasion of Iraq.\n",
"The British Army made extensive use of the bayonet in crowd control operations in British India. In the 19th century, in Ireland, police used the bayonet charge as a method of forcing crowds to scatter; in July 1881 one person was killed by police bayonet in this manner. The British Army continued use of the bayonet as a crowd control weapon into the 20th century, using it during operations during the Hong Kong 1956 riots. The Queen's Guard still use fixed bayonets while on guard and use them as a deterrent when challenged.\n"
] |
caitlyn jenner in vanity fair and the big deal of it?
|
Caitlyn Jenner was formerly Bruce Jenner, an internationally renowned Olympic athlete who was the face of masculinity and athleticism in the 1970s. She recently came out as transgender. She is very likely the biggest celebrity in history to come out as trans.
|
[
"Bissinger's July 2015 Vanity Fair cover story \"Call Me Caitlyn,\" on the transition of former Olympic decathlete, businessperson, and television personality Bruce Jenner to Caitlyn Jenner star of E!'s \"Keeping Up With the Kardashians\" and \"I Am Cait\", with photographs by Annie Leibovitz, was one of the biggest international scoops in years. Bissinger had exclusive access to Jenner both immediately before and after her cosmetic surgery. The 11,000-word article was months in the making and kept heavily under wraps until it was released on the magazine's website on June 1.\n",
"In 2015, McGowan criticized Caitlyn Jenner for stating that \"the hardest part about being a woman is figuring out what to wear\", after Jenner had been named \"Woman of the Year\" by \"Glamour\". McGowan stated, \"We are more than deciding what to wear. We are more than the stereotypes foisted upon us by people like you. You're a woman now? Well f**cking learn that we have had a VERY different experience than your life of male privilege.\" In response to accusations of transphobia, McGowan stated, \"Let me take this moment to point out that I am not, nor will I ever be, transphobic. The idea is laughable. Disliking something a trans person has said is no different than disliking something a man has said or that a woman has said. Being trans doesn't make one immune from criticism.\"\n",
"BULLET::::- Valerie Mahaffey as Caitlyn Van Horne, Bill and Margaret's daughter. Caitlyn is very unhappy in her marriage and begins an affair with Joe Bowman, her father's election opponent. Not very bright, and very shallow, Caitlyn is nevertheless quite a sweet-natured woman.\n",
"BULLET::::- London Tipton, portrayed by Brenda Song, is the socialite of the four main teenage characters. She is a parody of Paris Hilton. She is the daughter of Wilfred Tipton, a multi-billionaire and the owner of the Tipton Hotel chains, including the one in Boston and the SS \"Tipton\". London is typically selfish, dim-witted, spoiled, gullible and meticulous about her appearance but she is happy and heartwarming, and she does care for her friends (even if she can't remember the difference between Zack and Cody). She also has a pomeranian pet dog named Ivana. When she is happy, she usually claps her hands and repeatedly jumps up and down while saying her catchphrase, \"Yay me!\" Mr. Moseby fools London into thinking that she is on the SS \"Tipton\" for a vacation, to trick her into boarding the ship. London gets enrolled at the Seven Seas High School program because her father wants her to live in the real world. London does not live in a first class suite on the SS \"Tipton\", but a small cabin. London reluctantly agrees to accept Bailey Pickett (Debby Ryan) as her new roommate, although she does so after unsuccessfully trying to bribe Bailey into leaving as she did with her previous roommate. The two later become good friends. In \"The Suite Life of Zack & Cody\", when London resided at the Tipton Hotel, it seemed that she was living in her own bubble. On the SS \"Tipton\", London became more aware of the real world and faced several difficulties to help her when she takes over her father's business.\n",
"She is known for her campaigns for Guess?, but her highest accolade was her selection as the \"Sylvia\" character in Peroni's Nastro Azzurro beer commercial that pays homage to Federico Fellini's \"La Dolce Vita\". The role was originally played by Anita Ekberg. She was discovered as a model when she was fourteen while shopping with her mother.\n",
"I Am Cait is an American television documentary series which chronicles the life of Caitlyn Jenner after her gender transition. The eight-part one-hour documentary series debuted on July 26, 2015, on the E! network. The series focuses on the \"new normal\" for Jenner, exploring changes to her relationships with her family and friends. The show additionally explores how Jenner adjusts to what she sees as her job as a role model for the transgender community.\n",
"When unconfirmed rumors began circulating that Caitlyn Jenner was transitioning from male to female, \"In Touch Weekly\" released a cover with a male-presenting Caitlyn (then known as Bruce) in photoshopped makeup. The cover was widely criticized, and Hillz released a statement saying \"I think it's so wrong in so many ways for them to poke fun and do such things to someone... They know nothing about just to make money and make a mockery of trans people everywhere. They need to educate instead of just doing nonsense like that.\"\n"
] |
why is the credit card considered so secure if all the information required to make a purchase is on the card?
|
I work at Saks, and we see significant credit card fraud at our store. For this reason we don't have a customer-facing card swipe or PIN pad: you have to hand your card to the cashier, and we frequently ask for ID as well. Banks and credit card companies are getting more vigilant too; we often have cards declined because people are traveling and our store is located in a high-end outlet mall. The bank will see charges from the Gucci outlet, the Armani store, and Burberry, and freeze the card because it's outside your normal spending habits. Even so, we lose money to fraud, although I don't know exactly how much.
|
[
"Credit card security relies on the physical security of the plastic card as well as the privacy of the credit card number. Therefore, whenever a person other than the card owner has access to the card or its number, security is potentially compromised. Once, merchants would often accept credit card numbers without additional verification for mail order purchases. It's now common practice to only ship to confirmed addresses as a security measure to minimise fraudulent purchases. Some merchants will accept a credit card number for in-store purchases, whereupon access to the number allows easy fraud, but many require the card itself to be present, and require a signature (for magnetic stripe cards). A lost or stolen card can be cancelled, and if this is done quickly, will greatly limit the fraud that can take place in this way. European banks can require a cardholder's security PIN be entered for in-person purchases with the card.\n",
"For merchants, a credit card transaction is often more secure than other forms of payment, such as cheques, because the issuing bank commits to pay the merchant the moment the transaction is authorized, regardless of whether the consumer defaults on the credit card payment (except for legitimate disputes, which are discussed below, and can result in charges back to the merchant). In most cases, cards are even more secure than cash, because they discourage theft by the merchant's employees and reduce the amount of cash on the premises. Finally, credit cards reduce the back office expense of processing checks/cash and transporting them to the bank.\n",
"Some suppliers claim that transactions can be almost twice as fast as a conventional cash, credit, or debit card purchase. Because no signature or PIN verification is typically required, contactless purchases are typically limited to small value sales. Lack of authentication provides a window during which fraudulent purchases can be made while the card owner is unaware of the card's loss.\n",
"On the other hand, the use of a credit card, whose main purpose is similar to money, allows for the creation of highly detailed records about the card owner. Credit cards are therefore not privacy protecting. The main privacy advantage of money is that its users can remain anonymous. There are however other security and usability properties that make real world cash popular.\n",
"The issues raised in a 2006 report were of importance due to the tens of millions of cards that have already been issued. Credit and debit card data could be stolen via special low cost radio scanners without the cards being physically touched or removed from their owner’s pocket, purse or carry bag. Among the findings of the 2006 research study \"Vulnerabilities in First-Generation RFID-Enabled Credit Cards\", and in reports by other white-hat hackers:\n",
"Unlike traditional credit card transactions, many alternative payments often provide additional security features that protect the merchant from fraud and returned transactions, because the funds availability is verified and payment is made directly from a bank account. The banks guarantee the funds and because there are no chargebacks, merchants are often not required to provide collateral or keep a reserve. Furthermore, accounts are validated in real-time and fraud modules scrub transactions, similar to the approval process with credit cards.\n",
"The mail and the Internet are major routes for fraud against merchants who sell and ship products and affect legitimate mail-order and Internet merchants. If the card is not physically present (called CNP, card not present) the merchant must rely on the holder (or someone purporting to be so) presenting the information indirectly, whether by mail, telephone or over the Internet. The credit card holder can be tracked by mail or phone. While there are safeguards to this, it is still more risky than presenting in person, and indeed card issuers tend to charge a greater transaction rate for CNP, because of the greater risk.\n"
] |
Is air indoors more polluted than outdoors?
|
It really depends on the situation, but the EPA suggests that particle levels indoors are the same as or lower than outdoors in houses without smoking ([source](_URL_1_)). The Texas Commission on Environmental Quality (found via Google, see [source](_URL_0_)) suggests that indoor levels can be several times higher. Given what I've read following links from these two sites, I think it mostly comes down to factors such as burning things (obviously), heating systems, and proper ventilation.
I'll add that in places with extreme outdoor pollution, such as heavy smog, indoor air is almost certainly less polluted, especially with simple filtration systems.
|
[
"Scientific evidence has indicated that indoor air pollution can be worse than outdoor pollutants in large and industrialized cities. Many products and chemicals used inside the home, for cooking and heating, and for appliances and home décor are primary sources of indoor air pollutants. Everything we use in the home contributes to the pollution, and can possibly degrade the environment. Air pollution is responsible for 7 million premature deaths around the world each year. When pollutants enter the body through our respiratory system, they can be absorbed in the blood and travel throughout the body, and can directly damage the heart and other vital organs.\n",
"Although, from a global perspective, harmful indoor air pollution is caused by cooking and heating with solid fuels on open fires or traditional stoves, especially in poorly ventilated rooms, indoor air pollutants may also come from heating and cooling equipment, electronic appliances, cleaning products, air fresheners, insecticides, and construction materials.\n",
"Dilution of indoor pollutants with outdoor air is effective to the extent that outdoor air is free of harmful pollutants. Ozone in outdoor air occurs indoors at reduced concentrations because ozone is highly reactive with many chemicals found indoors. The products of the reactions between ozone and many common indoor pollutants include organic compounds that may be more odorous, irritating, or toxic than those from which they are formed. These products of ozone chemistry include formaldehyde, higher molecular weight aldehydes, acidic aerosols, and fine and ultrafine particles, among others. The higher the outdoor ventilation rate, the higher the indoor ozone concentration and the more likely the reactions will occur, but even at low levels, the reactions will take place. This suggests that ozone should be removed from ventilation air, especially in areas where outdoor ozone levels are frequently high. Recent research has shown that mortality and morbidity increase in the general population during periods of higher outdoor ozone and that the threshold for this effect is around 20 parts per billion (ppb).\n",
"Weatherization can have a negative impact on indoor air quality, especially among occupants with respiratory illnesses. This occurs because of a decrease in air exchange in the home, and resulting increase in moisture. This leads to higher concentrations of pollutants in the air.\n",
"Weatherization generally does not cause indoor air problems by adding new pollutants to the air. (There are a few exceptions, such as caulking, that can sometimes emit pollutants.) However, measures such as installing storm windows, weather stripping, caulking, and blown-in wall insulation can reduce the amount of outdoor air infiltrating into a home. Consequently, after weatherization, concentrations of indoor air pollutants from sources inside the home can increase.\n",
"As cities struggle to comply with air quality standards, trees can help to clean the air. The most serious pollutants in the urban atmosphere are ozone, nitrogen oxides (NOx), sulfuric oxides (SOx) and particulate pollution. Ground-level ozone, or smog, is created by chemical reactions between NOx and volatile organic compounds (VOCs) in the presence of sunlight. High temperatures increase the rate of this reaction. Vehicle emissions (especially diesel), and emissions from industrial facilities are the major sources of NOx. Vehicle emissions, industrial emissions, gasoline vapors, chemical solvents, trees and other plants are the major sources of VOCs. Particulate pollution, or particulate matter (PM10 and PM25), is made up of microscopic solids or liquid droplets that can be inhaled and retained in lung tissue causing serious health problems. Most particulate pollution begins as smoke or diesel soot and can cause serious health risk to people with heart and lung diseases and irritation to healthy citizens. Trees are an important, cost-effective solution to reducing pollution and improving air quality.\n",
"Considering that North Americans spend a large proportion of their lives indoors, it’s clear why this is a key issue in designing healthy spaces. Additionally, air quality is not a stand-alone problem; rather, every other component of the home can affect air quality. Air quality can be compromised by off-gassing from cabinetry, countertops, flooring, wall coverings or fabrics; by cooking by-products released into the air, and by mold caused by excess moisture or poor ventilation.\n"
] |
why are most corporations considered evil?
|
Most corporations aren't evil; they just do useful things like make the bread for your breakfast toast or the wires for your house.
Of those that do behave badly, the bad behavior can often be put down to one of two things. The first is outright corruption in the management ranks, which is just the human condition.
The second is that, traditionally, directors (CEOs etc.) can be sued personally by the company's shareholders if they don't act in a way that makes the most money possible. This leads them to make unethical decisions just to make a larger profit, because if they don't, they could potentially be sued by the shareholders for running the business improperly.
Some countries (recently the UK) have passed laws to give directors more leeway in how they run the company.
I believe that in the UK, directors are now protected from being sued by shareholders if they acted in a way intended to benefit society or the environment.
A lot of other countries are passing similar laws.
|
[
"An evil corporation is a trope in popular culture that portrays a corporation as ignoring social responsibility in order to make money for its shareholders. According to Angela Allan writing in \"The Atlantic\", the notion is \"deeply embedded in the landscape of contemporary culture—populating films, novels, videogames, and more.\" The science fiction genre served as the initial background to portray corporations in this dystopian light. Evil corporations can be seen to represent the danger of combining capitalism with larger hubris.\n",
"In real life, too, corporations have been accused of being evil. To guard against such accusations, Google at one point in its history had the official motto \"Don't be evil\", now used as part of the closing lines of the company's code of conduct. The company has been accused of violating this principle on several occasions, including with their now discontinued participation in a military drone AI program.\"The New Yorker\" wrote that \"many food activists consider Monsanto (now Bayer) to be \"the\" definitively evil corporation\". \"The Debate over Corporate Social Responsibility\" wrote, \"For many consumers, Wal-Mart serves as the evil corporation prototype, but record numbers shop at the stores for low prices.\" In Japan, a committee of journalists and rights activists issues an annual \"corporate raspberry award\" known as Most Evil Corporation of the Year Award (also called the Black Company Award) to a company \"with a culture of overwork, discrimination and harassment\".\n",
"In addition, from the perspective of business ethics it might be argued that chief executives are not inherently more evil than anyone else and so are no more likely to attempt unethical or illegal activity than the general population. Large multi-national corporations do continue to attempt to erode governmental regulations through in-house or contracted lobbyists who work closely with State and Federal legislators. So as corporate laws continue to lean in their favor, corporate members have improved portals to drive up company profits.\n",
"An evil empire is a speculative fiction trope in which a major antagonist of the story is a technologically advanced nation, typically ruled by an evil emperor or empress, that aims to control the world or conquer some specific group. They are opposed by a hero from more common origins who uses their guile or the help of an underground resistance to fight them. Well-known examples are the Galactic Empire in \"Star Wars\", which forms upon the collapse of the more benevolent Galactic Republic and is opposed by Luke Skywalker, as well as the Galactic Empire in \"Dune\", whose Emperor plots the downfall of House Atreides, and is opposed by Paul Atreides. The theme also often appears in video games, such as the \"Final Fantasy\" series, starting with \"Final Fantasy II\", which was inspired by \"Star Wars\", and becoming a major part of \"Final Fantasy VI\" in the form of the Gestahl Empire.\n",
"The second is where corporations, even small-scale suppliers, are seen as greedy monopolists that prey on the consumer. Caplan argues that all trade is a two-way street and that people like middlemen are not interposers attempting to fleece the people, but rather, making up for transportation, storage, and distribution costs. At a more broad level, cheating people is bad for business and the existence of multiple firms offering similar products implies competition, not monopoly power, which limits any firm's ability to increase prices.\n",
"Many anti-corporate activists believe the rise of large-business corporations poses a threat to the legitimate authority of nation states and the public sphere. They feel corporations are invading people's privacy, manipulating politics and governments, and creating false needs in consumers. They state evidence such as invasive advertising adware, spam, telemarketing, child-targeted advertising, aggressive guerrilla marketing, massive corporate campaign contributions in political elections, interference in the policies of sovereign nation states (Ken Saro-Wiwa), and news stories about corporate corruption (Enron, for example).\n",
"Anti-capitalists assert that capitalism is violent. They believe private property and profit survive only because police violence defends them and that capitalist economies need war to expand. They may use the term \"structural violence\" to describe the systematic ways in which a given social structure or institution kills people slowly by preventing them from meeting their basic needs, for example the deaths caused by diseases because of lack of medicine.\n"
] |
Landing on Mars with a glider?
|
> Does it have something to do with the 100 times thinner atmosphere
Probably. NASA engineers are smart people who think through all the angles and then go through rigorous math to compare the cost and effectiveness of various techniques. When you're sending something to Mars, the most expensive part is usually launching the mass on a rocket in the first place, so every kilogram of landing hardware counts. I suppose they concluded that, in that particular case, a rocket-based landing would be lighter, cheaper, and less prone to failure than the alternatives.
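To put a rough number on that "100 times thinner atmosphere" point, here's a minimal back-of-envelope sketch (my own illustration with assumed air densities, not anything from NASA's actual trade studies). Lift scales with air density times speed squared, so you can estimate how much faster a wing would have to fly on Mars to carry the same load as on Earth:

```python
# Back-of-envelope sketch: how much faster a glider would need to fly on Mars
# to generate the same lift as on Earth. Lift L = 0.5 * rho * v**2 * S * C_L,
# so for the same wing (S, C_L) and the same lift, the required speed scales
# with sqrt(rho_earth / rho_mars).

RHO_EARTH = 1.225  # kg/m^3, sea-level air density on Earth (standard value)
RHO_MARS = 0.020   # kg/m^3, rough near-surface density on Mars (assumed here)

speed_factor = (RHO_EARTH / RHO_MARS) ** 0.5
print(f"Same lift needs roughly {speed_factor:.1f}x the airspeed on Mars")
# ~7.8x, i.e. landing speeds of hundreds of m/s, which helps explain why
# parachutes plus retrorockets win out over wings for Mars landings.
```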
|
[
"One application of a Mars flyby is for a human mission, where after landing and staying on the surface for some time the ascent stage has a space rendezvous with another, unmanned spacecraft, that was launched separately from Earth, flying by. This would mean the ascent stage of the lander to reach the speed necessary equal to that of the spacecraft flying by, but the resources needed for Earth return would not have to enter or leave Mars orbit.\n",
"The Soviet probe Mars 3 is thought to have successfully landed in Ptolemaeus Crater in 2 December 1971, but contact was lost seconds after landing due do a dust storm occurring at the time. On 11 April 2013, NASA announced that the Mars Reconnaissance Orbiter (MRO) may have imaged the Mars 3 lander hardware on the surface of Mars. The HiRISE camera on the MRO took images of what may be the parachute, retrorockets, heat shield and lander.\n",
"After orbiting Mars for more than a month and returning images used for landing site selection, the orbiters and landers detached; the landers then entered the Martian atmosphere and soft-landed at the sites that had been chosen. The \"Viking 1\" lander touched down on the surface of Mars on July 20, 1976, and was joined by the \"Viking 2\" lander on September 3. The orbiters continued imaging and performing other scientific operations from orbit while the landers deployed instruments on the surface.\n",
"Mars 6's lander separated from the flyby bus on 12 March 1974 at an altitude of from the surface of Mars. The bus made a flyby with a closest approach of . The lander encountered the atmosphere of Mars at 09:05:53 UTC, slowing from as it passed through the upper atmosphere. A parachute was then deployed to further slow the probe's descent, and retrorockets were intended to fire during the last seconds before the probe reached the ground.\n",
"Mars 6 successfully lifted off on August 5, 1973, into an intermediate Earth orbit on a Proton-K/D booster and then launched into a Mars transfer trajectory. Total fueled launch mass of the lander and bus was 3260 kg. It reached Mars on March 12, 1974. The descent module separated from the bus at a distance of 48,000 km from Mars. The bus continued on into a heliocentric orbit after passing within 1600 km of Mars. The descent module entered the atmosphere at 09:05:53 UT at a speed of 5.6 km/s. The parachute opened at 09:08:32 UT after the module had slowed its speed to 600 m/s by aerobraking. During this time the craft was collecting data and transmitting it directly to the bus for immediate relay to Earth. Contact with the descent module was lost at 09:11:05 UT in \"direct proximity to the surface\", probably either when the retrorockets fired or when it hit the surface at an estimated 61 m/s. Mars 6 landed at in the Margaritifer Terra region of Mars. The landed mass was 635 kg. The descent module transmitted 224 seconds of data before transmissions ceased, the first data returned from the atmosphere of Mars. Much of the data was unreadable due to a flaw in a transistor which led to degradation of the system during its journey to Mars.\n",
"On August 6, 2012, the Mars Science Laboratory landed on Aeolis Palus near Aeolis Mons in Gale Crater. The landing was from the target (), closer than any previous rover landing and well within the target area.\n",
"Flight to Mars is a 1951 American Cinecolor science fiction film drama, produced by Walter Mirisch for Monogram Pictures, directed by Lesley Selander, that stars Marguerite Chapman, Cameron Mitchell, and Arthur Franz. \n"
] |
If you listen to music loudly (via earphones) in a very windy situation, where you can barely hear the music (for example in a convertible going fast), is the music still doing damage to your ears?
|
Yep. It's the same thing as being at a bar or a loud concert and yelling into your friends' ears so you can hear each other: the sound reaching your ear is no different than if someone were yelling into your ear in a quiet room.
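As a rough illustration of why the masking doesn't help, here's a small sketch using the commonly cited occupational guideline of about 85 dB for 8 hours with a 3 dB exchange rate (a simplification on my part; real hearing-safety guidance is more nuanced, and the actual levels coming out of earbuds vary a lot):

```python
# Rule-of-thumb sketch: ~85 dB(A) is considered acceptable for about 8 hours,
# and every +3 dB roughly halves the allowed exposure time. Wind noise masking
# the music doesn't change the level actually hitting your inner ear.

def allowed_hours(level_db: float, reference_db: float = 85.0,
                  reference_hours: float = 8.0, exchange_db: float = 3.0) -> float:
    """Permissible daily exposure time at a given sound level."""
    return reference_hours / (2 ** ((level_db - reference_db) / exchange_db))

for level in (85, 94, 100, 106):
    print(f"{level} dB -> about {allowed_hours(level):.2f} hours per day")
# At 100 dB (earbuds cranked up to beat highway wind) that's only ~0.25 hours.
```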
|
[
"When exposed to a multitude of sounds from several different sources, sensory overload may occur. This overstimulation can result in general fatigue and loss of sensation in the ear. The associated mechanisms are explained in further detail down below. Sensory overload usually occurs with environmental stimuli and not noise induced by listening to music.\n",
"Musicality, especially on the radio, contains musical aspects (timbre, emotional impact, melody), and artifacts that arise from non-musical aspects (soundstaging, dynamic range compression sonic balance). The introduction of these sonic artifacts affects the balance between these musical and non-musical aspects. When the volume of music is higher, these artifacts become more apparent, and because they are uncomfortable for the ear, cause listeners to \"tune out\" and lose focus or become tired. These listeners may then unconsciously avoid that type of music, or the radio station they may have heard it on.\n",
"Experts agree that exposure to sounds over 85 dB over time can cause damage to hearing. Many concert venues and nightclubs play music at levels over 100 decibels. It is also possible to listen to music on personal audio equipment (such as MP3 players) at levels which exceed damage-risk criteria, depending on the equipment.\n",
"According to the Scientific Committee on Emerging and Newly Identified Health Risks, the risk of hearing damage from digital audio players depends on both sound level and listening time. The listening habits of most users are unlikely to cause hearing loss, but some people are putting their hearing at risk, because they set the volume control very high or listen to music at high levels for many hours per day. Such listening habits may result in temporary or permanent hearing loss, tinnitus, and difficulties understanding speech in noisy environments.\n",
"Studies are still being done on fan exposure, but some preliminary findings show that there are often noises that can be at or exceed 120 dB which, unprotected, can cause damage to the ears in seconds.\n",
"Although research is limited, it suggests that increased exposure to loud noise through personal listening devices is a risk factor for noise induced hearing loss. More than half of people are exposed to sound through music exposure on personal devices greater than recommended levels. Research suggests stronger correlations between extended duration or elevated usage of personal listening devices and hearing loss.\n",
"Other than their inability to hear music, which is most likely due to a genetic defect, the rest of an amusic's brain remains normal. The only effect is on the ability to tell different notes apart due to the separation of two key areas in the brain. Most sufferers of amusia describe music as unpleasant. Others simply refer to it as noise and find it annoying. This can have social implications because amusics often try to avoid music, which in many social situations is not an option. In China and other countries in which identical words have different meanings based on pitch, amusia may have a much more pronounced social and emotional impact: difficulty in speaking and understanding the language.\n"
] |
When I am all alone, and there is no noise in the room, and it is all still and quiet, there is a sound in my ears/head that is similar to a ringing of the ears, but it's not quite the same. What causes that?
|
[Tinnitus](_URL_0_): "the perception of sound within the human ear in the absence of corresponding external sound".
Causes are varied, so check the ~~Zelda~~ link for more on that.
|
[
"It's like, all the electric wires in the house are plugged into my brain. And every one has a different noise, so I can't think. Some of the wires have voices in them and they tell me things like what to do and that people are watching me. I know there really aren't any voices, but I feel that there are, and that I should listen to them or something will happen. … I can remember what I was like before. I was a class officer, I had friends. I was going to be an aeronautical engineer. Do you remember, Bobby? I've never had a job. I've never owned a car. I've never lived alone. I've never made love to a woman. And I never will. That's what it's like. You should know. That's why I'm a Hindu. Because maybe it's true: Maybe people are born again. And if there is a God, maybe he'll give me another chance. I believe that, because this can't be all I get.\n",
"\" When you were upstairs, you couldn't hear anything at all, but you could see EN clicking away on laptops (and Blixa screaming in a microphone). The REAL thing was downstairs. You can see the small room on the photos - maybe 20 people could be there at the same time. Because of ear-protection headsets (or your fingers pressed into your ears) it was not about sound. It was about feeling. The massive wave of sound from those speakers (in this small room!) flattened the hairs on the back of my arms and on my head. Sometimes the music came to a DEEP grinding halt. Sometimes Blixa's voice cut through. \"\n",
"Usually, their home is silent, but when one day the narrator suddenly hears something inside another part of the house, the siblings escape to a smaller section, locked behind a solid oak door. In the intervening days, they become frightened and solemn; on the one hand noting that there is less housecleaning, but regretting that the interlopers have prevented them from retrieving many of their personal belongings. All the while, they can occasionally hear noises from the other side.\n",
"In a letter to Theo in May 1889 he explains the sounds that travel through the quiet-seeming halls, \"There is someone here who has been shouting and talking like me all the time for a fortnight. He thinks he hears voices and words in the echoes of the corridors, probably because the auditory nerve is diseased and over-sensitive, and in my case it was both sight and hearing at the same time, which is usual at the onset of epilepsy, according to what Dr. Félix Rey said one day.\"\n",
"Noise is unwanted sound judged to be unpleasant, loud or disruptive to hearing. From a physics standpoint, noise is indistinguishable from sound, as both are vibrations through a medium, such as air or water. The difference arises when the brain receives and perceives a sound.\n",
"BULLET::::- Lack of sound depth: any background noise (in the room, in the car) is flat and wrongly interpreted by the brain. The effect is similar to what happens when trying to hear someone speaking in a noisy crowd on a mono TV. The effect is also similar to talking on the phone to someone who is in a noisy environment (see also: King-Kopetzky syndrome)\n",
"Indoor noise can be caused by machines, building activities, and music performances, especially in some workplaces. Noise-induced hearing loss can be caused by outside (e.g. trains) or inside (e.g. music) noise.\n"
] |
- if hiv can take up to 6 months to show up on a blood test, how do they know donated blood is safe?
|
They generally screen out high-risk groups from donating blood at all. There are many reports confirming that HIV is transmitted more readily through some routes and behaviors than others, so donors are asked questions about their lifestyle to figure out whether they fall into one of those high-risk groups. Donated blood is also tested for HIV directly, but because of the window period those tests can miss very early infections, which is why the donor screening matters so much.
For example, my wife is a veterinarian, and over half of her veterinary school class is unable to donate blood because they handled monkeys.
[In the '80s, around 2,000 Canadians were infected from tainted blood when HIV was first emerging](_URL_0_). Many of the screening mechanisms in place today were put in place in response to those old, failed policies.
|
[
"BULLET::::- Every single blood donation is tested for HIV (the virus that causes AIDS) and Hepatitis B and C. Infected blood is not used in transfusions but tests may not always detect the early stages of viral infection.\n",
"In the US, the Food and Drug Administration requires that all donated blood be screened for several infectious diseases, including HIV-1 and HIV-2, using a combination of antibody testing (EIA) and more expeditious nucleic acid testing (NAT). These diagnostic tests are combined with careful donor selection. , the risk of transfusion-acquired HIV in the US was approximately one in 2.5 million for each transfusion.\n",
"The simple blood tests used to \"screen\" possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise \"tests\" to determine whether a person is actually infected with either of these viruses.\n",
"The ELISA antibody tests were developed to provide a high level of confidence that donated blood was \"NOT\" infected with HIV. It is therefore not possible to conclude that blood rejected for transfusion because of a \"positive\" ELISA antibody test is in fact infected with HIV. Sometimes, retesting the donor in several months will produce a \"negative\" ELISA antibody test. This is why a confirmatory western blot is always used before reporting a \"positive\" HIV test result.\n",
"Risks are also associated with a non-MSM donors testing positive for HIV, which can have major implications as the donor's last donation could have been given within the window period for testing and could have entered the blood supply, potentially infecting blood product recipients. An incident in 2003 in New Zealand saw a non-MSM donor testing positive for HIV and subsequently all blood products made with the donor's last blood donation had to be recalled. This included NZ$4 million worth of Factor VIII, a blood clotting factor used to treat hemophiliacs which is manufactured from large pools of donated plasma, and subsequently led to a nationwide shortage of Factor VIII and the deferral of non-emergency surgery on hemophiliac patients, costing the health sector millions of dollars more. Screening out those at high risk of bloodborne diseases, including MSM, reduces the potential frequency and impact of such incidents.\n",
"Potential donors are evaluated for anything that might make their blood unsafe to use. The screening includes testing for diseases that can be transmitted by a blood transfusion, including HIV and viral hepatitis. The donor must also answer questions about medical history and take a short physical examination to make sure the donation is not hazardous to his or her health. How often a donor can donate varies from days to months based on what component they donate and the laws of the country where the donation takes place. For example, in the United States, donors must wait eight weeks (56 days) between whole blood donations but only seven days between plateletpheresis donations and twice per seven-day period in plasmapheresis.\n",
"Tests selected to screen donor blood and tissue must provide a high degree of confidence that HIV will be detected if present (that is, a high sensitivity is required). A combination of antibody, antigen and nucleic acid tests are used by blood banks in Western countries. The World Health Organization estimated that, , inadequate blood screening had resulted in 1 million new HIV infections worldwide.\n"
] |
AMA - 20th Century American Popular Culture
|
Thank you so much for arranging this very interesting panel!
My question is the following: When did the cinema become a common sight in the towns and cities of the United States? Was it something that was targeted for a specific audience or was it like today where everyone ranging from teenage couples to families can find something to watch (and be entertained by)?
|
[
"Published in 2009 by W.W. Norton & Company, Dickstein’s cultural history of the U.S. in the 1930s considers the complicated dynamic between art and entertainment in the decade, suggesting that the era produced a wide array of popular culture that shares an interest in how “ordinary people lived, how they suffered, interacted, took pleasure in one another, and endured.” A sizable portion of \"Dancing in the Dark\" focuses on what is typically thought of as \"escapist\" entertainment from the decade. The book is filled with extended analyses of the decade’s most popular sorts of entertainment: the musicals of Busby Berkeley, the performances of Humphrey Bogart, the films of Frank Capra, and the dance routines of Fred Astaire and Ginger Rogers. It also contains lengthy analyses of movements and works that are typically thought of as \"high culture\": the Art Deco movement, the novels of William Faulkner, Orson Welles’ \"Citizen Kane\", and the orchestral pieces of Aaron Copland.\n",
"Geppi's Entertainment Museum was a 16,000-square-foot (1,500 m2) privately owned pop culture museum located at historic Camden Station at Camden Yards in Baltimore, Maryland. The museum chronicled the history of pop culture in America from the 17th century to the early 21st century, as made popular in newspapers, magazines, comic books, movies, television, radio and video games. It featured a collection of nearly 60,000 pop culture artifacts, including magazines, movie posters, toys, buttons, badges, cereal boxes, trading cards, dolls, figurines, and other memorabilia. Geppi’s Entertainment Museum was located in downtown Baltimore's historic Camden Station at Camden Yards, directly above the former Sports Legends Museum at Camden Yards and adjacent to Oriole Park at Camden Yards.\n",
"While American entertainment was a pervasive cultural influence in Britain at the time of the production of the series, not all references to American culture can be seen as conscious decisions. For example, Terry Jones did not know that Spam was an American product at the time he wrote the sketch. Kevin Kern summarises in his analysis of references to the US 'that portrayals of American themes reflected three broad responses to American hegemony: 1) minor or passing references to specific individuals, events, or products of American culture, 2) American cultural tropes used to serve a general comedic purpose, and 3) satire aimed at American targets, specifically US economic power, the crassness or banality of American culture, or American violence and militarism'. However, Kern does not see this as exhibiting anti-American tendencies, but as 'a natural extension of the Pythons’ frequent (…) satirical focus on vulgarity, banality, violence, and militarism in the United Kingdom (…)' \n",
"From the mid to late 20th century, Americana was largely conceptualized as a nostalgia for an idealized life in small towns and cities in the United States around the turn of the century, roughly in the period between 1880 and the First World War, popularly considered \"The Good Old Days\". It was believed that much of the structure of 20th-century American life and culture had been cemented in that time and place. American author Henry Seidel Canby wrote:\n",
"The drama of one of the most significant decades in America's history unfolded in this unique look at the Jazz Age. Few decades have been filled with so much yet ended so quickly as the 1920s. Businesses boomed, the stock market soared, and heroes were abundant. Before the 1920s ended in the worst stock market crash in history, America underwent a transformation from 19th century Victorian life and business to a 20th-century dynamo, setting the standard for a transformed society and industrial giant. This special exhibit featured Richard Byrd's polar flight suit, Man o' War's saddle, a brick from the St. Valentine's Day Massacre, a suit worn by Henry Ford, Ernest Hemingway's passport, Charles Lindbergh's flight goggles, a painting by Zelda Fitzgerald, a costume worn by Al Jolson, Bill Tilden's tennis racket, Louis Armstrong's trumpet, Babe Ruth's Yankees uniform, and much more.\n",
"Phil Harrison in \"The Guardian\" wrote that \"from the jug-bands of Memphis to the woebegone country blues of the Appalachian Mountains, early 20th-century America was full of unique musical forms developing in isolation. This first episode of a three-part series deals with the 1920s, the first decade during which these disparate yet analogous styles took flight from their places of origin and reached the rest of the nation. It's a treasure trove of picaresque stories, evocative footage and strange and beautiful music.\" Jay Meehan in the Park Record, covering the launch at the Sundance Film Festival, wrote, \"Thursday night's Sundance special event at the Eccles Center was one not to miss. One thing that came through quite clearly from the entire evening is how deeply everyone involved cares about this project.\" Ellie Porter in \"TVTimes\" awarded the show 5 stars, calling the series \"an absolute treat.\" Simon Cosyns in \"The Sun\" asked the reader to \"imagine a world with no recorded music. And then imagine the thrill of hearing a record for the first time. This mind-boggling ten-year project, led by Brits Bernard MacMahon and Allison McGourty, involves documentary films, albums and a book all exploring 'the first time America heard itself.' If you like music of any sort, there's only one word for this project. Essential.\" He awarded it five stars. Brian McCollum in the \"Detroit Free Press\" noted that the films were \"stocked with rare images and scrupulously restored audio,\" explaining how \"\"American Epic\" solves mysteries, brings a lost musical era back to life.\" He praised it as \"a documentary which pairs a scholarly eye for detail with a buoyant fan passion.\" Sarah Hughes in \"The Observer\" noted that Robert Redford's \"languid tone is a perfect fit,\" and that \"this three-part documentary is a deep dive into the music that built America. Along the way love is lost, younger generations step up to the mic and reputations fade, but, as this glorious film makes clear, the music is always there, still vibrant and vital despite the passing of the years.\"\n",
"American Regionalism is best known through its \"Regionalist Triumvirate\" consisting of the three most highly respected artists of America's Great Depression era: Grant Wood, Thomas Hart Benton, and John Steuart Curry. All three studied art in Paris, but devoted their lives to creating a truly American form of art. They believed that the solution to urban problems in American life and the Great Depression was for the United States to return to its rural, agricultural roots.\n"
] |
How do you think someone in a coma would react to psychedelics?
|
[This question was asked a few years ago](_URL_1_) and didn't get much of an answer. I don't know if you'll be able to get much more than that, but good luck to you.
I had a look on Google Scholar for you for any reports linking use of psychoactive or psychedelic drugs with [locked-in syndrome](_URL_2_), [persistent vegetative states](_URL_3_) or [minimally conscious states](_URL_0_). I couldn't find anything, but these aren't my areas of research.
|
[
"The effects of psychedelics vary widely from one individual to the next, and from one experience to the next. Sometimes individuals under the influence of such drugs do not understand that they have taken a drug and believe that they will never return to their ordinary, sober perception, though some can be reminded verbally. In cases where the individual cannot be kept safe, hospitalization may be useful, though the value of this practice for individuals not mentally ill is disputed by proponents of the investigative or recreational use of psychoactive compounds. Psychosis is exacerbated in individuals already suffering from this condition.\n",
"A psychedelic experience is a temporary altered state of consciousness induced by the consumption of psychedelic drugs (the best known of which are LSD and psilocybin 'magic' mushrooms). The psychedelic altered state of consciousness is commonly characterised as a higher (elevated or transcendent) state relative to ordinary (sober) experience; for example, the psychologist Benny Shanon observed from ayahuasca trip reports: \"the assessment, very common with ayahuasca, that what is seen and thought during the course of intoxication defines the real, whereas the world that is ordinarily perceived is actually an illusion.\"\n",
"With the ingestion of psychedelics people often experience sudden shifts in cognitive association and emotive content. The experience can shift rapidly from negative to euphoric, and in certain cases mimic the schizophrenic condition, as researched by Humphry Osmond and others. \n",
"Generally, a person experiencing a psychedelic crisis can be helped either to resolve the impasse, to bypass it, or, failing that, to terminate the experience. A person's thoughts before taking or while under the influence of the psychedelic, often greatly influence the trip.\n",
"Psychedelic drugs can be used to alter the brain cognition and perception, some believing this to be a state of higher consciousness and transcendence. Typical psychedelic drugs are hallucinogens including LSD, DMT (Dimethyltryptamine), cannabis, peyote, and psilocybin mushrooms. According to Wolfson, these drug-induced altered states of consciousness may result in a more long-term and positive transformation of self.\n",
"Richard Yensen, Albert Kurland and other researchers collected evidence that psychedelic therapy could be of use to those suffering from anxiety and other problems associated with terminal illness. In 1965, research consisting of providing a psychedelic experience for the dying was conducted at the Spring Grove State Hospital in Maryland. Of 17 dying patients who received LSD after appropriate therapeutic preparation, one-third improved \"dramatically\", one-third improved \"moderately\", and one-third were unchanged by the criteria of reduced tension, depression, pain, and fear of death.\n",
"A multitude of reactions can occur during a psychedelic crisis. Some users may experience a general sense of fear, panic, or anxiety. A user may be overwhelmed with the disconnection many psychedelics cause, and fear that they are going insane or will never return to reality. The fear that is felt during a bad trip has a psychotic character, coming as it does from within the mind of the tripper and not from the external environment. For example, during Albert Hoffman's first acid trip, he hallucinated that his neighbour had turned into a malignant demon, when in fact she was only a friendly woman trying to help him.\n"
] |
When is a species no longer considered invasive?
|
I don't like the first answer to that question in the link you posted. Invasions happen all the time. In fact, it's why biogeography is such a fun field to get into.
Species expand outwards and encroach on other ranges ALL the time. However, the truth is that the definition gets tenuous. I'll be the one to start a firestorm by defining it in very neutral terms:
A species is an invader if it enters and occupies a niche or habitat it was absent from or never occupied previously. I welcome all debate on this, as I know some fellow ecologists will probably do a double take at that. It's the best I can think of anyway.
So, species ranges are not set in stone, and you'll see species turning up in spots on their documented periphery where you never saw them before. Consider Locality A and Locality B: Species 1 from Locality A enters Locality B, finds it suitable to live in, and settles; Species 1 is therefore an invader. Species 2 is found in Locality B but doesn't invade Locality A and stays in B. So for this example, we can say that Species 1 is an invader in B and a native of A, while Species 2 is a native of B.
A species cannot occupy a niche or habitat if it is not equipped to utilize the resources or compete with the species already occupying them. No species enters a habitat without stiff competition from what's there. Thus, the most common invasive species we find today are those that are not only successful in their home niche but were evolutionarily flexible enough to expand outwards when given just the bare minimum conditions. This kind of natural invasion happens all the time; it's just that the scale is small and the effects are not as dangerous as the landscape-level and global transport of organisms by humans. For a good example of invasive dynamics at small scales, look up Huffaker's famous experiment with mites and oranges. This is how we define the spatial scale at which a species is native or invasive in the wild.
On to human-caused invasions: the effects can be devastating because we transport species across landscapes and natural barriers like mountains, salt water, deserts, and tundra. For example, carp are established quite comfortably in the United States, even though they are native to China, and it is due to transport by humans that they are found pretty much worldwide. Black basses like largemouth and smallmouth bass are so popular around the world for sport fishing that they've been introduced as far away as Japan. Red swamp crayfish are so delicious and economical to farm that they are found in Africa... where there are no native crayfish!
When is a species no longer considered invasive? At the appropriate scale. No species is invasive at the global scale, since we all occupy the planet, but a species can be invasive at the landscape, ecosystem, community, and population scales. Yes, even populations within a species can be invaders, if you think about two gene pools where one comes to intermingle with the other and form one big gene pool. A cool concept, actually.
Also, a species is no longer considered invasive *in its effects* once it has been incorporated into the natural processes of the ecosystem. If the ecosystem is sustainable even with the new invader, then you might want to regard the invader as an important energy component of that system. Humans are the best example. We cut down forests and harvest the oceans, but the ecosystem adapts after many species leave for other habitats or die off, and the species left behind work alongside humans to maintain the energy dynamics of the system for everyone's survival. We understand that more than ever.
|
[
"Invasive species, also called invasive exotics or simply exotics, is a nomenclature term and categorization phrase used for flora and fauna, and for specific restoration-preservation processes in native habitats, with several definitions. The first definition, the most used, applies to introduced species (also called \"non-indigenous\" or \"non-native\") that adversely affect the habitats and bioregions they invade economically, environmentally, and/or ecologically. Such invasive species may be either plants or animals and may disrupt by dominating a region, wilderness areas, particular habitats, or wildland-urban interface land from loss of natural controls (such as predators or herbivores). This includes non-native invasive plant species labeled as exotic pest plants and invasive exotics growing in native plant communities. It has been used in this sense by government organizations as well as conservation groups such as the International Union for Conservation of Nature (IUCN) and the California Native Plant Society. The European Union defines \"Invasive Alien Species\" as those that are, firstly, outside their natural distribution area, and secondly, threaten biological diversity. It is also used by land managers, botanists, researchers, horticulturalists, conservationists, and the public for noxious weeds. The kudzu vine (\"Pueraria lobata\"), Andean Pampas grass (\"Cortaderia jubata\"), and yellow starthistle (\"Centaurea solstitialis\") are examples.\n",
"The term \"invasive species\" refers to a subset of those species defined as introduced species. If a species has been introduced, but remains local, and is not problematic for human industry or the local biodiversity, then it is not considered invasive, and does not belong on this list.\n",
"The term invasive species refers to a subset of those species defined as introduced species. If a species has been introduced but remains local, and is not problematic to agriculture or to the local biodiversity, then it cannot be considered to be an invasive species and does not belong on this list.\n",
"These are lists of invasive species by country or region. A species is regarded as invasive if it has been introduced by human action to a location, area, or region where it did not previously occur naturally (i.e., is not a native species), becomes capable of establishing a breeding population in the new location without further intervention by humans, and becomes a pest in the new location, threatening agriculture and/or the local biodiversity.\n",
"According to biology, invasive species are non-native animals that are introduced to a region or area outside of their usual habitat. Invasive species can either be introduced intentionally (if they have a beneficial purpose) or non-intentionally.\n",
"The term invasive species refers to a subset of those species defined as introduced species. If a species has been introduced but remains local, and is not problematic to human systems or to the local biodiversity, then it cannot be considered to be invasive, and does not belong on this list.\n",
"Unlike the notable ideas (concerning the success of invasive non-indigenous organisms) that preceded it, such as the enemy release hypothesis (ERH) and Charles Darwin's Habituation Hypothesis, the EICA hypothesis postulates that an invasive species is \"not\" as fit (in its introduced habitat) at its moment of introduction as it is at the time that it is considered invasive. As suggested by the name of the hypothesis (Evolution of Increased Competitive Ability), the hypothesis predicts that much of the invasive potential of an invasive species is derived from its ability to evolve to reallocate its resources.\n"
] |
how'd the yahoo "hacking" happen?
|
According to [the post from Yahoo](_URL_0_)
> For potentially affected accounts, the stolen user account information may have included names, email addresses, telephone numbers, dates of birth, hashed passwords (using MD5) and, in some cases, encrypted or unencrypted security questions and answers. The investigation indicates that the stolen information did not include passwords in clear text, payment card data, or bank account information. Payment card data and bank account information are not stored in the system the company believes was affected.
Unfortunately, the post does not make it clear whether the hashed passwords were salted. If they were not salted, it would be very easy for an attacker to find many users who had used common passwords, especially with around a billion accounts to work with. Thankfully no credit card information was stolen, but with all of the stolen information put together, plus the likelihood that people reuse passwords and usernames across multiple sites, it could still be very dangerous.
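To make the salting point concrete, here is a minimal sketch (my own illustration; the digests below are just the well-known MD5 values of "password" and "123456", not anything from the Yahoo disclosure) of why a dump of unsalted MD5 hashes is so cheap to attack:

```python
import hashlib

# Hypothetical leaked table: unsalted MD5 digests of user passwords.
# These are simply md5("password") and md5("123456"), used for illustration.
leaked_digests = {
    "5f4dcc3b5aa765d61d8327deb882cf99",
    "e10adc3949ba59abbe56e057f20f883e",
}

wordlist = ["password", "123456", "letmein", "qwerty"]

# Dictionary attack: one MD5 computation per candidate word instantly
# matches EVERY account that chose that word, because there is no salt.
for word in wordlist:
    digest = hashlib.md5(word.encode()).hexdigest()
    if digest in leaked_digests:
        print(f"cracked: {word!r} -> {digest}")

# With a unique per-user salt, the attacker would have to recompute
# md5(salt + word) separately for each of the roughly one billion
# accounts, and precomputed lookup tables become useless.
```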
There was also [Shellshock](_URL_1_), a security issue with Bash, the default command shell on Unix-like operating systems. It allowed attackers to execute arbitrary commands on a vulnerable system (often remotely, for example through web servers running CGI scripts), essentially letting them do whatever they wanted.
|
[
"A simple matter had sparked a controversy over Yahoo!. The controversy was sparked because of Yahoo!'s silence about the data breach. After the servers were hacked, Yahoo! did not mail the affected victims, although it was promised earlier. There was no site-wide notifications about the hack, nor did any victim get any type of personal messages detailing how to reset their account passwords from Yahoo.\n",
"On September 22, 2016, Yahoo disclosed a data breach in which hackers stole information associated with at least 500 million user accounts in late 2014. According to the BBC, this was the largest technical breach reported to date. Specific details of material taken include names, email addresses, telephone numbers, encrypted or unencrypted security questions and answers, dates of birth, and encrypted passwords. The breach used manufactured web cookies to falsify login credentials, allowing hackers to gain access to any account without a password. On December 14, 2016 a separate data breach, occurring earlier around August 2013 was reported. This breach affected over 1 billion user accounts and is again considered the largest discovered in the history of the Internet.\n",
"On September 22, 2016, Yahoo disclosed a data breach in which hackers stole information associated with at least 500 million user accounts in late 2014. According to the BBC, this was the largest technical breach reported to date. Specific details of material taken include names, email addresses, telephone numbers, encrypted or unencrypted security questions and answers, dates of birth, and encrypted passwords. The breach used manufactured web cookies to falsify login credentials, allowing hackers to gain access to any account without a password. On December 14, 2016 a separate data breach, occurring earlier around August 2013 was reported. This breach affected over 1 billion user accounts and is again considered the largest discovered in the history of the Internet.\n",
"In late January 2014, Yahoo announced on its company blog that it had detected a \"coordinated effort\" to hack into possibly millions of Yahoo Mail accounts. The company prompted users to reset their passwords, but did not elaborate on the scope of the possible breach, citing an ongoing federal investigation.\n",
"Yahoo! reported the breach to the public on September 22, 2016. Yahoo! believes the breach was committed by \"state-sponsored\" hackers, but did not name any country. Yahoo! affirmed the hacker was no longer in their systems and that the company was fully cooperating with law enforcement. The Federal Bureau of Investigation (FBI) confirmed that it was investigating the affair.\n",
"In December 1997, the Yahoo! website was supposedly hacked, displaying a message calling for Mitnick's release or risk an internet \"catastrophe\" by Christmas Day. Yahoo! responded that the worm is nonexistent, and there were claims that it was a hoax only to scare people.\n",
"Yahoo! Voices, formerly Associated Content, was hacked in July 2012. The hack is supposed to have leaked approximately half a million email addresses and passwords associated with Yahoo! Contributor Network. The suspected hacker group, D33ds, used a method of SQL Injection to penetrate Yahoo! Voice servers. Security experts said that the passwords were not encrypted and the website did not use a HTTPS Protocol, which was one of the major reasons of the data breach. The email addresses and passwords are still available to download in a plaintext file on the hacker's website. The hacker group described the hack as a \"wake-up call\" for Yahoo! security experts. Joseph Bonneau, a security researcher and a former product analysis manager at Yahoo, said \"Yahoo can fairly be criticized in this case for not integrating the Associated Content accounts more quickly into the general Yahoo login system, for which I can tell you that password protection is much stronger.\"\n"
] |
Are there any known examples of jump discontinuities occurring in the natural world, (not related to manmade systems)?
|
Phase transitions, shock waves, electric/magnetic fields at boundaries where there is a surface charge/current density, just to name a few.
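One of those examples can be written down explicitly (this is the standard electrostatics boundary condition, added here purely as an illustration rather than something from the original reply): at an idealized surface carrying charge density sigma, the normal component of the electric field changes by a finite amount over zero distance,

```latex
% normal component of E is discontinuous across a surface charge layer
\hat{n}\cdot\left(\vec{E}_{\mathrm{above}} - \vec{E}_{\mathrm{below}}\right) \;=\; \frac{\sigma}{\varepsilon_0}
```

which is exactly a jump discontinuity in the field, at least in the idealized zero-thickness description of the charge layer.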
|
[
"A jumpgate has been discovered in the Typheous system, and eight Earth corporations want to control it. Each of them therefore starts up a branch in the system, and begin to battle it out with the latest in military technology.\n",
"Jumpstyle, originally known simply as jump, was created in Belgium. It was a short-lived small genre that did not gain popularity in its original form. However, it came back to the public during the turn of the century, and fandom began increasing throughout Europe after undergoing significant changes in Germany in early 2003.\n",
"There are common hydraulic jumps that occur in everyday situations such as during the use of a household sink. There are also man-made hydraulic jumps created by devices like weirs or sluice gates. In general, a hydraulic jump may be used to dissipate energy, to mix chemicals, or to act as an aeration device.\n",
"The jumpgates and their current configuration are believed to be designed by ancient beings who are currently nowhere within the X Universe gate system. They are believed to be observing the sectors but direct contact has not occurred in generations. They are also blamed for the re-routing of some jumpgates every now and again, cutting off some sectors or connecting undiscovered ones.\n",
"The fourth jump was a relatively novel obstacle: a jump over a hurdle into a pond. Of the 46 pairs remaining by that point in the competition, 28 had a fall of the horse or the rider at that obstacle. This included one of the equine fatalities, the American horse Slippery Slim. The International Equestrian Federation subsequently temporarily banned such jumps.\n",
"Jumpdates is an English-language online dating website launched in 2001. Although no other language versions are available at this time, the service is meant to operate worldwide and allows registration from any country in the world except for a few exceptions.\n",
"Why the jump wasn't ratified as a record is unclear, with both contemporary and later sources providing contradictory explanations. Bill Bowerman claimed the jump was statistically valid but Spearow's lack of an AAU permit to compete prevented ratification. Track and field historian Richard Hymans quotes eyewitness Jonni Myyrä as saying the height Spearow cleared was found to be below the world record on remeasurement; however, Martti Jukola, also citing Myyrä as his source, claimed Spearow failed on his three official attempts and only made the height on an additional exhibition jump. The November 19, 1924 edition of the \"Eugene Guard\" referred to Spearow \"unofficially\" breaking the record, while the November 21 edition said Spearow \"tried for a world's vault record but failed by a scant margin.\"\n"
] |
Why don't we and other animals have eyes in the backs of our heads? Wouldn't having a 360 vision be a massive benefit?
|
Many "prey animals" have upwards of 270 degree vision with eyes on the sides of their heads. Deer, rabbets, chickens.
We have binocular vision because it makes us better at hunting prey. Most predators have binocular vision: owls, tigers, humans.
Many insects have nearly 360-degree vision, and most are prey to anything bigger.
Size matters more.
As animals go, humans are pretty big.
|
[
"Some predator animals, particularly large ones such as sperm whales and killer whales, have their two eyes positioned on opposite sides of their heads, although it is possible they have some binocular visual field.\n",
"Other animals that are not necessarily predators, such as fruit bats and a number of primates also have forward-facing eyes. These are usually animals that need fine depth discrimination/perception; for instance, binocular vision improves the ability to pick a chosen fruit or to find and grasp a particular branch.\n",
"Some other animals - usually, but not always, predatory animals - have their two eyes positioned on the front of their heads, thereby allowing for binocular vision and reducing their field of view in favor of stereopsis. However, eyes on the front is a highly evolved trait in vertebrates, and there are only three extant groups of vertebrates with truly forward-facing eyes: primates, carnivorous mammals, and birds of prey.\n",
"Many animals have better night vision than humans do, the result of one or more differences in the morphology and anatomy of their eyes. These include having a larger eyeball, a larger lens, a larger optical aperture (the pupils may expand to the physical limit of the eyelids), more rods than cones (or rods exclusively) in the retina, and a tapetum lucidum.\n",
"Some animals - usually, but not always, prey animals - have their two eyes positioned on opposite sides of their heads to give the widest possible field of view. Examples include rabbits, buffaloes, and antelopes. In such animals, the eyes often move independently to increase the field of view. Even without moving their eyes, some birds have a 360-degree field of view.\n",
"Primates have forward-facing eyes on the front of the skull; binocular vision allows accurate distance perception, useful for the brachiating ancestors of all great apes. A bony ridge above the eye sockets reinforces weaker bones in the face, which are put under strain during chewing. Strepsirrhines have a postorbital bar, a bone around the eye socket, to protect their eyes; in contrast, the higher primates, haplorhines, have evolved fully enclosed sockets.\n",
"Many hunting animals have evolved eyes facing forward, enabling depth perception. This is almost universal among mammalian predators, while most reptile and amphibian predators have eyes facing sideways.\n"
] |
Why were there so few German-American organized crime groups?
|
It's hard to explain why something didn't happen, but we can say why other groups did turn to organized crime. Ethnic groups that were blocked from traditional employment often tried to break out and get ahead by turning to crime. We can see this in the heavily discriminated-against Italian, Irish, Jewish, and Black communities, which all produced organized crime groups, but not among the generally accepted Germans.
|
[
"In Germany federal authorities have largely failed to provide sufficient resistance to ethnic organized crime gangs (German: \"Clankriminalität\") as fear of stigmatizing and discriminating minorities takes precedence. All ethnic crime gangs are collectively treated as organized crime.\n",
"The same large and politically connected gangs from New York, Philadelphia, Cleveland, Boston, Detroit, Chicago and New Orleans that controlled gambling, prostitution, extortion, thefts and narcotics since the early to mid-19th century, now controlled bootlegging operations across America in the 1920s. These recently organized and powerful criminal organizations began from the ethnic street gangs who committed violent crimes, provided illegal goods and services to the community and acted as enforcers for the political machines of the big cities and towns. The mainly Irish, Jewish, Italian and Polish immigrants that had begun to organize themselves at after World War I, continued their criminal activities with the start of Prohibition and began to meet the great demand for beer and liquor that came from citizens, speakeasies and blind pigs that sprang up across America overnight.\n",
"In 1936, thanks to an informer, the Gestapo raids devastated Anarcho-syndicalist groups all over Germany, resulting in the arrest of 89 people. Most ended up either imprisoned or murdered by the regime. The groups had been encouraging strikes, printing and distributing anti-Nazi propaganda and recruiting people to fight the Nazis' fascist allies during the Spanish Civil War.\n",
"Another major form of Russian-speaking organized crime in Germany consists of so-called criminal \"Aussiedler\" families. \"Aussiedlers\" are ethnic Germans (also called Volga Germans) that were born in the former Soviet Union. While a lot of \"Aussiedlers\" adapted well and quickly mastered the German language, a lot of families held onto the traditional lifestyle they lived in Russia and surrounding states. This led to the formation of individual as well as clan-based groups of \"Aussiedlers\" involved in organized criminal activities such as drug trafficking, extortion, prostitution, as well as extreme violence. Due to the large number of \"Aussiedlers\" they are seen as the major form of Russian organized crime in Germany.\n",
"Organized crime groups were posing huge problems to the routine and security of people, which forced the British authorities to curb the growing problem. These secret societies intensified their criminal activities from 1850s steadily evolving into organized crimes. They started with simple acts of crime extortion, trafficking and opium dealings before expanding to bigger criminal ventures like running illegal gambling dens and brothels. Organized crime groups started to scuffle with other gangs in order to take over their “turf” and threatened other citizens to pay protection money in return for their so-called ‘protection’. The most ferocious organized crime group was GheeHinKongsi; estimated to have 800 members, this group was largely made of the Cantonese. Other secret societies involved in crime include the Hai San, a gang that rivalled the Ghee Hin group in competition for land houses and business opportunities. Major legislative development in the colonial government led to the suppression of secret societies as law enforcement authorities thwarted the activities of organized crime.\n",
"Indigenous organized crime based in some of The Netherlands' major cities, mainly Amsterdam and The Hague, mostly has origins in the traditional working-class quarters. These small groups mainly consist of local career criminals that band together for their involvement in organized criminal activities such as extortion, loan sharking, prostitution and narcotics. Often these local criminals groups are strongly connected with, or in some cases even have an official membership to, outlaw motorcycle gangs such as the Hells Angels MC.\n",
"BULLET::::- Raise public awareness and the German political class about the problem of organized crime in Germany, for the recognition of the Mafia as a purely extra-national and, contrasting with greater collaboration between states, especially members of the European Union;\n"
] |
Would it be possible for a human to stand on an asteroid or comet as it speeds through space?
|
If the asteroid in question had enough mass for its gravity to hold you down, yes.
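For a sense of scale, here is a rough back-of-the-envelope sketch (the asteroid size and density are assumptions picked purely for illustration, not values from the question):

```python
import math

# Back-of-the-envelope surface gravity and escape velocity
# for an assumed 10 km diameter rocky asteroid.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
radius = 5_000.0     # metres (assumed)
density = 2_000.0    # kg/m^3, loose rock (assumed)

mass = density * (4.0 / 3.0) * math.pi * radius**3
g_surface = G * mass / radius**2               # m/s^2
v_escape = math.sqrt(2.0 * G * mass / radius)  # m/s

print(f"surface gravity ~ {g_surface:.4f} m/s^2")  # roughly a few mm/s^2
print(f"escape velocity ~ {v_escape:.1f} m/s")     # only a few m/s
```

With numbers like these, gravity holds you in place only very weakly: a vigorous push-off could approach escape velocity, which is why real missions to small bodies (Hayabusa, Philae) either make brief touch-and-go contacts or try to anchor themselves to the surface.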
|
[
"If such a crew is to be summoned to a distant asteroid, there may be less risky ways to divert the asteroid. Another promising asteroid mitigation strategy is to land a crew on the asteroid well ahead of its impact date and to begin diverting some its mass into space to slowly alter its trajectory. This is a form of rocket propulsion by virtue of Newton's third law with the asteroid's mass as the propellant. Whether exploding nuclear weapons or diversion of mass is used, a sizable human crew may need to be sent into space for many months if not years to accomplish this mission. Questions such as what the astronauts will live in and what the ship will be like are questions for the space architect.\n",
"More advanced options of using solar, laser electric, and laser sail propulsion, based on Breakthrough Starshot technology, have also been considered. The challenge is to get to the asteroid in a reasonable amount of time (and so at a reasonable distance from Earth), and yet be able to gain useful scientific information. To do this, decelerating the spacecraft at ʻOumuamua would be \"highly desirable, due to the minimal science return from a hyper-velocity encounter\". If the investigative craft goes too fast, it would not be able to get into orbit or land on the object and would fly past it. The authors conclude that, although challenging, an encounter mission would be feasible using near-term technology. Seligman and Laughlin adopt a complementary approach to the Lyra study but also conclude that such missions, though challenging to mount, are both feasible and scientifically attractive.\n",
"Earth's surface is situated fairly deep in a gravity well. The escape velocity required to get out of it is 11.2 kilometers/second. As human beings evolved in a gravitational field of 1g (9.8 m/s²), an ideal propulsion system would be one that provides a continuous acceleration of 1g (though human bodies can tolerate much larger accelerations over short periods). The occupants of a rocket or spaceship having such a propulsion system would be free from all the ill effects of free fall, such as nausea, muscular weakness, reduced sense of taste, or leaching of calcium from their bones.\n",
"The challenge is to get to the asteroid in a reasonable amount of time (and so at a reasonable distance from Earth), and yet be able to gain useful scientific information. To do this, decelerating the spacecraft at ʻOumuamua would be \"highly desirable, due to the minimal science return from a hyper-velocity encounter\". If the investigative craft goes too fast, it would not be able to get into orbit or land on the asteroid and would fly past it. The authors conclude that, although challenging, an encounter mission would be feasible using near-term technology. Seligman and Laughlin adopt a complementary approach to the Lyra study but also conclude that such missions, though challenging to mount, are both feasible and scientifically attractive.\n",
"The following is a list of fictional astronauts on missions to deflect asteroids and comets which pose a threat to Earth, as well as performing other miscellaneous feats of space exploration not yet achieved.\n",
"Once an asteroid is on course to encounter a planet with an atmosphere, it is in principle possible to tweak its orbit so that it intercepts the planet's atmosphere, using aerobraking to slow the asteroid at periapsis by dumping some of its kinetic energy into the atmosphere – this technique has however never been used by spacecraft performing rendezvous manoeuvres with other planets such as Mars, but it would reduce the otherwise very large amount of fuel required to decelerate a spacecraft from a Sun-orbital velocity to a planet-orbital velocity. This technique is referred to as aerocapture.\n",
"Today, as we meet here in this historic room where Abigail Adams hung out her washing, an astronaut can orbit the earth faster than a man on the ground can get from New York to Washington. Yet, the same science and technology which gave us our airplanes and our space probes, I believe, could also give us better and faster and more economical transportation on the ground. And a lot of us need it more on the ground than we need it orbiting the earth.\n"
] |
why do we find the natural human odor to be so offensive? are there other animals who are put off by the smell of their own species?
|
Well, that's not "natural human odor" you're smelling. What you're smelling are actually mostly the chemical byproducts of bacteria eating all the oils and things in your sweat.
These bacterial byproducts are actually some of the same chemicals that make some cheese smelly and even some of the same ones that make rotting flesh smell the way it does.
|
[
"There is also a specific anosmia to the odor in some humans; they are unable to smell specific odors, but have, otherwise, a normal sense of smell. However, this should, by no means, be regarded as indicative for being labeled as a pheromone, as it is true of over 80 olfactory compounds.\n",
"The role of smell has long been viewed as secondary to the importance of auditory, tactile, and visual senses. Humans do not rely on olfaction for survival to the same extent as other species. Instead, smell plays a heavier role in aesthetic food perception and gathering information on the surroundings. Nevertheless, humans also communicate via odorants and pheromones, exerting both subconscious and conscious (artificial) scents. For example, olfaction is able to mediate the synchronization of menstrual cycles for females living in close proximity.\n",
"BULLET::::- \"Olfactory\" – Humans are constantly in a state of change blindness due to the poor spatial and temporal resolutions with which scents are detected. Although, humans' odor detection thresholds are very low, our olfactory attention is only captured by unusually high odorant concentrations. Olfactory input is made up of a series of sniffs separated in time. The long inter-sniff-interval creates \"change anosmia,\" in which humans have trouble discerning smells that are not highly concentrated. This period of sensory habituation as well as very low concentrations of odorants regularly yield no subjective experience. This behavior is called \"experiential nothingness\".\n",
"While humans are highly dependent upon visual cues, when in proximity, smells also play a role in sociosexual behaviors. An inherent difficulty in studying human pheromones is the need for cleanliness and odorlessness in human participants. Experiments have focused on three classes of putative human pheromones: axillary steroids, vaginal aliphatic acids, and stimulators of the vomeronasal organ.\n",
"Figures suggesting greater or lesser sensitivity in various species reflect experimental findings from the reactions of animals exposed to aromas in known extreme dilutions. These are, therefore, based on perceptions by these animals, rather than mere nasal function. That is, the brain's smell-recognizing centers must react to the stimulus detected for the animal to be said to show a response to the smell in question. It is estimated that dogs in general have an olfactory sense approximately ten thousand to a hundred thousand times more acute than a human's. This does not mean they are overwhelmed by smells our noses can detect; rather, it means they can discern a molecular presence when it is in much greater dilution in the carrier, air.\n",
"In many animals, body odor plays an important survival function. Strong body odor can be a warning signal for predators to stay away (such as porcupine stink), or it can also be a signal that the prey animal is unpalatable. For example, some animals species, who feign death to survive (like opossums), in this state produce a strong body odor to deceive a predator that the prey animal has been dead for a long time and is already in the advanced stage of decomposing. Some animals with strong body odor are rarely attacked by most predators, although they can still be killed and eaten by birds of prey, which are tolerant of carrion odors.\n",
"The other main group are called \"flag\" species - due to the infloresence being on a long stalk. These species also exhibit thermogenesis, but if an odour is released it is not recognizable to the human nose, and it is debated if pollinators are attracted by a non-recognizable smell, the thermogenesis itself or visual attraction.\n"
] |
Any good book on history of education?
|
Hi, a while ago I commented on a similar question. [You could have a look here](_URL_0_), perhaps it could be useful to you as well?
|
[
"\"The Education\" is an important work of American literary nonfiction. It provides a penetrating glimpse into the intellectual and political life of the late 19th century. The Modern Library placed it first in a list of the top 100 English-language nonfiction books of the 20th century.\n",
"The history of education in the United States, or foundations of education, covers the trends in educational philosophy, policy, institutions, as well as formal and informal learning in America from the 17th century to today.\n",
"BULLET::::- History of Education: Selected Moments of the 20th Century - \"1901– Francis W. Parker progressive school opens\", A work in progress edited by Daniel Schugurensky, Department of Adult Education, Community Development and Counselling Psychology, The Ontario Institute for Studies in Education of the University of Toronto (OISE/UT)\n",
"The book was a history textbook, intended for \"the intermediate grades of the elementary schools\". The book focused on the history of European discovery and colonization of the rest of the world, with a particular focus on the achievements of the British Empire. The book was divided into seven chapters:\n",
"The History of Education Quarterly is an international quarterly peer-reviewed academic journal dedicated to publishing high-quality scholarship in the history of education. It is the official journal of the field's leading professional society in the United States, the History of Education Society, and has been published since 1960. It is published by Cambridge University Press on behalf of the society.\n",
"The National Bureau of Economic Research published a data series with an overview of the history of education in the United States leading up to the 20th and 21st centuries. It stated that \"formal education, especially basic literacy, is essential for a well-functioning democracy, and enhances citizenship and community.\"\n",
"The \"Historia Scholastica\" was a required part of the core curriculum at the University of Paris, Oxford and other universities, and a significant secondary source of popular biblical knowledge from its completion around 1173 through the fifteenth century, although after about 1350 it was gradually supplanted by newer works. It was translated into every major Western European vernacular of the period. Numerous paraphrases and abridgements were produced, in Latin and vernacular languages.\n"
] |
Why are salts of hydrochloric acid and organic bases called "hydrochlorides" instead of "chlorides"?
|
Acid-base reactions don't necessarily produce water; that is only true in the simplest case, Arrhenius theory, in which the acid always provides a proton and the base a hydroxide ion. A more general theory of acid-base reactions is [Bronsted theory](_URL_0_), which still defines an acid as a species that can release a proton, but defines a base as any species that can accept a proton.
Take the example of pyridine mentioned in the Wiki article. When you react pyridine with hydrochloric acid, the basic nitrogen atom of the ring becomes protonated (the hydrogen tacks on) to give C5H5N-H^+, and the chloride anion then serves as the counterion to maintain charge neutrality, so in the end you have the salt C5H5N-H^+ Cl^- .
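Written out as a single equation (just a compact restatement of the reaction described above; nothing new is being claimed):

```latex
% pyridine + hydrochloric acid -> pyridinium chloride ("pyridine hydrochloride")
\mathrm{C_5H_5N} \;+\; \mathrm{HCl} \;\longrightarrow\; \mathrm{C_5H_5NH^{+}}\,\mathrm{Cl^{-}}
```

Note that no water is formed, which is exactly why the Bronsted picture, rather than the Arrhenius one, is the useful way to think about these "hydrochloride" salts.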
|
[
"Rather than being neutral (as some other salts), alkali salts are bases as their name suggests. What makes these compounds basic is that the conjugate base from the weak acid hydrolyzes to form a basic solution. In sodium carbonate, for example, the carbonate from the carbonic acid hydrolyzes to form a basic solution. The chloride from the hydrochloric acid in sodium chloride does not hydrolyze, though, so sodium chloride is not basic.\n",
"Some alkaloids are more stable as ionic salts than as free base. The salts usually exhibit greater water solubility. Common counterions include chloride, bromide, sulfate, phosphate, nitrate, acetate, oxalate, citrate, and tartrate. Ammonium salts formed from the acid-base reaction with hydrochloric acid are known as hydrochlorides. For example, compare the free base hydroxylamine (NHOH) with the salt hydroxylamine hydrochloride (NHOH Cl).\n",
"The chloride ion is the anion (negatively charged ion) Cl. It is formed when the element chlorine (a halogen) gains an electron or when a compound such as hydrogen chloride is dissolved in water or other polar solvents. Chloride salts such as sodium chloride are often very soluble in water. It is an essential electrolyte located in all body fluids responsible for maintaining acid/base balance, transmitting nerve impulses and regulating fluid in and out of cells. Less frequently, the word \"chloride\" may also form part of the \"common\" name of chemical compounds in which one or more chlorine atoms are covalently bonded. For example, methyl chloride, with the standard name chloromethane (see IUPAC books) is an organic compound with a covalent C−Cl bond in which the chlorine is not an anion.\n",
"A chloride ion is much larger than a chlorine atom, 167 and 99 pm, respectively. The ion is colorless and diamagnetic. In aqueous solution, it is highly soluble in most cases; however, some chloride salts, such as silver chloride, lead(II) chloride, and mercury(I) chloride are slightly soluble in water. In aqueous solution, chloride is bound by the protic end of the water molecules.\n",
"In its properties hydrazoic acid shows some analogy to the halogen acids, since it forms poorly soluble (in water) lead, silver and mercury(I) salts. The metallic salts all crystallize in the anhydrous form and decompose on heating, leaving a residue of the pure metal. It is a weak acid (p\"K\" = 4.75.) Its heavy metal salts are explosive and readily interact with the alkyl iodides. Azides of heavier alkali metals (excluding lithium) or alkaline earth metals are not explosive, but decompose in a more controlled way upon heating, releasing spectroscopically-pure gas. Solutions of hydrazoic acid dissolve many metals (e.g. zinc, iron) with liberation of hydrogen and formation of salts, which are called azides (formerly also called azoimides or hydrazoates).\n",
"The salt forms crystalline hydrates, unlike the other alkali metal chlorides. Mono-, tri-, and pentahydrates are known. The anhydrous salt can be regenerated by heating the hydrates. LiCl also absorbs up to four equivalents of ammonia/mol. As with any other ionic chloride, solutions of lithium chloride can serve as a source of chloride ion, e.g., forming a precipitate upon treatment with silver nitrate:\n",
"An important question in discussing the role of the acid is whether the \"N\"-haloamine reacts in the free base or the salt form in the initiation step. Based on the pK values of the conjugate acids of 2° alkyl amines (which are generally in the range 10–11), it is evident that \"N\"-chloroamines exist largely as salts in a solution of high sulfuric acid concentration. As a result, in the case of chemical or thermal initiation, it is reasonable to assume that it is the \"N\"-chloroammonium ion which affords the ammonium free radical. The situation changes, however, when the reaction is initiated upon irradiation with UV light. The radiation must be absorbed and the quantum of the incident light must be large enough to dissociate the N-Cl bond in order for a photochemical reaction to occur. Because the conjugate acids of the \"N\"-chloroamines have no appreciable UV absorption above 225 mμ, whereas the free \"N\"-chloroamine absorb UV light of sufficient energy to cause dissociation (λ 263 mμ, ε 300), E. J. Corey postulated that in this case it is actually the small percentage of free \"N\"-chloroamine that is responsible for most of the initiation. It was also suggested that the newly generated neutral nitrogen radical is immediately protonated. However, it is important to realize that an alternative scenario might be in operation when the reaction is initiated with the UV light; namely, the free \"N\"-haloamine might not undergo dissociation upon irradiation, but it might function as a photosensitizer instead. \n"
] |