Dataset schema: id (int64, 580 to 79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3–51.8k).
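As a hedged illustration of how a dataset with this schema might be consumed (the dataset path below is hypothetical, invented for the example; the column names are the ones listed above):

```python
# Minimal sketch: reading records with the schema above via the
# Hugging Face `datasets` library. The dataset path is hypothetical.
from datasets import load_dataset

ds = load_dataset("example-org/wiki-text-dump", split="train")  # assumed name

for row in ds.select(range(3)):
    # Each record carries: id, url, text, source, categories, token_count.
    print(row["id"], row["categories"], row["token_count"], row["url"])
```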
36,555,728
https://en.wikipedia.org/wiki/ION%20LMD
The ION LMD system is a laser microdissection device based on the Gravity-Assisted Microdissection (GAM) method. This non-contact laser microdissection system makes it possible to isolate cells for further genetic analysis. It was the first laser microdissection system developed in Asia. History A prototype of the ION LMD system was built in 2004. The first generation of ION LMD was developed in 2005, the second generation (known as G2) in 2008, and the third generation (known as ION LMD Pro) in 2012. Manufacturer JungWoo F&B was founded in 1994 and offers various factory automation products for clients in the semiconductor, consumer electronics, LCD, automotive manufacturing, and shipbuilding industries. In 2003, the company entered the bio-mechanics business for the medical laboratory market and developed the ION LMD system, which is used in cancer research. Awards The ION LMD system has received several awards. 2005 Excellent Machine by the Ministry of Commerce, Industry and Energy, Republic of Korea 2005 Best Medical Device by the Korean Medical Association 2006 New Excellent Product by the Ministry of Commerce, Industry and Energy, Republic of Korea References Biological techniques and tools
ION LMD
Biology
254
53,593,107
https://en.wikipedia.org/wiki/Shortcuts%20%28Apple%29
Shortcuts (formerly Workflow) is a visual scripting application developed by Apple and provided on its iOS, iPadOS, macOS, and watchOS operating systems. It allows users to create macros for executing specific tasks and automations on their device(s). These task sequences can be created by the user and shared online through iCloud. A number of curated shortcuts can also be downloaded from the integrated Gallery. Shortcuts are activated manually through the app, shortcut widgets, the share sheet, and Siri. They can also be automated to trigger after an event, such as a time of day, leaving a set location, or opening an app. Shortcuts was originally created by DeskConnect, Inc. (Ari Weinstein, Conrad Kramer, Veeral Patel, and Nick Frey) for the MHacks Winter 2014 competition, where it was awarded first place for Best iOS App. History Workflow originally began as a project at the University of Michigan's MHacks. In 2015, Workflow received an Apple Design Award for its integration with iOS accessibility features such as VoiceOver. On March 22, 2017, Apple acquired Workflow for an undisclosed amount. Following the purchase, the software was made available for free. An accompanying update also changed some of the service providers used within the app to those owned or preferred by Apple, such as Apple Maps and Microsoft Translator, and closed submissions of workflows to the Gallery. On September 17, 2018, alongside iOS 12, the Workflow app became the Shortcuts app, which can run shortcuts through Siri. The app had been announced on June 4, 2018, at WWDC 2018. On September 19, 2019, with the public launch of iOS 13, the Shortcuts app became a default app installed on all iOS 13 devices. On June 7, 2021, at WWDC 2021, a desktop version of the Shortcuts app was announced for macOS. On September 12, 2022, Apple launched an update to the Shortcuts app as part of iOS 16. This update included the ability to integrate the Siri voice assistant with Shortcuts. URL scheme Shortcuts also supports a URL scheme, which works as follows: shortcuts:// opens the Shortcuts app. shortcuts://gallery opens the Gallery of shortcuts premade by Apple. shortcuts://gallery/search?query=[search words] searches the Gallery for the given words; URL encoding is required to use multiple search words. shortcuts://create-shortcut creates a new shortcut and opens it in the editor. shortcuts://open-shortcut?name=[name] opens the shortcut with the specified name in the editor; URL encoding is required. shortcuts://run-shortcut?name=[name]&input=[see note]&text=[see note] executes a shortcut, optionally with an input. Note: the parameters "input" and "text" are optional. If "clipboard" is passed to input, the clipboard is passed as input. If "text" is passed to input, the text stored in the "text" parameter is passed as input; here too, URL encoding is required (a short sketch of constructing such a URL follows this entry). See also AppleScript Automator References External links Workflow Website & Documentation Automation software Apple Inc. acquisitions IOS software MacOS IOS-based software made by Apple Inc.
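To make the URL-encoding requirement above concrete, here is a minimal Python sketch that builds a run-shortcut URL. The shortcut name and text value are hypothetical; the parameter names (name, input, text) are those documented in the article.

```python
from urllib.parse import quote

# Hypothetical shortcut name and text input; both must be URL-encoded.
name = quote("Log Water Intake")
text = quote("250 ml")

# Per the scheme above: input=text tells Shortcuts to read the "text"
# parameter and pass its value as the shortcut's input.
url = f"shortcuts://run-shortcut?name={name}&input=text&text={text}"
print(url)  # shortcuts://run-shortcut?name=Log%20Water%20Intake&input=text&text=250%20ml
```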
Shortcuts (Apple)
Engineering
732
66,391
https://en.wikipedia.org/wiki/Stimulant
Stimulants (also known as central nervous system stimulants or psychostimulants, and colloquially as uppers) are a class of drugs that increase alertness. They are used for various purposes, such as enhancing attention, motivation, cognition, mood, and physical performance. Some stimulants occur naturally, while others are exclusively synthetic. Common stimulants include caffeine, nicotine, amphetamines, cocaine, methylphenidate, and modafinil. Stimulants may be subject to varying forms of regulation, or outright prohibition, depending on jurisdiction. Stimulants increase activity in the sympathetic nervous system, either directly or indirectly. Prototypical stimulants increase synaptic concentrations of excitatory neurotransmitters, particularly norepinephrine and dopamine (e.g., methylphenidate). Other stimulants work by binding to the receptors of excitatory neurotransmitters (e.g., nicotine) or by blocking the activity of endogenous agents that promote sleep (e.g., caffeine). Stimulants can affect various functions, including arousal, attention, the reward system, learning, memory, and emotion. Effects range from mild stimulation to euphoria, depending on the specific drug, dose, route of administration, and inter-individual characteristics. Stimulants have a long history of use, both for medical and non-medical purposes. Archeological evidence from Peru shows that cocaine use dates back as far as 8000 B.C.E. Stimulants have been used to treat various conditions, such as narcolepsy, attention deficit hyperactivity disorder (ADHD), obesity, depression, and fatigue. They have also been used as recreational drugs, performance-enhancing substances, and cognitive enhancers by various groups of people, such as students, athletes, artists, and workers. They have also been used to promote aggression in combatants in wartime, both historically and in the present day. Stimulants have potential risks and side effects, such as addiction, tolerance, withdrawal, psychosis, anxiety, insomnia, cardiovascular problems, and neurotoxicity. The misuse and abuse of stimulants can lead to serious health and social consequences, such as overdose, dependence, crime, and violence. Therefore, the use of stimulants is regulated by laws and policies in most countries, and requires medical supervision and prescription in some cases. Definition "Stimulant" is an overarching term that covers many drugs, including those that increase the activity of the central nervous system and the body, drugs that are pleasurable and invigorating, and drugs that have sympathomimetic effects. Sympathomimetic effects are those that mimic the actions of the sympathetic nervous system, the part of the nervous system that prepares the body for action, for example by increasing the heart rate, blood pressure, and breathing rate. Stimulants can activate the same receptors as the natural chemicals released by the sympathetic nervous system (namely epinephrine and norepinephrine) and cause similar effects. Effects Acute Stimulants in therapeutic doses, such as those given to patients with attention deficit hyperactivity disorder (ADHD), increase the ability to focus, vigor, sociability, and libido, and may elevate mood. However, in higher doses, stimulants may actually decrease the ability to focus, consistent with the Yerkes–Dodson law. In higher doses, stimulants may also produce euphoria, vigor, and a decreased need for sleep. 
Many, but not all, stimulants have ergogenic effects; that is, they enhance physical performance. Drugs such as ephedrine, pseudoephedrine, amphetamine and methylphenidate have well-documented ergogenic effects, while cocaine has the opposite effect. Neurocognitive enhancing effects of stimulants, specifically modafinil, amphetamine and methylphenidate, have been reported in healthy adolescents by some studies, and are a commonly cited reason for use among illicit drug users, particularly among college students in the context of studying. Still, the results of these studies are inconclusive: assessing the potential overall neurocognitive benefits of stimulants among healthy youth is challenging due to the diversity within the population, the variability in cognitive task characteristics, and the absence of replication of studies. Research on the cognitive enhancement effects of modafinil in healthy non-sleep-deprived individuals has yielded mixed results, with some studies suggesting modest improvements in attention and executive functions while others show no significant benefits or even a decline in cognitive functions. In some cases, psychiatric phenomena may emerge, such as stimulant psychosis, paranoia, and suicidal ideation. Acute toxicity has been reportedly associated with hyperhidrosis, panic attacks, severe anxiety, mydriasis, paranoia, aggressive behavior, excessive motor activity, psychosis, rhabdomyolysis, and punding. The violent and aggressive behavior associated with acute stimulant toxicity may partially be driven by paranoia. Most drugs classified as stimulants are sympathomimetic, meaning that they stimulate the sympathetic branch of the autonomic nervous system. This leads to effects such as mydriasis (dilation of the pupils) and increased heart rate, blood pressure, respiratory rate and body temperature. When these changes become pathological, they are called arrhythmia, hypertension, and hyperthermia, and may lead to rhabdomyolysis, stroke, cardiac arrest, or seizures. However, given the complexity of the mechanisms that underlie these potentially fatal outcomes of acute stimulant toxicity, it is impossible to determine what dose may be lethal. Chronic Assessment of the long-term effects of stimulants is relevant given the large population currently taking them. A systematic review of the cardiovascular effects of prescription stimulants found no association in children, but did find a correlation between prescription stimulant use and ischemic heart attacks. A review over a four-year period found that there were few negative effects of stimulant treatment, but stressed the need for longer-term studies. A review of a year-long period of prescription stimulant use in those with ADHD found that cardiovascular side effects were limited to transient increases in blood pressure. However, a 2024 systematic review of the evidence found that stimulants overall improve ADHD symptoms and broadband behavioral measures in children and adolescents, though they carry risks of side effects like appetite suppression and other adverse events. Initiation of stimulant treatment in those with ADHD in early childhood appears to carry benefits into adulthood with regard to social and cognitive functioning, and appears to be relatively safe. Abuse of prescription stimulants (not following physician instruction) or of illicit stimulants carries many negative health risks. Abuse of cocaine, depending upon route of administration, increases the risk of cardiorespiratory disease, stroke, and sepsis. 
Some effects are dependent upon the route of administration, with intravenous use associated with the transmission of many diseases, such as hepatitis C and HIV/AIDS, and potential medical emergencies such as infection, thrombosis or pseudoaneurysm, while inhalation may be associated with increased lower respiratory tract infection, lung cancer, and pathological restriction of lung tissue. Cocaine may also increase the risk of autoimmune disease and damage nasal cartilage. Abuse of methamphetamine produces similar effects as well as marked degeneration of dopaminergic neurons, resulting in an increased risk for Parkinson's disease. Medical uses Stimulants are widely used throughout the world as prescription medicines as well as without a prescription (either legally or illicitly) as performance-enhancing or recreational drugs. Stimulants produce a noticeable crash or comedown at the end of their effects. In the US, the most frequently prescribed stimulants as of 2013 were lisdexamfetamine (Vyvanse), methylphenidate (Ritalin), and amphetamine (Adderall). It was estimated in 2015 that 0.4% of the world population had used cocaine during that year. For the category "amphetamines and prescription stimulants" (with "amphetamines" including amphetamine and methamphetamine) the value was 0.7%, and for MDMA 0.4%. Stimulants have been used in medicine for many conditions, including obesity, sleep disorders, mood disorders, impulse control disorders, asthma, nasal congestion and, in the case of cocaine, as local anesthetics. Drugs used to treat obesity are called anorectics and generally include drugs that follow the general definition of a stimulant, but other drugs such as cannabinoid receptor antagonists also belong to this group. Eugeroics are used in the management of sleep disorders characterized by excessive daytime sleepiness, such as narcolepsy, and include stimulants such as modafinil and pitolisant. Stimulants are used in impulse control disorders such as ADHD and off-label in mood disorders such as major depressive disorder to increase energy, focus and elevate mood. Stimulants such as epinephrine, theophylline and oral salbutamol have been used to treat asthma, but inhaled adrenergic drugs are now preferred due to fewer systemic side effects. Pseudoephedrine is used to relieve nasal or sinus congestion caused by the common cold, sinusitis, hay fever and other respiratory allergies; it is also used to relieve ear congestion caused by ear inflammation or infection. Depression Stimulants were one of the first classes of drugs to be used in the treatment of depression, beginning after the introduction of the amphetamines in the 1930s. However, they were largely abandoned for this use following the introduction of conventional antidepressants in the 1950s. There has since been a resurgence of interest in stimulants for depression. Stimulants produce a fast-acting and pronounced but transient and short-lived mood lift. Relatedly, they are minimally effective in the treatment of depression when administered continuously. In addition, tolerance to the mood-lifting effects of amphetamine has led to dose escalation and dependence. Although the efficacy for depression with continuous administration is modest, it may still reach statistical significance over placebo and provide benefits similar in magnitude to those of conventional antidepressants. 
The reasons for the short-term mood-improving effects of stimulants are unclear, but may relate to rapid tolerance. Tolerance to the effects of stimulants has been studied and characterized both in animals and humans. Stimulant withdrawal is remarkably similar in its symptoms to those of major depressive disorder. Chemistry Classifying stimulants is difficult because of the large number of classes the drugs occupy and the fact that they may belong to multiple classes; for example, ecstasy can be classified as a substituted methylenedioxyphenethylamine, a substituted amphetamine and, consequently, a substituted phenethylamine. Major stimulant classes include the phenethylamines and their daughter class, the substituted amphetamines. Amphetamines (class) Substituted amphetamines are a class of compounds based upon the amphetamine structure; it includes all derivative compounds which are formed by replacing, or substituting, one or more hydrogen atoms in the amphetamine core structure with substituents. Examples of substituted amphetamines are amphetamine (itself), methamphetamine, ephedrine, cathinone, phentermine, mephentermine, bupropion, methoxyphenamine, selegiline, amfepramone, pyrovalerone, MDMA (ecstasy), and DOM (STP). Many drugs in this class work primarily by activating trace amine-associated receptor 1 (TAAR1); in turn, this causes reuptake inhibition and efflux, or release, of dopamine, norepinephrine, and serotonin. An additional mechanism of some substituted amphetamines is the release of vesicular stores of monoamine neurotransmitters through VMAT2, thereby increasing the concentration of these neurotransmitters in the cytosol, or intracellular fluid, of the presynaptic neuron. Amphetamine-type stimulants are often used for their therapeutic effects. Physicians sometimes prescribe amphetamine to treat major depression in patients who do not respond well to traditional SSRI medications, but the evidence supporting this use is poor or mixed. Notably, two recent large phase III studies of lisdexamfetamine (a prodrug of amphetamine) as an adjunct to an SSRI or SNRI in the treatment of major depressive disorder showed no further benefit relative to placebo in effectiveness. Numerous studies have demonstrated the effectiveness of drugs such as Adderall (a mixture of salts of amphetamine and dextroamphetamine) in controlling symptoms associated with ADHD. Due to their availability and fast-acting effects, substituted amphetamines are prime candidates for abuse. Cocaine analogs Hundreds of cocaine analogs have been created, usually maintaining a benzoyloxy group connected to the 3-carbon of a tropane. Various modifications include substitutions on the benzene ring, as well as additions or substitutions in place of the normal carboxylate on the tropane 2-carbon. Various compounds with similar structure–activity relationships to cocaine, though not technically analogs, have been developed as well. Mechanisms of action Most stimulants exert their activating effects by enhancing catecholamine neurotransmission. Catecholamine neurotransmitters are employed in regulatory pathways implicated in attention, arousal, motivation, task salience and reward anticipation. Classical stimulants either block the reuptake or stimulate the efflux of these catecholamines, resulting in increased activity of their circuits. Some stimulants, specifically those with empathogenic and hallucinogenic effects, also affect serotonergic transmission. 
Some stimulants, such as some amphetamine derivatives and, notably, yohimbine, can decrease negative feedback by antagonizing regulatory autoreceptors. Adrenergic agonists, such as, in part, ephedrine, act by directly binding to and activating adrenergic receptors, producing sympathomimetic effects. There are also more indirect mechanisms of action by which a drug can elicit activating effects. Caffeine is an adenosine receptor antagonist, and only indirectly increases catecholamine transmission in the brain. Pitolisant is a histamine 3 (H3) receptor inverse agonist. As histamine 3 (H3) receptors mainly act as autoreceptors, pitolisant decreases negative feedback to histaminergic neurons, enhancing histaminergic transmission. The precise mechanism of action of some stimulants, such as modafinil, in treating symptoms of narcolepsy and other sleep disorders remains unknown. Notable stimulants Amphetamine Amphetamine is a potent central nervous system (CNS) stimulant of the phenethylamine class that is approved for the treatment of attention deficit hyperactivity disorder (ADHD) and narcolepsy. Amphetamine is also used off-label as a performance and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. Although it is a prescription medication in many countries, unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with uncontrolled or heavy use. As a consequence, amphetamine is illegally manufactured in clandestine labs to be trafficked and sold to users. Based upon drug and drug precursor seizures worldwide, illicit amphetamine production and trafficking is much less prevalent than that of methamphetamine. The first pharmaceutical amphetamine was Benzedrine, a brand of inhalers used to treat a variety of conditions. Because the dextrorotatory isomer has greater stimulant properties, Benzedrine was gradually discontinued in favor of formulations containing all or mostly dextroamphetamine. Presently, it is typically prescribed as mixed amphetamine salts, dextroamphetamine, and lisdexamfetamine. Amphetamine is a norepinephrine-dopamine releasing agent (NDRA). It enters neurons through dopamine and norepinephrine transporters and facilitates neurotransmitter efflux by activating TAAR1 and inhibiting VMAT2. At therapeutic doses, this causes emotional and cognitive effects such as euphoria, change in libido, increased arousal, and improved cognitive control. Likewise, it induces physical effects such as decreased reaction time, fatigue resistance, and increased muscle strength. In contrast, supratherapeutic doses of amphetamine are likely to impair cognitive function and induce rapid muscle breakdown. Very high doses can result in psychosis (e.g., delusions and paranoia), which very rarely occurs at therapeutic doses even during long-term use. As recreational doses are generally much larger than prescribed therapeutic doses, recreational use carries a far greater risk of serious side effects, such as dependence, which only rarely arises with therapeutic amphetamine use. Caffeine Caffeine is a stimulant compound belonging to the xanthine class of chemicals, naturally found in coffee, tea, and (to a lesser degree) cocoa and chocolate. It is included in many soft drinks, and in larger amounts in energy drinks. Caffeine is the world's most widely used psychoactive drug and by far the most common stimulant. In North America, 90% of adults consume caffeine daily. 
A few jurisdictions restrict the sale and use of caffeine. In the United States, the FDA has banned the sale of pure and highly concentrated caffeine products for personal consumption, due to the risk of overdose and death. The Australian Government has announced a ban on the sale of pure and highly concentrated caffeine food products for personal consumption, following the death of a young man from acute caffeine toxicity. In Canada, Health Canada has proposed to limit the amount of caffeine in energy drinks to 180 mg per serving, and to require warning labels and other safety measures on these products. Caffeine is also included in some medications, usually for the purpose of enhancing the effect of the primary ingredient or reducing one of its side effects (especially drowsiness). Tablets containing standardized doses of caffeine are also widely available. Caffeine's mechanism of action differs from that of many stimulants, as it produces stimulant effects by inhibiting adenosine receptors. Adenosine receptors are thought to be a large driver of drowsiness and sleep, and their action increases with extended wakefulness. Caffeine has been found to increase striatal dopamine in animal models, as well as to inhibit the inhibitory effect of adenosine receptors on dopamine receptors; however, the implications for humans are unknown. Unlike most stimulants, caffeine has no addictive potential. Caffeine does not appear to be a reinforcing stimulus, and some degree of aversion may actually occur; a study on drug abuse liability published in an NIDA research monograph described a group preferring placebo over caffeine. In large telephone surveys, only 11% reported dependence symptoms. However, when people were tested in labs, only half of those who claimed dependence actually experienced it, casting doubt on caffeine's ability to produce dependence and putting societal pressures in the spotlight. Coffee consumption is associated with a lower overall risk of cancer. This is primarily due to a decrease in the risks of hepatocellular and endometrial cancer, but it may also have a modest effect on colorectal cancer. There does not appear to be a significant protective effect against other types of cancers, and heavy coffee consumption may increase the risk of bladder cancer. A protective effect of caffeine against Alzheimer's disease is possible, but the evidence is inconclusive. Moderate coffee consumption may decrease the risk of cardiovascular disease, and it may somewhat reduce the risk of type 2 diabetes. Drinking 1–3 cups of coffee per day does not affect the risk of hypertension compared to drinking little or no coffee. However, those who drink 2–4 cups per day may be at a slightly increased risk. Caffeine increases intraocular pressure in those with glaucoma but does not appear to affect normal individuals. It may protect people from liver cirrhosis. There is no evidence that coffee stunts a child's growth. Caffeine may increase the effectiveness of some medications, including ones used to treat headaches. Caffeine may lessen the severity of acute mountain sickness if taken a few hours prior to attaining a high altitude. Ephedrine Ephedrine is a sympathomimetic amine similar in molecular structure to the well-known drugs phenylpropanolamine and methamphetamine, as well as to the important neurotransmitter epinephrine (adrenaline). Ephedrine is commonly used as a stimulant, appetite suppressant, concentration aid, and decongestant, and to treat hypotension associated with anesthesia. 
In chemical terms, it is an alkaloid with a phenethylamine skeleton found in various plants in the genus Ephedra (family Ephedraceae). It works mainly by increasing the activity of norepinephrine (noradrenaline) on adrenergic receptors. It is usually marketed as the hydrochloride or sulfate salt. The herb má huáng (Ephedra sinica), used in traditional Chinese medicine (TCM), contains ephedrine and pseudoephedrine as its principal active constituents. The same may be true of other herbal products containing extracts from other Ephedra species. MDMA 3,4-Methylenedioxymethamphetamine (MDMA, ecstasy, or molly) is a euphoriant, empathogen, and stimulant of the amphetamine class. Briefly used by some psychotherapists as an adjunct to therapy, the drug became popular recreationally, and the DEA listed MDMA as a Schedule I controlled substance, prohibiting most medical studies and applications. MDMA is known for its entactogenic properties. The stimulant effects of MDMA include hypertension, anorexia (appetite loss), euphoria, social disinhibition, insomnia (enhanced wakefulness/inability to sleep), improved energy, increased arousal, and increased perspiration, among others. Relative to classical stimulants like amphetamine, MDMA enhances serotonergic transmission significantly more than catecholaminergic transmission. MDMA does not appear to be significantly addictive or dependence-forming. Due to the relative safety of MDMA, some researchers such as David Nutt have criticized its scheduling level, writing a satirical article that found MDMA to be 28 times less dangerous than horse riding, whose risks he dubbed "equasy" or "Equine Addiction Syndrome". MDPV Methylenedioxypyrovalerone (MDPV) is a psychoactive drug with stimulant properties that acts as a norepinephrine-dopamine reuptake inhibitor (NDRI). It was first developed in the 1960s by a team at Boehringer Ingelheim. MDPV remained an obscure stimulant until around 2004, when it was reported to be sold as a designer drug. Products labeled as bath salts containing MDPV were previously sold as recreational drugs in gas stations and convenience stores in the United States, similar to the marketing of Spice and K2 as incense. Incidents of psychological and physical harm have been attributed to MDPV use. Mephedrone Mephedrone is a synthetic stimulant drug of the amphetamine and cathinone classes. Slang names include drone and MCAT. It is reported to be manufactured in China and is chemically similar to the cathinone compounds found in the khat plant of eastern Africa. It comes in the form of tablets or a powder, which users can swallow, snort, or inject, producing effects similar to those of MDMA, amphetamines, and cocaine. Mephedrone was first synthesized in 1929, but did not become widely known until it was rediscovered in 2003. By 2007, mephedrone was reported to be available for sale on the Internet; by 2008, law enforcement agencies had become aware of the compound; and, by 2010, it had been reported in most of Europe, becoming particularly prevalent in the United Kingdom. Mephedrone was first made illegal in Israel in 2008, followed by Sweden later that year. In 2010, it was made illegal in many European countries, and, in December 2010, the EU ruled it illegal. In Australia, New Zealand, and the US, it is considered an analog of other illegal drugs and can be controlled by laws similar to the Federal Analog Act. In September 2011, the US temporarily classified mephedrone as illegal, effective from October 2011. 
Mephedrone is neurotoxic and has abuse potential; its neurotoxicity is predominantly exerted on 5-hydroxytryptamine (5-HT) terminals, mimicking that of MDMA, with which it shares similar subjective effects in users. Methamphetamine Methamphetamine (contracted from N-methylamphetamine) is a potent psychostimulant of the phenethylamine and amphetamine classes that is used to treat attention deficit hyperactivity disorder (ADHD) and obesity. Methamphetamine exists as two enantiomers, dextrorotatory and levorotatory. Dextromethamphetamine is a stronger CNS stimulant than levomethamphetamine; however, both are addictive and produce the same toxicity symptoms at high doses. Although rarely prescribed due to the potential risks, methamphetamine hydrochloride is approved by the United States Food and Drug Administration (USFDA) under the trade name Desoxyn. Recreationally, methamphetamine is used to increase sexual desire, lift the mood, and increase energy, allowing some users to engage in sexual activity continuously for several days straight. Methamphetamine may be sold illicitly, either as pure dextromethamphetamine or as an equal-parts mixture of the right- and left-handed molecules (i.e., 50% levomethamphetamine and 50% dextromethamphetamine). Both dextromethamphetamine and racemic methamphetamine are Schedule II controlled substances in the United States. The production, distribution, sale, and possession of methamphetamine are also restricted or illegal in many other countries due to its placement in Schedule II of the United Nations Convention on Psychotropic Substances treaty. In contrast, levomethamphetamine is an over-the-counter drug in the United States. In low doses, methamphetamine can cause an elevated mood and increase alertness, concentration, and energy in fatigued individuals. At higher doses, it can induce psychosis, rhabdomyolysis, and cerebral hemorrhage. Methamphetamine is known to have a high potential for abuse and addiction. Recreational use of methamphetamine may result in psychosis or lead to a post-withdrawal syndrome that can persist for months beyond the typical withdrawal period. Unlike amphetamine and cocaine, methamphetamine is neurotoxic to humans, damaging both dopamine and serotonin neurons in the central nervous system (CNS). Unlike the long-term use of amphetamine at prescription doses, which may improve function in certain brain regions in individuals with ADHD, there is evidence that long-term methamphetamine use causes brain damage in humans; this damage includes adverse changes in brain structure and function, such as reductions in gray matter volume in several brain regions and adverse changes in markers of metabolic integrity. However, recreational amphetamine doses may also be neurotoxic. Methylphenidate Methylphenidate is a stimulant drug that is often used in the treatment of ADHD and narcolepsy, and occasionally to treat obesity in combination with dietary restrictions and exercise. Its effects at therapeutic doses include increased focus, increased alertness, decreased appetite, decreased need for sleep and decreased impulsivity. Methylphenidate is not usually used recreationally, but when it is, its effects are very similar to those of amphetamines. Methylphenidate acts as a norepinephrine-dopamine reuptake inhibitor (NDRI), blocking the norepinephrine transporter (NET) and the dopamine transporter (DAT). 
Methylphenidate has a higher affinity for the dopamine transporter than for the norepinephrine transporter, and so its effects are mainly due to elevated dopamine levels caused by the inhibited reuptake of dopamine; however, increased norepinephrine levels also contribute to some of the effects caused by the drug. Methylphenidate is sold under a number of brand names including Ritalin. Other versions include the long-acting tablet Concerta and the long-acting transdermal patch Daytrana. Cocaine Cocaine is a serotonin–norepinephrine–dopamine reuptake inhibitor (SNDRI). Cocaine is made from the leaves of the coca shrub, which grows in the mountain regions of South American countries such as Bolivia, Colombia, and Peru, regions in which it was cultivated and used for centuries mainly by the Aymara people. In Europe, North America, and some parts of Asia, the most common form of cocaine is a white crystalline powder. Cocaine is a stimulant but is not normally prescribed therapeutically for its stimulant properties, although it sees clinical use as a local anesthetic, in particular in ophthalmology. Most cocaine use is recreational and its abuse potential is high (higher than that of amphetamine), and so its sale and possession are strictly controlled in most jurisdictions. Other tropane derivative drugs related to cocaine are also known, such as troparil and lometopane, but have not been widely sold or used recreationally. Nicotine Nicotine is the active chemical constituent in tobacco, which is available in many forms, including cigarettes, cigars, chewing tobacco, and smoking cessation aids such as nicotine patches, nicotine gum, and electronic cigarettes. Nicotine is used widely throughout the world for its stimulating and relaxing effects. Nicotine exerts its effects through agonism of nicotinic acetylcholine receptors, resulting in multiple downstream effects such as increased activity of dopaminergic neurons in the midbrain reward system; in addition, acetaldehyde, one of the constituents of tobacco, decreases the expression of monoamine oxidase in the brain. Nicotine is addictive and dependence-forming. Tobacco, the most common source of nicotine, has an overall harm score (to users and others) 3 percent below cocaine and 13 percent above amphetamines, ranking 6th most harmful of the 20 drugs assessed, as determined by a multi-criteria decision analysis. Phenylpropanolamine Phenylpropanolamine (PPA; Acutrim; β-hydroxyamphetamine), also known by the stereoisomer names norephedrine and norpseudoephedrine, is a psychoactive drug of the phenethylamine and amphetamine chemical classes that is used as a stimulant, decongestant, and anorectic agent. It is commonly used in prescription and over-the-counter cough and cold preparations. In veterinary medicine, it is used to control urinary incontinence in dogs under the trade names Propalin and Proin. In the United States, PPA is no longer sold without a prescription due to a possible increased risk of stroke in younger women. In a few countries in Europe, however, it is still available, either by prescription or sometimes over-the-counter. In Canada, it was withdrawn from the market on 31 May 2001. In India, human use of PPA and its formulations was banned on 10 February 2011. Lisdexamfetamine Lisdexamfetamine (Vyvanse, etc.) is an amphetamine-type medication sold for the treatment of ADHD. Its effects typically last around 14 hours. Lisdexamfetamine is inactive on its own and is metabolized into dextroamphetamine in the body. Consequently, it has a lower abuse potential. 
Pseudoephedrine Pseudoephedrine is a sympathomimetic drug of the phenethylamine and amphetamine chemical classes. It may be used as a nasal/sinus decongestant, as a stimulant, or as a wakefulness-promoting agent. The salts pseudoephedrine hydrochloride and pseudoephedrine sulfate are found in many over-the-counter preparations, either as a single ingredient or (more commonly) in combination with antihistamines, guaifenesin, dextromethorphan, and/or paracetamol (acetaminophen) or another NSAID (such as aspirin or ibuprofen). It is also used as a precursor chemical in the illegal production of methamphetamine. Catha edulis (Khat) Khat is a flowering plant native to the Horn of Africa and the Arabian Peninsula. Khat contains a monoamine alkaloid called cathinone, a "keto-amphetamine". This alkaloid causes excitement, loss of appetite, and euphoria. In 1980, the World Health Organization (WHO) classified it as a drug of abuse that can produce mild to moderate psychological dependence (less than tobacco or alcohol), although the WHO does not consider khat to be seriously addictive. It is banned in some countries, such as the United States, Canada, and Germany, while its production, sale, and consumption are legal in other countries, including Djibouti, Ethiopia, Somalia, Kenya and Yemen. Modafinil Modafinil is a eugeroic medication, which means that it promotes wakefulness and alertness. Modafinil is sold under the brand name Provigil among others. Modafinil is used to treat excessive daytime sleepiness due to narcolepsy, shift work sleep disorder, or obstructive sleep apnea. While it has seen off-label use as a purported cognitive enhancer, the research on its effectiveness for this use is not conclusive. Despite being a CNS stimulant, modafinil is considered to have very low addiction and dependence liabilities. Although modafinil shares biochemical mechanisms with stimulant drugs, it is less likely to have mood-elevating properties. The similarities of its effects with those of caffeine are not clearly established. Unlike other stimulants, modafinil does not induce a subjective feeling of pleasure or reward, which is commonly associated with euphoria, an intense feeling of well-being. Euphoria is a potential indicator of drug abuse, which is the compulsive and excessive use of a substance despite adverse consequences. In clinical trials, modafinil has shown no evidence of abuse potential, which is why it is considered to have a low risk of addiction and dependence; however, caution is advised. Pitolisant Pitolisant is an inverse agonist (antagonist) of the histamine 3 (H3) autoreceptor. As such, pitolisant is an antihistamine medication that also belongs to the class of CNS stimulants. Pitolisant is also considered a medication of the eugeroic class, which means that it promotes wakefulness and alertness. Pitolisant is the first wakefulness-promoting agent that acts by blocking the H3 autoreceptor. Pitolisant has been shown to be effective and well-tolerated for the treatment of narcolepsy with or without cataplexy. Pitolisant is the only non-controlled anti-narcoleptic drug in the US. It has shown minimal abuse risk in studies. Blocking the histamine 3 (H3) autoreceptor increases the activity of histamine neurons in the brain. The H3 autoreceptors regulate histaminergic activity in the central nervous system (and, to a lesser extent, the peripheral nervous system) by inhibiting histamine biosynthesis and release upon binding to endogenous histamine. 
By preventing the binding of endogenous histamine at the H3 receptor, as well as producing a response opposite to that of endogenous histamine at the receptor (inverse agonism), pitolisant enhances histaminergic activity in the brain. Recreational use and issues of abuse Stimulants enhance the activity of the central and peripheral nervous systems. Common effects may include increased alertness, awareness, wakefulness, endurance, productivity, and motivation, arousal, locomotion, heart rate, and blood pressure, and a diminished desire for food and sleep. Use of stimulants may cause the body to significantly reduce its production of natural body chemicals that fulfill similar functions. Once the effect of the ingested stimulant has worn off, the user may feel depressed, lethargic, confused, and miserable until the body reestablishes its normal state. This is referred to as a "crash", and may provoke reuse of the stimulant. Abuse of central nervous system (CNS) stimulants is common. Addiction to some CNS stimulants can quickly lead to medical, psychiatric, and psychosocial deterioration. Drug tolerance, dependence, and sensitization as well as a withdrawal syndrome can occur. Stimulants may be screened for in animal discrimination and self-administration models, which have high sensitivity albeit low specificity. Research using a progressive ratio self-administration protocol has found amphetamine, methylphenidate, modafinil, cocaine, and nicotine all to have a higher break point than placebo, scaling with dose, indicating reinforcing effects. A progressive ratio self-administration protocol is a way of testing how much an animal or a human wants a drug by making them do a certain action (like pressing a lever or poking a nose device) to get the drug. The number of actions needed to get the drug increases every time, so it becomes harder and harder to get the drug. The highest number of actions that the animal or human is willing to do to get the drug is called the break point; the higher the break point, the more the subject wants the drug (a toy numeric sketch of this rule follows at the end of this entry). In contrast to classical stimulants such as amphetamine, the effects of modafinil depend on what the animals or humans have to do after getting the drug. If subjects had to do a performance task, like solving a puzzle or remembering something, modafinil made them work harder for it than placebo, and they chose to self-administer it. But if they had to do a relaxation task, like listening to music or watching a video, the subjects did not want to self-administer modafinil. This suggests that modafinil is more rewarding when it helps the subject do something better or faster, especially considering that modafinil is not commonly abused or depended on by people, unlike other stimulants. Treatment for misuse Psychosocial treatments, such as contingency management, have demonstrated improved effectiveness when added to treatment as usual consisting of counseling and/or case management. This is demonstrated by a decrease in dropout rates and a lengthening of periods of abstinence. Testing The presence of stimulants in the body may be tested for by a variety of procedures. Serum and urine are the common sources of testing material, although saliva is sometimes used. Commonly used tests include chromatography, immunologic assay, and mass spectrometry. 
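The break-point rule described in this entry can be made concrete with a toy simulation. This is a minimal sketch with invented numbers (the starting ratio, growth rate, and effort ceilings are assumptions), not an experimental protocol from the article:

```python
# Toy progressive-ratio session: the response requirement escalates each
# trial; the break point is the last ratio the subject completes.
def break_point(effort_ceiling: int, start: int = 5, growth: float = 1.7) -> int:
    requirement, last_completed = start, 0
    while requirement <= effort_ceiling:
        last_completed = requirement                  # this ratio is completed
        requirement = int(requirement * growth) + 1   # next trial costs more
    return last_completed

# A drug that raises motivation (modeled as a higher effort ceiling) yields
# a higher break point; doses are compared this way in the studies above.
for ceiling in (50, 200, 800):  # hypothetical ceilings: placebo, low, high dose
    print(f"ceiling {ceiling} -> break point {break_point(ceiling)}")
```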
See also Antidepressants Depressants Hallucinogens Nootropics Psychoanaleptics Notes References External links Asia & Pacific Amphetamine-Type Stimulants Information Centre (APAIC) Drug classes defined by psychological effects Psychopharmacology
Stimulant
Chemistry
8,578
1,205,989
https://en.wikipedia.org/wiki/Spherical%203-manifold
In mathematics, a spherical 3-manifold M is a 3-manifold of the form $M = S^3/\Gamma$, where $\Gamma$ is a finite subgroup of O(4) acting freely by rotations on the 3-sphere $S^3$. All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds. Properties A special case of the Bonnet–Myers theorem says that every smooth manifold which has a smooth Riemannian metric which is both geodesically complete and of constant positive curvature must be closed and have finite fundamental group. William Thurston's elliptization conjecture, proven by Grigori Perelman using Richard Hamilton's Ricci flow, states a converse: every closed three-dimensional manifold with finite fundamental group has a smooth Riemannian metric of constant positive curvature. (This converse is special to three dimensions.) As such, the spherical three-manifolds are precisely the closed 3-manifolds with finite fundamental group. According to Synge's theorem, every spherical 3-manifold is orientable, and in particular $\Gamma$ must be contained in SO(4). The fundamental group is either cyclic, or is a central extension of a dihedral, tetrahedral, octahedral, or icosahedral group by a cyclic group of even order. This divides the set of such manifolds into five classes, described in the following sections. The spherical manifolds are exactly the manifolds with spherical geometry, one of the eight geometries of Thurston's geometrization conjecture. Cyclic case (lens spaces) The manifolds $S^3/\Gamma$ with $\Gamma$ cyclic are precisely the 3-dimensional lens spaces. A lens space is not determined by its fundamental group (there are non-homeomorphic lens spaces with isomorphic fundamental groups); but any other spherical manifold is. Three-dimensional lens spaces arise as quotients of $S^3 \subset \mathbb{C}^2$ by the action of the group generated by elements of the form $(z_1, z_2) \mapsto (e^{2\pi i/p} z_1, e^{2\pi i q/p} z_2)$, where $\gcd(q, p) = 1$. Such a lens space $L(p, q)$ has fundamental group $\mathbb{Z}/p\mathbb{Z}$ for all $q$, so spaces with different $p$ are not homotopy equivalent. Moreover, classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces $L(p, q_1)$ and $L(p, q_2)$ are: homotopy equivalent if and only if $q_1 q_2 \equiv \pm n^2 \pmod{p}$ for some $n \in \mathbb{N}$; homeomorphic if and only if $q_1 \equiv \pm q_2^{\pm 1} \pmod{p}$. In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic (this is verified in the sketch following this entry). The lens space L(1,0) is the 3-sphere, and the lens space L(2,1) is 3-dimensional real projective space. Lens spaces can be represented as Seifert fiber spaces in many ways, usually as fiber spaces over the 2-sphere with at most two exceptional fibers, though the lens space with fundamental group of order 4 also has a representation as a Seifert fiber space over the projective plane with no exceptional fibers. Dihedral case (prism manifolds) A prism manifold is a closed 3-dimensional manifold M whose fundamental group is a central extension of a dihedral group. The fundamental group π1(M) of M is a product of a cyclic group of order m with a group having presentation $\langle x, y \mid xyx^{-1} = y^{-1}, x^{2^k} = y^n \rangle$ for integers k, m, n with k ≥ 1, m ≥ 1, n ≥ 2 and m coprime to 2n. Alternatively, the fundamental group has presentation $\langle x, y \mid xyx^{-1} = y^{-1}, x^{2m} = y^n \rangle$ for coprime integers m, n with m ≥ 1, n ≥ 2. (The n here equals the previous n, and the m here is $2^{k-1}$ times the previous m.) We continue with the latter presentation. This group is a metacyclic group of order 4mn with abelianization of order 4m (so m and n are both determined by this group). The element y generates a cyclic normal subgroup of order 2n, and the element x has order 4m. 
The center is cyclic of order 2m and is generated by $x^2$, and the quotient by the center is the dihedral group of order 2n. When m = 1 this group is a binary dihedral or dicyclic group. The simplest example is m = 1, n = 2, when π1(M) is the quaternion group of order 8. Prism manifolds are uniquely determined by their fundamental groups: if a closed 3-manifold has the same fundamental group as a prism manifold M, it is homeomorphic to M. Prism manifolds can be represented as Seifert fiber spaces in two ways. Tetrahedral case The fundamental group is a product of a cyclic group of order m with a group having presentation $\langle x, y, z \mid (xy)^2 = x^2 = y^2, \, zxz^{-1} = y, \, zyz^{-1} = xy, \, z^{3^k} = 1 \rangle$ for integers k, m with k ≥ 1, m ≥ 1 and m coprime to 6. Alternatively, the fundamental group has presentation $\langle x, y, z \mid (xy)^2 = x^2 = y^2, \, zxz^{-1} = y, \, zyz^{-1} = xy, \, z^{3m} = 1 \rangle$ for an odd integer m ≥ 1. (The m here is $3^{k-1}$ times the previous m.) We continue with the latter presentation. This group has order 24m. The elements x and y generate a normal subgroup isomorphic to the quaternion group of order 8. The center is cyclic of order 2m. It is generated by the elements $z^3$ and $x^2 = y^2$, and the quotient by the center is the tetrahedral group, equivalently, the alternating group A4. When m = 1 this group is the binary tetrahedral group. These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 3. Octahedral case The fundamental group is a product of a cyclic group of order m coprime to 6 with the binary octahedral group (of order 48), which has the presentation $\langle x, y \mid (xy)^2 = x^3 = y^4 \rangle$. These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 4. Icosahedral case The fundamental group is a product of a cyclic group of order m coprime to 30 with the binary icosahedral group (of order 120), which has the presentation $\langle x, y \mid (xy)^2 = x^3 = y^5 \rangle$. When m is 1, the manifold is the Poincaré homology sphere. These manifolds are uniquely determined by their fundamental groups. They can all be represented in an essentially unique way as Seifert fiber spaces: the quotient manifold is a sphere and there are 3 exceptional fibers of orders 2, 3, and 5. References Peter Orlik, Seifert manifolds, Lecture Notes in Mathematics, vol. 291, Springer-Verlag (1972). William Jaco, Lectures on 3-manifold topology. William Thurston, Three-dimensional geometry and topology. Vol. 1. Edited by Silvio Levy. Princeton Mathematical Series, 35. Princeton University Press, Princeton, New Jersey, 1997. Geometric topology Riemannian geometry Group theory 3-manifolds
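As a worked check of the lens-space criteria stated above (a sketch applying those criteria, formatted in LaTeX; the arithmetic is elementary modular computation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\emph{Homotopy equivalence of $L(7,1)$ and $L(7,2)$:} we need
$q_1 q_2 \equiv \pm n^2 \pmod{7}$ for some $n$. Here
$q_1 q_2 = 1 \cdot 2 = 2$ and $3^2 = 9 \equiv 2 \pmod{7}$,
so the two spaces are homotopy equivalent.

\emph{Non-homeomorphism:} we would need
$q_1 \equiv \pm q_2^{\pm 1} \pmod{7}$. Since $2^{-1} \equiv 4 \pmod{7}$,
the admissible values are $\pm 2, \pm 4 \equiv 2, 3, 4, 5 \pmod{7}$,
and $q_1 = 1$ is not among them, so $L(7,1) \not\cong L(7,2)$.
\end{document}
```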
Spherical 3-manifold
Mathematics
1,423
8,126,986
https://en.wikipedia.org/wiki/Polyglutamylation
Polyglutamylation is a form of reversible posttranslational modification of glutamate residues seen, for example, in alpha- and beta-tubulins and the nucleosome assembly proteins NAP1 and NAP2. The γ-carboxy group of a glutamate may form a peptide-like bond with the amino group of a free glutamate, whose α-carboxy group can then be extended into a polyglutamate chain. Glutamylation is carried out by glutamylase enzymes and reversed by deglutamylases. Polyglutamylation with chain lengths of up to six glutamates occurs at certain glutamate residues near the C-terminus of most major forms of tubulins. These residues, though themselves not involved in direct binding, cause conformational shifts that regulate the binding of microtubule-associated proteins (MAPs and tau) and motors. External links The role of tubulin polymodifications in microtubule functions References Post-translational modification Protein structure
Polyglutamylation
Chemistry
213
24,472,143
https://en.wikipedia.org/wiki/Cyclic%20salt
Cyclic salt is salt that is carried by the wind when it comes into contact with breaking waves. It is estimated that more than 300 million tons of cyclic salt are deposited on the Earth's surface each year, and it is considered to be a significant factor in the chlorine content of the Earth's river water. In general, cyclic salt deposits are lower at sites further inland and are most abundant along the shoreline, although this pattern varies depending on the given environmental conditions. Use of the term "cyclic" refers to the cycle in which the salt moves from sea to land and is then washed by rainwater back to the sea. The salt (and other solid matter) cannot evaporate as water does. Instead it leaves the ocean surface in fine droplets produced by drop impacts or bubble bursts. Wave crests and other turbulence form foam. When drops splash or bubbles burst, fine droplets of solute are ejected from the water or bubble surface into the air. Some of the droplets are small enough for the water to evaporate before they fall back into the sea, leaving in the air a mote of solid residue light enough to stay suspended by Brownian motion and be carried away on the wind. See also Edible salt Sodium chloride References Further reading Chemical oceanography Edible salt
Cyclic salt
Chemistry
261
37,771,364
https://en.wikipedia.org/wiki/Confidence%20accounting
Confidence accounting is a method of accounting whereby some of the figures are expressed not as single point estimates, but rather as probability distributions. Under confidence accounting, the end results of audits would be presentations of distributions for major entries in the profit & loss, balance sheet and cashflow statements. The proposed benefits of confidence accounting include a fairer representation of financial results, reduced footnotes, more measurable audit quality and a mitigation of mark-to-market perturbations. History This method is in the discussion stage and has not yet been adopted by any accountancy body, though events and publications have been sponsored by the Association of Chartered Certified Accountants and some other events by the Institute of Chartered Accountants of Scotland. The term "confidence accounting" was first adopted in the mid-2000s by the Long Finance initiative, having grown out of earlier publications that referred to "stochastic accounting". Advantages Confidence accounting has the alleged advantages of: Being more scientific. Most scientific experimental results are expressed as expected values together with some quantification of the error involved; accounts are not. Giving a fairer view of the risk associated with the accounts. For example, a company may own drilling rights to a potential oilfield. The value of this asset could be zero, or could be immense. If this is not known, then a distribution would be a more faithful representation of the asset value than a single estimate of the mean value (a toy numeric sketch of this example follows at the end of this entry). Holding the accountants responsible. Accounts from previous years can be retrospectively checked against the forecasts, to see whether, for example, an accountancy firm actually got 90% of its accounts within the 90% error band forecast. Criticism Confidence accounting can be criticized for: adding further complexity to an already complex subject, although its supporters claim that this method could actually reduce the size of accounts. the alleged advantage of holding the accountants responsible being evadable by accountants claiming (with some justification) that the forecast distributions do not take into account unforeseen macroeconomic factors. no external party having both the power and the interest to make this happen. External links Confidence Accounting: A Proposal, published in 2012 by ACCA, CISI and Long Finance. Confidence Accounting discussion and reports. References Accounting systems
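The drilling-rights example above can be made concrete with a small simulation. This is a minimal sketch with invented figures (the dry-hole probability and value distribution are assumptions for illustration only):

```python
import random

random.seed(1)

# Hypothetical drilling rights: 60% chance the field is dry (value 0),
# otherwise the value is lognormally distributed around ~50M.
def drilling_rights_value() -> float:
    if random.random() < 0.60:
        return 0.0
    return random.lognormvariate(17.7, 0.8)  # e^17.7 is roughly 49M

samples = sorted(drilling_rights_value() for _ in range(100_000))
mean = sum(samples) / len(samples)
p5, p95 = samples[5_000], samples[95_000]

# A point estimate reports only the mean; under confidence accounting the
# spread would be reported too, e.g. a 90% interval alongside the mean.
print(f"mean ~ {mean/1e6:.1f}M, 90% interval ~ [{p5/1e6:.1f}M, {p95/1e6:.1f}M]")
```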
Confidence accounting
Technology
442
63,410,338
https://en.wikipedia.org/wiki/Post%20and%20Logistics%20Union
The Post and Logistics Union (Posti- ja logistiikka-alan unioni, PAU) is a trade union, principally representing postal workers, in Finland. The union was founded on 1 June 2005, when the Postal Union merged with the Postal Officers' Union. The two unions, which originally represented separate groups of workers and were affiliated to different union federations, had increasingly come to co-operate. The new union chose to affiliate to the Central Organisation of Finnish Trade Unions. By 2007, the union represented 82% of eligible workers in the postal service, with approximately half the members being women. As of 2020, the union had 25,004 members. Presidents 2005: Esa Vilkuna 2014: Heidi Nieminen References Postal trade unions Trade unions in Finland Trade unions established in 2005
Post and Logistics Union
Physics
150
37,261,060
https://en.wikipedia.org/wiki/Cladosporium%20fusiforme
Cladosporium fusiforme is a fungus found in hypersaline environments. It has ovoid to ellipsoid conidia. It has also been found in animal feed. References fusiforme Fungi described in 2007 Fungus species
Cladosporium fusiforme
Biology
53
73,438,573
https://en.wikipedia.org/wiki/Dark-field%20X-ray%20microscopy
Dark-field X-ray microscopy (DFXM or DFXRM) is an imaging technique used for multiscale structural characterisation. It is capable of mapping deeply embedded structural elements with nm-resolution using synchrotron X-ray diffraction-based imaging. The technique works by using scattered X-rays to create a high degree of contrast, and by measuring the intensity and spatial distribution of the diffracted beams, it is possible to obtain a three-dimensional map of the sample's structure, orientation, and local strain. History The first experimental demonstration of dark-field X-ray microscopy was reported in 2006 by a group at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Since then, the technique has been rapidly evolving and has shown great promise in multiscale structural characterization. Its development is largely due to advances in synchrotron X-ray sources, which provide highly collimated and intense beams of X-rays. The development of dark-field X-ray microscopy has been driven by the need for non-destructive imaging of bulk crystalline samples at high resolution, and it continues to be an active area of research today. However, related methods such as dark-field microscopy, dark-field scanning transmission X-ray microscopy, and soft dark-field X-ray microscopy have long been used to map deeply embedded structural elements. Principles and instrumentation In this technique, a synchrotron light source is used to generate an intense and coherent X-ray beam, which is then focused onto the sample using a specialized objective lens. The objective lens acts as a collimator to select and focus the scattered light, which is then detected by a 2D detector to create a diffraction pattern. The specialized objective lens in DFXM, referred to as an X-ray objective lens, is a crucial component of the instrumentation required for the technique. It can be made from different materials such as beryllium, silicon, and diamond, depending on the specific requirements of the experiment. The objective enables one to enlarge or reduce the spatial resolution and field of view within the sample by varying the number of individual lenses and correspondingly adjusting the sample-to-objective and objective-to-detector distances (d1 and d2, respectively). The diffraction angle is typically 10–30°. The sample is positioned at an angle such that the direct beam is blocked by a beam stop or aperture, and the diffracted beams from the sample are allowed to pass through to a detector. An embedded crystalline element of choice (for example, a grain or domain) is aligned such that the detector is positioned at a Bragg angle that corresponds to a particular diffraction peak of interest, which is determined by the crystal structure of the sample. The objective magnifies the diffracted beam by a magnification factor M and generates an inverted 2D projection of the grain. Through repeated exposures during a 360° rotation of the element around an axis parallel to the diffraction vector, several 2D projections of the grain are obtained from various angles. A 3D map is then obtained by combining these projections using reconstruction algorithms similar to those developed for CT scanning. If the lattice of the crystalline element exhibits an internal orientation spread, this procedure is repeated for a number of sample tilt angles. The current implementation of DFXM at the ESRF beamline ID06 uses a compound refractive lens (CRL) as the objective, giving a spatial resolution of 100 nm and an angular resolution of 0.001°. 
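To illustrate the imaging geometry described above, here is a minimal Python sketch computing the focal length of a compound refractive lens via the standard thin-lens approximation f ≈ R/(2Nδ) and the resulting magnification M = d2/d1. All numeric values are assumed, illustrative parameters, not specifications of the ID06 instrument:

```python
# Thin-lens sketch of DFXM geometry with a CRL objective (assumed values).
R = 50e-6       # apex radius of curvature of each lens [m] (assumed)
N = 70          # number of individual lenses in the stack (assumed)
delta = 1.5e-6  # refractive index decrement of Be at ~17 keV (approximate)

f = R / (2 * N * delta)  # CRL focal length: f ~ R / (2 N delta)

d1 = 1.05 * f                    # sample-to-objective distance [m] (assumed)
d2 = 1.0 / (1.0 / f - 1.0 / d1)  # imaging condition: 1/d1 + 1/d2 = 1/f
M = d2 / d1                      # geometric magnification

print(f"f = {f:.3f} m, d1 = {d1:.3f} m, d2 = {d2:.2f} m, M = {M:.1f}x")
# With these values: f ~ 0.24 m, d2 ~ 5 m, M ~ 20x.
```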
Applications, limitations and alternatives Current and potential applications DFXM has been used for the non-destructive investigation of polycrystalline materials and composites, revealing the 3D microstructure, phases, orientation of individual grains, and local strains. It has also been used for in situ studies of materials recrystallisation, dislocations and other defects, and the deformation and fracture mechanisms in materials, such as metals and composites. DFXM can provide insights into the 3D microstructure and deformation of geological materials such as minerals and rocks, and irradiated materials. DFXM has the potential to revolutionise the field of nanotechnology by providing non-destructive, high-resolution 3D imaging of nanostructures and nanomaterials. It has been used to investigate the 3D morphology of nanowires and to detect structural defects in nanotubes. DFXM has shown potential for imaging biological tissues and organs with high contrast and resolution. It has been used to visualize the 3D microstructure of cartilage and bone, as well as to detect early-stage breast cancer in a mouse model. Limitations The intense X-ray beams used in DFXM can damage delicate samples, particularly biological specimens. DFXM can suffer from imaging artefacts such as ring artefacts, which can affect image quality and limit interpretation. The instrumentation required for DFXM is expensive and typically only available at synchrotron facilities, making it inaccessible to many researchers. Although DFXM can achieve high spatial resolution, it is still not as high as the resolution achieved by other imaging techniques such as transmission electron microscopy (TEM) or X-ray crystallography. Preparation of samples for DFXM imaging can be challenging, especially for samples that are not crystalline. There are also limitations on the sample size that can be imaged, as the technique works best with thin samples, typically less than 100 microns thick, due to the attenuation of the X-ray beam by thicker samples. DFXM still suffers from long integration times, which can limit its practical applications. This is due to the low flux density of X-rays emitted by synchrotron sources and the high sensitivity required to detect scattered X-rays. Alternatives There are several alternative techniques to DFXM, depending on the application, some of which are: Differential-aperture X-ray structural microscopy (DAXM): DAXM is a synchrotron X-ray method capable of delivering precise information about the local structure and crystallographic orientation in three dimensions at a spatial resolution of less than one micron. It also provides angular precision and measures local elastic strain with high accuracy for a wide range of materials, including single crystals, polycrystals, composites, and materials with varying properties. Bragg coherent diffraction imaging (BCDI): BCDI is an advanced microscopy technique introduced in 2006 to study crystalline nanomaterials' 3D structure. BCDI has applications in diverse areas, including in situ studies of corrosion, probing dissolution processes, and simulating diffraction patterns to understand atomic displacement. Ptychography: Ptychography is a computational imaging method used in microscopy to generate images by processing multiple coherent interference patterns. It provides advantages such as high-resolution imaging, phase retrieval, and lensless imaging capabilities.
Diffraction contrast tomography (DCT): DCT is a method that uses coherent X-rays to generate three-dimensional grain maps of polycrystalline materials. DCT enables visualization of crystallographic information within samples, aiding in the analysis of materials' structural properties, defects, and grain orientations. Three-dimensional X-ray diffraction (3DXRD): 3DXRD is a synchrotron-based technique that provides information about the crystallographic orientation of individual grains in polycrystalline materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution. Electron backscatter diffraction (EBSD): EBSD is a scanning electron microscopy (SEM) technique that can be used to map crystallographic orientation and strain at the sample surface at the submicron scale. It works by detecting the diffraction pattern of backscattered electrons, which provides information about the crystal structure of the material. EBSD can be used on a variety of materials, including metals, ceramics, and semiconductors; it can be extended to the third dimension, i.e., 3D EBSD, and can be combined with digital image correlation, i.e., EBSD-DIC. Digital image correlation (DIC): DIC is a non-contact optical method used to measure the displacement and deformation of a material by analysing the digital images captured before and after the application of load. This technique can measure strain with sub-pixel accuracy and is widely used in materials science and engineering. Transmission electron microscopy (TEM): TEM is a high-resolution imaging technique that provides information about the microstructure and crystallographic orientation of materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution. Micro-Raman spectroscopy: Micro-Raman spectroscopy is a non-destructive technique that can be used to measure the strain of a material at the submicron scale. It works by illuminating a sample with a laser beam and analysing the scattered light. The frequency shift of the scattered light provides information about the crystal deformation, and thus the strain of the material. Neutron diffraction: Neutron diffraction is a technique that uses a beam of neutrons to study the structure of materials. It is particularly useful for studying the crystal structure and magnetic properties of materials. Neutron diffraction can provide sub-micron resolution. References Further reading Diffraction Materials science Microscopes Microscopy Nanotechnology Scientific techniques
Dark-field X-ray microscopy
Physics,Chemistry,Materials_science,Technology,Engineering
1,945
73,257,521
https://en.wikipedia.org/wiki/Antonio%20Fern%C3%A1ndez%20Ra%C3%B1ada
Antonio Fernández-Rañada Menéndez de Luarca (1939 – 19 May 2022) was a Spanish theoretical physicist. Biography Antonio Fernández-Rañada was born in Bilbao. Soon after his birth, his family moved to Oviedo, where he spent his childhood and youth until he began his university studies in Madrid. He graduated in physics with a licentiate from the Complutense University of Madrid. In 1965 he graduated with a PhD from the University of Paris with a thesis on causality and the S-matrix. In 1967 he defended his habilitation thesis Propiedades analíticas en la difusión pión-nucleón (Analytic properties in pion-nucleon diffusion) at the Complutense University of Madrid. He was employed at the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), formerly named the Junta de Energía Nuclear (Nuclear Energy Board). He taught quantum mechanics at the University of Barcelona and theoretical physics at the Complutense University of Madrid. He was appointed professor by the Universidad de Zaragoza and held the chair of electromagnetism at the Complutense University of Madrid. Fernández-Rañada was the director of the Grupo Interuniversitario de Física Teórica (GIFT). He was the founding editor and editor-in-chief for ten years of the journal Revista Española de Física. He did research on the physics of elementary particles, nonlinear dynamics, topics in mathematical physics, the relation between topology and quantum electrodynamics, and some problems in cosmology. He also published articles on how science is related to other fields of knowledge and to societal issues. He was the author of the 1990 book Dinámica clásica (Classical dynamics), a coauthor of the 1997 book 100 problemas de la Mecánica (100 problems of mechanics), and a coauthor of the 2-volume, 2007 book Física básica (Basic physics). He also wrote expository works on science and its wider implications: Los científicos y Dios, Los muchos rostros de la ciencia, De la agresión a la guerra nuclear — coauthored with J. Martín-Ramírez, Heisenberg. Ciencia, incertidumbre y conciencia, and Breves apuntes sobre la comunicación de la ciencia. Awards and honors Research Prize in Physics from the Spanish Royal Academy of Sciences (1997) Medal of the Spanish Royal Physics Society (1985) Jovellanos International Essay Award (1994) for Los Muchos Rostros de la Ciencia Silver Medal of the Prince of Asturias (1999) President of the Principality of Asturias Council of Arts and Sciences President of the Spanish Royal Physics Society (2005-2010) Member of the Council of the European Physical Society Member of the Jury of the Princess of Asturias Award for Scientific and Technical Research Selected publications Articles References External links 1939 births 2022 deaths 20th-century Spanish physicists 21st-century Spanish physicists Theoretical physicists Complutense University of Madrid alumni University of Paris alumni Academic staff of the Complutense University of Madrid
Antonio Fernández Rañada
Physics
669
12,811,837
https://en.wikipedia.org/wiki/Seagull%20Camera
Shanghai Seagull Camera Ltd is a Chinese camera maker located in Shanghai, China. Founded in 1958, Seagull is the oldest camera maker in China. The product line of Seagull includes TLR cameras, SLR cameras, folding cameras, CCD and SLR camera lenses, large-format cameras, film, night vision scopes, and angle viewfinders. Seagull's cameras usually use basic, time-tested mechanical designs that require no batteries. Some Seagull cameras are distributed through the Lomography company. Seagull adopted the SR lens mount and camera design from Minolta's manual-focus SLRs under license, and continued to produce it long after Minolta moved on to autofocus Alpha lens mount cameras. In 2009 a fixed-focus digital camera, the DS-5060S with a 5-megapixel sensor, was offered. On June 10, 2012, the Shanghai Museum of Old Camera Manufacturing opened to the public. The museum features historic items of the company, most notably the production line of the Seagull 4A and a collection of more than 200 cameras. Major models Seagull 4 (4·4A·4B·4B1·4C) Seagull has been producing a series of twin-lens reflex cameras since the 1960s, and it is believed that they are still in production; the latest model is the 4A-109, featuring modern lens coating and some other improvements. Seagull 4 is the first model, from 1967. The predecessor is the Shanghai 4. Seagull 4A is the second model, from 1968. Some Chinese amateurs consider the 4A and the later 4A-1 variants professional, partly because the camera was unaffordable for most Chinese citizens in the 1960s and 1970s. The 4A sold for 230 RMB and the 4A-1 for 290 RMB, whereas a worker earned only 20 RMB per month. The camera was unavailable to ordinary people, and only those working for the government and press could afford one. Seagull 4A-1 from the 1970s and the later 4A-1XX models feature Tessar-type lenses. Another noticeable feature is the long-awaited hot shoe. The later 4A-1XX models are mainly for sale overseas, hence the absence of Chinese characters is no surprise. Seagull 4B and 4B-1 are simplified models of the 4A. The 4B-1 may also take the 645 format. The focusing screen of the 4B-1 is a Fresnel lens, giving a bright view. Unlike the 4A, the original lenses for the 4B and 4B-1 are three-element Cooke triplets. As some Chinese vendors sell purported Tessar lens components for upgrading old cameras, one may come across a 4B-1 featuring four-element lenses. Seagull 4C can load 135 film. This model is however very rare. Seagull 203 folding roll film range-finder camera Seagull 205 range finder camera DF-300, DF-2000, DF-5000 SLR -I, II range finder Dong Feng (East Wind) medium format camera developed and produced during the Cultural Revolution; only very limited amounts were produced. Red Flag 20 range finder camera; only very few were produced. Seagull 501 kJ·KE References External links in Chinese Historic as well as technical information of Seagull 4 contributed by Chinese collectors, Simplified Chinese Article as well as photos of the Shanghai Museum, Simplified Chinese Photography equipment manufacturers of China Manufacturing companies based in Shanghai Lens manufacturers Twin-lens reflex cameras Chinese brands
Seagull Camera
Technology
710
681,186
https://en.wikipedia.org/wiki/Whitehead%20torsion
In geometric topology, a field within mathematics, the obstruction to a homotopy equivalence f: X → Y of finite CW-complexes being a simple homotopy equivalence is its Whitehead torsion τ(f), which is an element in the Whitehead group Wh(π1(Y)). These concepts are named after the mathematician J. H. C. Whitehead. The Whitehead torsion is important in applying surgery theory to non-simply connected manifolds of dimension > 4: for simply-connected manifolds, the Whitehead group vanishes, and thus homotopy equivalences and simple homotopy equivalences are the same. The applications are to differentiable manifolds, PL manifolds and topological manifolds. The proofs were first obtained in the early 1960s by Stephen Smale, for differentiable manifolds. The development of handlebody theory allowed much the same proofs in the differentiable and PL categories. The proofs are much harder in the topological category, requiring the theory of Robion Kirby and Laurent C. Siebenmann. The restriction to manifolds of dimension greater than four is due to the application of the Whitney trick for removing double points. In generalizing the h-cobordism theorem, which is a statement about simply connected manifolds, to non-simply connected manifolds, one must distinguish simple homotopy equivalences and non-simple homotopy equivalences. While an h-cobordism W between simply-connected closed connected manifolds M and N of dimension n > 4 is isomorphic to a cylinder (the corresponding homotopy equivalence can be taken to be a diffeomorphism, PL-isomorphism, or homeomorphism, respectively), the s-cobordism theorem states that if the manifolds are not simply-connected, an h-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion vanishes. Whitehead group The Whitehead group of a connected CW-complex or a manifold M is equal to the Whitehead group Wh(π1(M)) of the fundamental group of M. If G is a group, the Whitehead group Wh(G) is defined to be the cokernel of the map G × {±1} → K1(Z[G]) which sends (g, ±1) to the invertible (1,1)-matrix (±g). Here Z[G] is the group ring of G. Recall that the K-group K1(A) of a ring A is defined as the quotient of GL(A) by the subgroup generated by elementary matrices. The group GL(A) is the direct limit of the finite-dimensional groups GL(n, A) → GL(n+1, A); concretely, it is the group of invertible infinite matrices which differ from the identity matrix in only a finite number of coefficients. An elementary matrix here is a transvection: one such that all main diagonal elements are 1 and there is at most one non-zero element not on the diagonal. The subgroup generated by elementary matrices is exactly the derived subgroup, in other words the smallest normal subgroup such that the quotient by it is abelian. In other words, the Whitehead group Wh(G) of a group G is the quotient of GL(Z[G]) by the subgroup generated by elementary matrices, elements of G and −1. Notice that this is the same as the quotient of the reduced K-group K̃1(Z[G]) by G. Examples The Whitehead group of the trivial group is trivial. Since the group ring of the trivial group is Z, we have to show that any matrix can be written as a product of elementary matrices times a diagonal matrix; this follows easily from the fact that Z is a Euclidean domain. The Whitehead group of a free abelian group is trivial, a 1964 result of Hyman Bass, Alex Heller and Richard Swan. This is quite hard to prove, but is important as it is used in the proof that an s-cobordism of dimension at least 6 whose ends are tori is a product.
It is also the key algebraic result used in the surgery theory classification of piecewise linear manifolds of dimension at least 5 which are homotopy equivalent to a torus; this is the essential ingredient of the 1969 Kirby–Siebenmann structure theory of topological manifolds of dimension at least 5. The Whitehead group of a braid group (or any subgroup of a braid group) is trivial. This was proved by F. Thomas Farrell and Sayed K. Roushon. The Whitehead group of a cyclic group is trivial if and only if it is of order 2, 3, 4, or 6. The Whitehead group of the cyclic group of order 5 is Z, the infinite cyclic group. This was proved in 1940 by Graham Higman. An example of a non-trivial unit in the group ring arises from the identity (1 − t − t^4)(1 − t^2 − t^3) = 1, where t is a generator of the cyclic group of order 5; expanding the product and using t^5 = 1 shows that all cross terms cancel. This example is closely related to the existence of units of infinite order (in particular, the golden ratio) in the ring of integers of the cyclotomic field generated by fifth roots of unity. The Whitehead group of any finite group G is finitely generated, of rank equal to the number of irreducible real representations of G minus the number of irreducible rational representations. This was proved in 1965 by Bass. If G is a finite cyclic group then K1(Z[G]) is isomorphic to the units of the group ring Z[G] under the determinant map, so Wh(G) is just the group of units of Z[G] modulo the group of "trivial units" generated by elements of G and −1. It is a well-known conjecture that the Whitehead group of any torsion-free group should vanish. The Whitehead torsion At first we define the Whitehead torsion τ(h*) ∈ K̃1(R) for a chain homotopy equivalence h*: D* → E* of finite based free R-chain complexes. We can assign to the homotopy equivalence its mapping cone C* := cone*(h*), which is a contractible finite based free R-chain complex. Let γ* be any chain contraction of the mapping cone, i.e., c ∘ γ + γ ∘ c = id in each degree n, where c denotes the differential of C*. We obtain an isomorphism (c + γ)odd: Codd → Ceven, with Codd and Ceven the direct sums of the odd- and even-degree modules of C*. We define τ(h*) := [A] ∈ K̃1(R), where A is the matrix of (c + γ)odd with respect to the given bases. For a homotopy equivalence f: X → Y of connected finite CW-complexes we define the Whitehead torsion as follows. Let f̃: X̃ → Ỹ be the lift of f to the universal coverings. It induces Z[π1(Y)]-chain homotopy equivalences of the cellular chain complexes. Now we can apply the definition of the Whitehead torsion for a chain homotopy equivalence and obtain an element in K̃1(Z[π1(Y)]) which we map to Wh(π1(Y)). This is the Whitehead torsion τ(ƒ) ∈ Wh(π1(Y)). Properties Homotopy invariance: Let f, g: X → Y be homotopy equivalences of finite connected CW-complexes. If f and g are homotopic, then τ(f) = τ(g). Topological invariance: If f: X → Y is a homeomorphism of finite connected CW-complexes, then τ(f) = 0. Composition formula: Let f: X → Y and g: Y → Z be homotopy equivalences of finite connected CW-complexes. Then τ(g ∘ f) = g*τ(f) + τ(g). Geometric interpretation The s-cobordism theorem states for a closed connected oriented manifold M of dimension n > 4 that an h-cobordism W between M and another manifold N is trivial over M if and only if the Whitehead torsion of the inclusion of M into W vanishes. Moreover, for any element in the Whitehead group there exists an h-cobordism W over M whose Whitehead torsion is the considered element. The proofs use handle decompositions. There exists a homotopy theoretic analogue of the s-cobordism theorem. Given a CW-complex A, consider the set of all pairs of CW-complexes (X, A) such that the inclusion of A into X is a homotopy equivalence. Two pairs (X1, A) and (X2, A) are said to be equivalent, if there is a simple homotopy equivalence between X1 and X2 relative to A. The set of such equivalence classes forms a group where the addition is given by taking union of X1 and X2 with common subspace A.
This group is naturally isomorphic to the Whitehead group Wh(A) of the CW-complex A. The proof of this fact is similar to the proof of the s-cobordism theorem. See also Algebraic K-theory Reidemeister torsion s-Cobordism theorem Wall's finiteness obstruction References Cohen, M. A course in simple homotopy theory, Graduate Texts in Mathematics 10, Springer, 1973 External links A description of Whitehead torsion is in section two. Geometric topology Algebraic K-theory Surgery theory
Whitehead torsion
Mathematics
1,729
76,975,879
https://en.wikipedia.org/wiki/Debate%20between%20tree%20and%20reed
The Debate between tree and reed (CSL 5.3.4) is a work of Sumerian literature belonging to the genre of disputation poems. It was written on clay tablets and dates to the Third Dynasty of Ur (late 3rd millennium BC). The text was reconstructed by M. Civil in the 1960s from 24 manuscripts, but it is currently the least studied of the disputation poems and a full translation has not yet been published. Some other Sumerian disputations include the dispute between bird and fish, cattle and grain, and Summer and Winter. Synopsis The poem begins with a cosmogonic prologue describing the copulation between Heaven (An) and Earth (Ki). Earth gives birth to vegetation, and for the purpose of the poem, this prominently includes Tree and Reed. Though they are at first in harmony, a disputation begins between the two as they enter into a shrine. Reed, who fails to respect the proper order of things, steps in front of Tree, causing the latter to be infuriated. The prologue covers the first 49 lines, after which the disputation proceeds for another two hundred lines. It is divided into four speeches: Tree speaking (lines 50–91), Reed speaking (96–137), Tree speaking again (144–191), Reed speaking again (197–228). The adjudication scene (230–254) begins with Tree invoking the judgement of Shulgi (a king), who declares that Tree has prevailed over Reed. The poem also mentions the king Puzrish-Dagan, suggesting its composition during his time. Partial translation The following translation of the introductory cosmogonic section of the Disputation, containing only the first 10 lines, is taken from Lisman 2013. The first 25 lines were published by Van Dijk in 1965, but a translation of the entire text has still not been made.1 The large surface of the earth introduced herself; then she has embellished herself as with a bardul-garment. 2 The vast earth has filled her exterior with precious metals and lapis lazuli. 3 With diorite, nir-stone, cornelian, and suduaga she has adorned herself. 4 The earth, the fragrant vegetation, covered herself with attractiveness. She stood in her magnificence. 5 The pure earth, the virgin earth, has beautified herself for the holy An. 6 An, the exalted heaven, had intercourse with the vast earth. 7 He poured the seed of the hero's Tree and Reed into her womb. 8 The whole earth, the fecund cow, took the good seed of An under her care. 9 The earth, life-giving vegetation, innerly happy, devoted herself to the production of it (i.e. the vegetation). 10 The earth, full of joy, bore abundance, while juice and syrup gave out their smell. Historical context In Mesopotamian disputation literature, debates between trees are a recurring theme. In Akkadian disputations, examples include the Tamarisk and Palm, Palm and Vine, and Series of the Poplar. A much later example from Aesop's fables is The Oak and the Reed. References Citations Sources Clay tablets Comparative mythology Creation myths Mesopotamian myths Religious cosmologies Sumerian disputations
Debate between tree and reed
Astronomy
694
3,808,439
https://en.wikipedia.org/wiki/Potty%20parity
Potty parity is equal or equitable provision of public toilet facilities for females and males within a public space. Parity can be defined by equal floorspace or by number of fixtures within the washrooms, sometimes adjusted for the longer average time taken and more frequent visits to the washroom for females, among other factors. Historically, public toilets have been divided by sex since the Victorian era. Male cubicles and facilities were typically greater in number until between the late 1980s and early 2010s, depending on the country and building. Current ratios range from 1:1 to 4:1 female–to–male. Portable, accessible, and vehicle toilets are commonly gender-neutral. Outside of these contexts, gender-neutral toilets are present in some European areas and on university campuses in the US. Multiple studies have found that waiting times for females can be reduced by the use of properly designed washrooms. Definition of parity Parity may be defined in various ways in relation to facilities in a building. The simplest is as equal floorspace for male and female washrooms. Since men's and boys' bathrooms include urinals, which take up less space than stalls, this still results in more facilities for males. An alternative parity is by number of fixtures within washrooms. However, since females on average spend more time in washrooms, more males than females are able to use facilities per unit time. More recent parity regulations therefore require more fixtures for females to ensure that the average time spent waiting to use the toilet is the same for females as for males, or to equalise throughputs of male and female toilets. The lack of diaper-changing stations for babies in men's restrooms has been listed as a potty parity issue by fathers. Some jurisdictions have considered legislation mandating diaper-changing stations in men's restrooms. Sex differences Women and girls often spend more time in washrooms than men and boys, for both physiological and cultural reasons. The requirement to use a cubicle rather than a urinal means urination takes longer, and hand washing must be done more thoroughly. Females also make more visits to washrooms. Urinary tract infections and incontinence are more common in females. Pregnancy, menstruation, breastfeeding, and diaper-changing increase usage. The elderly, who are disproportionately female, take longer and more frequent bathroom visits. A variety of female urinals and personal funnels have been invented to make it easier for females to urinate standing up. None has become widespread enough to affect policy formation on potty parity. John F. Banzhaf III, a law professor at George Washington University, calls himself the "father of potty parity." Banzhaf argues that to ignore potty parity, that is, to have merely equal facilities for males and females, constitutes a form of sex discrimination against women. In the 1970s the Committee to End Pay Toilets in America made a similar point: that allowing toilet providers to charge for the use of a cubicle while urinals required no money was unfair to females. Several authors have identified potty parity as a potential rallying issue for feminism, saying all women can identify with it. History and developments Public toilets have historically been divided along the lines of sex, race, class, disability, and other distinctions. In apartheid South Africa, the Israeli-occupied Palestinian territories, and the Jim Crow American South, toilets were segregated by both sex and race.
During the Victorian era in the United Kingdom, toilets were segregated by both sex and class. U.S. The first bathroom for congresswomen in the United States Capitol was opened in 1962. Segregation of toilet facilities by race was outlawed in the United States by the Civil Rights Act of 1964. Provision of disabled-access facilities was mandated in federal buildings by the Architectural Barriers Act of 1968 and in private buildings by the Americans with Disabilities Act of 1990. No federal legislation relates to provision of facilities for women. The banning of pay toilets came about because women/girls had to pay to urinate whereas men/boys only had to pay to defecate. In many older buildings, little or no provision was made for women because few would work in or visit them. Increased gender equality in employment and other spheres of life has impelled change. Until the 1980s, building codes for stadiums in the United States stipulated more toilets for men, on the assumption that most sports fans were male. In 1973, to protest the lack of female bathrooms at Harvard University, women poured jars of fake urine on the steps of the university's Lowell Hall, a protest Florynce Kennedy thought of and participated in. The first "Restroom Equity" Act in the United States was passed in California in 1989. It was introduced by then-Senator Arthur Torres after several long waits for his wife to return from the bathroom. Facilities for female U.S. senators on the Senate Chamber level were first provided in 1992. Nissan Stadium in Nashville, Tennessee, was built in 1999 in compliance with the Tennessee Equitable Restrooms Act, providing 288 fixtures for men and 580 for women. The Tennessean reported fifteen-minute waits at some men's rooms, compared to none at women's rooms. The Act was amended in 2000 to empower the state architect to authorize extra men's rooms at stadiums, horse shows and auto racing venues. In 2011 the U.S. House of Representatives got its first women's bathroom near the chamber (Room H-211 of the U.S. Capitol). It is only open to women lawmakers, not the public. Regulations Current laws in the United Kingdom require a 1:1 female–male ratio of restroom space in public buildings. The International Building Code requires a range of female-to-male toilet ratios depending on the building occupancy. Most occupancies require a 1:1 ratio, but Assembly uses can require up to a 2:1 ratio of female to male toilets. New York City Council passed a law in 2005 requiring roughly this in all public buildings. An advisory ruling had been passed in 2003. U.S. state laws vary between 1:1, 3:2, and 2:1 ratios. The Uniform Plumbing Code specifies a 4:1 ratio in movie theaters. Gender-neutral toilets Gender-neutral toilets are common in some contexts, including on aircraft, on trains or buses, portable toilets, and accessible toilets. In parts of Europe they are also common in buildings. In the United States, they began to appear in the 2000s on university campuses and in some upmarket restaurants. Studies conducted by the University of Toronto's Rotman School of Management and by Ghent University have concluded that properly designed unisex restrooms can reduce waiting times for women. In 2013, the state of California passed bill 1266 ("The School Success and Opportunity Act") requiring provision of facilities consistent with a pupil's gender identity.
Examples Canada British Columbia's Factory Act stated that "The owner of every building used as a factory, shop or office shall provide separate washrooms for male and female employees with separate approaches to them, and signs clearly indicating for which sex the washrooms are provided." Newfoundland and Labrador's Occupational Health and Safety Regulations, 2012 state that "where both males and females are employed, separate toilets shall be provided and suitably identified for workers of each sex". Nova Scotia's Occupational Health and Safety Act requires that an "employer shall make accessible a minimum number of toilets for each gender, determined according to the maximum number of persons of each gender who are normally employed at any one time at the same workplace..." Prince Edward Island's Occupational Safety and Health Act stipulates that "Where 10 or more persons are employed, the employer shall provide separate washrooms and toilet facilities for each sex with a locking device on the inside." Saskatchewan's Factories Act of 1909 stipulated that "The owner of every factory shall provide a sufficient number and description of privies, earth or water closets and urinals for the employees of such factory, including separate sets for the use of male and female employees, and shall have separate approaches to the same, the recognised standard being one closet for every twenty-five persons employed in the factory." The Six Nations of the Grand River First Nation enacted a potty parity measure for restaurants in 1967. The by-law stated that "There shall be provided for employees, toilets separate for each sex and at least one toilet room and one hand washing facility for customers of each sex of any restaurant designed to seat 25 or more customers..." China On 19 February 2012, some Chinese women in Guangzhou protested against the inequitable waiting times. This movement has spread to Beijing, calling for women's facilities to be proportionally larger to accommodate the longer use times and ameliorate the longer queues of females. Since March 2011, Guangzhou's urban-management commission has ordered that new and newly renovated female public toilets must be 1.5 times the size of their male counterparts. The aforementioned movement is pressing for the regulation to be applied retroactively. India Provisions for separate toilets for women workers are found in Section 19 of the Factories Act, 1948; Section 9 of the Plantations Labour Act, 1951; Section 20 of the Mines Act, 1952; Rule 53 of the Contract Labour (Regulation and Abolition) Rules, 1971; and Rule 42 of the Inter State Migrant Workmen (RECS) Central Rules, 1980. In 2011 a "Right to Pee" (as called by the media) campaign began in Mumbai, India's largest city. Women, but not men, have to pay to urinate in Mumbai, despite regulations against this practice. Women have also been sexually assaulted while urinating in fields. Thus, activists have collected more than 50,000 signatures supporting their demands that the local government stop charging women to urinate, build more toilets, keep them clean, provide sanitary napkins and a trash can, and hire female attendants. In response, city officials have agreed to build hundreds of public toilets for women in Mumbai, and some local legislators are now promising to build toilets for women in every one of their districts. South Africa South Africa's Standards Act, 1999 requires toilets separate for each sex at factories.
See also Bathroom bill Right to sit Workers' right to access the toilet References Gender equality Feminism and health Labour movement Labor rights Occupational safety and health law Sex segregation Toilets Women's rights
Potty parity
Biology
2,121
59,451,948
https://en.wikipedia.org/wiki/David%20Thornalley
David John Robert Thornalley is a British paleoceanographer known for his work on North Atlantic circulation change during the Quaternary period. Thornalley holds master's and doctoral degrees from Churchill College, Cambridge. He is currently an associate professor in the Department of Geography at University College London (UCL). Before working at UCL, he was a postdoctoral research scholar at Woods Hole Oceanographic Institution and a postdoctoral research associate at Cardiff University. Thornalley also holds a Professional Certificate in Teaching and Learning in Higher and Professional Education. Awards In 2015 Thornalley was awarded the UCL Student Choice Outstanding Teacher award. In 2016 he was awarded a £100,000 Philip Leverhulme Prize, given to early-career researchers with internationally impactful research. References External links Atlantic Ocean circulation at weakest point in more than 1,500 years 1982 births Living people Alumni of Churchill College, Cambridge British oceanographers British climatologists British geochemists Academics of University College London
David Thornalley
Chemistry
199
6,486,874
https://en.wikipedia.org/wiki/Telelogic
Telelogic AB was a software business headquartered in Malmö, Sweden. Telelogic was founded in 1983 as a research and development arm of Televerket, the Swedish department of telecom (now part of TeliaSonera). It was later acquired by IBM Rational, and exists under the IBM software group. Telelogic had operations in 22 countries and had been publicly traded since 1999. The CEO and president in 2001 was Anders Lidbeck. On June 11, 2007, IBM announced that it had made a cash offer to acquire Telelogic. On August 29, 2007, the European Union opened an investigation into the acquisition. On March 5, 2008, European regulators approved the acquisition of Telelogic by the Swedish IBM subsidiary Watchtower AB. On April 28, 2008, IBM completed its purchase of Telelogic. Former Products Focal Point — System for management of product and project portfolios. DOORS — Requirements tracking tool. System Architect — Enterprise Architecture and Business Architecture modeling tool. Tau — SDL and UML modeling tool. Synergy — Task-based version control and configuration management system. Rhapsody — Systems engineering and executable UML modeling tool. DocExpress — Technical documentation tool, discontinued after the acquisition and superseded by Publishing Engine. Publishing Engine — Technical documentation tool All of these products have been continued under IBM's Rational Software division in the systems engineering and Product lifecycle management (PLM) "solutions" software line. IBM sold System Architect, Focal Point and several other software products to UNICOM Global in 2016. Acquisitions Telelogic acquired several companies between 1999 and 2007. References Unified Modeling Language Systems Modeling Language SysML Partners Software companies of Sweden Telecommunications companies of Sweden IBM acquisitions Defunct software companies of Sweden Companies established in 1983 Companies based in Malmö
Telelogic
Engineering
360
571,478
https://en.wikipedia.org/wiki/Bambi%20effect
The "Bambi effect" is an objection against the killing of animals that are perceived as "cute" or "adorable", such as deer, while there may be little or no objection to the suffering of animals that are perceived as somehow repulsive or less than desirable, such as pigs or other woodland creatures. Referring to a form of purported anthropomorphism, the term is inspired by Walt Disney's 1942 animated film Bambi, where an emotional high point is the death of the lead character's mother at the hands of the film's antagonist, a hunter known only as "Man". Effects Some commentators have credited this purported effect with increasing public awareness of the dangers of pollution, for instance in the case of the fate of sea otters after the Exxon Valdez oil spill, and in the public interest in scaring birds off airfields in non-lethal ways. In the case of invasive species, perceived cuteness may help thwart efforts to eradicate non-native intruders, such as the white fallow deer in Point Reyes, California. The effect is also cited as the anthropomorphic quality of modern cinema: most people in modern Western civilization are not familiar with wildlife, other than "through TV or cinema, where fuzzy little critters discuss romance, self-determination and loyalty like pals over a cup of coffee", which has led to influences on public policy and the image of businesses cast in movies as polluting or otherwise harming the environment. The effect was also cited in the events following a record snowfall in the U.S. state of Colorado in 2007, when food for mule deer, pronghorns, and elk became so scarce that they began to starve; the Colorado Department of Wildlife was inundated with requests and offers to help the animals from citizens, and ended up spending almost $2 million feeding the hungry wildlife. Among some butchers, the Bambi effect (and in general, Walt Disney's anthropomorphic characters) is credited with fueling the vegetarian movement; chefs use the term to describe customers' lack of interest in, for instance, whole fish: "It's the Bambi effect – [customers] don't want to see eyes looking at them". The ’Bambi’ Effect has caused people to fight against organizations that manage wildlife. However, their intervention can often interfere with an ecosystem’s circle of life and thus their efforts become counterproductive. For example, this phenomenon can promote people to create organizations like The Smokey Bear Campaign. This Campaign decreased the number of fires but consequently led to an unexpected change in ecosystem. The ‘Bambi’ effect is backed up by a study (Wilks, 2008) which found that to help the more aggressive and unfriendly wildlife become more loved and see improvements in their environments there should be cuter and more innocent cartoons created and marketed for them. See also Animal welfare Animal rights Cruelty to animals Poaching References Further reading Deep ecology Hunting Bambi Anthropomorphism Eponyms
Bambi effect
Biology,Environmental_science
623
2,917,143
https://en.wikipedia.org/wiki/Marcel%20Vogel
Marcel Joseph Vogel (April 14, 1917 – February 12, 1991) was a research scientist working at the IBM San Jose Research Center for 27 years. He is sometimes referred to as Dr. Vogel, although this title was based on an honorary degree, not a Ph.D. Later in his career, he became interested in various theories of quartz crystals and other occult and esoteric fields of study. Mainstream scientific work It is claimed that Vogel started his research into luminescence while he was still in his teens. This research eventually led him to publish his thesis, Luminescence in Liquids and Solids and Their Practical Application, in collaboration with the University of Chicago's Dr. Peter Pringsheim in 1943. Two years after the publication, Vogel incorporated his own company, Vogel Luminescence, in San Francisco. For the next decade the firm developed a variety of new products: fluorescent crayons, tags for insecticides, a black light inspection kit to determine the secret trackways of rodents in cellars from their urine, and the psychedelic colors popular in "new age" posters. In 1957, Vogel Luminescence was sold to Ultra Violet Products and Vogel joined IBM as a full-time research scientist. He retired from IBM in 1984. In 1977 and 1978, Vogel participated in experiments with the Markovich Tesla Electrical Power Source, referred to as MTEPS, which was built by Peter T. Markovich. He received 32 patents for his inventions up through his tenure at IBM. Among these was the magnetic coating for the 24" hard disk drive systems still in use. His areas of expertise, besides luminescence, were phosphor technology, magnetics and liquid crystal systems. At Vogel's February 14, 1991 funeral, IBM researcher and Sacramento, California physician Bernard McGinity, M.D. said of him, "He made his mark because of the brilliance of his mind, his prolific ideas, and his seemingly limitless creativity." Fringe science Crystals Vogel was a proponent of crystal healing, and believed cut crystals can have healing powers. Billy Meier UFO metal sample Vogel examined a metal sample which was allegedly given to Billy Meier by extraterrestrials, but by misinterpreting a graph on a test instrument, erroneously concluded it contained thallium, a rare metal. Communication between plants Vogel was a proponent of research into plant consciousness and believed "empathy between plant and human" could be established. In popular culture Vogel was featured in the first episode of In Search Of... hosted by Leonard Nimoy, called "Other Voices". He gave his theories regarding the possibility of communication between plants. See also Pyramid power Crystal healing References External links Vogel biographical sketch at Vogel Crystals website Vogel at the IBM100 Magnetic Stripe Technology team website 1917 births 1991 deaths 20th-century American chemists Computer storage media IBM employees Liquid crystal displays Luminescence American materials scientists Scientists from San Jose, California 20th-century American inventors
Marcel Vogel
Chemistry
605
55,771,639
https://en.wikipedia.org/wiki/Pres-Lam
Pres-Lam is a method of mass engineered timber construction that uses high strength unbonded steel cables or bars to create connections between timber beams and columns, or between columns and walls and their foundations. As a prestressed structure, the steel cables clamp members together, creating connections which are stronger and more compact than traditional timber fastening systems. In earthquake zones, the steel cables can be coupled with internal or external steel reinforcing, which provides additional strength and energy dissipation, creating a damage-avoiding structural system. Pres-Lam can be used in conjunction with any mass engineered timber product such as glue laminated timber, laminated veneer lumber or cross laminated timber. History The concept of Pres-Lam was developed at the University of Canterbury in Christchurch, New Zealand by a team led by Professors Stefano Pampanin, Alessandro Palermo and Andy Buchanan in collaboration with PreStressed Timber Limited (PTL). The system stems from techniques developed during the US PRESSS program at the University of California, San Diego during the 1990s under the leadership of New Zealand structural engineer Prof. Nigel Priestley. In 2008 a 5-year research campaign was begun under the Structural Timber Innovation Company. During this period the first examples of Pres-Lam structures were completed in New Zealand. Following the system's success, international research efforts have begun at ETH Zurich, the University of Basilicata, Washington State University and several other research institutions. In 2017 the NHERI Tallwood project was started with funding from the U.S. National Science Foundation, focused on further validation of Pres-Lam in North America. Notable structures The Nelson Marlborough Institute of Technology Arts and Media Building – The world's first Pres-Lam building The College of Creative Arts, Massey University – Uses Pres-Lam frames to augment vertical load carrying capacity as well as to resist high seismic loading The Kaikōura District Council building – Subjected to the 2016 Kaikōura earthquake without damage and subsequently used as response headquarters The ETH Zurich house of Natural Resources – the first Pres-Lam building to be constructed outside of New Zealand Peavy Hall – a three-storey mixed-use education building under construction on the Oregon State University campus in Corvallis, Oregon, United States. References Earthquake engineering Engineered wood
Pres-Lam
Engineering
453
38,946,837
https://en.wikipedia.org/wiki/Autapse
An autapse is a chemical or electrical synapse from a neuron onto itself. It can also be described as a synapse formed by the axon of a neuron on its own dendrites, in vivo or in vitro. History The term "autapse" was first coined in 1972 by Van der Loos and Glaser, who observed them in Golgi preparations of the rabbit occipital cortex while originally conducting a quantitative analysis of neocortex circuitry. Also in the 1970s, autapses were described in dog and rat cerebral cortex, monkey neostriatum, and cat spinal cord. In 2000, they were first modeled as supporting persistence in recurrent neural networks. In 2004, they were modeled as demonstrating oscillatory behavior, which was absent in the same model neuron without an autapse. More specifically, the neuron oscillated between high firing rates and firing suppression, reflecting the spike bursting behavior typically found in cerebral neurons. In 2009, autapses were, for the first time, associated with sustained activation. This proposed a possible function for excitatory autapses within a neural circuit. In 2014, electrical autapses were shown to generate stable target and spiral waves in a neural model network. This indicated that they played a significant role in stimulating and regulating the collective behavior of neurons in the network. In 2016, a model of resonance was offered. Autapses have been used to simulate "same cell" conditions to help researchers make quantitative comparisons, such as studying how N-methyl-D-aspartate receptor (NMDAR) antagonists affect synaptic versus extrasynaptic NMDARs. Formation Recently, it has been proposed that autapses could form as a result of blocked neuronal signal transmission, such as in cases of axonal injury induced by poisoning or by impeded ion channels. Dendrites from the soma, in addition to an auxiliary axon, may develop to form an autapse to help remediate the neuron's signal transmission. Structure and function Autapses can be either glutamate-releasing (excitatory) or GABA-releasing (inhibitory), just like their traditional synapse counterparts. Similarly, autapses can be electrical or chemical by nature. Broadly speaking, negative feedback in autapses tends to inhibit excitable neurons whereas positive feedback can stimulate quiescent neurons. Although the stimulation of inhibitory autapses did not induce hyperpolarizing inhibitory post-synaptic potentials in interneurons of layer V of neocortical slices, they have been shown to impact excitability. Upon using a GABA antagonist to block autapses, the likelihood of an immediate subsequent second depolarization step increased following a first depolarization step. This suggests that autapses act by suppressing the second of two closely timed depolarization steps and that, therefore, they may provide feedback inhibition onto these cells. This mechanism may also potentially explain shunting inhibition. In cell culture, autapses have been shown to contribute to the prolonged activation of B31/B32 neurons, which significantly contribute to food-response behavior in Aplysia. This suggests that autapses may play a role in mediating positive feedback. The B31/B32 autapse was unable to play a role in initiating the neuron's activity, although it is believed to have helped sustain the neuron's depolarized state. The extent to which autapses maintain depolarization remains unclear, particularly since other components of the neural circuit (i.e. B63 neurons) are also capable of providing strong synaptic input throughout the depolarization.
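The idea that excitatory autaptic feedback can sustain, but not initiate, activity can be illustrated with a deliberately simple firing-rate model. The following Python sketch is not the published B31/B32 model; the time constant, self-weight, nonlinearity, and pulse timing are all assumed purely for illustration.

```python
import numpy as np

# Minimal rate-model sketch: a leaky neuron with an excitatory autapse.
# All parameter values below are illustrative assumptions.
dt, T = 0.1, 200.0                               # time step and duration (ms)
t = np.arange(0.0, T, dt)
tau = 10.0                                       # membrane time constant (ms)
w = 1.5                                          # autaptic self-excitation weight

def f(x):
    """Firing-rate nonlinearity (saturating, zero for negative input)."""
    return np.tanh(np.maximum(x, 0.0))

I = np.where((t > 20) & (t < 40), 1.0, 0.0)      # brief external input pulse

r = np.zeros_like(t)                             # firing rate over time
for i in range(1, len(t)):
    # Euler step: leak toward the rate driven by autaptic + external input.
    r[i] = r[i - 1] + dt * (-r[i - 1] + f(w * r[i - 1] + I[i - 1])) / tau

print(f"rate during pulse:     {r[int(30 / dt)]:.2f}")
print(f"rate long after pulse: {r[-1]:.2f}")     # stays elevated because w > 1
```

With the self-weight below 1, the only stable state is quiescence and the rate decays once the pulse ends; above 1, the autaptic feedback holds the neuron in a self-sustained active state, mirroring the sustained activation described above.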
Additionally, it has been suggested that autapses provide B31/B32 neurons with the ability to quickly repolarize. Bekkers (2009) has proposed that specifically blocking the contribution of autapses and then assessing the differences with or without blocked autapses could better illuminate the function of autapses. Hindmarsh–Rose (HR) model neurons have demonstrated chaotic, regular spiking, quiescent, and periodic patterns of burst firing without autapses. Upon the introduction of an electrical autapse, the periodic state switches to the chaotic state and displays an alternating behavior that increases in frequency with greater autaptic intensity and time delay. On the other hand, excitatory chemical autapses enhanced the overall chaotic state. The chaotic state was reduced and suppressed in neurons with inhibitory chemical autapses. In HR model neurons without autapses, the pattern of firing altered from quiescent to periodic and then to chaotic as DC current was increased. Generally, HR model neurons with autapses have the ability to switch into any firing pattern, regardless of the prior firing pattern. Location Neurons from several brain regions, such as the neocortex, substantia nigra, and hippocampus, have been found to contain autapses. Autapses have been observed to be relatively more abundant in GABAergic basket and dendrite-targeting cells of the cat visual cortex compared to spiny stellate, double bouquet, and pyramidal cells, suggesting that the degree of neuron self-innervation is cell-specific. Additionally, dendrite-targeting cell autapses were, on average, further from the soma compared to basket cell autapses. 80% of layer V pyramidal neurons in developing rat neocortices contained autaptic connections, which were located more so on basal dendrites and apical oblique dendrites rather than main apical dendrites. The dendritic positions of synaptic connections of the same cell type were similar to those of autapses, suggesting that autaptic and synaptic networks share a common mechanism of formation. Disease implications In the 1990s, paroxysmal depolarizing shift-type interictal epileptiform discharges were suggested to be primarily dependent on autaptic activity in solitary excitatory hippocampal rat neurons grown in microculture. More recently, in human neocortical tissues of patients with intractable epilepsy, the GABAergic output autapses of fast-spiking (FS) neurons have been shown to have stronger asynchronous release (AR) compared to both non-epileptic tissue and other types of synapses involving FS neurons. The study found similar results using a rat model as well. An increase in residual Ca2+ concentration, in addition to the action potential amplitude in FS neurons, was suggested to cause this increase in AR in epileptic tissue. Anti-epileptic drugs could potentially target this AR of GABA, which seems to occur prominently at FS neuron autapses. Effects of drugs Using a glia-conditioned medium to treat glia-free purified rat retinal ganglion microcultures has been shown to significantly increase the number of autapses per neuron compared to a control. This suggests that glia-derived soluble, proteinase K-sensitive factors induce autapse formation in rat retinal ganglion cells. References Neurophysiology Cellular neuroscience Computational neuroscience Cell signaling Signal transduction
Autapse
Chemistry,Biology
1,541
212,330
https://en.wikipedia.org/wiki/Lye
Lye is a hydroxide, either sodium hydroxide or potassium hydroxide. The word lye most accurately refers to sodium hydroxide (NaOH), but historically has been conflated to include other alkali materials, most notably potassium hydroxide (KOH). In order to distinguish between the two, sodium hydroxide may be referred to as soda lye while potassium hydroxide may be referred to as potash lye. Traditionally, it was obtained by using rainwater to leach wood ashes (which are highly soluble in water and strongly alkaline) of their potassium hydroxide (KOH). A caustic basic solution is produced, called lye water. Then, the lye water would either be used as such, as for curing olives before brining them, or the water would be evaporated to produce crystalline lye. Today, lye is commercially manufactured using a membrane cell chloralkali process. It is supplied in various forms such as flakes, pellets, microbeads, coarse powder or a solution. Lye has traditionally been used as a major ingredient in soapmaking. Etymology The English word has cognates in all Germanic languages, and originally designated a bath or hot spring. Uses Food Lyes are used to cure many types of food, including the traditional Nordic lutefisk, olives (making them less bitter), canned mandarin oranges, hominy, lye rolls, century eggs, pretzels, candied pumpkins, and bagels. They are also used as a tenderizer in the crust of baked Cantonese moon cakes, in "zongzi" (glutinous rice dumplings wrapped in bamboo leaves), in chewy southern Chinese noodles popular in Hong Kong and southern China, and in Japanese ramen noodles. Lye provides the crisp glaze on hard pretzels. It is used in kutsinta, a type of rice cake from the Philippines, together with pitsi-pitsî. In Assam, northeast India, extensive use is made of a type of lye called khar in Assamese and karwi in Boro, which is obtained by filtering the ashes of various banana stems, roots and skin; it is used in cooking and also for curing, as medicine, and as a substitute for soap. Lye made out of wood ashes is also used in the nixtamalization process of hominy corn by the tribes of the Eastern Woodlands in North America. In the United States, food-grade lye must meet the requirements outlined in the Food Chemicals Codex (FCC), as prescribed by the U.S. Food and Drug Administration (FDA). Lower grades of lye that are unsuitable for use in food preparation are commonly used as drain cleaners and oven cleaners. Soap Both sodium hydroxide and potassium hydroxide are used in making soap. Potassium hydroxide soaps are softer and more easily dissolved in water than sodium hydroxide soaps. Sodium hydroxide and potassium hydroxide are not interchangeable in either the proportions required or the properties produced in making soaps. "Hot process" soap making also uses lye as the main ingredient. Lye is added to water, cooled for a few minutes and then added to oils and butters. The mixture is then cooked over a period of time (1–2 hours), typically in a slow cooker, and then placed into a mold. Household Lyes are also valued for their cleaning effects. Sodium hydroxide is commonly the major constituent in commercial and industrial oven cleaners and clogged drain openers, due to its grease-dissolving abilities. Lyes decompose greases via alkaline ester hydrolysis, yielding water-soluble residues that are easily removed by rinsing. Tissue digestion Sodium or potassium hydroxide can be used to digest tissues of animal carcasses.
Often referred to as alkaline hydrolysis, the process involves placing the animal carcass into a sealed chamber, adding a mixture of lye and water, and applying heat to accelerate the process. After several hours the chamber will contain a liquid with a coffee-like appearance, and the only solids that remain are very fragile bone hulls of mostly calcium phosphate, which can be mechanically crushed to a fine powder with very little force. Sodium hydroxide is frequently used in the process of decomposing roadkill dumped in landfills by animal disposal contractors. Due to its low cost and easy availability, it has also been used to dispose of corpses by criminals. Italian serial killer Leonarda Cianciulli used this chemical to turn dead bodies into soap. In Mexico, a man who worked for drug cartels admitted to having disposed of more than 300 bodies with it. Fungus identification A 3–10% solution of potassium hydroxide (KOH) gives a color change in some species of mushrooms: in Agaricus, some species such as A. xanthodermus turn yellow with KOH, many have no reaction, and A. subrutilescens turns green. A distinctive change also occurs for some species of Cortinarius and boletes. Safety First aid When a person has been exposed to lye, sources recommend immediate removal of contaminated clothing/materials, gently brushing/wiping excess off of the skin, and then flushing the area of exposure with running water for 15–60 minutes, as well as contacting emergency services. Protection Personal protective equipment including safety glasses, chemical-resistant gloves, and adequate ventilation are required for the safe handling of lye. When in proximity to lye that is dissolving in an open container of water, the use of a vapor-resistant face mask is recommended. Adding lye too quickly can cause a runaway thermal reaction, which can result in the mixture boiling or erupting. Storage Lye in its solid state is deliquescent and has a strong affinity for moisture in the air. As a result, lye will dissolve when exposed to open air, absorbing large amounts of atmospheric moisture. Accordingly, lye is stored in air-tight (and correspondingly moisture-tight) containers. Glass is not a good material for storage, as strong alkalis are mildly corrosive to it. As with other corrosives, the containers should be labeled to indicate the potential danger of the contents and stored away from children, pets, heat, and moisture. Hazardous reactions The majority of safety concerns with lye are also common to most corrosives, such as their potentially destructive effects on living tissues; examples are the skin, flesh, and the cornea. Solutions containing lyes can cause chemical burns, permanent injuries, scarring and blindness, immediately upon contact. Lyes may be harmful or even fatal if swallowed; ingestion can cause esophageal stricture. Moreover, the dissolution of dry solid lye is highly exothermic, and the resulting heat may cause additional burns or ignite flammables. The reaction between sodium hydroxide and some metals is also hazardous. Aluminium, magnesium, zinc, tin, chromium, brass and bronze all react with lye to produce hydrogen gas. Since hydrogen is flammable, mixing a large quantity of lye with aluminium could result in an explosion. Both the potassium and sodium forms are able to dissolve copper. See also Slaked lime (calcium hydroxide) References Further reading External links Hydroxides Household chemicals Deliquescent materials Desiccants Soaps Sodium compounds
Lye
Physics,Chemistry
1,529
593,683
https://en.wikipedia.org/wiki/Form%20follows%20function
Form follows function is a principle of design associated with late 19th- and early 20th-century architecture and industrial design in general, which states that the appearance and structure of a building or object (its architectural form) should primarily relate to its intended function or purpose.
Origins of the phrase
The architect Louis Sullivan coined the maxim, which encapsulates Viollet-le-Duc's theories: "a rationally designed structure may not necessarily be beautiful but no building can be beautiful that does not have a rationally designed structure". The maxim is often incorrectly attributed to the sculptor Horatio Greenough (1805–1852), whose thinking mostly predates the later functionalist approach to architecture. Greenough's writings were for a long time largely forgotten, and were rediscovered only in the 1930s. In 1947, a selection of his essays was published as Form and Function: Remarks on Art by Horatio Greenough.
The earliest formulation of the idea, as "in architecture only that shall show that has a definite function", belongs not to an architect but to the monk Carlo Lodoli (1690–1761), who uttered the phrase inspired by positivist thinking (Lodoli's words were published by his student, Francesco Algarotti, in 1757).
Sullivan was Greenough's much younger compatriot and admired rationalist thinkers such as Thoreau, Emerson, Whitman, and Melville, as well as Greenough himself. In 1896, Sullivan coined the phrase in an article titled The Tall Office Building Artistically Considered, though he later attributed the core idea to the Roman architect, engineer, and author Marcus Vitruvius Pollio, who first asserted in his book De architectura that a structure must exhibit the three qualities of firmitas, utilitas, venustas—that is, it must be solid, useful, and beautiful. Sullivan actually wrote "form ever follows function", but the simpler and less emphatic phrase is more widely remembered. For Sullivan, this was distilled wisdom, an aesthetic credo, the single "rule that shall permit of no exception". The full quote is:
Whether it be the sweeping eagle in his flight, or the open apple-blossom, the toiling work-horse, the blithe swan, the branching oak, the winding stream at its base, the drifting clouds, over all the coursing sun, form ever follows function, and this is the law. Where function does not change, form does not change. The granite rocks, the ever-brooding hills, remain for ages; the lightning lives, comes into shape, and dies, in a twinkling. It is the pervading law of all things organic and inorganic, of all things physical and metaphysical, of all things human and all things superhuman, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law.
Sullivan developed the shape of the tall steel skyscraper in late 19th-century Chicago at a moment in which technology, taste and economic forces converged and made it necessary to break with established styles. If the shape of the building was not going to be chosen out of the old pattern book, something had to determine form, and according to Sullivan it was going to be the purpose of the building. Thus, "form follows function", as opposed to "form follows precedent". Sullivan's assistant, Frank Lloyd Wright, adopted and professed the same principle in a slightly different form.
Debate on the functionality of ornamentation
In 1910, the Austrian architect Adolf Loos gave a lecture titled "Ornament and Crime" in reaction to the elaborate ornament used by the Vienna Secession architects. Modernists adopted Loos's moralistic argument as well as Sullivan's maxim. Loos, who had worked as a carpenter in the USA, celebrated efficient plumbing and industrial artifacts like corn silos and steel water towers as examples of functional design.
Application in different fields
Architecture
The phrase "form (ever) follows function" became a battle cry of Modernist architects after the 1930s. The credo was taken to imply that decorative elements, which architects call "ornament", were superfluous in modern buildings. The phrase can best be implemented in design by asking the question, "Does it work?" Design in architecture following this mantra is driven by the functionality and purpose of the building. For example, a family home would be designed around familial and social interactions and life. It would be purposeful, without functionless flair. A building's beauty comes from the function it serves rather than from its visual design.
One aim of the Modernists after World War II was to elevate the living conditions of the masses. Many people around the world were living in less than ideal conditions, worsened by war. The Modernists sought to bring these people into more livable, humane spaces that, while not conventionally beautiful, were extremely functional. As a result, architecture following "form follows function" became a sign of hope and progress.
Despite coining the term, Louis Sullivan himself neither thought nor designed along such lines at the peak of his career. Indeed, while his buildings could be spare and crisp in their principal masses, he often punctuated their plain surfaces with eruptions of lush Art Nouveau and Celtic Revival decorations, usually cast in iron or terracotta, and ranging from organic forms like vines and ivy to more geometric designs and interlace, inspired by his Irish design heritage. Probably the most famous example is the writhing green ironwork that covers the entrance canopies of the Carson, Pirie, Scott and Company Building on South State Street in Chicago. These ornaments, often executed by the talented younger draftsmen in Sullivan's employ, would eventually become Sullivan's trademark; to students of architecture, they are his instantly recognizable signature.
Automobile designing
If the design of an automobile conforms to its function—for instance, the Fiat Multipla's shape, which is partly due to the desire to seat six people in two rows—then its form is said to follow its function.
Product design
One episode in the history of the inherent conflict between functional design and the demands of the marketplace took place in 1935, after the introduction of the streamlined Chrysler Airflow, when the American auto industry temporarily halted attempts to introduce optimal aerodynamic forms into mass manufacture. Some car-makers thought aerodynamic efficiency would result in a single optimal auto-body shape, a "teardrop" shape, which would not be good for unit sales. General Motors adopted two different positions on streamlining, one meant for its internal engineering community, the other meant for its customers. Like the annual model-year change, so-called aerodynamic styling is often meaningless in terms of technical performance. Subsequently, the drag coefficient has become both a marketing tool and a means of improving the saleability of a car by slightly reducing its fuel consumption and markedly increasing its top speed.
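For context, the underlying aerodynamics is a standard result (added here for illustration): the drag force on a car grows with the square of speed, and the power needed to overcome it with the cube,

    F_d = (1/2) \rho v^2 C_d A,        P_d = F_d v = (1/2) \rho v^3 C_d A,

where \rho is the air density, v the speed, C_d the drag coefficient and A the frontal area. Because aerodynamic drag dominates the resistive forces at high speed, a car's top speed responds directly to a lower C_d, while at ordinary cruising speeds drag is only one of several losses, so the fuel saving is more modest.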
The American industrial designers of the 1930s and 1940s, like Raymond Loewy, Norman Bel Geddes and Henry Dreyfuss, grappled with the inherent contradictions of "form follows function" as they redesigned blenders and locomotives and duplicating machines for mass-market consumption. Loewy formulated his MAYA ("Most Advanced Yet Acceptable") principle to express that product designs are bound by the functional constraints of mathematics, materials and logic, but their acceptance is constrained by social expectations. His advice was that very new technologies should be made as familiar as possible, but familiar technologies should be made surprising.
Victor Papanek (1923–1998) was an influential twentieth-century designer and design philosopher who taught and wrote as a proponent of "form follows function".
By honestly applying "form follows function", industrial designers had the potential to put their clients out of business. Some simple single-purpose objects like screwdrivers and pencils and teapots might be reducible to a single optimal form, precluding product differentiation. Some objects made too durable would prevent sales of replacements (see planned obsolescence). From the standpoint of functionality, some products are simply unnecessary.
An alternative approach, referred to as "form leads function" or "function follows form", starts with vague, abstract, or underspecified designs. These designs, sometimes generated using tools like text-to-image models, can serve as triggers for generating novel ideas for product design.
Software engineering
It has been argued that the structure and internal quality attributes of a working, non-trivial software artifact will represent first and foremost the engineering requirements of its construction, with the influence of process being marginal, if any. This does not mean that process is irrelevant, but that processes compatible with an artifact's requirements lead to roughly similar results.
The principle can also be applied to the enterprise application architectures of modern businesses, where "function" encompasses the business processes which should be assisted by the enterprise architecture, or "form". If the architecture were to dictate how the business operates, the business would likely suffer from inflexibility and an inability to adapt to change. Service-oriented architecture enables an enterprise architect to rearrange the "form" of the architecture to meet the functional requirements of a business by adopting standards-based communication protocols which enable interoperability. This stands in conflict with Conway's law, which states, from a social point of view, that "form follows organization". Furthermore, domain-driven design postulates that structure (software architecture, design patterns, implementation) should emerge from the constraints of the modeled domain (the functional requirements).
While "form" and "function" may be more or less explicit and invariant concepts to many engineering doctrines, metaprogramming and the functional programming paradigm lend themselves very well to exploring, blurring and inverting the essence of those two concepts.
The agile software development movement espouses techniques such as "test-driven development", in which the engineer begins with a minimum unit of user-oriented functionality, creates an automated test for it, implements the functionality, and then iterates, repeating this process. The result of, and argument for, this discipline is that the structure or "form" emerges from actual function; because it develops organically, it makes the project more adaptable in the long term, as well as of higher quality thanks to the functional base of automated tests.
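A minimal sketch of that test-first cycle (the slugify function and its behavior are invented here purely for illustration):

    import unittest

    def slugify(title):
        # Written second: just enough implementation to make the
        # failing test below pass ("red" to "green").
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # Written first: one minimum unit of user-oriented functionality.
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Form Follows Function"),
                             "form-follows-function")

    if __name__ == "__main__":
        unittest.main()

Each iteration adds another failing test and just enough code to pass it; the accumulated tests are the "functional base" referred to above.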
See also
Truth to materials
Aesthetics
Design science (methodology)
Separation of content and presentation
User-centered design
References
Notes
Bibliography
External links
"E. H. Gombrich’s adoption of the formula form follows function: A case of mistaken identity?" by Jan Michl
"How form functions: On esthetics and Gestalt theory" by Roy Behrens
"The Tall Office Building Artistically Considered" by Louis H. Sullivan in 1896.
Aesthetics Architectural theory Industrial design Modernism
Form follows function
Engineering
2,209
3,980,797
https://en.wikipedia.org/wiki/Inflatable%20costume
An inflatable costume or air-inflated costume is a costume inflated around the wearer by a battery-powered blower that draws air into the costume. These costumes usually stand 9–10 feet tall when inflated. Inflatable costumes are typically used by mascots and started appearing in the 1990s. One of the first inflatable mascots was Lil' Red of the University of Nebraska-Lincoln. Most NBA teams own an inflatable costume.
In the UK, inflatable costumes are becoming more popular, and many people wear them for fun at parties and similar events. These are smaller versions of the costumes worn by mascots in the US. The wearer steps into the costume, turns on the small electric fan, and pulls a drawstring tight at the neck, and the costume quickly inflates. Popular costumes include the inflatable sumo wrestler, ballerina, cowboy, cow, pig, T-Rex and chicken. An inflatable tyrannosaurus rex costume was a bestseller on Amazon in the U.S. in 2019.
See also
List of inflatable manufactured goods
References
Costume design One-piece suits Inflatable manufactured goods
Inflatable costume
Engineering
245
29,580,656
https://en.wikipedia.org/wiki/Surface%20mail
Surface mail, also known as sea mail, is mail transported by land and sea (along the surface of the Earth) rather than by air, as in airmail. Surface mail is significantly less expensive but slower than airmail, and is thus preferred for large or heavy, non-urgent items; it is primarily used for sending packages, not letters.
History
The term "surface mail" arose as a retronym (a retrospective term): following the development of airmail, a term was needed to describe traditional mail, and "surface mail" was coined for the purpose. A more recent example of the same process is the term snail mail (referring to physical mail, whether transported by surface or air), which followed the development of email.
By country
Australia
Australia Post offers international surface mail (known as seamail) for parcels 2 kg and over.
Israel
The Israel Postal Company offers international surface mail, known as "sea and land mail".
United States
In 2007, the US Postal Service discontinued its outbound international surface mail ("sea mail") service, mainly because of increased costs. Returned undeliverable surface parcels had become an expensive problem for the USPS, since it was often required to take such parcels back. Domestic surface mail (now "Retail Ground" or "Commercial Parcel Select") remains available.
Alternatives to international surface mail include:
International Surface Air Lift (ISAL). The service includes neither tracking nor insurance, but it may be possible to purchase shipping insurance from a third-party company.
USPS Commercial ePacket. The service is trackable.
Ordinary first-class international airmail.
Senders can access the International Surface Air Lift and ePacket services through postal wholesalers. Some examples of such wholesalers include Asendia USA (accessible through the Shippo website to users who have an Asendia account), Globegistics (now owned by Asendia), and APC Postal Logistics. If a sender sends an ISAL mailing directly through the USPS (without a wholesaler as an intermediary), the minimum weight is 50 pounds per mailing. ePacket mailings can never be sent directly through the USPS; senders must always use a wholesaler.
See also
Parcel post
Surface transport
References
External links
Royal Mail: Surface Mail
Postal systems Philatelic terminology
Surface mail
Technology
487
30,962,412
https://en.wikipedia.org/wiki/Human-transcriptome%20DataBase%20for%20Alternative%20Splicing
The Human-transcriptome DataBase for Alternative Splicing (H-DBAS) is a database of alternatively spliced human transcripts based on H-Invitational.
See also
Alternative splicing
References
External links
https://web.archive.org/web/20110208034608/http://jbirc.jbic.or.jp/h-dbas/
Biological databases Gene expression Spliceosome RNA splicing
Human-transcriptome DataBase for Alternative Splicing
Chemistry,Biology
98
27,051,151
https://en.wikipedia.org/wiki/Big%20data
Big data primarily refers to data sets that are too large or complex to be dealt with by traditional data-processing software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: volume, variety, and velocity. The analysis of big data presents challenges in sampling, and thus previously allowed for only observations and sampling. A fourth concept, veracity, refers to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.
Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem." Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on". Scientists, business executives, medical practitioners, advertisers and governments alike regularly meet difficulties with large data sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics, connectomics, complex physics simulations, biology, and environmental research.
The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial (remote sensing) equipment, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; every day, 2.5 exabytes (2.17×2⁶⁰ bytes) of data are generated. An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020, and that by 2025 there would be 163 zettabytes of data. According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021, while a Statista report forecasts the global big data market to grow to $103 billion by 2027. In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year. In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data. And users of services enabled by personal-location data could capture $600 billion in consumer surplus. One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.
Relational database management systems and desktop statistical software packages used to visualize data often have difficulty processing and analyzing big data. The processing and analysis of big data may require "massively parallel software running on tens, hundreds, or even thousands of servers". What qualifies as "big data" varies depending on the capabilities of those analyzing it and their tools. Furthermore, expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."
Definition
The term big data has been in use since the 1990s, with some giving credit to John Mashey for popularizing it. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time. Big data philosophy encompasses unstructured, semi-structured and structured data; however, the main focus is on unstructured data. Big data "size" is a constantly moving target, ranging from a few dozen terabytes to many zettabytes of data. Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of a massive scale.
"Volume", "variety", "velocity", and various other "Vs" are added by some organizations to describe it, a revision challenged by some industry authorities. The Vs of big data were often referred to as the "three Vs", "four Vs", and "five Vs", representing the qualities of big data in volume, variety, velocity, veracity, and value. Variability is often included as an additional quality of big data. A 2018 definition states "Big data is where parallel computing tools are needed to handle data", and notes, "This represents a distinct and clearly defined change in the computer science used, via parallel programming theories, and losses of some of the guarantees and capabilities made by Codd's relational model."
In a comparative study of big datasets, Kitchin and McArdle found that none of the commonly considered characteristics of big data appear consistently across all of the analyzed cases. For this reason, other studies identified the redefinition of power dynamics in knowledge discovery as the defining trait. Instead of focusing on the intrinsic characteristics of big data, this alternative perspective pushes forward a relational understanding of the object, claiming that what matters is the way in which data is collected, stored, made available and analyzed.
Big data vs. business intelligence
The growing maturity of the concept more starkly delineates the difference between "big data" and "business intelligence": business intelligence uses applied mathematics tools and descriptive statistics with data with high information density to measure things, detect trends, etc., whereas big data uses mathematical analysis, optimization, inductive statistics, and concepts from nonlinear system identification to infer laws (regressions, nonlinear relationships, and causal effects) from large sets of data with low information density, to reveal relationships and dependencies, or to perform predictions of outcomes and behaviors.
Characteristics
Big data can be described by the following characteristics:
Volume
The quantity of generated and stored data.
The size of the data determines the value and potential insight, and whether it can be considered big data or not. The size of big data is usually larger than terabytes and petabytes.
Variety
The type and nature of the data. Earlier technologies like RDBMSs were capable of handling structured data efficiently and effectively. However, the change in type and nature from structured to semi-structured or unstructured challenged the existing tools and technologies. Big data technologies evolved with the prime intention of capturing, storing, and processing semi-structured and unstructured (variety) data generated at high speed (velocity) and huge in size (volume). Later, these tools and technologies were explored and used for handling structured data as well, though traditional RDBMSs remained preferable for storage; the processing of structured data was kept as optional, using either big data technologies or traditional RDBMSs. This helps in analyzing data towards effective usage of the hidden insights exposed from the data collected via social media, log files, sensors, etc. Big data draws from text, images, audio, video, and it completes missing pieces through data fusion.
Velocity
The speed at which the data is generated and processed to meet the demands and challenges that lie in the path of growth and development. Big data is often available in real time. Compared to small data, big data is produced more continually. Two kinds of velocity related to big data are the frequency of generation and the frequency of handling, recording, and publishing.
Veracity
The truthfulness or reliability of the data, which refers to the data quality and the data value. Big data must not only be large in size, but also must be reliable in order to achieve value in its analysis. The data quality of captured data can vary greatly, affecting accurate analysis.
Value
The worth in information that can be achieved by the processing and analysis of large datasets. Value can also be measured by an assessment of the other qualities of big data, and may represent the profitability of information retrieved from the analysis of big data.
Variability
The characteristic of the changing formats, structure, or sources of big data. Big data can include structured, unstructured, or combinations of structured and unstructured data. Big data analysis may integrate raw data from multiple sources. The processing of raw data may also involve transformations of unstructured data to structured data.
Other possible characteristics of big data are:
Exhaustive
Whether the entire system (i.e., n=all) is captured or recorded or not. Big data may or may not include all the available data from sources.
Fine-grained and uniquely lexical
Respectively, the proportion of specific data of each element per element collected, and whether the element and its characteristics are properly indexed or identified.
Relational
If the data collected contains common fields that would enable a conjoining, or meta-analysis, of different data sets.
Extensional
If new fields in each element of the data collected can be added or changed easily.
Scalability
If the size of the big data storage system can expand rapidly.
Architecture
Big data repositories have existed in many forms, often built by corporations with a special need. Commercial vendors historically offered parallel database management systems for big data beginning in the 1990s. For many years, WinterCorp published the largest database report.
Teradata Corporation in 1984 marketed the parallel processing DBC 1012 system. Teradata systems were the first to store and analyze 1 terabyte of data in 1992. Hard disk drives were 2.5 GB in 1991, so the definition of big data continuously evolves. Teradata installed the first petabyte-class RDBMS-based system in 2007. There are now a few dozen petabyte-class Teradata relational databases installed, the largest of which exceeds 50 PB. Systems up until 2008 were 100% structured relational data. Since then, Teradata has added semi-structured data types including XML, JSON, and Avro.
In 2000, Seisint Inc. (now LexisNexis Risk Solutions) developed a C++-based distributed platform for data processing and querying known as the HPCC Systems platform. This system automatically partitions, distributes, stores and delivers structured, semi-structured, and unstructured data across multiple commodity servers. Users can write data processing pipelines and queries in a declarative dataflow programming language called ECL. Data analysts working in ECL are not required to define data schemas upfront and can rather focus on the particular problem at hand, reshaping data in the best possible manner as they develop the solution. In 2004, LexisNexis acquired Seisint Inc. and its high-speed parallel processing platform, and successfully used this platform to integrate the data systems of ChoicePoint Inc. when it acquired that company in 2008. In 2011, the HPCC Systems platform was open-sourced under the Apache v2.0 License.
CERN and other physics experiments have collected big data sets for many decades, usually analyzed via high-throughput computing rather than the map-reduce architectures usually meant by the current "big data" movement.
In 2004, Google published a paper on a process called MapReduce that uses a similar architecture. The MapReduce concept provides a parallel processing model, and an associated implementation was released to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the "map" step). The results are then gathered and delivered (the "reduce" step). The framework was very successful, so others wanted to replicate the algorithm. Therefore, an implementation of the MapReduce framework was adopted by an Apache open-source project named "Hadoop".
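The map and reduce steps described above are easy to sketch; the classic word-count example below is a single-process illustration of the model, not the distributed Hadoop API:

    from collections import defaultdict

    def map_phase(document):
        # "Map": emit (key, value) pairs; in a cluster this runs in
        # parallel on many nodes, each against a local data shard.
        for word in document.split():
            yield (word.lower(), 1)

    def reduce_phase(pairs):
        # "Reduce": gather the values for each key and combine them.
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    docs = ["big data needs parallel tools", "big tools for big data"]
    pairs = (p for doc in docs for p in map_phase(doc))
    print(reduce_phase(pairs))  # {'big': 3, 'data': 2, ...}

In a real deployment, a shuffle stage groups each key's pairs between the two phases; the sketch collapses everything into one process.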
Apache Spark was developed in 2012 in response to limitations in the MapReduce paradigm, as it adds in-memory processing and the ability to set up many operations (not just map followed by reduce).
MIKE2.0 is an open approach to information management that acknowledges the need for revisions due to big data implications identified in an article titled "Big Data Solution Offering". The methodology addresses handling big data in terms of useful permutations of data sources, complexity in interrelationships, and difficulty in deleting (or modifying) individual records.
Studies in 2012 showed that a multiple-layer architecture was one option to address the issues that big data presents. A distributed parallel architecture distributes data across multiple servers; these parallel execution environments can dramatically improve data processing speeds. This type of architecture inserts data into a parallel DBMS, which implements the use of MapReduce and Hadoop frameworks. This type of framework looks to make the processing power transparent to the end user by using a front-end application server.
The data lake allows an organization to shift its focus from centralized control to a shared model to respond to the changing dynamics of information management. This enables quick segregation of data into the data lake, thereby reducing the overhead time.
Technologies
A 2011 McKinsey Global Institute report characterizes the main components and ecosystem of big data as follows:
Techniques for analyzing data, such as A/B testing, machine learning, and natural language processing
Big data technologies, like business intelligence, cloud computing, and databases
Visualization, such as charts, graphs, and other displays of the data
Multidimensional big data can also be represented as OLAP data cubes or, mathematically, tensors. Array database systems have set out to provide storage and high-level query support on this data type.
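To illustrate the cube view (a toy sketch; the dimensions and figures are invented), a measure such as sales can be held in a 3-D array indexed by region, product and month, and OLAP-style roll-ups become sums over axes:

    import numpy as np

    # sales[region, product, month]: a tiny 2 x 2 x 3 data cube
    sales = np.array([[[10, 12, 9], [4, 5, 6]],
                      [[7, 8, 11], [3, 2, 4]]])

    by_region = sales.sum(axis=(1, 2))  # roll up product and month
    by_month = sales.sum(axis=(0, 1))   # roll up region and product
    january = sales[:, :, 0]            # a "slice" along the month axis

    print(by_region)  # [46 35]
    print(by_month)   # [24 27 30]

Array databases generalize exactly this kind of axis-wise operation to data cubes far too large for one machine's memory.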
Additional technologies being applied to big data include efficient tensor-based computation, such as multilinear subspace learning, massively parallel-processing (MPP) databases, search-based applications, data mining, distributed file systems, distributed cache (e.g., burst buffer and Memcached), distributed databases, cloud and HPC-based infrastructure (applications, storage and computing resources), and the Internet. Although many approaches and technologies have been developed, it still remains difficult to carry out machine learning with big data.
Some MPP relational databases have the ability to store and manage petabytes of data. Implicit is the ability to load, monitor, back up, and optimize the use of the large data tables in the RDBMS.
DARPA's Topological Data Analysis program seeks the fundamental structure of massive data sets, and in 2008 the technology went public with the launch of a company called "Ayasdi".
The practitioners of big data analytics processes are generally hostile to slower shared storage, preferring direct-attached storage (DAS) in its various forms, from solid-state drives (SSDs) to high-capacity SATA disks buried inside parallel processing nodes. The perception of shared storage architectures—storage area network (SAN) and network-attached storage (NAS)—is that they are relatively slow, complex, and expensive. These qualities are not consistent with big data analytics systems that thrive on system performance, commodity infrastructure, and low cost.
Real or near-real-time information delivery is one of the defining characteristics of big data analytics. Latency is therefore avoided whenever and wherever possible. Data in direct-attached memory or disk is good; data on memory or disk at the other end of an FC SAN connection is not. The cost of a SAN at the scale needed for analytics applications is much higher than that of other storage techniques.
Applications
Big data has increased the demand for information management specialists so much so that Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP, and Dell have spent more than $15 billion on software firms specializing in data management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year, about twice as fast as the software business as a whole.
Developed economies increasingly use data-intensive technologies. There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet. Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more people became more literate, which in turn led to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, and 65 exabytes in 2007, and predictions put the amount of internet traffic at 667 exabytes annually by 2014. According to one estimate, one-third of the globally stored information is in the form of alphanumeric text and still image data, which is the format most useful for most big data applications. This also shows the potential of yet unused data (i.e. in the form of video and audio content).
While many vendors offer off-the-shelf products for big data, experts promote the development of in-house custom-tailored systems if the company has sufficient technical capabilities.
Government
The use and adoption of big data within governmental processes allows efficiencies in terms of cost, productivity, and innovation, but comes with flaws. Data analysis often requires multiple parts of government (central and local) to work in collaboration and create new and innovative processes to deliver the desired outcome. A common government organization that makes use of big data is the National Security Agency (NSA), which constantly monitors the activities of the Internet in search of potential patterns of suspicious or illegal activity its system may pick up.
Civil registration and vital statistics (CRVS) collects all certificate statuses from birth to death. CRVS is a source of big data for governments.
International development
Research on the effective usage of information and communication technologies for development (also known as "ICT4D") suggests that big data technology can make important contributions but also present unique challenges to international development. Advancements in big data analysis offer cost-effective opportunities to improve decision-making in critical development areas such as health care, employment, economic productivity, crime, security, and natural disaster and resource management. Additionally, user-generated data offers new opportunities to give the unheard a voice. However, longstanding challenges for developing regions such as inadequate technological infrastructure and economic and human resource scarcity exacerbate existing concerns with big data such as privacy, imperfect methodology, and interoperability issues. The challenge of "big data for development" is currently evolving toward the application of this data through machine learning, known as "artificial intelligence for development" (AI4D).
Benefits
A major practical application of big data for development has been "fighting poverty with data". In 2015, Blumenstock and colleagues predicted poverty and wealth from mobile phone metadata, and in 2016 Jean and colleagues combined satellite imagery and machine learning to predict poverty.
Using digital trace data to study the labor market and the digital economy in Latin America, Hilbert and colleagues argue that digital trace data has several benefits, such as:
Thematic coverage: including areas that were previously difficult or impossible to measure
Geographical coverage: providing sizable and comparable data for almost all countries, including many small countries that usually are not included in international inventories
Level of detail: providing fine-grained data with many interrelated variables, and new aspects, like network connections
Timeliness and time series: graphs can be produced within days of the data being collected
Challenges
At the same time, working with digital trace data instead of traditional survey data does not eliminate the traditional challenges involved when working in the field of international quantitative analysis. Priorities change, but the basic discussions remain the same. Among the main challenges are:
Representativeness. While traditional development statistics is mainly concerned with the representativeness of random survey samples, digital trace data is never a random sample.
Generalizability. While observational data always represents its source very well, it only represents what it represents, and nothing more. While it is tempting to generalize from specific observations of one platform to broader settings, this is often very deceptive.
Harmonization. Digital trace data still requires international harmonization of indicators. It adds the challenge of so-called "data fusion", the harmonization of different sources.
Data overload. Analysts and institutions are not used to dealing effectively with a large number of variables, which can be done efficiently with interactive dashboards. Practitioners still lack a standard workflow that would allow researchers, users and policymakers to deal with data efficiently and effectively.
Finance
Big data is being rapidly adopted in finance to (1) speed up processing and (2) deliver better, more informed inferences, both internally and to the clients of financial institutions. The financial applications of big data range from investing decisions and trading (processing volumes of available price data, limit order books, economic data and more, all at the same time), to portfolio management (optimizing over an increasingly large array of financial instruments, potentially selected from different asset classes), risk management (credit rating based on extended information), and any other aspect where the data inputs are large. Big data has also been a typical concept within the field of alternative financial services. Some of the major areas involve crowdfunding platforms and cryptocurrency exchanges.
Healthcare
Big data analytics has been used in healthcare in providing personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms and patient registries. Some areas of improvement are more aspirational than actually implemented. The level of data generated within healthcare systems is not trivial. With the added adoption of mHealth, eHealth and wearable technologies, the volume of data will continue to increase. This includes electronic health record data, imaging data, patient-generated data, sensor data, and other forms of data that are difficult to process.
There is now an even greater need for such environments to pay greater attention to data and information quality. "Big data very often means 'dirty data' and the fraction of data inaccuracies increases with data volume growth." Human inspection at the big data scale is impossible, and there is a desperate need in the health service for intelligent tools for accuracy and believability control and for handling information that is missed. While extensive information in healthcare is now electronic, it fits under the big data umbrella as most of it is unstructured and difficult to use. The use of big data in healthcare has raised significant ethical challenges ranging from risks for individual rights, privacy and autonomy, to transparency and trust.
Big data in health research is particularly promising in terms of exploratory biomedical research, as data-driven analysis can move forward more quickly than hypothesis-driven research. Trends seen in data analysis can then be tested in traditional, hypothesis-driven follow-up biological research and eventually clinical research.
A related application sub-area within the healthcare field that relies heavily on big data is computer-aided diagnosis in medicine. For instance, epilepsy monitoring customarily creates 5 to 10 GB of data daily. Similarly, a single uncompressed image of breast tomosynthesis averages 450 MB of data. These are just a few of the many examples where computer-aided diagnosis uses big data. For this reason, big data has been recognized as one of the seven key challenges that computer-aided diagnosis systems need to overcome in order to reach the next level of performance.
Education
A McKinsey Global Institute study found a shortage of 1.5 million highly trained data professionals and managers, and a number of universities, including the University of Tennessee and UC Berkeley, have created master's programs to meet this demand. Private boot camps have also developed programs to meet that demand, including paid programs like The Data Incubator or General Assembly. In the specific field of marketing, one of the problems stressed by Wedel and Kannan is that marketing has several subdomains (e.g., advertising, promotions, product development, branding) that all use different types of data.
Media
To understand how the media uses big data, it is first necessary to provide some context on the mechanisms used in the media process. It has been suggested by Nick Couldry and Joseph Turow that practitioners in media and advertising approach big data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers that have been exclusively gleaned through various data-mining activities. Examples include:
Targeting of consumers (for advertising by marketers)
Data capture
Data journalism: publishers and journalists use big data tools to provide unique and innovative insights and infographics.
Channel 4, the British public-service television broadcaster, is a leader in the field of big data and data analysis.
Insurance
Health insurance providers are collecting data on social "determinants of health" such as food and TV consumption, marital status, clothing size, and purchasing habits, from which they make predictions on health costs, in order to spot health issues in their clients. It is controversial whether these predictions are currently being used for pricing.
Internet of things (IoT)
Big data and the IoT work in conjunction. Data extracted from IoT devices provides a mapping of device interconnectivity. Such mappings have been used by the media industry, companies, and governments to more accurately target their audience and increase media efficiency. The IoT is also increasingly adopted as a means of gathering sensory data, and this sensory data has been used in medical, manufacturing and transportation contexts.
Kevin Ashton, the digital innovation expert who is credited with coining the term, defines the Internet of things in this quote: "If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling, and whether they were fresh or past their best."
Information technology
Especially since 2015, big data has come to prominence within business operations as a tool to help employees work more efficiently and streamline the collection and distribution of information technology (IT). The use of big data to resolve IT and data collection issues within an enterprise is called IT operations analytics (ITOA). By applying big data principles to the concepts of machine intelligence and deep computing, IT departments can predict potential issues and prevent them. ITOA businesses offer platforms for systems management that bring data silos together and generate insights from the whole of the system rather than from isolated pockets of data.
Survey science
Compared to survey-based data collection, big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. Since 2018, survey scientists have started to examine how big data and survey science can complement each other to allow researchers and practitioners to improve the production of statistics and its quality. There have been three Big Data Meets Survey Science (BigSurv) conferences in 2018, 2020 (virtual) and 2023, with one conference forthcoming in 2025; a special issue in the Social Science Computer Review; a special issue in the Journal of the Royal Statistical Society; a special issue in EPJ Data Science; and a book called Big Data Meets Social Sciences edited by Craig Hill and five other Fellows of the American Statistical Association. In 2021, the founding members of BigSurv received the Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research.
Marketing
Big data is notable in marketing due to the constant "datafication" of everyday consumers of the internet, in which all forms of data are tracked. The datafication of consumers can be defined as quantifying many or all human behaviors for the purpose of marketing. The increasingly digital world of rapid datafication makes this idea relevant to marketing because the amount of data constantly grows exponentially.
The amount of data is predicted to increase from 44 to 163 zettabytes within the span of five years. The size of big data can often be difficult to navigate for marketers. As a result, adopters of big data may find themselves at a disadvantage. Algorithmic findings can be difficult to achieve with such large datasets. Big data in marketing is a highly lucrative tool that can be used by large corporations; its value lies in the possibility of predicting significant trends, interests, or statistical outcomes in a consumer-based manner. There are three significant factors in the use of big data in marketing:
Big data provides customer behavior pattern spotting for marketers, since all human actions are quantified into readable numbers for marketers to analyze and use in their research. In addition, big data can also be seen as a customized product recommendation tool: since big data is effective in analyzing customers' purchase behaviors and browsing patterns, this technology can assist companies in promoting specific personalized products to specific customers.
Real-time market responsiveness is important for marketers because of the ability to shift marketing efforts and correct to current trends, which is helpful in maintaining relevance to consumers. This can supply corporations with the information necessary to predict the wants and needs of consumers in advance.
Data-driven market ambidexterity is highly fueled by big data. New models and algorithms are being developed to make significant predictions about certain economic and social situations.
Case studies
Government
China
The Integrated Joint Operations Platform (IJOP, 一体化联合作战平台) is used by the government to monitor the population, particularly Uyghurs. Biometrics, including DNA samples, are gathered through a program of free physicals. By 2020, China planned to give all its citizens a personal "social credit" score based on how they behave. The Social Credit System, piloted in a number of Chinese cities, is considered a form of mass surveillance which uses big data analysis technology.
India
Big data analysis was tried out for the BJP to win the 2014 Indian General Election. The Indian government uses numerous techniques to ascertain how the Indian electorate is responding to government action, as well as ideas for policy augmentation.
Israel
Personalized diabetic treatments can be created through GlucoMe's big data solution.
United Kingdom
Examples of uses of big data in public services:
Data on prescription drugs: by connecting origin, location and the time of each prescription, a research unit was able to exemplify and examine the considerable delay between the release of any given drug and a UK-wide adaptation of the National Institute for Health and Care Excellence guidelines. This suggests that new or most up-to-date drugs take some time to filter through to the general patient.
Joining up data: a local authority blended data about services, such as road gritting rotas, with services for people at risk, such as Meals on Wheels. The connection of data allowed the local authority to avoid any weather-related delay.
United States
In 2012, the Obama administration announced the Big Data Research and Development Initiative, to explore how big data could be used to address important problems faced by the government. The initiative is composed of 84 different big data programs spread across six departments. Big data analysis played a large role in Barack Obama's successful 2012 re-election campaign.
The United States federal government owns four of the ten most powerful supercomputers in the world. The Utah Data Center has been constructed by the United States National Security Agency. When finished, the facility will be able to handle a large amount of information collected by the NSA over the Internet. The exact amount of storage space is unknown, but more recent sources claim it will be on the order of a few exabytes. This has posed security concerns regarding the anonymity of the data collected.
Retail
Walmart handles more than 1 million customer transactions every hour, which are imported into databases estimated to contain more than 2.5 petabytes (2560 terabytes) of data—the equivalent of 167 times the information contained in all the books in the US Library of Congress.
Windermere Real Estate uses location information from nearly 100 million drivers to help new home buyers determine their typical drive times to and from work throughout various times of the day.
FICO Card Detection System protects accounts worldwide.
Omnichannel retailing leverages online big data to improve offline experiences.
Science
The Large Hadron Collider experiments represent about 150 million sensors delivering data 40 million times per second. There are nearly 600 million collisions per second. After filtering out and refraining from recording more than 99.99995% of these streams, there are 1,000 collisions of interest per second. As a result, only working with less than 0.001% of the sensor stream data, the data flow from all four LHC experiments represents a 25-petabyte annual rate before replication. This becomes nearly 200 petabytes after replication. If all sensor data were recorded in the LHC, the data flow would be extremely hard to work with: it would exceed a 150 million petabyte annual rate, or nearly 500 exabytes per day, before replication. To put the number in perspective, this is equivalent to 500 quintillion (5×10²⁰) bytes per day, almost 200 times more than all the other sources combined in the world.
The Square Kilometre Array is a radio telescope built of thousands of antennas. It is expected to be operational by 2024. Collectively, these antennas are expected to gather 14 exabytes and store one petabyte per day. It is considered one of the most ambitious scientific projects ever undertaken.
When the Sloan Digital Sky Survey (SDSS) began to collect astronomical data in 2000, it amassed more in its first few weeks than all data collected in the history of astronomy previously. Continuing at a rate of about 200 GB per night, SDSS has amassed more than 140 terabytes of information. When the Large Synoptic Survey Telescope, successor to SDSS, comes online in 2020, its designers expect it to acquire that amount of data every five days.
Decoding the human genome originally took 10 years; now it can be achieved in less than a day. DNA sequencers have divided the sequencing cost by 10,000 in the last ten years, which is 100 times cheaper than the reduction in cost predicted by Moore's law.
The NASA Center for Climate Simulation (NCCS) stores 32 petabytes of climate observations and simulations on the Discover supercomputing cluster.
Google's DNAStack compiles and organizes DNA samples of genetic data from around the world to identify diseases and other medical defects. These fast and exact calculations eliminate any "friction points", or human errors that could be made by one of the numerous science and biology experts working with the DNA.
DNAStack, a part of Google Genomics, allows scientists to use the vast sample of resources from Google's search servers to scale social experiments that would usually take years, instantly.
23andMe's DNA database contains the genetic information of over 1,000,000 people worldwide. The company explores selling the "anonymous aggregated genetic data" to other researchers and pharmaceutical companies for research purposes if patients give their consent. Ahmad Hariri, professor of psychology and neuroscience at Duke University, who has been using 23andMe in his research since 2009, states that the most important aspect of the company's new service is that it makes genetic research accessible and relatively cheap for scientists. A study that identified 15 genome sites linked to depression in 23andMe's database led to a surge in demands to access the repository, with 23andMe fielding nearly 20 requests to access the depression data in the two weeks after publication of the paper.
Computational fluid dynamics (CFD) and hydrodynamic turbulence research generate massive data sets. The Johns Hopkins Turbulence Databases (JHTDB) contain over 350 terabytes of spatiotemporal fields from direct numerical simulations of various turbulent flows. Such data have been difficult to share using traditional methods such as downloading flat simulation output files. The data within JHTDB can be accessed using "virtual sensors" with various access modes ranging from direct web-browser queries, access through Matlab, Python, Fortran and C programs executing on clients' platforms, to cut-out services to download raw data. The data have been used in over 150 scientific publications.
Sports
Big data can be used to improve training and to understand competitors, using sport sensors. It is also possible to predict winners in a match using big data analytics. Future performance of players could be predicted as well; thus, players' value and salary are determined by data collected throughout the season. In Formula One races, race cars with hundreds of sensors generate terabytes of data. These sensors collect data points from tire pressure to fuel burn efficiency. Based on the data, engineers and data analysts decide whether adjustments should be made in order to win a race. In addition, using big data, race teams try to predict the time they will finish the race beforehand, based on simulations using data collected over the season.
Technology
eBay.com uses two data warehouses at 7.5 petabytes and 40 PB, as well as a 40 PB Hadoop cluster for search, consumer recommendations, and merchandising.
Amazon.com handles millions of back-end operations every day, as well as queries from more than half a million third-party sellers. The core technology that keeps Amazon running is Linux-based, and the company had the world's three largest Linux databases, with capacities of 7.8 TB, 18.5 TB, and 24.7 TB.
Facebook handles 50 billion photos from its user base and has reached 2 billion monthly active users.
Google was handling roughly 100 billion searches per month.
COVID-19
During the COVID-19 pandemic, big data was raised as a way to minimise the impact of the disease. Significant applications of big data included minimising the spread of the virus, case identification and development of medical treatment. Governments used big data to track infected people to minimise spread. Early adopters included China, Taiwan, South Korea, and Israel.
Research activities
Encrypted search and cluster formation in big data were demonstrated in March 2014 at the American Society of Engineering Education. Gautam Siwach of the MIT Computer Science and Artificial Intelligence Laboratory and Amir Esmailpour of the UNH Research Group investigated the key features of big data, namely the formation of clusters and their interconnections. They focused on the security of big data and on the presence of different types of data in encrypted form at the cloud interface, providing raw definitions and real-time examples within the technology. Moreover, they proposed an approach for identifying the encoding technique in order to advance towards an expedited search over encrypted text, leading to security enhancements in big data.
In March 2012, The White House announced a national "Big Data Initiative" that consisted of six federal departments and agencies committing more than $200 million to big data research projects. The initiative included a National Science Foundation "Expeditions in Computing" grant of $10 million over five years to the AMPLab at the University of California, Berkeley. The AMPLab also received funds from DARPA and over a dozen industrial sponsors, and uses big data to attack a wide range of problems from predicting traffic congestion to fighting cancer.
The White House Big Data Initiative also included a commitment by the Department of Energy to provide $25 million in funding over five years to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute, led by the Energy Department's Lawrence Berkeley National Laboratory. The SDAV Institute aims to bring together the expertise of six national laboratories and seven universities to develop new tools to help scientists manage and visualize data on the department's supercomputers.
The U.S. state of Massachusetts announced the Massachusetts Big Data Initiative in May 2012, which provides funding from the state government and private companies to a variety of research institutions. The Massachusetts Institute of Technology hosts the Intel Science and Technology Center for Big Data in the MIT Computer Science and Artificial Intelligence Laboratory, combining government, corporate, and institutional funding and research efforts.
The European Commission is funding the two-year-long Big Data Public Private Forum through their Seventh Framework Program to engage companies, academics and other stakeholders in discussing big data issues. The project aims to define a strategy in terms of research and innovation to guide supporting actions from the European Commission in the successful implementation of the big data economy. Outcomes of this project will be used as input for Horizon 2020, their next framework program.
The British government announced in March 2014 the founding of the Alan Turing Institute, named after the computer pioneer and code-breaker, which will focus on new ways to collect and analyze large data sets.
At the University of Waterloo Stratford Campus Canadian Open Data Experience (CODE) Inspiration Day, participants demonstrated how data visualization can increase the understanding and appeal of big data sets and communicate their story to the world.
Computational social sciences – Anyone can use application programming interfaces (APIs) provided by big data holders, such as Google and Twitter, to do research in the social and behavioral sciences.
Often these APIs are provided for free. Tobias Preis et al. used Google Trends data to demonstrate that Internet users from countries with a higher per capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings suggest there may be a link between online behaviors and real-world economic indicators. The authors of the study examined Google query logs and computed the ratio of the volume of searches for the coming year (2011) to the volume of searches for the previous year (2009), which they call the "future orientation index". They compared the future orientation index to the per capita GDP of each country, and found a strong tendency for countries where Google users inquire more about the future to have a higher GDP.
Tobias Preis and his colleagues Helen Susannah Moat and H. Eugene Stanley introduced a method to identify online precursors for stock market moves, using trading strategies based on search volume data provided by Google Trends. Their analysis of Google search volume for 98 terms of varying financial relevance, published in Scientific Reports, suggests that increases in search volume for financially relevant search terms tend to precede large losses in financial markets.
Big data sets come with algorithmic challenges that previously did not exist. Hence, some see a need to fundamentally change the way such data are processed.
Sampling big data
A research question that is asked about big data sets is whether it is necessary to look at the full data to draw certain conclusions about the properties of the data, or whether a sample is good enough. The name big data itself contains a term related to size, and this is an important characteristic of big data. But sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. In manufacturing, different types of sensory data, such as acoustics, vibration, pressure, current, voltage, and controller data, are available at short time intervals. To predict downtime it may not be necessary to look at all the data; a sample may be sufficient (a small sketch of this idea follows below). Big data can be broken down by various data point categories such as demographic, psychographic, behavioral, and transactional data. With large sets of data points, marketers are able to create and use more customized segments of consumers for more strategic targeting.
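As an illustration of the sampling idea above (this sketch and all of its numbers are synthetic and invented for the example; it is not drawn from the studies cited), the following Python snippet estimates the mean of a large simulated sensor stream from a 0.1% random sample and reports a 95% confidence interval:

import random
import statistics

# Synthetic "full" data set: one day of vibration readings at 10 Hz.
random.seed(42)
population = [random.gauss(mu=5.0, sigma=1.2) for _ in range(864_000)]

# Estimate the mean from a 0.1% simple random sample instead of a full scan.
sample = random.sample(population, k=len(population) // 1000)
estimate = statistics.fmean(sample)

# Standard error of the sample mean: s / sqrt(n).
stderr = statistics.stdev(sample) / len(sample) ** 0.5

print(f"full-scan mean : {statistics.fmean(population):.4f}")
print(f"sample estimate: {estimate:.4f} +/- {1.96 * stderr:.4f} (95% CI)")

With well-behaved data such as this, the sample estimate falls within a few hundredths of the full-scan mean while touching only one reading in a thousand; the caveats discussed in the critique sections below (bias, non-representative sources) are exactly the cases where such sampling fails.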
Critique
Critiques of the big data paradigm come in two flavors: those that question the implications of the approach itself, and those that question the way it is currently done. One approach to this criticism is the field of critical data studies.
Critiques of the big data paradigm
"A crucial problem is that we do not know much about the underlying empirical micro-processes that lead to the emergence of the[se] typical network characteristics of Big Data." In their critique, Snijders, Matzat, and Reips point out that often very strong assumptions are made about mathematical properties that may not at all reflect what is really going on at the level of micro-processes. Mark Graham has leveled broad critiques at Chris Anderson's assertion that big data will spell the end of theory, focusing in particular on the notion that big data must always be contextualized in their social, economic, and political contexts.
Even as companies invest eight- and nine-figure sums to derive insight from information streaming in from suppliers and customers, less than 40% of employees have sufficiently mature processes and skills to do so. To overcome this insight deficit, big data, no matter how comprehensive or well analyzed, must be complemented by "big judgment", according to an article in the Harvard Business Review.
In much the same vein, it has been pointed out that the decisions based on the analysis of big data are inevitably "informed by the world as it was in the past, or, at best, as it currently is". Fed by large amounts of data on past experiences, algorithms can predict future development if the future is similar to the past. If the dynamics of the system change in the future (if the process is not stationary), the past can say little about the future. In order to make predictions in changing environments, it would be necessary to have a thorough understanding of the system's dynamics, which requires theory. As a response to this critique, Alemany Oliver and Vayre suggest using "abductive reasoning as a first step in the research process in order to bring context to consumers' digital traces and make new theories emerge". Additionally, it has been suggested to combine big data approaches with computer simulations, such as agent-based models and complex systems. Agent-based models are increasingly getting better at predicting the outcome of social complexities of even unknown future scenarios through computer simulations that are based on a collection of mutually interdependent algorithms. Finally, the use of multivariate methods that probe for the latent structure of the data, such as factor analysis and cluster analysis, has proven useful as an analytic approach that goes well beyond the bivariate approaches (e.g. contingency tables) typically employed with smaller data sets.
In health and biology, conventional scientific approaches are based on experimentation. For these approaches, the limiting factor is the relevant data that can confirm or refute the initial hypothesis. A new postulate is now accepted in the biosciences: the information provided by the data in huge volumes (omics), without a prior hypothesis, is complementary and sometimes necessary to conventional approaches based on experimentation. In the massive approaches, it is the formulation of a relevant hypothesis to explain the data that is the limiting factor. The search logic is reversed, and the limits of induction ("Glory of Science and Philosophy scandal", C. D. Broad, 1926) are to be considered.
Privacy advocates are concerned about the threat to privacy represented by increasing storage and integration of personally identifiable information; expert panels have released various policy recommendations to conform practice to expectations of privacy. The misuse of big data in several cases by media, companies, and even the government has eroded trust in almost every fundamental institution holding up society. Barocas and Nissenbaum argue that one way of protecting individual users is by being informed about the types of information being collected, with whom it is shared, under what constraints, and for what purposes.
Critiques of the "V" model
The "V" model of big data is problematic, as it centers around computational scalability and lacks grounding in the perceptibility and understandability of information.
This led to the framework of cognitive big data, which characterizes big data applications according to:
Data completeness: understanding of the non-obvious from data
Data correlation, causation, and predictability: causality not being an essential requirement to achieve predictability
Explainability and interpretability: humans desire to understand and accept what they understand, whereas algorithms do not cope with this
Level of automated decision-making: algorithms that support automated decision making and algorithmic self-learning
Critiques of novelty
Large data sets have been analyzed by computing machines for well over a century, including the US census analytics performed by IBM's punch-card machines, which computed statistics including the means and variances of populations across the whole continent. In more recent decades, science experiments such as CERN have produced data on similar scales to current commercial "big data". However, science experiments have tended to analyze their data using specialized custom-built high-performance computing (super-computing) clusters and grids, rather than clouds of cheap commodity computers as in the current commercial wave, implying a difference in both culture and technology stack.
Critiques of big data execution
Ulf-Dietrich Reips and Uwe Matzat wrote in 2014 that big data had become a "fad" in scientific research. Researcher Danah Boyd has raised concerns about the use of big data in science neglecting principles such as choosing a representative sample, out of excessive concern for handling the huge amounts of data. This approach may lead to results that are biased in one way or another. Integration across heterogeneous data resources—some that might be considered big data and others not—presents formidable logistical as well as analytical challenges, but many researchers argue that such integrations are likely to represent the most promising new frontiers in science. In the provocative article "Critical Questions for Big Data", the authors call big data a part of mythology: "large data sets offer a higher form of intelligence and knowledge [...], with the aura of truth, objectivity, and accuracy". Users of big data are often "lost in the sheer volume of numbers", and "working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth". Recent developments in the BI domain, such as pro-active reporting, especially target improvements in the usability of big data through automated filtering of non-useful data and correlations. Big structures are full of spurious correlations, whether because of non-causal coincidences (the law of truly large numbers), the nature of big randomness (Ramsey theory), or the existence of non-included factors, so the hope of early experimenters that large databases of numbers would "speak for themselves" and revolutionize the scientific method is questioned (a small simulation of this effect follows below).
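To make the spurious-correlation point concrete, the following synthetic Python sketch (illustrative only; statistics.correlation requires Python 3.10 or later) generates mutually independent random series and counts how many pairs nonetheless show a "strong" sample correlation purely by chance:

import random
import statistics

# 200 independent random "metrics" observed over 50 time steps.
# None are truly related, yet many pairs will correlate by chance.
random.seed(0)
n_series, n_obs = 200, 50
series = [[random.gauss(0, 1) for _ in range(n_obs)]
          for _ in range(n_series)]

# Count pairs whose sample Pearson correlation exceeds 0.3 in magnitude.
spurious = sum(
    1
    for i in range(n_series)
    for j in range(i + 1, n_series)
    if abs(statistics.correlation(series[i], series[j])) > 0.3
)

n_pairs = n_series * (n_series - 1) // 2
print(f"pairs tested        : {n_pairs}")
print(f"|r| > 0.3 by chance : {spurious}")

With 19,900 pairs tested, several hundred exceed the threshold despite there being no real relationship anywhere in the data, which is the multiple comparisons problem discussed below in miniature.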
Catherine Tucker has pointed to "hype" around big data, writing "By itself, big data is unlikely to be valuable." The article explains: "The many contexts where data is cheap relative to the cost of retaining talent to process it, suggests that processing skills are more important than data itself in creating value for a firm." Big data analysis is often shallow compared to analysis of smaller data sets. In many big data projects, there is no large data analysis happening; the challenge is the extract, transform, load part of data pre-processing.
Big data is a buzzword and a "vague term", but at the same time an "obsession" with entrepreneurs, consultants, scientists, and the media. Big data showcases such as Google Flu Trends failed to deliver good predictions in recent years, overstating the flu outbreaks by a factor of two. Similarly, Academy Awards and election predictions solely based on Twitter were more often off than on target. Big data often poses the same challenges as small data; adding more data does not solve problems of bias, but may emphasize other problems. In particular, data sources such as Twitter are not representative of the overall population, and results drawn from such sources may then lead to wrong conclusions. Google Translate—which is based on big data statistical analysis of text—does a good job at translating web pages. However, results from specialized domains may be dramatically skewed. On the other hand, big data may also introduce new problems, such as the multiple comparisons problem: simultaneously testing a large set of hypotheses is likely to produce many false results that mistakenly appear significant. Ioannidis argued that "most published research findings are false" due to essentially the same effect: when many scientific teams and researchers each perform many experiments (i.e. process a large amount of scientific data, although not with big data technology), the likelihood of a "significant" result being false grows fast – even more so when only positive results are published. Furthermore, big data analytics results are only as good as the model on which they are predicated. For example, big data was used in attempts to predict the results of the 2016 U.S. presidential election, with varying degrees of success.
Critiques of big data policing and surveillance
Big data has been used in policing and surveillance by institutions like law enforcement and corporations. Due to the less visible nature of data-based surveillance as compared to traditional methods of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing, big data policing can reproduce existing societal inequalities in three ways:
Placing people under increased surveillance by using the justification of a mathematical and therefore unbiased algorithm
Increasing the scope and number of people that are subject to law enforcement tracking and exacerbating existing racial overrepresentation in the criminal justice system
Encouraging members of society to abandon interactions with institutions that would create a digital trace, thus creating obstacles to social inclusion
If these potential problems are not corrected or regulated, the effects of big data policing may continue to shape societal hierarchies. Conscientious usage of big data policing could prevent individual-level biases from becoming institutional biases, Brayne also notes.
See also
References
Bibliography
Further reading
External links
Data management Business intelligence terms Social information processing Transaction processing Technology forecasting Data analysis Databases
Big data
Technology
11,123
534,839
https://en.wikipedia.org/wiki/Sunbeam
A sunbeam, in meteorological optics, is a beam of sunlight that appears to radiate from the position of the Sun. Shining through openings in clouds or between other objects such as mountains and buildings, these beams of particle-scattered sunlight are essentially parallel shafts separated by darker shadowed volumes. Their apparent convergence in the sky is a visual illusion from linear perspective. The same illusion causes the apparent convergence of parallel lines on a long straight road or hallway at a distant vanishing point. The scattering particles that make sunlight visible may be air molecules or particulates.
Crepuscular rays
Crepuscular rays or god rays are sunbeams that originate when the sun is just below the horizon, during twilight hours. Crepuscular rays are noticeable when the contrast between light and dark is most obvious. Crepuscular comes from the Latin word "crepusculum", meaning twilight. Crepuscular rays usually appear orange because the path through the atmosphere at sunrise and sunset passes through up to 40 times as much air as rays from a high midday sun. Particles in the air scatter short-wavelength light (blue and green) through Rayleigh scattering much more strongly than longer-wavelength yellow and red light. Loosely, the term "crepuscular rays" is sometimes extended to the general phenomenon of rays of sunlight that appear to converge at a point in the sky, irrespective of time of day.
Anticrepuscular rays
In some cases, sunbeams may extend across the sky and appear to converge at the antisolar point, the point on the celestial sphere opposite the Sun's direction. In this case, they are called antisolar rays (at any time not during astronomical night) or anticrepuscular rays (during the twilight period). This apparent dual convergence (at both the solar and the antisolar points) is a perspective effect analogous to the apparent dual convergence of the parallel lines of a long straight road or hallway at directly opposite points (to an observer above the ground).
Alternative names
Backstays of the sun, a nautical term, from the fact that backstays that brace the mast of a sailing ship converge in a similar way
Buddha rays
God rays, used by some members of the computer graphics industry
Jacob's Ladder
Light shafts, sometimes used in the computer graphics industry, such as in the game engine Unreal Engine
Ropes of Maui, originally taura a Maui, from the Maori tale of Maui Potiki restraining the sun with ropes to make the days longer
Sun drawing water, from the ancient Greek belief that sunbeams drew water into the sky (an early description of evaporation)
See also
References
External links
Sunrays – Crepuscular rays, Explanation & Images
Detailed description of how crepuscular rays occur
Atmospheric optical phenomena Sun
Sunbeam
Physics
569
60,432,887
https://en.wikipedia.org/wiki/Neue%20Anthropologie
Neue Anthropologie was a quarterly anthropology journal. It was published in Hamburg, West Germany, by the Society for Biological Anthropology, Eugenics and Behavioural Science, whose chairman, Jürgen Rieger, was also the journal's editor. It served as a platform for neo-Nazi psychological and anthropological pseudoscience, with a particular focus on scientific racism.
History
Neue Anthropologie was established in 1973. It followed several similar journals published by the Society for Biological Anthropology, Eugenics and Behavioural Science, the first of which, Erbe und Verantwortung, was established in 1964. The journal's first issue contained a tribute to Fritz Lenz, as well as an interview with Arthur Jensen that had previously been published in Nouvelle École. Jensen went on to contribute articles for the journal on a regular basis. In 1976, Neue Anthropologie published a bibliography of Jensen's work from 1967 until then; Jensen joined the journal's "board of scientific advisers" two years later. Other board members included Donald A. Swan, an anthropologist and Mankind Quarterly editor who had received grants from the Pioneer Fund, and Alain de Benoist, who also wrote for the journal under the pseudonym Fabrice Laroche.
Content
Neue Anthropologie published content supporting eugenics and scientific racism. This included articles focusing on race and intelligence, as well as polemics attacking "race-mixing". According to Michael Billig, the content published in Neue Anthropologie "...is racist and it is preserving the racial philosophy of Nazi theorist Hans Günther." In 1978–79, the journal asserted a need to sterilize people such as alcoholics, "who are often Haltlose psychopaths", to prevent them from bearing children and thereby reduce crime.
Links to neo-Nazism and fascism
The editor of Neue Anthropologie, Jürgen Rieger, was a prominent German fascist and member of the Northern League. Several other members of the journal's advisory board also had connections to neo-fascism and neo-Nazism, including Hans Georg Amsel and F.J. Irsigler. Ian Barnes noted that the journal's "...editorial board has links with The Northlander and one member belongs to the neo-Nazi Nationaldemokratische Partei Deutschlands (NPD) while others write for the neo-Nazi newspapers Deutsche Wochen-Zeitung and Deutsche Hochschullehrer Zeitung."
Connections to Mankind Quarterly
Neue Anthropologie was closely associated with Mankind Quarterly, of which it has been described as a sister journal. The two publications sometimes carried advertisements for each other, and published papers by many of the same authors, including some of the same articles. The close similarity and connections between the two journals have also led to Neue Anthropologie being described as one of two European "clones of Mankind Quarterly" to emerge in the late 1960s and early 1970s (the other being Nouvelle École).
References
Anthropology journals German-language journals Pseudoscience literature Academic journals established in 1973 Quarterly journals Race and intelligence controversy Scientific racism Neo-Nazi propaganda Eugenics
Neue Anthropologie
Biology
632
77,857,212
https://en.wikipedia.org/wiki/Hemantane
Hemantane, or hymantane, also known as N-(2-adamantyl)hexamethyleneimine, is an experimental antiparkinsonian agent of the adamantane family that was never marketed. It was developed and studied in Russia. It has been said to act as a low-affinity non-competitive NMDA receptor antagonist and as a selective MAO-B inhibitor, and to show various other actions and effects, such as modulation of the dopaminergic and serotonergic systems in the striatum. The drug has also been theorized to be a sigma receptor agonist, which is said to likely be involved in its dopaminergic effects. Analogues of hemantane, such as memantine and amantadine, share some of these actions, like NMDA receptor antagonism, sigma receptor agonism, and dopaminergic modulation. The drug was first described by 2000. The dosage of hemantane is standardized to a 50 mg tablet strength.
See also
Bromantane
Gludantan
List of Russian drugs
References
Abandoned drugs Adamantanes Antiparkinsonian agents Azepanes Drugs in the Soviet Union Drugs with unknown mechanisms of action Monoamine oxidase inhibitors NMDA receptor antagonists Russian drugs Russian inventions
Hemantane
Chemistry
269
24,205,238
https://en.wikipedia.org/wiki/Albertson%20conjecture
In combinatorial mathematics, the Albertson conjecture is an unproven relationship between the crossing number and the chromatic number of a graph. It is named after Michael O. Albertson, a professor at Smith College, who stated it as a conjecture in 2007; it is one of his many conjectures in graph coloring theory. The conjecture states that, among all graphs requiring n colors, the complete graph K_n is the one with the smallest crossing number. Equivalently, if a graph can be drawn with fewer crossings than K_n, then, according to the conjecture, it may be colored with fewer than n colors.
A conjectured formula for the minimum crossing number
It is straightforward to show that graphs with bounded crossing number have bounded chromatic number: one may assign distinct colors to the endpoints of all crossing edges and then 4-color the remaining planar graph. Albertson's conjecture replaces this qualitative relationship between crossing number and coloring by a more precise quantitative relationship. Specifically, a different conjecture, due to Richard K. Guy, states that the crossing number of the complete graph K_n is
cr(K_n) = (1/4) ⌊n/2⌋ ⌊(n−1)/2⌋ ⌊(n−2)/2⌋ ⌊(n−3)/2⌋.
It is known how to draw complete graphs with this many crossings, by placing the vertices in two concentric circles; what is unknown is whether there exists a better drawing with fewer crossings. Therefore, a strengthened formulation of the Albertson conjecture is that every n-chromatic graph has crossing number at least as large as the right-hand side of this formula. This strengthened conjecture would be true if and only if both Guy's conjecture and the Albertson conjecture are true.
Asymptotic bounds
A weaker form of the conjecture, proven by M. Schaefer, states that every graph with chromatic number n has crossing number Ω(n^4) (using big omega notation), or equivalently that every graph with crossing number k has chromatic number O(k^(1/4)). Albertson, Cranston, and Fox published a simple proof of these bounds, by combining the fact that every minimal n-chromatic graph has minimum degree at least n − 1 (because otherwise greedy coloring would use fewer colors) together with the crossing number inequality, according to which every graph with m edges and v vertices, where m ≥ 4v, has crossing number at least proportional to m^3/v^2. Using the same reasoning, they show that a counterexample to Albertson's conjecture for the chromatic number n (if it exists) must have fewer than 4n vertices.
Special cases
The Albertson conjecture is vacuously true for n ≤ 4. In these cases, K_n has crossing number zero, so the conjecture states only that the n-chromatic graphs have crossing number greater than or equal to zero, something that is true of all graphs. The case n = 5 of Albertson's conjecture is equivalent to the four color theorem, that any planar graph can be colored with four or fewer colors, for the only graphs requiring fewer crossings than the one crossing of K_5 are the planar graphs, and the conjecture implies that these should all be at most 4-chromatic. Through the efforts of several groups of authors the conjecture is now known to hold for all n ≤ 18. For every integer c ≥ 6, Luiz and Richter presented a family of (c + 1)-color-critical graphs that do not contain a subdivision of the complete graph K_(c+1) but have crossing number at least that of K_(c+1).
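As a numerical companion to Guy's conjectured formula above (this sketch is illustrative and not drawn from the cited sources), the following Python function evaluates the conjectured crossing number of K_n, which under the Albertson conjecture would be a lower bound on the crossing number of any n-chromatic graph:

def guy_crossing_number(n: int) -> int:
    # Guy's conjectured crossing number of the complete graph K_n:
    # cr(K_n) = (1/4) * floor(n/2) * floor((n-1)/2) * floor((n-2)/2) * floor((n-3)/2)
    # The product of the four floors is always divisible by 4,
    # so integer division at the end is exact.
    return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

# cr(K_n) is 0 for n <= 4 (the vacuous cases), then 1 for K_5,
# 3 for K_6, 9 for K_7, 18 for K_8, 36 for K_9, ...
for n in range(1, 13):
    print(n, guy_crossing_number(n))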
Related conjectures
There is also a connection to the Hadwiger conjecture, an important open problem in combinatorics concerning the relationship between chromatic number and the existence of large cliques as minors in a graph. A variant of the Hadwiger conjecture, stated by György Hajós, is that every n-chromatic graph contains a subdivision of K_n; if this were true, the Albertson conjecture would follow, because the crossing number of the whole graph is at least as large as the crossing number of any of its subdivisions. However, counterexamples to the Hajós conjecture are now known, so this connection does not provide an avenue for proof of the Albertson conjecture.
Notes
References
Topological graph theory Graph coloring Conjectures Unsolved problems in graph theory
Albertson conjecture
Mathematics
775
69,018,329
https://en.wikipedia.org/wiki/Cladostephaceae
Cladostephaceae is a family of brown algae belonging to the order Sphacelariales in the class Phaeophyceae. The family comprises a single genus: Cladostephus C.Agardh, 1817 References Brown algae Brown algae families
Cladostephaceae
Biology
55
24,287,539
https://en.wikipedia.org/wiki/Manufacturing%20Automation%20Protocol
Manufacturing Automation Protocol (MAP) was a computer network standard released in 1982 for interconnection of devices from multiple manufacturers. It was developed by General Motors to combat the proliferation of incompatible communications standards used by suppliers of automation products such as programmable controllers. By 1985, demonstrations of interoperability had been carried out and 21 vendors offered MAP products. In 1986, the Boeing corporation merged its Technical Office Protocol with the MAP standard, and the combined standard was referred to as "MAP/TOP". The standard was revised several times between the first issue in 1982 and MAP 3.0 in 1987, with significant technical changes that made interoperation between different revisions of the standard difficult. Although promoted and used by manufacturers such as General Motors, Boeing, and others, it lost market share to the contemporary Ethernet standard and was not widely adopted. Difficulties included changing protocol specifications, the expense of MAP interface links, and the speed penalty of a token-passing network (see the sketch below). The token bus network protocol used by MAP became standardized as IEEE standard 802.4, but the committee was disbanded in 2004 due to lack of industry attention.
References
Industrial automation Computer networks
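As a rough, back-of-the-envelope illustration of the token-passing speed penalty mentioned in the article above (all figures here are invented for illustration and are not taken from the IEEE 802.4 standard), a station on a token bus may have to wait for every other station to hold and pass the token before it can transmit again:

def worst_case_rotation_ms(stations: int,
                           token_pass_us: float = 50.0,
                           max_hold_us: float = 2000.0) -> float:
    # Worst-case token rotation time: every station holds the token
    # for its maximum time, then spends some time passing it on.
    return stations * (token_pass_us + max_hold_us) / 1000.0

for n in (10, 50, 100):
    print(f"{n:>3} stations -> up to {worst_case_rotation_ms(n):.1f} ms wait")

The worst-case wait grows linearly with the number of stations, whereas a lightly loaded Ethernet segment of the era could transmit almost immediately; the trade-off MAP made was deterministic (bounded) latency at the cost of average speed.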
Manufacturing Automation Protocol
Technology,Engineering
225
43,014,538
https://en.wikipedia.org/wiki/NanoSight
NanoSight Ltd is a company that designs and manufactures instruments for the scientific analysis of nanoparticles that are between approximately ten nanometers (nm) and one micron (μm) in diameter. The company was founded in 2003 by Bob Carr and John Knowles to further develop a technique Carr had invented to visualize nanoparticles suspended in liquid. The company has since developed the technique of Nanoparticle Tracking Analysis (NTA), and it produces a series of instruments to count, size and visualize nanoparticles in liquid suspension using this patented technology. NanoSight has 25 employees in the UK and has received several awards and recognitions. More than 450 instruments had been sold as of 2012. The technology has been cited in over 1300 scientific publications, presentations and reports. NanoSight was acquired by Malvern Instruments on 30 September 2013.
Product overview
NanoSight develops and produces instruments that visualize, characterize and measure small particles in suspension. Detected particles may be as small as 10 nm in diameter, depending on composition. NanoSight instruments can analyze particle size, concentration, aggregation, and zeta potential. An optional fluorescence mode, employing an optical filter, allows speciation of fluorescently labeled particles. Each instrument comprises a scientific camera, a microscope, and a sample viewing unit (LM12 or LM14). The viewing unit uses a laser diode to illuminate particles in liquid suspension that are held within, or advanced through, a flow chamber within the unit. The instrument is used in conjunction with a computer control unit that runs a custom-designed Nanoparticle Tracking Analysis (NTA) software package. NTA analyzes videos captured using the instrument, giving a particle size distribution and particle count based upon tracking of each particle's Brownian motion. Tracking is carried out for all particles in the laser scattering volume to produce a particle size distribution using the Stokes-Einstein equation, relating the Brownian motion of a particle to a sphere-equivalent hydrodynamic radius.
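The size calculation in NTA rests on the Stokes-Einstein relation. The following Python sketch illustrates the underlying physics only; it is not NanoSight's code, and the numerical values are assumed for the example (water at 25 °C, 30 frames per second):

from math import pi

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_diameter(msd_m2: float, dt_s: float,
                          temp_k: float = 298.15,
                          viscosity_pa_s: float = 0.00089) -> float:
    # For Brownian motion observed in two dimensions (the image plane),
    # MSD = 4 * D * dt, and Stokes-Einstein gives D = k_B*T / (3*pi*eta*d).
    diffusion = msd_m2 / (4.0 * dt_s)  # m^2/s
    return K_B * temp_k / (3.0 * pi * viscosity_pa_s * diffusion)

# Example: a tracked particle shows a mean squared displacement of
# 4.4e-13 m^2 between frames captured 1/30 s apart.
d = hydrodynamic_diameter(msd_m2=4.4e-13, dt_s=1 / 30)
print(f"hydrodynamic diameter = {d * 1e9:.0f} nm")  # about 149 nm

Faster-diffusing (smaller) particles jitter more between frames, so the measured displacement maps directly to size; this is why the technique needs only a camera and a known frame interval rather than any absolute intensity calibration.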
Instruments
Several instruments are currently available. General specifications:
Nanoparticle analysis range: typically 10–1000 nm, dependent on particle material
Particle type: any
Solvent: any non-corrosive solvent and water. A range of solvent-resistant seals are available.
Power requirement (own adapter supplied): 110–220 V
Laser output: various. 40 mW at 640 nm (Class 1 Laser Product) for the basic LM10 and LM20 models. Other lasers are available.
Viewing chamber volume requirements: 0.3 ml (most models) or 0.1 ml (NS500, although larger volumes must be loaded into the fluidics system if the sample is not directly injected)
LM10
NanoSight's LM10 instrument is based upon a conventional optical microscope fitted with a scientific camera (CCD, EMCCD or sCMOS) and either the LM12 or LM14 viewing unit. Using a laser light source with a wavelength of 405 nm (blue), 532 nm (green), or 638 nm (red), the particles in the sample are illuminated and the scattered light is captured by the camera and displayed on the connected personal computer running Nanoparticle Tracking Analysis (NTA) software. Using NTA, the particles are automatically tracked and sized. Results are displayed as a frequency size distribution graph and are exported in various, user-selected formats including spreadsheets and video files. Additionally, information-rich video clips may be captured and archived for future reference and alternative analyses. The LM10 is proven with most nanoparticle classes down to 10 nm (dependent upon particle density) dispersed in a wide range of solvents.
LM10-HS
The LM10-HS instrument is similar to the standard LM10 unit but has a higher-sensitivity sCMOS camera (EMCCD in earlier models). This allows smaller, lower refractive index particles to be analyzed. The LM10-HS is more commonly used for sizing biological samples including viruses and vaccines.
LM20
NanoSight's LM20 is, in essence, a 'boxed-up' LM10, designed and created for greater ease of use. Using the same standard LM12 viewing unit as the LM10, this instrument provides identical results to those obtained in analyses run on an LM10 system. Typically, the LM20 is used in more industrial applications, such as analyzing particles used in paints, pigments, cosmetics, and foodstuffs. The LM20 is ideal for users unfamiliar with using a microscope.
NS500
The NS500 incorporates multiple automated features, including computer-controlled peristaltic pumps and stage positioning, for reproducibility and ease of use. Through the interface of the NTA Software Suite, the fluidics system may be used to inject samples into a small viewing chamber, dilute samples to a specified degree, flush the system between samples, or clean and dry the viewing chamber. In contrast with earlier models, the NS500 does not require manual cleaning of the viewing chamber between each sample, thus increasing throughput. Optical stage positions may be set for optical and fluorescent readings, improving reproducibility. Sample temperature control is also programmable. The NS500 can be used for both static and flow measurements, using the additional syringe pump for the latter. A sample changer can provide enhanced throughput for static measurements, and, using scripts, high-throughput measurements under flow are available as well.
NS200
Like the LM20, but with a high-sensitivity camera like that of the NS500, the NS200 has a housing and is ideal for use in industrial settings, such as the manufacture of inks, paints, pigments, petrochemicals, and vaccines. Its configuration is designed for the study of small or otherwise weakly scattering nanoparticles, such as viruses, phage, liposomes and other drug delivery nanoparticles, and protein aggregates. It can be used in a non-laboratory environment by individuals unfamiliar with microscopes.
Applications
NanoSight instruments are used for a variety of applications, including:
Ceramic and metallic nanoparticles
Pigments, paints and sun creams
Exosomes, microvesicles, outer membrane vesicles, and other small biological particles
Pharmaceutical particles - liposomes
Viruses
Carbon nanotubes (multi-walled)
Colloidal suspensions and polymer particles
Cosmetics and foodstuffs
Particles in fuels and oils (soot, catalyst, wax etc.)
Wear debris in lubricants
Chemical mechanical polishing slurries
Nanotoxicology studies
In 2011, the European Union (EU) announced that companies that use nanoparticles in their products may be required to report the quantity and size of their nanomaterials. NanoSight suggests that its products will be important for companies seeking to satisfy the new requirements.
Recognition
Queen's Awards for Enterprise: Innovation (2013)
Queen's Award for Enterprise in International Trade (2012)
Technology World Business Innovation Award (2011)
Winner, Deloitte Technology Fast 50 (2011)
See also
Nanoparticle Tracking Analysis
Malvern Instruments
References
External links
New Scientist - on NanoSight
Malvern Instruments website
Companies based in Worcestershire Malvern, Worcestershire Microscopy organizations Nanotechnology companies Science and technology in Worcestershire Sub-micron microscopy
NanoSight
Chemistry,Materials_science
1,484
5,065,366
https://en.wikipedia.org/wiki/A%20Carinae
The Bayer designations A Carinae and a Carinae are distinct. Due to technical limitations, both designations link here. For the star A Carinae, see V415 Carinae. For the star a Carinae, see V357 Carinae.
See also
α Carinae
Carina (constellation)
A Carinae
Astronomy
60
63,442,121
https://en.wikipedia.org/wiki/Benjamin%20Franklin%20Drawing%20Electricity%20from%20the%20Sky
Benjamin Franklin Drawing Electricity from the Sky is a c. 1805 painting by Benjamin West in the Philadelphia Museum of Art. It depicts American Founding Father Benjamin Franklin conducting his kite experiment in 1752 to ascertain the electrical nature of lightning. West composed his work using oil on slate. The painting blends elements of both Neoclassicism and Romanticism. Franklin and West knew each other, which influenced the creation of this painting.
Background
West based his painting on a well-known experiment Franklin conducted in 1752. Franklin observed that lightning frequently destroyed homes by igniting those made of wood. Franklin was determined to prove the presence of electricity in lightning through an experiment. Franklin's experiment, in its initial conception, depended on the completion of Christ Church in Philadelphia, whose steeples would be sufficiently high as to attract a lightning strike. Franklin then conceived of an alternative experiment that involved flying a kite during a thunderstorm with a metal key attached to the string. Franklin conducted his experiment in private for several reasons, including its dangerous nature, and because he did not want to disappoint the scientific community if the experiment failed. He decided to experiment alongside his son William in a field. Franklin demonstrated that the clouds carried an electrical charge by bringing a finger near the metal key, producing a spark. His experiments led to the widespread adoption of lightning rods on tall buildings to draw electricity off the building and into the ground.
Description and interpretation
Painting
Franklin is pictured raising his hand into the stormy skies as the clouds open above him. A spark appears between the key and Franklin's hand. West depicts Franklin with white hair, as he is popularly remembered. He holds a scroll in his left hand and is wearing a red cloak that is blowing in the wind. To Franklin's right is a group of cherubs assisting him in his experiment by holding the kite string and observing him. West dresses one of these cherubs in traditional Native American attire. Cherubs were traditionally used in apotheosis paintings that deify humans. Directly to Franklin's right is another group of cherubs tinkering with a tool. Franklin's right arm, the cherub to his right, and the cherub to his left come together to form a triangle, the compositional foundation of the entire painting. The arrangement directs the viewer's eyes to Franklin's hand and the lightning key. West amplifies this effect by clearly defining the edges of the key and making the surrounding electricity more pronounced than the lightning in the distance. Franklin's head and gaze are fixed upwards, looking beyond the canvas towards the heavens. These elements suggest that Franklin calls on the forces of nature and the heavens in his experiment. West includes elements commonly used in Romantic paintings, such as religious motifs and sublime aesthetics. However, the cherubs and themes of masculine heroism are more characteristic of Neoclassical paintings. The blend of the two styles allows West to combine intelligibility with a sense of mystery.
Straying from the truth
The painting is an example of a history painting, but as West had done in the past with works such as his 1770 The Death of General Wolfe, he strays from the truth and embellishes many elements for added dramatic effect.
Franklin was in his forties and with his son when he conducted the experiment, but West paints him with white hair and wrinkles, as an elderly man. West adds cherubs and other dramatic elements to depict Franklin as a Prometheus-like figure who stands as an American hero of scientific discovery. Other works of art, such as Carl Rohl-Smith's Statue of Young Benjamin Franklin with Kite, provide a more accurate representation of Franklin at the time of his experiment, giving him a significantly younger appearance.
West's relationship with Franklin
West and Franklin initially met in London, then developed an amicable relationship as fellow Philadelphians. West was born in Swarthmore, Pennsylvania, and Franklin moved to Pennsylvania in his early adulthood. They became close enough that West asked Franklin to be godfather to his second son. West decided to make the painting to celebrate the achievements of his friend after Franklin's death. The painting was intended to be a study for a much larger painting that West planned to donate, in honor of his friend, to the Philadelphia Hospital, established by Franklin.
In modern culture
In 1956, the United States selected this painting to be featured on a memorial postage stamp commemorating the 250th anniversary of Benjamin Franklin's birth.
See also
Franklin's electrostatic machine
Lightning rod
References
1810s paintings Cultural depictions of Benjamin Franklin Lightning Paintings in the Philadelphia Museum of Art Science in art Paintings by Benjamin West Paintings of children
Benjamin Franklin Drawing Electricity from the Sky
Physics
935
40,766,961
https://en.wikipedia.org/wiki/Minimum%20control%20speeds
The minimum control speed (VMC) of a multi-engine aircraft (specifically an airplane) is a V-speed that specifies the calibrated airspeed below which directional or lateral control of the aircraft can no longer be maintained after the failure of one or more engines. The VMC only applies if at least one engine is still operative, and will depend on the stage of flight. Indeed, multiple VMCs have to be calculated for landing, air travel, and ground travel, and there are more still for aircraft with four or more engines. These are all included in the aircraft flight manual of every multi-engine aircraft. When design engineers are sizing an airplane's vertical tail and flight control surfaces, they have to take into account the effect this will have on the airplane's minimum control speeds. Minimum control speeds are typically established by flight tests as part of an aircraft certification process. They provide a guide to the pilot in the safe operation of the aircraft.
Physical description
When an engine on a multi-engine aircraft fails, the thrust distribution on the aircraft becomes asymmetrical, resulting in a yawing moment in the direction of the failed engine. A sideslip develops, causing the total drag of the aircraft to increase considerably, resulting in a drop in the aircraft's rate of climb. The rudder, and to a certain extent the ailerons via the use of bank angle, are the only aerodynamic controls available to the pilot to counteract the asymmetrical thrust yawing moment. The higher the speed of the aircraft, the easier it is to counteract the yawing moment using the aircraft's controls. The minimum control speed is the airspeed below which the force the rudder or ailerons can apply to the aircraft is not large enough to counteract the asymmetrical thrust at a maximum power setting. Above this speed it should be possible to maintain control of the aircraft and maintain straight flight with asymmetrical thrust. Loss of engine power on wing-mounted-propeller aircraft and blown-lift aircraft also affects the lift distribution over the wing, causing a roll toward the inoperative engine. In some aircraft, roll authority is more limiting than rudder authority in determining VMCs.
Certification and variants
Aviation regulations (such as FAR and EASA) define several different VMCs and require design engineers to size the vertical tail and the aerodynamic flight controls of the aircraft to comply with these regulations. The minimum control speed in the air (VMCA) is the most important minimum control speed of a multi-engine aircraft, which is why VMCA is simply listed as VMC in many aviation regulations and aircraft flight manuals. On the airspeed indicator of a twin-engine aircraft of less than 6000 lbs (2722 kg), the VMCA is indicated by a red radial line, as standardised by FAR 23. Most test pilot schools use multiple, more specific minimum control speeds, as VMC will change depending on the stage of flight. Other defined VMCs include minimum control speed on the ground (VMCG) and minimum control speed during approach and landing (VMCL). In addition, for aircraft with four or more engines, VMCs exist for cases with either one or two engines inoperative on the same wing. These VMCs are defined in the relevant civil aviation regulations and in military specifications.
Minimum control speed when airborne
The vertical tail or vertical stabilizer of a multi-engine aircraft plays a crucial role in maintaining directional control when an engine fails or is inoperative.
The larger the tail, the more capable it will be of providing the required force to counteract the asymmetrical thrust yawing moment. This means that the smaller the tail is, the higher the VMCA will be. However, a larger tail is more costly and harder to accommodate, and comes with other aerodynamic issues, such as increased prevalence of slipstream effects. Engineers designing the vertical tail must make a decision based on, amongst other factors, their budget, the weight of the aircraft, and the maximum bank angle of 5° (away from the inoperative engine), as stated by FAR. VMCA is also used to calculate the minimum takeoff safety speed. A high VMCA therefore results in higher takeoff speeds, and so longer runways are required, which is undesirable for airport operators.
Factors influencing minimum control speed
Any factor that has influence on the balance of forces and on the yawing and rolling moments after engine failure might also affect VMCs. When the vertical tail is designed and the VMCA is measured, the worst-case scenario for all factors is taken into account. This ensures that the VMCs published in the AFMs are guaranteed to be safe. Heavier aircraft are more stable and more resistant to yawing moments, and therefore have lower VMCAs. The longitudinal centre of gravity affects the VMCA as well: the further from the tail it is, the lower the minimum control speed, because the rudder will be able to provide a larger yawing moment, and so it is easier to counteract the imbalance in thrust. The lateral centre of gravity also has an effect: the nearer the inoperative engine it is, the larger the moment of the working engine, and so the more force the rudder has to apply. This means that if the lateral centre of gravity shifts towards the inoperative engine, the aircraft's VMCA will increase. The thrust of most engines depends on altitude and temperature; increasing altitude and higher temperatures decrease thrust. This means that if the air temperature is higher and the aircraft is at a higher altitude, the force of the operative engine will be lower, the rudder will have to provide less counteractive force, and so the VMCA will be lower. The bank angle also influences the minimum control speed: a small bank angle away from the inoperative engine is required for the smallest possible sideslip and therefore a lower VMCA. Finally, if the P-factor of the working engine increases, then its yawing moment increases, and the aircraft's VMCA increases as a result.
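As a rough illustration of the moment balance behind these factors, the following Python sketch equates the yawing moment of the asymmetric thrust with the maximum yawing moment the vertical tail can generate, and solves for the speed at which they balance. All numbers are invented for illustration; real VMCA values are established by flight test, and effects such as bank angle, sideslip, propeller drag and P-factor are ignored here:

from math import sqrt

def vmca_estimate(thrust_n: float, engine_arm_m: float,
                  tail_area_m2: float, tail_arm_m: float,
                  cy_max: float, rho: float = 1.225) -> float:
    # Speed (m/s) at which the maximum fin/rudder side force can just
    # balance the asymmetric thrust yawing moment:
    #   0.5 * rho * V^2 * S_v * Cy_max * l_v = T * y_e
    # Below this speed the tail cannot generate enough moment.
    return sqrt(2.0 * thrust_n * engine_arm_m /
                (rho * tail_area_m2 * cy_max * tail_arm_m))

# Invented numbers for a light twin: 3.6 kN of thrust acting 2.5 m from
# the centerline, a 4 m^2 fin 4.5 m behind the centre of gravity, and a
# maximum tail side-force coefficient of 0.8 at full rudder.
v = vmca_estimate(thrust_n=3600, engine_arm_m=2.5,
                  tail_area_m2=4.0, tail_arm_m=4.5, cy_max=0.8)
print(f"estimated minimum control speed = {v:.1f} m/s ({v * 1.944:.0f} kt)")

Because the balancing speed scales with the square root of thrust times engine arm, and inversely with the square root of tail area times tail arm, the sketch makes visible why a larger or longer-armed tail lowers VMCA and why reduced thrust at altitude does the same.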
Other minimum control speeds
Aircraft with more engines
Aircraft with four or more engines have not only a VMCA (often called VMCA1 under these circumstances), where the critical engine alone is inoperative, but also a VMCA2 that applies when the engine inboard of the critical engine, on the same wing, is also inoperative. Civil aviation regulations (FAR, CS and equivalent) no longer require a VMCA2 to be determined, although it is still required for military aircraft with four or more engines. On turbojet and turbofan aircraft, the outboard engines are usually equally critical. Three-engine aircraft such as the MD-11 and BN-2 Trislander do not have a VMCA2; a failed centerline engine has no effect on VMC. When two opposing engines of an aircraft with four or more engines are inoperative, there is no thrust asymmetry; hence there is no rudder requirement for maintaining steady straight flight, and VMCAs play no role. There may be less power available to maintain flight overall, but the minimum safe control speeds remain the same as they would be for an aircraft being flown at 50% thrust on all four engines. Failure of a single inboard engine, from a set of four, has a much smaller effect on controllability. This is because an inboard engine is closer to the aircraft's centerline, so the asymmetric thrust yawing moment is smaller. In this situation, if speed is maintained at or above the published VMCA, as determined for the critical engine, safe control can be maintained.
Ground
If an engine fails during taxiing or takeoff, the thrust yawing moment will force the aircraft to one side on the runway. If the airspeed is not high enough, and hence the rudder-generated side force not powerful enough, the aircraft will deviate from the runway centerline and may even veer off the runway. The airspeed at which the aircraft, after engine failure, deviates no more than 9.1 m (30 ft) from the runway centerline, using maximum rudder but without the use of nose wheel steering, is the minimum control speed on the ground (VMCG).
Approach and landing
The minimum control speed during approach and landing (VMCL) is similar to VMCA, but the aircraft configuration is the landing configuration. VMCL is defined for both part 23 (FAR 23.149(c)) and part 25 aircraft in civil aviation regulations. However, when maximum thrust is selected for a go-around, the flaps will be selected up from the landing position, and VMCL no longer applies, but VMCA does.
Safe single-engine speed
Due to the inherent risks of operating at or close to VMCA with asymmetric thrust, and the desire to simulate and practice these manoeuvres in pilot training and certification, a VSSE may be defined. VSSE, the safe single-engine speed, is the minimum speed at which to intentionally render the critical engine inoperative; it is established and designated by the manufacturer as the safe, intentional, one-engine-inoperative speed. This speed is selected to reduce the accident potential from loss of control due to simulated engine failures at inordinately slow airspeeds.
References
Airspeed Aerodynamics Aviation safety
Minimum control speeds
Physics,Chemistry,Engineering
1,911
37,237,712
https://en.wikipedia.org/wiki/Pancharatha
A Hindu temple is a pancharatha when there are five rathas (on plan) or pagas (on elevation) on the tower of the temple (generally a shikhara). The rathas are vertical offset projections or facets. The name comes from the Sanskrit pancha ("five") and ratha ("chariot"), but the link with the concept of a chariot is not clear. There are also temples with three rathas (triratha), seven rathas (saptaratha) and nine rathas (navaratha).
Examples of pancharatha temples
Lingaraja Temple in Bhubaneswar
Lakshmana Temple in Khajuraho
Rajarani Temple in Bhubaneswar
Jagannath Temple in Puri, Odisha
Jagannath Temple in Baripada, Odisha
Jagannath Temple in Nayagarh, Odisha
Isanesvara Siva Temple in Bhubaneswar
Mukteswar Temple in Bhubaneswar
Brahmani temple in Baleswar, Odisha
Notes
See also
Ratha (architecture)
Architectural elements Hindu temple architecture
Pancharatha
Technology,Engineering
229
41,265
https://en.wikipedia.org/wiki/Insertion%20gain
In telecommunications, insertion gain is the gain resulting from the insertion of a device in a transmission line, expressed as the ratio of the signal power delivered to that part of the line following the device to the signal power delivered to that same part before insertion. Gains less than unity indicate insertion loss. The incident power divides into two parts: the power reflected from the device and the power absorbed by the device. Insertion gain is usually expressed in decibels (see the worked example below).
References
Telecommunications engineering
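As a worked illustration of the definition above (the power figures are invented), converting the ratio to decibels uses ten times the base-10 logarithm of the power ratio:

from math import log10

def insertion_gain_db(p_after_w: float, p_before_w: float) -> float:
    # Insertion gain in dB: 10 * log10(P_after / P_before).
    # Negative values indicate insertion loss.
    return 10.0 * log10(p_after_w / p_before_w)

# Hypothetical example: 2.0 mW reaches the load before the device is
# inserted, 1.6 mW afterwards -> about -1.0 dB, i.e. an insertion loss.
print(f"{insertion_gain_db(1.6e-3, 2.0e-3):.2f} dB")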
Insertion gain
Engineering
94
1,651,213
https://en.wikipedia.org/wiki/Henk%20Barendregt
Hendrik Pieter (Henk) Barendregt (born 18 December 1947, Amsterdam) is a Dutch logician, known for his work in lambda calculus and type theory.
Life and work
Barendregt studied mathematical logic at Utrecht University, obtaining his master's degree in 1968 and his PhD in 1971, both cum laude, under Dirk van Dalen and Georg Kreisel. After a postdoctoral position at Stanford University, he taught at Utrecht University. Since 1986, Barendregt has taught at Radboud University Nijmegen, where he now holds the Chair of Foundations of Mathematics and Computer Science. His research group works on Constructive Interactive Mathematics. He is also adjunct professor at Carnegie Mellon University, Pittsburgh, US. He has been a visiting scholar at Darmstadt, ETH Zürich, Siena, and Kyoto. Barendregt was elected a member of Academia Europaea in 1992. In 1997 Barendregt was elected a member of the Royal Netherlands Academy of Arts and Sciences. On 6 February 2003 Barendregt was awarded the Spinozapremie for 2002, the highest scientific award in the Netherlands. In 2002 he was knighted in the Orde van de Nederlandse Leeuw. Barendregt received an honorary doctorate from Heriot-Watt University in 2015.
Selected publications
References
External links
Barendregt's homepage
Author profile in the database zbMATH
1947 births Living people Dutch computer scientists Mathematical logicians Members of Academia Europaea Members of the Royal Netherlands Academy of Arts and Sciences Academic staff of Radboud University Nijmegen Spinoza Prize winners Utrecht University alumni Scientists from Amsterdam Academic staff of Technische Universität Darmstadt
Henk Barendregt
Mathematics
346
23,742,018
https://en.wikipedia.org/wiki/Mooring%20mast
A mooring mast, or mooring tower, is a structure designed to allow for the docking of an airship outside of an airship hangar or similar structure. More specifically, a mooring mast is a mast or tower with a fitting on its top that allows the bow of the airship to be attached to the structure by its mooring line. When it is not necessary or convenient to put an airship into its hangar (or shed) between flights, airships can be moored on the surface of land or water, in the air to one or more wires, or to a mooring mast. After their development, mooring masts became the standard approach to mooring airships, as considerable manhandling was avoided.
Mast types
Airship mooring masts can be broadly divided into fixed high masts and fixed or mobile low (or 'stub') masts. In the 1920s and 1930s, masts were built in many countries. At least two were mounted on ships. Without doubt the tallest mooring mast ever designed was the spire of the Empire State Building, which was originally constructed to serve as a mooring mast but was soon after converted for use as a television and radio transmitter tower, once mooring an airship, for any length of time, to a very tall mast in the middle of an urban area was found to be infeasible. Another unique example may be found in Birmingham, Alabama, atop the former Thomas Jefferson Hotel. Now known as Thomas Jefferson Tower, the mast has recently been restored to its original appearance. It was originally erected in 1929 as a way for the hotel to capitalize on the futuristic public image of airships inspired by the success of Graf Zeppelin. However, the tower itself was never intended to be used and would likely not have withstood the stresses involved. The mooring mast atop the Rand Building in downtown Buffalo was similar, with the mast designed to attract what was then a popular means of air traffic. However, records from the local Courier Express and Buffalo Evening News have no reference to a zeppelin using this particular mast.
Leonardo Torres Quevedo
The structure known as the 'mooring mast' was invented over 100 years ago, when a solution was needed for ground-handling problems that resulted in many airships crashing, being deflated or being significantly damaged. Prior to the mooring mast, major problems with mooring arose because on-land sheds could not handle adverse weather conditions, meaning that many airships were damaged when landing. In their book, González-Redondo & Camplin discuss the first attempted solution that partially overcame the ground-handling difficulties: a shed that floated on water, which could turn freely and automatically align its long axis with the direction of the wind. This meant that airships could now land regardless of air currents. Airship design was then altered to allow blimps to float on water so that they could be placed into their water hangars more easily, but the lack of any method of open mooring meant that airships still suffered from engine failure, damage due to changes in weather conditions (storms) and the operational difficulties involved in such anchoring systems. Multiple airships were lost under such circumstances. In an attempt to avoid the various issues with loading airships into their sheds and to prevent further accidents, engineers would often deflate their blimps, accepting the financial loss and time cost of any damage caused by dismantling them in the open.
To resolve the slew of problems airship engineers faced in docking dirigibles, Spanish engineer and inventor Leonardo Torres Quevedo drew up designs for a ‘docking station’ and made alterations to airship designs. In 1910, Torres Quevedo proposed the idea of attaching an airship's nose to a mooring mast and allowing the airship to weathervane with changes of wind direction. The use of a metal column erected on the ground, to the top of which the bow or stern would be directly attached (by a cable), would allow a dirigible to be moored at any time, in the open, regardless of wind speeds. Additionally, Torres Quevedo's design called for the improvement and accessibility of temporary landing sites, where airships were to be moored for the purpose of disembarkation of passengers. The final patent was presented in February 1911, and Torres Quevedo stated the claims regarding the nature of his invention as follows: 1) '[The mooring mast] comprised a metal column erected on the ground to the top of which the bow or stern of the airship was directly attached by a cable; 2) [The airship] moors at the top of the metal column on a pivoted platform so that it can turn in a circle about the column and remain end-on to the wind; 3) The “crab” or winch receiving the end of a cable attached to the airship and the point of attachment on the airship; and 4) The cone connected to the upper end of the column, conforming to the end of the airship.' Early enhancements Mooring an airship by the nose to the top of a mast or tower of some kind might appear to be an obvious solution, but dirigibles had been flying for some years before the mooring mast made its appearance. The first airship known to have been moored to a mast was HMA (His Majesty's Airship) No.1, named the ‘Mayfly’, on 22 May 1911. The mast was mounted on a pontoon, and a windbreak of cross-yards with strips of canvas was attached to it. However, the windbreak caused the ship to yaw badly, and she became more stable when it was removed, withstanding winds gusting up to . Further experiments in mooring airships to cable-stayed lattice masts were carried out during 1918. Impact of minor developments following Leonardo Torres Quevedo's design Mooring mast technology following Leonardo Torres Quevedo's design became widely utilised in the 20th century, as it allowed unprecedented accessibility to dirigibles, avoiding the manhandling that was necessary when an airship was placed into its hangar. Due to his inventions, mooring masts were designed to allow airships to be docked on ships, on land and even atop buildings, all while withstanding gusts and adverse weather conditions. Such versatility meant that mooring masts became the standard approach to docking dirigibles, as airships could now operate from mobile masts for long periods of time without returning to their hangars. Developments in these mooring technologies allowed for further advancement of airship technology in the 20th century. After the advent of Torres Quevedo's revolutionary rotating mooring tip, the mooring mast structure was continually improved and developed over the following decade. High and low masts were experimented with by French, English, American and German engineers in order to determine which technique was the most effective in terms of stability, cost, ground handling and the ability to allow airships to weathervane and so minimise outdoor-related damage. 
The procedures for mooring to either a low or high mast were the same, with the airship approaching the protected side of the mast at the same height. The nose winch was then attached, and the dirigible was fixed into the rotating mast tip, free to move with the wind. Low masts required a number of ground crew members to constantly attend to the changing directions of the wind as they attempted to re-inflate and repair the airships. In order to reduce the large number of men required to bring the airships in and out of their hangars and on and off their masts, a number of additions were made to Torres Quevedo's traditional design. Examples of these include cradles and lattices that were attached to mobile mooring structures to further limit the amount of yawing and pitching, and pyramidal towing masts known as “iron horses” that were able to extend the height of the original mast structure. Between 1900 and 1939, ground-handling methods for rigid airships were continually developed. Divided into three main systems (the German, the British and the American), these procedural techniques each had major advantages and disadvantages. The British system (as discussed in Gabriel Khoury's Airship Technology) is most similar to Torres Quevedo's design, which is unsurprising given that his patent was the main influence on British engineers concerned with mooring dirigibles at the time. All three major rigid-airship ground-handling systems are extensively discussed in his book. British high mast operations The British mooring mast was developed more or less to its final form at RNAS Pulham and Cardington in 1919–21 to make the landing process easier and more economical of manpower. The following account of the British high mast in its fully developed state at Cardington, and its operation, is abridged from Masefield. Mooring masts were developed to act as a safe open harbour to which airships could be moored or unmoored in any weather, and at which they could receive (hydrogen or helium) gas, fuel, stores and payload. The Cardington mast, completed in 1926, was an eight-sided steel girder structure, high, tapering from diameter at ground level to at the passenger platform, from the ground. Above the passenger platform was the of the conical housing for the mooring gear. A lower platform above the ground accommodated searchlights and signalling gear in a gallery wide. The top platform, at the height of , from which passengers embarked and disembarked to and from the airships, was in diameter and encircled by a heavy parapet. The top rail of the parapet formed a track on which a gangway, let down from the airship, ran on wheels to give freedom for the airship to move around the tower as it swung with the wind. An electric passenger lift ran up the centre of the tower, encircled by a stairway to provide foot access. The upper portion of the tower, from the passenger platform upwards, was a circular steel turret surmounted by a truncated cone with its top above the passenger platform. A three-part telescopic arm, mounted on gimbals, projected through an opening at the top, free to swing from the vertical in any direction by up to 30 degrees. The top of the arm consisted of a bell-shaped cup mounted to rotate on ball bearings. A cable extended through the bell-mouth which, linked to a cable dropped from the airship to be moored, enabled the nose of the airship to be drawn down until a cone on the nose locked home into the cup and so secured the airship to the tower. 
The telescopic arm was then centred, locked in the vertical position, and made free to rotate on a vertical axis so the airship could swing, nose to tower, in any direction of the wind. In the machinery house at the base of the tower, three steam-driven winches operated the hauling gear through drums in diameter to give cable hauling speeds of 50 feet a minute. While an airship approached the mast slowly against the wind, a mooring cable was let out from the nose to the ground and linked, by a ground party, to the end of the mooring cable paid out from the mast head. The cable was then slowly wound in with the airship riding about above the mast and down wind, with one engine running astern to maintain a pull on the cable. At this point, two side wires – or ‘yaw guys’ – were also connected to cables taken from the nose of the airship to pulley blocks some hundreds of feet apart on the ground and thence to winches at the base of the mast. All three cables were then wound in together, the main pull being taken on the mooring cable while the yaw guys steadied the ship. When all the cable had been wound in, an articulated mooring cone on the nose of the airship locked home into the cup on the mast. The mast fitting was made free to rotate as the airship swung with the wind, with freedom also for pitch and roll. A gangway, like a drawbridge, which could be drawn up flush with the nose of the airship, was then let down with its free end resting on the parapet of the platform running round the mast. Passengers and crew boarded and disembarked from the ship under cover along this gangway. About twelve men were needed to moor an airship to a mast. Four high masts of the Cardington type were built along the proposed British Empire Airship Service routes, at Cardington itself, at Montreal (Canada), Ismailia (Egypt) and Karachi (then India, now in Pakistan). None of these survive. Similar masts were proposed at sites in Australia, Ceylon (now Sri Lanka), Bombay, Keeling Islands, Kenya, Malta, at Ohakea in New Zealand, and in South Africa. The general site specifications can be found in documents produced by the British government. German mast techniques German mooring methods differed significantly from those adopted by the British. To quote Pugsley (1981): "the Germans, originally for ease of transport and for economy, developed a system using much lower masts. The nose of the ship was tethered as before to the mast head, which was only a little higher than the semi-diameter of the ship's hull. The lower fin at the stern was then fixed to a heavy carriage running on a circular railway track around the mast, and this carriage was powered so as to be able to move around the track to keep the ship head on to the wind. In the most sophisticated form, used by the Hindenburg, the rail system was linked to rails running from the mast straight into the airship shed, and the mast was powered so that the ship could be moved mechanically into the shed, complete with mast and stern carriage". The following account of landing the German airship Graf Zeppelin is abridged from Dick and Robinson (1985): Before attempting a landing, contact was made by radio or flag signals with the ground crew to determine the ground temperature and wind conditions. For a normal calm-weather landing the ship was trimmed very slightly nose down, as this gave a better gliding angle and the ship almost flew herself down. A smoky fire was started on the ground to show the wind direction. 
The ship then made a long approach with a rate of fall of 100 feet per minute, and the lines were dropped when she was over the landing flag. When conditions were unusual, as in gusty and bumpy weather, the Graf was weighed off a little light, and the approach had to be fast and preferably long and low. When the airship was over the field the engines had to be reversed for some time to stop her, and this also helped to get the nose down. Yaw lines dropped from the ship's nose were drawn out to port and starboard by thirty men each, while twenty more on each side pulled the ship down with spider lines (so called because twenty short lines radiated like the legs of a spider from a block). When the airship reached the ground, fifty men held the control car rails and twenty held those of the after car. With thirty men in reserve, the ground crew totalled two hundred men. The ground crew would then walk the Graf to a short, or ‘stub’, mast, to which the nose of the airship would be attached. The airship would then rest on the ground with its rear gondola attached to a movable weighted carriage that enabled the airship to swing around the mast with the wind. In some places the stub mast was mounted on rails and could be drawn into the airship hangar, guiding the nose of the ship while the tail was controlled by the carriage attached to the rear gondola. Airships designed for landing on the ground had pneumatic bumper bags or undercarriage wheels under the main and rear gondolas (or tail fin). Dick states that the Germans never used mooring masts until the Graf Zeppelin entered service in 1928, and never moored to high masts. To some extent this probably reflects the conservatism of the Zeppelin company operations. Long experience in handling airships in all sorts of conditions was valued, and innovations or significant changes in practice were unlikely to be adopted unless clear advantages were apparent. United States In the US a mix of techniques was applied, and airships moored to both high and stub masts. Large ground crews (or ‘landing parties’) of up to 340 men were required to manage the large airships at landing or on the ground, before they could be attached to the stub mast. Being part of a ground crew was not risk-free. In gusty conditions, or if mis-handled, an airship could suddenly rise. If the ground crew did not immediately let go of the handling lines they risked being carried off their feet. In one famous incident captured on movie film in 1932, during the landing of the US airship Akron, three men were carried off their feet in this way, two falling to their deaths after a short time. The third managed to improve his hold on the handling rope until he could be hauled into the airship. Ship-mounted mooring masts At least two ships have mounted mooring masts. As the US intended to use large airships for long-range maritime patrol operations, experiments were made in mooring airships to a mast mounted on the oiler USS Patoka. Over time the airships USS Shenandoah, USS Los Angeles, and USS Akron all moored to the mast mounted at the stern of the ship, and operated using her as a base for resupply, refuelling, and gassing. The Spanish seaplane carrier Dédalo (1922–1935) carried a mooring mast at the bow to cater for small dirigibles carried on board. Around 1925, the Royal Navy considered the monitor HMS Roberts for conversion to a mobile airship base with a mooring mast and fueling capabilities, but nothing came of this proposal. 
Utilisation of mooring mast technology By 1912, dirigibles were widely acknowledged as the future of air travel, and their flexibility as both civilian transporters and military vehicles meant that continual advancements were made to both airships and their mooring masts. The mooring mast, or “open mooring”, allowed airships to accompany armies in their manoeuvres through a safe, quick and relatively inexpensive ‘universal’ docking system that worked well for all types and sizes of airships, whether non-rigid, semi-rigid or rigid, and which could withstand meteorological events. After airships' involvement in WWI as passenger carriers, aerial reconnaissance vessels and long-distance bombers, military authorities lost interest in them. However, considerable advancements in the construction and operation of both dirigibles themselves and mooring technologies meant that airships were soon developed by civilian companies and other government departments. In 1929, the Empire State Building was proclaimed to be the tallest building in the world, topped with a dirigible mooring mast that could ‘accommodate passengers for the already existing transatlantic routes, and for the routes planned to South America, the West Coast, and across the Pacific’ (Tauranac). The mooring mast was installed to provide unprecedented, accessible air travel atop one of the world's most recognisable landmarks; New York would thereby become the epicentre of modern aerospace technology in the United States. However, an obvious drawback of this mooring site was the lack of adequate terminal facilities, with passengers expected to walk down a plank extended from the airship to the platform on the 102nd floor. It was proposed that surrounding buildings be knocked down to allow for a “sky terminal”, but the costs were far too great, so the idea was dropped. John Tauranac discusses how only one dirigible ever made contact with the Empire State Building, in 1931, and it was “brief at best”: ‘A privately-owned dirigible, fitted with a long rope, was in mooring position for half an hour, until the ground crew could catch the rope... fastened atop the mooring mast for three minutes whilst the crew hung on for dear life... the traffic halted below... the dirigible never made permanent contact with the building.’ Today, modern technology has seen rapid advancement in mooring mast systems, despite the neglect of 20th-century dirigibles, which are now often seen as ancient technologies of a long-forgotten past. Indoor and outdoor airships, used predominantly at sports games and in advertising, require a modern mooring mast design, fitted with superior measuring devices that can provide an aural warning to the ground crew to move the airship and the mast inside when the atmospheric conditions are not suitable for outdoor storage. Other operations, such as mounting cameras, mounting and demounting fins, and any repairs, are now made far safer with indoor mooring masts, utilised inside hangars to further promote freedom of yawing, pitching, rolling and height adjustment for moored airships. Modern-day mooring masts Smaller mobile masts have been used for small airships and blimps for a long time. They may be wheel- or track-mounted, and can be operated by a small crew. The general operating principle is broadly similar to that of the larger masts. Modern blimps may operate from mobile masts for months at a time without returning to their hangars. 
Developments in aerodynamics and structural design, as well as greater access to more advanced materials, have allowed airship technologies to become much more sophisticated over the past 30 years. The construction of durable engines has meant that blimps can now fly for considerable periods of time, completely autonomously, without a pilot or crew. However, these new innovations have also led to the disuse of mooring masts, as the addition of air-cushioned landing systems means dirigibles can be landed almost anywhere without a ground crew or mooring mast: “the onboard computer tells the aircraft what to do and it does it” (Peter DeRobertis). Since the Hindenburg disaster of 1937, whose tragic docking remains an icon of aerospace gone wrong, modern-day airships are now designed as hybrids of lighter-than-air and fixed-wing aircraft. At a fraction of the cost and fuel of regular aircraft, modern dirigibles can carry enormous payloads without requiring the vast amounts of tarmac necessary for conventional air travel. Despite the disuse of the commercial airships popular in the early 20th century, the notion that airships represent the future of air cargo is being revived by a new generation of entrepreneurs. Modern-day mooring masts are still being developed for use with indoor and outdoor dirigibles. Used mainly at sports games and in advertising, a mooring mast secures these airships and keeps them safe in storage. As they traditionally occupy large amounts of space, many engineers now design mooring masts as easily foldable and portable stands with long legs for adequate ground stability. Such mechanisms employ spring-activated quick-response rods, with special design emphasis placed on kinematic elements to ensure the masts are not placed under great stress from the weight of the airships. A modern-day solution to the once-large mooring towers is portable and foldable masts, which guarantee that indoor and outdoor blimps and their masts do not consume a great amount of space. For outdoor airships, spring-loaded devices are incorporated, fitted with alarms which notify ground crews and operators when wind speeds exceed a safe threshold, so that the airship can be taken in and stored indoors. For mobility, the masts are mounted on a foldable leg base which can be wheeled around. For indoor dirigibles, lightweight masts are just as stable and portable, accommodating non-rigid blimps up to five metres long and successfully limiting yawing, pitching and rolling. Image gallery References External links Although principally focused on HM airship R101, this YouTube clip shows the R101 releasing from and landing at the Cardington mast. The first part of this YouTube movie clip shows the USS Macon being walked out of its shed on a mobile stub mast, and releasing from the mast for flight. This clip shows the USS Akron being moved out of its shed on a mobile stub mast, and illustrates some of the difficulties of handling large airships on the ground. Clip showing two modern mobile mast trucks and something of their operation, and glimpses of a US stub mast and the British R101 at the Cardington mast. Airship technology Towers
Mooring mast
Engineering
5,038
43,092,225
https://en.wikipedia.org/wiki/Methyl-%CE%B1-D-galactose
Methyl-α-D-galactose is a constituent of Eleutherococcus senticosus. References Galactose Monosaccharide derivatives
Methyl-α-D-galactose
Chemistry
51
74,663,059
https://en.wikipedia.org/wiki/Spherics
Spherics (sometimes spelled sphaerics or sphaerica) is a term used in the history of mathematics for historical works on spherical geometry, exemplified by the Spherics, a treatise by the Hellenistic mathematician Theodosius (2nd or early 1st century BC), and another treatise of the same title by Menelaus of Alexandria. References Spherical geometry Classical geometry Spherical astronomy
Spherics
Mathematics
83
14,560,627
https://en.wikipedia.org/wiki/Quanta%20Services
Quanta Services is an American corporation that provides infrastructure services for the electric power, pipeline, industrial and communications industries. Capabilities include the planning, design, installation, program management, maintenance and repair of most types of network infrastructure. In June 2009, Quanta Services was added to the S&P 500 index, replacing Ingersoll-Rand. Quanta Services employs about 40,000 people. Its operating companies achieved combined revenues of about $11 billion in 2018. It is headquartered in Houston, Texas. In 1998, Quanta went public on the New York Stock Exchange under the ticker symbol PWR. History Colson and PAR In 1997, John R. Colson founded Quanta Services; he later served as its executive chairman. After earning a degree in geology from the University of Missouri at Kansas City, Colson entered the military and served one year in Vietnam. He was discharged from the Army in 1971 and returned to Kansas City, taking temporary employment at PAR Electrical Contractors Inc., which built high-voltage transmission lines, distribution lines and substations, and provided other electric utility infrastructure services. Colson's initial job was to carry stakes for a survey team. Within three years, he was promoted to manager of engineering services. After six years, he had worked his way up to vice-president of operations. After becoming executive vice-president and general manager in the early 1980s, he bought the company and became its president in 1991. Initial formation In the 1990s, the electrical contracting business was highly fragmented, populated by more than 50,000 companies. The vast majority were small, owner-operated enterprises. Deregulation in the electric utility industries in a number of states prompted utilities to become more cost-competitive, leading to the outsourcing of infrastructure work to contractors who could do the job more efficiently. Much of the transmission and distribution infrastructure in the United States was aging and in need of repair or replacement. In 1997, Colson spearheaded the combination of four contractors to form Quanta Services Inc., which then established its headquarters in Houston with Colson as its head. In addition to PAR, Quanta consisted of Union Power Construction Co., Trans Tech Electric Inc., and Potelco Inc. Initial public offering With BT Alex Brown Incorporated, BancAmerica Robertson Stephens, and Sanders Morris Mundy Inc. serving as underwriters, Quanta completed its IPO in February 1998, raising $45 million. Of that amount, $21 million was used to pay the cash portion of the buyouts of the four founding companies. Much of the balance, along with a $175 million line of credit arranged with a consortium of nine banks, was used on over a dozen acquisitions completed in 1998. Acquired telecom companies included Manuel Brothers, Smith Contracting, Telecom Network Specialists, North Pacific Construction Company, NorAm Telecommunications, Spalj Construction Company and Golden State Utility Company. Acquired electric contractors included Harker & Harker, Sumter Builders and Environmental Professional Associates. Hybrid acquisitions included Wilson Roadbores and Underground Construction Company. A secondary offering was completed in late January 1999. The company had planned to sell 3.5 million shares at $21 per share, but interest was so strong that in the end 4.6 million shares were sold at $23.25 per share. All told, Quanta realized $101.1 million. 
Money was used to fund the acquisition of 40 additional companies, which in total cost $323.6 million in cash and notes and 15 million shares of stock. Many of these additions were made to expand Quanta's business in gas transmission and cable television. UtiliCorp takeover bid In 2001, UtiliCorp United Inc. (now Aquila, Inc.), an energy company with whom PAR had been doing business since the 1950s, attempted a takeover of Quanta. UtiliCorp owned about 36% of Quanta, an investment that was originally part of a strategic alliance when UtiliCorp outsourced all of its maintenance needs to Quanta. Quanta resisted, and in October 2001, the two parties signed a standstill agreement. A month later Quanta adopted a "poison pill" plan to prevent a takeover, prompting UtiliCorp to sue. A proxy fight ensued in the spring of 2002. Quanta maintained that UtiliCorp, which was enduring difficult times, wanted to gain controlling interest in order to consolidate Quanta's earnings with its own balance sheet. The fight came to an end in May 2002, as Quanta fended off the takeover bid. Sale of telecommunication and fiber-optic licensing divisions On November 20, 2012, Quanta Services sold its telecommunications subsidiaries for $275 million in cash to Dycom. On August 4, 2015, Quanta Services sold its fiber optic licensing operations (Sunesys) to Crown Castle International Corp. (NYSE: CCI) for approximately $1 billion in cash. Acquisitions On August 30, 2007, Quanta Services acquired InfraSource Services through an all-stock deal. Before the merger, Engineering News-Record ranked Quanta Services as the second-largest specialty contractor in the United States and InfraSource Services as No. 8. This acquisition received popular attention after being given positive coverage on Jim Cramer's Mad Money show, in Smart Money and in TheStreet. In September 2009, Quanta Services announced that a deal had been struck to acquire Price Gregory, the largest U.S. gas pipeline construction company, for $350 million. With this acquisition, Quanta Services was expected to have consolidated 2009 revenue of $4.4 billion. On October 22, 2010, Quanta Services announced an agreement to acquire Canada's largest electric power line contractor, Valard Construction, for approximately $219 million. On September 2, 2021, Quanta Services announced that it entered into a definitive agreement to acquire Blattner Holding Company (Blattner), one of the largest and leading utility-scale renewable energy infrastructure solutions providers in North America, for $2.7 billion in stock and cash. Blattner generated full-year 2020 revenues and adjusted EBITDA (a non-GAAP measure) of approximately $2.4 billion and $291 million, respectively. In August 2022, Quanta purchased William E. Groves Construction Inc. of Madison, KY. Leadership On March 14, 2016, Earl C. “Duke” Austin succeeded former chief executive officer Jim O’Neil. Austin is currently president, chief executive officer and chief operating officer. He is a graduate of Sam Houston State University in Huntsville, Texas, and the former president of Quanta's Operating Unit: North Houston Pole Line. On April 2, 2012, Derrick A. Jensen succeeded former chief financial officer James H. Haddox. Jensen is a graduate of Oklahoma State University. 
Lower Rio Grande Valley Energized Reconductor Project On June 13, 2016, American Electric Power (AEP) received the 89th annual Edison Electric Institute (EEI) Edison Award, the electric power industry's most prestigious honor, for its Energized Reconductor Project in the Lower Rio Grande Valley (LRGV) of Texas. The 240-mile project was made possible by the live-line planning capabilities of Quanta Energized Services (QES) and the construction expertise of North Houston Pole Line. References External links Companies listed on the New York Stock Exchange Companies based in Houston Construction and civil engineering companies of the United States Engineering companies of the United States Energy engineering and contractor companies American companies established in 1997 1998 initial public offerings
Quanta Services
Engineering
1,549
37,702,763
https://en.wikipedia.org/wiki/Kempf%E2%80%93Ness%20theorem
In algebraic geometry, the Kempf–Ness theorem, introduced by George Kempf and Linda Ness in 1979, gives a criterion for the stability of a vector in a representation of a complex reductive group. If the complex vector space is given a norm that is invariant under a maximal compact subgroup of the reductive group, then the Kempf–Ness theorem states that a vector is stable if and only if the norm attains a minimum value on the orbit of the vector. The theorem has the following consequence: if X is a complex smooth projective variety and G is a reductive complex Lie group, then X//G (the GIT quotient of X by G) is homeomorphic to the symplectic quotient of X by a maximal compact subgroup of G. References Invariant theory Theorems in algebraic geometry
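In symbols, the criterion above can be rendered as follows (a schematic sketch in standard notation; the symbols G, K, V and the norm are conventional choices, not taken from the article):

```latex
% G a complex reductive group acting linearly on a complex vector space V,
% K \subseteq G a maximal compact subgroup, \|\cdot\| a K-invariant norm on V.
\[
  v \in V \ \text{is stable}
  \quad\Longleftrightarrow\quad
  \text{the map}\ \; g \longmapsto \lVert g \cdot v \rVert
  \ \text{attains a minimum on the orbit}\ G \cdot v .
\]
```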
Kempf–Ness theorem
Physics,Mathematics
160
60,245,619
https://en.wikipedia.org/wiki/Rosalind%20Franklin%20Fellowship
The Rosalind Franklin Fellowship (RFF) is an initiative of the University of Groningen, the Netherlands. It is named in honor of Rosalind Franklin. The purpose of the RFF program is to promote the advancement of talented international researchers at the highest levels of the institution. History The program is co-funded by the European Union and primarily directed at female academics, who have a PhD and substantial post-graduation work experience, and who aim for a career towards full professorship at a European top research university. The 5-year fellowship is given to female academics with an outstanding track record, including high-quality publications, external funding, and leadership, and provides the fellow with salary and research funds to start a research group and conduct independent research. In 2009, Queen Máxima of the Netherlands joined the Fellowship Ceremony. Since its start in 2003, the RFF program has, as of 2019, successfully supported more than 80 female academics, who now constitute more than 10% of the female professors of the university. Fellows 2019–2020 Sandy Schmidt, Science and Engineering Hannah Dugdale, Science and Engineering Julia Kamenz, Science and Engineering Inge Holtman, Medical Sciences Hilde Bras, Arts Sumaya Albalooshi, Economics and Business Mònica Colominas Aparicio, Theology and Religious Studies Valentina Gallo, Campus Fryslân Zoé Christoff, Science and Engineering Helle Hansen, Science and Engineering Renata Raidou, Science and Engineering Kasia Tych, Science and Engineering Jagoda Slawinska, Science and Engineering Jingxiu Xie, Science and Engineering Elisabeth Wilhelm, Science and Engineering Lisa Herzog, Philosophy Ema Dimastrogiovanni, Science and Engineering Cecília Salgado Guimarães da Silva, Science and Engineering Annette Bergemann, Economics and Business Tessa Quax, Science and Engineering 2017–2018 Sofia Fernandes Da Silva Ranchordás, Law Lingyu Wang, Science and Engineering Manuela Vecchi, KVI-CART Milena Nikolova, Economics and Business Antje Schmitt, Behavioural and Social Sciences Jessica de Bloom, Economics and Business Başak Bilecen, Behavioural and Social Sciences Rieneke Slager, Economics and Business 2015–2016 Su Lam, UMCG (Experimental Cardiology) Judith Daniels, Social Sciences (Psychology) Janette Burgess, UMCG (Cell Biology) Iris Jonkers, UMCG (Genetics) Marit Westerterp, UMCG Miriam Kunz, UMCG (Geriatrics) Judith Paridaen, UMCG (Ageing Biology) Lucy Avraamidou, Science and Engineering Kerstin Bunte, Science and Engineering Julia Even, Science and Engineering Pratika Dayal, Science and Engineering (Astronomy) Amalia Dolga, Science and Engineering (Molecular Pharmacology) Anastasia Borschevsky, Science and Engineering Marthe Walvoort, Science and Engineering (Chemical Biology) Jing Wan, Economics and Business (Marketing) 2013–2014 Dorina Buda (FRW, Tourism) Susanne Tauber (FEB, HRM&OB) Raquel Ortega Argilés (FEB, GR&M) Martine Maan (FSE, CBN) Ykelien Boersma (FSE, GRIP) Anna Salvati (FSE, GRIP) Mónica López López (GMW, Orthopedagogy) Stefania Travagnin (GGW, Religious Studies) Brigit Toebes (Law, Constitutional Law and International Law) Merel Keijzer (Let, Applied Linguistics) Romana Schirhagl (UMCG, Biomedical Engineering) Sophia Bruggeman (UMCG, Paediatrics) Maaike Oosterveer (UMCG, Paediatrics) Sonja Pyott (UMCG, ENT) 2011–2012 Karina Isabel Caputi (FSE, Astronomy) Angela Casini (FSE, Medicinal Inorganic Chemistry) Jennifer Jordan (FEB, HRM & OB) Jia Liu (FEB, Marketing) Alexandra Zhernakova (UMCG, Genetics) Pascale 
Francis Dijkers (UMCG, Cell Biology) Kathrin Thedieck (UMCG) Maria Colomé Tatché (UMCG) Olha Cherednychenko (Law, Private Law) Caroline Fournet (Law, Criminal Law) Catarina Dutilh Novaes (FWB, Theoretical Philosophy) Tamara Witschge (Let, Journalism) Lidewijde de Jong (Let, Archeology) Joanne van der Woude (Let, English) Aleksandra Biegun (KVI, Proton Therapy) 2009–2010 Bregje Wertheim (FSE, Biology) Anke Terwisscha van Scheltinga (FSE) Tamalika Banerjee (FSE, Physics) Sabrina Corbellini (LET, Dutch Literature) Monika Baár (LET, History) Carolina Armenteros (LET, History) Dineke Verbeek (UMCG, Neurology) Ingrid Nijholt (UMCG, Neuroscience) Barbara Bakker (UMCG, Biochemistry) Nicoletta Kahya (UMCG, Cellbiology) Deniz Başkent (UMCG, Biophysics) Barbara Van Leeuwen (UMCG, Chirurgische Oncologie) Joke Spikman (UMCG/GMW, Neuropsychologie) Jeanne Mifsud Bonnici (Law) Hinke Haisma (FRW) Mirjam Dür (FSE, Mathematics) Sonja Smets (FSE, Artificial Intelligence and FWB, Theoretical Philosophy) 2007 Maria Antonietta Loi (FSE, Physics) Martina Schmidt (FSE, Pharmacy) Irene Tieleman (FSE, Biology) Laura Spierdijk (FEB) Monika Schmid (LET), Engels Marie-Christine Opdenakker (GMW, Educational Science) Floor Rink (FEB, Organizational Psychology) Ute Bültmann (UMCG, Psychische Gezondheid en arbeidsparticipatie) Marianne Rots (UMCG, Pathologie en Laboratoriumgeneeskunde) Jetta Bijlsma (UMCG, Medical Microbiology) Ellen Nollen (UMCG, Genetics) Eriko Takano (FSE, Microbial Physiology) 2003 Beatriz Noheda (FSE, Physics) Elisabetta Pallante (FSE, Physics) Charlotte Hemelrijk (FSE, Biology) References Awards established in 2003 Fellowships Science awards honoring women University of Groningen
Rosalind Franklin Fellowship
Technology
1,383
333,853
https://en.wikipedia.org/wiki/Frank%20Watson%20Dyson
Sir Frank Watson Dyson, KBE, FRS, FRSE (8 January 1868 – 25 May 1939) was an English astronomer and the ninth Astronomer Royal, who is remembered today largely for introducing time signals ("pips") from Greenwich, England, and for the role he played in proving Einstein's theory of general relativity. Biography Dyson was born in Measham, near Ashby-de-la-Zouch, Leicestershire, the son of the Rev Watson Dyson, a Baptist minister, and his wife, Frances Dodwell. The family lived on St John Street in Wirksworth while Frank was between one and three years old. They moved to Yorkshire in his youth. There he attended Heath Grammar School, Halifax, and subsequently won scholarships to Bradford Grammar School and Trinity College, Cambridge, where he studied mathematics and astronomy, being placed Second Wrangler in 1889. In 1894 he joined the Royal Astronomical Society and the British Astronomical Association, was given the post of Senior Assistant at Greenwich Observatory, and worked on the Astrographic Catalogue, which was published in 1905. He was Astronomer Royal for Scotland from 1905 to 1910, and Astronomer Royal (and Director of the Royal Greenwich Observatory) from 1910 to 1933. In 1928, he introduced at the Observatory a new free-pendulum clock, the most accurate clock available at that time, and organised the regular wireless transmission of Greenwich Mean Time from the GPO wireless station at Rugby. He also, in 1924, introduced the distribution of the "six pips" via the BBC. He was for several years President of the British Horological Institute and was awarded their gold medal in 1928. Dyson was noted for his study of solar eclipses and was an authority on the spectrum of the corona and on the chromosphere. He is credited with organising expeditions to observe the 1919 solar eclipse in Brazil and on Príncipe, which he somewhat optimistically began preparing for prior to the Armistice of 11 November 1918. Dyson presented his observations of the solar eclipse of 29 May 1919 to a joint meeting of the Royal Society and Royal Astronomical Society on 6 November 1919. The observations confirmed Albert Einstein's theory of the effect of gravity on light, which until that time had been received with some scepticism by the scientific community. Dyson died on board a ship while travelling from Australia to England in 1939, and was buried at sea. Honours and awards Fellow of the Royal Society – 1901 Fellow of the Royal Society of Edinburgh – 1906 President, Royal Astronomical Society – 1911–1913 Vice-president, Royal Society – 1913–1915 Knighted – 1915 President, British Astronomical Association, 1916–1918 Royal Medal of the Royal Society – 1921 Bruce Medal of the Astronomical Society of the Pacific – 1922 Gold Medal of the Royal Astronomical Society – 1925 Knight Commander of the Order of the British Empire – 1926 Gold medal of British Horological Institute – 1928 President of the International Astronomical Union – 1928–1932 Between 1894 and 1906, Dyson lived at 6 Vanbrugh Hill, Blackheath, London SE3, in a house now marked by a blue plaque. The crater Dyson on the Moon is named after him, as is the asteroid 1241 Dysona. Family In 1894 he married Caroline Bisset Best (d.1937), the daughter of Palemon Best, with whom he had two sons and six daughters. Frank Dyson and Freeman Dyson Although Frank Dyson and theoretical physicist Freeman Dyson were not known to be related, their fathers Rev Watson Dyson and George Dyson both hailed from West Yorkshire, where the surname originates and is most densely clustered. 
Freeman Dyson credited Sir Frank with sparking his interest in astronomy: because they shared the same last name, Sir Frank's achievements were discussed by Freeman Dyson's family when he was a young boy. Inspired, Dyson's first attempt at writing was a 1931 piece of juvenilia entitled "Sir Phillip Robert's Erolunar Collision" – Sir Philip being a thinly disguised version of Sir Frank. In popular media Actor Alec McCowen was cast as Sir Frank Dyson in the TV series Longitude, broadcast in 2000. Selected writings Astronomy, Frank Dyson, London, Dent, 1910 See also Einstein and Eddington References External links Online catalogue of Dyson's working papers (part of the Royal Greenwich Observatory Archives held at Cambridge University Library) Bruce Medal page Awarding of Bruce Medal: PASP 34 (1922) 2 Awarding of RAS gold medal: MNRAS 85 (1925) 672 Astronomische Nachrichten 268 (1939) 395/396 (one line) Monthly Notices of the Royal Astronomical Society 100 (1940) 238 The Observatory 62 (1939) 179 Publications of the Astronomical Society of the Pacific 51 (1939) 336 1868 births 1939 deaths Astronomers Royal People who died at sea Burials at sea 20th-century English astronomers People from Measham Royal Medal winners People educated at Bradford Grammar School Fellows of the Royal Society Foreign associates of the National Academy of Sciences Knights Commander of the Order of the British Empire Second Wranglers Recipients of the Bruce Medal Recipients of the Gold Medal of the Royal Astronomical Society Presidents of the Institute of Physics People educated at Heath Grammar School Academics of the University of Edinburgh Presidents of the Royal Astronomical Society Presidents of the International Astronomical Union Masters of the Worshipful Company of Clockmakers
Frank Watson Dyson
Astronomy
1,080
13,973,317
https://en.wikipedia.org/wiki/Rule%20of%20least%20power
In programming, the rule of least power is a design principle that "suggests choosing the least powerful [computer] language suitable for a given purpose". Stated alternatively, given a choice among computer languages, classes of which range from descriptive (or declarative) to procedural, the less procedural and more descriptive the language one chooses, the more one can do with the data stored in that language. This rule is an application of the principle of least privilege to protocol design. The Rule of Least Power is an instance, in the context of computer languages, of the centuries-old principle known in philosophy as Occam's razor. In particular, arguments for and against the Rule of Least Power are subject to the same analysis as those for Occam's razor. Rationale Originally proposed as an axiom of good design, the term is an extension of the KISS principle applied to choosing among a range of languages: from the plainly descriptive ones (such as the content of most databases, or progressive enhancement on the web), through logical languages of limited propositional logic (such as access control lists), declarative languages on the verge of being Turing-complete, languages that are in fact Turing-complete though one is led not to use them that way (XSLT, SQL), and functional, Turing-complete general-purpose programming languages, to those that are "unashamedly imperative". As explained by Tim Berners-Lee: Computer Science in the 1960s to 80s spent a lot of effort making languages that were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. The reason for this is that the less powerful the language, the more you can do with the data stored in that language. If you write it in a simple declarative form, anyone can write a program to analyze it in many ways. The Semantic Web is an attempt, largely, to map large quantities of existing data onto a common language so that the data can be analyzed in ways never dreamed of by its creators. If, for example, a web page with weather data has RDF describing that data, a user can retrieve it as a table, perhaps average it, plot it, deduce things from it in combination with other information. At the other end of the scale is the weather information portrayed by the cunning Java applet. While this might allow a very cool user interface, it cannot be analyzed at all. The search engine finding the page will have no idea of what the data is or what it is about. The only way to find out what a Java applet means is to set it running in front of a person. See also Worse is better References The Rule of Least Power, W3C, TAG Finding 23 February 2006 B. Carpenter, Editor: "Architectural Principles of the Internet", Internet Architecture Board, June 1996, RFC 1958 Software development philosophies Software design Programming language folklore Software engineering folklore
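As a minimal illustration of the trade-off Berners-Lee describes (a sketch only; the station name, field names and values are hypothetical): the same information published as declarative data can be analysed in ways its author never anticipated, while the equivalent imperative program can, in general, only be run and watched.

```python
import json
import statistics

# Declarative ("least powerful"): plain data. Any consumer can parse it,
# average it, plot it, or join it with other datasets.
weather_json = '{"station": "OSLO-1", "readings_c": [3.1, 4.0, 2.7]}'
record = json.loads(weather_json)
print(statistics.mean(record["readings_c"]))  # derived fact: mean temperature

# Imperative ("most powerful"): the same information locked inside a
# procedure. Short of running it and scraping its output, a search engine
# or analysis tool cannot tell what it is about.
def render_weather():
    for reading in (3.1, 4.0, 2.7):
        print(f"Temperature: {reading} C")
```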
Rule of least power
Engineering
597
1,062,137
https://en.wikipedia.org/wiki/Top-down%20cosmology
In theoretical physics, top-down cosmology is a proposal to regard the many possible histories of a given event as having real existence. This idea of multiple histories has been applied to cosmology, in a theoretical interpretation in which the universe has multiple possible cosmologies, and in which reasoning backwards from the current state of the universe to a quantum superposition of possible cosmic histories makes sense. Stephen Hawking has argued that the principles of quantum mechanics forbid a single cosmic history, and has proposed cosmological theories in which the lack of a past boundary condition naturally leads to multiple histories, called the 'no-boundary proposal' and embodied in the proposed Hartle–Hawking state. According to Hawking and Thomas Hertog, "The top-down approach we have described leads to a profoundly different view of cosmology, and the relation between cause and effect. Top-down cosmology is a framework in which one essentially traces the histories backwards, from a spacelike surface at the present time. The no-boundary histories of the universe thus depend on what is being observed, contrary to the usual idea that the universe has a unique, observer-independent history." See also Consistent histories Multiverse Quantum cosmology Hartle–Hawking state References Physical cosmology Quantum measurement
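For orientation, the no-boundary wave function that underlies this picture is conventionally written as a Euclidean path integral over compact four-geometries with no past boundary (standard textbook notation, not taken from the article):

```latex
% Hartle–Hawking no-boundary wave function (schematic):
% sum over compact Euclidean four-geometries g and matter fields \phi whose
% only boundary carries the three-metric h_{ij} and field configuration \phi.
\[
  \Psi[h_{ij}, \phi] \;=\; \int_{\mathcal{C}} \mathcal{D}g \, \mathcal{D}\phi \;
  e^{-S_{E}[g_{\mu\nu}, \phi]/\hbar}
\]
```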
Top-down cosmology
Physics,Astronomy
260
20,055
https://en.wikipedia.org/wiki/Moving%20Picture%20Experts%20Group
The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics, and genomic data, and transmission and file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29). MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding and MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH). History MPEG was established in 1988 on the initiative of Dr. Hiroshi Yasuda (NTT) and Dr. Leonardo Chiariglione (CSELT). Chiariglione was the group's chair (called Convenor in ISO/IEC terminology) from its inception until June 6, 2020. The first MPEG meeting was in May 1988 in Ottawa, Canada. Starting around the time of the MPEG-4 project in the late 1990s and continuing to the present, MPEG has grown to include approximately 300–500 members per meeting from various industries, universities, and research institutions. On June 6, 2020, the MPEG section of Chiariglione's personal website was updated to inform readers that he had retired as Convenor, and he said that the MPEG group (then SC 29/WG 11) "was closed". Chiariglione described his reasons for stepping down in his personal blog. His decision followed a restructuring process within SC 29, in which "some of the subgroups of WG 11 (MPEG) [became] distinct MPEG working groups (WGs) and advisory groups (AGs)" in July 2020. Prof. Jörn Ostermann of the University of Hannover was appointed as Acting Convenor of SC 29/WG 11 during the restructuring period and was then appointed Convenor of SC 29's Advisory Group 2, which coordinates MPEG overall technical activities. The MPEG structure that replaced the former Working Group 11 includes three Advisory Groups (AGs) and seven Working Groups (WGs): SC 29/AG 2: MPEG Technical Coordination (Convenor: Prof. Jörn Ostermann of the University of Hannover, Germany) SC 29/AG 3: MPEG Liaison and Communication (Convenor: Prof. Kyuheon Kim of Kyung Hee University, Korea) SC 29/AG 5: MPEG Visual Quality Assessment (Convenor: Dr. Mathias Wien of RWTH Aachen University, Germany) SC 29/WG 2: MPEG Technical Requirements (Convenor: Dr. Igor Curcio of Nokia, Finland) SC 29/WG 3: MPEG Systems (Convenor: Dr. Youngkwon Lim of Samsung, Korea) SC 29/WG 4: MPEG Video Coding (Convenor: Prof. Lu Yu of Zhejiang University, China) SC 29/WG 5: MPEG Joint Video Coding Team with ITU-T SG16 (Convenor: Prof. Jens-Rainer Ohm of RWTH Aachen University, Germany; formerly co-chairing with Dr. Gary Sullivan of Microsoft, United States) SC 29/WG 6: MPEG Audio coding (Convenor: Dr. Schuyler Quackenbush of Audio Research Labs, United States) SC 29/WG 7: MPEG 3D Graphics coding (Convenor: Prof. Marius Preda of Institut Mines-Télécom SudParis) SC 29/WG 8: MPEG Genomic coding (Convenor: Dr. 
Marco Mattavelli of EPFL, Switzerland) The first meeting under the current structure was held in August 2024, as MPEG 147. Cooperation with other groups MPEG-2 MPEG-2 development included a joint project between MPEG and ITU-T Study Group 15 (which later became ITU-T SG16), resulting in publication of the MPEG-2 Systems standard (ISO/IEC 13818-1, including its transport streams and program streams) as ITU-T H.222.0 and the MPEG-2 Video standard (ISO/IEC 13818-2) as ITU-T H.262. Sakae Okubo (NTT) was the ITU-T coordinator and chaired the agreements on its requirements. Joint Video Team The Joint Video Team (JVT) was a joint project between ITU-T SG16/Q.6 (Study Group 16 / Question 6) – VCEG (Video Coding Experts Group) and ISO/IEC JTC 1/SC 29/WG 11 – MPEG for the development of a video coding ITU-T Recommendation and ISO/IEC International Standard. It was formed in 2001, and its main result was H.264/MPEG-4 AVC (MPEG-4 Part 10), which reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.262 / MPEG-2 standard. The JVT was chaired by Dr. Gary Sullivan, with vice-chairs Dr. Thomas Wiegand of the Heinrich Hertz Institute in Germany and Dr. Ajay Luthra of Motorola in the United States. Joint Collaborative Team on Video Coding The Joint Collaborative Team on Video Coding (JCT-VC) was a group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG). It was created in 2010 to develop High Efficiency Video Coding (HEVC, MPEG-H Part 2, ITU-T H.265), a video coding standard that further reduces by about 50% the data rate required for video coding, as compared to the then-current ITU-T H.264 / ISO/IEC 14496-10 standard. JCT-VC was co-chaired by Prof. Jens-Rainer Ohm and Gary Sullivan. Joint Video Experts Team The Joint Video Experts Team (JVET) is a joint group of video coding experts from ITU-T Study Group 16 (VCEG) and ISO/IEC JTC 1/SC 29/WG 11 (MPEG), created in 2017 after an exploration phase that began in 2015. JVET developed Versatile Video Coding (VVC, MPEG-I Part 3, ITU-T H.266), completed in July 2020, which further reduces the data rate for video coding by about 50%, as compared to the then-current ITU-T H.265 / HEVC standard, and the JCT-VC was merged into JVET in July 2020. Like JCT-VC, JVET was co-chaired by Jens-Rainer Ohm and Gary Sullivan, until July 2021 when Ohm became the sole chair (after Sullivan became the chair of SC 29). Standards The MPEG standards consist of different Parts. Each Part covers a certain aspect of the whole specification. The standards also specify profiles and levels. Profiles are intended to define a set of tools that are available, and Levels define the range of appropriate values for the properties associated with them. Some of the approved MPEG standards were revised by later amendments and/or new editions. The primary early MPEG compression formats and related standards include: MPEG-1 (1993): Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s (ISO/IEC 11172). This initial version is a lossy file format and is the first MPEG compression standard for audio and video. It is commonly limited to about 1.5 Mbit/s, although the specification is capable of much higher bit rates. It was basically designed to allow moving pictures and sound to be encoded into the bitrate of a compact disc. It is used on Video CD and can be used for low-quality video on DVD Video. 
It was used in digital satellite/cable TV services before MPEG-2 became widespread. To meet the low bitrate requirement, MPEG-1 downsamples the images and uses picture rates of only 24–30 Hz, resulting in moderate quality. It includes the popular MPEG-1 Audio Layer III (MP3) audio compression format. MPEG-2 (1996): Generic coding of moving pictures and associated audio information (ISO/IEC 13818). Transport, video and audio standards for broadcast-quality television. The MPEG-2 standard was considerably broader in scope and of wider appeal – supporting interlacing and high definition. MPEG-2 is considered important because it was chosen as the compression scheme for over-the-air digital television ATSC, DVB and ISDB, digital satellite TV services like Dish Network, digital cable television signals, SVCD and DVD-Video. It is also used on Blu-ray Discs, but these normally use MPEG-4 Part 10 or SMPTE VC-1 for high-definition content. MPEG-4 (1998): Coding of audio-visual objects. (ISO/IEC 14496) MPEG-4 provides a framework for more advanced compression algorithms, potentially resulting in higher compression ratios compared to MPEG-2 at the cost of higher computational requirements. MPEG-4 also supports Intellectual Property Management and Protection (IPMP), which provides the facility to use proprietary technologies to manage and protect content like digital rights management. It also supports MPEG-J, a fully programmatic solution for creation of custom interactive multimedia applications (Java application environment with a Java API) and many other features. Two new higher-efficiency video coding standards (newer than MPEG-2 Video) are included: MPEG-4 Part 2 (including its Simple and Advanced Simple profiles) and MPEG-4 AVC (MPEG-4 Part 10 or ITU-T H.264, 2003). MPEG-4 AVC may be used on HD DVD and Blu-ray Discs, along with VC-1 and MPEG-2. MPEG-4 AVC was chosen as the video compression scheme for over-the-air television broadcasting in Brazil (ISDB-TB), based on the digital television system of Japan (ISDB-T). An MPEG-3 project was cancelled. MPEG-3 was planned to deal with standardizing scalable and multi-resolution compression and was intended for HDTV compression, but it was found to be unnecessary and was merged with MPEG-2; as a result, there is no MPEG-3 standard. The cancelled MPEG-3 project is not to be confused with MP3, which is MPEG-1 or MPEG-2 Audio Layer III. In addition, the following standards, while not sequential advances to the video encoding standard as with MPEG-1 through MPEG-4, are referred to by similar notation: MPEG-7 (2002): Multimedia content description interface. (ISO/IEC 15938) MPEG-21 (2001): Multimedia framework (MPEG-21). (ISO/IEC 21000) MPEG describes this standard as a multimedia framework and provides for intellectual property management and protection. Moreover, more recently than the other standards above, MPEG has produced the following international standards; each of the standards holds multiple MPEG technologies for a variety of applications. (For example, MPEG-A includes a number of technologies on multimedia application format.) MPEG-A (2007): Multimedia application format (MPEG-A). (ISO/IEC 23000) (e.g., an explanation of the purpose for multimedia application formats, MPEG music player application format, MPEG photo player application format and others) MPEG-B (2006): MPEG systems technologies. 
(ISO/IEC 23001) (e.g., Binary MPEG format for XML, Fragment Request Units (FRUs), Bitstream Syntax Description Language (BSDL), MPEG Common Encryption and others) MPEG-C (2006): MPEG video technologies. (ISO/IEC 23002) (e.g., accuracy requirements for implementation of integer-output 8x8 inverse discrete cosine transform and others) MPEG-D (2007): MPEG audio technologies. (ISO/IEC 23003) (e.g., MPEG Surround, SAOC-Spatial Audio Object Coding and USAC-Unified Speech and Audio Coding) MPEG-E (2007): Multimedia Middleware. (ISO/IEC 23004) (a.k.a. M3W) (e.g., architecture, multimedia application programming interface (API), component model and others) MPEG-G (2019) Genomic Information Representation (ISO/IEC 23092), Parts 1–6 for transport and storage, coding, metadata and APIs, reference software, conformance, and annotations Supplemental media technologies (2008, later replaced and withdrawn). (ISO/IEC 29116) It had one published part, media streaming application format protocols, which was later replaced and revised in MPEG-M Part 4's MPEG extensible middleware (MPEG-M) protocols. MPEG-V (2011): Media context and control. (ISO/IEC 23005) (a.k.a. Information exchange with Virtual Worlds) (e.g., Avatar characteristics, Sensor information, Architecture and others) MPEG-M (2010): MPEG eXtensible Middleware (MXM). (ISO/IEC 23006) (e.g., MXM architecture and technologies, API, and MPEG extensible middleware (MXM) protocols) MPEG-U (2010): Rich media user interfaces. (ISO/IEC 23007) (e.g., Widgets) MPEG-H (2013): High Efficiency Coding and Media Delivery in Heterogeneous Environments. (ISO/IEC 23008) Part 1 – MPEG media transport; Part 2 – High Efficiency Video Coding (HEVC, ITU-T H.265); Part 3 – 3D Audio. MPEG-DASH (2012): Information technology – Dynamic adaptive streaming over HTTP (DASH). (ISO/IEC 23009) Part 1 – Media presentation description and segment formats MPEG-I (2020): Coded Representation of Immersive Media (ISO/IEC 23090), including Part 2 Omnidirectional Media Format (OMAF) and Part 3 – Versatile Video Coding (VVC, ITU-T H.266) MPEG-CICP (ISO/IEC 23091) Coding-Independent Code Points (CICP), Parts 1–4 for systems, video, audio, and usage of video code points Standardization process A standard published by ISO/IEC is the last stage of an approval process that starts with the proposal of new work within a committee. Stages of the standard development process include: NP or NWIP – New Project or New Work Item Proposal AWI – Approved Work Item WD – Working Draft CD or CDAM – Committee Draft or Committee Draft Amendment DIS or DAM – Draft International Standard or Draft Amendment FDIS or FDAM – Final Draft International Standard or Final Draft Amendment IS or AMD – International Standard or Amendment Other abbreviations: DTR – Draft Technical Report (for information) TR – Technical Report DCOR – Draft Technical Corrigendum (for corrections) COR – Technical Corrigendum A proposal of work (New Proposal) is approved at the Subcommittee level and then at the Technical Committee level (SC 29 and JTC 1, respectively, in the case of MPEG). When the scope of new work is sufficiently clarified, MPEG usually makes open "calls for proposals". The first document that is produced for audio and video coding standards is typically called a test model. When a sufficient confidence in the stability of the standard under development is reached, a Working Draft (WD) is produced. 
When a WD is sufficiently solid (typically after producing several numbered WDs), the next draft is issued as a Committee Draft (CD) (usually at the planned time) and is sent to National Bodies (NBs) for comment. When a consensus is reached to proceed to the next stage, the draft becomes a Draft International Standard (DIS) and is sent for another ballot. After a review and comments issued by NBs and a resolution of comments in the working group, a Final Draft International Standard (FDIS) is typically issued for a final approval ballot. The final approval ballot is voted on by National Bodies, with no technical changes allowed (a yes/no approval ballot). If approved, the document becomes an International Standard (IS). In cases where the text is considered sufficiently mature, the WD, CD, and/or FDIS stages can be skipped. The development of a standard is completed when the FDIS document has been issued, with the FDIS stage only being for final approval, and in practice, the FDIS stage for MPEG standards has always resulted in approval. See also Video Coding Experts Group (VCEG) Joint Photographic Experts Group (JPEG) Joint Bi-level Image Experts Group (JBIG) Multimedia and Hypermedia Experts Group (MHEG) Alliance for Open Media (AOMedia) Notes External links Papers and books on MPEG Computer file formats Film and video technology Organizations established in 1988 Working groups
Moving Picture Experts Group
Technology
3,639
1,568,133
https://en.wikipedia.org/wiki/Automatic%20Performance%20Control
Automatic Performance Control (APC) was the first engine knock and boost control system. The APC was invented by Per Gillbrand at the Swedish car maker Saab. Saab introduced it on the turbocharged Saab H engines in 1982, and the APC was fitted to all subsequent 900 Turbos through 1993 (and 1994 convertibles), as well as 9000 Turbos through 1989. The APC was sold to Maserati to equip the carbureted Maserati Biturbo, with different settings for the Biturbo, and was known as the Maserati Automatic Boost Controller (MABC). The APC allowed a higher compression ratio (initially, 8.5:1 as opposed to 7.2:1, and, on 16-valve variants introduced in 1985, 9.0:1). This improved fuel economy and allowed the use of low-octane petrol without the engine damage caused by knock. The APC controls boost pressure and the overall performance, specifically the rate of rise and maximum boost level – and it detects and manages harmful knock events. To control the turbocharger, the APC monitors the engine's RPM and inlet manifold pressure via a pressure transducer, and uses these inputs to control a solenoid valve that trims the rate of rise of pressure as well as the maximum pressure by directing boost pressure to the turbocharger's pneumatic wastegate actuator. To detect knock, a piezoelectric knock sensor (basically a microphone) bolted to the engine block responds to unique frequencies caused by engine knock. The sensor generates a small voltage that is sent to the electronic control unit, which processes the signal to determine if, in fact, knock is occurring. If it is, then the control unit activates a solenoid valve that directs boost pressure to the turbocharger's pneumatically controlled wastegate, which opens to bypass exhaust gases from the turbocharger directly to the exhaust pipe, lowering turbo boost pressure until the knock subsides. Knock events that are managed by the APC can be discerned when the in-dash boost needle "twitches" slightly. The APC unit has a 'knock' output where an LED may be connected. This LED will then light up if knock is detected. Because the knock sensor becomes less accurate at high revolutions, the APC tapers maximum boost pressure after approximately 4,500 RPM. APC boost gauge Saab Full Pressure Turbo (FPT) models with this unit include the APC name displayed on a non-numeric boost pressure gauge in the instrument panel. Although knock sensors are now common on both turbocharged and non-turbocharged engines, Saab has continued to use the APC name prominently as a differentiating feature. The white area on the left side of the scale shows manifold vacuum under normal driving conditions, the short white dash is atmospheric pressure (engine off), the orange scale is where there is safe turbo boost, and the red scale is boost above 0.5–0.7 bar, where the wastegate may be opened or a fuel cut due to overboost may occur. Saab integrated the APC's boost control functionality with ignition control in 1990 with the introduction of the DI/APC system, available in 9000 models only. The DI/APC system managed knock not only by decreasing boost via a solenoid but by retarding ignition timing as well; DI/APC also managed the engine's basic ignition timing. See also Saab Information Display Trionic References Engine technology Saab
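The control strategy described above (monitor engine speed and manifold pressure, trim the wastegate solenoid, and back boost off when the knock sensor fires) can be sketched as a simple control loop. The sketch below is a hypothetical illustration in Python, not Saab's actual calibration or firmware: all function names, thresholds, and duty-cycle values are assumptions, with only the roughly 4,500 rpm taper point taken from the article.

```python
# Hypothetical sketch of an APC-style boost/knock control loop.
# All numeric values are illustrative assumptions, not Saab calibration data.

BASE_BOOST = 0.85   # bar, assumed full-boost target
LOW_BOOST = 0.40    # bar, assumed reduced target while knock persists
TAPER_RPM = 4500    # rpm above which boost is tapered (from the article)

def boost_target(rpm: float, knock_detected: bool) -> float:
    """Boost target the wastegate solenoid should regulate toward."""
    target = LOW_BOOST if knock_detected else BASE_BOOST
    if rpm > TAPER_RPM:
        # Knock sensing is less accurate at high revolutions, so taper boost.
        target *= max(0.5, 1.0 - (rpm - TAPER_RPM) / 4000.0)
    return target

def wastegate_duty(manifold_pressure: float, rpm: float, knock: bool) -> float:
    """Proportional-only sketch: route more boost pressure to the wastegate
    actuator (opening it and reducing boost) as measured pressure exceeds
    the target."""
    error = manifold_pressure - boost_target(rpm, knock)
    return min(1.0, max(0.0, 0.5 + 2.0 * error))
```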
Automatic Performance Control
Technology
738
87,164
https://en.wikipedia.org/wiki/Joseph%20Whitworth
Sir Joseph Whitworth, 1st Baronet (21 December 1803 – 22 January 1887) was an English engineer, entrepreneur, inventor and philanthropist. In 1841, he devised the British Standard Whitworth system, which created an accepted standard for screw threads. Whitworth also created the Whitworth rifle, often called the "sharpshooter" because of its accuracy, which is considered one of the earliest examples of a sniper rifle, used by some Confederate forces during the American Civil War. Whitworth was created a baronet by Queen Victoria in 1869. Upon his death in 1887, Whitworth bequeathed much of his fortune to the people of Manchester, with the Whitworth Art Gallery and Christie Hospital partly funded by Whitworth's money. Whitworth Street and Whitworth Hall in Manchester are named in his honour. Whitworth's company merged with the W.G. Armstrong & Mitchell Company to become Armstrong Whitworth in 1897. Biography Early life Whitworth was born in John Street, Stockport, Cheshire, where the Stockport Courthouse is today. The site is marked by a blue plaque on the back wall of the courthouse. He was the son of Charles Whitworth, a teacher and Congregational minister, and at an early age developed an interest in machinery. He was educated at Idle, near Bradford, West Riding of Yorkshire; his aptitude for mechanics became apparent when he began work for his uncle. Career After leaving school Whitworth became an indentured apprentice to his uncle, Joseph Hulse, a cotton spinner at Amber Mill, Oakerthorpe in Derbyshire. The plan was that Whitworth would become a partner in the business. From the outset he was fascinated by the mill's machinery and soon mastered the techniques of the cotton spinning industry, but even at this age he noticed the poor standards of accuracy and was critical of the milling machinery. This early exposure to the mechanics of the industry forged in him the ambition to make machinery with much greater precision. His apprenticeship at Amber Mill lasted for a four-year term, after which he worked for another four years as a mechanic in a factory in Manchester. He then moved to London, where he found employment working for Henry Maudslay, the inventor of the screw-cutting lathe, alongside such people as James Nasmyth (inventor of the steam hammer) and Richard Roberts. Whitworth developed great skill as a mechanic while working for Maudslay, developing various precision machine tools and also introducing a box casting scheme for the iron frames of machine tools that simultaneously increased their rigidity and reduced their weight. Whitworth also worked for Holtzapffel & Co (makers of lathes used primarily for ornamental turning) and Joseph Clement. While at Clement's workshop he helped with the manufacture of Charles Babbage's calculating machine, the Difference Engine. He returned to Openshaw, Manchester, in 1833 to start his own business manufacturing lathes and other machine tools, which became renowned for their high standard of workmanship. Whitworth is credited with introducing the thou in 1844. In 1853, along with his lifelong friend, artist and art educator George Wallis (1811–1891), he was appointed a British commissioner for the New York International Exhibition. They toured the industrial sites of several American states, and the result of their journey was a report, 'The Industry of the United States in Machinery, Manufactures and Useful and Applied Arts, compiled from the Official Reports of Messrs Whitworth and Wallis, London, 1854.' 
Whitworth received many awards for the excellence of his designs and was financially very successful. In 1850, then president of the Institution of Mechanical Engineers, he built a house called 'The Firs' in Fallowfield in south Manchester, designed by Edward Walters. In 1854 he bought Stancliffe Hall in Darley Dale, Derbyshire, and moved there with his second wife Louisa in 1872. He supplied four six-ton blocks of stone from Darley Dale quarry for the lions of St George's Hall in Liverpool. He was granted Honorary Membership of the Institution of Engineers and Shipbuilders in Scotland in 1859. He was elected a Fellow of the Royal Society (FRS) in 1857. A strong believer in the value of technical education, Whitworth backed the new Mechanics' Institute in Manchester (later UMIST) and helped found the Manchester School of Design. In 1868, he founded the Whitworth Scholarship for the advancement of mechanical engineering. He donated a sum of £128,000 to the government in 1868 (approximately £6.5 million in 2010) to bring "science and industry" closer together and to fund scholarships. In 1869, Queen Victoria made Whitworth a baronet. Death In January 1887, at the age of 83, Sir Joseph Whitworth died in Monte Carlo, where he had travelled in the hope of improving his health. He was buried at St Helen's Church, Darley Dale, Derbyshire. A detailed obituary was published in the American magazine The Manufacturer and Builder. He directed his trustees to spend his fortune on philanthropic projects, which they still do to this day. Work Accuracy and standardisation Whitworth popularised the three-plates method for producing accurate flat surfaces (see Surface plate) during the 1830s, using engineer's blue and scraping techniques on three trial surfaces. Up until his introduction of the scraping technique, the same three-plate method was employed using polishing techniques, giving less accurate results. This led to an explosion of development of precision instruments using these flat-surface generation techniques as a basis for further construction of precise shapes. His next innovation, in 1840, was a measuring technique called "end measurements" that used a precision flat plane and measuring screw, both of his own invention. The system, with a precision of one millionth of an inch (25 nm), was demonstrated at the Great Exhibition of 1851. In 1841 Whitworth devised a standard for screw threads with a fixed thread angle of 55° and a standard pitch for a given diameter. This soon became the first nationally standardised system; its adoption by the railway companies, who until then had all used different screw threads, led to its widespread acceptance. It later became a British Standard, "British Standard Whitworth", abbreviated to BSW and governed by BS 84:1956. Whitworth rifled musket Whitworth was commissioned by the War Department of the British government to design a replacement for the calibre .577-inch Pattern 1853 Enfield, whose shortcomings had been revealed during the recent Crimean War. The Whitworth rifle had a smaller, hexagonal bore, fired an elongated hexagonal bullet and had a faster rate of twist in its rifling (one turn in twenty inches) than the Enfield, and its performance during tests in 1859 was superior to the Enfield's in every way. The test was reported in The Times on 23 April as a great success. 
However, the new bore design was found to be prone to fouling and it was four times more expensive to manufacture than the Enfield, so it was rejected by the British government, only to be adopted by the French Army. An unspecified number of Whitworth rifles found their way to the Confederate states in the American Civil War, where they were called "Whitworth Sharpshooters". The rifles were capable of sub-MOA groups at 500 yards. Often called the "sharpshooter" because of its accuracy, the rifle is considered one of the earliest examples of a sniper rifle. Queen Victoria opened the first meeting of the National Rifle Association at Wimbledon in 1860 by firing a Whitworth rifle from a fixed mechanical rest. The rifle scored a bull's eye at a range of . Whitworth rifled breech-loading artillery Whitworth also designed a large rifled breech-loading gun with a bore, a projectile and a range of about . The spirally-grooved projectile was patented in 1855. This was rejected by the British Army, who preferred the guns from Armstrong, but was used in the American Civil War. While trying to increase the bursting strength of his gun barrels, Whitworth patented a process called "fluid-compressed steel" for casting steel under pressure and built a new steel works near Manchester. Some of his castings were shown at the Great Exhibition in Paris. Legacy Scholarships One of the most prominent forms of his generosity was his development of the Whitworth Scholarships with the Institution of Mechanical Engineers. Still running to this day, this provides financial opportunities for young engineers with a strong blend of academic and practical abilities. The Whitworth Scholarship programmes still exist, with 10–15 scholarships being awarded each year. The scholarships are directed at outstanding engineers who, like Sir Joseph Whitworth, have excellent academic and practical skills and the qualities needed to succeed in industry, and who wish to embark, or have already embarked, on an engineering degree-level programme of any engineering discipline. As of 2018, the Scholarship pays up to £5,450 per year for up to four years in the case of a full-time undergraduate. The handling and administration of the awards is now carried out by the Institution of Mechanical Engineers. Since 2006, the trustees have also offered a Whitworth Senior Scholarship to support postgraduate research leading to an MPhil, PhD or EngD. Memorials Richard Copley Christie was a friend of Whitworth's. By Whitworth's will, Christie was appointed one of three legatees, each of whom was left more than half a million pounds for their own use, 'they being each of them aware of the objects' to which these funds would have been put by Whitworth. They chose to spend more than a fifth of the money on support for Owens College, together with the purchase of land now occupied by the Manchester Royal Infirmary. In 1897, Christie personally assigned more than £50,000 for the erection of the Whitworth Hall, to complete the front quadrangle of Owens College. He was president of the Whitworth Institute from 1890 to 1895 and was much interested in the medical and other charities of Manchester, especially the Cancer Pavilion and Home, of whose committee he was chairman from 1890 to 1893, and which later became the Christie Hospital. Part of his bequest was used to construct the Whitworth Institute in Darley Dale. 
The university's Whitworth Art Gallery (formerly the Whitworth Institute) and adjacent Whitworth Park were established as part of his bequest to Manchester after his death. Nearby Whitworth Park Halls of Residence also bears his name, as does Whitworth Street, one of the main streets in Manchester city centre, running from London Road to the south end of Deansgate. Near 'The Firs' a cycleway behind Owens Park is called Whitworth Lane. In Darley Dale is another Whitworth Park. In recognition of his achievements and contributions to education in Manchester, the Whitworth Building on the University of Manchester's Main Campus is named in his honour. Whitworth Society In 1923, the Whitworth Society was founded by Prof. Hele-Shaw FRS, then president of the Institution of Mechanical Engineers to support all Whitworth Scholars and to promote engineering in the UK. The Society brings together those Whitworth Scholars who have benefited from Sir Joseph Whitworth's generosity. References Citations Sources 1803 births 1887 deaths People from Stockport 19th-century English philanthropists American Civil War industrialists Baronets in the Baronetage of the United Kingdom English mechanical engineers Engineers from Lancashire English inventors Fellows of the Royal Society Firearm designers History of Greater Manchester Machine tool builders People associated with the University of Manchester People of the Industrial Revolution Bessemer Gold Medal 19th-century English businesspeople Fellows of the Royal Society of Arts
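The 55° thread form standardised by Whitworth lends itself to a short worked calculation. The sketch below derives the thread depth from the 55° flank angle stated above; the two-thirds truncation of the sharp-V height and the crest/root radius constant are the conventional published Whitworth proportions rather than figures given in this article, and the function name is illustrative.

```python
import math

def whitworth_form(pitch: float) -> dict:
    """Classic Whitworth (BSW) thread-form dimensions for a given pitch.

    The 55 degree included angle is from the standard itself; the two-thirds
    truncation and the ~0.1373 * pitch crest/root radius are the conventional
    published proportions for this form (assumed here, not stated in the text).
    """
    sharp_v_height = pitch / (2 * math.tan(math.radians(55 / 2)))
    return {
        "sharp_v_height": sharp_v_height,          # ~0.9605 * pitch
        "thread_depth": (2 / 3) * sharp_v_height,  # ~0.6403 * pitch
        "crest_root_radius": 0.1373 * pitch,
    }

# Example: 1/2-inch BSW uses 12 threads per inch, so pitch = 1/12 inch.
print(whitworth_form(1 / 12))
```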
Joseph Whitworth
Chemistry
2,403
27,304,163
https://en.wikipedia.org/wiki/Tryptic%20soy%20broth
Tryptic soy broth or Trypticase soy broth (frequently abbreviated as TSB) is used in microbiology laboratories as a culture broth to grow aerobic and facultatively anaerobic bacteria. It is a general-purpose medium that is routinely used to grow bacteria which tend to have high nutritional requirements (i.e., they are fastidious). Uses Sterility test medium in USP and EP, as well as for inocula preparation for CLSI standards. TSB is frequently used in commercial diagnostics in conjunction with the additive sodium thioglycolate, which promotes the growth of anaerobes. Preparation To prepare 1 liter of TSB, the following ingredients are dissolved under gentle heat. Adjustments to pH should be made using 1N HCl or 1N NaOH to reach a final target pH of 7.3 ± 0.2 at 25°C. The solution is then autoclaved for 15 minutes at 121°C. Tryptic soy broth contains per liter: 17 g pancreatic digest of casein 3 g peptic digest of soybean 5 g sodium chloride 2.5 g dipotassium phosphate (K2HPO4) 2.5 g glucose References Microbiological media
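Because the formulation above is specified per liter, preparing other batch sizes is straightforward proportional arithmetic. The following sketch scales the quantities listed in the article; the dictionary and function names are illustrative assumptions.

```python
# Per-liter TSB formulation from the article, in grams.
TSB_GRAMS_PER_LITER = {
    "pancreatic digest of casein": 17.0,
    "peptic digest of soybean": 3.0,
    "sodium chloride": 5.0,
    "dipotassium phosphate (K2HPO4)": 2.5,
    "glucose": 2.5,
}

def scale_tsb(volume_liters: float) -> dict:
    """Scale the per-liter TSB recipe to an arbitrary batch volume."""
    return {k: round(g * volume_liters, 3) for k, g in TSB_GRAMS_PER_LITER.items()}

# Example: a 250 mL batch needs 4.25 g casein digest, 0.625 g glucose, etc.
print(scale_tsb(0.25))
```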
Tryptic soy broth
Biology
256
73,032,884
https://en.wikipedia.org/wiki/Ganarake
Ganarake scalaris is an extinct species of lichen-like enigma, possibly within the division Mucoromycota, first informally described as enigmatic cap-carbonate tubestones from basal Ediacaran sediments of the Southern Californian Noonday Formation. Under a marine interpretation of the Noonday Formation, these tubestones were at first interpreted as water escape structures, gas escape structures or inverted stromatolites. However, associated paleosols, together with permineralized organic structures within the tubes consisting of hyphae, spheroidal cells attached to the tubes and the organization of a thallus, show remarkable similarities to the habitats and microstructures of lichens. Ganarake has an isotopic composition and size comparable with a chlorophyte alga. Etymology The first part of the genus name, Gan, honours paleontologist Tian Gan of the University of Maryland-College Park, who discovered similar Ediacaran fungi. The second half of the name, arake, is Greek for bowl. Scalaris refers to its ladder-like appearance, as it branches in both Y- and H-like forms. Discovery Although the tubestones from the formation are now regarded as being of biological origin, they were originally interpreted as fluid escape structures or unique inverted stromatolites. Evidence subsequently accumulated for a third option, lichenized fungi preserved in their growth positions, since the formation from which G. scalaris was described was compatible with lichen habitats. Various microscopic observations supported a fungal affinity, with a stratified thallus of four layers: 1. rectangular-cubic cells making up an upper cortex; 2. a layer of spheroidal cells punctured and enveloped by slender hyphae; 3. a medulla made up of the hyphae; and 4. a lower cortex a few cells thick, elaborated at intervals into multicellular rhizines extending down into the underlying sediment. Description A series of shallow, irregular cups are stacked on top of each other and are in diameter. The cups branch off from a wide central tube, interpreted as originally hollow because it is now filled with sparry dolomite. The cup-shaped flanges consist of foliose thalli of radially arranged septate hyphae that branch both pinnately and dichotomously and expand until they define the cups; these are in turn overgrown by oxalate and carbonate crystals. In thin section they are reminiscent of thin ropes. When the thalli are viewed in macerates, they are flattened and foliose. See also List of Ediacaran genera References Ediacaran Ediacaran life Fossils of California Fungus genera Death Valley Taxa described in 2022 Prehistoric fungi Lichen genera Prehistoric North America
Ganarake
Biology
565
5,033,416
https://en.wikipedia.org/wiki/Beehive%20burner
A wood waste burner, known as a teepee burner or wigwam burner in the United States and a beehive burner in Canada, is a free-standing conical steel structure usually ranging from 30 to 60 feet in height. They are named for their resemblance to beehives, teepees or wigwams. A sawdust burner is cylindrical. They have an opening at the top that is covered with a steel grill or mesh to keep sparks and glowing embers from escaping. Sawdust and wood scraps are delivered to an opening near the top of the cone by means of a conveyor belt or Archimedes' screw, where they fall onto the fire near the center of the structure. Teepee or beehive burners are used to dispose of waste wood in logging yards and sawdust from sawmills by incineration. As a result, they produce a large quantity of smoke and ash, which is vented directly into the atmosphere without filtering, contributing to poor air quality. The burners are considered to be a major source of air pollution and have been phased out in most areas. Teepee burners went out of general use in the Northwestern United States by the mid 1970s, and are prohibited from operation in Oregon, as well as southwestern Washington State. There are a few derelict beehive burners remaining in California, Oregon, Washington State and Western Canada. The majority of wood waste is now recycled and used as a component in various forest products, such as pellet fuel, particle board and mulch. Gallery See also Air pollution in British Columbia Clean Air Act of 1970 References External links Historic images of teepee burners in Oregon from the Salem, Oregon, Public Library Rusty Relics: Teepee Burners Environmental issues in Canada History of the Pacific Northwest Incineration Logging
Beehive burner
Chemistry,Engineering
371
22,135,856
https://en.wikipedia.org/wiki/EchoStar%20V
EchoStar V was a communications satellite built by Space Systems/Loral based in Palo Alto, CA and operated by EchoStar. Launched in 1999 it was operated in geostationary orbit at a longitude of 148 degrees west. EchoStar V was used for direct-to-home television broadcasting services. Satellite The launch of EchoStar V made use of an Atlas rocket flying from Launch Complex 36 at the Cape Canaveral Air Force Station, United States. The launch took place at 06:02 UTC on September 23, 1999, with the spacecraft entering a geosynchronous transfer orbit. Specifications Launch mass: Power: 2 deployable solar arrays, batteries Stabilization: 3-axis Longitude: 148° West See also 1999 in spaceflight References Spacecraft launched in 1999 Communications satellites in geostationary orbit Satellites using the SSL 1300 bus E05
EchoStar V
Astronomy
172
13,441,603
https://en.wikipedia.org/wiki/Situation%2C%20task%2C%20action%2C%20result
The situation, task, action, result (STAR) format is a technique used by interviewers to gather all the relevant information about a specific capability that the job requires. Situation: The interviewer wants you to present a recent challenging situation in which you found yourself. Task: What were you required to achieve? The interviewer will be looking to see what you were trying to achieve from the situation. Some performance development methods use “Target” rather than “Task”. Job interview candidates who describe a “Target” they set themselves instead of an externally imposed “Task” emphasize their own intrinsic motivation to perform and to develop their performance. Action: What did you do? The interviewer will be looking for information on what you did, why you did it and what the alternatives were. Result: What was the outcome of your actions? What did you achieve through your actions? Did you meet your objective? What did you learn from this experience? Have you used this learning since? The STAR technique is similar to the SOARA technique (Situation, Objective, Action, Result, Aftermath). The STAR technique is also often complemented with an additional R on the end, STARR or STAR(R), with the last R standing for reflection. This R aims to gather insight into the interviewee's ability to learn and iterate. Whereas STAR reveals how, and what kind of, result was achieved against an objective, STARR with the additional R helps the interviewer to understand what the interviewee learned from the experience and how they would assimilate those experiences. The interviewee can define what they would do (differently, the same, or better) the next time they are posed with a similar situation. Common questions that the STAR technique can be applied to include conflict management, time management, problem solving and interpersonal skills. References External links The ‘STAR’ technique to answer behavioral interview questions The STAR method explained Job interview Logical consequence Schedule (project management)
Situation, task, action, result
Physics
388
55,022,692
https://en.wikipedia.org/wiki/Frantz%20Yvelin
Frantz Yvelin is a French businessman, pilot, and serial entrepreneur. He was the President of Aigle Azur, France's second-largest airline, until August 26, 2019. Yvelin previously created and ran two independent French scheduled airlines (La Compagnie and L'Avion). Early life and education A commercial pilot since the age of 21, Yvelin is type-rated on the Airbus A320, Boeing 737, Boeing 757, Boeing 767, Cessna Citation and McDonnell Douglas MD80. Career Yvelin started his career as an IT consultant (for GFI Informatique and CS Communication & Systèmes). In 2006 he founded and ran Europe's first all-business-class airline, L'Avion, before selling it to British Airways. (In 2009, L'Avion became OpenSkies and has since operated under that brand.) Yvelin was Head of Strategy and Development for OpenSkies for a time after it was merged with L'Avion. In 2013, alongside La Compagnie, Yvelin created a French holding company called Dreamjet Participations, which he ran as President and CEO until the end of 2016. Dreamjet Participations acquired 100% of the French leisure airline XL Airways in 2016. Along with Peter Luethi, La Compagnie's co-founder, he served as an air transport advisor for three years and was a lecturer in air transportation economics at the École nationale de l'aviation civile (teaching the Mastère spécialisé course). In parallel, he has helped to develop an airliner ferry and flight-testing company based in the USA. Notes Living people French chief executives French airline chief executives École nationale de l'aviation civile 1976 births
Frantz Yvelin
Engineering
366
2,483,251
https://en.wikipedia.org/wiki/Methyl%20red
Methyl red (2-(N,N-dimethyl-4-aminophenyl) azobenzenecarboxylic acid), also called C.I. Acid Red 2, is an indicator dye that turns red in acidic solutions. It is an azo dye, and is a dark red crystalline powder. Methyl red is a pH indicator; it is red at pH under 4.4, yellow at pH over 6.2, and orange in between, with a pKa of 5.1. Murexide and methyl red have been investigated as promising enhancers of sonochemical destruction of chlorinated hydrocarbon pollutants. Methyl red is classed by the IARC in group 3 – not classifiable as to carcinogenic potential in humans. Preparation As an azo dye, methyl red may be prepared by diazotization of anthranilic acid, followed by reaction with dimethylaniline. Properties The color of methyl red is pH dependent, because protonation causes it to adopt a hydrazone/quinone structure. Methyl red has a special use in histopathology for showing the acidic nature of tissue and the presence of organisms with acidic cell walls. Methyl red is detectably fluorescent in 1:1 water:methanol (pH 7.0), with an emission maximum at 375 nm (UVA) upon excitation with 310 nm light (UVB). Methyl red test In microbiology, methyl red is used in the methyl red test (MR test), used to identify bacteria producing stable acids by mechanisms of mixed acid fermentation of glucose (cf. Voges–Proskauer test). The MR test, the "M" portion of the four IMViC tests, is used to identify enteric bacteria based on their pattern of glucose metabolism. All enterics initially produce pyruvic acid from glucose metabolism. Some enterics subsequently use the mixed acid pathway to metabolize pyruvic acid to other acids, such as lactic, acetic, and formic acids. These bacteria are called methyl red-positive and include Escherichia coli and Proteus vulgaris. Other enterics subsequently use the butylene glycol pathway to metabolize pyruvic acid to neutral end products. These bacteria are called methyl red-negative and include Serratia marcescens and Enterobacter aerogenes. Process A tube filled with a glucose phosphate broth is inoculated with a sterile transfer loop. The tube is incubated for 2–5 days. After incubation, 2.5 ml of the medium are transferred to another tube. Five drops of the pH indicator methyl red are added to this tube. The tube is gently rolled between the palms to disperse the methyl red. Expected results Enterics that subsequently metabolize pyruvic acid to other acids lower the pH of the medium to 4.2. At this pH, methyl red turns red, a positive test. Enterics that subsequently metabolize pyruvic acid to neutral end products lower the pH of the medium to only 6.0. At this pH, methyl red is yellow, a negative test. See also Methyl Universal Indicator Tashiro's indicator pH indicators Methyl yellow Methyl orange Methyl violet References "Microbiology, A Photographic Atlas for the Laboratory", Alexander, Street, Pearson Education, 2001. External links Nile Chemicals -- Methyl Red A site showing some extra information on methyl red. Synthesis of methyl red IARC Group 3 carcinogens PH indicators Azo dyes Anthranilic acids Dimethylamino compounds
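The transition range quoted above (red below pH 4.4, yellow above pH 6.2, orange in between) maps directly onto a small lookup, which also captures how the MR test result is read. The function below is an illustrative sketch; its name and string return values are assumptions.

```python
def methyl_red_color(ph: float) -> str:
    """Approximate color of methyl red at a given pH, per the thresholds above."""
    if ph < 4.4:
        return "red"
    if ph > 6.2:
        return "yellow"
    return "orange"

# A positive MR test drops the medium to about pH 4.2, reading red;
# well above the transition range the indicator reads yellow.
print(methyl_red_color(4.2), methyl_red_color(7.0))  # red yellow
```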
Methyl red
Chemistry,Materials_science
744
72,302,941
https://en.wikipedia.org/wiki/Hanseniaspora%20osmophila
Hanseniaspora osmophila is a species of yeast in the family Saccharomycetaceae. It is found in soil and among the bark, leaves, and fruits of plants, as well as fermented foods and beverages made from fruit. Taxonomy Albert Klöcker originally published descriptions of two yeasts in the anamorphic form in 1912: Pseudosaccharomyces corticis, which he isolated on various trees around Copenhagen, and Pseudosaccharomyces santacruzensis, which he obtained from soil in Saint Croix. In 1920, Giuseppe de Rossi isolated a species of yeast from grapes and grape must in Umbria, Italy. He placed it in the same genus, assigning the name Pseudosaccharomyces magnus. Because the Pseudosaccharomyces name had already been used since 1906 for an unrelated organism, in 1923, Alexander Janke proposed an alternative name, Klöckeria, for the genus, which he corrected in 1928 to Kloeckera. Independently, in 1932, C. J. G. Niehaus described two species of yeasts that possessed spherical ascospores in their holomorphic state. This spherical shape was different from Klöcker's description of the ascospores of the Hanseniaspora genus. Niehaus created a new genus, Kloeckeraspora, which was similar to Hanseniaspora except for the shape of the ascospores. He called one of the new species Kloeckeraspora osmophila, and the other was Kloeckeraspora uvarum. The creation of the new genus was controversial among researchers who disagreed that the number and shape of ascospores was enough of a defining characteristic for a new genus, and in 1948, Emil M. Mrak and Herman Phaff proposed that a slight modification of the Hanseniaspora genus would allow the combination of the two genera. In their study of samples of the species, Jacomina Lodder and N.J.W. Kreger-Van Rij could not find any ascospores in Kloeckeraspora osmophila, so they provisionally reclassified it as Kloeckera magna in 1952, but Shehata et al. were able to produce abundantly sporulating strains in their laboratory, and preferred to include the yeast in the Hanseniaspora genus, reclassifying both of the species identified by Niehaus as synonyms of H. uvarum in 1955. The next year, H.J. Phaff, M.W. Miller, and M. Shifrine determined that the strains were different species, since K. osmophila had the ability to assimilate maltose, but H. uvarum could not, and therefore proposed that the strains originally defined as Kloeckeraspora osmophila be named Hanseniaspora osmophila. In 1958, Miller and Phaff studied yeast species of the Hanseniaspora and Kloeckera genera and concluded that Kloeckera magna and Kloeckera corticis were the same species, with K. corticis taking name priority, and determined that it was the anamorphic form of Hanseniaspora osmophila. DNA testing by S.A. Meyer in 1978 conclusively synonymized the anamorphic yeasts in the Kloeckera genus with their teleomorphic counterparts in the Hanseniaspora genus, and recategorized Kloeckera corticis as a synonym of Hanseniaspora osmophila. The testing also determined that Kloeckera santacruzensis was the same species as Hanseniaspora osmophila. Description Microscopic examination of the yeast cells in YM liquid medium after 48 hours at 25°C reveals cells that are 3.5 to 6 μm by 7.2 to 18.2 μm in size, apiculate, ovoid or long-ovoid, appearing singly or in pairs. Reproduction is by budding, which occurs at both poles of the cell. In broth culture, sediment is present, and after one month a thin ring is formed. 
Colonies that are grown on malt agar for one month at 25°C appear white to cream-colored, glossy, and smooth. Growth is flat on the edges and raised at the center. The yeast forms branched pseudohyphae on potato agar. The yeast has been observed to form one or two spherical and warty ascospores when grown for at least one week on 5% Difco malt extract agar, and the ascospores are not released from the ascus. The yeast can ferment glucose, but not sucrose, galactose, maltose, lactose, raffinose or trehalose. The yeast can assimilate glucose, cellobiose, and salicin. Assimilation of sucrose and maltose is variable. It has a positive growth rate at 30°C, but no growth at 34°C. It cannot grow on agar media containing 0.1% cycloheximide and cannot utilize 2-keto-d-gluconate as a sole source of carbon. Ecology The species has been identified from locations worldwide, mainly on the bark, flowers, or fruit of plants, or in soil. It has also been found in fermented foods and beverages made from fruit, including wine and vinegar. Apart from unwanted spoilage, this yeast is also present in the fermentation of traditional Italian balsamic vinegar (Zygosaccharomyces rouxii together with Zygosaccharomyces bailii, Z. pseudorouxii, Z. mellis, Z. bisporus, Z. lentus, Hanseniaspora valbyensis, Hanseniaspora osmophila, Candida lactis-condensi, Candida stellata, Saccharomycodes ludwigii, Saccharomyces cerevisiae). Effects on wine production A study of the fermentation characteristics of H. osmophila in wine must found that it shares many of the characteristics of Saccharomycodes ludwigii, a spoilage yeast that has been referred to as the "winemaker's nightmare" due to its ability to outcompete targeted fermentation yeasts. In the study, H. osmophila preferentially fermented glucose, followed by fructose, and was able to tolerate an alcohol level of up to 11.2% at 15°C. Due to the production of acetic acid, acetaldehyde, ethyl acetate, and acetoin at concentrations above the taste threshold and the lack of inhibition of growth and fermentation rate with the use of sulfur dioxide, the study concluded that the presence of H. osmophila should be considered detrimental to wine production. References Saccharomycetes Yeasts Fungi described in 1932 Cosmopolitan species Fungus species
Hanseniaspora osmophila
Biology
1,460
46,268,228
https://en.wikipedia.org/wiki/Praktica%20IV
The Praktica IV is a 35mm SLR with M42 thread mount that was launched by Kamera-Werkstätten (KW) in 1959. The Praktica IV was based on the Praktina FX, and was actually the first Praktica to have a fixed pentaprism. It was the last model marketed by KW, before the company was bought by Pentacon. Single-lens reflex cameras Products introduced in 1959 Praktica cameras
Praktica IV
Technology
98
2,514,833
https://en.wikipedia.org/wiki/Phenylacetone
Phenylacetone, also known as phenyl-2-propanone, is an organic compound with the chemical formula C6H5CH2COCH3. It is a colorless oil that is soluble in organic solvents. It is a mono-substituted benzene derivative, consisting of an acetone attached to a phenyl group. As such, its systematic IUPAC name is 1-phenyl-2-propanone. This substance is used in the manufacture of methamphetamine and amphetamine, where it is commonly known as P2P. Due to illicit drug labs using phenylacetone to make amphetamines, phenylacetone was declared a schedule II controlled substance in the United States in 1980. In humans, phenylacetone occurs as a metabolite of amphetamine and methamphetamine via FMO3-mediated oxidative deamination. Synthesis There are many routes to synthesize phenylacetone. Industry uses the gas-phase ketonic decarboxylation of phenylacetic acid using acetic acid over a ceria-alumina solid acid catalyst. A related laboratory-scale reaction has been described. An alternative route is zeolite-catalyzed isomerization of phenylpropylene oxide. Another laboratory synthesis involves conventional routes including the Friedel-Crafts alkylation reaction of chloroacetone with benzene in the presence of aluminum chloride catalyst. Amphetamine metabolism Phenylacetone is an intermediate in the biodegradation of amphetamine. In the human liver, flavin-containing monooxygenase 3 (FMO3) deaminates amphetamines into phenylacetone, which is non-toxic to humans. Phenylacetone is oxidized to benzoic acid, which is converted to hippuric acid by glycine N-acyltransferase (GLYAT) enzymes prior to excretion. Phenylacetone can undergo para-hydroxylation to 4-hydroxyphenylacetone, which occurs as a metabolite of amphetamine in the human body. Regulation and culture To prevent illicit synthesis of amphetamines from phenylacetone, the precursor phenylacetic acid is subject to regulation in the United States under the Chemical Diversion and Trafficking Act. In the TV series Breaking Bad, Walter White manufactures methamphetamine using phenylacetone and methylamine through a reductive amination reaction. White produced phenylacetone in a tube furnace using phenylacetic acid and acetic acid. See also MDP2P – related compound with a methylenedioxy group, and a precursor to MDMA. Cyclohexylacetone – the cyclohexane derivative of phenylacetone Phenylacetones Methamphetamine Controlled Substances Act Notes References Amphetamine Ketones Phenyl compounds Benzyl compounds Recreational drug metabolites Human drug metabolites
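The Friedel-Crafts route named above can be written as a balanced equation. This is a standard textbook rendering of that reaction, added here for clarity:

$$\mathrm{C_6H_6 + ClCH_2COCH_3 \;\xrightarrow{\;AlCl_3\;}\; C_6H_5CH_2COCH_3 + HCl}$$

The aluminum chloride acts as the Lewis acid catalyst, and one equivalent of hydrogen chloride is released as the byproduct.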
Phenylacetone
Chemistry
648
49,215,995
https://en.wikipedia.org/wiki/Russula%20maculata
Russula maculata is a species of mushroom in the genus Russula. References External links maculata Fungi described in 1878 Fungi of Europe Fungus species
Russula maculata
Biology
31
69,015,860
https://en.wikipedia.org/wiki/Brendel%E2%80%93Bormann%20oscillator%20model
The Brendel–Bormann oscillator model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit the complex refractive index of materials with absorption lineshapes exhibiting non-Lorentzian broadening, such as metals and amorphous insulators, across broad spectral ranges, typically near-ultraviolet, visible, and infrared frequencies. The dispersion relation bears the names of R. Brendel and D. Bormann, who derived the model in 1992, despite first being applied to optical constants in the literature by Andrei M. Efimov and E. G. Makarova in 1983. Around that time, several other researchers also independently discovered the model. The Brendel–Bormann oscillator model is unphysical because it does not satisfy the Kramers–Kronig relations. The model is non-causal, due to a singularity at zero frequency, and non-Hermitian. These drawbacks inspired J. Orosco and C. F. M. Coimbra to develop a similar, causal oscillator model. Mathematical formulation The general form of an oscillator model is given by

$$\varepsilon_r(\omega) = \varepsilon_{\infty} + \sum_{j} \chi_j(\omega)$$

where $\varepsilon_r$ is the relative permittivity, $\varepsilon_{\infty}$ is the value of the relative permittivity at infinite frequency, $\omega$ is the angular frequency, and $\chi_j$ is the contribution from the $j$th absorption mechanism oscillator. The Brendel–Bormann oscillator $\chi_j^{BB}$ is related to the Lorentzian oscillator $\chi_j^{Lor}$ and Gaussian oscillator $\chi_j^{G}$, given by

$$\chi_j^{Lor}(x;\omega) = \frac{s_j}{x^2 - \omega^2 - i\Gamma_j\omega}, \qquad \chi_j^{G}(x) = \frac{1}{\sqrt{2\pi}\,\sigma_j}\exp\!\left[-\left(\frac{x-\omega_{0j}}{\sqrt{2}\,\sigma_j}\right)^{2}\right]$$

where $s_j$ is the Lorentzian strength of the $j$th oscillator, $\omega_{0j}$ is the Lorentzian resonant frequency of the $j$th oscillator, $\Gamma_j$ is the Lorentzian broadening of the $j$th oscillator, and $\sigma_j$ is the Gaussian broadening of the $j$th oscillator. The Brendel–Bormann oscillator is obtained from the convolution of the two aforementioned oscillators in the manner of

$$\chi_j^{BB}(\omega) = \int_{-\infty}^{\infty} \chi_j^{G}(x)\,\chi_j^{Lor}(x;\omega)\,dx$$

which yields

$$\chi_j^{BB}(\omega) = \frac{i\sqrt{\pi}\,s_j}{2\sqrt{2}\,a_j\sigma_j}\left[w\!\left(\frac{a_j-\omega_{0j}}{\sqrt{2}\,\sigma_j}\right) + w\!\left(\frac{a_j+\omega_{0j}}{\sqrt{2}\,\sigma_j}\right)\right]$$

where $w$ is the Faddeeva function and $a_j = \sqrt{\omega^2 + i\Gamma_j\omega}$. The square root in the definition of $a_j$ must be taken such that its imaginary component is positive. This is achieved by:

$$\operatorname{Re}(a_j) = \omega\sqrt{\frac{\sqrt{1+(\Gamma_j/\omega)^2}+1}{2}}, \qquad \operatorname{Im}(a_j) = \omega\sqrt{\frac{\sqrt{1+(\Gamma_j/\omega)^2}-1}{2}}$$

References See also Cauchy equation Sellmeier equation Forouhi–Bloomer model Tauc–Lorentz model Lorentz oscillator model Condensed matter physics Electric and magnetic fields in matter Optics
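The closed form above can be evaluated numerically, since the Faddeeva function is available in SciPy as scipy.special.wofz. The sketch below is a direct transcription of the equations as reconstructed here, with arbitrary illustrative parameters rather than a fit to any real material.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def chi_bb(omega, s, omega0, gamma, sigma):
    """Brendel-Bormann susceptibility of a single oscillator.

    omega, omega0, gamma, sigma share the same (arbitrary) frequency units.
    """
    omega = np.asarray(omega, dtype=complex)
    a = np.sqrt(omega**2 + 1j * gamma * omega)
    a = np.where(a.imag < 0, -a, a)  # take the root with positive imaginary part
    pref = 1j * np.sqrt(np.pi) * s / (2 * np.sqrt(2) * a * sigma)
    return pref * (wofz((a - omega0) / (np.sqrt(2) * sigma))
                   + wofz((a + omega0) / (np.sqrt(2) * sigma)))

def eps_bb(omega, eps_inf, oscillators):
    """Relative permittivity: eps_inf plus a sum of BB oscillator terms."""
    return eps_inf + sum(chi_bb(omega, *p) for p in oscillators)

# Illustrative parameters (s, omega0, gamma, sigma), not a fit to a real material.
w = np.linspace(0.5, 5.0, 5)  # avoid omega = 0, where the model is singular
print(eps_bb(w, eps_inf=1.0, oscillators=[(4.0, 2.0, 0.1, 0.2)]))
```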
Brendel–Bormann oscillator model
Physics,Chemistry,Materials_science,Engineering
513
1,365,611
https://en.wikipedia.org/wiki/Frank%20Ostrowski
Frank Ostrowski (1960 – 2011) was a German programmer best known for his implementations of the BASIC programming language. After his time with the German Federal Armed Forces, Frank Ostrowski was unemployed for three years. During this time, he developed Turbo-Basic XL for the Atari 8-bit computers. It was published in the German-language magazine Happy Computer in December 1985 as the Listing of the Month. Turbo-Basic XL is both much faster than the standard Atari BASIC and has more features. He soon got a job with GFA Systemtechnik GmbH (at the time known as Integral Hydraulik), where he wrote GFA BASIC for the Atari ST, which became one of the more popular BASICs on that platform. Frank Ostrowski died in 2011 after a severe illness. References German computer programmers Atari 2011 deaths 1960 births BASIC programming language
Frank Ostrowski
Technology
170
72,695,599
https://en.wikipedia.org/wiki/Danica%20Galoni%C4%87%20Fujimori
Danica Galonić Fujimori is a Serbian-American chemical biologist who is a professor at the University of California, San Francisco. Her research considers nucleic acid synthesis and tissue engineering. In the search for new therapeutics and vaccines, she has studied the interactions between ribosomes and SARS-CoV-2. Early life and education Galonić Fujimori earned her undergraduate degree at the University of Belgrade. She moved to the University of Illinois Urbana-Champaign for her doctoral research, where she earned a PhD in biochemistry. Her research considered the development of two strategies for site-selective peptide modification. She then moved to Harvard Medical School, where she worked alongside Christopher T. Walsh. Research and career Galonić Fujimori has studied various biological processes, including chromatin formation, transcriptional regulation and DNA repair. Methylation impacts the regulation of biological processes, and the deregulation of methylation is associated with various diseases. As such, understanding and exploiting enzymatic regulation of methylation could provide an opportunity for therapeutic intervention. She has studied Jumonji domain-containing histone-lysine demethylases, complex proteins which catalyze the removal of methylation marks on the lysine residues of multiple histones and can contain chromatin reader domains. These reader domains interact with chromatin, an interaction which is modulated by chromatin modifications. To probe the cellular function of the Jumonji family, the Galonić Fujimori laboratory develops small-molecule inhibitors. She proposes that these molecules can be used to inhibit the aberrant demethylation that occurs in certain diseases. She has investigated the methylation of RNA, and how this impacts the cellular function of RNA. Fujimori investigates how bacteria acquire resistance to antibiotics. She has focused her efforts on antibiotics that target the ribosome of bacteria, which is involved with protein synthesis. Antibiotics such as linezolid bind to sites such as the peptidyl transferase center, blocking protein biosynthesis. During the COVID-19 pandemic, Galonić Fujimori started working on virus-host interactions in response to SARS-CoV-2. She showed that bromodomain and extraterminal (BET) proteins were involved in the body's response to COVID-19 infection. She started working on pharmaceuticals to tackle future pandemics. Awards and honors 2011 National Science Foundation CAREER Award 2015 University of California, San Francisco Haile T. Debas Academy of Medical Educators Excellence in Teaching Award 2015 University of California, Berkeley Sackler Sabbatical Exchange Program 2017 University of California, San Francisco Byers Award Lecture in Basic Sciences 2020 Keck Foundation WM Keck Medical Research Award 2020 Bowes Biomedical Investigator Award Selected publications References Year of birth missing (living people) Place of birth missing (living people) University of California, San Francisco faculty Harvard Medical School people University of Belgrade alumni University of Illinois Urbana-Champaign alumni 21st-century American biochemists Serbian emigrants to the United States Women biochemists 21st-century American women scientists Living people
Danica Galonić Fujimori
Chemistry
627
71,554,903
https://en.wikipedia.org/wiki/Chuanyi%20Wang
Chuanyi Wang is a Chinese-American environmental chemistry scientist, academic, and author. He is a Distinguished Professor and Academic Dean at the School of Environmental Science and Engineering at the Shaanxi University of Science & Technology. He is recognized for his research in environmental photocatalysis, environmental materials, surface/interface chemistry, nanomaterials, and pollution control. Wang is the author and editor of two books, Recent Research Developments in Physical Chemistry: Surfaces And Interfaces of Nanostructured Systems and Encyclopedia of Surface and Colloid Science. Wang is a Fellow of the Royal Society of Chemistry and of the International Association of Advanced Materials (IAAM). Education Born in China on July 25, 1966, Wang graduated with undergraduate diplomas in chemistry from Yancheng Teachers University in 1986 and Soochow University in 1991. He completed his PhD in 1998 at the Technical Institute of Physics and Chemistry, Chinese Academy of Sciences. Career After completing his PhD in 1998, Wang held an Alexander von Humboldt Research Fellowship at the Free University Berlin and the Institute for Solar Energy Research in Germany from 1999 to 2000. Between 2001 and 2006, he held the appointment of Research Associate and post-doctoral Research Associate at Tufts University. Following this appointment, he occupied the position of Research Assistant Professor at the University of Missouri-Kansas City for two years. From 2008 to 2009, he served the University of Missouri-Kansas City as Adjunct PhD Faculty. From 2010 to 2017, he served as a Distinguished Professor of the Chinese Academy of Sciences (CAS). He has held the appointment of Honorary Professor at Wuhan University since 2014 and of Visiting Scientist at Tufts since 2019. He holds an appointment as a Distinguished Professor in the department of Environmental Science and Engineering at Shaanxi University of Science & Technology. As of 2021, Wang is serving as Academic Dean of the School of Environmental Science and Engineering at Shaanxi University of Science & Technology. He served as Director of the Laboratory of Environmental Sciences and Technology, XJIPC, and Vice-Director of the Key Laboratory of Functional Materials & Devices for Special Environments of CAS. Research Wang has authored more than 270 publications. Wang's research work spans environmental remediation, eco-materials, surface/interface chemistry, and catalysis focused on nanosized metals and semiconductors. Photocatalysis Wang's research on photocatalysis is significant in reducing contaminants. He studied selective photocatalytic N2 fixation induced by nitrogen vacancies, indicating that nitrogen vacancies (NVs) in graphitic carbon nitride (g-C3N4) support and improve photocatalytic N2 fixation. Wang's research work also focuses on the performance of nanostructured TiO2 particles. He conducted a comparative study that aimed to characterize the performance of TiO2 particles created in three different ways. The study concluded that TiO2 nanoparticles prepared from organic precursors demonstrated increased photocatalytic activity. Based on this method, Wang developed a way to uniformly distribute doped species, such as metal ions, in the semiconductor photocatalyst matrix. Wang also presented an in-depth view of the effectiveness of photocatalytic H2O2 production in the presence of carbon vacancies. 
The findings suggested that photocatalytic H2O2 production at graphitic carbon nitride (g-C3N4) can increase by 14 times in the presence of carbon vacancies. He also studied the role of oxygen vacancies in the photocatalytic removal of NO under visible light. The study demonstrated that oxygen vacancies can support selective photoreduction of NO to N2 and hinder the production of more toxic nitrogen dioxide. Pollution control Wang's research characterized the importance of heavy metal adsorption by clay minerals. In a study conducted in 2019, he highlighted the primary adsorption mechanisms of clay minerals such as halloysite, bentonite, and attapulgite. This study shows how wastewater contamination can be tackled using clay mineral adsorbents. Wang also focused his research on the removal of microplastics from the environment. In a recent study, he reviewed removal methods and mechanisms, the advantages of the efficient methods, and the disadvantages of many microplastics removal methods. Nanoparticles Wang has carried out extensive research on nanoparticles and their implications for the environment. He formulated and characterized chitosan–poly(vinyl alcohol)/bentonite nanocomposites. The study of adsorption of Hg(II) ions revealed that the nanocomposites have a high adsorption capacity for mercury ions and good adsorption selectivity. Wang also reviewed the interaction between silver nanoparticles and other nanoparticles. Discarded into the aquatic environment via waste or intentional release, silver nanoparticles can lead to adverse effects on aquatic life. His study revealed that titanium oxide nanoparticles help reduce the toxicity and dissolution of silver nanoparticles. Surface/interface chemistry Wang conducted a surface chemistry study of the typical photocatalytic material TiO2 by means of second-order nonlinear laser spectroscopy, clarifying the distribution characteristics of hydroxyl groups on the surface of TiO2, and the properties of the probe molecules methanol and acetic acid, as well as their adsorption modes and competitive adsorption with water molecules. Awards and honors 1998 – Humboldt Research Fellowship, Alexander von Humboldt Foundation 1998 – Excellent Prize, President Scholarship of Chinese Academy of Sciences 2011 – Science and Technology Award, Chinese Materials Research Society 2014 – Tianshan Award of China, Government of Xinjiang Uygur Autonomous Region 2016 – China's Overseas Chinese Community Contribution Award (Innovative Talents), China Association for Science and Technology (CAST) 2018 – Fellow of the Royal Society of Chemistry (FRSC) 2020 – Named in the top 2% of the most influential scientists in the world in their scientific career, 2021 (Physical Chemistry, #169 in 2020), Stanford University 2020 – IAAM Scientist Award, International Association of Advanced Materials 2022 – Fellow of the International Association of Advanced Materials 2022 – Named in the top 2% of the most influential scientists in the world in their scientific career, 2022 (Physical Chemistry, #87 in 2021), Stanford University Bibliography Books/chapters Encyclopedia of Surface and Colloid Science, Third Edition (2002) Recent Research Developments in Physical Chemistry: Surfaces And Interfaces of Nanostructured Systems (2017) Selected articles Wang, C. Y., Bahnemann, D. W., & Dohrmann, J. K. (2000). 
A novel preparation of iron-doped TiO2 nanoparticles with enhanced photocatalytic activity. Chemical Communications, (16), 1539–1540. Wang, C. Y., Böttcher, C., Bahnemann, D. W., & Dohrmann, J. K. (2003). A comparative study of nanometer sized Fe (III)-doped TiO2 photocatalysts: synthesis, characterization and activity. Journal of Materials Chemistry, 13(9), 2322–2329. Chen, S., Slattum, P., Wang, C., & Zang, L. (2015). Self-assembly of perylene imide molecules into 1D nanostructures: methods, morphologies, and applications. Chemical reviews, 115(21), 11967-11998. Dong, G., Ho, W., & Wang, C. (2015). Selective photocatalytic N2 fixation dependent on gC3N4 induced by nitrogen vacancies. Journal of Materials Chemistry A, 3(46), 23435-23441. Li, S., Dong, G., Hailili, R., Yang, L., Li, Y., Wang, F., ... & Wang, C. (2016). Effective photocatalytic H2O2 production under visible light irradiation at g-C3N4 modulated by carbon vacancies. Applied Catalysis B: Environmental, 190, 26–35. References Living people 1966 births Environmental scientists Scientists from Jiangsu Soochow University (Suzhou) alumni Academic staff of Shaanxi University of Science and Technology University of Missouri–Kansas City faculty People's Republic of China emigrants to the United States Fellows of the Royal Society of Chemistry 21st-century American chemists
Chuanyi Wang
Environmental_science
1,766
35,766,427
https://en.wikipedia.org/wiki/Mechanical%20metamaterial
Mechanical metamaterials are rationally designed artificial materials/structures of precision geometrical arrangements leading to unusual physical and mechanical properties. These unprecedented properties are often derived from their unique internal structures rather than the materials from which they are made. Inspiration for mechanical metamaterials design often comes from biological materials (such as honeycombs and cells), from molecular and crystalline unit cell structures as well as the artistic fields of origami and kirigami. While early mechanical metamaterials had regular repeats of simple unit cell structures, increasingly complex units and architectures are now being explored. Mechanical metamaterials can be seen as a counterpart to the rather well-known family of optical metamaterials and electromagnetic metamaterials. Mechanical properties, including elasticity, viscoelasticity, and thermoelasticity, are central to the design of mechanical metamaterials. They are often also referred to as elastic metamaterials or elastodynamic metamaterials. Their mechanical properties can be designed to have values that cannot be found in nature, such as negative stiffness, negative Poisson’s ratio, negative compressibility, and vanishing shear modulus. Classical mechanical metamaterials 3D printing, or additive manufacturing, has revolutionized the field in the past decade by enabling the fabrication of intricate mechanical metamaterial structures. Some of the unprecedented and unusual properties of classical mechanical metamaterials include: Negative Poisson's ratio (auxetics) Poisson's ratio defines how a material expands (or contracts) transversely when being compressed longitudinally. While most natural materials have a positive Poisson's ratio (coinciding with our intuitive idea that by compressing a material, it must expand in the orthogonal direction), a family of extreme materials known as auxetic materials can exhibit Poisson's ratios below zero. Examples of these can be found in nature, or fabricated, and often consist of a low-volume microstructure that grants the extreme properties. Simple designs of composites possessing negative Poisson's ratio (inverted hexagonal periodicity cell) were published in 1985. In addition, certain origami folds such as the Miura fold and, in general, zigzag-based folds are also known to exhibit negative Poisson's ratio. Negative stiffness Negative stiffness (NS) mechanical metamaterials are engineered structures that exhibit a counterintuitive property: as an external force is applied, the material deforms in a way that reduces the applied force rather than increasing it. This is in contrast to conventional materials that resist deformation. NS metamaterials are typically constructed from periodically arranged elements that undergo elastic instability under load. This instability leads to a negative stiffness behavior within a specific deformation range. The overall effect is a material that can absorb energy more efficiently and exhibit unique mechanical properties compared to traditional materials. Negative thermal expansion These mechanical metamaterials can exhibit coefficients of thermal expansion larger than that of either constituent. The expansion can be arbitrarily large positive or arbitrarily large negative, or zero. These materials substantially exceed the bounds for thermal expansion of a two-phase composite. They contain considerable void space. 
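Poisson's ratio, the quantity at the heart of the auxetics discussion above, is the negative ratio of transverse to axial strain, so classifying a tensile-test sample as auxetic is a one-line computation. The sketch below is purely illustrative; the function names and strain values are assumptions.

```python
def poisson_ratio(axial_strain: float, transverse_strain: float) -> float:
    """nu = -(transverse strain) / (axial strain)."""
    return -transverse_strain / axial_strain

def is_auxetic(axial_strain: float, transverse_strain: float) -> bool:
    """Auxetic materials have nu < 0: they expand sideways when stretched."""
    return poisson_ratio(axial_strain, transverse_strain) < 0

# Conventional sample: stretched 1% axially, it contracts 0.3% transversely.
print(poisson_ratio(0.01, -0.003))   # 0.3
# Auxetic sample: stretched 1% axially, it expands 0.2% transversely.
print(poisson_ratio(0.01, 0.002))    # -0.2
print(is_auxetic(0.01, 0.002))       # True
```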
High strength-to-density ratio A high strength-to-density ratio mechanical metamaterial is a synthetic material engineered to possess exceptional mechanical properties relative to its weight. This is achieved through carefully designed internal microstructures, often periodic or hierarchical, which contribute to the material's overall performance. Negative compressibility In a closed thermodynamic system in equilibrium, both the longitudinal and volumetric compressibility are necessarily non-negative because of stability constraints. For this reason, when tensioned, ordinary materials expand along the direction of the applied force. It has been shown, however, that metamaterials can be designed to exhibit negative compressibility transitions, during which the material undergoes contraction when tensioned (or expansion when compressed). When subjected to isotropic stresses, these metamaterials also exhibit negative volumetric compressibility transitions. In this class of metamaterials, the negative response is along the direction of the applied force, which distinguishes these materials from those that exhibit negative transversal response (such as in the study of negative Poisson's ratio). Negative bulk modulus Mechanical metamaterials with negative effective bulk modulus exhibit intriguing and counterintuitive properties. Unlike conventional materials, which compress under pressure, these materials expand. This anomalous behavior stems from their carefully engineered microstructure, which allows for internal deformation mechanisms that counteract the applied stress. Potential applications for these materials are vast. They could be employed to design acoustic or phononic metamaterials, advanced shock absorbers, and energy dissipation systems. Furthermore, their unique elastic properties may find utility in creating novel structural components with enhanced resilience and adaptability to dynamic loads. Vanishing shear modulus A pentamode metamaterial is an artificial three-dimensional structure which, despite being a solid, ideally behaves like a fluid. Thus, it has a finite bulk but vanishing shear modulus, or in other words it is hard to compress yet easy to deform. Speaking in a more mathematical way, pentamode metamaterials have an elasticity tensor with only one non-zero eigenvalue and five (penta) vanishing eigenvalues (a numerical sketch of this condition appears below). Pentamode structures were proposed theoretically by Graeme Milton and Andrej Cherkaev in 1995 but were not fabricated until early 2012. According to theory, pentamode metamaterials can be used as the building blocks for materials with completely arbitrary elastic properties. Anisotropic versions of pentamode structures are a candidate for transformation elastodynamics and elastodynamic cloaking. Chiral micropolar elasticity Very often Cauchy elasticity is sufficient to describe the effective behavior of mechanical metamaterials. When the unit cells of typical metamaterials are not centrosymmetric, it has been shown that an effective description using chiral micropolar elasticity (or Cosserat elasticity) is required. Micropolar elasticity combines the coupling of translational and rotational degrees of freedom in the static case and shows behavior equivalent to optical activity. Infinite mechanical tunability In addition to the well-known unprecedented mechanical properties of mechanical metamaterials, "infinite mechanical tunability" is another crucial aspect of mechanical metamaterials.
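The pentamode eigenvalue condition quoted above can be checked numerically. The sketch below is an illustrative assumption, not a published model: it builds the stiffness matrix of an ideal isotropic pentamode (finite bulk modulus K, zero shear modulus) in Voigt notation and counts its eigenvalues with NumPy.

```python
import numpy as np

# Ideal pentamode: finite bulk modulus K, vanishing shear modulus G = 0.
# For an isotropic solid in Voigt notation, setting G = 0 leaves every
# entry of the 3x3 normal-stress block equal to K and zeroes the rest.
K = 1.0  # bulk modulus in arbitrary illustrative units

C = np.zeros((6, 6))
C[:3, :3] = K  # normal-stress block; shear block stays zero since G = 0

print(np.linalg.eigvalsh(C))  # [0. 0. 0. 0. 0. 3.]

# Five eigenvalues vanish and a single one (3K, the hydrostatic mode)
# survives -- the defining "penta" property of a pentamode material.
```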
Such tunability is particularly important for structural materials, as their microstructure and stiffness can be tuned to effectively achieve theoretical upper bounds for specific stiffness and strength. While theoretical composites that achieve the same upper bound have existed for some time, they have been impractical to fabricate as they require features on multiple length scales. Single length scale designs are amenable to additive manufacturing, where they can enable engineered systems that maximize lightweight stiffness, strength and energy absorption. Active Mechanical Metamaterials To date, most mainstream studies on mechanical metamaterials have focused on passive structures with fixed properties, lacking active sensing or feedback capabilities. Deep integration of advanced functionalities is a critical challenge in exploring the next generation of metamaterials. Composite mechanical metamaterials could be the key to achieving this goal. However, the entire concept of composite mechanical metamaterials is still in its infancy. Obtaining programmable behavior through the interplay between material and structure in composite mechanical metamaterials enables integrating advanced functionalities into their texture beyond their mechanical properties. The “mechanical metamaterial tree of knowledge” implies that chiral, lattice and negative metamaterials (e.g., negative bulk modulus or negative elastic modulus) are mature, followed by origami and cellular metamaterials. Recent research trends have been entering a space beyond merely exploring unprecedented mechanical properties. Emerging directions envisioned are sensing, energy harvesting, and actuating mechanical metamaterials. The tree of knowledge reveals that digital computing, digital data storage, and micro/nano-electromechanical systems (MEMS/NEMS) applications are among the pillars of future mechanical metamaterials research. Along this direction of evolution, the final target can be active mechanical metamaterials with a level of cognition. Cognitive abilities are crucial elements in truly "intelligent" mechanical metamaterials. Similar to complex living organisms, intelligent mechanical metamaterials can potentially deploy their cognitive abilities for sensing, self-powering, and information processing to interact with the surrounding environment, optimizing their response and creating a sense–decide–respond loop (sketched below). Programmable mechanical metamaterials Programmable response is an emerging direction for mechanical metamaterials beyond mechanical properties. Electrical responsiveness is an important functionality for designing adaptive, actuating, and autonomous mechanical metamaterials. For example, research ideas have been opened up by active and adaptive mechanical metamaterials that incorporate electrical materials into the microstructural units of metamaterials to autonomously convert mechanical-strain input into electrical-signal output. Responsive mechanical metamaterials Integrating functional materials and mechanical design is an emerging research area to explore responsive mechanical metamaterials. Recent studies explore new classes of mechanical metamaterials that can respond to different excitation types such as acoustic, thermophotovoltaic and magnetic. Sensing and energy harvesting mechanical metamaterials Recent studies have explored the integration of sensing and energy harvesting functionalities into the fabric of mechanical metamaterials.
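As a way to picture the sense–decide–respond loop just described, here is a toy sketch. The class, thresholds, and signal model are invented for illustration and are not taken from the literature on intelligent metamaterials.

```python
# Toy sense-decide-respond loop for an "intelligent" metamaterial cell.
# All names and numbers below are hypothetical.

class MetamaterialCell:
    def __init__(self, stiffness: float = 1.0):
        self.stiffness = stiffness

    def sense(self, strain: float) -> float:
        # A self-powered element (e.g. triboelectric) would convert
        # mechanical strain into an electrical signal; modeled linearly.
        return 0.5 * strain

    def decide(self, signal: float) -> str:
        # Embedded logic classifies the signal against a threshold.
        return "stiffen" if signal > 0.1 else "hold"

    def respond(self, action: str) -> None:
        # An actuating phase adapts the cell's effective stiffness.
        if action == "stiffen":
            self.stiffness *= 1.2

cell = MetamaterialCell()
for strain in (0.05, 0.3, 0.02):  # a sequence of mechanical excitations
    cell.respond(cell.decide(cell.sense(strain)))
print(f"adapted stiffness: {cell.stiffness:.2f}")  # only 0.3 triggers it
```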
Meta-tribomaterials, proposed in 2021, are a new class of multifunctional composite mechanical metamaterials with intrinsic sensing and energy harvesting functionalities. These material systems are composed of finely tailored and topologically different triboelectric microstructures. Meta-tribomaterials can serve as nanogenerators and sensing media to directly collect information about their operating environment. They naturally inherit the enhanced mechanical properties offered by classical mechanical metamaterials. Under mechanical excitations, meta-tribomaterials generate electrical signals which can be used for active sensing and for powering sensors and embedded electronics. Electronic mechanical metamaterials Electronic mechanical metamaterials are active mechanical metamaterials with digital computing and information storage capabilities. They have laid the foundation for a new scientific field, meta-mechanotronics (mechanical metamaterial electronics), proposed in 2023. These material systems are created by integrating mechanical metamaterials, digital electronics and nano energy harvesting (e.g. triboelectric, piezoelectric, pyroelectric) technologies. Electronic mechanical metamaterials hold the potential to function as digital logic gates, paving the way for the development of mechanical metamaterial computers (MMCs) that could complement traditional electronic systems. Such computing metamaterial systems can be particularly useful under extreme loads and in harsh environments (e.g. high pressure, high/low temperature and radiation exposure) where traditional semiconductor electronics cannot maintain their designed logical functions. References Metamaterials
Mechanical metamaterial
Materials_science,Engineering
2,276
70,013,634
https://en.wikipedia.org/wiki/Civilization%27s%20Waiting%20Room
Sivilisasjonens venterom (Norwegian for "Civilization's Waiting Room") was a research larp (live-action roleplaying game) held in Bergen in November 2021. It was designed to explore the potential of larps as a research methodology and as research dissemination, and was specifically intended to investigate ethical questions that arise when encountering new surveillance technologies. Background The project was funded by the Research Council of Norway as part of a scheme to increase the Norwegian impact of EU-funded research. The stated goal was to "create arenas where the general public can practice making ethical decisions about the use of new technologies, specifically machine vision technologies such as facial recognition, deepfakes and VR". The creative lead for the project was veteran larp developer Anita Myhre Andersen, working with Harald Misje, Jon Andreas Edland, Toril Mjelva Saatvedt, Sebastian Sjøvold and Eskil Mjelva Saatvedt. The researchers in the development team were Marianne Gunderson, Kristian A. Bjørkelo, and Jill Walker Rettberg, who had initiated the project. The larp drew upon the Nordic larp genre as well as on research on educational larping (Edu-larp) and larps as research tools. In a scholarly paper about Sivilisasjonens venterom, Malthe Stavning Erslev describes it as a research larp, which is "a method of academic knowledge development in its own right". Setting and gameplay Civilization's Waiting Room was set in a future where society has unravelled due to climate change and war. The Civilization (Sivilisasjonen) is a city state that is a rare refuge from the surrounding wilderness. It is run by a benevolent AI known as Intelligensen ("the Intelligence") that bases all of its decisions on the sum of the opinions and interests of the citizens, as it interprets these based on the extensive data it collects and is fed by the citizens. Sivilisasjonen was therefore imagined as an AI-based democracy. The overall story arc of Sivilisasjonens venterom unfolded over a dramatic day in the reception hall, starting in the morning with new applicants arriving, and ending in the evening with a ceremony in which those who had learned to manipulate the system were granted citizenship and access to Sivilisasjonen. During the day there were small personal dramas, planned plot twists and unplanned incidents, as well as large-scale hacking of the Intelligence that undermined the foundational ideology of Sivilisasjonen. Players experienced conflicts on a personal level, as their characters had their interpersonal relationships challenged by technological mediation, as well as by their shifting interpretation of how this society worked. Participants also experienced large-scale drama as a group when the social framework of the Intelligence cracked and for a little while was replaced by a small group of more individuality-oriented hackers led by one of the organizers. Three related larps set in the same fictional world were Ettersynsing ("Opticionated"), a short-form larp using a dinner table setting that was run at the NORA 2021 conference on AI; Mønsterakademiet, a short larp set in a school that trained citizens for the Civilization; and Hawa, a larp for children run by the larp development company Tidsreiser, set in another part of the world where there are no adults, and robots bring up children in an attempt to mould them into peaceful, productive citizens.
Reception Malthe Stavning Erslev analyses his experience of playing Trin in the larp, discussing larps as a mimetic method related to design fiction. However, he found that the focus on the aspects of surveillance that are visible, such as screens and cameras, led to less focus on data-intensive surveillance, and thus the larp could be said "not to challenge imaginaries, but to solidify them." In his MA thesis, Jon Andreas Edland argued that larp gives participants the "opportunity to observe a theme or situation from different sides" and "thus grants a larger room for reflection and understanding based on the context of the situation". References Live-action role-playing games Research Council of Norway Machine vision Government by algorithm Design of experiments November 2021 events in Norway
Civilization's Waiting Room
Engineering
900
15,291,723
https://en.wikipedia.org/wiki/Hierarchical%20control%20system
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system. Overview A human-built system with complex behavior is often organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates, and lines of organizational communication. Hierarchical control systems are organized similarly to divide the decision making responsibility. Each element of the hierarchy is a linked node in the tree. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers. Each higher layer of the tree operates with a longer interval of planning and execution time than its immediately lower layer. The lower layers have local tasks, goals, and sensations, and their activities are planned and coordinated by higher layers which do not generally override their decisions. The layers form a hybrid intelligent system in which the lowest, reactive layers are sub-symbolic. The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning. A hierarchical task network is a good fit for planning in a hierarchical control system. Besides artificial systems, an animal's control systems are proposed to be organized as a hierarchy. In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized hierarchically, mirroring the hierarchical way its perceptions are constructed. Control system structure The accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system. Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves. Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets. Level 4 is the production scheduling level. Applications Manufacturing, robotics and vehicles Among the robotic paradigms is the hierarchical paradigm, in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s. Its Automated Manufacturing Research Facility was used to develop a five-layer production control model. In the early 1990s DARPA sponsored research to develop distributed (i.e. networked) intelligent control systems for applications such as military command and control systems.
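The defining features described above, commands flowing down the tree, sensations and results flowing up, and longer planning horizons at higher layers, can be illustrated with a minimal sketch. The node names, time horizons, and task strings below are hypothetical; this is not a model of any particular RCS or industrial implementation.

```python
# Minimal hierarchical control tree: commands flow down, results flow up,
# and each layer plans over a longer horizon than the layer below it.

class ControlNode:
    def __init__(self, name: str, horizon_s: float, children=()):
        self.name = name
        self.horizon_s = horizon_s      # planning/execution interval
        self.children = list(children)  # subordinate nodes

    def command(self, task: str) -> list:
        """Push a task down the tree; collect command results back up."""
        results = [f"{self.name} (plans every {self.horizon_s}s): {task}"]
        for child in self.children:
            # Subordinates receive decomposed subtasks from their superior.
            results.extend(child.command(f"subtask of '{task}'"))
        return results

# A three-layer hierarchy with shrinking time horizons toward the leaves.
servo = ControlNode("servo loop", horizon_s=0.01)
motion = ControlNode("motion planner", horizon_s=1.0, children=[servo])
mission = ControlNode("mission planner", horizon_s=60.0, children=[motion])

for line in mission.command("deliver part to cell 3"):
    print(line)
```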
NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software, which is a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane, and an automated vehicle. In November 2007, DARPA held the Urban Challenge. The winning entry, Tartan Racing, employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modelling, and mechatronics. Artificial intelligence Subsumption architecture is a methodology for developing artificial intelligence that is heavily associated with behavior-based robotics. This architecture is a way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the software agent (i.e. the system as a whole), and higher layers are increasingly abstract. Each layer's goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer; rather, behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate. Reinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience. James Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture (RMA), which is a hierarchical control system inspired by RCS. Albus defines each node to contain these components. Behavior generation is responsible for executing tasks received from the superior, parent node. It also plans for, and issues tasks to, the subordinate nodes. Sensory perception is responsible for receiving sensations from the subordinate nodes, then grouping, filtering, and otherwise processing them into higher-level abstractions that update the local state and which form sensations that are sent to the superior node. Value judgment is responsible for evaluating the updated situation and evaluating alternative plans. World Model is the local state that provides a model for the controlled system, controlled process, or environment at the abstraction level of the subordinate nodes. At its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite-state machine. Higher levels of the RMA, however, may have sophisticated mathematical world models and behavior implemented by automated planning and scheduling. Planning is required when certain behaviors cannot be triggered by current sensations, but rather by predicted or anticipated sensations, especially those that come about as a result of the node's actions. See also Command hierarchy, a hierarchical power structure Hierarchical organization, a hierarchical organizational structure References Further reading External links The RCS (Realtime Control System) Library Texai An open source project to create artificial intelligence using an Albus hierarchical control system Control engineering Control theory Artificial intelligence Robot architectures
Hierarchical control system
Mathematics,Engineering
1,208
16,216,013
https://en.wikipedia.org/wiki/IC%20405
IC 405 (also known as the Flaming Star Nebula, SH 2-229, or Caldwell 31) is an emission and reflection nebula in the constellation Auriga north of the celestial equator, surrounding the bluish, irregular variable star AE Aurigae. It shines at magnitude +6.0. It is located near the emission nebula IC 410, the open clusters M38 and M36, and the K-class star Iota Aurigae. The nebula measures approximately 37.0' x 19.0', and lies about 1,500 light-years away from Earth. It is believed that the proper motion of the central star can be traced back to the Orion's Belt area. The nebula is about 5 light-years across. Gallery See also Auriga (Chinese astronomy) Caldwell catalogue Cosmic dust List of largest nebulae Notes Sources External links Diffuse nebulae Emission nebulae Reflection nebulae 0405 031b 229 Auriga
IC 405
Astronomy
205
18,134,112
https://en.wikipedia.org/wiki/Golden%20age%20of%20cosmology
The golden age of cosmology is a term often used to describe the period from 1992 to the present, an era of tremendous progress in observational cosmology characterized by significant breakthroughs and discoveries that have transformed our understanding of the universe. Prior to the golden age of cosmology, our understanding of the universe was limited to what we could observe through telescopes and other instruments. Theories and models were developed based on limited data and observations, and there was much speculation and debate regarding the true nature of the universe. In 1992, however, the situation changed dramatically with the announcement of results from the Cosmic Background Explorer (COBE) satellite, which had been launched in 1989. This mission was designed to study the cosmic microwave background (CMB) radiation, which is the leftover radiation from the Big Bang. The COBE mission made the first precise measurements of the CMB, and these measurements provided evidence in support of the Big Bang theory. The COBE mission also discovered small fluctuations in the CMB radiation, which were believed to be the seeds of galaxy formation. This discovery was a major breakthrough in our understanding of the early universe, as it provided evidence for the inflationary universe model. This model suggests that the universe underwent a rapid expansion in the first few moments after the Big Bang, which would have caused the tiny fluctuations in the CMB. In the years following the COBE mission, there were several other important discoveries in observational cosmology. One of the most significant was the discovery of dark matter. This mysterious substance makes up approximately 27% of the universe, yet it cannot be observed directly. Its existence was inferred from its gravitational effects on visible matter. The discovery of dark matter was followed by the discovery of dark energy, which makes up approximately 68% of the universe. Dark energy is believed to be responsible for the accelerated expansion of the universe, which was first observed in 1998 by two independent teams of astronomers. The discovery of dark matter and dark energy, along with the observations of the CMB and the large-scale structure of the universe, have led to the development of the Lambda-CDM model of the universe. This model suggests that the universe is composed of approximately 5% ordinary matter, 27% dark matter, and 68% dark energy. In addition to these discoveries, there have been numerous other important advances in observational cosmology in recent years. For example, the Planck satellite, which was launched in 2009, made even more precise measurements of the CMB radiation than the COBE mission. These measurements provided even more evidence in support of the inflationary universe model and helped to refine our understanding of the universe's initial conditions. Another significant development in recent years has been the discovery of gravitational waves. These ripples in the fabric of spacetime were predicted by Albert Einstein's theory of general relativity, but it was not until 2015 that they were first detected. This discovery was made by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and confirmed a major prediction of general relativity.
The golden age of cosmology has also seen the development of new observational techniques and technologies. For example, the use of telescopes in space has revolutionized our ability to observe the universe. Space-based observatories such as the Hubble Space Telescope (launched in 1990) and the James Webb Space Telescope (launched in 2021) have provided stunning images and data that have expanded our understanding of the universe. In addition, ground-based telescopes have also undergone significant improvements in recent years. For example, the Atacama Large Millimeter Array (ALMA) in Chile is a revolutionary new telescope that is able to observe the universe in unprecedented detail. It has already made significant contributions to our understanding of star formation and the early universe. References Cosmology History of physics History of science
Golden age of cosmology
Technology
816
49,302,234
https://en.wikipedia.org/wiki/Potassium%20uptake%20permease
The potassium (K+) uptake permease (KUP) family (TC# 2.A.72) is a member of the APC superfamily of secondary carriers. Proteins of the KUP/HAK/KT family include the KUP (TrkD) protein of E. coli and homologues in both Gram-positive and Gram-negative bacteria. High affinity (20 μM) K+ uptake systems (Hak1, TC# 2.A.72.2.1) of the yeast Debaryomyces occidentalis as well as the fungus Neurospora crassa, and several homologues in plants, have been characterized. Arabidopsis thaliana and other plants possess multiple KUP family paralogues. While many plant proteins cluster tightly together, the Hak1 proteins from yeast as well as the two Gram-positive and Gram-negative bacterial proteins are distantly related on the phylogenetic tree for the KUP family. All currently classified members of the KUP family can be found in the Transporter Classification Database. Structure and function Escherichia coli The E. coli protein is 622 amino acyl residues long and has 12 established transmembrane spanners (440 residues) with a requisite hydrophilic, C-terminal domain of 182 residues, localized to the cytoplasmic side of the membrane. Deletion of most of the hydrophilic domain reduces but does not abolish KUP transport activity. The function of the C-terminal domain is not known. The E. coli KUP protein is believed to be a secondary transporter. Uptake is blocked by protonophores such as CCCP (but not arsenate), and evidence for a proton symport mechanism has been presented. The N. crassa protein was earlier shown to be a K+:H+ symporter, establishing that the KUP family consists of secondary carriers. Yeast The yeast high affinity (Km = 1 μM) K+ transporter Hak1 is 762 amino acyl residues long with 12 putative TMSs. Like the E. coli KUP protein, it possesses a C-terminal hydrophilic domain, probably localized to the cytoplasmic side of the membrane. Hak1 may be able to accumulate K+ 10⁶-fold against a concentration gradient. The plant high and low affinity K+ transporters can complement K+ uptake defects in E. coli. TRK TRK transporters, responsible for the bulk of K+ accumulation in plants, fungi, and bacteria, mediate ion currents driven by the large membrane voltages (-150 to -250 mV) common to non-animal cells. Bacterial TRK proteins resemble K+ channels in their primary sequence, crystallize as membrane dimers having intramolecular K+-channel-like folding, and complex with a cytoplasmic collar formed of four RCK domains. Fungal TRK proteins possess a large built-in regulatory domain and a highly conserved pair of transmembrane helices (TMSs 7 and 8, ahead of the C-terminus), postulated to facilitate intramembranal oligomerization. These fungal TRK proteins are also chloride channels mediating efflux, a process suppressed by osmoprotective agents. This efflux involves hydrophobic gating and resembles conduction by Cys-loop ligand-gated anion channels. Possibly, the tendency of hydrophobic or amphipathic transmembrane helices to self-organize into oligomers creates novel ionic pathways through membranes: hydrophobic nanopores, pathways of low selectivity governed by the chaotropic behavior of individual ionic species under the influence of membrane voltage. Transport reaction The generalized transport reaction for members of the KUP family is: K+ (out) + energy → K+ (in). See also Transporter Classification Database Membrane transport protein References Further reading Protein families Solute carrier family
Potassium uptake permease
Biology
824
1,848,250
https://en.wikipedia.org/wiki/Esky
Esky is a brand of portable coolers, originally Australian, derived from the word "Eskimo". The term "esky" is also commonly used in Australia to generically refer to portable coolers or ice boxes and is part of the Australian vernacular, in place of words like "cooler" or "cooler box" and the New Zealand "chilly bin". The brand name was purchased by American firm Coleman Company (a subsidiary of Newell Brands) in 2009. History Some historians have credited Malley's with the invention of the portable ice cooler. According to the company, the Esky was "recognised as the first official portable cooler in the world." The company's own figures claim that, by 1960, 500,000 Australian households owned one (in a country of approximately 3 million households at the time). The brand "Esky" was used from around 1945, for an Australian-made ice chest, a free-standing insulated cabinet with two compartments: the upper to carry a standard () block of ice, and the lower for food and drinks. It was made in Sydney by Malleys but did not carry their name until around 1949. The first (metal-cased) portable Esky appeared in 1952, sized to accommodate six bottles of beer or soft drink, as advertised nationally. By 1965 "esky" (no capital E) was being used in Australian literature for such coolers, and in 1973 Malleys, owners of the tradename, acknowledged that the term had entered the vernacular and was being used for lightweight plastic imitations. One such brand was Willow, an Australian manufacturer, previously known for domestic "tinware" — buckets, bins, cake tins and oven trays. Nylex started making the plastic-cased Esky in 1984. In 1993 Nylex Corporation was still defending their ownership of the "Esky" trademark, but by 2002 they had allowed it to lapse. Outdoor recreation company Coleman Australia bought the Esky brands from Nylex Ltd after the company went into administration in February 2009, and later that year Coleman was producing most of the Esky line in Melbourne. The sale was seen as symptomatic of the decline of Australian-made goods due to cheaper imports. Construction Current models are constructed with two layers: polypropylene on the outer shell, with a polyurethane inner layer. This makes them lightweight and portable, with excellent insulation. The original Esky had a lightweight galvanized iron outer shell and lining, and used cork compound insulation. Later models had a plastic inner and polystyrene foam insulation. Later coolers have been moulded entirely from polystyrene foam. They are lightweight and inexpensive, but are easily damaged or destroyed. The lightweight construction makes most eskies float in water, and they have been recommended by safety specialists for use as improvised lifebuoys if more specialised equipment is not available. Numerous people have been saved after using either the whole esky or the esky lid as flotation devices after boating accidents. Generic use In Australia, the 'esky' name has become, or as a legal matter nearly has become, genericised: the popularity of the product has led to the use of its name to refer to any cooler box, regardless of the brand. Many dictionaries, including the Australian National Dictionary and the Macquarie Dictionary, now include definitions in their publications defining it as such. However, the use of the Esky trademark must be approved by the brand owner to avoid any liability. Government agencies and media outlets in Australia have used the term in preference to generic alternatives.
In Australian culture The esky has played a unique role in Australian culture, especially with regard to outdoor activities, camping, and sporting events, and for reasons of novelty. In particular, the design and use of the esky has evolved through its relationship with Australia's drinking culture. The first portable Esky was designed to carry six "standard" 26-fluid-ounce (740 ml) bottles as well as a triple-level food section. Malley's Esky was created as a tool for camping and caravanning holidays and was called the Esky Auto Box, encouraged by the post-war popularity of the private motor vehicle. The esky became an essential part of the beach, outdoor dining and barbecue culture that developed in Australia during the 1960s and 1970s. Due to their portability and extensive use outdoors, an esky can also double as makeshift cricket stumps, with some companies making hybrid products that include retractable stumps (among other useful features such as a bottle opener). Though not unique to Australia, Australian media have widely reported on a number of high-profile incidents involving motorised eskies fitted with small motors and wheels. Police have impounded offending vehicles and have issued fines to those operating them on public property. Spectators at the closing ceremony of the 2000 Summer Olympics in Sydney each received a promotional pack of a small polystyrene Esky containing other items of memorabilia. In another uniquely Australian piece of culture, poly-foam bodyboards used in the surf are often referred to by the slang terms "Esky-lid" or "shark biscuit". See also Cool-box List of oldest companies in Australia Notes References External links https://www.esky.com.au/ Australian companies established in 1884 Australian brands Brands that became generic Food preservation Cooler manufacturers Vacuum flasks Australian subsidiaries of foreign companies Camping equipment manufacturers
Esky
Physics
1,120
48,697,490
https://en.wikipedia.org/wiki/Deep%20Decarbonization%20Pathways%20initiative
The Deep Decarbonization Pathways initiative (DDPi) is a global consortium formed in 2013 which researches methods to limit the rise of global temperature due to global warming to 2°C or less. The focus of the DDPi is on decarbonization pathways for sustainable energy systems; other sectors of the economy, such as agriculture and land use, are not directly considered. Methods Analyses of possible scenarios assume no major changes in culture and rely on existing technology. They assume no major changes in the lifestyles of people in developed countries and do not include possible future technologies such as nuclear fusion. Population growth of 1% per year and economic growth of 3% are assumed. Analyses show a need for continued research on energy technologies. The DDPi rejects an incrementalist approach to climate protection. Instead, meeting the climate change mitigation challenge (as set out in the 2015 Paris Agreement) will require backcasting to a suitable attractor, such as complete decarbonization. This method allows short-term policy options to be developed that are consistent with the selected long-term target. Even so, there are numerous possible deep decarbonization pathways (DDP) for each country, and stakeholders and policymakers will need to debate and choose one, building the necessary political consensus as they go. DDPs can help avoid dead-end investments that reduce emissions in the short term but obstruct deep decarbonization in the long term, and thereby reduce the risk of assets becoming stranded. In 2016, the project proposed a new conceptual decision framework for deep decarbonization pathways analysis across its 16 participating countries. This includes an agenda for the further development of modelling methodologies. A key motivation is to address the "intertwined goals of transparency, communicability and policy credibility." Findings The DDPi's analyses show that meeting a goal of limiting the rise of global temperature to 2°C or less is barely possible using existing technology, if it were deployed; however, long-term plans are not in place to do so. As of early 2016, the DDPi was composed of energy researchers and institutions across the following economies, covering 74% of global energy-related greenhouse gas emissions: Australia, Brazil, Canada, China, France, Germany, India, Indonesia, Italy, Japan, Mexico, Russia, South Africa, South Korea, the UK and the US. The 2015 global synthesis and country reports can be downloaded. Country-specific modelling varied in its sophistication. Some countries accounted for land-use change effects and the macroeconomic impacts on GDP, welfare, and economic structure, while others did not. See also Climate change mitigation Climate change mitigation scenarios Energy modeling Energy Modeling Forum, and in particular EMF22 Jeffrey Sachs, one of the founders of the Deep Decarbonization Pathways Project Open Energy Modelling Initiative References Further reading Deep Decarbonization Pathways Project Reports , the synthesis report and the country reports External links Deep Decarbonization Pathways Project homepage International climate change organizations Environmental research institutes
Deep Decarbonization Pathways initiative
Environmental_science
615
36,009,567
https://en.wikipedia.org/wiki/Bangladesh%20Power%20Development%20Board
The Bangladesh Power Development Board (BPDB) is a government agency operating under the Ministry of Power, Energy and Mineral Resources, Government of the People's Republic of Bangladesh. It was created as a public-sector organization to boost the country's power sector after the emergence of Bangladesh as an independent state in 1972. This government organization is responsible for planning and developing the nation's power infrastructure and for operating much of its power generation facilities. The BPDB is responsible for the major portion of generation and distribution of electricity, mainly in urban areas of the country. Engr. Md. Rezaul Karim is the present chairman of the board. The board's members and directors are drawn from the Bangladesh Administrative Service and from different cadres of government service. History After the creation of Pakistan, the then Pakistan government formed the Electricity Directorate to develop the power sector of the country. In 1957, the Electricity Directorate acquired all the private power stations and transmission lines in the country. In 1958, the East Pakistan Water and Power Development Authority (EPWAPDA) was formed to effectively manage the power sector in the then East Pakistan. In 1960, the Electricity Directorate with all its assets was merged with EPWAPDA. The Chattogram, Khulna and Shiddhirganj power stations were constructed at that time, of which Shiddhirganj power station was the largest with 10 MW installed capacity. In 1962, the Karnafuli Hydropower Station at Kaptai became operational. With two units of 40 MW installed capacity each, it became the largest power plant in the country. The first long-range transmission line was built in 1962, connecting Kaptai with Shiddhirganj via a 273 km long 132 kV transmission line. After the independence of Bangladesh, WAPDA was separated by presidential order 59 (PO-59) and the Bangladesh Power Development Board (BPDB) was formed with an installed generation capacity of 500 MW. Subsequently, the Rural Electrification Board (REB) and the Dhaka Electric Supply Authority (DESA) were formed by dividing the BPDB. In 2000, the transmission lines were handed over to the newly formed Power Grid Company of Bangladesh. BPDB is now the parent company of Ashuganj Power Station Company Ltd, Coal Power Generation Company Bangladesh Limited, Power Grid Company of Bangladesh, Electricity Generation Company of Bangladesh, North West Power Generation Company Limited, North West Zone Power Distribution Company Limited, and West Zone Power Distribution Company Limited. On 4 October 2022, 70-80% of the country's 168 million residents were hit by blackouts, and power was restored to only 45% of residences by nightfall. There was a shortage of natural gas because of the 2021–present global energy crisis, during which 77 natural gas power plants had insufficient fuel to meet demand. The electricity sector in Bangladesh is heavily reliant on natural gas. The government stopped buying spot-price liquefied natural gas (LNG) in June 2022; it had been importing 30% of its LNG on the spot market that year, down from 40% the year before. The country is still importing LNG on futures exchange markets. Operations BPDB is responsible for generation and distribution of a large part of the country's total electricity demand. As of January 2020, BPDB had a total installed capacity of 5613 MW at its own power plants located in different parts of the country. The main fuel used for power generation in BPDB plants is indigenous natural gas.
BPDB operations also include projects that utilize renewable power sources, including offshore wind power generation. The maximum demand served during peak hours was 16,477 MW, on 30 April 2024. The total distribution network length under BPDB is 30,051 km, including 33 kV, 11 kV and 0.4 kV lines. See also WAPDA Sports Club Raozan power station Nuclear energy in Bangladesh Barapukuria Power Station Northwest Power Generation Company Limited Rooppur Nuclear Power Plant Dhaka Electric Supply Company Limited Dhaka Power Distribution Company West Zone Power Distribution Company Limited References Energy organizations Government agencies of Bangladesh Energy in Bangladesh 1972 establishments in Bangladesh Organisations based in Dhaka Government boards of Bangladesh Government agencies established in 1972
Bangladesh Power Development Board
Engineering
827
24,342,006
https://en.wikipedia.org/wiki/C24H25NO3
{{DISPLAYTITLE:C24H25NO3}} The molecular formula C24H25NO3 may refer to: Benzylmorphine, an opioid analgesic Cyphenothrin, a pyrethroid insecticide 4'-Hydroxynorendoxifen N-Phenethylnormorphine
C24H25NO3
Chemistry
75
52,434,656
https://en.wikipedia.org/wiki/Snow%20science
Snow science addresses how snow forms, its distribution, and processes affecting how snowpacks change over time. Scientists improve storm forecasting, study global snow cover and its effect on climate, glaciers, and water supplies around the world. The study includes physical properties of the material as it changes, bulk properties of in-place snow packs, and the aggregate properties of regions with snow cover. In doing so, they employ on-the-ground physical measurement techniques to establish ground truth and remote sensing techniques to develop understanding of snow-related processes over large areas. History Snow was described in China as early as 135 BCE in Han Ying's book Disconnection, which contrasted the pentagonal symmetry of flowers with the hexagonal symmetry of snow. Albertus Magnus provided what may be the earliest detailed European description of snow in 1250. Johannes Kepler attempted to explain why snow crystals are hexagonal in his 1611 book, Strena seu De Nive Sexangula. In 1675 Friedrich Martens, a German physician, catalogued 24 types of snow crystal. In 1865, Frances E. Chickering published Cloud Crystals - a Snow-Flake Album. In 1894, A. A. Sigson photographed snowflakes under a microscope, preceding Wilson Bentley's series of photographs of individual snowflakes in the Monthly Weather Review. Ukichiro Nakaya began an extensive study of snowflakes in 1932. From 1936 to 1949, Nakaya created the first artificial snow crystals and charted the relationship between temperature and water vapor saturation, later called the Nakaya Diagram; this and his other research on snow were published in 1954 by Harvard University Press as Snow Crystals: Natural and Artificial. Teisaku Kobayashi verified and improved the Nakaya Diagram with the 1960 Kobayashi Diagram, later refined in 1962. Further interest in artificial snowflake genesis continued in 1982, when Toshio Kuroda and Rolf Lacmann of the Braunschweig University of Technology published Growth Kinetics of Ice from the Vapour Phase and its Growth Forms. In August 1983, astronauts synthesized snow crystals in orbit on the Space Shuttle Challenger during mission STS-8. By 1988 Norihiko Fukuta et al. had confirmed the Nakaya Diagram with artificial snow crystals made in an updraft, and Yoshinori Furukawa had demonstrated snow crystal growth in space. Measurement Snow scientists typically excavate a snow pit within which to make basic measurements and observations. Observations can describe features caused by wind, water percolation, or snow unloading from trees. Water percolation into a snowpack can create flow fingers and ponding or flow along capillary barriers, which can refreeze into horizontal and vertical solid ice formations within the snowpack. Among the measurements of the properties of snowpacks (together with their codes) that the International Classification for Seasonal Snow on the Ground presents are: Height (H) is measured vertically from the ground surface, usually in centimeters. Thickness (D) is snow depth measured at right angles to the slope on inclined snow covers, usually in centimeters. Height of snowpack (HS) is the total depth of the snowpack, measured vertically in centimetres from base to snow surface. Height of new snow (HN) is the depth in centimeters of freshly fallen snow that accumulated on a snow board during a period of 24 hours or some other specified period.
Snow water equivalent (SWE) is the depth of water that would result if the snow mass melted completely, whether over a given region or a confined snow plot, calculated as the product of the snow height in meters times the vertically-integrated density in kilograms per cubic meter. Water equivalent of snowfall (HNW) is the snow water equivalent of snowfall, measured for a standard observing period of 24 hours or another period. Snow strength (Σ), whether compressive, tensile, or shear, can be regarded as the maximum stress snow can withstand without failing or fracturing, expressed in pascals. Penetrability of snow surface (P) is the depth that an object penetrates into the snow from the surface, usually measured with a Swiss rammsonde, or more crudely by a person standing or on skis, in centimeters. Surface features (SF) describes the general appearance of the snow surface, owing to deposition, redistribution and erosion by wind, melting and refreezing, sublimation and evaporation, and rain. The following processes have the corresponding results: smooth—deposition without wind; wavy—wind deposited snow; concave furrows—melt and sublimation; convex furrows—rain or melt; random furrows—erosion. Snow covered area (SCA) describes the extent of snow-covered ground, usually expressed as a fraction (%) of the total. Slope angle (Φ) is the angle measured from the horizontal to the plane of a slope with a clinometer. Aspect of slope (AS) is the compass direction towards which a slope faces, normal to the contours of elevation, given either in degrees from true North (N = 0° = 360°) or as N, NE, E, SE, S, SW, W, NW. Time (t) is usually given in seconds for a measurement duration or in longer units to describe the age of snow deposits and layers. Instruments Depth – Depth of snow is measured with a snowboard (typically a piece of plywood painted white) observed during a six-hour period. At the end of the six-hour period, all snow is cleared from the measuring surface. For a daily total snowfall, four six-hour snowfall measurements are summed. Snowfall can be very difficult to measure due to melting, compacting, blowing and drifting. Liquid equivalent by snow gauge – The liquid equivalent of snowfall may be evaluated using a snow gauge or with a standard rain gauge having a diameter of 100 mm (4 in; plastic) or 200 mm (8 in; metal). Rain gauges are adjusted for winter by removing the funnel and inner cylinder and allowing the snow/freezing rain to collect inside the outer cylinder. Antifreeze liquid may be added to melt the snow or ice that falls into the gauge. In both types of gauges once the snowfall/ice is finished accumulating, or as its height in the gauge approaches , the snow is melted and the water amount recorded.
Rounded Grains (RG) – Vary from rounded, usually elongated particles of size around 0.25 mm, which are highly sintered. They may be wind packed or faceted rounded, as well. Faceted Crystals (FC) – Grow with grain-to-grain vapour diffusion driven by a large temperature gradient is the primary driver of faceted crystals within the dry snowpack. Depth Hoar (DH) – Grain-to-grain vapour diffusion driven by large temperature gradient is the primary driver of depth hoar within the dry snowpack. Surface Hoar (SH) – Rapid growth of crystals at the snow surface by transfer of water vapor from the atmosphere toward the snow surface, which is cooled by radiative cooling below ambient temperature. Melt Forms (MF) – Range from clustered round grains of wet snow through melt-freeze rounded polycrystals when water in veins freezes to loosely bonded, fully rounded single crystals and polycrystals.to polycrystals from a surface layer of wet snow that refroze after having been wetted by melt or rainfall. Ice Formations (IF) – Encompass the following features: Horizontal layers, resulting from rain or meltwater from the surface percolating into cold snow and refreezing along layer barriers. Vertical fingers of frozen drained water. A basal crust resurgent from melt water ponding above a substrate and freezes. A glaze of ice on the snow surface, resulting from freezing rain on snow. A sun crust from melt water at the surface snow refreezes at the surface due to radiative cooling. Precipitation particles The classification of frozen particulates extends the prior classifications of Nakaya and his successors and are quoted in the following table: All are formed in cloud, except for rime, which forms on objects exposed to supercooled moisture, and some plate, dendrites and stellars, which can form in a temperature inversion under clear sky. Physical properties Each such layer of a snowpack differs from the adjacent layers by one or more characteristics that describe its microstructure or density, which together define the snow type, and other physical properties. Thus, at any one time, the type and state of the snow forming a layer have to be defined because its physical and mechanical properties depend on them. The International Classification for Seasonal Snow on the Ground lays out the following measurements of snow properties (together with their codes): Microstructure of snow is complex and hard to measure, yet has a critical influence on the thermal, mechanical, and electromagnetic properties of snow. Although there are multiple means for characterizing microstructure, there is no standard method. Grain shape (F ) includes both natural and artificial depositions, which may have decomposed or include newly formed crystals freeze-thaw or from hoar frost. Grain size (E ) represents the average size of grains, each measured at its greatest extension, measured in millimetres. Snow density (ρs ) is the mass per unit volume of snow of a known volume, calculated as kg/m3. Classification runs from very fine at below 0.2 mm to very coarse (2.0–5.0 mm) and beyond. Snow hardness (R ) is the resistance to penetration of an object into snow. Most snow studies use a fist or fingers for softer snows (very soft through medium) and a pencil (hard) or knife (very hard) below the hardness boundary of ice. Liquid water content (LWC ) (or free-water content) is the amount of water within the snow in the liquid phase from either melt, rain, or both. Measurements are expressed as a volume or mass fraction in percent. 
Dry snow has a 0% mean volume fraction. Wet snow 5.5% and soaked is greater than 15%. Snow temperature (Ts ) is frequently measured at various elevations in and above the snow column: at the ground, at the surface and a reported height above the surface in °C. Impurities (J ) commonly are dust, sand, soot, acids, organic and soluble materials; each should be fully described and reported as mass fraction (%, ppm). Layer thickness (L ) of each stratum of a snowpack is measured in cm. Satellite data and analysis Remote sensing of snowpacks with satellites and other platforms typically includes multi-spectral collection of imagery. Sophisticated interpretation of the data obtained allows inferences about what is observed. The science behind these remote observations has been verified with ground-truth studies of the actual conditions. Satellite observations record a decrease in snow-covered areas since the 1960s, when satellite observations began. In some regions such as China, a trend of increasing snow cover has been observed (from 1978 to 2006). These changes are attributed to global climate change, which may lead to earlier melting and less aea coverage. However, in some areas there may be an increase in snow depth because of higher temperatures for latitudes north of 40°. For the Northern Hemisphere as a whole the mean monthly snow-cover extent has been decreasing by 1.3% per decade. Satellite observation of snow relies on the usefulness of the physical and spectral properties of snow for analysing remotely sensed data. Dietz, et al. summarize this, as follows: Snow reflects a high proportion of incident radiation in visible wavelengths. The Earth continuously emits microwave radiation from its surface that can be measured from space using passive microwave sensors. The use of active microwave data to map snow-cover characteristics is limited by the fact that only wet snow can be recognized reliably. The most frequently used methods to map and measure snow extent, snow depth and snow water equivalent employ multiple inputs on the visible–infrared spectrum to deduce the presence and properties of snow. The National Snow and Ice Data Center (NSIDC) uses the reflectance of visible and infrared radiation to calculate a normalized difference snow index, which is a ratio of radiation parameters that can distinguish between clouds and snow. Other researchers have developed decision trees, employing the available data to make more accurate assessments. One challenge to this assessment is where snow cover is patchy, for example during periods of accumulation or ablation and also in forested areas. Cloud cover inhibits optical sensing of surface reflectance, which has led to other methods for estimating ground conditions underneath clouds. For hydrological models, it is important to have continuous information about the snow cover. Applicable techniques involve interpolation, using the known to infer the unknown. Passive microwaves sensors are especially valuable for temporal and spatial continuity because they can map the surface beneath clouds and in darkness. When combined with reflective measurements, passive microwave sensing greatly extends the inferences possible about the snowpack. Models Snow science often leads to predictive models that include snow deposition, snow melt, and snow hydrology—elements of the Earth's water cycle—which help describe global climate change. Global climate change Global climate change models (GCMs) incorporate snow as a factor in their calculations. 
Some important aspects of snow cover include its albedo (reflectivity of light) and insulating qualities, which slow the rate of seasonal melting of sea ice. As of 2011, the melt phase of GCM snow models were thought to perform poorly in regions with complex factors that regulate snowmelt, such as vegetation cover and terrain. These models compute snow water equivalent (SWE) in some manner, such as: SWE = [ –ln( 1 – fc )] / D where: fc = fractional coverage of snow D = masking depth of vegetation (≈ 0.2 m worldwide) Snowmelt Given the importance of snowmelt to agriculture, hydrological runoff models that include snow in their predictions address the phases of accumulating snowpack, melting processes, and distribution of the meltwater through stream networks and into the groundwater. Key to describing the melting processes are solar heat flux, ambient temperature, wind, and precipitation. Initial snowmelt models used a degree-day approach that emphasized the temperature difference between the air and the snowpack to compute snow water equivalent (SWE) as: SWE = M (Ta – Tm) when Ta ≥ Tm = 0 when Ta < Tm where: M = melt coefficient Ta = air temperature Tm = snowpack temperature More recent models use an energy balance approach that take into account the following factors to compute the energy available for melt (Qm) as: Qm = Q* +Qh + Qe + Qg + Qr – QΘ where: Q* = net radiation Qh = convective transfer of sensible heat between snowpack and airmass Qe = latent heat lost through evaporation from or condensation onto the snowpack Qg = conduction of heat from the ground into the snowpack Qr = advection of heat through rain QΘ = rate of change of internal energy per unit of surface area Calculation of the various heat flow quantities (Q ) requires measurement of a much greater range of snow and environmental factors than just temperatures. Engineering Knowledge gained from science translates into engineering. Four examples are the construction and maintenance of facilities on polar ice caps, the establishment of snow runways, the design of snow tires and ski sliding surfaces. Buildings on snow foundations – The US Army Cold Regions Research and Engineering Laboratory (CRREL) played a role in assisting the U.S. Air Force to establish and maintain a system of Distant Early Warning (DEW) Line facilities during the Cold War era. In 1976, a CRREL researcher was instrumental in the moving of a 10-story-high, DEW Line facility on the Greenland Ice Cap from a foundation that had been compromised by the movement of the ice on which it was built to a new foundation. This required the measurement of in-situ snow strength and its use in the design of new foundations for the building. Snow runways – In 2016, CRREL Research Civil Engineers designed, built and tested a new snow runway for the McMurdo Station, called "Phoenix". It is designed accommodate approximately 60 annual sorties of heavy, wheeled transport aircraft. The compacted snow runway was designed and constructed to service a Boeing C-17 weighing more than . This required engineering knowledge of the properties of mechanically-hardened snow. Snow tires – Snow tires perform three functions: compaction, shear bonding and bearing. On roadways they compact the snow in front of them and provide a shear bond between the treads and the compacted snow. Off-road, they also provide bearing on the compacted snow. 
Engineering Knowledge gained from science translates into engineering. Four examples are the construction and maintenance of facilities on polar ice caps, the establishment of snow runways, the design of snow tires and ski sliding surfaces. Buildings on snow foundations – The US Army Cold Regions Research and Engineering Laboratory (CRREL) played a role in assisting the U.S. Air Force to establish and maintain a system of Distant Early Warning (DEW) Line facilities during the Cold War era. In 1976, a CRREL researcher was instrumental in moving a 10-story-high DEW Line facility on the Greenland Ice Cap from a foundation that had been compromised by the movement of the ice on which it was built to a new foundation. This required the measurement of in-situ snow strength and its use in the design of new foundations for the building. Snow runways – In 2016, CRREL research civil engineers designed, built and tested a new snow runway for McMurdo Station, called "Phoenix". It is designed to accommodate approximately 60 annual sorties of heavy, wheeled transport aircraft. The compacted snow runway was designed and constructed to service a Boeing C-17 weighing more than . This required engineering knowledge of the properties of mechanically hardened snow. Snow tires – Snow tires perform three functions: compaction, shear bonding and bearing. On roadways they compact the snow in front of them and provide a shear bond between the treads and the compacted snow. Off-road, they also provide bearing on the compacted snow. The bearing contact pressure must be low enough that the tires do not sink so deeply that compacting the snow in front of them impedes forward progress. Tread design is critical for snow tires used on roads and represents a tradeoff between on-snow traction and dry- and wet-road comfort and handling. Snow sliders – The ability of a ski or other runner to slide over snow depends on both the properties of the snow and of the ski, the result being an optimum amount of lubrication from melting the snow by friction with the ski—too little and the ski interacts with solid snow crystals, too much and capillary attraction of meltwater retards the ski. Before a ski can slide, it must overcome the maximum value of static friction, Fmax = μs·N, for the ski/snow contact, where μs is the coefficient of static friction and N is the normal force of the ski on snow. Kinetic (or dynamic) friction occurs when the ski is moving over the snow. References External links United Nations Environment Programme: Global Outlook for Ice and Snow Institute of Low Temperature Science, Hokkaido University Swiss Federal Institute for Forest, Snow and Landscape Research website U.S. National Snow and Ice Data Center Snow Science website American Society of Civil Engineers ground snow loads interactive map for the continental US The International Classification for Seasonal Snow on the Ground (ICSSG) Snow Branches of meteorology Applied and interdisciplinary physics Physical geography
Snow science
Physics
3,859
22,138,684
https://en.wikipedia.org/wiki/Alcohol%20and%20sex
Alcohol and sex deals with the effects of the consumption of alcohol on sexual behavior. The effects of alcohol are balanced between its suppressive effects on sexual physiology, which decrease sexual activity, and its suppression of sexual inhibitions. A large portion of sexual assaults involve alcohol consumption by the perpetrator, victim, or both. Alcohol is a depressant. After consumption, alcohol causes the body's systems to slow down. Often, feelings of drunkenness are associated with elation and happiness, but other feelings of anger or depression can arise. Balance, judgment, and coordination are also negatively affected. One of the most significant short-term side effects of alcohol is reduced inhibition. Reduced inhibitions can lead to an increase in sexual behavior. In men Low to moderate alcohol consumption is shown to have a protective effect for men's erectile function. Several reviews and meta-analyses of the existing literature show that low to moderate alcohol consumption significantly decreases the risk of erectile dysfunction. Men's sexual behaviors can be affected dramatically by high alcohol consumption. Both chronic and acute alcohol consumption have been shown in most studies (but not all) to inhibit testosterone production in the testes. This is believed to be caused by the metabolism of alcohol reducing the NAD+/NADH ratio both in the liver and the testes; since the synthesis of testosterone requires NAD+, this tends to reduce testosterone production. As testosterone is critical for libido and physical arousal, alcohol tends to have deleterious effects on male sexual performance. Studies indicate that increasing levels of alcohol intoxication produce a significant degradation in male masturbatory effectiveness (MME), assessed by measuring blood alcohol concentration (BAC) and ejaculation latency. Alcohol intoxication can decrease sexual arousal, decrease the pleasurability and intensity of orgasm, and increase difficulty in attaining orgasm. In women In women, the reported effects of alcohol on libido are mixed. Some women report that alcohol increases sexual arousal and desire; however, some studies show that alcohol lowers the physiological signs of arousal. A 2016 study found that alcohol negatively affected how positive the sexual experience was in both men and women. Studies have shown that acute alcohol consumption tends to cause increased levels of testosterone and estradiol. Since testosterone controls in part the strength of libido in women, this could be a physiological cause for an increased interest in sex. Also, because women have a higher percentage of body fat and less water in their bodies, alcohol can have a quicker, more severe impact. Women's bodies take longer to process alcohol; more precisely, a woman's body often takes one-third longer to eliminate the substance. Sexual behavior in women under the influence of alcohol is also different from men's. Studies have shown that increased BAC is associated with longer orgasmic latencies and decreased intensity of orgasm. Some women report greater sexual arousal with increased alcohol consumption, as well as increased sensations of pleasure during orgasm. Because the male ejaculatory response is outwardly visible and easily measured, while female orgasmic response is not, orgasmic response in women must be measured more intimately. In studies of the female orgasm under the influence of alcohol, orgasmic latencies were measured using a vaginal photoplethysmograph, which essentially measures vaginal blood volume.
Psychologically, alcohol has also played a role in sexual behavior. It has been reported that women who were intoxicated believed they were more sexually aroused than before consumption of alcohol. This psychological effect contrasts with the physiological effects measured, but refers back to the loss of inhibitions because of alcohol. Often, alcohol can influence the capacity for a woman to feel more relaxed and, in turn, be more sexual. Alcohol may be considered by some women to be a sexual disinhibitor. Risky sexual behavior Some studies have made a connection between hookup culture and substance use. Most students said that their hookups occurred after drinking alcohol. Freitas stated that in her study, the relationships between drinking and the party scene and between alcohol and hookup culture were "impossible to miss". Studies suggest that the degree of alcoholic intoxication in young people directly correlates with the level of risky behavior, such as having multiple sex partners. In 2018, the first study of its kind found that combining alcohol with caffeinated energy drinks is linked with casual, risky sex among college-age adults. Sexually transmitted infections and unintended pregnancy Alcohol intoxication is associated with an increased risk that people will become involved in risky sexual behaviors, such as unprotected sex. Both men and women reported higher intentions to avoid using a condom when they were intoxicated by alcohol. Coitus interruptus, also known as withdrawal, pulling out or the pull-out method, is a method of birth control during penetrative sexual intercourse, whereby the penis is withdrawn from a vagina or anus prior to ejaculation so that the ejaculate (semen) may be directed away in an effort to avoid insemination. Coitus interruptus carries a risk of STIs and unintended pregnancy. This risk is especially high during alcohol intoxication because lowered sexual inhibition can make it difficult to withdraw in time. Women with unintended pregnancies are more likely to smoke tobacco, drink alcohol during pregnancy, and binge drink during pregnancy, which results in poorer health outcomes. (See also: fetal alcohol spectrum disorder) Sexual assaults Rape is any sexual activity that occurs without the freely given consent of one of the parties involved. This includes alcohol-facilitated sexual assault, which is considered rape in most if not all jurisdictions, and non-consensual condom removal, which is criminalized in some countries. A 2008 study found that rapists typically consumed relatively high amounts of alcohol and infrequently used condoms during assaults, which was linked to a significant increase in STI transmission. This also increases the risk of pregnancy from rape for female victims. Some people turn to drugs or alcohol to cope with emotional trauma after a rape; use of these during pregnancy can harm the fetus. Alcohol-facilitated sexual assault One of the most common date rape drugs is alcohol, administered either surreptitiously or consumed voluntarily, rendering the victim unable to make informed decisions or give consent. The perpetrator then facilitates sexual assault or rape, a crime known as alcohol- or drug-facilitated sexual assault (DFSA). Many perpetrators use alcohol because their victims often drink it willingly, and can be encouraged to drink enough to lose inhibitions or consciousness.
However, sex with an unconscious victim is considered rape in most if not all jurisdictions, and some assailants have committed "rapes of convenience", whereby they have assaulted a victim after he or she had become unconscious from drinking too much. The risk of individuals either experiencing or perpetrating sexual violence and risky sexual behavior increases with alcohol abuse and with the consumption of caffeinated alcoholic drinks. Non-consensual condom removal Non-consensual condom removal, or "stealthing", is the practice of a person removing a condom during sexual intercourse without consent, when their sex partner has only consented to condom-protected sex. Purposefully damaging a condom before or during intercourse may also be referred to as stealthing, regardless of who damaged the condom. Consuming alcohol can be risky in sexual situations. It can impair judgment and make it difficult for both people to give or receive informed sexual consent. However, a history of sexual aggression and alcohol intoxication are factors associated with an increased risk of men employing non-consensual condom removal and engaging in sexually aggressive behavior with female partners. Wartime sexual violence The use of alcohol is a documented factor in wartime sexual violence. For example, rape during the liberation of Serbia was committed by Soviet Red Army soldiers against women during their advance to Berlin in late 1944 and early 1945 during World War II. Serbian journalist Vuk Perišić said about the rapes: "The rapes were extremely brutal, under the influence of alcohol and usually by a group of soldiers. The Soviet soldiers did not pay attention to the fact that Serbia was their ally, and there is no doubt that the Soviet high command tacitly approved the rape." While there was no codified international law specifically prohibiting rape during World War II, customary international law principles that condemned violence against civilians already existed. These principles formed the basis for the development of more explicit laws after the war, including the Nuremberg Principles established in 1950. "Beer goggles" A study published in 2003 supported the beer goggles hypothesis; however, it also found another explanation: regular drinkers tend to have personality traits that make them find people more attractive, whether or not they are under the influence of alcohol at the time. A 2009 study showed that while men found adult women (who were wearing makeup) more attractive after consuming alcohol, the alcohol did not interfere with their ability to determine a woman's age. A 2021 study found that bar patrons rated themselves as more attractive towards the end of the night, regardless of their level of intoxication, and that this effect had more to do with motivations to attract a mate. The "closing time effect" was tested in Danish bars, with researchers separating responses based on whether bar patrons had filled out their survey in the afternoon, evening, or night, and finding that people attending the bar at night rated themselves as more attractive than earlier visitors. See also Sex and drugs References Footnotes Sources Further reading Alcohol Human sexuality Sex and drugs
Alcohol and sex
Chemistry,Biology
1,940
52,093,689
https://en.wikipedia.org/wiki/Quadrans%20Vetus
The quadrans vetus ("old quadrant") is a medieval astronomical instrument. Three medieval examples survive: in the Museo Galileo in Florence, the Museum of the History of Science in Oxford, and the British Museum in London. There are two sights on one of the straight sides. The front carries the shadow square, the hour lines, and a mobile zodiacal cursor in its guide, to be positioned for the desired latitude. The back is inscribed with the zodiacal calendar. The instrument is lettered in Gothic characters. Designed to measure heights, distances, and depths, the instrument could also be used as a universal dial. A similar quadrant is documented in a drawing by Antonio da Sangallo the Younger (c. 1520?) at the Gabinetto dei Disegni e delle Stampe (Department of Drawings and Prints) of the Uffizi. References Bibliography Astronomical instruments
Quadrans Vetus
Astronomy
191
17,191,332
https://en.wikipedia.org/wiki/Vegetation%20and%20slope%20stability
Vegetation and slope stability are interrelated by the ability of the plant life growing on slopes to both promote and hinder the stability of the slope. The relationship is a complex combination of the type of soil, the rainfall regime, the plant species present, the slope aspect, and the steepness of the slope. Knowledge of the underlying slope stability as a function of the soil type, its age, horizon development, compaction, and other impacts is a major underlying aspect of understanding how vegetation can alter the stability of the slope. There are four major ways in which vegetation influences slope stability: wind throwing, the removal of water, the mass of vegetation (surcharge), and mechanical reinforcement by roots. Wind throwing Wind throw is the toppling of a tree due to the force of the wind; this exposes the root plate and adjacent soil beneath the tree and influences slope stability. Wind throw is a factor when considering one tree on a slope; however, it is of lesser importance when considering the general slope stability of a body of trees, as the wind forces involved represent a smaller percentage of the potential disturbing forces and the trees in the centre of the group are sheltered by those on the outside. Removal of water Vegetation influences slope stability by removing water through transpiration. Transpiration is the vaporisation of liquid water contained in plant tissue and its removal to the air as vapour. Water is drawn up from the roots and transported through the plant up to the leaves. The major effect of transpiration is the reduction of soil pore water pressures, which counteracts the loss of strength that occurs through wetting; this is most readily seen as a loss of moisture around trees. However, it is not easy to rely on tree and shrub roots to remove water from slopes and consequently help ensure slope stability. The ability to transpire in wet conditions is severely reduced, and therefore any increase in soil strength previously gained through evaporation and transpiration will be lost or significantly reduced; consequently, the effects of transpiration cannot be taken into account at these times. However, it can be assumed that the chance of slope failure following saturation by storm events or periods of extended rainfall will be lessened as a result of transpiration. Moreover, although changes in moisture content will affect the undrained shear strength, the effective shear stress parameters as commonly used in routine slope stability analysis are not directly influenced by changing moisture content, although the water pressures (suctions) used in the analysis will change. It is important to note that desiccation cracks can potentially be extended by vegetation in dry weather, promoting the deeper penetration of water to a potential slip plane and increased water pressure in the soil during wet periods. Nevertheless, these cracks will be filled by roots growing deeper into the soil as they follow the path of least resistance. Studies in Malaysia have shown that there is a significant relationship between root length density, soil water content and, ultimately, slope stability. Slopes with a high root density (due to dense vegetation on the surface) were less likely to undergo slope failure. This is because a high root length density results in low soil water content, which in turn results in an increase in shear strength and a decrease in soil permeability.
It is suggested that root length density and soil water level could be used as indicators of slope stability, and possibly to predict future slope failure. Transpiration is accentuated when the vegetation has an extensive root system and rapid transpiration continues throughout winter. The removal of water is also affected by the shading provided by vegetation. Shading helps prevent the desiccation of the soil, which results in shrinkage and cracking that allow the deep penetration of rain water. Plants need to have a high leaf to root ratio and the ability to persist through hot summer months in order to provide effective shading of the soil. The mass of vegetation The mass of vegetation is only likely to have an influence on slope stability when larger trees are growing on the slope. A tree of 30–50 m height is likely to have a loading of approximately 100–150 kN/m2. On a slope with a potential rotational failure, the larger trees should be planted at the toe, as this could increase the factor of safety by 10%. However, if the trees are planted at the top of the slope, this could reduce the factor of safety by 10%. Each slope stability situation should be considered independently for the vegetation involved. Transpiration will reduce the weight of the slope as moisture is lost. This can be significant on slopes of marginal stability. If larger trees are removed from the toe area of a slope, there will be both a reduction in soil strength due to the loss of evapotranspiration effects and a reduction in applied loading, which may result in temporary suctions in clay soils that could lead to softening as the available water is drawn in to compensate for the suction forces. This is similar to the recognised softening of overconsolidated clays due to the relaxation of overburden pressures when placed in the top layers of an embankment from a deep cutting. Mechanical reinforcement of roots Roots reinforce the soil by growing across failure planes, by root columns acting as piles, and by limiting surface erosion. Root growth across failure planes When roots grow across the plane of potential failure, there is an increase in shear strength as the roots bind the soil particles. The roots anchor the unstable surficial soil into the deeper stable layers or bedrock. This most readily occurs when there is rapid deep growth (1.5 m deep) of roots which last for more than two years. However, the strength exerted by roots generally only extends down to while most failures occur between soil depth. Root reinforcement model The root reinforcement model treats rooted soil like reinforced earth: root elongation across a potential slip plane produces a tensile root force, which is transferred to the soil by cohesive and frictional contacts between the root and the soil. Tensile root strength contribution and pull out resistance The pull out resistance of a root is the measured resistance of the root structure to being pulled out of the ground, and is likely to be only a little less than the tensile strength of the root, which is the root's resistance to breaking as measured in the laboratory. In cases where no pull out data are available, the tensile strength data may be used as a rough guide to the maximum pull out resistance available. The tensile root strength of a range of diameters over a range of species has been tested in the laboratory and has been found to be approximately 5–60 MN/m2. In order for the root to actually enhance slope stability, the root must have sufficient embedment and adhesion with the soil.
The way that roots interact with the soil is intricate, but for engineering purposes the available force contributions may be measured with in situ pull out tests. Root morphology and modes of failure The root length and the type of root branching affect the way that root failure occurs. Three different modes of failure have been identified in hawthorn roots, which relate to the root-soil relationship shown in the shape of the roots and the shape of the failure curve. Roots which have no branches tend to fail in tension and pull straight out of the ground with minimal resistance. Roots which have multiple branches generally fail in stages as each branch breaks inside the soil. These roots can then be separated into two different groups: 1) those that initially reach their maximum peak force and then maintain a high force that progressively decreases as the root branches fail after significant strain, and 2) those that break with increasingly applied force. In a number of tests, considerable adhesion between a segment of the root and the soil can be measured prior to the root eventually slipping out of the soil mass. Type A failure Roots that do not have branches generally fail in tension and pull straight out of the ground with only minimal resistance. The root reaches its maximum pullout resistance, then rapidly fails at a weak point. The root easily slips out of the soil due to its gradual tapering (the progressive decrease in root diameter along its length), which means that as the root is pulled out it moves through a space that is larger than its diameter and consequently forms no further bonds or interaction with the surrounding soil. Type B failure Type B failure occurs when branched roots initially reach their maximum peak resistance and then sustain a high resistance which slowly reduces as the branches of the roots fail after significant strain. In some tests, considerable adhesion between a section of the root and the soil mass can be measured before the root eventually slips out. Forked roots require a greater force to be pulled out, as the cavity above the fork is thinner than the root which is trying to move through it; this can result in deformation of the soil as the root moves through it. Type C failure Roots that have multiple branches or forked branches can also undergo tensile failure, but predominantly fail in stages as each branch breaks within the soil. These roots break with increasingly applied force in stages, in the form of stepped peaks corresponding to the progressive breaking of roots of greater diameters. The root progressively releases its bonds with the soil until final tensile failure. In some cases, when the root has a sinusoidal shape with many small rootlets along its length, the root reaches its maximum pull out resistance on straightening and then breaks at the weakest point; however, at this point the root is not pulled out of the soil, as it adheres to and interacts with the soil, producing a residual strength. If pulling stopped at this point, the root would give increased strength to the soil. However, if the root is completely pulled out of the ground then there is no further interaction with the soil and therefore no increase in soil strength is provided.
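The stabilizing contribution of root reinforcement described above can be made concrete with a minimal sketch. It combines a Wu-type added root cohesion term with the standard infinite-slope stability model; the 1.2 coefficient, the model choice and all parameter values are common literature assumptions, not figures from this article.

```python
# Sketch: added root cohesion (Wu-type model) in an infinite-slope factor of
# safety calculation, for a dry slope. All values are illustrative assumptions.
import math

def root_cohesion(tensile_strength_kpa: float, root_area_ratio: float) -> float:
    """Wu/Waldron-type added cohesion: c_r = 1.2 * T_r * (A_r / A)."""
    return 1.2 * tensile_strength_kpa * root_area_ratio

def infinite_slope_fs(c_kpa, c_root_kpa, unit_weight, depth, slope_deg, phi_deg):
    """FS = (c' + c_r + gamma*z*cos^2(b)*tan(phi)) / (gamma*z*sin(b)*cos(b))."""
    b, p = math.radians(slope_deg), math.radians(phi_deg)
    resisting = c_kpa + c_root_kpa + unit_weight * depth * math.cos(b)**2 * math.tan(p)
    driving = unit_weight * depth * math.sin(b) * math.cos(b)
    return resisting / driving

c_r = root_cohesion(tensile_strength_kpa=20_000.0, root_area_ratio=0.0005)  # ~12 kPa
print(infinite_slope_fs(2.0, 0.0, 18.0, 1.5, 35.0, 30.0))   # bare slope, FS < 1
print(infinite_slope_fs(2.0, c_r, 18.0, 1.5, 35.0, 30.0))   # rooted slope, FS > 1
```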
Factors which affect root pull out resistance Studies have shown that the pull out resistance of hawthorn and oak roots is affected by intra-species differences, inter-species variations and root size (diameter) in a similar way as root tensile strength varies (as measured in the laboratory). In a pull out test the applied force acts across a larger root area (involving multiple branches and longer lengths) than the short (approximately 150 mm) length of root used in tensile strength tests. In a pull out test the root is likely to fail at weak points such as branching points, nodes or damaged areas. The studies also showed that there is a positive correlation between maximum root pull out resistance and root diameter for hawthorn and oak roots. Smaller diameter roots had a lower pull out resistance or breaking force than the larger diameter roots. Root columns acting as piles Trees and root columns can prevent shallow mass movement by acting as piles when there is buttressing and soil arching through a woody deep root system which has multiple sinker roots with embedded stems and laterals. Limitation of surface erosion Vegetation can also control water erosion by limiting surface processes such as sheet wash and overland flow. Vegetation can contribute considerably to slope stability through enhancing soil cohesion. This cohesion is dependent upon the morphological characteristics of root systems and the tensile strength of single roots. There is considerable evidence of fine roots resisting surface erosion. The role of fine roots in general slope stability is not fully understood. Fine roots are thought to help keep the surface soil together and prevent surface erosion. The fine root network may have an apparent enhanced cohesion comparable to geosynthetic mesh elements. The limitation of surface erosion processes is particularly apparent in areas of shrub and grass where the fine root distribution is consistent and clearly defined; however, cohesion is generally limited to the top of soil. See also Landslide Mudslide Surface runoff Tillage erosion References Sources British Broadcasting Corporation 2007, Biology, viewed 10 June 2007, www.bbc.co.uk/.../gcsebitesize/img/bi05006.gif Greenwood, J.; Norris, J. & Wint, J. 2007, ‘Discussion: Assessing the contribution of vegetation to slope stability’, Proceedings of the Institution of Civil Engineers, vol. 160, no. 1, pp. 51–53. INTBAU 2007, International network for traditional building, architecture and urbanism, viewed 2 June 2007, www.intbau.org/Images/Scarano/scarano3.580.jpg Selby, M. 1993, Hillslope materials and processes, Oxford University Press, Oxford, Great Britain. Watson, A. & Marden, M. 2004, Root tensile strength as an indicator of the performance of indigenous riparian plants – how do they rank?, Landcare Research, Lincoln, NZ. Geomorphology Physical geography Geological processes Horticulture Soil erosion Ecological restoration Environmental terminology
Vegetation and slope stability
Chemistry,Engineering
2,532
11,157,713
https://en.wikipedia.org/wiki/Cortland%20Street%20Drawbridge
The Cortland Street Drawbridge (originally known as the Clybourn Place drawbridge) over the Chicago River is the original Chicago-style fixed-trunnion bascule bridge, designed by John Ericson and Edward Wilmann. When it opened in 1902, on Chicago's north side, it was the first such bridge built in the United States. The bridge was a major advance in American movable bridge engineering, and was the prototype for over 50 additional bridges in Chicago alone. The bridge was designated as an ASCE Civil Engineering Landmark in 1981, and a Chicago Landmark in 1991. Design This is the bridge type for which Chicago engineers are most well known. The trunnion bascule has two bridge leaves, pivoted on the opposing riverbanks, which rotate on large trunnion bearings and are raised by large counterweights that offset the weight of the leaves. The type takes its name from the French word bascule, meaning seesaw, in reference to the counterweighted, seesaw-like action of the leaves. Unlike most of the subsequent bascule bridges of Chicago, the gear rack that moves this bridge is visible above the roadway, on the curved arcs at each end of the superstructure. History The bridge was built under the supervision of Mayor Carter Harrison, Jr., and Frederick W. Blocki, the Commissioner of Public Works. This is the second bridge built on this site; it replaced a swing bridge with a mid-river pier supporting the swing span. The current bridge eliminated the need for the mid-river pier, allowing more room in the shipping channel. While the machinery of the current bridge is intact, the bridge is no longer operable and the leaves are clamped together. The bridge was traversed by streetcars of Line 73-Armitage Avenue, in addition to other traffic, until February 25, 1951. The following day the bridge was temporarily closed for repairs and the Chicago Transit Authority (CTA) substituted buses for streetcars east of the bridge, subsequently abandoning the remainder of the Armitage Avenue streetcar line in June. Electric trolley buses also crossed the bridge, starting on February 1, 1953, when the CTA replaced the motor buses on route 73. Trolley buses operated until October 15, 1966, when the agency converted the route to diesel buses. The Cortland Street Bridge is currently used for two-way vehicle, pedestrian, and bicycle traffic. See also List of bridges documented by the Historic American Engineering Record in Illinois References External links Bridges in Chicago Bridges completed in 1902 Bascule bridges in Illinois Historic American Engineering Record in Illinois Historic Civil Engineering Landmarks Chicago Landmarks Road bridges in Illinois Steel bridges in the United States
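The counterweight principle behind the trunnion bascule described above amounts to a moment balance about the trunnion. The sketch below illustrates it with invented figures; none of the numbers are measurements of this bridge.

```python
# Sketch: balance principle of a counterweighted bascule leaf about its
# trunnion. All figures are invented for illustration only.
leaf_mass = 400_000.0      # kg, movable leaf
leaf_cg_arm = 12.0         # m, leaf centre of gravity to trunnion
counterweight_arm = 4.0    # m, counterweight centre of gravity to trunnion

# Balancing moments about the trunnion: m_cw * r_cw = m_leaf * r_leaf,
# so only a small machinery force is needed to rotate the balanced leaf.
counterweight_mass = leaf_mass * leaf_cg_arm / counterweight_arm
print(f"counterweight needed: {counterweight_mass / 1000:.0f} tonnes")  # ~1200 t
```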
Cortland Street Drawbridge
Engineering
529
31,422,551
https://en.wikipedia.org/wiki/Youla%E2%80%93Kucera%20parametrization
In control theory the Youla–Kučera parametrization (also simply known as Youla parametrization) is a formula that describes all possible stabilizing feedback controllers for a given plant P, as a function of a single parameter Q. Details The YK parametrization is a general result. It is a fundamental result of control theory: it launched an entirely new area of research and found application, among others, in optimal and robust control. The engineering significance of the YK formula is that if one wants to find a stabilizing controller that meets some additional criterion, one can adjust the parameter Q such that the desired criterion is met. For ease of understanding, and as suggested by Kučera, it is best described for three increasingly general kinds of plant. Stable SISO plant Let P(s) be the transfer function of a stable single-input single-output (SISO) system, and let Ω be the set of stable and proper rational functions of s. Then the set of all proper stabilizing controllers for the plant P(s) can be written as
C(s) = Q(s) / (1 − P(s)Q(s)), Q(s) ∈ Ω,
where Q(s) is an arbitrary proper and stable function of s. It can be said that Q(s) parametrizes all stabilizing controllers for the plant P(s). General SISO plant Consider a general plant with a transfer function P(s). The transfer function can be factorized as P(s) = N(s)/M(s), where M(s) and N(s) are stable and proper functions of s. Now solve the Bézout identity
N(s)X(s) + M(s)Y(s) = 1,
where the unknowns X(s) and Y(s) must also be proper and stable. Once proper and stable X(s) and Y(s) are found, one stabilizing controller is C(s) = X(s)/Y(s). With one stabilizing controller at hand, all stabilizing controllers can be defined using a parameter Q(s) that is proper and stable. The set of all stabilizing controllers is
C(s) = (X(s) + M(s)Q(s)) / (Y(s) − N(s)Q(s)), Q(s) ∈ Ω.
General MIMO plant In a multiple-input multiple-output (MIMO) system, consider a transfer matrix P(s). It can be factorized using right coprime factors, P(s) = N(s)M(s)⁻¹, or left coprime factors, P(s) = M̃(s)⁻¹Ñ(s). The factors must be proper, stable and doubly coprime, which ensures that the system is controllable and observable. This can be written as a generalized Bézout identity of the form
[ Ỹ X̃ ; −Ñ M̃ ] [ M −X ; N Y ] = [ I 0 ; 0 I ].
After finding stable and proper X, Y, X̃, Ỹ, the set of all stabilizing controllers, assuming negative feedback, can be defined from either the right or the left factorization:
K(s) = (X + M Q)(Y − N Q)⁻¹ = (Ỹ − Q Ñ)⁻¹(X̃ + Q M̃),
where Q(s) is an arbitrary stable and proper parameter. Equivalently, let P(s) be the transfer function of the plant and let K0(s) be one stabilizing controller, with right coprime factorizations P = N M⁻¹ and K0 = U V⁻¹ normalized so that M̃ V + Ñ U = I. Then all stabilizing controllers can be written as
K(s) = (U + M Q)(V − N Q)⁻¹,
where Q is stable and proper. References D. C. Youla, H. A. Jabr, J. J. Bongiorno: Modern Wiener-Hopf design of optimal controllers: part II, IEEE Trans. Automat. Contr., AC-21 (1976), pp. 319–338. V. Kučera: Stability of discrete linear feedback systems. In: Proceedings of the 6th IFAC World Congress, Boston, MA, USA, (1975). C. A. Desoer, R.-W. Liu, J. Murray, R. Saeks: Feedback system design: the fractional representation approach to analysis and synthesis. IEEE Trans. Automat. Contr., AC-25 (3), (1980), pp. 399–412. John Doyle, Bruce Francis, Allen Tannenbaum: Feedback control theory. (1990). Control theory
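A minimal numerical check of the stable-plant formula above, using sympy; the plant P and parameter Q are illustrative assumptions, not examples taken from the article.

```python
# Sketch: numerical check of the stable-plant Youla formula using sympy.
from sympy import symbols, cancel, fraction, Poly, roots

s = symbols('s')
P = 1 / (s + 2)      # stable SISO plant (pole at s = -2)
Q = 3 / (s + 5)      # any proper, stable Youla parameter

C = cancel(Q / (1 - P * Q))        # C(s) = Q / (1 - P*Q)

# Closed-loop transfer function under negative feedback: T = P*C / (1 + P*C).
T = cancel(P * C / (1 + P * C))
num, den = fraction(T)

# For a stable plant this reduces to T = P*Q, so the closed-loop poles are
# just the (stable) poles of P and Q, whatever stable proper Q is chosen.
print('T =', T)                                   # 3/(s**2 + 7*s + 10)
print('closed-loop poles:', roots(Poly(den, s)))  # {-2: 1, -5: 1} -> stable
```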
Youla–Kucera parametrization
Mathematics
702
8,087,219
https://en.wikipedia.org/wiki/65%2C537
65537 is the integer after 65536 and before 65538. In mathematics 65537 is the largest known prime number of the form 2^(2^n) + 1 (here n = 4), and is most likely the last one. Therefore, a regular polygon with 65537 sides is constructible with compass and unmarked straightedge. Johann Gustav Hermes gave the first explicit construction of this polygon. In number theory, primes of this form are known as Fermat primes, named after the mathematician Pierre de Fermat. The only known prime Fermat numbers are 3, 5, 17, 257 and 65537. In 1732, Leonhard Euler found that the next Fermat number is composite: 2^(2^5) + 1 = 4294967297 = 641 × 6700417. In 1880, Fortuné Landry showed that 2^(2^6) + 1 is also composite. 65537 is also the 17th Jacobsthal–Lucas number, and currently the largest known integer n for which the number is a probable prime. Applications 65537 is commonly used as a public exponent in the RSA cryptosystem. Because it is the Fermat number with n = 4, the common shorthand is "F4". This value was used in RSA mainly for historical reasons; early raw RSA implementations (without proper padding) were vulnerable to very small exponents, while use of high exponents was computationally expensive with no advantage to security (assuming proper padding). 65537 is also used as the modulus in some Lehmer random number generators, such as the one used by the ZX Spectrum, which ensures that any seed value will be coprime to it (vital to ensure the maximum period) while also allowing efficient reduction by the modulus using a bit shift and subtract. References Integers
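The Fermat form and the RSA role of 65537 can both be checked with a short sketch; the primes p and q below are toy values chosen purely for illustration.

```python
# Sketch: checking the Fermat form of 65537 and its use as an RSA public
# exponent. The primes p and q are toy values, far too small for real use.
from math import gcd
from sympy import isprime

F4 = 2**(2**4) + 1
assert F4 == 65537 and isprime(F4)

# The five known Fermat primes: F0..F4.
print([2**(2**n) + 1 for n in range(5)])   # [3, 5, 17, 257, 65537]

# e = 65537 is a valid RSA exponent whenever gcd(e, (p-1)*(q-1)) == 1.
p, q = 101, 113
e = 65537
assert gcd(e, (p - 1) * (q - 1)) == 1
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)
m = 42
assert pow(pow(m, e, p * q), d, p * q) == m   # encrypt/decrypt round-trip
```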
65,537
Mathematics
336
450,663
https://en.wikipedia.org/wiki/Pizza-box%20form%20factor
In computing, a pizza box is a style of case design for desktop computers or network switches. Pizza box cases tend to be wide and flat, normally in height, resembling pizza delivery boxes and thus the name. This is in contrast to a tower system, whose case height is much greater than its width, giving an "upright" appearance. In modern usage, the term "pizza box" is normally reserved for very flat cases with height no more than , while those taller than 2 inches are referred to as desktop cases instead. The common setup of a pizza box system is to have the display monitor placed directly on top of the case, which serves as a podium to elevate the monitor toward the user's eye level, and to have other peripherals placed in front of and alongside the case. Occasionally, the pizza box may be laid on its side in a tower-like orientation. History With the tagline "Who just fit mainframe power in a pizza box?" in a 1991 advertisement for its Aviion Unix server products, Data General was an early adopter of the expression in advertising, returning to the theme on later occasions. However, such usage was preceded by other occurrences of the expression in print, notably Time's 1989 coverage of Sun Microsystems and its SPARCstation 1 product. The expression was reportedly already in use as early as 1987 to refer to the profile of an expansion unit for the Digital Equipment Corporation VAXmate. Most computers generally referred to as pizza box systems were high-end desktop systems such as Sun's workstations of the 1990s. Other notable examples have been among the highest-performing desktop computers of their generations, including the SGI Indy, the NeXTstation, and the Amiga 1000. The pizza box form factor was also seen in budget and lower-end lines such as the Macintosh LC family, which was popular in the education market. The original SPARCstation 1 design included an expansion bus technology, SBus, expressly designed for the form factor; expansion cards were small, especially in comparison to other expansion cards in use at the time such as VMEbus, and were mounted horizontally instead of vertically. PC-compatible computers in this type of case typically use the PCI expansion bus and are usually either limited to one or two horizontally placed expansion cards or require special low-profile expansion cards, shorter than the regular PCI cards used in other PCs. The density of computing power and stackability of pizza box systems also made them attractive for use in data centers. Systems originally designed for desktop use were placed on shelves inside 19-inch racks, sometimes requiring that part of their cases be cut off for them to fit. Since the late 1990s, pizza boxes have been a common form factor in office cubicles, schools, data centers and industrial applications, where desktop space, rack room and density are critical. Servers in this form factor, as well as higher-end Ethernet switches, are now designed for rack mounting. Rack-mount 1U computers come in all types of configurations and depths. The pizza box form factor for smaller personal systems and thin clients remains in use well into the 21st century, though it is increasingly being superseded by laptops, nettops or all-in-one PC designs that embed the already size-reduced computer in the keyboard or display monitor. See also Desktop form factor References External links Pizza box in the Jargon File Computer enclosure Desktop computers Networking hardware
Pizza-box form factor
Engineering
695
73,254,746
https://en.wikipedia.org/wiki/Lycopersene
Lycopersene is a carotenoid found in Corynebacterium, Lemna minor, and Zea mays. It has the chemical formula C40H66. It has antioxidant, antimutagenic, antiproliferative, cytotoxic, antibacterial and pesticidal effects. References Carotenoids Organic pigments
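As a quick arithmetic check on the formula, the sketch below computes the molar mass of C40H66 from standard atomic weights; the weights are textbook periodic-table values, not data from this article.

```python
# Sketch: molar mass of lycopersene from its formula C40H66, using standard
# atomic weights (assumed from the periodic table, not from this article).
atomic_weight = {"C": 12.011, "H": 1.008}
formula = {"C": 40, "H": 66}
molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # ~546.97 g/mol
```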
Lycopersene
Chemistry,Biology
83
66,583,710
https://en.wikipedia.org/wiki/Rubidium%20acetate
Rubidium acetate is a rubidium salt produced by reacting rubidium metal, rubidium carbonate, or rubidium hydroxide with acetic acid. Like other acetates, it is soluble in water. Uses Rubidium acetate is used as a catalyst for the polymerization of silanol-terminated siloxane oligomers. References Rubidium compounds Acetates Catalysts
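The three preparative routes just mentioned correspond to the following balanced equations; these are inferred from standard acid-base chemistry rather than quoted from a source:
2 Rb + 2 CH3COOH → 2 CH3COORb + H2
Rb2CO3 + 2 CH3COOH → 2 CH3COORb + H2O + CO2
RbOH + CH3COOH → CH3COORb + H2O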
Rubidium acetate
Chemistry
82
795,170
https://en.wikipedia.org/wiki/Noise%20floor
In signal theory, the noise floor is the measure of the signal created from the sum of all the noise sources and unwanted signals within a measurement system, where noise is defined as any signal other than the one being monitored. In radio communication and electronics, this may include thermal noise, black-body radiation and cosmic noise, as well as atmospheric noise from distant thunderstorms and similar sources, and any other unwanted man-made signals, sometimes referred to as incidental noise. If the dominant noise is generated within the measuring equipment (for example by a receiver with a poor noise figure), then this is an example of an instrumentation noise floor, as opposed to a physical noise floor. These terms are not always clearly defined, and are sometimes confused. Avoiding interference between electrical systems is the distinct subject of electromagnetic compatibility (EMC). In a measurement system such as a seismograph, the physical noise floor may be set by incidental noise, which may include nearby foot traffic or a nearby road. The noise floor limits the smallest measurement that can be taken with certainty, since any measured amplitude can on average be no less than the noise floor. A common way to lower the noise floor in electronic systems is to cool the system to reduce thermal noise, when this is the major noise source. In special circumstances, the noise floor can also be artificially lowered with digital signal processing techniques. Signals that are below the noise floor can be detected by using techniques of spread spectrum communications, whereby a signal of a particular information bandwidth is deliberately spread in the frequency domain, resulting in a signal with a wider occupied bandwidth. Every additional 6.02 dB of noise floor corresponds to a 1-bit reduction of the effective number of bits of an analog-to-digital converter or digital-to-analog converter. See also Atmospheric noise Blackbody Cosmic noise Noise Noise (electronic) Signal-to-noise ratio Thermal noise References Noise (electronics) Acoustics Sound
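Two standard noise-floor figures can be made concrete with a short sketch: the thermal (kTB) floor at a reference temperature, and the rule stated above that each 6.02 dB added to the floor costs a converter about one effective bit. The constants and the ENOB formula are textbook conventions, assumed here rather than taken from this article.

```python
# Sketch: two textbook noise-floor calculations (standard conventions assumed).
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 290.0                 # reference temperature, K
B = 1.0                   # bandwidth, Hz

# Thermal noise floor kTB expressed in dBm: about -174 dBm in a 1 Hz bandwidth.
noise_dbm = 10 * math.log10(k * T * B / 1e-3)
print(f"thermal floor: {noise_dbm:.1f} dBm/Hz")

# Effective number of bits from SNR: each 6.02 dB of added noise costs one bit.
def enob(snr_db: float) -> float:
    return (snr_db - 1.76) / 6.02

print(enob(74.0), enob(74.0 - 6.02))  # raising the floor 6.02 dB drops ~1 bit
```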
Noise floor
Physics
393
50,364,516
https://en.wikipedia.org/wiki/Optomyography
Optomyography (OMG) was proposed in 2015 as a technique that could be used to monitor muscular activity. It is possible to use OMG for the same applications where electromyography (EMG) and mechanomyography (MMG) are used. However, OMG offers a superior signal-to-noise ratio and improved robustness against the disturbing factors and limitations of EMG and MMG. The basic principle of OMG is to use active near-infrared optical sensors to measure variations in the signals reflected from the surface of the skin as the muscles below and around the measured skin spot are activated. Applications A glasses-based optomyography device was patented for measuring facial expressions and emotional responses, particularly for mental health monitoring. Generating proper control signals is the most important task in controlling any kind of prosthesis, computer game or other system that contains a human-computer interaction unit or module. For this purpose, surface electromyographic (s-EMG) and mechanomyographic (MMG) signals are measured during muscular activities and used not only for monitoring and assessing these activities, but also to help provide efficient rehabilitation treatment for patients with disabilities, as well as in constructing and controlling sophisticated prostheses for various types of amputees and disabilities. However, while the existing s-EMG and MMG based systems have compelling benefits, many engineering challenges still remain unsolved, especially with regard to the sensory control system. References Biomedical engineering Biological engineering Signal processing
Optomyography
Technology,Engineering,Biology
330
546,674
https://en.wikipedia.org/wiki/126%20%28number%29
126 (one hundred [and] twenty-six) is the natural number following 125 and preceding 127. In mathematics As the binomial coefficient C(9,4), 126 is a central binomial coefficient, and in Pascal's triangle it is a pentatope number. 126 = 5^3 + 1^3 is a sum of two cubes, and since 125 + 1 is σ3(5), 126 is the fifth value of the sum of cubed divisors function. 126 is the fifth S-perfect Granville number, and the third such number that is not a perfect number. Also, it is known to be the smallest Granville number with three distinct prime factors, and perhaps the only such Granville number. 126 is a pentagonal pyramidal number and a decagonal number. 126 is also the number of different ways to partition a decagon into even polygons by diagonals, and the number of crossing points among the diagonals of a regular nonagon. There are exactly 126 binary strings of length seven that are not repetitions of a shorter string, and 126 different semigroups on four elements (up to isomorphism and reversal). There are exactly 126 positive integers that are not solutions of the equation where a, b, c, and d must themselves all be positive integers. 126 is the number of root vectors of the simple Lie group E7. In physics 126 is the seventh magic number in nuclear physics. For each of these numbers, 2, 8, 20, 28, 50, 82, and 126, an atomic nucleus with this many protons is or is predicted to be more stable than for other numbers. Thus, although there has been no experimental discovery of element 126, tentatively called unbihexium, it is predicted to belong to an island of stability that might allow it to exist with a long enough half-life that its existence could be detected. References Integers
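Several of the stated properties of 126 are small enough to verify directly; the sketch below checks a few of them using standard definitions.

```python
# Sketch: verifying several stated properties of 126 (standard definitions
# assumed; nothing here goes beyond the article's claims).
from math import comb

assert comb(9, 4) == 126                    # binomial coefficient C(9,4)
assert comb(6 + 3, 4) == 126                # 6th pentatope number, C(n+3, 4)
assert 5**3 + 1**3 == 126                   # sum of two cubes
assert sum(d**3 for d in (1, 5)) == 126     # sigma_3(5): cubed divisors of 5

# 126 binary strings of length 7 that are not repetitions of a shorter string.
def primitive(s: str) -> bool:
    n = len(s)
    return all(s != s[:d] * (n // d) for d in range(1, n) if n % d == 0)

assert sum(primitive(format(i, '07b')) for i in range(2**7)) == 126
```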
126 (number)
Mathematics
379
4,484,806
https://en.wikipedia.org/wiki/De%20re%20metallica
De re metallica (Latin for On the Nature of Metals [Minerals]) is a book in Latin cataloguing the state of the art of mining, refining, and smelting metals, published posthumously in 1556, a year after the author's death, owing to a delay in preparing the woodcuts for the text. The author was Georg Bauer, whose pen name was the Latinized Georgius Agricola ("Bauer" and "Agricola" being respectively the German and Latin words for "farmer"). The book remained the authoritative text on mining for 180 years after its publication. It was also an important chemistry text for the period and is significant in the history of chemistry. Mining was typically left to professionals, craftsmen and experts who were not eager to share their knowledge. Much experiential knowledge had been accumulated over the course of time. This knowledge was successively handed down orally within a small group of technicians and mining overseers. In the Middle Ages these people held the same leading role as the master builders of the great cathedrals, or perhaps also alchemists. It was a small, cosmopolitan elite within which existing knowledge was passed on and further developed but not shared with the outside world. Only a few writers from that time wrote anything about mining itself. Partly, that was because this knowledge was very difficult to access. Most writers also found it simply not worth the effort to write about it. Only in the Renaissance did this perception begin to change. With improved transport and the invention of the printing press, knowledge spread much more easily and faster than before. In 1500, the first printed book dedicated to mining engineering, called the Nützlich Bergbüchleyn ("The Useful Little Mining Book") by Ulrich Rülein von Calw, was published. The most important works in this genre were, however, the twelve books of De re metallica by Georgius Agricola, published in 1556. Agricola had spent nine years in the Bohemian town of Joachimsthal (now Jáchymov in the Czech Republic). After Joachimsthal, he spent the rest of his life in Chemnitz in Saxony, another prominent mining town in the Ore Mountains. The book was greatly influential, and for more than a century after it was published, De re metallica remained a standard treatise used throughout Europe. The German mining technology it portrayed was acknowledged as the most advanced at the time, and the metallic wealth produced in German mining districts was the envy of many other European nations. The book was reprinted in a number of Latin editions, as well as in German and Italian translations. Publication in Latin meant that it could be read by any educated European of the time. The 292 superb woodcut illustrations and the detailed descriptions of machinery made it a practical reference for those wishing to replicate the latest in mining technology. The drawings from which the woodcuts were made were done by an artist in Joachimsthal named Blasius Weffring or Basilius Wefring. The woodcuts were then prepared in the Froben publishing house by Hans Rudolf Manuel Deutsch and Zacharias Specklin. In 1912, the first English translation of De re metallica was privately published in London by subscription. The translators were the married couple Herbert Hoover, a mining engineer (and later President of the United States), and Lou Henry Hoover, a geologist and Latinist. The translation is notable not only for its clarity of language, but for the extensive footnotes, which detail the classical references to mining and metals.
Subsequent translations into other languages, including German, owe much to the Hoover translations, as their footnotes detail their difficulties with Agricola's invention of several hundred Latin expressions to cover Medieval German mining and milling terms that were unknown to classical Latin. The most important translation—outside English—was the one published by the Deutsches Museum in Munich. Summary The book consists of a preface and twelve chapters, labelled books I to XII, without titles. It also has numerous woodcuts that provide annotated diagrams illustrating equipment and processes described in the text. Preface Agricola addresses the book to prominent German aristocrats, the most important of whom were Maurice, Elector of Saxony and his brother Augustus, who were his main patrons. He then describes the works of ancient and contemporary writers on mining and metallurgy, the chief ancient source being Pliny the Elder. Agricola describes several books contemporary to him, the chief being a booklet by Calbus of Freiberg in German. The works of alchemists are then described. Agricola does not reject the idea of alchemy, but notes that alchemical writings are obscure and that we do not read of any of the masters who became rich. He then describes fraudulent alchemists, who deserve the death penalty. Agricola completes his introduction by explaining that, since no other author has described the art of metals completely, he has written this work, setting forth his scheme for twelve books. Finally, he again directly addresses his audience of German princes, explaining the wealth that can be gained from this art. Book I: Arguments for and against this art This book consists of the arguments used against the art and Agricola's counter arguments. He explains that mining and prospecting are not just a matter of luck and hard work; there is specialized knowledge that must be learned. A miner should have knowledge of philosophy, medicine, astronomy, surveying, arithmetic, architecture, drawing and law, though few are masters of the whole craft and most are specialists. This section is full of classical references and shows Agricola's classical education to its fullest. The arguments range from philosophical objections to gold and silver as being intrinsically worthless, to the danger of mining to its workers and its destruction of the areas in which it is carried out. He argues that without metals, no other activity such as architecture or agriculture are possible. The dangers to miners are dismissed, noting that most deaths and injuries are caused by carelessness, and other occupations are hazardous too. Clearing forests for timber is advantageous as the land can be farmed. Mines tend to be in mountains and gloomy valleys with little economic value. The loss of food from the forests destroyed can be replaced by purchase from profits, and metals have been placed underground by God and man is right to extract and use them. Finally, Agricola argues that mining is an honorable and profitable occupation. Book II: The miner and a discourse on the finding of veins This book describes the miner and the finding of veins. Agricola assumes that his audience is the mine owner, or an investor in mines. He advises owners to live at the mine and to appoint good deputies. It is recommended to buy shares in mines that have not started to produce as well as existing mines to balance the risks. The next section of this book recommends areas where miners should search. 
These are generally mountains with wood available for fuel and a good supply of water. A navigable river can be used to bring fuel, but only gold or gemstones can be mined if no fuel is available. The roads must be good and the area healthy. Agricola describes searching streams for metals and gems that have been washed from the veins. He also suggests looking for exposed veins and describes the effects of metals on the overlying vegetation. He recommends trenching to investigate veins beneath the surface. He then describes dowsing with a forked twig, although he rejects the method himself. The passage is the first written description of how dowsing is done. Finally he comments on the practice of naming veins or shafts. Book III: Veins and stringers and seams in the rocks This book is a description of the various types of veins that can be found. There are 30 illustrations of different forms of these veins, forming the majority of Book III. Agricola also describes a compass to determine the direction of veins and mentions that some writers claim that veins lying in certain directions are richer, although he provides counter-examples. He also mentions the theory that the sun draws the metals in veins to the surface, although he himself doubts this. Finally he explains that gold is not generated in the beds of streams and rivers, and that east-west streams are not inherently more productive than others. Gold occurs in streams because it is torn from veins by the water. Book IV: Delimiting veins and the functions of mining officials This book describes how an official, the Bergmeister, is in charge of mining. He marks out the land into areas called meers when a vein is discovered. The rest of the book covers the laws of mining. There is a section on how the mine can be divided into shares. The roles of various other officials in regulating mines and taxing the production are stated. The shifts of the miners are fixed. The chief trades in the mine are listed and are regulated by both the Bergmeister and their foremen. Book V: The digging of ore and the surveyor's art This book covers underground mining and surveying. When a vein below ground is to be exploited, a shaft is begun and a wooden shed with a windlass is placed above it. The tunnel dug at the bottom follows the vein and is just big enough for a man. The entire vein should be removed. Sometimes the tunnel eventually connects with a tunnel mouth in a hillside. Stringers and cross veins should be explored with cross tunnels or shafts when they occur. Agricola next describes that gold, silver, copper and mercury can be found as native metals, the others very rarely. Gold and silver ores are described in detail. Agricola then states that it is rarely worthwhile digging for other metals unless the ores are rich. Gems are found in some mines, but rarely have their own veins; lodestone is found in iron mines and emery in silver mines. Various minerals and colours of earths can be used to give indications of the presence of metal ores. The actual mine working varies with the hardness of the rock: the softest is worked with a pick and requires shoring with wood, while the hardest is usually broken with fire. Iron wedges, hammers and crowbars are used to break other rocks. Noxious gases and the ingress of water are described. Methods for lining tunnels and shafts with timber are described. The book concludes with a long treatise on surveying, showing the instruments required and techniques for determining the course of veins and tunnels.
Surveyors allow veins to be followed, but also prevent mines from removing ore from other claims and stop mine workings from breaking into other workings. Book VI: The miners' tools and machines This book is extensively illustrated and describes the tools and machinery associated with mining. Hand tools and different sorts of buckets, wheelbarrows and trucks on wooden plankways are described. Packs for horses and sledges are used to carry loads above ground. Agricola then provides details of various kinds of machines for lifting weights. Some of these are man-powered and some powered by up to four horses or by waterwheels. Horizontal drive shafts along tunnels allow lifting in shafts not directly connected to the surface. If this is not possible, treadmills are installed underground. Instead of lifting weights, similar machines use chains of buckets to lift water. Agricola also describes several designs of piston force pumps, which are either man- or animal-powered, or powered by water wheels. Because these pumps can only lift water about 24 feet, batteries of pumps are required for the deepest mines. Water pipe designs are also covered in this section. Designs of wind scoop for ventilating shafts, or forced air using fans or bellows, are also described. Finally, ladders and lifts using wicker cages are used to get miners up and down shafts. Book VII: On the assaying of ore This book deals with assaying techniques. Various designs of furnaces are detailed. Then cupellation, crucibles, scorifiers and muffle furnaces are described. The correct method of preparation of the cupels is covered in detail, with beech ashes being preferred. Various other additives and formulae are described, but Agricola does not judge between them. Triangular crucibles and scorifiers are made of fatty clay with a temper of ground-up crucibles or bricks. Agricola then describes in detail which substances should be added as fluxes, as well as lead for smelting or assaying. The choice is made by the colour with which the ore burns, which gives an indication of the metals present. The lead should be silver-free or be assayed separately. The prepared ore is wrapped in paper, placed on a scorifier and then placed under a muffle covered in burning charcoal in the furnace. The cupel should be heated at the same time. The scorifier is removed and the metal transferred to the cupel. Alternatively the ore can be smelted in a triangular crucible, and then have lead mixed with it when it is added to the cupel. The cupel is placed in the furnace and the copper is separated into the lead, which forms litharge in the cupel, leaving the noble metal. Gold and silver are parted using an aqua which is probably nitric acid. Agricola describes precautions for ensuring the amount of lead is correct and also describes the amalgamation of gold with mercury. Assay techniques for base metals such as tin are described, as well as techniques for alloys such as silver-tin. The use of a touchstone to assay gold and silver is discussed. Finally, detailed arithmetical examples show the calculations needed to give the yield from the assay. Book VIII: Roasting, crushing and washing ore In this book Agricola provides a detailed account of the beneficiation of different ores. He describes the processes involved in ore sorting, roasting and crushing. The use of water for washing ores is discussed in great detail, e.g. the use of launders and washing tables.
Several different types of machinery for crushing ore and washing it are illustrated, and different techniques for different metals and different regions are described. Book IX: Methods of smelting ores This book covers smelting, which Agricola describes as perfecting the metal by fire. The design of furnaces is first explained. These are very similar for smelting different metals, constructed of brick or soft stone with a brick front and mechanically driven bellows at the rear. At the front is a pit called the fore-hearth to receive the metal. The furnace is charged with beneficiated ore and crushed charcoal and lit. In some gold and silver smelting, a great deal of slag is produced because of the relative poverty of the ore, and the tap hole has to be opened at various times to remove the different slag materials. When the furnace is ready, the fore-hearth is filled with molten lead into which the furnace is tapped. In other furnaces the smelting can be continuous, and lead is placed into the furnace if there is none in the ore. The slag is skimmed off the top of the metal as it is tapped. The lead containing the gold is separated by cupellation; the metal-rich slags are re-smelted. Other smelting processes are similar, but lead is not added. Agricola also describes making crucible steel and distilling mercury and bismuth in this book. Book X: Separating silver from gold and lead from gold or silver In this book Agricola describes parting silver from gold using acids. He also describes heating with antimony sulphide (stibium), which would give silver sulphide and a mixture of gold and antimony. The gold and silver can then be recovered with cupellation. Gold can also be parted using salts or using mercury. Large-scale cupellation using a cupellation hearth is also covered in this book. Book XI: Separating silver from copper This book describes separating silver from copper or iron. This is achieved by adding large amounts of lead at a temperature just above the melting point of lead; the lead liquates out, carrying the silver with it. The process needs to be repeated several times, and the lead and silver can then be separated by cupellation. Book XII: Manufacturing salt, soda, alum, vitriol, sulphur, bitumen, and glass This book describes the preparation of what Agricola calls "juices": salt, soda, nitre, alum, vitriol, saltpetre, sulphur and bitumen. Finally, glass making is covered. Agricola seems less secure about this process: he is not clear about making glass from the raw ingredients, but clearer about remelting glass to make objects. Prof. Philippus Bechius (1521–1560), a friend of Agricola, translated De re metallica libri XII into German. It was published with the German title Vom Bergkwerck XII Bücher in 1557. The Hoovers describe the translation as "a wretched work, by one who knew nothing of the science," but it, like the Latin original, saw further editions. In 1563 Agricola's publisher, Froben and Bischoff ("Hieronimo Frobenio et Nicolao Episcopio") in Basel, also published an Italian translation by Michelangelo Florio. Publication history Although Agricola died in 1555, publication was delayed until the completion of the extensive and detailed woodcuts one year after his death. A German translation was published in 1557 and a second Latin edition appeared in 1561. A version in Spanish, though not a mere translation, was produced by Bernardo Pérez de Vargas in 1569. This was translated into French as Traité singulier de metallique in 1743.
In 1912, the first English translation of De Re Metallica was privately published in London by subscription. The translators and editors were Herbert Hoover, a mining engineer (and later President of the United States), and his wife, Lou Henry Hoover, a geologist and Latinist. The translation is notable not only for its clarity of language, but for its extensive footnotes, which detail the classical references to mining and metals, such as the Naturalis Historia of Pliny the Elder; the history of mining law in England, France, and the German states; safety in mines, including historical safety; and the minerals known at the time Agricola wrote De Re Metallica. No expense was spared for this edition: in its typography, fine paper and binding, quality of reproduced images, and vellum covers, the publisher attempted to match the extraordinarily high standards of the sixteenth-century original. As a consequence, copies of this 1912 edition are now both rare and valuable. The translation has since been reprinted by Dover Books. Subsequent translations into other languages, including German, owe much to the Hoover translation, whose footnotes detail the Hoovers' difficulties with Agricola's invention of several hundred Latin expressions to cover medieval German mining and milling terms unknown to classical Latin. Editions Agricola, Georg. De re metallica. 1st ed. Basel: Hieronymus Froben & Nicolaus Episcopius, 1556. Agricola, Georg. Vom Bergkwerck. Translated by Philipp Bech. Basel: Hieronymus Froben & Nicolaus Episcopius, 1557. Agricola, Georg. De re metallica. 2nd ed. Basel: Hieronymus Froben & Nicolaus Episcopius, 1561. Agricola, Georg. Opera di Giorgio Agricola de L'Arte de Metalli. Basel: Hieronymus Froben & Nicolaus Episcopius, 1563. Agricola, Georg. De re metallica. Basel: Ludwig König, 1621. Agricola, Georg. Bergwerck Buch. Translated by Philipp Bech. Basel: Ludwig König, 1621. Agricola, Georg. De Re Metallica. Basel: Emanuel König, 1657. Agricola, Georg. De Re Metallica. Translated by Herbert Clark Hoover and Lou Henry Hoover. 1st English ed. London: The Mining Magazine, 1912. Agricola, Georg. Zwölf Bücher vom Berg- und Hüttenwesen. Edited by Carl Schiffner and others. Translated by Carl Schiffner. Berlin: VDI-Verlag, 1928. Agricola, Georg. De Re Metallica. Translated by Herbert Clark Hoover and Lou Henry Hoover. New York: Dover Publications, 1950. Reprint of the 1912 edition. Agricola, Georg. De Re Metallica. Translated by Herbert Clark Hoover and Lou Henry Hoover. New York: Dover Publications, 1986. Reprint of the 1950 reprint of the 1912 edition. See also De la pirotechnia Naturalis Historia Pliny the Elder Scientific literature Theophrastus References External links Original text of De re metallica (Latin version) All the illustrations of the 1561 edition in high resolution 1556 books History of metallurgy Works about mining Books by Georgius Agricola 16th-century books in Latin Herbert Hoover
De re metallica
Chemistry,Materials_science
4,331
20,847,621
https://en.wikipedia.org/wiki/Universal%20Darwinism
Universal Darwinism, also known as generalized Darwinism, universal selection theory, or Darwinian metaphysics, is a variety of approaches that extend the theory of Darwinism beyond its original domain of biological evolution on Earth. Universal Darwinism aims to formulate a generalized version of the mechanisms of variation, selection and heredity proposed by Charles Darwin, so that they can apply to explain evolution in a wide variety of other domains, including psychology, linguistics, economics, culture, medicine, computer science, and physics. Basic mechanisms At the most fundamental level, Charles Darwin's theory of evolution states that organisms evolve and adapt to their environment by an iterative process. This process can be conceived as an evolutionary algorithm that searches the space of possible forms (the fitness landscape) for the ones that are best adapted. The process has three components: variation of a given form or template. This is usually (but not necessarily) considered to be blind or random, and happens typically by mutation or recombination. selection of the fittest variants, i.e. those that are best suited to survive and reproduce in their given environment. The unfit variants are eliminated. heredity or retention, meaning that the features of the fit variants are retained and passed on, e.g. in offspring. After those fit variants are retained, they can again undergo variation, either directly or in their offspring, starting a new round of the iteration. The overall mechanism is similar to the problem-solving procedures of trial-and-error or generate-and-test: evolution can be seen as searching for the best solution to the problem of how to survive and reproduce by generating new trials, testing how well they perform, eliminating the failures, and retaining the successes. The generalization made in "universal" Darwinism is to replace "organism" by any recognizable pattern, phenomenon, or system. The first requirement is that the pattern can "survive" (be maintained or retained) long enough or "reproduce" (replicate, be copied) sufficiently frequently so as not to disappear immediately. This is the heredity component: the information in the pattern must be retained or passed on. The second requirement is that, during survival and reproduction, variation (small changes in the pattern) can occur. The final requirement is that there is a selective "preference", so that certain variants tend to survive or reproduce "better" than others. If these conditions are met, then, by the logic of natural selection, the pattern will evolve towards better-adapted forms. Examples of patterns that have been postulated to undergo variation and selection, and thus adaptation, are genes, ideas (memes), theories, technologies, neurons and their connections, words, computer programs, firms, antibodies, institutions, law and judicial systems, quantum states and even whole universes. History and development Conceptually, "evolutionary theorizing about cultural, social, and economic phenomena" preceded Darwin, but still lacked the concept of natural selection. Darwin himself, together with subsequent 19th-century thinkers such as Herbert Spencer, Thorstein Veblen, James Mark Baldwin and William James, was quick to apply the idea of selection to other domains, such as language, psychology, society, and culture.
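The variation–selection–heredity loop described above under Basic mechanisms is substrate-neutral, which is what makes the generalization possible. A minimal sketch in Python of the generic algorithm — the bit-string "patterns", the mutation rate, and the toy fitness function are invented here purely for illustration and belong to no particular theory discussed in this article:

import random

def evolve(fitness, length=20, pop_size=50, generations=100, mut_rate=0.05):
    """Generic variation-selection-retention loop over bit-string patterns."""
    # Start from an arbitrary population of patterns.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half of the population is retained.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Heredity with variation: copies of survivors, blindly mutated.
        children = [[1 - bit if random.random() < mut_rate else bit
                     for bit in parent] for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness landscape: patterns with more 1-bits count as better adapted.
best = evolve(fitness=sum)
print(sum(best), "of", len(best), "bits set in the fittest pattern found")

Substituting genes, memes, firms or antibodies for the bit-strings changes nothing in the loop itself; only the pattern, the variation operator and the selection criterion are domain-specific.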
However, this evolutionary tradition was largely banished from the social sciences in the beginning of the 20th century, in part because of the bad reputation of social Darwinism, an attempt to use Darwinism to justify social inequality. Starting in the 1950s, Donald T. Campbell was one of the first and most influential authors to revive the tradition, and to formulate a generalized Darwinian algorithm directly applicable to phenomena outside of biology. In this, he was inspired by William Ross Ashby's view of self-organization and intelligence as fundamental processes of selection. His aim was to explain the development of science and other forms of knowledge by focusing on the variation and selection of ideas and theories, thus laying the basis for the domain of evolutionary epistemology. In the 1990s, Campbell's formulation of the mechanism of "blind-variation-and-selective-retention" (BVSR) was further developed and extended to other domains under the labels of "universal selection theory" or "universal selectionism" by his disciples Gary Cziko, Mark Bickhard, and Francis Heylighen. Richard Dawkins may have been the first to coin the term "universal Darwinism", in 1983, to describe his conjecture that any possible life forms existing outside the solar system would evolve by natural selection just as they do on Earth. This conjecture was also presented in 1983 in a paper entitled "The Darwinian Dynamic" that dealt with the evolution of order in living systems and certain nonliving physical systems. It was suggested "that 'life', wherever it might exist in the universe, evolves according to the same dynamical law", termed the Darwinian dynamic. Henry Plotkin, in his 1997 book on Darwin machines, makes the link between universal Darwinism and Campbell's evolutionary epistemology. Susan Blackmore, in her 1999 book The Meme Machine, devotes a chapter titled 'Universal Darwinism' to a discussion of the applicability of the Darwinian process to a wide range of scientific subject matter. The philosopher of mind Daniel Dennett, in his 1995 book Darwin's Dangerous Idea, developed the idea of a Darwinian process, involving variation, selection and retention, as a generic algorithm that is substrate-neutral and could be applied to many fields of knowledge outside of biology. He described the idea of natural selection as a "universal acid" that cannot be contained in any vessel, as it seeps through the walls and spreads ever further, touching and transforming ever more domains. He notes in particular the field of memetics in the social sciences. In agreement with Dennett's prediction, over the past decades the Darwinian perspective has spread ever more widely, in particular across the social sciences as the foundation for numerous schools of study including memetics, evolutionary economics, evolutionary psychology, evolutionary anthropology, neural Darwinism, and evolutionary linguistics. Researchers have postulated Darwinian processes as operating at the foundations of physics, cosmology and chemistry via the theories of quantum Darwinism, observation selection effects and cosmological natural selection. Similar mechanisms are extensively applied in computer science in the domains of genetic algorithms and evolutionary computation, which develop solutions to complex problems via a process of variation and selection. Author D. B. Kelley has formulated one of the most all-encompassing approaches to universal Darwinism.
In his 2013 book The Origin of Phenomena, he holds that natural selection involves not the preservation of favored races in the struggle for life, as shown by Darwin, but the preservation of favored systems in contention for existence. The fundamental mechanism behind all such stability and evolution is therefore what Kelley calls "survival of the fittest systems." Because all systems are cyclical, the Darwinian processes of iteration, variation and selection are operative not only among species but among all natural phenomena, both large and small. Kelley thus maintains that, especially since the Big Bang, the universe has evolved from a highly chaotic state to one that is now highly ordered, with many stable, naturally selected phenomena. Examples of universal Darwinist theories The following approaches can all be seen as exemplifying a generalization of Darwinian ideas outside of their original domain of biology. These "Darwinian extensions" can be grouped into two categories, depending on whether they discuss implications of biological (genetic) evolution in other disciplines (e.g. medicine or psychology), or discuss processes of variation and selection of entities other than genes (e.g. computer programs, firms or ideas). However, no strict separation is possible, since most of these approaches (e.g. in sociology, psychology and linguistics) consider both genetic and non-genetic (e.g. cultural) aspects of evolution, as well as the interactions between them (see e.g. gene-culture coevolution). Gene-based Darwinian extensions Evolutionary psychology assumes that our emotions, preferences and cognitive mechanisms are the product of natural selection Evolutionary educational psychology applies evolutionary psychology to education Evolutionary developmental psychology applies evolutionary psychology to cognitive development Darwinian Happiness applies evolutionary psychology to understand the optimal conditions for human well-being Darwinian literary studies tries to understand the characters and plots of narrative on the basis of evolutionary psychology Evolutionary aesthetics applies evolutionary psychology to explain our sense of beauty, especially for landscapes and human bodies Evolutionary musicology applies evolutionary aesthetics to music Evolutionary anthropology studies the evolution of human beings Sociobiology proposes that social systems in animals and humans are the product of Darwinian biological evolution Human behavioral ecology investigates how human behavior has become adapted to its environment via variation and selection Evolutionary medicine investigates the origin of diseases by looking at the evolution both of the human body and of its parasites Paleolithic diet proposes that the healthiest nutrition is the one to which our hunter-gatherer ancestors adapted over millions of years Paleolithic lifestyle generalizes the paleolithic diet to include exercise, behavior and exposure to the environment Molecular evolution studies evolution at the level of DNA, RNA and proteins Biosocial criminology studies crime using several different approaches that include genetics and evolutionary psychology Evolutionary linguistics studies the evolution of language, biologically as well as culturally Other Darwinian extensions Quantum Darwinism sees the emergence of classical states in physics as a natural selection of the most stable quantum properties Cosmological natural selection hypothesizes that universes reproduce and are selected for having fundamental constants
that maximize "fitness" Complex adaptive systems models the dynamics of complex systems in part on the basis of the variation and selection of its components Evolutionary archaeology is a Darwinian approach to the cultural evolution of tools Evolutionary computation is a Darwinian approach to the generation of adapted computer programs Genetic algorithms, a subset of evolutionary computation, models variation by "genetic" operators (mutation and recombination) Evolutionary robotics applies Darwinian algorithms to the design of autonomous robots Artificial life uses Darwinian algorithms to let organism-like computer agents evolve in a software simulation Evolutionary art uses variation and selection to produce works of art Evolutionary music does the same for works of music Clonal selection theory sees the creation of adapted antibodies in the immune system as a process of variation and selection Neural Darwinism proposes that neurons and their synapses are selectively pruned during brain development Evolutionary epistemology of theories assumes that scientific theories develop through variation and selection Memetics is a theory of the variation, transmission, and selection of cultural items, such as ideas, fashions, and traditions Dual inheritance theory is a framework for cultural evolution developed largely independently of memetics Cultural selection theory is a theory of cultural evolution related to memetics Cultural materialism is an anthropological approach that contends that the physical and material conditions of life shape human culture Environmental determinism is a social science theory that proposes that the environment ultimately determines human culture. Evolutionary economics studies the variation and selection of economic phenomena, such as commodities, technologies, institutions and organizations. Evolutionary ethics investigates the origin of morality, and uses Darwinian foundations to formulate ethical values Big History is the science-based narrative integrating the history of the universe, earth, life, and humanity. Scholars consider Universal Darwinism to be a possible unifying theme for the discipline. Books Campbell, John. Universal Darwinism: the path of knowledge. Cziko, Gary. Without Miracles: Universal Selection Theory and the Second Darwinian Revolution. Hodgson, Geoffrey Martin; Knudsen, Thorbjorn. Darwin's Conjecture: The Search for General Principles of Social and Economic Evolution. Kelley, D. B. The Origin of Everything via Universal Selection, or the Preservation of Favored Systems in Contention for Existence. Plotkin, Henry. Evolutionary Worlds without End. Plotkin, Henry. Darwin Machines and the Nature of Knowledge. Dennett, Daniel. Darwin's Dangerous Idea. References External links UniversalSelection.com Darwinism Evolutionary biology Evolution
Universal Darwinism
Biology
2,399
24,145,794
https://en.wikipedia.org/wiki/C23H34O4
The molecular formula C23H34O4 may refer to: Androstenediol diacetate Calcitroic acid Digitoxigenin Prebediolone acetate Rostafuroxin Testosterone diacetate Molecular formulas
C23H34O4
Physics,Chemistry
49
72,436,518
https://en.wikipedia.org/wiki/Quiet%20and%20loud%20aliens
The concept of quiet and loud aliens is used in the modelling of hypotheses for the prevalence of extraterrestrial intelligence, particularly in the context of the Fermi paradox. Hypothetical "loud" aliens expand their sphere of influence rapidly in a highly detectable way; hypothetical "quiet" aliens are hard or impossible to detect. A special type of loud alien civilization is the "grabby" civilization, which also inhibits the development of other technological civilizations within its sphere of influence. See also Anthropic principle Dark forest hypothesis Search for extraterrestrial intelligence References Astrobiology Fermi paradox Search for extraterrestrial intelligence
Quiet and loud aliens
Astronomy,Biology
127
56,201,820
https://en.wikipedia.org/wiki/Self-hosting%20%28web%20services%29
Self-hosting is the practice of running and maintaining a website or service using a private web server, instead of using a service outside of the administrator's own control. Self-hosting allows users to have more control over their data, privacy, and computing infrastructure, as well as potentially saving costs and improving their skills. History The practice of self-hosting web services became more feasible with the development of cloud computing and virtualization technologies, which enabled users to run their own servers on remote hardware or virtual machines. The first public cloud service, Amazon Web Services (AWS), was launched in 2006, offering Simple Storage Service (S3) and Elastic Compute Cloud (EC2) as its initial products. Self-hosting web services became more popular with the rise of free and open-source software projects that provide alternatives to various web-based services and applications, such as file storage, password management, media streaming, home automation, and more. There is also a sizeable community around self-hosting, made up of hobbyists, technology professionals and privacy-conscious individuals. Benefits Some of the benefits of self-hosting are: The user has complete control over their data and can decide how and where it is hosted. The user can customize the site design and functionality according to their preferences and needs. The user can potentially save money by using a lower-cost hosting service or combining multiple services on one server. The user can improve their skills and knowledge by learning how to set up and manage their own server and services. The user can avoid relying on third-party providers that may have privacy issues, security breaches, outages, or changes in policies. Challenges Some of the challenges of self-hosting are: The user has to take responsibility for maintaining and updating their server and services, which may require technical skills and time. The user has to ensure that their server and services are secure and compliant with relevant laws and regulations. The user has to deal with potential issues such as hardware failures, network problems or power outages. The user may have to find reliable and affordable hosting providers that offer the features and resources they need. The user has to ensure that the server is adequately protected from denial-of-service (DoS) attacks and other security threats. Examples There are many examples of self-hosted services and applications that can replace or complement web-based ones, such as: Bitwarden - A password manager that stores all passwords in an encrypted vault Home Assistant - Software for home automation that puts local control and privacy first Nextcloud - A suite of client-server software for creating and using file hosting services See also Cloud computing Decentralized web Dedicated hosting service On-premises software Web hosting service References External links Awesome-Selfhosted - List of network and web services which can be self-hosted Self Hosted - A podcast about self-hosting Self-Hosting Guide Internet hosting Internet terminology Web hosting
Self-hosting (web services)
Technology
599
19,613,365
https://en.wikipedia.org/wiki/Beno%20Eckmann
Beno Eckmann (31 March 1917 – 25 November 2008) was a Swiss mathematician who made contributions to algebraic topology, homological algebra, group theory, and differential geometry. Life Born to a Jewish family in Bern, Eckmann received his master's degree from Eidgenössische Technische Hochschule Zürich (ETH) in 1939. Later, he studied there under Heinz Hopf, obtaining his Ph.D. in 1941. His dissertation, on homotopy theory, was jointly supervised by Heinz Hopf and Ferdinand Gonseth. In 1942 he obtained a lecturer position at the University of Lausanne. He became an extraordinary professor there before, in 1948, taking a full professorship at ETH Zurich, where he remained until his 1984 retirement. He was also President of the Swiss Mathematical Society for 1961–1962, and the founding head of the Mathematics Research Institute at ETH Zurich from 1964 until his retirement. Recognition A colloquium in honor of Eckmann's 60th birthday was held in Zurich in 1977. Eckmann was elected to the Academia Europaea in 1993. He was the 2008 recipient of the Albert Einstein Medal. He was awarded honorary degrees by the University of Fribourg in 1964, the École Polytechnique Fédérale de Lausanne in 1967, and the Technion – Israel Institute of Technology in 1983. Legacy Calabi–Eckmann manifolds, Eckmann–Hilton duality, the Eckmann–Hilton argument, and the Eckmann–Shapiro lemma are named after Eckmann. Family Eckmann's son is mathematical physicist Jean-Pierre Eckmann. References External links 1917 births 2008 deaths People from Bern 20th-century Swiss mathematicians 21st-century Swiss mathematicians ETH Zurich alumni Academic staff of the University of Lausanne Academic staff of ETH Zurich Topologists Albert Einstein Medal recipients Members of Academia Europaea
Beno Eckmann
Mathematics
387
78,179,899
https://en.wikipedia.org/wiki/JNJ-37654032
JNJ-37654032 is a selective androgen receptor modulator (SARM) which was developed by Johnson & Johnson for the potential treatment of muscular atrophy but was never marketed. The drug is a nonsteroidal androgen receptor (AR) modulator with mixed agonistic (androgenic) and antagonistic (antiandrogenic) effects. In animals, it has shown full agonist-like effects in muscle, agonistic suppressive effects on follicle-stimulating hormone (FSH) secretion, and antagonistic or partially agonistic effects in the prostate. It was the lead compound of a novel benzimidazole series of SARMs, described as reminiscent of but distinct from the arylpropionamides (e.g., enobosarm). JNJ-37654032 did not advance past preclinical research and was never tested in humans. It was first described in the scientific literature by 2008. References Abandoned drugs Benzimidazoles Chloroarenes Experimental sex-hormone agents Trifluoromethyl compounds Selective androgen receptor modulators
JNJ-37654032
Chemistry
240
25,444,252
https://en.wikipedia.org/wiki/Molecular%20processor
A molecular processor is a processor that is based on a molecular platform rather than on an inorganic semiconductor in integrated circuit format. Current technology Molecular processors are in their infancy, and only a few currently exist. At present, a basic molecular processor is any biological or chemical system that uses a complementary DNA (cDNA) template to form a long-chain amino acid molecule. A key factor that differentiates molecular processors is "the ability to control output" of protein or peptide concentration as a function of time. Simple formation of a molecule remains the task of a chemical reaction, bioreactor or other polymerization technology. Current molecular processors take advantage of cellular processes to produce amino acid-based proteins and peptides. Forming a molecular processor currently involves integrating cDNA into the genome; the construct should not replicate and re-insert itself, or be defined as a virus after insertion. Current molecular processors are replication-incompetent and non-communicable, and cannot be transmitted from cell to cell, animal to animal or human to human. All must have a method of termination if implanted. A viable molecular processor is one that dominates cellular function by re-tasking or reassignment but does not terminate the cell. It will produce protein continuously or on demand, and must have a method to regulate dosage if it is to qualify as a "drug delivery" molecular processor. Potential applications range from up-regulation of functional CFTR in cystic fibrosis and hemoglobin in sickle cell anemia to angiogenesis in cardiovascular stenosis to account for protein deficiency (as used in gene therapy). Example A vector inserted to form a molecular processor is described in part. The objective was to promote angiogenesis (blood vessel formation) and improve cardiac vasculature. Vascular endothelial growth factor (VEGF) and enhanced green fluorescent protein (EGFP) cDNA was ligated to either side of an internal ribosome entry site (IRES) to allow in-line production of both the VEGF and EGFP proteins. After in vitro insertion and quantification of integrating units (IUs), the engineered cells produce a fluorescent marker and a chemotactic growth factor. In this instance, increased fluorescence of EGFP is used to show VEGF production in individual cells with active molecular processors. Production was exponential in nature and was regulated through the choice of integrating promoter, the number of integrated units (IUs) of molecular processors, and cell numbers. The molecular processor's efficacy was measured by FC/FACS, which indirectly quantifies VEGF through fluorescence intensity. Proof of functional molecular processing was quantified by ELISA, showing the effect of VEGF in chemotactic and angiogenesis models. The result was the directed assembly and coordination of endothelial cells into tubules, induced by the engineered cells acting on the endothelial cells. The research goes on to show implantation, with VEGF delivered at controlled dosage to promote revascularization, validating the control mechanisms of the molecular processor.
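To make the dose-control idea concrete, the scaling of output with integrated units and cell number can be sketched as a toy calculation in Python. This is an invented illustration, not a model from the study described above; the per-IU production rate and growth parameters below are arbitrary assumptions:

def vegf_output(ius_per_cell, initial_cells, hours,
                doubling_time_h=24.0, rate_per_iu=1.0e-6):
    """Toy model: cumulative output (arbitrary ng units) scales linearly with
    the IU count per cell and with an exponentially growing cell population."""
    total = 0.0
    cells = float(initial_cells)
    for _ in range(int(hours)):
        total += ius_per_cell * cells * rate_per_iu  # hourly production
        cells *= 2 ** (1.0 / doubling_time_h)        # exponential growth
    return total

# Doubling the IU count doubles the output; the growing cell population
# compounds it over time, which is why production appears exponential.
print(vegf_output(ius_per_cell=2, initial_cells=1e5, hours=72))
print(vegf_output(ius_per_cell=4, initial_cells=1e5, hours=72))

In a model of this kind, dosage can be regulated through any of the three factors the study names: promoter strength (the rate term), IU count, or cell number.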
See also Biocomputers Computational gene DNA computing List of emerging technologies Molecular electronics Organic semiconductor References External links CNN -Moving toward molecular chips Newscientist - Atomic Logic Softpedia - IBM Is Working on DNA-Based processors Biological engineering Molecular electronics Nanoelectronics DNA
Molecular processor
Chemistry,Materials_science,Engineering,Biology
688
5,220,340
https://en.wikipedia.org/wiki/Reeler
A reeler is a mouse mutant, so named because of its characteristic "reeling" gait. This is caused by the profound underdevelopment of the mouse's cerebellum, a segment of the brain responsible for locomotion. The mutation is autosomal and recessive, and prevents the typical cerebellar folia from forming. Cortical neurons are generated normally but are abnormally placed, resulting in disorganization of cortical laminar layers in the central nervous system. The cause is the lack of reelin, an extracellular matrix glycoprotein which, during corticogenesis, is secreted mainly by the Cajal–Retzius cells. In the reeler neocortex, cortical plate neurons are aligned in a practically inverted fashion ("outside-in"). In the ventricular zone of the cortex, fewer neurons have been found to have radial glial processes. In the dentate gyrus of the hippocampus, no characteristic radial glial scaffold is formed and no compact granule cell layer is established. The reeler mouse therefore presents a good model in which to investigate the mechanisms by which the precise neuronal network is established during development. Types of reelers There are two types of the reeler mutation: the Albany2 mutation (Reln-rl-Alb2) the Orleans mutation (Reln-rl-orl, or rl-orl), in which reelin lacks a C-terminal region and a part of the eighth reelin repeat. This hampers the secretion of the protein from the cell. In order to unravel the reelin signaling chain, attempts are made to cut the signal downstream of reelin, leaving reelin expression intact but creating the reeler phenotype, sometimes a partial phenotype, thus confirming the role of downstream molecules. Examples include: Double knockout of the VLDLR and ApoER2 receptors; Double knockout of the Src and Fyn kinases; A Cre-loxP recombination mouse model that lacks Crk and CrkL in most neurons, which was used to show that Crk/CrkL lie downstream of DAB1 in the reelin signaling pathway. Scrambler mouse Key pathological findings in the reeler brain structure Inversion of cortical layers. Subplate cells become abnormally located in the subpial zone above the cortical plate. This hampers their function in establishing the transient circuits between the incoming thalamic axons and layer IV cells of the cortical plate. Thalamic axons have to grow past the cortical plate to reach the mispositioned subplate cells in the so-called superplate and then turn back down to contact their appropriate targets. This creates a curious "looping" thalamocortical connection seen in the adult reeler brain. Dispersion of neurons within cortical layers. Decreased cerebellar size. Failure of the preplate to split. Failure to establish a distinct granule cell layer in the dentate gyrus. The normal dentate gyrus demonstrates a clear segregation of granule cells and hilar mossy cells, which are identified, respectively, by their expression of calbindin and calretinin. In the reeler DG, the two cell types intermingle. Impaired dendrite outgrowth. In one study, reeler mice were shown to have attenuated methamphetamine-induced hyperlocomotion, which was also reduced by a targeted disruption of reelin activity in wildtype mice. Reeler mice in the same study demonstrated a decrease in D1 and D2 receptor-mediated dopaminergic function together with reduced numbers of D1/D2 receptors. Heterozygous reeler mouse Heterozygous reeler mice, also known as HRM, while lacking the apparent phenotype seen in the homozygous reeler, also show some brain abnormalities due to the reelin deficit.
Heterozygous (rl/+) mice express reelin at 50% of wild-type levels and have grossly normal brains, but exhibit a progressive loss during aging of a neuronal target of reelin action, the Purkinje cells. According to one study, the mice have a reduced density of parvalbumin-containing interneurons in circumscribed regions of the striatum. Studies reveal a 16% deficit in the number of Purkinje cells in 3-month-old (+/rl) animals and a 24% deficit in 16-month-old animals; surprisingly, this deficit is present only in the (+/rl) males, while the females are spared. History of research The first mention of the reeler mouse mutation dates back to 1951. In later years, histopathological studies revealed that the reeler cerebellum is dramatically decreased in size and that the normal laminar organization found in several brain regions is disrupted (Hamburgh, 1960). In 1995, the RELN gene and the reelin protein were discovered at chromosome 7q22 by Tom Curran and colleagues. See also Shaking rat Kawasaki, which has reduced reelin expression due to missplicing of the reelin gene. Sticky mouse References External links Development of the Cerebral Cortex: III. The Reeler Mutation - by Paul J. Lombroso, M.D. Molecular neuroscience Molecular genetics Laboratory mouse strains Articles containing video clips Behavioural genetics
Reeler
Chemistry,Biology
1,141
15,158,793
https://en.wikipedia.org/wiki/Pittsburgh%20Life%20Sciences%20Greenhouse
Pittsburgh Life Sciences Greenhouse (PLSG) is an investment firm based in the South Side neighborhood of Pittsburgh, Pennsylvania that provides resources and tools to entrepreneurial life sciences enterprises in Pittsburgh and western Pennsylvania in order to advance research and patient care. History Since PLSG began operations in 2002, it has assisted more than 435 life sciences companies and has affected more than 10,000 jobs in western Pennsylvania. PLSG has provided 34 companies with office or laboratory space, and 14 have been relocated to Pittsburgh from outside the region. PLSG has invested over $20 million in 77 companies, which has leveraged over $1.5 billion in additional capital for the region. PLSG guides researchers, entrepreneurs and emerging companies through the challenges faced in the early stages of company development. It provides support to companies developing product and service innovations in biotechnology tools, diagnostics/screening, healthcare IT, medical devices and therapeutics. PLSG also helps more mature life science companies expand, by supporting new product and market developments and connecting them to investors. Pittsburgh Life Sciences Greenhouse grew out of an original plan known as BioVenture, developed by CMU and Pitt. The initiative received a major boost in 2001, when money from the state's settlement with the tobacco industry was pledged to create a life science greenhouse in Western Pennsylvania. In 2003, Pittsburgh Biomedical Corporation, a non-profit established in 1988 by the Pittsburgh Technology Council, consolidated with PLSG. Today, PLSG exists as a partnership between the Commonwealth of Pennsylvania, the University of Pittsburgh, Carnegie Mellon University, the University of Pittsburgh Medical Center and the regional foundation community. Its mission is to "create, nurture and help establish a globally dominant life sciences industry in western Pennsylvania." References External links Pittsburgh Life Sciences Greenhouse Video WQED OnQ feature on the Pittsburgh Life Sciences Greenhouse University of Pittsburgh Carnegie Mellon University University of Pittsburgh Medical Center Biotechnology Business incubators of the United States 2002 establishments in Pennsylvania Economy of Pittsburgh Life sciences industry
Pittsburgh Life Sciences Greenhouse
Biology
395