Dataset schema (one record per article):
id: int64, values 580 to 79M
url: string, lengths 31 to 175
text: string, lengths 9 to 245k
source: string, lengths 1 to 109
categories: string, 160 classes
token_count: int64, values 3 to 51.8k
1,280,758
https://en.wikipedia.org/wiki/Heim%20theory
Heim theory, first proposed publicly by German physicist Burkhard Heim in 1957, is an attempt to develop a theory of everything in theoretical physics. The theory claims to bridge some of the disagreements between quantum mechanics and general relativity. It has received little attention in the scientific literature and is regarded as being outside mainstream science, but has attracted some interest in popular and fringe media.

Development
Heim attempted to resolve incompatibilities between quantum theory and general relativity. To meet that goal, he developed a mathematical approach based on quantizing spacetime. Others have attempted to apply Heim theory to nonconventional space propulsion and faster-than-light concepts, as well as to the origin of dark matter. Heim claimed that his theory yields particle masses directly from fundamental physical constants and that the resulting masses are in agreement with experiment, but this claim has not been confirmed. Heim's theory is formulated mathematically in six or more dimensions and uses Heim's own version of difference equations.

References

External links
Chronological Overview of the Research of Burkhard Heim (5 pages, English translation by John Reed, Feb 2011)
Heim Theory Falsified. Next Big Future. 1 July 2011. This article posts John Reed's comments.
General Discussions. Heim Theory. The Physics Forum. 2013-03-26.
Heim Theory Translation. Borje Mansson and Anton Mueller. 2006.
Discussion about Burkhard Heim's Particle Structure Theory. Physforum. May 2011.

Faster-than-light travel Fringe physics Quantum gravity
Heim theory
Physics
318
27,397,276
https://en.wikipedia.org/wiki/Origin%20PC
Origin PC Corp. is a custom personal computer manufacturing company located in Miami, Florida. Founded by former Alienware employees in 2009, Origin PC assembles high-performance gaming and professional-use desktop and laptop computers from third-party components.

History
Soon after the acquisition of Alienware by Dell, former executives Kevin Wasielewski, Richard Cary, and Hector Penton formed Origin PC in Miami, Florida. The company states that the name Origin came from its intention to get back to the roots of building custom, high-performance computers for gamers and hardware enthusiasts. Origin PC's first products were the GENESIS desktop and the EON18 laptop. In 2014, Origin PC announced a new line of EVO series laptops. On January 7, 2014, at CES, Origin PC announced and launched the Genesis (full-tower) and Millennium (mid-tower) desktop cases. In July 2019, Corsair Components, Inc. announced its acquisition of Origin PC Corp. In February 2024, Corsair announced it would be shutting down Origin's Miami facility and relocating production to Atlanta; 55 employees were laid off as a result.

Hardware
Origin gaming laptops are based upon the Clevo whitebox notebook chassis.

See also
List of computer system manufacturers

References

External links

Computer companies established in 2009 Computer companies of the United States Computer hardware companies Computer enclosure companies Computer systems companies Manufacturing companies based in Miami Privately held companies based in Florida 2009 establishments in Florida Gaming computers 2019 mergers and acquisitions Corsair Gaming
Origin PC
Technology
308
71,131,359
https://en.wikipedia.org/wiki/White%20House%20Task%20Force%20to%20Address%20Online%20Harassment%20and%20Abuse
The White House Task Force to Address Online Harassment and Abuse is a United States task force whose stated function is to address and prevent online harassment and abuse. It particularly focuses on online harassment and abuse against LGBT people and women, who are disproportionately affected. The task force was launched on June 16, 2022, in an announcement made by Vice President Kamala Harris.

Reception
Conservatives and libertarians have criticized the task force, including former New York congresswoman Nan Hayworth, Media Research Center founder and CEO Brent Bozell, conservative commentator Matt Whitlock, and the libertarian organization Young Americans for Liberty. Some of them have accused the task force of being similar to the recently paused Disinformation Governance Board (DGB). Conservatives have also accused the task force of being designed to censor conservative speech.

References

2022 establishments in the United States 2022 in LGBTQ history 2022 in women's history Biden administration controversies Harassment and bullying Government agencies established in 2022 Freedom of speech in the United States Cyberbullying
White House Task Force to Address Online Harassment and Abuse
Biology
213
52,891,920
https://en.wikipedia.org/wiki/Regional%20associations%20of%20road%20authorities
This article lists the main regional associations for road authorities from around the world. Many of these are associated with the World Road Association.

Africa
The Association des Gestionnaires et Partenaires Africains de la Route (AGEPAR), or African Road Managers and Partners Association, is the association for road authorities predominantly in northern and western Africa.
The Association of Southern Africa National Road Agencies (ASANRA) is an association of national road agencies or authorities in the Southern African Development Community.

Asia and Australasia
The Road Engineering Association of Asia and Australasia was established in 1973 as a regional body to promote and advance the science and practice of road engineering and related professions.

Europe and Asia
The Baltic Roads Association was established for the cooperation of the Estonian, Latvian and Lithuanian road administrations.
The Conference of European Directors of Roads, or Conférence Européenne des Directeurs des Routes, is a Brussels-based organisation for the directors of national road authorities in Europe.
The Межправительственный совет дорожников (MSD), or Intergovernmental Council of Roads, is the road authority organisation of the Commonwealth of Independent States. MSD was founded in 1992 as the Interstate Council of Roads; in 1998 it was given intergovernmental organisation status. It assists in cooperation between member road administrations in the fields of design, construction, maintenance, and scientific and technological policy in the road sector.
The Nordic Road Association (NVF) was established in 1935. The founding members were Denmark, Finland, Iceland, Norway and Sweden; the Faroe Islands became a member in 1975.

North and South America
The Consejo de Directores de Carreteras de Iberia e Iberoamérica (DIRCAIBEA), or Board of Directors of Iberia and Latin America Roads, was created in 1995. Twenty-two countries have representation in DIRCAIBEA: the two Iberian countries, Spain and Portugal, and 20 countries of the Americas and the Caribbean: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Puerto Rico, the Dominican Republic, Uruguay and Venezuela.
The American Association of State Highway and Transportation Officials (AASHTO): although technically a national association of state authorities, AASHTO activities also include most Canadian provinces.

References

Civil engineering organizations Road authorities
Regional associations of road authorities
Engineering
503
67,033,210
https://en.wikipedia.org/wiki/Charles%20M.%20Sommerfield
Charles M. Sommerfield was a theoretical physicist and professor emeritus at Yale University. He is one of the namesakes of the Bogomol'nyi–Prasad–Sommerfield bound.

Biography
Sommerfield studied for his bachelor's degree at Brooklyn College and earned his Ph.D. under Julian Schwinger at Harvard University in 1957. He worked at Berkeley and Harvard for two years each before becoming a professor at Yale in 1961. He was a fellow of Trumbull College. He also held visiting positions at the Institute for Advanced Study and the University of Florida.

He was noted for his work on high-energy physics, and is one of the namesakes of the Bogomol'nyi–Prasad–Sommerfield bound, along with his graduate student M.K. Prasad. Another notable graduate student he supervised was Howard Georgi. Sommerfield was elected a fellow of the American Physical Society in 1968.

See also
Bogomol'nyi–Prasad–Sommerfield state

References

Yale University faculty 20th-century American physicists Particle physicists Harvard University alumni Brooklyn College alumni Fellows of the American Physical Society
Charles M. Sommerfield
Physics
230
5,522,291
https://en.wikipedia.org/wiki/Eric%20Brill
Eric Brill is a computer scientist specializing in natural language processing. He created the Brill tagger, a supervised part-of-speech tagger, and introduced the machine learning technique now known as transformation-based learning.

Biography
Brill earned a BA in mathematics from the University of Chicago in 1987 and an MS in computer science from UT Austin in 1989. In 1994, he completed his PhD at the University of Pennsylvania. He was an assistant professor at Johns Hopkins University from 1994 to 1999. In 1999, he left JHU for Microsoft Research, where he developed a system called "Ask MSR" that answered search engine queries written as questions in English; in 2004 he was quoted predicting the shift of Google's web-page-based search to information-based search. In 2009 he moved to eBay to head their research laboratories.

References

Artificial intelligence researchers Living people Year of birth missing (living people) Place of birth missing (living people) Johns Hopkins University faculty University of Texas at Austin College of Natural Sciences alumni University of Chicago alumni University of Pennsylvania alumni Natural language processing researchers Computational linguistics researchers
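A minimal Python sketch of the transformation-based tagging idea described above. The lexicon, tag names, and the single rule are invented for illustration; they are not Brill's actual learned rules.

def initial_tags(words, lexicon, default="NN"):
    # Baseline: assign each word its most frequent tag from a lexicon,
    # falling back to a default tag for unknown words.
    return [lexicon.get(w.lower(), default) for w in words]

def apply_rules(tags, rules):
    # Each rule (from_tag, to_tag, prev_tag) means: change from_tag to
    # to_tag whenever the preceding word is tagged prev_tag.
    tags = list(tags)
    for from_tag, to_tag, prev_tag in rules:
        for i in range(1, len(tags)):
            if tags[i] == from_tag and tags[i - 1] == prev_tag:
                tags[i] = to_tag
    return tags

lexicon = {"the": "DT", "can": "MD", "rusts": "VBZ"}
rules = [("MD", "NN", "DT")]  # e.g. "can" after "the" is a noun, not a modal
words = "the can rusts".split()
print(apply_rules(initial_tags(words, lexicon), rules))  # ['DT', 'NN', 'VBZ']

In the full method, the rule sequence is learned greedily: at each step the transformation that most reduces tagging error against an annotated corpus is appended to the list.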
Eric Brill
Technology
224
3,858,973
https://en.wikipedia.org/wiki/Ice%20bridge
An ice bridge is a frozen natural structure formed over seas, bays, rivers or lake surfaces. Ice bridges facilitate the migration of animals and people across water bodies that were previously uncrossable by terrestrial animals, including humans. The most significant ice bridges are formed by glaciation, spanning distances of many miles over sometimes relatively deep water bodies. An example of such a major ice bridge was the one connecting the island of Öland with mainland Sweden approximately 9000 BC. This bridge reached its maximum utility when the glacier was in retreat, forming a low-lying frozen bridge. The Öland ice bridge allowed the first human migration to the island of Öland, which is most readily documented by archaeological studies of the Alby People.

In Jules Verne's 1873 novel The Fur Country, a group of fur trappers establishes a fort on what they think is stable ground, only to find later that it is merely an iceberg temporarily attached by an ice bridge to the mainland.

See also
Ice road
Land bridge
Snow bridge

References

Human migration Glaciers Bodies of ice Bridges Water ice
Ice bridge
Engineering
213
56,603,808
https://en.wikipedia.org/wiki/Diethyl%20lutidinate
Diethyl lutidinate is a chemical compound. It has been studied for its potential use in hair care. It can be synthesized by reacting lutidinic acid with ethanol at elevated temperature in the presence of sulfuric acid.

References

Pyridines Ethyl esters
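The synthesis described above is a standard acid-catalysed (Fischer) esterification. Assuming "lutidinic acid" refers, as is usual, to pyridine-2,4-dicarboxylic acid, the overall reaction can be sketched as:

C5H3N(COOH)2 + 2 C2H5OH → C5H3N(COOC2H5)2 + 2 H2O   (H2SO4 catalyst, heat)

The sulfuric acid both catalyses the esterification and takes up the water formed, shifting the equilibrium toward the diester.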
Diethyl lutidinate
Chemistry
57
32,668,737
https://en.wikipedia.org/wiki/Spell%20%28Unix%29
spell is the standard English language spell checker for Unix, Plan 9, and Unix-like operating systems. Appearing in Version 6 Unix, spell was originally written by Stephen C. Johnson of Bell Labs in 1975. Douglas McIlroy later improved its accuracy, performance, and memory use, and described his work and spell in general in his 1982 paper "Development of a Spelling List".

Spell has a simple command-line interface: it goes over all the words in a given text file and prints a sorted list of unique misspelled words in that file. It does not provide any interface for looking for those words in the file, or for helping to correct the mistakes.

In 1983, a different spell-checker, ispell (the interactive spell-checker), was ported to Unix. ispell had a user interface for showing the spelling mistakes in context and suggesting how to correct them. Since then, the original spell tool has been mostly considered obsolete. Another reason spell is considered obsolete is that it only supports the English language. Modern spell-checkers for Unix and Linux systems, such as aspell, MySpell and hunspell, support a multitude of different languages and character sets.

The Single Unix Specification has officially declared spell a "legacy application", stating that this was done "because there is no known technology that can be used to make it recognise general language for user-specified input without providing a complete dictionary along with the input file." Nevertheless, the Single Unix Specification does not standardize any other spell-checking utility to take spell's place.

Because of spell's problems and the superiority of its alternatives, a free software version of McIlroy's spell has never been written. Instead, in 1996 Thomas Morgan of GNU wrote a simple wrapper to ispell (which was already popular at the time) to replicate spell's original behaviour. Many Linux distributions include this GNU spell, or an even simpler shell script; for example, the "spell" command in Fedora Linux simply runs aspell, as:

cat "$@" | aspell -l --mode=none | sort -u

See also
ispell
aspell
MySpell
pspell
hunspell
Writer's Workbench

References

External links
Original Unix spell source code (link does not work)
How Unix Spell Ran in 64kB RAM

1975 software Standard Unix programs Plan 9 commands
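A rough Python illustration of the classic behaviour described above: read a file and print the sorted unique words not found in a word list. This is not the original implementation (which compressed its word list to run in very little memory), and the word-list path is an assumption that holds on many, but not all, Unix systems.

import re
import sys

def spell(path, wordlist="/usr/share/dict/words"):
    with open(wordlist) as f:
        known = {line.strip().lower() for line in f}
    with open(path) as f:
        words = re.findall(r"[A-Za-z']+", f.read())
    # Like spell: report each unknown word once, in sorted order.
    for w in sorted({w.lower() for w in words} - known):
        print(w)

if __name__ == "__main__":
    spell(sys.argv[1])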
Spell (Unix)
Technology
482
26,297,589
https://en.wikipedia.org/wiki/Owen%27s%20T%20function
In mathematics, Owen's T function T(h, a), named after statistician Donald Bruce Owen, is defined by

T(h, a) = \frac{1}{2\pi} \int_0^a \frac{e^{-\frac{1}{2} h^2 (1 + x^2)}}{1 + x^2} \, dx \qquad (-\infty < h, a < +\infty).

The function was first introduced by Owen in 1956.

Applications
The function T(h, a) gives the probability of the event (X > h and 0 < Y < aX) where X and Y are independent standard normal random variables. This function can be used to calculate bivariate normal distribution probabilities and, from there, in the calculation of multivariate normal distribution probabilities. It also frequently appears in various integrals involving Gaussian functions. Computer algorithms for the accurate calculation of this function are available; quadrature has been employed since the 1970s.

Properties
Standard properties include

T(h, 0) = 0
T(0, a) = \frac{1}{2\pi} \arctan a
T(-h, a) = T(h, a)
T(h, -a) = -T(h, a)
T(h, 1) = \frac{1}{2} \Phi(h) \left[ 1 - \Phi(h) \right]

Here Φ(x) is the standard normal cumulative distribution function. More properties can be found in the literature.

References

Software
Owen's T function (user web site) - offers C++, FORTRAN77, FORTRAN90, and MATLAB libraries released under the LGPL license
Owen's T function has been implemented in Mathematica since version 8, as OwenT.

External links
Why You Should Care about the Obscure (Wolfram blog post)

Normal distribution Computational statistics Functions related to probability distributions
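Since the article notes that quadrature has long been used to compute T(h, a), here is a hedged numerical sketch that integrates the defining formula directly with scipy's general-purpose integrator. Dedicated algorithms, such as those in the libraries cited under Software, are faster and more accurate, especially in the tails.

from math import atan, exp, pi
from scipy.integrate import quad

def owens_t(h, a):
    # T(h, a) = (1/(2*pi)) * integral from 0 to a of
    #           exp(-h^2 (1 + x^2) / 2) / (1 + x^2) dx
    integrand = lambda x: exp(-0.5 * h * h * (1.0 + x * x)) / (1.0 + x * x)
    value, _ = quad(integrand, 0.0, a)
    return value / (2.0 * pi)

# Sanity check against the identity T(0, a) = arctan(a) / (2*pi):
assert abs(owens_t(0.0, 1.0) - atan(1.0) / (2.0 * pi)) < 1e-12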
Owen's T function
Mathematics
252
40,381,230
https://en.wikipedia.org/wiki/Institute%20for%20Computational%20Cosmology
The Institute for Computational Cosmology (ICC) is a research institute at Durham University, England. It was founded in November 2002 as part of the Ogden Centre for Fundamental Physics, which also includes the Institute for Particle Physics Phenomenology (IPPP). The ICC's primary mission is to advance fundamental knowledge in cosmology. Topics of active research include the nature of dark matter and dark energy, the evolution of cosmic structure, the formation of galaxies, and the determination of fundamental parameters. The current director of the ICC is Shaun Cole.

ICC researchers have played a central role in the development of the standard model of cosmology, the Lambda-CDM (ΛCDM) model. The complex nature of questions in cosmology often means that advances require supercomputer simulations in which a virtual Universe is allowed to evolve for 13.8 billion years from the Big Bang to the present day. The simulation is rerun with altered initial conditions or physics until it matches the observed Universe. This approach has required one of the most powerful supercomputers for academic research in the world, the "Cosmology Machine" (COSMA), part of the DiRAC supercomputing consortium.

History
Durham University's extragalactic astronomy group was founded in the late 1970s and secured in 1984–85 with the appointments of Carlos Frenk, Richard Ellis and Tom Shanks. A group researching theoretical cosmology grew steadily during the 1980s and 1990s, mainly funded by the UK Particle Physics and Astronomy Research Council (PPARC). A dedicated building for theoretical cosmology was then funded through private donations, principally from alumnus Peter Ogden, and opened in 2002 by the Prime Minister, Tony Blair. The group has grown in these facilities, and the ICC now hosts more than 60 researchers, including theoretical and observational cosmologists as well as astroparticle physicists. Although the ICC is strictly speaking a theoretical institute, theory and observations in cosmology are intimately interwoven.

Uniquely amongst Durham University's research institutes, the ICC and IPPP are structurally integrated within an academic and teaching department, Physics. The physics department as a whole was awarded grade 5A in the 2001 Research Assessment Exercise (RAE) carried out by the UK government, with the international excellence of research in astronomy and particle physics specifically highlighted. The department's research in space science and astrophysics was rated number one in Europe and fourth in the world by Thomson Reuters from its Essential Science Indicators (1998–2008).

In November 2016, the ICC moved into the new Ogden Centre for Fundamental Physics building, designed by Studio Daniel Libeskind. The building houses all three astronomy groups in the Department of Physics: the Centre for Advanced Instrumentation, the Centre for Extragalactic Astronomy, and the Institute for Computational Cosmology.

Supercomputer
The ICC's highest resolution simulations of the evolution of the Universe are performed on the Cosmology Machine (COSMA). COSMA-5 was installed in October 2012 as a hub of the UK national Distributed Research utilising Advanced Computing (DiRAC) consortium. COSMA-5 includes 6720 2.6 GHz Intel Sandy Bridge Central Processing Unit (CPU) cores, 53,760 GB of RAM, and 2.4 PB of data storage, making it at installation one of the most powerful supercomputers for academic research in the world.

The ICC acts as one of the two main nodes of the international Virgo Consortium for cosmological supercomputer simulations.

Outreach
A founding goal of the ICC is to "stimulate young people to aspire to be the scientists of tomorrow". A full-time outreach officer is employed to develop teaching materials that draw upon current research and to coordinate a programme of activities in schools across the North East of England. The ICC has been involved in a number of outreach events aimed at communicating science to the general public, notably:
The ICC's 3D short movie "Cosmic Origins", which combines sequences of real astronomical data and supercomputer simulations, won first prize for best stereoscopic movie at Stereoscopic Displays and Applications XXI. The movie and its sequel "Cosmic Origins 2" provided the core entertainment of a touring public exhibition that visited the Royal Society's Summer Science Exhibitions in 2009, 2010 and 2013, See Further 2010, the British Science Festival 2013, and Thailand's National Science and Technology Fair 2013.
In 2015, the ICC collaborated on The World Machine project, the centrepiece of the 2015 Durham Lumiere light festival. This was a celebration of cosmology, projected onto the facade of Durham Cathedral.
In July 2016, the ICC hosted an exhibition titled Galaxy Makers: How to make a galaxy at the Royal Society 2016 Summer Exhibition.

References

Astronomy institutes and departments Research institutes in the United Kingdom Astronomy in the United Kingdom Educational institutions established in 2002 2002 establishments in England
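The simulations described above evolve a virtual Universe by repeatedly computing gravitational forces and advancing particles in time. As a toy illustration only (real codes such as those run on COSMA use tree or mesh force solvers, periodic volumes, and comoving coordinates, none of which appear here), a direct-summation leapfrog integrator in Python looks like this:

import numpy as np

def leapfrog(pos, vel, mass, dt, steps, G=1.0, eps=0.05):
    # pos, vel: (N, 3) arrays; mass: (N,). eps softens close encounters.
    def accel(p):
        d = p[None, :, :] - p[:, None, :]        # d[i, j] = p[j] - p[i]
        r2 = (d ** 2).sum(-1) + eps ** 2
        np.fill_diagonal(r2, np.inf)             # exclude self-force
        return G * (d * (mass[None, :, None] / r2[..., None] ** 1.5)).sum(1)
    a = accel(pos)
    for _ in range(steps):
        vel += 0.5 * dt * a                      # half kick
        pos += dt * vel                          # drift
        a = accel(pos)
        vel += 0.5 * dt * a                      # half kick
    return pos, vel

rng = np.random.default_rng(0)
n = 64
pos, vel = leapfrog(rng.standard_normal((n, 3)), np.zeros((n, 3)),
                    np.full(n, 1.0 / n), dt=0.01, steps=100)

Rerunning with altered initial conditions or physics, as the article describes, corresponds here to changing the random seed or the constants G and eps.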
Institute for Computational Cosmology
Astronomy
998
71,725,016
https://en.wikipedia.org/wiki/Collusion%20%28psychology%29
The concept of collusion in couples' relations is a psychological term for behavioral patterns in relationships, used in couples therapy. In contemporary psychotherapeutic practice, collusion often refers to a failure of the therapist to maintain neutrality or objectivity, such as when the therapist aligns too closely with a client's distorted perspectives or defenses. It highlights the importance of self-awareness and reflective practice for the therapist.

Introduction
Karl Jaspers introduced ideas relevant to collusion in his seminal work General Psychopathology (Allgemeine Psychopathologie), first published in 1913. However, Jaspers did not use the term "collusion" explicitly in the way it is commonly understood today. Instead, his work laid the groundwork for understanding interpersonal dynamics and the therapist's influence on the therapeutic relationship.

The term "collusion" in psychotherapy was first introduced by Sándor Ferenczi in 1933. He described collusion as an unconscious process linking the transference reactions of the patient with the countertransference reactions of clinicians, leading to specific and often complex dynamics in the therapeutic relationship. Later, in 1967, Henry V. Dicks expanded on this concept in his work Marital Tensions, where he explored collusion within marital relationships. Dicks defined collusion as an unconscious, unresolved issue shared by two or more participants, who become interlocked in a defensive maneuver.

In 1975, Jürg Willi further explored the concept in his book The Dyadic Relationship (Die Zweierbeziehung), in which he introduces his concept of collusion. He interprets collusion as the unconscious interaction between partners and gives an overview of the classical phases of couples' relationships. The book centers on the avoidance of conflicts in these phases: it is this avoidance that triggers the emergence of collusions. The author understands couples' conflicts as a joint neurotic disturbance of the conflicting parties. Not every couples' conflict is a collusion, but every destructive attempt at clarification can lead to a collusion. The suggested collusion concept tries to unite different therapy schools in a single theory, combining aspects of psychoanalytic (Psychoanalysis), family-therapeutic (Family therapy) and communication-therapeutic methods. He derives four collusion patterns:
Love as being one, in the narcissistic collusion.
Love as caring for each other, in the oral collusion.
Love as totally belonging to each other, in the anal-sadistic collusion.
Love as a test of masculinity, in the phallic-oedipal collusion.

The author understands the dyad as a half-open system and describes the function of third persons in the collusion conflict. For the advancement of a couple, relationships with third persons are necessary; the author restricts himself to considering only those forms that help the couple avoid carrying out a conflict. He describes the different roles third persons can take and their effect on the couple's dynamic. Furthermore, he considers psychosomatic couple illnesses and their consequences for the collusion; a psychosomatic illness has a meaning similar to that of a third person. Finally, the author describes therapeutic aspects of couples therapy and the application of the collusion concept, a complex topic which itself fills a second book, Therapie der Zweierbeziehung (Therapy of the Dyadic Relationship).

Narcissistic collusion
Ideally, the relationship of a narcissistic collusion presents as follows: partner A, mostly the male, shows himself grandiose; his partner (complementary narcissistic) reacts adoringly. She herself feels small and not worthy of love, is fixated on him or a third person, and presents herself as unobtrusive, with a tendency to self-destructive behavior, for example overburdening herself or using drugs. He sees her as a decorative part of himself; she seeks a substitute self in him. He thereby represses that he identifies with an externally determined substitute self; she represses her claim to an ideal self of her own.

Oral collusion
In the oral collusion, one partner takes the role of the caretaker and the other the role of the fosterling. Because the couple is committed to these roles, a conflict develops in which the caretaker perceives the fosterling as insatiable and ungrateful, and the fosterling perceives the caretaker as accusing and dismissive. The fosterling often reacts depressively. Basically, both partners agree that the meaning of love is to take care of each other. Their joint resistance, the common fear, is directed against the idea that the fosterling might have to take on caring tasks for the caretaker. Counseling can help the couple to rework their roles and to reflect on experiences and resistances.

Anal-sadistic collusion
Both partners share a resistance against questioning the assumption that the relationship would break apart if both partners behaved freely and autonomously. This leads to power struggles, sadomasochism, and jealousy-infidelity patterns. These actions serve the purpose of securing bonding and staying related to each other.

Phallic-oedipal collusion
From a psychological point of view, every human goes through a complex developmental process as a small child that leads to a sexual identity as a boy or girl. The background for the phallic-oedipal collusion of couples is the difficulties that can arise throughout this process. If the theme of the marriage is the search for confirmation, then most likely both partners have an unresolved relationship to their opposite-sex parent and did not have a model in the same-sex parent. In the phallic collusion, the male partner pursues inflated male claims while staying passive and reserved. The frequency and shaping of sexual encounters are entrusted to the female partner; not uncommonly there is no sex at all. As compensation, the male partner seeks confirmation, for example in extreme or dangerous sports. The female partner delegates responsibility and initiative to him, but does not have to be afraid of male expectations from his side.

The choice of partner in the oedipal collusion is more directly tied to the opposite-sex parent. Often a much older partner is chosen, and sometimes the son stays with the mother, or the daughter stays with the father. Sometimes a partner is chosen who is completely unlike the opposite-sex parent, to avoid the tight connectedness from childhood. People in deep oedipal entanglement tend to intrude into the marriages of others.

References

Sources

Therapy Psychology
Collusion (psychology)
Biology
1,317
68,336,959
https://en.wikipedia.org/wiki/Scouring%20%28textiles%29
Scouring is a preparatory treatment of certain textile materials. Scouring removes soluble and insoluble impurities found in textiles as natural, added and adventitious impurities: for example, oils, waxes, fats, vegetable matter, and dirt. Removing these contaminants through scouring prepares the textiles for subsequent processes such as bleaching and dyeing. Though a general term, "scouring" is most often used for wool. In cotton it is synonymously called "boiling out", and in silk "boiling off".

Purpose of scouring
Scouring is an essential pre-treatment for the subsequent finishing stages that include bleaching, dyeing, and printing. Raw and unfinished textiles contain a significant amount of impurities, both natural and foreign, which must be eliminated to make the products ready for later steps in textile manufacturing. For instance, fatty and waxy substances are the major barriers to the hydrophilicity of natural fibers, and absorbency helps the penetration of chemicals in stages such as dyeing, printing, and finishing. These fats and waxy substances are converted into soluble salts with the help of alkali; this treatment is called saponification.

Impurities
Foreign matter in addition to fiber is known as "impurities". Textile fibers contain many types of impurities, e.g.:
Natural impurities: impurities gathered from the natural environment by the fibres, including non-fibrous parts incorporated into the fiber during its growth. Notably, these are not present in synthetic fibres, which are manufactured artificially.
Added impurities: oils and waxes added during spinning, crocheting, knitting or weaving.
Adventitious (accidental) impurities: dirt from mishandling and foreign contaminants.

Etymology and history
Etymology
The term "scouring" refers to the "act of cleaning with a rubbing action".

History
Textile manufacturing was once an everyday household activity. In Europe, women were often involved in textile manufacturing: they would spin, weave, process, and finish the products they needed at home. In the pre-industrial era, scouring (wool scouring) was part of the fulling process of cloth making, in which cloths were cleaned and then milled (a thickening process). Fulling used to be done by pounding the woolen cloth with a club, or by the fuller's feet or hands. This process was associated with waulking songs, which were sung by women in the Scottish Gaelic tradition to set the pace.

Earliest scouring agents
Scouring agents are the cleaning agents that remove the impurities from the textiles during the scouring process. While these are now industrially produced, scouring agents were once produced locally; lant (stale urine) and lixivium, a solution of alkaline salts extracted from wood ashes, were among the earliest scouring agents. Lant, which contains ammonium carbonate, was used to scour wool.

Wool scouring
Wool scouring is the removal of lanolin, vegetable materials and other contaminants from wool before use. It is the next process after the woollen fleece of a sheep is cut off. Raw wool is also known as "greasy wool"; "grease" or "yolk" is a combined form of dried sweat, oil and fatty matter. Lanolin, a waxy substance secreted by the sebaceous glands of wool-bearing animals, is the major component (5–25%) of raw wool grease. Greasy matter varies by breed. Following the cleaning process, the wool fibers consist essentially of keratin.

Process
Three steps comprise the complete cleaning process for wool: steeping, scouring, and rinsing.

Steeping
Potash and wool fat are two beneficial substances among the contaminants in wool, which has motivated cleaning techniques capable of recovering these compounds. In the steeping system, scouring is carried out in stages, such as immersing the wool in lukewarm water for many hours. When the wool includes only a small amount of yolk, the steeping step for recovering the yolk can be skipped.

Scouring treatment
Scouring is the process of cleaning wool that frees it from grease, suint (residue from perspiration), dead skin, dirt and vegetable matter present as impurities in the wool. It may consist of simple boiling of the wool in water, or an industrial process of treating the wool with alkalis and detergents (or soap and sodium carbonate). The bath temperature is maintained at 65 °C to melt the wool grease (lanolin melts at 38–44 °C). The next treatment is carbonization, a treatment with strong acids that converts vegetable matter into carbon.

Rinsing
Rinsing is the process of thoroughly washing the cleaned wool.

Alternative method
The alternative method is solvent scouring. Solvent scouring of wool replaces soap, detergent, and alkalis with a solvent such as carbon tetrachloride, ether, petroleum naphtha, chloroform, benzene, or carbon disulfide. These liquids are capable of dissolving impurities but are highly volatile and flammable, and hence need extra care in handling.

Cotton scouring
In cotton, non-cellulosic substances such as waxes, lipids, pectic substances and organic acids contribute around ten percent of the weight; cotton thus has fewer impurities than wool. Cotton scouring refers to removing impurities such as natural wax, pectins, and non-fibrous matter with a wetting agent and caustic soda. Alkaline boiling has no effect on the cellulose itself.

Impurities in cotton
Pectins, waxes, proteins, mineral compounds, and ash.

Methods
Scouring may be continuous or discontinuous. In the discontinuous method, machines such as dyeing vessels, winches, jiggers and kiers are used.

Kier boiling
A kier is a large upright cylindrical vessel with egg-shaped ends, made of boilerplate, with a capacity of treating one to three tonnes of material at a time. Kier boiling, or "boiling off", is the scouring process of boiling the materials with a caustic solution in the kier; because the kier is an enclosed vessel, the fabric can be boiled under pressure. Open kiers were also used, with temperatures below 100 °C (at atmospheric pressure).

Biotechnology
Biotechnology offers an advanced way of processing textiles; it contributes to numerous treatments of cellulosic materials such as desizing, denim washing, biopolishing, and scouring.

Scouring with enzymes
Enzymes are helpful in bio-singeing, bio-scouring and removing impurities from cotton, and are more environmentally friendly. Biopolishing is an alternative, enzymatic treatment that cleans the surface of cellulosic fabrics or yarns; it is also named "biosingeing". Pectinase enzymes break down pectin, a polysaccharide found in cellulosic materials such as cotton.

Silk scouring
Silk is an animal fiber; it consists of 70–80% fibroin and 20–30% sericin (the gum coating the fibres). It carries impurities like dirt, oils, fats and sericin. The purpose of silk scouring is to remove the coloring matter and the gum, a sticky substance which envelops the silk yarn; the process is therefore also called "degumming". The gum contributes nearly 30 percent of the weight of unscoured silk threads. Silk is called "boiled off" when the gum has been removed. The process includes boiling the silk in a soap solution and rinsing it out.

Scouring of man-made materials
Oil and dirt are the impurities in synthetic materials. Certain oils and waxes are applied as lubricants during spinning or fabric-manufacturing stages such as knitting or weaving. Mild detergents can remove these impurities effectively.

Effluent of scouring
Effluent is waste water discharged into water bodies. Industrial wastewater containing scouring residues is heavily contaminated and extremely polluting.

See also
Grassing (textiles)
Singeing (textiles)

References

Notes

Bibliography

External links

Textiles Textile techniques Textile chemistry Industrial processes
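The saponification mentioned under "Purpose of scouring" is, in general form, the alkaline hydrolysis of a fatty ester (generic R groups shown; the actual fibre waxes are mixtures of such esters):

R-COO-R' + NaOH → R-COO⁻Na⁺ (water-soluble soap) + R'-OH

The alkali converts fats and waxes into soluble carboxylate salts that rinse out, which is why the treatment restores the fibre's absorbency.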
Scouring (textiles)
Chemistry
1,777
2,585,099
https://en.wikipedia.org/wiki/Verifone
Verifone, Inc. is an American multinational corporation headquartered in New York City, New York. Verifone provides technology for electronic payment transactions and value-added services at the point of sale. Verifone sells merchant-operated, consumer-facing and self-service payment systems to the financial, retail, hospitality, petroleum, government and healthcare industries. The company's products consist of POS electronic payment devices that run its own operating systems, security and encryption software, and certified payment software, and that are designed for both consumer-facing and unattended environments. Its products process a range of payment types, including signature and personal identification number (PIN)-based debit cards, credit cards, contactless/radio frequency identification cards, smart cards, prepaid gift and other stored-value cards, electronic bill payment, check authorization and conversion, signature capture, and Electronic Benefit Transfer (EBT). In 2018, Verifone was acquired by Francisco Partners for $3.4 billion.

The company's architecture enables multiple applications, including third-party applications such as gift card and loyalty card programs, healthcare insurance eligibility, and time and attendance tracking, and allows these services to reside on the same system without requiring recertification upon the addition of new applications.

Overview
As of October 31, 2013, the company held trademark registration in 22 jurisdictions (including registration in the EU that covers various country-level registrations that the company had previously filed) for the 'VERIFONE' trademark, and in 32 jurisdictions (likewise including EU registration) for the 'VERIFONE' trademark including its ribbon logo.

On November 3, 2014, the company unveiled a new corporate logo and brand identity, presented as representing a new Verifone for a digital world in which electronic payments, commerce and mobility are converging. From its beginnings as the first payment device manufacturer, Verifone's product and point-of-sale service offerings have changed considerably. Verifone offers payment technology expertise and services that add value to the point of sale with merchant-operated, consumer-facing and self-service POS payment systems.

Founded in Hawaii in 1981, Verifone in 2014 operated in more than 150 countries worldwide and employed nearly 5,000 people globally. Verifone's growth came organically, through innovation and strategic partnerships, and from acquisitions. Core focus and growth areas for the company include mobile commerce, security, services and emerging global markets. Verifone has headquarters in New York City and offices in more than 45 countries.

History
Verifone was founded by William "Bill" Melton and incorporated in Hawaii in 1981, and named itself after its first product, the name standing for "verification telephone". Since the late 1980s, Verifone has held more than 60 percent of the U.S. market, and during the 1990s the company captured more than half of the international market for such systems. In 1996, the company placed its five millionth system. Domestic and international sales of POS systems continue to form the majority of Verifone's annual sales, which hit $387 million in 1995 and were expected to top $500 million in subsequent years.
The sudden growth of the Internet, and especially the World Wide Web, in the mid-1990s created a demand for secure online financial transaction applications. Verifone designed applications conforming to the Secure Electronic Transaction (SET) standards developed by Visa Inc. and MasterCard. With the $28 million 1995 acquisition of Enterprise Integration Technologies, the company that developed the Secure HyperText Transfer Protocol (S-HTTP), a $4 million equity investment in CyberCash, Inc., led by Verifone founder William Melton, and 1996 partnership agreements with web browser leaders Netscape, Oracle Corporation, and Microsoft, Verifone rolled out a suite of software products targeted at consumers, merchants, and financial institutions allowing secure purchases and other transactions online. Purchases over the Internet, which produced as little as $10 million in 1995, were expected to reach into the billions by the turn of the century.

Verifone also worked to marry the smart card to the Internet: in 1996, the company introduced the Personal ATM (P-ATM), a small smart card reader designed to be attached to the consumer's home computer, which would enable the consumer not only to make purchases over the Internet, but also to "recharge" the value on the card. Verifone also partnered with Key Tronic to incorporate a P-ATM interface directly into that company's computer keyboards.

1980s: Introduction and rapid growth
By the beginning of the 1980s, the major credit card companies began seeking methods to reduce processing costs and losses due to fraud. In 1981, Visa and MasterCard began offering merchants discounts on their transactions if they agreed to use newly developed automated transaction technology for all credit card purchases greater than $50. This move opened the way for the creation of an industry devoted to producing POS authorization systems. Early systems typically had starting prices of $900. Verifone introduced its first POS product in 1982. By lowering manufacturing and operating costs through outsourcing production, Verifone brought its first system to the market at $500. Working with Visa, Verifone captured a large share of the POS market. In 1984, the company introduced the ZON credit card authorization system, priced at $125, which took advantage of improvements in processor speeds and the falling cost of both processors and memory. The following year, Verifone moved its headquarters to Redwood City, outside of San Francisco. The company's revenues grew to $15.3 million, earning a net profit of $864,000. The company doubled revenues, to about $30 million, in 1986. By January 1988, Verifone controlled more than 53 percent of the POS systems market. Revenues had reached $73.4 million, with net earnings of more than $6 million. The following year, the company increased its dominance in the industry with the purchase of the transaction automation business of Icot Corp., then second in the market with a 20.5 percent share. The acquisition boosted Verifone's revenues to $125 million. By then, Verifone had entered the international market, starting with Australia in 1988 and placing its millionth ZON system in Finland in 1989.

1990s: Expansion into new markets
Verifone went public in March 1990, raising more than $54 million.
As the credit card industry matured, Verifone pushed to install its systems into new markets, such as restaurants, movie theaters, and taxis, as well as developing software capacity to bring its systems into the health care and health insurance markets and to government functions. International sales also began to build, as use of credit cards became increasingly accepted in foreign markets. Verifone was also building its global operation, opening facilities in Bangalore, Singapore, England, Dallas, and Fort Lauderdale, in addition to its Hawaii and California facilities. Rolling out its Gemstone line of transaction systems, which added inventory control, pricing, and other capabilities, Verifone was aided by announcements from Visa and MasterCard that the companies would no longer provide printed warning bulletins, while requiring merchants to seek authorization for all credit card transactions by 1994. Revenues jumped from $155 million in 1990 to $226 million in 1992; Verifone had placed its two millionth system a year earlier. By 1993, Verifone systems were in place in more than 70 countries, including its three millionth system, in Brazil, representing the company's expansion in the Latin American market. International sales, which had contributed less than ten percent of revenues before 1990, now accounted for more than 30 percent of the company's nearly $259 million in annual revenues.

Banks began rolling out debit cards in the mid-1990s, and in response Verifone produced terminals designed with keypads for PIN entry. As the domestic credit and debit card markets neared saturation, Verifone changed its primary focus to producing software applications, offering vertically integrated systems, including applications for standard computer operating systems. It also built a new plant near Shanghai in China in 1994 to increase production capacity. Verifone moved to take the lead in the coming smart card revolution, teaming up with Gemplus International, a France-based maker of the cards, and MasterCard International to form the joint venture SmartCash. To place the company close to technological developments in France and the rest of Europe, Verifone opened its Paris research and development center in 1994, and launched its smart card in May 1995. The company introduced its Personal ATM, a palm-sized smart card reader capable of reading a variety of smart card formats, in September 1996, with the product expected to ship in 1997. Among the first customers already signed to support the P-ATM were American Express, MasterCard International, GTE, Mondex International, Visa International, Wells Fargo Bank, and Sweden's Sparbanken Bank. Contracts for each called for the purchase of a minimum of 100,000 units; the total market potential for the device was estimated at more than 100 million households. In addition, Verifone began developing smart card readers to supplement and eventually replace its five million credit and debit card authorization systems.

In 1995, Verifone began the first of its aggressive steps to enter an entirely new area, that of Internet-based transactions. In May 1995, the company partnered with Broadvision Inc., a developer of Internet, interactive television, computer network, and other software, to couple Verifone's Virtual Terminal software (a computer-based version of its standard transaction terminal) with BroadVision's offerings, thereby extending Verifone's products beyond the retail counter for the first time.
In August 1995, however, Verifone took an even bigger step into the Internet transaction arena with its $28 million acquisition of Enterprise Integration Technologies, developer of the S-HTTP industry standard for safeguarding transactions over the World Wide Web. Verifone followed that acquisition with a $4 million investment in William Melton's latest venture, CyberCash Inc., also working to develop Internet transaction systems. By 1996, Verifone was ready with its Payment Transaction Application Layer (PTAL) lineup of products, including the Virtual Terminal interface for merchants conducting sales with consumers; Internet Gateway, or vGATE, to conduct transactions between merchants and financial institutions; and the Pay Window interface for consumers making purchases on the Internet. After securing agreements from Netscape, Oracle, and Microsoft to include Verifone software in their web browsers, Verifone and Microsoft announced in August 1996 that Verifone's virtual point of sale (vPOS) software would be included in the Microsoft Merchant System to be released by the end of the year. Verifone's announcement of the P-ATM, able to be attached as a computer peripheral, wedded the company's smart card and Internet transaction efforts.

Hewlett-Packard acquired Verifone in a $1.18 billion stock-swap deal in April 1997. Four years later, in May 2001, Verifone was sold to Gores Technology Group. In 2002, Verifone was recapitalized by GTCR Golder Rauner, LLC. In 2005, Verifone was listed as a public company on the New York Stock Exchange (NYSE: PAY).

2000s: Acquisitions
In October 2004, Israel-based Lipman Electronics acquired United Kingdom-based Dione plc, to go alongside its "NURIT" brand. On November 1, 2006, Verifone completed its acquisition of Lipman and added both the Dione and NURIT products to its portfolio for an undisclosed sum. In September 2006, Verifone acquired some divisions of Irish terminal and payment services company Trintech Group plc (headquartered in Dublin, with offices in Montevideo, Neu-Isenburg and London, among other locations) in a US$12.1 million (€9.4 million) cash transaction.

The company was formerly known as VeriFone Holdings, Inc. and changed its name to VeriFone Systems, Inc. in 2010. In 2014, the company rebranded itself as Verifone with a lowercase 'f'. Verifone in 2014 was based in San Jose, California, with marketing and sales offices across the world. High economic growth abroad, coupled with infrastructure development, support from governments seeking to increase value-added tax (VAT) and sales tax collections, and the expanding presence of IP and wireless communication networks, resulted in revenue from abroad exceeding revenue generated from domestic sales. Specifically, the North American share of total revenues fell from 57.4% in 2006 to only 39%, or $359.14 million, in fiscal 2008, while international operations went from 42.5% of total revenues in 2006 to 61%, or $564.46 million, in 2008.

In April 2018, Verifone was acquired by Francisco Partners for US$3.4 billion.

Products
Verifone, Inc. is an international producer and designer of electronic payment methods and devices. The company divides its business into two segments: Systems Solutions and Services. Systems Solutions consists of operations related to the sale of electronic payment products that enable electronic transactions. The Services segment includes warranty and support services.
In fiscal year 2008, Verifone's Systems Solutions segment generated 87.5% of total revenues, which amounted to $807.46 million, while its Services segment contributed 12.5%, or $114.46 million, in revenue. Its principal product lines have included point-of-sale, merchant-operated, consumer-facing and self-service payment systems. It provides countertop electronic payment terminals that accept mobile payment, chip and PIN, and contactless payment, including near-field communication (NFC), and that support credit and debit cards, EBT cards, EMV, and other PIN-based transactions; an array of software applications and application libraries; and portable devices that support 3G, GPRS, Bluetooth, and WiFi technologies. The company also offers multimedia consumer-facing POS devices; unattended and self-service payment methods designed to enable payment transactions in self-service environments; and integrated electronic payment systems that combine electronic payment processing, fuel dispensing, and ECR functions, as well as payment systems for integration. In addition, it provides mobile payment methods for various segments of the mobile point-of-sale environment; contactless peripherals; network access devices; security systems; payment-as-a-service and other managed services, terminal management, payment-enabled media, and payment system security; and server-based payment processing software and middleware. Further, the company offers equipment repair or maintenance, gateway processing, remote terminal management, software post-contract support, customized application development, helpdesk, customer service, warehousing, and encryption or tokenization services.

Countertop and PIN pads
The company's countertop devices accept various card payment options, including payment options using near-field communication (NFC) technology, mobile wallets, chip and PIN, QR code and contactless payments. Its VX Evolution generation of countertop devices supports a range of applications, such as pre-paid products, including gift cards and loyalty programs. The VX Evolution devices also integrate the company's NFC software technology to manage multiple NFC-based mobile wallets, applications, and programs. It also offers various other VX model countertop devices, including a hybrid device that reads both magnetic stripe and chip card transactions using a single card reader, a range of connectivity choices, and battery-operated and color-display models. The company also supplies PIN pads that support credit and debit card, EBT, EMV, and other PIN-based transactions, and include multiple connectivity options, including a 3G option and NFC capability. Its countertop devices also support a range of applications that are either built into electronic payment systems or connect to electronic cash registers (ECRs) and POS systems. In addition, it offers a range of certified software applications and application libraries that enable its countertop systems and PIN pads to interface with major ECR and POS systems.

Verifone has sold numerous point-of-sale credit card reading products, including the ZON Jr (1984), Tranz 330 and ZON Jr XL (1987), Omni 460 (1991) and Omni 3200 (1999), which were among the most successful transaction terminals of their times. The company's most popular products have included the Omni 3700 family, featuring the Omni 3750 and Omni 3740. In 2004, Verifone introduced its Vx Solutions line (also called VerixV).
These include the Vx510, Vx520 and Vx570, which are countertop terminals offering dial-up or Ethernet access, and the Vx610 and Vx670, which are portable, include batteries, and have an integrated wireless communications module. The Vx610 is offered in GPRS, CDMA, and WiFi wireless configurations and is considered a 'countertop mobile' product. The Vx670 is a true portable or 'handover' version available with GPRS, WiFi, and, as of November 2007, Bluetooth-integrated communications modules. The Vx670 in particular deters theft of credit information because the customer is not required to relinquish possession of his or her credit card, instead transacting directly with the Vx670 in a 'pay at table' fashion. The Vx510 was repackaged as the Omni 3730, capitalizing on the strong sales of the Omni 3700 series. A derivative of the Omni 3730 is the Omni 3750LE, which has reduced features but a lower price. The VX Evolution devices integrate the company's NFC software technology to manage multiple NFC-based mobile wallets, applications, and programs.

Multimedia customer-facing
The company's range of multimedia consumer-facing POS devices is designed to allow merchants, primarily in the multi-lane retail environment, to engage in direct customer interaction through customized multimedia content, in-store promotions, digital offers, and other value-added services using a POS device. Its multimedia consumer-facing products are offered under its "MX solutions" brand. These products include color graphic displays, interfaces, ECR compatibility, keypads, signature capture functionality, and other features that serve customers in a multi-lane retail environment. The company's "MX solutions" also feature a modular hardware architecture that allows merchants to introduce capabilities such as contactless or NFC. Its "MX solutions" include a range of products that support these same features in self-service market segments, such as taxis, parking lots/garages, ticketing machines, vending machines, gas pumps, self-checkouts, and quick service restaurants.

In 2005, Verifone released its first full-color EFT-POS terminal, the MX 870. The MX 870 is capable of full-screen video and is used by Verifone customers to build applications. The MX 870 was the first in the MX 800 series of visual payment terminals, alongside the MX 850, MX 860 and MX 880. All of these terminals run embedded Linux and use FST FancyPants and the Opera browser for their GUI platform.

In 2009, Verifone partnered with Hypercom and Ingenico to found the Secure POS Vendor Alliance, a non-profit organization whose goal is to increase awareness of and improve payment industry security. In 2010, Verifone announced the VX Evolution product line, designed to PCI PED 2.0 specifications and providing native support for VeriShield Total Protect, Verifone's encryption and tokenization solution. The VX Evolution line is an extension of Verifone's countertop and PIN pad products and included a number of upgrades from earlier models, such as a full-color display, ARM 11 processors and a fully programmable PIN pad.

Competition
Verifone is one of the top three providers of electronic payment systems and services in the world. The markets for the company's products are highly competitive and have been subject to price pressure. Competition from manufacturers, distributors, and providers of similar products has caused price reductions, reduced margins, and a loss of market share.
For example, one of Verifone's former customers, First Data Corporation, has developed its own series of proprietary electronic payment systems for the U.S. market. Moreover, Verifone competes with suppliers of cash registers that provide built-in electronic payment capabilities and producers of software that supports electronic payment over the Internet, as well as other manufacturers or distributors of electronic payment systems. Finally, Verifone competes with smaller companies that have been able to develop strong local or regional customer bases. The firm's main competitors are:
PAX Technology
Ingenico
NCR Corporation
Verifone acquired a smaller competitor, Hypercom, in an all-stock transaction in 2011.

Divestments
Global Bay was sold to Manhattan Associates in 2014.

Corporate affairs
The company is run by a board of directors made up mostly of company outsiders, as is customary for publicly traded companies. Members of the board of directors as of June 2014 were: Robert W. Alspaugh (director), Karen Austin (director), Paul Galant (director and chief executive officer), Alex W. (Pete) Hart (chairman of the board of directors), Robert B. Henske (director), Wenda Harris Millard (director), Eitan Raff (director), Jonathan I. Schwartz (director), and Jane J. Thompson (director).

Under the company's Corporate Governance Guidelines, the Board is free to select its Chairman and the company's CEO in the manner it considers to be in the company's best interests at any given point in time. Since 2008, the positions of Chairman of the Board and CEO have been held by separate persons. The Board believes that this structure is appropriate because it allows the CEO to focus his time and energy on leading key business and strategic initiatives while the Board focuses on oversight of management, overall enterprise risk management and corporate governance. The Board and its committees meet throughout the year on a set schedule, usually at least once a quarter, and also hold special meetings from time to time. Agendas and topics for Board and committee meetings are developed through discussions between management and members of the Board and its committees. Information and data that are important to the issues to be considered are distributed in advance of each meeting. Board meetings and background materials focus on key strategic, operational, financial, enterprise risk, governance and compliance matters.

Board's role in risk oversight
The Board executes its risk management responsibility directly and through its committees. As set forth in its charter and annual work plan, the Audit Committee has primary responsibility for overseeing the company's enterprise risk management process. The Audit Committee receives updates and discusses individual and overall risk areas during its meetings, including financial risk assessments, operations risk management policies, major financial risk exposures, exposures related to compliance with legal and regulatory requirements, and management's actions to monitor and control such exposures. The Vice President of Internal Audit reviews with the Audit Committee the company's annual operational risk assessment results and, at least once each quarter, the results of internal audits, including the adequacy of internal controls over financial reporting. The Vice President of Internal Audit and the Chief Information Officer report to the Audit Committee on information systems controls and security.
Throughout each fiscal year, the Audit Committee invites appropriate members of management to its meetings to provide enterprise-level reports relevant to the Audit Committee's oversight role, including the adequacy and effectiveness of management reporting and controls systems used to monitor adherence to policies and approved guidelines, information systems and security over systems and data, treasury, insurance structure and coverage, tax structure and planning, worldwide disaster recovery planning and the overall effectiveness of the company's operations risk management policies. The Audit Committee is generally scheduled to meet at least twice a quarter, and generally covers one or more areas relevant to its risk oversight role in at least one of these meetings. The Compensation Committee oversees risks associated with the company's compensation policies and practices with respect to executive compensation and executive recruitment and retention, as well as compensation generally. In establishing and reviewing the executive compensation program, the Compensation Committee consults with independent compensation experts and seeks to structure the program so as not to encourage unnecessary or excessive risk-taking. The company's compensation program utilizes a mix of base salary and short-term and long-term incentive awards designed to align executive compensation with company success, particularly with respect to financial performance and stockholder value. The Compensation Committee sets the amount of the executives' base salaries at the beginning of each fiscal year. A substantial portion of bonus amounts is tied to overall corporate performance and stockholder value. Compensation provided to the executive officers also includes a substantial portion in the form of long-term equity awards that help align executives' interests with those of the stockholders over a longer term. The Corporate Governance and Nominating Committee oversees risks related to the company's overall corporate governance, including development of corporate governance principles applicable to the company, evaluation of federal securities laws and regulations with respect to its insider trading policy, development of standards to be applied in making determinations as to the absence of material relationships between the company and a director, and formal periodic evaluations of the Board and management. Adoption of Majority Voting Provision In considering best practices of corporate governance among peer companies and governance practices recommended by shareholder advisory organizations and supported by the company's stockholders, the company amended its Bylaws and Corporate Governance Guidelines in fiscal year 2013 to adopt a majority voting provision, which became effective immediately following the close of its 2013 Annual Meeting of Stockholders. The provision provides that, in an uncontested election of directors, each director shall be elected by the vote of the majority of the votes cast (meaning the number of shares voted “for” a nominee must exceed the number of shares voted “against” such nominee), and in a contested election, each director shall be elected by a plurality of the votes cast. A contested election is defined as an election for which the company's Corporate Secretary determines that the number of director nominees exceeds the number of directors to be elected, as of the date ten days preceding the date the company first mails its notice of meeting to stockholders. 
Under the amended Corporate Governance Guidelines, any nominee in an uncontested election who receives a greater number of “against” votes than “for” votes shall promptly tender his or her resignation following certification of the vote. The Corporate Governance and Nominating Committee shall consider the resignation offer and shall recommend to the Board the action to be taken. In considering whether to recommend accepting or rejecting the tendered resignation, the Corporate Governance and Nominating Committee will consider all factors that it deems relevant including, but not limited to, any reasons stated by stockholders for their “withheld” votes for election of the director, the length of service and qualifications of the director, the company's Corporate Governance Guidelines and the director's overall contributions as a member of the Board. The Board will consider these and any other factors it deems relevant, as well as the Corporate Governance and Nominating Committee's recommendation, when deciding whether to accept or reject the tendered resignation. Any director whose resignation is under consideration shall not participate in the Corporate Governance and Nominating Committee's deliberation and recommendation regarding whether to accept the resignation. The Board shall take action within 90 days following certification of the vote, unless a longer period of time is necessary in order to comply with any applicable NYSE or SEC rule or regulation, in which event the Board shall take action as promptly as is practicable while satisfying such requirements. See also References External links American companies established in 1981 Financial services companies established in 1981 Manufacturing companies based in San Jose, California Companies formerly listed on the New York Stock Exchange Electronics companies established in 1981 1981 establishments in Hawaii Credit cards Hewlett-Packard acquisitions Payment service providers Point of sale companies Retail point of sale systems Private equity portfolio companies 1997 mergers and acquisitions 2001 mergers and acquisitions 2005 initial public offerings 2018 mergers and acquisitions
Verifone
Technology
5,885
5,044,322
https://en.wikipedia.org/wiki/Total%20ionic%20strength%20adjustment%20buffer
Total ionic strength adjustment buffer (TISAB) is a buffer solution which increases the ionic strength of a solution to a relatively high level. This is important for potentiometric measurements, including ion selective electrodes, because they measure the activity of the analyte rather than its concentration. TISAB essentially masks minor changes in the ionic strength of the solution and hence increases the accuracy of the reading. Theory TISAB is very commonly applied to fluoride ion analysis, such as with fluoride ion selective electrodes. There are four main constituents of TISAB, namely CDTA (cyclohexylenedinitrilotetraacetate), sodium hydroxide, sodium chloride and acetic acid (ethanoic acid), which are all dissolved in deionised water. TISAB thus has a density of about 1.0 kg/L, though this can vary up to 1.18 kg/L. Each constituent plays an important role in controlling the ionic strength and pH of the analyte solution, which may otherwise cause error and inaccuracy. The activity of a substance in solution depends on the product of its concentration and the activity coefficient in that solution. The activity coefficient depends on the ionic strength of the solution in which the potentiometric measurements are made. This can be calculated for dilute solutions using the Debye–Hückel equation; for more concentrated solutions other approximations must be used. In most cases, the analyst's goal is simply to make sure that the activity coefficient is constant across a set of solutions, with the assumption that no significant ion pairing exists in the solutions. Example: An ion-selective electrode might be calibrated using dilute solutions of the analyte in distilled water. If this calibration is used to calculate the concentration of the analyte in sea water (high ionic strength), significant error is introduced by the difference between the activity of the analyte in the dilute solutions and the concentrated sample. This can be avoided by adding a small amount of ionic-strength buffer to the standards, so that the activity coefficients match more closely. Adding a TISAB buffer to increase the ionic strength of the solution helps to "fix" the ionic strength at a stable level, producing a linear correlation between the logarithm of the concentration of the analyte and the measured voltage: E = E0 + (RT/zF) ln c, where E is the measured voltage, E0 is a constant for the given electrode, R is the gas constant, T is the temperature measured in kelvins, F is the Faraday constant, z is the charge of the analyte and c is the concentration of the analyte. By also adding the TISAB buffer to the standards with which the potentiometric equipment is calibrated, the linear correlation can be used to calculate the concentration of analyte in the solution. TISAB buffers often include chelators which bind ions that could otherwise interfere with the analyte. References Analytical chemistry
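A minimal sketch of the calibration described above, in Python with NumPy: with ionic strength fixed by TISAB, the measured potential is linear in log10 of concentration, so a least-squares line through the standards can be inverted for an unknown. The standard concentrations, millivolt readings, and sample reading below are invented for illustration and are not taken from any real measurement.

```python
import numpy as np

# Hypothetical fluoride ISE calibration: standard concentrations (mol/L)
# and measured electrode potentials (mV). All values are illustrative.
std_conc = np.array([1e-5, 1e-4, 1e-3, 1e-2])
std_mv = np.array([150.2, 92.1, 33.8, -24.5])

# With TISAB fixing the ionic strength, E is linear in log10(c):
# E = intercept + slope * log10(c). For fluoride at 25 C the
# Nernstian slope is about -59.2 mV per decade of concentration.
slope, intercept = np.polyfit(np.log10(std_conc), std_mv, 1)

# Invert the calibration line for an unknown sample reading
# (the sample is prepared with the same TISAB addition as the standards).
sample_mv = 60.0
sample_conc = 10 ** ((sample_mv - intercept) / slope)

print(f"slope = {slope:.1f} mV/decade")
print(f"sample concentration = {sample_conc:.2e} mol/L")
```

A fitted slope far from the Nernstian value of roughly ±59.2/z mV per decade at 25 °C would suggest a problem with the electrode or with the ionic-strength buffering.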
Total ionic strength adjustment buffer
Chemistry
579
25,418,827
https://en.wikipedia.org/wiki/Desert%20greening
Desert greening is the process of afforestation or revegetation of deserts for ecological restoration (biodiversity), sustainable farming and forestry, but also for reclamation of natural water systems and other ecological systems that support life. The term "desert greening" is intended to apply to both cold and hot arid and semi-arid deserts. It does not apply to ice capped or permafrost regions. It pertains to roughly 32 million square kilometres of land. Deserts span all seven continents of the Earth and make up nearly a fifth of the Earth's landmass, areas that have recently been increasing in size. As some of the deserts expand and global temperatures increase, the different methods of desert greening may provide a possible response. Planting suitable flora in deserts has a range of environmental benefits, from carbon sequestration to providing habitat for desert fauna to generating employment opportunities to creation of habitable areas for local communities. The prevention of land desertification is one of 17 Sustainable Development Goals outlined by the United Nations. Desert greening is a process that aims not only to combat desertification but to foster an environment where plants can create a sustainable environment for all forms of life while preserving its integrity. Desert greening techniques When establishing or re-establishing vegetation in desert ecosystems there are many factors to consider before implementing a specific strategy. It is important to account for factors such as the geographical location of the area, amount of annual precipitation, average temperature, soil quality, nutrient availability, native plant and animal life, along with the human impact when aiming to restore a degraded or disrupted desert biome. Planting Planting strategies in the desert are different from conventional planting practices, especially in the initial stages. Deserts are regions in which annual precipitation is considerably less than the evaporation, making it difficult for plants and animals that are not specialized to the biome to survive. One of the ways to ensure the success of the plant life is to grow the plants first in greenhouses, allowing root systems to develop before they are planted in the desert. Often the plant species that are planted in desert regions are those that are capable of surviving on limited water and able to withstand the sun's direct rays. However, deserts also vary, with some being hot and dry and others being semiarid, and plants that may survive in a coastal desert might not be able to endure the considerably higher temperatures of hot and dry deserts. Therefore, when planting in deserts as an effort to restore the ecosystem or to create a greener space, it is important that the vegetation being planted is suitable to the desert in which it is being planted. Utilizing pioneer desert species such as Acamptopappus shockleyi and Lepidium fremontii, which are native to the Mojave Desert, and halophytes such as Salicornia contributes positively to desert greening efforts. Planted trees store water, inhibit soil erosion by wind, raise water from underlying aquifers, reduce evaporation after rain, attract animals (and thereby fertility through feces), and can cause more rain to fall (by temperature reduction and other effects) if the planted area is large enough. 
Another method of introducing or re-introducing vegetation to deserts is seeding, which involves the scattering of seeds either manually or aerially, depending upon the size of the region undergoing revegetation efforts. Using seeding as a desert greening technique on a large scale requires a longer time for the ecosystem to recover and for the vegetation to establish itself, as was seen in the Mu Us Desert. Additionally, there are potential downsides due to environmental vulnerability and predation by desert animals, putting the success of this technique at risk. Landscaping and green infrastructure With the growth of human population in urban areas that are located close to deserts, ecoscaping has become an important strategy when designing and building infrastructure. Using the National Tree Benefit Calculator software, it was established that if Acacia tortilis, Ziziphus spina-christi, and Phoenix dactylifera were planted in a desert city like Doha, this would yield a host of environmental benefits along with economic gains, including carbon sequestration, air pollution reduction, lowering of the urban heat index, prevention of storm water runoff and increases in property values. As global temperatures increase, environmental impacts are considerably greater in dry regions with reduced precipitation levels, which are vulnerable to desertification. Some of the beneficial desert-greening effects that trees offer can also be provided by buildings that incorporate architectural elements allowing them to shade exposed walls, consequently reducing heat absorption by the building. Another example of a building designed to offer the beneficial effects of vegetation in the desert is the IBTS Greenhouse. Agriculture Desert farming, also known as desert agriculture or arid farming, refers to the practice of cultivating and growing crops in arid or desert regions where water scarcity and extreme climatic conditions pose significant challenges to traditional agriculture. Desert farming involves employing various techniques, with the help of technology, to overcome the agricultural limitations imposed by an arid environment. Some common approaches used in desert farming include water management, soil improvement, crop selection, shade and windbreaks, and greenhouses and controlled environments. Overall, desert farming aims to maximize the efficient use of water resources while improving soil quality and planting crops suitable to the environment to overcome the challenges of arid environments. This allows farmers to cultivate crops and sustain agricultural production in regions traditionally considered inhospitable for farming. Greenhouse cultivation, also known as greenhouse farming or controlled environment agriculture, refers to the practice of cultivating plants within an enclosed structure called a greenhouse. It is a method of crop production that involves creating a controlled environment to optimize plant growth and protect crops from external factors such as extreme weather conditions, pests, and diseases. In a greenhouse, various environmental factors such as temperature, humidity, light intensity, and carbon dioxide levels can be monitored and adjusted to create ideal growing conditions for plants. This is achieved using various technologies such as heating and cooling systems, ventilation, irrigation systems, artificial lighting, and pest control measures. 
Greenhouses are typically made of transparent materials like glass or plastic, which allow sunlight to enter while trapping heat inside. This helps maintain a warmer temperature compared to the outside environment, extending the growing season and enabling the cultivation of plants that are not naturally suited to the local climate. Seawater greenhouses are innovative systems that use seawater to grow crops in arid and water-scarce regions. These greenhouses employ a combination of evaporative cooling, humidification, and desalination techniques to create a controlled environment for plant growth. One prominent proponent of seawater greenhouses is the Seawater Foundation, a non-profit organization that aims to address global food and water scarcity by utilizing seawater greenhouses. Its greenhouse system uses evaporative cooling to create a humid atmosphere for crops, while seawater is used for humidification and cooling purposes. Another notable example is the IBTS (Integrated Biosphere Tectonics Systems) Greenhouse, developed by Seawater Greenhouse Ltd. The IBTS Greenhouse utilizes seawater to cool and humidify the air inside the greenhouse, and it incorporates solar desalination systems to convert seawater into freshwater, which is then used to irrigate the plants. The concept of seawater greenhouses offers several advantages. Firstly, it allows for the cultivation of crops in arid regions with limited freshwater availability, reducing the pressure on traditional freshwater sources. Secondly, the humid and cooler environment created within these greenhouses promotes efficient plant growth, even in hot climates. Lastly, the evaporative cooling process can potentially produce freshwater as a byproduct, contributing to water sustainability. By harnessing the power of seawater and innovative greenhouse technologies, these initiatives are contributing to sustainable agriculture and addressing the challenges posed by water scarcity and climate change. Water resources management Water availability Desert greening is substantially a function of water availability. Water can be made available through saving, reusing, rainwater harvesting, desalination, or direct use of seawater for salt-loving plants. Reuse of treated water and the closing of cycles is the most efficient because closed cycles stand for unlimited and sustainable supply; rainwater management is a decentralized solution and applicable for inland areas; and desalination is very secure as long as the primary energy for the operation of the desalination plant is available. In the Sahara Forest Project desalination is carried out by solar stills for the generation of the freshwater. Another technique that is used is cloud seeding, which helps in producing precipitation in areas with dryer climates. With the new techniques and latest technology used to produce rainfall in areas that had dryer climates, there are often floods due to the urban infrastructure in those areas being insufficient for precipitation that exceeds conventional levels. Dehumidification is a technique that uses "atmospheric water generation" or air-to-water conversion, used by the military for potable water generation. However, this technology uses 200 times more energy than desalination, making it unsuitable for large-scale desert greening. Collecting rainwater and storing it in ponds, reservoirs, or underground tanks is one of the simplest ways to improve soil moisture content, helping to increase green cover and crop production in arid areas. 
It is an effective method for increasing water availability in arid regions and can contribute to desert greening in several ways, such as increasing soil moisture so that farmers have a reliable water source for their crops, even during periods of low rainfall. It also plays an important role in recharging groundwater, since in many arid areas the groundwater is easily depleted, which could further exacerbate the aridity. This can help to combat desertification, reduce soil erosion, and promote biodiversity. Additionally, it helps alleviate water scarcity in areas with limited access to reliable water sources. Rainwater harvesting can serve as a practical and sustainable solution. It reduces the stress on scarce water resources, such as rivers or underground wells, and it provides a decentralized water supply system. Overall, rainwater harvesting contributes to desert greening by increasing soil moisture, promoting vegetation growth, and conserving water resources. It is a cost-effective and environmentally friendly technique that can be implemented at various scales, from individual households to large-scale agricultural systems, to make desert areas more productive and sustainable. Water distribution The fresh water or seawater contained in centralized systems may be distributed by canals or in some instances aqueducts (both options cause water to evaporate due to environmental exposure), troughs (as used in the Keita Project), earthenware piping (semi-open or closed) or even underground systems like qanāt. The mode of water distribution also influences how it is delivered to the plants: options include drip irrigation (used only with pipes), a costly solution; wadis (V-shaped ponds dug in the earth); or simply planting the trees in holes inside or over the water pipe itself, allowing the roots access to the water straight from the pipe (used in qanāt, hydroponics etc.). Water can also be distributed through semi-open pipes, as seen in the dug troughs of the Keita Project. Disadvantages The use of water for desert greening in arid regions, however, is not without its disadvantages. Desert greening by the Helmand and Arghandab Valley Authority irrigation scheme in Afghanistan significantly reduced the water flowing from the Helmand River into Lake Hamun, and this, together with drought, was cited as a key reason for the severe damage to the ecology of Lake Hamun, much of which has degenerated since 1999 from a wetland of international importance into salt flats. Similarly, in northwestern China, desert greening practices, fueled by economic and environmental benefits, resulted in the exhaustion of groundwater sources, which impacted soil integrity. History The modern practice of desert greening can be traced back to a Japanese horticulture professor and agriculturist, Seiei Toyama, who spent 30 years of his life in efforts to green the Kubuqi Desert in China. He authored the text Greening the Deserts: Techniques and Achievements of Two Japanese Agriculturists along with Masao Toyama, which was published in 1995. During his time as a professor at Tottori University, Toyama was able to revitalize the surrounding sandy dunes into revenue-generating farms through his irrigation techniques and knowledge of plant species. 
After his retirement in 1972, he pursued agricultural projects in China, which included the conservation of the eroding banks of the Yellow River by planting kudzu vines, the introduction of grape-growing techniques in the Ningxia Huizu Autonomous Region, and his best-known project, the Engebei Desert Development, an oasis in the Kubuqi Desert of Inner Mongolia. Examples Asia The history of modern desert greening in Asia focuses on initiatives that are aimed at reducing desertification and promoting sustainable land management practices. However, the challenges faced by nations in the Asian continent are varied, and the solutions have been tailored to meet specific needs. One of the earliest and most notable examples of desert greening in Asia occurred in China in the 1970s with the "Great Green Wall" program, aimed at planting trees along the border of the Gobi Desert to halt its expansion. The program involved planting over 100 billion trees across a thousand miles of desert within a decade. The initiative was successful in reducing sandstorms and increasing rainfall in the region, and the program has since been expanded to other parts of China. In the Middle East, Israel's desert greening initiatives have been aimed at the Negev Desert. Initiatives include the establishment of research and development centers for desert agriculture, the introduction of drip irrigation techniques, and the use of treated wastewater for irrigation. In the Indian subcontinent, India and Pakistan's desert greening initiatives have focused on afforestation and soil conservation. These initiatives involve planting trees, shrubs, and grasses to hold the soil in place, prevent erosion, and improve water retention. Overall, the history of modern desert greening in Asia reflects the need to address environmental challenges such as desertification and promote sustainable land management practices. These initiatives have often been successful in addressing these challenges and improving the livelihoods of people in arid regions. China The Three-North Shelter Forest Program, also nicknamed the "Great Green Wall", is a series of windbreaking forests in China designed to hold back the expansion of the Gobi Desert and reduce the incidence of dust storms that have long caused problems for northern China, as well as providing timber to the local population. The program started in 1978 with the proposed end result of raising northern China's forested area from 5 to 15 percent, and is planned to be completed around 2050, at which point it will be long. In 2008, winter storms destroyed 10% of the new forest stock, causing the World Bank to advise China to focus more on quality rather than quantity in its stock species. As of 2009, China's planted forest covered more than , increasing tree coverage from 12% to 18%. It is the largest artificial forest in the world. According to Foreign Affairs, the program successfully transitioned the economic model in the Gobi Desert region from ecologically harmful industrial farming and pastoralism to beneficial ecotourism, fruticulture and forestry. In 2018, the United States' National Oceanic and Atmospheric Administration found the increase in forest coverage observed by satellites is consistent with the Chinese government data. According to Shixiong Cao, an ecologist at Beijing Forestry University, the Chinese Government recognized the water shortage problems in arid regions and changed its approach towards vegetation with lower water requirements. 
Zhang Jianlong, head of the Forestry Department, told the media that the goal was to sustain the health of vegetation and choose suitable plant species and irrigation techniques. According to a BBC News report in 2020, China's tree plantation programs resulted in significant carbon fixation and helped mitigate climate change, and the benefit was underestimated by previous research. The program also reversed the desertification of the Gobi desert, which grew per year in the 1980s, but had shrunk by more than in 2022. India The soil of the Thar Desert in India remains dry for much of the year and is prone to soil erosion. High-speed winds blow the soil from the desert, depositing it on neighboring fertile lands, and causing shifting sand dunes within the desert which bury fences and block roads and railway tracks. A permanent solution to shifting sand dunes can be provided by planting appropriate species on the dunes to prevent further movement and planting windbreaks and shelterbelts. These solutions also provide protection from hot or cold and desiccating winds and the invasion of sand. The Rajasthan Canal system in India is the major irrigation scheme of the Thar Desert and is intended to reclaim it. There are few local tree species suitable for planting in the desert region and they are slow growing. The introduction of exotic tree species in the desert has therefore become a necessity; many species of Eucalyptus, Acacia, Cassia and other genera from Israel, Australia, the US, Russia, Zimbabwe, Chile, Peru, and Sudan have been tried in the Thar Desert. Vachellia tortilis has proved to be the most promising species for desert greening in this region, and prevention of shifting sand dunes has been accomplished by planting trees like Vachellia tortilis near Laxmangarh town. Another promising species is jojoba, which is economically valuable as well. Africa Modern desert greening in Africa is a relatively recent phenomenon and was primarily initiated in the 1950s and 1960s. The initiative was largely driven by a desire to combat desertification, the process by which fertile land becomes barren and unsuitable for farming, across the continent. One of the earliest and most notable examples of desert greening in Africa occurred in Algeria. In the 1950s, the Algerian government launched an ambitious program to transform over 20,000 square kilometers of arid land into productive agricultural land. This project involved the construction of dams, wells, and irrigation networks, as well as the introduction of modern farming techniques and seed varieties. The program was part of a broader effort to address food insecurity and improve livelihoods in rural areas. In the following decades, similar projects were undertaken in other countries, such as Mali, Niger, and Senegal. These initiatives focused on promoting sustainable agriculture and land management practices, as well as reforestation and the protection of natural ecosystems. Some of the key strategies employed included the use of drought-resistant crops, the introduction of agroforestry techniques, and the establishment of community-based management systems. In recent years, desert greening efforts have also been boosted by the development of renewable energy technologies, such as solar and wind power. These technologies provide a sustainable source of energy for desert regions, which can be used to power irrigation systems and other farming equipment. 
Greening projects that integrate renewable energy solutions are often more effective and cost-efficient in the long run. Overall, modern desert greening in Africa has made significant progress in reducing the impact of desertification and improving the sustainability of agriculture and natural resource management in arid areas. However, many challenges remain, such as lack of funding, political instability, and climate change. As such, ongoing research and development of innovative strategies, including the integration of new technologies, will be essential for continued success in this area. The "Great Green Wall of the Sahara and the Sahel" is a project adopted by the African Union in 2007, initially conceived as a way to combat desertification in the Sahel region and hold back expansion of the Sahara Desert by planting a wall of trees stretching across the entire Sahel from Djibouti City to Dakar. The original dimensions of the "wall" were slated to be wide and long, but the program has expanded to encompass nations in both North and West Africa. The modern green wall has since evolved into a program promoting water harvesting techniques, greenery protection and improving indigenous land use techniques, aimed at creating a mosaic of green and productive landscapes across North Africa. The ongoing goal of the project is to restore 100 million hectares of degraded land, capture 250 million tonnes of carbon dioxide, and create 10 million jobs in the process, all by 2030. As of March 2019, 15 per cent of the wall was complete, with significant gains made in Nigeria, Senegal and Ethiopia. In Senegal, over 11 million trees had been planted. Nigeria has restored of degraded land, and Ethiopia has reclaimed . A report commissioned by the United Nations Convention to Combat Desertification (UNCCD), published on September 7, 2020, found that the Great Green Wall had covered only 4% of the planned area, with only planted. Ethiopia has had the most success, with 5.5 billion seedlings planted, but Chad has planted only 1.1 million. Doubt was also raised over the survival rate of the 12 million trees planted in Senegal. In January 2021, the project received a boost at the One Planet Summit, where its partners pledged 14.3 billion USD to launch the Great Green Wall Accelerator, aimed at facilitating the collaboration and coordination among donors and involved stakeholders across 11 countries. In September 2021, the French Development Agency estimated that 20 million hectares had been restored and 350,000 jobs had been created. According to the second edition of the Global Land Outlook, published by the UNCCD in April 2022, one reason the project has experienced implementation challenges is the political risk associated with investing in more fragile nations, as well as the fact that many "GGW projects generate low economic returns compared to the significant environmental and social benefits accrued that often have little or no market value". Furthermore, international donors seem to favor investing in more stable nations, picking and choosing which projects they will fund, and leaving nations with less stable governments behind. Australia Australia is the world's driest inhabited continent, with a significant portion covered by arid or semi-arid deserts. In recent years, there have been various efforts and initiatives focused on desert greening in Australia. 
One notable example is the "Great Green Wall" project, inspired by similar initiatives in Africa, which aims to create a vegetation barrier of local native plants across Australia's east coast to prevent desertification and erosion. [Reference needed] Another approach to desert greening in Australia involves the use of regenerative farming and land management techniques. These techniques aim to restore degraded soils and improve water retention, which can support the growth of vegetation and increase biodiversity. [Reference needed] Additionally, there are ongoing research and development projects that explore innovative techniques to facilitate desert greening, such as solar-powered desalination plants, drought-resistant crop varieties, and the use of native plant species that can thrive in arid environments. [Reference needed] The success of desert greening initiatives depends on various factors, including local climate conditions, access to water resources, suitable plant species, and sustainable land management practices. Sundrop Farms launched a greenhouse in 2016 to produce 15,000 tonnes of tomatoes using sunlight and desalinated seawater piped from Spencer Gulf. See also Al Baydha Project Algerian Green Dam Arid Forest Research Institute Effects of climate change on the water cycle Fertilizer tree Oasification Restoration ecology United Nations Convention to Combat Desertification References External links "How to green the desert and reverse climate change" Allan Savory, TED talk, February 2013. Greening the Desert II – Final Animation of Desert Greening in Egypt with the IBTS Greenhouse LivingDesert Group Greening Desertification Ecosystems Land reclamation
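As a rough illustration of the rainwater-harvesting arithmetic discussed under water resources management above, a minimal Python sketch follows. The catchment area, rainfall depth and runoff coefficient are illustrative assumptions, not figures from any of the projects described in the article.

```python
# Rainwater-harvesting yield estimate: V = A * R * C, where A is the
# catchment area, R the annual rainfall depth and C a runoff coefficient.
# All numbers below are illustrative assumptions, not project data.
area_m2 = 200.0       # rooftop or paved catchment area (m^2)
rainfall_m = 0.25     # 250 mm/year, typical of a semi-arid climate
runoff_coeff = 0.8    # fraction of rainfall actually captured

harvest_m3 = area_m2 * rainfall_m * runoff_coeff
print(f"Harvestable volume: {harvest_m3:.0f} m^3/year "
      f"(about {harvest_m3 * 1000:.0f} litres)")
```

Under these assumptions a modest 200 m² catchment in a 250 mm/year climate yields about 40 m³ of water per year, which gives a sense of the scale needed for household versus agricultural use.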
Desert greening
Biology
4,742
2,481,120
https://en.wikipedia.org/wiki/Meniscal%20cyst
Meniscal cyst is a well-defined cystic lesion located along the peripheral margin of the meniscus, a part of the knee, nearly always associated with horizontal meniscal tears. Signs and symptoms Pain and swelling or a focal mass at the level of the joint. The pain may be related to a meniscal tear, distension of the knee capsule, or both. The mass varies in consistency from soft/fluctuant to hard. Size is variable, and meniscal cysts are known to change in size with knee flexion/extension. Cause Various etiologies have been proposed, including trauma, hemorrhage, chronic infection, and mucoid degeneration. The most widely accepted theory describes meniscal cysts as resulting from extrusion of synovial fluid through a peripherally extended horizontal meniscal tear, accumulating outside the joint capsule. They arise more commonly from the lateral joint margin, and occur most often in 20- to 40-year-old males. Diagnosis Magnetic resonance imaging is the modality of choice for diagnosis of meniscal cysts. In their most subtle form, meniscal cysts present as focal areas of high signal intensity within a swollen meniscus. It is not uncommon for radiologists to miss this type of meniscal cyst because the signal intensity is not quite as great as fluid on T2-weighted sequences. When this fluid is extruded into the adjacent soft tissues, the swollen meniscus subsequently assumes a more normal shape, and the extruded fluid demonstrates a higher T2 signal typical of parameniscal cysts. [Image captions: a medial meniscus horizontal tear extending into a meniscal cyst; sagittal T2 images of a medial meniscus horizontal tear extending into a meniscal cyst; a large medial meniscus cyst.] Treatment Treatment of meniscal cysts consists of a combination of cyst decompression (intraarticular decompression versus open cystectomy) and arthroscopic repair of any meniscal abnormalities. Success rates are significantly higher when both the cyst and the meniscal tear are treated, compared to treating only one disease process. See also Knee pain Knee osteoarthritis Discoid meniscus References Gutierrez R. Meniscal Cysts. Campbell SE, Sanders TG, Morrison WB. MR imaging of meniscal cysts: incidence, location, and clinical significance. AJR 2001;177:409–413. Helms CA. The meniscus: recent advances in MR imaging of the knee. AJR 2002;179(5):1115–1122. Pathology
Meniscal cyst
Biology
564
2,668,460
https://en.wikipedia.org/wiki/Lambda%20Serpentis
Lambda Serpentis, Latinized from λ Serpentis, is a star in the constellation Serpens, in its head (Serpens Caput). It has an apparent visual magnitude of 4.43, making it visible to the naked eye. Based upon parallax measurements, this star lies at a distance of about from Earth. Lambda Serpentis is moving toward the Solar System with a radial velocity of −66.4 km s−1. In about 166,000 years, this system will make its closest approach to the Sun at a distance of , before moving away thereafter. This star is 36% larger and 9% more massive than the Sun, although it has a similar stellar classification. It shines with nearly double the Sun's luminosity, and this energy is radiated from the star's outer atmosphere at an effective temperature of 5,901 K. A periodicity of 1837 days (5.03 years) was suspected by Morbey & Griffith (1987), but it is probably related to stellar activity. The McDonald Observatory team has set limits on the presence of one or more exoplanets around Lambda Serpentis, with masses between 0.16 and 2 Jupiter masses and average separations spanning between 0.05 and 5.2 astronomical units. Planetary system In 2020, a candidate planet was detected orbiting Lambda Serpentis (HD 141004). With a minimum mass of 0.043 Jupiter masses (13.6 Earth masses) and an orbital period of 15 days, this would most likely be a hot Neptune. The discovery of the planet was confirmed in 2021. References Further reading G-type main-sequence stars Serpentis, Lambda Suspected variables Planetary systems with one confirmed planet Serpens J15462661+0721109 Serpentis, Lambda BD+7 3023 Serpentis, 27 141004 077257 5868
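The "nearly double the Sun's luminosity" statement above can be cross-checked against the quoted radius (36% larger than the Sun's) and effective temperature (5,901 K) with the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(Teff/T☉)⁴. A minimal sketch follows; the solar effective temperature of 5,772 K is an assumed standard value, not stated in the article.

```python
# Stefan-Boltzmann consistency check for the "nearly double the Sun's
# luminosity" statement: L/Lsun = (R/Rsun)**2 * (Teff/Tsun)**4.
r_ratio = 1.36     # radius 36% larger than the Sun's (from the article)
t_eff = 5901.0     # K, effective temperature (from the article)
t_sun = 5772.0     # K, assumed IAU nominal solar effective temperature

luminosity_ratio = r_ratio**2 * (t_eff / t_sun)**4
print(f"L/Lsun ~ {luminosity_ratio:.2f}")  # ~2.0, i.e. nearly double
```

The result, about 2.0 solar luminosities, agrees with the article's wording.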
Lambda Serpentis
Astronomy
381
2,545,679
https://en.wikipedia.org/wiki/National%20Archive%20of%20Computerized%20Data%20on%20Aging
The National Archive of Computerized Data on Aging (NACDA), located within ICPSR in Michigan, is funded by the US National Institute on Aging (NIA). NACDA's mission is to advance research on aging by helping researchers to profit from the under-exploited potential of a broad range of datasets. NACDA acquires and preserves data relevant to gerontological research, processes them as needed to promote effective research use, disseminates them to researchers, and facilitates their use. By preserving and making available the largest library of electronic data on aging in the United States, NACDA offers opportunities for secondary analysis on major issues of scientific and policy relevance. Description NACDA is a program within the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan. The NACDA collection consists of over sixteen hundred datasets relevant to gerontological research and represents the world's largest collection of publicly available research data on the aging lifecourse. History The NACDA Program on Aging began in 1976 under the sponsorship of the United States Administration on Aging (AoA). At that time NACDA was seen as a novel experiment: neither the concept of a research archive devoted to aging issues nor the idea of making research data freely available to the public was well established. Over the years, NACDA's mission has changed both in scope and in direction. Originally conceived as a storehouse for data, NACDA has aggressively pursued a role of increasing involvement in the research community by actively promoting and distributing data. In 1984, the NIA became the sponsor of the National Archive of Computerized Data on Aging, and NACDA has flourished under its support. Over the years, NACDA has evolved and grown in response to changes in technology, in many instances leading the pace of change in methodology related to the storage, protection, and distribution of data. NACDA was one of the first organizations to develop and release studies on CD-ROM. NACDA was also one of the first archives to experiment with the idea of offering electronic research data as a public good, free to all interested individuals at no charge. The initial collection of 28 public use datasets first offered on the internet in 1992 has now expanded to over 1,600 datasets that are freely available to any researcher. The entire collection is stored online at the NACDA website, offering immediate access to gerontological researchers. See also National Institute on Aging (NIA) United States Administration on Aging (AoA) National Health and Nutrition Examination Survey References External links NACDA ICPSR Databases in the United States Gerontology
National Archive of Computerized Data on Aging
Biology
550
38,782,269
https://en.wikipedia.org/wiki/Luhman%2016
Luhman 16 (also designated WISE 1049−5319 or WISE J104915.57−531906.1) is a binary brown-dwarf system in the southern constellation Vela at a distance of from the Sun. These are the closest-known brown dwarfs and the closest system found since the measurement of the proper motion of Barnard's Star in 1916, and the third-closest-known system to the Sun (after the Alpha Centauri system and Barnard's Star). The primary is of spectral type L7.5 and the secondary of type (and is hence near the L–T transition). The masses of Luhman 16 A and B are 35.4 and 29.4 Jupiter masses, respectively, and their ages are estimated to be 400–800 million years. Luhman 16 A and B orbit each other at a distance of about 3.5 astronomical units with an orbital period of approximately 26.6 years. Discovery This system was discovered by Kevin Luhman, an astronomer from Pennsylvania State University and a researcher at Penn State's Center for Exoplanets and Habitable Worlds, from images made by the Wide-field Infrared Survey Explorer (WISE), a NASA infrared-wavelength Earth-orbiting space telescope whose mission lasted from December 2009 to February 2011; the discovery images were taken from January 2010 to January 2011, and the discovery was announced in 2013 (the pair are the only two objects announced in the discovery paper). The system was found by comparing WISE images at different epochs to reveal objects that have high proper motions. Luhman 16 appears in the sky close to the galactic plane, which is densely populated by stars; the abundance of light sources makes it difficult to spot faint objects. This explains why an object so near to the Sun was not discovered in earlier searches. Discovery of companion The second component of the system was also discovered by Luhman in 2013, and was announced in the same article as the primary. Its discovery image in the i-band was taken on the night of 23 February 2013 with the Gemini Multi-Object Spectrograph (GMOS) at the Gemini South telescope, Chile. The components of the system were resolved with an angular distance of 1.5 arcseconds, corresponding to a projected separation of 3 AU, and a magnitude difference of 0.45 mag. Precovery Although the system was first found on images taken by WISE in 2010–2011, it was afterwards precovered from the Digitized Sky Survey (DSS, 1978 (IR) & 1992 (red)), Infrared Astronomical Satellite (IRAS, 1983), ESO Schmidt telescope (1984 (red)), Guide Star Catalog (GSC, 1995), Deep Near Infrared Survey of the Southern Sky (DENIS, 1999), Two Micron All-Sky Survey (2MASS, 1999), and the AKARI satellite (2007). On the ESO Schmidt telescope image, taken in 1984, the source looks elongated with a position angle of 138°. The similarity of this position angle to that of the resolved pair in the GMOS image (epoch 2013) in Fig. 1 of Luhman (2013) suggests that the time period between 1984 and 2013 may be close to the orbital period of the system (not far from the original orbital period estimate by Luhman (2013)). Name Eric E. Mamajek proposed the name Luhman 16 for the system, with the components called Luhman 16A and Luhman 16B. The name originates from the frequently updated Washington Double Star Catalog (WDS). Kevin Luhman had already published several new discoveries of binary stars that have been compiled in the WDS with discovery identifier "LUH". The WDS catalog now lists this system with the identifier 10493−5319 and discoverer designation LUH 16. 
The rationale is that Luhman 16 is easier to remember than WISE J104915.57−531906.1 and that "it seems silly to call this object by a 24-character name (space included)". The "phone number names" also include WISE J1049−5319 and WISE 1049−5319. Luhman–WISE 1 was proposed as another alternative. As a binary object it is also called Luhman 16AB. Astrometry Position in the sky Luhman 16 is located in the southern celestial hemisphere in the constellation Vela. As of July 2015, its components are the nearest-known celestial objects in this constellation outside the Solar System. Its celestial coordinates: RA = , Dec = . Distance The trigonometric parallax of Luhman 16 as published by Sahlmann & Lazorenko (2015) is arcsec, corresponding to a distance of . Subsequent observations with Hubble and Gaia improved the parallax to ±0.050 mas, corresponding to a distance of ±0.0002 parsec, which is accurate to about 50 astronomical units. Proximity to the Solar System Currently Luhman 16 is the third-closest-known star/brown-dwarf system to the Sun after the triple Alpha Centauri system (4.37 ly) and Barnard's Star (5.98 ly), pushing Wolf 359 (7.78 ly) to the fifth place, along with the discovery of WISE 0855−0714. It also holds several records: the nearest brown dwarf, the nearest L-type dwarf, and possibly the nearest T-type dwarf (if component B is of T-type). Proximity to Alpha Centauri Luhman 16 is the nearest-known star/brown-dwarf system to Alpha Centauri, located from Alpha Centauri AB, and from Proxima Centauri. Both systems are located in neighboring constellations, in the same part of the sky as seen from Earth, but Luhman 16 is a bit farther away. Luhman 16 is closer to Proxima Centauri than to Alpha Centauri AB, just like Earth, even though Luhman 16 is farther from Earth than the Alpha Centauri system is. Luhman 16 therefore appears at a smaller angular distance from Proxima Centauri than from Alpha Centauri AB in Earth's sky, and this difference in direction contributes more to the difference in its distances to the two stars than the distance between them and Earth does. Proper motion The proper motion of Luhman 16, as published by Garcia et al. (2017), is about 2.79″/year, which is relatively large due to the proximity of Luhman 16. Radial velocity The radial velocity of is , and the radial velocity of is . Since the values of the radial velocity are positive, the system currently is moving away from the Solar System. Assuming these values for the components, and a mass ratio of 0.78 from Sahlmann & Lazorenko (2015), the system's barycentre radial velocity is about . This implies that the system passed by the Solar System around 36,000 years ago at a minimal distance of about . Orbit and masses In Luhman 16's original discovery paper, Luhman (2013) estimated the orbital period of its components to be about 25 years. Garcia et al. (2017), using archival observations extending over 31 years, found an orbital period of 27.4 years with a semi-major axis of 3.54 AU. This orbit has an eccentricity of 0.35 and an inclination of 79.5°. The masses of the components were found to be and , respectively, with their mass ratio being about 0.82. With the data from Gaia DR2 in 2018, their orbit was refined to a period of years, with a semi-major axis of , an eccentricity of , and an inclination of (facing the opposite direction from that found by the 2017 study). Their masses were additionally refined to and . 
In 2024 the distance and orbit were further refined, resulting in a semi-major axis of 3.52 AU (assuming a parallax of 500.993 mas), an eccentricity of and an inclination of , bringing the inclination in line with previous measurements. The secondary has a mass which is that of the primary. The individual masses were measured to be 35.2±0.2 and 29.4±0.2 Jupiter masses. These results are consistent with all previous estimates of the orbit and component masses. By comparing the rotation periods of the brown dwarfs with the projected rotational velocities, it appears that both brown dwarfs are viewed roughly equator-on, and they are well aligned with their orbits. Age A 2013 paper, published shortly after Luhman 16 was discovered, concluded that the brown dwarf belongs to the thin disk of the Milky Way with 96% probability, and therefore does not belong to a young moving group. Based on lithium absorption lines, the system has a maximum age of about 3–4.5 Gyr. Observations with the VLT showed that the system is older than 120 Myr. However, in 2022, Luhman 16 was found to be a member of the newly discovered Oceanus moving group, which has an age of Myr. Age estimates of 400–800 Myr from 2024 are in line with membership in this group. The age estimates for the two components are mismatched, which could be due to different cloud coverage resulting in different cooling efficiency. Alternatively, this could be due to inaccurate luminosities or errors in the evolutionary models. Search for planets In December 2013, perturbations of the orbital motions in the system were reported, suggesting a third body in the system. The period of this possible companion was a few months, suggesting an orbit around one of the brown dwarfs. Any companion would necessarily be below the brown-dwarf mass limit, as otherwise it would have been detected through direct imaging. Researchers estimated the odds of a false positive as 0.002%, assuming the measurements had not been made in error. If confirmed, this would have been the first exoplanet discovered astrometrically. They estimated the planet to likely have a mass between "a few" and , although they mentioned that a more massive planet would be brighter and therefore would affect the "photocenter", or measured position, of the star. This would make it difficult to measure the astrometric movement of an exoplanet around it. Subsequent astrometric monitoring of Luhman 16 with the Very Large Telescope has excluded the presence of any third object with a mass greater than orbiting around either brown dwarf with a period between 20 and 300 days. Luhman 16 does not contain any close-in giant planets. Observations with the Hubble Space Telescope in 2014–2016 confirmed the nonexistence of any additional brown dwarfs in the system. They additionally ruled out any Neptune-mass () objects with an orbital period of one to two years. This makes the existence of the previously found exoplanet candidate highly unlikely. Additional observations with Hubble ruled out the existence of a planet with more than 1.5 Neptune masses in an orbit of 400 to 5000 days. However, this study did not rule out planets with a mass of less than 3 Neptune masses and a shorter period of 2 to 400 days. Atmosphere A study by Gillon et al. (2013) found that Luhman 16B exhibited uneven surface illumination during its rotation. On 5 May 2013, Crossfield et al. 
(2014) used the European Southern Observatory Very Large Telescope (VLT) to directly observe the Luhman 16 system for five hours, the equivalent of a full rotation of Luhman 16B. Their research confirmed the observation of Gillon et al., finding a large, dark region at the middle latitudes, a bright area near its upper pole, and mottled illumination elsewhere. They suggest this variant illumination indicates "patchy global clouds", where darker areas represent thick clouds and brighter areas are holes in the cloud layer permitting light from the interior. Luhman 16B's illumination patterns change rapidly, on a day-to-day basis. Luhman 16B is one of the most photometrically variable brown dwarfs known, sometimes varying with an amplitude of over 20%. Only 2MASS J21392676+0220226 is known to be more variable. Heinze et al. (2021) observed variability in spectral lines of alkali metals such as potassium and sodium; they suggested that the variations were caused by changes in cloud cover, which changed the local chemical equilibrium with chlorides. Lightning or aurorae were deemed possible, but less likely. Luhman 16B's lightcurve shows evidence of differential rotation: there is evidence of equatorial regions and mid-latitude regions with different rotation periods. The main period is 5.28 hours, corresponding to the rotation period of the equatorial region. Meanwhile, the rotation period of Luhman 16A is likely 6.94 hours. Biller et al. (2024) observed both components with JWST for 8 hours with MIRI LRS, directly followed by a 7-hour observation with NIRSpec. The observations found water vapor, carbon monoxide and methane absorption in both brown dwarfs, which is typical for L/T dwarfs. Luhman 16A shows a flat plateau beyond 8.5 μm, which is indicative of small-grain silicates. The lightcurves produced from the observations show that both components are variable, with Luhman 16B being considerably more variable than Luhman 16A. The variability has a complex wavelength-dependent trend. The researchers identified changes in behaviour at 2.3 μm and 4.2 μm coincident with the CO band, and changes in behaviour at 8.3–8.5 μm coincident with silicate absorption. These changes in behaviour were interpreted as changes of average pressure at three different depths of the atmosphere. The observations also tested whether patchy clouds could produce the variability. While small silicate grains corresponding to high-altitude silicate clouds were found in Luhman 16A, the variability is unlikely to be caused by a patchy cloud layer. Luhman 16B does not have this small-grained silicate feature, but larger-grained silicate clouds deeper in the atmosphere are possible. The researchers also tested general circulation models (GCM) and hotspots, but the lightcurves are more complex than these models predict. Using data collected by TESS, a research team (Dániel Apai, Domenico Nardiello and Luigi R. Bedin) found that these brown dwarfs, objects intermediate between stars and gas giants, are more similar to Jupiter in that high-speed winds form stripes parallel to the equators of Luhman 16 A and B. Radio and X-ray activity In a study by Osten et al. (2015), Luhman 16 was observed with the Australia Telescope Compact Array in radio waves and with the Chandra X-ray Observatory in X-rays. No radio or X-ray activity was found at Luhman 16 AB, and constraints on radio and X-ray activity were presented, which are "the strongest constraints obtained so far for the radio and X-ray luminosity of any ultracool dwarf". 
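Two of the numbers quoted above can be cross-checked with a short sketch: the 2024 parallax of 500.993 mas implies the system's distance, and Kepler's third law applied to the 2024 semi-major axis (3.52 AU) and component masses (35.2 and 29.4 Jupiter masses) recovers the approximately 26.6-year period quoted at the top of the article. The unit conversions (1047.57 Jupiter masses per solar mass, 3.26156 light-years per parsec) are standard values assumed here, not taken from the article.

```python
import math

# Distance from the quoted parallax of 500.993 mas.
parallax_arcsec = 0.500993
dist_pc = 1.0 / parallax_arcsec       # ~1.996 parsecs
dist_ly = dist_pc * 3.26156           # ~6.51 light-years

# Kepler's third law check: P^2 = a^3 / M_total (years, AU, solar masses),
# using the 2024 values a = 3.52 AU and masses 35.2 + 29.4 Jupiter masses.
m_total_msun = (35.2 + 29.4) / 1047.57
period_yr = math.sqrt(3.52**3 / m_total_msun)

print(f"distance ~ {dist_pc:.3f} pc ({dist_ly:.2f} ly)")
print(f"orbital period ~ {period_yr:.1f} yr")  # ~26.6 yr, as quoted
```

Both results agree with the article's figures, which is a quick internal-consistency check on the quoted orbit.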
See also Substellar object Notes References Further reading See related slideshow. External links WISE 1049-5319 AB at Solstation.com "WISE Nabs the Closest Brown Dwarfs Yet Discovered" at Universe Today "Checking Out Our New Neighbors" at Astrobites.org Nearest Brown Dwarf Might Remind You Of Jupiter AstroBob, 5/13/20 20130323 Binary stars Local Bubble L-type brown dwarfs T-type brown dwarfs Vela (constellation) J104915.57-531906.1 Articles containing video clips J10491891-5319100 119862115
Luhman 16
Astronomy
3,248
23,444,071
https://en.wikipedia.org/wiki/Salientia
The Salientia (Latin salire, salio meaning "to jump") are a total group of amphibians that includes the order Anura, the frogs and toads, and various extinct proto-frogs that are more closely related to the frogs than they are to the Urodela, the salamanders and newts. The oldest fossil "proto-frog" appeared in the early Triassic of Madagascar, but molecular clock dating suggests their origins may extend further back to the Permian, 265 million years ago. Characteristics Very few fossils of early salientians have been found, which makes defining the characteristics of the group and their taxonomic relationships difficult. The arrangement of pectoral elements and the number of vertebrae are some guides, but the degree of vertebral articulation and the arrangement of the bones in the leg have not been found to be reliable indicators. The early proto-frogs developed from temnospondyl ancestors in which some of the elements of their vertebrae remained separate. The structure of the salientian pelvis and hind limb was probably developed for swimming rather than jumping. From the structure of the vertebrae, the group appears not to be monophyletic. The evolution of salientians seems to have been rapid and radiative. The essential features of recent groupings seem to have been established during the Mesozoic or early Tertiary. The families Alytidae, Pipidae, and Pelobatidae are ecologically isolated; the harlequin frogs are restricted to a neotropical range in Central and South America; and the Ranidae and Bufonidae probably radiated from tropical regions of Africa and Asia. Evolution The origins and evolutionary relationships between the three main groups of amphibians are hotly debated. A molecular phylogeny based on rDNA analysis dating from 2005 suggests that salamanders and caecilians are more closely related to each other than they are to frogs, and the divergence of the three groups took place in the Paleozoic or early Mesozoic before the breakup of the supercontinent Pangaea and soon after their divergence from the lobe-finned fishes. This would help account for the relative scarcity of amphibian fossils from the period before the groups split. Another molecular phylogenetic analysis conducted about the same time concluded the lissamphibians first appeared about 330 million years ago and that the temnospondyl-origin hypothesis is more credible than other theories. The neobatrachians seemed to have originated in Africa/India, the salamanders in East Asia and the caecilians in tropical Pangaea. Other researchers, while agreeing with the main thrust of this study, questioned the choice of calibration points used to synchronise the data. They proposed that the date of lissamphibian diversification be put in the Permian, rather less than 300 million years ago, a date in better agreement with the palaeontological data. A further study in 2011, using both extinct and living taxa sampled for morphological as well as molecular data, came to the conclusion that the Lissamphibia are monophyletic and should be nested within the Lepospondyli rather than within the Temnospondyli. The study postulated the Lissamphibia originated no earlier than the late Carboniferous, some 290 to 305 million years ago. The split between Anura and Caudata was estimated as taking place 292 million years ago, rather later than most molecular studies suggest, with the caecilians splitting off 239 million years ago. 
In 2008, Gerobatrachus hottoni, a temnospondyl with many frog- and salamander-like characteristics, was discovered in Texas. It dated back 290 million years and was hailed as a missing link, a stem batrachian close to the common ancestor of frogs and salamanders, consistent with the widely accepted hypothesis that frogs and salamanders are more closely related to each other (forming a clade called the Batrachia) than they are to caecilians. However, others have suggested that Gerobatrachus hottoni was only a dissorophoid temnospondyl unrelated to extant amphibians. The earliest known salientians (see below), closer to the extant frogs than to the extant salamanders, are Triadobatrachus massinoti, from the Early Triassic of Madagascar (about 250 million years ago), and the fragmentary Czatkobatrachus polonicus from the Early Triassic of Poland (about the same age as Triadobatrachus). The skull of Triadobatrachus is frog-like, being broad with large eye sockets, but the fossil has features diverging from modern frogs. These include a longer body with more vertebrae. The tail has separate vertebrae, unlike the fused urostyle or coccyx found in modern frogs. The tibia and fibula bones are also separate, making it probable that Triadobatrachus was not an efficient leaper. The Salientia (Latin salire (salio), "to jump") are a stem group including modern frogs in the order Anura and their close fossil relatives the "proto-frogs" (e.g., Triadobatrachus and Czatkobatrachus). The common features possessed by the "proto-frogs" in the Salientia group include 14 presacral vertebrae (modern frogs have eight or nine), a long and forward-sloping ilium in the pelvis, the presence of a frontoparietal bone, and a lower jaw without teeth. Species The earliest salientian yet discovered is Triadobatrachus massinoti, known from a single fossil specimen found in Madagascar. It dates back to the Early Triassic, about 250 million years ago. It had many frog-like features, but had 14 presacral vertebrae, while modern frogs have nine or 10. Previous fossil amphibians had many more presacral vertebrae than this, and T. massinoti provides a missing link between salamanders and frogs. Other characteristics that distinguish it from modern frogs include the possession of a short tail with unfused vertebrae, a separate radius and ulna in the fore limb, and separate tibia and fibula in the hind limb. The features it shares with modern frogs include a forward-sloping ilium, the fusion of the frontal and parietal bones into a single structure known as the frontoparietal, and a lower jaw bone with no teeth. Czatkobatrachus is another proto-frog with some characteristics similar to Triadobatrachus. It is from the early Triassic in Poland and has a shortened vertebral column, reduced tail, and elongated ilium. Another early proto-frog was Prosalirus bitis, several fossil specimens of which have been found in Arizona. It dates back to the Early Jurassic, 190 million years ago. It has primitive features, but has a urostyle and an elongated, forward-directed ilium in its pelvis. These adaptations made it better able to absorb the impact of landing after a jump. Dating back to a similar date is Vieraella herbsti, a single specimen of which has been found in Santa Cruz Province, Argentina. It had 10 presacral vertebrae, but is considered to be more basal than Notobatrachus and living frogs. Several specimens of Notobatrachus degiustoi have been found in Patagonia, Argentina.
They date back to the Middle Jurassic, 160 million years ago. Whether it should be considered the first modern frog or be placed in a sister group to Anura is uncertain. Phylogeny Cladogram from Tree of Life Web Project. References Amphibians Taxa named by Josephus Nicolaus Laurenti
Salientia
Biology
1,620
77,583,575
https://en.wikipedia.org/wiki/Type%20IIB%20supergravity
In supersymmetry, type IIB supergravity is the unique supergravity in ten dimensions with two supercharges of the same chirality. It was first constructed in 1983 by John Schwarz and independently by Paul Howe and Peter West at the level of its equations of motion. While it does not admit a fully covariant action due to the presence of a self-dual field, it can be described by an action if the self-duality condition is imposed by hand on the resulting equations of motion. The other types of supergravity in ten dimensions are type IIA supergravity, which has two supercharges of opposing chirality, and type I supergravity, which has a single supercharge. The theory plays an important role in modern physics since it is the low-energy limit of type IIB string theory. History After supergravity was discovered in 1976, there was a concentrated effort to construct the various possible supergravities that were classified in 1978 by Werner Nahm. He showed that there exist three types of supergravity in ten dimensions, later named type I, type IIA and type IIB. While both type I and type IIA can be realised at the level of the action, type IIB does not admit a covariant action. Instead it was first fully described through its equations of motion, derived in 1983 by John Schwarz, and independently by Paul Howe and Peter West. In 1995 it was realised that one can effectively describe the theory using a pseudo-action where the self-duality condition is imposed as an additional constraint on the equations of motion. The main application of the theory is as the low-energy limit of type IIB strings, and so it plays an important role in string theory, type IIB moduli stabilisation, and the AdS/CFT correspondence. Theory Ten-dimensional supergravity admits both N = 1 and N = 2 supergravities, which differ by the number of the Majorana–Weyl spinor supercharges that they possess. The type IIB theory has two supercharges of the same chirality, equivalent to a single Weyl supercharge, with it sometimes denoted as the N = (2, 0) ten-dimensional supergravity. The field content of this theory is given by the ten-dimensional chiral supermultiplet (g_μν, C4, C2, C0, B2, φ, ψ_μ, λ). Here g_μν is the metric corresponding to the graviton, while C4, C2, and C0 are 4-form, 2-form, and 0-form gauge fields. Meanwhile, B2 is the Kalb–Ramond field and φ is the dilaton. There is also a single left-handed Weyl gravitino ψ_μ, equivalent to two left-handed Majorana–Weyl gravitinos, and a single right-handed Weyl fermion λ, also equivalent to two right-handed Majorana–Weyl fermions. Algebra The superalgebra for N = (2, 0) ten-dimensional supersymmetry is given by the anticommutator of the supercharges. Here Q^i with i = 1, 2 are the two Majorana–Weyl supercharges of the same chirality. They therefore satisfy the projection relation P_L Q^i = Q^i, where P_L = (1 + γ*)/2 is the left-handed chirality projection operator and γ* is the ten-dimensional chirality matrix. The matrices allowed on the right-hand side are fixed by the fact that they must be representations of the R-symmetry group of the type IIB theory, which only allows for δ^ij, ε^ij, and trace-free symmetric matrices σ^ij. Since the anticommutator is symmetric under an exchange of the spinor and R-symmetry indices, the maximally extended superalgebra can only have terms with the same chirality and symmetry property as the anticommutator. The terms are therefore a product of one of these matrices with antisymmetrised products of gamma matrices times C, where C is the charge conjugation operator. In particular, when the spinor matrix is symmetric, it multiplies δ^ij or σ^ij, while when it is antisymmetric it multiplies ε^ij. In ten dimensions the spinor matrix is symmetric or antisymmetric according to the number of gamma matrices in the product, modulo 4.
Since the projection operator is a sum of the identity and a gamma matrix, this means that the symmetric combination contributes for some values of the rank modulo 4 and the antisymmetric one for the others. This yields all the central charges found in the superalgebra up to Poincaré duality. The central charges are each associated to various BPS states that are found in the theory. The two 1-form central charges correspond to the fundamental string and the D1-brane, the 3-form central charge is associated with the D3-brane, while the remaining terms give three 5-form charges. One is the D5-brane, another the NS5-brane, and the last is associated with the KK monopole. Self-dual field For the supergravity multiplet to have an equal number of bosonic and fermionic degrees of freedom, the four-form has to have 35 degrees of freedom. This is achieved when the corresponding field strength tensor is self-dual, F̃5 = ⋆F̃5, eliminating half of the degrees of freedom that would otherwise be found in a 4-form gauge field. This presents a problem when constructing an action since the kinetic term for the self-dual 5-form field vanishes. The original way around this was to only work at the level of the equations of motion, where self-duality is just another equation of motion. While it is possible to formulate a covariant action with the correct degrees of freedom by introducing an auxiliary field and a compensating gauge symmetry, the more common approach is to instead work with a pseudo-action where self-duality is imposed as an additional constraint on the equations of motion. Without this constraint the action cannot be supersymmetric since it does not have an equal number of fermionic and bosonic degrees of freedom. Unlike for example type IIA supergravity, type IIB supergravity cannot be acquired as a dimensional reduction of a theory in higher dimensions. Pseudo-action The bosonic part of the pseudo-action for type IIB supergravity is given by S_IIB = (1/2κ²) ∫ d¹⁰x √(−g) e^(−2φ) (R + 4 ∂_μφ ∂^μφ − ½|H3|²) − (1/4κ²) ∫ d¹⁰x √(−g) (|F1|² + |F̃3|² + ½|F̃5|²) − (1/4κ²) ∫ C4 ∧ H3 ∧ F3. Here F̃3 = F3 − C0 H3 and F̃5 = F5 − ½ C2 ∧ H3 + ½ B2 ∧ F3 are modified field strength tensors for the 2-form and 4-form gauge fields, with the resulting Bianchi identity for the 5-form being given by dF̃5 = H3 ∧ F3. The notation employed for the kinetic terms is |Fp|² = (1/p!) F_{μ1⋯μp} F^{μ1⋯μp}, where Fp = dC(p−1) are the regular field strength tensors associated to the gauge fields. Self-duality, F̃5 = ⋆F̃5, has to be imposed by hand onto the equations of motion, making this a pseudo-action rather than a regular action. The first integral in the action contains the Einstein–Hilbert action, the dilaton kinetic term, and the Kalb–Ramond field strength tensor H3 = dB2. The second integral has the appropriately modified field strength tensors for the three gauge fields, while the last term is a Chern–Simons term. The action is written in the string frame which allows one to equate the fields to type IIB string states. In particular, the first integral consists of kinetic terms for the NSNS fields, with these terms being identical to those found in type IIA supergravity. The second integral meanwhile consists of the kinetic terms for the RR fields. Global symmetry Type IIB supergravity has a global noncompact SL(2,R) symmetry. This can be made explicit by rewriting the action into the Einstein frame and defining the axio-dilaton complex scalar field τ = C0 + i e^(−φ). Introducing the matrix M built from the axio-dilaton and combining the two 3-form field strength tensors into a doublet, the action becomes manifestly invariant under an SL(2,R) transformation Λ, which transforms the 3-form doublet linearly and the axio-dilaton as τ → (aτ + b)/(cτ + d). Both the metric and the self-dual field strength tensor F̃5 are invariant under these transformations. The invariance of the 3-form kinetic term follows from the fact that M transforms inversely to the doublet, M → (Λ⁻¹)ᵀ M Λ⁻¹.
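The display equations lost from the Global symmetry passage above can be summarised compactly. The following LaTeX fragment is a hedged reconstruction from standard string theory conventions (doublet ordering and signs vary between references), not a restoration of this article's own formulas:

```latex
% Axio-dilaton, SL(2,R) action on the two-forms, and the self-duality
% constraint (standard conventions; doublet ordering varies by reference).
\begin{align}
  \tau &= C_0 + i e^{-\phi}, \qquad
  \Lambda = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{R}),
  \quad ad - bc = 1, \\
  \tau &\to \frac{a\tau + b}{c\tau + d}, \qquad
  \begin{pmatrix} C_2 \\ B_2 \end{pmatrix} \to
  \Lambda \begin{pmatrix} C_2 \\ B_2 \end{pmatrix}, \qquad
  \tilde{F}_5 = \star \tilde{F}_5 \ \text{(imposed by hand)}.
\end{align}
```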
Supersymmetry transformations The equations of motion acquired from the supergravity action are invariant under supersymmetry transformations acting on the gravitino ψ_μ and the fermion λ. These transformations involve the field strength tensors associated with the gauge fields, including all their magnetic duals, with sign factors that depend on whether the rank of the field strength is even or odd. The type IIB pseudo-action can also be reformulated in a way that treats all RR fluxes equally in the so-called democratic formulation. Here the action is expressed in terms of all the RR fluxes, with a duality constraint imposed on all of them to get the correct number of degrees of freedom. Relation to string theory Type IIB supergravity is the low-energy limit of type IIB string theory. The fields of the supergravity in the string frame are directly related to the different massless states of the string theory. In particular, the metric, Kalb–Ramond field, and dilaton are NSNS fields, while the three p-forms are RR fields. Meanwhile, the gravitational coupling constant is related to the Regge slope α′ and the string coupling g_s through 2κ² = (2π)⁷ α′⁴ g_s². The global SL(2,R) symmetry of the supergravity is not a symmetry of the full type IIB string theory since it would mix the B2 and C2 fields. This does not happen in the string theory since one of these is an NSNS field and the other an RR field, with these having different physics, such as the former coupling to strings but the latter not. The symmetry is instead broken to the discrete subgroup SL(2,Z), which is believed to be a symmetry of the full type IIB string theory. The quantum theory is anomaly free, with the gravitational anomalies cancelling exactly. In string theory the pseudo-action receives much-studied corrections that are classified into two types. The first are quantum corrections in terms of the string coupling g_s and the second are string corrections in terms of the Regge slope α′. These corrections play an important role in many moduli stabilisation scenarios. Dimensional reduction of type IIA and type IIB supergravities necessarily results in the same nine-dimensional theory since only one superalgebra of this type exists in this dimension. This is closely linked to the T-duality between the corresponding string theories. Notes References Supersymmetric quantum field theory Theories of gravity String theory
Type IIB supergravity
Physics,Astronomy
2,024
166,084
https://en.wikipedia.org/wiki/Compressibility
In thermodynamics and fluid mechanics, the compressibility (also known as the coefficient of compressibility or, if the temperature is held constant, the isothermal compressibility) is a measure of the instantaneous relative volume change of a fluid or solid as a response to a pressure (or mean stress) change. In its simple form, the compressibility (denoted β in some fields) may be expressed as β = −(1/V)(∂V/∂p), where V is volume and p is pressure. The choice to define compressibility as the negative of the fraction makes compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume. The reciprocal of compressibility at fixed temperature is called the isothermal bulk modulus. Definition The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is isentropic or isothermal. Accordingly, isothermal compressibility is defined as β_T = −(1/V)(∂V/∂p)_T, where the subscript T indicates that the partial differential is to be taken at constant temperature. Isentropic compressibility is defined as β_S = −(1/V)(∂V/∂p)_S, where S is entropy. For a solid, the distinction between the two is usually negligible. Since the density ρ of a material is inversely proportional to its volume, it can be shown that in both cases β = (1/ρ)(∂ρ/∂p). For instance, for an ideal gas, pV = nRT. Hence, at constant temperature, (∂V/∂p)_T = −V/p. Consequently, the isothermal compressibility of an ideal gas is β_T = 1/p. The ideal gas (where the particles do not interact with each other) is an abstraction. The particles in real materials interact with each other. Then, the relation between the pressure, density and temperature is known as the equation of state, denoted by some function f(p, ρ, T) = 0. The Van der Waals equation, (p + an²/V²)(V − nb) = nRT, is an example of an equation of state for a realistic gas. Knowing the equation of state, the compressibility can be determined for any substance. Relation to speed of sound The speed of sound is defined in classical mechanics by c² = (∂p/∂ρ)_S. It follows, by replacing partial derivatives, that the isentropic compressibility can be expressed as β_S = 1/(ρc²). Relation to bulk modulus The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B). The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid. Thermodynamics The isothermal compressibility is generally related to the isentropic (or adiabatic) compressibility by a few relations, such as β_T = γ β_S and β_T = α/Λ, where γ is the heat capacity ratio, α is the volumetric coefficient of thermal expansion, ρ = N/V is the particle density, and Λ = (∂p/∂T)_V is the thermal pressure coefficient. In an extensive thermodynamic system, the application of statistical mechanics shows that the isothermal compressibility is also related to the relative size of fluctuations in particle density: ρ k_B T β_T = ⟨(ΔN)²⟩/⟨N⟩, where the fluctuations are evaluated at fixed temperature and chemical potential μ. The term "compressibility" is also used in thermodynamics to describe deviations of the thermodynamic properties of a real gas from those expected from an ideal gas. The compressibility factor is defined as Z = pV_m/(RT), where p is the pressure of the gas, T is its temperature, and V_m is its molar volume, all measured independently of one another. In the case of an ideal gas, the compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered: pV_m = RT. Z can, in general, be either greater or less than unity for a real gas. The deviation from ideal gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature.
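The point above, that the compressibility can be determined from any equation of state, lends itself to a short numerical illustration. The following Python sketch is only an example under stated assumptions: the Van der Waals constants are approximate textbook values for CO2, and the function names and finite-difference step are this example's own choices, not anything prescribed by the article:

```python
# Illustrative sketch: estimating the isothermal compressibility
# beta_T = -(1/V) (dV/dp)_T numerically from an equation of state,
# here the Van der Waals equation for one mole of CO2.
# The constants a and b are approximate textbook values (assumptions).

R = 8.314       # J/(mol K), gas constant
a = 0.364       # Pa m^6 / mol^2, Van der Waals 'a' for CO2 (approximate)
b = 4.27e-5     # m^3 / mol, Van der Waals 'b' for CO2 (approximate)

def pressure_vdw(V, T):
    """Van der Waals pressure for one mole: p = RT/(V - b) - a/V^2."""
    return R * T / (V - b) - a / V**2

def beta_T(V, T, dV=1e-9):
    """Isothermal compressibility via a central difference in V:
    beta_T = -(1/V) / (dp/dV)_T."""
    dp_dV = (pressure_vdw(V + dV, T) - pressure_vdw(V - dV, T)) / (2 * dV)
    return -1.0 / (V * dp_dV)

T = 300.0      # K
V = 0.024      # m^3/mol, roughly an ambient molar volume
p = pressure_vdw(V, T)
print(f"p = {p:.0f} Pa, beta_T = {beta_T(V, T):.3e} 1/Pa")
# At low density this should be close to the ideal-gas value 1/p:
print(f"ideal-gas value 1/p = {1/p:.3e} 1/Pa")
```

At this low density the two printed values agree to within a fraction of a percent, consistent with β_T = 1/p for an ideal gas; near the critical point they would diverge strongly.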
In these non-ideal cases (near the critical point, or at high pressure or low temperature), a generalized compressibility chart or an alternative equation of state better suited to the problem must be utilized to produce accurate results. Earth science The Earth sciences use compressibility to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage, when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (the void fraction is the porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, expelling the liquid or gas from the voids. This can happen over a period of time, resulting in settlement. It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint, and often leads to use of driven piles or other innovative techniques. Fluid dynamics The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium. Aerodynamics Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond 800 km/h (500 mph). Many effects are often mentioned in conjunction with the term "compressibility", but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular, wave drag and critical Mach. One complication occurs in hypersonic aerodynamics, where dissociation causes an increase in the "notional" molar volume because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen and N2 similarly dissociates to 2 N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter the compressibility factor Z, defined for an initial 30 gram moles of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure-dependent transition occurs for atmospheric oxygen in the 2,500–4,000 K temperature range, and in the 5,000–10,000 K range for nitrogen. In transition regions, where this pressure-dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity greatly increase. For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object.
Ions or free radicals transported to the object surface by diffusion may release this extra (nonthermal) energy if the surface catalyzes the slower recombination process. Negative compressibility For ordinary materials, the bulk compressibility (sum of the linear compressibilities on the three axes) is positive, that is, an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability. However, under very specific conditions, materials can exhibit negative compressibility. See also Mach number Mach tuck Poisson ratio Prandtl–Glauert singularity, associated with supersonic flight Shear strength References Thermodynamic properties Fluid dynamics Mechanical quantities
Compressibility
Physics,Chemistry,Mathematics,Engineering
1,584
76,005,038
https://en.wikipedia.org/wiki/Neptunium%20nitride
Neptunium nitride is a binary inorganic compound of neptunium and nitrogen with the chemical formula NpN. Preparation Neptunium nitride can be prepared by the reaction of freshly obtained neptunium hydride and ammonia: NpH3 + NH3 → NpN + 3 H2. The direct reaction of neptunium and nitrogen also yields neptunium nitride: 2 Np + N2 → 2 NpN. Physical properties Neptunium nitride forms black crystals in the cubic system with Fm3m space group. It is insoluble in water and decomposes if heated. Uses Neptunium nitride is used as a target material for plutonium-238 production: 237Np + n → 238Np, which beta-decays to 238Pu. References Nitrides Neptunium compounds Nitrogen compounds
Neptunium nitride
Chemistry
145
6,456,560
https://en.wikipedia.org/wiki/Common%20Hybrid%20Interface%20Protocol%20System
Common Hybrid Interface Protocol System (CHIPS) is the definition of a computer network that consists of a mixture of common serial data protocols such as RS-232 and RS-485, or even PC keyboard interface communication. CHIPS may also include Bluetooth and Wi-Fi for wireless communication and can be installed on all major hardware platforms. Several CHIPS projects and products are available today, such as MISOLIMA DOLLx8 and Olivetti's "Mael Gateasy". As new bus systems gain market share, there will always be a need for CHIPS to integrate serial network protocols into one single connection point. By using CHIPS, it is possible to control I/O data from different sources and systems without needing to install several serial interface cards and drivers. CHIPS users will, in most cases, be able to work with several serial data transceiver sources at the same time. Such serial data might originate from PC keyboards, CAN bus, RS-232/RS-485, or wireless communication, with all data connecting into one or several CHIPS units that communicate over the mixed serial data protocols. Because of mixed baud rates between the connected systems, compatibility with CHIPS means that some devices will have reduced transfer rates. CHIPS is primarily designed for lab, office, home, factory, and building automation, and is also used in the Internet of Things. References Network architecture
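To make the general idea described above, funnelling several mixed serial data sources into a single connection point, concrete, here is a minimal illustrative sketch in Python. It is not based on any published CHIPS or DOLLx8 specification: the port names, baud rates, framing, and gateway address are all invented for the example, and it assumes the third-party pyserial package (pip install pyserial):

```python
# Illustrative sketch only -- not from any CHIPS specification. It relays
# raw bytes from several serial ports over one shared TCP connection,
# tagging each chunk with its source port name.

import socket
import threading
import serial  # pyserial (third-party)

SOURCES = [
    ("/dev/ttyUSB0", 9600),    # e.g. an RS-232 device (hypothetical port)
    ("/dev/ttyUSB1", 115200),  # e.g. an RS-485 adapter (hypothetical port)
]

def forward(port, baud, sink, lock):
    """Read bytes from one serial port and relay them, tagged with the
    source port name, over the shared TCP connection."""
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            data = ser.read(256)
            if data:
                with lock:  # one writer at a time on the shared socket
                    sink.sendall(port.encode() + b"|" + data + b"\n")

sink = socket.create_connection(("192.0.2.1", 5000))  # hypothetical gateway
lock = threading.Lock()
for port, baud in SOURCES:
    threading.Thread(target=forward, args=(port, baud, sink, lock),
                     daemon=True).start()
threading.Event().wait()  # keep the main thread alive
```

Note how the slowest device only limits its own stream: each port is read in its own thread, which mirrors the article's point that mixed baud rates reduce transfer rates for some devices without blocking the shared connection point.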
Common Hybrid Interface Protocol System
Engineering
284
78,106,210
https://en.wikipedia.org/wiki/LR105
The LR105 is a liquid-fuel rocket engine that served as the sustainer engine for the Atlas rocket family. Developed by Rocketdyne in 1957 as the S-4, it is called a sustainer engine because it continues firing after the LR89 booster engines have been jettisoned, providing thrust during the ascent phase. Description The LR105 is a liquid-propellant engine using RP-1/LOX. The engine operates on a gas-generator cycle, where a small portion of the propellant is burned in a gas generator to drive the turbopumps, which supply the engine with fuel and oxidizer. The engine was designed to be throttleable, meaning its thrust could be adjusted during flight to optimize performance. The LR105 also features regenerative cooling, where RP-1 fuel is circulated through cooling channels in the engine's nozzle and combustion chamber before being injected into the combustion process, preventing overheating and improving efficiency. Versions The LR105 engine underwent several upgrades over its operational life, leading to multiple variants. See also Rocketdyne LR89 Rocketdyne LR101 SM-65 Atlas Rocketdyne References Rocket engines using kerosene propellant Rocketdyne engines Rocket engines using the pressure-fed cycle Rocket engines of the United States Atlas (rocket family)
LR105
Astronomy
283
59,730,114
https://en.wikipedia.org/wiki/Parallel%20external%20memory
In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogy to the single-processor external memory (EM) model. In a similar way, it is the cache-aware analogy to the parallel random-access machine (PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory. Model Definition The PEM model is a combination of the EM model and the PRAM model. The PEM model is a computation model which consists of P processors and a two-level memory hierarchy. This memory hierarchy consists of a large external memory (main memory) of size N and P small internal memories (caches). The processors share the main memory. Each cache is exclusive to a single processor. A processor can't access another's cache. Each cache has size M and is partitioned into blocks of size B. The processors can only perform operations on data which are in their cache. The data can be transferred between the main memory and the caches in blocks of size B. I/O complexity The complexity measure of the PEM model is the I/O complexity, which determines the number of parallel block transfers between the main memory and the caches. During a parallel block transfer each processor can transfer a block. So if the P processors each load a data block of size B from the main memory into their caches in parallel, it is considered as an I/O complexity of O(1), not O(P). A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data in the caches. Read/write conflicts In the PEM model, there is no direct communication network between the P processors. The processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently, read/write conflicts occur. Like in the PRAM model, three different variations of this problem are considered: Concurrent Read Concurrent Write (CRCW): The same block in main memory can be read and written by multiple processors concurrently. Concurrent Read Exclusive Write (CREW): The same block in main memory can be read by multiple processors concurrently. Only one processor can write to a block at a time. Exclusive Read Exclusive Write (EREW): The same block in main memory cannot be read or written by multiple processors concurrently. Only one processor can access a block at a time. The following two algorithms solve the CREW and EREW problem if P processors write to the same block simultaneously. A first approach is to serialize the write operations. Only one processor after the other writes to the block. This results in a total of O(P) parallel block transfers. A second approach needs O(log P) parallel block transfers and an additional block for each processor. The main idea is to schedule the write operations in a binary tree fashion and gradually combine the data into a single block. In the first round the processors combine their P blocks pairwise into P/2 blocks; in the next round those are combined into P/4 blocks, and so on. This procedure is continued until all the data is combined in one block. Comparison to other models Examples Multiway partitioning Let M = {m1, ..., m(d−1)} be a vector of d−1 pivots sorted in increasing order, and let A be an unordered set of N elements. A d-way partition of A is a set of buckets A1, ..., Ad, where Ai is called the i-th bucket; every element of the i-th bucket is greater than m(i−1) and smaller than m(i). In the following algorithm the input is partitioned into N/P-sized contiguous segments in main memory.
Processor i primarily works on the i-th segment. The multiway partitioning algorithm (PEM_DIST_SORT) uses a PEM prefix sum algorithm to calculate the prefix sums with optimal I/O complexity; this algorithm simulates an optimal PRAM prefix sum algorithm. // Compute a d-way partition on the data segments in parallel: for each processor i in parallel do Read the vector of pivots M into the cache. Partition the i-th segment into d buckets, and let a local vector record the number of items in each bucket. end for Run the PEM prefix sum on the set of local count vectors simultaneously. // Use the prefix sum vectors to compute the final partition: for each processor i in parallel do Write its elements into memory locations offset appropriately by the prefix sums. end for Using the prefix sums stored in the last processor, processor P calculates the vector of bucket sizes and returns it. If the vector of pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved I/O-efficiently in the PEM model. The contents of the final buckets have to be located in contiguous memory. Selection The selection problem is about finding the k-th smallest item in an unordered list A of size N. The following code makes use of PRAMSORT, a PRAM-optimal sorting algorithm, and SELECT, a cache-optimal single-processor selection algorithm. if the problem is small enough then solve it directly with SELECT and return end if // Find the median of each segment: for each processor in parallel do compute the local median end for // Sort the medians. // Partition around the median of medians. if the k-th element falls in the lower part then recurse on the lower part else recurse on the upper part end if Under the assumption that the input is stored in contiguous memory, PEMSELECT solves the selection problem I/O-efficiently. Distribution sort Distribution sort partitions an input list A of size N into d disjoint buckets of similar size. Every bucket is then sorted recursively and the results are combined into a fully sorted list. If the problem is small enough, the task is delegated to a cache-optimal single-processor sorting algorithm. Otherwise the following algorithm is used: // Sample elements from A: for each processor in parallel do if the local segment exceeds the cache size then Load it in cache-sized pages and sort the pages individually else Load and sort it as a single page end if Pick a regularly spaced sample from each sorted memory page into a contiguous vector of samples end for in parallel do Combine the sample vectors into a single contiguous vector and make copies of it end do // Find the pivots: for each copy in parallel do select one pivot end for Pack the pivots in a contiguous array. // Partition around the pivots into buckets. // Recursively sort the buckets: for each bucket in parallel do recursively call the sort on the bucket using the processors responsible for the elements in that bucket end for The I/O complexity of PEMDISTSORT is determined by the costs of the sampling, partitioning and prefix sum steps; if the number of processors is chosen suitably, the resulting sorting I/O complexity is O((N/(PB)) log_{M/B}(N/B)).
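As an illustration of the prefix-sum-based d-way partitioning described above, the following Python sketch simulates the three phases sequentially: per-processor histograms against the pivots, a prefix sum over the counts to derive every processor's write offsets, then the writes into contiguous buckets. It is a toy simulation of the idea, not PEM_DIST_SORT itself, and all names in it are this example's own:

```python
# Toy sequential simulation of PEM-style d-way partitioning.
from bisect import bisect_right
from itertools import accumulate

def dway_partition(A, pivots, P):
    """Partition list A into len(pivots)+1 contiguous buckets using P
    simulated processors. Returns (partitioned list, bucket sizes)."""
    d = len(pivots) + 1
    seg = (len(A) + P - 1) // P                     # segment length ~ N/P
    segments = [A[i*seg:(i+1)*seg] for i in range(P)]

    # Phase 1: each "processor" counts its segment's items per bucket.
    counts = [[0]*d for _ in range(P)]
    for i, s in enumerate(segments):
        for x in s:
            counts[i][bisect_right(pivots, x)] += 1

    # Phase 2: prefix sums give each processor's start offset in each bucket.
    bucket_sizes = [sum(counts[i][j] for i in range(P)) for j in range(d)]
    bucket_starts = [0] + list(accumulate(bucket_sizes))[:-1]
    offsets = [[0]*d for _ in range(P)]
    for j in range(d):
        run = bucket_starts[j]
        for i in range(P):
            offsets[i][j] = run
            run += counts[i][j]

    # Phase 3: each processor writes its items into its reserved slots,
    # so no two processors ever touch the same output location.
    out = [None] * len(A)
    for i, s in enumerate(segments):
        cur = offsets[i][:]                         # per-bucket write cursors
        for x in s:
            j = bisect_right(pivots, x)
            out[cur[j]] = x
            cur[j] += 1
    return out, bucket_sizes

data = [9, 1, 7, 3, 8, 2, 6, 5, 4, 0]
out, sizes = dway_partition(data, pivots=[3, 7], P=2)
print(out, sizes)   # buckets: items < 3, then 3..6, then >= 7
```

Because the offsets are computed in advance from the prefix sums, the write phase is conflict-free, which is exactly what makes the scheme suitable for the EREW setting discussed earlier.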
Parallel external memory
Mathematics
1,344
12,786,203
https://en.wikipedia.org/wiki/History%20of%20virtual%20learning%20environments%20in%20the%201990s
In the history of virtual learning environments, the 1990s was a time of growth, primarily due to the advent of the affordable computer and of the Internet. 1980s 1985 The Free Educational Mail (FrEdMail) network was created by San Diego educators, Al Rogers and Yvonne Marie Andres, in 1985. More than 150 schools and school districts were using the network for free international email access and curriculum services. 1990s 1990 Formal Systems Inc. of Princeton, NJ, USA introduces a DOS-based Assessment Management System. An internet version was introduced in 1997. (In 2000, Formal Systems changed its name to Pedagogue Solutions.) The Athena Project at MIT, which started in 1983, has evolved into a system of "shared services" that look remarkably like many current VLEs or learning management systems. The network hosted software from multiple vendors, and made it all work together. Here is a list of the features of the system as of 1990: printing, electronic mail, electronic messaging (Zephyr), bulletin board conferencing (Discuss), on-line consulting (OLC), on-line teaching assistant (OLTA), on-line help (OLH), assignment exchange (Turn in/pick up), access to system libraries, authentication for system security (Kerberos), naming for linking system components together (Hesiod), and a service management system (Moira). Pavel Curtis created LambdaMOO, an early Multi-User Dungeon (MUD), at Xerox PARC. HyperCourseware created by Kent Norman at the University of Maryland, College Park was originally written for use in the AT&T Teaching Theater, a prototype electronic classroom. The original version was written in WinPlus, a HyperCard-like program, and ran on a local area network with one server and numerous client workstations. It included an online syllabus, online lecture notes and readings, synchronous chat rooms, asynchronous discussion boards, online student profiles with pictures, online assignments and exams, online grading, and a dynamic seating chart. A Web-based version was introduced in January 1996, which continued to function up until the end of 2017. The US Navy's Naval Technical Training System was designed as a curriculum development system. It included course management tools for the storage, retrieval and dissemination of information. An article in Electronic Learning by Therese Mageau describes Integrated Learning Systems (ILS) as "networked computers running broad-based curriculum software with a management system that tracks student progress." A report by George Mann and Joe Kitchens reviews the Curriculum Management System (CMS), a system that generated individualized learning plans every two weeks for each student. FirstClass is launched by SoftArc, initially for the Macintosh platform. 1991 Thousands of FrEdMail users gained access to the NSFNET via newly established gateways at two NSFNET mid-level network locations: Merit/MichNet in Ann Arbor, MI, and CERFnet (California Education and Research Federation Network) in San Diego, CA. FrEdMail subscribers began to exchange project-based learning electronic mail with the entire Internet community. The FrEdMail-NSFNET Gateway Software was available free of cost to any mid-level network, college, or university which had an interest in collaborating with local K-12 school districts to bring electronic networking to teachers and students.
Through FrEdMail, educators were able to share classroom experiences, distribute curriculum ideas and teaching materials, as well as obtain information about workshops, job opportunities, and legislation affecting education. At its peak, FrEdMail was used by 12,000 schools and 350 nodes worldwide. When the World Wide Web became available to the public in 1993, the FrEdMail Foundation became the Global SchoolNet Foundation and launched its first website, GlobalSchoolhouse.org. The following year the National Science Foundation also awarded Global SchoolNet a grant to introduce a desktop video-conferencing program called CU-SeeMe. CU-SeeMe was used for many educational video-conferences and in 1995 by World News Now for the first television broadcast live on the Internet, which featured an interview by World News Now anchor Kevin Newman and Yvonne Andrés. iEARN (International Education and Resource Network) launched among schools in nine countries, using the IGC/APC system of "conferences/newsgroups" to better enable students to conduct theme-based online projects. The history page of the TEDS company states that they developed the first Learning Management System. Jakob Ziv-El of Interactive Communication Systems, Inc. files for a patent for an Interactive Group Communication System (# 5,263,869) (similar to the prior art of the IBM 1500 system). A 1990 foreign patent and a 1972 patent by Jakob Zawels (# 3,641,685) are referenced. The patent is granted in 1993. The patent is referenced in a 2000 patent filing (# 6,988,138) by representatives of BlackBoard, Inc. Murray Turoff, the guru of EIES, publishes a book distilling lessons from a research programme he ran over the preceding 16 years, from 1974. A collaboration of faith-based groups (Ecunet) starts using a product called BizLink (which later became Convene) in teaching their missionaries and staff around the world using the internet. Gloria Gery publishes Electronic performance support systems: how and why to remake the workplace through the strategic application of technology, which influences thinking about technology and learning in the workplace. 1992 The CAPA (Computer Assisted Personalized Approach) system was developed at Michigan State University. It was first used in a small (92-student) physics class in the Fall of 1992. Students accessed randomized (personalized) homework problems through telnet. Convene International is founded by Jeffery Stein and Reda Athanasios to provide collaboration tools via the Internet. UNI-C, the Danish State Centre for Computing in Education (which became a Blackboard user in the 2000s), supports a wide range of online distance courses using PortaCOM, a conferencing platform, for example in the TUDIC project, funded under the EU's COMET Programme. Extensive theoretical work was undertaken by, amongst others, Elsebeth Korsgaard Sorensen, whose web site has a detailed bibliography. Collaborative Learning Through Computer Conferencing, also known as the Najaden Papers, edited by Anthony Kaye in the NATO ASI Series, and published by Springer-Verlag. Provides several case studies of online learning in action, and an overview by Jacob Palme providing a comprehensive inventory of the functionalities available in computer conferencing systems, including SuperKOM. This last paper describes in detail the underlying functions of what would now be called a virtual learning environment, including, for example, roles, voting, expiration times, exams, moderation, deferred operations.
Open University (UK) installs FirstClass on a Mac server (reputed to be server license number 3) after an extensive evaluation of tools suitable to deliver online learning across Europe for the just-started JANUS project funded by the European Commission under the DELTA programme. (FirstClass was then a product of SoftArc in Ontario, Canada.) The New York University School of Continuing Education (SCE) introduces its Virtual College and develops a digital network to deliver courses to students. SCE uses Lotus Notes at least through 1997 for computer conferencing and to provide online computer laboratory access to student home PCs. GeoMetrix Data Systems founded. They produce the learning management system called TrainingPartner. LearnFrame of Draper, Utah founded. They initially produced online courseware and an authoring tool, and in 1995 developed Pinnacle Learning Manager, which accepted and managed courses from a wide variety of vendors. Following several years of preparatory studies, the European Commission DELTA programme starts. (DELTA stands for Developing European Learning through Technological Advances.) Over 30 projects are funded, each lasting for around three years, many relevant to VLEs, perhaps the most relevant ones being MTS, JANUS and EAST. The DELTA programme built on preparatory studies going on since 1985 into portable educational tools environments (proto-VLEs), networked multimedia and hypermedia, satellite networks, and a Learning Systems Reference Model (in some ways a precursor of IMS). There seems to be almost no Web information now on the preparatory studies, except for an interview with Luis Rosello in DEOSNews. Authorware Inc. merges with MacroMind/ParaComp to create Macromedia. MacroMind specialized in animation software (Director) and ParaComp specialized in 3D imagery (Swivel 3D). Macromedia goes public only months after the merger and remains the leading purveyor of multimedia tools. Terry Hedegaard of UOP online picks Convene International's Internet collaboration tools to run a pilot for teaching UOP students online exclusively. The MUD Institute (TMI/TMI-2) provides the TMI Mudlib and an online environment for learning MUD programming, including e-mail, bulletin boards, shared file spaces, real time chat, and instant messaging. Terry Anderson coordinates a net-based "virtual conference" in conjunction with the 16th World Congress of the International Council for Distance Education. This project used email lists and Usenet groups distributed on the early Internet, Usenet, BitNet, and NetNorth. Humber College's Digital Electronics program used a learning management system to support a set of online courses. The program featured individualized instruction and continuous intake. University of Wales, Aberystwyth awarded internal funding to further develop its 'integrated project support environment for teaching software engineering'. 1993 Jakob Ziv-El of Discourse Technologies, Inc. files for a patent for a Remote Teaching System (# 5,437,555) (similar to the prior art of the PLATO system), referencing his 1991 patent. The patent is granted in 1995. The patent is referenced in a 2000 patent filing (# 6,988,138) by representatives of BlackBoard, Inc. XT001 Renewable energy, a "landmark" experimental course developing techniques for collaborative and resource-based online learning at a distance, was the first "real" course to use FirstClass as its core online tool at the Open University.
There are many references (mostly forgotten now). Convene International contracted to work with University of Phoenix to develop the first large-scale commercial product for use in Virtual Classrooms. Convene's unique characteristic enabled students to capture data and then work offline (at a time when people were often charged by the hour or minute for online time). University of Phoenix's Thomas Bishop brands the product "ALEX", for "Apollo Learning Exchange". As Convene finishes the development of ALEX for University of Phoenix, the pilot enrollment grows to 600 students within the first few months of implementation. Brandon Hall puts out the first issue of his Multimedia and Internet Training Newsletter, one of the first regular publications in the field. Jisc (the Joint Information Systems Committee of the UK Higher Education Funding and Research Councils) is established on 1 April 1993, as a successor body to the Information Systems Committee. Also in 1993, ALT - the Association for Learning Technology - was founded in the UK, initially with the assistance of a donation by BT. Michael Hammer and James A. Champy publish "Reengineering the Corporation: a Manifesto for Business Revolution" (New York: HarperCollins, 1993). As usual with business theories it took some time for Reengineering, or Business Process Reengineering in full (BPR in short), to percolate to higher education; but in fact Reengineering spread (to a few) much faster than some other approaches (such as Activity Based Costing or Benchmarking) - already in the 1995-98 period a number of university e-learning experts in UK, Netherlands and Malaysia were using the language, in many cases to the dismay of their colleagues. It is a moot point whether BPR accelerated the development of e-learning or inhibited it - certainly at CEO level in some universities the ideas were for a while seductive. BPR has a sharp edge - the gentler but vaguer approach of Change Management seems to be more enduring. Scott Gray, a mathematics graduate student at Ohio State, develops The Web Workshop, a system that allows users to create Web pages online while learning. The pedagogical technique called Useractive Learning was developed to emulate the teaching techniques used in the Calculus & Mathematica courses taught at Ohio State. Bill Davis, Jerry Uhl, Bruce Carpenter, and Lee Wayand launch MathEverywhere, Inc. to market and sell the coursework used in Calculus & Mathematica courses. 1994 In 1994, NKI Distance Education in Norway starts its second generation of online distance education courses. The courses were provided on the Internet through EKKO, NKI's self-developed Learning Management System (LMS). The experiences are described in the article NKI Fjernundervisning: Two Decades of Online Sustainability in Morten Flate Paulsen's book Online Education and Learning Management Systems. CALCampus launches an online-based school through which administration, real-time classroom instruction, and materials are provided. The Tarrson Family Endowed Chair in Periodontics at UCLA is established with a testamentary gift to design, develop and launch the UCLA Periodontics Information Center for sharing periodontal practices and concepts with the worldwide dental community via CD-ROM and the Internet. Lotus Development Corporation acquires the Human Interest Group. The system evolves into the Lotus Learning Management System and Lotus Virtual Classroom, now owned by IBM.
Links to articles that describe how IBM has previously implemented the "inventions" described in the Blackboard patent. SUNY Learning Network begins in 1994. Traditional faculty were hired to create online courses for asynchronous delivery into the home via computer. Each faculty member worked with an instructional design partner to implement the course. From the fall of 1995 through spring of 1997, forty courses were developed and delivered. SLN now supports over 3,000 faculty and 100,000 enrollments on 40 of the State University of New York's campuses. WEST 1.0 is released by WBT Systems. It is eventually renamed TopClass. Bob Jensen and Petrea Sandlin publish Electronic Teaching and Learning: Trends in Adapting to Hypertext, Hypermedia, and Networks in Higher Education - republished 1997. The text includes identification of ten leading LMS systems in 1994 (discussed in detail in chapter 3 of their book): Quest from Allen Communication Tourguide from American Training International (Tourguide is no longer listed as a product at Infotec.) Multimedia ToolBook from Asymetrix Corporation, bought by Click2Learn, bought by SumTotal Systems Lesson Builder from the Center for Education Technology in Accounting (this product was never completed) Tencore from Computer Teaching Corporation Course Builder from Discovery Systems International, Inc. Training Icon Environment (TIE) from Global Information Systems Technology, Inc. tbtAuthor from HyperGraphics Corporation (HyperGraphics no longer lists tbtAuthor in its product line) Authorware from Macromedia Corporation Personal Education Authoring Kit (PEAK) from Major Educational Resources Corp. PEAK is for Mac users only and has been discontinued. However, while they last you can get free copies at 800-989-5353. Banking on the tremendous commercial success and rapid growth of the UOP program, Reda Athanasios of Convene International starts making the online virtual classroom suite, built in collaboration with UOP, available to all other schools aiming at success for their distance education programs. The JANUS project led by the Open University releases in September 1994 Deliverable 45 describing the interim evaluations of the first three online courses delivered across Europe in conjunction with the JANUS project, including AD280 "What is Europe", DM863 "Lisp Programming" and D309 "Cognitive Psychology" Virtual Summer School. Later in the year the Open University releases a longer final report purely on the Virtual Summer School. September 1994: The JANUS User Association holds its first AGM and conference at the Dutch Open University. It is one of the first Europe-wide associations focussed on e-learning. It later changed its name to LearnTel and continued until 1999. An online archive of the newsletter is still available via the support of pjb Associates. Athabasca University (Canada) implements the first on-line Executive MBA program using Lotus Notes. TeleEducation NB introduces a DOS-based working LMS in 1993. In 1994 a more powerful system was proposed for the WWW. A description of the concept was published in 1995 with some of the principal features of an LMS. Taking advantage of Convene International's online virtual classroom and hoping for similar success to that of UOP online, several schools start working with Convene in wiring their Distance Education programs and offering them online via the Internet.
Mark Lavenant and John Kruper present "The Phoenix Project at the University of Chicago: Developing a Secure, Distributed Hypermedia Authoring Environment Built on the World Wide Web" at the First International World-Wide Web Conference in Geneva, Switzerland. "The Phoenix Project" later became the Web-based learning environment within the Division of the Biological Sciences at the University of Chicago. Swanton High School in Ohio used learning management systems to track student progress, as well as testing results, satellite courses, videodiscs, HyperCard, QuickTime video, and Internet connections. Intralearn comes out with a Learning Management System for the mid-market. This system lets instructors deliver courses to students in different locations over the internet, interact with them, send them email, and conduct examinations. Tufts University released (1994) the Health Sciences Database, which subsequently (2003) became known as TUSK, the Tufts University Sciences Knowledgebase. In 1997, version 3 (hsdb3) was created using MySQL. There has been a steady development of features through versions hsdb4, hsdb45, TUSK 1.0 and now TUSK 2.0. From its inception its basis was the integration of clinical information with its ubiquitous availability across space and time. Students and authors had specific permissions within the system. TUSK is a combination learning management system, content/knowledge management system and course management system. The system is used at the three health sciences schools at Tufts and now at 7 partner schools in the U.S., Africa and India. July 1994: The first international gathering of educators using online technologies to conduct classroom project-based learning was held by iEARN (International Education and Resource Network) in Puerto Madryn, Argentina. 120 educators from 20 countries gathered to share experiences. Out of this conference came the first international iEARN constitution and plans to expand school networking globally. 1995 Jerrold Maddox, at Penn State University, taught a course, Commentary on Art, on the web starting in January 1995. It was the first course taught at a distance using the web. By January 1995 there are dozens of MUDs and MOOs, including Diversity University, in use for educational purposes. Elliott Masie and Rebekah Wolman publish the first edition of The Computer Training Handbook (Minneapolis: Lakewood Books). Pardner Wynn introduces a free web-based interactive course at testprep.com for SAT test preparation, possibly the first interactive learning course on the internet. Over 1 million hits are registered within three months, encouraging the development of the first commercial web-based e-learning course authoring, publishing, and management system, IBTauthor (announced January 1996 in Brandon Hall's Multimedia Training Newsletter). This product became the basis for VC-backed Docent, Inc. (funded in 1997, IPO in 2000), now named SumTotal Systems. European Commission establishes the European Multimedia Task Force, to analyse the status of educational media in Europe. The field covered by the Task Force includes all educational and cultural products and services that can be accessed by TVs and computers, whether via telematics networks or not, and used in the home, industry or educational contexts. Lotus Notes is used for course materials, syllabi, handouts, homework collection, teams, and multi-instructor, multi-team teaching in the MBA program. Results were reported at several academic conferences (ICIS-17, AIS-2) in 1996.
The Mallard web-based course management system is developed at the University of Illinois. Mallard allows for multiple roles. For example, a graduate student can be an instructor in one course and a student in another. WOLF (Wolverhampton Online Learning Framework) is developed at the University of Wolverhampton's Broadnet project under the guidance of Stephen Molyneux to deliver training materials to local SMEs (Small to Medium Enterprises). In 1999, WOLF is both adopted as the university's VLE and sold for commercial distribution to Granada Learning, who rebrand the product in partnership with the university and market it to the UK FE and HE sectors under the name Learnwise. WOLF is still in use at the university today, and undergoing continual development to meet the ever-changing needs of education. Nicenet ICA is launched to the public. Murray Goldberg begins development of WebCT at the University of British Columbia in Vancouver, Canada, with a $45,000 grant from UBC's Teaching and Learning Enhancement Fund. WebCT would go on to become the world's most widely used VLE, used by millions of students in 80 countries. FirstClass is named the Best General Purpose Tool/School Program by Technology & Learning magazine. Professors Michael Gage and Arnold Pizer develop the WeBWorK Online Homework Delivery System at the University of Rochester. The Virtual Science and Mathematics Fair used static HTML pages created by children and a threaded discussion for comment posts left by judges and visitors. PhD research reported by Kevin C. Facemyer, 1996. The Future of Networking Technologies for Learning Workshop is held, sponsored by the US Department of Education. "In an attempt to answer the question, "What is the future of networking technologies for learning," the U.S. Department of Education's Office of Educational Technology commissioned a series of white papers on various aspects of educational networking and hosted a workshop to discuss the issues." The European Commission releases in May 1995 a 104-page report describing the 30 projects commissioned under the DELTA programme of Framework 3. Several of these are concerned with online learning using what many might today call a "virtual learning environment". (The phrase is not used as such but the phrases "learning environment", "interactive learning environment" and "collaborative learning environment" are used quite frequently.) About the same time the JANUS project releases the JANUS Final Report describing the project over its three-year lifetime and all the online courses it has supported during 1993-1994 across Europe. The report Telematics for Distance Education in North America is released in public form in November 1995 after wide dissemination within European research circles. It describes the situation as it pertains to e-learning at 20 organisations including universities and most major vendors, based on a 3-week study trip in summer 1995 by Bacsich and Mason. A short article in the LIGIS newsletter for November 1995 on FirstClass confirms that at the time of its writing FirstClass did not have a Web interface. (It also notes that its then rival CAUCUS did have a Web interface and that WEST, later TopClass from WBTSystems, had been recently developed.) Northern Virginia Community College (NVCC)'s Extended Learning Institute develops and delivers four math, science, and engineering courses using Lotus Notes for computer conferencing/groupware functionality.
Edward Barrett at MIT received a grant to create a prototype "Electronic Multimedia Online Textbook in Engineering" (EMOTE) for use in classes taught through the new Writing Initiative. WebTeach, a web-based asynchronous communication system using chronological threads in the "Confer style" originally developed in the mid 70s by Robert Parnes, was first used in 1995 in the Professional Development Centre at UNSW. It was written in Apple's HyperCard as a CGI script running behind WebStar by Dr. Chris Hughes and Dr. Lindsay Hewson at UNSW. The 1996 versions supported a Notice Board, a Seminar Room and a Coffee Shop for each class group, and added email notifications, a Quiz function, and a range of pre-programmed communication modes that emulated small group teaching strategies including brainstorming, questioning, case studies and commitment exercises. The modes were characterised by changes in layout, font colours, and the options available to teachers and students. The software was refined in subsequent years, with additional modes, including a formal debate mode, being added. In 2002 it was completely rewritten in Cold Fusion and refined to include many more features, including private groups, voting modes and fully functional web-based administration pages. WebTeach supports an approach to teaching and learning on the web that is more akin to an asynchronous virtual classroom than it is to an instructionally designed and packaged educational experience. Communication forms the basis of the teaching (as opposed to content provision) and the teacher in a group can switch teaching strategies (modes) easily, in order to respond to student contributions. Many online schools appear on the educational scene after working with Convene International. Some emerge as leaders of Internet education, such as Baker College, Pacific Oaks College, and UCLA Extension, to name a few. The Stanford Center for Professional Development (SCPD, formerly SITN) launches Stanford Online, which "was the first university internet delivery system incorporating text and graphics with video and audio, using technology developed at Stanford." "Constructing Educational Courseware using NCSA Mosaic and the World Wide Web" is presented by J.K. Campbell, S. Hurley, S.B. Jones, and N.M. Stephens at the 3rd International World-Wide Web Conference in Darmstadt, Germany. Lee A. Newberg, Richard Rouse III, and John Kruper publish "Integrating the World-Wide Web and Multi-User Domains to Support Advanced Network-Based Learning Environments" in the Proceedings of the World Conference on Educational Multimedia and Hypermedia (1995), Association for the Advancement of Computing in Education, Graz, Austria. From May to July 1995 Georg Fuellen, Robert Giegerich and others give the "BioComputing Course" using the Electronic Conferencing system BioMOO, later winning the "Multimedia Transfer 1997" presented during the exhibition Learntec 1997. Work began at University of Wales, Aberystwyth in developing its integrated Remote Advisory System, a system designed to provide students with remotely sited tutors, sharing workspaces, audio and video. Supported by Internal Outlook Enterprise Funding.
Sue Polyson, Robert Godwin-Jones, and Steve Saltzberg of Virginia Commonwealth University (VCU), at a Fall 1995 meeting of the "Partnership for Distributed Learning" (a consortium of US schools organized by the University of North Carolina, Chapel Hill), propose the concept for developing a web-based course management system named "Web Course in a Box". They describe the basic system features and propose that interested schools work together to develop a working prototype. The VCU group begins work on the prototype with input from the consortium; work continues through the winter of 1995 and spring of 1996, and a first beta of Web Course in a Box is presented to the group in Spring 1996. The idea for Web Course in a Box grew out of work that Polyson had begun in 1994–1995 at VCU to develop a web-based interface for delivery of course materials to support VCU's Executive Masters in Health Administration, one of the first distance-delivered master's degree programs in the country. During this time, Godwin-Jones, also at VCU, had been working to develop web-based content for foreign language instruction. This work was described in two articles published by Syllabus Press in the September 1995 issue of Syllabus (Volume 9, No. 1), titled "Distributed Learning on the World Wide Web" and "Technology Across the Curriculum - Case Studies", both authored by Saltzberg and Polyson.
QuestionMark brings out its first web-based assessment management system, QM Web, following on from its DOS and Windows assessment systems.
Online Learning Circles move from the AT&T Learning Network to their current home on the International Education and Resources Network (iEARN).
1996
The Project for OnLine Instructional Support is designed and developed at the University of Arizona. This tool provides innovative dialog-based lessons to students. To support use of these lessons, a method for providing online course context, course organization and course communications tools is created.
In 1996, NKI Distance Education in Norway starts its third generation of online distance education courses. The courses were web-based and provided through EKKO (renamed to SESAM), NKI's self-developed Learning Management System (LMS). The experiences are described in the article NKI Fjernundervisning: Two Decades of Online Sustainability in Morten Flate Paulsen's book Online Education and Learning Management Systems.
In 1996, after hearing about the Virtual Office Hours Project developed by Prof. Craig Merlic and Matthew Walker in UCLA's Department of Biochemistry, UCLA Social Sciences reviewed it with some of the faculty and decided to try writing a custom version. The deciding factor was finding Jeff Carnahan's Upload.pl Perl CGI script (available at Misc CGI Scripts - click on FileUploader 6.0 for free, but registration required) that did file uploads via a web browser. Combined with Matt Wright's WWWBoard, a calendar script (later discarded) and a script written by Social Sciences Computing to edit files on the fly, there were enough tools to make something useful. Originally the plan was to have instructors fill out a web form to request a site, but due to problems getting the email to work, sites were created instantly instead, which turned out to be easier (this flow is sketched below). A password was added and emailed to all the Social Sciences faculty. ClassWeb was first offered to UCLA Social Sciences faculty in the Spring Quarter of 1997, when eight instructors set up ClassWeb sites (see Spring 1997 sites).
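The instant-site-creation workflow described for ClassWeb can be sketched briefly. This is a hypothetical Python illustration of the flow (create the site immediately, generate a password, notionally email it); the paths, file names and the stubbed mail step are all assumptions, not ClassWeb's actual Perl code.

```python
# Hypothetical sketch of a ClassWeb-style workflow: a course site is created
# instantly and a generated password is (notionally) emailed to the instructor.

import secrets
from pathlib import Path

SITE_ROOT = Path("/tmp/classweb")  # stand-in for the real web root

def send_password(address: str, course_id: str, password: str) -> None:
    # Stub: a real system would send mail here (the mail step was the part
    # that proved troublesome in 1996/97, hence instant creation instead).
    print(f"To {address}: your {course_id} site password is {password}")

def create_course_site(course_id: str, instructor_email: str) -> str:
    site = SITE_ROOT / course_id
    site.mkdir(parents=True, exist_ok=True)
    (site / "index.html").write_text(
        f"<html><body><h1>{course_id}</h1><p>Course site.</p></body></html>"
    )
    password = secrets.token_urlsafe(8)
    (site / ".password").write_text(password)  # checked by the upload script
    send_password(instructor_email, course_id, password)
    return password

create_course_site("SOC101", "instructor@example.edu")
```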
In early 1996, Dan Cane, a sophomore student at Cornell University, begins working with Cindy van Es, a senior lecturer in Agricultural, Resource and Managerial Economics (ARME), as part of an independent study project to build course web pages. He develops automated scripts to provide basic interactive functionality for announcements (a minimal announcements script is sketched after this list) and the beginnings of a suite of tools called The Teachers Toolbox. These ideas later become the foundation for CourseInfo.
The UCLA Periodontics Information Center is established in 1996 within the UCLA School of Dentistry with generous gifts from the Tarrson Family and Sun Microsystems. The initial thrust is to provide a comprehensive website on periodontics, including tutorials, case studies and continuing education credits.
The European Commission agrees to the European Council's 'Learning in the Information Society' action plan.
Webtester and ChiTester are developed at Weber State University through a grant from the Utah Higher Education Technology Initiative. (See ChiTester early history.)
Sue Polyson and Robert Godwin-Jones of Virginia Commonwealth University release the first beta version of Web Course in a Box (WCB) in Spring 1996. (See this 1997 presentation.) This web-based system was designed to be an easy-to-use, template-based interface that allowed instructors to create an integrated set of web pages for presenting course material. The system featured logins for instructors and students, the ability for instructors to enroll students in their courses so that access to course materials could be controlled, the easy setup of web-based discussion forums for use by students within the class, document sharing through the upload of files to the discussion forum, schedule and announcement pages, content links, and personal home pages for both students and instructors. The WCB system was made available, free of charge, for use by any school that wished to use it. The source code was copyrighted by Virginia Commonwealth University, and Web Course in a Box was trademarked by VCU in 1997. Web Course in a Box was described in an article in "A Practical Guide to Teaching with the World Wide Web", by Polyson, Saltzberg, and Godwin-Jones, published in the September 1996 issue of Syllabus magazine by Syllabus Press.
Doncaster College in South Yorkshire, England, submits a bid under the "Further Education Competitiveness Fund" proposing to use the Fretwell Downing "Common Learning Environment" integrated with newsgroups, the WWW, and conferencing, all combined into an online learning environment. (Diagram and a single paragraph from the bid, dated 4 March 1996.) The full document is much more explicit, making reference to the use of email, conferencing and newsgroups for the delivery of National Vocational Qualifications and distance learning over the Internet and the UK Joint Academic Network. (Slides from a presentation, including a diagram of the learning environment.)
8 May 1996 - Paris, France: Murray Goldberg presents a paper at the 5th WWW conference, introducing WebCT - see session PS10, paper P29. For the paper, see: World Wide Web - Course Tool (Web-CT): An Environment for Building WWW-Based Courses. The reaction to WebCT causes Goldberg to begin giving away free licenses to the software. Word spreads very quickly, and within six months approximately 100 institutions are using WebCT.
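A basic course-announcements tool of the kind described in the Cornell entry above might have looked, in spirit, like the following. This is a hypothetical Python sketch; the storage format, file path and function names are invented for illustration and are not the original scripts.

```python
# Hypothetical sketch of a basic course-announcements tool: post an item,
# store it, and render the list newest-first for a student-facing page.

import json
import time
from pathlib import Path

ANNOUNCEMENTS = Path("/tmp/announcements.json")  # stand-in for real storage

def load_announcements() -> list[dict]:
    if ANNOUNCEMENTS.exists():
        return json.loads(ANNOUNCEMENTS.read_text())
    return []

def post_announcement(author: str, text: str) -> None:
    items = load_announcements()
    items.append({"author": author, "text": text, "posted": time.time()})
    ANNOUNCEMENTS.write_text(json.dumps(items))

def render_html() -> str:
    # Newest first, as a course home page would show them.
    rows = sorted(load_announcements(), key=lambda a: a["posted"], reverse=True)
    body = "".join(f"<li><b>{a['author']}</b>: {a['text']}</li>" for a in rows)
    return f"<ul>{body}</ul>"

post_announcement("vanes", "Problem set 2 is due Friday.")
print(render_html())
```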
In January, Nat Kannan, Carl Tyson, and Michael Anderson form UOL Publishing (now VCampus) and release an Internet course delivery platform; the Java client accesses PLATO content on a CDC mainframe. In November, UOL releases a browser-based course authoring and delivery platform based on the Informix OO database. The UOL system supports multiple campuses (with "buildings" on each "campus" for the different academic functions) and enables multiple roles (admin/author/instructor/student) for every user on a course-by-course basis. UOL's virtual campus is adopted by Graybar Electric and the University of Texas TeleCampus (among others) in early 1997.
Paul McKey publishes the design specifications for an "Interactive on-line Tutorial Session Model" in his Masters thesis "The Development of the On-line Educational Institute", SCU, Australia, July 1996.
An electronic, network-based assignment submission tool is in use at the Australian National University Department of Computer Science. Web-based course pages are also implemented at ANU DCS (both the submission tool and the course pages may have been in use prior to 1996).
The University of Michigan launches the UMIE project (the University of Michigan Instructional Environment), a combination of systems to enhance learning online and to create a Learning Management System for use by the campus.
In the summer of 1996, the Research Station Petnica (Yugoslavia) begins a digital transformation of its teaching, publishes a digital book on the Internet and lays the foundations for online teaching.
University of Southern Queensland (USQ) offers its first fully online program, a Graduate Certificate in Open and Distance Learning, using a system that linked together course materials presented in web pages, online discussion via newsgroups (NNTP) and a purpose-built system for online submission of student work.
The development of COSE is funded from September 1996 to August 1999 by the JISC Technology Applications Programme (JTAP). COSE has continued to gain support from the JISC in its work on interoperability. The JTAP programme also funded the Toomol project, which produced the Colloquia P2P VLE, developed by Liber, Olivier, Britain and Beauvoir, which has had a major influence on the more recent development of the Personal Learning Environment (PLE) concept.
World Wide Satellite Broadcasting (WSB) Inc. develops a satellite-based distance learning system using synchronized video and audio courseware provided by UCLA. Content is delivered via Philips' CleverCast content distribution system to Windows PCs running Active Desktop via the Astro MEASAT Direct To Home (DTH) network, covering Malaysia, Thailand and India.
The TELSI (Telematic Environment for Language Simulations) VLE is developed at the University of Oulu in Finland. Development is headed by Eric Rouselle and continues into the present-day Discendum Optima.
The Marine Corps Management and Simulation Office (MCMSO) adapts DOOM II into Marine Doom, a virtual learning environment for training four-man fire teams.
KnowledgePlanet introduces what it describes as the world's first web-based Learning Management System in 1996. See: KnowledgePlanet: History & Milestones.
Stephen Downes, Jeff McLaughlin and Terry Anderson demonstrate and document the MAUD (Multi-Academic User Domain), holding a Canadian Association for Distance Education seminar on the system, Online Teaching and Learning, 29 January 1996.
Michigan State University's Virtual University opens. By 1997, its fully online courses include registration, payment, quizzing, discussions, a dropbox and, of course, course content. The system was created and developed by in-house programmers.
Garry Main and Kevan Gartland at the University of Abertay Dundee, UK, develop and deploy a system (webtest) for testing students in the School of Molecular and Life Sciences. This was later extended to allow images to be labelled, self-testing and teaching. Also in use at the time was the QuestionMark product. The work at Abertay was presented as a keynote talk at the BALANCE workshops in 1997/8.
Initial release of the ETUDES software at Foothill College, California.
Real Education founded (later changed to eCollege.com) as an LMS/CMS Application Service Provider company.
WEST (later WBTSystems) announces in early 1996 a new release of WEST (later renamed TopClass). Among the enhancements mentioned are: support for multiple-choice tests and "fill in the blanks" questions, including choosing questions randomly from a list (a question bank; a minimal sketch of this behaviour follows this list); and support for multiple classes with multiple content, with students able to take more than one class.
The article Lotus Notes in the Telematic University, written for LIGIS in September 1996, confirms that several US universities are using Lotus Notes for e-learning, including via a Web interface. It goes on to observe that "Lotus Notes already has offered for a year or more several of the groupware and Internet features that other systems like FirstClass and Microsoft Exchange are only just now getting".
Another article in the same edition of LIGIS confirms that FirstClass, to the relief of many of its users, announced a Web interface in August (see also the FirstClass Education Summit report, May 1996).
The 304-page PDF manual for the FirstClass Intranet Client (Part Number SOF3122) is widely and freely distributed by SoftArc across many bulletin boards and web servers and remains available at several universities (e.g. at the University of Maine, a long-standing user of FirstClass).
Not to be outdone by the UK Open University, the FernUniversität in Hagen (the German OU) describes its web-based virtual campus in a LIGIS article in October 1996, University of Hagen Online, by Schlageter and others. The project "goes beyond current approaches in that it integrates all functions of a university, thus producing a complete and homogeneous system. This does not only include all kinds of learning material delivered via electronic network (most "online university" approaches focus almost exclusively on this aspect) - but for a promising approach the following is absolutely essential: user-friendly and powerful communication, especially also between users themselves for collaborative learning (peer learning) and for social interconnecting, possibilities of group-work (cscw), seminar support, new forms of exercise and practical via net, easy access to library and administration, information and tutoring systems".
Microsoft announces MS Exchange at Networld+Interop. An article of the era speculates on its relevance to e-learning.
An article nominates 1996 as "the year of virtual universities". There were a large number of conferences - in particular, at Ed-Media Boston there was a packed session even though it was organised at short notice.
WebSeminar (Gary Brown, Eric Miraglia, Doug Winther, and Information Management Group; now retired, news release here), an interactive web-based space for integrating discussion and media-rich modules.
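The question-bank behaviour described for WEST/TopClass above, drawing questions at random from a list and grading multiple-choice and fill-in-the-blank answers, can be sketched as follows. This is a hypothetical Python illustration; the data structures, field names and grading rules are assumptions, not the product's actual design.

```python
# Hypothetical sketch of a question bank: multiple-choice and
# fill-in-the-blank questions drawn at random and auto-graded.

import random

QUESTION_BANK = [
    {"kind": "mc", "prompt": "2 + 2 = ?", "choices": ["3", "4", "5"], "answer": "4"},
    {"kind": "blank", "prompt": "The capital of France is ____.", "answer": "paris"},
    {"kind": "mc", "prompt": "H2O is ?", "choices": ["salt", "water"], "answer": "water"},
]

def draw_quiz(bank: list[dict], n: int) -> list[dict]:
    # Choose n distinct questions at random from the bank.
    return random.sample(bank, k=min(n, len(bank)))

def grade(quiz: list[dict], responses: list[str]) -> float:
    correct = 0
    for question, response in zip(quiz, responses):
        if question["kind"] == "blank":
            ok = response.strip().lower() == question["answer"]
        else:  # multiple choice: exact match against the keyed choice
            ok = response == question["answer"]
        correct += ok
    return correct / len(quiz)

quiz = draw_quiz(QUESTION_BANK, 2)
print([q["prompt"] for q in quiz])
# The score below depends on which questions were randomly drawn.
print(grade(quiz, ["4", "paris"]))
```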
The Virtual Classroom (Brown, Burke and Miraglia; retired), a web-based threaded composition environment; a WSU Boeing grant award and a Microsoft/Information Management Group partnership.
Northern Virginia Community College (NVCC)'s Extended Learning Institute switches from Lotus Notes to FirstClass and uses FirstClass in over 35 courses during the Fall 1996 semester.
March 1996: Allaire releases Allaire Forums, "a Web conferencing application built entirely on the ColdFusion platform. Forums provided a feature-rich server application for creating Internet, Intranet and Interprise collaborative environments. Already in use by hundreds of leading companies worldwide, Forums was the first in a new line of end-user Web applications."
Bruce Landon makes a proposal to British Columbia to set up a comparison service for VLEs, which makes its first report (on nine systems) in 1997. It was first called Landonline, then later Edutools.
Hermann Maurer (Graz University of Technology, Austria) publishes "LATE: A Unified Concept for a Unified Teaching and Learning Environment" in the Journal of Universal Computer Science. Based on the Hyper-G/HyperWave system developed by Maurer, LATE prefigures many of the features available in virtual learning environments, including content-authoring modules, digital libraries, asynchronous and synchronous discussion, and virtual whiteboards.
The University of Manitoba conducts an evaluation of course management systems that includes Learning Space (University of Washington), TopClass, WebCT and ToolBook.
Iowa State University develops Classnet, a web-based "tightly integrated, automated class management system". It was created to help with the administrative aspects of course management.
The Oracle Learning Architecture (OLA) is a course management system with over 75 training titles. It has the following features: home page, bulletin board, help, user profile, My Courses, course catalog, and reports. It served up web-based courses, downloadable courses, vendor demos and assessments.
Empower Corporation develops the Online Learning Infrastructure (OLI), a training management system that used a relational database as a central repository for courses and/or learning objects. It had built-in tools and templates for authoring learning objects. It also had a middleware layer called the Multimedia Learning Object Broker that mapped learning objects as they moved in and out of the database.
TeamSpace's Learning Junction, an Internet-based training management system founded by several ex-Oracle employees, is developed in Java. The program displayed a graphical list of courses, certification plans and needed skills. Students registered online and were given an individualized learning plan.
The JISC Technology Applications Programme (JTAP) coMentor VLE starts development at the University of Huddersfield, UK. The coMentor web site indicates that a further dissemination phase of the software started in 1998.
Work is funded at the University of Wales, Aberystwyth to further develop its Integrated Project Support Environment for Teaching, started in 1992.
Work is funded at the University of Wales, Aberystwyth by the Joint Information Systems Committee Technology Applications Programme (£164,000) for NEAT - Networked Expertise, Advice and Tuition - a system for students to obtain help across the Internet from tutors, sharing workspace, audio and video.
Tufts University presents to the Special Libraries Association; an article is published in the Proceedings of the Contributed Paper Session to the Biological Sciences Division of the Special Libraries Association, 12 June 1996, describing the creation of a networked relational document database which integrates text and multimedia, and the creation of tools which address the changing needs in medical education.
1997
Digitalbrain plc, founded by David Clancy in 1997, quickly establishes itself as the most heavily used learning platform in the UK, which is still the case in April 2007. Digitalbrain is the first learning platform to be deployed using an on-demand software model and, as the name implies, the first designed around a user-centric approach - "a truly foresighted design" according to the heaviest users of the platform. The combination of the on-demand and user-centric approaches meant that a single, flexible learning platform could be rolled out across a host of different school and institutional user groups, each with multiple but inter-related user hierarchies and with different software bundles and functional capabilities - easily, quickly and cheaply. At a time when users had little understanding of why they needed a learning platform, let alone what they would do with it, this approach encouraged user experimentation at an affordable price.
In early 1997, CourseInfo is founded by Dan Cane and Stephen Gilfus, an undergraduate student and teaching assistant, and launches the Interactive Learning Network 1.5 based on scripts that Dan Cane began writing in 1996. The product is one of the first systems to be based on a relational database, with internet forms and scripts providing announcements, document uploading, and quiz and survey functionality.
In 1997, Instructional Design for New Media - an online course on how to develop online courses - is created using forums, interactive exercises and the notion of collaborative learning by a community of instructors and students. Developed by a Canadian consortium led by Christian Blanchette (Learn Ontario) and funded by the Canadian government, it was featured in May 1998.
Brandon Hall publishes the "Web-Based Training Cookbook: everything you need to know for online training" (New York: John Wiley). The book contains many examples of online training software and content already in commercial use. Brandon Hall also publishes the first of his annual reviews of Learning Management Systems, entitled "Training Management Systems: How to Choose a Program Your Company Can Live With." There are 27 learning management systems listed in this report.
Elliott Masie publishes the second edition of the "Computer Training Handbook" (the first version was published in 1995 and co-authored by Rebekah Wolman). In this book Elliott describes teaching a pilot course via the Internet called "Training Skills for Teaching New Technology". The book also has a chapter entitled "On-line and Internet-Based Learning".
The Stanford Learning Lab, an applied research organization, is created to improve teaching and learning with effective use of information technologies. It carried out many projects that developed techniques and tools for large-lecture, geographically distributed, and project-based courses. A study of a web-supported large lecture course, The Word and the World, tested online structured reading assignments, asynchronous forums, and student projects. Software developed included: panFora, an online discussion environment for the development of critical thinking skills; CourseWork, an online, rationale-based, problem set design and administration environment; E-Folio, ubiquitous, web-based, portable electronic knowledge databases that are private, personalized and sharable; Helix, web-based software developed to coordinate the iterative review of research papers; and RECALL™, to capture, index, retrieve, and replay concept generation over time in the form of a sketch and the corresponding audio and video rationale annotation.
In June 1997, Gotham Writers' Workshop (writingclasses.com) launches its online division; classes feature blackboard lectures, class discussion bulletin boards, interactive chat, homework posting with individual teacher response, and group assignment posting with group critique files.
Virginia Commonwealth University licenses Web Course in a Box (WCB) to madDuck Technologies in early 1997. madDuck Technologies was formed in early 1997 by Sue Polyson, Robert Godwin-Jones and Steve Saltzberg in order to provide support and services to other educational institutions using WCB. WCB version 1 was released in February 1997 (beta versions were released in 1996, and the product was in use at VCU and several other institutions in 1996). WCB V2 was released in September 1997 and added web-based quizzing, as well as more course site customization, to the feature set.
The Oncourse Project at Indiana University utilizes the notion and design of a "template-based course management system". Other systems used a similar approach, including CourseInfo, WebCT, and other course management systems.
Lotus LearningSpace is deployed as the learning and student team environment for the Indiana University Accounting MBA program and reported in the proceedings of HICSS-32.
Lotus LearningSpace is presented at NERCOMP on 24 March 1997: "Interactive Distributed Learning Solutions: Lotus Notes-Based LearningSpace" by Peter Rothstein, Director, Research and Development Programs, Lotus Institute.
Plateau releases TMS 2, an enterprise-class learning management system. TMS 2 was adopted by both the U.S. Air Force and Bristol-Myers Squibb at the time of its release.
The Bodington VLE is deployed at the University of Leeds, UK. (The Bodington System - Patently Previous.) By 1997, the Bodington VLE included many of the features listed in Blackboard's US Patent #6,988,138, including the variable-role authentication/authorization system. A full record exists of all activity in the Bodington VLE at Leeds going back to October 1997.
First versions of COSE are deployed at Staffordshire University. COSE includes facilities for the publication and reuse of content, for the creation and management of groups and sub-groups of learners by tutors, and for the assignment of learning opportunities to those groups and to individual learners. For an article (1997) see Active Learning with COSE; this article was republished in Australia in mid-1998.
Ziff Davis launches ZDNet University for $4.95/month, offering courses in programming, graphics and web management.
Cisco Systems: In 1993, Cisco embarked on an initiative to design practical, cost-effective networks for schools. It quickly became apparent that designing and installing the networks was not enough; schools also needed some way to maintain the networks after they were up and running. Cisco Senior Consulting Engineer George Ward developed training for teachers and staff on the maintenance of school networks. The students in particular were eager to learn, and the demand was such that in 1997 it led to the creation of the Cisco Networking Academy Program. The program teaches students networking and other information technology-related skills, preparing them for jobs as well as for higher education in engineering, computer science and related fields. Since its launch, it has grown to more than 10,000 Academies in 50 U.S. states and more than 150 countries, with a curriculum taught in nine different languages. More than 400,000 students participate in Academies operating in high schools, colleges and universities, technical schools, community-based organizations, and other educational programs around the world. The Networking Academy program blends face-to-face teaching with web-based curriculum, hands-on lab exercises, and Internet-based assessment.
Fretwell Downing, based in Sheffield, England, works on the development of a virtual learning environment under the auspices of the "LE Club", a partnership between the company and eleven English Further Education colleges (Dr Bob Banks's outline specification for a Learning Environment). The "LE" had arisen from a 1995-1997 EU ACTS project - Renaissance - in which Fretwell Downing was the prime contractor.
Convene International is recruited by Microsoft to become its first Education marketing partner. Convene helps Microsoft establish licensing parameters for the ASP companies.
Blackboard Inc. is founded as a consulting firm.
WebAssign is developed by faculty at North Carolina State University for the online submission of student assignments, with a mechanism for immediate assessment and feedback.
WebCT spins out of UBC, forming an independent company with several hundred university customers.
Release of TWEN (The West Education Network), a system which "connects you with the most useful and current legal information and news, while helping you to organize your course information and participate in class discussions". (See Homepage.)
The Future Learning Environment (FLE) research and development project starts in Helsinki, Finland. (See Homepage.)
Stephen Downes presents Web-Based Courses: The Assiniboine Model at NAWeb 1997, describing the LMS in detail.
A collaborative writing project between junior high students and university pre-teachers uses FileMaker Pro to create collaborative writing spaces, January-March 1997.
The Manhattan Project (now known as the Manhattan Virtual Classroom) is launched at Western New England College in Springfield, MA as a supplement to classroom courses in February 1997. It is later released as an open source project. (The Manhattan Project - history and description.)
Delivery starts of the LETTOL course in South Yorkshire, England. Characteristics: delivery over the Internet; materials, tasks/assignments, discussion board and chat system all accessible by browser; browser-based amending of the materials; and learners and tutors all over the world, with learners enrolled at several of the institutions in the (then) South Yorkshire Further Education Consortium and tutors employed by several different institutions.
The Web Project at California State University, Northridge, adapts HyperNews, a shareware discussion board from The Turing Institute, to create course-specific discussion spaces with faculty and students. In addition, QuizMaker from the University of Hawaii and Internet Relay Chat (IRC) were shortly thereafter added to the shareware suite and indexed to faculty webpages. The Virtual 7 were seven faculty who began to teach online in 1995 with this software.
The University of Aberdeen starts a project to research and evaluate web-based course management and communication tools. Project notes are available, including the original administrator guides for TopClass v.1.2.2b, October 1997 (PDF). Aberdeen ultimately chooses WebCT and rolls out a live system in 1998.
Pioneer is developed by MEDC (University of Paisley). Pioneer was an online learning environment developed initially for colleges in Scotland. It was web-based and featured: online course materials (published by the lecturers themselves); integral email to allow communications between students and tutors; forum tools; chat tools; a timetable/calendar; and activities. The main driver for Pioneer was Jackie Galbraith. When MEDC was closed, the Pioneer development team moved to SCET in 1998, taking Pioneer with them, and it became SCETPioneer, used by Glasgow colleges and a number of other colleges and schools in Scotland. SCET merged with the SCCC and became Learning and Teaching Scotland.
Bob Jensen and Petrea Sandlin republish "Electronic Teaching and Learning: Trends in Adapting to Hypertext, Hypermedia, and Networks in Higher Education" - first published 1994, text of both versions available via hyperlink.
Speakeasy Studio and Café (Gary Brown, Travis Beard, Dennis Bennett, Eric Miraglia, and others; now retired, but many references remain on WSU websites, e.g., these), a course delivery system hosted by Washington State University and used on multiple campuses for web-based discussion and collaborative writing. Speakeasy had a primitive portfolio view that allowed instructors and students to find all the writings of a given author within a course space, by discussion topic or in a calendar view.
The Cougar Crystal Ball (Gary Brown, Randy Lagier, Peg Collins, Greg Turner & Lori Eveleth-Baker and others), an online learning profile and corresponding university resource inventory, implements ideas related to selective release of material based on learner preparedness.
The WSU OWL (Online Writing Lab) (Gary Brown, Eric Miraglia, Greg Turner Rahman, Jessie Wolf, & Dennis Bennett; still in use at WSU and by others), an interactive forum for peer tutoring in writing (WSU Boeing grant award), involves a simple threaded discussion. OWL retires in favor of eTutoring in March 2008.
The VIRTUS project at the University of Cologne, Germany, starts development of the web-based ILIAS learning management system in 1997. A first version with an integrated web-based authoring environment goes online on 2 November 1998. In 2000 ILIAS becomes open source software under the GPL.
Serf is invented at the University of Delaware by Dr. Fred Hofstetter during the summer of 1997. Initially used to deliver the U.S.'s first PBS TeleWEBcourse (on Internet Literacy), Serf has been used to deliver hundreds of courses. Serf "began as a self-paced multimedia learning environment that enabled students to navigate a syllabus, access instructional resources, communicate, and submit assignments over the Web," and the Serf feature set was expanded from 1997 to 1999 as described in an article in College & University Media Review (Fall 1999), 99–123, which includes a detailed table describing the history of Serf's feature development for versions 1 through 3.
University of Maryland University College (UMUC) offers its first classes using WebTycho, a customized "program developed by UMUC to facilitate course delivery via the World Wide Web."
Paul McKey launches BigTree Online, a commercial, integrated online learning environment for managing the Apple certification program in Asia Pacific, built with FileMaker Pro from a model first described in his Masters thesis in 1996.
Saba is founded; now one of the pre-eminent corporate learning management systems.
FutureMedia (established in 1982) commences the development of Solstra with BT Group PLC, launching the first version of the product in February 1998. (Annual report for 2001 to SEC.)
March 1997: Oleg Liber presents his paper "Viewdata and the World Wide Web: Information or Communication" at CAL 97 at the University of Exeter, England. In it he looks back to the use of videotex in education in the 1980s and forward to a more communications-oriented Web - what we would call Web 2.0 these days - but this was 9 years ago. The paper is worthy of note since Liber is still active in e-learning, and as one of the few papers dealing with the history of e-learning.
Formal Systems Inc. of Princeton, NJ, USA introduces an internet version of its Assessment Management System, which started as a DOS program in 1990. (In 2000, Formal Systems changed its name to Pedagogue Solutions.)
Educom's IMS Design Requirements are released in a document dated 19 December 1997.
Teaching in the switched-on classroom: An introduction to electronic education and HyperCourseware is published online by Kent Norman at the University of Maryland, College Park, MD: Laboratory for Automation Psychology.
Bob Godwin-Jones and Sue Polyson give a presentation at EDUCOM '97 entitled "Tools for Creating and Managing Interactive Web-based Learning". The presentation compared the features of Web Course in a Box and TopClass; the slides are still available online. The madDuck Technologies web site listed the many distinctive features of the Web Course in a Box course management system.
An online column by Tom Creed called "The Virtual Companion" lists a number of course management systems, including Web Course in a Box, WebCT, Nicenet, and NetForum.
Virtual-U, a course management system for universities, is developed at Simon Fraser University (SFU) in British Columbia, Canada. A design paper, Virtual-U Development Plan: Issues and Process, dated 25 June 1997, gives a clear description including screen shots. By early 1998 the system was deployed in a number of universities and colleges across Canada, including SFU, Laval, Douglas College, McGill, University of Winnipeg, University of Guelph, University of Waterloo, and Aurora College. (Source: The Peak, Simon Fraser University's student newspaper, Volume 98, Issue 6, 16 February 1998.)
A press release dated 10 March 1997 announces that "DLJ's Pershing Division Aligns with Princeton Learning Systems and KnowledgeSoft to Create On-line University".
Knowledgesoft's LOIS (Learning Organization Information System) is described by Brandon Hall, in his book The Web-based Training Cookbook (New York: John Wiley, 1997), as an "innovative Web-based training administration tool." It had three core modules: a competency management system, an assessment system, and a training management system.
The University of Lincoln and Humberside (ULH) in the UK (later the University of Lincoln) begins development of its "Virtual Campus" software, which was later incorporated into a spin-out company called Teknical, bought by Serco in 2003. Historical references seem fragmentary, but some indication of the date of origin is contained in the overview material on the joint SRHE/Lincoln conference on 'Managing Learning Innovation', which took place on 1-2 September 1997 at the university. Substantial funding came from BP, as noted in a web page of the former Learning Development Unit at ULH.
Two key papers on Role-Based Access Control (RBAC) are published: a Kuhn paper on separation of duty (necessary and sufficient conditions for separation safety) and an Osborn paper on the relationship between RBAC and multilevel-security mandatory access control (MLS/MAC) policy models (a role lemma relating RBAC and multilevel security).
Al Seagren and Britt Watwood present "The Virtual Classroom: What Works?" at the Annual International Conference of the Chair Academy, Reno, NV (see ERIC Document Reproduction Service No. ED407029). This presentation reviewed two years of the use of Lotus Notes as a learning management system in masters- and doctoral-level education degrees at the University of Nebraska.
July 1997: The Report of the National Committee of Inquiry into Higher Education, usually called the Dearing Report, is published in the UK. Many of its recommendations were influential not only in the development of e-learning but in the development of the national-level support structures for it, eventually leading to the Higher Education Academy. The report web site is maintained by the University of Leeds.
April 1997: The Kolibri project (Kooperatives Lernen mittels Internet-basierter Informationstechniken; Cooperative Learning with Internet-based IT) is launched at the University of Dortmund and goes live in February 1998 with a course on fuzzy logic. The Kolibri system was a generic web-based application which supported multiple courses and several user groups (student administration, tutors, students). The application supported personal course histories, personal notes on content, automatic tests, and interactive cooperative applets for teamwork in lessons. The system also contained a chat system and a blackboard for information exchange. A report in German is available as PDF.
In January 1997, Scott Gray, Tricia Gray, Kendell Welch, and Debra Woods launch Useractive, an online learning resource dedicated to the "useractive" learning pedagogical technique. This technique has its roots in constructivism, augmented with computer-aided guidance. This asynchronous system is enabled by embedding tutorials and learning management functions into development tools.
In October 1997, the French University of Technology at Compiègne (UTC) launches the first French fully online degree, Dicit, training documentation engineers, using the Lotus LearningSpace platform. The degree was created by Prof. Dominique Boullier and Prof. Jean-Paul Barthes. It offered 15 different courses, a serious game and several case studies on CD-ROM, as well as close coaching of the 20 to 25 students enrolled each year. The format was closer to blended learning, since the students met every two months for a face-to-face session. The degree was given for 10 years, until 2007. Papers were written on this successful experiment, e.g. Boullier, Dominique, "Les choix techniques sont des choix pédagogiques : les dimensions multiples d'une expérience de formation à distance", Sciences et Techniques Educatives, vol. 8, no. 3-4, 2001, pp. 275–299.
1998
On 11 August 1998 Indiana University, IUPUI Campus, issues a press release, "Prototype for Web-based Teaching and Learning Environment to be Tested at IUPUI This Year".
Ucompass.com is founded on 23 July 1998 and begins marketing its Educator course management system.
CourseWork, a web-based problem set manager, is developed at Stanford University's Learning Lab; it formed the core of the CourseWork CMS. This version supported authoring, distribution, completion, and reviewing of automatically graded assignments by students and instructors.
Humboldt State University's Courseware Development Center builds the ExamMaker application for online testing. ExamMaker supports banks of questions, which may include audio and/or video segments, and which may be true/false, fill-in-the-blank, multiple choice, or essay. Essay questions are emailed to the teacher for grading, then sent back to ExamMaker to display the graded essays to the students. ExamMaker grades all other types of questions and provides the student immediate feedback as soon as the exam is completed, including an explanation of the correct answers, and automatically posts the grade. (Full description: Assured Student Access To Computing And The Network.)
On 1 June 1998, a paper describing a web-based peer review and assessment tool developed by the Courseware Development Center at Humboldt State University is presented at the 1998 ASEE Annual Conference & Exposition: Engineering Education Contributing to U.S. Competitiveness. The Peer Review tool was a set of web forms that enabled students to upload documents and review each other's work, and enabled an instructor to review and grade students' uploaded work.
On 2 November 1998, the web-based learning management system ILIAS goes online at the University of Cologne. Within one year, more than 30 courses are created and published for blended learning in economics, business administration and social sciences.
In the spring of 1998, TeleTOP, a set of fill-in forms on top of Lotus Domino, goes live at Twente University, The Netherlands. It was not the first electronic learning environment used there, but it was the first where teachers themselves could create a course without any ICT knowledge. The core of the product was and is the central task scheme ("The Roster"), where the teacher could create a row of activities for each week. A demo course has been available online since 1998; you can still log in with username docent.test and password docent.test, though this is an old version of TeleTOP. Since 1998 the look and feel has completely changed and the system has many more functionalities; modules such as Digital Portfolio and Assessment Centre have been developed to measure pupils' competences and development. Open standards such as SCORM, IEEE LOM, Dublin Core and AICC were implemented from the start for reuse and research possibilities. Further information can be found on Teletop.
On 14 May 1998, Indiana University ARTI receives a "Disclosure of Invention" for Oncourse (case #9853), describing the invention by Ali Jafari and his WebLab developers of a comprehensive course management system with message board, announcements, chat, syllabus, etc., including a dynamic method of creating courses for students and faculty based on data from the campus SIS system.
The Cisco Networking Academy Management System (CNAMS) is released to facilitate communication and course management for the largest blended learning initiative of its time, the Cisco Networking Academy. It includes tools to maintain rosters, gradebooks and forums, as well as a scalable, robust assessment engine. (Cisco Networking Academy Program.)
The Advanced Information Technology Lab at Indiana University-Purdue University Indianapolis pilots Oncourse. (A description of the initial software was published in 1999 in The Journal.)
Nicenet Internet Classroom Assistant (ICA2) is launched, bringing web-based conferencing, personal messaging, document sharing, scheduling and link/resource sharing to a variety of learning environments. (See their website.)
DiscoverWare, Inc. builds and begins to deploy its "Nova" course management system, involving a client/server architecture to deploy rich interactive content in a desktop application, and storing/sharing information on content, users, courses, and quizzes on a central server. This was an adaptive LMS, in that quizzes were generated based on the user's progress through the content, and courses were generated based on the user's responses to a quiz. The playback engine evolved into a browser-based version that was SCORM Level 2 compliant, enabling deployment of DiscoverWare content in third-party LMSs such as Pathware.
Public release of the EDUCOM/NLII Instructional Management Systems Specifications Document Version 0.5 (29 April 1998), produced by an IMS Technical Team including Steve Griffin (COLLEGIS Research Institute), Andy Doyle (International Thomson Publishers), Bob Alcorn (Blackboard), Brad Cox (George Mason University), Frank Farance (Farance Inc), John Barkley (NIST), Ken Schweller (Buena Vista University), Kirsten Boehner (COLLEGIS Research Institute), Mike Pettit (Blackboard), Neal Nored (IBM), Tom Rhodes (NIST), Tom Wason (UNC) and Udo Schuermann (Blackboard). Available as DOC.
Blackboard LLC merges with CourseInfo LLC to form Blackboard Inc and changes the CourseInfo product name to Blackboard's CourseInfo.
Web Course in a Box, Version 3, is released in 1998. This version added a WhiteBoard feature as well as student portfolios, access tracking, course copying between instructors, and batch account administration.
The Instructional Technology Group at Yale University puts the "Classes" system into production for the Fall semester. (A copy of the original site is captured in the Internet Archive for Spring 1999.)
WebTestr is built and deployed by Nicholas Crosby at SIAST.
Fretwell-Downing Education Ltd (now part of Tribal Group plc) builds a pilot web-based learning environment for use in delivering accredited courses in internet skills (information retrieval, web design and online collaboration) in the UK. (Partial details, dated 30 December 1997.) The learning environment is a contribution to the work of the Living IT consortium, which includes The Sheffield College and Manchester College of Arts and Technology as well as Fretwell-Downing Education Ltd, and which had been delivering these courses since 1997. (In 1999, the company demonstrates this learning environment as part of its successful tender to build a larger, more sophisticated learning environment for learndirect, which was subsequently used by hundreds of thousands of learners in England and Wales.)
Teemu Leinonen and Hanni Muukkonen publish a paper on Future Learning Environment - Innovative Methods and Applications for Collaborative Learning. The Future Learning Environment (FLE) research and development project releases the first version of the FLE software, afterwards known as Fle3.
The survey article "Embedding computer conferencing in university teaching" (Mason and Bacsich) is published in Computers and Education, Volume 30, Number 3, April 1998, pp. 249–258. This describes experiences with using CoSy and FirstClass in online learning at the Open University in the period up to 1995. (Article available online, e.g. via Ingenta.)
CU Online, the virtual campus of the University of Colorado, is described in an online article by Terri Taylor Straut, first presented in 1997 at the FLISH97 conference in Sheffield, UK. CU Online uses the LMS from Real Education, later eCollege.com.
Virtual U, "a Web Based Environment Customised to Support Collaborative Learning and Knowledge Building", is described in an online article by Linda Harasim, Tom Calvert and others, also first presented at FLISH97. The paper makes it clear that development of Virtual-U had been under way since 1994.
CTLSilhouette (Gary Brown, Randy Lagier, Peg Collins, Josh Yeidel, Greg Turner & Lori Eveleth-Baker), an online survey and automated response generator. It allows authors to create custom question types in addition to questions made by a wizard, but lacks the scoring and feedback features of an online test/quiz. CTLSilhouette powers The TLT Group's Flashlight Online system, which includes the Flashlight Current Student Inventory item bank, a useful tool for evaluations of virtual learning environments and for the scholarship of teaching and learning by instructors.
NextEd is founded by its CEO Terry Hilsberg in 1998 to deliver global e-learning from bases in Hong Kong and Australia. Its first prominent university client/partner is the University of Southern Queensland, a major Australian distance learning provider. Paul McKey joins NextEd as a foundation employee and CTO and begins development of an online learning management system first described in his Masters thesis "The Development of the On-line Educational Institute", SCU, Australia, July 1996.
In September 1998 the Computer Science department at RMIT University, Australia, begins delivering its online courses with Serf. Over 10,000 Open University Australia student enrollments used Serf's comprehensive LMS features until 2004, when RMIT's corporate Blackboard was phased in. During this period, Serf versions 1 to 3 hosted 13 undergraduate CS courses, 5 postgraduate CS courses and 3 continuously repeating short IT courses.
September 1998: The EU SCHEMA project releases via the Oulu team a "state of the art" review of CMC techniques applicable to open and distance learning (Deliverable D5.1). This includes a feature and architectural comparison of FirstClass, LearningSpace, TopClass and WebCT, and also describes a desired system, Proto. There is a full discussion of roles, and the diagrams are particularly informative. See "The use of CMC in applied social science training".
In May 1998, Interlynx Multimedia, Inc. of Toronto receives a contract to develop a learning management system for the Canadian Imperial Bank of Commerce. The LMS, designed by Dr. Gary Woodill and Dr. Karen Anderson, was built in Microsoft ASP. It included a rudimentary authoring system that allowed HTML pages and multiple-choice questions to be built and posted online. The generic code for this LMS became the PROFIS LMS, which was then licensed to several other corporations. Later, Operitel Corporation of Peterborough acquired the rights to this LMS, which was renamed LearnFlex. Operitel was sold to Open Text in 2012, and Gary Woodill is now CEO of i5 Research.
The Aviation Industry CBT Committee (AICC) certifies web-based Pathware 3 as its "First Instructional Management Product".
Asymetrix (later becoming Click2Learn and then SumTotal) buys Meliora Systems' learning management software, Ingenium, and merges it with its own learning management product, ToolBook II Librarian, a training management and administration system used with an Oracle, MS SQL Server or other ODBC database. Authoring is done through Asymetrix's ToolBook II Instructor, ToolBook II Assistant, or Asymetrix IconAuthor.
In October 1998, CoursePackets.com is founded by Alan Blake, a University of Texas at Austin student, with the goal of posting course packs online.
By the end of 1998, Indiana University's Oncourse system had grown to support some 9,000 students.
In December 1998 the School of Pharmacy at the University of Strathclyde launches its online learning environment, SPIDER.
WebDAV gives a standard method of uploading documents; it was already described in publications in 1998, e.g. "WEBDAV: IETF Standard for Collaborative Authoring on the Web", IEEE Internet Computing, September/October 1998, pages 34–40, and "Collaborative Authoring on the Web: Introducing WebDAV", Bulletin of the American Society for Information Science, Vol. 25, No. 1, October/November 1998, pages 25–29.
By May 1998, a number of course management systems and collaborative environments are available. These include CyberProf, a course management system from the University of Illinois; Mallard 3.0, a course management system from the University of Illinois; netLearningPlace, a collaborative environment for teaching and learning; PlaceWare, software for live presentations; POLIS, a system from the University of Arizona; The Learning Manager (TLM), from Campus America, Inc.; ToolBook II from Asymetrix Corporation; TopClass, from WBT Systems; Virtual Classroom Interface (VCI), from the University of Illinois; Virtual Object Interactive Classroom Environment (VOICE), a graphic MOO; Web Course in a Box, developed at Virginia Commonwealth University; WebCT, from the University of British Columbia; Web Instructional Services Headquarters (WISH), from Penn State University; and Web Lecture System (WLS), a web lecturing system from North Carolina State University. (Source: Distance Learning Environments Feature List, University of Iowa, last updated 13 May 1998.) Of these, WebCT is by far the most widely used, with licenses at roughly 500 institutions by year end.
1999
Fronter, a European software company, launches its environment for web-based collaboration. During 1999 to 2001, the system is implemented by the majority of Norwegian higher education institutions and used as their platform for learning and collaboration.
In January 1999, CoursePackets.com goes live, serving dozens of courses at the University of Texas at Austin. The service allows the posting of course packs online at a substantial discount over the cost of printed materials. By May 1999, CoursePackets.com begins work on a courseware system for launch in January 2000. The courseware system is comparable to Blackboard and is actively marketed as "CourseNotes.com" beginning in the summer of '99.
February 1999: Ossidian Technologies is launched in Dublin, Ireland. Within 6 months the company develops OLAS, its first web-based LMS, and begins developing a complete library of eLearning for wireless telecom (cellular, satellite, broadcast, personal and fixed wireless, operations).
September 1999: The IEEE magazine Web-based Learning and Collaboration publishes A Framework for Online Learning: The Virtual-U, describing the history of the Virtual-U system from its inception in 1993, with screen shots and descriptions. In particular, it has a "user interface that gives instructors or moderators the ability to easily set up collaborative groups and define structures, tasks, and objectives". Further, system administrators have tools to help in "creating and maintaining accounts, defining access privileges, and establishing courses on the system".
In October 1999, the UCLA School of Dentistry Media Center and Dr. Glenn Clark develop an Internet-based authoring tool, labeled Internet Courseware (iic), which provides DDS students with simulation modules for diagnosis and treatment planning of patients across a large breadth of possible medical conditions, as well as access to lecture notes, exam reviews, course supplements and faculty contact information. Users are presented with access to virtual patients based on class, previous coursework and patient/dentist activity within the system. The project was described in the Journal of Dental Education in 1999 (Clark GT, Carnahan J, Masson P and Watanabe T. Case-Based Courseware for Distance Learning. J. Dent Educ. 63:71 (#191) 1999).
In October 1999, Liber and Britain publish Framework for Pedagogical Evaluation of Virtual Learning Environments (MS Word file), a study for the United Kingdom Joint Information Systems Committee evaluating 12 different VLEs in detail. The report contains a schematic of a prototypical VLE comprising 15 generic functionalities and describes each of these functionalities in turn. There is a narrative description of each of the evaluated VLEs, and a comparative table summarising which features each provides.
The Oncourse Project invents and introduces the notion of an "enterprise course management system", in which data from the Student Information System (SIS) is used to automatically and dynamically create a CMS course site for every course offered at the IUPUI campus (more than 6,000 courses offered to more than 27,000 students); a minimal sketch of this provisioning pattern follows this list.
Martin Dougiamas trials early prototypes of Moodle at Curtin University of Technology, built during 1998 and 1999. The paper "Improving the effectiveness of tools for Internet-based education", published in January 2000, details one case study and includes screenshots.
The LON-CAPA project is started at Michigan State University.
Desire2Learn is founded.
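The SIS-driven provisioning pattern described for Oncourse above can be sketched briefly. This is a hypothetical Python illustration under assumed inputs: the SIS feed format, tool list and function names are invented, not Oncourse's actual design.

```python
# Hypothetical sketch of "enterprise" course provisioning: course sites are
# created automatically from Student Information System (SIS) data.

sis_feed = [
    {"course_id": "ENG-W131-001", "title": "Elementary Composition",
     "instructor": "jsmith", "students": ["alice", "bob"]},
    {"course_id": "MATH-M118-002", "title": "Finite Mathematics",
     "instructor": "tchan", "students": ["bob", "carol", "dave"]},
]

course_sites: dict[str, dict] = {}

def sync_from_sis(feed: list[dict]) -> None:
    """Create or update one course site per SIS section, dynamically."""
    for section in feed:
        site = course_sites.setdefault(section["course_id"], {
            "title": section["title"],
            "tools": ["syllabus", "announcements", "discussion", "gradebook"],
            "members": {},
        })
        site["members"][section["instructor"]] = "instructor"
        for student in section["students"]:
            site["members"].setdefault(student, "student")

sync_from_sis(sis_feed)
print(len(course_sites), "course sites provisioned")
print(course_sites["ENG-W131-001"]["members"])
```

Re-running the sync against a fresh feed is idempotent for existing sites, which is what makes provisioning every offered course practical at campus scale.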
The University of Michigan launches CourseTools, originally a product of the UMIE project (launched in 1996), and moved into its own development and production team due to the scale and scope of the LMS being launched and created. The Omnium Project based at The College of Fine Arts at the University of New South Wales ran its first global creative studio project online for 50 design students from 11 countries. September 1999 - The brand new Technical University of British Columbia admits its first students. Their 'Course Management System' is a home-grown system with 2+ years of development behind it at this point. Web Course in a Box, version 4 was released by madDuck Technologies in early 1999. WCB Version 4, added a gradebook and assignment manager. Companion products, Web Campus in a Box (for creating web pages for a department or program) and Web CourseBuilder Toolbox (for creating faculty web pages and forums, and course listings that were independent of the WCB system) were released in this same time period. WebCT purchased by Universal Learning Technology. Roughly 1000 campuses using WebCT by end of year. "Courseware Accessibility Study"] published, evaluating seven online courseware systems for their accessibility. Stephen Downes publishes Web-Based Courses: The Assiniboine Model in the Online Journal of Distance Learning Administration. The University of South Australia launches its web-based online learning platform, UniSAnet in March 1999. UniSAnet was developed over 9 months in 1998 and 1999, following a paper to its Academic Board in May 1998. Wolfgang Appelt and Peter Mambrey publish a paper on using BSCW as a virtual learning environment. ETUDES 2.3 released. ETUDES 2.5 is released in December. The system is used at several community colleges in California, including Foothill, LasPositas, and Miracosta. "Practical Know How: Distance Education and Training over the Internet" (Jissen Nouhau Inta-netto de Enkaku Kyouiku/Kenshuu) by Douyama Shinichi published in April 1999 by NTT publishing. . "It would seem easy to begin distance learning and distance education over the Internet, as an extension of (conventional) distance learning. When it comes to teaching several hundred students in this way, there are a number of problems still to be resolved at this time. In this book we will consider, the selection of teaching materials, making online contents, management methods, and introduce concrete practical know how with good cost performance and lots of practical advice." Chapter one details the trial of an Internet distance learning system, from sending out invitations to graduation. Sheffield company Fretwell Downing is marketing its "LE" (Learning Environment) product. September 1999 product overview. Washington State University publishes online a comparison of 24 VLE's, focusing on eight that were considered candidates for adoption at WSU. (Note: Only the final draft survives in the archives.) Thorough "Comparison of Online Course Delivery Software Products" published by Marshall University - with stated last update of 1 October 1999 - examining in detail the features and functionalities of 16 mainly US and Canadian systems. The Bridge (Gary Brown, Mathew Shirey, Dennis Bennett, Greg Turner-Rahman). (now retired, but available available read-only) a course management system with sub-spaces for teams that empowers students to create resource objects (threaded discussion, file upload, web links, notes, and quizzes) in the course. 
Bridge also had a "personal workspace" that provided the same collaborative and ePortfolio tools to individuals outside any course offering. The concept was not fully implemented, as there was no mechanism to authorize users into one's personal workspace. Northern Virginia Community College (NVCC)'s Extended Learning Institute (ELI) begins using Allaire Forums for web-based conferencing in a variety of online/distance courses. University of Maryland University College (UMUC) unveils Version 2.0 of its customized WebTycho program with a new interface design. Through Fall 1999, UMUC has installed WebTycho servers on three continents and served over 26,000 students and faculty in over 1,000 WebTycho courses. In spring 1999 the development of the open source LMS OLAT was initiated by Sabina Jeger, Franziska Schneider and Florian Gnägi to support a tutoring course with 900 students at the University of Zurich. The system was put into production in fall 1999, when the 900 students registered for 25 classes coached by older students. This first version of OLAT was built on LAMP technology. Later, the system was completely rebuilt on Java EE technology to support the e-learning needs of a whole campus. IBM's Lotus group buys Macromedia's Pathware 4 learning management system. This LMS is later merged into the Lotus Learning Space LMS. For an article on the purchase, see here. Isopia (founded in 1998) entered the e-Learning landscape in 1999 with the launch of its Integrated Learning Management System (ILMS), its Web-based infrastructure software. Built on Enterprise JavaBeans, the ILMS was claimed to be "a flexible, open system that allows for massive scalability and adapts to a variety of learning needs and rapidly-growing user communities". Isopia grew rapidly in clients and deals (e.g. see the industry testimonials to its feature list from 1999 and early 2000 here) until being bought by Sun Microsystems in 2001. See Learning Trends by Elliott Masie. Knowledge Navigators International releases its third version of LearningEngine as MyLearningPlace. It was used by the United Nations Development Programme for several years for worldwide communities of practice and adopted by a large architectural firm in California. The company closed in 2001; a new incarnation of the software lives on as Coachingplatform.com. "First Annual WebCT Conference on Learning Technologies" takes place at the University of British Columbia in Vancouver, Canada, from 17 to 18 June. Tim Barker presents a paper "Community Based Virtual Learning: A WebCT Physics Course" comparing three VLEs (WebCT, Topclass and Learning Space) plus Eventware (web annotations & chat), Ceilidh & Tree of Knowledge (discussion boards), Netmeeting (Whiteboard, chat etc.), Inspiration (Concept Mapping) & Composer/Writers Assistant (scaffolds writing process). Additionally, Tim proposes integrating a Learning Companion. This conference represents a milestone as one of the first VLE user conferences. It is a significant success, with 700 in attendance, and presents a logistical challenge for organisers, who were originally expecting between 50 and 100. Registration had to be closed over a month before the conference date due to the large numbers. 5 December 1999: Randy Graebner's proposal for his master's thesis, Online Education Through Shared Resources. The BENVIC project started in late 1999 and ran for two years. Its aim was to benchmark the various virtual campuses (i.e. university-level distance e-learning services) operating across Europe.
The BENVIC web site contains several useful outcomes. The project became quiescent in early 2002. It represented a move beyond benchmarking VLEs to benchmarking e-learning at a higher level, i.e. the services which the VLEs underpinned. Dennis Tsichritzis of the University of Geneva publishes "Reengineering the University" (Communications of the ACM, Vol. 42, Issue 6, June 1999). One reviewer observes "This is a must-read article for academics" but later cautions that "most traditional college students, particularly in the US, do not have the self-discipline to adjust to the educational environment Tsichritzis describes." Scholastic Corporation publishes Read180, an application for Macs & PCs to improve reading skills in schools. Read180 shipped with sets of CD-ROMs on various topics, each with video presentations and interactive tests. Audio recording sessions by students were sent over the network to a teacher's workstation for evaluation. References History of human–computer interaction
History of virtual learning environments in the 1990s
Technology
18,267
24,280,199
https://en.wikipedia.org/wiki/Metal%20L-edge
Metal L-edge spectroscopy is a spectroscopic technique used to study the electronic structures of transition metal atoms and complexes. This method measures X-ray absorption caused by the excitation of a metal 2p electron to unfilled d orbitals (e.g. 3d for first-row transition metals), which creates a characteristic absorption peak called the L-edge. Similar features can also be studied by Electron Energy Loss Spectroscopy. According to the selection rules, the transition is formally electric-dipole allowed, which not only makes it more intense than an electric-dipole forbidden metal K pre-edge (1s → 3d) transition, but also makes it more feature-rich, as the lower required energy (~400-1000 eV from scandium to copper) results in a higher-resolution experiment. In the simplest case, that of a cupric Cu(II) complex, the 2p → 3d transition produces a 2p⁵3d¹⁰ final state. The 2p⁵ core hole created in the transition has an orbital angular momentum L=1, which then couples to the spin angular momentum S=1/2 to produce J=3/2 and J=1/2 final states. These states are directly observable in the L-edge spectrum as the two main peaks (Figure 1). The peak at lower energy (~930 eV) has the greatest intensity and is called the L₃-edge, while the peak at higher energy (~950 eV) has less intensity and is called the L₂-edge. Spectral components As we move left across the periodic table (e.g. from copper to iron), we create additional holes in the metal 3d orbitals. For example, a low-spin ferric Fe(III) system in an octahedral environment has a ground state of (t₂g)⁵(eg)⁰, resulting in transitions to the t₂g (dπ) and eg (dσ) sets. Therefore, there are two possible final states: t₂g⁶eg⁰ or t₂g⁵eg¹ (Figure 2a). Since the ground-state metal configuration has four holes in the eg orbital set and one hole in the t₂g orbital set, an intensity ratio of 4:1 might be expected (Figure 2b). However, this model does not take into account covalent bonding and, indeed, an intensity ratio of 4:1 is not observed in the spectrum. In the case of iron, the d⁶ excited state will further split in energy due to d-d electron repulsion (Figure 2c). This splitting is given by the right-hand (high-field) side of the d⁶ Tanabe–Sugano diagram and can be mapped onto a theoretical simulation of an L-edge spectrum (Figure 2d). Other factors such as p-d electron repulsion and spin-orbit coupling of the 2p and 3d electrons must also be considered to fully simulate the data. For a ferric system, all of these effects result in 252 initial states and 1260 possible final states that together comprise the final L-edge spectrum (Figure 2e). Despite all of these possible states, it has been established that in a low-spin ferric system, the lowest-energy peak is due to a transition into the t₂g hole, and the more intense peak at higher energy (~3.5 eV above it) is due to transitions into the unoccupied eg orbitals. Feature mixing In most systems, bonding between a ligand and a metal atom can be thought of in terms of metal-ligand covalent bonds, where the occupied ligand orbitals donate some electron density to the metal. This is commonly known as ligand-to-metal charge transfer or LMCT. In some cases, low-lying unoccupied ligand orbitals (π*) can receive back-donation (or backbonding) from the occupied metal orbitals. This has the opposite effect on the system, resulting in metal-to-ligand charge transfer, MLCT, and commonly appears as an additional L-edge spectral feature.
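The basic two-peak structure described above lends itself to a simple numerical illustration. The sketch below is not from the article: it models a Cu(II)-like L-edge as a sum of two Lorentzian lines, with peak positions taken from the approximate values quoted above and an assumed statistical 2:1 L₃:L₂ area ratio (from the 4:2 degeneracy of the J=3/2 and J=1/2 core-hole states); the widths are invented, and ratios in real spectra come from fits, not from these numbers.

```python
import numpy as np

def lorentzian(E, E0, gamma, area):
    """Lorentzian line of full width gamma, normalised to the given area."""
    return area * (gamma / (2 * np.pi)) / ((E - E0) ** 2 + (gamma / 2) ** 2)

# Illustrative Cu(II)-like parameters (assumed, not fitted):
E = np.linspace(920, 960, 801)                 # photon energy grid, eV
spectrum = (lorentzian(E, 930.0, 1.0, 2.0)     # L3-edge, J=3/2 core hole
            + lorentzian(E, 950.0, 1.2, 1.0))  # L2-edge, J=1/2 core hole

print(f"strongest absorption near {E[np.argmax(spectrum)]:.1f} eV")
```

A third Lorentzian added to this sum would mimic the extra backbonding (π*) feature discussed next.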
An example of this feature occurs in low-spin ferric [Fe(CN)₆]³⁻, since CN⁻ is a ligand that can have backbonding. While backbonding is important in the initial state, it would only warrant a small feature in the L-edge spectrum. In fact, it is in the final state where the backbonding π* orbitals are allowed to mix with the very intense eg transition, thus borrowing intensity and resulting in the final dramatic three-peak spectrum (Figure 3 and Figure 4). Model construction X-ray absorption spectroscopy (XAS), like other spectroscopies, looks at the excited state to infer information about the ground state. To make a quantitative assignment, L-edge data is fitted using a valence bond configuration interaction (VBCI) model where LMCT and MLCT are applied as needed to successfully simulate the observed spectral features. These simulations are then further compared to density functional theory (DFT) calculations to arrive at a final interpretation of the data and an accurate description of the electronic structure of the complex (Figure 4). In the case of the iron L-edge, the excited-state mixing of the metal eg orbitals into the ligand π* makes this method a direct and very sensitive probe of backbonding. See also Metal K-edge Ligand K-edge Extended X-ray absorption fine structure References X-ray absorption spectroscopy
Metal L-edge
Chemistry,Materials_science,Engineering
1,134
6,815,943
https://en.wikipedia.org/wiki/Radical-nucleophilic%20aromatic%20substitution
Radical-nucleophilic aromatic substitution or SRN1 in organic chemistry is a type of substitution reaction in which a certain substituent on an aromatic compound is replaced by a nucleophile through an intermediary free radical species. The substituent X is a halide, and nucleophiles can be sodium amide, an alkoxide or a carbon nucleophile such as an enolate. In contrast to regular nucleophilic aromatic substitution, deactivating groups on the arene are not required. This reaction type was discovered in 1970 by Bunnett and Kim, and the abbreviation SRN1 stands for substitution radical-nucleophilic unimolecular, as it shares properties with an aliphatic SN1 reaction. An example of this reaction type is the Sandmeyer reaction. Reaction mechanism In this radical substitution the aryl halide 1 accepts an electron from a radical initiator, forming a radical anion 2. This intermediate collapses into an aryl radical 3 and a halide anion. The aryl radical reacts with the nucleophile 4 to give a new radical anion 5, which goes on to form the substituted product by transferring its electron to a new aryl halide in the chain propagation. Alternatively, the aryl radical can abstract a hydrogen atom from 7, forming the arene 8 in a chain termination reaction. The involvement of a radical intermediate in a new type of nucleophilic aromatic substitution was invoked when the product distribution was compared between a certain aromatic chloride and an aromatic iodide in reaction with potassium amide. The chloride reaction proceeds through a classical aryne intermediate: The isomers 1a and 1b form the same aryne 2, which continues to react to give the anilines 3a and 3b in a 1 to 1.5 ratio. Clear-cut cine-substitution would give a 1:1 ratio, but additional steric and electronic factors come into play as well. Replacing chlorine by iodine in the 1,2,4-trimethylbenzene moiety drastically changes the product distribution: it now resembles ipso-substitution, with 1a forming preferentially 3a and 1b forming 3b. Radical scavengers suppress ipso-substitution in favor of cine-substitution, and the addition of potassium metal as an electron donor and radical initiator does exactly the opposite. See also Birch reduction Nucleophilic aromatic substitution References Free radical reactions Reaction mechanisms
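To make the chain character of this mechanism concrete, here is a toy kinetic sketch, illustrative only and not from the article: initiation slowly generates chain carriers, each propagation cycle converts one aryl halide into product while regenerating the carrier, and a scavenger destroys carriers. All lumped species and rate constants are invented; the point is only the qualitative behaviour that a radical scavenger suppresses the chain substitution, consistent with the scavenger experiments described above.

```python
def simulate(S0, k_i=1e-3, k_p=50.0, k_s=500.0, dt=1e-3, t_end=50.0):
    """Toy SRN1 chain with invented rate constants.
    ArX: aryl halide, C: chain carrier (radical/radical anion, lumped),
    P: substitution product, S: radical scavenger."""
    ArX, C, P, S = 1.0, 0.0, 0.0, S0
    for _ in range(int(t_end / dt)):
        init = k_i * ArX       # slow electron-transfer initiation
        prop = k_p * C * ArX   # one cycle: ArX consumed, P formed, C kept
        scav = k_s * C * S     # scavenger removes chain carriers
        ArX += -(init + prop) * dt
        C   += (init - scav) * dt
        P   += prop * dt
        S   += -scav * dt
    return P

print(f"conversion without scavenger: {simulate(0.00):.2f}")
print(f"conversion with scavenger:    {simulate(0.05):.2f}")
```

Run as written, the untreated system goes essentially to completion while the scavenged one stalls at low conversion, mirroring how scavengers divert the reaction away from the radical-chain (ipso) pathway.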
Radical-nucleophilic aromatic substitution
Chemistry
507
11,524,295
https://en.wikipedia.org/wiki/Tetrahalomethane
Tetrahalomethanes are fully halogenated methane derivatives of general formula CFₖClₗBrₘIₙAtₚ, where k, l, m, n, and p are non-negative integers with k + l + m + n + p = 4. Tetrahalomethanes are on the border of inorganic and organic chemistry, thus they can be assigned both inorganic and organic names by IUPAC: tetrafluoromethane - carbon tetrafluoride, tetraiodomethane - carbon tetraiodide, dichlorodifluoromethane - carbon dichloride difluoride. Each halogen (F, Cl, Br, I, At) forms a corresponding halomethane, but their stability decreases in the order CF₄ > CCl₄ > CBr₄ > CI₄, from the exceptionally stable gaseous tetrafluoromethane (bond energy 515 kJ·mol⁻¹) to solid tetraiodomethane, in line with decreasing carbon-halogen bond energy. Many mixed halomethanes are also known, such as CBrClF₂. Uses Fluorine-, chlorine-, and sometimes bromine-substituted halomethanes were used as refrigerants, commonly known as CFCs (chlorofluorocarbons). See also Monohalomethane Dihalomethane Trihalomethane Inorganic carbon compounds Nonmetal halides
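As a combinatorial aside (not from the article), the general formula fixes exactly four halogen positions, so the possible compositions can be enumerated mechanically; counting multisets of size four drawn from the five halogens gives C(8,4) = 70. A minimal sketch:

```python
from itertools import combinations_with_replacement

halogens = ["F", "Cl", "Br", "I", "At"]

# A tetrahalomethane is a multiset of four halogens on one carbon,
# i.e. exponents with k + l + m + n + p = 4 in CF_k Cl_l Br_m I_n At_p.
compositions = list(combinations_with_replacement(halogens, 4))
print(len(compositions))                 # 70 distinct compositions

def formula(combo):
    """Render one composition as a condensed formula string."""
    parts = ["C"]
    for h in halogens:                   # conventional F, Cl, Br, I, At order
        n = combo.count(h)
        if n:
            parts.append(h + (str(n) if n > 1 else ""))
    return "".join(parts)

print(formula(("F", "F", "Cl", "Br")))   # -> CF2ClBr
```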
Tetrahalomethane
Chemistry
269
3,131,507
https://en.wikipedia.org/wiki/RNA-binding%20protein
RNA-binding proteins (often abbreviated as RBPs) are proteins that bind to double- or single-stranded RNA in cells and participate in forming ribonucleoprotein complexes. RBPs contain various structural motifs, such as the RNA recognition motif (RRM), dsRNA-binding domain, zinc finger and others. They are cytoplasmic and nuclear proteins. However, since most mature RNA is exported from the nucleus relatively quickly, most RBPs in the nucleus exist as complexes of protein and pre-mRNA called heterogeneous ribonucleoprotein particles (hnRNPs). RBPs have crucial roles in various cellular processes such as cellular function, transport and localization. They especially play a major role in post-transcriptional control of RNAs, including splicing, polyadenylation, mRNA stabilization, mRNA localization and translation. Eukaryotic cells express diverse RBPs with unique RNA-binding activity and protein–protein interactions. According to the Eukaryotic RBP Database (EuRBPDB), there are 2961 genes encoding RBPs in humans. During evolution, the diversity of RBPs greatly increased with the increase in the number of introns. Diversity enabled eukaryotic cells to utilize RNA exons in various arrangements, giving rise to a unique RNP (ribonucleoprotein) for each RNA. Although RBPs have a crucial role in post-transcriptional regulation of gene expression, relatively few RBPs have been studied systematically. It has now become clear that RNA–RBP interactions play important roles in many biological processes among organisms. Structure Many RBPs have modular structures and are composed of multiple repeats of just a few specific basic domains that often have limited sequences. Different RBPs contain these sequences arranged in varying combinations. A specific protein's recognition of a specific RNA has evolved through the rearrangement of these few basic domains. Each basic domain recognizes RNA, but many of these proteins require multiple copies of one of the many common domains to function. Diversity As nuclear RNA emerges from RNA polymerase, RNA transcripts are immediately covered with RNA-binding proteins that regulate every aspect of RNA metabolism and function including RNA biogenesis, maturation, transport, cellular localization and stability. All RBPs bind RNA, however they do so with different RNA-sequence specificities and affinities, which allows the RBPs to be as diverse as their targets and functions. These targets include mRNA, which codes for proteins, as well as a number of functional non-coding RNAs. Such ncRNAs almost always function as ribonucleoprotein complexes and not as naked RNAs. These non-coding RNAs include microRNAs, small interfering RNAs (siRNA), as well as spliceosomal small nuclear RNAs (snRNA). Function RNA processing and modification Alternative splicing Alternative splicing is a mechanism by which different forms of mature mRNAs (messenger RNAs) are generated from the same gene. It is a regulatory mechanism by which variations in the incorporation of exons into mRNA lead to the production of more than one related protein, thus expanding possible genomic outputs. RBPs function extensively in the regulation of this process. Some binding proteins, such as the neuron-specific RNA-binding protein NOVA1, control the alternative splicing of a subset of hnRNA by recognizing and binding to a specific sequence in the RNA (YCAY, where Y indicates a pyrimidine, U or C). These proteins then recruit spliceosomal proteins to this target site.
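Motif recognition of this kind is easy to illustrate computationally. The sketch below is illustrative only (the example sequence is made up); it locates YCAY elements with a regular expression in which Y, a pyrimidine, matches C or U:

```python
import re

# The NOVA1 recognition element YCAY, with Y (pyrimidine) = C or U,
# translates directly into the regular expression [CU]CA[CU].
YCAY = re.compile(r"[CU]CA[CU]")

rna = "GGUCAUAAACCACUCACGGA"   # made-up example transcript

for match in YCAY.finditer(rna):
    print(match.start(), match.group())
```

In practice it is clusters of YCAY hits, rather than single matches, that mark a functional NOVA binding region, which is why genome-wide screens typically score motif density rather than individual occurrences.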
SR proteins are also well known for their role in alternative splicing through the recruitment of snRNPs that form the spliceosome, namely U1 snRNP and U2AF snRNP. However, RBPs are also part of the spliceosome itself. The spliceosome is a complex of snRNA and protein subunits and acts as the mechanical agent that removes introns and ligates the flanking exons. Apart from the core spliceosome complex, RBPs also bind to the sites of cis-acting RNA elements that influence exon inclusion or exclusion during splicing. These sites are referred to as exonic splicing enhancers (ESEs), exonic splicing silencers (ESSs), intronic splicing enhancers (ISEs) and intronic splicing silencers (ISSs), and depending on their site of binding, RBPs work as splicing silencers or enhancers. RNA editing The most extensively studied form of RNA editing involves the ADAR protein. This protein functions through post-transcriptional modification of mRNA transcripts by changing the nucleotide content of the RNA. This is done through the conversion of adenosine to inosine in an enzymatic reaction catalyzed by ADAR. This process effectively changes the RNA sequence from that encoded by the genome and extends the diversity of the gene products. The majority of RNA editing occurs on non-coding regions of RNA; however, some protein-encoding RNA transcripts have been shown to be subject to editing, resulting in a difference in their protein's amino acid sequence. An example of this is the glutamate receptor mRNA, where glutamine is converted to arginine, leading to a change in the functionality of the protein. Polyadenylation Polyadenylation is the addition of a "tail" of adenylate residues to an RNA transcript about 20 bases downstream of the AAUAAA sequence within the 3' untranslated region. Polyadenylation of mRNA has a strong effect on its nuclear transport, translation efficiency, and stability. All of these, as well as the process of polyadenylation, depend on the binding of specific RBPs. All eukaryotic mRNAs with few exceptions are processed to receive 3' poly(A) tails of about 200 nucleotides. One of the necessary protein complexes in this process is CPSF. CPSF binds to the AAUAAA signal sequence and, together with another protein called poly(A)-binding protein, recruits and stimulates the activity of poly(A) polymerase. Poly(A) polymerase is inactive on its own and requires the binding of these other proteins to function properly. Export After processing is complete, mRNA needs to be transported from the cell nucleus to the cytoplasm. This is a three-step process involving the generation of a cargo-carrier complex in the nucleus, followed by translocation of the complex through the nuclear pore complex, and finally release of the cargo into the cytoplasm. The carrier is then subsequently recycled. The TAP/NXF1:p15 heterodimer is thought to be the key player in mRNA export. Over-expression of TAP in Xenopus laevis frogs increases the export of transcripts that are otherwise inefficiently exported. However, TAP needs adaptor proteins because it is unable to interact directly with mRNA. The Aly/REF protein interacts with and binds to the mRNA, recruiting TAP. mRNA localization mRNA localization is critical for the regulation of gene expression by allowing spatially regulated protein production. Through mRNA localization, proteins are translated at their intended target site in the cell.
This is especially important during early development, when rapid cell cleavages give different cells various combinations of mRNA, which can then lead to drastically different cell fates. RBPs are critical in the localization of this mRNA, which ensures that proteins are only translated in their intended regions. One of these proteins is ZBP1. ZBP1 binds to β-actin mRNA at the site of transcription and moves with the mRNA into the cytoplasm. It then localizes this mRNA to the lamella region of several asymmetric cell types, where it can then be translated. In 2008 it was proposed that FMRP was involved in the stimulus-induced localization of several dendritic mRNAs in the neuronal dendrites of cultured hippocampal neurons. More recent studies of FMRP-bound RNAs present in microdissected dendrites of CA1 hippocampal neurons revealed no changes in localization in wild-type versus FMRP-null mouse brains. Translation Translational regulation provides a rapid mechanism to control gene expression. Rather than gene expression being controlled at the transcriptional level, the mRNA is already transcribed and the recruitment of ribosomes is controlled instead. This allows rapid generation of proteins when a signal activates translation. ZBP1, in addition to its role in the localization of β-actin mRNA, is also involved in the translational repression of β-actin mRNA by blocking translation initiation. ZBP1 must be removed from the mRNA to allow the ribosome to properly bind and translation to begin. Protein–RNA interactions RNA-binding proteins exhibit highly specific recognition of their RNA targets by recognizing their sequences, structures, motifs and RNA modifications. Specific binding of the RNA-binding proteins allows them to distinguish their targets and regulate a variety of cellular functions via control of the generation, maturation, and lifespan of the RNA transcript. This interaction begins during transcription, as some RBPs remain bound to RNA until degradation whereas others only transiently bind to RNA to regulate RNA splicing, processing, transport, and localization. Cross-linking immunoprecipitation (CLIP) methods are used to stringently identify direct RNA-binding sites of RNA-binding proteins in a variety of tissues and organisms. In this section, three classes of the most widely studied RNA-binding domains (RNA-recognition motif, double-stranded RNA-binding motif, zinc-finger motif) will be discussed. RNA-recognition motif (RRM) The RNA recognition motif, which is the most common RNA-binding motif, is a small protein domain of 75–85 amino acids that forms a four-stranded β-sheet packed against two α-helices. This recognition motif exerts its role in numerous cellular functions, especially in mRNA/rRNA processing, splicing, translation regulation, RNA export, and RNA stability. Ten structures of an RRM have been identified through NMR spectroscopy and X-ray crystallography. These structures illustrate the intricacy of protein–RNA recognition by the RRM, as it entails RNA–RNA and protein–protein interactions in addition to protein–RNA interactions. Despite their complexity, all ten structures have some common features. In all of them, the four-stranded β-sheet that forms the RRM's main protein surface was found to interact with the RNA, usually contacting two or three nucleotides in a specific manner. In addition, strong RNA-binding affinity and specificity are achieved through an interaction between the inter-domain linker and the RNA and between RRMs themselves.
This plasticity of the RRM explains why the RRM is the most abundant domain and why it plays an important role in various biological functions. Double-stranded RNA-binding motif The double-stranded RNA-binding motif (dsRM, dsRBD), a 70–75 amino-acid domain, plays a critical role in RNA processing, RNA localization, RNA interference, RNA editing, and translational repression. All three structures of the domain solved as of 2005 possess unifying features that explain how dsRMs bind only to dsRNA instead of dsDNA. The dsRMs were found to interact along the RNA duplex via both α-helices and the β1-β2 loop. Moreover, all three dsRBM structures make contact with the sugar-phosphate backbone of the major groove and of one minor groove, mediated by the β1-β2 loop along with the N-terminal region of α-helix 2. This interaction is a unique adaptation to the shape of an RNA double helix, as it involves 2'-hydroxyls and phosphate oxygens. Despite the common structural features among dsRBMs, they exhibit distinct chemical frameworks, which permits specificity for a variety of RNA structures including stem-loops, internal loops, bulges or helices containing mismatches. Zinc fingers CCHH-type zinc-finger domains are the most common DNA-binding domain within the eukaryotic genome. In order to attain high sequence-specific recognition of DNA, several zinc fingers are utilized in a modular fashion. Zinc fingers exhibit a ββα protein fold in which a β-hairpin and an α-helix are joined via a zinc ion. Furthermore, the interaction between the protein side-chains of the α-helix and the DNA bases in the major groove allows for DNA-sequence-specific recognition. Despite its wide recognition of DNA, there have been recent discoveries that zinc fingers also have the ability to recognize RNA. In addition to CCHH zinc fingers, CCCH zinc fingers were recently discovered to employ sequence-specific recognition of single-stranded RNA through intermolecular hydrogen bonding with the Watson-Crick edges of the RNA bases. CCHH-type zinc fingers employ two modes of RNA binding. In the first, the zinc fingers exert a non-specific interaction with the backbone of a double helix, whereas the second mode allows zinc fingers to specifically recognize the individual bases that bulge out. Differing from the CCHH-type, the CCCH-type zinc finger displays another mode of RNA binding, in which single-stranded RNA is identified in a sequence-specific manner. Overall, zinc fingers can directly recognize DNA via binding to dsDNA sequences and RNA via binding to ssRNA sequences. Role in embryonic development RNA-binding proteins' transcriptional and post-transcriptional regulation of RNA has a role in regulating the patterns of gene expression during development. Extensive research on the nematode C. elegans has identified RNA-binding proteins as essential factors during germline and early embryonic development. Their specific function involves the development of somatic tissues (neurons, hypodermis, muscles and excretory cells) as well as providing timing cues for developmental events. Nevertheless, it is exceptionally challenging to discover the mechanism behind RBPs' function in development due to the difficulty in identifying their RNA targets. This is because most RBPs usually have multiple RNA targets. However, it is indisputable that RBPs exert critical control in regulating developmental pathways in a concerted manner.
Germline development In Drosophila melanogaster, Elav, Sxl and tra-2 are genes encoding RNA-binding proteins that are critical in early sex determination and the maintenance of the somatic sexual state. These genes exert their effects at the post-transcriptional level by regulating sex-specific splicing in Drosophila. Sxl exerts positive regulation of the feminizing gene tra to produce a functional tra mRNA in females. In C. elegans, RNA-binding proteins including FOG-1, MOG-1/-4/-5 and RNP-4 regulate germline and somatic sex determination. Furthermore, several RBPs such as GLD-1, GLD-3, DAZ-1, PGL-1 and OMA-1/-2 exert their regulatory functions during meiotic prophase progression, gametogenesis, and oocyte maturation. Somatic development In addition to RBPs' functions in germline development, post-transcriptional control also plays a significant role in somatic development. Differing from RBPs that are involved in germline and early embryo development, RBPs functioning in somatic development regulate tissue-specific alternative splicing of their mRNA targets. For instance, the RRM-domain-containing proteins MEC-8 and UNC-75 localize to regions of the hypodermis and nervous system, respectively. Furthermore, another RRM-containing RBP, EXC-7, has been shown to localize in embryonic excretory canal cells and throughout the nervous system during somatic development. Neuronal development ZBP1 was shown to regulate dendritogenesis (dendrite formation) in hippocampal neurons. Other RNA-binding proteins involved in dendrite formation are Pumilio and Nanos, FMRP, CPEB and Staufen 1. Role in cancer RBPs are emerging as crucial players in tumor development. Hundreds of RBPs are markedly dysregulated across human cancers and show predominant downregulation in tumors relative to normal tissues. Many RBPs are differentially expressed in different cancer types, for example KHDRBS1 (Sam68), ELAVL1 (HuR), FXR1 and UHMK1. For some RBPs, the change in expression is related to copy number variations (CNVs), for example CNV gains of BYSL in colorectal cancer cells and of ESRP1 and CELF3 in breast cancer, RBM24 in liver cancer, and IGF2BP2 and IGF2BP3 in lung cancer, or CNV losses of KHDRBS2 in lung cancer. Some expression changes are caused by protein-affecting mutations in these RBPs, for example NSUN6, ZC3H13, ELAC1, RBMS3, ZGPAT, SF3B1, SRSF2, RBM10, U2AF1, PPRC1, RBMXL1, HNRNPCL1, etc. Several studies have related this change in expression of RBPs to aberrant alternative splicing in cancer. Current research As RNA-binding proteins exert significant control over numerous cellular functions, they have been a popular area of investigation for many researchers. Due to their importance in the biological field, numerous discoveries regarding RNA-binding proteins' potentials have been recently unveiled. Recent developments in the experimental identification of RNA-binding proteins have extended the number of known RNA-binding proteins significantly. The RNA-binding protein Sam68 controls the spatial and temporal compartmentalization of RNA metabolism to attain proper synaptic function in dendrites. Loss of Sam68 results in abnormal posttranscriptional regulation and ultimately leads to neurological disorders such as fragile X-associated tremor/ataxia syndrome. Sam68 was found to interact with the mRNA encoding β-actin, which regulates the synaptic formation of the dendritic spines with its cytoskeletal components. Therefore, Sam68 plays a critical role in regulating synapse number via control of postsynaptic β-actin mRNA metabolism.
The neuron-specific CELF-family RNA-binding protein UNC-75 specifically binds to the UUGUUGUGUUGU mRNA stretch via its three RNA recognition motifs for exon 7a selection in C. elegans neuronal cells. As exon 7a is skipped due to its weak splice sites in non-neuronal cells, UNC-75 was found to specifically activate splicing between exon 7a and exon 8 only in neuronal cells. The cold-inducible RNA-binding protein CIRBP plays a role in controlling the cellular response upon confronting a variety of cellular stresses, including short-wavelength ultraviolet light, hypoxia, and hypothermia. This research yielded potential implications for the association of disease states with inflammation. The serine-arginine family RNA-binding protein Slr1 was found to exert control over polarized growth in Candida albicans. Slr1 mutations result in decreased filamentation and reduce damage to epithelial and endothelial cells, which leads to an extended survival rate in mice compared to the Slr1 wild-type strains. Therefore, this research reveals that the SR-like protein Slr1 plays a role in instigating hyphal formation and virulence in C. albicans. See also DNA-binding protein RNA-binding protein database Ribonucleoprotein External links starBase platform: a platform for decoding binding sites of RNA binding proteins (RBPs) from large-scale CLIP-Seq (HITS-CLIP, PAR-CLIP, iCLIP, CLASH) datasets. RBPDB database: a database of RNA binding proteins. oRNAment: a database of putative RBP binding site instances in both coding and non-coding RNA in various species. ATtRACT database: a database of RNA binding proteins and associated motifs. SpliceAid-F: a database of hand-curated human RNA binding proteins. RsiteDB: RNA binding site database. SPOT-Seq-RNA: Template-based prediction of RNA binding proteins and their complex structures. SPOT-Struct-RNA: RNA binding proteins prediction from 3D structures. ENCODE Project: A collection of genomic datasets (i.e. RNA Bind-n-seq, eCLIP, RBP targeted shRNA RNA-seq) for RBPs. RBP Image Database: Images showing the cellular localization of RBPs in cells. RBPSpot Software: A deep-learning based, highly accurate software to detect RBP-RNA interaction. It also provides a module to build new RBP-RNA interaction models. References Cell biology
RNA-binding protein
Biology
4,324
1,500,452
https://en.wikipedia.org/wiki/Steering%20pole
A steering pole is a light spar extending from the bow of a straight-deck ship which aids the wheelsman in steering. Ancient literature indicates that steering poles have long been part of boat construction; they are referred to in ancient texts such as the Epic of Gilgamesh. References Shipbuilding
Steering pole
Engineering
59
39,412,827
https://en.wikipedia.org/wiki/Nidec%20Leroy-Somer
Nidec Leroy-Somer is a French company based in Angoulême, Charente, which mainly manufactures electric motors. It was established in 1919 by Marcellin Leroy. The firm has since expanded into the Czech Republic, Hungary, Poland, Romania, China and India, and has almost 10,000 employees. On January 31, 2017, Leroy-Somer became part of the Japanese Nidec Group. References External links Manufacturing companies established in 1919 Engineering companies of France Industrial machine manufacturers French companies established in 1919 French brands 2017 mergers and acquisitions Electric motor manufacturers
Nidec Leroy-Somer
Engineering
115
36,695,492
https://en.wikipedia.org/wiki/Architech
The Architectural Association of Universiti Teknologi Malaysia ( or PSUTM), more commonly known as Architech, is the official association for students of architecture in UTM. It was formed in the 70s during the Kuala Lumpur days, was one of the earliest students' associations to be registered under the Office of Students' Affairs (HEMA, previously HEP), and is acknowledged as one of the most active students' associations in UTM. It resides within the Department of Architecture, Universiti Teknologi Malaysia, under the Faculty of Built Environment. Role The main role of Architech is to manage the welfare of the UTM architecture students, organize events of various forms for the benefit of its members, become the voice of the students in the school, and ultimately ensure the advancement of non-academic qualities amongst its members. Logo The Architech logo was the result of a design competition held by the association in 1992, initiated by the president at the time, Zulhisham, and Che Wan Ahmad Faizal. At the time, the name Architech had not come into being, and the logo competition was actually for PSUTM. The prize advertised was RM200; however, participation was poor and uninspiring (fewer than 20 entries), hence no winner was picked. Instead, Che Wan Ahmad Faizal then approached Fauzee Nasir to design a logo worthy of the association. Using Aldus FreeHand 2.0 on a classic Mac, Fauzee designed the logo, incorporating the name Architech for the first time. This was the first instance of the name in use. The logo and the name Architech were accepted by the general population, and remain in use until now. Description The logo utilizes simple and straightforward typography, using Times New Roman, beginning with a capital A and with italic type applied to the last four letters. The letters "Archi" will normally be black on a white/light background, or white on a black/dark background. The letters "tech" will normally be red, or gray in monochrome print. Rights and Ownership The Architech logo was designed by Fauzee Nasir, who still owns it today. In a deal struck with the association, the association agreed not to pay the RM200 prize money to Fauzee Nasir, as the design was not part of the competition. Hence, the designer retains ownership and rights to the logo. However, in a mutual agreement, the designer gave the association the right to use and reproduce the logo as long as it is not for commercial use. Any commercial use must have written permission from the designer. Student Composition Architech members consist of the entire population of architecture students in UTM, as students enrolled in the programme are automatically registered under it unless otherwise requested. Previously it also included students of landscape architecture, until they formed their own student society in the late 1990s. When the diploma programme was shifted to the UTM KL campus under the Razak School, the diploma students formed their own society called ARCO, which was intended as a sister society to Architech. However, due to logistical problems, especially since both are now under two different schools, ARCO became an independent body, but maintained close relations with Architech. Structure Top Committee Architech is led by its president, democratically elected amongst its members for a term of one year. The president is assisted by the vice president, secretary and treasurer, all of whom are democratically elected as well.
Together, they form their government, electing individuals of various skills into official positions where those skills can be harnessed for the benefit of the society. These positions evolve and vary throughout the years to meet changing needs. Some examples of the positions over the years are: Technical - in charge of management of technical aspects, such as asset, equipment and facilities management. Graphics - in charge of spearheading the graphical movement and activities. Sports - in charge of sporting events, be they competitive or recreational. Insaniah - in charge of spiritual development amongst members, normally for the Muslim community. Website - in charge of development and maintenance of Architech's online presence (Blog, Forum, Facebook etc.) Publications Architext Architext was initiated as a periodical newsletter for Architech. It started around 1997, led by Roshida Majid and advised by Dr. Mohd Tajuddin Mohd Rasdi. Copies of the first two issues were lost; #3 is the earliest remaining copy today. It was distributed free, bilingually in Bahasa Malaysia and English, featured several articles relating to architectural studies, listed events and activities, and included a cartoon sketch. Architext was never resumed and was presumably forgotten, until recently some of its original members began discussing bringing it back. Design Folio The Design Folio is a publication collecting selected students' thesis works from a particular year. It is published with the notion of showcasing works based on a specific theme for each publication. Students' works are then selected based on the theme, ideally spanning between three and five years of thesis works. However, in its two editions to date, the selection was limited to a group of students in a particular year. The publication is intended for commercial publication. The editorial team consists entirely of students from the thesis group, advised and consulted by the Thesis Coordinators. To date, there have been two successful publications: the first in 2005 and the second in 2007. Another manuscript was completed in 2008; however, due to funding limitations, it did not get published. Design Thesis Synopsis 1997 Architectural Design Thesis. Anthem The association has adopted a song written by one of the students in the late 80s, Zuhairi Harun. The song was titled 'Warisan', touching on the struggle of heritage and identity of architecture in Malaysia. The song has gone through a number of renditions, including a quickening of tempo and changes of melody; however, the spirit of the song remains. Activities Architectural Workshop The architectural workshop is an annual national gathering of architectural students in Malaysia. It began in 1987 in UTM, using the format of a jamboree, organised by Persatuan Arkitek (Architech) in Kuala Lumpur. Concurrently during the workshop, the PAM-Education Liaison Meeting would also take place, to take advantage of the gathering of students from various schools in one location. Architech was the pioneer of the architectural workshop, which exists until today. The last architectural workshop hosted by Architech was in June 2011, using the theme "Terang". It was the biggest architectural workshop ever hosted by Architech, involving over 24 schools of architecture from all over Malaysia. It was officiated by PAM Vice President Ar. Saifuddin Ahmad. One of the outcomes of the module was street furniture, which became a feature in Jalan Wong Ah Fook, Johor Bahru.
Over 20 sculptures were displayed and used by the public for a period of several days. At the moment it is undecided when Architech will next host the workshop, but it is estimated to be around 2016. References External links Official website Department of Architecture, Universiti Teknologi Malaysia Faculty of Built Environment Architectural education
Architech
Engineering
1,476
11,989,095
https://en.wikipedia.org/wiki/Latent%20semantic%20mapping
Latent semantic mapping (LSM) is a data-driven framework to model globally meaningful relationships implicit in large volumes of (often textual) data. It is a generalization of latent semantic analysis. In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents. LSM was derived from earlier work on latent semantic analysis. There are three main characteristics of latent semantic analysis: discrete entities, usually in the form of words and documents, are mapped onto continuous vectors; the mapping involves a form of global correlation pattern; and dimensionality reduction is an important aspect of the analysis process. These constitute generic properties, and have been identified as potentially useful in a variety of different contexts. This usefulness has encouraged great interest in LSM. The intended product of latent semantic mapping is a data-driven framework for modeling relationships in large volumes of data. Mac OS X v10.5 and later includes a framework implementing latent semantic mapping. See also Latent semantic analysis Notes References Information retrieval techniques
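The three characteristics above can all be seen in a minimal latent-semantic-analysis computation. The sketch below is illustrative only (the toy term-document matrix is invented, not from the text): a truncated singular value decomposition maps terms onto continuous vectors, captures global co-occurrence structure, and reduces dimensionality in one step.

```python
import numpy as np

# Toy term-document count matrix: rows = terms, columns = documents.
terms = ["ship", "boat", "ocean", "wood", "tree"]
X = np.array([[1, 0, 0, 0, 0],    # ship  appears in doc 0
              [0, 1, 0, 0, 0],    # boat  appears in doc 1
              [1, 1, 0, 0, 0],    # ocean appears in docs 0 and 1
              [0, 0, 1, 1, 0],    # wood  appears in docs 2 and 3
              [0, 0, 0, 0, 1]],   # tree  appears in doc 4
             dtype=float)

# Truncated SVD: keep k = 2 latent dimensions (dimensionality reduction).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]      # discrete terms -> continuous vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Global correlation: "ship" and "boat" share no document, yet both
# co-occur with "ocean", so their latent vectors end up aligned.
print(cosine(term_vecs[0], term_vecs[1]))   # high (related terms)
print(cosine(term_vecs[0], term_vecs[3]))   # ~0  (unrelated terms)
```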
Latent semantic mapping
Technology
220
2,033,002
https://en.wikipedia.org/wiki/Floral%20design
Floral design or flower arrangement is the art of using plant material and flowers to create an eye-catching and balanced composition or display. Evidence of refined floral design is found as far back as the culture of ancient Egypt. Floral designs, called arrangements, incorporate the five elements and seven principles of floral design. Floral design is considered a section of floristry, but floral design pertains only to the design and creation of arrangements. It does not include the marketing, merchandising, caring of, growing of, or delivery of flowers. Common flower arrangements in floral design include vase arrangements, wreaths, nosegays, garlands, festoons, boutonnieres, corsages, and bouquets. History The Eastern, Western, and European styles have all influenced the commercial floral design industry as it is today. Western design historically is characterized by symmetrical, asymmetrical, horizontal, and vertical styles of arrangement. The history of flower arrangement dates back to Ancient Egypt, and has gradually evolved over time. Ancient civilizations Egyptians were among the first to place lotus flowers and buds in vases, nearly 4,000 years ago. Egyptians also created bouquets, wreaths, garlands, headwear, and collars. These arrangements often used lotus and papyrus, as they were seen as plants sacred to the goddess Isis. Ancient Greeks and Romans also created garlands and wreaths to wear. Greeks and Romans also created cornucopias full of fruits and vegetables as religious offerings. Asia Chinese and Korean arrangements were, and still are, traditionally based upon the Confucian idea of reflection, the Buddhist principle of preservation, and Taoist symbolism. The arrangements of the Chinese and Koreans often use containers of varying height and shape, and use natural elements, such as rocks. Ikebana is the Japanese style of floral design, and incorporates the three main line placements that correspond with heaven, humans, and the earth. Europe During the Renaissance, pieces often had a degree of symbolism and used bright, vivid, and contrasting triadic colors. Designs were symmetrical and combined fresh and dried material, as well as fruits and vegetables. These arrangements were often triangular, arching, or ellipse-shaped. In French design, arrangements often used soft pastel colors. Arrangements were often light and airy, and stressed the individual beauty of each flower itself, rather than the entire arrangement. Pieces were semi-ovoid, soft and airy, had a feminine design, were symmetrical, and had no focal point. They accentuated rhythm with curves, lines, and flourishes of plant material. English design drew from the vast variety of plant materials that were available in estates and the countryside. Most arrangements during the various periods were formal pieces, generally triangular in shape, and symmetrical. The Americas In the Americas, during the Colonial Period (1607–1699), arrangements were made using gathered wildflowers, grasses, and seed pods. These arrangements reflected a simplistic lifestyle with few luxuries; a reflection of the first colonists to arrive there. American arrangements then evolved from numerous influences, primarily European. As such, American pieces began to reflect the sophistication, symmetry, and shapes of European design ideals of the time. Modern day In the mid-20th century, flower arranging and floral design came to be seen as an art form.
While modern floral designers and arrangers are still inspired by naturalistic 19th-century designs, modern designers tend to want to break free from the rigid patterns and restrictions of past period designs. This led to the creation of abstract designs in modern floral arrangement. Other modern designers, however, did not feel inspired by or drawn to abstract designs. As such, these designers began to create new design styles. Today's floral arrangements are born out of these two factors. Modern arrangements range from zero abstraction, in which pieces and components are untreated and organized naturally, to total abstraction, which totally disregards patterns and rules. Today, there are many styles of floral design including the Botanical Style, the Garden Style (Hand Tied, Compote or Armature), the Crescent Corsage, the Nosegay Corsage, Pot au Fleur, the Inverted "T", Parallel Systems, Western Line, the Hedgerow Design, Mille de Fleur, and Formal Linear. Design Principles When creating flower arrangements, there are generally seven principles that floral designers must incorporate into their arrangement to create a flattering and appealing piece. These seven principles include: Proportion: the relationship between the sizes of elements used to create the design (e.g., flowers, foliage, vase, accessories). Scale: the relationship between the overall size of the design and the setting it is placed in. Balance: comprises physical balance and visual balance. Physical balance is the distribution of materials and weight across the arrangement; the arrangement should be stable and not at risk of falling over. Visual balance is the poise an arrangement contains upon first glance. There are three types of visual balance: symmetrical, asymmetrical, and open. Focal point: the main feature of the design and/or the first thing that attracts the viewer's eye. Rhythm: the visual flow of the arrangement. This element should encourage the viewer's gaze to move inward, outward, up, and down while looking at the arrangement. Achieved through colors, shapes, lines, textures, and spaces. Harmony: the pleasing combination of colors, material, and texture used in the arrangement. Unity: everything is placed with purpose; achieved when the other six principles are in order. It is important to keep in mind that not every arrangement will use all seven principles of design. For example, French Baroque and Rococo style arrangements do not include a focal point. Rococo designs also disregarded proportion; they were to be much taller than they were wide. Some traditional designs disregarded space (and therefore a part of rhythm). Modern abstract designers may disregard the seven principles entirely. Elements In addition to the seven principles, there are also five elements of design a designer must keep in mind when arranging flowers. These five elements include: Line: provides the shape and structure for the design. Line also creates paths for the viewer's eye to follow when viewing the arrangement. Lines can be defined (clearly visible) or implied (suggested by changes in color, tone, and texture). Line helps build the dimensions and overall shape of the design. Color: the color of the arrangement. There are numerous color schemes, such as monochromatic, triadic, analogous, or complementary. Different color schemes provide different effects on the feel of the arrangement. Form: the height, width, and depth of the arrangement. Form also helps build the dimensions and overall shape of the design, much like Line.
Space: the spacing of flowers, foliage, and other materials. Space ensures every flower is visible, and that the design is not too clumpy, constricted, spaced out, or empty. Texture: the different textures used in an arrangement. Texture gives the arrangement diversity and interest. Texture is one way a floral designer can achieve rhythm. Textures can be smooth, wrinkled, rough, glossy, etc. Media Fresh The vast majority of the media used in floral design is fresh, or living, media. Fresh media includes flowers and foliage. Flowers Flowers used in floral design are often broken into four categories: line flowers, form flowers, mass flowers, and filler flowers. Each category serves its own purpose in achieving an element or principle of design. The four categories are listed as follows: Line flowers are tall spikes of flowers that bloom along the stem of the plant. They create the outline for an arrangement and determine the height and width of the design. They can be straight or naturally curving. Most line flowers have larger flowers at the bottom of the stem that gradually become smaller the closer they are located to the end of the spike. This creates rhythm in the design, as the eye naturally follows the progression. Examples of line flowers include snapdragons, delphiniums, liatris, gladiolus, stock, cattails, and pussywillows. Form flowers are flowers that have interesting colors, textures, and/or patterns that draw attention and stand out among the other pieces in the arrangement. They are most often used as the focal point of the arrangement. Form flowers include irises, calla lilies, anthurium, and orchids. Mass flowers consist of a single stem with one solid, rounded head at the top of the stem. They add mass and visual weight to an arrangement. Mass flowers are often inserted near the rim of the container to draw attention to the focal point, or to serve as the focal point themselves. Mass flowers are often considered the "star of the show" in an arrangement. Oftentimes, more than one type of mass flower is used to create variety and avoid monotony, keeping the arrangement from becoming visually boring. Filler flowers are composed of small "sprays" of flowers. Filler flowers are used, as the name suggests, to fill in empty spaces among mass flowers and the framework of the design. Filler flowers also add further dimension to the arrangement. Examples of filler flowers are baby's breath and statice. A flower's being defined in one category does not exclude it from other categories. For example, chrysanthemums can be considered either a mass flower or a filler flower, depending on the size and variety of the bloom. Anthuriums and orchids can be considered form flowers, as well as mass flowers. Other flowers commonly used by floral designers include Peruvian lilies, cosmos, freesias, gardenia, hyacinth, kalanchoe, larkspur, lavender, lilac, lilies, limonium, lupine, peonies, phlox, protea, ranunculus, sedum, solidago, sunflowers, tulips, and zinnias. Foliage Much like flowers, foliage can also be divided into the same four categories. Usually, foliages are meant to accent what is being done by their flower counterparts. Line foliages are effective for repeating and complementing lines established by the line flowers. This creates repetition and unity within the arrangement. Much like line flowers, they can also be straight or curved.
Examples of line foliage include bear grass, flax, ivy, and flat ferns, such as sword fern. Form foliages also have unique textures, patterns, or colors that allow them to shine through and stand out in an arrangement. Form foliages are often used to achieve the element of space. They are also used to draw the viewer's eye to the focal point. Form foliages include seeded eucalyptus, calathea, equisetum, dieffenbachia, and galax. Mass foliages have the same purpose as mass flowers: to add mass and visual weight to the arrangement. However, they are also effective in filling empty space not occupied by flowers and hiding the mechanics of the design (e.g., floral foam, pot tape, etc.). Mass foliages include leatherleaf fern and salal. Filler foliages are used as accents to create harmony and unity. Depending on the texture of the filler foliage, there can be different effects on the feel of the arrangements. Fillers like plumosa asparagus and sprengeri fern lighten and soften, whereas coarse textures of plants like huckleberry and boxwood create contrast. As with flowers, a certain type of foliage may be included in more than one category. Leatherleaf fern can be considered a mass foliage or a line foliage, and ruscus can be considered a form foliage or a line foliage. Other foliage used by floral designers today includes Italian ruscus, Israeli ruscus, dusty miller, monstera deliciosa, eucalyptus (including silver dollar, gunnii, and baby blue), various types of ferns (such as tree fern), camellia, olive branches, hypericum berries, and pittosporum. Preserved Dried materials such as bark, wood, dried flowers, dried (and often aromatic) inflorescences, leaves, leaf skeletons, and other preserved materials are common extensions of the art and media of floral design. They are of practical importance in that they last indefinitely and are independent of the seasons. These materials offer effects and associations complementary to, and contrasting with, fresh flowers and foliage. Tools To create an arrangement, a floral designer has to use a multitude of tools. In general, the most common tools are floral tape, pot tape, glue, flower frogs, cutting tools, floral foam, containers, and wire. Vases and other containers are used to hold the arrangement. They often contribute to the final look of a piece, and come in a variety of shapes and sizes to suit numerous types of projects. Floral foam is a piece of dense foam that holds moisture and keeps flowers in place. Most floral foam has a matching container that can hold the foam with nothing more than placing it in. However, floral foam can be cut into any shape, and therefore placed in any container. In recent years, there has been controversy over the environmental impact of floral foam, as well as the potential negative health effects from inhaling the powder created from unsoaked foam. Nevertheless, floral foam is still an essential tool in floral design. Cutting tools, such as floral knives, floral shears, pruners, and ribbon scissors, can be used to cut a variety of materials in floral design. Knives can be used to cut flowers or floral foam. Shears and pruners can also be used to trim and cut foliage and flowers. Ribbon scissors are used to cut ribbon and twine. Adhesive tools include floral tape, pot tape, floral adhesive (also known as cold glue), and hot glue.
Floral tape is most often used to secure flowers together or to cover the mechanics of an arrangement, especially when creating a boutonniere or corsage. Pot tape is used to create a grid pattern in vases, which helps keep flowers and foliage in place. Pot tape can also be used to secure floral foam to a container. Cold glue is used to secure fresh, living flowers together or in place for an arrangement. Hot glue is used to glue non-living media in place or together. Wire is used in floral design for a variety of purposes. It can be used to secure ribbons in place, fix broken stems, or provide strength to weak or flimsy material. Wire comes in different gauges, or sizes, which are used for different applications. Flower frogs are devices that keep flowers upright. They usually have holes to place the flowers into, or spikes to "spear" the cut end of the flower into. Education With the ever-growing interest in the natural world and flowers, the floral industry continues to grow. Educational institutes providing training in floral design have expanded to include many state universities, certified design schools, and even high schools worldwide. Schools that teach floral design courses teach techniques to arrange flowers, plant identification, foliage and flower care for both fresh and preserved media, retail floral shop practices, and how to place and receive flower orders. Most of these programs reward students with certificates or degrees in floral design, shop management, or artisanship. Floral design courses are typically cheaper than most higher education programs, and can cost anywhere from US$125 to over US$25,000. Most courses take around six to eighteen months to complete. The following list is composed of schools and organizations that offer floral design courses: Rittners School of Floral Design Texas Tech University School of Floral Design Golden West College Anne Arundel Community College New York Institute of Art & Design American Institute of Floral Designers California Polytechnic State University Mississippi State University The London Flower School Judith Blacklock Flower School Hong Kong Academy of Flower Arrangement Nobleman School of Floral Design Community Floral shops Floral shops are business establishments that create and sell floral designs. Floral shops often have a vast variety of flowers and foliage to use in creating arrangements, which can be custom ordered or pre-designed. Floral shops usually receive a majority of their business on the following holidays and events: Christmas, Valentine's Day, Administrative Professionals' Day, Mothers' Day, All Souls Day, Advent, Easter, weddings and funerals. Floral shops also cover the other aspects of floristry, including marketing, buying and selling of flowers, production, etc. Street vendors Street vendors that sell flowers and arrangements are called flower sellers. Flower sellers are popular in countries like Mexico, India and Vietnam, and in the Southwestern United States. Associations Prominent industry associations that promote floral design worldwide include the American Institute of Floral Designers (AIFD), the Society of American Florists (SAF), and the National Association of Flower Arranging Societies (NAFAS). In the United States, there are also numerous floriculture and floral design organizations for nearly all of the 50 states in the country. These associations promote floral design through workshops, conferences, flower shows, and seminars.
Designers Notable floral designers include Daniel Ost, Junichi Kakizaki, Paula Pryke, Phil Rulloda, Catherine Conlin, Constance Spry, Jennifer McGarigle, Judith Blacklock, Stanlee Gatti, Irene Hayes, Julia Clements, Azuma Makoto, and the White House Chief Floral Designer. See also Floristry Floral shop History of flower arrangement Flower seller Ikebana Floral Jamming The Big Flower Fight Interior design Fashion design References Design Floristry Arts occupations
Floral design
Engineering
3,625
13,056,751
https://en.wikipedia.org/wiki/Cefamandole
Cefamandole (INN, also known as cephamandole) is a second-generation broad-spectrum cephalosporin antibiotic. The clinically used form of cefamandole is the formate ester cefamandole nafate, a prodrug which is administered parenterally. Cefamandole is no longer available in the United States. The chemical structure of cefamandole, like that of several other cephalosporins, contains an N-methylthiotetrazole (NMTT or 1-MTT) side chain. As the antibiotic is broken down in the body, it releases free NMTT, which can cause hypoprothrombinemia (likely due to inhibition of the enzyme vitamin K epoxide reductase) and a reaction with ethanol similar to that produced by disulfiram (Antabuse), due to inhibition of aldehyde dehydrogenase. Vitamin K supplementation is recommended during therapy, and consumption of ethanol and ethanol-containing substances is discouraged. Cefamandole has a broad spectrum of activity and can be used to treat bacterial infections of the skin, bones and joints, urinary tract, and lower respiratory tract. The following represents cefamandole MIC susceptibility data for a few medically significant microorganisms. Escherichia coli: 0.12 - 400 μg/ml Haemophilus influenzae: 0.06 - >16 μg/ml Staphylococcus aureus: 0.1 - 12.5 μg/ml CO2 is generated during the normal reconstitution of cefamandole and ceftazidime, potentially resulting in an explosive-like reaction in syringes. See also Cefazolin Ceforanide References Acetaldehyde dehydrogenase inhibitors Cephalosporin antibiotics Enantiopure drugs Phenylethanolamines Tetrazoles
Cefamandole
Chemistry
416
11,421,042
https://en.wikipedia.org/wiki/Pyrococcus%20C/D%20box%20small%20nucleolar%20RNA
In molecular biology, Pyrococcus C/D box small nucleolar RNAs are non-coding RNA (ncRNA) molecules identified in the archaeal genus Pyrococcus which function in the modification of ribosomal RNA (rRNA) and transfer RNA (tRNA). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell, which is a major site of ribosomal RNA and snRNA biogenesis, but there is no corresponding visible structure in archaeal cells. This group of ncRNAs is known as small nucleolar RNAs (snoRNAs), and they are also often referred to as guide RNAs because they direct associated protein enzymes to add a modification to specific nucleotides in target RNAs. C/D box RNAs guide the addition of a methyl group (-CH3) to the 2'-O position in the RNA backbone. Computational screens of archaeal genomes have identified C/D box snoRNAs in a number of genomes. In particular, 46 small RNAs were found to be conserved in the genomes of three hyperthermophile Pyrococcus species. References External links snoRNAdb Small nuclear RNA
Pyrococcus C/D box small nucleolar RNA
Chemistry
254
2,350,490
https://en.wikipedia.org/wiki/Road%20roller
A road roller (sometimes called a roller-compactor, or just roller) is a compactor-type engineering vehicle used to compact soil, gravel, concrete, or asphalt in the construction of roads and foundations. Similar rollers are used also at landfills or in agriculture. Road rollers are frequently referred to as steamrollers, regardless of their method of propulsion. History The first road rollers were horse-drawn, and were probably borrowed farm implements (see Roller). Since the effectiveness of a roller depends to a large extent on its weight, self-powered vehicles replaced horse-drawn rollers from the mid-19th century. The first such vehicles were steam rollers. Single-cylinder steam rollers were generally used for base compaction and run with high engine revs with low gearing to promote bounce and vibration from the crankshaft through to the rolls in much the same way as a vibrating roller. The double cylinder or compound steam rollers became popular from around 1910 onwards and were used mainly for the rolling of hot-laid surfaces due to their smoother running engines, but both cylinder types are capable of rolling the finished surface. Steam rollers were often dedicated to a task by their gearing as the slower engines were for base compaction whereas the higher geared models were often referred to as "chip chasers" which followed the hot tar and chip laying machines. Some road companies in the US used steamrollers through the 1950s. In the UK some remained in service until the early 1970s. As internal combustion engines improved during the 20th century, kerosene-, gasoline- (petrol), and diesel-powered rollers gradually replaced their steam-powered counterparts. The first internal-combustion powered road rollers were similar to the steam rollers they replaced. They used similar mechanisms to transmit power from the engine to the wheels, typically large, exposed spur gears. Some users disliked them in their infancy, as the engines of the era were typically difficult to start, particularly the kerosene-powered ones. Virtually all road rollers in use today use diesel power. Uses on a road Road rollers use the weight of the vehicle to compress the surface being rolled (static) or use mechanical advantage (vibrating). Initial compaction of the substrate on a road project is done using a padfoot or "sheep's foot" drum roller, which achieves higher compaction density due to the pads having less surface area. On large freeways, a four-wheel compactor with padfoot drum and a blade, such as a Caterpillar 815/825 series machine, would be used due to its high weight, speed, and the powerful pushing force to spread bulk material. On regional roads, a smaller single padfoot drum machine may be used. The next machine is usually a single smooth drum compactor that compacts the high spots down until the soil is smooth. This is usually done in combination with a motor grader to obtain a level surface. Sometimes at this stage a pneumatic tyre roller is used. These rollers feature two rows (front and back) of pneumatic tyres that overlap, and the flexibility of the tyres provides a kneading action that seals the surface and with some vertical movement of the wheels, enables the roller to operate effectively on uneven ground. Once the soil base is flat the pad drum compactor is no longer used on the road surface. 
The next course (road base) is compacted using a smooth single drum, smooth tandem roller, or pneumatic tyre roller in combination with a grader and a water truck to achieve the desired flat surface with the correct moisture content for optimum compaction. Once the road base is compacted, the smooth single drum compactor is no longer used on the road surface (there is an exception if the single drum has special flat-wide-base tyres on the machine). The final wear course of asphalt concrete (known as asphalt or blacktop in North America, or macadam in England) is laid using a paver and compacted using a tandem smooth drum roller, a three-point roller or a pneumatic tyre roller. Three point rollers on asphalt were once common and are still used, but tandem vibrating rollers are the usual choice now. The pneumatic tyre roller's kneading action is the final roller to seal the surface. Rollers are also used in landfill compaction. Such compactors typically have padfoot drums, and do not achieve a smooth surface. The pads aid in compression, due to the smaller area contacting the ground. Configurations The roller can be a simple drum with a handle that is operated by one person and weighs or as large as a ride-on road roller weighing and costing more than US$150,000. A landfill unit may weigh . Roller types Pedestrian-operated Rammer (bounce up and down) Walk-behind plate compactor/light Trench roller (manual unit or radio-frequency remote control) Walk-behind roller/light (single drum) Walk-behind roller/heavy (double drum) Ride-on smooth finish Tandem drum (static) Tandem drum (vibrating) Single drum roller (smooth) Pneumatic-tyred Roller, called rubber tyre or multi-wheel Combination roller (single row of tyres and a steel drum) Three point roller (steam rollers are usually three-point) Ride-on soil/landfill compactor with pads/feet/spikes Single drum roller (soil) 4-wheel (soil/landfill) 3-point (soil/landfill) Tandem drum (soil/landfill) Other Tractor-mounted and tractor-powered (conversion – see gallery picture below) Drawn rollers or towed rollers (once common, now rare) Impact compactor (uses a square or polygon drum to strike the ground hard for proof rolling or deep lift compacting) Drum roller with rubber coated drum for asphalt compaction Log skidder converted to compactor for landfill Wheel loader converted to compactor for landfill Drum types Drums are available in widths ranging from . Tyre roller types Tyre rollers are available in widths ranging up to , with between 7 and 11 wheels (e.g. 3 wheels at front, 4 at back): 7 and 8 wheel types are normally used in Europe and Africa; 9 and 11 in America; and any type in Asia. Very heavy tyre rollers are used to compact soil. Variations and features On some machines, the drums may be filled with water on site to achieve the desired weight. When empty, the lighter machine is easier and cheaper to transport between work sites. On pneumatic tyre rollers the body may be ballasted with water or sand, or for extra compaction wet sand is used. Modern tyre rollers may be filled with steel ballast, which gives a more even balance for better compaction. Additional compaction may be achieved by vibrating the roller drums, allowing a small, light machine to perform as well as a much heavier one. Vibration is typically produced by a free-spinning hydrostatic motor inside the drum to whose shaft an eccentric weight has been attached. 
Some rollers have a second weight that can be rotated relative to the main weight, to adjust the vibration amplitude and thus the compacting force. Water lubrication may be provided to the drum surface from on-board "sprinkler tanks" to prevent hot asphalt sticking to the drum. Hydraulic transmissions permit greater design flexibility. While early examples used direct mechanical drives, hydraulics reduce the number of moving parts exposed to contamination and allows the drum to be driven, providing extra traction on inclines. Human-propelled rollers may only have a single roller drum. Self-propelled rollers may have two drums, mounted one in front of the other (format known as "duplex"), or three rolls, or just one, with the back rollers replaced with treaded pneumatic tyres for increased traction. Gallery Manufacturers ABG (Germany) — SD/TD (purchased by Ingersoll Rand and now part of Volvo CE) Albaret (France) — PT/TD (now part of Caterpillar) Ammann Group (Switzerland) — Aveling-Barford (England) — TD/PT/3P BOMAG (Germany) — SD/TD/PT (BOMAG/HYPAC in the US market) Case CE (US) — SD (brands the Ammann/Sta machines as Case in the US) Caterpillar Inc. (US) — SD/TD/PT (has the former lines of RAYGO, BROS and Bitelli) Dynapac (Sweden) — SD/TD/PT/3P Galion (US) — Hamm AG (Germany) — SD/TD/PT/3P (now part of the Wirtgen Group) HEPCO (Iran) — SD/TD/PT/3P Hitachi (Japan) — SD/TD/PT/3P Huber Company, (US) — Hyster (US) — SD/TD/PT (part of HYPAC and Bomag USA) Ingersoll Rand (US) — SD/TD/PT (now owned by Volvo) Kamani Engineering Corporation (India) — tractor-mounted (now part of the RPG Group; production ended –1980) LiuGong, HQ at Liuzhou, China — Marshall (England) — TD Sakai Heavy Industries (Japan) — SD/TD/PT/3P Sany (China) — SD/TD/PT World Equipment(China) — SD/TD/PT Tampo (US) — SD/TD Vibromax (Germany) — SD/TD/PT (purchased by JCB, now branded JCB) Wacker Neuson (Germany) — KEY: SD = Single drum TD = Tandem drum PT = Pneumatic tyre — Rubber tyre or multi-tyre are also common 3P = 3-point rollers — These are very similar to the old steam roller design See also Tractor Roller (agricultural tool) for farm rollers Roller (disambiguation) for other types of roller Landfill compaction vehicle Mine roller a demining device References External links Road Roller Association — UK-based society dedicated to the preservation of steam and motor rollers, and ancillary road-making equipment. Engineering vehicles Road construction
Road roller
Engineering
2,107
3,303,198
https://en.wikipedia.org/wiki/Fokker%20periodicity%20block
Fokker periodicity blocks are a concept in tuning theory used to mathematically relate musical intervals in just intonation to those in equal tuning. They are named after Adriaan Daniël Fokker. These are included as the primary subset of what Erv Wilson refers to as constant structures, where "each interval occurs always subtended by the same number of steps". The basic idea of Fokker's periodicity blocks is to represent just ratios as points on a lattice, and to find vectors in the lattice which represent very small intervals, known as commas. Treating pitches separated by a comma as equivalent "folds" the lattice, effectively reducing its dimension by one; mathematically, this corresponds to finding the quotient group of the original lattice by the sublattice generated by the commas. For an n-dimensional lattice, identifying n linearly independent commas reduces the dimension of the lattice to zero, meaning that the number of pitches in the lattice is finite; mathematically, its quotient is a finite abelian group. This zero-dimensional set of pitches is a periodicity block. Frequently, it forms a cyclic group, in which case identifying the m pitches of the periodicity block with m-equal tuning gives equal tuning approximations of the just ratios that defined the original lattice. Note that octaves are usually ignored in constructing periodicity blocks (as they are in scale theory generally) because it is assumed that for any pitch in the tuning system, all pitches differing from it by some number of octaves are also available in principle. In other words, all pitches and intervals can be considered as residues modulo octave. This simplification is commonly known as octave equivalence. Definition of periodicity blocks Let an n-dimensional lattice (i.e. integer grid) embedded in n-dimensional space have a numerical value assigned to each of its nodes, such that moving within the lattice in one of the cardinal directions corresponds to a shift in pitch by a particular interval. Typically, n ranges from one to three. In the two-dimensional case, the lattice is a square lattice. In the 3-D case, the lattice is cubic. Examples of such lattices are the following (x, y, z and w are integers): In the one-dimensional case, the interval corresponding to a single step is generally taken to be a perfect fifth, with ratio 3/2, defining 3-limit just tuning. The lattice points correspond to the integers, with the point at position x being labeled with the pitch value 3^x/2^y for a number y chosen to make the resulting value lie in the range from 1 to 2. Thus, A(0) = 1, and surrounding it are the values ... 128/81, 32/27, 16/9, 4/3, 1, 3/2, 9/8, 27/16, 81/64, ... In the two-dimensional case, corresponding to 5-limit just tuning, the intervals defining the lattice are a perfect fifth and a major third, with ratio 5/4. This gives a square lattice in which the point at position (x,y) is labeled with the value 3^x 5^y 2^z. Again, z is chosen to be the unique integer that makes the resulting value lie in the interval [1,2). The three-dimensional case is similar, but adds the harmonic seventh to the set of defining intervals, leading to a cubic lattice in which the point at position (x,y,z) is labeled with a value 3^x 5^y 7^z 2^w with w chosen to make this value lie in the interval [1,2). Once the lattice and its labeling are fixed, one chooses n nodes of the lattice other than the origin whose values are close to either 1 or 2.
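The octave-reduced labeling just described is easy to make concrete. Below is a minimal Python sketch of the two-dimensional (5-limit) case, using exact rational arithmetic; the function name lattice_label is an illustrative choice, not Fokker's notation. Repeated multiplication or division by 2 plays the role of the exponent z.

from fractions import Fraction

def lattice_label(x, y):
    # Pitch value 3^x * 5^y, octave-reduced into the interval [1, 2).
    f = Fraction(3) ** x * Fraction(5) ** y
    while f >= 2:
        f /= 2
    while f < 1:
        f *= 2
    return f

print(lattice_label(0, 0))    # 1
print(lattice_label(1, 0))    # 3/2, the perfect fifth
print(lattice_label(0, 1))    # 5/4, the just major third
print(lattice_label(-1, 0))   # 4/3, the perfect fourth
print(lattice_label(4, -1))   # 81/80, close to 1: a candidate comma

Nodes such as (4, -1), whose labels land very close to 1, are exactly the kind of special nodes chosen in the last step above.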
The vectors from the origin to each one of these special nodes are called unison vectors. These vectors define a sublattice of the original lattice, which has a fundamental domain that in the two-dimensional case is a parallelogram bounded by unison vectors and their shifted copies, and in the three-dimensional case is a parallelepiped. These domains form the tiles in a tessellation of the original lattice. The tile has an area or volume given by the absolute value of the determinant of the matrix of unison vectors: i.e. in the 2-D case, if the unison vectors are u = (u_x, u_y) and v = (v_x, v_y), then the area of a 2-D tile is |u_x v_y − u_y v_x|. Each tile is called a Fokker periodicity block. The area of each block is always a natural number equal to the number of nodes falling within each block. Examples Example 1: Take the 2-dimensional lattice of perfect fifths (ratio 3/2) and just major thirds (ratio 5/4). Choose the commas 128/125 (the diesis, the distance by which three just major thirds fall short of an octave, about 41 cents) and 81/80 (the syntonic comma, the difference between four perfect fifths and a just major third, about 21.5 cents). The result is a block of twelve, showing how twelve-tone equal temperament approximates the ratios of the 5-limit. Example 2: However, if we were to reject the diesis as a unison vector and instead choose the difference between five major thirds (minus an octave) and a fourth, 3125/3072 (about 30 cents), the result is a block of 19, showing how 19-TET approximates ratios of the 5-limit. Example 3: In the 3-dimensional lattice of perfect fifths, just major thirds and just harmonic sevenths, the identification of the syntonic comma, the septimal kleisma (225/224, about 8 cents) and the ratio 1029/1024 (the difference between three septimal whole tones and a perfect fifth, about 8.4 cents) results in a block of 31, showing how 31-TET approximates ratios of the 7-limit. Mathematical characteristics of periodicity blocks The periodicity blocks form a secondary, oblique lattice, superimposed on the first one. This lattice may be given by a function φ: φ(ζ, ξ) = (x0 + ζ u_x + ξ v_x, y0 + ζ u_y + ξ v_y), which is really a linear combination: φ(ζ, ξ) = (x0, y0) + ζ (u_x, u_y) + ξ (v_x, v_y), where point (x0, y0) can be any point, preferably not a node of the primary lattice, and preferably so that points φ(0,1), φ(1,0) and φ(1,1) are not any nodes either. Then membership of primary nodes within periodicity blocks may be tested analytically through the inverse φ function: φ^(−1)(x, y) = ((v_y (x − x0) − v_x (y − y0)) / D, (u_x (y − y0) − u_y (x − x0)) / D), where D = u_x v_y − u_y v_x. Let (ζ, ξ) = φ^(−1)(x, y); then the pitch B(x,y) belongs to the scale MB iff 0 ≤ ζ < 1 and 0 ≤ ξ < 1, i.e. iff the node lies inside the tile anchored at (x0, y0). For the one-dimensional case: B(x) belongs to MB iff 0 ≤ (x − x0)/L < 1, where L is the length of the unison vector. For the three-dimensional case, the same componentwise test applies to φ^(−1)(x, y, z) computed by Cramer's rule, where D is the determinant of the matrix of unison vectors. References Further reading Paul Erlich, (1999), A Gentle Introduction to Fokker Periodicity Blocks: Part 1; Part 2; etc. Lattice points Pitch space Dutch inventions
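The block sizes in Examples 1-3 above can be checked directly from the determinant rule. Here is a minimal Python sketch; each comma is written as its vector of exponents of 3 and 5 (and 7 in the 3-D case), with factors of 2 dropped under octave equivalence, and the helper names are illustrative:

def det2(u, v):
    # 2x2 determinant of unison vectors u, v.
    return u[0] * v[1] - u[1] * v[0]

def det3(u, v, w):
    # 3x3 determinant by cofactor expansion along the first row.
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

syntonic = (4, -1)              # 81/80 = 3^4 * 5^-1 * 2^-4
diesis = (0, -3)                # 128/125 = 5^-3 * 2^7
small_diesis = (-1, 5)          # 3125/3072 = 3^-1 * 5^5 * 2^-10
print(abs(det2(syntonic, diesis)))        # 12  (Example 1)
print(abs(det2(syntonic, small_diesis)))  # 19  (Example 2)

syntonic3 = (4, -1, 0)          # 81/80
kleisma = (2, 2, -1)            # 225/224 = 3^2 * 5^2 * 7^-1 * 2^-5
comma_1029_1024 = (1, 0, 3)     # 1029/1024 = 3 * 7^3 * 2^-10
print(abs(det3(syntonic3, kleisma, comma_1029_1024)))  # 31  (Example 3)

The absolute determinants 12, 19 and 31 are the node counts of the corresponding periodicity blocks, matching the equal temperaments named above.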
Fokker periodicity block
Mathematics
1,458
1,189,192
https://en.wikipedia.org/wiki/StarHub
StarHub Limited, commonly known as StarHub, is a Singaporean multinational telecommunications conglomerate and one of the major telcos operating in the country. Founded in 1998, it is listed on the Singapore Exchange (SGX). History Early years StarHub was awarded the license to provide fixed networks and mobile services on 23 April 1998, when the government announced that the telecommunications sector in Singapore would be completely liberalised by 2002. In 2000, the government announced the date for complete liberalisation would be brought forward to 1 April 2000, and the 49% cap on foreign ownership of public telecommunications companies in Singapore would be lifted. StarHub was officially launched on 1 April 2000 with ST Telemedia, Singapore Power, BT Group and Nippon Telegraph and Telephone (NTT) as its major shareholders. On 21 January 1999, StarHub acquired internet service provider CyberWay and it became a subsidiary within the StarHub group. It was renamed as StarHub Internet on 3 December 1999 in a move to integrate CyberWay into the StarHub brand. 2000s In 2001, Singapore Power divested its shares in StarHub and sold its 25.5% stake to ST Telemedia for S$400 million. BT Group subsequently divested its 18% stake as a result of consolidation, after accumulating debt acquired during the bidding round for 3G licences in the United Kingdom. On 1 October 2002, the company merged with Singapore's sole cable television operator, Singapore Cable Vision. As a result of the merger, it acquired SCV's cable television as well as broadband internet access operations. StarHub was publicly listed on the Singapore Exchange on 13 October 2004. On 12 January 2007, StarHub announced a 'Strategic Alliance' with Qatar Telecom. On 1 May 2009, the Infocomm Development Authority of Singapore announced that StarHub's wholly owned subsidiary, Nucleus Connect, was selected as the Operating Company (OpCo) to design, build and operate the active infrastructure of the Next Generation Nationwide Broadband Network (Next Gen NBN). Next Gen NBN is now simply known as Nationwide Broadband Network or NBN. On 14 July 2009, StarHub announced the retirement of long-standing chief executive Terry Clontz. Neil Montefiore, the former chief executive of the country's smallest telecommunications company M1 Limited, took over as chief executive officer on 1 January 2010. Terry Clontz remains as a non-executive director of StarHub. On 1 August 2009, StarHub relocated its corporate office to StarHub Green building at Ubi from its previous office location at StarHub building at Cuppage. 2010s On 7 February 2013, StarHub announced the retirement of Neil Montefiore as chief executive officer by end February 2013. StarHub's chief operating officer Tan Tong Hai was appointed CEO on 1 March 2013. On 13 July 2015, StarHub announced the retirement of Tan Guong Ching as chairman. StarHub's former chief executive officer Terry Clontz was appointed chairman on 15 July 2015. In December 2016, StarHub's new innovation centre and converged operations cockpit Hubtricty went operational. Located at Mediapolis@one-north, the facility also contains a co-working space and data analytics centre. 2020s On 29 April 2020, a joint venture between StarHub and M1 Limited was awarded a license to create a 5G network in Singapore by the Singapore Government. 
In August 2021, StarHub announced that it would launch Nvidia's GeForce Now in Singapore, making Singapore the first country in the region to receive the cloud gaming service and StarHub the only local operator to offer it. On 17 February 2022, StarHub was introduced as a programme partner for DBS #CyberWellness. In March 2022, the IMDA approved a deal for StarHub to buy a majority stake in MyRepublic's broadband business. As The Straits Times reported, "The Infocomm Media Development Authority (IMDA) has approved local telco StarHub's proposal to buy a majority 50.1 per cent stake in rival Internet service provider MyRepublic's fibre broadband business for residential and enterprise customers in Singapore." The proposed transaction was announced by StarHub and MyRepublic in September 2021. At the time, the deal was worth $70.8 million. This would grow StarHub's broadband market share in Singapore from 34% to about 40%. Subsidiaries The StarHub group consists of several subsidiaries, which include: Network Services Mobile services StarHub provides mobile services through its subsidiary StarHub Mobile. Since its launch on 1 April 2000, StarHub has been Singapore's fastest-growing mobile operator. It has close to two million customers and is the second-largest mobile network operator with close to 30% market share. On 27 May 2003, it became the first mobile operator in Singapore to commercially launch BlackBerry, a hand-held wireless device providing e-mail, telephone, text messaging, web browsing and other wireless data access. Customer trials of 3G services began in November 2004, and the service was released commercially in April 2005. In January 2005, StarHub announced that it would form an exclusive strategic partnership for i-mode in Singapore with NTT DoCoMo, a subsidiary of StarHub's major shareholder NTT. Customer trials started in October 2005, and the service was launched on 18 November. On 15 July 2009, StarHub became the first mobile operator in the Asia Pacific region to commercially launch an HSPA+ service. Branded as MaxMobile Elite, StarHub's HSPA+ service offers download speeds up to 21 Mbit/s nationwide. On 19 September 2012, StarHub began the enhancement of its high-speed mobile broadband network with Long Term Evolution (LTE) and Dual Cell High-Speed Packet Access Plus (DC-HSPA+), which improved peak downlink speeds of up to 75 Mbit/s and 42 Mbit/s respectively. On 7 March 2013, StarHub became the first telecommunications company in Singapore to offer High Definition (HD) Voice. Over a year later, the company launched 4G Voice over LTE services. Both technologies enhance mobile call experience by improving speech clarity and reducing background noise. In September 2015, StarHub was ranked the world's fastest 4G network by independent mobile coverage checker OpenSignal. Five months later, OpenSignal reported that according to its study, Singapore had the world's fastest LTE speeds. Singapore's StarHub and Singtel as well as Canada's SaskTel tied in the world's fastest operator category. As of the second quarter of 2016, StarHub's 4G outdoor coverage was at 99.69%. In comparison, Singtel's coverage was at 99.95% and M1's at 99.29%. In November 2016, StarHub and Vodafone renewed their partnership agreement for Singapore for a further three years. The partnership was formed in 2012 to offer innovative mobile services to enterprise customers. On 1 December 2016, StarHub rolled out a travel data plan allowing 2 GB or 3 GB use over 30 days across all mobile networks in nine Asia-Pacific destinations. 
In January 2017, StarHub switched embedded SIM (eSIM) on its 4G network to support devices that come without a physical SIM. The Samsung Gear S3 Frontier (LTE) is the first eSIM wearable to be made available in Singapore. Pay TV StarHub provides cable television services through its subsidiary Singapore Cable Vision Ltd. Its Hybrid Optical Fibre-Coaxial network reaches 99% of households in Singapore. In November 2004, it announced the launch of digital cable services over its cable network, which added more channels and allowed greater consumer interactivity. On 18 January 2007, StarHub introduced a commercial high definition television service. On 7 June 2012, StarHub launched TV Anywhere, a multi-platform service which allows subscribers to watch TV channels and on-demand content on their personal devices such as laptops and tablets. On 18 March 2013, StarHub started offering commercial customers StarHub TV on Fibre, its Internet Protocol television (IPTV) service. On 8 April 2015, IPTV was rolled out to residential customers. On 12 August 2015, an online streaming service called StarHub Go was launched, and TV Anywhere was merged into it. Internet services StarHub provides broadband internet access through its subsidiaries StarHub Internet and StarHub Online respectively. StarHub Internet was formed after the acquisition of internet access provider CyberWay, while StarHub Online was formed after a merger with Singapore Cable Vision. StarHub has 475,000 home broadband customers as of the third quarter of 2016. On 3 December 1999, a free surf plan was announced in conjunction with the rebranding of CyberWay, a first in Singapore's consumer internet industry. Customers could surf the internet for free via dial-up and pay only normal local telephone charges. Over 180,000 people signed up for the free surf plan in less than three months since it was announced. StarHub provides broadband internet access on the same network it uses for cable television services using cable modems based on the DOCSIS standard. StarHub is a founding member of the global Wireless Broadband Alliance and provides wireless broadband services at numerous locations throughout Singapore. In November 2004, it announced an agreement with Connexion by Boeing which provide StarHub customers the ability to access the internet and digital content in flight. On 28 December 2006, StarHub became the first operator in the world to commercially launch a 100 Mbit/s residential broadband service nationwide. Known as MaxOnline Ultimate, it is one of three cable broadband services offered by StarHub, the other two being MaxOnline Express and MaxOnline Premium. StarHub also launched 100 Mbit/s, 150 Mbit/s, 200 Mbit/s and 1000 Mbit/s residential fibre broadband service in April 2010. It is known as MaxInfinity Ultimate, MaxInfinity Elite, MaxInfinity Supreme and MaxInfinity Platinum. In October 2012, StarHub launched two new gamer-centric broadband plans under the name MaxInfinity LVL99 for gamers to enjoy priority. The plans have since been discontinued. In November 2014, StarHub started bundling a 100 Mbit/s cable broadband connection with its 1 Gbit/s fibre broadband plans, branded as "Dual Broadband", for customers to get two broadband links in their home. Fixed network services StarHub's fixed network, built since inception, extends more than around Singapore and directly connects more than 800 commercial buildings. 
It provides a wide range of fixed network services, broadly categorised as data services and Internet Protocol and Voice services. Data services include: Asynchronous Transfer Mode service Domestic Leased Circuit Facilities Management Frame Relay International Private Leased Circuit Internet protocol services include: Corporate Dialup and ADSL via access to DSLAMs located in office buildings Dedicated Leased lines Global internet protocol network Global Virtual Private Network Metropolitan Ethernet services Internet Protocol Transit/Backbone services Managed Security Services Hosting Services Co-location Service References Members of the Conexus Mobile Alliance Internet in Singapore Mobile phone companies of Singapore Telecommunications companies of Singapore Telecommunications companies established in 1998 1998 establishments in Singapore Companies listed on the Singapore Exchange Internet service providers of Singapore Singaporean brands
StarHub
Technology
2,345
27,240,903
https://en.wikipedia.org/wiki/Online%20vetting
Online vetting, also known as cyber-vetting, is used by potential employers and other acquaintances to vet people's online presence or "internet reputation" ("netrep") on search engines such as Google and Yahoo, and social networking services such as Facebook, Twitter, Instagram and LinkedIn. Employers may check profiles, posts, and photographs for indications that the candidate is unsuitable for a certain job or position. Views and practice Social media use has increased tremendously over recent decades. In the United States, there are about 327 million users on social media platforms as of 2021. With so many users online, recruiters have pivoted to asking candidates directly for their social media profiles on the initial application. This allows recruiters to see what their candidates are doing and posting online. A survey in 2007 found that half of UK employees would be outraged if their employers looked up information about them on social networking sites, and 56% thought it would be unethical. Employer surveys found that approximately 20–67% of employers conduct internet searches, including of social networking sites, and that some have turned down applicants as a result of their searches. 21% of colleges and universities surveyed said they looked at the social networking of prospective students, usually for those applying for scholarships and other limited awards and programmes. Prospective political appointees to the Obama administration were asked to list all their blog posts, any emails, text messages, and instant messages that could suggest a conflict of interest or public source of embarrassment, the URLs of any sites that featured them in a personal or professional capacity, and all of their online aliases. Job applicants have been refused due to criticising previous employers and discussing company information online, as well as for posting provocative and inappropriate photographs, drinking or drug use, poor communication skills, making discriminatory comments, and lying about qualifications. Several companies offer online reputation management services, including helping to remove embarrassing information from websites. A CareerBuilder study found that 57% of employers had rejected potential employees based on what an online vetting scan turned up. In 2017, research findings conducted with recruiters listed three primary functions of a cybervetting process: Screening - a process considered analogous to conventional background checks and résumé analysis; Efficiency - a more effective way to gather information from a candidate than the conventional process; Relational - analysis of a candidate's relationships and behavior through social network posts. While online vetting can be an advantage for recruiters who want to learn more about their candidates, it can also feed recruiters false information about them. Knowing that their online presence is visible to hundreds of people, some candidates post only content that polishes their profile and makes them look better to others. For example, someone might engage in a certain activity only to post it on their social media to make themselves look like a good person and make the audience think they actually care about that activity. This can cause recruiters to get a false impression of the candidate from online vetting. Due to this, some potential employees will connect with their recruiter on social media. 
This can be good or bad depending on whether the potential employee has vetted their own social media to make sure there is nothing there that makes them look bad. Legal position Online vetting has become a main component of the vetting process for new employees. However, the grey areas surrounding it raise questions of legality. Online vetting blurs a candidate's personal life together with their professional livelihood, eroding boundaries that are clearly defined elsewhere. Legal experts have warned human resources departments about vetting prospective employees online, due to the possibility of discrimination and the unreliability of this information. The chairman of the UK Children's Charities' Coalition on Internet Safety argued in 2007 that it was "possibly illegal, but certainly unethical". While the Information Commissioner's Office advised that just looking at information on someone's social networking profiles would not be illegal, an employment law specialist noted that under the Data Protection Act 1998, processing and storing the information or using it to make discriminatory decisions could be. Age discrimination might result from such a practice, due to the age profile of users of social networking sites. Failed candidates may be able to use discrimination legislation to ask about vetting operations and even ask for IT records to check access to social networks. In the US, vetting using social networking sites risks breaching the Fair Credit Reporting Act (FCRA), which requires employers to gain the consent of applicants before doing a background check, as well as state laws that limit the consideration of off-duty conduct in making employment decisions; any searches also risk breaching prohibitions against commercial use contained in the terms of service of the social networking sites. In 2006, a trainee teacher at a high school in Pennsylvania was denied her teaching degree after her supervisor found a picture she posted on MySpace captioned "Drunken pirate" and deemed it "unprofessional". She sued, arguing that by acting on the basis of her legal out-of-hours behavior Millersville University had breached her First Amendment rights, but a federal district court ruled that the photograph was not "protected speech" under the First Amendment. See also Cyber-stalking Digital footprint Online identity Social media background check Notes and references Further reading External links Employers Tap Web for Employee Information – 2006 National Public Radio report Social media E-recruitment Internet privacy
Online vetting
Technology
1,112
73,759,515
https://en.wikipedia.org/wiki/7%20Leonis%20Minoris
7 Leonis Minoris (7 LMi) is a star located in the northern constellation Leo Minor. It is also designated as HD 82087 and HR 3764. 7 LMi is faintly visible to the naked eye as a yellow-hued point of light with an apparent magnitude of 5.86. Gaia DR3 parallax measurements imply a distance of 462 light-years and it is currently receding with a heliocentric radial velocity of . At its current distance, 7 LMi's brightness is diminished by 0.12 magnitudes due to interstellar extinction and it has an absolute magnitude of −0.03. There have been disagreements on the object's stellar classification. 7 LMi is either a G-type giant star with a class of either G8 or G9 III, or it is a K-type giant with a class of K0 III. It is most likely on the horizontal branch (95% fit), generating energy via helium fusion at its core. It has 2.74 times the mass of the Sun but at the age of 575 million years, it has expanded to 13.41 times the radius of the Sun. It radiates 96 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of . 7 LMi has a near solar metallicity at [Fe/H] = −0.03 and it spins very slowly with a projected rotational velocity of . 7 LMi has two visual companions. AG +33°954 is a background star located much farther away than 7 LMi and it is a close spectroscopic binary itself. References G-type giants Triple stars Leo Minor BD+34 01999 082087 046652 3764
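The quoted absolute magnitude can be checked from the apparent magnitude, distance, and extinction via the distance modulus, M = m − 5·log10(d/10 pc) − A. A minimal Python sketch, using only the rounded values quoted above (so the result matches the quoted −0.03 only up to rounding of the quoted distance):

import math

m = 5.86       # apparent magnitude
d_ly = 462     # distance in light-years
A = 0.12       # interstellar extinction, in magnitudes

d_pc = d_ly / 3.2616                     # convert light-years to parsecs
M = m - 5 * math.log10(d_pc / 10) - A    # distance modulus, corrected for extinction
print(round(M, 2))                       # about -0.02, consistent with the quoted -0.03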
7 Leonis Minoris
Astronomy
362
4,460,592
https://en.wikipedia.org/wiki/NetTop
NetTop is an NSA project to run Multiple Single-Level systems with a Security-Enhanced Linux host running VMware with Windows as a guest operating system. NetTop has . External links NSA web page on NetTop VMware PR page on NetTop HP NetTop web page TCS Trusted Workstation based on NetTop Linux security software National Security Agency operations
NetTop
Technology
75
501,339
https://en.wikipedia.org/wiki/Topogenic%20sequence
"Topogenic sequence" is a collective term for peptide sequences present in nascent proteins that are essential for their insertion and orientation in cellular membranes. The sequences are also used to translocate proteins across various intracellular membranes, and ensure they are transported to the correct organelle after synthesis. The sequence may lie at the end of the nascent protein, e.g. an N-terminal signal sequence, or in its middle parts, e.g. stop-transfer anchor sequences and signal-anchor sequences. If the sequence is at the end of the polypeptide, it is cleaved off after entering the ER lumen (via a translocon) by a signal peptidase, and subsequently degraded. As an example, the vast majority of all known complex plastid preproteins (the 'unactivated' forms of the proteins) encoded in the nucleus possess a topogenic sequence. See also Protein targeting Target peptide Translocon Signal peptide References Peptide sequences
Topogenic sequence
Chemistry,Biology
203
55,883,075
https://en.wikipedia.org/wiki/NGC%201989
NGC 1989 (also known as ESO 423-21) is a lenticular galaxy in the Columba constellation. It is about 482 million light-years away from the Milky Way. The galaxy was discovered by John Herschel on January 28, 1835. Its apparent magnitude is 12.9 and its size is 1.40 by 1.1 arc minutes. References External links Lenticular galaxies 423-21 -05-14-04 1989 017464 Columba (constellation) Astronomical objects discovered in 1835 Discoveries by John Herschel
NGC 1989
Astronomy
114
20,741,256
https://en.wikipedia.org/wiki/Danish%20design
Danish design is a style of functionalistic design and architecture that was developed in the mid-20th century. Influenced by the German Bauhaus school, many Danish designers used the new industrial technologies, combined with ideas of simplicity and functionalism, to design buildings, furniture and household objects, many of which have become iconic and are still in use and production. Prominent examples are the Egg chair, the PH lamps and the Sydney Opera House (Australia). History The Danish Culture Canon credits Thorvald Bindesbøll (1846–1908) with early contributions to design in the areas of ceramics, jewellery, bookbinding, silver and furniture, although he is known in the rest of the world for creating the Carlsberg logo (1904), still in use today. The Canon also includes Knud V. Engelhardt (1882–1931) for a more industrial approach, especially in the rounded contours of his electric tramcar designs which were widely copied. In the area of textiles, Marie Gudme Leth (1895–1997) brought the screen printing process to Denmark, opening a factory in 1935 which allowed her colourful patterns to be manufactured on an industrial basis. August Sandgren introduced functionalism in the design of his masterful bookbindings. In the late 1940s, shortly after the end of the Second World War, conditions in Denmark were ideally suited to success in design. The emphasis was on furniture but architecture, silver, ceramics, glass and textiles also benefitted from the trend. Denmark's late industrialisation combined with a tradition of high-quality craftsmanship formed the basis of gradual progress towards industrial production. After the end of the war, Europeans were keen to find novel approaches such as the light wood furniture from Denmark. Last but not least, support in Denmark for freedom of individual expression assisted the cause. The newly established Furniture School at the Royal Danish Academy of Art played a considerable part in the development of furniture design. Kaare Klint taught functionalism based on the size and proportions of objects, wielding considerable influence. Hans J. Wegner, who had been trained as a cabinetmaker, contributed a unique sense of form, especially in designing chairs. As head of FDB Møbler, Børge Mogensen designed simple and robust objects of furniture for the average Danish family. Finn Juhl demonstrated an individualistic approach in designing chairs with an appealing but functional look. In the early 1950s, American design also influenced Danish furniture. The American Charles Eames designed and manufactured chairs of moulded wood and steel pipes. These encouraged Arne Jacobsen to design his world-famous Ant Chair, Denmark's first industrially manufactured chair. Furthermore, as Shaker furniture—and especially its reputation for stripped-down chairs—began to be more and more known abroad, it also influenced Danish designers. Poul Kjærholm, Verner Panton and Nanna Ditzel followed a few years later, continuing the successful story of Danish design. Kjærholm worked mainly in steel and leather, Panton left Denmark during the 1960s to continue designing imaginative but highly unconventional plastic chairs while Nanna Ditzel, who also had a strongly individualistic approach, was successful in helping to renew Danish furniture design in the 1980s. Modern trends During the 1970s, Verner Panton made some of his most important designs, including the Pantonova and the 1-2-3 System. Danish furniture design during the 1980s did not include prominent contributions. 
By contrast, industrial designers began to prosper, making use of principles such as focus on the user, as well as attention to materials and to detail. For example, there are well-known Danish designers, like Tobias Jacobsen (the grandson of Arne Jacobsen), who focused on the single elements of a violin when creating his chair "Vio" or on a boomerang when designing his eponymous sideboard. The Bernadotte & Bjørn studio, established in 1950, was the first to specialise in industrial design, with an emphasis on office machines, domestic appliances and functional articles such as the thermos jug. The electronics manufacturer Bang & Olufsen, in collaboration with Bernadotte & Bjørn and later with Jacob Jensen and David Lewis, went on to excel in modern design work. Around the same time, the Stelton company collaborated with Arne Jacobsen and Erik Magnussen to produce their iconic vacuum jug, a huge international success. Another successful design field is medical technology. Danish design companies like 3PART, Designit and CBD have worked in this area with individual designers such as Steve McGugan and Anders Smith. In 2002 the Danish Government and the City of Copenhagen launched an effort to establish a world event for design in Copenhagen. Originally understood as a tool for branding traditional Danish design, the non-profit organization INDEX: shifted focus after worldwide research and coined the concept of Design to Improve Life, which rapidly became celebrated in Denmark and around the world. The organization now hands out the biggest design award in the world biennially in Copenhagen, tours large-scale outdoor exhibitions around the world, runs educational programs and design labs, and hosts a global network. Today, there is strong focus on design in Denmark as industry increasingly appreciates the importance of design in the business environment. In addition, as part of its trade and industry policy, the Danish government has launched the DesignDenmark initiative which aims to restore Denmark to the international design elite. Architecture Modern architecture has also contributed to the concept of Danish design. Arne Jacobsen was not just a furniture designer but one of the leading architects of his times. Among his achievements are the Bellevue Theater and restaurant, Klampenborg (1936), the Århus City Hall (with Erik Møller; 1939–42) and the SAS Royal Hotel (1958–60). Jørn Utzon (1918–2008), Denmark's most widely recognized architect, is remembered for his expressionist Sydney Opera House (completed 1973) and the later Bagsværd Church (1976) with its wavy concrete roof. Henning Larsen (1925–2013) is the architect who designed the boldly modern Copenhagen Opera House on the island of Holmen which was completed in 2005. Danish architecture is currently in a new-wave era, receiving more attention than at any time since the golden age of Arne Jacobsen and Jørn Utzon, and is focused on function and concept rather than aesthetics and an impeccable finish. Bjarke Ingels of Bjarke Ingels Group (BIG) and Dan Stubbergaard's architectural firm Cobe, who met at the former drawing office Plot, are both part of the new wave. Notable projects are BIG's Amager Bakke (Copenhill) and Cobe's Nørreport Station. Recent achievements Today, the concept of Danish design is thriving in an ever-wider number of fields. Among recent highlights are: The Museum of Modern Art in New York has chosen to outfit 95% of its new Yoshio Taniguchi-designed home with furniture by Danish design company GUBI. The Danish Zenvo ST1 supercar. 
The Evita Peroni suite of women's accessories which now has some 300 stores in 30 countries. The Halifax Central Library in Halifax, Nova Scotia, Canada, was designed by the Danish architectural firm Schmidt Hammer Lassen. After it was completed in 2014, it has received widespread acclaim and several architecture awards. Designers Among the most successful designers associated with the concept are Børge Mogensen (1914–72), Finn Juhl (1912–89), Hans Wegner (1914–2007), Arne Jacobsen (1902–71), Poul Kjærholm (1929–80), Poul Henningsen (1894–1967) and Verner Panton (1926–98). Other designers of note include Kristian Solmer Vedel (1923–2003) in the area of industrial design, Jens Harald Quistgaard (1919–2008) for kitchen furniture and implements, Gertrud Vasegaard (1913–2007) for ceramics, and Ole Wanscher (1903–85), who had a classical approach to furniture design. Museums The Danish Museum of Art & Design (or, Designmuseum Denmark) in Copenhagen exhibits many of the artifacts associated with Danish design, especially furniture. The New York Museum of Modern Art also has a large Danish design collection. The Danish Design Centre in the centre of Copenhagen has both permanent and special exhibitions promoting Danish design. See also BoConcept Carl Hansen & Søn Danish Culture Canon Anders Nørgaard FDB Møbler References External links . . . Danish design on Dezeen Industrial design Architectural styles Modernist architecture in Scandinavia Functionalist architecture Art movements Danish art Architectural design
Danish design
Engineering
1,796
1,060,236
https://en.wikipedia.org/wiki/Simplicial%20homology
In algebraic topology, simplicial homology is the sequence of homology groups of a simplicial complex. It formalizes the idea of the number of holes of a given dimension in the complex. This generalizes the number of connected components (the case of dimension 0). Simplicial homology arose as a way to study topological spaces whose building blocks are n-simplices, the n-dimensional analogs of triangles. This includes a point (0-simplex), a line segment (1-simplex), a triangle (2-simplex) and a tetrahedron (3-simplex). By definition, such a space is homeomorphic to a simplicial complex (more precisely, the geometric realization of an abstract simplicial complex). Such a homeomorphism is referred to as a triangulation of the given space. Many topological spaces of interest can be triangulated, including every smooth manifold (Cairns and Whitehead). Simplicial homology is defined by a simple recipe for any abstract simplicial complex. It is a remarkable fact that simplicial homology only depends on the associated topological space. As a result, it gives a computable way to distinguish one space from another. Definitions Orientations A key concept in defining simplicial homology is the notion of an orientation of a simplex. By definition, an orientation of a k-simplex is given by an ordering of the vertices, written as (v_0, ..., v_k), with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean. Chains Let S be a simplicial complex. A simplicial k-chain is a finite formal sum c_1 σ_1 + c_2 σ_2 + ... + c_N σ_N, where each c_i is an integer and σ_i is an oriented k-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example, (v_0, v_1) = −(v_1, v_0). The group of k-chains on S is written C_k. This is a free abelian group which has a basis in one-to-one correspondence with the set of k-simplices in S. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices. Boundaries and cycles Let σ = (v_0, ..., v_k) be an oriented k-simplex, viewed as a basis element of C_k. The boundary operator ∂_k : C_k → C_(k−1) is the homomorphism defined by: ∂_k(σ) = Σ_(i=0..k) (−1)^i (v_0, ..., v_(i−1), v_(i+1), ..., v_k), where the oriented simplex (v_0, ..., v_(i−1), v_(i+1), ..., v_k) is the i-th face of σ, obtained by deleting its i-th vertex. In C_k, elements of the subgroup Z_k = ker ∂_k are referred to as cycles, and the subgroup B_k = im ∂_(k+1) is said to consist of boundaries. Boundaries of boundaries Because each (k−2)-face of σ appears exactly twice in ∂_(k−1)(∂_k(σ)), once for each of the two orders in which its two missing vertices can be removed, and the two appearances carry opposite signs (the sign depends on which vertex gives the second face removed), it follows that ∂_(k−1) ∘ ∂_k = 0. In geometric terms, this says that the boundary of a boundary of anything has no boundary. Equivalently, the abelian groups (C_k, ∂_k) form a chain complex. Another equivalent statement is that B_k is contained in Z_k. As an example, consider a tetrahedron with vertices oriented as w, x, y, z. By definition, its boundary is given by: ∂(w, x, y, z) = (x, y, z) − (w, y, z) + (w, x, z) − (w, x, y). The boundary of the boundary is given by: ∂((x, y, z)) − ∂((w, y, z)) + ∂((w, x, z)) − ∂((w, x, y)) = ((y, z) − (x, z) + (x, y)) − ((y, z) − (w, z) + (w, y)) + ((x, z) − (w, z) + (w, x)) − ((x, y) − (w, y) + (w, x)) = 0. Homology groups The k-th homology group H_k of S is defined to be the quotient abelian group H_k(S) = Z_k(S) / B_k(S). It follows that the homology group H_k(S) is nonzero exactly when there are k-cycles on S which are not boundaries. In a sense, this means that there are k-dimensional holes in the complex.
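To make the boundary operator concrete before turning to examples, here is a minimal Python sketch (the function name and the dict-based chain representation are illustrative choices, not a standard library API). An oriented simplex is a tuple of vertex labels, assumed listed in a fixed global vertex order, and a chain is a dict from simplices to integer coefficients:

def boundary(chain):
    # Apply the boundary operator to a simplicial chain.
    # chain: dict mapping oriented simplices (tuples of vertices) to integers.
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # delete the i-th vertex
            out[face] = out.get(face, 0) + (-1) ** i * coeff
    return {f: c for f, c in out.items() if c != 0}

# Boundary of the oriented tetrahedron (w, x, y, z):
tet = {('w', 'x', 'y', 'z'): 1}
print(boundary(tet))
# {('x','y','z'): 1, ('w','y','z'): -1, ('w','x','z'): 1, ('w','x','y'): -1}
print(boundary(boundary(tet)))   # {} -- the boundary of a boundary vanishes

The examples that follow can be checked the same way.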
For example, consider the complex obtained by gluing two triangles (with no interior) along one edge, shown in the image. The edges of each triangle can be oriented so as to form a cycle. These two cycles are by construction not boundaries (since every 2-chain is zero). One can compute that the homology group H_1(S) is isomorphic to Z^2, with a basis given by the two cycles mentioned. This makes precise the informal idea that S has two "1-dimensional holes". Holes can be of different dimensions. The rank of the k-th homology group, the number b_k = rank(H_k(S)), is called the k-th Betti number of S. It gives a measure of the number of k-dimensional holes in S. Example Homology groups of a triangle Let S be a triangle (without its interior), viewed as a simplicial complex. Thus S has three vertices, which we call v_0, v_1, v_2, and three edges, which are 1-dimensional simplices. To compute the homology groups of S, we start by describing the chain groups C_k: C_0 is isomorphic to Z^3 with basis (v_0), (v_1), (v_2); C_1 is isomorphic to Z^3 with a basis given by the oriented 1-simplices (v_0, v_1), (v_0, v_2), and (v_1, v_2); C_2 is the trivial group, since there is no simplex like (v_0, v_1, v_2) because the triangle has been supposed without its interior. So are the chain groups in other dimensions. The boundary homomorphism ∂ : C_1 → C_0 is given by: ∂(v_0, v_1) = v_1 − v_0, ∂(v_0, v_2) = v_2 − v_0, ∂(v_1, v_2) = v_2 − v_1. Since C_(−1) = 0, every 0-chain is a cycle (i.e. Z_0 = C_0); moreover, the group of the 0-boundaries is generated by the three elements on the right of these equations, creating a two-dimensional subgroup of C_0. So the 0th homology group H_0(S) is isomorphic to Z, with a basis given (for example) by the image of the 0-cycle (v_0). Indeed, all three vertices become equal in the quotient group; this expresses the fact that S is connected. Next, the group of 1-cycles is the kernel of the homomorphism ∂ above, which is isomorphic to Z, with a basis given (for example) by (v_0, v_1) − (v_0, v_2) + (v_1, v_2). (A picture reveals that this 1-cycle goes around the triangle in one of the two possible directions.) Since C_2 = 0, the group of 1-boundaries is zero, and so the 1st homology group H_1(S) is isomorphic to Z. This makes precise the idea that the triangle has one 1-dimensional hole. Next, since by definition there are no 2-cycles, H_2(S) = 0 (the trivial group). Therefore the 2nd homology group is zero. The same is true for H_k(S) for all k not equal to 0 or 1. Therefore, the homological connectivity of the triangle is 0 (it is the largest k for which the reduced homology groups up to k are trivial). Homology groups of higher-dimensional simplices Let S be a tetrahedron (without its interior), viewed as a simplicial complex. Thus S has four 0-dimensional vertices, six 1-dimensional edges, and four 2-dimensional faces. The construction of the homology groups of a tetrahedron proceeds along the same lines as for the triangle. It turns out that H_0(S) is isomorphic to Z, H_2(S) is isomorphic to Z too, and all other groups are trivial. Therefore, the homological connectivity of the tetrahedron is 0. If the tetrahedron contains its interior, then H_2(S) is trivial too. In general, if S is a d-dimensional simplex, the following holds: If S is considered without its interior, then H_0(S) = Z and H_(d−1)(S) = Z, and all other homologies are trivial; If S is considered with its interior, then H_0(S) = Z and all other homologies are trivial. Simplicial maps Let S and T be simplicial complexes. A simplicial map f from S to T is a function from the vertex set of S to the vertex set of T such that the image of each simplex in S (viewed as a set of vertices) is a simplex in T. A simplicial map f : S → T determines a homomorphism of homology groups H_k(S) → H_k(T) for each integer k. This is the homomorphism associated to a chain map from the chain complex of S to the chain complex of T. 
Explicitly, this chain map is given on k-chains by f((v_0, ..., v_k)) = (f(v_0), ..., f(v_k)) if f(v_0), ..., f(v_k) are all distinct, and f((v_0, ..., v_k)) = 0 otherwise. This construction makes simplicial homology a functor from simplicial complexes to abelian groups. This is essential to applications of the theory, including the Brouwer fixed point theorem and the topological invariance of simplicial homology. Related homologies Singular homology is a related theory that is better adapted to theory than to computation. Singular homology is defined for all topological spaces and depends only on the topology, not any triangulation; and it agrees with simplicial homology for spaces which can be triangulated. Nonetheless, because it is possible to compute the simplicial homology of a simplicial complex automatically and efficiently, simplicial homology has become important for application to real-life situations, such as image analysis, medical imaging, and data analysis in general. Another related theory is cellular homology. Applications A standard scenario in many computer applications is a collection of points (measurements, dark pixels in a bit map, etc.) in which one wishes to find a topological feature. Homology can serve as a qualitative tool to search for such a feature, since it is readily computable from combinatorial data such as a simplicial complex. However, the data points have to first be triangulated, meaning one replaces the data with a simplicial complex approximation. Computation of persistent homology involves analysis of homology at different resolutions, registering homology classes (holes) that persist as the resolution is changed. Such features can be used to detect structures of molecules, tumors in X-rays, and cluster structures in complex data. More generally, simplicial homology plays a central role in topological data analysis, a technique in the field of data mining. Implementations Exact, efficient computation of the simplicial homology of large simplicial complexes can be carried out using the GAP Simplicial Homology package. A MATLAB toolbox for computing persistent homology, Plex (Vin de Silva, Gunnar Carlsson), is available at this site. Stand-alone implementations in C++ are available as part of the Perseus, Dionysus and PHAT software projects. For Python, there are libraries such as scikit-tda, Persim, giotto-tda and GUDHI, the latter aimed at generating topological features for machine learning. These can be found at the PyPI repository. See also Simplicial homotopy References External links Topological methods in scientific computing Computational homology (also cubical homology) Computational topology
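The triangle example worked out above can also be reproduced mechanically from boundary matrices: ranks computed over the rationals give the Betti numbers (this ignores torsion, which does not arise for the triangle). A minimal sketch using NumPy, with the matrix layout as an assumed convention:

import numpy as np

# Hollow triangle: vertices v0, v1, v2; edges (v0,v1), (v0,v2), (v1,v2).
# d1 maps C_1 -> C_0; column j is the boundary of edge j in the vertex basis.
d1 = np.array([
    [-1, -1,  0],   # v0: coefficient -1 in d(v0,v1) and d(v0,v2)
    [ 1,  0, -1],   # v1
    [ 0,  1,  1],   # v2
])

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0                              # C_2 = 0, so the map into C_1 is zero

b0 = d1.shape[0] - rank_d1               # dim C_0 - rank d1 (all 0-chains are cycles)
b1 = (d1.shape[1] - rank_d1) - rank_d2   # dim ker d1 - rank d2
print(b0, b1)                            # 1 1 -- one component, one 1-dimensional hole

The same recipe, with larger boundary matrices, handles the tetrahedron and the glued-triangles complex discussed above.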
Simplicial homology
Mathematics
2,124
4,141,563
https://en.wikipedia.org/wiki/Predictive%20analytics
Predictive analytics, or predictive AI, encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions. The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement. Definition Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future: for example, identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions. Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions." In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero breakdown and further be integrated into prescriptive analytics for decision optimization. Analytical techniques The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques. Machine learning Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models. Autoregressive Integrated Moving Average (ARIMA) ARIMA models are a common example of time series models. These models use autoregression, which means the model can be fitted with regression software that will use machine learning to do most of the regression analysis and smoothing.
ARIMA models are known to have no overall trend, but instead have a variation around the average that has a constant amplitude, resulting in statistically similar time patterns. Through this, variables are analyzed and data is filtered in order to better understand and predict future values. Exponential smoothing models are one example of an ARIMA-type method. Exponential smoothing takes into account the difference in importance between older and newer data sets, as the more recent data is more accurate and valuable in predicting future values. In order to accomplish this, exponents are utilized to give newer data sets a larger weight in the calculations than the older sets. Time series models Time series models are a subset of machine learning that utilize time series in order to understand and forecast data using past values. A time series is the sequence of a variable's value over equally spaced periods, such as years or quarters in business applications. To accomplish this, the data must be smoothed, or the random variance of the data must be removed in order to reveal trends in the data. There are multiple ways to accomplish this. Single moving average Single moving average methods use a smaller, fixed-size window of recent past data rather than the entire data set, decreasing the error associated with taking a single average over all of the data. Centered moving average Centered moving average methods build on single moving averages by centering each average on the median-numbered observation of its window. However, as the median-numbered observation is difficult to identify in an even-numbered window, this method works better with odd-numbered windows than even ones. Predictive modeling Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze a relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. The models can analyze both historical and current data and generate a model in order to predict potential future outcomes. Regardless of the methodology used, in general, the process of creating predictive models involves the same steps. First, it is necessary to determine the project objectives and desired outcomes and translate these into predictive analytic objectives and tasks. Then, analyze the source data to determine the most appropriate data and model building approach (models are only as useful as the applicable data used to build them). Select and transform the data in order to create models. Create and test models in order to evaluate if they are valid and will be able to meet project goals and metrics. Apply the model's results to appropriate business processes (identifying patterns in the data doesn't necessarily mean a business will understand how to take advantage or capitalize on it). Afterward, manage and maintain models in order to standardize and improve performance (demand will increase for model management in order to meet new compliance regulations). Regression analysis Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions.
Linear regression In linear regression, a plot is constructed with the previous values of the dependent variable plotted on the Y-axis and the independent variable that is being analyzed plotted on the X-axis. A regression line is then constructed by a statistical program representing the relationship between the independent and dependent variables, which can be used to predict values of the dependent variable based only on the independent variable. With the regression line, the program also shows a slope-intercept equation for the line which includes an addition for the error term of the regression, where the higher the value of the error term the less precise the regression model is. In order to decrease the value of the error term, other independent variables are introduced to the model, and similar analyses are performed on these independent variables. Applications Analytical Review and Conditional Expectations in Auditing An important aspect of auditing includes analytical review. In analytical review, the reasonableness of reported account balances being investigated is determined. Auditors accomplish this process through predictive modeling to form predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods, specifically through the Statistical Technique for Analytical Review (STAR) methods. The ARIMA method for analytical review uses time-series analysis on past audited balances in order to create the conditional expectations. These conditional expectations are then compared to the actual balances reported on the audited account in order to determine how close the reported balances are to the expectations. If the reported balances are close to the expectations, the accounts are not audited further. If the reported balances are very different from the expectations, there is a higher possibility of a material accounting error and a further audit is conducted. Regression analysis methods are deployed in a similar way, except the regression model used assumes the availability of only one independent variable. The materiality of the independent variable contributing to the audited account balances is determined using past account balances along with present structural data. Materiality is the importance of an independent variable in its relationship to the dependent variable. In this case, the dependent variable is the account balance. Through this the most important independent variable is used in order to create the conditional expectation and, similar to the ARIMA method, the conditional expectation is then compared to the account balance reported and a decision is made based on the closeness of the two balances. The STAR methods operate using regression analysis, and fall into two approaches. The first is the STAR monthly balance approach, in which the conditional expectations made and the regression analysis used are both tied to the single month being audited. The other method is the STAR annual balance approach, which operates on a larger scale by basing the conditional expectations and regression analysis on one year being audited. Besides the difference in the time being audited, both methods operate in the same way, by comparing expected and reported balances to determine which accounts to further investigate.
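The following sketch illustrates the regression-style analytical review described above in miniature (it is not the proprietary STAR procedure; the balances, the driver variable, and the 10% materiality threshold are invented purely for demonstration):

```python
import numpy as np

# Hypothetical past audited monthly balances and one explanatory driver
# (e.g. sales volume); both series are made-up illustration data.
driver   = np.array([102.0, 110.0, 98.0, 120.0, 115.0, 108.0])
balances = np.array([ 51.0,  55.5, 49.0,  60.5,  57.0,  54.5])

# Least-squares fit of balance ~ a * driver + b on the audited history.
a, b = np.polyfit(driver, balances, deg=1)

# Conditional expectation for the month under audit.
driver_now = 112.0
expected = a * driver_now + b

reported = 71.0   # balance actually reported on the account

# Flag the account for further audit work when the deviation exceeds a
# chosen materiality threshold (here, arbitrarily, 10% of the expectation).
if abs(reported - expected) > 0.10 * expected:
    print(f"expected {expected:.1f}, reported {reported:.1f}: investigate further")
else:
    print(f"expected {expected:.1f}, reported {reported:.1f}: within expectation")
```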
Business Value As we move into a world of technological advances where more and more data is created and stored digitally, businesses are looking for ways to take advantage of this opportunity and use this information to help generate profits. Predictive analytics can provide many benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms. In a study conducted by IDC Analyze the Future, Dan Vesset and Henry D. Morris explain how an asset management firm used predictive analytics to develop a better marketing campaign. They went from a mass marketing approach to a customer-centric approach, where instead of sending the same offer to each customer, they would personalize each offer based on their customer. Predictive analytics was used to predict the likelihood that a possible customer would accept a personalized offer. Due to the marketing campaign and predictive analytics, the firm's acceptance rate skyrocketed, with three times the number of people accepting their personalized offers. Technological advances in predictive analytics have increased its value to firms. One technological advancement is more powerful computers, and with this, predictive analytics has become able to create forecasts on large data sets much faster. With the increased computing power also comes more data and applications, meaning a wider array of inputs to use with predictive analytics. Another technological advance includes a more user-friendly interface, lowering the barrier to entry and reducing the amount of training required for employees to utilize the software and applications effectively. Due to these advancements, many more corporations are adopting predictive analytics and seeing the benefits in employee efficiency and effectiveness, as well as profits. Cash-flow Prediction ARIMA univariate and multivariate models can be used in forecasting a company's future cash flows, with their equations and calculations based on the past values of certain factors contributing to cash flows. Using time-series analysis, the values of these factors can be analyzed and extrapolated to predict the future cash flows for a company. For the univariate models, past values of cash flows are the only factor used in the prediction. Meanwhile, the multivariate models use multiple factors related to accrual data, such as operating income before depreciation. Another model used in predicting cash-flows was developed in 1998 and is known as the Dechow, Kothari, and Watts model, or DKW (1998). DKW (1998) uses regression analysis in order to determine the relationship between multiple variables and cash flows. Through this method, the model found that cash-flow changes and accruals are negatively related, specifically through current earnings, and the model uses this relationship to predict the cash flows for the next period. The DKW (1998) model derives this relationship through the relationships of accruals and cash flows to accounts payable and receivable, along with inventory. Child protection Some child welfare agencies have started using predictive analytics to flag high-risk cases. For example, in Hillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population. Predicting outcomes of legal decisions The prediction of the outcomes of judicial decisions can be done by AI programs.
These programs can be used as assistive tools for professions in this industry. Portfolio, product or economy-level prediction Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see above). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power. Underwriting Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default. Predictive analytics can be used to mitigate moral hazard and prevent accidents from occurring. See also Actuarial science Artificial intelligence in healthcare Analytical procedures (finance auditing) Big data Computational sociology Criminal Reduction Utilising Statistical History Decision management Disease surveillance Learning analytics Odds algorithm Pattern recognition Predictive inference Predictive policing Social media analytics References Further reading Financial crime prevention Statistical analysis Business intelligence Actuarial science analytics Types of analytics Management cybernetics
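As a concrete illustration of the time-series techniques discussed earlier in this article (moving-average smoothing and ARIMA forecasting), here is a minimal sketch assuming Python with pandas and statsmodels; the series, the smoothing window, and the ARIMA order (1, 1, 1) are all arbitrary choices for demonstration:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Made-up quarterly series (trend plus noise), purely for illustration.
index = pd.date_range("2020-01-01", periods=16, freq="QS")
series = pd.Series(
    100 + 2.5 * np.arange(16) + np.random.default_rng(0).normal(0, 3, 16),
    index=index,
)

# Smoothing step: a single moving average over a four-quarter window.
smoothed = series.rolling(window=4).mean()

# Fit an ARIMA(1, 1, 1) model; in practice the order is chosen by model
# selection rather than fixed in advance.
fit = ARIMA(series, order=(1, 1, 1)).fit()

# Forecast the next four quarters.
print(fit.forecast(steps=4))
```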
Predictive analytics
Mathematics,Technology
2,832
36,943,036
https://en.wikipedia.org/wiki/Tosylamide/formaldehyde%20resin
Tosylamide/formaldehyde resin (TSFR) is a polymeric resin used as a plasticizer and secondary film-forming agent in nail polishes. While it is still in use, its use has been diminishing in favor of hypoallergenic alternatives, due to the prevalence of reactions causing allergic contact dermatitis of the eyelids, face, and neck. Production TSFR is produced by condensing tosylamide (often impure) with formaldehyde. It consists of a mixture of short oligomers terminated by various end groups. Allergenicity Nail polishes containing TSFR were first introduced in 1939, becoming a major cause of allergic contact dermatitis of the eyelids, face, and neck. Allergic responses are caused by the water-soluble contaminants 5-tosyl-1,3,5-dioxazinane and 3,5-ditosyl-1,3,5-oxadiazinane, rather than formaldehyde, which is only present in trace quantities (<0.5%). Since the early 2010s, the prevalence of allergic reactions to TSFR has decreased, due to the widespread use of hypoallergenic alternatives such as tosylamide/epoxy resin (first introduced in the late 1980s), cellulose acetate butyrate, and polyester resins (e.g. those based on isophthalic acid and trimellitic anhydride). References Synthetic resins P-Tosyl compounds Allergology
Tosylamide/formaldehyde resin
Chemistry
327
14,705,543
https://en.wikipedia.org/wiki/Civic%20intelligence
Civic intelligence is an "intelligence" that is devoted to addressing public or civic issues. The term has been applied to individuals and, more commonly, to collective bodies, like organizations, institutions, or societies. Civic intelligence can be used in politics by groups of people who are trying to achieve a common goal. Social movements and political engagement in history might have been partly involved with collective thinking and civic intelligence. Education, in its multiple forms, has helped some countries to increase political awareness and engagement by amplifying the civic intelligence of collaborative groups. Increasingly, artificial intelligence and social media, modern innovations of society, are being used by many political entities and societies to tackle problems in politics, the economy, and society at large. The concept Like the term social capital, civic intelligence has been used independently by several people since the beginning of the 20th century. Although there has been little or no direct contact between the various authors, the different meanings associated with the term are generally complementary to each other. The first usage identified was made in 1902 by Samuel T. Dutton, Superintendent of Teachers College Schools, on the occasion of the dedication of the Horace Mann School, when he noted that "increasing civic intelligence" is a "true purpose of education in this country." More recently, in 1985, David Matthews, president of the Kettering Foundation, wrote an article entitled Civic Intelligence in which he discussed the decline of civic engagement in the United States. A still more recent version is Douglas Schuler's "Cultivating Society's Civic Intelligence: Patterns for a New 'World Brain'". In Schuler's version, civic intelligence is applied to groups of people because that is the level where public opinion is formed and decisions are made or at least influenced. It applies to groups, formal or informal, who are working towards civic goals such as environmental amelioration or non-violence among people. This version is related to many other concepts that are currently receiving a great deal of attention including collective intelligence, civic engagement, participatory democracy, emergence, new social movements, collaborative problem-solving, and Web 2.0. When Schuler developed the Liberating Voices pattern language for communication revolution, he made civic intelligence the first of 136 patterns. Civic intelligence is similar to John Dewey's "cooperative intelligence" or the "democratic faith" that asserts that "each individual has something to contribute, and the value of each contribution can be assessed only as it entered into the final pooled intelligence constituted by the contributions of all". Civic intelligence is implicitly invoked by the subtitle of Jared Diamond's 2004 book, Collapse: Why Some Societies Choose to Fail or Succeed, and by the question posed in Thomas Homer-Dixon's 2000 book Ingenuity Gap: How Can We Solve the Problems of the Future?, which suggests that civic intelligence will be needed if humankind is to stave off problems related to climate change and other potentially catastrophic occurrences. With these meanings, civic intelligence is less a phenomenon to be studied and more of a dynamic process or tool to be shaped and wielded by individuals or groups. Civic intelligence, according to this logic, can affect how society is built and how groups or individuals can utilize it as a tool for collective thinking or action.
Civic intelligence sometimes involves large groups of people, but other times it involves only a few individuals. Civic intelligence might be more evident in smaller groups than in bigger groups, due to more intimate interactions and group dynamics. Robert Putnam, who is largely responsible for the widespread consideration of "social capital", has written that social innovation often occurs in response to social needs. This resonates with George Basalla's findings related to technological innovation, which simultaneously facilitates and responds to social innovation. The concept of "civic intelligence," an example of social innovation, is a response to a perceived need. The reception that it receives or doesn't receive will be in proportion to the need others perceive for it. Thus, social needs serve as causes for social innovation and collective civic intelligence. Civic intelligence focuses on the role of civil society and the public for several reasons. At a minimum, the public's input is necessary to ratify important decisions made by business or government. Beyond that, however, civil society has originated and provided the leadership for a number of vital social movements. Any inquiry into the nature of civic intelligence is also collaborative and participatory. Civic intelligence is inherently multi-disciplinary and open-ended. Cognitive scientists address some of these issues in the study of "distributed cognition." Social scientists study aspects of it with their work on group dynamics, democratic theory, social systems, and many other subfields. The concept is important in business literature ("organizational learning") and in the study of "epistemic communities" (scientific research communities, notably). Civic intelligence and politics Politically, civic intelligence brings people together to form collective thoughts or ideas to solve political problems. Historically, Jane Addams was an activist who reformed Chicago's communities in terms of housing immigrants, hosting lecture events on current issues, building the first public playground, and conducting research on cultural and political elements of communities around her. She is just one example of how civic intelligence can influence society. Historical movements in America such as those related to human rights, the environment, and economic equity have been started by ordinary citizens, not by governments or businesses. To achieve changes in these topics, people of different backgrounds come together to solve both local and global issues. Another example of civic intelligence is how governments in 2015 came together in Paris to formulate a plan to curb greenhouse gas emissions and alleviate some effects of global warming. Politically, no atlas of civic intelligence exists, yet the quantity and quality of examples worldwide are enormous. While a comprehensive "atlas" is not necessarily a goal, people are currently developing online resources to record at least some small percentage of these efforts. The rise in the number of transnational advocacy networks, the coordinated worldwide demonstrations protesting the invasion of Iraq, and the World Social Forums that provided "free space" for thousands of activists from around the world, all support the idea that civic intelligence is growing. Although smaller in scope, efforts like the work of the Friends of Nature group to create a "Green Map" of Beijing are also notable.
Political engagement of citizens sometimes comes from the collective intelligence of engaging local communities through political education. Traditional examples of political engagement include voting, discussing issues with neighbors and friends, working for a political campaign, attending rallies, forming political action groups, etc. Today, social and economic scientists such as Jason Corburn and Elinor Ostrom continue to analyze how people come together to achieve collective goals such as sharing natural resources, combating diseases, formulating political action plans, and preserving the natural environment. The author of one study suggests that it might be helpful for educational institutions such as colleges or even high schools to educate students on the importance of civic intelligence in politics, so that better choices can be made when tackling societal issues through a collective citizen intelligence. Harry C. Boyte, in an article he wrote, argues that schools serve as a sort of "free space" for students to engage in community engagement efforts as described above. Schools, according to Boyte, empower people to take actions in their communities, thus rallying increasing numbers of people to learn about politics and form political opinions. He argues that this chain reaction is what then leads to civic intelligence and the collective effort to solve specific problems in local communities. One study shows that citizens who are more informed and more attentive to the world of politics around them are more politically engaged at both the local and national levels. One study, aggregating the results of 70 articles about political awareness, finds that political awareness is important in the onset of citizen participation and the voicing of opinion. In recent years, there has been a shift in how citizens stay informed and become attentive to the political world. Although traditional political engagement methods are still being used by most individuals, particularly older people, there is a trend shifting towards social media and the internet in terms of political engagement and civic intelligence. Economics and civic engagement Civic intelligence is involved in economic policymaking and decision-making around the world. According to one article, community members in Olympia, Washington worked with local administrations and experts on affordable housing improvements in the region. This collaboration utilized the tool of civic intelligence. In addition, the article argues that nonprofit organizations can facilitate local citizen participation in discussions about economic issues such as public housing, wage rates, etc. In Europe, according to RSA's report on Citizens' Economic Council, democratic participation and discussions have positive impacts on economic issues in society such as poverty, housing situations, the wage gap, healthcare, education, food availability, etc. The report emphasizes citizen empowerment, clarity and communication, and building legitimacy around economic development. The RSA's economic council is working towards promoting more crowdsourced economic ideas and increasing the expertise level of fellows who will advise policymakers on engaging citizens in the economy. The report argues that increasing citizen engagement makes governments more legitimate through increased public confidence, stakeholder engagement, and government political commitment.
Ideas such as creating citizen juries, citizen reference panels, and the devolution process of policymaking are explored in more depth in the report. Collective civic intelligence is seen as a tool by the RSA to address economic issues in society. Globally, civic participation and intelligence interact with the needs of businesses and governments. One study finds that increased local economic concentration is correlated with decreased levels of civic engagement because citizens' voices are drowned out by the needs of corporations. In this situation, governments overvalue the needs of big corporations when compared to the needs of groups of individual citizens. This study points out that corporations can negatively impact civic intelligence if citizens are not given enough freedom to voice their opinions regarding economic issues. The study shows that the US has faced civic disengagement in the past three decades due to the monopolization of opinion by corporations. On the other hand, if a government supports local capitalism and civic engagement equally, there might be beneficial socioeconomic outcomes such as more income equality, less poverty, and less unemployment. The article adds that in a period of global development, local forces of civic intelligence and innovation will likely benefit citizens' lives and distinguish one region from another in terms of socioeconomic status. The concept of civic health is introduced by one study as a key component of the wellbeing of a local or national economy. According to the article, civic engagement can increase citizens' professional employment skills, foster a sense of trust in communities, and allow a greater amount of community investment from citizens themselves. Artificial intelligence One recent prominent example of civic intelligence in the modern world is the creation and improvement of artificial intelligence. According to one article, AI enables people to propose solutions, communicate with each other more effectively, obtain data for planning, and tackle societal issues from across the world. In 2018, at the second annual AI for Good Global summit, industry leaders, policymakers, research scientists, and AI enthusiasts all came together to formulate plans and ideas regarding how to use artificial intelligence to solve modern societal issues, including political problems in countries of different backgrounds. The summit proposed ideas regarding how AI can benefit safety, health, and city governance. The article mentions that in order for artificial intelligence to achieve effective use in society, researchers, policymakers, community members, and technology companies all need to work together to improve artificial intelligence. With this logic, it takes coordinated civic intelligence to make artificial intelligence work. There are some shortcomings to artificial intelligence. According to one report, AI is increasingly being used by governments to limit the civil freedoms of citizens through authoritarian regimes and restrictive regulations. Technology and the use of automated systems are used by powerful governments to dismiss civic intelligence. There is also concern about losing civic intelligence and human jobs if AI were to replace many sectors of the economy and political landscapes around the world. AI has the dangerous possibility of getting out of control and replicating destructive behaviors that might be detrimental to society.
However, according to one article, if world communities work together to form international standards, improve AI regulation policies, and educate people about AI, political and civil freedom might be more easily achieved. Social media Recent shifts towards modern technology, social media, and the internet influence how civic intelligence interacts with politics in the world. New technologies expand the reach of data and information to more people, and citizens can engage with each other or the government more openly through the internet. Civic intelligence can take the form of an increased presence among groups of individuals, and the speed of civic intelligence onset is intensified as well. The internet and social media play roles in civic intelligence. Social media platforms like Facebook, Twitter, and Reddit have become popular sites for political discovery, and many people, especially younger adults, choose to engage with politics online. There are positive effects of social media on civic engagement. According to one article, social media has connected people in unprecedented ways. People now find it easier to form democratic movements, engage with each other and politicians, voice opinions, and take actions virtually. Social media has been incorporated into people's lives, and many people obtain news and other political ideas from online sources. One study explains that social media increases political participation through more direct forms of democracy and a bottom-up approach to solving political, social, or economic issues. The idea is that social media will lead people to participate politically in novel ways other than traditional actions of voting, attending rallies, and supporting candidates in real life. The study argues that this leads to new ways of enacting civic intelligence and political participation. Thus, the study points out that social media is designed to gather civic intelligence in one place, the internet. A third article featuring an Italian case study finds that civic collaboration is important in helping a healthy government function in both local and national communities. The article explains that there seem to be more individualized political actions and efforts when people choose to innovate new ways of political participation. Thus, one group's actions of political engagement might be entirely different from those of another group. However, social media also has some negative effects on civic intelligence in politics or economics. One study explains that even though social media might have increased direct citizen participation in politics and economics, it might have also opened more room for misinformation and echo chambers. More specifically, trolling, the spread of false political information, theft of personal data, and the use of bots to spread propaganda are all examples of negative consequences of the internet and social media. These negative results, according to the article, influence civic intelligence negatively because citizens have trouble distinguishing lies from truths in the political arena. Thus, civic intelligence would either be misleading or vanish altogether if a group is using false sources or misleading information. A second article points out that a filter bubble is created through group isolation as a result of group polarization. False information and deliberate deception of political agendas play a major role in forming filter bubbles of citizens.
People are conditioned to believe what they want to believe, so citizens who focus more on one-sided political news might form their own filter bubbles. Next, one research study found that Twitter increases users' political knowledge while Facebook decreases it. The study points out that different social media platforms can affect users differently in terms of political awareness and civic intelligence. Thus, social media might have uncertain political effects on civic intelligence. References Information society Collective intelligence Active citizenship
Civic intelligence
Technology
3,135
19,268,325
https://en.wikipedia.org/wiki/Beta%20Equulei
Beta Equulei, Latinized from β Equulei, is the Bayer designation for a solitary star in the northern constellation of Equuleus. It is faintly visible to the naked eye with an apparent visual magnitude of 5.16. The annual parallax shift is 11.27 mas, indicating a distance of around 289 light years from the Sun. It is drifting closer with a radial velocity of −11 km/s. This is an ordinary A-type main sequence star with a stellar classification of A3 V. It has 2.7 times the mass of the Sun and about four times the Sun's radius. The star is around 600 million years old – 93% of the way through its main sequence lifetime – and is spinning with a projected rotational velocity of 58 km/s. It is radiating 78 times the luminosity of the Sun from its photosphere at an effective temperature of about 9,000 K. The star shows an infrared excess, indicating the presence of a dusty debris disk. The mean temperature of the dust is 85 K, indicating that the semimajor axis of the disk's orbit is 104 AU. β Equulei has four optical companions. They are not physically associated with the star described above. References External links Equuleus A-type main-sequence stars Equulei, Beta Equulei, 10 203562 105570 BD+06 4811 8178
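As a back-of-the-envelope check on the figures quoted above (a sketch using the standard parallax–distance relation d[pc] = 1/p[arcsec]; only the 11.27 mas parallax comes from the article):

```python
# Distance from annual parallax: d (parsecs) = 1 / p (arcseconds).
parallax_mas = 11.27
d_pc = 1000.0 / parallax_mas      # ~88.7 parsecs
d_ly = d_pc * 3.2616              # 1 parsec = 3.2616 light years
print(f"{d_ly:.0f} light years")  # ~289, matching the quoted distance
```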
Beta Equulei
Astronomy
296
19,180,599
https://en.wikipedia.org/wiki/Lacto-2%20RNA%20motif
The lacto-2 RNA motif is an RNA structure that is conserved amongst bacteria within the order Lactobacillales. The motif consists of a stem-loop whose stem is interrupted by many internal loops and bulges. Nucleotide identities in many places are conserved, and one internal loop in particular is highly conserved. As lacto-2 RNAs are not consistently located in 5′ UTRs, they are presumed to correspond to non-coding RNAs. However, most (80%) of the RNAs are in a position that may correspond to the 5′ UTR, so it is not inconceivable that the RNA has a role as a cis-regulatory element. Many lacto-2 RNAs are present in operons that encode tRNAs and rRNAs, and many are adjacent to genes encoding protein subunits of the ribosome, although they are not necessarily in the same operon as these protein-coding genes. Lacto-2 RNAs also have a weak association with genes involved in nucleotide biosynthesis and transport, including several independent genes within the de-novo purine biosynthesis pathway and some in pyrimidine biosynthesis. References Non-coding RNA
Lacto-2 RNA motif
Chemistry
257
6,234,521
https://en.wikipedia.org/wiki/Building%20performance
Building performance is an attribute of a building that expresses how well that building carries out its functions. It may also relate to the performance of the building construction process. Categories of building performance are quality (how well the building fulfills its functions), resource savings (how much of a particular resource is needed to fulfill its functions) and workload capacity (how much the building can do). The performance of a building depends on the response of the building to an external load or shock. Building performance plays an important role in architecture, building services engineering, building regulation, architectural engineering and construction management. Furthermore, improving building performance (particularly energy efficiency) is important for addressing climate change, since buildings account for 30% of global energy consumption, resulting in 27% of global greenhouse gas emissions. Prominent building performance aspects are energy efficiency, occupant comfort, indoor air quality and daylighting. Background Building performance has been of interest to humans since the very first shelters were built to protect us from the weather, natural enemies and other dangers. Initially, design and performance were managed by craftsmen who combined their expertise in both domains. More formal approaches to building performance appeared in the 1970s and 1980s, with seminal works being the book on Building Performance and CIB Report 64. Further progress on building performance studies took place in parallel with the development of building science as a discipline, and with the introduction of personal computing (especially computer simulation) in the field; for a good overview of the role of simulation in building design see the chapter by Augenbroe. A more general overview that also includes physical measurement, expert judgement and stakeholder evaluation is presented in the book Building Performance Analysis. While energy efficiency, thermal comfort, indoor air quality and (day)lighting are very prominent in the debate on building performance, there is a much longer list of building performance aspects that includes things like resistance against burglary, flexibility for change of use, and many others; for an overview see the building performance analysis platform website in the external links below. Building performance standards There are several different building performance standards widely used for designing building codes and energy-efficiency certifications. For instance, the standards produced by ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) and the IECC (International Energy Conservation Code) have been widely used to inform local building codes and energy-efficiency certification programs, such as Passive House, Energy Star, and LEED. Building performance standards include specifications on the building envelope (which includes the windows, walls, roofs, and foundation), the HVAC system, electric lighting, hot water consumption, and home appliances, among others. See also Building energy simulation Ecological design Energy audit Environmental impact assessment Green retrofit Sociology of architecture Sustainable architecture Sustainable design Weatherization References External links ASHRAE - measuring commercial building performance Global Buildings Performance Network BPI Building Performance Institute - U.S. organization setting home performance technical standards Building Performance Association - U.S.
trade association of home performance contractors and others promoting performance-based energy retrofits. Building Performance Journal - Home performance articles. Platform for discussion of theory on building performance - Building Performance Analysis book companion website Building engineering Energy conservation
Building performance
Engineering
639
28,405,015
https://en.wikipedia.org/wiki/Data%20Processing%20Iran%20Co.
Data Processing Iran Company (DPI) (Dadheperdazi-ye Iran) is a computer, technology and IT consulting corporation headquartered in Tehran, Iran. DPI is currently the largest technology provider in Iran. DPI manufactures and sells computer hardware and software (with a focus on the latter), and offers infrastructure services, hosting services, and consulting services in areas ranging from mainframe computers to nanotechnology. The company also offers a series of Internet-related services, namely dedicated servers; colocation services; Web hosting services, such as shared hosting, shared mail, DNS recording, and domain registration services; and managed services, including network services, security services, managed application services, storage and backup services, monitoring and reporting, and professional services. History DPI was established in 1959 as a regional branch for the IBM corporation. The company operated as a subsidiary until 1981, when IBM's operations in Iran were ceded to the Iranian government. In 2001, DPI became a private company, listed on the Tehran Stock Exchange. Over the company's history, DPI has signed numerous technology-sharing agreements with other software companies, including Mindscape, Dataproducts and Hypercom. See also Communications in Iran References Former IBM subsidiaries Computer companies of Iran Companies listed on the Tehran Stock Exchange Cloud computing providers Computer hardware companies Computer storage companies Display technology companies Iranian brands Computer companies established in 1957 Companies based in Tehran Iranian companies established in 1957
Data Processing Iran Co.
Technology
293
1,358,453
https://en.wikipedia.org/wiki/Astrophysical%20jet
An astrophysical jet is an astronomical phenomenon where outflows of ionised matter are emitted as extended beams along the axis of rotation. When this greatly accelerated matter in the beam approaches the speed of light, astrophysical jets become relativistic jets as they show effects from special relativity. The formation and powering of astrophysical jets are highly complex phenomena that are associated with many types of high-energy astronomical sources. They likely arise from dynamic interactions within accretion disks, whose active processes are commonly connected with compact central objects such as black holes, neutron stars or pulsars. One explanation is that tangled magnetic fields are organised to aim two diametrically opposing beams away from the central source by angles only several degrees wide. Jets may also be influenced by a general relativity effect known as frame-dragging. Most of the largest and most active jets are created by supermassive black holes (SMBH) in the centre of active galaxies such as quasars and radio galaxies or within galaxy clusters. Such jets can exceed millions of parsecs in length. Other astronomical objects that contain jets include cataclysmic variable stars, X-ray binaries and gamma-ray bursts (GRB). Jets on a much smaller scale (~parsecs) may be found in star forming regions, including T Tauri stars and Herbig–Haro objects; these objects are partially formed by the interaction of jets with the interstellar medium. Bipolar outflows may also be associated with protostars, or with evolved post-AGB stars, planetary nebulae and bipolar nebulae. Relativistic jets Relativistic jets are beams of ionised matter accelerated close to the speed of light. Most have been observationally associated with central black holes of some active galaxies, radio galaxies or quasars, and also with galactic stellar black holes, neutron stars or pulsars. Beam lengths may extend between several thousand, hundreds of thousands or millions of parsecs. Jet velocities when approaching the speed of light show significant effects of the special theory of relativity; for example, relativistic beaming that changes the apparent beam brightness. Massive central black holes in galaxies have the most powerful jets, but their structure and behaviours are similar to those of smaller galactic neutron stars and black holes. These smaller galactic systems are often called microquasars and show a large range of velocities. The jet of SS 433, for example, has a mean velocity of 0.26c. Relativistic jet formation may also explain observed gamma-ray bursts, which have the most relativistic jets known, being ultrarelativistic. Mechanisms behind the composition of jets remain uncertain, though some studies favour models where jets are composed of an electrically neutral mixture of nuclei, electrons, and positrons, while others are consistent with jets composed of positron–electron plasma. Trace nuclei swept up in a relativistic positron–electron jet would be expected to have extremely high energy, as these heavier nuclei should attain velocity equal to the positron and electron velocity. Rotation as possible energy source Because of the enormous amount of energy needed to launch a relativistic jet, some jets are possibly powered by spinning black holes. However, the frequency of high-energy astrophysical sources with jets suggests combinations of different mechanisms indirectly identified with the energy within the associated accretion disk and X-ray emissions from the generating source.
Two early theories have been used to explain how energy can be transferred from a black hole into an astrophysical jet: Blandford–Znajek process. This theory explains the extraction of energy from magnetic fields around an accretion disk, which are dragged and twisted by the spin of the black hole. Relativistic material is then feasibly launched by the tightening of the field lines. Penrose mechanism. Here energy is extracted from a rotating black hole by frame dragging, which was later theoretically proven by Reva Kay Williams to be able to extract relativistic particle energy and momentum, and subsequently shown to be a possible mechanism for jet formation. This effect includes using general relativistic gravitomagnetism. Relativistic jets from neutron stars Jets may also be observed from spinning neutron stars. An example is pulsar IGR J11014-6103, which has the largest jet so far observed in the Milky Way, and whose velocity is estimated at 80% the speed of light (0.8c). X-ray observations have been obtained, but there is no detected radio signature nor accretion disk. Initially, this pulsar was presumed to be rapidly spinning, but later measurements indicate the spin rate is only 15.9 Hz. Such a slow spin rate and lack of accretion material suggest the jet is neither rotation nor accretion powered, though it appears aligned with the pulsar rotation axis and perpendicular to the pulsar's true motion. See also Disk wind – a slower wide-angle outflow, often occurring together with a jet Accretion disk Bipolar outflow Blandford–Znajek process Herbig–Haro object Penrose process CGCG 049-033 – an elliptical galaxy located 600 million light-years from Earth, known for having the longest galactic jet discovered Gamma-ray burst Solar jet References External links NASA – Ask an Astrophysicist: Black Hole Bipolar Jets SPACE.com – Twisted Physics: How Black Holes Spout Off Hubble Video Shows Shock Collision inside Black Hole Jet (Article) Space plasmas Black holes Jet, Astrophysical Concepts in stellar astronomy
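To illustrate the special-relativistic effects mentioned above, here is a small sketch using the standard textbook formula for the apparent transverse speed of a jet component, β_app = β sin θ / (1 − β cos θ); the speeds and viewing angle below are arbitrary assumptions, except for the ~0.26c mean velocity of SS 433 quoted in the article:

```python
import numpy as np

def apparent_beta(beta: float, theta_deg: float) -> float:
    """Apparent transverse speed (in units of c) of a jet component moving
    at speed beta*c at angle theta (degrees) to the line of sight."""
    theta = np.radians(theta_deg)
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

# A highly relativistic jet viewed nearly head-on can *appear* superluminal.
print(apparent_beta(0.995, 10.0))   # ~8.6: an apparent speed of 8.6c
# The mildly relativistic SS 433 jet (~0.26c) never appears superluminal.
print(apparent_beta(0.26, 10.0))    # ~0.06
```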
Astrophysical jet
Physics,Astronomy
1,147
32,050,924
https://en.wikipedia.org/wiki/WSSUS%20model
The WSSUS (Wide-Sense Stationary Uncorrelated Scattering) model provides a statistical description of the transmission behavior of wireless channels. "Wide-sense stationarity" means that the second-order moments of the channel are stationary in time, i.e. they depend only on the time difference, while "uncorrelated scattering" means that channel contributions arriving at different delays τ, due to different scatterers, are mutually uncorrelated. Modelling of mobile channels as WSSUS has become popular among specialists. The model was introduced by Phillip A. Bello in 1963. A commonly used description of time-variant channels applies the set of Bello functions and the theory of stochastic processes. References Kurth, R. R.; Snyder, D. L.; Hoversten, E. V. (1969) "Detection and Estimation Theory", Massachusetts Institute of Technology, Research Laboratory of Electronics, Quarterly Progress Report, No. 93 (IX), 177–205 Primary documents Bello, Phillip A., "Characterization of randomly time-variant linear channels", IEEE Transactions on Communications Systems, vol. 11, iss. 4, pp. 360-393, December 1963. External links Wide Sense Stationary Uncorrelated Scattering at www.WirelessCommunication.NL Information theory Scattering Scattering, absorption and radiative transfer (optics) Signal processing Stochastic models Telecommunication theory Wireless Wireless networking
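A minimal simulation sketch may make the two assumptions concrete. Under uncorrelated scattering the channel is commonly realised as a tapped delay line whose taps fade independently of one another, and under wide-sense stationarity the taps' second-order statistics (here, their mean powers) do not change over time. The tap delays, powers, and Rayleigh statistics below are illustrative assumptions, not part of Bello's formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Power-delay profile: tap delays (in samples) and mean powers (linear scale).
delays = np.array([0, 2, 5])
powers = np.array([1.0, 0.5, 0.1])

n_snapshots = 100_000   # channel realisations over time

# Uncorrelated scattering: each tap is drawn independently of the others.
# Circularly symmetric complex Gaussian taps give Rayleigh-fading magnitudes.
taps = rng.normal(size=(n_snapshots, 3)) + 1j * rng.normal(size=(n_snapshots, 3))
taps *= np.sqrt(powers / 2)

# Time-variant impulse response h(t, tau) as a tapped delay line.
h = np.zeros((n_snapshots, delays.max() + 1), dtype=complex)
h[:, delays] = taps

# WSS check: the mean power per tap is time-invariant by construction.
print(np.mean(np.abs(h[:, delays]) ** 2, axis=0))   # ~ [1.0, 0.5, 0.1]
```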
WSSUS model
Physics,Chemistry,Materials_science,Mathematics,Technology,Engineering
286
733,536
https://en.wikipedia.org/wiki/Arm%20%26%20Hammer
Arm & Hammer is a brand of baking soda-based consumer products marketed by Church & Dwight, a major American manufacturer of household products. The logo of the brand depicts the ancient symbol of a muscular arm holding a hammer inside a red circle with the brand name and slogan. Originally associated solely with baking soda and washing soda, the company began to expand the brand to other products in the 1970s by using baking soda as a deodorizing ingredient. The new products included toothpaste, laundry detergent, underarm deodorant, and cat litter. History Name and logo The original use of the Arm and Hammer logo dates back to the 1860s. James A. Church, son of Dr. Austin Church, ran a spice business known as Vulcan Spice Mills. According to the company, the Arm and Hammer logo represents Vulcan, the Roman god of fire and metalworking. It is often claimed that the brand name originated with tycoon Armand Hammer; however, the Arm & Hammer brand was in use 31 years before Hammer was born. Hammer was often asked about the Church & Dwight brand when he attempted to buy the company. While unsuccessful, Hammer's Occidental Petroleum acquired enough stock for him to join the Church & Dwight board of directors in 1986. Hammer remained one of the owners of Arm & Hammer until his death in 1990. Baking soda Arm and Hammer started as John Dwight and Company in 1846, when John Dwight and Austin Church began producing sodium bicarbonate in their kitchen. They initially sold their baking soda under the Cow Brand trademark. In 1886, Austin Church retired, and his two sons went on to sell Arm and Hammer baking soda under the name Church and Co., competing with the John Dwight Company, which continued selling Cow Brand baking soda. The Church & Dwight Company was formed when the two were merged. Odor control In 1972, Arm & Hammer launched an advertising campaign promoting the idea that a box of baking soda in the refrigerator could control odors. The campaign is considered a classic of marketing, leading within a year to more than half of American refrigerators containing a box of baking soda. This claim has often been repeated since then. However, there is little evidence that it is effective in this application. Arm & Hammer further claims that the box must be replaced monthly. Armex In 1986, Arm & Hammer created the Armex brand, a line of soda blasting agents originally used to aid in the conservation-restoration of the Statue of Liberty. Gallery Industry trade cards See also List of toothpaste brands References Food product brands Brands of toothpaste Cleaning products Church & Dwight brands Products introduced in 1867
Arm & Hammer
Chemistry
523
173,522
https://en.wikipedia.org/wiki/KHTML
KHTML is a discontinued browser engine that was developed by the KDE project. It originated as the engine of the Konqueror browser in the late 1990s, but active development ceased in 2016. It was officially discontinued in 2023. Built on the KParts framework and written in C++, KHTML had relatively good support for Web standards during its prime. Engines forked from KHTML are used by most of the browsers that are widely used today, including WebKit (Safari) and Blink (Google Chrome, Chromium, Microsoft Edge, Opera, Vivaldi and Brave). History Origins KHTML was preceded by an earlier engine called khtmlw or the KDE HTML Widget, developed by Torben Weis and Martin Jones, which implemented support for HTML 3.2, HTTP 1.0, and HTML frames, but not the DOM, CSS, or JavaScript. KHTML itself came into existence on November 4, 1998, as a fork of the khtmlw library, with some slight refactoring and the addition of Unicode support and changes to support the move to Qt 2. Waldo Bastian was among those who did the work of creating that early version of KHTML. Re-write and improvement The real work on KHTML actually started between May and October 1999, with the realization that the choice facing the project was "either do a significant effort to move KHTML forward or to use Mozilla" and with adding support for JavaScript as the highest priority. So in May 1999, Lars Knoll began doing research with an eye toward implementing the DOM specification, finally announcing on August 16, 1999 that he had checked in what amounted to a complete rewrite of the KHTML library—changing KHTML to use the standard DOM as its internal document representation. That in turn allowed the beginnings of JavaScript support to be added in October 1999, followed shortly afterwards with the integration of KJS by Harri Porten. In the closing months of 1999 and the first few months of 2000, Knoll did further work with Antti Koivisto and Dirk Mueller to add CSS support and to refine and stabilize the KHTML architecture, with most of that work being completed by March 2000. Among other things, those changes enabled KHTML to become the second browser after Internet Explorer to correctly support Hebrew and Arabic and languages written right-to-left—before Mozilla had such support. KDE 2.0 was the first KDE release (on October 23, 2000) to include KHTML (as the rendering engine of the new Konqueror file and web browser, which replaced the monolithic KDE File Manager). Other modules KSVG was first developed in 2001 by Nikolas Zimmermann and Rob Buis; however, by 2003, it was decided to fork the then-current KSVG implementation into two new projects: KDOM/KSVG2 (to improve the state of DOM rendering in KHTML underneath a more formidable SVG 1.0 render state) and Kcanvas (to abstract any rendering done within khtml/ksvg2 in a single shared library, with multiple backends for it, e.g., Cairo/Qt, etc.). KSVG2 is also a part of WebKit. Sunsetting KHTML was scheduled to be removed in KDE Frameworks 6. Active development ended in 2016, with only the necessary maintenance done to keep it working with updates to Frameworks 5. It was officially discontinued in 2023.
Standards compliance The following standards are supported by the KHTML engine: HTML 4.01 HTML 5 support CSS 1 CSS 2.1 (screen and paged media) CSS 3 Selectors (fully as of KDE 3.5.6) CSS 3 Other (multiple backgrounds, box-sizing and text-shadow) PNG, MNG, JPEG, GIF graphic formats DOM 1, 2 and partially 3 ECMA-262/JavaScript 1.5 Partial Scalable Vector Graphics support Descendants KHTML and KJS were adopted by Apple in 2002 for use in the Safari web browser. Apple publishes the source code for their fork of the KHTML engine, called WebKit. In 2013, Google began development on a fork of WebKit, called Blink. See also Comparison of browser engines References External links Web Browser – the Konqueror website KHTML – KDE's HTML library – description at developer.kde.org KHTML at the KDE API Reference KHTML at the KDE git repository From KDE to WebKit: The Open Source Engine That's Here to Stay – presentation at Yahoo! office by Lars Knoll and George Staikos on December 8, 2006 (video) 1999 software Free layout engines Free software programmed in C++ KDE Frameworks KDE Platform
KHTML
Technology
1,014
9,896,434
https://en.wikipedia.org/wiki/Prokaryotic%20DNA%20replication
Prokaryotic DNA replication is the process by which a prokaryote duplicates its DNA into another copy that is passed on to daughter cells. Although it is often studied in the model organism E. coli, other bacteria show many similarities. Replication is bi-directional and originates at a single origin of replication (OriC). It consists of three steps: initiation, elongation, and termination. Initiation All cells must finish DNA replication before they can proceed to cell division. Media conditions that support fast growth in bacteria are also coupled with shorter inter-initiation times, i.e. the doubling time of fast-growing cells is shorter than that of slow-growing ones. In other words, in fast growth conditions the grandmother cell may already start replicating the DNA destined for its granddaughter cells. For the same reason, the initiation of DNA replication is highly regulated. Bacterial origins regulate orisome assembly, a nucleoprotein complex assembled on the origin that is responsible for unwinding the origin and loading all the replication machinery. In E. coli, the directions for orisome assembly are built into a short stretch of nucleotide sequence called the origin of replication (oriC), which contains multiple binding sites for the initiator protein DnaA (a protein highly conserved across the bacterial kingdom). DnaA has four domains, with each domain responsible for a specific task. There are 11 DnaA binding sites/boxes on the E. coli origin of replication, of which three boxes, R1, R2 and R4 (which share a highly conserved 9 bp consensus sequence 5'-TTATC/ACACA), are high-affinity DnaA boxes. They bind DnaA-ADP and DnaA-ATP with equal affinities, are bound by DnaA throughout most of the cell cycle, and form a scaffold on which the rest of the orisome assembles. The remaining eight DnaA boxes are low-affinity sites that preferentially bind DnaA-ATP. During initiation, DnaA bound to the high-affinity DnaA box R4 donates additional DnaA to the adjacent low-affinity site, progressively filling all the low-affinity DnaA boxes. Filling of the sites changes the origin conformation from its native state. It is hypothesized that DNA stretching by DnaA bound to the origin promotes strand separation, which allows more DnaA to bind to the unwound region. The DnaC helicase loader then interacts with the DnaA bound to the single-stranded DNA to recruit the DnaB helicase, which will continue to unwind the DNA as the DnaG primase lays down an RNA primer and the DNA polymerase III holoenzyme begins elongation. Regulation Chromosome replication in bacteria is regulated at the initiation stage. DnaA-ATP is hydrolyzed into the inactive DnaA-ADP by RIDA (Regulatory Inactivation of DnaA), and converted back to the active DnaA-ATP form by DARS (DnaA Reactivating Sequence, which is itself regulated by Fis and IHF). However, the main source of DnaA-ATP is synthesis of new molecules. Meanwhile, several other proteins interact directly with the oriC sequence to regulate initiation, usually by inhibition. In E. coli these proteins include DiaA, SeqA, IciA, HU, and ArcA-P, but they vary across other bacterial species. A few other mechanisms in E. coli that variously regulate initiation are DDAH (datA-Dependent DnaA Hydrolysis, which is also regulated by IHF), inhibition of the dnaA gene (by the SeqA protein), and reactivation of DnaA by the lipid membrane. Elongation Once priming is complete, the DNA polymerase III holoenzyme is loaded onto the DNA and replication begins.
The catalytic mechanism of DNA polymerase III involves the use of two metal ions in the active site, and a region in the active site that can discriminate between deoxyribonucleotides and ribonucleotides. The metal ions are generally divalent cations that help the 3' OH initiate a nucleophilic attack onto the alpha phosphate of the deoxyribonucleotide and orient and stabilize the negatively charged triphosphate on the deoxyribonucleotide. Nucleophilic attack by the 3' OH on the alpha phosphate releases pyrophosphate, which is then subsequently hydrolyzed (by inorganic phosphatase) into two phosphates. This hydrolysis drives DNA synthesis to completion. Furthermore, DNA polymerase III must be able to distinguish between correctly paired bases and incorrectly paired bases. This is accomplished by distinguishing Watson-Crick base pairs through the use of an active site pocket that is complementary in shape to the structure of correctly paired nucleotides. This pocket has a tyrosine residue that is able to form van der Waals interactions with the correctly paired nucleotide. In addition, dsDNA (double-stranded DNA) in the active site has a wider major groove and shallower minor groove that permit the formation of hydrogen bonds with the third nitrogen of purine bases and the second oxygen of pyrimidine bases. Finally, the active site makes extensive hydrogen bonds with the DNA backbone. These interactions result in the DNA polymerase III closing around a correctly paired base. If a base is inserted and incorrectly paired, these interactions cannot occur due to disruptions in hydrogen bonding and van der Waals interactions. DNA is read in the 3' → 5' direction; therefore, nucleotides are synthesized (or attached to the template strand) in the 5' → 3' direction. However, one of the parent strands of DNA is 3' → 5' while the other is 5' → 3'. To solve this, replication occurs in opposite directions. Heading towards the replication fork, the leading strand is synthesized in a continuous fashion, requiring only one primer. On the other hand, the lagging strand, heading away from the replication fork, is synthesized in a series of short fragments known as Okazaki fragments, consequently requiring many primers. The RNA primers of Okazaki fragments are subsequently degraded by RNase H and DNA polymerase I (exonuclease), and the gaps (or nicks) are filled with deoxyribonucleotides and sealed by the enzyme ligase. Rate of replication The rate of DNA replication in a living cell was first measured as the rate of phage T4 DNA elongation in phage-infected E. coli. During the period of exponential DNA increase at 37 °C, the rate was 749 nucleotides per second. The mutation rate per base pair per replication during phage T4 DNA synthesis is 1.7 per 10^8. Termination Termination of DNA replication in E. coli is completed through the use of termination sequences and the Tus protein. These sequences allow the two replication forks to pass through in one direction but not the other. DNA replication initially produces two catenated or linked circular DNA duplexes, each comprising one parental strand and one newly synthesised strand (by nature of semiconservative replication). This catenation can be visualised as two interlinked rings which cannot be separated. Topoisomerase 2 in E. coli
unlinks or decatenates the two circular DNA duplexes by breaking the phosphodiester bonds between two successive nucleotides of either the parent DNA or the newly formed DNA; its ligating activity then reseals the broken strand, and so the two separate DNA molecules are formed. Other prokaryotic replication models Theta-type replication has already been described above. There are other types of prokaryotic replication, such as rolling circle replication and D-loop replication. Rolling circle replication This is seen in bacterial conjugation, where the circular template DNA rotates while the new strand develops around it. When conjugation is initiated by a signal, the relaxase enzyme creates a nick in one of the strands of the conjugative plasmid at the oriT. Relaxase may work alone or in a complex of over a dozen proteins known collectively as a relaxosome. In the F-plasmid system the relaxase enzyme is called TraI and the relaxosome consists of TraI, TraY, TraM and the integrated host factor IHF. The nicked strand, or T-strand, is then unwound from the unbroken strand and transferred to the recipient cell in a 5'-terminus to 3'-terminus direction. The remaining strand is replicated either independently of conjugative action (vegetative replication beginning at the oriV) or in concert with conjugation (conjugative replication similar to the rolling circle replication of lambda phage). Conjugative replication may require a second nick before successful transfer can occur. A recent report claims to have inhibited conjugation with chemicals that mimic an intermediate step of this second nicking event. D-loop replication D-loop replication is mostly seen in organellar DNA, where a triple-stranded structure called a displacement loop is formed. References DNA replication
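As a rough worked example of the elongation-rate figure quoted above, one can estimate how long a complete round of chromosome replication takes when two forks proceed bidirectionally from a single origin. This is an illustrative sketch only: the genome size used (~4.6 Mb, the approximate E. coli chromosome) is an assumption not stated in this article, and the 749 nt/s figure quoted above was measured for phage T4 DNA rather than the bacterial chromosome itself.

```python
# Back-of-the-envelope replication time for a bacterial chromosome.
GENOME_BP = 4.6e6       # assumed E. coli chromosome size (base pairs)
RATE_NT_PER_S = 749     # elongation rate quoted above (phage T4 in E. coli)
N_FORKS = 2             # bidirectional replication from a single oriC

seconds = GENOME_BP / (RATE_NT_PER_S * N_FORKS)
print(f"~{seconds / 60:.0f} minutes per round of replication")
# At ~51 minutes per round, a cell doubling faster than this must run
# overlapping rounds of replication, consistent with the "grandmother
# cell" initiation behaviour described in the Initiation section.
```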
Prokaryotic DNA replication
Biology
1,887
58,458,383
https://en.wikipedia.org/wiki/Ecosystem%20collapse
An ecosystem, short for ecological system, is defined as a collection of interacting organisms within a biophysical environment. Ecosystems are never static, and are continually subject to both stabilizing and destabilizing processes. Stabilizing processes allow ecosystems to adequately respond to destabilizing changes, or perturbations, in ecological conditions, or to recover from degradation induced by them; yet, if destabilizing processes become strong enough or fast enough to cross a critical threshold within that ecosystem, often described as an ecological 'tipping point', then an ecosystem collapse (sometimes also termed ecological collapse) occurs. Ecosystem collapse does not mean total disappearance of life from the area, but it does result in the loss of the original ecosystem's defining characteristics, typically including the ecosystem services it may have provided. Collapse of an ecosystem is more often than not effectively irreversible, and even if reversal is possible, it tends to be slow and difficult. Ecosystems with low resilience may collapse even during a comparatively stable time, which then typically leads to their replacement with a more resilient system in the biosphere. However, even resilient ecosystems may disappear during times of rapid environmental change, and study of the fossil record has been able to identify how certain ecosystems went through a collapse, such as the Carboniferous rainforest collapse or the collapse of the Lake Baikal and Lake Hovsgol ecosystems during the Last Glacial Maximum. Today, the ongoing Holocene extinction is caused primarily by human impact on the environment, and the greatest biodiversity loss so far has been due to habitat degradation and fragmentation, which eventually destroys entire ecosystems if left unchecked. There have been multiple notable examples of such ecosystem collapse in the recent past, such as the collapse of the Atlantic northwest cod fishery. More are likely to occur without a change in course, since estimates show that 87% of oceans and 77% of the land surface have been altered by humanity, with 30% of global land area degraded and a global decline in ecosystem resilience. Deforestation of the Amazon rainforest is the most dramatic example of a massive, continuous ecosystem and biodiversity hotspot under immediate threat from habitat destruction through logging, and under the less visible, yet ever-growing and persistent threat from climate change. Biological conservation can help to preserve threatened species and threatened ecosystems alike. However, time is of the essence. Just as interventions to preserve a species have to occur before it falls below viable population limits, at which point an extinction debt occurs regardless of what comes after, efforts to protect ecosystems must occur in response to early warning signals, before the tipping point to a regime shift is crossed. Further, there is a substantial gap between the extent of scientific knowledge of how extinctions occur and the knowledge of how ecosystems collapse. While there have been efforts to create objective criteria for determining when an ecosystem is at risk of collapsing, they are comparatively recent, and are not yet as comprehensive. While the IUCN Red List of threatened species has existed for decades, the IUCN Red List of Ecosystems has only been in development since 2008.
Definition Ecosystem collapse has been defined as a "transformation of identity, loss of defining features, and replacement by a novel ecosystem", and involves the loss of "defining biotic or abiotic features", including the ability to sustain the species which used to be associated with that ecosystem. According to another definition, it is "a change from a baseline state beyond the point where an ecosystem has lost key defining features and functions, and is characterised by declining spatial extent, increased environmental degradation, decreases in, or loss of, key species, disruption of biotic processes, and ultimately loss of ecosystem services and functions". Ecosystem collapse has also been described as "an analogue of species extinction", and in many cases, it is irreversible, with a new ecosystem appearing instead, which may retain some characteristics of the previous ecosystem, yet has a greatly altered structure and function. There are exceptions where an ecosystem can be recovered past the point of a collapse, but by definition such a recovery will always be far more difficult than allowing a disturbed yet functioning ecosystem to recover, requiring active intervention and/or a prolonged period of time even where it is possible at all. Drivers While collapse events can occur naturally with disturbances to an ecosystem—through fires, landslides, flooding, severe weather events, disease, or species invasion—there has been a noticeable increase in human-caused disturbances over the past fifty years. The combination of environmental change and the presence of human activity is increasingly detrimental to ecosystems of all types, as unrestricted human actions often increase the risk of abrupt (and potentially irreversible) changes post-disturbance, when a system would otherwise have been able to recover. Some behaviors that induce transformation are: human intervention in the balance of local diversity (through introduction of new species or overexploitation), alterations in the chemical balance of environments through pollution, modifications of local climate or weather with anthropogenic climate change, and habitat destruction or fragmentation in terrestrial/marine systems. For instance, overgrazing was found to cause land degradation, specifically in Southern Europe, which is another driver of ecological collapse and natural landscape loss. Proper management of pastoral landscapes can mitigate the risk of desertification. Despite the strong empirical evidence and highly visible collapse-inducing disturbances, anticipating collapse is a complex problem. Collapse can happen when the ecosystem's distribution decreases below a minimal sustainable size, or when key biotic processes and features disappear due to environmental degradation or disruption of biotic interactions. These different pathways to collapse can be used as criteria for estimating the risk of ecosystem collapse. Although states of ecosystem collapse are often defined quantitatively, few studies adequately describe transitions from a pristine or original state towards collapse. Geological record Research published in 2004 demonstrated how, during the Last Glacial Maximum (LGM), alterations in the environment and climate led to a collapse of the Lake Baikal and Lake Hovsgol ecosystems, which then drove species evolution. The collapse of Hovsgol's ecosystem during the LGM brought forth a new ecosystem in Hovsgol during the Holocene, with limited species biodiversity and low levels of endemism.
That research also shows how ecosystem collapse during the LGM in Lake Hovsgol led to higher levels of diversity and higher levels of endemism as a byproduct of subsequent evolution. In the Carboniferous period, coal forests, great tropical wetlands, extended over much of Euramerica (Europe and America). This land supported towering lycopsids which fragmented and collapsed abruptly. The collapse of the rainforests during the Carboniferous has been attributed to multiple causes, including climate change and volcanism. Specifically, at this time the climate became cooler and drier, conditions that are not favourable to the growth of rainforests and much of the biodiversity within them. The sudden collapse of the terrestrial environment drove many large vascular plants, giant arthropods, and diverse amphibians extinct, allowing seed-bearing plants and amniotes to take over (although smaller relatives of the affected groups survived). Historic examples of collapsed ecosystems The Rapa Nui subtropical broadleaf forests on Easter Island, formerly dominated by an endemic palm, are considered collapsed due to the combined effects of overexploitation, climate change and introduced exotic rats. The Aral Sea was an endorheic lake between Kazakhstan and Uzbekistan. It was once considered one of the largest lakes in the world but has been shrinking since the 1960s after the rivers that fed it were diverted for large-scale irrigation. By 1997, it had declined to 10% of its original size, splitting into much smaller hypersaline lakes, while dried areas have transformed into desert steppes. The regime shift in the northern Benguela upwelling ecosystem is considered an example of ecosystem collapse in open marine environments. Prior to the 1970s sardines were the dominant vertebrate consumers, but overfishing and two adverse climatic events (the Benguela Niños of 1974 and 1984) led to an impoverished ecosystem state with a high biomass of jellyfish and pelagic goby. Another notable example is the collapse of the Grand Banks cod in the early 1990s, when overfishing reduced fish populations to 1% of their historical levels. Contemporary risk There are two tools commonly used together to assess risks to ecosystems and biodiversity: generic risk assessment protocols and stochastic simulation models. The more notable of the two tactics is the risk assessment protocol, particularly because of the IUCN Red List of Ecosystems (RLE), which is widely applicable to many ecosystems even in data-poor circumstances. However, because using this tool essentially amounts to comparing systems to a list of criteria, it is often limited in its ability to look at ecosystem decline holistically, and is thus often used in conjunction with simulation models that consider more aspects of decline, such as ecosystem dynamics, future threats, and social-ecological relationships. The IUCN RLE is a global standard that was developed to assess threats to various ecosystems on local, regional, national, and global scales, as well as to prompt conservation efforts in the face of the unparalleled decline of natural systems in the last decade. And though this effort is still in the early stages of implementation, the IUCN has a goal to assess the risk of collapse for all of the world's ecosystems by 2025. The concept of ecosystem collapse is used in the framework to establish categories of risk for ecosystems, with the category Collapsed used as the end-point of risk assessment.
Other categories of threat (Vulnerable, Endangered and Critically Endangered) are defined in terms of the probability or risk of collapse. A paper by Bland et al. suggests four aspects for defining ecosystem collapse in risk assessments: (1) qualitatively defining the initial and collapsed states; (2) describing collapse and recovery transitions; (3) identifying and selecting indicators of collapse; and (4) setting quantitative collapse thresholds. Early detection and monitoring Scientists can predict tipping points for ecosystem collapse. The most frequently used model for predicting food web collapse is called R50, which is a reliable measurement model for food web robustness. However, there are others; for example, marine ecosystem assessments can use the RAM Legacy Stock Assessment Database. In one example, 154 different marine fish species were studied to establish the relationship between pressures on fish populations (such as overfishing and climate change), these populations' traits (like growth rate), and the risk of ecosystem collapse. The measurement of "critical slowing down" (CSD) is one approach for developing early warning signals of a potential or likely onset of approaching collapse. It refers to increasingly slow recovery from perturbations (a minimal numerical sketch of such an indicator is given below). In 2020, one paper suggested that once a 'point of no return' is reached, breakdowns do not occur gradually but rapidly, and that the Amazon rainforest could shift to a savannah-type mixture of trees and grass within 50 years and the Caribbean coral reefs could collapse within 15 years once a state of collapse has been reached. Another indicated that large ecosystem disruptions will occur earlier under more intense climate change: under the high-emissions RCP8.5 scenario, ecosystems in the tropical oceans would be the first to experience abrupt disruption before 2030, with tropical forests and polar environments following by 2050. In total, 15% of ecological assemblages would have over 20% of their species abruptly disrupted under the highest levels of warming considered; in contrast, this would happen to fewer than 2% if warming were to stay at the lower end. Rainforest collapse Rainforest collapse refers to the actual past and theoretical future ecological collapse of rainforests. It may involve habitat fragmentation to the point where little rainforest biome is left, and rainforest species only survive in isolated refugia. Habitat fragmentation can be caused by roads. When humans start to cut down the trees for logging, secondary roads are created that will go unused after their primary use. Once abandoned, the plants of the rainforest will find it difficult to grow back in that area. Forest fragmentation also opens the path for illegal hunting. Species have a hard time finding a new place to settle in these fragments, causing ecological collapse. This leads to the extinction of many animals in the rainforest. A classic pattern of forest fragmentation is occurring in many rainforests, including those of the Amazon, specifically a 'fishbone' pattern formed by the development of roads into the forest. This is of great concern, not only because of the loss of a biome with many untapped resources and the wholesale death of living organisms, but also because plant and animal species extinction is known to correlate with habitat fragmentation.
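To make the critical-slowing-down indicator mentioned above concrete, here is a minimal, illustrative sketch (not a method taken from the sources cited in this article): it computes lag-1 autocorrelation in a rolling window over a synthetic time series whose recovery rate is deliberately degraded toward a tipping point. The window length, step size, and the synthetic AR(1) series are arbitrary choices.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D series."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# Synthetic system drifting toward a tipping point: an AR(1) process
# x_t = phi * x_(t-1) + noise, with phi -> 1 (slower and slower recovery).
rng = np.random.default_rng(42)
n = 2000
phi = np.linspace(0.2, 0.97, n)     # recovery rate degrades over time
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

window = 250                         # rolling-window length (arbitrary)
ac = np.array([lag1_autocorr(x[t:t + window])
               for t in range(0, n - window, 50)])
print("rolling lag-1 autocorrelation:", np.round(ac, 2))
# A sustained upward trend in this indicator is the classic
# critical-slowing-down early warning signal.
```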
In 2022, research found that more than three-quarters of the Amazon rainforest has been losing resilience since the early 2000s due to deforestation and climate change, as measured by recovery time from short-term perturbations (critical slowing down), reinforcing the theory that it is approaching a critical transition. Another study from 2022 found that tropical, arid and temperate forests are substantially losing resilience. Coral reefs A major concern for marine biologists is the collapse of coral reef ecosystems. An effect of global climate change is rising sea levels, which can lead to reef drowning or coral bleaching. Human activity, such as fishing, mining and deforestation, threatens coral reefs by affecting their niche. For example, there is a demonstrated correlation between a 30–60% loss in coral reef diversity and human activity such as sewage and/or industrial pollution. Conservation and reversal As of now there is still not much information on effective conservation or reversal methods for ecosystem collapse. Rather, there has been increased focus on the predictability of ecosystem collapse, whether prediction is possible, and whether it is productive to explore. This is likely because thorough studies of at-risk ecosystems are a more recent development and trend in ecological fields, so collapse dynamics are either too recent to observe or still emerging. Since studies are not yet long-term, conclusions about reversibility or transformation potential are often hard to draw from newer, more focused studies. See also Arctic shrinkage Ecological resilience Ecosystem services Environmental degradation Overshoot (ecology) Tipping points in the climate system References Ecosystems Biological systems IUCN Red List of Ecosystems
Ecosystem collapse
Biology
2,880
56,463,402
https://en.wikipedia.org/wiki/For%20All%20Moonkind
For All Moonkind, Inc. is a volunteer international nonprofit organization which is working with the United Nations and the international community to manage the preservation of history and human heritage in outer space. The organization believes that the lunar landing sites and items from space missions are of great value to the public and is pushing the United Nations to create rules that will protect lunar items and secure heritage sites on the Moon and other celestial bodies. Protection is necessary as many nations and companies are planning on returning to the Moon, and it is not difficult to imagine the damage an autonomous vehicle or an errant astronaut—an explorer, colonist or tourist—could do to one of the Moon landing sites, whether intentionally or unintentionally. Formed in 2017, the organization aims to work with space agencies around the world to draw up a protection plan which will be submitted to the UN Committee on the Peaceful Uses of Outer Space. The goal is to present the international community with a proposal prepared by a diverse group of space law experts, preservation law experts, scientists and engineers which takes into consideration all the necessary aspects of law, policy and science. The effort will be modeled on the United Nations Educational, Scientific and Cultural Organization's World Heritage Convention. Simonetta Di Pippo, currently the director of the United Nations Office for Outer Space Affairs, has acknowledged the work of For All Moonkind and confirmed that UNOOSA supports and facilitates international cooperation in the peaceful uses of outer space. In November 2017, the UNOOSA United Arab Emirates High Level Forum 2017 acknowledged the work of For All Moonkind and recommended that the international community should consider proclaiming universal heritage sites in outer space. In January 2018, a draft resolution considered by the UN Committee on the Peaceful Uses of Outer Space Scientific and Technical Subcommittee recommended the creation of "a universal space heritage sites programme ... with specific focus on sites of special relevance on the Moon and other celestial bodies." For All Moonkind is also working directly with private companies to preserve human heritage in outer space. German company PTScientists, which is planning to send a rover to revisit the Apollo 17 landing site, was the first private company to make a public pledge of support for For All Moonkind. In February 2018, For All Moonkind was named a Top Ten Innovator in Space in 2018 "for galvanizing agencies to preserve Moon artifacts." The honor was repeated in 2019, when the organization was recognized for its innovative "campaign to create an international agreement to preserve human artifacts in space." In May 2018, the organization announced that it is teaming up with TODAQ Financial to map heritage sites on the Moon using blockchain. In December 2018, the United Nations General Assembly granted For All Moonkind observer status, on a provisional basis, for a period of three years, pending the status of its application for consultative status with the United Nations Economic and Social Council. In spring 2019, For All Moonkind worked closely with the office of Gary Peters to develop the One Small Step Act, legislation designed to permanently protect the Apollo landing sites from intentional and unintentional disturbances by codifying existing NASA preservation recommendations.
The bipartisan bill, which was cosponsored by Senator Ted Cruz, was passed unanimously by the United States Senate on 18 July 2019. In the United States House of Representatives, it was cosponsored by Representatives Eddie Bernice Johnson, Brian Babin, Kendra Horn, Frank Lucas, Lizzie Fletcher and Brian Fitzpatrick. It was passed by the United States House of Representatives in December 2020 and became law on 31 December 2020. In October 2020, the United States and seven other countries signed the Artemis Accords. Section 9 of the Accords specifically includes the agreement to preserve outer space heritage, which the signatories consider to comprise historically significant human or robotic landing sites, artifacts, spacecraft, and other evidence of activity, and to contribute to multinational efforts to develop practices and rules to do so. This is the first time the protection of human heritage in space has ever been referenced in a multilateral agreement. As of 30 October, a total of 13 nations have signed the Accords. In March 2021, the organization revealed a first-of-its-kind digital registry of all the historic landing sites on the Moon. The For All Moonkind Moon Registry is free to all. Astronaut and second-to-last human on the Moon Harrison Schmitt called the registry a "worthy cause", while fellow astronaut and moonwalker Charles Duke said it is a "spectacular resource". Neil Armstrong biographer James Hansen calls it "an all-access pass to the history of human activity on the Moon." In March 2023, the organization formed the Institute on Space Law and Ethics, a "new nonprofit organization [that] will go beyond advocating for protecting off-world heritage sites and contemplate the ethics around some activities in space that are not fully covered in existing international law." While space ethics is a discipline that discusses the moral and ethical implications of space exploration, the Institute on Space Law and Ethics will look to address current issues in space exploration. Human heritage in outer space Space heritage has been defined as heritage related to the process of carrying out science in space; heritage related to crewed space flight/exploration; and human cultural heritage that remains off the surface of planet Earth. The field of space archaeology is the research-based study of all the various human-made items in outer space. Human heritage in outer space includes Tranquility Base (Apollo 11's lunar landing site) and the robotic and crewed sites that preceded and followed Apollo 11. This also comprises all the Luna programme vehicles, including the Luna 2 (first object) and Luna 9 (first soft-landing) missions, the Surveyor program and the Yutu rover. Human heritage in outer space also includes satellites like Vanguard 1 and Asterix-1 which, though nonoperational, remain in orbit. History The organization was founded by Tim and Michelle Hanlon in 2017.
Leadership and advisory councils For All Moonkind is an entirely volunteer endeavor with a Leadership Board and three Advisory Councils. The team includes space lawyers and policymakers, scientists and technical experts – including space archaeologists – and communications professionals from around the world. Noteworthy members include: Astronaut Col. Mike Mullane, USAF, Retired Astronaut Col. Robert C. Springer, USMC, Retired American space historian Robert Pearlman James R. Hansen, author of First Man: The Life of Neil A. Armstrong, the official biography of Neil Armstrong Space entrepreneur Rick Tumlinson See also Space archaeology References External links "Moon Registry" Catalogs Human Heritage Left Behind on Lunar Surface collectSPACE Moon Registry: Cataloging the Past ... and Future of Lunar Exploration Inside Outer Space President Signs Law Protecting Lunar Heritage Sites SpacePolicyOnline.com Protecting Human Heritage in Outer Space with Michelle L.D. Hanlon Clayming Space How Star Trek's Prime Directive is Influencing Real-Time Space Law SyFyWire Pushing the Outer Limits of Preservation with Michelle Hanlon PreserveCast Fighting to Preserve Human History on the Moon Supercluster We Need that Boot Print. Inside the Fight to Save the Moon's Historic Sites Before it's Too Late Time The Nation Celebrates the 50th Anniversary of Apollo 11 The Wall Street Journal European Space Agency Chief Urges Humanity to Protect Apollo 11, Lunokhod 1 Landing Sites Gizmodo Life After Launch: Inside the Massive Effort to Preserve NASA's Space Artifacts Digital Trends Apollo 11 Site Should be Granted Heritage Status The Guardian What is the Apollo 11 Landing Site Like Now? The Atlantic Preserving Apollo's Historic Landing Sites on the Moon WJCT Preserving Neil Armstrong's Footprints on the Moon is an Easy Decision Slate One Giant Leap for Preservation: Kent Wins Landmark Status for Boeing's Moon Buggies GeekWire America's Greatest Space Landmarks Could be at Risk Due to a Lack of Space Law Cheddar Should Neil Armstrong's Bootprints Be on the Moon Forever? The New York Times The Moon now has Hundreds of Artefacts - Should They be Protected? The Straits Times Apollo 11 Brought a Message of Peace to the Moon Snopes One Small Step: What Will the Moon Look Like in 50 Years?
CNET Apollo Astronauts Left Trash, Mementos and Experiments on the Moon Science News Historic Preservation Taken Out of This World The Commercial Dispatch Apollo 11 50th Anniversary: Who Gets to Own Moon Landing Memorabilia Vox How a Park on the Moon Could Lead to More Consensus on Space Exploration Politico Space Act Calls for Protection of Apollo 11 Landing Site Space.com 50 Yrs of Moon Landing: Let's Not Forget, or Forsake, the Lessons of the Past Business Standard Preserving Human Cultural Heritage For All Moonkind Late Night Live How do You Preserve History on the Moon? National Public Radio The Case for Protecting the Apollo Landing Areas as Heritage Sites Discover A World Heritage Site on the Moon? That's Not as Spacey as it Sounds Los Angeles Times Mapping the Moon WMFE-FM We Make Mars More Like Earth? Making New Worlds PTScientists 'Mission to the Moon' to Take Care not to Harm Apollo 17 Landing Site collectSPACE Return to the Moon, BBC Click talks to Michelle Hanlon, For All Moonkind BBC World Service, Click SpaceWatchME Interviews: Michelle L.D. Hanlon of For All Moonkind Space Watch Middle East Broadcast 2994, Michelle Hanlon The Space Show Preserving Historic Sites on the Moon Air & Space Non-profit strives to preserve moon landing sites for posterity The Westside Story A Preservation Group Wants UNESCO-Style Protection for Apollo Moon-landing Sites Fast Company Organizations established in 2017 Space advocacy organizations Historical footprints
For All Moonkind
Astronomy
2,194
32,274,109
https://en.wikipedia.org/wiki/G%C3%BCnter%20Ropohl
Günter Ropohl (14 June 1939 in Cologne, Germany – 28 January 2017) was a German philosopher of technology. Biography Günter Ropohl studied mechanical engineering and philosophy at Stuttgart University, where he was a scholar of the philosopher Max Bense. After his PhD (Dr.-Ing.) in 1970, he wrote his Habilitation thesis in philosophy and sociology at Karlsruhe University in 1978 under the supervision of Hans Lenk. His work dealt with the systems theory of "Technik" (English: technique), leading to the concept of general technology. In 1979, Ropohl became professor at the Universität Karlsruhe (TH). Soon after, in 1981, he became professor for Allgemeine Technologie (general technology) and philosophy of technology at the Johann Wolfgang Goethe-Universität in Frankfurt am Main, Germany (until 2004). In the 1980s, he visited his colleague and friend Carl Mitcham in the United States. From 1983 to 1991, i.e. during the period of the Cold War, he was course director and visiting lecturer at the Inter-University Centre Dubrovnik (Croatia). In 1988, he was invited as visiting professor at the Rochester Institute of Technology, Rochester NY. Ropohl was an honorary member of the German Engineering Association (VDI), owing to his interdisciplinary engagement with the philosophy of technology. He was co-editor of an anthology of the classics in the philosophy of technology in a Continental-European tradition. Ropohl published 15 monographs, (co-)edited another 15 books and published more than 180 articles. He died on 28 January 2017 at the age of 77. Philosophy A central concept in his work was sociotechnical systems, i.e. he regarded techniques as societal structures. Ropohl was a critic of the systems theory of Niklas Luhmann and advocated the recognition of material culture. His definition of (German) "Technik" included a) utility, b) artificiality and c) functionality. At the focus of his work is the combination of technique as artefact and action, whereas knowledge points to the meta-concept of technology. He therefore differentiated between engineering sciences and technical sciences. Ropohl was well known in German-speaking academia for his writings on the concepts of Technik and Technologie, the ethics of technology, technology assessment, professional ethics for engineers and the societal need for education towards technological literacy. He received a Festschrift with contributions from academic scholars, focusing on his work and related discourses, on both his 65th and 75th birthdays (edited by Nicole C. Karafyllis, see literature), including a complete list of his publications from the late 1960s to 2014. Selected publications in English Article on systems-theoretical approaches and morphological methods in forecasting, in: Technological Forecastings in Practice, Farnborough/Lexington, MA: Saxon House 1973, pp. 29–38. Article in Research in Philosophy and Technology, Vol. 2, ed. P. T. Durbin, Greenwich, CT: Jai Press 1979, pp. 15–52. "Information doesn't make sense", in Carl Mitcham and Alois Huning (Eds.): Philosophy and Technology II: Information Technology and Computers in Theory and Practice, Dordrecht/Boston MA 1986, pp. 63–74. "Deficiencies in Engineering Education", in P. T. Durbin (Ed.), Critical Perspectives on Nonacademic Science and Engineering, Bethlehem, PA: Lehigh University Press and London/Toronto: Associated University Presses 1991, pp. 278–295. "Knowledge Types in Technology", in: M.J. de Vries & A. Tamir (Eds.), Shaping Concepts of Technology.
From Philosophical Perspective to Mental Images, Dordrecht 1997. "Technological enlightenment as a continuation of modern thinking", in Carl Mitcham (Ed.): Research in philosophy and technology, vol. 17, Technology, ethics and culture, Greenwich CT/London: Jai Press 1998, pp. 239–248. Philosophy of socio-technical systems, in: Society for Philosophy and Technology, Spring 1999, Volume 4, Number 3, 1999. "Mixed prospects of engineering ethics". European Journal of Engineering Education, 27 (2) (2002), pp. 149–155. Monographs in German Flexible Fertigungssysteme : zur Automatisierung der Serienfertigung. Mainz: Krausskopf 1971 (= Produktionstechnik heute 1, ed. H. J. Warnecke; dissertation Universität Stuttgart 1970) Eine Systemtheorie der Technik : zur Grundlegung der Allgemeinen Technologie. München/Wien: Hanser 1979 (Habilitationsschrift Universität Karlsruhe 1978). 2nd ed. 1999, 3rd ed. Karlsruhe 2009. Die unvollkommene Technik. Frankfurt/M: Suhrkamp 1985 Technologische Aufklärung : Beiträge zur Technikphilosophie. Frankfurt/M: Suhrkamp 1991, 2nd ed. 1999 Ethik und Technikbewertung. Frankfurt/M: Suhrkamp 1996 Wie die Technik zur Vernunft kommt : Beiträge zum Paradigmenwechsel in den Technikwissenschaften. Amsterdam: G+B Fakultas 1998 Vom Wert der Technik. Stuttgart: Kreuz Verlag 2003 Sinnbausteine : Ein weltlicher Katechismus. Leipzig: Reclam 2003 Arbeits- und Techniklehre : Philosophische Beiträge zur technologischen Bildung. Berlin: Edition Sigma 2004 Kleinzeug : Satiren – Limericks – Aphorismen. Münster: LIT Verlag 2004 Allgemeine Technologie : Eine Systemtheorie der Technik. 3rd ed. of the 1979 book, Karlsruhe: Universitätsverlag 2009; http://digbib.ubka.uni-karlsruhe.de/volltexte/1000011529 Signaturen der technischen Welt : Neue Beiträge zur Technikphilosophie, Berlin/Münster: LIT Verlag 2009 Besorgnisgesellschaft, Berlin: Parodos 2014 Books about his work Karl Eugen Kurrer: The history of the theory of structures, Ernst & Sohn 2008, pp. 148–152. Nicole C. Karafyllis/Tilmann Haar (Eds.): Technikphilosophie im Aufbruch. Festschrift für Günter Ropohl. Berlin: Edition Sigma, 2004 (in German; includes a complete list of his publications from the late 1960s to 2004). Nicole C. Karafyllis (Ed.): Das Leben führen? Lebensführung zwischen Technikphilosophie und Lebensphilosophie. Für Günter Ropohl zum 75. Geburtstag. Berlin: Edition sigma, 2014 (in German, Festschrift for Ropohl's 75th birthday; the appendix continues the list of publications for 2004–2014). Elisabeth Gräb-Schmidt: Technikethik und ihre Fundamente. Dargestellt in Auseinandersetzung mit den technikethischen Ansätzen von Günter Ropohl und Walter Zimmerli. De Gruyter 2002. Friedrich Rapp: "Philosophy of Technology after twenty years: A German perspective", Society for Philosophy and Technology, 1995. External links Josef Bordat: Mensch, Natur, Handlung. Zu Ropohls Systemtheorie der Technik. 2001. References 1939 births 2017 deaths German male writers 20th-century German philosophers 21st-century German philosophers Academic staff of Goethe University Frankfurt Academic staff of the Karlsruhe Institute of Technology Writers from Cologne German philosophers of technology Rochester Institute of Technology faculty Scientists in technology assessment and policy University of Stuttgart alumni Academic staff of the University of Dubrovnik
Günter Ropohl
Technology
1,669
44,029,942
https://en.wikipedia.org/wiki/Toughie%20%28frog%29
Toughie was the last known living Rabbs' fringe-limbed treefrog. The species, scientifically known as Ecnomiohyla rabborum, is thought to be extinct, as the last specimen—Toughie—died in captivity on September 26, 2016. Captivity Toughie was captured as an adult in Panama in 2005, when researchers went on a conservation mission to rescue species from Batrachochytrium dendrobatidis, a fungus deadly to amphibians. Toughie was one of "several dozen" frogs and tadpoles of the same species to be transported back to the United States. Toughie lived at the Atlanta Botanical Garden in Georgia. At the Garden, he was placed in a special containment area called the "frogPOD", a biosecure enclosure. Visitors to the Garden are not allowed to visit the frogPOD, as it is used to house critically endangered animals. While in captivity at the Garden, Toughie sired tadpoles with a female, but none survived. After the female died, the only other known specimen in the world was a male, leaving Toughie no way of reproducing. The other male, who lived at Zoo Atlanta, was euthanized on February 17, 2012, due to health concerns. Because Toughie was brought to the Garden as an adult, his exact age was not known, but staff estimated that he was at least 12 years old. On December 15, 2014, Toughie was recorded vocalizing again. It was his first known call since being collected as an adult in 2005. Toughie died on September 26, 2016, at the Garden. Personal characteristics Toughie was given his name by Mark Mandica's son Anthony. Mark Mandica was Toughie's caretaker for many years at the Atlanta Botanical Garden. Toughie did not like to be handled. He would pinch a handler's hand in an attempt to "say 'let me go'", according to handler Leslie Phillips. She continued: "For me it is incredibly motivating working with the Rabbs' frog. Having him here is a constant reminder of what can potentially happen to other species if we don't continue the conservation work that we do here at the Atlanta Botanical Garden. Honestly, it is also nerve-racking at times working with him. It can be a challenging balance between leaving him alone as much as possible to avoid undue stress, while still providing the best possible care... He is just really cool. No other frog I have seen is quite like him. He is muscular and has giant webbed feet and big eyes ... He is a very handsome frog." Handlers tried to touch him as little as possible, but they did weigh him once a week to keep track of his health. Featured in projects In July 2013, National Geographic featured Toughie and his species in their magazine, as part of The Photo Ark project run by photographer Joel Sartore; the piece also focused on the Atlanta Botanical Garden's Amphibian Conservation Program. In 2014, Louie Psihoyos filmed Toughie for his 2015 film Racing Extinction, including footage of Sartore photographing him. To promote the film and raise awareness of the extinction crisis, a 30-story series of photographs was projected onto the side of the United Nations Building in New York City in September 2014, including a photograph of Toughie. Toughie was the subject of the 2024 song The Endling by Talia Schlanger. See also Conservation status Endling Extinction Lists of extinct animals List of recently extinct amphibians Rare species References 2016 animal deaths Ecnomiohyla Endlings Individual animals in the United States Individual frogs
Toughie (frog)
Biology
754
77,480,764
https://en.wikipedia.org/wiki/Williamson%20theorem
In the context of linear algebra and symplectic geometry, the Williamson theorem concerns the diagonalization of positive definite matrices through symplectic matrices. More precisely, given a strictly positive-definite Hermitian real matrix $M \in \mathbb{R}^{2n\times 2n}$, the theorem ensures the existence of a real symplectic matrix $S \in \mathrm{Sp}(2n,\mathbb{R})$ and a diagonal positive real matrix $D$ such that $S M S^\top = I_2 \otimes D$, where $I_2$ denotes the $2\times 2$ identity matrix. Proof The derivation of the result hinges on a few basic observations: The real matrix $A \equiv M^{-1/2} J M^{-1/2}$, with $J$ the standard symplectic form, is well-defined and skew-symmetric. Any skew-symmetric real matrix can be block-diagonalized via orthogonal real matrices, meaning there is an orthogonal $O$ such that $O A O^\top = \begin{pmatrix} 0 & \Lambda \\ -\Lambda & 0 \end{pmatrix}$, with $\Lambda$ a real positive-definite diagonal matrix containing the singular values of $A$. For any orthogonal $O$, the matrix $O M^{-1/2}$ is such that $(O M^{-1/2})\, M\, (O M^{-1/2})^\top = I$. If $O$ diagonalizes $A$ in the above sense, then $K \equiv O M^{-1/2}$ satisfies $K J K^\top = \begin{pmatrix} 0 & \Lambda \\ -\Lambda & 0 \end{pmatrix}$. Therefore, taking $S \equiv (\Lambda^{-1/2} \oplus \Lambda^{-1/2})\, O M^{-1/2}$, the matrix $S$ is also a symplectic matrix, satisfying $S J S^\top = J$ and $S M S^\top = \Lambda^{-1} \oplus \Lambda^{-1} = I_2 \otimes \Lambda^{-1}$, so that $D = \Lambda^{-1}$. See also Symplectic geometry Symplectic matrices Definite matrix References Matrices Symplectic geometry
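A quick numerical illustration of the theorem: the sketch below assumes the standard characterization that the diagonal entries of $D$ (the symplectic eigenvalues of $M$) equal the moduli of the purely imaginary eigenvalues of $JM$; the matrix size and random seed are arbitrary choices.

```python
import numpy as np

# Minimal numerical check (illustrative sketch): the symplectic
# eigenvalues of a positive-definite M are the moduli of the purely
# imaginary eigenvalues of J @ M.
n = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((2 * n, 2 * n))
M = X @ X.T + 2 * n * np.eye(2 * n)              # strictly positive definite

J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])   # standard symplectic form

d = np.sort(np.abs(np.linalg.eigvals(J @ M).imag))[::2]  # one per +/- pair
print("symplectic eigenvalues:", np.round(d, 4))

# Cross-check: J @ M is similar to the skew-symmetric matrix
# M^(1/2) J M^(1/2), whose eigenvalues come in the same +/- i*d_k pairs.
w, V = np.linalg.eigh(M)
Msqrt = V @ np.diag(np.sqrt(w)) @ V.T
A = Msqrt @ J @ Msqrt                            # skew-symmetric
d2 = np.sort(np.abs(np.linalg.eigvals(A).imag))[::2]
print("consistent:", np.allclose(d, d2))
```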
Williamson theorem
Mathematics
199
44,002,665
https://en.wikipedia.org/wiki/Japanese%20Society%20for%20Bioinformatics
The Japanese Society for Bioinformatics (JSBi) is a Japanese research society on the subjects of bioinformatics and computational biology established in 1999. The society is an affiliated regional group of the International Society for Computational Biology (ISCB), and a member of the Association of Asian Societies for Bioinformatics (AASBi). Supporting corporate members include the Japanese companies Hitachi, Fujitsu and Shionogi. Since 2001, the JSBi and Oxford University Press have awarded the Japanese Society for Bioinformatics Prize for outstanding young scientists in bioinformatics. Presidents The following people have been president of the JSBi: Minoru Kanehisa, Kyoto University (1999–2004) Satoru Miyano, The University of Tokyo (2004–2005) Yukihiro Eguchi, Mitsui Knowledge Industry Co., Ltd (2005–2006) Kenta Nakai, The University of Tokyo (2006–2008) Osamu Gotoh, Kyoto University (2008–2010) Hideo Matsuda, Osaka University (2010–2013) Kiyoshi Asai, The University of Tokyo (2013–2015) Kentaro Shimizu, The University of Tokyo (2015–2017) Kengo Kinoshita, Tohoku University (2017–2019) Wataru Iwasaki, The University of Tokyo (2019–) References External links Bioinformatics organizations Organizations established in 1999 Scientific organizations based in Japan 1999 establishments in Japan
Japanese Society for Bioinformatics
Biology
306
30,234,767
https://en.wikipedia.org/wiki/Living%20Earth%20Simulator%20Project
The Living Earth Simulator is a proposed massive computer simulation system intended to simulate the interactions of all aspects of life, human economic activity, climate, and other physical processes on the planet Earth, as part of the FuturICT project, in response to the European FP7 "Future and Emerging Technologies Flagship" initiative. The Future and Emerging Technologies 'flagship' competition offered ten years of funding totalling ~€1 billion to the winning teams; the competition attracted over 300 international teams. The FuturICT project was not selected, and thus the Living Earth Simulator was never developed. The two winners, announced in March 2013, were the Graphene and Human Brain projects. References External links FuturICT website (archived) Can we really model society? scientists think we can Numerical climate and weather models Science and technology in Europe Scientific simulation software
Living Earth Simulator Project
Technology
168
19,609,364
https://en.wikipedia.org/wiki/Q-Charlier%20polynomials
In mathematics, the q-Charlier polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Koekoek, Lesky & Swarttouw (2010) give a detailed list of their properties. Definition The polynomials are given in terms of the basic hypergeometric function by $c_n(q^{-x}; a; q) = {}_2\phi_1\!\left(q^{-n}, q^{-x}; 0; q, -\frac{q^{n+1}}{a}\right)$. References Orthogonal polynomials Q-analogs Special hypergeometric functions
Q-Charlier polynomials
Mathematics
63
248,189
https://en.wikipedia.org/wiki/Gaia%20hypothesis
The Gaia hypothesis, also known as the Gaia theory, Gaia paradigm, or the Gaia principle, proposes that living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating complex system that helps to maintain and perpetuate the conditions for life on the planet. The Gaia hypothesis was formulated by the chemist James Lovelock and co-developed by the microbiologist Lynn Margulis in the 1970s. Following the suggestion of his neighbour, the novelist William Golding, Lovelock named the hypothesis after Gaia, the primordial deity who personified the Earth in Greek mythology. In 2006, the Geological Society of London awarded Lovelock the Wollaston Medal in part for his work on the Gaia hypothesis. Topics related to the hypothesis include how the biosphere and the evolution of organisms affect the stability of global temperature, salinity of seawater, atmospheric oxygen levels, the maintenance of a hydrosphere of liquid water and other environmental variables that affect the habitability of Earth. The Gaia hypothesis was initially criticized for being teleological and against the principles of natural selection, but later refinements aligned the Gaia hypothesis with ideas from fields such as Earth system science, biogeochemistry and systems ecology. Even so, the Gaia hypothesis continues to attract criticism, and today many scientists consider it to be only weakly supported by, or at odds with, the available evidence. Overview Gaian hypotheses suggest that organisms co-evolve with their environment: that is, they "influence their abiotic environment, and that environment in turn influences the biota by Darwinian process". Lovelock (1995) gave evidence of this in his second book, Ages of Gaia, showing the evolution from the world of the early thermo-acido-philic and methanogenic bacteria towards the oxygen-enriched atmosphere today that supports more complex life. A reduced version of the hypothesis has been called "influential Gaia" in the 2002 paper "Directed Evolution of the Biosphere: Biogeochemical Selection or Gaia?" by Andrei G. Lapenis, which states that the biota influence certain aspects of the abiotic world, e.g. temperature and atmosphere. This was not the work of an individual but of a collective of Russian scientific research that was combined into this peer-reviewed publication. It describes the coevolution of life and the environment through "micro-forces" and biogeochemical processes. An example is how the activity of photosynthetic bacteria during Precambrian times completely modified the Earth's atmosphere to turn it aerobic, and thus supported the evolution of life (in particular eukaryotic life). Since barriers existed throughout the twentieth century between Russia and the rest of the world, it is only relatively recently that the early Russian scientists who introduced concepts overlapping the Gaia paradigm have become better known to the Western scientific community. These scientists include Piotr Alekseevich Kropotkin (1842–1921) (although he spent much of his professional life outside Russia), Rafail Vasil’evich Rizpolozhensky (1862 – ), Vladimir Ivanovich Vernadsky (1863–1945), and Vladimir Alexandrovich Kostitzin (1886–1963). Biologists and Earth scientists usually view the factors that stabilize the characteristics of a period as an undirected emergent property or entelechy of the system; as each individual species pursues its own self-interest, for example, their combined actions may have counterbalancing effects on environmental change.
Opponents of this view sometimes reference examples of events that resulted in dramatic change rather than stable equilibrium, such as the conversion of the Earth's atmosphere from a reducing environment to an oxygen-rich one at the end of the Archaean and the beginning of the Proterozoic periods. Less accepted versions of the hypothesis claim that changes in the biosphere are brought about through the coordination of living organisms and maintain those conditions through homeostasis. In some versions of Gaia philosophy, all lifeforms are considered part of one single living planetary being called Gaia. In this view, the atmosphere, the seas and the terrestrial crust would be results of interventions carried out by Gaia through the coevolving diversity of living organisms. The Gaia paradigm was an influence on the deep ecology movement. Details The Gaia hypothesis posits that the Earth is a self-regulating complex system involving the biosphere, the atmosphere, the hydrosphere and the pedosphere, tightly coupled as an evolving system. The hypothesis contends that this system as a whole, called Gaia, seeks a physical and chemical environment optimal for contemporary life. Gaia evolves through a cybernetic feedback system operated by the biota, leading to broad stabilization of the conditions of habitability in a full homeostasis. Many processes on the Earth's surface, essential for the conditions of life, depend on the interaction of living forms, especially microorganisms, with inorganic elements. These processes establish a global control system that regulates the Earth's surface temperature, atmosphere composition and ocean salinity, powered by the global thermodynamic disequilibrium state of the Earth system. The existence of a planetary homeostasis influenced by living forms had been observed previously in the field of biogeochemistry, and it is being investigated also in other fields like Earth system science. The originality of the Gaia hypothesis lies in the assessment that such homeostatic balance is actively pursued with the goal of keeping the optimal conditions for life, even when terrestrial or external events menace them. Regulation of global surface temperature Since life started on Earth, the energy provided by the Sun has increased by 25–30%; however, the surface temperature of the planet has remained within the levels of habitability, reaching quite regular low and high margins. Lovelock has also hypothesised that methanogens produced elevated levels of methane in the early atmosphere, giving a situation similar to that found in petrochemical smog, similar in some respects to the atmosphere on Titan. This, he suggests, helped to screen out ultraviolet light until the formation of the ozone layer, maintaining a degree of homeostasis. However, the Snowball Earth research has suggested that "oxygen shocks" and reduced methane levels led, during the Huronian, Sturtian and Marinoan/Varanger Ice Ages, to a world that very nearly became a solid "snowball". These epochs are evidence against the ability of the pre-Phanerozoic biosphere to fully self-regulate. Processing of the greenhouse gas CO2, explained below, plays a critical role in the maintenance of the Earth's temperature within the limits of habitability. The CLAW hypothesis, inspired by the Gaia hypothesis, proposes a feedback loop that operates between ocean ecosystems and the Earth's climate.
The hypothesis specifically proposes that particular phytoplankton that produce dimethyl sulfide are responsive to variations in climate forcing, and that these responses lead to a negative feedback loop that acts to stabilise the temperature of the Earth's atmosphere. Currently, the increase in the human population and the environmental impact of its activities, such as the multiplication of greenhouse gases, may cause negative feedbacks in the environment to become positive feedbacks. Lovelock has stated that this could bring extremely accelerated global warming, but he has since stated that the effects will likely occur more slowly. Daisyworld simulations In response to the criticism that the Gaia hypothesis seemingly required unrealistic group selection and cooperation between organisms, James Lovelock and Andrew Watson developed a mathematical model, Daisyworld, in which ecological competition underpinned planetary temperature regulation. Daisyworld examines the energy budget of a planet populated by two different types of plants, black daisies and white daisies, which are assumed to occupy a significant portion of the surface. The colour of the daisies influences the albedo of the planet such that black daisies absorb more light and warm the planet, while white daisies reflect more light and cool the planet. The black daisies are assumed to grow and reproduce best at a lower temperature, while the white daisies are assumed to thrive best at a higher temperature. As the temperature rises closer to the value favoured by the white daisies, the white daisies outreproduce the black daisies, leading to a larger percentage of white surface, and more sunlight is reflected, reducing the heat input and eventually cooling the planet. Conversely, as the temperature falls, the black daisies outreproduce the white daisies, absorbing more sunlight and warming the planet. The temperature will thus converge to the value at which the reproductive rates of the plants are equal. Lovelock and Watson showed that, over a limited range of conditions, this negative feedback due to competition can stabilize the planet's temperature at a value which supports life, if the energy output of the Sun changes, while a planet without life would show wide temperature changes. The percentage of white and black daisies will continually change to keep the temperature at the value at which the plants' reproductive rates are equal, allowing both life forms to thrive. It has been suggested that the results were predictable because Lovelock and Watson selected examples that produced the responses they desired. 
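The feedback loop just described lends itself to a short numerical sketch. The following is a minimal illustration, not Watson and Lovelock's original code: the constants are the illustrative values commonly quoted for their 1983 model, and the ramped Euler integration is an assumption of this sketch.

```python
SIGMA = 5.67e-8                  # Stefan-Boltzmann constant (W m^-2 K^-4)
FLUX = 917.0                     # Daisyworld solar flux at luminosity 1 (W m^-2)
ALBEDO = {"white": 0.75, "black": 0.25, "ground": 0.50}
T_OPT, WIDTH = 295.5, 3.265e-3   # optimal growth temperature (K); curve width
DEATH, Q = 0.3, 2.06e9           # daisy death rate; local-temperature coupling

def growth(temp_k):
    """Parabolic growth response, zero outside roughly 278-313 K."""
    return max(0.0, 1.0 - WIDTH * (temp_k - T_OPT) ** 2)

def sweep(luminosities, steps=20000, dt=0.01):
    """Ramp luminosity upward, carrying the daisy populations along,
    and report the equilibrium planetary temperature at each level."""
    area_w = area_b = 0.01                       # small seed populations
    for lum in luminosities:
        for _ in range(steps):
            bare = max(0.0, 1.0 - area_w - area_b)
            albedo = (area_w * ALBEDO["white"] + area_b * ALBEDO["black"]
                      + bare * ALBEDO["ground"])
            t4 = lum * FLUX * (1.0 - albedo) / SIGMA      # planetary T^4
            t_w = max(Q * (albedo - ALBEDO["white"]) + t4, 1.0) ** 0.25
            t_b = max(Q * (albedo - ALBEDO["black"]) + t4, 1.0) ** 0.25
            area_w += dt * area_w * (bare * growth(t_w) - DEATH)
            area_b += dt * area_b * (bare * growth(t_b) - DEATH)
            area_w, area_b = max(area_w, 0.001), max(area_b, 0.001)
        dead_t = (lum * FLUX * 0.5 / SIGMA) ** 0.25       # lifeless comparison
        print(f"L={lum:.2f}: alive {t4 ** 0.25 - 273.15:5.1f} C "
              f"(lifeless {dead_t - 273.15:5.1f} C), "
              f"white {area_w:.2f}, black {area_b:.2f}")

sweep([0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3])
```

Ramping the luminosity upward while carrying the populations along reproduces the defining behaviour: cover shifts from black to white daisies, and the inhabited planet's temperature stays far flatter than on the lifeless comparison planet with fixed albedo.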
Regulation of oceanic salinity Ocean salinity has been constant at about 3.5% for a very long time. Salinity stability in oceanic environments is important as most cells require a rather constant salinity and do not generally tolerate values above 5%. The constant ocean salinity was a long-standing mystery, because no process counterbalancing the salt influx from rivers was known. Recently it was suggested that salinity may also be strongly influenced by seawater circulation through hot basaltic rocks, which emerges as hot water vents on mid-ocean ridges. However, the composition of seawater is far from equilibrium, and it is difficult to explain this fact without the influence of organic processes. One suggested explanation lies in the formation of salt plains throughout Earth's history. It is hypothesized that these are created by bacterial colonies that fix ions and heavy metals during their life processes. In the biogeochemical processes of Earth, sources and sinks describe the movement of elements. The salt ions within our oceans and seas are sodium (Na+), chloride (Cl−), sulfate (SO42−), magnesium (Mg2+), calcium (Ca2+) and potassium (K+). The elements that comprise salinity do not readily change and are a conservative property of seawater. There are many mechanisms that change salinity from a particulate form to a dissolved form and back. The known sources of sodium, i.e. salts, are the weathering, erosion, and dissolution of rocks, the products of which are transported into rivers and deposited into the oceans. Kenneth J. Hsu, corresponding author of a 2001 paper, has described the Mediterranean Sea as being Gaia's "kidney". Hsu suggests the "desiccation" of the Mediterranean is evidence of a functioning Gaia "kidney". In this and earlier suggested cases, it is plate movements and physics, not biology, which performs the regulation. Earlier "kidney functions" were performed during the "deposition of the Cretaceous (South Atlantic), Jurassic (Gulf of Mexico), Permo-Triassic (Europe), Devonian (Canada), and Cambrian/Precambrian (Gondwana) saline giants." Regulation of oxygen in the atmosphere The Gaia hypothesis states that the Earth's atmospheric composition is kept at a dynamically steady state by the presence of life. The atmospheric composition provides the conditions that contemporary life has adapted to. All the atmospheric gases other than noble gases present in the atmosphere are either made by organisms or processed by them. The stability of the atmosphere of Earth is not a consequence of chemical equilibrium. Oxygen is a reactive compound, and should eventually combine with gases and minerals of the Earth's atmosphere and crust. Oxygen only began to persist in the atmosphere in small quantities about 50 million years before the start of the Great Oxygenation Event. Since the start of the Cambrian period, atmospheric oxygen concentrations have fluctuated between 15% and 40% of atmospheric volume. Traces of methane (at an amount of 100,000 tonnes produced per year) should not exist, as methane is combustible in an oxygen atmosphere. Dry air in the atmosphere of Earth contains roughly (by volume) 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.039% carbon dioxide, and small amounts of other gases including methane. Lovelock originally speculated that concentrations of oxygen above about 25% would increase the frequency of wildfires and conflagration of forests. This mechanism, however, would not raise oxygen levels if they became too low. If plants can be shown to robustly over-produce oxygen, then perhaps only the high-oxygen forest-fire regulator is necessary. Recent work on the findings of fire-caused charcoal in Carboniferous and Cretaceous coal measures, in geologic periods when O2 did exceed 25%, has supported Lovelock's contention. Processing of CO2 Gaia scientists see the participation of living organisms in the carbon cycle as one of the complex processes that maintain conditions suitable for life. 
The only significant natural source of atmospheric carbon dioxide (CO2) is volcanic activity, while the only significant removal is through the precipitation of carbonate rocks. Carbon precipitation, solution and fixation are influenced by the bacteria and plant roots in soils, where they improve gaseous circulation, or in coral reefs, where calcium carbonate is deposited as a solid on the sea floor. Calcium carbonate is used by living organisms to manufacture carbonaceous tests and shells. Once the organisms die, their shells fall. Some arrive at the bottom of shallow seas where the heat and pressure of burial, and/or the forces of plate tectonics, eventually convert them to deposits of chalk and limestone. Many of the falling shells, however, redissolve in the ocean below the carbonate compensation depth. One of these organisms is Emiliania huxleyi, an abundant coccolithophore alga which may have a role in the formation of clouds. An excess of CO2 is compensated by an increase of coccolithophorid life, increasing the amount of CO2 locked in the ocean floor. Coccolithophorids, if the CLAW hypothesis turns out to be supported (see "Regulation of global surface temperature" above), could help increase the cloud cover, hence control the surface temperature, help cool the whole planet and favor the precipitation necessary for terrestrial plants. Lately the atmospheric CO2 concentration has increased and there is some evidence that concentrations of ocean algal blooms are also increasing. Lichen and other organisms accelerate the weathering of rocks at the surface, while the decomposition of rocks also happens faster in the soil, thanks to the activity of roots, fungi, bacteria and subterranean animals. The flow of carbon dioxide from the atmosphere to the soil is therefore regulated with the help of living organisms. When CO2 levels rise in the atmosphere the temperature increases and plants grow. This growth brings higher consumption of CO2 by the plants, which process it into the soil, removing it from the atmosphere. History Precedents The idea of the Earth as an integrated whole, a living being, has a long tradition. The mythical Gaia was the primal Greek goddess personifying the Earth, the Greek version of "Mother Nature" (from Ge = Earth, and Aia = PIE grandmother), or the Earth Mother. James Lovelock gave this name to his hypothesis after a suggestion from the novelist William Golding, who was living in the same village as Lovelock at the time (Bowerchalke, Wiltshire, UK). Golding's advice was based on Gea, an alternative spelling for the name of the Greek goddess, which is used as a prefix in geology, geophysics and geochemistry. Golding later made reference to Gaia in his Nobel prize acceptance speech. In the eighteenth century, as geology consolidated as a modern science, James Hutton maintained that geological and biological processes are interlinked. Later, the naturalist and explorer Alexander von Humboldt recognized the coevolution of living organisms, climate, and Earth's crust. In the twentieth century, Vladimir Vernadsky formulated a theory of Earth's development that is now one of the foundations of ecology. Vernadsky was a Ukrainian geochemist and was one of the first scientists to recognize that the oxygen, nitrogen, and carbon dioxide in the Earth's atmosphere result from biological processes. During the 1920s he published works arguing that living organisms could reshape the planet as surely as any physical force. 
Vernadsky was a pioneer of the scientific bases for the environmental sciences. His visionary pronouncements were not widely accepted in the West, and some decades later the Gaia hypothesis received the same type of initial resistance from the scientific community. Also around the turn of the 20th century, Aldo Leopold, a pioneer in the development of modern environmental ethics and in the movement for wilderness conservation, suggested a living Earth in his biocentric or holistic ethics regarding land. Another influence on the Gaia hypothesis and the environmental movement in general came as a side effect of the Space Race between the Soviet Union and the United States of America. During the 1960s, the first humans in space could see how the Earth looked as a whole. The photograph Earthrise, taken by astronaut William Anders in 1968 during the Apollo 8 mission, became, through the overview effect, an early symbol for the global ecology movement. Formulation of the hypothesis Lovelock started defining the idea of a self-regulating Earth controlled by the community of living organisms in September 1965, while working at the Jet Propulsion Laboratory in California on methods of detecting life on Mars. The first paper to mention it was Planetary Atmospheres: Compositional and other Changes Associated with the Presence of Life, co-authored with C.E. Giffin. A main concept was that life could be detected on a planetary scale by the chemical composition of the atmosphere. According to the data gathered by the Pic du Midi observatory, planets like Mars or Venus had atmospheres in chemical equilibrium. This contrast with the Earth's atmosphere was considered to be a proof that there was no life on these planets. Lovelock formulated the Gaia hypothesis in journal articles in 1972 and 1974, followed by a popularizing 1979 book, Gaia: A new look at life on Earth. An article in the New Scientist of February 6, 1975, and a popular book-length version of the hypothesis, published in 1979 as The Quest for Gaia, began to attract scientific and critical attention. Lovelock first called it the Earth feedback hypothesis, and it was a way to explain the fact that combinations of chemicals including oxygen and methane persist in stable concentrations in the atmosphere of the Earth. Lovelock suggested detecting such combinations in other planets' atmospheres as a relatively reliable and cheap way to detect life. Later, other relationships, such as sea creatures producing sulfur and iodine in approximately the same quantities as required by land creatures, emerged and helped bolster the hypothesis. In 1971 the microbiologist Lynn Margulis joined Lovelock in the effort of fleshing out the initial hypothesis into scientifically grounded concepts, contributing her knowledge about how microbes affect the atmosphere and the different layers in the surface of the planet. The American biologist had also drawn criticism from the scientific community with her advocacy of the theory on the origin of eukaryotic organelles and her contributions to the endosymbiotic theory, which is nowadays accepted. Margulis dedicated the last of eight chapters in her book, The Symbiotic Planet, to Gaia. However, she objected to the widespread personification of Gaia and stressed that Gaia is "not an organism", but "an emergent property of interaction among organisms". She defined Gaia as "the series of interacting ecosystems that compose a single huge ecosystem at the Earth's surface. Period". 
The book's most memorable "slogan" was actually quipped by a student of Margulis's. James Lovelock called his first proposal the Gaia hypothesis but has also used the term Gaia theory. Lovelock states that the initial formulation was based on observation, but still lacked a scientific explanation. The Gaia hypothesis has since been supported by a number of scientific experiments and has provided a number of useful predictions. First Gaia conference In 1985, the first public symposium on the Gaia hypothesis, Is The Earth a Living Organism?, was held at the University of Massachusetts Amherst, August 1–6. The principal sponsor was the National Audubon Society. Speakers included James Lovelock, Lynn Margulis, George Wald, Mary Catherine Bateson, Lewis Thomas, Thomas Berry, David Abram, John Todd, Donald Michael, Christopher Bird, Michael Cohen, and William Fields. Some 500 people attended. Second Gaia conference In 1988, the climatologist Stephen Schneider organised a conference of the American Geophysical Union. The first Chapman Conference on Gaia was held in San Diego, California, on March 7, 1988. During the "philosophical foundations" session of the conference, David Abram spoke on the influence of metaphor in science, and of the Gaia hypothesis as offering a new and potentially game-changing metaphorics, while James Kirchner criticised the Gaia hypothesis for its imprecision. Kirchner claimed that Lovelock and Margulis had not presented one Gaia hypothesis, but four: CoEvolutionary Gaia: that life and the environment had evolved in a coupled way. Kirchner claimed that this was already accepted scientifically and was not new. Homeostatic Gaia: that life maintained the stability of the natural environment, and that this stability enabled life to continue to exist. Geophysical Gaia: that the Gaia hypothesis generated interest in geophysical cycles and therefore led to interesting new research in terrestrial geophysical dynamics. Optimising Gaia: that Gaia shaped the planet in a way that made it an optimal environment for life as a whole. Kirchner claimed that this was not testable and therefore was not scientific. Of Homeostatic Gaia, Kirchner recognised two alternatives. "Weak Gaia" asserted that life tends to make the environment stable for the flourishing of all life. "Strong Gaia", according to Kirchner, asserted that life tends to make the environment stable in order to enable the flourishing of all life. Strong Gaia, Kirchner claimed, was untestable and therefore not scientific. Lovelock and other Gaia-supporting scientists, however, did attempt to disprove the claim that the hypothesis is not scientific because it is impossible to test it by controlled experiment. For example, against the charge that Gaia was teleological, Lovelock and Andrew Watson offered the Daisyworld model (and its modifications, above) as evidence against most of these criticisms. Lovelock said that the Daisyworld model "demonstrates that self-regulation of the global environment can emerge from competition amongst types of life altering their local environment in different ways". Lovelock was careful to present a version of the Gaia hypothesis that made no claim that Gaia intentionally or consciously maintained the complex balance in her environment that life needed to survive. It would appear that the claim that Gaia acts "intentionally" was a statement in his popular initial book and was not meant to be taken literally. This new statement of the Gaia hypothesis was more acceptable to the scientific community. 
Most accusations of teleology ceased following this conference. Third Gaia conference By the time of the 2nd Chapman Conference on the Gaia Hypothesis, held at Valencia, Spain, on 23 June 2000, the situation had changed significantly. Rather than a discussion of the Gaian teleological views, or "types" of Gaia hypotheses, the focus was upon the specific mechanisms by which basic short-term homeostasis was maintained within a framework of significant evolutionary long-term structural change. The major questions were: "How has the global biogeochemical/climate system called Gaia changed in time? What is its history? Can Gaia maintain stability of the system at one time scale but still undergo vectorial change at longer time scales? How can the geologic record be used to examine these questions?" "What is the structure of Gaia? Are the feedbacks sufficiently strong to influence the evolution of climate? Are there parts of the system determined pragmatically by whatever disciplinary study is being undertaken at any given time or are there a set of parts that should be taken as most true for understanding Gaia as containing evolving organisms over time? What are the feedbacks among these different parts of the Gaian system, and what does the near closure of matter mean for the structure of Gaia as a global ecosystem and for the productivity of life?" "How do models of Gaian processes and phenomena relate to reality and how do they help address and understand Gaia? How do results from Daisyworld transfer to the real world? What are the main candidates for "daisies"? Does it matter for Gaia theory whether we find daisies or not? How should we be searching for daisies, and should we intensify the search? How can Gaian mechanisms be investigated using process models or global models of the climate system that include the biota and allow for chemical cycling?" In 1997, Tyler Volk argued that a Gaian system is almost inevitably produced as a result of an evolution towards far-from-equilibrium homeostatic states that maximise entropy production, and Axel Kleidon (2004) agreed, stating: "...homeostatic behavior can emerge from a state of MEP associated with the planetary albedo"; "...the resulting behavior of a symbiotic Earth at a state of MEP may well lead to near-homeostatic behavior of the Earth system on long time scales, as stated by the Gaia hypothesis". M. Staley (2002) has similarly proposed "...an alternative form of Gaia theory based on more traditional Darwinian principles... In [this] new approach, environmental regulation is a consequence of population dynamics. The role of selection is to favor organisms that are best adapted to prevailing environmental conditions. However, the environment is not a static backdrop for evolution, but is heavily influenced by the presence of living organisms. The resulting co-evolving dynamical process eventually leads to the convergence of equilibrium and optimal conditions". Fourth Gaia conference A fourth international conference on the Gaia hypothesis, sponsored by the Northern Virginia Regional Park Authority and others, was held in October 2006 at the Arlington, Virginia campus of George Mason University. Martin Ogle, Chief Naturalist for NVRPA and long-time Gaia hypothesis proponent, organized the event. Lynn Margulis, Distinguished University Professor in the Department of Geosciences, University of Massachusetts-Amherst, and long-time advocate of the Gaia hypothesis, was a keynote speaker. 
Among many other speakers: Tyler Volk, co-director of the Program in Earth and Environmental Science at New York University; Dr. Donald Aitken, Principal of Donald Aitken Associates; Dr. Thomas Lovejoy, President of the Heinz Center for Science, Economics and the Environment; Robert Corell, Senior Fellow, Atmospheric Policy Program, American Meteorological Society; and the noted environmental ethicist J. Baird Callicott. Criticism After initially receiving little attention from scientists (from 1969 until 1977), the Gaia hypothesis was thereafter criticized for a period by a number of scientists, including Ford Doolittle, Richard Dawkins and Stephen Jay Gould. Lovelock has said that because his hypothesis is named after a Greek goddess, and championed by many non-scientists, the Gaia hypothesis was interpreted as a neo-Pagan religion. Many scientists in particular also criticized the approach taken in his popular book Gaia, a New Look at Life on Earth for being teleological, i.e. for the belief that things are purposeful and aimed towards a goal. Responding to this critique in 1990, Lovelock stated, "Nowhere in our writings do we express the idea that planetary self-regulation is purposeful, or involves foresight or planning by the biota". Stephen Jay Gould criticized Gaia as being "a metaphor, not a mechanism." He wanted to know the actual mechanisms by which self-regulating homeostasis was achieved. In his defense of Gaia, David Abram argues that Gould overlooked the fact that "mechanism", itself, is a metaphor, albeit an exceedingly common and often unrecognized one, which leads us to consider natural and living systems as though they were machines organized and built from outside (rather than as autopoietic or self-organizing phenomena). Mechanical metaphors, according to Abram, lead us to overlook the active or agentic quality of living entities, while the organismic metaphors of the Gaia hypothesis accentuate the active agency of both the biota and the biosphere as a whole. With regard to causality in Gaia, Lovelock argues that no single mechanism is responsible, that the connections between the various known mechanisms may never be known, that this is accepted in other fields of biology and ecology as a matter of course, and that specific hostility is reserved for his own hypothesis for other reasons. Aside from clarifying his language and understanding of what is meant by a life form, Lovelock himself ascribes most of the criticism to a lack of understanding of non-linear mathematics by his critics, and to a linearizing form of greedy reductionism in which all events have to be immediately ascribed to specific causes before the fact. He also states that most of his critics are biologists but that his hypothesis includes experiments in fields outside biology, and that some self-regulating phenomena may not be mathematically explainable. Natural selection and evolution Lovelock has suggested that global biological feedback mechanisms could evolve by natural selection, stating that organisms that improve their environment for their survival do better than those that damage their environment. However, in the early 1980s, W. Ford Doolittle and Richard Dawkins separately argued against this aspect of Gaia. Doolittle argued that nothing in the genome of individual organisms could provide the feedback mechanisms proposed by Lovelock, and therefore the Gaia hypothesis proposed no plausible mechanism and was unscientific. 
Dawkins meanwhile stated that for organisms to act in concert would require foresight and planning, which is contrary to the current scientific understanding of evolution. Like Doolittle, he also rejected the possibility that feedback loops could stabilize the system. Margulis argued in 1999 that "Darwin's grand vision was not wrong, only incomplete. In accentuating the direct competition between individuals for resources as the primary selection mechanism, Darwin (and especially his followers) created the impression that the environment was simply a static arena". She wrote that the composition of the Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time. The evolutionary biologist W. D. Hamilton called the concept of Gaia Copernican, adding that it would take another Newton to explain how Gaian self-regulation takes place through Darwinian natural selection. More recently, Ford Doolittle, building on his and Inkpen's ITSNTS ("It's The Song, Not The Singer") proposal, suggested that differential persistence can play a role similar to that of differential reproduction in evolution by natural selection, thereby providing a possible reconciliation between the theory of natural selection and the Gaia hypothesis. Criticism in the 21st century The Gaia hypothesis continues to be broadly skeptically received by the scientific community. For instance, arguments both for and against it were laid out in the journal Climatic Change in 2002 and 2003. A significant argument raised against it is the many examples where life has had a detrimental or destabilising effect on the environment rather than acting to regulate it. Several recent books have criticised the Gaia hypothesis, expressing views ranging from "... the Gaia hypothesis lacks unambiguous observational support and has significant theoretical difficulties" to "Suspended uncomfortably between tainted metaphor, fact, and false science, I prefer to leave Gaia firmly in the background" to "The Gaia hypothesis is supported neither by evolutionary theory nor by the empirical evidence of the geological record". The CLAW hypothesis, initially suggested as a potential example of direct Gaian feedback, has subsequently been found to be less credible as understanding of cloud condensation nuclei has improved. In 2009 the Medea hypothesis was proposed: that life has highly detrimental (biocidal) impacts on planetary conditions, in direct opposition to the Gaia hypothesis. In a 2013 book-length evaluation of the Gaia hypothesis considering modern evidence from across the various relevant disciplines, Toby Tyrrell concluded that: "I believe Gaia is a dead end. Its study has, however, generated many new and thought provoking questions. While rejecting Gaia, we can at the same time appreciate Lovelock's originality and breadth of vision, and recognize that his audacious concept has helped to stimulate many new ideas about the Earth, and to champion a holistic approach to studying it". Elsewhere he presents his conclusion "The Gaia hypothesis is not an accurate picture of how our world works". This statement needs to be understood as referring to the "strong" and "moderate" forms of Gaia: that the biota obeys a principle that works to make Earth optimal (strength 5) or favourable for life (strength 4), or that it works as a homeostatic mechanism (strength 3). The latter is the "weakest" form of Gaia that Lovelock has advocated. Tyrrell rejects it. 
However, he finds that the two weaker forms of Gaia, Coevolutionary Gaia and Influential Gaia, which assert that there are close links between the evolution of life and the environment and that biology affects the physical and chemical environment, are both credible, but that it is not useful to use the term "Gaia" in this sense and that those two forms were already accepted and explained by the processes of natural selection and adaptation. Anthropic principle As emphasized by multiple critics, no plausible mechanism exists that would drive the evolution of negative feedback loops leading to planetary self-regulation of the climate. Indeed, multiple incidents in Earth's history (see the Medea hypothesis) have shown that the Earth and the biosphere can enter self-destructive positive feedback loops that lead to mass extinction events. For example, the Snowball Earth glaciations appear to have resulted from the development of photosynthesis during a period when the Sun was cooler than it is now. Such mechanisms have some effect, but any understanding of glacial-interglacial cycles also requires study of the variations in the Earth's orbit around the Sun, the tilt of its axis of rotation, and the "wobble" in that rotational movement, which together cause the periodicity in Northern Hemisphere insolation and thereby set the Earth's thermal regime. Insight into the causes of ice ages thus comes from geology and geography as well as from mathematical modelling and Earth science. In the Snowball episodes, the removal of carbon dioxide from the atmosphere, along with the oxidation of atmospheric methane by the released oxygen, resulted in a dramatic diminishment of the greenhouse effect. The resulting expansion of the polar ice sheets decreased the overall fraction of sunlight absorbed by the Earth, resulting in a runaway ice-albedo positive feedback loop ultimately resulting in glaciation over nearly the entire surface of the Earth. Volcanic processes at this scale are, however, linked to the pressure that ice sheets exert on the Earth's crust, which is released during periods of ice-sheet retreat. The Earth's escape from the frozen condition appears to have been directly due to the release of carbon dioxide and methane by volcanoes, although the release of methane by microbes trapped underneath the ice could also have played a part. Lesser contributions to warming would come from the fact that coverage of the Earth by ice sheets largely inhibited photosynthesis and lessened the removal of carbon dioxide from the atmosphere by the weathering of siliceous rocks. However, in the absence of tectonic activity, the snowball condition could have persisted indefinitely. Geologic events with amplifying positive feedbacks (along with some possible biologic participation) led to the greatest mass extinction event on record, the Permian–Triassic extinction event about 250 million years ago. The precipitating event appears to have been volcanic eruptions in the Siberian Traps, a hilly region of flood basalts in Siberia. These eruptions released high levels of carbon dioxide and sulfur dioxide which elevated world temperatures and acidified the oceans. Estimates of the rise in carbon dioxide levels range widely, from as little as a two-fold increase to as much as a twenty-fold increase. 
Amplifying feedbacks increased the warming to considerably greater than that to be expected merely from the greenhouse effect of carbon dioxide: these include the ice-albedo feedback, the increased evaporation of water vapor (another greenhouse gas) into the atmosphere, the release of methane from the warming of methane hydrate deposits buried under the permafrost and beneath continental shelf sediments, and increased wildfires. The rising carbon dioxide acidified the oceans, leading to widespread die-off of creatures with calcium carbonate shells, killing mollusks and crustaceans like crabs and lobsters and destroying coral reefs. Their demise led to disruption of the entire oceanic food chain. It has been argued that rising temperatures may have led to disruption of the chemocline separating sulfidic deep waters from oxygenated surface waters, which led to massive release of toxic hydrogen sulfide (produced by anaerobic bacteria) to the surface ocean and even into the atmosphere, contributing to the (primarily methane-driven) collapse of the ozone layer, and helping to explain the die-off of terrestrial animal and plant life. According to the weak anthropic principle, our observation of such stabilizing feedback loops is an observer selection effect. In all the universe, it is only planets with Gaian properties that could have evolved intelligent, self-aware organisms capable of asking such questions. One can imagine innumerable worlds where life evolved with different biochemistries or where the worlds had different geophysical properties such that those worlds are presently dead due to a runaway greenhouse effect, or are locked in a perpetual snowball state, or where, due to one factor or another, life has been inhibited from evolving beyond the microbial level. If no means exists for natural selection to operate at the biosphere level, then it would appear that the anthropic principle provides the only explanation for the survival of Earth's biosphere over geologic time. But in recent years, this strictly reductionistic view has been modified by the recognition that natural selection can operate at multiple levels of the biological hierarchy, not just at the level of individual organisms. Traditional Darwinian natural selection requires reproducing entities that display inheritable properties or abilities that result in their having more offspring than their competitors. Successful biospheres clearly cannot reproduce to spawn copies of themselves, and so traditional Darwinian natural selection cannot operate. A mechanism for biosphere-level selection was proposed by Ford Doolittle: although he had been a strong and early critic of the Gaia hypothesis, he had by 2015 started to think of ways whereby Gaia might be "Darwinised", seeking means whereby the planet could have evolved biosphere-level adaptations. Doolittle has suggested that differential persistence (mere survival) could be considered a legitimate mechanism for natural selection. As the Earth passes through various challenges, the phenomenon of differential persistence enables selected entities to achieve fixation by surviving the death of their competitors. Although Earth's biosphere is not competing against other biospheres on other planets, there are many competitors for survival on this planet. Collectively, Gaia constitutes the single clade of all living survivors descended from life's last universal common ancestor (LUCA). 
Various other proposals for biosphere-level selection include sequential selection, entropic hierarchy, and considering Gaia as a holobiont-like system. Ultimately, differential persistence and sequential selection are variants of the anthropic principle, while entropic hierarchy and holobiont arguments may allow the emergence of Gaia to be understood without anthropic arguments. See also SimEarth – 1990 video game References Notes Citations Cited sources Further reading External links Lovelock, James (2006), interviewed in How to think about science, CBC Ideas (radio program), broadcast January 3, 2008. "Lovelock: 'We can't save the planet'", BBC Sci Tech News Interview: Jasper Gerard meets James Lovelock 1965 introductions Astronomical hypotheses Biogeochemistry Biometeorology Biological hypotheses Climate change feedbacks Cybernetics Earth Ecological theories Evolution of the biosphere Evolution Gaia Meteorological hypotheses Superorganisms Syncretism Words and phrases derived from Greek mythology
Gaia hypothesis
Chemistry,Astronomy,Biology,Environmental_science
8,460
36,758,083
https://en.wikipedia.org/wiki/Y%20Sagittarii
Y Sagittarii is a variable star in the constellation of Sagittarius. It is a Cepheid variable with a mean apparent magnitude of about +5.77. The measurement of its parallax by the Hubble Space Telescope puts Y Sagittarii at 1,293 light-years from the Solar System. Y Sagittarii's apparent magnitude varies between +5.25 and +6.24 with a period of 5.7736 days. The spectral type of this star is F8II, and its effective temperature is 5370 K. It has a radius 50 times larger than the Sun's, a projected rotational velocity of 16 km/s, and an estimated mass six times that of the Sun. The star's metal content is similar to the Sun's, with a metallicity index [Fe/H] = +0.05. Among the other metals tested, it shows some overabundance of copper, zinc, yttrium and sodium; the level of the last element is almost double that of the Sun ([Na/H] = +0.27). There is evidence that Y Sagittarii may be a spectroscopic binary. It has been suggested that the orbital period for the system is on the order of 10,000 to 12,000 days. However, subsequent studies assuming zero orbital eccentricity have failed to find a convincing orbital solution. Instead, the companion appears to be a distant visual one. References F-type bright giants Sagittarius (constellation) Durchmusterung objects 168608 089968 6863 Sagittarii, Y
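As a quick arithmetic check, Pogson's relation converts the quoted magnitude range into a brightness ratio, and the quoted distance into the implied parallax (the input numbers below are the values from this article; the relations themselves are standard):

```python
# Magnitude range quoted above: +5.25 (brightest) to +6.24 (faintest)
amplitude = 6.24 - 5.25                 # peak-to-peak amplitude in magnitudes
flux_ratio = 10 ** (0.4 * amplitude)    # Pogson's relation
print(f"amplitude {amplitude:.2f} mag -> flux ratio {flux_ratio:.2f}")      # ~2.49

# Distance quoted above: 1,293 light-years
distance_pc = 1293 / 3.2616             # light-years per parsec
parallax_mas = 1000.0 / distance_pc     # implied parallax in milliarcseconds
print(f"distance {distance_pc:.0f} pc -> parallax {parallax_mas:.2f} mas")  # ~2.52
```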
Y Sagittarii
Astronomy
346
1,748,160
https://en.wikipedia.org/wiki/Cell%20junction
Cell junctions or junctional complexes are a class of cellular structures consisting of multiprotein complexes that provide contact or adhesion between neighboring cells or between a cell and the extracellular matrix in animals. They also maintain the paracellular barrier of epithelia and control paracellular transport. Cell junctions are especially abundant in epithelial tissues. Combined with cell adhesion molecules and the extracellular matrix, cell junctions help hold animal cells together. Cell junctions are also especially important in enabling communication between neighboring cells via specialized protein complexes called communicating (gap) junctions. Cell junctions are also important in reducing stress placed upon cells. In plants, similar communication channels are known as plasmodesmata, and in fungi they are called septal pores. Types In vertebrates, there are three major types of cell junction: Adherens junctions, desmosomes and hemidesmosomes (anchoring junctions) Gap junctions (communicating junctions) Tight junctions (occluding junctions) Invertebrates have several other types of specific junctions, for example septate junctions (a type of occluding junction) or the C. elegans apical junction. In multicellular plants, the structural functions of cell junctions are instead provided for by cell walls. The analogues of communicative cell junctions in plants are called plasmodesmata. Anchoring junctions Cells within tissues and organs must be anchored to one another and attached to components of the extracellular matrix. Cells have developed several types of junctional complexes to serve these functions, and in each case, anchoring proteins extend through the plasma membrane to link cytoskeletal proteins in one cell to cytoskeletal proteins in neighboring cells as well as to proteins in the extracellular matrix. Three types of anchoring junctions are observed, differing from one another in the cytoskeletal protein anchor as well as in the transmembrane linker protein that extends through the membrane. Anchoring-type junctions not only hold cells together but provide tissues with structural cohesion. These junctions are most abundant in tissues that are subject to constant mechanical stress, such as skin and heart. Desmosomes Desmosomes, also termed maculae adherentes, can be visualized as rivets through the plasma membrane of adjacent cells. Intermediate filaments composed of keratin or desmin are attached to membrane-associated attachment proteins that form a dense plaque on the cytoplasmic face of the membrane. Cadherin molecules form the actual anchor by attaching to the cytoplasmic plaque, extending through the membrane and binding strongly to cadherins coming through the membrane of the adjacent cell. Hemidesmosomes Hemidesmosomes form rivet-like links between the cytoskeleton and extracellular matrix components such as the basal laminae that underlie epithelia. Like desmosomes, they tie to intermediate filaments in the cytoplasm, but in contrast to desmosomes, their transmembrane anchors are integrins rather than cadherins. Adherens junctions Adherens junctions share the characteristic of anchoring cells through their cytoplasmic actin filaments. Similarly to desmosomes and hemidesmosomes, their transmembrane anchors are composed of cadherins in those that anchor to other cells and integrins (focal adhesions) in those that anchor to the extracellular matrix. There is considerable morphologic diversity among adherens junctions. 
Those that tie cells to one another are seen as isolated streaks or spots, or as bands that completely encircle the cell. The band type of adherens junction is associated with bundles of actin filaments that also encircle the cell just below the plasma membrane. Spot-like adherens junctions called focal adhesions help cells adhere to the extracellular matrix. The cytoskeletal actin filaments that tie into adherens junctions are contractile proteins, and in addition to providing an anchoring function, adherens junctions are thought to participate in the folding and bending of epithelial cell sheets. Thinking of the bands of actin filaments as being similar to 'drawstrings' allows one to envision how contraction of the bands within a group of cells would distort the sheet into interesting patterns. Gap junctions Gap junctions, or communicating junctions, allow direct chemical communication between adjacent cellular cytoplasm through diffusion, without contact with the extracellular fluid. This is possible because six connexin proteins interact to form a cylinder with a pore in the centre, called a connexon. The connexon complexes stretch across the cell membrane, and when the connexons of two adjacent cells interact, they form a complete gap junction channel. Connexon pores vary in size and polarity and can therefore be specific, depending on the connexin proteins that constitute each individual connexon. While variation in gap junction channels does occur, their structure remains relatively standard, and this interaction ensures efficient communication without the escape of molecules or ions to the extracellular fluid. Gap junctions play vital roles in the human body, including their role in the uniform contraction of the heart muscle. They are also relevant in signal transfers in the brain, and their absence is associated with decreased cell density in the brain. Retinal and skin cells are also dependent on gap junctions for cell differentiation and proliferation. Tight junctions Found in vertebrate epithelia, tight junctions act as barriers that regulate the movement of water and solutes between epithelial layers. Tight junctions are classified as a paracellular barrier, which is defined as not having directional discrimination; however, movement of the solute is largely dependent upon size and charge. There is evidence to suggest that the structures through which solutes pass are somewhat like pores. Physiological pH plays a part in the selectivity of solutes passing through tight junctions, with most tight junctions being slightly selective for cations. Tight junctions present in different types of epithelia are selective for solutes of differing size, charge, and polarity. Proteins Approximately 40 proteins have been identified as being involved in tight junctions. These proteins can be classified into four major categories: scaffolding proteins, signaling proteins, regulation proteins, and transmembrane proteins. Roles Scaffolding proteins – organise the transmembrane proteins and couple transmembrane proteins to other cytoplasmic proteins as well as to actin filaments. Signaling proteins – involved in junction assembly, barrier regulation, and gene transcription. Regulation proteins – regulate membrane vesicle targeting. Transmembrane proteins – include junctional adhesion molecule, occludin, and claudin. It is believed that claudin is the protein molecule responsible for the selective permeability between epithelial layers. A three-dimensional structure has yet to be achieved, and as such, specific information about the function of tight junctions remains to be determined. 
Tricellular junctions Tricellular junctions seal epithelia at the corners of three cells. Due to the geometry of three-cell vertices, the sealing of the cells at these sites requires a specific junctional organization, different from that in bicellular junctions. In vertebrates, the components of tricellular junctions are tricellulin and lipolysis-stimulated lipoprotein receptors. In invertebrates, the components are gliotactin and anakonda. Tricellular junctions are also implicated in the regulation of cytoskeletal organization and cell divisions. In particular, they ensure that cells divide according to the Hertwig rule. In some Drosophila epithelia, during cell divisions tricellular junctions establish physical contact with the spindle apparatus through astral microtubules. Tricellular junctions exert a pulling force on the spindle apparatus and serve as a geometrical cue to determine the orientation of cell divisions. Cell junction molecules The molecules responsible for creating cell junctions include various cell adhesion molecules. There are four main types: selectins, cadherins, integrins, and the immunoglobulin superfamily. Selectins are cell adhesion molecules that play an important role in the initiation of inflammatory processes. The functional capacity of selectins is limited to leukocyte interactions with the vascular endothelium. There are three types of selectins found in humans: L-selectin, P-selectin and E-selectin. L-selectin deals with lymphocytes, monocytes and neutrophils, P-selectin deals with platelets and endothelium, and E-selectin deals only with endothelium. They have extracellular regions made up of an amino-terminal lectin domain, attached to a carbohydrate ligand, a growth factor-like domain, and short repeat units that match the complementary binding protein domains. Cadherins are calcium-dependent adhesion molecules. Cadherins are extremely important in the process of morphogenesis during fetal development. Together with an alpha-beta catenin complex, a cadherin can bind to the microfilaments of the cytoskeleton of the cell. This allows for homophilic cell–cell adhesion. The β-catenin–α-catenin linked complex at the adherens junctions allows for the formation of a dynamic link to the actin cytoskeleton. Integrins act as adhesion receptors, transporting signals across the plasma membrane in multiple directions. These molecules are an invaluable part of cellular communication, as a single ligand can be used for many integrins. Research on these molecules is, however, still at a comparatively early stage. Immunoglobulin superfamily molecules are a group of calcium-independent proteins capable of homophilic and heterophilic adhesion. Homophilic adhesion involves the immunoglobulin-like domains on the cell surface binding to the immunoglobulin-like domains on an opposing cell's surface, while heterophilic adhesion refers to the binding of the immunoglobulin-like domains to integrins and carbohydrates instead. Cell adhesion is a vital component of the body. Loss of this adhesion affects cell structure, cellular functioning and communication with other cells and the extracellular matrix, and can lead to severe health issues and diseases. References External links Cell anatomy Cell communication Cell signaling
Cell junction
Biology
2,123
39,726,151
https://en.wikipedia.org/wiki/Negative%20hypergeometric%20distribution
In probability theory and statistics, the negative hypergeometric distribution describes probabilities for when sampling from a finite population without replacement in which each sample can be classified into two mutually exclusive categories like Pass/Fail or Employed/Unemployed. As random selections are made from the population, each subsequent draw decreases the population, causing the probability of success to change with each draw. Unlike the standard hypergeometric distribution, which describes the number of successes in a fixed sample size, in the negative hypergeometric distribution samples are drawn until $r$ failures have been found, and the distribution describes the probability of finding $k$ successes in such a sample. In other words, the negative hypergeometric distribution describes the likelihood of $k$ successes in a sample with exactly $r$ failures. Definition There are $N$ elements, of which $K$ are defined as "successes" and the rest are "failures". Elements are drawn one after the other, without replacement, until $r$ failures are encountered. Then, the drawing stops and the number $k$ of successes is counted. The negative hypergeometric distribution, $NHG_{N,K,r}(k)$, is the discrete distribution of this $k$. The negative hypergeometric distribution is a special case of the beta-binomial distribution with parameters $\alpha = r$ and $\beta = N - K - r + 1$ both being integers (and $n = K$). The outcome requires that we observe $k$ successes in the first $(k + r - 1)$ draws and that the $(k + r)$-th draw must be a failure. The probability of the former can be found by the direct application of the hypergeometric distribution, and the probability of the latter is simply the number of failures remaining, $N - K - (r - 1)$, divided by the size of the remaining population, $N - (k + r - 1)$. The probability of having exactly $k$ successes up to the $r$-th failure (i.e. the drawing stops as soon as the sample includes the predefined number of failures) is then the product of these two probabilities:

$$\frac{\binom{K}{k}\binom{N-K}{r-1}}{\binom{N}{k+r-1}} \cdot \frac{N-K-r+1}{N-k-r+1}.$$

Therefore, a random variable $X$ follows the negative hypergeometric distribution if its probability mass function (pmf) is given by

$$f(k; N, K, r) \equiv P(X = k) = \frac{\binom{k+r-1}{k}\binom{N-r-k}{K-k}}{\binom{N}{K}} \quad \text{for } k = 0, 1, \ldots, K,$$

where $N$ is the population size, $K$ is the number of success states in the population, $r$ is the number of failures, $k$ is the number of observed successes, and $\binom{a}{b}$ is a binomial coefficient. By design the probabilities sum up to 1. However, in case we want to show it explicitly we have:

$$\sum_{k=0}^{K} P(X = k) = \frac{1}{\binom{N}{K}} \sum_{k=0}^{K} \binom{k+r-1}{k}\binom{N-r-k}{K-k} = \frac{\binom{N}{K}}{\binom{N}{K}} = 1,$$

where we have used that

$$\sum_{k=0}^{K} \binom{k+r-1}{k}\binom{N-r-k}{K-k} = \binom{N}{K},$$

which can be derived using the binomial identity $\binom{-a}{b} = (-1)^b \binom{a+b-1}{b}$ and the Chu–Vandermonde identity

$$\sum_{j=0}^{m} \binom{s}{j}\binom{t}{m-j} = \binom{s+t}{m},$$

which holds for any complex values $s$ and $t$ and any non-negative integer $m$. Expectation When counting the number of successes before $r$ failures, the expected number of successes is $\frac{rK}{N-K+1}$ and can be derived as follows. Using $k \binom{k+r-1}{k} = r \binom{k+r-1}{k-1}$,

$$E[X] = \sum_{k=0}^{K} k\,P(X = k) = \frac{r}{\binom{N}{K}} \sum_{k=1}^{K} \binom{k+r-1}{k-1}\binom{N-r-k}{K-k} = \frac{r\binom{N}{K-1}}{\binom{N}{K}} = \frac{rK}{N-K+1},$$

where we have used the relationship that we derived above to show that the negative hypergeometric distribution was properly normalized (applied with $r + 1$ in place of $r$ and $K - 1$ in place of $K$). Variance The variance can be derived by an analogous calculation of $E[X(X+1)]$. The variance is

$$\operatorname{Var}(X) = \frac{rK(N+1)(N-K+1-r)}{(N-K+1)^2 (N-K+2)}.$$
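A short numerical check of the pmf, normalization and expectation derived above (illustrative only: the parameter values are arbitrary, and the brute-force sampler simply simulates the drawing process from the Definition section):

```python
import random
from math import comb

def nhg_pmf(k, N, K, r):
    """P(X = k): probability of k successes drawn before the r-th failure."""
    return comb(k + r - 1, k) * comb(N - r - k, K - k) / comb(N, K)

def nhg_sample(N, K, r):
    """Simulate drawing without replacement until r failures; count successes."""
    pool = [1] * K + [0] * (N - K)   # 1 = success, 0 = failure
    random.shuffle(pool)
    successes = failures = 0
    for item in pool:
        if item == 0:
            failures += 1
            if failures == r:
                return successes
        else:
            successes += 1
    raise ValueError("requires r <= N - K")

N, K, r = 50, 20, 5                  # arbitrary illustrative parameters
assert abs(sum(nhg_pmf(k, N, K, r) for k in range(K + 1)) - 1.0) < 1e-12
analytic = r * K / (N - K + 1)       # expectation formula derived above
mean_pmf = sum(k * nhg_pmf(k, N, K, r) for k in range(K + 1))
sims = [nhg_sample(N, K, r) for _ in range(200_000)]
print(f"E[X]: formula {analytic:.4f}, pmf {mean_pmf:.4f}, "
      f"simulated {sum(sims) / len(sims):.4f}")
```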
Related distributions If the drawing stops after a constant number $n$ of draws (regardless of the number of failures), then the number of successes has the hypergeometric distribution, $HG_{N,K,n}(k)$. The two quantities are related through their cumulative distributions: fewer than $k$ successes occur before the $r$-th failure exactly when at least $r$ failures occur among the first $k + r - 1$ draws, and the number of failures in a fixed sample of $k + r - 1$ draws is hypergeometric. The negative hypergeometric distribution (like the hypergeometric distribution) deals with draws without replacement, so that the probability of success is different in each draw. In contrast, the negative binomial distribution (like the binomial distribution) deals with draws with replacement, so that the probability of success is the same and the trials are independent. The following summarizes the four distributions that arise when drawing items: the number of successes in a constant number of draws follows the binomial distribution (with replacement) or the hypergeometric distribution (without replacement); the number of successes before a constant number of failures follows the negative binomial distribution (with replacement) or the negative hypergeometric distribution (without replacement). Some authors define the negative hypergeometric distribution to be the number of draws required to get the $r$-th failure. If we let $Y$ denote this number, then it is clear that $Y = X + r$, where $X$ is as defined above. Hence the PMF is

$$P(Y = y) = P(X = y - r) = \frac{\binom{y-1}{y-r}\binom{N-y}{K-y+r}}{\binom{N}{K}}.$$

The support of $Y$ is the set $\{r, r+1, \ldots, K+r\}$. It is clear that $E[Y] = E[X] + r$ and $\operatorname{Var}(Y) = \operatorname{Var}(X)$. References Discrete distributions Factorial and binomial topics
Negative hypergeometric distribution
Mathematics
736
77,781,911
https://en.wikipedia.org/wiki/Association%20of%20the%20Electrical%20and%20Digital%20Industry
ZVEI e. V., the German Association of the Electrical and Digital Industry (formerly the German Electrical and Electronic Manufacturers' Association), represents the economic, technological and environmental interests of the German electrical and digital industry. With 910,400 employees across Germany and a total turnover of 242 billion euros (in 2023), the electrical industry is the second largest industrial sector in Germany in terms of employees, behind mechanical engineering. With an additional 811,000 employees abroad (2021), its value creation is highly networked globally. In 2023, the industry spent 22.1 billion euros on research and development and nine billion euros on investments. Organisation Its headquarters are in Frankfurt am Main. There are offices in Berlin and Brussels. The ZVEI is also represented by an office in Beijing through its EuropeElectro working group. The association works with national trade associations and organisations, European industry and trade associations and international organisations. ZVEI is the second largest member of BDI, the Federation of German Industries. It is also a member of ORGALIM, the European umbrella organisation for the engineering industries. The ZVEI is also involved in the German TV Platform and the Industry 4.0 platform. The association is divided into 22 trade associations. The trade associations comprise all member companies that are active in the same market segment. A member may also belong to several trade associations due to its range of products and services. The ZVEI also maintains nine regional offices. They represent the interests of the electrical industry vis-à-vis the respective state governments. President and management Gunther Kegel has been ZVEI President since October 2020 and BDI Vice President since November 2020. Kegel is chairman of the management board of Pepperl+Fuchs. Wolfgang Weber has been Chairman of the ZVEI Management Board since 2020. Industry initiative licht.de The Lighting Association in the ZVEI operates the industry initiative licht.de, which provides information about lighting and lighting technology as well as guidelines and standards that must be observed for professional lighting solutions. Among other things, the industry initiative accompanied the technological change from incandescent lamps to energy-efficient light sources in the course of the Ecodesign Directive with campaigns. licht.de informs consumers, professional users, planners and architects through traditional media work and online offerings. One focus is on educating people about future technologies such as Human Centric Lighting (HCL). The industry initiative operates an information portal on the Internet and publishes the "licht.wissen" series of publications. It currently comprises 21 titles. As a rule, each issue is dedicated to a specific lighting application: for example, in schools, hospitals or offices, but also in museums or streets. The licht.forum series and other licht.de publications such as guidelines are published on current topics. The association founded the Fördergemeinschaft Gutes Licht (FGL) in 1970, which was renamed licht.de in 2007. The website is officially home to the LED lead market initiative. 
It was founded by the Federal Ministry of Education and Research and has been continued by the Federal Ministry for the Environment since the beginning of 2012, with the aim of supporting the broad market launch of LEDs in Germany and reducing CO2 emissions. Publications ZVEI-Spotlights (digital annual review) ZVEI-Magazin Ampere Publications of ZVEI External links ZVEI.org Licht.de Entry in the German Bundestag lobby register References Automation organizations Electrical engineering organizations 1918 establishments Business organisations based in Germany
Association of the Electrical and Digital Industry
Engineering
746
67,067,209
https://en.wikipedia.org/wiki/Methyl%20isonicotinate
Methyl isonicotinate is a toxic compound which is used as a semiochemical. Other names for this compound are methyl 4-pyridinecarboxylate and isonicotinic acid methyl ester. This compound is slightly toxic to the human body. It has an irritating effect on the eyes, skin, and respiratory tract. Moreover, the compound is used as the active ingredient in several sticky thrips traps to monitor and catch thrips in greenhouses. History Methyl isonicotinate, a patented 4-pyridyl carbonyl compound, was found to be a useful semiochemical that works without any type of pheromone. No specific history was found for the compound, other than research that has been performed to investigate how this chemical can be used for the management of thrips pests. Structure and reactivity Methyl isonicotinate is a 4-pyridyl carbonyl compound consisting of a pyridine ring attached to a methyl carboxylate group. No data were available on the reactivity of methyl isonicotinate. Synthesis In order to synthesize methyl isonicotinate, several substances need to react with one another. These compounds are isonicotinic acid, methanol, sulfuric acid, and sodium carbonate. Available forms Methyl isonicotinate (C7H7NO2) has many constitutional isomers. Examples are methyl nicotinate, 2-nitrotoluene, and salicylamide. All have the same molecular formula, but differ in the connectivity between the atoms and are therefore different molecules with specific properties. Mechanism of action No information on the mechanism of action was found during the data search. The only known effect of this compound is that it influences the movement of thrips, as mentioned later. Moreover, the compound is used on industrial sites as a laboratory chemical, where it aids in the synthesis of other substances. Efficacy and side effects Efficacy Sticky blue and yellow thrips traps are used to monitor pests on crops. Methyl isonicotinate is the active ingredient in, for example, LUREM-TR (Koppert Biological Systems) and is used to detect a pest at an early stage. The addition of methyl isonicotinate to thrips control methods can increase the success of the trap. The compound affects the movement of thrips, as walking and take-off behavior increases. More movement leads to more thrips captured on sticky traps. Usage of methyl isonicotinate in traps can increase the catches up to 20 times, depending on the species and the conditions. In addition, the increased movement of thrips increases the exposure of the species to insecticides or biopesticides. Ten thrips species are known to react to methyl isonicotinate. Adverse effects Methyl isonicotinate is known for causing skin corrosion. In low concentrations, this compound leads to irritation of the skin. An acute symptom is redness of the skin, but the effects of the substance can be delayed. If the skin has been in contact with the substance, the skin should be flushed with running water for at least 20 minutes and the person should see a doctor. In addition, it causes significant damage to the eye. When the eye has been contaminated with this compound, flush the eyes with running water for a minimum of 15 minutes straight away. During this process, the eyelids should be open. Afterwards, it is advised that the person call an emergency medical center. Also, the respiratory tract may be affected when the system comes in contact with large amounts of methyl isonicotinate. Medical attention is needed. After overexposure one might experience nausea, a headache, dizziness, tiredness, or even vomiting. 
Lastly, methyl isonicotinate is a combustible liquid.

Toxicity
Several studies have examined the acute toxicity of methyl isonicotinate, that is, the effects that occur after exposure to the substance. In these studies the effects were observed in vivo in rodents.

Oral toxicity studies in rats
To examine the oral toxicity of methyl isonicotinate, studies were performed to estimate the LD50. In studies reported by Tsarichenko (1977), the U.S. National Library of Medicine (2017) and the GESTIS Substance Database (2017), Sprague-Dawley male and female rats were treated with methyl isonicotinate by oral gavage. Based on the estimated LD50 of 3906 mg/kg, methyl isonicotinate is classified in category 5 of the GHS criteria for acute oral toxicity. Category 5 substances present a relatively low acute toxicity hazard, but could still pose a danger to vulnerable populations. In a treatment with isonicotinic acid, the rats showed clinical signs such as suppression of general motor activity, impaired motor coordination and assumption of a lateral position; these signs were observed at doses around the LD50 of 5000 mg/kg.

Dermal toxicity in rabbits
The dermal toxicity was estimated in a study reported by the Cosmetic Ingredient Review (2005), in which rabbits were treated with methyl isonicotinate by dermal application. The study yielded an estimated LD50 of 3828 mg/kg, so methyl isonicotinate can likewise be classified in GHS category 5 for acute dermal toxicity.

Effects on animals
The European Chemicals Agency (ECHA) lists short-term toxicity to aquatic invertebrates, aquatic algae, cyanobacteria, and micro-organisms. No other effects on animals were found during the data search. Furthermore, as mentioned above, the compound can serve as a semiochemical for thrips, affecting their movement and thereby increasing the effectiveness of traps.

References

Semiochemicals
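The category assignments above follow mechanically from the acute-toxicity cut-off values. The following minimal Python sketch, assuming the published GHS oral and dermal LD50 bands (the function name is illustrative, not from any source cited here), reproduces the classification of the two studies:

def ghs_acute_category(ld50_mg_per_kg, route="oral"):
    # Upper bounds (mg/kg body weight) of GHS acute toxicity categories 1-5.
    bands = {"oral": [5, 50, 300, 2000, 5000],
             "dermal": [50, 200, 1000, 2000, 5000]}
    for category, upper in enumerate(bands[route], start=1):
        if ld50_mg_per_kg <= upper:
            return category
    return None  # above 5000 mg/kg: not classified under GHS

# LD50 values from the studies cited above:
print(ghs_acute_category(3906, "oral"))    # -> 5
print(ghs_acute_category(3828, "dermal"))  # -> 5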
Methyl isonicotinate
Chemistry
1,225
41,217,061
https://en.wikipedia.org/wiki/Brochiraja%20aenigma
Brochiraja aenigma, also known as the Enigma skate, is a skate known from a single specimen, described in 2006. Based on that specimen, its range includes at least the Wanganella Bank on the Norfolk Ridge. The species appears to be rare, with further searches finding no additional specimens; while it is not commonly fished or reported in commercial distribution, it can be used for fish meal. Due to the limited knowledge of its biology and of the extent of its capture in fisheries, the species is assessed as Data Deficient by the IUCN.

References

Rajiformes Fish of the Pacific Ocean Fish described in 2006 Species known from a single specimen
Brochiraja aenigma
Biology
128
72,010,909
https://en.wikipedia.org/wiki/Fuzzy%20differential%20equation
A fuzzy differential equation is a generalization of an ordinary differential equation in mathematics, defined via a differential inclusion whose $\alpha$-level sets are convex, compact sets depending upper hemicontinuously on their arguments:

$x'(t) \in \left[ f(t, x(t)) \right]^{\alpha}$ for all $\alpha \in (0, 1]$.

First order fuzzy differential equation
A first order fuzzy differential equation with real constant or variable coefficients has the form

$x'(t) = a(t)\,x(t) + b(t), \qquad x(t_0) = x_0,$

where $a$ is a real continuous function and $b$ is a fuzzy continuous function, such that $x(t)$ remains a fuzzy number for all $t$.

Linear systems of fuzzy differential equations
A system of equations of the form

$x_i'(t) = \sum_{j=1}^{n} a_{ij}(t)\, x_j(t) + b_i(t), \qquad i = 1, \dots, n,$

where the $a_{ij}$ are real functions and the $b_i$ are fuzzy functions.

Fuzzy partial differential equations
A fuzzy differential equation with a partial differential operator $P(D)$ is

$P(D)\,u \in \left[ f \right]^{\alpha}$ for all $\alpha \in (0, 1]$.

Fuzzy fractional differential equation
A fuzzy differential equation with a fractional differential operator is

$D^{q} x(t) \in \left[ f(t, x(t)) \right]^{\alpha}$ for all $\alpha \in (0, 1]$,

where $q$ is a rational number.

References

Fuzzy logic Differential equations
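As an illustration of how a fuzzy initial value problem can be handled numerically, the Python sketch below propagates the endpoints of each α-level interval for the simple growth equation x'(t) = x(t) under the Hukuhara derivative. This is a minimal sketch; the triangular fuzzy initial condition, the step counts and all names are illustrative assumptions rather than material from the article above.

def solve_fuzzy_growth(x0_lo, x0_hi, t_end, steps=1000):
    # Euler integration of x' = x for one alpha-cut interval [lo, hi].
    # For this equation the endpoints evolve independently and the
    # interval stays ordered (lo <= hi), as the Hukuhara derivative requires.
    dt = t_end / steps
    lo, hi = x0_lo, x0_hi
    for _ in range(steps):
        lo += dt * lo
        hi += dt * hi
    return lo, hi

# Triangular fuzzy initial condition centred at 1 with spread 0.1:
# its alpha-cut is the interval [0.9 + 0.1*alpha, 1.1 - 0.1*alpha].
for alpha in (0.0, 0.5, 1.0):
    cut = (0.9 + 0.1 * alpha, 1.1 - 0.1 * alpha)
    print(alpha, solve_fuzzy_growth(*cut, t_end=1.0))  # endpoints ~ e times cut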
Fuzzy differential equation
Mathematics
136
42,241,725
https://en.wikipedia.org/wiki/Power-over-fiber
Power-over-fiber, or PoF, is a technology in which a fiber-optic cable carries optical power, which is used as an energy source rather than, or as well as, carrying data. This allows a device to be remotely powered, while providing electrical isolation between the device and the power supply. Such systems can be used to protect the power supply from dangerous voltages such as from lightning, or to prevent voltage from the supply from igniting explosives. Power over fiber may also be useful in applications or environments where it is important to avoid the electromagnetic fields created by electricity flowing through copper wire, such as around delicate sensors or in sensitive military applications.

See also
Phantom power
Power over Ethernet (PoE)

References

Networking hardware Network appliances Electric power Power supplies Fiber to the premises
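The electrical power available at the remote device follows a simple link budget: the laser output, the fiber attenuation (quoted in dB/km), and the conversion efficiency of the photovoltaic receiver. The Python sketch below works through one such budget; every numeric value in it is an illustrative assumption rather than a figure from this article.

def pof_delivered_power(laser_w, fiber_km, atten_db_per_km, pv_efficiency):
    # Optical loss in the fiber, expressed in decibels.
    loss_db = atten_db_per_km * fiber_km
    optical_at_receiver_w = laser_w * 10 ** (-loss_db / 10)
    # The photovoltaic converter turns the received light into electrical power.
    return optical_at_receiver_w * pv_efficiency

# Assumed example: 2 W laser, 1 km of fiber at 0.5 dB/km, 25% efficient converter.
print(pof_delivered_power(2.0, 1.0, 0.5, 0.25))  # ~0.45 W electrical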
Power-over-fiber
Physics,Engineering
158
42,170,895
https://en.wikipedia.org/wiki/Curved%20screen
A curved screen is an electronic display device that, in contrast to a flat-panel display, features a concave viewing surface. Curved screen TVs were introduced to the consumer market in 2013, primarily due to the efforts of Korean companies Samsung and LG, while curved screen projection displays, such as the Cinerama, have existed since the 1950s.

Analysis
Curved screens are often marketed as being able to provide an "immersive" experience and a wider field of view. However, the field of view is the extent of the observable world seen at any given moment: a screen subtends a solid angle at the viewer's eye, and curving the screen does not by itself change the solid angle it subtends. Most curved screens are made in wide (16:9), ultra-wide (21:9 / 64:27) or super ultra-wide (32:9 or 18:5) aspect ratios. Wider screens provide a wider angle of view, and field of view calculations show that it is this greater width, not curvature in and of itself, that allows such screens to display more information. The optimal position for viewing a screen is directly along the central axis of the TV with the central point of the screen at eye level. Other viewing positions or angles may cause degradations in picture quality ranging anywhere from minor to severe, the most notable being trapezoidal distortion. Manufacturers suggest that curved screens allow a greater range of satisfactory viewing angles and offer minimal trapezoidal distortion in comparison to flat screens. This claim is disputed by the counter-claim that a substantial offset from the center produces greater viewing distortion than on a flat screen; the manufacturers' claim that the various parts of a curved screen are roughly equidistant from a centered viewer is, however, supported. Additionally, curved TVs supposedly offer minimized glare from ambient light.

Applications
Large curved screens reduce outer-edge distortion and provide a panoramic view free of the bezel lines that frame each screen in the alternative arrangement of multiple flat-screen monitors placed around the viewer. Curved screens and multi-screens have applications in gaming. Backward-curved (convex) screens have the potential to be used as digital signage that can be installed at various locations to produce marketing exposure.

Projection screens
When projecting images onto a completely flat screen, the distance light has to travel from its point of origin (i.e., the projector) increases the farther away the destination point is from the screen's center. This variance in the distance traveled results in a distortion phenomenon known as the pincushion effect, where the image at the left and right edges of the screen becomes bowed inwards and stretched vertically, making the entire image appear blurry. Curved screens are also widely used in IMAX and standard movie theaters for their ability to produce natural expressions and draw the audience deeper into the scene. In about 2009, NEC/Alienware together with Ostendo Technologies, Inc. (based in Carlsbad, CA) were offering a curved (concave) monitor that allows better viewing angles near the edges, covering 75% of peripheral vision in the horizontal direction. This monitor had 2880x900 resolution and 4 DLP rear projection systems with LED light sources, and was marketed as suitable for both gaming and office work, though at $6,499 it was rather expensive.
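The geometry behind these viewing claims is elementary: the horizontal field of view subtended by a screen of width w viewed from distance d is 2*arctan(w/2d), and the depth (sagitta) of a curved panel of radius R is R - sqrt(R^2 - (w/2)^2). The short Python sketch below evaluates both; the panel width, viewing distance and curvature are illustrative assumptions, not specifications from this article.

import math

def horizontal_fov_deg(width_m, distance_m):
    # Angle subtended at the viewer by a flat screen of the given width.
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

def screen_depth_m(width_m, radius_m):
    # Sagitta: how far the centre of a curved panel sits behind its edges.
    return radius_m - math.sqrt(radius_m**2 - (width_m / 2) ** 2)

# Assumed example: a 1.2 m wide panel viewed from 1.5 m, with 1800R curvature.
print(horizontal_fov_deg(1.2, 1.5))  # ~43.6 degrees
print(screen_depth_m(1.2, 1.8))      # ~0.103 m of curve depth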
Touch on curved screen
One of the issues in the use of curved screens in commercial electronics is how accurately they can work with a touch sensor. To address this, LG Electronics has developed infrared-based touch solutions for curved displays.

History
The first curved screen was the Cinerama, which debuted in New York in 1952. Multiple theaters, including the Cinerama Dome in Hollywood, began to use horizontally curved screens to counter image distortions associated with super-wide formats such as 23:9 CinemaScope. 21:9 aspect ratio monitors were developed in order to display the maximum amount of information on a single screen. However, the extreme wideness of the screen created severe distortions on the left and right edges of the screen. Curved 21:9 monitors were then developed to address this issue and provide a distortion-free, wide-angle viewing environment.

Manufacturing process
The first curved panels were produced by bending flat panels that had already been manufactured. This technique resulted in performance issues, such as oval mura (a clouding effect) and color mixture (which causes color impurity and image distortion) observed at the curved edges. Since the introduction of flexible glass, liquid crystal displays (LCDs) can be applied to curved surfaces without bending existing panels. The screen technologies used to create curved LCD screens are Vertical Alignment, which helps to reduce any white glow that may affect an angular view, and IPS panels, which are more susceptible to distortion.

Curvature measurement
The radius of curvature of a curved display is the radius that a circle would have if it had the same curvature as the display. This value is typically given in millimeters, but expressed with the letter "R" instead of a unit (for example, a display with "3800R curvature" has a 3800 mm radius of curvature).

See also
MSG Sphere
IMAX Dome
Evans & Sutherland

References

Television technology Display technology
Curved screen
Technology,Engineering
1,085
51,172,491
https://en.wikipedia.org/wiki/Amanda%20Swart
Amanda Cecilia Swart is a South African biochemist who holds a professorship in biochemistry at Stellenbosch University. She is known for her research on rooibos, a herbal tea popular in South Africa; her research has been funded by the South African Rooibos Council, and she is frequently quoted in South African media promoting the reported health benefits of rooibos.

Education and career
Swart completed her MSc in Biology in 1986 and her doctorate at Stellenbosch in 1999, and returned to Stellenbosch as a faculty member in 2002, where she teaches undergraduate and postgraduate courses in biochemistry. She was instrumental in establishing the P450 Steroid Research Group at Stellenbosch, and in 2011 she was appointed associate professor. Her research areas include adrenal steroidogenesis, cytochrome P450 enzymes, prostate cancer, and products derived from the plants Aspalathus linearis (rooibos), Salsola tuberculatiformis Botsch. (gannabos) and Sutherlandia frutescens (cancer bush).

Research
Her primary research focus, and that of the P450 Steroid Research Group, is on the hormones (adrenal steroids) produced by the adrenal gland, the steroidogenic enzymes which catalyse their biosynthesis, the metabolism of these steroids in prostate cancer, and their implications in endocrine disorders. Their research also involves the investigation of the effects of plant products on the endocrine system. Swart's research has been sponsored by the National Research Foundation, the Cancer Association of South Africa and the SA Rooibos Council. The research is broken into three focus areas:

11β-hydroxyandrostenedione
11β-hydroxyandrostenedione is an adrenal steroid and has been implicated in prostate cancer as well as castration-resistant prostate cancer. Swart investigates the mechanism of this steroid within prostate cancer cells and other cancer cells.

Rooibos
There are two avenues of research regarding rooibos that Swart is pursuing.

Prostate cancer metabolism
Swart's research has suggested that rooibos may have beneficial effects on prostate cancer by inhibiting 17β-hydroxysteroid dehydrogenases and blocking dihydrotestosterone. Her research is also looking at the effect of rooibos on prostate-specific antigen (PSA), the enzyme marker used as a test for prostate cancer.

Cortisol and stress
Her research has suggested that drinking rooibos may lower stress through the effects of two of its compounds, aspalathin and nothofagin. Under laboratory conditions these compounds block the production of the stress hormone cortisol. She has claimed that rooibos has the potential to prevent heart disease, reduce the effects of aging, and promote weight loss.

References

External links

South African biochemists Women biochemists South African women chemists Stellenbosch University alumni Academic staff of Stellenbosch University Year of birth missing (living people) Living people
Amanda Swart
Chemistry
627
478,672
https://en.wikipedia.org/wiki/Sphere%20eversion
In differential topology, sphere eversion is the process of turning a sphere inside out in a three-dimensional space (the word eversion means "turning inside out"). It is possible to smoothly and continuously turn a sphere inside out in this way (allowing self-intersections of the sphere's surface) without cutting or tearing it or creating any crease. This is surprising, both to non-mathematicians and to those who understand regular homotopy, and can be regarded as a veridical paradox: something that, while true, seems false at first glance. More precisely, let

$f \colon S^2 \to \mathbb{R}^3$

be the standard embedding; then there is a regular homotopy of immersions

$f_t \colon S^2 \to \mathbb{R}^3$

such that $f_0 = f$ and $f_1 = -f$.

History
An existence proof for crease-free sphere eversion was first created by Stephen Smale. It is difficult to visualize a particular example of such a turning, although some digital animations have been produced that make it somewhat easier. The first example was exhibited through the efforts of several mathematicians, including Arnold S. Shapiro and Bernard Morin, who was blind. On the other hand, it is much easier to prove that such a "turning" exists, and that is what Smale did. Smale's graduate adviser Raoul Bott at first told Smale that the result was obviously wrong. His reasoning was that the degree of the Gauss map must be preserved in such a "turning": in particular it follows that there is no such turning of $S^1$ in $\mathbb{R}^2$. But the degrees of the Gauss map for the embeddings $f$ and $-f$ in $\mathbb{R}^3$ are both equal to 1, and do not have opposite sign as one might incorrectly guess. The degree of the Gauss map of all immersions of $S^2$ in $\mathbb{R}^3$ is 1, so there is no obstacle. The term "veridical paradox" applies perhaps more appropriately at this level: until Smale's work, there was no documented attempt to argue for or against the eversion of $S^2$, and later efforts are in hindsight, so there never was a historical paradox associated with sphere eversion, only an appreciation of the subtleties in visualizing it by those confronting the idea for the first time. See h-principle for further generalizations.

Proof
Smale's original proof was indirect: he identified (regular homotopy) classes of immersions of spheres with a homotopy group of the Stiefel manifold. Since the homotopy group that corresponds to immersions of $S^2$ in $\mathbb{R}^3$, namely $\pi_2(V_2(\mathbb{R}^3)) \cong \pi_2(SO(3)) = 0$, vanishes, the standard embedding and the inside-out one must be regular homotopic. In principle the proof can be unwound to produce an explicit regular homotopy, but this is not easy to do. There are several ways of producing explicit examples and mathematical visualization:

Half-way models: these consist of very special homotopies. This is the original method, first done by Shapiro and Phillips via Boy's surface, later refined by many others. The original half-way model homotopies were constructed by hand, and worked topologically but weren't minimal. The movie created by Nelson Max, over a seven-year period, and based on Charles Pugh's chicken-wire models (subsequently stolen from the Mathematics Department at Berkeley), was a computer-graphics 'tour de force' for its time, and set the benchmark for computer animation for many years. A more recent and definitive graphics refinement (1980s) is minimax eversions, which is a variational method, and consists of special homotopies (they are shortest paths with respect to Willmore energy).
In turn, understanding the behavior of Willmore energy requires understanding solutions of fourth-order partial differential equations, and so the visually beautiful and evocative images belie some very deep mathematics beyond Smale's original abstract proof.

Thurston's corrugations: this is a generic topological method; it takes a homotopy and perturbs it so that it becomes a regular homotopy. This is illustrated in the computer-graphics animation Outside In, developed at the Geometry Center under the direction of Silvio Levy, Delle Maxwell and Tamara Munzner.

Combining the above methods, the complete sphere eversion can be described by a set of closed equations giving minimal topological complexity.

Variations
A six-dimensional sphere in seven-dimensional Euclidean space admits eversion. Together with the evident case of the 0-dimensional sphere (two distinct points) in the real line and the case of the two-dimensional sphere in $\mathbb{R}^3$ described above, these are the only three cases in which a sphere embedded in Euclidean space admits eversion.

Gallery of eversion steps

See also
Whitney–Graustein theorem

References

Bibliography
Iain R. Aitchison (2010) The `Holiverse': holistic eversion of the 2-sphere in R^3, preprint. arXiv:1008.0916.
John B. Etnyre (2004) Review of "h-principles and flexibility in geometry".
George K. Francis & Bernard Morin (1980) "Arnold Shapiro's Eversion of the Sphere", Mathematical Intelligencer 2(4):200–3.
Max, Nelson (1977) "Turning a Sphere Inside Out", https://www.crcpress.com/Turning-a-Sphere-Inside-Out-DVD/Max/9781466553941
Anthony Phillips (May 1966) "Turning a surface inside out", Scientific American, pp. 112–120.

External links
A History of Sphere Eversions
"Turning a Sphere Inside Out"
Software for visualizing sphere eversion
Mathematics visualization: topology. The holiverse sphere eversion (Povray animation)
The deNeve/Hills sphere eversion: video and interactive model
Patrick Massot's project to formalise the proof in the Lean Theorem Prover
An interactive exploration of Adam Bednorz and Witold Bednorz method of sphere eversion
Outside In: A video exploration of sphere eversion, created by The Geometry Center of The University of Minnesota.

Differential topology Mathematical paradoxes
Sphere eversion
Mathematics
1,250
8,150,709
https://en.wikipedia.org/wiki/Geek%20Pride%20Festival
The Geek Pride Festival was the name of a number of events between 1998 and 2000, organized by Tim McEachern and devoted to computer geek activities and interests. The name of the festival is most often associated with the large event held on March 31 and April 1, 2000, at the Park Plaza Castle in Boston, United States. Before that, there were two events at the now-closed Big House Brewery in Albany, New York. WAMC, the local NPR affiliate, sponsored the events, which were organized by Tim McEachern.

2000 event
The 2000 event was a major production, organized with the help of Susan Kaup, Chris O'Brien and many volunteers. The event began Friday night, with a swap meet / social event at the Modern Lounge in Boston's Lansdowne Street nightclub district. Drink tickets were offered at the door, and the DJ played computer-themed music. On Saturday, the main event occurred at the Castle, where admission was free. The middle of the floor held the "Email Garden", comprising about a dozen tables with PCs running Red Hat Linux, in a wired LAN network and providing email, Web, and general Internet access. At the front of the hall was a stage, which hosted a number of invited guests, including Rob Malda of Slashdot, Eric S. Raymond, Micky Metts of Channel1 ISP, and the video game cover band Everyone. The stage was also host to the final round of a Quake III tournament, held in a back room and displayed on the stage's projection screen, as well as the final round of "Stump the Geek", a geek trivia contest. Aside from the main events, the main floor had computer workstations displaying live webcam feeds of "satellite" festivals in remote locations. A live Shoutcast feed of the Boston event was also provided. A poll for "greatest geek hero" was also held; the official winner was Alan Turing. According to Science/AAAS magazine, 2,000 people attended, though the open-door free admission made an official count impossible.

Corporate sponsors
VA Linux (now SourceForge, Inc.)
Andover.net (now OSTG, part of SourceForge)
SwitcHouse (now Nintari)
Addison-Wesley
Newstrolls.com (now defunct)

Speakers
Alex Pentland
Rob Malda
Keith Dawson, editor of Tasty Bits from the Technology Front
Eric S. Raymond
The Cluetrain Manifesto authors Christopher Locke & David Weinberger
Micky Metts aka [FreeScholar]
Jeffrey Zeldman
Dave Green & Danny O'Brien of UK-based ntk.net

Other events
The 2000 event is widely referred to as the "first annual" event, although McEachern organized at least one previous event named Geek Pride Festival (and/or Geek Pride Day) at a bar in Albany, New York. Some sources refer to the Boston event as the third annual. McEachern planned another event to take place later the same year in San Francisco, but it was never realized.

References

Notes

External links
Event photos taken by Gerald Oskoboiny

Computing culture Recurring events established in 1998 Recurring events disestablished in 2000 Festivals in Boston History of subcultures Pride
Geek Pride Festival
Technology
653
20,841,681
https://en.wikipedia.org/wiki/Phosacetim
Phosacetim is a toxic organophosphate compound, which acts as an acetylcholinesterase inhibitor and is used as a rodenticide.

References

Acetylcholinesterase inhibitors Rodenticides 4-Chlorophenyl compounds Phosphoramidothioates Phenol ethers Amidines
Phosacetim
Chemistry,Biology
72
15,034,041
https://en.wikipedia.org/wiki/Landing%20footprint
A landing footprint, also called a landing ellipse, is the area of uncertainty of a spacecraft's landing zone on an astronomical body. After atmospheric entry, the landing point of a spacecraft will depend upon the degree of control (if any), entry angle, entry mass, atmospheric conditions, and drag. (Note that the Moon and the asteroids have no atmospheric factors.) By aggregating such numerous variables it is possible to model a spacecraft's landing zone to a certain degree of precision. By simulating entry under varying conditions, a probability ellipse can be calculated; the size of the ellipse represents the degree of uncertainty for a given confidence interval.

Mathematical explanation
To create a landing footprint for a spacecraft, the standard approach is to use the Monte Carlo method to generate distributions of initial entry conditions and atmospheric parameters, solve the reentry equations of motion, and catalog the final longitude/latitude pair at touchdown. It is commonly assumed that the resulting distribution of landing sites follows a bivariate Gaussian distribution:

$p(\mathbf{x}) = \frac{1}{2\pi\sqrt{\det\boldsymbol{\Sigma}}} \exp\!\left( -\frac{1}{2}\,(\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)$

where:
$\mathbf{x}$ is the vector containing the longitude/latitude pair
$\boldsymbol{\mu}$ is the expected value vector
$\boldsymbol{\Sigma}$ is the covariance matrix
$\det\boldsymbol{\Sigma}$ denotes the determinant of the covariance matrix

Once the parameters are estimated from the numerical simulations, an ellipse can be calculated for a percentile $p$. It is known that for a real-valued vector $\mathbf{x}$ with a multivariate Gaussian joint distribution, the square of the Mahalanobis distance has a chi-squared distribution with $k$ degrees of freedom:

$d^2 = (\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \sim \chi^2_k$

This can be seen by defining the vector $\mathbf{z} = \boldsymbol{\Sigma}^{-1/2}(\mathbf{x} - \boldsymbol{\mu})$, which leads to

$d^2 = \mathbf{z}^{\mathsf{T}}\mathbf{z} = \sum_{i=1}^{k} z_i^2$

and is the definition of the chi-squared statistic used to construct the resulting distribution. So for the bivariate Gaussian distribution, the boundary of the ellipse at a given percentile $p$ is

$(\mathbf{x} - \boldsymbol{\mu})^{\mathsf{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) = \chi^2_2(p)$

In the coordinates $\mathbf{z}$ this is the equation of a circle centered at the origin with radius $\sqrt{\chi^2_2(p)}$, leading to the parametric equations:

$\mathbf{x}(\theta) = \boldsymbol{\mu} + \sqrt{\chi^2_2(p)}\; \boldsymbol{\Sigma}^{1/2} \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}, \qquad 0 \le \theta < 2\pi$

where $\theta$ is the angle. The matrix square root can be found from the eigenvalue decomposition of the covariance matrix, $\boldsymbol{\Sigma} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^{\mathsf{T}}$, from which $\boldsymbol{\Sigma}^{1/2}$ can be written as:

$\boldsymbol{\Sigma}^{1/2} = \mathbf{V} \boldsymbol{\Lambda}^{1/2} \mathbf{V}^{\mathsf{T}}$

where the eigenvalues lie on the diagonal of $\boldsymbol{\Lambda}$. The values of $\mathbf{x}(\theta)$ then define the landing footprint for a given level of confidence, which is expressed through the choice of percentile.

See also
List of landing ellipses on extraterrestrial bodies
Mars landing
Moon landing

References

Spaceflight concepts Atmospheric entry Statistical intervals
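A minimal numerical sketch of the procedure just described, using NumPy and SciPy; the landing-dispersion samples here are synthetic stand-ins for the output of a real entry simulation, and all numeric values are illustrative assumptions.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Stand-in for Monte Carlo touchdown points (longitude, latitude) in degrees.
samples = rng.multivariate_normal(mean=[137.4, -4.6],
                                  cov=[[0.020, 0.008], [0.008, 0.005]],
                                  size=5000)

mu = samples.mean(axis=0)                # estimated expected value vector
sigma = np.cov(samples, rowvar=False)    # estimated covariance matrix

# Boundary of the 99% landing ellipse via the chi-squared quantile.
r = np.sqrt(chi2.ppf(0.99, df=2))
eigvals, eigvecs = np.linalg.eigh(sigma)
sqrt_sigma = eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])
ellipse = mu[:, None] + r * (sqrt_sigma @ circle)   # 2 x 200 boundary points

print(mu, eigvals)  # footprint centre and dispersion along the principal axes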
Landing footprint
Engineering
476
886,766
https://en.wikipedia.org/wiki/Bell%20test
A Bell test, also known as a Bell inequality test or a Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. Named for John Stewart Bell, the experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. The test empirically evaluates the implications of Bell's theorem. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave. Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests". Bell inequality violations are also used in some quantum cryptography protocols, whereby a spy's presence is detected when Bell's inequalities cease to be violated.

Overview
The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision. In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past light cone, relative to its location. This action at a distance seems to violate causality, by allowing information between the two locations to travel faster than the speed of light. However, it is a common misconception to think that any information can be shared between two observers faster than the speed of light using entangled particles; the hypothetical information transfer here is between the particles. See no-communication theorem for further explanation. Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables.
Furthermore, Einstein argued that the hidden variables should obey the condition of locality: whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper. In 1964, John Stewart Bell proposed his famous theorem, which states that no physical theory of hidden local variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments. A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to what was predicted by local realism and those predicted by quantum mechanics. In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another. Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated. To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables. These efforts to experimentally validate violations of the Bell inequalities resulted in John Clauser, Alain Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.

Conduct of optical Bell test experiments
In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.

A typical CHSH (two-channel) experiment
The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982. Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms $E(a, b)$ in the test statistic $S$ (equation (2) shown below). The settings $a$, $a'$, $b$ and $b'$ are generally in practice chosen to be 0, 45°, 22.5° and 67.5° respectively — the "Bell test angles" — these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality. For each selected value of $a$ and $b$, the numbers of coincidences in each category ($N_{++}$, $N_{--}$, $N_{+-}$ and $N_{-+}$) are recorded. The experimental estimate for $E(a, b)$ is then calculated as:

(1) $E(a, b) = \dfrac{N_{++} + N_{--} - N_{+-} - N_{-+}}{N_{++} + N_{--} + N_{+-} + N_{-+}}$

Once all four $E$'s have been estimated, an experimental estimate of the test statistic

(2) $S = E(a, b) - E(a, b') + E(a', b) + E(a', b')$

can be found. If $S$ is numerically greater than 2 it has infringed the CHSH inequality. The experiment is declared to have supported the QM prediction and ruled out all local hidden-variable theories. A strong assumption has had to be made, however, to justify use of expression (2), namely, that the sample of detected pairs is representative of the pairs emitted by the source. Denial of this assumption is called the fair sampling loophole.

A typical CH74 (single-channel) experiment
Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use. As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic

$S = \dfrac{N(a, b) - N(a, b') + N(a', b) + N(a', b') - N(a', \infty) - N(\infty, b)}{N(\infty, \infty)}$

where the symbol $\infty$ indicates absence of a polariser. If $S$ exceeds 0 then the experiment is declared to have infringed the CH inequality and hence to have refuted local hidden variables. The inequality is known as the CH inequality rather than CHSH, as it was also derived, more rigorously and under weaker assumptions, in a 1974 article by Clauser and Horne.

Experimental assumptions
In addition to the theoretical assumptions, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating $S$, but some do not consider the truth of this assumption obvious. There may be synchronisation problems — ambiguity in recognising pairs because in practice they will not be detected at exactly the same time. Nevertheless, despite all the deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of secure information transfer, which utilizes so-called quantum cryptography (involving entangled states of pairs of particles).

Notable experiments
Over the past half century, a great number of Bell test experiments have been conducted.
The experiments are commonly interpreted to rule out local hidden-variable theories, and in 2015 an experiment was performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities. Some of the best known and recent experiments include:

Kasday, Ullman and Wu (1970)
Leonard Ralph Kasday, Jack R. Ullman and Chien-Shiung Wu carried out the first experimental Bell test, using photon pairs produced by positronium decay and analyzed by Compton scattering. The experiment observed photon polarization correlations consistent with quantum predictions and inconsistent with local realistic models that obey the known polarization dependence of Compton scattering. Due to the low polarization selectivity of Compton scattering, the results did not violate a Bell inequality.

Freedman and Clauser (1972)
Stuart J. Freedman and John Clauser carried out the first Bell test that observed a Bell inequality violation, using Freedman's inequality, a variant on the CH74 inequality.

Aspect et al. (1982)
Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).

Tittel et al. (1998)
The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.

Weihs et al. (1998): experiment under "strict Einstein locality" conditions
In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an experiment that closed the "locality" loophole, improving on Aspect's 1982 experiment. The choice of detector was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.

Pan et al. (2000): experiment on the GHZ state
This is the first of the new Bell-type experiments on more than two particles; this one uses the so-called GHZ state of three particles.

Rowe et al. (2001): the first to close the detection loophole
The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.

Go et al.
(Belle collaboration): observation of Bell inequality violation in B mesons
Using semileptonic B0 decays of Υ(4S) at the Belle experiment, a clear violation of the Bell inequality in particle–antiparticle correlations was observed.

Gröblacher et al. (2007): test of Leggett-type non-local realist theories
A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive.

Salart et al. (2008): separation in a Bell test
This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.

Ansmann et al. (2009): overcoming the detection loophole in solid state
This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.

Giustina et al. (2013), Larsson et al. (2014): overcoming the detection loophole for photons
The detection loophole for photons was closed for the first time by Marissa Giustina, using highly efficient detectors. This makes photons the first system for which all of the main loopholes have been closed, albeit in different experiments.

Christensen et al. (2013): overcoming the detection loophole for photons
The Christensen et al. (2013) experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). The experiment was not pulsed, so that formation of "pairs" from the two records of measurement results (Alice and Bob) had to be done after the experiment, which in fact exposes the experiment to the coincidence loophole. This led to a reanalysis of the experimental data in a way which removed the coincidence loophole, and fortunately the new analysis still showed a violation of the appropriate CHSH or CH inequality. On the other hand, the Christensen et al. experiment was pulsed and measurement settings were frequently reset in a random way, though only once every 1000 particle pairs, not every time.

Hensen et al., Giustina et al., Shalm et al. (2015): "loophole-free" Bell tests
In 2015 the first three significant-loophole-free Bell tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them "loophole-free" in the sense that all remaining conceivable loopholes, like superdeterminism, require truly exotic hypotheses that might never get closed experimentally. The first published experiment, by Hensen et al., used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality (S = 2.42 ± 0.20). Thereby the local-realist hypothesis could be rejected with a p-value of 0.039. Both simultaneously published experiments, by Giustina et al. and Shalm et al., used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪ 10−6). Notably, the experiment by Shalm et al.
also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the "'Cultural' pseudorandom source", which involved using bit strings from popular media such as the Back to the Future films, Star Trek: Beyond the Final Frontier, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who.

Schmied et al. (2016): detection of Bell correlations in a many-body system
Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude for the first time Bell correlation in a many-body system composed of about 480 atoms in a Bose–Einstein condensate. Even though loopholes were not closed, this experiment shows the possibility of observing Bell correlations in the macroscopic regime.

Handsteiner et al. (2017): "Cosmic Bell Test" with measurement settings from Milky Way stars
Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth. The experiment "represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant."

Rosenfeld et al. (2017): "event-ready" Bell test with entangled atoms and closed detection and locality loopholes
Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters, in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10−16 when taking into account 7 months of data and 55000 events, or an upper bound of P = 2.57×10−9 from a single run with 10000 events.

The BIG Bell Test Collaboration (2018): "Challenging local realism with human choices"
An international collaborative scientific effort used arbitrary human choice to define measurement settings instead of using random number generators. Assuming that human free will exists, this would close the "freedom-of-choice loophole". Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.

Rauch et al. (2018): measurement settings from distant quasars
In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings. This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago). The 2019 PBS Nova episode Einstein's Quantum Riddle documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands.
Storz et al. (2023): loophole-free Bell inequality violation with superconducting circuits
In 2023, an international team led by the group of Andreas Wallraff at ETH Zurich demonstrated a loophole-free violation of the CHSH inequality with superconducting circuits deterministically entangled via a cryogenic link spanning a distance of 30 meters.

Loopholes
Though the series of increasingly sophisticated Bell test experiments has convinced the physics community that local hidden-variable theories are indefensible, such theories can never be excluded entirely. For example, the hypothesis of superdeterminism, in which all experiments and outcomes (and everything else) are predetermined, can never be excluded (because it is unfalsifiable). Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting. Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future. In 2015, a loophole-free Bell violation was reported using entangled diamond spins over a distance of 1.3 km, and corroborated by two experiments using entangled photon pairs. The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test. David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments.

Detection loophole
A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Philip M. Pearle in 1970, who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole, is called the fair sampling assumption. To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency $\eta$, defined as the probability that a photodetector detects a photon that arrives at it. Anupam Garg and N.
David Mermin showed that when using a maximally entangled state and the CHSH inequality an efficiency of $\eta > 2(\sqrt{2} - 1) \approx 82.8\%$ is required for a loophole-free violation. Later Philippe H. Eberhard showed that when using a partially entangled state a loophole-free violation is possible for $\eta > 2/3$, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds; for example, there exists a four-setting inequality which is violated at still lower efficiencies. Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions, superconducting qubits, and nitrogen-vacancy centers. These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.

Locality loophole
One of the assumptions of Bell's theorem is the one of locality, namely that the choice of setting at a measurement site does not influence the result of the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites. The first experiment that strived to respect this condition was Aspect's 1982 experiment. In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by a distance of 144 km.

Coincidence loophole
In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station. The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows which are short enough that most pairs of events occurring in the same window do originate with the same emission and long enough that a true pair is not separated by a window boundary.

Memory loophole
In most experiments, measurements are repeatedly made at the same two locations. A local hidden variable theory could exploit the memory of past measurement settings and outcomes in order to increase the violation of a Bell inequality. Moreover, physical parameters might be varying in time.
It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.

Superdeterminism
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such freedom is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is determined by the system being measured is known as superdeterministic.

Many-worlds loophole
The many-worlds interpretation, also known as the Everett interpretation after Hugh Everett, is deterministic and has local dynamics, consisting of the unitary part of quantum mechanics without collapse. Bell's theorem does not apply because of an implicit assumption that measurements have a single outcome.

See also
Determinism – Quantum and classical mechanics
Einstein's thought experiments
Principle of locality
Quantum indeterminacy

References

Further reading

Quantum measurement
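The CHSH statistic defined earlier can be illustrated with a small simulation. The Python sketch below draws coincidence counts from the quantum-mechanical outcome probabilities for polarization-entangled photon pairs at the standard Bell test angles, then evaluates E(a, b) and S exactly as in the estimator formulas (1) and (2) above. The entangled state, sample size and function names are illustrative assumptions, not a description of any particular experiment.

import numpy as np

rng = np.random.default_rng(1)

def coincidence_counts(a_deg, b_deg, n_pairs=100_000):
    # Quantum predictions for a |HH> + |VV> polarization-entangled state:
    # P(++) = P(--) = cos^2(a-b)/2 and P(+-) = P(-+) = sin^2(a-b)/2.
    d = np.radians(a_deg - b_deg)
    p = np.array([np.cos(d)**2 / 2, np.sin(d)**2 / 2,
                  np.sin(d)**2 / 2, np.cos(d)**2 / 2])  # order: ++, +-, -+, --
    return rng.multinomial(n_pairs, p)

def correlation(counts):
    # E(a,b) = (N++ + N-- - N+- - N-+) / (N++ + N-- + N+- + N-+)
    npp, npm, nmp, nmm = counts
    return (npp + nmm - npm - nmp) / counts.sum()

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5   # the "Bell test angles"
S = (correlation(coincidence_counts(a, b))
     - correlation(coincidence_counts(a, b2))
     + correlation(coincidence_counts(a2, b))
     + correlation(coincidence_counts(a2, b2)))
print(S)  # ~2.83 = 2*sqrt(2), exceeding the local-realist bound of 2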
Bell test
Physics
5,757
36,713,796
https://en.wikipedia.org/wiki/Mean%20squared%20displacement
In statistical mechanics, the mean squared displacement (MSD, also mean square displacement, average squared displacement, or mean square fluctuation) is a measure of the deviation of the position of a particle with respect to a reference position over time. It is the most common measure of the spatial extent of random motion, and can be thought of as measuring the portion of the system "explored" by the random walker. In the realm of biophysics and environmental engineering, the mean squared displacement is measured over time to determine if a particle is spreading slowly due to diffusion, or if an advective force is also contributing. Another relevant concept, the variance-related diameter (VRD, which is twice the square root of MSD), is also used in studying the transportation and mixing phenomena in the realm of environmental engineering. It prominently appears in the Debye–Waller factor (describing vibrations within the solid state) and in the Langevin equation (describing diffusion of a Brownian particle). The MSD at time $t$ is defined as an ensemble average:

$\mathrm{MSD} \equiv \left\langle \left| \mathbf{x}(t) - \mathbf{x}_0 \right|^2 \right\rangle = \frac{1}{N} \sum_{i=1}^{N} \left| \mathbf{x}^{(i)}(t) - \mathbf{x}^{(i)}(0) \right|^2$

where $N$ is the number of particles to be averaged, vector $\mathbf{x}^{(i)}(0) = \mathbf{x}_0^{(i)}$ is the reference position of the $i$-th particle, and vector $\mathbf{x}^{(i)}(t)$ is the position of the $i$-th particle at time $t$.

Derivation of the MSD for a Brownian particle in 1D
The probability density function (PDF) for a particle in one dimension is found by solving the one-dimensional diffusion equation. (This equation states that the position probability density diffuses out over time; this is the method used by Einstein to describe a Brownian particle. Another method to describe the motion of a Brownian particle is the Langevin equation, named after Paul Langevin.)

$\frac{\partial p(x, t \mid x_0)}{\partial t} = D \, \frac{\partial^2 p(x, t \mid x_0)}{\partial x^2}$

given the initial condition $p(x, t = 0 \mid x_0) = \delta(x - x_0)$, where $x$ is the position of the particle at some given time, $x_0$ is the tagged particle's initial position, and $D$ is the diffusion constant with the S.I. units $\mathrm{m^2\,s^{-1}}$ (an indirect measure of the particle's speed). The bar in the argument of the instantaneous probability refers to the conditional probability. The diffusion equation states that the speed at which the probability of finding the particle at $x$ changes is position dependent. The differential equation above takes the form of the 1D heat equation. The one-dimensional PDF below is the Green's function of the heat equation (also known as the heat kernel in mathematics):

$p(x, t \mid x_0) = \frac{1}{\sqrt{4 \pi D t}} \exp\!\left( -\frac{(x - x_0)^2}{4 D t} \right)$

This states that the probability of finding the particle at $x$ is Gaussian, and the width of the Gaussian is time dependent. More specifically, the full width at half maximum (FWHM) (technically/pedantically, this is actually the full duration at half maximum, as the independent variable is time) scales like

$\mathrm{FWHM} \propto \sqrt{t}$

Using the PDF one is able to derive the average of a given function $L(x, t)$ at time $t$:

$\langle L(x, t) \rangle \equiv \int_{-\infty}^{\infty} L(x, t) \, p(x, t \mid x_0) \, dx$

where the average is taken over all space (or any applicable variable). The mean squared displacement is defined as

$\mathrm{MSD} \equiv \left\langle (x - x_0)^2 \right\rangle$

expanding out the ensemble average

$\left\langle (x - x_0)^2 \right\rangle = \langle x^2 \rangle - 2 x_0 \langle x \rangle + x_0^2$

dropping the explicit time dependence notation for clarity. To find the MSD, one can take one of two paths: one can explicitly calculate $\langle x^2 \rangle$ and $\langle x \rangle$, then plug the result back into the definition of the MSD; or one could find the moment-generating function, an extremely useful and general function when dealing with probability densities. The moment-generating function describes the $m$-th moment of the PDF. The first moment of the displacement PDF shown above is simply the mean: $\mu_1 = \langle x \rangle$. The second moment is given as $\mu_2 = \langle x^2 \rangle$.
So then, to find the moment-generating function it is convenient to introduce the characteristic function:

$$G(k) = \langle e^{i k x} \rangle = \int_{-\infty}^{\infty} e^{i k x}\, p(x, t \mid x_0)\, dx;$$

one can expand out the exponential in the above equation to give

$$G(k) = \sum_{m=0}^{\infty} \frac{(i k)^m}{m!} \langle x^m \rangle.$$

By taking the natural log of the characteristic function, a new function is produced, the cumulant generating function,

$$\ln G(k) = \sum_{m=1}^{\infty} \frac{(i k)^m}{m!} \kappa_m,$$

where $\kappa_m$ is the $m$-th cumulant of $x$. The first two cumulants are related to the first two moments, $\mu_1$ and $\mu_2$, via $\kappa_1 = \mu_1$ and $\kappa_2 = \mu_2 - \mu_1^2$, where the second cumulant is the so-called variance, $\sigma^2$. With these definitions accounted for one can investigate the moments of the Brownian particle PDF,

$$G(k) = \int_{-\infty}^{\infty} e^{i k x}\, \frac{1}{\sqrt{4 \pi D t}} \exp\left(-\frac{(x - x_0)^2}{4 D t}\right) dx;$$

by completing the square and knowing the total area under a Gaussian one arrives at

$$G(k) = \exp\left(i k x_0 - k^2 D t\right).$$

Taking the natural log, and comparing powers of $i k$ to the cumulant generating function, the first cumulant is $\kappa_1 = x_0$, which is as expected, namely that the mean position is the Gaussian centre. The second cumulant is $\kappa_2 = 2 D t$; the factor 2 comes from the factorial factor in the denominator of the cumulant generating function. From this, the second moment is calculated,

$$\mu_2 = \kappa_2 + \kappa_1^2 = 2 D t + x_0^2.$$

Plugging the results for the first and second moments back, one finds the MSD,

$$\mathrm{MSD} = \langle (x(t) - x_0)^2 \rangle = \mu_2 - 2 x_0 \mu_1 + x_0^2 = 2 D t.$$

Derivation for n dimensions

For a Brownian particle in higher-dimension Euclidean space, its position is represented by a vector $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, where the Cartesian coordinates $x_1, x_2, \ldots, x_n$ are statistically independent. The n-variable probability distribution function is the product of the fundamental solutions in each variable; i.e.,

$$P(\mathbf{x}, t \mid \mathbf{x}_0) = p(x_1, t \mid x_{1,0})\, p(x_2, t \mid x_{2,0}) \cdots p(x_n, t \mid x_{n,0}).$$

The mean squared displacement is defined as

$$\mathrm{MSD} \equiv \langle |\mathbf{x} - \mathbf{x}_0|^2 \rangle = \langle (x_1 - x_{1,0})^2 + (x_2 - x_{2,0})^2 + \cdots + (x_n - x_{n,0})^2 \rangle.$$

Since all the coordinates are independent, their deviation from the reference position is also independent. Therefore,

$$\mathrm{MSD} = \langle (x_1 - x_{1,0})^2 \rangle + \langle (x_2 - x_{2,0})^2 \rangle + \cdots + \langle (x_n - x_{n,0})^2 \rangle.$$

For each coordinate, following the same derivation as in the 1D scenario above, one obtains the MSD in that dimension as $2 D t$. Hence, the final result of mean squared displacement in n-dimensional Brownian motion is:

$$\mathrm{MSD} = 2 n D t.$$

Definition of MSD for time lags

In the measurements of single particle tracking (SPT), displacements can be defined for different time intervals between positions (also called time lags or lag times). SPT yields the trajectory $\mathbf{r}(t) = (x(t), y(t))$, representing a particle undergoing two-dimensional diffusion. Assuming that the trajectory of a single particle is measured at time points $t_i = i\,\Delta t$, $i = 1, \ldots, N$, where $\Delta t$ is the time between frames, then for a lag time $\tau = n\,\Delta t$ (where $n \ge 1$ is any fixed number) there are $N - n$ non-trivial forward displacements (the cases when $n = 0$ are not considered), which correspond to the time intervals (or time lags) $\tau = n\,\Delta t$. Hence, there are many distinct displacements for small time lags, and very few for large time lags, and the time-averaged MSD can be defined as an average quantity over time lags:

$$\overline{\delta^2(\tau)} = \frac{1}{N - n} \sum_{i=1}^{N - n} \left|\mathbf{r}\bigl((i + n)\,\Delta t\bigr) - \mathbf{r}(i\,\Delta t)\right|^2.$$

Similarly, for a continuous time series $\mathbf{r}(t)$ observed over $0 \le t \le T$:

$$\overline{\delta^2(\tau)} = \frac{1}{T - \tau} \int_{0}^{T - \tau} \left|\mathbf{r}(t + \tau) - \mathbf{r}(t)\right|^2 dt.$$

It is clear that choosing a large $N$ (or $T$) and small time lags improves statistical performance. This technique allows us to estimate the behavior of whole ensembles by just measuring a single trajectory, but note that it is only valid for systems with ergodicity, like classical Brownian motion (BM), fractional Brownian motion (fBM), and continuous-time random walk (CTRW) with limited distribution of waiting times; in these cases, $\overline{\delta^2(\tau)} = \mathrm{MSD}(\tau)$ (defined above). However, for non-ergodic systems, like the CTRW with unlimited waiting time, the waiting time can go to infinity at some point; in this case, $\overline{\delta^2(\tau)}$ strongly depends on the measurement time, and the two averages do not equal each other anymore. In order to get better asymptotics, introduce the ensemble-averaged time-averaged MSD:

$$\left\langle \overline{\delta^2(\tau)} \right\rangle = \frac{1}{N} \sum_{k=1}^{N} \overline{\delta_k^2(\tau)},$$

where $\langle \cdot \rangle$ denotes averaging over $N$ ensembles. Also, one can easily derive the autocorrelation function from the MSD:

$$\mathrm{MSD}(\tau) = \langle (x(t + \tau) - x(t))^2 \rangle = \langle x^2(t + \tau) \rangle + \langle x^2(t) \rangle - 2\, \langle x(t + \tau)\, x(t) \rangle,$$

where $\langle x(t + \tau)\, x(t) \rangle$ is the so-called autocorrelation function for the position of particles.

MSD in experiments

Experimental methods to determine MSDs include neutron scattering and photon correlation spectroscopy.
The linear relationship between the MSD and time $t$ (for normal diffusion, $\mathrm{MSD} = 2 n D t$) allows for graphical methods to determine the diffusion constant $D$. This is especially useful for rough calculations of the diffusivity in environmental systems. In some atmospheric dispersion models, the relationship between MSD and time t is not linear. Instead, a series of power laws empirically representing the variation of the square root of MSD versus downwind distance are commonly used in studying the dispersion phenomenon. See also Root-mean-square deviation of atomic positions: the average is taken over a group of particles at a single time, whereas the MSD is taken for a single particle over an interval of time Mean squared error References Statistical mechanics Statistical deviation and dispersion Motion (physics)
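To make the time-lag definition above concrete, the following is a minimal sketch (an illustration, not part of the original article) that simulates a single two-dimensional Brownian trajectory and computes its time-averaged MSD. For this ergodic process the result should follow MSD(τ) = 4Dτ, so the diffusion constant can be recovered by a straight-line fit through the origin. The step variance 2DΔt per coordinate and the chosen parameter values are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a single 2D Brownian trajectory: each step is Gaussian with
# variance 2*D*dt per coordinate, so MSD(tau) should approach 2*n*D*tau = 4*D*tau.
D, dt, N = 1.0, 0.01, 100_000
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(N, 2))
r = np.cumsum(steps, axis=0)

def time_averaged_msd(traj, max_lag):
    """Time-averaged MSD over all N - n forward displacements for each lag n."""
    lags = np.arange(1, max_lag + 1)
    msd = np.empty(len(lags))
    for i, n in enumerate(lags):
        disp = traj[n:] - traj[:-n]          # the N - n forward displacements
        msd[i] = np.mean(np.sum(disp**2, axis=1))
    return lags * dt, msd

taus, msd = time_averaged_msd(r, max_lag=100)
# Least-squares fit of MSD = 4*D_est*tau through the origin; D_est ~ 1.
D_est = np.sum(msd * taus) / (4 * np.sum(taus**2))
print(f"estimated D = {D_est:.3f} (true D = {D})")
```

In line with the ergodicity discussion above, the estimate converges to the true D as the trajectory length grows; for a non-ergodic process the same time average would not reproduce the ensemble MSD.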
Mean squared displacement
Physics
1,552
42,937,599
https://en.wikipedia.org/wiki/HD%2097413
HD 97413 is a binary star located in the southern constellation Centaurus. The system has a combined magnitude of 6.27, placing it near the limit for naked eye visibility. Based on parallax measurements from the Gaia spacecraft, the system is located 320 light years away from the Solar System. The object's binarity was detected in a Hipparcos survey. The two components cannot be distinguished because the stars have an angular separation of . Nevertheless, speckle interferometry revealed the components to have a 2.6-magnitude difference. They are located along a position angle of 250°. The visible component – HD 97413 A – has a stellar classification of A1 V, indicating that it is an ordinary A-type main-sequence star. It has 1.94 times the mass of the Sun and a radius of . It radiates 19.6 times the luminosity of the Sun from its photosphere at an effective temperature of , giving it a white hue. However, this is not typical for an A1 star. Parameters determined from Gaia extinction measurements reveal HD 97413 A to have an iron abundance half that of the Sun, making it metal-deficient. References Centaurus A-type main-sequence stars CD-45 06771 097413 054718 Binary stars Centauri, 7
HD 97413
Astronomy
280
41,077,071
https://en.wikipedia.org/wiki/Women%27s%20Network%20for%20Unity
The Women's Network for Unity (WNU) is a sex workers' organization in Cambodia which was established in 2000 and currently has about 6,400 members. It works against the stigmatization of sex work and lobbies for the legal and human rights of sex workers and for safer working conditions. Accordingly, the organization aims to amend the 2008 Law on Suppression of Human Trafficking and Sexual Exploitation. The WNU was established by the Women's Agenda for Change (WAC) organisation that was founded by the Australian aid worker Rosanna Barbero. In 1999 several NGOs working on the issue of women's rights in development came together to discuss sex workers' rights with the aim of creating spaces and opportunities for sex workers to be at the forefront of the development agenda. The WNU was sponsored and supported by, and received training from, WAC activists, both local and international staff. See also Prostitution in Cambodia References External links WNU's website Women's organizations based in Cambodia Sex worker organizations Prostitution in Cambodia
Women's Network for Unity
Biology
205
40,490,659
https://en.wikipedia.org/wiki/N-Methylmescaline
N-Methylmescaline is a phenethylamine isolated from Lophophora williamsii. Legality United States N-Methylmescaline is illegal in the United States, as it is a positional isomer of trimethoxyamphetamine, a regulated substance. References Phenethylamine alkaloids Secondary amines Methoxy compounds
N-Methylmescaline
Chemistry
78
68,453,661
https://en.wikipedia.org/wiki/Arnidiol
Arnidiol is a cytotoxic triterpene with the molecular formula C30H50O2. Arnidiol was first isolated from the blooms of the plant Arnica montana. Arnidiol has also been isolated from the plant Taraxacum officinale. References Further reading Triterpenes Diols Vinylidene compounds Pentacyclic compounds
Arnidiol
Chemistry
84
27,452,961
https://en.wikipedia.org/wiki/American%20Council%20of%20Engineering%20Companies
The American Council of Engineering Companies (ACEC) is the oldest and largest business association of engineering companies. It is organized as a federation of 52 state and regional councils with national headquarters in Washington, D.C., comprising thousands of engineering practices throughout the country. It administers extensive lobbying and education programs. History ACEC traces its roots to the Association of Architectural Engineers, founded in New York City in 1905 to promote the business interests of consulting engineers. The organization, which was based on individual memberships, changed its name to the American Institute of Consulting Engineers (AICE) in 1909. Similar organizations soon sprang up in many states. In 1930, notable member Blake R. Van Leer lobbied Congress for a National Museum of Engineering and Industry. In 1956, representatives from 10 state associations created a national Consulting Engineers Council (CEC), representing engineering firms rather than individuals. By 1960, CEC had grown to 29 state member organizations, representing more than 1,000 firms. In 1973, CEC and AICE merged to create the American Consulting Engineers Council (ACEC). To accommodate the individual members of AICE, the new organization created a College of Fellows membership category. ACEC's advocacy activities covered a broad range of issues, sometimes necessitating the creation of spin-off groups to pursue specific goals. For example, in 1986, ACEC founded the American Tort Reform Association (ATRA) to lobby for the reform of unfair liability statutes nationwide. In 2000, ACEC changed its name to the American Council of Engineering Companies to reflect both its firm-based membership and the increasingly diversified and multi-disciplinary nature of engineering practices, including design-build practices. In 2006, President George W. Bush gave a major mid-term address at the ACEC Annual Convention in Washington, D.C., recognizing the organization for its public policy advocacy. In 2012, along with the American Public Works Association (APWA) and the American Society of Civil Engineers (ASCE), ACEC founded the Institute for Sustainable Infrastructure (ISI), which has developed the Envision® sustainability rating system for infrastructure works. In May 2022, ACEC opened the doors to its new office located at 1400 L Street, NW in Washington, replacing the council's longstanding former office location at 1015 15th Street, NW. Advocacy ACEC advocates for the business interests of its member firms before legislatures, executive agencies, courts, and in public media. Qualifications-Based Selection (QBS). In 1972, the council was a prime mover in the passage of the A/E Selection Procedures Act, also known as the Brooks Act, which requires that the U.S. Federal Government procure engineering and architecture services through Qualifications-Based Selection (QBS) rather than solely by price. Most states have adopted similar legislation. ACEC today seeks to bolster and expand the reach of QBS as a business "best practice" to ensure innovation, successful performance and public safety. Contracting Out. The council is a strong proponent of both federal and state bodies contracting out engineering services, asserting that such work is not "inherently governmental" and can be best and most economically performed by the private sector. In 2000, ACEC was a key advocate of the "Thomas Amendment" in the Water Resources and Development Act, which limits the extent to which the U.S.
Army Corps of Engineers can compete with private engineering firms in municipal works such as schools, hospitals, and utilities. In 2016, an ACEC-sponsored contracting out study, conducted by New York University, concluded that contracting out of engineering services by state departments of transportation provided demonstrable savings over having those same services performed by DOT in-house operations. Acquisition Regulations. In 2011, ACEC led a large business coalition that won repeal of the 3 percent withholding provision on federal, state, and local contracts. The council has also: secured reforms in Federal Acquisition Regulations (FAR) pertaining to overhead and audit requirements; removed the mandatory 10 percent retainage on fixed-price federal architectural/engineering (A/E) contracts; expanded opportunities for small firms to compete for Department of Defense contracts; exempted A/E services from Project Labor Agreements on federal projects; and secured reforms to the federal design-build competition process. Infrastructure Investment. ACEC has been a leading proponent of increased infrastructure investment in surface transportation, water, aviation and energy. ACEC has supported the passage of every long-term surface transportation program, including the Intermodal Surface Transportation Efficiency Act (ISTEA) (1991), Transportation Equity Act for the 21st Century (TEA-21) (1998), Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) (2005), Moving Ahead for Progress in the 21st Century Act (MAP-21) (2012), and Fixing America's Surface Transportation Act (FAST Act) (2015). Tax Reform. In 2004, ACEC helped win passage of a 9 percent tax deduction for engineering and other firms as part of the American Jobs Creation Act; in 2011, defeated a mandate for filing 1099 forms for every purchase of goods valued at more than $600; and in 2016, secured the extension of key tax benefits for engineering firms, including the R&D tax credit, bonus depreciation, small business expensing, and renewable energy tax credits. The council also continues to seek the protection of the cash method of accounting. Risk Management. ACEC helps member firms understand and manage risk, and advocates for legislative and regulatory reforms to properly control and fairly allocate risk. In recent years, ACEC has won judicial or legislative victories on issues including indemnification provisions, the duty to defend, and the Economic Loss Doctrine. Each year, ACEC conducts annual Professional Liability Insurance Surveys of members and carriers to gauge current market conditions. Political Action Committee The American Council of Engineering Companies Political Action Committee (ACEC/PAC) is a $1 million-plus annual PAC, the largest PAC in the A/E industry and among the top 3 percent of all federal PACs. During the 2014–2016 election cycle, the PAC contributed more than $2 million to Congressional campaigns and had a 97 percent win record. ACEC/PAC is bipartisan, funded solely by ACEC member contributions, and supports pro-business candidates. Education The Council holds more than 100 online classes annually, covering a wide range of business management and engineering topics. ACEC's Senior Executives Institute (SEI) provides advanced management, leadership and public policy training for firm leaders. More than 500 executives have participated in the program.
In cooperation with the Federal Highway Administration, ACEC holds regular on-site educational workshops on Federal Acquisition Regulations. Publications ACEC publishes Engineering Inc., a quarterly print and online magazine focusing on engineering business issues. Last Word is the council's weekly online membership newsletter. The "Tuesday Letter" is an e-mail communication from the ACEC CEO's office. The quarterly ACEC Engineering Business Index surveys member CEOs on the health of the engineering industry. The Engineers Joint Contract Documents Committee (EJCDC) is a collaboration of ACEC, ASCE, and the National Society of Professional Engineers (NSPE) to develop and disseminate standard contract documents for use in design and construction projects. ACEC also produces the Engineering Influence podcast. Conferences ACEC hosts two conferences per year. Each spring, the ACEC Annual Convention and Legislative Summit is held in Washington, D.C., featuring political speakers and member visits to congressional offices, as well as business education. The ACEC Fall Conference is held in different locations each year and focuses on business practice issues, markets, and political developments. Awards Programs Engineering Excellence Awards—Introduced in 1967, the Engineering Excellence Awards (EEA) program annually honors outstanding engineering accomplishments. Community Service Awards—Given annually to member firm principals who have made outstanding contributions to the quality of life in their community. Distinguished Award of Merit—The council's highest award, given to individuals for exemplary achievement. Recipients include former Presidents Dwight D. Eisenhower and Herbert Hoover, General Lucius Clay, Admiral Hyman G. Rickover, Carl Sagan, W. Edwards Deming, and Neil Armstrong. QBS Awards Program—Co-sponsored with NSPE, the QBS Awards recognize public and private entities that make exemplary use of the qualifications-based selection (QBS) process at the state and local levels. Young Professional of the Year Award—This award promotes the accomplishments of young engineers by highlighting their engineering contributions and the resulting impact on society. ACEC also awards six student scholarships annually. References External links Engineering organizations
American Council of Engineering Companies
Engineering
1,753
2,665,788
https://en.wikipedia.org/wiki/Gamma%20Serpentis
Gamma Serpentis (γ Serpentis, γ Ser) is a star in the equatorial constellation Serpens, in the part of the constellation that represents the serpent's head (Serpens Caput). It has an apparent visual magnitude of +3.85, which means it is visible to the naked eye. Based upon parallax measurements by the Gaia spacecraft, this star is approximately 36.4 light years from Earth. Properties Gamma Serpentis is an ordinary F-type main-sequence star with a stellar classification of F6 V, currently fusing atoms of hydrogen into helium at its core. It is 46% larger and 21% more massive than the Sun, with three times the solar luminosity. Based upon its mass, it may have a convection zone in its core region. The projected rotational velocity is 10.2 km/s, providing a lower limit to the azimuthal rotational velocity along the equator. It is younger than the Sun, with an estimated age of 3.5 billion years. The effective temperature of the star's outer atmosphere is 6,300 K, giving it the yellow-white-hued glow of an F-type star. Occasionally Gamma Serpentis is listed as having two 10th-magnitude companions, but it appears that these stars are just optical neighbours. Etymology It was a member of the indigenous Arabic asterism al-Nasaq al-Sha'āmī, "the Northern Line" of al-Nasaqān, "the Two Lines", along with β Her (Kornephoros), γ Her (Hejian, Ho Keen) and β Ser (Chow). According to the catalogue of stars in the Technical Memorandum 33-507 - A Reduced Star Catalog Containing 537 Named Stars, al-Nasaq al-Sha'āmī or Nasak Shamiya was the title for three stars: β Ser as Nasak Shamiya I, γ Ser as Nasak Shamiya II, and γ Her as Nasak Shamiya III (excluding β Her). The star was later given the proper name Ainalhai, from the Arabic عين الحية ‘Ayn al-Ḥayyah, "the Serpent's Eye". In Chinese, (), meaning Right Wall of Heavenly Market Enclosure, refers to an asterism which represents eleven old states in China and which marks the right borderline of the enclosure, consisting of γ Serpentis, β Herculis, γ Herculis, κ Herculis, β Serpentis, δ Serpentis, α Serpentis, ε Serpentis, δ Ophiuchi, ε Ophiuchi and ζ Ophiuchi. Consequently, the Chinese name for γ Serpentis itself is (, ), representing the state Zheng (鄭) (or Ching), together with 20 Capricorni (according to Ian Ridpath's version) in Twelve States (asterism). References External links Gamma Serpentis by Professor Jim Kaler. Serpentis, Gamma Serpens F-type main-sequence stars Serpentis, Gamma Triple stars Serpentis, 41 078072 Suspected variables 5933 142860 Durchmusterung objects
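As a quick plausibility check (an illustration, not part of the original article), the quoted radius, temperature, and luminosity can be compared through the Stefan–Boltzmann relation L/L☉ = (R/R☉)²(T/T☉)⁴; the solar effective temperature of 5772 K is an assumed reference value.

```python
# Consistency check of the quoted stellar parameters via the Stefan–Boltzmann law:
# L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
R = 1.46        # radius in solar radii (46% larger than the Sun, per the article)
T = 6300.0      # effective temperature in K, per the article
T_SUN = 5772.0  # assumed IAU nominal solar effective temperature

L = R**2 * (T / T_SUN)**4
print(f"L = {L:.2f} Lsun")  # ~3.0, matching the quoted three solar luminosities
```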
Gamma Serpentis
Astronomy
643
23,533,351
https://en.wikipedia.org/wiki/Chamazulene
Chamazulene is an aromatic chemical compound with the molecular formula C14H16 found in a variety of plants, including chamomile (Matricaria chamomilla), wormwood (Artemisia absinthium), and yarrow (Achillea millefolium). It is a blue-violet derivative of azulene which is biosynthesized from the sesquiterpene matricin. Chamazulene has anti-inflammatory properties in vivo and inhibits the CYP1A2 enzyme. References Azulenes Hydrocarbons
Chamazulene
Chemistry
124
15,062,436
https://en.wikipedia.org/wiki/KAT8%20regulatory%20NSL%20complex%20subunit%201
KAT8 regulatory NSL complex subunit 1 is a protein that in humans is encoded by the KANSL1 gene. Interactions KANSL1 (formerly designated KIAA1267) has been shown to interact with CCDC85B. See also KAT8 KAT8 regulatory NSL complex subunit 2 KAT8 regulatory NSL complex subunit 3 References Further reading Uncharacterized proteins
KAT8 regulatory NSL complex subunit 1
Biology
70
56,212,511
https://en.wikipedia.org/wiki/Aspergillus%20dybowskii
Aspergillus dybowskii is a species of fungus in the genus Aspergillus which occurs in Southeast Asia. References Further reading dybowskii Fungi described in 1985 Fungus species
Aspergillus dybowskii
Biology
43
691,974
https://en.wikipedia.org/wiki/87P/Bus
Comet 87P/Bus is a periodic comet with an orbital period of 6.5 years. It fits the definition of an Encke-type comet, having a Tisserand parameter with respect to Jupiter greater than 3 (TJupiter > 3) and a semi-major axis smaller than Jupiter's (a < aJupiter). It was discovered by Schelte J. Bus in 1981 on a plate taken with the 1.2 m UK Schmidt Telescope at Siding Spring, Australia. The discovery was announced in IAU Circular 3578 on March 4, 1981. It has been observed on each of its subsequent apparitions, most recently in 2020. Its nucleus is estimated to have an effective radius of 0.27 ± 0.01 kilometers and to be elongated, with an a/b axis ratio greater than 2.2. Its rotational period is estimated to be 32 ± 9 hours. A close approach to Jupiter on 13 May 1952, at a distance of 0.0668 AU, lowered the orbital period from 12.46 to 6.43 years and the perihelion distance from 4.43 to 2.13 AU. Another close approach to Jupiter on 24 February 2023, at a distance of 0.182 AU, raised the perihelion to 3.62 AU and the orbital period to 9.58 years. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 87P/Bus – Seiichi Yoshida @ aerith.net Periodic comets 0087 Encke-type comets Discoveries by Schelte J. Bus 087P Astronomical objects discovered in 1981
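The quoted periods determine the semi-major axes through Kepler's third law, a³ = P² with a in AU and P in years. The sketch below (an illustration, not from the article; Jupiter's semi-major axis of 5.20 AU is an assumed value) checks the a < aJupiter half of the Encke-type criterion for each orbit; verifying TJupiter > 3 would additionally require the eccentricity and inclination, which the article does not quote.

```python
# Kepler's third law for heliocentric orbits: a[AU]**3 = P[yr]**2, so a = P**(2/3).
# Periods are the values quoted in the article; a_Jupiter = 5.20 AU is assumed.
A_JUPITER = 5.20

for label, period in [("pre-1952", 12.46), ("1952-2023", 6.43), ("post-2023", 9.58)]:
    a = period ** (2.0 / 3.0)
    relation = "a < a_Jupiter" if a < A_JUPITER else "a > a_Jupiter"
    print(f"{label}: P = {period:5.2f} yr -> a = {a:.2f} AU ({relation})")
```

The pre-1952 orbit (a of about 5.4 AU) falls outside the criterion, while the two later orbits (about 3.5 and 4.5 AU) satisfy it, consistent with the article's account of the Jupiter encounters reshaping the orbit.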
87P/Bus
Astronomy
313
10,797,093
https://en.wikipedia.org/wiki/Karamata%27s%20inequality
In mathematics, Karamata's inequality, named after Jovan Karamata, also known as the majorization inequality, is a theorem in elementary algebra for convex and concave real-valued functions, defined on an interval of the real line. It generalizes the discrete form of Jensen's inequality, and generalizes in turn to the concept of Schur-convex functions.

Statement of the inequality

Let $I$ be an interval of the real line and let $f$ denote a real-valued, convex function defined on $I$. If $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are numbers in $I$ such that $(x_1, \ldots, x_n)$ majorizes $(y_1, \ldots, y_n)$, then

$$f(x_1) + \cdots + f(x_n) \ge f(y_1) + \cdots + f(y_n). \qquad (1)$$

Here majorization means that $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ satisfy

$$x_1 \ge x_2 \ge \cdots \ge x_n \quad \text{and} \quad y_1 \ge y_2 \ge \cdots \ge y_n, \qquad (2)$$

and we have the inequalities

$$x_1 + \cdots + x_i \ge y_1 + \cdots + y_i \quad \text{for all } i \in \{1, \ldots, n-1\}, \qquad (3)$$

and the equality

$$x_1 + \cdots + x_n = y_1 + \cdots + y_n. \qquad (4)$$

If $f$ is a strictly convex function, then the inequality (1) holds with equality if and only if we have $x_i = y_i$ for all $i \in \{1, \ldots, n\}$.

Remarks

If the convex function $f$ is non-decreasing, then the proof of (1) below and the discussion of equality in case of strict convexity shows that the equality (4) can be relaxed to

$$x_1 + \cdots + x_n \ge y_1 + \cdots + y_n. \qquad (5)$$

The inequality (1) is reversed if $f$ is concave, since in this case the function $-f$ is convex.

Example

The finite form of Jensen's inequality is a special case of this result. Consider the real numbers $x_1, \ldots, x_n$ and let

$$a := \frac{x_1 + \cdots + x_n}{n}$$

denote their arithmetic mean. Then $(x_1, \ldots, x_n)$, arranged in decreasing order, majorizes the $n$-tuple $(a, a, \ldots, a)$, since the arithmetic mean of the $i$ largest numbers of $(x_1, \ldots, x_n)$ is at least as large as the arithmetic mean $a$ of all the $n$ numbers, for every $i \in \{1, \ldots, n-1\}$. By Karamata's inequality (1) for the convex function $f$,

$$f(x_1) + \cdots + f(x_n) \ge f(a) + \cdots + f(a) = n f(a).$$

Dividing by $n$ gives Jensen's inequality. The sign is reversed if $f$ is concave.

Proof of the inequality

We may assume that the numbers are in decreasing order as specified in (2). If $x_i = y_i$ for all $i \in \{1, \ldots, n\}$, then the inequality (1) holds with equality, hence we may assume in the following that $x_i \neq y_i$ for at least one $i$. If $x_i = y_i$ for an $i$, then the inequality (1) and the majorization properties (3) and (4) are not affected if we remove $x_i$ and $y_i$. Hence we may assume that $x_i \neq y_i$ for all $i \in \{1, \ldots, n\}$.

It is a property of convex functions that for two numbers $x \neq y$ in the interval $I$ the slope

$$\frac{f(x) - f(y)}{x - y}$$

of the secant line through the points $(x, f(x))$ and $(y, f(y))$ of the graph of $f$ is a monotonically non-decreasing function in $x$ for fixed $y$ (and vice versa). This implies that

$$c_{i+1} := \frac{f(x_{i+1}) - f(y_{i+1})}{x_{i+1} - y_{i+1}} \le \frac{f(x_i) - f(y_i)}{x_i - y_i} =: c_i \qquad (6)$$

for all $i \in \{1, \ldots, n-1\}$. Define $A_0 = B_0 = 0$ and

$$A_i = x_1 + \cdots + x_i, \quad B_i = y_1 + \cdots + y_i$$

for all $i \in \{1, \ldots, n\}$. By the majorization property (3), $A_i \ge B_i$ for all $i \in \{1, \ldots, n-1\}$, and by (4), $A_n = B_n$. Hence,

$$\sum_{i=1}^{n} \bigl( f(x_i) - f(y_i) \bigr) = \sum_{i=1}^{n} c_i (x_i - y_i) = \sum_{i=1}^{n} c_i \bigl( (A_i - A_{i-1}) - (B_i - B_{i-1}) \bigr) = c_n (A_n - B_n) + \sum_{i=1}^{n-1} (c_i - c_{i+1}) (A_i - B_i) \ge 0, \qquad (7)$$

which proves Karamata's inequality (1).

To discuss the case of equality in (1), note that $x_1 > y_1$ by (3) and our assumption $x_i \neq y_i$ for all $i$. Let $i$ be the smallest index such that $(x_i, y_i) \neq (x_{i+1}, y_{i+1})$, which exists due to (4). Then $A_i > B_i$. If $f$ is strictly convex, then there is strict inequality in (6), meaning that $c_{i+1} < c_i$. Hence there is a strictly positive term in the sum on the right hand side of (7) and equality in (1) cannot hold.

If the convex function $f$ is non-decreasing, then $c_n \ge 0$. The relaxed condition (5) means that $A_n \ge B_n$, which is enough to conclude that $c_n (A_n - B_n) \ge 0$ in the last step of (7).

If the function $f$ is strictly convex and non-decreasing, then $c_n > 0$. It only remains to discuss the case $A_n > B_n$. However, then there is a strictly positive term on the right hand side of (7) and equality in (1) cannot hold.

References

External links

An explanation of Karamata's inequality and majorization theory can be found here.

Inequalities Convex analysis Articles containing proofs
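A brute-force numerical check can complement the proof above. The sketch below (an illustration, not part of the original article) builds pairs in which x majorizes y by transferring mass from the smallest to the largest entry of a sorted copy of y (the partial sums can only grow while the total sum is preserved), then verifies inequality (1) for the convex function exp.

```python
import numpy as np

rng = np.random.default_rng(1)

def majorizes(x, y, tol=1e-12):
    """Check that x majorizes y: equal totals and dominating partial sums."""
    x, y = np.sort(x)[::-1], np.sort(y)[::-1]
    cx, cy = np.cumsum(x), np.cumsum(y)
    return abs(cx[-1] - cy[-1]) < tol and np.all(cx[:-1] >= cy[:-1] - tol)

f = np.exp  # any convex function on the real line

for _ in range(1000):
    y = rng.normal(size=6)
    # Construct x majorizing y: move mass t from the smallest entry to the
    # largest one; the total sum is unchanged, every partial sum grows.
    x = np.sort(y)[::-1].copy()
    t = rng.uniform(0, 1)
    x[0] += t
    x[-1] -= t
    assert majorizes(x, y)
    assert f(x).sum() >= f(y).sum() - 1e-12  # Karamata's inequality (1)
print("all checks passed")
```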
Karamata's inequality
Mathematics
665
44,370
https://en.wikipedia.org/wiki/Trefoil
A trefoil () is a graphic form composed of the outline of three overlapping rings, used in architecture and in Pagan and Christian symbolism, among other areas. The term is also applied to other symbols with a threefold shape. A similar shape with four rings is called a quatrefoil. Architecture Ornamentation 'Trefoil' is a term in Gothic architecture given to the ornamental foliation or cusping introduced in the heads of window-lights, tracery, and panellings, in which the centre takes the form of a three-lobed leaf (formed from three partially overlapping circles). One of the earliest examples is in the plate tracery at Winchester Cathedral (1222–1235). The fourfold version of an architectural trefoil is a quatrefoil. A simple trefoil shape in itself can be symbolic of the Trinity, while a trefoil combined with an equilateral triangle was also a moderately common symbol of the Christian Trinity during the late Middle Ages in some parts of Europe, similar to a barbed quatrefoil. Two forms of a trefoil combined with a triangle are shown below: A dove, which symbolizes the Holy Spirit, is sometimes depicted within the outlined form of the trefoil combined with a triangle. Architectural layout In architecture and archaeology, a 'trefoil' describes a layout or floor plan consisting of three apses in clover-leaf shape, as for example in the Megalithic temples of Malta. Particularly in church architecture, such a layout may be called a "triconchos". Heraldry The heraldic 'trefoil' is a stylized clover. It should not be confused with the figure named in French heraldry ("threefoil"), a stylized flower with three petals, which differs from the heraldic trefoil in not being slipped. Symbols Symmetrical trefoils are particularly popular as warning and informational symbols: if a box containing hazardous material is moved around and shifted into different positions, the symbol is still easy to recognize. Likewise, the distinctive trefoil design of the recycling symbol makes it easy for a consumer to notice and identify as recyclable the packaging on which the symbol is printed. Easily stenciled symbols are also favored. While the green trefoil is considered by many to be the symbol of Ireland, the harp has much greater officially recognized status. Therefore, shamrocks generally do not appear on Irish coins or postage stamps. A trefoil is also part of the logo for Adidas Originals, which also includes three stripes. See also Clover or Trefoil, a plant Fleur-de-Lys Foil (architecture) Quatrefoil Shamrock Trefoil arch Trefoil domain Trefoil knot Torus knot Explanatory notes References External links Explanation of Christian symbolism of Trefoil Christian symbols Heraldic charges Ornaments Piecewise-circular curves Symbols Visual motifs
Trefoil
Mathematics
579
377,537
https://en.wikipedia.org/wiki/Hankel%20matrix
In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is an $n \times m$ matrix in which each ascending skew-diagonal from left to right is constant. For example,

$$\begin{pmatrix} a & b & c & d & e \\ b & c & d & e & f \\ c & d & e & f & g \\ d & e & f & g & h \\ e & f & g & h & i \end{pmatrix}.$$

More generally, a Hankel matrix is any matrix of the form

$$A = \begin{pmatrix} a_1 & a_2 & a_3 & \cdots & a_m \\ a_2 & a_3 & a_4 & \cdots & a_{m+1} \\ a_3 & a_4 & a_5 & \cdots & a_{m+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_n & a_{n+1} & a_{n+2} & \cdots & a_{n+m-1} \end{pmatrix},$$

that is, $A_{i,j} = a_{i+j-1}$. In terms of the components, if the $(i, j)$ element of $A$ is denoted with $A_{i,j}$, and assuming $i \le j$, then we have $A_{i,j} = A_{i+k,\, j-k}$ for all $k = 0, 1, \ldots, j - i$.

Properties

Any square Hankel matrix is symmetric. Let $J_n$ be the $n \times n$ exchange matrix. If $H$ is an $m \times n$ Hankel matrix, then $H = T J_n$, where $T$ is an $m \times n$ Toeplitz matrix. If $T$ is real symmetric, then $H = T J_n$ will have the same eigenvalues as $T$, up to sign. The Hilbert matrix is an example of a Hankel matrix. The determinant of a Hankel matrix is called a catalecticant.

Hankel operator

Given a formal Laurent series

$$f(z) = \sum_{n=-\infty}^{N} a_n z^n,$$

the corresponding Hankel operator is defined as

$$H_f : \mathbb{C}[z] \to z^{-1} \mathbb{C}[[z^{-1}]].$$

This takes a polynomial $g \in \mathbb{C}[z]$ and sends it to the product $f g$, but discards all powers of $z$ with a non-negative exponent, so as to give an element in $z^{-1} \mathbb{C}[[z^{-1}]]$, the formal power series with strictly negative exponents. The map $H_f$ is in a natural way $\mathbb{C}$-linear, and its matrix with respect to the elements $1, z, z^2, \ldots$ and $z^{-1}, z^{-2}, \ldots$ is the Hankel matrix

$$\begin{pmatrix} a_{-1} & a_{-2} & a_{-3} & \cdots \\ a_{-2} & a_{-3} & a_{-4} & \cdots \\ a_{-3} & a_{-4} & a_{-5} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if $f$ is a rational function, that is, a fraction of two polynomials,

$$f(z) = \frac{p(z)}{q(z)}.$$

Approximations

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator. Note that the matrix does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation is a Hankel matrix, which can be shown with AAK theory.

Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, of a sequence is the sequence of the determinants of the Hankel matrices formed from it. Given an integer $n$, define the corresponding $n$-dimensional Hankel matrix $B_n$ as having the matrix elements

$$[B_n]_{i,j} = b_{i+j}, \quad 0 \le i, j \le n - 1.$$

Then the sequence $h_n$ given by

$$h_n = \det B_n$$

is the Hankel transform of the sequence $(b_n)$. The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$$c_n = \sum_{k=0}^{n} \binom{n}{k} b_k$$

as the binomial transform of the sequence $(b_n)$, then the Hankel transform of $(c_n)$ equals that of $(b_n)$.

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization. The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.

Positive Hankel matrices and the Hamburger moment problems

See also Cauchy matrix Jacobi operator Toeplitz matrix, an "upside down" (that is, row-reversed) Hankel matrix Vandermonde matrix Notes References Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM). Matrices Transforms
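To illustrate the Hankel matrix transform defined above, here is a minimal sketch (not part of the original article) that builds the matrices $[B_n]_{i,j} = b_{i+j}$ directly from a sequence and evaluates the determinant sequence. The Catalan numbers are used as the test sequence because their Hankel transform is known to be the all-ones sequence.

```python
import numpy as np

def hankel(a, n):
    """n x n Hankel matrix with entries A[i, j] = a[i + j] (0-indexed)."""
    return np.array([[a[i + j] for j in range(n)] for i in range(n)])

def hankel_transform(a, nmax):
    """Hankel transform: the sequence of determinants det(B_n), n = 1..nmax."""
    return [round(np.linalg.det(hankel(a, n))) for n in range(1, nmax + 1)]

# Catalan numbers 1, 1, 2, 5, 14, ... via C_{k+1} = C_k * 2(2k+1)/(k+2);
# their Hankel transform is the all-ones sequence.
catalan = [1]
for k in range(12):
    catalan.append(catalan[-1] * 2 * (2 * k + 1) // (k + 2))

print(hankel(catalan, 3))
print(hankel_transform(catalan, 6))  # -> [1, 1, 1, 1, 1, 1]
```

For larger work, scipy.linalg.hankel(c, r) builds the same structure from a given first column and last row; the explicit loop is kept here to mirror the component definition.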
Hankel matrix
Mathematics
731
26,233,008
https://en.wikipedia.org/wiki/Hexahydroxybenzene%20trisoxalate
Hexahydroxybenzene trisoxalate is a chemical compound, an oxide of carbon with formula C12O12. Its molecule consists of a benzene core with the six hydrogen atoms replaced by three oxalate groups. It can be seen as a sixfold ester of benzenehexol and oxalic acid. The compound was first described by H. S. Verter and R. Dominic in 1967. See also Tetrahydroxy-1,4-benzoquinone bisoxalate Tetrahydroxy-1,4-benzoquinone biscarbonate Hexahydroxybenzene triscarbonate References Oxocarbons Oxalate esters Conjugated ketones
Hexahydroxybenzene trisoxalate
Chemistry
151