https://en.wikipedia.org/wiki/Newcastle%20Connection
The Newcastle Connection (or UNIX United) was a software subsystem from the early 1980s that could be added to each of a set of interconnected UNIX-like systems to build a distributed system. The latter would be functionally indistinguishable, at both user- and system-level, from a conventional UNIX system. It became a forerunner of Sun Microsystems' Network File System (NFS). The name derives from the research group at Newcastle University, under Brian Randell, which developed it. The term "UNIX United" describes the scheme of combining the overall filesystems of the participating UNIX machines; "Newcastle Connection" describes the underlying communication layer which enables this. A UNIX United system constructed with the Newcastle Connection is functionally indistinguishable from a centralised UNIX system at the system-call level. In essence, the concept of the "parent directory" was re-interpreted at the root of the filesystem, where it originally had no significant meaning, to mean "this directory is on a remote machine", similar to subsequent "Super-root (Unix)" usage.

UNIX United

As a reminder, a typical single UNIX directory tree might resemble:

/
  home
    brian   (current directory '.')
      a
      b

UNIX United acts as an extra level above the / root. If the example machine is named "unix1", an overall UNIX United scheme with an additional second machine, "unix2", would look like:

/..
  unix1
    home
      brian   (current directory '.')
        a
        b
  unix2
    home
      brian
        b
        c

If we wish to copy file a from "unix1" to "unix2" to sit alongside files b and c, example equivalent commands might be:

cp /home/brian/a /../unix2/home/brian/a
cp a /../unix2/home/brian/a
( cd /../unix2/home/brian ; cp /../unix1/home/brian/a a )

Internals

It required no changes to the UNIX kernel. Rather, it ran in user space, using a modified version of the C standard library of its day which was capable of recognising these new semantics. To a first approximation this was to reco
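As a toy illustration of the re-interpreted "parent of root" semantics described above (not the actual Newcastle Connection implementation, which was a modified C library), a path beginning with "/.." can be split into a machine name and a machine-local path:

```python
def resolve_united_path(path: str, local_machine: str = "unix1"):
    """Split a UNIX United path into (machine, machine-local path)."""
    if path.startswith("/../"):
        # "/.." escapes the local root; the next component names a machine.
        rest = path[len("/../"):]
        machine, _, local_path = rest.partition("/")
        return machine, "/" + local_path
    # An ordinary absolute path stays on the local machine.
    return local_machine, path

assert resolve_united_path("/home/brian/a") == ("unix1", "/home/brian/a")
assert resolve_united_path("/../unix2/home/brian/a") == ("unix2", "/home/brian/a")
```

The modified library would, in effect, perform a split of this kind on every path-taking call and forward the remote operations over the network.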
https://en.wikipedia.org/wiki/Nucleotide%20pyrophosphatase/phosphodiesterase
Nucleotide pyrophosphatase/phosphodiesterase (NPP) is a class of dimeric enzymes that catalyze the hydrolysis of phosphate diester bonds. NPP belongs to the alkaline phosphatase (AP) superfamily of enzymes. Humans express seven known NPP isoforms, some of which prefer nucleotide substrates, some of which prefer phospholipid substrates, and others whose preferred substrates have not yet been determined. In eukaryotes, most NPPs are located in the cell membrane and hydrolyze extracellular phosphate diesters to affect a wide variety of biological processes. Bacterial NPP is thought to localize to the periplasm.

Structure

The catalytic site of NPP consists of a two-metal-ion (bimetallo) Zn2+ catalytic core. These Zn2+ catalytic components are thought to stabilize the transition state of the NPP phosphoryl transfer reaction.

Mechanism

Overview

NPP catalyzes the nucleophilic substitution of one ester bond on a phosphodiester substrate. It has a nucleoside binding pocket that excludes phospholipid substrates from the active site. A threonine nucleophile has been identified through site-directed mutagenesis, and the reaction inverts the stereochemistry of the phosphorus center. The sequence of bond breakage and formation has yet to be resolved.

Ongoing investigation

Three extreme possibilities have been proposed for the mechanism of NPP-catalyzed phosphoryl transfer. They are distinguished by the sequence in which bonds to phosphorus are made and broken. Though this phenomenon is subtle, it is important for understanding the physiological roles of AP superfamily enzymes, and also for molecular dynamics modeling. Extreme mechanistic scenarios:

1) A two-step "dissociative" (elimination-addition or DN + AN) mechanism that proceeds via a trigonal metaphosphate intermediate. This mechanism is represented by the red dashed lines in the figure at right.

2) A two-step "associative" (addition-elimination or AN + DN) mechanism that proceeds via a pentavalent phosphorane int
https://en.wikipedia.org/wiki/Digital%20modeling%20and%20fabrication
Digital modeling and fabrication is a design and production process that combines 3D modeling or computer-aided design (CAD) with additive and subtractive manufacturing. Additive manufacturing is also known as 3D printing, while subtractive manufacturing may also be referred to as machining, and many other technologies can be exploited to physically produce the designed objects.

Modeling

Digitally fabricated objects are created with a variety of CAD software packages, using both 2D vector drawing and 3D modeling. Types of 3D models include wireframe, solid, surface and mesh. A design has one or more of these model types.

Machines for fabrication

Three machines are popular for fabrication:

1. CNC router
2. Laser cutter
3. 3D printer

CNC milling machine

CNC stands for "computer numerical control". CNC mills or routers include proprietary software which interprets 2D vector drawings or 3D models and converts this information to G-code, which represents specific CNC functions in an alphanumeric format that the CNC mill can interpret. The G-codes drive a machine tool, a powered mechanical device typically used to fabricate components. CNC machines are classified according to the number of axes that they possess, with 3-, 4- and 5-axis machines all being common, and industrial robots described as having as many as 9 axes. CNC machines are particularly effective at milling materials such as plywood, plastics, foam board, and metal at a fast speed. CNC machine beds are typically large enough to allow 4' × 8' (122 cm × 244 cm) sheets of material, including foam several inches thick, to be cut.

Laser cutter

The laser cutter is a machine that uses a laser to cut materials such as chip board, matte board, felt, wood, and acrylic up to 3/8 inch (1 cm) in thickness. The laser cutter is often bundled with driver software which interprets vector drawings produced by any number of CAD software platforms. The laser cutter is able to modulate the speed of the las
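To make the G-code idea above concrete, here is a hypothetical sketch (not tied to any particular controller or vendor dialect) that converts a 2D polyline, as might come from a vector drawing, into G-code moves. G0 (rapid move), G1 (linear cutting move), G21 (millimetre units) and G90 (absolute coordinates) are standard G-code words; the feed rate F is in mm/min.

```python
def polyline_to_gcode(points, safe_z=5.0, cut_z=-1.0, feed=600):
    """Emit G-code that traces `points` (a list of (x, y) pairs) at depth cut_z."""
    lines = ["G21 ; millimetre units", "G90 ; absolute coordinates"]
    x0, y0 = points[0]
    lines.append(f"G0 Z{safe_z} ; lift the tool clear of the work")
    lines.append(f"G0 X{x0} Y{y0} ; rapid to the start point")
    lines.append(f"G1 Z{cut_z} F{feed} ; plunge to cutting depth")
    for x, y in points[1:]:
        lines.append(f"G1 X{x} Y{y} F{feed} ; cut to the next vertex")
    lines.append(f"G0 Z{safe_z} ; retract")
    return "\n".join(lines)

# Trace a 40 mm right triangle:
print(polyline_to_gcode([(0, 0), (40, 0), (40, 40), (0, 0)]))
```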
https://en.wikipedia.org/wiki/Alexander%20Moiseevich%20Olevskii
Alexander Moiseevich Olevskii (born February 12, 1939, in Moscow) is a Russian-Israeli mathematician at Tel Aviv University, specializing in mathematical analysis. As of July 2021, he is a professor emeritus. He graduated in 1963 with a Candidate of Sciences degree (PhD) from Moscow State University. There he received in 1966 a Russian Doctor of Sciences degree (habilitation). At the Moscow Institute of Electronics and Mathematics, he was from 1988 to 1992 head of the department of algebra and analysis. In the spring of 1996 he was at the Institute for Advanced Study. He has held visiting appointments at universities or institutes in several countries, including France, Australia, Germany, Italy, and the United States. In 1986 Olevskii was an invited speaker at the International Congress of Mathematicians in Berkeley, California. He was a member of the 2013 Class of Fellows of the American Mathematical Society (announced in 2012). In 2014 he was an invited speaker at the European Congress of Mathematics in Kraków. His doctoral students include Gady Kozma.
https://en.wikipedia.org/wiki/ClearOS
ClearOS (also known as the ClearOS System, formerly ClarkConnect) is a Linux distribution by ClearFoundation, with network gateway, file, print, mail, and messaging services.

History

ClearOS is based on CentOS and Red Hat Enterprise Linux, designed for use in small and medium enterprises as a network gateway and network server with a web-based administration interface. It is positioned as an alternative to Windows Small Business Server. ClearOS is the successor to ClarkConnect. The software is built by ClearFoundation, and support services can be purchased from ClearCenter. ClearOS 5.1 removes previous limitations to mail, DMZ, and MultiWAN functions. As of the ClearOS 6.1 release, the distribution is a full-featured gateway, network and server operating system built from the source packages of Red Hat Enterprise Linux. ClearOS aims to replace Windows SBS as a small business server.

Features

Features include:

Stateful firewall (iptables), networking and security
Intrusion detection and prevention system (SNORT)
Virtual private networking (IPsec, PPTP, OpenVPN)
Web proxy, with content filtering and antivirus (Squid, DansGuardian)
E-mail services (webmail, Postfix, SMTP, POP3/S, IMAP/S)
Groupware (Kolab)
Database and web server (easy-to-deploy LAMP stack)
File and print services (Samba and CUPS)
Flexshares (unified multi-protocol storage which currently employs SMB, HTTP/S, FTP/S, and SMTP)
MultiWAN (Internet fault-tolerant design)
Built-in reports for system statistics and services (MRTG and others)

Awards and recognition

August 2009: CompTIA Breakaway — ClearCenter's ClearOS wins 'Best New Product' at CompTIA Breakaway.
August 2010: CompTIA Breakaway — ClearCenter's ClearOS repeats win for 'Best New Product' at CompTIA Breakaway.
July 2012: Softpedia — "An Open Source, free and powerful network and gateway Linux server operating system"
February 2014: IDG Security Firewall Distributions Review
June 2015: Small Business Computing — The 5 Best L
https://en.wikipedia.org/wiki/Viktor%20Schauberger
Viktor Schauberger (30 June 1885 – 25 September 1958) was an Austrian forest caretaker, naturalist, philosopher, inventor and pseudoscientist.

Early life

Schauberger was born in Holzschlag, Upper Austria on 30 June 1885. His parents were Leopold Schauberger and Josefa, née Klimitsch. From 1891 to 1897 he attended the elementary school in Aigen, then until 1900 the state grammar school in Linz. Until 1904 he went to the forestry school in Aggsbach in the Kartause Aggsbach, where he passed the exam as a forester. From 1904 to 1906 he was a forest clerk in Groß-Schweinbarth in Lower Austria.

Films

In 1930, "Tragendes Wasser" was filmed, showing the functioning of the log flumes.
Nature Was My Teacher, narrated by Tom Brown (1993, Borderland Science Research Foundation)
Sacred Living Geometry: The Enlightened Environmental Theories of Viktor Schauberger, narrated by Callum Coats (1995, Talkstudio)
Extraordinary Nature of Water, narrated by Callum Coats (2000, Filmstream)
Viktor Schauberger: Comprehend and Copy Nature, directed by Franz Fitzke (2007, Schauberger Verlag)

Books

Schauberger, Viktor: Unsere sinnlose Arbeit – Die Quelle der Weltkrise, Der Aufbau durch Atomverwandlung, nicht Atomzertrümmerung (1933, Krystall-Verlag GmbH; 2001, Jörg Schauberger) (Released in English as "Our Senseless Toil – The Cause of the World Crisis – Progress Through Transformation of the Atom – Not its Destruction!")
Schauberger, Viktor & Coats, Callum: Eco-Technology 1: The Water Wizard – The Extraordinary Properties of Natural Water (1998, Gateway Books)
Schauberger, Viktor & Coats, Callum: Eco-Technology 2: Nature as Teacher – New Principles in the Working of Nature (1999, Gateway Books)
Schauberger, Viktor & Coats, Callum: Eco-Technology 3: The Fertile Earth – Nature's Energies in Agriculture, Soil Fertilisation and Forestry (1999, Gateway Books)
Schauberger, Viktor & Coats, Callum: Eco-Technology 4: Energy Evolution – Harnessing Free Energy from Nature (2000
https://en.wikipedia.org/wiki/Trastuzumab%20emtansine
Trastuzumab emtansine, sold under the brand name Kadcyla, is an antibody-drug conjugate consisting of the humanized monoclonal antibody trastuzumab (Herceptin) covalently linked to the cytotoxic agent DM1. Trastuzumab alone stops growth of cancer cells by binding to the HER2 receptor, whereas trastuzumab emtansine undergoes receptor-mediated internalization into cells, is catabolized in lysosomes where DM1-containing catabolites are released and subsequently bind tubulin to cause mitotic arrest and cell death. Trastuzumab binding to HER2 prevents homodimerization or heterodimerization (HER2/HER3) of the receptor, ultimately inhibiting the activation of MAPK and PI3K/AKT cellular signalling pathways. Because the monoclonal antibody targets HER2, and HER2 is only over-expressed in cancer cells, the conjugate delivers the cytotoxic agent DM1 specifically to tumor cells. The conjugate is abbreviated T-DM1.

In the EMILIA clinical trial of women with advanced HER2 positive breast cancer who were already resistant to trastuzumab alone, it improved median overall survival by 5.8 months (30.9 months vs. 25.1 months) compared to the combination of lapatinib and capecitabine. Based on that trial, the U.S. Food and Drug Administration (FDA) approved marketing on 22 February 2013. Trastuzumab emtansine was developed by Genentech, and is manufactured by Lonza.

Medical uses

In the United States, trastuzumab emtansine was approved specifically for treatment of HER2-positive metastatic breast cancer (mBC) in patients who have been treated previously with trastuzumab and a taxane (paclitaxel or docetaxel), and who have already been treated for mBC or developed tumor recurrence within six months of adjuvant therapy. Approval was based on the EMILIA study, a phase III clinical trial that compared trastuzumab emtansine versus capecitabine (Xeloda) plus lapatinib (Tykerb) in 991 people with unresectable, locally advanced or metastatic HER2-positive breast cancer who had previously be
https://en.wikipedia.org/wiki/Rice%20bran%20oil
Rice bran oil is the oil extracted from the hard outer brown layer of rice called bran. It is known for its high smoke point of 232 °C (450 °F) and mild flavor, making it suitable for high-temperature cooking methods such as stir frying and deep frying. It is popular as a cooking oil in East Asia, the Indian subcontinent, and Southeast Asia, including India, Nepal, Bangladesh, Indonesia, Japan, Southern China and Malaysia.

Composition and properties

Rice bran oil has a composition similar to that of peanut oil, with 38% monounsaturated, 37% polyunsaturated, and 25% saturated fatty acids. A notable component of rice bran oil is γ-oryzanol, at around 2% of crude oil content. Thought to be a single compound when initially isolated, γ-oryzanol is now known to be a mixture of steryl and other triterpenyl esters of ferulic acid. Also present are tocopherols and tocotrienols (two types of vitamin E) and phytosterols.

Fatty acid composition

Physical properties of crude and refined rice bran oil

Research

Rice bran oil consumption has been found to significantly decrease total cholesterol (TC), LDL-C and triglyceride (TG) levels.

Uses

Rice bran oil is an edible oil which is used in various forms of food preparation. It is also the basis of some vegetable ghee. Rice bran wax, obtained from rice bran oil, is used as a substitute for carnauba wax in cosmetics, confectionery, shoe creams, and polishing compounds. Isolated γ-oryzanol from rice bran oil is available in China as an over-the-counter drug, and in other countries as a dietary supplement. There is no meaningful evidence supporting its efficacy for treating any medical condition.

Comparison to other vegetable oils

See also

Cereal germ
Bran
Rice germ oil
Wheat germ oil
Wheat bran oil
Yushō disease
https://en.wikipedia.org/wiki/B%C3%A9la%20Krek%C3%B3
Béla Krekó (29 September 1915 – 7 December 1994) was a Hungarian mathematician. His main research interests were linear programming and matrix rings. He was a university professor at the Károly Marx University of Economics.

Biography

Krekó's parents were Ferenc Krekó and Terézia Princz. He married his wife Katalin Kovács (1919–2010) in 1944. His children are Béla (1945), István (1946), Ágnes (1948), and László (1951). In 1940, he obtained a degree in mathematics at the Pázmány Péter University; in 1948 he also obtained a qualification in economics and a doctorate from the József Nádor University of Technology and Economics. From 1949 to 1954 he was a college teacher at the Academy of Commerce and then the Academy of Economic Engineering. From 1954 he was an associate professor at the Department of Mathematics of the Károly Marx University of Economics. In 1957, he wrote his book "Introduction to Linear Programming", and he authored further books that served generations of students as foundational texts. In 1959, he became head of the Department of Mathematics at the university, where he began the reform of mathematics education and the integration of the most important areas of operations research into teaching. From 1967 to 1980, he was director of the University Computer Center. He was appointed university professor in 1969. He played a prominent and decisive role in the 1961 launch of the plan-mathematical economist program. In 1975 he defended his dissertation at the Hungarian Academy of Sciences. He was an organizer of, and regular speaker at, the first computer science education conferences. Until his death in 1994, he took part in the modernization of university education. The National Memorial and Commemorative Committee declared the resting place of Béla Krekó, economist and mathematician, part of the national cemetery by its decision No. 85/2022 (Budapest, Óbuda cemetery, plot 20-0-1-225).

Notable works

Bacskay Zoltán–Krekó Béla. Kombinatorika és valós
https://en.wikipedia.org/wiki/Agust%C3%ADn%20de%20Iturbide
Agustín de Iturbide (27 September 1783 – 19 July 1824), full name Agustín Cosme Damián de Iturbide y Arámburu and later known as Emperor Agustín I of Mexico, was an officer in the royal Spanish army. During the Mexican War of Independence he initially fought insurgent forces rebelling against the Spanish crown before changing sides in 1820 and leading a coalition of former royalists and long-time insurgents under his Plan of Iguala. The combined forces under Iturbide brought about Mexican independence in September 1821. After securing the secession of Mexico from Spain, Iturbide was proclaimed president of the Regency in 1821; a year later, he was proclaimed Emperor, reigning from 19 May 1822 to 19 March 1823, when he abdicated. In May 1823 he went into exile in Europe. When he returned to Mexico in July 1824, he was arrested and executed.

Family and early life

Agustín Cosme Damián de Iturbide y Arámburu was born in what was then called Valladolid, now Morelia, the provincial capital of Michoacán, on 27 September 1783. He was baptized with the names of Saints Cosmas and Damian at the cathedral. The fifth child born to his parents, he was the only male to survive and eventually became head of the family. Iturbide's parents were part of the privileged landed class of Valladolid, owning agricultural land including the haciendas of Apeo and Guaracha as well as lands in nearby Quirio. Iturbide's father, Joaquín de Iturbide, came from a family of the Basque gentry who were confirmed in nobility by King Juan II of Aragon. One of his ancestors, Martín de Iturbide, was designated as Royal Merino in the High Valley of Baztan in the 1430s, and thereafter many in the family held political or administrative positions in the Basque Country from the 15th century. As a younger son, Joaquín was not in line to inherit the family lands, so he migrated to New Spain to make his fortune there. While the aristocratic and Spanish lineage of Agustín's father was not in doubt, his mother
https://en.wikipedia.org/wiki/Weinberg%E2%80%93Witten%20theorem
In theoretical physics, the Weinberg–Witten (WW) theorem, proved by Steven Weinberg and Edward Witten, states that massless particles (either composite or elementary) with spin j > 1/2 cannot carry a Lorentz-covariant current, while massless particles with spin j > 1 cannot carry a Lorentz-covariant stress-energy. The theorem is usually interpreted to mean that the graviton (j = 2) cannot be a composite particle in a relativistic quantum field theory.

Background

During the 1980s, preon theories, technicolor and the like were very popular, and some people speculated that gravity might be an emergent phenomenon or that gluons might be composite. Weinberg and Witten, on the other hand, developed a no-go theorem that excludes, under very general assumptions, such hypothetical composite and emergent theories. Decades later new theories of emergent gravity have been proposed, and some high-energy physicists still use this theorem to try to refute such theories. Because most of these emergent theories aren't Lorentz covariant, the WW theorem doesn't apply. The violation of Lorentz covariance, however, usually leads to other problems.

Theorem

Weinberg and Witten proved two separate results. According to them, the first is due to Sidney Coleman, who did not publish it:

A 3 + 1D QFT (quantum field theory) with a conserved 4-vector current (see four-current) which is Poincaré covariant (and gauge invariant if there happens to be any gauge symmetry which hasn't been gauge-fixed) does not admit massless particles with helicity |h| > 1/2 that also have nonzero charges associated with the conserved current in question.

A 3 + 1D QFT with a non-zero conserved stress–energy tensor which is Poincaré covariant (and gauge invariant if there happens to be any gauge symmetry which hasn't been gauge-fixed) does not admit massless particles with helicity |h| > 1.

A sketch of the proof

The conserved charge $Q$ is given by $Q = \int \mathrm{d}^3x \, J^0(x)$. We shall consider the matrix elements of the charge and o
https://en.wikipedia.org/wiki/J%C3%B3nsson%20cardinal
In set theory, a Jónsson cardinal (named after Bjarni Jónsson) is a certain kind of large cardinal number. An uncountable cardinal number κ is said to be Jónsson if for every function $f : [\kappa]^{<\omega} \to \kappa$ there is a set $H$ of order type $\kappa$ such that for each $n$, $f$ restricted to $n$-element subsets of $H$ omits at least one value in $\kappa$. Every Rowbottom cardinal is Jónsson. By a theorem of Eugene M. Kleinberg, the theories ZFC + "there is a Rowbottom cardinal" and ZFC + "there is a Jónsson cardinal" are equiconsistent. William Mitchell proved, with the help of the Dodd–Jensen core model, that the consistency of the existence of a Jónsson cardinal implies the consistency of the existence of a Ramsey cardinal, so that the existence of Jónsson cardinals and the existence of Ramsey cardinals are equiconsistent. In general, Jónsson cardinals need not be large cardinals in the usual sense: they can be singular. But the existence of a singular Jónsson cardinal is equiconsistent with the existence of a measurable cardinal. Using the axiom of choice, many small cardinals (the $\aleph_n$, for instance) can be proved not to be Jónsson. Results like this need the axiom of choice, however: the axiom of determinacy does imply that for every positive natural number n, the cardinal $\aleph_n$ is Jónsson. A Jónsson algebra is an algebra with no proper subalgebras of the same cardinality. (They are unrelated to Jónsson–Tarski algebras.) Here an algebra means a model for a language with a countable number of function symbols, in other words a set with a countable number of functions from finite products of the set to itself. A cardinal is a Jónsson cardinal if and only if there are no Jónsson algebras of that cardinality. The existence of Jónsson functions shows that if algebras are allowed to have infinitary operations, then there are no analogues of Jónsson cardinals.
https://en.wikipedia.org/wiki/Glicksberg%27s%20theorem
In the study of zero-sum games, Glicksberg's theorem (also Glicksberg's existence theorem) is a result that shows certain games have a minimax value. If $A$ and $B$ are Hausdorff compact spaces, and $K$ is an upper semicontinuous or lower semicontinuous function on $A \times B$, then

$$\sup_{f} \inf_{g} \iint K(a, b) \, \mathrm{d}f(a) \, \mathrm{d}g(b) = \inf_{g} \sup_{f} \iint K(a, b) \, \mathrm{d}f(a) \, \mathrm{d}g(b),$$

where $f$ and $g$ run over Borel probability measures on $A$ and $B$. The theorem is useful if $f$ and $g$ are interpreted as mixed strategies of two players in the context of a continuous game. If the payoff function $K$ is upper semicontinuous, then the game has a value. The continuity condition cannot be dropped: see the example of a game with no value.
https://en.wikipedia.org/wiki/Hara%20hachi%20bun%20me
Hara hachi bun me (also romanized as hara hachi bu) is a Confucian teaching that instructs people to eat until they are 80 percent full. The Japanese phrase translates to "Eat until you are eight parts (out of ten) full", or "belly 80 percent full".

Okinawans

As of the early 21st century, Okinawans in Japan continue practicing hara hachi bun me. They consume about 1,800 to 1,900 kilocalories per day. Their elders' typical body mass index (BMI) is about 18 to 22, compared to a typical BMI of 26 or 27 for adults over 60 in the United States. Okinawa has the world's highest proportion of centenarians, at approximately 50 per 100,000 people. Biochemist Clive McCay, a professor at Cornell University in the 1930s, reported that significant calorie restriction prolonged life in laboratory animals. Authors Bradley and Craig Willcox and Makoto Suzuki believe that hara hachi bun me may act as a form of calorie restriction, thus extending practitioners' life expectancy. They believe the practice assists in keeping the average Okinawan's BMI low, which is thought to be due to the delay in the stomach stretch receptors that help signal satiety. The result of not practicing hara hachi bun me is a constant stretching of the stomach, which in turn increases the amount of food needed to feel full.

In other cultures

The approach to eating of hara hachi bun me is also found in other cultures.

China

The teaching is Confucian, a belief system dating back to 5th-century BCE China. A similar saying to the Confucian one is found in Traditional Chinese Medicine: "only eat 70 percent full, and wear 30 percent less."

India

The principle also appears in Ayurvedic medicine, dating back to the 4th century BCE, where "you should fill one third of the stomach with liquid, another third with food, and leave the rest empty."

Influence

Zen

In the 1965 book The Three Pillars of Zen, the author quotes Hakuun Yasutani in his lecture for beginners as telling his students about the book Zazen Yōjinki (Precautions to Observe in Zazen), written circa 1300, which advises practitioners to eat abo
https://en.wikipedia.org/wiki/List%20of%20canneries
This is a list of canneries. A cannery is involved in the processes of canning, a method of preserving food in which the food contents are processed and sealed in an airtight container.

Canneries

United States

Bush Brothers Cannery – Chestnut Hill, Tennessee
Calpak Plant No. 11 – located in Sacramento, California, it was constructed as a fruit cannery, and is used by Blue Diamond Almonds
Edgett-Burnham Canning Company – former cannery in Camden, New York
Empson Cannery – Longmont, Colorado, NRHP-listed
Hovden Cannery – Monterey, California
Kake Cannery – Kake, Alaska, listed on the National Register of Historic Places (NRHP)
Kirkland Cannery Building – former cannery in Kirkland, Washington
Kukak Bay Cannery – former cannery in Alaska
Libby, McNeill and Libby Cannery – former cannery in Sacramento, California, NRHP-listed
Libby, McNeill and Libby Building – former cannery and processing plant in Blue Island, Illinois
Marshall J. Kinney Cannery – former cannery in Astoria, Oregon
Samuel Elmore Cannery – was a U.S. National Historic Landmark in Astoria, Oregon that was designated in 1966 but was delisted in 1993. It was the home of "Bumble Bee" brand tuna.
Wards Cove Packing Company – former cannery in Ketchikan, Alaska
W.R. Roach Cannery – former cannery in Croswell, Michigan, NRHP-listed
Kukak Cannery Archeological Historic District – Kukak Bay, Alaska, NRHP-listed
Thomas and Company Cannery – Gaithersburg, Maryland, NRHP-listed
Thompson Fish House, Turtle Cannery and Kraals – Key West, Florida, NRHP-listed
Libby, McNeill and Libby Cannery – Gridley, California, USA (peaches/pumpkins)

British Columbia

By type

List of salmon canneries and communities

See also

Canned fish
Canned water
Food industry
Salmon cannery
https://en.wikipedia.org/wiki/Craterellus%20fallax
Craterellus fallax is a species of "black trumpet" fungus that occurs in eastern North America, where it replaces the European taxon Craterellus cornucopioides. C. fallax can also be separated by its yellow-orange spore print, whereas C. cornucopioides has a white spore print. It has often been considered a synonym of C. cornucopioides. C. fallax is mycorrhizal, forming associations with Tsuga and Quercus species, among others. C. fallax is a choice edible fungus, although it is not substantial.
https://en.wikipedia.org/wiki/Nibricoccus
Nibricoccus is a Gram-negative, strictly aerobic and non-motile genus of bacteria from the family Opitutaceae, with one known species (Nibricoccus aquaticus). Nibricoccus aquaticus has been isolated from hyporheic freshwater in Korea.
https://en.wikipedia.org/wiki/LPXTGase
LPXTGase refers to an endopeptidase enzyme from Streptococci and Staphylococci with the capacity to cleave the carboxy-terminal LPXTG anchor motif of surface proteins, similar to sortase. However, LPXTGase differs significantly from sortase in several ways: a) it is glycosylated, b) it contains unconventional amino acids, and c) it contains D-amino acids. The latter two characteristics indicate that ribosomes are not involved in the synthesis of LPXTGase. Data suggest that the enzymes responsible for cell wall assembly also assemble LPXTGase. This is the first enzyme of its kind ever reported; it was discovered in streptococcal and staphylococcal lysates due to its high cleavage activity for the LPXTG sequence (200 times more active than sortase). While there are only three publications describing this enzyme, all from the same source, they are in high-quality peer-reviewed journals. One reason for this limitation is that there is no single gene that codes for LPXTGase (it does not fit the "one gene, one enzyme" theory), making its manipulation purely biochemical. Thus, reproduction of the work by other groups is a slow, limiting process.
https://en.wikipedia.org/wiki/End%20mark
End mark may refer to:

Any terminal punctuation at the end of a sentence, especially the full stop (period)
A symbol, such as a bullet, tombstone, or miniature logo, used primarily in magazine writing to indicate the end of an article (especially one that has been interrupted by advertising or split across different sections of the publication for layout purposes)
https://en.wikipedia.org/wiki/Rosenbrock%20methods
Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock.

Numerical solution of differential equations

Rosenbrock methods for stiff differential equations are a family of single-step methods for solving ordinary differential equations. They are related to the implicit Runge–Kutta methods and are also known as Kaps–Rentrop methods.

Search method

Rosenbrock search is a numerical optimization algorithm applicable to optimization problems in which the objective function is inexpensive to compute and the derivative either does not exist or cannot be computed efficiently. The idea of Rosenbrock search is also used to initialize some root-finding routines, such as fzero (based on Brent's method) in Matlab. Rosenbrock search is a form of derivative-free search, but it tends to perform better than simple coordinate search on functions with sharp ridges. The method often identifies such a ridge, which, in many applications, leads toward a solution.

See also

Rosenbrock function
Adaptive coordinate descent
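Below is a minimal sketch of the rotating-coordinate search idea, assuming the acceptance rules commonly quoted for Rosenbrock's method (expand a successful step by alpha = 3, shrink and reverse a failed one by beta = -0.5, then re-orthogonalize the direction set by Gram–Schmidt around the accumulated progress). It illustrates the algorithm's shape, not a tuned library implementation.

```python
import numpy as np

def rosenbrock_search(f, x0, step=0.1, alpha=3.0, beta=-0.5,
                      outer_iters=500, tol=1e-12):
    x = np.asarray(x0, dtype=float)
    n = x.size
    dirs = np.eye(n)                 # orthonormal search directions (rows)
    steps = np.full(n, step)         # per-direction trial step lengths
    fx = f(x)
    for _ in range(outer_iters):
        progress = np.zeros(n)       # net signed movement along each direction
        for i in range(n):
            trial = x + steps[i] * dirs[i]
            ft = f(trial)
            if ft < fx:              # success: accept, then expand the step
                x, fx = trial, ft
                progress[i] += steps[i]
                steps[i] *= alpha
            else:                    # failure: shrink the step and reverse it
                steps[i] *= beta
        if np.abs(steps).max() < tol:
            break
        if np.linalg.norm(progress) > tol:
            # Rotate the frame so the first direction follows the overall
            # progress (the ridge-following step): Gram-Schmidt applied to
            # the vectors a_i = sum_{j >= i} progress_j * d_j.
            a = [(progress[i:, None] * dirs[i:]).sum(axis=0) for i in range(n)]
            new_dirs = []
            for v in a:
                for u in new_dirs:
                    v = v - np.dot(v, u) * u
                norm = np.linalg.norm(v)
                if norm > tol:
                    new_dirs.append(v / norm)
            if len(new_dirs) == n:
                dirs = np.array(new_dirs)
                steps = np.full(n, step)
    return x, fx

# Fittingly, try it on Rosenbrock's own banana-shaped valley:
banana = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2
x_best, f_best = rosenbrock_search(banana, [-1.2, 1.0])
print(x_best, f_best)   # approaches the minimum at (1, 1)
```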
https://en.wikipedia.org/wiki/Rock%20cycle
The rock cycle is a basic concept in geology that describes transitions through geologic time among the three main rock types: sedimentary, metamorphic, and igneous. Each rock type is altered when it is forced out of its equilibrium conditions. For example, an igneous rock such as basalt may break down and dissolve when exposed to the atmosphere, or melt as it is subducted under a continent. Due to the driving forces of the rock cycle, plate tectonics and the water cycle, rocks do not remain in equilibrium and change as they encounter new environments. The rock cycle explains how the three rock types are related to each other, and how processes change one type into another over time. This cyclical aspect makes rock change a geologic cycle and, on planets containing life, a biogeochemical cycle.

Transition to igneous rock

When rocks are pushed deep under the Earth's surface, they may melt into magma. If the conditions no longer exist for the magma to stay in its liquid state, it cools and solidifies into an igneous rock. A rock that cools within the Earth is called intrusive or plutonic and cools very slowly, producing a coarse-grained texture such as in the rock granite. As a result of volcanic activity, magma (which is called lava when it reaches Earth's surface) may cool very rapidly at the Earth's surface, exposed to the atmosphere; rocks formed this way are called extrusive or volcanic rocks. These rocks are fine-grained and sometimes cool so rapidly that no crystals can form, resulting in a natural glass such as obsidian; the most common fine-grained volcanic rock, however, is basalt. Any of the three main types of rocks (igneous, sedimentary, and metamorphic rocks) can melt into magma and cool into igneous rocks.

Secondary changes

Epigenetic change (secondary processes occurring at low temperatures and low pressures) may be arranged under a number of headings, each of which is typical of a group of rocks or rock-forming minerals, though usually more than one of these alt
https://en.wikipedia.org/wiki/Wick%27s%20theorem
Wick's theorem is a method of reducing high-order derivatives to a combinatorics problem. It is named after Italian physicist Gian-Carlo Wick. It is used extensively in quantum field theory to reduce arbitrary products of creation and annihilation operators to sums of products of pairs of these operators. This allows for the use of Green's function methods, and consequently the use of Feynman diagrams in the field under study. A more general idea in probability theory is Isserlis' theorem.

In perturbative quantum field theory, Wick's theorem is used to quickly rewrite each time-ordered summand in the Dyson series as a sum of normal-ordered terms. In the limit of asymptotically free ingoing and outgoing states, these terms correspond to Feynman diagrams.

Definition of contraction

For two operators $\hat{A}$ and $\hat{B}$ we define their contraction to be

$$\hat{A}^\bullet\, \hat{B}^\bullet \equiv \hat{A}\,\hat{B} - \mathopen{:}\hat{A}\,\hat{B}\mathclose{:},$$

where $\mathopen{:}\hat{O}\mathclose{:}$ denotes the normal order of an operator $\hat{O}$. Alternatively, contractions can be denoted by a line joining $\hat{A}$ and $\hat{B}$.

We shall look in detail at four special cases where $\hat{A}$ and $\hat{B}$ are equal to creation and annihilation operators. For $N$ particles we'll denote the creation operators by $\hat{a}_i^\dagger$ and the annihilation operators by $\hat{a}_i$ ($i = 1, \ldots, N$). They satisfy the commutation relations for bosonic operators, $[\hat{a}_i, \hat{a}_j^\dagger] = \delta_{ij}$, or the anti-commutation relations for fermionic operators, $\{\hat{a}_i, \hat{a}_j^\dagger\} = \delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker delta.

We then have

$$\hat{a}_i^\bullet\, \hat{a}_j^\bullet = \hat{a}_i^{\dagger\bullet}\, \hat{a}_j^{\dagger\bullet} = \hat{a}_i^{\dagger\bullet}\, \hat{a}_j^\bullet = 0, \qquad \hat{a}_i^\bullet\, \hat{a}_j^{\dagger\bullet} = \delta_{ij},$$

where $i, j = 1, \ldots, N$. These relationships hold true for bosonic operators or fermionic operators because of the way normal ordering is defined.

Examples

We can use contractions and normal ordering to express any product of creation and annihilation operators as a sum of normal-ordered terms. This is the basis of Wick's theorem. Before stating the theorem fully we shall look at some examples. Suppose $\hat{a}_i$ and $\hat{a}_i^\dagger$ are bosonic operators satisfying the commutation relations:

$$[\hat{a}_i, \hat{a}_j] = 0, \qquad [\hat{a}_i^\dagger, \hat{a}_j^\dagger] = 0, \qquad [\hat{a}_i, \hat{a}_j^\dagger] = \delta_{ij},$$

where $[\hat{A}, \hat{B}] \equiv \hat{A}\hat{B} - \hat{B}\hat{A}$ denotes the commutator and $\delta_{ij}$ is the Kronecker delta. We can use these relations, and the above definition of contraction, to express products of $\hat{a}_i$ and $\hat{a}_i^\dagger$ in other ways.

Example 1
https://en.wikipedia.org/wiki/No%20Fem%20el%20CIM
The "No fem el CIM" (NFC) movement was founded in 2003 in the Penedès region of Catalonia. It was then that locals learned that the Catalan government wanted to construct a dry port between the towns of Banyeres del Penedès, l'Arboç i Sant Jaume dels Domenys. However, it was not until August 2005 that the movement began to take-off, gaining national recognition, spurred by the release of the government's 544 acre (220 hectare) plan for the inland port. Since then, members of the movement have held numerous protests as well as meetings with local and national government entities in an attempt to prevent dry port construction. Support At the end of 2008, the groups that had come out in support of the NFC movement included: 1 Mayor Board of Baix Penedès 2 County Boards and Commissions: Baix Penedès, Alt Penedès 14 City Councils: l'Arboç, Banyeres del Penedès, Bellvei, la Bisbal del Penedès, la Granada, Llorenç del Penedès, Sant Jaume dels Domenys, Torrelavit, Olesa de Bonesvalls, Sant Pere de Riudebitlles, Cunit, El Vendrell, Vilafranca del Penedès i Vilanova i la Geltrú. 5 Business Associations 5 Labor Unions 7 Environmental Groups 9 Foundations and Associations 54 Cultural Groups various national and local political groups 5.385 Catalan residents 1.120 allegation to the PTPCT (planification of Tarragona Area) Structure The "No fem el CIM" movement is a grassroots movement that operates by means of five autonomous committees: The NFC Communication Committee The NFC Publicity Committee The NFC Fundraising Committee The NFC Political Action Committee The NFC Legal Action Committee External links NoFemelCIM Web NoFemelCIM Blog NoFemelCIM Youtube NoFemelCIM Facebook NoFemelCIM Twitter NoFemelCIM Wikipedia NoFemelCIM Flickr NoFemelCIM Vimeo NoFemelCIM Rss Organisations based in Catalonia Baix Penedès Alt Penedès Ecology organizations
https://en.wikipedia.org/wiki/ERV-Fc
ERV-Fc was an endogenous retrovirus (ERV), or a genus or family of them, related to the modern murine leukemia virus. It was active and infectious among many species of mammals in several orders, jumping species more than 20 times between about 33 million and about 15 million years ago, in the Oligocene and early Miocene, and spread to all large areas of the world except Australia and Antarctica. After about 15 million years ago it became extinct as an active infectious virus, perhaps because its hosts developed inherited resistance to it. Inactive, damaged copies, partial copies and fragments of its DNA survive, however, as inclusions in the hereditary nuclear DNA of many species of mammals, some in different orders, including humans and other great apes. These remnants have allowed the interspecies jump routes of the spreading virus to be tracked, and timed by the molecular clock in their extant descendants, though with gaps where trails were lost because the virus passed through infected animals that left no extant descendants, or because the integrated sequence was lost in some lineages.
https://en.wikipedia.org/wiki/Bessel%20ellipsoid
The Bessel ellipsoid (or Bessel 1841) is an important reference ellipsoid of geodesy. It is currently used by several countries for their national geodetic surveys, but will be replaced in the coming decades by modern ellipsoids of satellite geodesy. The Bessel ellipsoid was derived in 1841 by Friedrich Wilhelm Bessel, based on several arc measurements and other data of continental geodetic networks of Europe, Russia and the British Survey of India. It is based on 10 meridian arcs and 38 precise measurements of astronomic latitude and longitude (see also astro-geodesy). The dimensions of the Earth ellipsoid axes were defined by logarithms in keeping with former calculation methods.

The Bessel and GPS ellipsoids

The Bessel ellipsoid fits especially well to the geoid curvature of Europe and Eurasia. Therefore, it is optimal for national survey networks in these regions, although its axes are about 700 m shorter than those of the mean Earth ellipsoid derived by satellites. Below are the two axes a and b and the flattening f. For comparison, the data of the modern World Geodetic System WGS84 are shown, which is mainly used for modern surveys and the GPS system.

Bessel ellipsoid 1841 (defined by log a and f):
a = 6 377 397.155 m
b = 6 356 078.963 m
1/f = 299.1528128

Earth ellipsoid WGS84 (defined directly by a and f):
a = 6 378 137.0 m
b = 6 356 752.3142 m
1/f = 298.257223563

Usage

The ellipsoid data published by Bessel (1841) were then the best and most modern data mapping the Earth's figure. They were used by almost all national surveys. Some surveys in Asia switched to the Clarke ellipsoid of 1880. After the arrival of geophysical reduction techniques, many projects used other ellipsoids, such as the Hayford ellipsoid of 1910, which was adopted in 1924 by the International Association of Geodesy (IAG) as the International Ellipsoid 1924. All of them are influenced by geophysical effects like vertical deflection, mean continental density, rock density and the distribution of network data. Every reference ellipsoid deviates from the worldw
https://en.wikipedia.org/wiki/Hamilton%27s%20optico-mechanical%20analogy
Hamilton's optico-mechanical analogy is a conceptual parallel between trajectories in classical mechanics and wavefronts in optics, introduced by William Rowan Hamilton around 1831. It may be viewed as linking Huygens' principle of optics with Maupertuis' principle of mechanics. While Hamilton discovered the analogy in 1831, it was not applied practically until Hans Busch used it to explain electron beam focusing in 1925. According to Cornelius Lanczos, the analogy has been important in the development of ideas in quantum physics. Erwin Schrödinger cites the analogy in the very first sentence of his paper introducing his wave mechanics, and returns to it later in the body of the paper. Quantitative and formal analyses based on the analogy use the Hamilton–Jacobi equation; conversely, the analogy provides an alternative and more accessible path for introducing the Hamilton–Jacobi approach to mechanics. The orthogonality of mechanical trajectories (the analogue of the rays of geometrical optics) to surfaces of constant action (the analogue of the optical wavefronts of a full wave equation), which results from the variational principle, leads to the corresponding differential equations.

Hamilton's analogy

The propagation of light can be considered in terms of rays and wavefronts in ordinary physical three-dimensional space. The wavefronts are two-dimensional curved surfaces; the rays are one-dimensional curved lines. Hamilton's analogy amounts to two interpretations of a figure like the one shown here. In the optical interpretation, the green wavefronts are lines of constant phase and the orthogonal red lines are the rays of geometrical optics. In the mechanical interpretation, the green lines denote constant values of action derived by applying Hamilton's principle to mechanical motion, and the red lines are the orthogonal object trajectories. The orthogonality of the wavefronts to rays (or of equal-action surfaces to trajectories) means we can compute one set from the other set. This explains how Kirchhoff's diff
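The parallel can be stated compactly in standard textbook form (the notation below is conventional, not quoted from Hamilton). The eikonal equation of geometrical optics for the optical path length $S$ in a medium of refractive index $n(\mathbf{r})$,

$$\left(\nabla S\right)^2 = n^2(\mathbf{r}),$$

has the same form as the time-independent Hamilton–Jacobi equation for Hamilton's characteristic function $W$ of a particle of mass $m$ and energy $E$ in a potential $V(\mathbf{r})$,

$$\left(\nabla W\right)^2 = 2m\left(E - V(\mathbf{r})\right).$$

Surfaces of constant $S$ are the wavefronts and surfaces of constant $W$ are the equal-action surfaces; the rays and the trajectories are their respective orthogonal curves, so the mechanical problem can be read as optics in an effective index proportional to $\sqrt{2m(E - V)}$.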
https://en.wikipedia.org/wiki/Food%20and%20Bioprocess%20Technology
Food and Bioprocess Technology is a peer-reviewed scientific journal published by Springer Science+Business Media. The editor-in-chief is Da-Wen Sun (University College Dublin).

Abstracting and indexing

Food and Bioprocess Technology is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2011 impact factor of 3.703, ranking it 4th out of 138 journals in the category "Food Science & Technology".
https://en.wikipedia.org/wiki/Project%20Isabela
Project Isabela was an environmental restoration project in the Galápagos Islands of Ecuador that took place between 1997 and 2006, initiated by the Charles Darwin Foundation and the Galápagos National Park. Species introduced to the islands in the 16th and 17th centuries, mainly goats along with some donkeys and pigs, brought ecological devastation to the islands and posed a threat to the Galápagos tortoise, which by the 1990s was near extinction. By 1997, plans had been officially implemented to eradicate these introduced species on northern Isabela, Santiago, and Pinta islands. Skilled park rangers hunted from helicopters, and sterilized Judas goats fitted with radio collars were used to track down the feral goats. The initiative was brought into action in 1999, and by 2006 some 150,000 goats alone had been eradicated. As of 2011, the project was the world's largest ecological island restoration effort.

See also

List of animals in the Galápagos Islands
https://en.wikipedia.org/wiki/Integer%20lattice
In mathematics, the $n$-dimensional integer lattice (or cubic lattice), denoted $\mathbb{Z}^n$, is the lattice in the Euclidean space $\mathbb{R}^n$ whose lattice points are $n$-tuples of integers. The two-dimensional integer lattice is also called the square lattice, or grid lattice. $\mathbb{Z}^n$ is the simplest example of a root lattice. The integer lattice is an odd unimodular lattice.

Automorphism group

The automorphism group (or group of congruences) of the integer lattice consists of all permutations and sign changes of the coordinates, and is of order $2^n n!$. As a matrix group it is given by the set of all $n \times n$ signed permutation matrices. This group is isomorphic to the semidirect product

$$(\mathbb{Z}_2)^n \rtimes S_n,$$

where the symmetric group $S_n$ acts on $(\mathbb{Z}_2)^n$ by permutation (this is a classic example of a wreath product). For the square lattice, this is the group of the square, or the dihedral group of order 8; for the three-dimensional cubic lattice, we get the group of the cube, or octahedral group, of order 48.

Diophantine geometry

In the study of Diophantine geometry, the square lattice of points with integer coordinates is often referred to as the Diophantine plane. In mathematical terms, the Diophantine plane is the Cartesian product $\mathbb{Z} \times \mathbb{Z}$ of the ring of all integers $\mathbb{Z}$ with itself. The study of Diophantine figures focuses on the selection of nodes in the Diophantine plane such that all pairwise distances are integers.

Coarse geometry

In coarse geometry, the integer lattice is coarsely equivalent to Euclidean space.

Pick's theorem

Pick's theorem, first described by Georg Alexander Pick in 1899, provides a formula for the area of a simple polygon with all vertices lying on the 2-dimensional integer lattice, in terms of the number of integer points within it and on its boundary. Let $i$ be the number of integer points interior to the polygon, and let $b$ be the number of integer points on its boundary (including both vertices and points along the sides). Then the area $A$ of this polygon is:

$$A = i + \frac{b}{2} - 1.$$

The example shown has interior points and boundary po
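Pick's theorem is easy to check numerically. The sketch below (a toy illustration with a hard-coded rectangle) computes the polygon's area with the shoelace formula, counts boundary lattice points edge by edge with a gcd, counts interior points by brute force, and compares the two sides of $A = i + b/2 - 1$:

```python
from math import gcd

def shoelace_area(poly):
    n = len(poly)
    s = sum(poly[k][0] * poly[(k + 1) % n][1] - poly[(k + 1) % n][0] * poly[k][1]
            for k in range(n))
    return abs(s) / 2

def boundary_count(poly):
    # A lattice segment from p to q contains gcd(|dx|, |dy|) lattice points,
    # counting q but not p; summing over all edges counts each vertex once.
    n = len(poly)
    return sum(gcd(abs(poly[(k + 1) % n][0] - poly[k][0]),
                   abs(poly[(k + 1) % n][1] - poly[k][1])) for k in range(n))

def on_boundary(p, poly):
    n = len(poly)
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if ((x2 - x1) * (p[1] - y1) == (y2 - y1) * (p[0] - x1)
                and min(x1, x2) <= p[0] <= max(x1, x2)
                and min(y1, y2) <= p[1] <= max(y1, y2)):
            return True
    return False

def strictly_inside(p, poly):
    # Even-odd ray casting; assumes p is not on the boundary.
    n, inside, (x, y) = len(poly), False, p
    for k in range(n):
        (x1, y1), (x2, y2) = poly[k], poly[(k + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (x2 - x1) * (y - y1) / (y2 - y1):
            inside = not inside
    return inside

poly = [(0, 0), (4, 0), (4, 3), (0, 3)]          # a 4 x 3 lattice rectangle
xs, ys = zip(*poly)
i = sum(1 for x in range(min(xs), max(xs) + 1)
          for y in range(min(ys), max(ys) + 1)
          if not on_boundary((x, y), poly) and strictly_inside((x, y), poly))
b = boundary_count(poly)
print(shoelace_area(poly), i + b / 2 - 1)        # both print 12.0
```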
https://en.wikipedia.org/wiki/Horizon%20%28general%20relativity%29
A horizon is a boundary in spacetime satisfying prescribed conditions. There are several types of horizons that play a role in Albert Einstein's theory of general relativity:

Absolute horizon, a boundary in spacetime in general relativity inside of which events cannot affect an external observer
Event horizon, a boundary in spacetime beyond which events cannot affect the observer, thus referring to a black hole's boundary or the boundary of an expanding universe
Apparent horizon, a surface defined in general relativity
Cauchy horizon, a surface found in the study of Cauchy problems
Cosmological horizon, a limit of observability
Killing horizon, a null surface on which there is a Killing vector field
Particle horizon, the maximum distance from which particles can have travelled to an observer in the age of the universe
https://en.wikipedia.org/wiki/Normed%20vector%20lattice
In mathematics, specifically in order theory and functional analysis, a normed lattice is a topological vector lattice that is also a normed space whose unit ball is a solid set. Normed lattices are important in the theory of topological vector lattices. They are closely related to Banach vector lattices, which are normed vector lattices that are also Banach spaces.

Properties

Every normed lattice is a locally convex vector lattice. The strong dual of a normed lattice is a Banach lattice with respect to the dual norm and canonical order. If it is also a Banach space then its continuous dual space is equal to its order dual.

Examples

Every Banach lattice is a normed lattice.
https://en.wikipedia.org/wiki/Geophilus%20bluncki
Geophilus bluncki is a species of soil centipede in the family Geophilidae found in San Remo, Italy. It grows up to 23 millimeters in length; the males have about 61 leg pairs. The uniform pore fields and long antennae resemble those of Arctogeophilus glacialis, formerly Geophilus glacialis.
https://en.wikipedia.org/wiki/Dorsal%20metatarsal%20arteries
The arcuate artery of the foot gives off the second, third, and fourth dorsal metatarsal arteries, which run forward upon the corresponding Interossei dorsales; in the clefts between the toes, each divides into two dorsal digital branches for the adjoining toes. At the proximal parts of the interosseous spaces these vessels receive the posterior perforating branches from the plantar arch, and at the distal parts of the spaces they are joined by the anterior perforating branches, from the plantar metatarsal arteries. The fourth dorsal metatarsal artery gives off a branch which supplies the lateral side of the fifth toe. The first dorsal metatarsal artery runs forward on the first Interosseous dorsalis.
https://en.wikipedia.org/wiki/Matrix%20factorization%20of%20a%20polynomial
In mathematics, a matrix factorization of a polynomial is a technique for factoring irreducible polynomials with matrices. David Eisenbud proved that every multivariate real-valued polynomial p without linear terms can be written in the form AB = pI, where A and B are square matrices and I is the identity matrix. Given the polynomial p, the matrices A and B can be found by elementary methods.

Example: The polynomial $x^2 + y^2$ is irreducible over R[x,y], but can be written as

$$\begin{pmatrix} x & -y \\ y & x \end{pmatrix} \begin{pmatrix} x & y \\ -y & x \end{pmatrix} = (x^2 + y^2) \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
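If sympy is available, the example factorization can be checked symbolically in a few lines (a quick verification sketch, not part of Eisenbud's construction):

```python
import sympy as sp

x, y = sp.symbols("x y")
A = sp.Matrix([[x, -y], [y, x]])
B = sp.Matrix([[x, y], [-y, x]])

# A*B expands to (x**2 + y**2) times the identity, i.e. AB = pI for p = x^2 + y^2.
print(sp.expand(A * B))
```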
https://en.wikipedia.org/wiki/Khlea
Khlea or khlii is a preserved meat, usually made with beef or lamb, originating from Morocco. Khlea is made by cutting meat into strips and letting it dry in the sun after marinating it in garlic, coriander and cumin. The meat is cooked in a mixture of water, oil and animal fat. Upon cooling, the meat is submerged in more animal fat and left to dry. Khlea can be preserved for up to two years at room temperature.
https://en.wikipedia.org/wiki/Transition%20constraint
A transition constraint is a way of enforcing that the data does not enter an impossible state because of a previous state. For example, it should not be possible for a person to change from being "married" to being "single, never married". The only valid states after "married" might be "divorced", "widowed", or "deceased". This is the database-centric interpretation of the term. In formal models in computer security, a transition constraint is a property that governs every valid transition from a state of the model to a successor state. It can be viewed as complementary to the state criteria that pertain to states per se but have no bearing on transitions between successive states.
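A transition constraint is straightforward to enforce in application code as an explicit table of legal next states. The sketch below uses the marital-status example from the text; the state names and the transition table are illustrative, not drawn from any particular database system:

```python
ALLOWED_TRANSITIONS = {
    "single, never married": {"married", "deceased"},
    "married":               {"divorced", "widowed", "deceased"},
    "divorced":              {"married", "deceased"},
    "widowed":               {"married", "deceased"},
    "deceased":              set(),        # terminal state
}

def change_status(current: str, new: str) -> str:
    """Return the new state, or raise if the transition is not permitted."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current!r} -> {new!r}")
    return new

status = "single, never married"
status = change_status(status, "married")          # fine
change_status(status, "single, never married")     # raises ValueError
```

In a relational database the same rule is typically enforced centrally, for example in a trigger that compares the old and new values of the status column and rejects any pair not present in such a table.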
https://en.wikipedia.org/wiki/Cosocle
In mathematics, the term cosocle (socle meaning pedestal in French) has several related meanings. In group theory, a cosocle of a group G, denoted by Cosoc(G), is the intersection of all maximal normal subgroups of G. If G is a quasisimple group, then Cosoc(G) = Z(G). In the context of Lie algebras, a cosocle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue +1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.) In the context of module theory, the cosocle of a module over a ring R is defined to be the maximal semisimple quotient of the module. See also Socle Radical of a module
https://en.wikipedia.org/wiki/Factorial%20code
Most real-world data sets consist of data vectors whose individual components are not statistically independent. In other words, knowing the value of one element provides information about the values of other elements in the data vector. When this occurs, it can be desirable to create a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.

Later supervised learning usually works much better when the raw input data is first translated into such a factorial code. For example, suppose the final goal is to classify images with highly redundant pixels. A naive Bayes classifier will assume the pixels are statistically independent random variables and therefore fail to produce good results. If the data are first encoded in a factorial way, however, then the naive Bayes classifier will achieve its optimal performance (compare Schmidhuber et al. 1996).

To create factorial codes, Horace Barlow and co-workers suggested minimizing the sum of the bit entropies of the code components of binary codes (1989). Jürgen Schmidhuber (1992) re-formulated the problem in terms of predictors and binary feature detectors, each receiving the raw data as an input. For each detector there is a predictor that sees the other detectors and learns to predict the output of its own detector in response to the various input vectors or images. But each detector uses a machine learning algorithm to become as unpredictable as possible. The global optimum of this objective function corresponds to a factorial code represented in a distributed fashion across the outputs of the feature detectors.

Painsky, Rosset and Feder (2016, 2017) further studied this problem in the context of independent component analysis over finite alphabet sizes. Through a series of theorems they show that the factorial coding problem can be accurately solved
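A tiny numeric illustration of what a factorial code buys (a toy example, not Schmidhuber's predictability-minimization algorithm itself): two observed binary components are made dependent by mixing independent source bits, and an invertible re-encoding recovers components with near-zero mutual information while losing no information.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent source bits: z0 is fair, z1 is biased.
z0 = rng.integers(0, 2, size=n)
z1 = (rng.random(n) < 0.2).astype(int)

# Observed data: an invertible mixing makes the components dependent.
x0, x1 = z0, z0 ^ z1            # P(x1 = 1 | x0) depends on x0

# Factorial code: invert the mixing, losslessly, back to independent bits.
c0, c1 = x0, x0 ^ x1            # equals (z0, z1)

def mutual_information(a, b):
    """Empirical mutual information (bits) between two binary arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (np.mean(a == va) * np.mean(b == vb)))
    return mi

print(mutual_information(x0, x1))   # about 0.28 bits: raw components dependent
print(mutual_information(c0, c1))   # about 0.00 bits: code components independent
```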
https://en.wikipedia.org/wiki/Audio%20mixing%20%28recorded%20music%29
In sound recording and reproduction, audio mixing is the process of optimizing and combining multitrack recordings into a final mono, stereo or surround sound product. In the process of combining the separate tracks, their relative levels are adjusted and balanced, and various processes such as equalization and compression are commonly applied to individual tracks, groups of tracks, and the overall mix. In stereo and surround sound mixing, the placement of the tracks within the stereo (or surround) field is adjusted and balanced. Audio mixing techniques and approaches vary widely and have a significant influence on the final product.

Audio mixing techniques largely depend on music genres and the quality of sound recordings involved. The process is generally carried out by a mixing engineer, though sometimes the record producer or recording artist may assist. After mixing, a mastering engineer prepares the final product for production. Audio mixing may be performed on a mixing console or in a digital audio workstation.

History

In the late 19th century, Thomas Edison and Emile Berliner developed the first recording machines. The recording and reproduction process itself was completely mechanical with little or no electrical parts. Edison's phonograph cylinder system utilized a small horn terminated in a stretched, flexible diaphragm attached to a stylus which cut a groove of varying depth into the malleable tin foil of the cylinder. Emile Berliner's gramophone system recorded music by inscribing spiraling lateral cuts onto a flat disc.

Electronic recording became more widely used during the 1920s. It was based on the principles of electromagnetic transduction. The possibility for a microphone to be connected remotely to a recording machine meant that microphones could be positioned in more suitable places. The process was improved when the outputs of the microphones could be mixed before being fed to the disc cutter, allowing greater flexibility in the balance. Bef
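At its core, the level-and-placement step described above is a weighted sum. The sketch below (a bare-bones illustration with made-up signals and settings, not a model of any console or DAW) applies a per-track gain and a constant-power pan, then sums everything onto a stereo bus:

```python
import numpy as np

def mix_to_stereo(tracks, gains, pans):
    """tracks: mono float arrays; gains in [0, 1]; pans in [-1 (L), +1 (R)]."""
    n = max(len(t) for t in tracks)
    bus = np.zeros((n, 2))
    for track, gain, pan in zip(tracks, gains, pans):
        theta = (pan + 1) * np.pi / 4          # constant-power pan law
        left, right = np.cos(theta), np.sin(theta)
        bus[:len(track), 0] += gain * left * track
        bus[:len(track), 1] += gain * right * track
    peak = np.abs(bus).max()
    return bus / peak if peak > 1 else bus     # simple safeguard against clipping

sr = 44_100
t = np.linspace(0, 1, sr, endpoint=False)
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-8 * t)     # decaying low sine
lead = 0.3 * np.sign(np.sin(2 * np.pi * 440 * t))      # quiet square wave
stereo = mix_to_stereo([kick, lead], gains=[0.9, 0.5], pans=[0.0, 0.6])
```

Equalization and compression would be further per-track processing stages inserted before this summing step.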
https://en.wikipedia.org/wiki/GPS%20Block%20III
GPS Block III (previously Block IIIA) consists of the first ten GPS III satellites, which will be used to keep the Navstar Global Positioning System operational. Lockheed Martin designed, developed and manufactured the GPS III Non-Flight Satellite Testbed (GNST) and all ten Block III satellites. The first satellite in the series was launched in December 2018.

History

The United States' Global Positioning System (GPS) reached Full Operational Capability on 17 July 1995, completing its original design goals. Advances in technology and new demands on the existing system led to the effort to modernize the GPS system. In 2000, the U.S. Congress authorized the effort, referred to as GPS III. The project involves new ground stations and new satellites, with additional navigation signals for both civilian and military users, and aims to improve the accuracy and availability for all users. Raytheon was awarded the Next Generation GPS Operational Control System (OCX) contract on 25 February 2010. The first satellite in the series was projected to launch in 2014, but significant delays pushed the launch to December 2018. The tenth and final GPS Block III launch is projected in FY2026.

Development

Block III satellites use Lockheed Martin's A2100M satellite bus structure. The propellant and pressurant tanks are manufactured by Orbital ATK from lightweight, high-strength composite materials. Each satellite will carry eight deployable JIB antennas designed and manufactured by Northrop Grumman Astro Aerospace. Already delayed significantly beyond the first satellite's planned 2014 launch, on 27 April 2016, SpaceX, in Hawthorne, California, was awarded a US$82.7 million firm-fixed-price contract for launch services to deliver a GPS III satellite to its intended orbit. The contract included launch vehicle production, mission integration, and launch operations for a GPS III mission, to be performed in Hawthorne, California; Cape Canaveral Air Force Station, Florida; and McGreg
https://en.wikipedia.org/wiki/Restructured%20steak
Restructured steak is a catch-all term to describe a class of imitation beef steaks made from smaller pieces of beef fused together by a binding agent. Its development began in the 1970s. Restructured steak is sometimes made using cheaper cuts of beef such as the hind quarter or fore quarter. Allowed food-grade binding agents include:
Sodium chloride (table salt) and phosphate salts. Salt can prevent microbiological growth and make myosin-type proteins more soluble. The allowed amount of phosphate in end products is 0.5% in the United States. It increases the emulsification of fat.
Animal blood plasma.
Alginate: sodium alginate forms an adhesive gel in the presence of Ca2+ ions.
Transglutaminase: an enzyme that catalyzes the formation of cross-links between proteins.
Problems Oxidation and food poisoning are the two most serious issues generally associated with restructured steak. To reduce the risk of food poisoning, restructured steaks should always be cooked until well-done.
https://en.wikipedia.org/wiki/Perturbation%20theory
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter $\varepsilon$. The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of $\varepsilon$ usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines. Description Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution $A$ a series in the small parameter (here called $\varepsilon$), like the following:

$A = A_0 + \varepsilon^1 A_1 + \varepsilon^2 A_2 + \varepsilon^3 A_3 + \cdots$

In this example, $A_0$ would be the known solution to the exactly solvable initial problem, and the terms $A_1, A_2, A_3, \ldots$ represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small $\varepsilon$ these higher-order terms in the series generally (but not always) become successively small
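To make the truncation step concrete, here is a minimal worked example (my own illustration, not drawn from the article): perturb the solvable equation $x^2 = 1$ by a small term $\varepsilon x$ and match powers of $\varepsilon$.

\[ x^2 + \varepsilon x - 1 = 0, \qquad x(\varepsilon) = x_0 + \varepsilon x_1 + \varepsilon^2 x_2 + O(\varepsilon^3) \]
\[ \varepsilon^0:\ x_0^2 = 1 \Rightarrow x_0 = 1; \qquad \varepsilon^1:\ 2x_0 x_1 + x_0 = 0 \Rightarrow x_1 = -\tfrac{1}{2}; \qquad \varepsilon^2:\ 2x_0 x_2 + x_1^2 + x_1 = 0 \Rightarrow x_2 = \tfrac{1}{8} \]

The truncated perturbation solution for the positive root is thus $x \approx 1 - \varepsilon/2 + \varepsilon^2/8$, which matches the Taylor expansion of the exact root $-\varepsilon/2 + \sqrt{1 + \varepsilon^2/4}$.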
https://en.wikipedia.org/wiki/Equalized%20odds
Equalized odds, also referred to as conditional procedure accuracy equality and disparate mistreatment, is a measure of fairness in machine learning. A classifier satisfies this definition if the subjects in the protected and unprotected groups have equal true positive rate and equal false positive rate, satisfying the formula:

$P(R = 1 \mid Y = y, A = a) = P(R = 1 \mid Y = y, A = b), \qquad y \in \{0, 1\},\ \forall a, b$

For example, $A$ could be gender, race, or any other characteristics that we want to be free of bias, while $Y$ would be whether the person is qualified for the degree, and the output $R$ would be the school's decision whether to offer the person to study for the degree. In this context, higher university enrollment rates of African Americans compared to whites with similar test scores might be necessary to fulfill the condition of equalized odds, if the "base rate" of $Y$ differs between the groups. The concept was originally defined for binary-valued $Y$. In 2017, Woodworth et al. generalized the concept further for multiple classes. See also Fairness (machine learning) Color blindness (racial classification)
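As a quick empirical check of this definition (an illustrative sketch; the function and the toy data are mine, not part of the definition), one can compare the group-conditional rates P(R = 1 | A, Y) directly:

import numpy as np

def equalized_odds_gaps(y_true, y_pred, group):
    # Equalized odds holds (exactly) when both gaps are zero.
    gaps = {}
    for y, name in ((1, "tpr_gap"), (0, "fpr_gap")):
        # P(R = 1 | A = g, Y = y) for each group g
        rates = [y_pred[(group == g) & (y_true == y)].mean() for g in (0, 1)]
        gaps[name] = abs(rates[0] - rates[1])
    return gaps

# Toy data: a classifier with a 10% error rate that is independent of the
# group, so both gaps should come out close to zero.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10_000)
y_true = rng.integers(0, 2, 10_000)
flip = rng.random(10_000) < 0.1
y_pred = np.where(flip, 1 - y_true, y_true)
print(equalized_odds_gaps(y_true, y_pred, group))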
https://en.wikipedia.org/wiki/Schottenstein%20Prize%20in%20Cardiovascular%20Sciences
The Schottenstein Prize in Cardiovascular Sciences is awarded biennially to physicians or biomedical scientists who are judged to have made extraordinary and sustained contributions to improving cardiovascular health. The award is worth US$100,000 and is conferred by The Ohio State University Wexner Medical Center’s Heart and Vascular Center. See also List of medicine awards
https://en.wikipedia.org/wiki/Pascal%27s%20wager
Pascal's wager is a philosophical argument advanced by Blaise Pascal (1623–1662), a notable seventeenth-century French mathematician, philosopher, physicist, and theologian. This argument posits that individuals essentially engage in a life-defining gamble regarding the belief in the existence of God. Pascal contends that a rational person should adopt a lifestyle consistent with the existence of God and actively strive to believe in God. The reasoning behind this stance lies in the potential outcomes: if God does not exist, the individual incurs only finite losses, potentially sacrificing certain pleasures and luxuries. However, if God does indeed exist, they stand to gain immeasurably, as represented for example by an eternity in Heaven in Abrahamic tradition, while simultaneously avoiding boundless losses associated with an eternity in Hell. The original articulation of this wager can be found in Pascal's posthumously published work titled Pensées ("Thoughts"), which comprises a compilation of previously unpublished notes. Notably, Pascal's wager is significant as it marks the initial formal application of decision theory, existentialism, pragmatism, and voluntarism. Critics of the wager question the ability to provide definitive proof of God's existence. The argument from inconsistent revelations highlights the presence of various belief systems, each claiming exclusive access to divine truths. Additionally, the argument from inauthentic belief raises concerns about the genuineness of faith in God if solely motivated by potential benefits and losses. Despite these critiques, Pascal's wager remains a subject of contemplation, sparking ongoing discussions about belief, rationality, and the complexities surrounding the existence of a higher power. The wager The wager uses the following logic (excerpts from Pensées, part III, §233): God is, or God is not. Reason cannot decide between the two alternatives. A Game is being played... where heads or tails will
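One standard decision-theoretic reconstruction of the wager (modern notation, not Pascal's own) makes the dominance argument explicit. Let $p > 0$ be the credence that God exists and $c$ the finite cost of a religious life:

\[ E[\text{wager for God}] = p \cdot \infty + (1 - p)(-c) = \infty, \qquad E[\text{wager against God}] \le (1 - p)\,c' < \infty \]

for any finite worldly gain $c'$ (and on readings where disbelief also risks an infinite loss, the gap only widens). Hence, for every nonzero $p$, wagering for God has the greater expected utility.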
https://en.wikipedia.org/wiki/Alpha%20helix
An alpha helix (or α-helix) is a sequence of amino acids in a protein that are twisted into a coil (a helix). The alpha helix is the most common structural arrangement in the secondary structure of proteins. It is also the most extreme type of local structure, and it is the local structure that is most easily predicted from a sequence of amino acids. The alpha helix has a right-handed helix conformation in which every backbone N−H group hydrogen bonds to the backbone C=O group of the amino acid that is four residues earlier in the protein sequence. Other names The alpha helix is also commonly called a: Pauling–Corey–Branson α-helix (from the names of three scientists who described its structure). 3.6₁₃-helix because there are on average 3.6 amino acids in one helical turn, with 13 atoms being involved in the ring formed by the hydrogen bond. Discovery In the early 1930s, William Astbury showed that there were drastic changes in the X-ray fiber diffraction of moist wool or hair fibers upon significant stretching. The data suggested that the unstretched fibers had a coiled molecular structure with a characteristic repeat of ≈5.1 Å (0.51 nm). Astbury initially proposed a linked-chain structure for the fibers. He later joined other researchers (notably the American chemist Maurice Huggins) in proposing that: the unstretched protein molecules formed a helix (which he called the α-form) the stretching caused the helix to uncoil, forming an extended state (which he called the β-form). Although incorrect in their details, Astbury's models of these forms were correct in essence and correspond to modern elements of secondary structure, the α-helix and the β-strand (Astbury's nomenclature was kept), which were developed by Linus Pauling, Robert Corey and Herman Branson in 1951 (see below); that paper showed both right- and left-handed helices, although in 1960 the crystal structure of myoglobin showed that the right-handed form is the common
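These two numbers fix the helix geometry. Combined with the standard textbook rise of about 1.5 Å per residue (a general reference value, not stated in the excerpt above), they give the pitch of one full turn:

\[ \text{pitch} \approx 3.6\ \tfrac{\text{residues}}{\text{turn}} \times 1.5\ \tfrac{\text{Å}}{\text{residue}} = 5.4\ \text{Å} \approx 0.54\ \text{nm per turn}. \]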
https://en.wikipedia.org/wiki/Simulation%20Optimization%20Library%3A%20Throughput%20Maximization
The problem of Throughput Maximization is a family of optimization problems in which iterative stochastic optimization algorithms attempt to find the maximum expected throughput of an n-stage flow line. According to Pichitlamken et al. (2006), there are two solutions to the discrete service-rate moderate-sized problem. The expected throughput is defined as the limiting throughput over a long time horizon, as opposed to the approximation induced by the need for a warm-up period and a ratio estimate, as described under measurement of time: each simulation replication should consist of warming up the system with 2000 released jobs starting from an empty system, then recording the time T required to release the next 50 jobs, and estimating the throughput on this replication as 50/T jobs per unit time. Time is then measured in the number of simulation replications performed. Problem statement Consider an n-stage flow line with finite buffer storage in front of Stations 2, 3,..., n, denoted by b2, b3,..., bn, and an infinite number of jobs in front of Station 1. There is a single server at each station, and the service time at Station i is exponentially distributed with service rate ri, i = 1,..., n. If the buffer of Station i is full, then Station i-1 is blocked (production blocking), so that a finished job cannot be released from Station i-1, i = 2,..., n. The total buffer space and the service rates are limited. The goal is to find a buffer allocation and service rates such that the throughput (average output of the flow line per unit time) is maximized. One can optionally take the service rates as integers or as continuous variables. In either case the problem is still considered as integer-ordered because the buffer allocations are integers. The constraints are: b2 + b3 + ... + bn ≤ B and r1 + r2 + ... + rn ≤ R. Recommended parameter settings A moderate-sized problem is n = 3; B = 20; R = 20. A larger problem is n = 12; B = 80; R = 80. Starting solution(s) Take b2=...=bn as large as possible without violating
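The replication scheme above (warm up with 2000 jobs, then time the next 50) can be sketched with the standard departure-time recursion for tandem lines with blocking-after-service. This is an illustrative implementation under my own reading of the model, not the library's reference code:

import random

def flow_line_throughput(rates, buffers, warmup=2000, count=50, seed=1):
    # rates:   service rates r_1..r_n (exponential service times)
    # buffers: buffer sizes b_2..b_n in front of stations 2..n
    random.seed(seed)
    n = len(rates)
    total = warmup + count
    # dep[i][j] = departure time of job j from station i; dep[0][j] = 0
    # models the infinite job supply in front of station 1.
    dep = [[0.0] * (total + 1) for _ in range(n + 1)]
    for j in range(1, total + 1):
        for i in range(1, n + 1):
            service = random.expovariate(rates[i - 1])
            start = max(dep[i - 1][j], dep[i][j - 1])  # job present, server free
            finish = start + service
            if i < n:
                # production blocking: job j leaves station i only after job
                # j - (b_{i+1} + 1) has freed a space downstream at station i+1
                k = j - buffers[i - 1] - 1
                if k >= 1:
                    finish = max(finish, dep[i + 1][k])
            dep[i][j] = finish
    return count / (dep[n][total] - dep[n][warmup])

# A feasible point of the moderate-sized instance (n = 3, B = 20, R = 20):
print(flow_line_throughput(rates=[7, 6, 7], buffers=[10, 10]))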
https://en.wikipedia.org/wiki/Ivy%20Bridge%20%28microarchitecture%29
Ivy Bridge is the codename for Intel's 22 nm microarchitecture used in the third generation of the Intel Core processors (Core i7, i5, i3). Ivy Bridge is a die shrink to 22 nm process based on FinFET ("3D") Tri-Gate transistors, from the former generation's 32 nm Sandy Bridge microarchitecture—also known as tick–tock model. The name is also applied more broadly to the Xeon and Core i7 Ivy Bridge-E series of processors released in 2013. Ivy Bridge processors are backward compatible with the Sandy Bridge platform, but such systems might require a firmware update (vendor specific). In 2011, Intel released the 7-series Panther Point chipsets with integrated USB 3.0 and SATA 3.0 to complement Ivy Bridge. Volume production of Ivy Bridge chips began in the third quarter of 2011. Quad-core and dual-core-mobile models launched on April 29, 2012 and May 31, 2012 respectively. Core i3 desktop processors, as well as the first 22 nm Pentium, were announced and available the first week of September 2012. Ivy Bridge is the last Intel platform on which Windows prior to Windows 7 (up to NT 6.0, X86 & X64) will be officially supported by Microsoft. It is also the earliest Intel microarchitecture to officially support Windows 10 64-bit (NT 10.0). Overview The Ivy Bridge CPU microarchitecture is a shrink from Sandy Bridge and remains largely unchanged. Like its predecessor, Sandy Bridge, Ivy Bridge was also primarily developed by Intel's Israel branch, located in Haifa, Israel. Notable improvements include: New 22 nm Tri-gate transistor ("3-D") technology offers as much as a 50% reduction in power consumption at the same performance level as compared to 2-D planar transistors on Intel's 32 nm process. A new pseudorandom number generator and the RDRAND instruction, codenamed Bull Mountain. Ivy Bridge features and performance The mobile and desktop Ivy Bridge chips also include some minor yet notable changes over Sandy Bridge: CPU F16C (16-bit floating-point conversion inst
https://en.wikipedia.org/wiki/Ethnomathematics
In mathematics education, ethnomathematics is the study of the relationship between mathematics and culture. Often associated with "cultures without written expression", it may also be defined as "the mathematics which is practised among identifiable cultural groups". It refers to a broad cluster of ideas ranging from distinct numerical and mathematical systems to multicultural mathematics education. The goal of ethnomathematics is to contribute both to the understanding of culture and the understanding of mathematics, and mainly to lead to an appreciation of the connections between the two. Development and meaning The term "ethnomathematics" was introduced by the Brazilian educator and mathematician Ubiratan D'Ambrosio in 1977 during a presentation for the American Association for the Advancement of Science. Since D'Ambrosio put forth the term, people - D'Ambrosio included - have struggled with its meaning ("An etymological abuse leads me to use the words, respectively, ethno and mathema for their categories of analysis, and tics (from techne)".). The following is a sampling of some of the definitions of ethnomathematics proposed between 1985 and 2006: "The mathematics which is practiced among identifiable cultural groups such as national-tribe societies, labour groups, children of certain age brackets and professional classes". "The mathematics implicit in each practice". "The study of mathematical ideas of a non-literate culture". "The codification which allows a cultural group to describe, manage and understand reality". "Mathematics…is conceived as a cultural product which has developed as a result of various activities". "The study and presentation of mathematical ideas of traditional peoples". "Any form of cultural knowledge or social activity characteristic of a social group and/or cultural group that can be recognized by other groups such as Western anthropologists, but not necessarily by the group of origin, as mathematical knowledge or mathematica
https://en.wikipedia.org/wiki/Gallium%20palladide
Gallium palladide (GaPd or PdGa) is an intermetallic compound of gallium and palladium that crystallizes in the iron monosilicide (FeSi) structure. The compound has been suggested as an improved catalyst for hydrogenation reactions. In principle, gallium palladide can be a more selective catalyst since, unlike in substituted compounds, the palladium atoms are spaced out in a regular crystal structure rather than randomly.
https://en.wikipedia.org/wiki/Meridian%20%28geography%29
In geography and geodesy, a meridian is the locus connecting points of equal longitude, which is the angle (in degrees or other units) east or west of a given prime meridian (currently, the IERS Reference Meridian). In other words, it is a line of longitude. The position of a point along the meridian is given by that longitude and its latitude, measured in angular degrees north or south of the Equator. On a Mercator projection or on a Gall-Peters projection, each meridian is perpendicular to all circles of latitude. A meridian is half of a great circle on Earth's surface. The length of a meridian on a modern ellipsoid model of Earth (WGS 84) has been estimated as approximately 20,003.93 km. Pre-Greenwich The first prime meridian was set by Eratosthenes in 200 BCE. This prime meridian was used to provide measurement of the earth, but had many problems because of the lack of latitude measurement. Many years later, around the 19th century, there were still concerns about the prime meridian. Multiple locations for the geographical meridian meant that there was inconsistency, because each country had its own guidelines for where the prime meridian was located. Etymology The term meridian comes from the Latin meridies, meaning "midday"; the subsolar point passes through a given meridian at solar noon, midway between the times of sunrise and sunset on that meridian. Likewise, the Sun crosses the celestial meridian at the same time. The same Latin stem gives rise to the terms a.m. (ante meridiem) and p.m. (post meridiem) used to disambiguate hours of the day when utilizing the 12-hour clock. International Meridian Conference Because of a growing international economy, there was a demand for a set international prime meridian to make it easier for worldwide traveling which would, in turn, enhance international trading across countries. As a result, a conference was held in 1884, in Washington, D.C. Twenty-six countries were present at the International Meridian Conference to vote on an internation
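That figure can be reproduced numerically by integrating the meridian radius of curvature over latitude (a sketch using the standard WGS 84 constants; the published value may differ in the last decimals):

from math import sin, pi
from scipy.integrate import quad

# WGS 84 defining parameters
a = 6378137.0                 # semi-major axis (m)
f = 1 / 298.257223563         # flattening
e2 = f * (2 - f)              # first eccentricity squared

# Meridian radius of curvature; integrating it over latitude gives arc length.
def M(phi):
    return a * (1 - e2) / (1 - e2 * sin(phi) ** 2) ** 1.5

quarter, _ = quad(M, 0, pi / 2)   # equator to pole
print(2 * quarter / 1000)         # pole-to-pole length, ≈ 20003.93 km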
https://en.wikipedia.org/wiki/ID.me
ID.me is an American online identity network company that allows people to provide proof of their legal identity online. ID.me digital credentials can be used to access government services, healthcare logins, or discounts from retailers. The company is based in McLean, Virginia. In the wake of the economic downturn caused by the COVID-19 pandemic, ID.me was contracted by numerous state unemployment agencies to verify the identities of claimants. The US Internal Revenue Service also uses ID.me as the only option for verifying identity when accessing its online taxpayer tools. History Origins as TroopSwap and Troop ID ID.me was founded in early 2010 by Blake Hall and Matt Thompson as TroopSwap, a daily deals website similar to Groupon and LivingSocial with a focus on the American military community. The company evolved into Troop ID, which provided digital identity verification for military personnel and veterans. Troop ID allowed service members and veterans to access online benefits from retailers, such as military discounts, as well as government agencies like the United States Department of Veterans Affairs. Rebrand to ID.me In 2013, the company rebranded again as ID.me with the goal of providing a ubiquitous secure identity verification network. To that end, they expanded to include verification of credentials for first responders, nurses, and students for discounts. In 2013, ID.me was awarded a two-year grant by the United States Chamber of Commerce to participate in the President's National Strategy for Trusted Identities in Cyberspace (NSTIC), a pilot project intended to help develop secure digital identification methods. In late 2014, ID.me won a contract with the General Services Administration to provide digital identity credentials with Connect.gov. Co-founder Matt Thompson left the company in 2015. In March 2017, ID.me received $19 million in its Series B funding round. In 2018, ID.me became the first digital identity provider to be certified by the Kantara Initiati
https://en.wikipedia.org/wiki/Soil%20liquefaction
Soil liquefaction occurs when a cohesionless saturated or partially saturated soil substantially loses strength and stiffness in response to an applied stress such as shaking during an earthquake or other sudden change in stress condition, in which material that is ordinarily a solid behaves like a liquid. In soil mechanics, the term "liquefied" was first used by Allen Hazen in reference to the 1918 failure of the Calaveras Dam in California. He described the mechanism of flow liquefaction of the embankment dam as: The phenomenon is most often observed in saturated, loose (low density or uncompacted), sandy soils. This is because a loose sand has a tendency to compress when a load is applied. Dense sands, by contrast, tend to expand in volume or 'dilate'. If the soil is saturated by water, a condition that often exists when the soil is below the water table or sea level, then water fills the gaps between soil grains ('pore spaces'). In response to soil compressing, the pore water pressure increases and the water attempts to flow out from the soil to zones of low pressure (usually upward towards the ground surface). However, if the loading is rapidly applied and large enough, or is repeated many times (e.g., earthquake shaking, storm wave loading) such that the water does not flow out before the next cycle of load is applied, the water pressures may build to the extent that they exceed the forces (contact stresses) between the grains of soil that keep them in contact. These contacts between grains are the means by which the weight from buildings and overlying soil layers is transferred from the ground surface to layers of soil or rock at greater depths. This loss of soil structure causes it to lose its strength (the ability to transfer shear stress), and it may be observed to flow like a liquid (hence 'liquefaction'). Although the effects of soil liquefaction have been long understood, engineers took more notice after the 1964 Alaska earthquake and 1964 Niigata earth
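The mechanism sketched above is usually summarized with Terzaghi's effective stress principle (standard soil mechanics notation, not taken from this excerpt). With total stress $\sigma$ and pore water pressure $u$,

\[ \sigma' = \sigma - u, \qquad \tau_f = c' + \sigma' \tan\varphi', \]

so as $u$ builds toward $\sigma$, the effective stress $\sigma'$ and hence the shear strength $\tau_f$ of a cohesionless soil ($c' \approx 0$) approach zero, which is the onset of liquefaction.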
https://en.wikipedia.org/wiki/NDUFAF1
Complex I intermediate-associated protein 30, mitochondrial (CIA30), or NADH dehydrogenase [ubiquinone] 1 alpha subcomplex assembly factor 1 (NDUFAF1), is a protein that in humans is encoded by the NDUFAF1 or CIA30 gene. The NDUFAF1 gene encodes a human homolog of a Neurospora crassa protein involved in the assembly of complex I. The NDUFAF1 protein is an assembly factor of NADH dehydrogenase (ubiquinone), also known as complex I, which is located in the mitochondrial inner membrane and is the largest of the five complexes of the electron transport chain. Variants of the NDUFAF1 gene are associated with hypertrophic cardiomyopathy, leukodystrophy, and cardioencephalomyopathy. Structure NDUFAF1 is located on the q arm of chromosome 15 in position 15.1. The NDUFAF1 gene produces a 37.8 kDa protein composed of 327 amino acids. NDUFAF1 is associated with complexes of 600 and 700 kDa. Complex I is structured in a bipartite L-shaped configuration, which is made up of a peripheral matrix arm, consisting of flavoproteins and iron-sulfur proteins involved in electron transfer, and a membrane arm, consisting of mtDNA-encoded subunits involved in ubiquinone reduction and proton pumping. NDUFAF1 has been shown to interact with assembly intermediates and may play roles in the correct assembly and combination of the peripheral arm to the complete membrane arm as well as in the stabilization and scaffolding of those intermediates through those close interactions. Function NDUFAF1 is an assembly factor that is important for the correct assembly of NADH dehydrogenase (ubiquinone). It ensures the correct combination of complex intermediates and is necessary for the correct functioning of NADH dehydrogenase (ubiquinone). Specifically, NDUFAF1 binds to the large membrane arm intermediate and is involved in the combination of the small and large membrane arm intermediates of complex I. It has also been suggested that NDUFAF1 is involved in the stabilization and scaffolding of assembly
https://en.wikipedia.org/wiki/1000%20Plant%20Genomes%20Project
The 1000 Plant Transcriptomes Initiative (1KP) was an international research effort to establish the most detailed catalogue of genetic variation in plants. It was announced in 2008 and headed by Gane Ka-Shu Wong and Michael Deyholos of the University of Alberta. The project successfully sequenced the transcriptomes (expressed genes) of 1000 different plant species by 2014; its final capstone products were published in 2019. 1KP was one of the large-scale (involving many organisms) sequencing projects designed to take advantage of the wider availability of high-throughput ("next-generation") DNA sequencing technologies. The similar 1000 Genomes Project, for example, obtained high-coverage genome sequences of 1000 individual people between 2008 and 2015, to better understand human genetic variation. This project provided a template for further planetary-scale genome projects, including the 10KP project, sequencing the whole genomes of 10,000 plants, and the Earth BioGenome Project, aiming to sequence, catalog, and characterize the genomes of all of Earth's eukaryotic biodiversity. Goals The number of classified green plant species has been estimated at around 370,000, though there are probably many thousands more yet unclassified. Despite this number, very few of these species have detailed DNA sequence information to date; 125,426 species are represented in GenBank, but most (>95%) have DNA sequence for only one or two genes. "...almost none of the roughly half million plant species known to humanity has been touched by genomics at any level". The 1000 Plant Genomes Project aimed to produce roughly a 100-fold increase in the number of plant species with available broad genome sequence. Evolutionary relationships There have been efforts to determine the evolutionary relationships between the known plant species, but phylogenies (or phylogenetic trees) created solely using morphological data, cellular structures, single enzymes, or on only a few sequences (like rRNA) can be p
https://en.wikipedia.org/wiki/Bonwill%20Triangle
The Bonwill Triangle is a triangle formed by the contact point of the mandibular central incisors and the right and left mandibular condyles. Description The distances between these points are equal in most humans and average about 10 cm. The triangle is therefore an equilateral triangle in those cases. William Gibson Arlington Bonwill (1833–1899) was the first to describe this. Two variants of the Bonwill Triangle have been described that differ in the exact location of the points on the condyles. In one, the points are located on the articulating surfaces of the condyles, whilst in the other they are located centrally within the heads of the mandible. Application In terms of practical application, the Bonwill Triangle is utilized in the construction and correct application of fixed articulators, which (in contrast to fully- and semi-adjustable articulators) are a type of articulator that does not allow for adjustments to imitate the specific movements of an individual's jaw. Instead, these articulators are designed to replicate the average movements of the jaw, and among others the Bonwill Triangle relationship is used to achieve this result. See also Frankfurt plane
https://en.wikipedia.org/wiki/9-Pin%20Protocol
The Sony 9-Pin Protocol or P1 protocol is a two-way communications protocol to control advanced video recorders. Sony introduced this protocol to control reel-to-reel type C video tape recorders (VTR) as well as videocassette recorders (VCR). It uses a DE-9 D-sub connector with 9 pins (hence the name), where bi-directional communication takes place over a four-wire cable according to the RS-422 standard. While nowadays all post-production editing is done with a non-linear editing system, in those days editing was done linearly, using online editing. Editing machines relied heavily on the 9-Pin Protocol to remotely control automatic players and recorders. Many modern hard disk recorders and solid-state drive recorders can still emulate a 1982 Sony BVW-75 Betacam tape recorder. Sony's standard also specifies a pinout; this 9-pin RS-422 pinout has become a de facto standard, used by most brands in the broadcast industry. In the new millennium, RS-422 is slowly being phased out in favor of Ethernet for control functions; however, the ease with which it can be troubleshot means it will stay around for a long time. In broadcast automation, the Video Disk Control Protocol (VDCP) uses the 9-Pin Protocol to play out broadcast programming schedules. External links Sony 9-Pin Remote Protocol (Archived) Copy of Sony 9-Pin Remote Protocol Brainboxes serial port 9-pin protocol support Drastic support of 9-pin protocol Blackmagic Decklink (a video capture/generation card) support of 9-pin protocol Blackmagic Hyperdeck (an SSD recorder) support of 9-pin protocol Ross Kiva (a presentation server) RS-422 9-pin connector JLCooper Grass Valley K2 Summit (a media server) RS-422 connections
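As an illustration of how such a control link is driven in software, here is a sketch based on commonly cited descriptions of the framing; the opcodes, serial settings, and device path are assumptions to be checked against Sony's documentation, not values taken from the article above:

import serial   # pyserial

def build_command(cmd1, cmd2, data=b""):
    # Byte 1: command group in the high nibble, data byte count in the low
    # nibble; the trailing checksum is the low byte of the sum of all bytes.
    frame = bytes([(cmd1 << 4) | len(data), cmd2]) + data
    return frame + bytes([sum(frame) & 0xFF])

PLAY = build_command(0x2, 0x01)   # 20 01 21 (assumed PLAY opcode)
STOP = build_command(0x2, 0x00)   # 20 00 20 (assumed STOP opcode)

# Commonly cited settings: 38400 baud, 8 data bits, odd parity, 1 stop bit
port = serial.Serial("/dev/ttyUSB0", 38400, bytesize=8,
                     parity=serial.PARITY_ODD, stopbits=1, timeout=0.5)
port.write(PLAY)
print(port.read(3).hex())   # a deck typically answers with an ACK frame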
https://en.wikipedia.org/wiki/Kotobank
Kotobank is a Japanese-language online encyclopedia which allows users to search across dictionaries, encyclopedias, and databases provided by publishers and others. It is operated by Voyage Marketing Co. When the service was first launched in 2009, it was stylized "kotobank" in rōmaji, but has since been stylized in katakana. History In June 2008, The Asahi Shimbun and EC Navi Inc. launched the "Minna no Chiezo" service, an online version of "Chiezo," a dictionary of modern terms that had formerly been published in print. The service was rebuilt as a dictionary platform in which various companies could participate. The "kotobank" service was launched on April 23, 2009, under the management of The Asahi Shimbun and EC Navi Inc. At the time of its launch, it claimed to cover a total of 430,000 entries in 44 dictionaries and encyclopedias, the core of which were provided by Kodansha, Shogakukan, and Asahi Shimbun Publishing. In its early days, the site had strong ties with The Asahi Shimbun, with related news from The Asahi Shimbun's website, asahi.com, appearing on its pages. The Asahi Shimbun and Genesix began distributing the "kotobank for iPhone" electronic dictionary platform application for the iPhone in March 2011. In October 2011, EC Navi, which had been operating the site, changed its name to Voyage Group Inc. On October 1, 2019, following a corporate reorganization of Voyage Group Inc, Voyage Marketing Inc, a subsidiary of Carta Holdings, took over operation of the service. In April 2021, the Asahi Shimbun logo was removed from the site, leaving Voyage Marketing as the sole name displayed, and the registered trademark was also transferred from The Asahi Shimbun to Voyage Marketing. At the same time, the link to Kotobank from The Asahi Shimbun homepage was also removed. Reliability When the service was launched in 2009, The Asahi Shimbun and other operators pointed out the unreliability of information on the Internet and stated, "We aim to be the largest free glossary site in Japan with high re
https://en.wikipedia.org/wiki/Bag-of-words%20model
The bag-of-words model is a model of text represented as an unordered collection of words. It is used in natural language processing and information retrieval (IR). It disregards grammar and word order but keeps multiplicity. The bag-of-words model has also been used for computer vision. The bag-of-words model is commonly used in methods of document classification where the (frequency of) occurrence of each word is used as a feature for training a classifier. An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure. The Bag-of-words model is one example of a Vector space model. Example implementation The following models a text document using bag-of-words. Here are two simple text documents: (1) John likes to watch movies. Mary likes movies too. (2) Mary also likes to watch football games. Based on these two text documents, a list is constructed as follows for each document: "John","likes","to","watch","movies","Mary","likes","movies","too" "Mary","also","likes","to","watch","football","games" Representing each bag-of-words as a JSON object, and attributing to the respective JavaScript variable: BoW1 = {"John":1,"likes":2,"to":1,"watch":1,"movies":2,"Mary":1,"too":1}; BoW2 = {"Mary":1,"also":1,"likes":1,"to":1,"watch":1,"football":1,"games":1}; Each key is the word, and each value is the number of occurrences of that word in the given text document. The order of elements is free, so, for example {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1} is also equivalent to BoW1. It is also what we expect from a strict JSON object representation. Note: if another document is like a union of these two, (3) John likes to watch movies. Mary likes movies too. Mary also likes to watch football games. its JavaScript representation will be: BoW3 = {"John":1,"likes":3,"to":2,"watch":2,"movies":2,"Mary":2,"too":1,"also":1,"football":1,"games":1}; So, as we see in the bag alge
https://en.wikipedia.org/wiki/Importance%20Value%20Index
The Importance Value Index, in ecology, is a measure of how dominant a species is in a given ecosystem.
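The stub above does not give the computation; in its common phytosociological form (following Curtis and McIntosh, stated here as background rather than as part of the original text), the index for a species $i$ sums three relative measures, each expressed as a percentage:

\[ \mathrm{IVI}_i = \text{relative density}_i + \text{relative frequency}_i + \text{relative dominance}_i, \]

so the index ranges from 0 to 300, with larger values indicating more dominant species.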
https://en.wikipedia.org/wiki/Search-based%20software%20engineering
Search-based software engineering (SBSE) applies metaheuristic search techniques such as genetic algorithms, simulated annealing and tabu search to software engineering problems. Many activities in software engineering can be stated as optimization problems. Optimization techniques of operations research such as linear programming or dynamic programming are often impractical for large scale software engineering problems because of their computational complexity or their assumptions on the problem structure. Researchers and practitioners use metaheuristic search techniques, which impose few assumptions on the problem structure, to find near-optimal or "good-enough" solutions. SBSE problems can be divided into two types: black-box optimization problems, for example, assigning people to tasks (a typical combinatorial optimization problem). white-box problems where operations on source code need to be considered. Definition SBSE converts a software engineering problem into a computational search problem that can be tackled with a metaheuristic. This involves defining a search space, or the set of possible solutions. This space is typically too large to be explored exhaustively, suggesting a metaheuristic approach. A metric (also called a fitness function, cost function, objective function or quality measure) is then used to measure the quality of potential solutions. Many software engineering problems can be reformulated as a computational search problem. The term "search-based application", in contrast, refers to using search-engine technology, rather than search techniques, in another industrial application. Brief history One of the earliest attempts to apply optimization to a software engineering problem was reported by Webb Miller and David Spooner in 1976 in the area of software testing. In 1992, S. Xanthakis and his colleagues applied a search technique to a software engineering problem for the first time. The term SBSE was first used in 2001 by Harman a
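As a toy illustration of recasting a software engineering task as a search problem (entirely my own sketch, using a classic branch-distance fitness from search-based test-data generation):

import random

def branch_distance(x):
    # Fitness for covering the branch `if x == 42:` in the code under test:
    # zero when the branch would be taken, growing with the distance from it.
    return abs(x - 42)

def hill_climb(fitness, neighbours, max_steps=100_000):
    current = random.randint(-10_000, 10_000)
    for _ in range(max_steps):
        if fitness(current) == 0:
            break
        best = min(neighbours(current), key=fitness)
        if fitness(best) < fitness(current):
            current = best
        else:
            current = random.randint(-10_000, 10_000)   # random restart
    return current

test_input = hill_climb(branch_distance, lambda x: [x - 1, x + 1])
print(test_input)   # 42: an input that exercises the target branch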
https://en.wikipedia.org/wiki/H%C3%BClle%20cell
Eduard Eidam first described Hülle cells in 1883, terming them a “Blasenhülle” or bubble envelope (Eidam 1883). Hülle-cell-like structures are known in other species, such as Candida albicans, which produces globose blisters named chlamydospores at the very ends of its hyphae (Navarathna et al., 2016). Eidam suggested that Hülle cells originate from the tip of “secondary hyphae” which in turn emerge from “primary hyphae” and develop as a consequence of a swelling process. Hülle cells and the subtending hyphae are connected via two distinct types of septa. The inner one is a single perforate septum where woronin bodies can be observed and represents a typical ascomycetous septum. The second septum, which separates Hülle cells from the subtending hyphae, is unique and named the basal septum. At the basal septum vesicle fusion is observable. As a consequence of this fusion, so-called lomasome-like accumulations are visible. These lomasome-like structures are membrane invaginations. In Hülle cells several nuclei, mitochondria, lipid bodies and storage products can be observed (Ellis et al., 1973). During initial Hülle cell formation, it was shown that several nuclei fuse to form a macronucleus (Carvalho et al., 2002). Different species of the genus Aspergillus produce Hülle cells, including Aspergillus nidulans and Aspergillus heterothallicus (Bayram and Braus 2012). Hülle cells have an average size of 12-20 μm, are of globose shape with an unusually thick cell wall, and are mainly associated with the sexual developmental program. Hülle cells are known for all species in the section Nidulantes. In different species, Hülle cells vary in shape between the more elongated, such as in Aspergillus ustus, and the globose version, as in Aspergillus nidulans. In Aspergillus nidulans and Aspergillus heterothallicus Hülle cells associate with the cleistothecia, whereas in Aspergillus protuberus and Aspergillus ustus Hülle cells are not in direct contact with the cleistothe
https://en.wikipedia.org/wiki/Age%20class%20structure
Age class structure in fisheries and wildlife management is a part of population assessment. Age class structures can be used to model many populations including trees and fish. This method can be used to predict the occurrence of forest fires within a forest population. Age can be determined by counting growth rings in fish scales, otoliths, cross-sections of fin spines for species with thick spines such as triggerfish, or teeth for a few species. Each method has its merits and drawbacks. Fish scales are easiest to obtain, but may be unreliable if scales have fallen off the fish and new ones grown in their places. Fin spines may be unreliable for the same reason, and most fish do not have spines of sufficient thickness for clear rings to be visible. Otoliths will have stayed with the fish throughout its life history, but obtaining them requires killing the fish. Also, otoliths often require more preparation before ageing can occur. Analyzing fisheries age class structure An example of using age class structure to learn about a population is a regular bell curve for the population of 1-5 year-old fish, but with a very low number of 3-year-olds. A gap like this in the age class structure implies a bad spawning year three years earlier in that species. Often fish in younger age class structures have very low numbers because they were small enough to slip through the sampling nets, and may in fact have a very healthy population. See also Identification of aging in fish Population pyramid Population dynamics of fisheries
https://en.wikipedia.org/wiki/IMA%20Journal%20of%20Numerical%20Analysis
The IMA Journal of Numerical Analysis is a quarterly peer-reviewed scientific journal published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. It was established in 1981 and covers all aspects of numerical analysis, including theory, development or use of practical algorithms, and interactions between these aspects. The editors-in-chief are C.M. Elliott (University of Warwick), A. Iserles (University of Cambridge), and E. Süli (University of Oxford). Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.275.
https://en.wikipedia.org/wiki/Flat%20cover
In algebra, a flat cover of a module M over a ring is a surjective homomorphism from a flat module F to M that is in some sense minimal. Any module over a ring has a flat cover that is unique up to (non-unique) isomorphism. Flat covers are in some sense dual to injective hulls, and are related to projective covers and torsion-free covers. Definitions The homomorphism F→M is defined to be a flat cover of M if it is surjective, F is flat, every homomorphism from a flat module to M factors through F, and any map from F to F commuting with the map to M is an automorphism of F. History While projective covers for modules do not always exist, it was speculated that for general rings, every module would have a flat cover. This flat cover conjecture was first stated explicitly by Enochs (1981). The conjecture turned out to be true, resolved positively and proved simultaneously by Bican, El Bashir, and Enochs (2001). This was preceded by important contributions by P. Eklof, J. Trlifaj and J. Xu. Minimal flat resolutions Any module M over a ring has a resolution by flat modules → F2 → F1 → F0 → M → 0 such that each Fn+1 is the flat cover of the kernel of Fn → Fn−1. Such a resolution is unique up to isomorphism, and is a minimal flat resolution in the sense that any flat resolution of M factors through it. Any homomorphism of modules extends to a homomorphism between the corresponding flat resolutions, though this extension is in general not unique.
https://en.wikipedia.org/wiki/Mathematical%20visualization
Mathematical phenomena can be understood and explored via visualization. Classically this consisted of two-dimensional drawings or building three-dimensional models (particularly plaster models in the 19th and early 20th century), while today it most frequently consists of using computers to make static two or three dimensional drawings, animations, or interactive programs. Writing programs to visualize mathematics is an aspect of computational geometry. Applications Mathematical visualization is used throughout mathematics, particularly in the fields of geometry and analysis. Notable examples include plane curves, space curves, polyhedra, ordinary differential equations, partial differential equations (particularly numerical solutions, as in fluid dynamics or minimal surfaces such as soap films), conformal maps, fractals, and chaos. Geometry Geometry can be defined as the study of shapes: their size, angles, dimensions and proportions. Linear algebra Complex analysis In complex analysis, functions of the complex plane are inherently 4-dimensional, but there is no natural geometric projection into lower dimensional visual representations. Instead, colour vision is exploited to capture dimensional information using techniques such as domain coloring. Chaos theory Differential geometry Topology Many people have a vivid “mind’s eye,” but a team of British scientists has found that tens of millions of people cannot conjure images. The lack of a mental camera is known as aphantasia, and millions more experience extraordinarily strong mental imagery, called hyperphantasia. Researchers are studying how these two conditions arise through changes in the wiring of the brain. Visualization played an important role at the beginning of topological knot theory, when polyhedral decompositions were used to compute the homology of covering spaces of knots. Extending to 3 dimensions the physically impossible Riemann surfaces used to classify all closed orientable 2-manifolds,
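To make the domain coloring idea concrete, here is a small sketch (my own choice of function and color mapping, not from the article) that encodes the argument of a complex function as hue and its modulus as brightness:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x = np.linspace(-2, 2, 800)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
W = (Z**2 - 1) / (Z**2 + 1)           # an arbitrary rational function to plot

H = (np.angle(W) / (2 * np.pi)) % 1.0   # hue encodes arg(W)
V = 1 - 1 / (1 + np.abs(W)) ** 0.3      # brightness encodes |W|
rgb = hsv_to_rgb(np.dstack((H, np.ones_like(H), V)))

plt.imshow(rgb, extent=(-2, 2, -2, 2), origin="lower")
plt.title("Domain coloring of (z^2 - 1)/(z^2 + 1)")
plt.show()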
https://en.wikipedia.org/wiki/Colossal%20Typewriter
Colossal Typewriter by John McCarthy and Roland Silver was one of the earliest computer text editors. The program ran on the PDP-1 at Bolt, Beranek and Newman (BBN) by December 1960. About this time, both authors were associated with the Massachusetts Institute of Technology, but it is unclear whether the editor ran on the TX-0 on loan to MIT from Lincoln Laboratory or on the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. A "Colossal Typewriter Program" is in the BBN Program Library, and, under the same name, in the DECUS Program Library as BBN-6 (CT). See also Expensive Typewriter TECO RUNOFF TJ-2
https://en.wikipedia.org/wiki/Alternant%20matrix
In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix. Generally, if $f_1, f_2, \ldots, f_n$ are functions from a set $X$ to a field $F$, and $\alpha_1, \alpha_2, \ldots, \alpha_m \in X$, then the alternant matrix has size $m \times n$ and is defined by

$M = \begin{bmatrix} f_1(\alpha_1) & f_2(\alpha_1) & \cdots & f_n(\alpha_1) \\ f_1(\alpha_2) & f_2(\alpha_2) & \cdots & f_n(\alpha_2) \\ \vdots & \vdots & & \vdots \\ f_1(\alpha_m) & f_2(\alpha_m) & \cdots & f_n(\alpha_m) \end{bmatrix}$

or, more compactly, $M_{ij} = f_j(\alpha_i)$. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which $f_j(\alpha) = \alpha^{j-1}$, and Moore matrices, for which $f_j(\alpha) = \alpha^{q^{j-1}}$. Properties The alternant can be used to check the linear independence of the functions in function space. For example, let $f_1(x) = \sin x$, $f_2(x) = \cos x$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi/2$. Then the alternant is the matrix $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ and the alternant determinant is $-1 \neq 0$. Therefore M is invertible and the vectors form a basis for their spanning set: in particular, $\sin x$ and $\cos x$ are linearly independent. Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let $f_1(x) = \sin x$, $f_2(x) = \cos x$ and choose $\alpha_1 = 0$, $\alpha_2 = \pi$. Then the alternant is $\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}$ and the alternant determinant is 0, but we have already seen that $\sin x$ and $\cos x$ are linearly independent. Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which $\frac{A}{x+1} + \frac{B}{x+2} = \frac{1}{(x+1)(x+2)}$. Choosing $f_1(x) = \frac{1}{x+1}$, $f_2(x) = \frac{1}{x+2}$, $f_3(x) = \frac{1}{(x+1)(x+2)}$ and $(\alpha_1, \alpha_2) = (1, 2)$, we obtain the alternant $\begin{bmatrix} 1/2 & 1/3 & 1/6 \\ 1/3 & 1/4 & 1/12 \end{bmatrix}$. Therefore, $(1, -1, -1)$ is in the nullspace of the matrix: that is, $A = 1$ and $B = -1$. Moving $f_3$ to the other side of the equation gives the partial fraction decomposition $\frac{1}{(x+1)(x+2)} = \frac{1}{x+1} - \frac{1}{x+2}$. If $\alpha_i = \alpha_j$ for any $i \neq j$, then the alternant determinant is zero (as a row is repeated). If $m = n$ and the functions $f_1, \ldots, f_n$ are all polynomials, then $(\alpha_j - \alpha_i)$ divides the alternant determinant for all $1 \leq i < j \leq n$. In particular, if V is a Vandermonde matrix, then $\det V = \prod_{i < j} (\alpha_j - \alpha_i)$ divides such polynomial alternant determinants. The ratio $\frac{\det M}{\det V}$ is therefore a polynomial in $\alpha_1, \ldots, \alpha_m$ called the bialternant. The Schur polynomial $s_{(\lambda_1, \ldots, \lambda_n)}$ is classically defined as the bialternant of the polynomials $f_j(x) = x^{\lambda_j + n - j}$. Applications Alternant matrices are used in c
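The definition $M_{ij} = f_j(\alpha_i)$ translates directly into code; here is a small numerical check of the examples above (my own sketch, not part of the article):

import numpy as np

def alternant(funcs, points):
    # Alternant matrix M with M[i][j] = funcs[j](points[i])
    return np.array([[f(a) for f in funcs] for a in points])

# Vandermonde as a special case: f_j(x) = x**(j-1)
V = alternant([lambda x, j=j: x ** j for j in range(3)], [1.0, 2.0, 3.0])
print(np.linalg.det(V))   # 2.0 = (2-1)(3-1)(3-2)

# sin and cos at 0 and pi/2: a nonzero alternant determinant,
# confirming their linear independence
M = alternant([np.sin, np.cos], [0.0, np.pi / 2])
print(np.linalg.det(M))   # -1.0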
https://en.wikipedia.org/wiki/Visual%20information%20fidelity
Visual information fidelity (VIF) is a full reference image quality assessment index based on natural scene statistics and the notion of image information extracted by the human visual system. It was developed by Hamid R Sheikh and Alan Bovik at the Laboratory for Image and Video Engineering (LIVE) at the University of Texas at Austin in 2006. It is deployed in the core of the Netflix VMAF video quality monitoring system, which controls the picture quality of all encoded videos streamed by Netflix. Model overview Images and videos of the three-dimensional visual environments come from a common class: the class of natural scenes. Natural scenes form a tiny subspace in the space of all possible signals, and researchers have developed sophisticated models to characterize these statistics. Most real-world distortion processes disturb these statistics and make the image or video signals unnatural. The VIF index employs natural scene statistical (NSS) models in conjunction with a distortion (channel) model to quantify the information shared between the test and the reference images. Further, the VIF index is based on the hypothesis that this shared information is an aspect of fidelity that relates well with visual quality. In contrast to prior approaches based on human visual system (HVS) error-sensitivity and measurement of structure, this statistical approach, used in an information-theoretic setting, yields a full reference (FR) quality assessment (QA) method that does not rely on any HVS or viewing geometry parameter, nor any constants requiring optimization, and yet is competitive with state of the art QA methods. Specifically, the reference image is modeled as being the output of a stochastic `natural' source that passes through the HVS channel and is processed later by the brain. The information content of the reference image is quantified as being the mutual information between the input and output of the HVS channel. This is the information that the brain coul
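Schematically, the construction described above reduces to a ratio of mutual informations (a paraphrase in my own notation, with $C$ the natural-scene source, $E$ the HVS-channel output for the reference image, and $F$ the corresponding output for the distorted image, model parameters held fixed):

\[ \mathrm{VIF} = \frac{I(C; F)}{I(C; E)}, \]

so a pristine image scores 1 and the score falls as the distortion channel destroys information the brain could otherwise extract.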
https://en.wikipedia.org/wiki/Digital%20current%20loop%20interface
For serial communications, a current loop is a communication interface that uses current instead of voltage for signaling. Current loops can be used over moderately long distances (tens of kilometres), and can be interfaced with optically isolated links. There are a variety of such systems, but one based on a 20 mA current level was used by the Teletype Model 33 and was particularly common on minicomputers and early microcomputers, which used these as computer terminals. As a result, most computer terminals also supported this standard into the 1980s. History Long before the RS-232 standard, current loops were used to send digital data in serial form for teleprinters. More than two teleprinters could be connected on a single circuit allowing a simple form of networking. Older teleprinters used a 60 mA current loop. Later machines, such as the Teletype Model 33, operated on a lower 20 mA current level and most early minicomputers featured a 20 mA current loop interface, with an RS-232 port generally available as a more expensive option. The original IBM PC serial port card had provisions for a 20 mA current loop. Signaling conventions A digital current loop uses the absence of current for high (space or break), and the presence of current in the loop for low (mark). This is done to ensure that under normal conditions there is always current flowing, and in the event of a line being cut the flow stops indefinitely, immediately raising the alarm, usually as the heavy noise of the unsynchronized teleprinter; this would not have been possible if the idle state had been no current flowing. Electrical characteristics The maximum resistance for a current loop is limited by the available voltage. Current loop interfaces usually use voltages much higher than those found on an RS-232 interface, and cannot be interconnected with voltage-type inputs without some form of level translator circuit. For full-duplex communication between two devices
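The voltage limit on loop resistance is just Ohm's law; as a worked example (values chosen purely for illustration), a hypothetical 48 V loop supply driving a 20 mA loop tolerates at most

\[ R_{\max} = \frac{V}{I} = \frac{48\ \text{V}}{0.020\ \text{A}} = 2400\ \Omega \]

of total wiring and device resistance before the loop can no longer sustain its nominal current.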
https://en.wikipedia.org/wiki/Category%20utility
Category utility is a measure of "category goodness" defined by Gluck and Corter (1985) and Corter and Gluck (1992). It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" and the "collocation index". It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, is also available. Probability-theoretic definition of category utility The probability-theoretic definition of category utility is as follows:

$CU(C,F) = \frac{1}{p} \sum_{c_j \in C} p(c_j) \left[ \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik} \mid c_j)^2 - \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik})^2 \right]$

where $F = \{f_i\}$, $i = 1, \ldots, n$ is a size-$n$ set of $m$-ary features, and $C = \{c_j\}$, $j = 1, \ldots, p$ is a set of $p$ categories. The term $p(f_{ik})$ designates the marginal probability that feature $f_i$ takes on value $k$, and the term $p(f_{ik} \mid c_j)$ designates the category-conditional probability that feature $f_i$ takes on value $k$ given that the object in question belongs to category $c_j$. The motivation and development of this expression for category utility, and the role of the multiplicand $\frac{1}{p}$ as a crude overfitting control, is given in the above sources. Loosely speaking, the term $\sum_{c_j \in C} p(c_j) \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik} \mid c_j)^2$ is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while $\sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik})^2$ is the expected number of attribute values that can be correctly guessed by an observer using the same strategy but without any knowledge of the category label
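The definition can be evaluated directly from data; here is a small sketch for nominal features (my own implementation of the expression above, with a toy dataset):

from collections import Counter

def category_utility(data, labels):
    # data: list of equal-length tuples of nominal feature values
    # labels: category label per instance
    n, n_feats = len(data), len(data[0])
    cats = Counter(labels)
    # sum over features and values of p(f_ik)^2 (the "no labels" baseline)
    baseline = sum(
        sum((c / n) ** 2 for c in Counter(row[i] for row in data).values())
        for i in range(n_feats)
    )
    cu = 0.0
    for cat, size in cats.items():
        rows = [row for row, lab in zip(data, labels) if lab == cat]
        conditional = sum(
            sum((c / size) ** 2 for c in Counter(r[i] for r in rows).values())
            for i in range(n_feats)
        )
        cu += (size / n) * (conditional - baseline)
    return cu / len(cats)    # the 1/p multiplicand

data = [("red", "round"), ("red", "round"), ("blue", "square"), ("blue", "square")]
print(category_utility(data, [0, 0, 1, 1]))   # 0.5: a perfectly predictive split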
https://en.wikipedia.org/wiki/Voiceless%20upper-pharyngeal%20plosive
The voiceless upper-pharyngeal plosive or stop is a rare consonant. Pharyngeal consonants are typically pronounced at two regions of the pharynx, upper and lower. The lower region is epiglottal, so the upper region is often abbreviated as merely 'pharyngeal'. Among widespread speech sounds in the world's languages, the upper pharynx produces a voiceless fricative [ħ] and a voiced sound that ranges from fricative to (more commonly) approximant, [ʕ]. The epiglottal region produces the plosive [ʡ] as well as sounds that range from fricative to trill, [ʜ] and [ʢ]. Because the latter pair is most often trilled and rarely simply fricative, these consonants have been classified together as simply pharyngeal, and distinguished as plosive, fricative/approximant and trill. No language is known to have a phonemic upper pharyngeal plosive. The Nǁng language (Nǀuu) is claimed to have an upper pharyngeal place of articulation among its click consonants: clicks in Nǁng have a rear closure that is said to vary between uvular or upper pharyngeal, depending on the click type. However, if the place were truly pharyngeal, they could not occur as nasal clicks, which they do. Otherwise upper pharyngeal plosives are only known from disordered speech. They appear for example in the speech of some children with cleft palate, as compensatory backing of stops to avoid nasalizing them. The extIPA provides the letter ⟨ꞯ⟩ (a small-capital ⟨Q⟩) to transcribe such a voiceless upper-pharyngeal plosive. See also Voiced upper-pharyngeal plosive
https://en.wikipedia.org/wiki/DECpc%20AXP%20150
The DECpc AXP 150, code-named Jensen, is an entry-level workstation developed and manufactured by Digital Equipment Corporation. Introduced on 25 May 1993, the DECpc AXP 150 was the first Alpha-based system to support the Windows NT operating system and the basis for the DEC 2000 AXP entry-level servers. It was discontinued on 28 February 1994, succeeded by the entry-level Multia and the entry-level and mid-range models of the AlphaStation family. The charter for the development and production of the DEC 2000 AXP was held by Digital's Entry Level Solutions Business, based in Ayr, Scotland. DEC 2000 AXP The DEC 2000 AXP family are entry-level servers based on the DECpc AXP 150. Differences were support for Digital's OpenVMS AXP and OSF/1 AXP (later renamed to Digital UNIX) operating systems and support for a VT-series terminal or equivalent. The DEC 2000 AXP family was succeeded by the AlphaServer 1000. There are two models in the DEC 2000 AXP family: the Model 300 and Model 500. Model 300 The DEC 2000 Model 300 AXP, code-named Jensen, is identical to the DECpc AXP 150 but was intended to be used as a server. Some options available for the DECpc AXP 150 are not available for the Model 300. It was introduced on 12 October 1993, and was discontinued on 28 February 1994. Model 500 The DEC 2000 Model 500 AXP, code-named Culzean, was marketed as a server running either Windows NT Advanced Server, DEC OSF/1 AXP or OpenVMS. Introduced on 15 November 1993, the Model 500 was similar to the Model 300, but housed in a larger pedestal-type enclosure with space for up to twelve 3.5 in hard disks and incorporating an Intelligent Front Panel (IFP) for system monitoring and control. The Model 500 also supported either two 415 W power supplies or one power supply plus a battery Standby Power Supply (SPS). It was discontinued on 30 December 1994. Description The DECpc AXP 150 systems used a 150 MHz DECchip 21064 microprocessor with an external 512 KB B-cache (L2 cache), wh
https://en.wikipedia.org/wiki/Noga%20Alon
Noga Alon (; born 1956) is an Israeli mathematician and a professor of mathematics at Princeton University noted for his contributions to combinatorics and theoretical computer science, having authored hundreds of papers. Education and career Alon was born in 1956 in Haifa, where he graduated from the Hebrew Reali School in 1974. He graduated summa cum laude from the Technion – Israel Institute of Technology in 1979, earned a master's degree in mathematics in 1980 from Tel Aviv University, and received his Ph.D. in Mathematics at the Hebrew University of Jerusalem in 1983 with the dissertation Extremal Problems in Combinatorics supervised by Micha Perles. After postdoctoral research at the Massachusetts Institute of Technology he returned to Tel Aviv University as a senior lecturer in 1985, obtained a permanent position as an associate professor there in 1986, and was promoted to full professor in 1988. He was head of the School of Mathematical Science from 1999 to 2001, and was given the Florence and Ted Baumritter Combinatorics and Computer Science Chair, before retiring as professor emeritus and moving to Princeton University in 2018. He was editor-in-chief of the journal Random Structures and Algorithms beginning in 2008. Research Alon has published more than five hundred research papers, mostly in combinatorics and in theoretical computer science, and one book, on the probabilistic method. He has also published under the pseudonym "A. Nilli", based on the name of his daughter Nilli Alon. His research contributions include the combinatorial Nullstellensatz, an algebraic tool with many applications in combinatorics; color-coding, a technique for fixed-parameter tractability of pattern-matching algorithms in graphs; and the Alon–Boppana bound in spectral graph theory. Selected works Book The Probabilistic Method, with Joel Spencer, Wiley, 1992. 2nd ed., 2000; 3rd ed., 2008; 4th ed., 2016. Research articles Previously in the ACM Symposium on Theory o
https://en.wikipedia.org/wiki/Genetic%20architecture
Genetic architecture is the underlying genetic basis of a phenotypic trait and its variational properties. Phenotypic variation for quantitative traits is, at the most basic level, the result of the segregation of alleles at quantitative trait loci (QTL). Environmental factors and other external influences can also play a role in phenotypic variation. Genetic architecture is a broad term that can be described for any given individual based on information regarding gene and allele number, the distribution of allelic and mutational effects, and patterns of pleiotropy, dominance, and epistasis. There are several different experimental views of genetic architecture. Some researchers recognize that the interplay of various genetic mechanisms is highly complex, but believe that these mechanisms can be averaged and treated, more or less, as statistical noise. Other researchers claim that every gene interaction is significant and that it is necessary to measure and model these individual systemic influences on evolutionary genetics. Applications Genetic architecture can be studied and applied at many different levels. At the most basic, individual level, genetic architecture describes the genetic basis for differences between individuals, species, and populations. This can include, among other details, how many genes are involved in a specific phenotype and how gene interactions, such as epistasis, influence that phenotype. Line-cross analyses and QTL analyses can be used to study these differences. This is perhaps the most common way that genetic architecture is studied, and though it is useful for supplying pieces of information, it does not generally provide a complete picture of the genetic architecture as a whole. Genetic architecture can also be used to discuss the evolution of populations. Classical quantitative genetics models, such as that developed by R.A. Fisher, are based on analyses of phenotype in terms of the contributions from different g
https://en.wikipedia.org/wiki/Linx%20S.A.
Linx S.A. is a Brazilian management software company and the largest software house in retail management systems in Latin America. According to the American technology consulting firm IDC, Linx holds 40.2% of the retail management software market in Brazil. In 2007, Linx was listed for the third time in the Valor 1000 annual report, which lists the 1000 biggest Brazilian companies. In August 2020, payment processor StoneCo merged with Linx's operations in a deal worth $1.1 billion. History Linx was founded in 1985 by São Paulo native Nércio Fernandes. At age 22, Nércio dropped out of college, leaving his studies in civil engineering to invest in his own business in the micro-computing field. He and his partners gave the company its first name: Microserv Comércio & Consultoria Ltda. A few years after its inauguration, the company was serving small businesses in the regions of Brás and Bom Retiro in São Paulo, when MicroMalhas was created – software geared towards fashion retail. In 1990, the software was renamed Linx, and later became Linx ERP – the group's flagship software, geared towards different retail segments. Linx Logística, a unit specialized in internal logistics, was created in 2000. With a more complex structure, and with the parallel operation of Linx Sistemas, Linx Logística and Linx Telecom – created to focus on the outsourcing of connectivity and telecommunication options for retail – the creation of a holding company was necessary in order to unify the business units. In 2004, LMI S.A. emerged, and was later renamed Linx S.A. In 2009, Linx received an investment from BNDESPar to fund acquisitions aimed at expanding the group's operations. With the group's expansion, a business division called Linx Prevenção de Perdas was created to minimize losses related to materials, opportunities, time and capital in several of its clients’ market verticals. Linx relocated in 2011 to a 10-story commercial building in the city of São Paulo. Als
https://en.wikipedia.org/wiki/Gilbert%20Morgan%20Smith%20Medal
The Gilbert Morgan Smith Medal is awarded by the U.S. National Academy of Sciences "in recognition of excellence in published research on marine or freshwater algae." It has been awarded every three years since 1979. List of Gilbert Morgan Smith Medal winners Source: NAS Mark E. Hay (2018) For developing algae as the major model for marine chemical ecology, and for elucidating how chemical cues and signals from algae structure marine and aquatic populations, communities, and ecosystems. Takao Kondo (2015) For demonstrating the occurrence of circadian clocks in prokaryotes, leading through genetic dissection to the discovery of the central bacterial clock genes, kaiABC, and to a new way of thinking about algal ecology. John B. Waterbury (2012) For the discovery and characterization of planktonic marine cyanobacteria, and viruses that infect them, setting in motion a paradigm shift in our understanding of ocean productivity, ecology, and biogeochemical cycles. Arthur R. Grossman (2009) For pioneering creative and comprehensive research on algae and cyanobacteria, elucidating molecular mechanisms by which they adapt to changes in light color and to nutrient stress. Sabeeha Merchant (2006) For her pioneering discoveries in the assembly of metalloenzymes and the regulated biogenesis of major complexes of the photosynthetic apparatus in green algae. Sarah P. Gibbs (2003) For her revolutionary concepts and evidence that constitute the foundation for the current theory of chloroplast evolution and the phylogenetic relationships of algae and plants. Shirley W. Jeffrey (2000) For her discovery and characterization of major algal pigments, their quantitative application in oceanography, and for providing phytoplankton cultures for international research. Isabella A. Abbott (1997) For her comprehensive investigations of the biogeography and systematics of marine algae in the eastern and central Pacific, with emphasis on Rhodophyta, the red algae. Elisabeth Ga
https://en.wikipedia.org/wiki/NextGenTel
NextGenTel is a Norwegian telecommunications company located in Bergen (HQ), Trondheim, Oslo, Stavanger, and Kristiansand. It offers ADSL, SHDSL, ADSL2+, VDSL2, IPTV, IP telephony, mobile phone subscriptions, and WiMAX solutions. With 140,000 customers, it is Norway's third-largest Internet service provider, behind Telenor and Altibox. The company was established on March 1, 2000, and delivers Internet and TV solutions (triple play) over fiber and wireless broadband to private households and housing associations. NextGenTel was wholly owned by the Swedish-Finnish telecommunications company TeliaSonera until December 2012, when the company was acquired by Telio. Since 2019, NextGenTel has been a fully owned subsidiary of Telecom 3 Holding AS.
https://en.wikipedia.org/wiki/N%2CN%27-Dicyclohexylcarbodiimide
N,N′-Dicyclohexylcarbodiimide (DCC) is an organic compound with the chemical formula (C6H11N)2C. It is a waxy white solid with a sweet odor. Its primary use is to couple amino acids during artificial peptide synthesis. The low melting point of this material allows it to be melted for easy handling. It is highly soluble in dichloromethane, tetrahydrofuran, acetonitrile and dimethylformamide, but insoluble in water. Structure and spectroscopy The C−N=C=N−C core of carbodiimides is linear, being related to the structure of allene. The molecule has idealized C2 symmetry. The N=C=N moiety gives a characteristic IR signature at 2117 cm−1. The 15N NMR spectrum shows a characteristic shift of 275 ppm upfield of nitric acid, and the 13C NMR spectrum features a peak at about 139 ppm downfield from TMS. Preparation DCC is produced by the decarboxylation of cyclohexyl isocyanate using a phosphine oxide catalyst: 2 C6H11NCO → (C6H11N)2C + CO2 Alternative catalysts for this conversion include the highly nucleophilic OP(MeNCH2CH2)3N. Other methods Of academic interest, palladium acetate, iodine, and oxygen can be used to couple cyclohexylamine and cyclohexyl isocyanide. Yields of up to 67% have been achieved using this route: C6H11NC + C6H11NH2 + O2 → (C6H11N)2C + H2O DCC has also been prepared from dicyclohexylurea using a phase-transfer catalyst. The disubstituted urea, an arenesulfonyl chloride, and potassium carbonate react in toluene in the presence of benzyltriethylammonium chloride to give DCC in 50% yield. Reactions Amide, peptide, and ester formation DCC is a dehydrating agent for the preparation of amides, ketones, and nitriles. In these reactions, DCC hydrates to form dicyclohexylurea (DCU), a compound that is nearly insoluble in most organic solvents and insoluble in water. The majority of the DCU is thus readily removed by filtration, although the last traces can be difficult to eliminate from non-polar products. DCC
https://en.wikipedia.org/wiki/List%20of%20long%20non-coding%20RNA%20databases
This is a list of long noncoding RNA databases, which provide information about lncRNAs.
https://en.wikipedia.org/wiki/Airport%20privacy
Airport privacy involves the right of personal privacy for passengers when it comes to screening procedures, surveillance, and personal data being stored at airports. The topic intertwines airport security measures and personal privacy, particularly the advancement of security measures following the 9/11 attacks in the United States and other global terrorist attacks. Several terrorist attacks, such as 9/11, have led airports all over the world to adopt new technology such as body and baggage screening, detection dogs, facial recognition, and the use of biometrics in electronic passports. Amidst the introduction of new technology and security measures in airports and the growing rates of travelers, there has been a rise of risk and concern over privacy. History of airport policies Before the 9/11 terrorist attacks, the only security measure in place in U.S. airports was the metal detector. Because metal detectors can detect only metal weapons, they were ineffective against non-metallic threats such as liquids, sharp objects, or explosives. After the 9/11 terrorist attacks in the United States, the Transportation Security Administration (TSA) increased security measures throughout airports. Policies were made to prohibit the carry-on of liquids, sharp objects, and explosives. Airlines instructed passengers to arrive two hours before their flight's departure if traveling domestically and three hours if traveling internationally. After passing through screening, passengers were selected at random for additional screening, including bag checks. After an incident in which a passenger carried a bomb in his shoe, security screeners asked passengers to remove their shoes when passing through checkpoints. In February 2002, the TSA officially took over the responsibility for airport security. In 2009, airport security measures were once again shaken when a passenger, now commonly known as the "underwear bomber," smuggled a bomb into the airport facility in his under
https://en.wikipedia.org/wiki/Keyswitch
A keyswitch is a type of small switch used for keys on keyboards. Key switch is also used to describe a switch operated by a key, usually used in burglar alarm circuits. A car ignition is also a switch of this type.
https://en.wikipedia.org/wiki/Kautsky%20effect
In biophysics, the Kautsky effect (also fluorescence transient, fluorescence induction or fluorescence decay) is a phenomenon consisting of a typical variation in the behavior of plant fluorescence when exposed to light. It was discovered in 1931 by H. Kautsky and A. Hirsch. When dark-adapted photosynthesising cells are illuminated with continuous light, chlorophyll fluorescence displays characteristic changes in intensity accompanying the induction of photosynthetic activity. Application of the Kautsky effect The quantum yield of photosynthesis, which also measures the photochemical quenching of fluorescence, is calculated through the following equation: Φp = (Fm − F0)/Fm = Fv/Fm F0 is the minimal fluorescence intensity, measured with a short light flash that is too weak to drive photochemistry but strong enough to induce fluorescence. Fm is the maximum fluorescence obtained from a sample, measured as the highest fluorescence intensity after a saturating flash. The difference between the two measured values is the variable fluorescence Fv. Explanation When a sample (leaf or algal suspension) is illuminated, the fluorescence intensity increases with a time constant in the microsecond or millisecond range. After a few seconds the intensity decreases and reaches a steady-state level. The initial rise of the fluorescence intensity is attributed to the progressive saturation of the reaction centers of photosystem II (PSII). Therefore, photochemical quenching decreases with the time of illumination, with a corresponding increase of the fluorescence intensity. The slow decrease of the fluorescence intensity at later times is caused, in addition to other processes, by non-photochemical quenching. Non-photochemical quenching is a protection mechanism in photosynthetic organisms, as they have to avoid the adverse effects of excess light. Which components contribute, and in which quantities, remains an active but controversial area of research. It is known that carotenoid
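As a quick numerical illustration of the formula above, the following Python snippet computes Φp from a pair of fluorescence readings; the F0 and Fm values are made up for the example, not measured data.

```python
# Phi_p = (Fm - F0)/Fm = Fv/Fm: the maximum quantum yield of PSII
# photochemistry estimated from chlorophyll fluorescence.
def quantum_yield(f0: float, fm: float) -> float:
    """Return Fv/Fm given minimal (F0) and maximal (Fm) fluorescence."""
    if fm <= f0:
        raise ValueError("Fm must exceed F0")
    return (fm - f0) / fm

f0, fm = 300.0, 1500.0  # illustrative values in arbitrary instrument units
print(f"Fv/Fm = {quantum_yield(f0, fm):.2f}")  # 0.80, in the range typical of unstressed leaves
```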
https://en.wikipedia.org/wiki/Dyadic%20distribution
A dyadic (or 2-adic) distribution is a specific type of discrete probability distribution that is of some theoretical importance in data compression. Definition A dyadic distribution is a probability distribution whose probability mass function takes the form f(u) = 2^(−ν), where ν is some whole number that may vary with u; in other words, every outcome has a probability that is a negative integer power of 2. It is possible to find a binary prefix code for this distribution which has an average code length equal to the entropy.
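The equality of average code length and entropy can be checked directly with a small script. The sketch below builds a Huffman code for a hypothetical dyadic distribution; for such distributions the optimal codeword lengths are exactly −log2(p), so the average length meets the entropy with no slack.

```python
# Sketch: on a dyadic distribution, an optimal prefix (Huffman) code has
# average length exactly equal to the entropy. The distribution below is
# a made-up example whose probabilities are all powers of 2.
import heapq
from math import log2

probs = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}   # dyadic probabilities

# Build a Huffman code; each heap entry is (probability, tiebreak, codes).
heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)                 # two least-likely subtrees
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + w for s, w in c1.items()}    # prepend branch bits
    merged.update({s: "1" + w for s, w in c2.items()})
    heapq.heappush(heap, (p1 + p2, counter, merged))
    counter += 1
code = heap[0][2]

avg_len = sum(probs[s] * len(w) for s, w in code.items())
entropy = -sum(p * log2(p) for p in probs.values())
print(code)              # e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
print(avg_len, entropy)  # both 1.75 bits: average length equals the entropy
```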
https://en.wikipedia.org/wiki/Dead-end%20elimination
The dead-end elimination algorithm (DEE) is a method for minimizing a function over a discrete set of independent variables. The basic idea is to identify "dead ends", i.e., combinations of variables that are not necessary to define a global minimum, because there is always a way of replacing such a combination by a better or equivalent one. We can then refrain from searching such combinations further. Hence, dead-end elimination is a mirror image of dynamic programming, in which "good" combinations are identified and explored further. Although the method itself is general, it has been developed and applied mainly to the problems of predicting and designing the structures of proteins. It is closely related to the notion of dominance in optimization, also known as substitutability in a constraint satisfaction problem. The original description and proof of the dead-end elimination theorem were given by Desmet and co-workers in 1992. Basic requirements An effective DEE implementation requires four pieces of information: A well-defined finite set of discrete independent variables A precomputed numerical value (considered the "energy") associated with each element in the set of variables (and possibly with their pairs, triples, etc.) A criterion or criteria for determining when an element is a "dead end", that is, when it cannot possibly be a member of the solution set An objective function (considered the "energy function") to be minimized Note that the criteria can easily be reversed to identify the maximum of a given function as well. Applications to protein structure prediction Dead-end elimination has been used effectively to predict the structure of side chains on a given protein backbone structure by minimizing an energy function E. The dihedral angle search space of the side chains is restricted to a discrete set of rotamers for each amino acid position in the protein (which has a fixed length). The original DEE description included criteria for the elimination of single rot
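The elimination criterion for single rotamers can be put into code compactly. The sketch below is a minimal, illustrative implementation of Desmet-style "singles" elimination over precomputed self and pairwise energies; the nested-dict data layout and the toy energies in the demo are assumptions for the example, not a real rotamer library or force field.

```python
# Singles dead-end elimination: rotamer r at position i is a dead end if
# some alternative rotamer t at i satisfies
#   E_self(i,r) + sum_j min_s E_pair(i,r,j,s)
#     > E_self(i,t) + sum_j max_s E_pair(i,t,j,s),
# i.e. r's best case is still worse than t's worst case.

def dee_singles(self_e, pair_e, rotamers):
    """self_e[i][r]: self-energy; pair_e[i][r][j][s]: pairwise energy;
    rotamers[i]: list of rotamer indices still alive at position i."""
    positions = list(rotamers)
    pruned = True
    while pruned:                  # iterate: eliminations enable further ones
        pruned = False
        for i in positions:
            for r in list(rotamers[i]):
                best_r = self_e[i][r] + sum(
                    min(pair_e[i][r][j][s] for s in rotamers[j])
                    for j in positions if j != i)
                for t in rotamers[i]:
                    if t == r:
                        continue
                    worst_t = self_e[i][t] + sum(
                        max(pair_e[i][t][j][s] for s in rotamers[j])
                        for j in positions if j != i)
                    if best_r > worst_t:   # r can never beat t: eliminate r
                        rotamers[i].remove(r)
                        pruned = True
                        break
    return rotamers

# Toy demo: two positions, two rotamers each, invented energies.
rot = {0: [0, 1], 1: [0, 1]}
se = {0: {0: 0.0, 1: 5.0}, 1: {0: 0.0, 1: 0.0}}
pe = {0: {0: {1: {0: 0.0, 1: 0.0}}, 1: {1: {0: 0.0, 1: 0.0}}},
      1: {0: {0: {0: 0.0, 1: 0.0}}, 1: {0: {0: 0.0, 1: 0.0}}}}
print(dee_singles(se, pe, rot))    # rotamer 1 at position 0 is eliminated
```

Because each elimination shrinks the rotamer sets that the min/max bounds range over, the outer loop repeats until no further rotamer can be pruned.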
https://en.wikipedia.org/wiki/Edward%20Yourdon
Edward Nash Yourdon (April 30, 1944 – January 20, 2016) was an American software engineer, computer consultant, author and lecturer, and software engineering methodology pioneer. He was one of the lead developers of the structured analysis techniques of the 1970s and a co-developer of both the Yourdon/Whitehead method for object-oriented analysis/design in the late 1980s and the Coad/Yourdon methodology for object-oriented analysis/design in the 1990s. Biography Yourdon obtained his B.S. in applied mathematics from the Massachusetts Institute of Technology (MIT) in 1965, and did graduate work in electrical engineering and computer science at MIT and the Polytechnic Institute of New York. In 1964 Yourdon started working at Digital Equipment Corporation, developing FORTRAN programs for the PDP-5 minicomputer and later assembler for the PDP-8. In the late 1960s and early 1970s he worked at a small consulting firm and as an independent consultant. In 1974 Yourdon founded his own consulting firm, YOURDON Inc., to provide educational, publishing, and consulting services. After he sold this firm in 1986, he served on the boards of multiple IT consultancy corporations and was an advisor on several research projects in the software industry throughout the 1990s. In June 1997, Yourdon was inducted into the Computer Hall of Fame, along with such notables as Charles Babbage, James Martin, Grace Hopper, and Gerald Weinberg. In December 1999 Crosstalk: The Journal of Defense Software Engineering named him one of the ten most influential people in the software field. In the late 1990s, Yourdon became the center of controversy over his belief that Y2K-related computer problems could result in severe software failures that would culminate in widespread social collapse. Due to the efforts of Yourdon and thousands of dedicated technologists, developers and project managers, these potential critical system failure points were successfully remediated, thus avoiding the problems Yourdon and
https://en.wikipedia.org/wiki/Slipstream%20%28computer%20science%29
A slipstream processor is an architecture designed to reduce the length of a running program by removing non-essential instructions. It is a form of speculative computing. Non-essential instructions include such things as results that are not written to memory, or compare operations that will always return true. Also, since statistically most branch instructions are taken, it makes sense to assume this will always be the case. Because of the speculation involved, slipstream processors are generally described as having two parallel executing streams. One is an optimized, faster A-stream (advanced stream) executing the reduced code; the other is the slower R-stream (redundant stream), which runs behind the A-stream and executes the full code. The R-stream runs faster than it would as a single stream because data is prefetched by the A-stream, effectively hiding memory latency, and because the A-stream assists with branch prediction. Both streams complete faster than a single stream would. As of 2005, theoretical studies have shown that this configuration can lead to a speedup of around 20%. The main problem with this approach is accuracy: as the A-stream becomes more accurate and less speculative, the overall system runs slower. Furthermore, a sufficiently large distance is needed between the A-stream and the R-stream so that cache misses generated by the A-stream do not slow down the R-stream.
https://en.wikipedia.org/wiki/Detection%20error%20tradeoff
A detection error tradeoff (DET) graph is a graphical plot of error rates for binary classification systems, plotting the false rejection rate vs. the false acceptance rate. The x- and y-axes are scaled non-linearly by their standard normal deviates (or just by logarithmic transformation), yielding tradeoff curves that are more linear than ROC curves and that use most of the image area to highlight the differences of importance in the critical operating region. Axis warping The normal deviate mapping (or normal quantile function, or inverse normal cumulative distribution) is given by the probit function, so that the horizontal axis is x = probit(Pfa) and the vertical is y = probit(Pfr), where Pfa and Pfr are the false-accept and false-reject rates. The probit mapping maps probabilities from the unit interval [0, 1] to the extended real line [−∞, +∞]. Since this makes the axes infinitely long, one has to confine the plot to some finite rectangle of interest. See also Constant false alarm rate Detection theory False alarm Receiver operating characteristic
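A DET plot is straightforward to produce once the two error rates have been swept over a decision threshold. The sketch below uses synthetic Gaussian match/non-match scores (an assumption made purely for illustration), for which the probit-warped curve comes out as a straight line.

```python
# Sketch: plotting a DET curve by warping error rates through the probit
# (inverse normal CDF). Scores are synthetic Gaussians, not a real system.
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 1.0, 5000)    # scores for true matches
impostor = rng.normal(0.0, 1.0, 5000)   # scores for non-matches

thresholds = np.linspace(-4, 6, 200)
p_fr = [(genuine < t).mean() for t in thresholds]    # false-reject rate
p_fa = [(impostor >= t).mean() for t in thresholds]  # false-accept rate

# Probit warp; clip away 0 and 1, where the probit is infinite.
eps = 1e-6
x = norm.ppf(np.clip(p_fa, eps, 1 - eps))
y = norm.ppf(np.clip(p_fr, eps, 1 - eps))

plt.plot(x, y)
plt.xlabel("false acceptance rate (probit scale)")
plt.ylabel("false rejection rate (probit scale)")
plt.title("DET curve")
plt.show()
```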
https://en.wikipedia.org/wiki/Enumeration%20algorithm
In computer science, an enumeration algorithm is an algorithm that enumerates the answers to a computational problem. Formally, such an algorithm applies to problems that take an input and produce a list of solutions, similarly to function problems. For each input, the enumeration algorithm must produce the list of all solutions, without duplicates, and then halt. The performance of an enumeration algorithm is measured in terms of the time required to produce the solutions, either in terms of the total time required to produce all solutions, or in terms of the maximal delay between two consecutive solutions together with a preprocessing time, counted as the time before outputting the first solution. This complexity can be expressed in terms of the size of the input, the size of each individual output, or the total size of the set of all outputs, similarly to what is done with output-sensitive algorithms. Formal definitions An enumeration problem is defined as a binary relation R over strings of an arbitrary alphabet Σ, i.e., R ⊆ Σ* × Σ*. An algorithm solves R if, for every input x, the algorithm produces the (possibly infinite) sequence z1, z2, ... such that the sequence has no duplicates and a string z appears in it if and only if (x, z) ∈ R. The algorithm should halt if the sequence is finite. Common complexity classes Enumeration problems have been studied in the context of computational complexity theory, and several complexity classes have been introduced for such problems. A very general such class is EnumP, the class of problems for which the correctness of a possible output can be checked in polynomial time in the input and output. Formally, for such a problem, there must exist an algorithm A which takes as input the problem input x and a candidate output y, and solves the decision problem of whether y is a correct output for the input x in polynomial time in x and y. For instance, this class contains all problems that amount to enumerating the witnesses of a problem in the class NP. Other classes that have been defined include the f
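The delay-based measure is easiest to see in a concrete enumerator. The sketch below uses the standard binary-partition (backtracking with a feasibility check) pattern on a toy problem, enumerating every index set whose total weight stays within a budget; because every node of the search tree still leads to at least one solution, the delay between consecutive outputs is polynomial. The problem and the weights are invented for the example.

```python
# Sketch: polynomial-delay enumeration by binary partition. At each item
# we branch on "skip" (always completable) and "take" (only if the
# partial solution stays feasible), so every branch reaches a solution.
from typing import Iterator

def enumerate_subsets(weights: list[int], budget: int) -> Iterator[list[int]]:
    def rec(i: int, chosen: list[int], used: int) -> Iterator[list[int]]:
        if i == len(weights):
            yield list(chosen)               # a complete solution
            return
        yield from rec(i + 1, chosen, used)  # branch 1: skip item i
        if used + weights[i] <= budget:      # branch 2: take item i if feasible
            chosen.append(i)
            yield from rec(i + 1, chosen, used + weights[i])
            chosen.pop()
    return rec(0, [], 0)

for s in enumerate_subsets([3, 5, 2], budget=6):
    print(s)   # only O(n) work happens between consecutive outputs
```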
https://en.wikipedia.org/wiki/Targeted%20covalent%20inhibitors
Targeted covalent inhibitors (TCIs) or targeted covalent drugs are rationally designed inhibitors that bind and then bond to their target proteins. These inhibitors possess a bond-forming functional group of low chemical reactivity that, following binding to the target protein, is positioned to react rapidly with a proximate nucleophilic residue at the target site to form a bond. Historical impact of covalent drugs Over the last 100 years, covalent drugs have made a major impact on human health and have been highly successful drugs for the pharmaceutical industry. These inhibitors react with their target proteins to form a covalent complex in which the protein has lost its function. The majority of these successful drugs, which include penicillin, omeprazole, clopidogrel, and aspirin, were discovered through serendipity in phenotypic screens. However, key changes in screening approaches, along with safety concerns, made the pharmaceutical industry reluctant to pursue covalent inhibitors in a systematic way (Liebler & Guengerich, 2005). Recently, there has been considerable interest in using rational drug design to create highly selective covalent inhibitors, called targeted covalent inhibitors. The first published example of a targeted covalent drug was for the EGFR kinase, but the approach has since broadened to other kinases and other protein families. Aside from small molecules, covalent probes are also being derived from peptides or proteins. By incorporating a reactive group into a binding peptide or protein via post-translational chemical modification or as an unnatural amino acid, a target protein can be conjugated specifically via a proximity-induced reaction. Advantages of covalent drugs Potency Covalent bonding can lead to potencies and ligand efficiencies that are either exceptionally high or, for irreversible covalent interactions, even essentially infinite. Covalent bonding thus allows high potency to be routinely achieved in compounds of low molecular mass, along with all th
https://en.wikipedia.org/wiki/Dilation%20%28operator%20theory%29
In operator theory, a dilation of an operator T on a Hilbert space H is an operator on a larger Hilbert space K whose restriction to H, composed with the orthogonal projection onto H, is T. More formally, let T be a bounded operator on some Hilbert space H, and H be a subspace of a larger Hilbert space H′. A bounded operator V on H′ is a dilation of T if P_H V|_H = T, where P_H is the orthogonal projection onto H. V is said to be a unitary dilation (respectively, normal, isometric, etc.) if V is unitary (respectively, normal, isometric, etc.). T is said to be a compression of V. If an operator T has a spectral set X, we say that V is a normal boundary dilation or a normal ∂X dilation if V is a normal dilation of T whose spectrum lies in the boundary ∂X. Some texts impose an additional condition, namely that a dilation satisfy the following (calculus) property: f(T) = P_H f(V)|_H, where f(T) is some specified functional calculus (for example, the polynomial or H∞ calculus). The utility of a dilation is that it allows the "lifting" of objects associated to T to the level of V, where the lifted objects may have nicer properties. See, for example, the commutant lifting theorem. Applications We can show that every contraction on a Hilbert space has a unitary dilation. A possible construction of this dilation is as follows. For a contraction T, the operator D_T = (I − T*T)^(1/2) is positive, where the continuous functional calculus is used to define the square root. The operator D_T is called the defect operator of T. Let V be the operator on H ⊕ H defined by the 2 × 2 operator matrix V = [ T D_T* ; −D_T T* ] (first row T, D_T*; second row −D_T, T*). V is clearly a dilation of T. Also, T(I − T*T) = (I − TT*)T and a limit argument imply T D_T = D_T* T. Using this one can show, by calculating directly, that V is unitary, therefore a unitary dilation of T. This operator V is sometimes called the Julia operator of T. Notice that when T is a real scalar, say T = cos θ, we have V = [ cos θ sin θ ; −sin θ cos θ ], which is just the unitary matrix describing a rotation by the angle θ. For this reason, the Julia operator V(T) is sometimes called the elementary rotation of T. We note here that in the above discussi
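The unitarity of the Julia operator is easy to verify numerically for a finite-dimensional contraction. The following sketch (an illustration added here, not part of the theory above) builds V for a random matrix contraction, using SciPy's sqrtm for the defect operators, and checks both the dilation property and unitarity.

```python
# Sketch: numerically verify that the Julia operator of a contraction is
# a unitary dilation, with the same sign convention as in the text.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2))
T = A / (2 * np.linalg.norm(A, 2))   # spectral norm 1/2, so T is a contraction

I = np.eye(2)
DT = sqrtm(I - T.conj().T @ T)       # defect operator D_T = (I - T*T)^(1/2)
DTs = sqrtm(I - T @ T.conj().T)      # defect operator D_{T*}

V = np.block([[T, DTs],
              [-DT, T.conj().T]])    # Julia operator on H ⊕ H

print(np.allclose(V.conj().T @ V, np.eye(4)))  # True: V is unitary
print(np.allclose(V[:2, :2], T))               # True: compressing V gives back T
```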
https://en.wikipedia.org/wiki/Rupert%20Riedl
Rupert Riedl (22 February 1925 – 18 September 2005) was an Austrian zoologist. Biography Riedl was a scientist with broad interests, whose influence in epistemology grounded in evolutionary theory was notable, although less so in English-speaking circles than in German- or even Spanish-speaking ones. His 1984 work, Biology of Knowledge: The Evolutionary Basis of Reason, examined cognitive abilities and the increasing complexity of biological diversification over the immense periods of evolutionary time. Riedl built upon the work of the Viennese school of thought initially typified by Konrad Lorenz, and continued in Vienna by Gerhard Vollmer and Franz Wuketits, and in Spain by Nicanor Ursura. Riedl was skeptical of German idealism, and was nourished by the tradition that produced the scientists and philosophers of science Ernst Mach, Ludwig Boltzmann, Erwin Schrödinger, Karl Popper, Hans Reichenbach and Sigmund Freud. Lorenz believed that the Kantian framework of cognitive concepts such as three-dimensional space and time was not fixed but built up over phylogenetic history, and potentially subject to further development. Lorenz's position, as expanded by Riedl, attempted to make it easier to assimilate non-commonsense areas of physics such as quantum field theory and string theory. Riedl drew clear distinctions between the deductive and inductive (non-conscious) cognitive processes characteristic of the left and right cerebral hemispheres. His analysis of what he called "the pitfalls of reason" deserves special attention. He, like Lorenz, was concerned with cognitive processes that might endanger the future of civilization. Riedl had less direct influence on academic philosophy than he had on the thinking of investigators in neuroscience such as Michael Gazzaniga, Antonio Damasio, and Vilayanur S. Ramachandran, whose investigations combine synergistically with those of more physiologically-oriented scientists such as Eric Kandel and Rodolfo Llinás, as well a
https://en.wikipedia.org/wiki/International%20Primatological%20Society
The International Primatological Society (IPS) is a scientific, educational, and charitable organization focused on non-human primates. It encourages scientific research in all areas of study, facilitates international cooperation among researchers, and promotes primate conservation. Together with the IUCN Species Survival Commission Primate Specialist Group (IUCN/SSC PSG) and Conservation International (CI), it jointly publishes a biannual report entitled Primates in Peril: The World's 25 Most Endangered Primates. See also International Journal of Primatology
https://en.wikipedia.org/wiki/The%20Animal%20in%20You
The Animal in You is a 1995 non-fiction book by Roy Feinson, which posits a biological basis for why people tend to exhibit personality traits similar to those of particular animal species. The book hypothesizes that, through the process of convergent evolution, people adopt a niche set of behaviors enabling them to cope with their particular social milieu, in the same way that individual animal species adapt to their environments. The book has been translated into ten languages, including Mandarin, Japanese, Czech, Hebrew and French, and has been featured on CNN, The Dr. Phil Show and CBS's The Talk. The Personality Test The Animal in You features a personality test of nine questions that resolves to one of 45 possible personality types. After readers answer the questions about their personality and physical attributes, the test returns a number corresponding to one of the 45 animal personality types, which appears in a look-up table. The underlying mechanisms for these types of tests are trivial for modern software-based Internet tests, but this is the first known example of a book-based test able to resolve more than 20 categories. The test is augmented by an interdependent weighting scheme wherein each question is assigned a different weight depending on how the other questions are answered. The Animals The animal personalities are broken down into the broad categories shown below. Carnivores Traits: Powerful, Optimistic, Territorial, Courageous, Fastidious, Athletic, Adventurous, Energetic, Attractive, Fun, Loving, Talented, Flamboyant, World Travelers, Loyal Otter Wolf Sea Lion Wild Dog Walrus Lion Shark Tiger Bear Fox Wild Cat Badger Weasel Dog. Herbivores Traits: Sociable, Hard Working, Sober, Friendly, Family Oriented, Organized, Reliable, Methodical, Conservative Baboon Elephant Bison Giraffe Cottontail Gorilla Deer Rhinoceros Hippo Sable Horse Sheep Mountain Goat Warthog Zebra. Rodents & Insectivores Traits: Small, Creative, Thr
https://en.wikipedia.org/wiki/Ascorbyl%20stearate
Ascorbyl stearate (C24H42O7) is an ester formed from ascorbic acid and stearic acid. In addition to its use as a source of vitamin C, it is used as an antioxidant food additive in margarine (E number E305). The USDA limits its use to 0.02% individually or in conjunction with other antioxidants. See also Ascorbyl palmitate Mineral ascorbates
https://en.wikipedia.org/wiki/L%27Ann%C3%A9e%20philologique
L'Année philologique (The Philological Year) is an index to scholarly work in fields related to the language, literature, history and culture of Ancient Greece and Rome. It is the standard bibliographical tool for research in classical studies. Published in print annually since 1928, with the first volume covering the years 1924–1926, it is now also available online by institutional or individual subscription. As of June 2014, the electronic version (APh Online) covers volume years 1924 through 2011. The editorial staff gathers each year's additions from 1,500 periodicals, with an additional 500 articles from collections. Overview L'Année philologique aims to be the most comprehensive international resource for Ancient Greece and Rome. Entries on journal articles are often accompanied by a very brief abstract that may be in a language other than that of the original. Abstracts are most often in French, English, German, Italian, or Spanish. No abstracts for books are provided, but entries on books include a listing of published reviews. Indexing often lags publication by two or three years. Because L'Année philologique is an index, not a collection, searches find terms only within the bibliographical entry, not the full text of the article (the "full text" search option refers to the contents of L'Année philologique). The electronic edition offers a variety of search parameters, with some idiosyncrasies. Since September 2018, the database has been available for library subscription via the commercial publisher Brepols Publishers. Between 2013 and 2018, access was provided by another publisher, EBSCO. L'Année philologique is published by the Société Internationale de Bibliographie Classique with support from the Centre National de la Recherche Scientifique (CNRS) of France and several other institutions. Editorial offices are located in Chapel Hill, North Carolina; Heidelberg, Germany; Lausanne, Switzerland; Genoa, Italy; and Granada, Spain. It was founded in Paris by th
https://en.wikipedia.org/wiki/Netgear%20WNR3500L
The WNR3500L (also known as the WNR3500U) is an 802.11b/g/n Wi-Fi router created by Netgear. It was officially launched in the autumn of 2009. The WNR3500L runs open-source Linux firmware and supports the installation of third-party packages such as DD-WRT and Tomato. Hardware Version 1: Broadcom BCM4718 453 MHz SoC 8 MB flash memory 64 MB RAM 32 kB instruction cache 32 kB data cache Three internal antennas 802.11 b/g/n wireless support One 10/100/1000 Mbit/s WAN port Four 10/100/1000 Mbit/s switched LAN ports Integrated USB 2.0 EHCI host port Compatible with Windows 7 Version 2: Broadcom BCM47186 500 MHz SoC 128 MB flash memory 128 MB RAM 32 kB instruction cache 32 kB data cache Two internal antennas 802.11 b/g/n wireless support One 10/100/1000 Mbit/s WAN port Four 10/100/1000 Mbit/s switched LAN ports Integrated USB 2.0 EHCI host port Compatible with Windows 7 Features There are several ways to identify the version, including a "v2" label on version 2 units. Version 1: Supports installation of Tomato firmware and DD-WRT; the manufacturer provides a custom version of OpenWrt, while the mainline version works only partially Supports Wi-Fi Protected Setup (WPS) Automatically detects ISP type, exposed host (DMZ), MAC address authentication, URL content filtering, logs and email alerts of internet activity Static & dynamic routing with TCP/IP, VPN pass-through (IPsec, L2TP), NAT, PPTP, PPPoE, DHCP (client & server) Supports IPv6, including automatic 6to4 tunnel (since firmware 1.2.2.30) Version 2: Supports installation of Tomato firmware (Shibby and Toastman varieties) and DD-WRT Open source features According to one analysis, installing DD-WRT reduced performance.
https://en.wikipedia.org/wiki/William%20E.%20Castle
William Ernest Castle (October 25, 1867 – June 3, 1962) was an early American geneticist. Early years William Ernest Castle was born on a farm in Ohio and took an early interest in natural history. He graduated in 1889 from Denison University in Granville, Ohio, a Baptist college that emphasized classics, and went on to become a teacher of Latin at Ottawa University in Ottawa, Kansas, where he published his first paper, on the flowering plants of the area. After three years of teaching, botany won out over Latin. Education Castle entered the senior class of Harvard University in 1892 and in 1893 took a second A.B. degree with honors. He was appointed laboratory assistant in zoology, and received an A.M. degree in 1894 and a Ph.D. in 1895. He then taught zoology at the University of Wisconsin–Madison and at Knox College in Galesburg, Illinois, each for a year. Harvard and Drosophila Castle returned to Harvard in 1897. His early work focused on embryology, but after the rediscovery of Mendelian genetics in 1900 he turned to mammalian genetics, especially that of the guinea pig. In 1903 Castle intervened in the debate on the mathematical foundations of Mendelian genetics. He corrected some tentative work of Udny Yule on breeding by deliberate selection and genetics. In so doing, he anticipated what has now become known as the Hardy–Weinberg law. Formulated in the terms "as soon as selection is arrested the race remains stable at the degree of purity then attained", it appeared in his paper of November that year. At Harvard, Charles W. Woodworth suggested to Castle that Drosophila might be used for genetical work. Castle was the first to use the fruit fly Drosophila melanogaster in genetics research, and it was his work that inspired T. H. Morgan to take up Drosophila, work that became the basis of Morgan's 1933 Nobel Prize. Bussey Institution In 1908 Castle moved from the Harvard Museum of Comparative Zoology to the Bussey Institution for Applied Biology. There his most famous PhD student was Sewall Wright wh
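Castle's observation, in modern Hardy–Weinberg terms, can be demonstrated in a few lines of code: once selection stops and mating is random, genotype frequencies lock in after a single generation. The starting frequencies below are arbitrary example numbers, not data from Castle's paper.

```python
# Illustrative sketch of the principle Castle anticipated (now the
# Hardy-Weinberg law): under random mating with no selection, genotype
# frequencies settle at p^2, 2pq, q^2 after one generation and then stay fixed.

def next_generation(aa: float, ab: float, bb: float) -> tuple[float, float, float]:
    """Random mating: next genotype frequencies from current allele frequencies."""
    p = aa + ab / 2          # frequency of allele A
    q = bb + ab / 2          # frequency of allele B
    return p * p, 2 * p * q, q * q

geno = (0.5, 0.2, 0.3)       # arbitrary non-equilibrium starting point
for gen in range(4):
    print(gen, tuple(round(g, 4) for g in geno))
    geno = next_generation(*geno)
# From generation 1 on, the frequencies no longer change: p=0.6, q=0.4
# gives (0.36, 0.48, 0.16) in every later generation.
```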
https://en.wikipedia.org/wiki/Deterministic%20rendezvous%20problem
The deterministic rendezvous problem is a problem in computer science and robotics in which two or more robots or players must find each other by following a predetermined set of instructions. The goal is for the robots to meet at a specific location, or rendezvous point, without knowing the location of the other robot or robots. In the deterministic rendezvous problem, each robot follows the same set of instructions, but each robot is assigned a unique label or identifier to differentiate it from the others [2]. This unique label is used to break the symmetry of the problem, as it allows the robots to distinguish themselves from each other and to follow the instructions in a specific order [3]. The deterministic rendezvous problem is typically solved with the robots acting synchronously, meaning that they all follow the instructions at the same time [4]. However, there are also non-synchronous versions of the problem, where the robots may act at different times or may have different internal clocks [5]. The deterministic rendezvous problem has applications in a wide range of fields, including robotics, distributed systems, and computer networks [6]. It is often used as a benchmark for evaluating the performance of algorithms and protocols for rendezvous and coordination in these fields [7]. Overview In the synchronous version of the deterministic rendezvous problem, both robots are initially placed at arbitrary nodes in a finite, connected, undirected graph. The size and structure of the graph are unknown to the robots. The information known by a robot is as follows: T, the number of time steps since it has been activated; d, the degree of the node currently occupied by the robot; and L, the label of the robot (typically taking the form of a bit string). To solve the deterministic rendezvous problem, both robots must be given a sequence of deterministic instructions which allow the robots to use their known information to find each other. The robots are co
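To make the label-based symmetry breaking concrete, here is a toy synchronous simulation on a ring. It assumes, purely for illustration and more strongly than the general problem allows, that both robots know the ring size n and carry distinct bit-string labels of equal length: for each label bit, a robot walks clockwise for n steps on a '1' and waits n steps on a '0', so in the first round where the labels differ the walking robot sweeps the whole ring past the parked one.

```python
# Toy synchronous rendezvous on an n-node ring (illustrative strategy,
# not an algorithm from the literature). Assumptions: both robots know n,
# and their labels are distinct bit strings of equal length.

def rendezvous(n: int, start: tuple[int, int], labels: tuple[str, str]) -> int:
    pos = list(start)
    t = 0
    for k in range(len(labels[0])):         # one n-step round per label bit
        for _ in range(n):
            for r in (0, 1):
                if labels[r][k] == "1":
                    pos[r] = (pos[r] + 1) % n   # walk clockwise on a '1' bit
            t += 1
            if pos[0] == pos[1]:
                return t                        # robots occupy the same node
    raise AssertionError("distinct equal-length labels guarantee a meeting")

print(rendezvous(n=7, start=(0, 3), labels=("1011", "1001")))  # 17
```

In the run shown, the labels first differ at their third bit, so the meeting happens during the third round: one robot stands still while the other circles the ring onto it.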
https://en.wikipedia.org/wiki/Allura%20Red%20AC
Allura Red AC is a red azo dye that goes by several names, including FD&C Red 40. It is used as a food dye and has the E number E129. It is usually supplied as its red sodium salt, but can also be used as the calcium and potassium salts. These salts are soluble in water. In solution, its maximum absorbance lies at about 504 nm. Allura Red AC (FD&C Red No. 40) is manufactured by coupling diazotized 5-amino-4-methoxy-2-toluenesulfonic acid with 6-hydroxy-2-naphthalenesulfonic acid in an azo coupling reaction. Use as a consumable coloring agent Allura Red AC is a popular dye used worldwide. Annual production in 1980 was greater than 2.3 million kilograms. It was originally introduced in the United States as a replacement for amaranth. The European Union approves Allura Red AC as a food colorant, but EU countries' local laws banning food colorants are preserved. In the United States, Allura Red AC is approved by the FDA for use in cosmetics, drugs, and food. When prepared as a lake pigment it is disclosed as Red 40 Lake or Red 40 Aluminum Lake. It is used in some tattoo inks and in many products, such as cotton candy, soft drinks, cherry-flavored products, children's medications, and dairy products. It is occasionally used to dye medicinal pills, such as the antihistamine fexofenadine, for purely aesthetic reasons. It is by far the most commonly used red dye in the United States, having completely replaced amaranth (Red 2) and also replaced erythrosine (Red 3) in most applications, due to the negative health effects of those two dyes. Studies on safety Allura Red AC has been heavily studied by food safety groups in North America and Europe, and remains in wide use. The UK's Food Standards Agency commissioned a study of six food dyes (tartrazine, Allura Red, Ponceau 4R, Quinoline Yellow, Sunset Yellow and carmoisine, dubbed the "Southampton 6") and sodium benzoate (a preservative) on children in the general population, who consumed them in beverages. The study found