https://en.wikipedia.org/wiki/Howard%20Johnson%20%28electrical%20engineer%29
Howard Johnson is an electrical engineer, known for his consulting work and widely referenced books on signal integrity, especially for high-speed electronic circuit design. He served as the chief technical editor for Fast Ethernet and Gigabit Ethernet standardization, and was recognized by the IEEE as an "Outstanding Contributor" to the IEEE P802.3z Gigabit Task Force. Johnson earned his Bachelor of Science in Electrical Engineering (1978), Master of Electrical Engineering (1979), and PhD (1982) from Rice University. His dissertation was titled The design of DFT algorithms.
Area of contribution
Johnson has significantly raised awareness of the analog effects at work in high-speed digital electronic systems. In modern digital systems, it is common for digital designs to be subject to analog effects, even if they operate at a relatively low clock frequency. Circuits operating at lower clock rates can behave as high-speed digital systems if there is sufficient high-frequency content in the signal edges (when transitioning between digital logic levels) relative to the distance traveled across a printed circuit board. As a result of improvements in semiconductor processes, the faster edge rates of even "low technology" electronic components can be sufficient to make a system effectively high speed, and thus subject to havoc caused by unanticipated analog effects. A good example is his matrix of rising edges resulting from different combinations of skin-effect and dielectric loss, which illustrates the PCB design problems one encounters at microwave frequencies. Johnson was also active in the development of two Institute of Electrical and Electronics Engineers (IEEE) standards that govern Ethernet: IEEE 802.3 Fast Ethernet and IEEE 802.3 Gigabit Ethernet.
Books and publications
Johnson has written three books: High-Speed Digital Design: A Handbook of Black Magic (1993), Fast Ethernet: Dawn of a New Network (1995), High-Speed Signal
https://en.wikipedia.org/wiki/Nonrecursive%20ordinal
In mathematics, particularly set theory, non-recursive ordinals are large countable ordinals greater than all the recursive ordinals, and therefore cannot be expressed using recursive ordinal notations.
The Church–Kleene ordinal and variants
The smallest non-recursive ordinal is the Church–Kleene ordinal, ω₁^CK, named after Alonzo Church and S. C. Kleene; its order type is the set of all recursive ordinals. Since the successor of a recursive ordinal is recursive, the Church–Kleene ordinal is a limit ordinal. It is also the smallest ordinal that is not hyperarithmetical, and the smallest admissible ordinal after ω (an ordinal α is called admissible if L_α ⊨ KP). The ω₁^CK-recursive subsets of ω are exactly the Δ¹₁ subsets of ω. The notation ω₁^CK is in reference to ω₁, the first uncountable ordinal, which is the set of all countable ordinals, analogously to how the Church–Kleene ordinal is the set of all recursive ordinals. Some old sources use ω₁ to denote the Church–Kleene ordinal. For a set x ⊆ ℕ, a set is x-computable if it is computable from a Turing machine with an oracle state that queries x. The relativized Church–Kleene ordinal ω₁^x is the supremum of the order types of x-computable relations. The Friedman–Jensen–Sacks theorem states that for every countable admissible ordinal α, there exists a set x such that α = ω₁^x. ω_ω^CK, first defined by Stephen G. Simpson, is an extension of the Church–Kleene ordinal. This is the smallest limit of admissible ordinals, yet this ordinal is not admissible. Alternatively, this is the smallest α such that L_α is a model of Π¹₁-comprehension.
Recursively large ordinals
The αth admissible ordinal is sometimes denoted by τ_α. Recursively "x" ordinals, where "x" typically represents a large cardinal property, are kinds of nonrecursive ordinals. An ordinal α is called recursively inaccessible if it is admissible and a limit of admissibles. Alternatively, α is recursively inaccessible iff α is the αth admissible ordinal, or iff L_α ⊨ KPi, an extension of Kripke–Platek set theory stating that each set is
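In standard notation, the two suprema just described are:

```latex
\omega_1^{\mathrm{CK}} = \sup\{\,\alpha \mid \alpha \text{ is the order type of a computable well-ordering of } \mathbb{N}\,\},
\qquad
\omega_1^{x} = \sup\{\,\alpha \mid \alpha \text{ is the order type of an } x\text{-computable well-ordering of } \mathbb{N}\,\}.
```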
https://en.wikipedia.org/wiki/Sven%20Erik%20J%C3%B8rgensen
Sven Erik Jørgensen (29 August 1934 – 5 March 2016) was an ecologist and chemist from Denmark.
Biography
He was also a well-known figure in biathlon.
Academic degrees and honors
In 1958, he was awarded a Master of Science in chemical engineering from the Technical University of Denmark, then a Doctor of Environmental Engineering (Karlsruhe Institute of Technology) and a Doctor of Science in ecological modelling (University of Copenhagen). He taught courses in ecological modelling in 32 countries. After his retirement, he became professor emeritus in environmental chemistry at the University of Copenhagen. He was an honorary doctor at Coimbra University, Portugal, and at the University of Dar es Salaam, Tanzania. He received several awards: the Ruđer Bošković award, the Prigogine Prize, the Blaise Pascal Medal, an Einstein professorship at the Chinese Academy of Sciences, and the Santa Chiara Prize for multidisciplinary teaching. In 2004, together with William J. Mitsch, he was awarded the Stockholm Water Prize.
Works
In 1975 he founded a journal, Ecological Modelling, and in 1978 he founded ISEM, the International Society of Ecological Modelling. He published 366 papers, of which 275 were in peer-reviewed international journals, and edited or authored 76 books, of which several have been translated into other languages (Chinese, Russian, Spanish, and Portuguese). In 2011, he authored a textbook in ecological modelling, “Fundamentals of Ecological Modelling”, which was published as a fourth edition together with Brian D. Fath of the Department of Biological Sciences, Towson University. It has been translated into Chinese and Russian (third edition). He was co-editor in chief of the "Encyclopedia of Ecology", published in 2008, and of the "Encyclopedia of Environmental Management", published in December 2012. He co-authored the textbook “Introduction to Systems Ecology”, published in English in 2012 and in Chinese in 2013. He was an editorial board member of 18 international journals in the
https://en.wikipedia.org/wiki/Husimi%20Q%20representation
The Husimi Q representation, introduced by Kôdi Husimi in 1940, is a quasiprobability distribution commonly used in quantum mechanics to represent the phase-space distribution of a quantum state such as light in the phase-space formulation. It is used in the field of quantum optics and particularly for tomographic purposes. It is also applied in the study of quantum effects in superconductors.
Definition and properties
The Husimi Q distribution (called the Q-function in the context of quantum optics) is one of the simplest quasiprobability distributions in phase space. It is constructed in such a way that observables written in anti-normal order follow the optical equivalence theorem. This means that it is essentially the density matrix put into normal order. This makes it relatively easy to calculate compared to other quasiprobability distributions, through the formula Q(α) = (1/π)⟨α|ρ|α⟩, which is proportional to the trace of the operator ρ|α⟩⟨α| involving the projection onto the coherent state |α⟩. It produces a pictorial representation of the state ρ to illustrate several of its mathematical properties. Its relative ease of calculation is related to its smoothness compared to other quasiprobability distributions. In fact, it can be understood as the Weierstrass transform of the Wigner quasiprobability distribution, i.e. a smoothing by a Gaussian filter. Such Gauss transforms being essentially invertible in the Fourier domain via the convolution theorem, Q provides an equivalent description of quantum mechanics in phase space to that furnished by the Wigner distribution. Alternatively, one can compute the Husimi Q distribution by taking the Segal–Bargmann transform of the wave function and then computing the associated probability density. Q is normalized to unity, and is non-negative definite and bounded: 0 ≤ Q(α) ≤ 1/π. Despite the fact that Q is non-negative definite and bounded like a standard joint probability distribution, this similarity may be misleading, because different coherent sta
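The formula Q(α) = (1/π)⟨α|ρ|α⟩ is straightforward to evaluate numerically. The following sketch (the truncation size and the test state are illustrative choices, not from the article) computes Q for a pure state expanded in a truncated Fock basis, using ⟨n|α⟩ = e^{−|α|²/2} αⁿ/√(n!):

```python
import numpy as np
from math import factorial

N = 60  # Fock-space truncation (illustrative choice)

def coherent_fock(beta, n_max=N):
    """Fock-basis amplitudes <n|beta> = e^{-|beta|^2/2} beta^n / sqrt(n!)."""
    n = np.arange(n_max)
    fact = np.array([factorial(k) for k in n], dtype=float)
    return np.exp(-abs(beta) ** 2 / 2) * beta ** n / np.sqrt(fact)

def husimi_q(psi, alpha):
    """Q(alpha) = |<alpha|psi>|^2 / pi for a pure state psi (Fock basis)."""
    overlap = np.vdot(coherent_fock(alpha, len(psi)), psi)  # <alpha|psi>
    return abs(overlap) ** 2 / np.pi

# For a coherent state |beta>, Q(alpha) = exp(-|alpha-beta|^2)/pi, so the
# peak value at alpha = beta is 1/pi -- consistent with the bound Q <= 1/pi.
beta = 1.2 + 0.5j
psi = coherent_fock(beta)
print(abs(husimi_q(psi, beta) - 1 / np.pi) < 1e-6)  # True
```

The non-negativity and the 1/π bound stated above follow directly from this form, since |⟨α|ψ⟩|² lies between 0 and 1.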
https://en.wikipedia.org/wiki/Middle%20genicular%20artery
The middle genicular artery (azygos articular artery) is a small branch of the popliteal artery. It supplies parts of the knee joint.
Structure
The middle genicular artery (MGA) arises from the anterolateral surface of the popliteal artery. This point of origin is distal to the superior genicular arteries, and equidistant between the medial condyle of the femur and the lateral condyle of the femur. As a normal variation, the MGA may emerge from the popliteal artery at a common point of origin shared with the superior lateral genicular artery, or the two vessels may arise at separate, distinct points. The angle at which the middle genicular artery leaves the popliteal artery varies with flexion and extension of the knee. It may form a near 90° angle when the knee is flexed, but an angle of only between 15° and 30° when the knee is extended. The diameter of the MGA is between , and its length between . It has two venae comitantes along its length. It pierces the oblique popliteal ligament and the joint capsule of the knee.
Function
The middle genicular artery supplies the anterior cruciate ligament and the posterior cruciate ligament. It also supplies the synovial membrane at the bottom of the knee.
Clinical significance
The middle genicular artery may be damaged during knee arthroscopy, particularly when using a posterior approach through the popliteal fossa. It may also be damaged in traumatic injuries to the knee, often caused by sports.
https://en.wikipedia.org/wiki/Caldanaerobacter
Caldanaerobacter is a Gram-positive or Gram-negative, strictly anaerobic genus of bacteria in the family Thermoanaerobacteraceae.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
See also
List of bacterial orders
List of bacterial genera
https://en.wikipedia.org/wiki/Meripilus%20sumstinei
Meripilus sumstinei, commonly known as the giant polypore or the black-staining polypore, is a species of fungus in the family Meripilaceae. Originally described in 1905 by William Alphonso Murrill as Grifola sumstinei, it was transferred to Meripilus in 1988. It is found in North America, where it grows in large clumps on the ground around the base of oak trees and tree stumps. The mushroom is edible.
https://en.wikipedia.org/wiki/Retinoblastoma%20protein
The retinoblastoma protein (protein name abbreviated Rb; gene name abbreviated Rb, RB or RB1) is a tumor suppressor protein that is dysfunctional in several major cancers. One function of pRb is to prevent excessive cell growth by inhibiting cell cycle progression until a cell is ready to divide. When the cell is ready to divide, pRb is phosphorylated, inactivating it, and the cell cycle is allowed to progress. It is also a recruiter of several chromatin remodeling enzymes such as methylases and acetylases. pRb belongs to the pocket protein family, whose members have a pocket for the functional binding of other proteins. Should an oncogenic protein, such as those produced by cells infected by high-risk types of human papillomavirus, bind and inactivate pRb, this can lead to cancer. The RB gene may have been responsible for the evolution of multicellularity in several lineages of life, including animals.
Name and genetics
In humans, the protein is encoded by the RB1 gene located on chromosome 13—more specifically, 13q14.1-q14.2. If both alleles of this gene are mutated in a retinal cell, the protein is inactivated and the cell grows uncontrollably, resulting in development of retinoblastoma cancer, hence the "RB" in the name 'pRb'. Thus most pRb knock-outs occur in retinal tissue when UV radiation-induced mutation inactivates all healthy copies of the gene, but pRb knock-out has also been documented in certain skin cancers in patients from New Zealand, where the amount of UV radiation is significantly higher. Two forms of retinoblastoma were noticed: a bilateral, familial form and a unilateral, sporadic form. Sufferers of the former were over six times more likely to develop other types of cancer later in life, compared to individuals with sporadic retinoblastoma. This highlighted the fact that mutated pRb could be inherited and lent support to the two-hit hypothesis. This states that only one working allele of a tumor suppressor gene is necessary for its funct
https://en.wikipedia.org/wiki/John%27s%20equation
John's equation is an ultrahyperbolic partial differential equation satisfied by the X-ray transform of a function. It is named after Fritz John. Given a function f : ℝⁿ → ℝ with compact support, the X-ray transform is the integral over all lines in ℝⁿ. We will parameterise the lines by pairs of points x, y (x ≠ y) on each line and define u as the ray transform u(x, y) = ∫ f(x + t(y − x)) dt, where the integral is taken over all t ∈ ℝ. Such functions u are characterized by John's equations ∂²u/∂x_i∂y_j − ∂²u/∂y_i∂x_j = 0, which was proved by Fritz John for dimension three and by Kurusa for higher dimensions. In three-dimensional x-ray computerized tomography John's equation can be solved to fill in missing data, for example where the data is obtained from a point source traversing a curve, typically a helix. More generally, an ultrahyperbolic partial differential equation (a term coined by Richard Courant) is a second-order partial differential equation of the form ∑ a_{ij} ∂²u/∂x_i∂x_j + (lower-order terms) = 0, where the quadratic form ∑ a_{ij} ξ_i ξ_j can be reduced by a linear change of variables to the form ξ₁² + ⋯ + ξ_n² − ξ_{n+1}² − ⋯ − ξ_{2n}². It is not possible to arbitrarily specify the value of the solution on a non-characteristic hypersurface. John's paper however does give examples of manifolds on which an arbitrary specification of u can be extended to a solution.
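John's mixed-derivative identity can be checked directly on a closed-form example. The ray transform of the Gaussian f(p) = exp(−|p|²) along the line through x and y is √π·exp(−d²)/|y − x|, with d the distance from the origin to that line; the sketch below (test point and integrand are illustrative choices, shown in the plane where the identity also holds) verifies that ∂²u/∂x₁∂y₂ − ∂²u/∂x₂∂y₁ vanishes:

```python
import sympy as sp

# Two points x = (x1, x2), y = (y1, y2) parameterising a line in the plane.
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
w1, w2 = y1 - x1, y2 - x2              # direction vector w = y - x
wlen2 = w1**2 + w2**2

# Squared distance from the origin to the line through x and y.
d2 = x1**2 + x2**2 - (x1*w1 + x2*w2)**2 / wlen2

# Closed-form ray transform of f(p) = exp(-|p|^2) along that line.
u = sp.sqrt(sp.pi) * sp.exp(-d2) / sp.sqrt(wlen2)

# John's equation: mixed second derivatives across the two points commute.
john = sp.diff(u, x1, y2) - sp.diff(u, x2, y1)

# Evaluate the residual at a generic point; it vanishes up to rounding.
val = float(john.subs({x1: 0.3, x2: -1.1, y1: 0.7, y2: 0.2}))
print(abs(val) < 1e-9)  # True
```

The identity holds in any dimension because ∂²u/∂x_i∂y_j = ∫ t(1 − t) (∂²f/∂p_i∂p_j)(x + t(y − x)) dt, which is symmetric in i and j.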
https://en.wikipedia.org/wiki/Socket%203
Socket 3 was a series of CPU sockets for various x86 microprocessors. It was sometimes found alongside a secondary socket designed for a math coprocessor chip, such as the 487. Socket 3 resulted from Intel's creation of lower voltage microprocessors. An upgrade to Socket 2, it rearranged the pin layout. Socket 3 is compatible with 168-pin socket CPUs. Socket 3 was a 237-pin low insertion force (LIF) or zero insertion force (ZIF) 19×19 pin grid array (PGA) socket suitable for the 3.3 V and 5 V, 25–50 MHz Intel 486 SX, 486 DX, 486 DX2, 486 DX4, 486 OverDrive and Pentium OverDrive processors as well as AMD Am486, Am5x86 and Cyrix Cx5x86 processors. See also List of Intel microprocessors List of AMD microprocessors
https://en.wikipedia.org/wiki/WS-Federation
WS-Federation (Web Services Federation) is an identity federation specification, developed by a group of companies: BEA Systems, BMC Software, CA Inc. (along with Layer 7 Technologies, now a part of CA Inc.), IBM, Microsoft, Novell, Hewlett Packard Enterprise, and VeriSign. Part of the larger Web Services Security framework, WS-Federation defines mechanisms for allowing different security realms to broker information on identities, identity attributes and authentication. WS-Federation focuses on federated identity and trusting authentication tokens across different realms, whereas privileged password management is concerned with the security, control, and audit of high-risk account passwords within an IT environment.
Associated specifications
The following draft specifications are associated with WS-Security:
WS-SecureConversation
WS-Federation
WS-Authorization
WS-Policy
WS-Trust
WS-Privacy
See also
List of Web service specifications
Web Services
SAML
XACML
Liberty Alliance
OpenID
https://en.wikipedia.org/wiki/Thermoacoustic%20heat%20engine
Thermoacoustic engines (sometimes called "TA engines") are thermoacoustic devices which use high-amplitude sound waves to pump heat from one place to another (this requires work, which is provided by the loudspeaker) or use a heat difference to produce work in the form of sound waves (these waves can then be converted into electrical current in the same way as a microphone does). These devices can be designed to use either a standing wave or a travelling wave. Compared to vapor refrigerators, thermoacoustic refrigerators have no coolant and few moving parts (only the loudspeaker), and therefore require no dynamic sealing or lubrication.
History
The ability of heat to produce sound was noted by glassblowers centuries ago. In the 1850s, experiments showed that a temperature differential drove the phenomenon, and that acoustic volume and intensity vary with tube length and bulb size. Rijke demonstrated that adding a heated wire screen a quarter of the way up a tube greatly magnified the sound, supplying energy to the air in the tube at its point of greatest pressure. Further experiments showed that cooling the air at its points of minimal pressure produced a similar amplifying effect. A Rijke tube converts heat into acoustic energy using natural convection. In about 1887, Lord Rayleigh discussed the possibility of pumping heat with sound. In 1969, Rott reopened the topic. Using the Navier–Stokes equations for fluids, he derived equations specific to thermoacoustics. Linear thermoacoustic models were developed to form a basic quantitative understanding, and numeric models for computation. Swift continued with these equations, deriving expressions for the acoustic power in thermoacoustic devices. In 1992, a thermoacoustic refrigeration device was used on Space Shuttle Discovery. Orest Symko at the University of Utah began a research project in 2005 called Thermal Acoustic Piezo Energy Conversion (TAPEC). Niche applications such as small to medium scale
https://en.wikipedia.org/wiki/Control%20system
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. Control systems are designed via the control engineering process. For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint. For sequential and combinational logic, software logic, such as in a programmable logic controller, is used.
Open-loop and closed-loop control
Feedback control systems
Logic control
Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs. Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine. PLC software can be written in many different ways – ladder diagra
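The compare-and-correct feedback loop described above can be sketched in a few lines; the plant model (a pure integrator) and the gain are hypothetical choices for illustration, not from the article:

```python
# Minimal feedback-control sketch (hypothetical plant and gain): at each
# step the controller compares the process variable (PV) with the
# setpoint (SP) and applies the difference, scaled by a gain, as the
# control signal driving the plant toward the setpoint.
def run_feedback_loop(setpoint=10.0, kp=0.5, dt=0.1, steps=200):
    pv = 0.0  # process variable starts away from the setpoint
    for _ in range(steps):
        error = setpoint - pv   # SP - PV comparison
        control = kp * error    # proportional control signal
        pv += control * dt      # plant modeled as a pure integrator
    return pv

print(round(run_feedback_loop(), 3))  # 10.0 -- PV converges to SP
```

With a pure-integrator plant the proportional loop converges to the setpoint exactly; real plants with disturbances or self-leakage leave a steady-state offset, which is why integral action is usually added in practice.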
https://en.wikipedia.org/wiki/Nicastrin
Nicastrin, also known as NCSTN, is a protein that in humans is encoded by the NCSTN gene.
Function
Nicastrin (abbreviated NCT) is a protein that is part of the gamma secretase protein complex, which is one of the proteases involved in processing amyloid precursor protein (APP) to the short Alzheimer's disease-associated peptide amyloid beta. The other proteins in the complex are PSEN1 (presenilin-1), which is the catalytically active component of the complex, APH-1 (anterior pharynx-defective 1), and PEN-2 (presenilin enhancer 2). Nicastrin itself is not catalytically active, but instead promotes the maturation and proper trafficking of the other proteins in the complex, all of which undergo significant post-translational modification before becoming active in the cell. Nicastrin has also been identified as a regulator of neprilysin, an enzyme involved in the degradation of the amyloid beta fragment.
History
The protein was named after the Italian town of Nicastro, reflecting the fact that Alzheimer's disease was described in 1963 after studying descendants of an extended family originating in the town of Nicastro that had familial Alzheimer's disease (FAD).
Interactions
Nicastrin has been shown to interact with PSEN1 and PSEN2.
https://en.wikipedia.org/wiki/Carrier-grade%20NAT
Carrier-grade NAT (CGN or CGNAT), also known as large-scale NAT (LSN), is a type of network address translation (NAT) used by ISPs in IPv4 network design. With CGNAT, end sites, in particular residential networks, are configured with private network addresses that are translated to public IPv4 addresses by middlebox network address translator devices embedded in the network operator's network, permitting the sharing of small pools of public addresses among many end users. This essentially repeats the traditional customer-premises NAT function at the ISP level. Carrier-grade NAT is often used for mitigating IPv4 address exhaustion. One use scenario of CGN has been labeled NAT444, because some customer connections to Internet services on the public Internet would pass through three different IPv4 addressing domains: the customer's own private network, the carrier's private network, and the public Internet. Another CGN scenario is Dual-Stack Lite, in which the carrier's network uses IPv6 and thus only two IPv4 addressing domains are needed. CGNAT techniques were first used in 2000 to accommodate the immediate need for large numbers of IPv4 addresses in General Packet Radio Service (GPRS) deployments of mobile networks. Estimated CGNAT deployments increased from 1200 in 2014 to 3400 in 2016, with 28.85% of the studied deployments appearing to be in mobile operator networks.
Shared address space
If an ISP deploys a CGN and uses private address space to number customer gateways, the risk of address collision, and therefore routing failures, arises when the customer network already uses the same address space. This prompted some ISPs to develop a policy within the American Registry for Internet Numbers (ARIN) to allocate new private address space for CGNs, but ARIN deferred to the IETF before implementing the policy, indicating that the matter was not a typical allocation issue but a reservation of addresses for technical purposes (per RFC 2860). The IETF published an RFC detailing
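The collision risk described above is easy to check mechanically. In this sketch the customer prefix and carrier pools are hypothetical; 100.64.0.0/10 is the shared address space eventually reserved for CGN use (RFC 6598), which by design cannot collide with customer RFC 1918 space:

```python
import ipaddress

# Hypothetical prefixes: a customer LAN already numbered from RFC 1918
# space, and two candidate pools a carrier might use for CGN gateways.
customer_lan = ipaddress.ip_network("192.168.0.0/16")
carrier_rfc1918_pool = ipaddress.ip_network("192.168.100.0/22")  # collides
carrier_shared_pool = ipaddress.ip_network("100.64.0.0/10")      # shared space

# overlaps() flags exactly the address-collision / routing-failure risk
# the section describes.
print(customer_lan.overlaps(carrier_rfc1918_pool))  # True
print(customer_lan.overlaps(carrier_shared_pool))   # False
```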
https://en.wikipedia.org/wiki/Gibson%20assembly
Gibson assembly is a molecular cloning method that allows for the joining of multiple DNA fragments in a single, isothermal reaction. It is named after its creator, Daniel G. Gibson, who is the chief technology officer and co-founder of the synthetic biology company Telesis Bio.
Process
The entire Gibson assembly reaction requires few components and minor manipulations. The method can simultaneously combine up to 15 DNA fragments based on sequence identity. It requires that the DNA fragments contain an overlap of roughly 20–40 base pairs with adjacent DNA fragments. These DNA fragments are mixed with a cocktail of three enzymes, along with other buffer components. The three required enzyme activities are: exonuclease, DNA polymerase, and DNA ligase. The exonuclease chews back DNA from the 5' end, thus not inhibiting polymerase activity and allowing the reaction to occur in one single process. The resulting single-stranded regions on adjacent DNA fragments can anneal. The DNA polymerase incorporates nucleotides to fill in any gaps. The DNA ligase covalently joins the DNA of adjacent segments, thereby sealing any nicks in the DNA. The resulting product is different DNA fragments joined into one. Either linear or closed circular molecules can be assembled. There are two approaches to Gibson assembly: a one-step method and a two-step method. Both methods can be performed in a single reaction vessel. The Gibson assembly one-step method allows for the assembly of up to 5 different fragments using a single-step isothermal process. In this method, fragments and a master mix of enzymes are combined and the entire mixture is incubated at 50 °C for up to one hour. For the creation of more complex constructs with up to 15 fragments, or for constructs incorporating fragments from 100 bp to 10 kb, the Gibson assembly two-step approach is used. The two-step reaction requires two separate additions of master mix. One of the reactions is for the exonuclease and annealing step while t
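The overlap requirement can be illustrated with a small helper (a hypothetical function over toy sequences, not part of any assembly software): it reports the longest stretch of terminal sequence identity between the 3' end of one fragment and the 5' end of the next.

```python
# Sketch (hypothetical helper, toy sequences): Gibson assembly requires
# adjacent fragments to share ~20-40 bp of terminal sequence identity.
def terminal_overlap(frag_a: str, frag_b: str,
                     min_len: int = 20, max_len: int = 40) -> int:
    """Longest shared sequence between the end of frag_a and start of frag_b."""
    for n in range(max_len, min_len - 1, -1):
        if frag_a[-n:] == frag_b[:n]:
            return n
    return 0  # no usable overlap in the 20-40 bp window

# Two toy fragments sharing a 25 bp junction sequence:
junction = "ATGCGTACGTTAGCCTAGGCATCGA"      # 25 bp
frag1 = "GGGGGGGGGG" + junction            # ends with the junction
frag2 = junction + "CCCCCCCCCC"            # starts with the junction
print(terminal_overlap(frag1, frag2))      # 25
```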
https://en.wikipedia.org/wiki/List%20of%20live%20CDs
A live CD or live DVD is a CD-ROM or DVD-ROM containing a bootable computer operating system. Live CDs are unique in that they have the ability to run a complete, modern operating system on a computer lacking mutable secondary storage, such as a hard disk drive.
Rescue and repair
Billix – A multiboot distribution and system administration toolkit with the ability to install any of the included Linux distributions
Inquisitor – Linux kernel-based hardware diagnostics, stress testing and benchmarking live CD
Parted Magic – Entirely based on the 2.6 or newer Linux kernels
System Folder of classic Mac OS on a CD or on a floppy disk – Works on any media readable by 68k or PowerPC Macintosh computers
SystemRescueCD – A Linux kernel-based CD with tools for Windows and Linux repairs
BSD-based
FreeBSD based
DesktopBSD – as of 1.6RC1 FreeBSD and FreeSBIE based
FreeBSD – has supported use of a "fixit" CD for diagnostics since 1996
FreeNAS – m0n0wall-based
FreeSBIE (discontinued) – FreeBSD-based
GhostBSD – FreeBSD based with gnome GUI, installable to HDD
Ging – Debian GNU/kFreeBSD-based
m0n0wall (discontinued) – FreeBSD-based
TrueOS – FreeBSD-based
pfSense – m0n0wall-based
Other BSDs
DragonFly BSD
Linux kernel-based
Arch Linux based
Artix – LXQt preconfigured and OpenRC-oriented live CD and distribution
Archie – live CD version of Arch Linux
Antergos
Chakra
Manjaro – primarily free software operating system for personal computers aimed at ease of use
Parabola GNU/Linux-libre – distro endorsed by the Free Software Foundation
SystemRescueCD
Debian-based
These are directly based on Debian:
antiX – A light-weight edition based on Debian
Debian Live – Official live CD version of Debian
Devuan – A fork of the Debian Linux distribution that uses sysvinit, runit or OpenRC instead of systemd
Finnix – A small system administration live CD, based on Debian testing, and available for x86 and PowerPC architectures
grml – Installable live CD for sysad
https://en.wikipedia.org/wiki/Raytheon%20BBN
Raytheon BBN (originally Bolt Beranek and Newman Inc.) is an American research and development company based in Cambridge, Massachusetts, United States. In 1966, the Franklin Institute awarded the firm the Frank P. Brown Medal; in 1999 BBN received the IEEE Corporate Innovation Recognition; and on 1 February 2013, BBN was awarded the National Medal of Technology and Innovation, the highest honor that the U.S. government bestows upon scientists, engineers and inventors, by President Barack Obama. It became a wholly owned subsidiary of Raytheon in 2009.
History
BBN has its roots in an initial partnership formed on 15 October 1948 between Leo Beranek and Richard Bolt, professors at the Massachusetts Institute of Technology. Bolt had won a commission to be an acoustic consultant for the new United Nations permanent headquarters to be built in New York City. Realizing the magnitude of the project at hand, Bolt had pulled in his MIT colleague Beranek for help, and the partnership between the two was born. The firm, Bolt and Beranek, started out in two rented rooms on the MIT campus. Robert Newman joined the firm soon after, in 1950, and the firm became Bolt Beranek and Newman. Beranek remained the company's president and chief executive officer until 1967, and Bolt was chairman until 1976. From 1957 to 1962, J. C. R. Licklider served as vice president of engineering psychology for BBN. Foreseeing the potential to obtain federal grants for basic computer research, Licklider convinced the BBN leadership to purchase a then state-of-the-art Royal McBee LGP-30 digital computer in 1958 for US$25,000. Within a year, Ken Olsen, president of the newly formed Digital Equipment Corporation (DEC), approached BBN to test the prototype of DEC's first computer, the PDP-1. Within one month, BBN completed its tests and recommendations of the PDP-1. BBN ultimately purchased the first PDP-1 for around US$150,000 and received the machine in November 1960. After the PDP-1 arrived, BBN hired
https://en.wikipedia.org/wiki/Sahara%20Net
Sahara Net is an information and communications technology (ICT) provider serving the Saudi market. The company has grown since 1989 to offer various complementary services such as connectivity, internet, hosting, cloud, optimization, cyber security, and managed services.
History
Sahara Net is a Saudi Joint Stock Company (JSC) whose history goes back to 1989, when it established the first Saudi Bulletin Board Service (BBS) in the Kingdom. During this period, it operated as a hub for email exchange in the FidoNet network. In 1994, Sahara Net started offering Internet connectivity and other related services such as internet email, web design, web hosting, and domain name registration. These services made it the first ISP in Saudi Arabia before the official licensing in 1998, when the Saudi Internet market was regulated and Sahara Net received an Internet Service Provider (ISP) license and was appointed as the official Local Internet Registry (LIR) in the Kingdom of Saudi Arabia.
Today
The company has grown over the years to become one of the main ICT providers in the Saudi market, extending network coverage to all major cities in Saudi Arabia and offering various connectivity options to business as well as home users. In 2009, the company was partially acquired by Telindus (the ICT investment arm of Belgacom), the Belgian telecom operator. In 2014, the company was fully acquired by its original founders. More recently, Sahara Net was converted from an LLC to a JSC with over 1,200 shareholders through a capital raise (the original founders still control 70% of the shares).
Certifications
Sahara Net has received the following certifications:
ISO 9001:2015
ISO 20000:2018
ISO 27001:2013
ISO/IEC 27017:2017
ISO/IEC 27018:2019
Solutions & Products
https://en.wikipedia.org/wiki/Belweder%20%28TV%20set%29
Belweder was the brand name of the OT1471 television set, manufactured in the People's Republic of Poland (PRL) from 1957 to 1960 at the Warszawskie Zakłady Telewizyjne (WZT). It was the second (after the Wisła) TV set made in Poland and the first one designed entirely domestically. The communist authorities of the PRL saw TV set manufacturing not only as satisfying the consumption needs of the citizens, but also as a way of popularizing a potentially powerful propaganda medium, which is why the development of television in general and of TV sets in particular enjoyed strong support within the reality of a centrally planned economy. The first plans for the new device, along with a laboratory model, were created at the WZT in 1955. Contrary to WZT's first product, the Wisła, which was to a large extent based on solutions licensed from the Soviet Union with many components imported from there, the new TV was to be a modern design using domestic technology only, even though many of the components of the Belweder had not been manufactured in Poland before, and the manufacturing of plastics had to be set up virtually from scratch. The resulting TV set had a 14" screen, external dimensions of 51x41x34 cm, and weighed 23 kg. It could receive up to eight TV channels and FM radio. The channel switch could only be set to receive a signal from the transmitters in the part of Poland where a given example was sold – there were two distinct versions, one for the southern and one for the northern part of Poland. A Belweder cost 7000 złoty at a time when the average monthly salary was between one and two thousand; yet, like many consumption goods in the communist economies, it proved very sought after and hard to buy. This seems strange to Westerners used to free-market economies; the explanation is that although the TV set cost a few monthly salaries, communist economies produced so little in the way of consumer goods that people normally had vast amounts of money saved, simply because there w
https://en.wikipedia.org/wiki/Adaptive%20grammar
An adaptive grammar is a formal grammar that explicitly provides mechanisms within the formalism to allow its own production rules to be manipulated. Overview John N. Shutt defines an adaptive grammar as a grammatical formalism that allows rule sets (also known as sets of production rules) to be explicitly manipulated within a grammar. Types of manipulation include rule addition, deletion, and modification. Early history The first description of grammar adaptivity (though not under that name) in the literature is generally taken to be in a paper by Alfonso Caracciolo di Forino published in 1963. The next generally accepted reference to an adaptive formalism (extensible context-free grammars) came from Wegbreit in 1970 in the study of extensible programming languages, followed by the dynamic syntax of Hanford and Jones in 1973. Collaborative efforts Until fairly recently, much of the research into the formal properties of adaptive grammars was uncoordinated among researchers; it was first summarized by Henning Christiansen in 1990 in response to a paper in ACM SIGPLAN Notices by Boris Burshteyn. The Department of Engineering at the University of São Paulo has its Adaptive Languages and Techniques Laboratory, specifically focusing on research and practice in adaptive technologies and theory. The LTA also maintains a page naming researchers in the field. Terminology and taxonomy While early efforts made reference to dynamic syntax and extensible, modifiable, dynamic, and adaptable grammars, more recent usage has tended towards the term adaptive (or some variant such as adaptativa, depending on the publication language of the literature). Iwai refers to her formalism as adaptive grammars, but this specific use of simply adaptive grammars is not currently typical in the literature without name qualification. Moreover, no standardization or categorization efforts have been undertaken between various researchers, although several have made efforts in this d
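Shutt's notion of explicit rule-set manipulation can be sketched concretely. The toy class below is illustrative only and corresponds to no published adaptive formalism: a naive grammar whose productions can be added or deleted between derivation checks.

```python
# Minimal sketch of an "adaptive" rule set: the grammar's own productions can
# be added, deleted, or modified at run time. Purely illustrative.

class AdaptiveGrammar:
    def __init__(self, rules):
        # rules: dict mapping a nonterminal to a list of right-hand sides
        self.rules = {nt: list(rhss) for nt, rhss in rules.items()}

    def add_rule(self, nonterminal, rhs):
        self.rules.setdefault(nonterminal, []).append(rhs)

    def delete_rule(self, nonterminal, rhs):
        self.rules[nonterminal].remove(rhs)

    def derives(self, symbol, tokens):
        """True if `symbol` derives exactly `tokens` (naive exhaustive search)."""
        if symbol not in self.rules:            # symbols without rules are terminals
            return list(tokens) == [symbol]
        return any(self._match_seq(rhs, tokens) for rhs in self.rules[symbol])

    def _match_seq(self, rhs, tokens):
        if not rhs:
            return not tokens
        head, rest = rhs[0], rhs[1:]
        return any(
            self.derives(head, tokens[:split]) and self._match_seq(rest, tokens[split:])
            for split in range(len(tokens) + 1)
        )

g = AdaptiveGrammar({"S": [["a"]]})
before = g.derives("S", ["a", "b"])   # no rule covers "a b" yet
g.add_rule("S", ["a", "b"])           # adapt the rule set mid-stream
after = g.derives("S", ["a", "b"])    # now derivable
```

After `add_rule`, the same sentence that previously failed is derivable; `delete_rule` reverses the adaptation.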
https://en.wikipedia.org/wiki/Ecological%20effects%20of%20biodiversity
The diversity of species and genes in ecological communities affects the functioning of these communities. These ecological effects of biodiversity are in turn affected by climate change (through enhanced greenhouse gases, aerosols and loss of land cover) and by changes in biological diversity, which cause a rapid loss of biodiversity and extinctions of species and local populations. The current rate of extinction is sometimes considered a mass extinction, with current species extinction rates on the order of 100 to 1000 times as high as in the past. The two main areas where the effect of biodiversity on ecosystem function has been studied are the relationship between diversity and productivity, and the relationship between diversity and community stability. More biologically diverse communities appear to be more productive (in terms of biomass production) than less diverse communities, and they appear to be more stable in the face of perturbations. Animals that inhabit an area may also alter the conditions for survival through factors linked to climate. Definitions In order to understand the effects that changes in biodiversity will have on ecosystem functioning, it is important to define some terms. Biodiversity is not easily defined, but may be thought of as the number and/or evenness of genes, species, and ecosystems in a region. This definition includes genetic diversity, or the diversity of genes within a species; species diversity, or the diversity of species within a habitat or region; and ecosystem diversity, or the diversity of habitats within a region. Two things commonly measured in relation to changes in diversity are productivity and stability. Productivity is a measure of ecosystem function. It is generally measured by taking the total aboveground biomass of all plants in an area. Many assume that it can be used as a general indicator of ecosystem function and that total resource use and other indicators of ecosystem function are correlated with productivity.
https://en.wikipedia.org/wiki/UniDIMM
UniDIMM (short for Universal DIMM) is a specification for dual in-line memory modules (DIMMs), which are printed circuit boards (PCBs) designed to carry dynamic random-access memory (DRAM) chips. UniDIMMs can be populated with either DDR3 or DDR4 chips, with no support for any additional memory control logic; as a result, the computer's memory controller must support both DDR3 and DDR4 memory standards. The UniDIMM specification was created by Intel for its Skylake microarchitecture, whose integrated memory controller (IMC) supports both DDR3 (more specifically, the DDR3L low-voltage variant) and DDR4 memory technologies. UniDIMM is a SO-DIMM form factor available in two dimensions: for the standard UniDIMM version (the same size as DDR4 SO-DIMMs), and for the low-profile version. UniDIMMs have a 260-pin edge connector, which has the same pin count as the one on DDR4 SO-DIMMs, with the keying notch in a position that prevents incompatible installation by making UniDIMMs physically incompatible with standard DDR3 and DDR4 SO-DIMM sockets. Because of the lower operating voltage of DDR4 chips (1.2 V) compared with the operating voltage of DDR3 chips (1.5 V for regular DDR3 and 1.35 V for low-voltage DDR3L), UniDIMMs are designed to contain additional built-in voltage regulation circuitry. The UniDIMM specification was created to ease the market transition from DDR3 to DDR4 SDRAM. In previous RAM standard transitions, as was the case when DDR2 was phased out in favor of DDR3, having an emerging RAM standard as a new product line created a "chicken-and-egg" problem: the new memory is initially more expensive to manufacture, which keeps demand low, which in turn keeps production rates low. The DDR2 to DDR3 transition issues were sometimes handled with specific motherboards that provided separate slots for DDR2 and DDR3 modules, of which only one kind could be used at a time. By its design, the UniDIMM specification allows either DDR3 or DDR4 memory to be used in the same memory
https://en.wikipedia.org/wiki/109th%20meridian%20west
The meridian 109° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole. The 109th meridian west forms a great circle with the 71st meridian east. In the United States, the western boundaries of Colorado and New Mexico and the eastern boundaries of Utah and Arizona lie on the 32nd meridian west from Washington, which is approximately 3 minutes of longitude west of the 109th meridian west of Greenwich, or approximately . From pole to pole Starting at the North Pole and heading south to the South Pole, the 109th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="130" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-valign="top"
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" | Passing just east of Borden Island, Nunavut, Canada (at )
Passing just east of Vesey Hamilton Island, Nunavut, Canada (at )
|-
|
! scope="row" | Canada
| Nunavut — Melville Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Sabine Bay
| style="background:#b0e0e6;" |
|-
|
! scope="row" | Canada
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Parry Channel
| style="background:#b0e0e6;" | Viscount Melville Sound
|-
|
! scope="row" | Canada
| Nunavut — Victoria Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Dease Strait
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Bathurst Inlet
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" | Canada
| Nunavut — Lewes Island, the Stockport Islands and the mainland
Northwest Territories — from , passing through the Great Slave Lake
Saskatchewan — from , passing through Lake Athabasca
|-valign="top"
|
!
https://en.wikipedia.org/wiki/Sodium%20bisulfate
Sodium bisulfate, also known as sodium hydrogen sulfate, is the sodium salt of the bisulfate anion, with the molecular formula NaHSO4. Sodium bisulfate is an acid salt formed by partial neutralization of sulfuric acid by an equivalent of sodium base, typically in the form of either sodium hydroxide (lye) or sodium chloride (table salt). It is a dry granular product that can be safely shipped and stored. The anhydrous form is hygroscopic. Solutions of sodium bisulfate are acidic, with a 1 M solution having a pH of slightly below 1. Production Sodium bisulfate is produced as an intermediate in the Mannheim process, an industrial process involving the reaction of sodium chloride and sulfuric acid: NaCl + H2SO4 → HCl + NaHSO4 The process for the formation of sodium bisulfate is highly exothermic. The liquid sodium bisulfate is sprayed and cooled so that it forms a solid bead. The hydrogen chloride gas is dissolved in water to produce hydrochloric acid as a useful coproduct of the reaction. Sodium bisulfate can be generated as a byproduct of the production of many other mineral acids via the reaction of their sodium salts with an excess of sulfuric acid: NaX + H2SO4 → NaHSO4 + HX ( X− = CN−, NO3−, ClO4−) The acids HX produced have a lower boiling point than the reactants and are separated from the reaction mixture by distillation. Chemical reactions Hydrated sodium bisulfate dehydrates on heating, at which point it separates from the water molecule attached to it. Once cooled, it again becomes hygroscopic. Heating sodium bisulfate further produces sodium pyrosulfate, another colorless salt: 2 NaHSO4 → Na2S2O7 + H2O Uses Sodium bisulfate is used primarily to lower pH. It is also used in metal finishing, cleaning products, and to lower the pH of water for effective chlorination in swimming pools and hot tubs. Sodium bisulfate is also AAFCO approved as a general-use feed additive, including use in poultry feed and companion animal food. It is used as a urine acidifie
https://en.wikipedia.org/wiki/TOXMAP
TOXMAP was a geographic information system (GIS) from the United States National Library of Medicine (NLM) that was deprecated on December 16, 2019. The application used maps of the United States to help users explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory (TRI) and Superfund programs with visual projections and maps. Description TOXMAP helped users create nationwide, regional, or local area maps showing where TRI chemicals were released on-site into the air, water, and ground, and by underground injection, as reported by industrial facilities in the United States. It also identified the releasing facilities, color-coded release amounts for a single year or year range, and provided multi-year aggregate chemical release data and trends over time, starting with 1988. Maps could also show locations of Superfund sites on the Agency for Toxic Substances and Disease Registry National Priorities List (NPL), listing all chemical contaminants present at these sites. TOXMAP was a useful environmental health tool that made epidemiological and environmental information available to the public. There were two versions of TOXMAP available from its home page: the classic version released in 2004, and a newer version released in 2014 that was based on Adobe Flash/Apache Flex technology. In addition to many of the features of TOXMAP classic, the newer version provided an improved map appearance and interactive capabilities as well as a more current GIS look-and-feel. This included seamless panning, immediate update of search results when zooming to a location, two collapsible side panels to maximize map size, and automatic size adjustment after a window resize. The newer TOXMAP also added improved U.S. Census layers with availability by Census Tract (2000 and 2010), Canadian National Pollutant Release Inventory (NPRI) data, U.S. commercial nuclear power plants, as well as improved and updated congressional district boundaries. TOXM
https://en.wikipedia.org/wiki/Verge3D
Verge3D is a real-time renderer and a toolkit used for creating interactive 3D experiences running on websites. Overview Verge3D enables users to convert content from 3D modelling tools (Blender, 3ds Max, and Maya are currently supported) to view in a web browser. Verge3D was created by the same core group of software engineers that previously created the Blend4Web framework. Features Verge3D uses WebGL for rendering. It incorporates components of the Three.js library and exposes its API to application developers. Puzzles Application functionality can be added via JavaScript, either by writing code directly or by using Puzzles, Verge3D’s visual programming environment based on Google Blockly. Puzzles is aimed primarily at non-programmers allowing quick creation of interactive scenarios in a drag-and-drop fashion. App Manager and web publishing App Manager is a lightweight web-based tool for creating, managing and publishing Verge3D projects, running on top of the local development server. Verge3D Network service integrated in the App Manager allows for publishing Verge3D applications via Amazon S3 and EC2 cloud services. PBR For purposes of authoring materials, a glTF 2.0-compliant physically based rendering pipeline is offered alongside the standard shader-based approach. PBR textures can be authored using external texturing software such as Substance Painter for which Verge3D offers the corresponding export preset. Besides the glTF 2.0 model, Verge3D supports physical materials of 3ds Max and Maya (with Autodesk Arnold as reference), and Blender's real-time Eevee materials. glTF and DCC software integration Verge3D integrates directly with Blender, 3ds Max, and Maya, enabling users to create 3D geometry, materials, and animations inside the software, then export them in the JSON-based glTF format. The Sneak Peek feature allows for exporting and viewing scenes from the DCC tool environment. Facebook 3D posts For Facebook publishing, Verge3D offers a spe
https://en.wikipedia.org/wiki/Visual%20temporal%20attention
Visual temporal attention is a special case of visual attention that involves directing attention to a specific instant of time. Like its spatial counterpart, visual spatial attention, these attention modules have been widely implemented in video analytics in computer vision to provide enhanced performance and human-interpretable explanation of deep learning models. Just as visual spatial attention mechanisms allow human and/or computer vision systems to focus on semantically substantial regions in space, visual temporal attention modules enable machine learning algorithms to place more emphasis on critical video frames in video analytics tasks, such as human action recognition. In convolutional neural network-based systems, the prioritization introduced by the attention mechanism is regularly implemented as a linear weighting layer with parameters determined by labeled training data. Application in Action Recognition Recent video segmentation algorithms often exploit both spatial and temporal attention mechanisms. Research in human action recognition has accelerated significantly since the introduction of powerful tools such as convolutional neural networks (CNNs). However, effective methods for incorporating temporal information into CNNs are still being actively explored. Motivated by the popular recurrent attention models in natural language processing, the Attention-aware Temporal Weighted CNN (ATW CNN) was proposed for video analysis; it embeds a visual attention model into a temporal weighted multi-stream CNN. This attention model is implemented as temporal weighting, and it effectively boosts the recognition performance of video representations. In addition, each stream in the proposed ATW CNN framework is capable of end-to-end training, with both network parameters and temporal weights optimized by stochastic gradient descent (SGD) with back-propagation. Experimental results show that the ATW CNN attention mechanism contributes substantially to the performan
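The "linear weighting layer" idea can be sketched in plain Python: score each frame's feature vector, softmax-normalize the scores over time, and pool the frames by those weights. The feature vectors and the weight vector here are invented for illustration; in an ATW-style network the weights would be learned from labeled data by SGD.

```python
import math

def temporal_attention(frame_features, w):
    # score each frame's feature vector with weight vector w (fixed for this sketch)
    scores = [sum(wi * fi for wi, fi in zip(w, f)) for f in frame_features]
    # softmax over time: one attention weight per frame, summing to 1
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    # attention-weighted pooling of the frame features into a clip descriptor
    d = len(frame_features[0])
    pooled = [sum(a * f[i] for a, f in zip(alphas, frame_features)) for i in range(d)]
    return alphas, pooled

frames = [[0.0, 1.0], [2.0, 1.0], [1.0, 1.0]]   # three frames; frame 1 is "salient"
alphas, pooled = temporal_attention(frames, [1.0, 0.0])
```

The salient frame receives the largest attention weight, so it dominates the pooled clip representation.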
https://en.wikipedia.org/wiki/Synchronous%20virtual%20pipe
In pipeline forwarding, a predefined schedule for forwarding a pre-allocated number of bytes during one or more time frames (TFs) along a path of subsequent switches establishes a synchronous virtual pipe (SVP). The SVP capacity is determined by the total number of bits allocated in every time cycle for the SVP. For example, with a 10 ms time cycle, if 20,000 bits are allocated during each of 2 time frames, the SVP capacity is 4 Mbit/s. Pipeline forwarding guarantees that reserved traffic, i.e., traffic traveling on an SVP, experiences bounded end-to-end delay, delay jitter lower than two TFs, and no congestion and the resulting losses. Two implementations of pipeline forwarding have been proposed, time-driven switching (TDS) and time-driven priority (TDP); both can be used to create pipeline-forwarding parallel networks in the future Internet.
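The capacity arithmetic in the example above can be written out directly (the helper name is mine):

```python
def svp_capacity_bps(bits_per_tf, tfs_per_cycle, cycle_seconds):
    # total bits reserved per time cycle, spread over the cycle duration
    return bits_per_tf * tfs_per_cycle / cycle_seconds

# the example from the text: 10 ms time cycle, 20,000 bits in each of 2 time frames
capacity = svp_capacity_bps(20_000, 2, 0.010)   # 4,000,000 bit/s = 4 Mbit/s
```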
https://en.wikipedia.org/wiki/48th%20meridian%20west
The meridian 48° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, the Atlantic Ocean, South America, the Southern Ocean, and Antarctica to the South Pole. The 48th meridian west forms a great circle with the 132nd meridian east. The map indexing scheme of Canada's National Topographic System begins in the east at the 48th meridian west. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 48th meridian west passes through:
{| class="wikitable plainrowheaders"
! scope="col" width="120" | Co-ordinates
! scope="col" | Country, territory or sea
! scope="col" | Notes
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Arctic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Lincoln Sea
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| John Murray Island
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Lincoln Sea
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Victoria Fjord
| style="background:#b0e0e6;" |
|-
|
! scope="row" |
| Wulff Land
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" |
| Pará
Maranhão — from
Tocantins — from
Goiás — from
Federal District — from , passing just west of Brasilia (at )
Goiás — from
Minas Gerais — from
São Paulo — from
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Atlantic Ocean
| style="background:#b0e0e6;" |
|-
| style="background:#b0e0e6;" |
! scope="row" style="background:#b0e0e6;" | Southern Ocean
| style="background:#b0e0e6;" |
|-valign="top"
|
! scope="row" | Antarctica
| Claimed by both (Argentine Antarctica) and (British Antarctic Territory)
|-
|}
See also 47th meridian west 49th merid
https://en.wikipedia.org/wiki/Linear%20programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polyhedron. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value, if such a point exists. Linear programs are problems that can be expressed in standard form as: maximize c^T x subject to Ax ≤ b and x ≥ 0. Here the components of x are the variables to be determined, c and b are given vectors, and A is a given matrix. The function whose value is to be maximized (x ↦ c^T x in this case) is called the objective function. The constraints Ax ≤ b and x ≥ 0 specify a convex polytope over which the objective function is to be optimized. Linear programming can be applied to various fields of study. It is widely used in mathematics and, to a lesser extent, in business, economics, and some engineering problems. Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proven useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. History The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. In 1939 a linear programming formulation of a problem that is equivalent to the general linear pr
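For a toy two-variable instance of the standard form above, the optimum can be found by brute-force enumeration of the feasible polygon's vertices (the optimum of an LP, when it exists, is attained at a vertex). A real solver would use a simplex or interior-point method; the instance below is invented for illustration.

```python
from itertools import combinations

def solve_2d_lp(c, A, b):
    """Maximize c·x subject to A x <= b by enumerating vertices of the
    feasible polygon (fine for tiny 2-variable instances only)."""
    best, best_val = None, float("-inf")
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                      # parallel constraint boundaries
        # intersection of the two constraint boundary lines (Cramer's rule)
        x = (b1 * a2[1] - a1[1] * b2) / det
        y = (a1[0] * b2 - b1 * a2[0]) / det
        # keep the intersection only if it satisfies every constraint
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

# maximize 3x + 4y  s.t.  x + 2y <= 14,  3x - y <= 0,  x - y <= 2,  x >= 0,  y >= 0
A = [(1, 2), (3, -1), (1, -1), (-1, 0), (0, -1)]
b = [14, 0, 2, 0, 0]
point, value = solve_2d_lp((3, 4), A, b)
```

For this instance the optimum is the vertex x = 2, y = 6, where the constraints x + 2y ≤ 14 and 3x − y ≤ 0 are tight, giving objective value 30.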
https://en.wikipedia.org/wiki/Mauro%20Martino
Mauro Martino is an Italian artist, designer and researcher. He is the founder and director of the Visual Artificial Intelligence Lab at IBM Research, and Professor of Practice at Northeastern University. Career He graduated from the Polytechnic University of Milan, and was a research affiliate with the Senseable City Lab at MIT. Martino was formerly an Assistant Research Professor at Northeastern University, working with Albert-Laszlo Barabasi at the Center for Complex Network Research and with David Lazer and fellows at The Institute for Quantitative Social Science (IQSS) at Harvard University. His works have been published in "The Best American Infographics" in the 2015 and 2016 editions and have been shown at international festivals and exhibitions including Ars Electronica, the RIXC Art Science Festival, Global Exchange at Lincoln Center, TEDx Cambridge THRIVE, TEDx Riga, and the Serpentine Gallery. His work is in the permanent collection of the Ars Electronica Center. In 2017, Martino and his team received the National Science Foundation's award for Best Scientific Video for the project Network Earth. In 2019, Martino and Luca Stornaiuolo won the 2019 Webby People's Voice Award in the category NetArt for the project AI Portraits. The project 150 Years of Nature won multiple awards, including the Fast Company Innovation by Design Award for Best Data Design 2020, a Webby Award 2020, and a Webby People's Voice Award 2020. This project, along with other works created in collaboration with Barabási Lab (e.g., Wonder Net, A Century of Physics, Data Sculpture in Bronze, Control, Resilience, Success in Science, Fake News), was shown at the "Barabási Lab. Hidden Patterns" exhibitions at the ZKM Center for Art and Media and the Ludwig Museum - Museum of Contemporary Art, Budapest. Martino is a pioneer in the use of artificial neural networks in sculpture. Notable works Strolling Cities is an interactive AI art project created in collaboration with Politecnico di Milano and exhibited in the Ita
https://en.wikipedia.org/wiki/Slide%20attack
The slide attack is a form of cryptanalysis designed to deal with the prevailing idea that even weak ciphers can become very strong by increasing the number of rounds, which can ward off a differential attack. The slide attack works in such a way as to make the number of rounds in a cipher irrelevant. Rather than looking at the data-randomizing aspects of the block cipher, the slide attack works by analyzing the key schedule and exploiting weaknesses in it to break the cipher. The most common such weakness is keys repeating in a cyclic manner. The attack was first described by David Wagner and Alex Biryukov. Bruce Schneier first suggested the term slide attack to them, and they used it in their 1999 paper describing the attack. The only requirement for a slide attack to work on a cipher is that it can be broken down into multiple rounds of an identical F function. This usually means that it has a cyclic key schedule. The F function must be vulnerable to a known-plaintext attack. The slide attack is closely related to the related-key attack. The idea of the slide attack has roots in a paper published by Edna Grossman and Bryant Tuckerman in an IBM Technical Report in 1977. Grossman and Tuckerman demonstrated the attack on a weak block cipher named New Data Seal (NDS). The attack relied on the fact that the cipher had identical subkeys in each round, so the cipher had a cyclic key schedule with a cycle of only one key, which makes it an early version of the slide attack. A summary of the report, including a description of the NDS block cipher and the attack, is given in Cipher Systems (Beker & Piper, 1982). The actual attack First, we introduce some notation. In this section, assume the cipher takes n-bit blocks and has a key schedule using K1, …, Km as keys, which can be of any length. The slide attack works by breaking the cipher up into identical permutation functions, F. This F function may consist of more than one round of the cipher; it is defined by the key schedule. For example, i
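The slide relation itself is easy to demonstrate on a toy cipher whose key schedule repeats a single round key, so the whole cipher is the same F composed r times. The cipher below is invented and insecure; it only shows the property the attack exploits: if P' = F(P) is a "slid pair" of plaintexts, then the ciphertexts satisfy C' = F(C) regardless of the round count.

```python
K = 0x3A7          # single round key, repeated every round (cyclic schedule)
MASK = 0xFFFF      # 16-bit toy block size

def F(x, k=K):
    # one identical round: mix in the round key, then a fixed 16-bit rotation
    x = (x ^ k) & MASK
    return ((x << 5) | (x >> 11)) & MASK

def encrypt(x, rounds=12):
    # the whole cipher is just F applied `rounds` times
    for _ in range(rounds):
        x = F(x)
    return x

P = 0x1234
P_slid = F(P)                        # slid plaintext: one round "ahead" of P
C, C_slid = encrypt(P), encrypt(P_slid)
assert C_slid == F(C)                # the pair stays slid by one round
```

Because encrypt(F(P)) = F^(r+1)(P) = F(encrypt(P)), the relation holds for any number of rounds, which is exactly why adding rounds does not help a cipher with this key-schedule weakness.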
https://en.wikipedia.org/wiki/Telugu-Kannada%20alphabet
The Telugu–Kannada script (or Kannada–Telugu script) was a writing system used in Southern India. Despite some significant differences, the scripts used for the Telugu and Kannada languages remain quite similar and highly mutually intelligible. The Satavahanas and Chalukyas influenced the similarities between the Telugu and Kannada scripts. History The Dravidian family comprises about 73 languages, including Telugu, Tamil, Kannada, and Malayalam. The Satavahanas introduced the Brahmi script to the present-day Telugu- and Kannada-speaking regions. The Bhattiprolu script introduced by the Satavahanas gave rise to the Kadamba script. During the 5th to 7th centuries, the early Bādāmi Chālukyās and early Banavasi Kadambās used an early form of the Kadamba script in inscriptions. When the Chalukya empire extended towards the Telugu-speaking regions, they established another branch in Vengi, namely the Eastern Chalukyas or the Chalukyas of Vengi, who later introduced the Kadamba script to the Telugu language; it developed into the Telugu-Kannada script, which was used between the 7th and 11th centuries CE. Between 1100 CE and 1400 CE, the Telugu and Kannada scripts separated from the Telugu-Kannada script. Both the Telugu and Kannada scripts were standardised at the beginning of the nineteenth century. Comparison The following sections visualize the difference between modern-day Telugu and Kannada styles. Consonants There is another legacy consonant ೞ/ఴ (ḻa) used to represent , but currently not in use. Vowels Independent vowels Numerals Unicode Although the alphabets for the Telugu and Kannada languages could have been encoded under a single Unicode block, with language-specific fonts to differentiate the styles, they were encoded separately by the governments for socio-political reasons. Both script variants were added to the Unicode Standard in October 1991 with the release of version 1.0. See also Kannada inscriptions Linguistic history of the Indian subcontinent Pallava script
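The separate encoding is visible directly from Python's unicodedata module: the Telugu block starts at U+0C00 and the Kannada block at U+0C80, and the two blocks are laid out largely in parallel, so corresponding letters sit 0x80 code points apart (the letter ka is used as the example here).

```python
import unicodedata

te_ka, kn_ka = "క", "ಕ"            # U+0C15 and U+0C95, the letter ka in each script
print(unicodedata.name(te_ka))     # TELUGU LETTER KA
print(unicodedata.name(kn_ka))     # KANNADA LETTER KA
offset = ord(kn_ka) - ord(te_ka)   # 0x80: the two blocks mirror each other
```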
https://en.wikipedia.org/wiki/Percentage%20depth%20dose%20curve
In radiotherapy, a percentage depth dose curve (PDD) (sometimes percent depth dose curve) relates the absorbed dose deposited by a radiation beam into a medium as it varies with depth along the axis of the beam. The dose values are divided by the maximum dose, referred to as dmax, yielding a plot in terms of percentage of the maximum dose. Dose measurements are generally made in water or "water equivalent" plastic with an ionization chamber, since water is very similar to human tissue with regard to radiation scattering and absorption. Percent depth dose (PDD), which reflects the overall percentage of dose deposited as compared to the depth of maximum dose, depends on the depth of interest, beam energy, field size, and SSD (source to surface distance) as follows.
Of note, PDD generally refers to depths greater than the depth of maximum dose.
PDD decreases with increasing depth due to the inverse square law and due to attenuation of the radiation beam.
PDD increases with increasing radiation field size due to greater primary and scattered photons from the irradiated medium.
PDD increases with increasing SSD because inverse square variations over a fixed distance interval are smaller at a large total distance than at a small total distance.
See also Dosimetry Dose profile
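The SSD dependence can be illustrated numerically by isolating the inverse-square factor between dmax and a given depth; the depths and the dmax value below are illustrative assumptions, not values from the text.

```python
def inverse_square_factor(ssd_cm, depth_cm, dmax_cm=1.5):
    # ratio of fluence at `depth_cm` to fluence at dmax from inverse square alone
    # (dmax_cm = 1.5 cm is an assumed, illustrative depth of maximum dose)
    return ((ssd_cm + dmax_cm) / (ssd_cm + depth_cm)) ** 2

short_ssd = inverse_square_factor(80, 10)    # SSD 80 cm, depth 10 cm
long_ssd = inverse_square_factor(100, 10)    # SSD 100 cm, depth 10 cm
# the falloff over the same 8.5 cm interval is smaller at the larger SSD,
# so the inverse-square contribution to PDD is higher there
```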
https://en.wikipedia.org/wiki/ACT%20domain
In molecular biology, the ACT domain is a protein domain found in a variety of proteins involved in metabolism. ACT domains are linked to a wide range of metabolic enzymes that are regulated by amino acid concentration. The ACT domain is named after three of the proteins that contain it: aspartate kinase, chorismate mutase and TyrA. The archetypical ACT domain is the C-terminal regulatory domain of 3-phosphoglycerate dehydrogenase (3PGDH), which folds with a ferredoxin-like topology. A pair of ACT domains forms an eight-stranded antiparallel sheet with two molecules of the allosteric inhibitor serine bound at the interface. Biochemical exploration of a few other proteins containing ACT domains supports the suggestion that these domains share the archetypical ACT structure. The ACT domain was discovered by Aravind and Koonin using iterative sequence searches.
https://en.wikipedia.org/wiki/Alzheimer%27s%20disease
Alzheimer's disease (AD) is a neurodegenerative disease that usually starts slowly and progressively worsens, and is the cause of 60–70% of cases of dementia. The most common early symptom is difficulty in remembering recent events. As the disease advances, symptoms can include problems with language, disorientation (including easily getting lost), mood swings, loss of motivation, self-neglect, and behavioral issues. As a person's condition declines, they often withdraw from family and society. Gradually, bodily functions are lost, ultimately leading to death. Although the speed of progression can vary, the typical life expectancy following diagnosis is three to nine years. The cause of Alzheimer's disease is poorly understood. There are many environmental and genetic risk factors associated with its development. The strongest genetic risk factor is from an allele of APOE. Other risk factors include a history of head injury, clinical depression, and high blood pressure. The disease process is largely associated with amyloid plaques, neurofibrillary tangles, and loss of neuronal connections in the brain. A probable diagnosis is based on the history of the illness and cognitive testing, with medical imaging and blood tests to rule out other possible causes. Initial symptoms are often mistaken for normal brain aging. Examination of brain tissue is needed for a definite diagnosis, but this can only take place after death. Good nutrition, physical activity, and engaging socially are known to be of benefit generally in aging, and may help in reducing the risk of cognitive decline and Alzheimer's. No treatments can stop or reverse its progression, though some may temporarily improve symptoms. Affected people become increasingly reliant on others for assistance, often placing a burden on caregivers. The pressures can include social, psychological, physical, and economic elements. Exercise programs may be beneficial with respect to activities of daily living and can potent
https://en.wikipedia.org/wiki/Telephone%20number%20pooling
Telephone number pooling, thousands-block number pooling, or just number pooling, is a method of allocating telephony numbering space of the North American Numbering Plan in the United States. The method allocates telephone numbers in blocks of one thousand consecutive numbers of a given central office code to telephony service providers. In the United States it replaced the practice of allocating all 10,000 numbers of a central office prefix at a time. Under number pooling, the entire prefix is assigned to a rate center, to be shared among all providers delivering services in that rate center. Number pooling reduced the quantity of unused telephone numbers in markets which have been fragmented between multiple service providers, avoided central office prefix exhaustion in high growth areas, and extended the lifetime of the North American telephone numbering plan without structure changes of telephone numbers. Telephone number pooling was first tested for area code 847 in Illinois in June 1998, and became national policy in a series of Federal Communications Commission (FCC) orders from 2000 to 2003. History The North American Numbering Plan is a closed numbering plan, meaning that it assigns telephone numbers to individual endpoints based on a fixed-length telephone number. The national telephone number consists of a three-digit area code, a three-digit central office code, and a four-digit line number. Thus, each central office provides a resource of 10,000 telephone lines with a unique number each. While often enough for small communities, most cities require multiple central offices to service the community. In the North American Numbering Plan, mobile telephones do not use distinct area codes from wireline services, but many central offices provide only wireless services, or just wireline services. After the breakup of the Bell System on January 1, 1984, most telephone service areas in the United States were dominated by one carrier which held a monopoly on
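The block structure described above is simple to compute: a ten-digit NANP number splits into a three-digit area code (NPA), a three-digit central office code (NXX), and a four-digit line number, and the first digit of the line number selects one of ten thousands-blocks within the prefix's 10,000 numbers. The helper below and its example number are illustrative (555 is a fictional central office code).

```python
def split_nanp(number):
    """Split a 10-digit NANP number into area code, central office code,
    thousands-block digit, and line number (illustrative helper)."""
    digits = "".join(ch for ch in number if ch.isdigit())
    assert len(digits) == 10, "expected a 10-digit NANP number"
    npa, nxx, line = digits[:3], digits[3:6], digits[6:]
    block = line[0]   # each prefix holds ten poolable blocks of 1000 numbers
    return npa, nxx, block, line

npa, nxx, block, line = split_nanp("847-555-2368")
# under pooling, the block "847-555-2xxx" could be assigned to one provider
# while "847-555-3xxx" goes to another, within the same rate center
```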
https://en.wikipedia.org/wiki/Influential%20observation
In statistics, an influential observation is an observation for a statistical calculation whose deletion from the dataset would noticeably change the result of the calculation. In particular, in regression analysis an influential observation is one whose deletion has a large effect on the parameter estimates. Assessment Various methods have been proposed for measuring influence. Assume an estimated regression y = Xb + e, where y is an n×1 column vector for the response variable, X is the n×k design matrix of explanatory variables (including a constant), e is the n×1 residual vector, and b is a k×1 vector of estimates of some population parameter β. Also define H = X(XᵀX)⁻¹Xᵀ, the projection matrix of X. Then we have the following measures of influence: DFBETAᵢ = b − b(−i) = (XᵀX)⁻¹xᵢᵀeᵢ/(1 − hᵢᵢ), where b(−i) denotes the coefficients estimated with the i-th row xᵢ of X deleted, eᵢ is the i-th residual, and hᵢᵢ denotes the i-th value of H's main diagonal. Thus DFBETA measures the difference in each parameter estimate with and without the influential point. There is a DFBETA for each variable and each observation (if there are N observations and k variables there are N·k DFBETAs). DFBETAs can be tabulated for the third dataset from Anscombe's quartet (bottom-left chart in the figure): Outliers, leverage and influence An outlier may be defined as a data point that differs significantly from other observations. A high-leverage point is an observation made at extreme values of the independent variables. Both types of atypical observations will force the regression line to be close to the point. In Anscombe's quartet, the bottom right image has a point with high leverage and the bottom left image has an outlying point. See also Influence function (statistics) Outlier Leverage Partial leverage Regression analysis Anomaly detection
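The deletion diagnostic above can be sketched numerically. The following is a brute-force refit per observation (rather than the closed-form expression) using NumPy, with Anscombe's third dataset typed in; it shows the outlying point dominating the DFBETAs:

```python
import numpy as np

# Anscombe's quartet, dataset III: a single outlier at x = 13
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84,
              6.08, 5.39, 8.15, 6.42, 5.73])

X = np.column_stack([np.ones_like(x), x])        # design matrix with constant
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # full-sample estimates

# DFBETA_i = b - b(-i): change in each coefficient when row i is dropped
n, k = X.shape
dfbetas = np.empty((n, k))
for i in range(n):
    keep = np.arange(n) != i
    beta_i, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    dfbetas[i] = beta - beta_i

# the outlying observation (index 2, x = 13) has the largest slope DFBETA
print(np.argmax(np.abs(dfbetas[:, 1])))  # prints 2
```

All four Anscombe datasets fit to roughly y = 3 + 0.5x, yet deleting the single outlier here shifts the slope by about 0.15, an order of magnitude more than deleting any other point.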
https://en.wikipedia.org/wiki/Melanoidin
Melanoidins are brown, high molecular weight heterogeneous polymers that are formed when sugars and amino acids combine (through the Maillard reaction) at high temperatures and low water activity. They were discovered by Schmiedeberg in 1897. Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment in order to avoid contamination around the outflow of these refineries. Dietary melanoidins themselves produce various effects in the organism: they decrease Phase I liver enzyme activity and promote glycation in vivo, which may contribute to diabetes, reduced vascular compliance and Alzheimer's disease. Some of the melanoidins are metabolized by the intestinal microflora. Coffee is one of the main sources of melanoidins in the human diet, yet coffee consumption is associated with some health benefits and antiglycative action.
https://en.wikipedia.org/wiki/Fusarubin
Fusarubin is a naphthoquinone antibiotic produced by the fungus Fusarium solani. Fusarubin has the molecular formula C15H14O7.
https://en.wikipedia.org/wiki/Bijective%20numeration
Bijective numeration is any numeral system in which every non-negative integer can be represented in exactly one way using a finite string of digits. The name refers to the bijection (i.e. one-to-one correspondence) that exists in this case between the set of non-negative integers and the set of finite strings using a finite set of symbols (the "digits"). Most ordinary numeral systems, such as the common decimal system, are not bijective because more than one string of digits can represent the same positive integer. In particular, adding leading zeroes does not change the value represented, so "1", "01" and "001" all represent the number one. Even though only the first is usual, the fact that the others are possible means that the decimal system is not bijective. However, the unary numeral system, with only one digit, is bijective. A bijective base-k numeration is a bijective positional notation. It uses a string of digits from the set {1, 2, ..., k} (where k ≥ 1) to encode each positive integer; a digit's position in the string defines its value as a multiple of a power of k. This notation is sometimes called k-adic, but it should not be confused with the p-adic numbers: bijective numerals are a system for representing ordinary integers by finite strings of nonzero digits, whereas the p-adic numbers are a system of mathematical values that contain the integers as a subset and may need infinite sequences of digits in any numerical representation. Definition The base-k bijective numeration system uses the digit-set {1, 2, ..., k} (k ≥ 1) to uniquely represent every non-negative integer, as follows: The integer zero is represented by the empty string. The integer represented by the nonempty digit-string a_n a_(n−1) ... a_1 a_0 is a_n k^n + a_(n−1) k^(n−1) + ... + a_1 k + a_0. The digit-string representing the integer m > 0 is s(q) d, where s(q) is the digit-string representing q = ⌈m/k⌉ − 1 (the empty string if q = 0), d = m − qk is a single digit in {1, ..., k}, and ⌈x⌉ is the least integer not less than x (the ceiling function). In contrast, standard positional notation can be defined with a similar recursive algorithm where q = ⌊m/k⌋ and d = m − qk. Extension to integers For base , the bije
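The recursive definition above translates directly into code. The sketch below (function names are illustrative) encodes and decodes bijective base-k strings using q = ⌈m/k⌉ − 1 and d = m − qk:

```python
def to_bijective(m: int, k: int) -> str:
    """Encode a positive integer m in bijective base k (digits 1..k).

    Digits above 9 are written in brackets, e.g. [10]; zero is the
    empty string. Follows the recurrence q = ceil(m/k) - 1, d = m - q*k.
    """
    digits = []
    while m > 0:
        q = -(-m // k) - 1           # ceil(m/k) - 1, without floats
        d = m - q * k                # digit in {1, ..., k}
        digits.append(str(d) if d < 10 else f"[{d}]")
        m = q
    return "".join(reversed(digits))

def from_bijective(s: str, k: int) -> int:
    """Decode a bijective base-k string of single-character digits."""
    value = 0
    for ch in s:
        value = value * k + int(ch)
    return value
```

For example, in bijective base 2 the integer 10 is written "122" (1·4 + 2·2 + 2), and bijective base 1 reduces to the unary system: `to_bijective(3, 1)` gives "111".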
https://en.wikipedia.org/wiki/Baccharis%20intermedia
Baccharis intermedia is a species of shrub native to Chile. The species was first formally described by the botanist Augustin Pyramus de Candolle in 1836. Distribution This species is common on coastal hills of central Chile. Natural hybridisation It is observed in areas in which the habitats of Baccharis linearis and Baccharis macraei overlap or come into close contact. It is a natural hybrid of the aforementioned species and is part of a homoploid hybrid swarm. The morphology is intermediate in all aspects and shows all variations from both extremes of the parental phenotypes to intermediate forms. This is due to the back-crossing of hybrids with the parent species. The intermediate morphology is also reflected in the specific epithet intermedia, which suggests this species is intermediate between others.
https://en.wikipedia.org/wiki/Spinors%20in%20three%20dimensions
In mathematics, the spinor concept as specialised to three dimensions can be treated by means of the traditional notions of dot product and cross product. This is part of the detailed algebraic discussion of the rotation group SO(3). Formulation The association of a spinor with a 2×2 complex Hermitian matrix was formulated by Élie Cartan. In detail, given a vector x = (x1, x2, x3) of real (or complex) numbers, one can associate the complex matrix X = [[x3, x1 − ix2], [x1 + ix2, −x3]]. In physics, this is often written as a dot product X = x · σ, where σ = (σ1, σ2, σ3) is the vector form of the Pauli matrices. Matrices of this form have the following properties, which relate them intrinsically to the geometry of 3-space: det X = −(x1² + x2² + x3²), where det denotes the determinant. X² = (x1² + x2² + x3²)I, where I is the identity matrix. (1/2)(XY − YX) = iZ, where Z is the matrix associated to the cross product z = x × y. If u is a unit vector, then −UXU is the matrix associated with the vector that results from reflecting x in the plane orthogonal to u. The last property can be used to simplify rotational operations. It is an elementary fact from linear algebra that any rotation in 3-space factors as a composition of two reflections. (More generally, any orientation-reversing orthogonal transformation is either a reflection or the product of three reflections.) Thus if R is a rotation which decomposes as the reflection in the plane perpendicular to a unit vector u1 followed by the reflection in the plane perpendicular to u2, then the matrix U2U1XU1U2 represents the rotation of the vector x through R. Having effectively encoded all the rotational linear geometry of 3-space into a set of complex 2×2 matrices, it is natural to ask what role, if any, the 2×1 matrices (i.e., the column vectors) play. Provisionally, a spinor is a column vector with complex entries ξ1 and ξ2. The space of spinors is evidently acted upon by complex 2×2 matrices. As shown above, the product of two reflections in a pair of unit vectors defines a 2×2 matrix whose action on Euclidean vectors is a rotation. So there is an action of rotations o
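The stated properties of the matrix X = x · σ can be checked numerically. A small NumPy sketch (the helper name `to_matrix` is illustrative, not standard):

```python
import numpy as np

# Pauli matrices; a vector x maps to X = x . sigma
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def to_matrix(x):
    """Hermitian 2x2 matrix associated with a real 3-vector."""
    return np.tensordot(x, sigma, axes=1)

x = np.array([1.0, 2.0, 3.0])
X = to_matrix(x)

# det X = -|x|^2  and  X^2 = |x|^2 I
assert np.isclose(np.linalg.det(X), -(x @ x))
assert np.allclose(X @ X, (x @ x) * np.eye(2))

# reflection: for a unit vector u, -U X U encodes x reflected
# in the plane orthogonal to u (here u = e3 flips the x3 component)
u = np.array([0.0, 0.0, 1.0])
U = to_matrix(u)
assert np.allclose(-U @ X @ U, to_matrix([1.0, 2.0, -3.0]))
```

The determinant and square identities together say the map x ↦ X carries the Euclidean quadratic form into matrix algebra, which is what makes the reflection and rotation formulas work.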
https://en.wikipedia.org/wiki/Atlas%20%28computer%29
The Atlas Computer was one of the world's first supercomputers, in use from 1962 (when it was claimed to be the most powerful computer in the world) to 1972. Atlas' capacity promoted the saying that when it went offline, half of the United Kingdom's computer capacity was lost. It is notable for being the first machine with virtual memory (at that time referred to as 'one-level store') using paging techniques; this approach quickly spread, and is now ubiquitous. Atlas was a second-generation computer, using discrete germanium transistors. Atlas was created in a joint development effort among the University of Manchester, Ferranti International plc and the Plessey Co., plc. Two other Atlas machines were built: one for British Petroleum and the University of London, and one for the Atlas Computer Laboratory at Chilton near Oxford. A derivative system was built by Ferranti for Cambridge University. Called the Titan, or Atlas 2, it had a different memory organisation and ran a time-sharing operating system developed by Cambridge University Computer Laboratory. Two further Atlas 2s were delivered: one to the CAD Centre in Cambridge (later called CADCentre, then AVEVA), and the other to the Atomic Weapons Research Establishment (AWRE), Aldermaston. The University of Manchester's Atlas was decommissioned in 1971. The final Atlas, the CADCentre machine, was switched off in late 1976. Parts of the Chilton Atlas are preserved by National Museums Scotland in Edinburgh; the main console itself was rediscovered in July 2014 and is at Rutherford Appleton Laboratory in Chilton, near Oxford. History Background Through 1956 there was a growing awareness that the UK was falling behind the US in computer development. In April, B.W. Pollard of Ferranti told a computer conference that "there is in this country a range of medium-speed computers, and the only two machines which are really fast are the Cambridge EDSAC 2 and the Manchester Mark 2, although both are still very slow comp
https://en.wikipedia.org/wiki/PowerDsine
PowerDsine was a semiconductor and systems company, acquired by Microsemi in January 2007 following its IPO in 2004. It was established in 1994. Its initial products were telephone ringing generators, and it also developed xDSL remote power feeding modules, before inventing Power over Ethernet's (PoE) precursor, Power over LAN. PowerDsine supplied PoE injectors, PoE test equipment and PoE ICs.
https://en.wikipedia.org/wiki/Mostowski%20collapse%20lemma
In mathematical logic, the Mostowski collapse lemma, also known as the Shepherdson–Mostowski collapse, is a theorem of set theory introduced by Andrzej Mostowski and John Shepherdson. Statement Suppose that R is a binary relation on a class X such that R is set-like: R−1[x] = {y : y R x} is a set for every x, R is well-founded: every nonempty subset S of X contains an R-minimal element (i.e. an element x ∈ S such that R−1[x] ∩ S is empty), R is extensional: R−1[x] ≠ R−1[y] for all distinct elements x and y of X. The Mostowski collapse lemma states that for every such R there exists a unique transitive class (possibly proper) whose structure under the membership relation is isomorphic to (X, R), and the isomorphism is unique. The isomorphism maps each element x of X to the set of images of elements y of X such that y R x (Jech 2003:69). Generalizations Every well-founded set-like relation can be embedded into a well-founded set-like extensional relation. This implies the following variant of the Mostowski collapse lemma: every well-founded set-like relation is isomorphic to set-membership on a (non-unique, and not necessarily transitive) class. A mapping F such that F(x) = {F(y) : y R x} for all x in X can be defined for any well-founded set-like relation R on X by well-founded recursion. It provides a homomorphism of R onto a (non-unique, in general) transitive class. The homomorphism F is an isomorphism if and only if R is extensional. The well-foundedness assumption of the Mostowski lemma can be alleviated or dropped in non-well-founded set theories. In Boffa's set theory, every set-like extensional relation is isomorphic to set-membership on a (non-unique) transitive class. In set theory with Aczel's anti-foundation axiom, every set-like relation is bisimilar to set-membership on a unique transitive class, hence every bisimulation-minimal set-like relation is isomorphic to a unique transitive class. Application Every set model of ZF is set-like and extensional. If the model is well-founded
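For a finite well-founded extensional relation, the collapsing map F(x) = {F(y) : y R x} can be computed directly, for instance with nested frozensets standing in for hereditarily finite sets (the relation below is an illustrative example, not from the source):

```python
# a finite well-founded, extensional relation R on X = {0, 1, 2, 3},
# given as the map x -> R^{-1}[x] (the R-predecessors of x);
# extensionality holds because all predecessor sets are distinct
pred = {0: [], 1: [0], 2: [0, 1], 3: [1]}

def collapse(x):
    """Mostowski collapse F(x) = {F(y) : y R x}, as nested frozensets."""
    return frozenset(collapse(y) for y in pred[x])

empty = frozenset()
assert collapse(0) == empty                                   # ∅
assert collapse(1) == frozenset({empty})                      # {∅}
assert collapse(2) == frozenset({empty, frozenset({empty})})  # {∅, {∅}}
assert collapse(3) == frozenset({frozenset({empty})})         # {{∅}}
```

Because the relation is extensional, the four images are pairwise distinct, so F is injective here: the image is a transitive set isomorphic to (X, R) under membership, exactly as the lemma asserts.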
https://en.wikipedia.org/wiki/Begging%20in%20animals
Begging in animals is when an animal solicits being given resources by another animal. This is usually a young animal soliciting food from its parents, brood hosts or other adults. However, the resource is sometimes non-food related or may be solicited by adult animals. Begging behavior is most widely studied in birds; however, mammals, amphibians, and invertebrates also perform begging displays. Generally in food solicitation, begging behavior is instinctive, although in some instances it is learned (e.g. pet cats and dogs). While the ultimate causation for begging is an increase in the animal's individual fitness, several theories have been proposed for how food begging evolved, with proximate causes including scramble competition, honest signalling of need, and cooperative begging by siblings. Various types of information, such as nutritional status or immunocompetence, can be transmitted with auditory and visual begging signals, and the behavior can be modulated by several factors such as brood size and hormones. Similarly, several costs of begging have been investigated, including energetic, growth and predation costs. Begging from humans also occurs under artificial circumstances, such as donkeys, elephants and dolphins begging for food from tourists. History In 1950, Tinbergen and Perdeck tested the effects of visual stimuli on begging behavior by gull chicks, elucidating which characteristics of their parents' bills the chicks were reacting to. Using models varying in different characteristics, they tested multiple stimuli and found that gull chicks pecked most at a long, red bill with contrasting white bars at the end. Chicks also pecked at other models; in decreasing order of begging intensity were a two-dimensional cardboard cut-out of the head of a gull with a red spot on its bill, simply a bill with a red spot, and a cut-out of a gull's head with no red spot on the bill. These studies showed the chicks were responding to the red spot stimulus on their parents' b
https://en.wikipedia.org/wiki/FIR%20transfer%20function
A transfer function filter utilizes the transfer function and the convolution theorem to produce a filter. In this article, an example of such a filter using a finite impulse response is discussed, and an application of the filter to real-world data is shown. FIR (Finite Impulse Response) linear filters In digital processing, an FIR filter is a discrete-time filter that is time-invariant: its behavior does not depend on the specific point in time, but only on the time elapsed (the lag). The specification of this filter uses a transfer function whose frequency response passes only the desired frequencies of the input. This type of filter is non-recursive, which means that the output can be completely derived from a finite combination of input values, without any recursive use of the output. This means that there is no feedback loop that feeds previous output values back into the new output. This is an advantage over recursive filters such as the IIR (Infinite Impulse Response) filter in applications that require a linear phase response, because an FIR filter can pass the input without phase distortion. Mathematical model Let the output function be y(t) and the input x(t). The convolution of the input with an impulse response h provides the filtered output. The mathematical model of this type of filter is: y(t) = Σ_k h(k) x(t − k), where h() is the impulse response of the filter. The convolution allows the filter to combine only those input samples recorded at the corresponding lags. This filter passes the input values x(t − k) only when k falls into the support region of the function h; because that support is finite, the filter is said to have a finite response. If k is outside of the support region, the impulse response is zero, which makes that contribution to the output zero. The central idea of this h() function can be thought of as a quotient of two functions. According to Huang (1981), using this mathematical model, there are four methods of designing non-recursive linear filters with various concurren
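As a minimal sketch of the non-recursive model y(t) = Σ_k h(k) x(t − k), the following assumes a 5-tap moving average as the impulse response h (an illustrative choice, not from the source) and applies it with NumPy's convolution:

```python
import numpy as np

# 5-tap moving-average FIR filter: h(k) = 1/5 for k = 0, ..., 4
h = np.ones(5) / 5

# test input: a slow sinusoid plus an unwanted period-5 ripple
t = np.arange(500)
slow = np.sin(2 * np.pi * t / 100)
x = slow + 0.5 * np.sin(2 * np.pi * t / 5)

# y(t) = sum_k h(k) x(t - k): finite, non-recursive convolution;
# no previous outputs are fed back into the computation
y = np.convolve(x, h, mode="valid")

# the averager has a spectral null at period 5, so the ripple is
# removed, while the slow component passes with a 2-sample group delay
residual = y - slow[2:2 + len(y)]
assert np.max(np.abs(residual)) < 0.02
```

The constant group delay (2 samples here, i.e. (N − 1)/2 for N taps) is the linear-phase property mentioned above: every frequency component is delayed equally, so the waveform shape is preserved.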
https://en.wikipedia.org/wiki/Fano%20fibration
In algebraic geometry, a Fano fibration or Fano fiber space, named after Gino Fano, is a morphism of varieties whose general fiber is a Fano variety (in other words has ample anticanonical bundle) of positive dimension. The ones arising from extremal contractions in the minimal model program are called Mori fibrations or Mori fiber spaces (for Shigefumi Mori). They appear as standard forms for varieties without a minimal model. See also Ample line bundle Fiber bundle Fibration Quasi-fibration
https://en.wikipedia.org/wiki/Groma%20%28surveying%29
The groma (as standardized in imperial Latin; sometimes croma, or gruma in the literature of republican times) was a Roman surveying instrument. The groma allowed the projection of right angles and straight lines, thus enabling centuriation (setting up of a rectangular grid). It is the only Roman surveying tool of which examples survive to the present day. Construction The tool utilizes a rotating horizontal cross with plumb bobs hanging down from all four ends. The center of the cross represents the umbilicus soli (reference point). The cross is mounted on a vertical Jacob's staff, the so-called ferramentum. The umbilicus is offset with respect to the ferramentum by using a bracket pivoting on the top of the staff (frequently ferramentum is used to describe the whole tool). The purpose of offsetting the reference point from the Jacob's staff (vertical pole) is twofold: it enables sighting of lines on the ground through a pair of strings (used to suspend an opposite pair of plumbs from the cross) without the staff obscuring the view, and it allows placing the reference point over a sturdy object (like a boundary stone), where the staff cannot be inserted. Bracket controversy The pivoting bracket on the top of the staff was suggested in the 1912 reconstruction by Adolf Schulten and confirmed soon afterwards. However, as asserted by Thorkild Schiöler in 1994, the 5-kilogram cross found in Pompeii is too heavy to be supported in this way, and thus, he argued, the bracket never existed. Furthermore, there is no archeological evidence of the bracket, and the images of gromas on tombstones do not show it. The archeologists rejecting the bracket suggest that the staff was slightly angled to permit sighting without the pole obscuring the view. Use Despite a great deal of surviving information about the groma (and the simplicity of the tool itself), the details of its operation are not entirely clear. The general idea is straightforward: the staff was inserted into
https://en.wikipedia.org/wiki/FLEXPART
The FLEXible PARTicle dispersion model (FLEXPART) is a Lagrangian particle dispersion model used to simulate air parcel trajectories. It can be run in either forward or backward mode. The forward mode is typically used to determine the downwind concentration or mixing ratio of pollutants. The backward mode can be used to estimate footprint areas, to determine the origin of observed emissions. History FLEXPART has inherited large portions of its code from its predecessor, FLEXTRA (FLEXible TRAjectory model). The first version of FLEXPART was released in 1998 and was considered free software. Since the release of version 8.2 in 2010, the code is distributed under the GNU General Public License (GPL) Version 3. Due to a growing user base, the main developers decided to provide an online platform for developers and modellers, which was launched together with the release of version 9.0 in June 2012. In addition to the main FLEXPART code, several branches have been developed for use with mesoscale meteorological models. In particular, FLEXPART-WRF was created to work with the open source WRF model. The first version of FLEXPART-WRF was presented in 2006 by Jerome D. Fast and Richard C. Easter. The model was later renamed the "PNNL Integrated Lagrangian Transport" (PILT) model since the code base started to deviate extensively from the main FLEXPART branch. In 2007, a new technical report was presented where the model was once again referred to as FLEXPART-WRF. At this time, there were still a number of important features missing in FLEXPART-WRF (which were available in FLEXPART). A number of research groups started developing FLEXPART-WRF on their own; by 2011, there were three separate projects on GitHub, each with a partial goal to implement a scheme for wet deposition. In 2013, a major update of FLEXPART-WRF was released with support from the FLEXPART developers; the release included a working wet deposition scheme as well as new run-time options for wind fields and t
https://en.wikipedia.org/wiki/Proximity%20effect%20%28audio%29
The proximity effect in audio is an increase in bass or low frequency response when a sound source is close to a directional or cardioid microphone. Proximity effect is a change in the frequency response of a directional pattern microphone that results in an emphasis on lower frequencies. It is caused by the use of ports to create directional polar pickup patterns, so omni-directional microphones do not exhibit the effect (this is not necessarily true of the "omni" pattern on multipattern condenser mics, which create the "omni" pattern by summing two back-to-back cardioid capsules, which may or may not share a common backplate.) Proximity effect can be viewed in two ways. In some settings, sound engineers may view it as undesirable, and so the type of microphone or microphone practice may be chosen in order to reduce the proximity effect. On the other hand, some microphone users seek to intentionally use the proximity effect, such as beat boxing singers in hip hop music. Technical explanation Depending on the microphone design, proximity effect may result in a boost of up to 16 dB or more at lower frequencies, depending on the size of the microphone's diaphragm and the distance of the source. A ready (and common) example of proximity effect can be observed with cardioid dynamic vocal microphones (though it is not limited to this class of microphone) when the vocalist is very close to or even touching the mic with their lips. The effect is heard as a 'fattening up' of the voice. Many radio broadcast microphones are large diameter cardioid pickup pattern microphones, and radio announcers are often observed to employ proximity effect, adding a sense of gravitas and depth to the voice. Proximity effect is sometimes referred to as "bass tip-up." Angular dependence To explain how the proximity effect arises in directional microphones, it is first necessary to briefly describe how a directional microphone works. A microphone is constructed with a diaphragm whose mech
https://en.wikipedia.org/wiki/Ki-67%20%28protein%29
Antigen Kiel 67, also known as Ki-67 or MKI67 (marker of proliferation Kiel 67), is a protein that in humans is encoded by the MKI67 gene (antigen identified by monoclonal antibody Ki-67). Function Antigen KI-67 is a nuclear protein that is associated with cellular proliferation and ribosomal RNA transcription. Inactivation of antigen KI-67 leads to inhibition of ribosomal RNA synthesis, but does not significantly affect cell proliferation in vivo: Ki-67 mutant mice developed normally and cells lacking Ki-67 proliferated efficiently. Use as a marker of proliferating cells The Ki-67 protein (also known as MKI67) is a cellular marker for proliferation, and can be used in immunohistochemistry. It is strictly associated with cell proliferation. During interphase, the Ki-67 antigen can be exclusively detected within the cell nucleus, whereas in mitosis most of the protein is relocated to the surface of the chromosomes. Ki-67 protein is present during all active phases of the cell cycle (G1, S, G2, and mitosis), but is absent in resting (quiescent) cells (G0). Cellular content of Ki-67 protein markedly increases during cell progression through S phase of the cell cycle. In breast cancer, Ki-67 identifies a high proliferative subset of patients with ER-positive breast cancer who derive greater benefit from adjuvant chemotherapy. Antibody labeling Ki-67 is an excellent marker to determine the growth fraction of a given cell population. The fraction of Ki-67-positive tumor cells (the Ki-67 labeling index) is often correlated with the clinical course of cancer. The best-studied examples in this context are prostate, brain and breast carcinomas, as well as nephroblastoma and neuroendocrine tumors. For these types of tumors, the prognostic value for survival and tumor recurrence has repeatedly been proven in uni- and multivariate analysis. MIB-1 Ki-67 and MIB-1 monoclonal antibodies are directed against different epitopes of the same proliferation-related antigen. Ki-67 and M
https://en.wikipedia.org/wiki/Wilderness-acquired%20diarrhea
Wilderness-acquired diarrhea is a variety of traveler's diarrhea in which backpackers and other outdoor enthusiasts are affected. Potential sources are contaminated food or water, or "hand-to-mouth", directly from another person who is infected. Cases generally resolve spontaneously, with or without treatment, and the cause is typically unknown. The National Outdoor Leadership School has recorded about one incident per 5,000 person-field days by following strict protocols on hygiene and water treatment. More limited, separate studies have presented highly varied estimated rates of affliction that range from 3 percent to 74 percent of wilderness visitors. One survey found that long-distance Appalachian Trail hikers reported diarrhea as their most common illness. Based on reviews of epidemiologic data and literature, some researchers believe that the risks have been over-stated and are poorly understood by the public. Symptoms and signs The average incubation periods for giardiasis and cryptosporidiosis are each 7 days. Certain other bacterial and viral agents have shorter incubation periods, although hepatitis may take weeks to manifest itself. The onset usually occurs within the first week of return from the field, but may also occur at any time while hiking. Most cases begin abruptly and usually result in increased frequency, volume, and weight of stool. Typically, a hiker experiences at least four to five loose or watery bowel movements each day. Other commonly associated symptoms are nausea, vomiting, abdominal cramping, bloating, low fever, urgency, and malaise, and usually the appetite is affected. The condition is much more serious if there is blood or mucus in stools, abdominal pain, or high fever. Dehydration is a possibility. Life-threatening illness resulting from WAD is extremely rare but can occur in people with weakened immune systems. Some people may be carriers and not exhibit symptoms. Causes Infectious diarrhea acquired in the wilderness is ca
https://en.wikipedia.org/wiki/MYB%20%28gene%29
Myb genes are part of a large gene family of transcription factors found in animals and plants. In humans, it includes Myb proto-oncogene like 1 and Myb-related protein B in addition to MYB proper. Members of the extended SANT/Myb family also include the SANT domain and other similar all-helical homeobox-like domains. Function Viral The Myb gene family is named after the eponymous gene in Avian myeloblastosis virus. The viral Myb (v-Myb, ) recognizes the sequence 5'-YAACKG-3'. It causes myeloblastosis (myeloid leukemia) in chickens. Compared to the normal animal cellular Myb (c-myb), v-myb contains deletions in the C-terminal regulatory domain, leading to aberrant activation of other oncogenes. Animals Myb proto-oncogene protein is a member of the MYB (myeloblastosis) family of transcription factors. The protein contains three domains, an N-terminal DNA-binding domain, a central transcriptional activation domain and a C-terminal domain involved in transcriptional repression. It may play a role in cell cycle regulation. Like the viral version, this gene is an oncogene, and rearrangements of the gene (often involving deletion in the C-terminal domain) cause cancer. Plants MYB factors represent a family of proteins that include the conserved MYB DNA-binding domain. Plants contain a MYB-protein subfamily that is characterised by the R2R3-type MYB domain. In maize, phlobaphenes are synthesized in the flavonoid synthetic pathway by polymerisation of flavan-4-ols; this step is regulated by a gene encoding an R2R3 Myb-like transcriptional activator of the A1 gene, which encodes dihydroflavonol 4-reductase (the enzyme reducing dihydroflavonols into flavan-4-ols), while another gene (Suppressor of Pericarp Pigmentation 1, or SPP1) acts as a suppressor. The maize P gene encodes a Myb homolog that recognizes the sequence CCWACC, in sharp contrast with the YAACGG bound by vertebrate Myb proteins. In sorghum, the corresponding yellow seed 1 gene (y1) also encodes a R2R3 type of Myb domain protein that regu
https://en.wikipedia.org/wiki/15-Hydroxyeicosatetraenoic%20acid
15-Hydroxyeicosatetraenoic acid (also termed 15-HETE, 15(S)-HETE, and 15S-HETE) is an eicosanoid, i.e. a metabolite of arachidonic acid. Various cell types metabolize arachidonic acid to 15(S)-hydroperoxyeicosatetraenoic acid (15(S)-HpETE). This initial hydroperoxide product is extremely short-lived in cells: if not otherwise metabolized, it is rapidly reduced to 15(S)-HETE. Both of these metabolites, depending on the cell type which forms them, can be further metabolized to 15-oxo-eicosatetraenoic acid (15-oxo-ETE), 5(S),15(S)-dihydroxy-eicosatetraenoic acid (5(S),15(S)-diHETE), 5-oxo-15(S)-hydroxyeicosatetraenoic acid (5-oxo-15(S)-HETE), a subset of specialized pro-resolving mediators viz., the lipoxins, a class of pro-inflammatory mediators, the eoxins, and other products that have less well-defined activities and functions. Thus, 15(S)-HETE and 15(S)-HpETE, in addition to having intrinsic biological activities, are key precursors to numerous biologically active derivatives. Some cell types (e.g. platelets) metabolize arachidonic acid to the stereoisomer of 15(S)-HpETE, 15(R)-HpETE. Both stereoisomers may also be formed as result of the metabolism of arachidonic acid by cellular microsomes or as a result of arachidonic acid auto-oxidation. Similar to 15(S)-HpETEs, 15(R)-HpETE may be rapidly reduced to 15(R)-HETE. These R,S stereoisomers differ only in having their hydroxy residue in opposite orientations. While the two R stereoisomers are sometimes referred to as 15-HpETE and 15-HETE, proper usage should identify them as R stereoisomers. 15(R)-HpETE and 15(R)-HETE lack some of the activity attributed to their S stereoisomers but can be further metabolized to bioactive products viz., the 15(R) class of lipoxins (also termed epi-lipoxins). 15(S)-HETE, 15(S)-HpETE, and many of their derivative metabolites are thought to have physiologically important functions. They appear to act as hormone-like autocrine and paracrine signaling agents that are involved in regula
https://en.wikipedia.org/wiki/Language-agnostic
Language-agnostic programming or scripting (also called language-neutral, language-independent, or cross-language) is a software paradigm in which no particular language is promoted. In introductory instruction, the term refers to teaching principles rather than language features. For example, a textbook such as Structure and Interpretation of Computer Programs is really a language-agnostic book about programming, and is not about programming in Scheme, per se. As a development methodology, the concept suggests that a particular language should be chosen because of its appropriateness for a particular task (taking into consideration all factors, including ecosystem, developer skill-sets, performance, etc.), and not purely because of the skill-set available within a development team. For example, a language agnostic Java development team might choose to use Ruby or Perl for some development work, where Ruby or Perl would be more appropriate than Java. "Cross-Language" in programming and scripting describes a program in which two or more languages are used to good effect within a program's code, with each contributing its distinctive benefits. Related terms Language-independent specification Cross-language information retrieval, referring to natural languages, not programming languages Language independent datatypes See also Bilingual (disambiguation) Language-independent (disambiguation) Glue language Language binding Middleware Polyglot (computing)
https://en.wikipedia.org/wiki/Acyl-CoA-binding%20protein
In molecular biology, the Acyl-CoA-binding protein (ACBP) is a small (10 kDa) protein that binds medium- and long-chain acyl-CoA esters with very high affinity and may function as an intracellular carrier of acyl-CoA esters. ACBP is also known as diazepam binding inhibitor (DBI) or endozepine (EP) because of its ability to displace diazepam from the benzodiazepine (BZD) recognition site located on the GABA type A receptor. It is therefore possible that this protein also acts as a neuropeptide to modulate the action of the GABA receptor. ACBP is a highly conserved protein of about 90 amino acids that is found in all four eukaryotic kingdoms, Animalia, Plantae, Fungi and Protista, and in some eubacterial species. Although ACBP occurs as a completely independent protein, intact ACB domains have been identified in a number of large, multifunctional proteins in a variety of eukaryotic species. These include large membrane-associated proteins with N-terminal ACB domains, multifunctional enzymes with both ACB and peroxisomal enoyl-CoA Delta(3), Delta(2)-enoyl-CoA isomerase domains, and proteins with both an ACB domain and ankyrin repeats. The ACB domain consists of four alpha-helices arranged in a bowl shape with a highly exposed acyl-CoA-binding site. The ligand is bound through specific interactions with residues on the protein, most notably several conserved positive charges that interact with the phosphate group on the adenosine 3'-phosphate moiety, and the acyl chain is sandwiched between the hydrophobic surfaces of CoA and the protein. Other proteins containing an ACB domain include: Endozepine-like peptide (ELP) (gene DBIL5) from mouse. ELP is a testis-specific ACBP homologue that may be involved in the energy metabolism of the mature sperm. MA-DBI, a transmembrane protein of unknown function which has been found in mammals. MA-DBI contains an N-terminal ACB domain. DRS-1, a human protein of unknown function that contains an N-terminal ACB domain and a C-terminal
https://en.wikipedia.org/wiki/Paraspecies
A paraspecies (a paraphyletic species) is a species, living or fossil, that gave rise to one or more daughter species without itself becoming extinct. Geographically widespread species that have given rise to one or more daughter species as peripheral isolates without themselves becoming extinct (i.e. through peripatric speciation) are examples of paraspecies. Paraspecies are expected from evolutionary theory (Crisp and Chandler, 1996), and are empirical realities in many terrestrial and aquatic taxa. Examples A well-documented example of a living mammal species that gave rise to another living species is the evolution of the polar bear from the brown bear. An example of a living reptile paraspecies is New Zealand's North Island tuatara Sphenodon punctatus, which gave rise to the Brothers Island tuatara Sphenodon guntheri. An example of a living bird paraspecies is Empidonax occidentalis, the Cordilleran flycatcher. An example of a living plant paraspecies is Pouteria cuspidata, the pouteria trees or eggfruits. See also Cladogenesis Anagenesis, also known as "phyletic change", where no branching event occurred (or is known to have occurred) Notes and references Evolutionary biology
https://en.wikipedia.org/wiki/Composition%20ring
In mathematics, a composition ring is a commutative ring (R, 0, +, −, ·), possibly without an identity 1 (see non-unital ring), together with an operation such that, for any three elements one has It is not generally the case that , nor is it generally the case that (or ) has any algebraic relationship to and . Examples There are a few ways to make a commutative ring R into a composition ring without introducing anything new. Composition may be defined by for all f,g. The resulting composition ring is rather uninteresting. Composition may be defined by for all f,g. This is the composition rule for constant functions. If R is a boolean ring, then multiplication may double as composition: for all f,g. More interesting examples can be formed by defining a composition on another ring constructed from R. The polynomial ring R[X] is a composition ring where for all . The formal power series ring R[[X]] also has a substitution operation, but it is only defined if the series g being substituted has zero constant term (if not, the constant term of the result would be given by an infinite series with arbitrary coefficients). Therefore, the subset of R[[X]] formed by power series with zero constant coefficient can be made into a composition ring with composition given by the same substitution rule as for polynomials. Since nonzero constant series are absent, this composition ring does not have a multiplicative unit. If R is an integral domain, the field R(X) of rational functions also has a substitution operation derived from that of polynomials: substituting a fraction g1/g2 for X into a polynomial of degree n gives a rational function with denominator , and substituting into a fraction is given by However, as for formal power series, the composition cannot always be defined when the right operand g is a constant: in the formula given the denominator should not be identically zero. One must therefore restrict to a subring of R(X) to have a wel
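A minimal sketch (not from the article; helper names are invented for illustration) of the polynomial case R[X] with R = ℤ: polynomials are coefficient lists, composition is substitution, and the three composition-ring axioms — (f + g) ∘ h = f ∘ h + g ∘ h, (f · g) ∘ h = (f ∘ h) · (g ∘ h), and (f ∘ g) ∘ h = f ∘ (g ∘ h) — can be checked on examples:

```python
def p_add(f, g):
    """Add two polynomials given as coefficient lists (index = degree)."""
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0) for i in range(n)]

def p_mul(f, g):
    """Multiply two polynomials by convolving their coefficients."""
    out = [0] * (len(f) + len(g) - 1) if f and g else [0]
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def p_comp(f, g):
    """Composition f(g(X)), computed by Horner's rule in polynomial arithmetic."""
    out = [0]
    for a in reversed(f):
        out = p_add(p_mul(out, g), [a])
    return out

def norm(f):
    """Strip trailing zero coefficients so equal polynomials compare equal."""
    while len(f) > 1 and f[-1] == 0:
        f = f[:-1]
    return f

# f = 1 + 2X, g = X^2, h = 3 + X
f, g, h = [1, 2], [0, 0, 1], [3, 1]
assert norm(p_comp(p_add(f, g), h)) == norm(p_add(p_comp(f, h), p_comp(g, h)))  # (f+g)∘h = f∘h + g∘h
assert norm(p_comp(p_mul(f, g), h)) == norm(p_mul(p_comp(f, h), p_comp(g, h)))  # (f·g)∘h = (f∘h)·(g∘h)
assert norm(p_comp(p_comp(f, g), h)) == norm(p_comp(f, p_comp(g, h)))           # (f∘g)∘h = f∘(g∘h)
```

The same substitution rule, restricted to series with zero constant term, gives the formal-power-series composition ring mentioned above.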
https://en.wikipedia.org/wiki/Hydrogen%20valve
A hydrogen valve is a special type of valve used for hydrogen at very low temperatures or high pressures, for example in hydrogen storage or hydrogen vehicles. Types High-pressure ball valves are rated up to 6000 psig (413 bar) at 250 °F (121 °C), with flow coefficients from 4.0 to 13.8. Material Valves used in industrial hydrogen and oxygen applications, such as petrochemical processes, are often made of Inconel. See also Diaphragm valve Gate valve Hydrogen tank
https://en.wikipedia.org/wiki/Immune%20adherence
Immune adherence was described by Nelson (1953) as an in vitro immunological reaction between normal erythrocytes and a wide variety of microorganisms sensitized with their individually specific antibody and complement; erythrocytes were observed to adhere to the microorganisms. It was later recognized to occur in vivo. The phenomenon is now understood as a complement-dependent binding reaction of erythrocytes to microorganisms in which specific antibodies are engaged. The reaction proceeds as follows: microorganisms are bound by their specific antibodies (if these have been produced), which activate the classical pathway of the complement system. The cascade runs from C1 to C3b through C4b, with C3b further transformed to iC3b (an inactive derivative of C3b); all of these components, from C4b onward, remain bound to the surface of the microbe. Because primate erythrocytes express complement receptor 1 (CR1) on their surface, which binds C4b, C3b, and iC3b, erythrocytes accumulate on the microbe via CR1–complement binding. Function of the immune adherence (in vivo) Human erythrocytes express 100 to 1,000 CR1 per cell, the average number of approximately 300 being an inherited characteristic. Immune complexes bound to erythrocytes are effectively removed from the circulation, which is presumed to prevent their deposition at tissue sites, for example, the renal glomerulus. Erythrocytes bearing immune complexes traverse the sinusoids of the liver and spleen, where they encounter fixed phagocytes. Phagocytes expressing CR1, CR3, and Fcγ receptors effect a transfer of the immune complexes to their own surface. The erythrocytes then leave the liver and spleen free of immune complexes, ready to adhere to and transfer immune complexes in the next round.
https://en.wikipedia.org/wiki/Scrypt
In cryptography, scrypt (pronounced "ess crypt") is a password-based key derivation function created by Colin Percival in March 2009, originally for the Tarsnap online backup service. The algorithm was specifically designed to make it costly to perform large-scale custom hardware attacks by requiring large amounts of memory. In 2016, the scrypt algorithm was published by IETF as RFC 7914. A simplified version of scrypt is used as a proof-of-work scheme by a number of cryptocurrencies, first implemented by an anonymous programmer called ArtForz in Tenebrix and followed by Fairbrix and Litecoin soon after. Introduction A password-based key derivation function (password-based KDF) is generally designed to be computationally intensive, so that it takes a relatively long time to compute (say on the order of several hundred milliseconds). Legitimate users only need to perform the function once per operation (e.g., authentication), and so the time required is negligible. However, a brute-force attack would likely need to perform the operation billions of times, at which point the time requirements become significant and, ideally, prohibitive. Previous password-based KDFs (such as the popular PBKDF2 from RSA Laboratories) have relatively low resource demands, meaning they do not require elaborate hardware or very much memory to perform. They are therefore easily and cheaply implemented in hardware (for instance on an ASIC or even an FPGA). This allows an attacker with sufficient resources to launch a large-scale parallel attack by building hundreds or even thousands of implementations of the algorithm in hardware and having each search a different subset of the key space. This divides the amount of time needed to complete a brute-force attack by the number of implementations available, very possibly bringing it down to a reasonable time frame. The scrypt function is designed to hinder such attempts by raising the resource demands of the algorithm. Specifically, the alg
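As an illustration (not part of the original article), Python's standard library exposes scrypt as hashlib.scrypt when built against a suitable OpenSSL; parameter names follow RFC 7914. A minimal key-derivation sketch:

```python
import hashlib
import os

# Parameters follow RFC 7914: n = CPU/memory cost (a power of two),
# r = block size, p = parallelization factor.
password = b"correct horse battery staple"  # example password only
salt = os.urandom(16)                       # unique random salt per password

key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

# Memory use is roughly 128 * n * r bytes (~16 MiB here) — the deliberately
# high memory demand is what makes large-scale hardware attacks expensive.
print(key.hex())
```

Raising n increases both time and memory cost, so an attacker cannot trade memory for cheap parallel hardware as easily as with PBKDF2.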
https://en.wikipedia.org/wiki/Cori%20cycle
The Cori cycle (also known as the lactic acid cycle), named after its discoverers, Carl Ferdinand Cori and Gerty Cori, is a metabolic pathway in which lactate, produced by anaerobic glycolysis in muscles, is transported to the liver and converted to glucose, which then returns to the muscles and is cyclically metabolized back to lactate. Process Muscular activity requires ATP, which is provided by the breakdown of glycogen in the skeletal muscles. The breakdown of glycogen, known as glycogenolysis, releases glucose in the form of glucose 1-phosphate (G1P). The G1P is converted to G6P by phosphoglucomutase. G6P is readily fed into glycolysis, (or can go into the pentose phosphate pathway if G6P concentration is high) a process that provides ATP to the muscle cells as an energy source. During muscular activity, the store of ATP needs to be constantly replenished. When the supply of oxygen is sufficient, this energy comes from feeding pyruvate, one product of glycolysis, into the citric acid cycle, which ultimately generates ATP through oxygen-dependent oxidative phosphorylation. When oxygen supply is insufficient, typically during intense muscular activity, energy must be released through anaerobic metabolism. Lactic acid fermentation converts pyruvate to lactate by lactate dehydrogenase. Most importantly, fermentation regenerates NAD+, maintaining its concentration so additional glycolysis reactions can occur. The fermentation step oxidizes the NADH produced by glycolysis back to NAD+, transferring two electrons from NADH to reduce pyruvate into lactate. (Refer to the main articles on glycolysis and fermentation for the details.) Instead of accumulating inside the muscle cells, lactate produced by anaerobic fermentation is taken up by the liver. This initiates the other half of the Cori cycle. In the liver, gluconeogenesis occurs. From an intuitive perspective, gluconeogenesis reverses both glycolysis and fermentation by converting lactate first into pyruvate, an
https://en.wikipedia.org/wiki/Hahn%20series
In mathematics, Hahn series (sometimes also known as Hahn–Mal'cev–Neumann series) are a type of formal infinite series. They are a generalization of Puiseux series (themselves a generalization of formal power series) and were first introduced by Hans Hahn in 1907 (and then further generalized by Anatoly Maltsev and Bernhard Neumann to a non-commutative setting). They allow for arbitrary exponents of the indeterminate so long as the set supporting them forms a well-ordered subset of the value group (typically or ). Hahn series were first introduced, as groups, in the course of the proof of the Hahn embedding theorem and then studied by him in relation to Hilbert's second problem. Formulation The field of Hahn series (in the indeterminate ) over a field and with value group (an ordered group) is the set of formal expressions of the form with such that the support of f is well-ordered. The sum and product of and are given by and (in the latter, the sum over values such that , and is finite because a well-ordered set cannot contain an infinite decreasing sequence). For example, is a Hahn series (over any field) because the set of rationals is well-ordered; it is not a Puiseux series because the denominators in the exponents are unbounded. (And if the base field K has characteristic p, then this Hahn series satisfies the equation so it is algebraic over .) Properties Properties of the valued field The valuation of a non-zero Hahn series is defined as the smallest such that (in other words, the smallest element of the support of ): this makes into a spherically complete valued field with value group and residue field (justifying a posteriori the terminology). In fact, if has characteristic zero, then is up to (non-unique) isomorphism the only spherically complete valued field with residue field and value group . The valuation defines a topology on . If , then corresponds to an ultrametric absolute value , with respect to which is a c
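In standard notation (reconstructed here, since the original formulas did not survive extraction), a Hahn series over a field K with value group Γ and its arithmetic take the form:

```latex
f = \sum_{e \in \Gamma} c_e T^e, \quad c_e \in K, \qquad
\operatorname{supp}(f) = \{\, e \in \Gamma : c_e \neq 0 \,\} \ \text{well-ordered},
```
```latex
f + g = \sum_{e} (c_e + d_e)\, T^e, \qquad
fg = \sum_{e} \Bigl( \sum_{e' + e'' = e} c_{e'}\, d_{e''} \Bigr) T^e,
```

where the inner sum in the product is finite precisely because a well-ordered support contains no infinite decreasing sequence, as the article notes.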
https://en.wikipedia.org/wiki/Genotype%E2%80%93phenotype%20distinction
The genotype–phenotype distinction is drawn in genetics. "Genotype" is an organism's full hereditary information. "Phenotype" is an organism's actual observed properties, such as morphology, development, or behavior, and the consequences thereof. This distinction is fundamental in the study of inheritance of traits and their evolution. Overview The terms "genotype" and "phenotype" were created by Wilhelm Johannsen in 1911, although the meaning of the terms and the significance of the distinction have evolved since they were introduced. It is the organism's physical properties that directly determine its chances of survival and reproductive output, but the inheritance of physical properties is dependent on the inheritance of genes. Therefore, understanding the theory of evolution via natural selection requires understanding the genotype–phenotype distinction. The genes contribute to a trait, and the phenotype is the observable expression of the genes (and therefore the genotype that affects the trait). If a white mouse had recessive genes that caused the genes responsible for color to be inactive, its genotype would be responsible for its phenotype (the white color). The mapping of a set of genotypes to a set of phenotypes is sometimes referred to as the genotype–phenotype map. An organism's genotype is a major (the largest by far for morphology) influencing factor in the development of its phenotype, but it is not the only one. Even two organisms with identical genotypes normally differ in their phenotypes. One experiences this in everyday life with monozygous (i.e. identical) twins. Identical twins share the same genotype, since their genomes are identical; but they never have the same phenotype, although their phenotypes may be very similar. This is apparent in the fact that close relations can always tell them apart, even though others might not be able to see the subtle differences. Further, identical twins can be distinguished by their fingerprints, which
https://en.wikipedia.org/wiki/Soil%20Biology%20and%20Biochemistry
Soil Biology and Biochemistry is a monthly peer-reviewed scientific journal that was established in 1969 and is currently published by Elsevier. Its founding editor-in-chief was John Saville Waid. It publishes original research papers that describe and explain biological processes occurring in soil. Since 2020, the editors-in-chief are Karl Ritz (University of Nottingham) and Josh Schimel (University of California Santa Barbara). According to the Journal Citation Reports, the journal has a 2020 impact factor of 7.609.
https://en.wikipedia.org/wiki/MeWatch
meWatch (stylized as mewatch, and formerly as meWATCH) is a Singaporean digital video on demand service brand owned by Mediacorp. It was launched on 1 February 2013 as Toggle, an over-the-top media service and entertainment and lifestyle website. On 1 April 2015, xinmsn, an internet portal that was a joint venture between MediaCorp and Microsoft, was closed down and merged with Toggle. On 30 January 2020, Toggle was renamed meWATCH. Content meWATCH offers worldwide audiences on-demand streaming of programs from Mediacorp's archived library as well as original webseries. In addition, meWatch offers live streaming of Mediacorp's free-to-air channels (exclusive to Singapore). It also offers catch-up TV for viewers to watch prime time shows they have missed from the previous few days. In 2016, meWATCH (then Toggle) began to offer made-for-digital productions under the brand Toggle Originals. Certain original series, such as I Want to Be a Star, may be telecast on Mediacorp's television channels, typically in the late night programming slots. In 2019, meWATCH began offering additional content from HBO Go for Singapore residents only. In the same year, Mediacorp and Wattpad inked a partnership to develop scripted series and films for meWATCH and Mediacorp FTA channels, based on books by Singapore-based Wattpad writers. On 30 May 2019, meWATCH began offering on-demand Korean movies from tvN Movies. On 30 January 2020, Toggle was renamed meWATCH. At the same time, Mediacorp made a content deal with HOOQ to stream the latter's original content. However, on 27 March 2020, HOOQ filed for liquidation, shut down on 30 April 2020, and was eventually acquired by Coupang in July 2020 to be used as the basis of its streaming service, Coupang Play. Most recently, the service has carried anime from Medialink, Mighty Media and Muse Communication, dozens of Mandarin series from Chinese production
https://en.wikipedia.org/wiki/Krull%E2%80%93Schmidt%20category
In category theory, a branch of mathematics, a Krull–Schmidt category is a generalization of categories in which the Krull–Schmidt theorem holds. They arise, for example, in the study of finite-dimensional modules over an algebra. Definition Let C be an additive category, or more generally an additive -linear category for a commutative ring . We call C a Krull–Schmidt category provided that every object decomposes into a finite direct sum of objects having local endomorphism rings. Equivalently, C has split idempotents and the endomorphism ring of every object is semiperfect. Properties One has the analogue of the Krull–Schmidt theorem in Krull–Schmidt categories: An object is called indecomposable if it is not isomorphic to a direct sum of two nonzero objects. In a Krull–Schmidt category we have that an object is indecomposable if and only if its endomorphism ring is local. every object is isomorphic to a finite direct sum of indecomposable objects. if where the and are all indecomposable, then , and there exists a permutation such that for all . One can define the Auslander–Reiten quiver of a Krull–Schmidt category. Examples An abelian category in which every object has finite length. This includes as a special case the category of finite-dimensional modules over an algebra. The category of finitely-generated modules over a finite -algebra, where is a commutative Noetherian complete local ring. The category of coherent sheaves on a complete variety over an algebraically-closed field. A non-example The category of finitely-generated projective modules over the integers has split idempotents, and every module is isomorphic to a finite direct sum of copies of the regular module, the number being given by the rank. Thus the category has unique decomposition into indecomposables, but is not Krull-Schmidt since the regular module does not have a local endomorphism ring. See also Quiver Karoubi envelope Notes
https://en.wikipedia.org/wiki/Kodokushi
Kodokushi or lonely death is a Japanese phenomenon of people dying alone and remaining undiscovered for a long period of time. First described in the 1980s, kodokushi has become an increasing problem in Japan, attributed to economic troubles and Japan's increasingly elderly population. It is also known by terms translating to "isolation death" and "live alone death". History Kodokushi was first documented in Japanese newspapers during the 1970s, and studies exploring the phenomenon began as early as 1973, with surveys conducted by the National Social Welfare Council and National Union of Voluntary District Welfare Commissioners. The first instance that became national news in Japan was in 2000, when the corpse of a 69-year-old man was discovered three years after his death; his monthly rent and utilities had been withdrawn automatically from his bank account, and only after his savings were depleted was his skeleton discovered at his home. The body had been consumed by maggots and beetles. Statistics Statistics regarding kodokushi are often incomplete or inaccurate. Japan's public broadcaster NHK reported that 32,000 elderly people nationwide died alone in 2009. The number of kodokushi tripled between 1983 and 1994, with 1,049 lonely deaths reported in Tokyo in 1994. In 2008, there were more than 2,200 reported lonely deaths in Tokyo. Similar numbers were reported in 2011. One private moving company in Osaka reported that 20 percent of its jobs (300 per year) involved removing the belongings of people who had died lonely deaths. Approximately 4.5% of funerals in 2006 involved instances of kodokushi. Kodokushi mostly affects men who are 50 or older. Causes Several reasons for the increase in kodokushi have been proposed. One proposed reason is increased social isolation. A decreasing proportion of elderly Japanese people are living in multi-generational housing and are instead living alone. Elderly people who live alone are more likely to lack social contacts with famil
https://en.wikipedia.org/wiki/Club%20set
In mathematics, particularly in mathematical logic and set theory, a club set is a subset of a limit ordinal that is closed under the order topology, and is unbounded (see below) relative to the limit ordinal. The name club is a contraction of "closed and unbounded". Formal definition Formally, if is a limit ordinal, then a set is closed in if and only if for every if then Thus, if the limit of some sequence from is less than then the limit is also in If is a limit ordinal and then is unbounded in if for any there is some such that If a set is both closed and unbounded, then it is a club set. Closed proper classes are also of interest (every proper class of ordinals is unbounded in the class of all ordinals). For example, the set of all countable limit ordinals is a club set with respect to the first uncountable ordinal; but it is not a club set with respect to any higher limit ordinal, since it is neither closed nor unbounded. If is an uncountable initial ordinal, then the set of all limit ordinals is closed unbounded in In fact a club set is nothing else but the range of a normal function (i.e. increasing and continuous). More generally, if is a nonempty set and is a cardinal, then (the set of subsets of of cardinality ) is club if every union of a subset of is in and every subset of of cardinality less than is contained in some element of (see stationary set). The closed unbounded filter Let be a limit ordinal of uncountable cofinality For some , let be a sequence of closed unbounded subsets of Then is also closed unbounded. To see this, one can note that an intersection of closed sets is always closed, so we just need to show that this intersection is unbounded. So fix any and for each n < ω choose from each an element which is possible because each is unbounded. Since this is a collection of fewer than ordinals, all less than their least upper bound must also be less than so we can call it This process genera
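In standard notation (reconstructed here, since the symbols were lost in extraction), the two defining conditions for a limit ordinal λ read:

```latex
C \subseteq \lambda \ \text{is closed in}\ \lambda \iff
\forall \alpha < \lambda \,\bigl( \sup(C \cap \alpha) = \alpha \neq 0 \implies \alpha \in C \bigr),
```
```latex
C \ \text{is unbounded in}\ \lambda \iff
\forall \alpha < \lambda \ \exists \beta \in C \ (\alpha < \beta),
```

and C is club in λ when both hold, matching the prose definition above.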
https://en.wikipedia.org/wiki/BCJ%20%28algorithm%29
In data compression, BCJ, short for Branch/Call/Jump, refers to a technique that improves the compression of machine code by replacing relative branch addresses with absolute ones. This allows a Lempel–Ziv compressor to identify duplicate targets and encode them more efficiently. On decompression, the inverse filter restores the original encoding. Different BCJ filters are used for different instruction sets, as each uses different opcodes for branching. A form of BCJ is seen in Microsoft's cabinet file format from 1996, which filters x86 CALL instructions for the LZX compressor. The 7z and xz file formats implement BCJ for multiple architectures. ZPAQ calls its x86 BCJ filter "E8E9", after the opcode values. bsdiff, a tool for delta updates, circumvents the need to write architecture-specific BCJ tools by encoding bytewise differences. This allows it to perform much better than "match and copy" tools such as VCDIFF, giving an output size of only 6% for Google Chrome. However, Google's Courgette, which adds a layer of explicit disassembly, is able to produce 9× smaller diffs. Effect For a squashfs image of a Fedora Linux 31 live image, using x86 BCJ saves an extra 30 MB out of the ~1.7 GB compressed size, but doubles the installation time.
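A simplified sketch (not any particular implementation; real filters such as xz's add validity heuristics) of an x86 "E8" filter: each 0xE8 CALL opcode is followed by a 4-byte little-endian relative displacement, which the filter rewrites as the absolute target address so that repeated calls to the same function become identical byte strings:

```python
def bcj_x86_encode(data: bytes, base: int = 0) -> bytes:
    """Replace the relative displacement after each 0xE8 (CALL) with
    the absolute target address (simplified sketch)."""
    out = bytearray(data)
    i = 0
    while i + 5 <= len(out):
        if out[i] == 0xE8:
            rel = int.from_bytes(out[i + 1:i + 5], "little", signed=True)
            target = (base + i + 5 + rel) & 0xFFFFFFFF  # CALL target
            out[i + 1:i + 5] = target.to_bytes(4, "little")
            i += 5  # skip past the displacement, as real filters do
        else:
            i += 1
    return bytes(out)

def bcj_x86_decode(data: bytes, base: int = 0) -> bytes:
    """Inverse filter: restore the original relative displacements."""
    out = bytearray(data)
    i = 0
    while i + 5 <= len(out):
        if out[i] == 0xE8:
            target = int.from_bytes(out[i + 1:i + 5], "little")
            rel = (target - (base + i + 5)) & 0xFFFFFFFF
            out[i + 1:i + 5] = rel.to_bytes(4, "little")
            i += 5
        else:
            i += 1
    return bytes(out)

# Two CALLs at different positions targeting the same address become
# identical byte sequences after filtering, which LZ compressors exploit.
code = b"\x90\xE8\x10\x00\x00\x00\x90\x90\xE8\x09\x00\x00\x00\x90"
filtered = bcj_x86_encode(code)
assert filtered[2:6] == filtered[9:13]       # duplicate target exposed
assert bcj_x86_decode(filtered) == code      # round-trips exactly
```

The duplicate 4-byte sequences created by the filter are exactly the redundancy a Lempel–Ziv matcher can then encode as back-references.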
https://en.wikipedia.org/wiki/Derek%20Lee%20%28biologist%29
Derek Lee (also known as Derek E. Lee or Derek Edward Lee) is an American ecologist and wildlife biologist specializing in population biology and conservation biology. Lee was born in Lodi, California on March 15, 1971, and attended Tokay High School. Lee earned his bachelor's degree from the University of California at Santa Barbara, his master's degree from Humboldt State University, and his Ph.D. from Dartmouth College. For his MS degree he investigated the migratory behavior of black brant geese in Humboldt Bay using capture-recapture statistics to estimate stopover duration and space use. For his Ph.D., he studied the spatial demography of giraffes in the Tarangire ecosystem of Tanzania. His academic work on climate influences on marine bird demography, spotted owls and forest fire, and computer vision applications to wildlife biology are highly cited. His discovery of a white leucistic giraffe was widely reported in popular media. He worked at Point Blue Conservation Science for 8 years on Southeast Farallon Island studying how ocean climate change affects the population dynamics of marine predators such as elephant seals, Cassin's auklets, and common murres. He is founder and CEO of Wild Nature Institute, a research organization. Since 2018 he has also worked at Pennsylvania State University as an Associate Research Professor. Lee has published more than 50 scientific peer-reviewed papers in the field of ecology, mainly focused on demography and population biology of wild vertebrates.
https://en.wikipedia.org/wiki/Hydrochloric%20acid
Hydrochloric acid, also known as muriatic acid or spirits of salt, is an aqueous solution of hydrogen chloride with the chemical formula HCl(aq). It is a colorless solution with a distinctive pungent smell. It is classified as a strong acid. It is a component of the gastric acid in the digestive systems of most animal species, including humans. Hydrochloric acid is an important laboratory reagent and industrial chemical. Etymology Because it was produced from rock salt according to the methods of Johann Rudolph Glauber, hydrochloric acid was historically called by European alchemists spirits of salt or acidum salis (salt acid). Both names are still used, especially in other languages, such as , , , , , , , , and (yánsuān). Gaseous HCl was called marine acid air. The name muriatic acid has the same origin (muriatic means "pertaining to brine or salt", hence muriate means hydrochloride), and this name is still sometimes used. The name hydrochloric acid was coined by the French chemist Joseph Louis Gay-Lussac in 1814. History 9th–10th century In the early tenth century, the Persian physician and alchemist Abu Bakr al-Razi (–925, Latin: Rhazes) conducted experiments with sal ammoniac (ammonium chloride) and vitriol (hydrated sulfates of various metals), which he distilled together, thus producing the gas hydrogen chloride. In doing so, al-Razi may have stumbled upon a primitive method for producing hydrochloric acid, as perhaps manifested in the following recipe from his ("The Book of Secrets"): However, it appears that in most of his experiments al-Razi disregarded the gaseous products, concentrating instead on the color changes that could be effected in the residue. According to Robert P. Multhauf, hydrogen chloride was produced many times without clear recognition that, by dissolving it in water, hydrochloric acid may be produced. 11th–13th century Drawing on al-Razi's experiments, the ("On Alums and Salts"), an eleventh- or twelfth-century Arabic text f
https://en.wikipedia.org/wiki/MacMahon%27s%20master%20theorem
In mathematics, MacMahon's master theorem (MMT) is a result in enumerative combinatorics and linear algebra. It was discovered by Percy MacMahon and proved in his monograph Combinatory analysis (1916). It is often used to derive binomial identities, most notably Dixon's identity. Background In the monograph, MacMahon found so many applications of his result that he called it "a master theorem in the Theory of Permutations." He explained the title as follows: "a Master Theorem from the masterly and rapid fashion in which it deals with various questions otherwise troublesome to solve." The result was re-derived (with attribution) a number of times, most notably by I. J. Good, who derived it from his multilinear generalization of the Lagrange inversion theorem. MMT was also popularized by Carlitz, who found an exponential power series version. In 1962, Good found a short proof of Dixon's identity from MMT. In 1969, Cartier and Foata found a new proof of MMT by combining algebraic and bijective ideas (built on Foata's thesis) and further applications to combinatorics on words, introducing the concept of traces. Since then, MMT has become a standard tool in enumerative combinatorics. Although various q-Dixon identities have been known for decades, except for a Krattenthaler–Schlosser extension (1999), the proper q-analog of MMT remained elusive. After Garoufalidis–Lê–Zeilberger's quantum extension (2006), a number of noncommutative extensions were developed by Foata–Han, Konvalinka–Pak, and Etingof–Pak. Further connections to Koszul algebra and quasideterminants were also found by Hai–Lorentz, Hai–Kriegk–Lorenz, Konvalinka–Pak, and others. Finally, according to J. D. Louck, the theoretical physicist Julian Schwinger re-discovered the MMT in the context of his generating function approach to the angular momentum theory of many-particle systems. Louck writes: Precise statement Let be a complex matrix, and let be formal variables. Consider a coefficient (He
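In the notation the statement is usually given in (reconstructed here, since the original formulas did not survive extraction): for an n × n complex matrix A = (a_{ij}), formal variables x_1, …, x_n, the diagonal matrix X = diag(x_1, …, x_n), and nonnegative integers k_1, …, k_n,

```latex
\bigl[ x_1^{k_1} \cdots x_n^{k_n} \bigr]
\prod_{i=1}^{n} \Bigl( \sum_{j=1}^{n} a_{ij} x_j \Bigr)^{k_i}
\;=\;
\bigl[ x_1^{k_1} \cdots x_n^{k_n} \bigr] \,\frac{1}{\det(I_n - XA)},
```

where [x_1^{k_1} ⋯ x_n^{k_n}] denotes extraction of the coefficient of that monomial.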
https://en.wikipedia.org/wiki/ADMAR
ADMAR, an initialism for the German title Abgesetzte Darstellung von MADAP Radar-Daten, was the predecessor product of CIMACT. Definition The EUROCONTROL software product ADMAR combined and merged several civilian surveillance and military sensor data sources with flight plan data sources. After data correlation it was able to provide a Recognised Air Picture (RAP). It could be operated on COTS hardware or special-purpose IT. History ADMAR was developed on the basis of ADKAR and GAME, based on the special agreement of MOD Germany ( – A/13/D/HG/82, April 18, 1983) in cooperation with EUROCONTROL. It has been operational since 1983 and was used exclusively by the German Air Force. From 2003 it also became of interest to other European countries, NATO, and security-related authorities and organisations. ADMAR 2000 was the final software release. Remark: MADAP – Maastricht Automatic Data Processing system ADKAR – Abgesetzte Darstellung von KARLDAP Radar-Daten KARLDAP – Karlsruhe Automatic Data Processing system GAME – GEADGE / ADKAR Message Exchange GEADGE – German Air Defence Ground Environment Utilisation In Germany the utilisation of ADMAR was as follows: Stationary Control and Reporting Centre (CRC), TACCS Operation Centre National Air Defence (de: Nationales Lage- und Führungszentrum für Sicherheit im Luftraum - NLFZ SiLuRa) General Air Force Office (de: Luftwaffenamt) Multinational Aircrew Electronic Warfare Tactics Facility Polygone Co-ordination Centre (PCC) Bundeswehr Air Traffic Service Office (de: Amt für Flugsicherung der Bundeswehr - AFSBw) JG 71 and JG 74 See also CIMACT - European Civil-Military Air Traffic Management Co-ordination Tool External links The EUROCONTROL OneSky Portal Air traffic control in Europe Air traffic control systems Control engineering Information systems
https://en.wikipedia.org/wiki/Self-energy
In quantum field theory, the energy that a particle has as a result of changes that it causes in its environment defines its self-energy, and represents the contribution to the particle's energy, or effective mass, due to interactions between the particle and its environment. In electrostatics, the energy required to assemble a charge distribution by bringing the constituent charges in from infinity, where the electric force goes to zero, takes the form of self-energy. In a condensed matter context, relevant to electrons moving in a material, the self-energy represents the potential felt by the electron due to the surrounding medium's interactions with it. Since electrons repel each other, a moving electron polarizes, or displaces, the electrons in its vicinity, and this in turn changes the potential felt by the moving electron. These are examples of self-energy. Characteristics Mathematically, this energy is equal to the so-called on-mass-shell value of the proper self-energy operator (or proper mass operator) in the momentum-energy representation (more precisely, to ℏ times this value). In this, or other representations (such as the space-time representation), the self-energy is pictorially (and economically) represented by means of Feynman diagrams, such as the one shown below. In this particular diagram, the three arrowed straight lines represent particles, or particle propagators, and the wavy line a particle-particle interaction; removing (or amputating) the left-most and the right-most straight lines in the diagram shown below (these so-called external lines correspond to prescribed values for, for instance, momentum and energy, or four-momentum), one retains a contribution to the self-energy operator (in, for instance, the momentum-energy representation). Using a small number of simple rules, each Feynman diagram can be readily expressed in its corresponding algebraic form. In general, the on-the-mass-shell value of the self-energy operator in the momentum-
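As a schematic illustration (standard textbook notation for a scalar field, not taken from the passage above), the self-energy Σ shifts the pole of the full propagator, and the on-mass-shell condition mentioned in the text fixes the physical mass:

```latex
G(p) \;=\; \frac{1}{p^2 - m_0^2 - \Sigma(p^2) + i\epsilon},
\qquad
m_{\mathrm{phys}}^2 \;=\; m_0^2 + \Sigma\!\left(m_{\mathrm{phys}}^2\right).
```

Here m_0 is the bare mass; the amputated Feynman diagrams described in the text compute Σ order by order in perturbation theory.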
https://en.wikipedia.org/wiki/Fen-meadow
A fen-meadow is a type of peatland, common in North America and Europe, that receives water from both precipitation and groundwater. Habitat The continuous flow of mineral-rich, nutrient-poor, acidic groundwater through fen-meadow topsoil fosters endemic plant species, including plant associations such as Juncus subnodulosus-Cirsium palustre, and protects them from floods, droughts, and nutrient pollution. Degradation by human activity Fen meadows have been severely impacted by farming, resulting in hydrological changes, acidification, and nutrient pollution, leaving few preserved into the 21st century. Compositional transformations and increased groundwater flow have the greatest effect on this habitat and can degrade peat. Keeping water tables at appropriate levels allows fen meadows to regulate themselves. Restoration efforts are difficult, and no deteriorated fens have been fully restored. Therefore, preservation efforts focus on maintaining fen meadows with little to no anthropogenic interference. Habitat protection and restoration The consequences of water supply alteration are severe. Even after acidified topsoil has been removed and replaced, native species reintroduced, and groundwater sources restored or purified, fen meadows are unable to return to their natural state. Rewetting, maintenance, and seed transfer are used in unison to recover damaged fens. Rewetting reintroduces water to topsoil, but not water flow; without proper drainage mechanisms in place, the water will provide the correct amounts of nutrients and minerals but drown the vegetation in the area. Maintenance, such as mowing, can be a rapid way to salvage vegetative species, and can be combined with seed transfer to reintroduce vital biotic components to the habitat. Although most fens never return to a natural state requiring no human upkeep, maintenance can be slowly reduced over time. See also Fen Meadow
https://en.wikipedia.org/wiki/Edy
Edy, provided by Rakuten, Inc. in Japan, is a prepaid rechargeable contactless smart card. While the name derives from euro, dollar, and yen, it works with yen only. History Edy was launched on January 18, 2001, by bitWallet, with financing primarily from Sony, along with other companies including NTT Docomo and the Sumitomo Group. NTT Docomo's i-mode mobile payment service Osaifu-Keitai, which launched on 10 July 2004, included support for bitWallet's Edy. By 2005, over a million payments had been made with the service. On 18 April 2006, Intel announced a five billion yen (approx. US$45 million, or 35 million euros as of May 20, 2006) investment in bitWallet, aimed at furthering its usage on computers. On 1 June 2012, Rakuten acquired Edy, changing the official name to Rakuten Edy and the parent company from bitWallet to Rakuten Edy Inc. The three-oval blue-tone logo was changed to the Rakuten logo and the font of the word 'Edy' was altered. Mobile phones Edy can be used on Osaifu-Keitai featured cellphones. Makers of these phones include major cell phone carriers such as docomo, au and SoftBank. The phones can be used physically like an Edy card, and online Edy features can be accessed from the phones as well, such as the ability to charge an Edy account.
https://en.wikipedia.org/wiki/Melanophilin
Melanophilin is a carrier protein which in humans is encoded by the MLPH gene. Several alternatively spliced transcript variants of this gene have been described, but the full-length nature of some of these variants has not been determined. Function This gene encodes a member of the exophilin subfamily of Rab effector proteins. The protein forms a ternary complex with the small Ras-related GTPase Rab27A in its GTP-bound form and the motor protein myosin Va. A similar protein complex in mouse functions to tether pigment-producing organelles called melanosomes to the actin cytoskeleton in melanocytes, and is required for visible pigmentation in the hair and skin. In melanocytic cells MLPH gene expression may be regulated by MITF. Clinical significance A mutation in this gene results in Griscelli syndrome type 3, which is characterized by a silver-gray hair color and abnormal pigment distribution in the hair shaft. Mutations in melanophilin cause the "dilute" coat color phenotype in dogs and cats. Variation in this gene appears to have been a target for recent natural selection in humans, and it has been hypothesized that this is due to a role in human pigmentation.
https://en.wikipedia.org/wiki/Virtual%20Library%20museums%20pages
The Virtual Library museums pages (VLmp) formed an early leading directory of online museums around the world. History The VLmp online directory resource was founded by Jonathan Bowen in 1994, originally at the Oxford University Computing Laboratory in the United Kingdom. It has been supported by the International Council of Museums (ICOM) and Museophile Limited, and formed part of the World Wide Web Virtual Library, initiated by Tim Berners-Lee and later managed by Arthur Secret. The main VLmp site moved to London South Bank University in the early 2000s and is now hosted on the MuseumsWiki wiki, established in 2006 and hosted by Fandom (previously Wikia) as a historical record. The directory was developed and organised in a distributed manner by country, with around twenty people in different countries maintaining various sections. Canada, through the Canadian Heritage Information Network (CHIN), was the first country to become involved. The MDA, later the Collections Trust, maintained the United Kingdom section of museums. The Historisches Centrum Hagen maintained and hosted pages for Germany. Other countries actively participating included Romania. In total, around 20 countries were involved. The directory was influential in the museum field during the 1990s and 2000s, when it was used as a standard starting point for finding museums online. It was useful for monitoring the growth of museums internationally online, and was also used for online museum surveys. It was recommended as an educational resource and included a search facility. Virtual Museum of Computing The Virtual Museum of Computing (VMoC), part of the Virtual Library museums pages, was created as a virtual museum providing information on the history of computers and computer science. It included virtual "galleries" (e.g., on Alan Turing, curated by Andrew Hodges) and links to other computer museums. VMoC was founded in 1995, initially at the University of Oxford. As part of VLmp, it was hosted by the Internati
https://en.wikipedia.org/wiki/Cedar%20rust
Cedar rust may refer to: Gymnosporangium juniperi-virginianae, or Cedar-apple rust Gymnosporangium clavipes, or Cedar-quince rust Gymnosporangium globosum, or Cedar-hawthorn rust See also Miller v. Schoene
https://en.wikipedia.org/wiki/Hungry%20gap
In the cultivation of vegetables in a temperate oceanic climate, the hungry gap is the period in spring when there is little or no fresh produce available from a vegetable garden or allotment. It usually starts when overwintered brassica vegetables such as Brussels sprouts, winter cauliflowers, and January King cabbages "bolt" (i.e. run up to flower) as the days get warmer and longer, but sooner if a very hard frost kills these crops; and it ends when the new season's first broad beans are ready. Means to bridge the gap, or part of it, include: Using stored food: but stored potatoes sprout if kept too long in warm weather, and salted-away meat is used up or goes bad in store. See the origin of Lent. Autumn-sown broad beans: this is risky, as seeds can be killed in the ground if it freezes. A heated greenhouse, or hotbeds, to start summer vegetable seedlings sooner. Other meanings One variety of kale is called "Hungry Gap" because it crops during this period: see cultivars of kale. It was introduced to UK agriculture in 1941.
https://en.wikipedia.org/wiki/Halorubrum%20distributum
Halorubrum distributum is a halophilic archaeon in the family Halorubraceae.
https://en.wikipedia.org/wiki/Bristol%20Activities%20of%20Daily%20Living%20Scale
The Bristol Activities of Daily Living Scale (BADLS) is a 20-item questionnaire designed to measure the ability of someone with dementia to carry out daily activities such as dressing, preparing food and using transport.
https://en.wikipedia.org/wiki/Gnathology
Gnathology is the study of the masticatory system, including its physiology, functional disturbances, and treatment. Dr. Beverly McCollum established the Gnathologic Society in 1926. David W. McLean DDS, Chairman of the Board of Directors, represented the Southern California Gnathology Society in Mexico City in 1929, furthering the Society's work. Dr. Jack Hockel, editor of Orthopedic Gnathology (Quintessence, 1983), defines the field as "the study of the relationship of the mandibular border movements and occlusal morphology, and how this relationship affects the anatomy, histology, physiology, pathology, and therapeutics of the oral organ, as well as how this relationship affects the rest of the body, including, but not limited to, the TMJ." Chapter 1 of that book describes the ten characteristics of an Organic Occlusion, the goal of gnathology.
https://en.wikipedia.org/wiki/Antisynthetase%20syndrome
Antisynthetase syndrome is an autoimmune disease associated with interstitial lung disease, arthritis, and myositis. Signs and symptoms As a syndrome, this condition is poorly defined. Diagnostic criteria require one or more antisynthetase antibodies (which target tRNA synthetase enzymes), and one or more of the following three clinical features: interstitial lung disease, inflammatory myopathy, and inflammatory polyarthritis affecting small joints symmetrically. Other supporting features may include fever, Raynaud's phenomenon, and "mechanic's hands": thick, cracked skin, usually on the palms and radial surfaces of the digits. The disease, rare as it is, is more prevalent in women than in men. Early diagnosis is difficult, and milder cases may not be detected. Interstitial lung disease may also be the only manifestation of the disease. Severe disease may develop over time, with intermittent relapses. Pathogenesis It is postulated that autoantibodies are formed against aminoacyl-tRNA synthetases. The synthetases may be involved in recruiting antigen-presenting and inflammatory cells to the site of muscle or lung injury. The specific molecular pathway of the process awaits elucidation. Antisynthetase antibodies The most common antibody is "anti-Jo-1", named after John P., a patient with polymyositis and interstitial lung disease in whom it was detected in 1980. This anti-histidyl-tRNA-synthetase antibody is commonly seen in patients with pulmonary manifestations of the syndrome. The following are other possible antibodies that may be seen in association with antisynthetase syndrome: anti-PL-7, anti-PL-12, anti-EJ, anti-OJ, anti-KS, anti-Zo, and anti-Ha (YRS, Tyr). Diagnosis In the presence of suspicious symptoms, a number of tests are helpful in the diagnosis: Anti-tRNA antibody testing Electromyography Imaging such as high-resolution computed tomography Lung biopsy Muscle biopsy Muscle enzymes, which are often elevated, i.e. creatine kinase Pulmonary function testing In certain situa
https://en.wikipedia.org/wiki/Resource%20%28biology%29
In biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. Resources can be consumed by one organism and, as a result, become unavailable to another organism. For plants, the key resources are light, nutrients, water, and a place to grow. For animals, the key resources are food, water, and territory. Key resources for plants Terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal: Carbon dioxide Microsite (ecology) Nutrients Pollination Seed dispersal Soil Water Key resources for animals Animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction: Foraging Territory Water Resources and ecological processes Resource availability plays a central role in ecological processes: Carrying capacity Biological competition Liebig's law of the minimum Niche differentiation See also Abiotic component Biotic component Community ecology Ecology Population ecology Plant ecology size-asymmetric competition
https://en.wikipedia.org/wiki/Virtual%20security%20appliance
A virtual security appliance is a computer appliance that runs inside virtual environments. It is called an appliance because it is pre-packaged with a hardened operating system and a security application, and runs on virtualized hardware. The hardware is virtualized using hypervisor technology delivered by companies such as VMware, Citrix and Microsoft. The security application may vary depending on the particular network security vendor. Some vendors, such as Reflex Systems, have chosen to deliver intrusion prevention technology as a virtualized appliance, or as a multifunctional server vulnerability shield, as delivered by Blue Lane. The type of security technology is irrelevant when it comes to the definition of a virtual security appliance; it is more relevant when it comes to the performance levels achieved when deploying various types of security as a virtual security appliance. Other issues include visibility into the hypervisor and the virtual network that runs inside it. Security appliance history Traditionally, security appliances have been viewed as high-performance products that may have had custom ASIC chips in them, allowing for higher performance levels due to a dedicated hardware approach. Many vendors have started to call pre-built operating systems with dedicated applications on dedicated server hardware from the likes of IBM, Dell and offshore brands "appliances". The appliance terminology, although heavily used now, has strayed from its original roots. An administrator would expect any underpinning Linux OS to employ a monolithic kernel, since the hardware platform is presumably static and vendor-controlled. However, the following examples are configured to use loadable kernel modules, reflecting the dynamic nature of the underlying hardware platforms used by product managers. "Appliances" have varying degrees of administrative openness. Enterasys Dragon version 7 IPS sensors (GE250 and GE500) are lightly hardened versions of a Slackware Linux
https://en.wikipedia.org/wiki/Heuser%27s%20membrane
Heuser's membrane (or the exocoelomic membrane) is a short-lived combination of hypoblast cells and extracellular matrix. At days 9–10 of embryonic development, cells from the hypoblast begin to migrate to the embryonic pole, forming a layer of cells just beneath the cytotrophoblast, called Heuser's membrane. It surrounds the exocoelomic cavity (primary yolk sac), i.e. it lines the inner surface of the cytotrophoblast. At this point, the exocoelomic cavity replaces the blastocyst cavity. At days 11 to 12, there is further delineation of the trophoblastic cells, giving rise to a layer of loosely arranged cells that inserts between Heuser's membrane and both the syncytiotrophoblast and cytotrophoblast. The Heuser's membrane cells (hypoblast cells) that migrated along the inner cytotrophoblast lining of the blastocoel secrete an extracellular matrix along the way. Cells of the hypoblast migrate along the outer edges of this reticulum and form the extraembryonic mesoderm (splanchnic and somatic); this disrupts the extraembryonic reticulum. Soon pockets form in the reticulum, which ultimately coalesce to form the chorionic cavity (extraembryonic coelom).
https://en.wikipedia.org/wiki/Frank%E2%80%93Wolfe%20algorithm
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, the reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain). Problem statement Suppose D is a compact convex set in a vector space and f : D → R is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem Minimize f(x) subject to x ∈ D. Algorithm Initialization: Let k ← 0, and let x_0 be any point in D. Step 1. Direction-finding subproblem: Find s_k solving Minimize s^T ∇f(x_k) subject to s ∈ D. (Interpretation: Minimize the linear approximation of the problem given by the first-order Taylor approximation of f around x_k, constrained to stay within D.) Step 2. Step size determination: Set γ ← 2/(k + 2), or alternatively find γ that minimizes f(x_k + γ(s_k − x_k)) subject to 0 ≤ γ ≤ 1. Step 3. Update: Let x_{k+1} ← x_k + γ(s_k − x_k), let k ← k + 1, and go to Step 1. Properties While competing methods such as gradient descent for constrained optimization require a projection step back to the feasible set in each iteration, the Frank–Wolfe algorithm only needs the solution of a linear problem over the same set in each iteration, and automatically stays in the feasible set. The convergence of the Frank–Wolfe algorithm is sublinear in general: the error in the objective function to the optimum is O(1/k) after k iterations, so long as the gradient is Lipschitz continuous with respect to some norm. The same convergence rate can also be shown if the sub-problems are only solved approximately. The iterates of the algorithm can always be represented as a sparse convex combination of the extreme points of the feasible set, which has contributed to the popularity of the algorithm for sparse greedy optimization in machine learning
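The three steps above can be sketched in a few lines of Python. The quadratic objective and the probability-simplex domain below are illustrative choices (not from the article); the simplex is convenient because its linear subproblem is solved at a vertex:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, num_iters=5000):
    """Frank-Wolfe with the standard predetermined step size gamma_k = 2/(k+2)."""
    x = x0.astype(float)
    for k in range(num_iters):
        s = lmo(grad(x))            # Step 1: linear minimization over the domain
        gamma = 2.0 / (k + 2.0)     # Step 2: predetermined step size
        x = x + gamma * (s - x)     # Step 3: convex-combination update (stays feasible)
    return x

# Example problem: minimize f(x) = ||x - b||^2 over the probability simplex.
# A linear function over the simplex is minimized at a vertex, so the
# "oracle" just picks the coordinate with the smallest gradient entry.
b = np.array([0.1, 0.5, 0.4])
grad = lambda x: 2.0 * (x - b)

def lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad, lmo, np.ones(3) / 3)   # converges toward b, which is feasible
```

Because every iterate is a convex combination of simplex vertices, no projection step is ever needed, which is exactly the property the Properties section describes.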
https://en.wikipedia.org/wiki/Haplodiploidy
Haplodiploidy is a sex-determination system in which males develop from unfertilized eggs and are haploid, and females develop from fertilized eggs and are diploid. Haplodiploidy is sometimes called arrhenotoky. Haplodiploidy determines the sex in all members of the insect orders Hymenoptera (bees, ants, and wasps) and Thysanoptera (thrips). The system also occurs sporadically in some spider mites, Hemiptera, Coleoptera (bark beetles), and rotifers. In this system, sex is determined by the number of sets of chromosomes an individual receives. An offspring formed from the union of a sperm and an egg develops as a female, and an unfertilized egg develops as a male. This means that the males have half the number of chromosomes that a female has, and are haploid. The haplodiploid sex-determination system has a number of peculiarities. For example, a male has no father and cannot have sons, but he has a grandfather and can have grandsons. Additionally, if a eusocial-insect colony has only one queen, and she has only mated once, then the relatedness between workers (diploid females) in a hive or nest is 3/4. This means the workers in such monogamous single-queen colonies are significantly more closely related than in other sex-determination systems, where the relatedness of siblings is usually no more than 1/2. It is this point which drives the kin selection theory of how eusociality evolved. Whether haplodiploidy did in fact pave the way for the evolution of eusociality is still a matter of debate. Another feature of the haplodiploidy system is that recessive lethal and deleterious alleles will be removed from the population rapidly, because they will automatically be expressed in the males (dominant lethal and deleterious alleles are removed from the population every time they arise, as they kill any individual they arise in). Haplodiploidy is not the same thing as an X0 sex-determination system. In haplodiploidy, males receive one half of the chromosomes that females r
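The 3/4 relatedness figure follows from a short expected-value calculation, sketched below (a back-of-envelope illustration, not from the article text):

```python
# Relatedness between two full sisters under haplodiploidy vs. ordinary diploidy.
# A haploid father transmits his entire genome to every daughter, so the paternal
# halves of two sisters are identical (shared with probability 1); each maternal
# allele is one of the mother's two copies, shared with probability 1/2.
r_sisters_haplodiploid = 0.5 * 1.0 + 0.5 * 0.5   # paternal half + maternal half = 0.75
r_siblings_diploid     = 0.5 * 0.5 + 0.5 * 0.5   # both halves shared with prob 1/2 = 0.5
```

The gap between 0.75 and 0.5 is the quantitative basis of the kin-selection argument mentioned above: workers are more related to their sisters than they would be to their own offspring (relatedness 1/2).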
https://en.wikipedia.org/wiki/TransUnion
TransUnion is an American consumer credit reporting agency. TransUnion collects and aggregates information on over one billion individual consumers in over thirty countries, including "200 million files profiling nearly every credit-active consumer in the United States". Its customers include over 65,000 businesses. Based in Chicago, Illinois, TransUnion's 2014 revenue was US$1.3 billion. It is the smallest of the three largest credit agencies, along with Experian and Equifax (known as the "Big Three"). TransUnion also markets credit reports and other credit- and fraud-protection products directly to consumers. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year. Additionally, a growing segment of TransUnion's business is its business offerings that use advanced big data, particularly its TLOxp product. History TransUnion was originally formed in 1968 as a holding company for Union Tank Car Company, making TransUnion a descendant of Standard Oil through Union Tank Car Company. The following year, it acquired the Credit Bureau of Cook County, which possessed and maintained 3.6 million credit accounts. In 1981, a Chicago-based holding company, The Marmon Group, acquired TransUnion for approximately $688 million. In 2010, Goldman Sachs Capital Partners and Advent International acquired it from Madison Dearborn Partners. In 2014, TransUnion acquired Hank Asher's data company TLO. On June 25, 2015, TransUnion became a publicly traded company for the first time, trading under the symbol TRU. TransUnion eventually began to offer products and services for both businesses and consumers. For businesses, TransUnion updated its traditional credit score offering to include trended data that helps predict consumer repayment and debt behavior. This product, referred to as CreditVision, launched in October 2013. Its SmartMove™ service facilitates credit and background checks for landlords. Th
https://en.wikipedia.org/wiki/Salinirubellus%20salinus
Salinirubellus salinus is a halophilic archaeal species. It was first isolated from a marine solar saltern in Zhejiang Province, China. It is the only known species in the genus Salinirubellus.
https://en.wikipedia.org/wiki/Kinematic%20diffraction
Kinematic diffraction is the approach to study diffraction phenomena by neglecting multiple scattering. For linear wave equations, it thus consists in summing the contribution of the partial waves emanating from the different scatterers, where only the incident field drives the scattering. As a consequence, the far-field amplitude essentially corresponds to the Fourier transform of the scattering length density. It is typically understood as the Born approximation applied to a regular arrangement of scatterers, as appropriate for X-ray crystallography. The corresponding full (non-perturbative) theory is called the dynamical theory of diffraction.
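The single-scattering picture is easy to simulate: summing the incident-field phase factors from each scatterer is exactly a Fourier transform of the scatterer positions. The sketch below (an illustrative 1-D lattice of point scatterers, arbitrary units) shows the resulting Bragg peaks:

```python
import numpy as np

# Kinematic (single-scattering) diffraction from N identical point scatterers
# on a 1-D lattice: the far-field amplitude is the sum of unit phase factors
# exp(-i q x_n), i.e. the Fourier transform of the scattering-length density.
N, a = 20, 1.0                                  # 20 sites, lattice spacing a
positions = a * np.arange(N)
q = np.linspace(-3 * np.pi, 3 * np.pi, 2001)    # scattering-vector axis

amplitude = np.exp(-1j * np.outer(q, positions)).sum(axis=1)
intensity = np.abs(amplitude) ** 2              # what a detector records

# Peaks of height N^2 occur at the Bragg condition q = 2*pi*h/a; multiple
# scattering between the sites is neglected, which is the kinematic approximation.
```

Because each scatterer responds only to the incident field, intensities simply add coherently; the dynamical theory mentioned above would instead couple the scattered waves back into the lattice.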
https://en.wikipedia.org/wiki/Condition%20index%20in%20fish
The condition index in fish is a way to measure the overall health of a fish by comparing its weight with the typical weight of other fish of the same kind and of the same length. The condition index is its actual weight divided by its expected weight, times 100%. A fish of normal weight has a condition index of 100 percent. So if a tarpon, for example, has a condition index of 104 percent, that would mean it is above the normal weight for an average tarpon of that length. If a tarpon has a condition index of 92 percent, that would mean that it is thinner, or below the normal weight of other tarpon that length. The condition index depends on how much a fish is eating compared to the energy it has to spend to live, migrate, reproduce, and do its other activities. The condition index for fish is a simple measurement that can be used to provide important biological information that can then be used to make better management decisions. Definition and Calculation The condition index in fish is a way to measure the overall health of a fish by comparing its weight with the average weight of other fish of the same length and kind. To do this, a weight-length relationship that is published in a journal or a reliable report is used to make the comparison. Some weight-length relationships are for a group of the same fish from a certain location; others are developed from much data from various locations and represent a more general result. Some studies have enough data to develop separate weight-length relationships for male and female fish of the same species, and/or to develop separate weight-length relationships for different seasons of the year (e.g. spring and fall). To compute the condition index of an individual fish, it must be weighed and its length measured. Some weight-length relationships use the total length of the fish while others use the fork length. It is important to measure the same kind of length that the reference relationship uses. It is also importan
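The calculation described above can be written out directly. In the sketch below, the weight-length coefficients a and b are illustrative placeholders for the published relationship W = a·L^b, not values for any real species:

```python
# Condition index: observed weight relative to the weight predicted by a
# published weight-length relationship of the form W = a * L**b.
# The coefficients a and b below are hypothetical; substitute the published
# values (and the matching length type, total vs. fork) for the species.
def condition_index(weight_g, length_cm, a, b):
    expected_g = a * length_cm ** b
    return 100.0 * weight_g / expected_g

ci = condition_index(weight_g=550.0, length_cm=35.0, a=0.01, b=3.0)
# expected weight = 0.01 * 35**3 = 428.75 g, so ci is about 128.3 (percent):
# this fish is heavier than the reference relationship predicts.
```

A value near 100 indicates normal weight for the length; values below 100 indicate a thinner-than-average fish, exactly as in the tarpon examples above.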
https://en.wikipedia.org/wiki/Kinescope
Kinescope, shortened to kine, also known as telerecording in Britain, is a recording of a television programme on motion picture film, directly through a lens focused on the screen of a video monitor. The process was pioneered during the 1940s for the preservation, re-broadcasting and sale of television programmes before the introduction of quadruplex videotape, which from 1956 eventually superseded the use of kinescopes for all of these purposes. Kinescopes were the only practical way to preserve live television broadcasts prior to videotape. Typically, the term kinescope can refer to the process itself, the equipment used for the procedure (a movie camera mounted in front of a video monitor, and synchronised to the monitor's scanning rate), or a film made using the process. The term originally referred to the cathode ray tube used in television receivers, as named by inventor Vladimir K. Zworykin in 1929. Hence, the recordings were known in full as kinescope films or kinescope recordings. RCA was granted a trademark for the term (for its cathode ray tube) in 1932; it voluntarily released the term to the public domain in 1950. Film recorders are similar, but record source material from a computer system. Whereas a kinescope records television to film, a telecine is used to play film back on television. History The General Electric laboratories in Schenectady, New York experimented with making still and motion picture records of television images in 1931. There is anecdotal evidence that the BBC experimented with filming the output of the television monitor before its television service was suspended in 1939 due to the outbreak of World War II. A BBC executive, Cecil Madden, recalled filming a production of The Scarlet Pimpernel in this way, only for film director Alexander Korda to order the burning of the negative as he owned the film rights to the book, which he felt had been infringed. While there is no written record of any BBC Television production of T
https://en.wikipedia.org/wiki/Behavior-based%20robotics
Behavior-based robotics (BBR) or behavioral robotics is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite little internal variable state to model their immediate environment, mostly gradually correcting their actions via sensory-motor links. Principles Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classic artificial intelligence typically uses a set of steps to solve problems, following a path based on internal representations of events; the behavior-based approach, rather than using preset calculations to tackle a situation, relies on adaptability. This advancement has allowed behavior-based robotics to become commonplace in research and data gathering. Most behavior-based systems are also reactive, which means they need no pre-programmed representation of what, for example, a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to changes in its immediate environment. Behavior-based robots (BBR) usually show more biological-appearing actions than their computing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs and insects are frequent because of these actions. BBRs are sometimes considered examples of weak artificial intelligence, although some have claimed they are models of all intelligence. Features Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire dictating what behaviors to use and when; for example, obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than buil
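A reactive, behavior-based controller of the kind described above can be sketched in a few lines. The sensor fields and action names below are hypothetical placeholders (a real robot would read hardware here); the point is the structure: prioritized sensor-to-action rules and no world model, in the spirit of subsumption-style architectures:

```python
# Minimal sketch of a prioritized, reactive behavior-based controller.
# Each behavior maps raw sensor readings directly to an action or declines
# (returns None); the first behavior that fires suppresses the rest.
def avoid_obstacle(s):
    return "turn_left" if s["obstacle_ahead"] else None

def recharge(s):
    return "seek_charger" if s["battery"] < 0.2 else None

def wander(s):
    return "drive_forward"          # default behavior, always fires

BEHAVIORS = [avoid_obstacle, recharge, wander]   # highest priority first

def control_step(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

action = control_step({"obstacle_ahead": False, "battery": 0.1})
# path is clear but the battery is low, so the recharge behavior fires
```

Calling control_step once per sensor update gives the gradual, sensor-driven correction the text describes, with no internal representation of the environment.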
https://en.wikipedia.org/wiki/Algorithmic%20Lov%C3%A1sz%20local%20lemma
In theoretical computer science, the algorithmic Lovász local lemma gives an algorithmic way of constructing objects that obey a system of constraints with limited dependence. Given a finite set of bad events {A1, ..., An} in a probability space with limited dependence amongst the Ai and with specific bounds on their respective probabilities, the Lovász local lemma proves that with non-zero probability all of these events can be avoided. However, the lemma is non-constructive in that it does not provide any insight on how to avoid the bad events. If the events {A1, ..., An} are determined by a finite collection of mutually independent random variables, a simple Las Vegas algorithm with expected polynomial runtime proposed by Robin Moser and Gábor Tardos can compute an assignment to the random variables such that all events are avoided. Review of Lovász local lemma The Lovász local lemma is a powerful tool commonly used in the probabilistic method to prove the existence of certain complex mathematical objects with a set of prescribed features. A typical proof proceeds by operating on the complex object in a random manner and uses the Lovász local lemma to bound the probability that any of the features is missing. The absence of a feature is considered a bad event, and if it can be shown that all such bad events can be avoided simultaneously with non-zero probability, the existence follows. The lemma itself reads as follows: Let 𝒜 = {A1, ..., An} be a finite set of events in the probability space Ω. For each A ∈ 𝒜, let Γ(A) denote a subset of 𝒜 such that A is independent from the collection of events 𝒜 \ ({A} ∪ Γ(A)). If there exists an assignment of reals x : 𝒜 → (0, 1) to the events such that for all A ∈ 𝒜: Pr[A] ≤ x(A) ∏_{B ∈ Γ(A)} (1 − x(B)), then the probability of avoiding all events in 𝒜 is positive; in particular, Pr[Ā1 ∧ ... ∧ Ān] ≥ ∏_{A ∈ 𝒜} (1 − x(A)). Algorithmic version of the Lovász local lemma The Lovász local lemma is non-constructive because it only allows us to conclude the existence of structural properties or complex objects but does not indicate how these can be found or constructed efficiently in
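The Moser–Tardos algorithm mentioned above is remarkably simple: sample all variables, and while some bad event occurs, resample just the variables that event depends on. The sketch below applies it to a tiny satisfiable CNF formula (illustrative clauses, not from the article), where each unsatisfied clause is a bad event determined by its own three or fewer variables:

```python
import random

# Moser-Tardos resampling for a small CNF formula.  A literal k stands for
# variable |k|, with positive/negative sign giving its required polarity.
clauses = [(1, -2, 3), (-1, 2), (2, -3)]   # illustrative, satisfiable instance

def satisfied(clause, assign):
    return any(assign[abs(lit)] == (lit > 0) for lit in clause)

def moser_tardos(clauses, n_vars, rng=random.Random(0)):
    # Sample every variable independently and uniformly at random.
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    while True:
        bad = [c for c in clauses if not satisfied(c, assign)]
        if not bad:
            return assign              # no bad event occurs: done
        for lit in bad[0]:             # resample ONLY that event's variables
            assign[abs(lit)] = rng.random() < 0.5

assign = moser_tardos(clauses, 3)      # a satisfying assignment for all clauses
```

When the local-lemma condition holds, Moser and Tardos showed the expected total number of resamplings is polynomial (at most ∑_A x(A)/(1 − x(A))), making this a Las Vegas algorithm: the output is always correct, only the runtime is random.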