https://en.wikipedia.org/wiki/Terricolous%20lichen
A terricolous lichen is a lichen that grows on the soil as a substrate. Examples include some members of the genus Peltigera. References Lichenology
https://en.wikipedia.org/wiki/NeXTdimension
The NeXTdimension (ND) is an accelerated 32-bit color board manufactured and sold by NeXT from 1991 that gave the NeXTcube color capabilities, with accelerated Display PostScript planned. The NeXTBus (NuBus-like) card was a full-size card for the NeXTcube, filling one of four slots, another being filled by the main board itself. The NeXTdimension featured S-Video input and output, RGB output, an Intel i860 64-bit RISC processor at 33 MHz for PostScript acceleration, 8 MB main memory (expandable to 64 MB via eight 72-pin SIMM slots) and 4 MB VRAM for a resolution of 1120x832 at 24-bit color plus an 8-bit alpha channel. An onboard C-Cube CL550 chip for MJPEG video compression was announced, but never shipped. A handful of engineering prototypes for the MJPEG daughterboard exist. A stripped-down Mach kernel was used as the operating system for the card. Thanks to the supporting processor, 32-bit color on the NeXTdimension was faster than 2-bit greyscale Display PostScript on the NeXTcube. Display PostScript never actually ran on the board, so the Intel i860 never did much more than move blocks of color data around. The Motorola 68040 did the crunching, and the board, while fast for its time, never lived up to the hype. Since the main board always included the greyscale video logic, each NeXTdimension allowed the simultaneous use of an additional monitor. List prices were set separately for the NeXTdimension sold as an add-on to the NeXTcube and for the MegaPixel Color Display. See also NeXT character set NeXTcube References External links www.vamp.org/next/ Site for ND owners, featuring ND mailing list, ND FAQ and more NeXTComputers.org NeXTdimensionBoard NeXT Graphics cards
https://en.wikipedia.org/wiki/Sampson%20%28horse%29
Sampson (later renamed Mammoth) was a Shire horse gelding born in 1846 and bred by Thomas Cleaver at Toddington Mills, Bedfordshire, England. According to Guinness World Records (1986) he was the tallest horse ever recorded, by 1850 measuring 2.19 m (21.25 hands) in height. His peak weight was estimated at 1,524 kg (3,360 lb). See also List of historical horses References 1846 animal births Individual draft horses Individual male horses World record holders Biological records Horses in the United Kingdom
https://en.wikipedia.org/wiki/Morse%20code%20mnemonics
Morse code mnemonics are systems to represent the sound of Morse characters in a way intended to be easy to remember. Since every one of these mnemonics requires a two-step mental translation between sound and character, none of these systems are useful for using manual Morse at practical speeds. Amateur radio clubs can provide resources to learn Morse code. Cross-linguistic Visual mnemonic Visual mnemonic charts have been devised over the ages; Baden-Powell included one in the Girl Guides handbook in 1918, and more up-to-date versions appeared ca. 1988. Other visual mnemonic systems have been created for Morse code, mapping the elements of the Morse code characters onto pictures for easy memorization. For instance, "R" (di-dah-dit) might be represented as a "racecar" seen in a profile view, with the two wheels of the racecar being the dits and the body being the dah. English Syllabic mnemonics Syllabic mnemonics are based on the principle of associating a word or phrase to each Morse code letter, with stressed syllables standing for a dah and unstressed ones for a dit. There is no well-known complete set of syllabic mnemonics for English, but various mnemonics do exist for individual letters. Slavic languages In Czech, the mnemonic device for remembering Morse codes lies in remembering words that begin with each appropriate letter and have a so-called long vowel (i.e. á é í ó ú ý) for every dash and a short vowel (a e i o u y) for every dot. Additionally, some other theme-related sets of words have been thought out as Czech folklore. In Polish, which does not distinguish long and short vowels, Morse mnemonics are also words or short phrases that begin with each appropriate letter, but a dash is coded as a syllable containing an "o" (or "ó"), while a syllable containing another vowel codes for a dot. For some letters, multiple mnemonics are in use; the table shows one example. Hebrew Invented in 1922 by Zalman Cohen, a communication soldier in the Haganah organization. Indone
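A minimal sketch of the Czech vowel rule just described; the helper below and the example word "akát" (a commonly cited Czech mnemonic for A) are illustrative, not from this excerpt:

```python
# Derive a Morse pattern from the vowels of a Czech mnemonic word:
# long vowels (á é í ó ú ý) encode dashes, short vowels (a e i o u y) dots.
LONG_VOWELS = set("áéíóúý")
SHORT_VOWELS = set("aeiouy")

def mnemonic_to_morse(word: str) -> str:
    pattern = []
    for ch in word.lower():
        if ch in LONG_VOWELS:
            pattern.append("-")   # long vowel -> dash
        elif ch in SHORT_VOWELS:
            pattern.append(".")   # short vowel -> dot
    return "".join(pattern)

print(mnemonic_to_morse("akát"))  # ".-", the Morse code for A
```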
https://en.wikipedia.org/wiki/CygnusEd
CygnusEd is a text editor for AmigaOS and MorphOS. It was first developed in 1986–1987 by Bruce Dawson, Colin Fox and Steve LaRocque, who were working for CygnusSoft Software. It was the first Amiga text editor with an undo/redo feature and one of the first Amiga programs with an ARexx scripting port, through which it was possible to integrate the editor with ARexx-enabled C compilers and build a semi-integrated development environment. Many Amiga programmers grew up with CygnusEd, and a considerable part of the Amiga software library was created with it. It is still one of very few text editors that support jerk-free soft scrolling. It remained popular even after Commodore's bankruptcy in 1994. In 1997 version 4 was developed by Olaf Barthel; it was ported to MorphOS by Ralph Schmidt in 2000 and made available to users owning the original CygnusEd 4 CD-ROM. In 2007 version 5 was finished, again by Olaf Barthel; it runs natively on AmigaOS 2 and AmigaOS 4. References Text editors Amiga development software AmigaOS 4 software MorphOS software Rexx TeX editors
https://en.wikipedia.org/wiki/Witt%20vector
In mathematics, a Witt vector is an infinite sequence of elements of a commutative ring. Ernst Witt showed how to put a ring structure on the set of Witt vectors, in such a way that the ring of Witt vectors $W(\mathbb{F}_p)$ over the finite field of order $p$ is isomorphic to $\mathbb{Z}_p$, the ring of $p$-adic integers. They have a highly non-intuitive structure upon first glance because their additive and multiplicative structure depends on an infinite set of recursive formulas which do not behave like addition and multiplication formulas for standard $p$-adic integers. The main idea behind Witt vectors is that instead of using the standard $p$-adic expansion $a = a_0 + a_1 p + a_2 p^2 + \cdots$ to represent an element in $\mathbb{Z}_p$, we can instead consider an expansion using the Teichmüller character $\omega : \mathbb{F}_p^\times \to \mathbb{Z}_p^\times$, which sends each element in the solution set of $x^{p-1} - 1$ in $\mathbb{F}_p$ to an element in the solution set of $x^{p-1} - 1$ in $\mathbb{Z}_p$. That is, we expand out elements in $\mathbb{Z}_p$ in terms of roots of unity instead of as profinite elements in $\prod \mathbb{F}_p$. We can then express a $p$-adic integer as an infinite sum $a = \omega(a_0) + \omega(a_1)p + \omega(a_2)p^2 + \cdots$, which gives a Witt vector $(a_0, a_1, a_2, \ldots)$. Then, the non-trivial additive and multiplicative structure in Witt vectors comes from using this map to give the set of Witt vectors an additive and multiplicative structure such that $\omega$ induces a commutative ring morphism. History In the 19th century, Ernst Eduard Kummer studied cyclic extensions of fields as part of his work on Fermat's Last Theorem. This led to the subject now known as Kummer theory. Let $k$ be a field containing a primitive $n$-th root of unity. Kummer theory classifies degree $n$ cyclic field extensions $K$ of $k$. Such fields are in bijection with order $n$ cyclic groups $\Delta \subseteq k^\times/(k^\times)^n$, where $\Delta$ corresponds to $k(\Delta^{1/n})$. But suppose that $k$ has characteristic $p$. The problem of studying degree $p$ extensions of $k$, or more generally degree $p^n$ extensions, may appear superficially similar to Kummer theory. However, in this situation, $k$ cannot contain a primitive $p$-th root of unity. Indeed, if $\zeta$ is a $p$-th root of unity in $k$, then it satisfies $\zeta^p = 1$. But consider the expression $(\zeta - 1)^p$. By expanding using binomial coefficients we see that the operation of ra
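To make "recursive formulas" concrete, here is the standard length-2 addition law for $p$-typical Witt vectors (a textbook fact, not stated in the excerpt above): requiring the ghost components $w_0 = X_0$ and $w_1 = X_0^p + pX_1$ to add componentwise forces

$$(a_0, a_1) + (b_0, b_1) = \left(a_0 + b_0,\; a_1 + b_1 + \frac{a_0^p + b_0^p - (a_0 + b_0)^p}{p}\right),$$

where the fraction is an integer polynomial in $a_0, b_0$ by the binomial theorem; already in the second coordinate, addition is visibly not componentwise.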
https://en.wikipedia.org/wiki/Parallel%20Virtual%20File%20System
The Parallel Virtual File System (PVFS) is an open-source parallel file system. A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. PVFS was designed for use in large-scale cluster computing. PVFS focuses on high-performance access to large data sets. It consists of a server process and a client library, both of which are written entirely as user-level code. A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities. The client library provides for high-performance access via the Message Passing Interface (MPI). PVFS is being jointly developed by the Parallel Architecture Research Laboratory at Clemson University, the Mathematics and Computer Science Division at Argonne National Laboratory, and the Ohio Supercomputer Center. PVFS development has been funded by NASA Goddard Space Flight Center, the DOE Office of Science Advanced Scientific Computing Research program, NSF PACI and HECURA programs, and other government and private agencies. PVFS is now known as OrangeFS in its newest development branch. History PVFS was first developed in 1993 by Walt Ligon and Eric Blumer as a parallel file system for Parallel Virtual Machine (PVM) as part of a NASA grant to study the I/O patterns of parallel programs. PVFS version 0 was based on Vesta, a parallel file system developed at IBM T. J. Watson Research Center. Starting in 1994 Rob Ross re-wrote PVFS to use TCP/IP and departed from many of the original Vesta design points. PVFS version 1 was targeted to a cluster of DEC Alpha workstations networked using switched FDDI. Like Vesta, PVFS striped data across multiple servers and allowed I/O requests based on a file view that described a strided access pattern. Unlike Vesta, the striping and view were not dependent on a common record size. Ross' research focused on scheduling
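As a minimal illustration of the striping idea described here (round-robin placement is the general concept; the function below is a sketch, not PVFS's actual layout code):

```python
# Map a logical file offset to (server index, stripe index, offset within
# stripe) under simple round-robin striping across servers.
def locate(offset: int, stripe_size: int, num_servers: int):
    stripe = offset // stripe_size   # which stripe the byte falls in
    server = stripe % num_servers    # stripes rotate across the servers
    within = offset % stripe_size    # position inside that stripe
    return server, stripe, within

# Example: 64 KiB stripes over 4 servers.
print(locate(200_000, 65_536, 4))    # (3, 3, 3392)
```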
https://en.wikipedia.org/wiki/Minix%203
Minix 3 is a small, Unix-like operating system. It is published under a BSD-3-Clause license and is a successor project to the earlier versions, Minix 1 and 2. The project's main goal is for the system to be fault-tolerant by detecting and repairing its faults on the fly, with no user intervention. The main uses of the system are envisaged to be embedded systems and education. Minix 3 supports IA-32 and ARM architecture processors. It can also run on emulators or virtual machines, such as Bochs, VMware Workstation, Microsoft Virtual PC, Oracle VirtualBox, and QEMU. A port to the PowerPC architecture is in development. The distribution comes on a live CD and does not support live USB installation. Minix 3 is believed to have inspired the Intel Management Engine (ME) OS found in Intel's Platform Controller Hub, starting with the introduction of ME 11, which is used with Skylake and Kaby Lake processors. It has been argued that Minix could be the most widely used OS on x86/AMD64 processors, with more installations than Microsoft Windows, Linux, or macOS, because of its use in the Intel ME. The project has been dormant since 2018, and the latest release is 3.4.0 rc6 from 2017, although the Minix 3 discussion group is still active. Goals of the project Reflecting on the nature of monolithic-kernel-based systems, where a driver (which has, according to Minix creator Tanenbaum, approximately 3–7 times as many bugs as a usual program) can bring down the whole system, Minix 3 aims to create an operating system that is a "reliable, self-healing, multiserver Unix clone". To achieve that, the code running in kernel mode must be minimal, with the file server, process server, and each device driver running as separate user-mode processes. Each driver is carefully monitored by a part of the system named the reincarnation server. If a driver fails to respond to pings from this server, it is shut down and replaced by a fresh copy of the driver. In a monolithic system, a bug i
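A toy sketch of the reincarnation-server idea, under the simplifying assumption that drivers are ordinary child processes and that "failing to respond" means exiting; Minix's real mechanism uses kernel message passing, not Python:

```python
import subprocess
import time

# Hypothetical stand-ins for driver processes (here just sleeping commands).
DRIVERS = {"disk_driver": ["sleep", "3"], "net_driver": ["sleep", "5"]}

def reincarnation_server(poll_seconds: float = 1.0) -> None:
    """Restart any 'driver' that stops responding (here: that exits)."""
    procs = {name: subprocess.Popen(cmd) for name, cmd in DRIVERS.items()}
    while True:
        for name, proc in procs.items():
            if proc.poll() is not None:          # driver crashed or exited
                print(f"{name} died (exit {proc.returncode}); restarting")
                procs[name] = subprocess.Popen(DRIVERS[name])
        time.sleep(poll_seconds)

# reincarnation_server()  # runs forever, reviving the stand-in drivers
```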
https://en.wikipedia.org/wiki/WURFL
WURFL (Wireless Universal Resource FiLe) is a set of proprietary application programming interfaces (APIs) and an XML configuration file which contains information about device capabilities and features for a variety of mobile devices, focused on mobile device detection. Until version 2.2, WURFL was released under an "open source / public domain" license. Prior to version 2.2, device information was contributed by developers around the world and the WURFL was updated frequently, reflecting new wireless devices coming on the market. In June 2011, the founder of the WURFL project, Luca Passani, and Steve Kamerman, the author of Tera-WURFL, a popular PHP WURFL API, formed ScientiaMobile, Inc to provide commercial mobile device detection support and services using WURFL. As of August 30, 2011, the ScientiaMobile WURFL APIs are licensed under a dual-license model, using the AGPL license for non-commercial use and a proprietary commercial license. The current version of the WURFL database itself is no longer open source. Solution approaches There have been several approaches to the problem of serving content to a wide range of devices, including developing very primitive content and hoping it works on a variety of devices, limiting support to a small subset of devices, or bypassing the browser solution altogether and developing a Java ME or BREW client application. WURFL solves this by allowing development of content pages using abstractions of page elements (buttons, links and textboxes, for example). At run time, these are converted to the appropriate, specific markup types for each device. In addition, the developer can specify that other content decisions be made at runtime based on device-specific capabilities and features (which are all in the WURFL). WURFL Cloud In March 2012, ScientiaMobile announced the launch of the WURFL Cloud. While the WURFL Cloud is a paid service, a free offer is made available to hobbyists and micro-companies for use on mobile sites with limited traffic. Currently, the WURFL
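A minimal sketch of hierarchical capability lookup in the spirit of WURFL's fall_back device hierarchy; the device IDs and capability names below are made-up toy data, not the real WURFL schema:

```python
# Each device entry inherits unspecified capabilities from its fall_back.
DEVICES = {
    "generic":       {"fall_back": None,      "max_image_width": 90},
    "generic_xhtml": {"fall_back": "generic", "max_image_width": 120},
    "acme_phone_v2": {"fall_back": "generic_xhtml", "ajax_support": True},
}

def capability(device_id, name):
    """Walk up the fall_back chain until some device defines the capability."""
    while device_id is not None:
        entry = DEVICES[device_id]
        if name in entry and name != "fall_back":
            return entry[name]
        device_id = entry["fall_back"]
    return None

print(capability("acme_phone_v2", "max_image_width"))  # 120, inherited
```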
https://en.wikipedia.org/wiki/Gfarm%20file%20system
Gfarm file system is an open-source distributed file system, generally used for large-scale cluster computing and wide-area data sharing, and provides features to manage replica location explicitly. The name is derived from the Grid Data Farm architecture it implements. Grid Datafarm is a petascale data-intensive computing project initiated in Japan. The project is a collaboration among High Energy Accelerator Research Organization (KEK), National Institute of Advanced Industrial Science and Technology (AIST), the University of Tokyo, Tokyo Institute of Technology and University of Tsukuba. The challenge involves construction of a Peta- to Exascale parallel filesystem exploiting local storage of PCs spread over the worldwide Grid. See also Distributed file system List of file systems, the distributed parallel fault-tolerant file system section References External links Gfarm file system Home Page OSS Tsukuba at GitHub Distributed file systems Distributed file systems supported by the Linux kernel Network file systems
https://en.wikipedia.org/wiki/Symbolic%20simulation
In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation. Symbolic simulation is a form of simulation where many possible executions of a system are considered simultaneously. This is typically achieved by augmenting the domain over which the simulation takes place. A symbolic variable can be used in the simulation state representation in order to index multiple executions of the system. For each possible valuation of these variables, there is a concrete system state that is being indirectly simulated. Because symbolic simulation can cover many system executions in a single simulation, it can greatly reduce the size of verification problems. Techniques such as symbolic trajectory evaluation (STE) and generalized symbolic trajectory evaluation (GSTE) are based on this idea of symbolic simulation. See also Symbolic execution Symbolic computation Electronic design automation Formal methods
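A small illustration of the idea that one symbolic run indexes many concrete executions, using a toy 3-step XOR accumulator (the expression representation is an ad hoc assumption for the sketch):

```python
import itertools

def xor(a, b):
    """Build a symbolic XOR node instead of computing a concrete value."""
    return ("xor", a, b)

def evaluate(expr, env):
    """Concretize a symbolic expression under one valuation of the inputs."""
    if isinstance(expr, bool):
        return expr
    if isinstance(expr, str):                     # a symbolic input variable
        return env[expr]
    _, a, b = expr
    return evaluate(a, env) != evaluate(b, env)   # boolean XOR

# One symbolic run of a 3-step XOR accumulator...
state = False
for i in range(3):
    state = xor(state, f"x{i}")

# ...covers all 2**3 concrete executions it indirectly simulates.
for bits in itertools.product([False, True], repeat=3):
    env = {f"x{i}": b for i, b in enumerate(bits)}
    assert evaluate(state, env) == (bits[0] ^ bits[1] ^ bits[2])
print("symbolic state:", state)
```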
https://en.wikipedia.org/wiki/Index%20arbitrage
Index arbitrage is a subset of statistical arbitrage focusing on index components. An index (such as the S&P 500) is made up of several components (in the case of the S&P 500, 500 large US stocks picked by S&P to represent the US market), and the value of the index is typically computed as a linear function of the component prices, where the details of the computation (such as the weights of the linear function) are determined in accordance with the index methodology. The idea of index arbitrage is to exploit discrepancies between the market price of a product that tracks the index (such as a stock market index future or exchange-traded fund) and the market prices of the underlying index components, which are typically stocks. For example, an arbitrageur could take the current prices of traded stocks, calculate a synthetic index value using the relevant index methodology, and then apply an interest rate and dividend adjustment to calculate the "fair value" of the stock market index future. If the stock market index future is trading above its "fair value", the arbitrageur can buy the component stocks and sell the index future. Likewise, if the stock market index future is trading below its "fair value", the arbitrageur can short the component stocks and buy the index future. In both cases, the arbitrageur would be exposed to basis risk if the interest rate and dividend yield risks are left unhedged. In a different example, the arbitrageur can take the current prices of traded stocks, calculate the "fair value" of an ETF (based on its holdings, which are chosen to track the index), and arbitrage between the market price of the ETF and the market prices of the stock holdings. In this scenario, the arbitrageur would use the ETF creation and redemption process to net out the offsetting ETF and stock positions. See also Algorithmic trading Complex event processing Dark pool Electronic trading Implementation shortfall Investment strategy Quantitative trading
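A sketch of the fair-value comparison described above, using a deliberately simplified cost-of-carry model with simple interest; the numbers and the cost band are illustrative assumptions:

```python
# Fair value of an index future under a simple-interest cost-of-carry model.
def futures_fair_value(spot_index, rate, dividend_yield, years_to_expiry):
    carry = (rate - dividend_yield) * years_to_expiry
    return spot_index * (1.0 + carry)

def arb_signal(futures_price, fair_value, band=0.5):
    """The band absorbs transaction costs before a trade is signalled."""
    if futures_price > fair_value + band:
        return "sell future, buy component stocks"
    if futures_price < fair_value - band:
        return "buy future, short component stocks"
    return "no trade"

fv = futures_fair_value(5000.0, 0.05, 0.015, 0.25)  # hypothetical inputs
print(round(fv, 2), "->", arb_signal(5050.0, fv))   # 5043.75 -> sell future...
```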
https://en.wikipedia.org/wiki/North%20American%20Datum
The North American Datum (NAD) is the horizontal datum now used to define the geodetic network in North America. A datum is a formal description of the shape of the Earth along with an "anchor" point for the coordinate system. In surveying, cartography, and land-use planning, two North American Datums are in use for making lateral or "horizontal" measurements: the North American Datum of 1927 (NAD 27) and the North American Datum of 1983 (NAD 83). Both are geodetic reference systems based on slightly different assumptions and measurements. Vertical measurements, based on distances above or below Mean High Water (MHW), are calculated using the North American Vertical Datum of 1988 (NAVD 88). NAD 83, along with NAVD 88, is set to be replaced with a new GPS- and gravimetric geoid model-based geometric reference frame and geopotential datum in 2022. First North American Datum of 1901 In 1901 the United States Coast and Geodetic Survey adopted a national horizontal datum called the United States Standard Datum, based on the Clarke Ellipsoid of 1866. It was fitted to data previously collected for regional datums, which by that time had begun to overlap. In 1913, Canada and Mexico adopted that datum, so it was also renamed the North American Datum. North American Datum of 1927 As more data were gathered, discrepancies appeared, so the datum was recomputed in 1927, using the same spheroid and origin as its predecessor. The North American Datum of 1927 (NAD 27) was based on surveys of the entire continent from a common reference point that was chosen in 1901, because it was as near the center of the contiguous United States as could be calculated: It was based on a triangulation station at the junction of the transcontinental triangulation arc of 1899 on the 39th parallel north and the triangulation arc along the 98th meridian west that was near the geographic center of the contiguous United States. The datum declares the Meades Ranch Triangulation Station in Osborne C
https://en.wikipedia.org/wiki/DECmate
DECmate was the name of a series of PDP-8-compatible computers produced by the Digital Equipment Corporation in the late 1970s and early 1980s. All of the models used an Intersil 6100 (later known as the Harris 6100) or Harris 6120 (an improved Intersil 6100) microprocessor which emulated the 12-bit DEC PDP-8 CPU. They were text-only and used the OS/78 or OS/278 operating systems, which were extensions of OS/8 for the PDP-8. Aimed at the word processing market, they typically ran the WPS-8 word-processing program. Later models optionally had Intel 8080 or Z80 microprocessors which allowed them to run CP/M. The range was a development of the VT78 which was introduced in July 1977. VT78 Introduced in July 1977, this machine was built into a VT52 case and had an Intersil 6100 microprocessor running at 2.2 MHz. The standard configuration included an RX02 dual 8-inch floppy disk unit which was housed in the pedestal that the computer rested on. DECmate Introduced in 1980, this machine was built into a VT100 case. It had a 10 MHz clock and 32 Kwords of memory. It was also known as the VT278. DECmate II As part of a three-pronged strategy against IBM, the company released this model in 1982 at the same time as the PDP-11-based PRO-350 and the Intel 8088-based Rainbow 100. The DECmate II resembles the Rainbow 100 but uses the 6120 processor. Its two operating systems are the WPS-8 word processing system, and the COS-310 Commercial Operating System running DIBOL. Like the others it had a monochrome VR201 (VT220-style) monitor, an LK201 keyboard and dual 400 KB single-sided quad-density 5.25-inch RX50 floppy disk drives. It had 32 Kwords of RAM for use by programs, and a further 32 Kwords containing code which was used for device emulation. Code running in this second bank was nicknamed "slushware", in contrast to firmware since it was loaded from floppy disk as the machine booted. It was also known as the PC278. The model could be expanded, either by adding anot
https://en.wikipedia.org/wiki/Ferrier%20Lecture
The Ferrier Lecture is a Royal Society lectureship given every three years "on a subject related to the advancement of natural knowledge on the structure and function of the nervous system". It was created in 1928 to honour the memory of Sir David Ferrier, a neurologist who was the first British scientist to electrically stimulate the brain for the purpose of scientific study. In its 90-year history, the Lecture has been given 30 times. It has never been given more than once by the same person. The first woman to be awarded the honour was Prof. Christine Holt in 2017. The first lecture was given in 1929 by Charles Scott Sherrington, and was titled "Some functional problems attaching to convergence". The most recent lecture was given by Prof. Christine Holt in 2017, recognising her work on "understanding of the key molecular mechanisms involved in nerve growth, guidance and targeting which has revolutionised our knowledge of growing axon tips". In 1971, the lecture was given by two individuals (David Hunter Hubel and Torsten Nils Wiesel) on the same topic, with the title "The function and architecture of the visual cortex". List of Lecturers References General Specific Biology education in the United Kingdom Neurology Royal Society lecture series
https://en.wikipedia.org/wiki/Drinfeld%20module
In mathematics, a Drinfeld module (or elliptic module) is roughly a special kind of module over a ring of functions on a curve over a finite field, generalizing the Carlitz module. Loosely speaking, they provide a function field analogue of complex multiplication theory. A shtuka (also called F-sheaf or chtouca) is a sort of generalization of a Drinfeld module, consisting roughly of a vector bundle over a curve, together with some extra structure identifying a "Frobenius twist" of the bundle with a "modification" of it. Drinfeld modules were introduced by Vladimir Drinfeld (1974), who used them to prove the Langlands conjectures for GL2 of an algebraic function field in some special cases. He later invented shtukas and used shtukas of rank 2 to prove the remaining cases of the Langlands conjectures for GL2. Laurent Lafforgue proved the Langlands conjectures for GLn of a function field by studying the moduli stack of shtukas of rank n. "Shtuka" is a Russian word штука meaning "a single copy", which comes from the German noun "Stück", meaning "piece, item, or unit". In Russian, the word "shtuka" is also used in slang for a thing with known properties, but having no name in a speaker's mind. Drinfeld modules The ring of additive polynomials We let $k$ be a field of characteristic $p > 0$. The ring $k\{\tau\}$ is defined to be the ring of noncommutative (or twisted) polynomials $a_0 + a_1\tau + a_2\tau^2 + \cdots$ over $k$, with the multiplication given by $\tau a = a^p \tau$ for $a \in k$. The element $\tau$ can be thought of as a Frobenius element: in fact, $k$ is a left module over $k\{\tau\}$, with elements of $k$ acting as multiplication and $\tau$ acting as the Frobenius endomorphism of $k$. The ring $k\{\tau\}$ can also be thought of as the ring of all (absolutely) additive polynomials $a_0 x + a_1 x^p + a_2 x^{p^2} + \cdots$ in $k[x]$, where a polynomial $f$ is called additive if $f(x + y) = f(x) + f(y)$ (as elements of $k[x, y]$). The ring of additive polynomials is generated as an algebra over $k$ by the polynomial $\tau(x) = x^p$. The multiplication in the ring of additive polynomials is given by composition of polynomials, not by multiplication of commutative polynomials, and is not commutative. Defin
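A quick sanity check of the twisted multiplication rule (standard, though not spelled out in the excerpt): identifying $\tau$ with $x^p$, multiplication in $k\{\tau\}$ matches composition of additive polynomials:

$$(a\tau)\cdot(b\tau) = a\,b^p\,\tau^2 \quad\longleftrightarrow\quad (a x^p)\circ(b x^p) = a\,(b x^p)^p = a\,b^p\,x^{p^2},$$

so the coefficient $b$ picks up a Frobenius twist $b^p$ when $\tau$ moves past it, exactly as $\tau b = b^p \tau$ prescribes.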
https://en.wikipedia.org/wiki/Circular%20arc
A circular arc is the arc of a circle between a pair of distinct points. If the two points are not directly opposite each other, one of these arcs, the minor arc, subtends an angle at the center of the circle that is less than $\pi$ radians (180 degrees); and the other arc, the major arc, subtends an angle greater than $\pi$ radians. The arc of a circle is defined as the part or segment of the circumference of a circle. A straight line that connects the two ends of the arc is known as a chord of a circle. If the length of an arc is exactly half of the circle, it is known as a semicircular arc. Length The length (more precisely, arc length) of an arc of a circle with radius $r$ and subtending an angle $\theta$ (measured in radians) with the circle center — i.e., the central angle — is $L = \theta r$. This is because $\frac{L}{2\pi r} = \frac{\theta}{2\pi}$. Substituting in the circumference $2\pi r$ and, with $\alpha$ being the same angle measured in degrees, since $\theta = \frac{\alpha\pi}{180}$, the arc length equals $L = \frac{\alpha\pi r}{180}$. A practical way to determine the length of an arc in a circle is to plot two lines from the arc's endpoints to the center of the circle, measure the angle where the two lines meet the center, then solve for $L$ by cross-multiplying the statement: measure of angle in degrees/360° = $L$/circumference. For example, if the measure of the angle is 60 degrees and the circumference is 24 inches, then $\frac{60}{360} = \frac{L}{24}$, so $L = 4$ inches. This is so because the circumference of a circle and the degrees of a circle, of which there are always 360, are directly proportional. The upper half of a circle can be parameterized as $y = \sqrt{r^2 - x^2}$. Then the arc length from $x = a$ to $x = b$ is $L = \int_a^b \frac{r}{\sqrt{r^2 - x^2}}\,dx$. Sector area The area of the sector formed by an arc and the center of a circle (bounded by the arc and the two radii drawn to its endpoints) is $A = \frac{r^2\theta}{2}$. The area $A$ has the same proportion to the circle area as the angle $\theta$ to a full circle: $\frac{A}{\pi r^2} = \frac{\theta}{2\pi}$. We can cancel $\pi$ on both sides: $\frac{A}{r^2} = \frac{\theta}{2}$. By multiplying both sides by $r^2$, we get the final result: $A = \frac{1}{2} r^2 \theta$. Using the conversion described above, we find that the area of the sector for a central angle measured in degrees is $A = \frac{\alpha}{360} \pi r^2$. Segment area Th
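The two formulas above, as a small script (a straightforward transcription, with the 60-degree worked example as a check):

```python
import math

def arc_length(radius: float, angle_deg: float) -> float:
    return math.radians(angle_deg) * radius           # L = theta * r

def sector_area(radius: float, angle_deg: float) -> float:
    return 0.5 * radius**2 * math.radians(angle_deg)  # A = r^2 * theta / 2

# The worked example above: 60 degrees of a circle with circumference 24.
r = 24 / (2 * math.pi)
print(arc_length(r, 60))   # 4.0 inches
print(sector_area(r, 60))  # ~7.64 square inches
```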
https://en.wikipedia.org/wiki/Ethernet%20in%20the%20first%20mile
Ethernet in the first mile (EFM) refers to using one of the Ethernet family of computer network technologies between a telecommunications company and a customer's premises. From the customer's point of view, it is their first mile, although from the access network's point of view it is known as the last mile. A working group of the Institute of Electrical and Electronics Engineers (IEEE) produced the standards known as IEEE 802.3ah-2004, which were later included in the overall standard IEEE 802.3-2008. Although it is often used for businesses, it can also be known as Ethernet to the home (ETTH). One family of standards known as Ethernet passive optical network (EPON) uses a passive optical network. History With wide, metro, and local area networks using various forms of Ethernet, the goal was to eliminate non-native transport such as Ethernet over Asynchronous Transfer Mode (ATM) from access networks. One early effort was the EtherLoop technology invented at Nortel Networks in 1996, and then spun off into the company Elastic Networks in 1998. Its principal inventor was Jack Terry. The hope was to combine the packet-based nature of Ethernet with the ability of digital subscriber line (DSL) technology to work over existing telephone access wires. The name comes from local loop, which traditionally describes the wires from a telephone company office to a subscriber. The protocol was half-duplex with control from the provider side of the loop. It adapted to line conditions with a peak of 10 Mbit/s advertised, but 4-6 Mbit/s more typical, at a distance of about . Symbol rates were 1 megabaud or 1.67 megabaud, with 2, 4, or 6 bits per symbol. The EtherLoop product name was registered as a trademark in the US and Canada. The EtherLoop technology was eventually purchased by Paradyne Networks in 2002, which was in turn purchased by Zhone Technologies in 2005. Another effort was the concept promoted by Michael Silverton of using Ethernet variants that used fiber optic c
https://en.wikipedia.org/wiki/Henry%20Wilbraham
Henry Wilbraham (25 July 1825 – 13 February 1883) was an English mathematician. He is known for discovering and explaining the Gibbs phenomenon nearly fifty years before J. Willard Gibbs did. Gibbs and Maxime Bôcher, as well as nearly everyone else, were unaware of Wilbraham's paper on the Gibbs phenomenon. Biography Henry Wilbraham was born to George and Lady Anne Wilbraham at Delamere, Cheshire. His family was privileged, with his father a parliamentarian and his mother the daughter of the Earl Fortescue. He attended Harrow School before being admitted to Trinity College, Cambridge at the age of 16. He received a BA in 1846 and an MA in 1849 from Cambridge. At the age of 22 he published his paper on the Gibbs phenomenon. He remained at Trinity as a Fellow until 1856. In 1864 he married Mary Jane Marriott, and together they had seven children. In the last years of his life, he was the District Registrar of the Chancery Court at Manchester. References Paul J. Nahin, Dr. Euler's Fabulous Formula, Princeton University Press, 2006. Ch. 4, Sect. 4. 1825 births 1883 deaths 19th-century English mathematicians Mathematical analysts People educated at Harrow School Alumni of Trinity College, Cambridge People from Cheshire
https://en.wikipedia.org/wiki/Flight%20information%20display%20system
A flight information display system (FIDS) is a computer system used in airports to display flight information to passengers, in which a computer system controls mechanical or electronic display boards or monitors in order to display arriving and departing flight information in real-time. The displays are located inside or around an airport terminal. A virtual version of a FIDS can also be found on most airport websites and teletext systems. In large airports, there are different sets of FIDS for each terminal or even each major airline. FIDS are used to inform passengers of boarding gates, departure/arrival times, destinations, notifications of flight delays/flight cancellations, partner airlines, and so on. Each line on a FIDS indicates a different flight number accompanied by: the airline name/logo and/or its IATA or ICAO airline designator (which can also include names/logos of interlining/codesharing airlines or partner airlines, e.g. HX252/BR2898); the city of origin or destination, and any intermediate points; the expected arrival or departure time and/or the updated time (reflecting any delays); the status of the flight, such as "Landed", "Delayed", "Boarding", etc.; and, in the case of departing flights, the check-in counter numbers or the name of the airline handling the check-in, and the gate number. Due to code sharing, a flight may be represented by a series of different flight numbers. For example, LH 474 and AC 9099, both partners of Star Alliance, codeshare on a route using a single aircraft, either Lufthansa or Air Canada, to operate that route at that given time. Lines may be sorted by time, airline name, or city. Most FIDS are now displayed on LCD or LED screens, although some airports still use split-flap displays. Display technology Airport infrastructure
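A toy data structure holding the per-row fields listed above, reusing the LH 474 / AC 9099 codeshare example; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FlightRow:
    flight_no: str   # codeshares add more flight numbers for the same row
    airline: str
    city: str        # origin or destination, plus any intermediate points
    scheduled: str   # "HH:MM"
    status: str      # "Landed", "Delayed", "Boarding", ...
    gate: str = ""   # departures only

rows = [
    FlightRow("AC9099", "Air Canada", "Frankfurt", "14:30", "Boarding", "B12"),
    FlightRow("LH474", "Lufthansa", "Frankfurt", "14:30", "Boarding", "B12"),
]
# Sort by time, then airline name, then city, as the article notes.
rows.sort(key=lambda r: (r.scheduled, r.airline, r.city))
```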
https://en.wikipedia.org/wiki/XenMan
XenMan is a Xen hypervisor management tool with a graphical user interface that allows a user to perform the standard set of operations (start, stop, pause, kill, shutdown, reboot, snapshot, etc.) in addition to some higher-level operations such as the creation of a guest domain (which includes the creation of the configuration file, the retrieval of appropriate kernels and initial RAM disks, as well as the starting of the domain) in one single operation. The goal is to create a graphical management tool that fulfills all the Xen management needs of both novice and advanced users. The application is developed in the Python programming language, uses the GTK widget set and is released under the GPL. External links XenMan sourceforge project page. XenMan screenshot. Virtualization software
https://en.wikipedia.org/wiki/Leonor%20F.%20Loree
Leonor F. Loree (April 23, 1858 – September 6, 1940) was an American civil engineer, lawyer, railroad executive, and founder of the American Newcomen Society. He obtained a Bachelor of Science degree in 1877, a Master of Science in 1880, a Civil Engineering degree in 1896 and a Doctor of Law in 1917, all from Rutgers College. He also obtained a Doctor of Engineering degree from Rensselaer Polytechnic Institute in 1933. He was President of the Delaware & Hudson Railroad and had interests in the Kansas City Southern, Baltimore and Ohio, New York Central, and Rock Island railroads. He was a Trustee at Rutgers University from 1909 to 1940 and was Chairman of the Rutgers Board of Trustees Committee on New Jersey College for Women (now Douglass College) until 1938. He was the donor of the New Jersey College for Women Athletic Field (Antilles Field). Rutgers has a building named after him, Leonor Fresnel Loree, erected in 1963 on the Douglass campus. Accomplishments In 1923, Loree was a principal founder of The Newcomen Society in North America, a learned society promoting engineering, technology and free enterprise. In 1903, Loree, together with Frank PJ Patenall, received a patent for the upper quadrant semaphore, which soon became the most widely used form of railroad lineside signal in North America. Railroads continued to install them until the 1940s. "This is a helluva way to run a railroad!" In 1906 a committee of creditors asked Loree to take charge of the Kansas City Southern Railroad. At the time it was considered no more than "two streaks of rust, its engines lost steam, the men were disheartened and the stations were shacks." After Loree made his initial inspection, in a speech in front of the financial community, he ended his professional and technical description of the railroad line by stating, "This is a helluva way to run a railroad". Career Baltimore and Ohio Railroad: president 1901 - 1904 Chicago, Rock Island and Pacific Railroad president - 1904
https://en.wikipedia.org/wiki/Schmidt%20decomposition
In linear algebra, the Schmidt decomposition (named after its originator Erhard Schmidt) refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and plasticity. Theorem Let $H_1$ and $H_2$ be Hilbert spaces of dimensions $n$ and $m$ respectively. Assume $n \geq m$. For any vector $w$ in the tensor product $H_1 \otimes H_2$, there exist orthonormal sets $\{u_1, \ldots, u_m\} \subset H_1$ and $\{v_1, \ldots, v_m\} \subset H_2$ such that $w = \sum_{i=1}^{m} \alpha_i\, u_i \otimes v_i$, where the scalars $\alpha_i$ are real, non-negative, and unique up to re-ordering. Proof The Schmidt decomposition is essentially a restatement of the singular value decomposition in a different context. Fix orthonormal bases $\{e_1, \ldots, e_n\} \subset H_1$ and $\{f_1, \ldots, f_m\} \subset H_2$. We can identify an elementary tensor $e_i \otimes f_j$ with the matrix $e_i f_j^{\mathsf{T}}$, where $f_j^{\mathsf{T}}$ is the transpose of $f_j$. A general element of the tensor product, $w = \sum_{i=1}^{n}\sum_{j=1}^{m} \beta_{ij}\, e_i \otimes f_j$, can then be viewed as the $n \times m$ matrix $M_w = (\beta_{ij})$. By the singular value decomposition, there exist an $n \times n$ unitary $U$, an $m \times m$ unitary $V$, and a positive semidefinite diagonal $m \times m$ matrix $\Sigma$ such that $M_w = U \begin{bmatrix} \Sigma \\ 0 \end{bmatrix} V^*$. Write $U = \begin{bmatrix} U_1 & U_2 \end{bmatrix}$, where $U_1$ is $n \times m$, and we have $M_w = U_1 \Sigma V^*$. Let $u_1, \ldots, u_m$ be the $m$ column vectors of $U_1$, $v_1, \ldots, v_m$ the column vectors of $\overline{V}$, and $\alpha_1, \ldots, \alpha_m$ the diagonal elements of $\Sigma$. The previous expression is then $M_w = \sum_{k=1}^{m} \alpha_k\, u_k v_k^{\mathsf{T}}$. Then $w = \sum_{k=1}^{m} \alpha_k\, u_k \otimes v_k$, which proves the claim. Some observations Some properties of the Schmidt decomposition are of physical interest. Spectrum of reduced states Consider a vector $w$ of the tensor product, in the form of its Schmidt decomposition $w = \sum_{i=1}^{m} \alpha_i\, u_i \otimes v_i$. Form the rank-1 matrix $\rho = w w^*$. Then the partial trace of $\rho$, with respect to either system A or B, is a diagonal matrix whose non-zero diagonal elements are $|\alpha_i|^2$. In other words, the Schmidt decomposition shows that the reduced states of $\rho$ on either subsystem have the same spectrum. Schmidt rank and entanglement The strictly positive values $\alpha_i$ in the Schmidt decomposition of $w$ are its Schmidt coefficients, or Schmidt numbers. The total number of Schmidt coefficients of $w$, counted with multiplicity, is called its Schmidt rank. If $w$ can be expressed as a product $u \otimes v$ then $w$ is cal
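The proof sketch translates directly into a numerical recipe: reshape the state vector into its $n \times m$ coefficient matrix and take the singular values. A minimal sketch using NumPy, with the Bell state as an example (the example state is an assumption for illustration):

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on a 2 x 2 bipartite system.
n, m = 2, 2
w = np.zeros(n * m)
w[0] = w[3] = 1 / np.sqrt(2)     # amplitudes of |00> and |11>

M = w.reshape(n, m)              # the beta_ij matrix from the proof
coeffs = np.linalg.svd(M, compute_uv=False)
print(coeffs)                    # [0.7071..., 0.7071...] -> Schmidt rank 2
```

A Schmidt rank greater than 1, as here, certifies that the state is entangled rather than a product state.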
https://en.wikipedia.org/wiki/Outpost%20Firewall%20Pro
Outpost Firewall Pro is a discontinued personal firewall developed by Agnitum (founded in 1999 in St. Petersburg, Russia). Overview Outpost Firewall Pro monitors incoming and outgoing network traffic on Windows machines. Outpost also monitors application behavior in an attempt to stop malicious software covertly infecting Windows systems. Agnitum called this technology "Component Control" and "Anti-Leak Control" (included in the HIPS-based "Host Protection" module). The product also includes a spyware scanner and monitor, along with a pop-up blocker and spyware filter for Internet Explorer and Mozilla Firefox. (Outpost's web surfing security tools had included blacklists for IPs and URLs, unwanted web page element filters and ad-blocking. The technology altogether is known as "Web Control".) Version 7.5 adds new techniques to help PC users block unknown new threats before their activation: Removable media protection (so-called "USB Virus Protection", part of the Proactive Protection module) blocks unsigned programs set to run automatically upon the connection of removable media. SmartDecision technology (so-called "Personal Virus Adviser", the basis of the Proactive Protection module) facilitates the decision-making process. Version 8 introduces further improvements as well as Windows 8 compatibility and a redesigned user interface; version 8 also extends x64 host-based intrusion prevention system (HIPS) support. Outpost Firewall Pro allows the user to specifically define how a PC application connects to the Internet. This is known as the "Rules Wizard" mode, or policy, and is the default behavior for the program. In this mode, Outpost Firewall Pro displays a prompt each time a new process attempts network access or when a process requests a connection that was not covered by its pre-validated rules. The idea is to let the user decide whether an application should be allowed a network connection to a specific address, port or protocol. Outpost Firewall includes
https://en.wikipedia.org/wiki/Metabolic%20control%20analysis
Metabolic control analysis (MCA) is a mathematical framework for describing metabolic, signaling, and genetic pathways. MCA quantifies how variables, such as fluxes and species concentrations, depend on network parameters. In particular, it is able to describe how network-dependent properties, called control coefficients, depend on local properties called elasticities or elasticity coefficients. MCA was originally developed to describe the control of metabolic pathways but was subsequently extended to describe signaling and genetic networks. MCA has sometimes also been referred to as Metabolic Control Theory, but this terminology was rather strongly opposed by Henrik Kacser, one of the founders. More recent work has shown that MCA can be mapped directly onto classical control theory and that the two are, as such, equivalent. Biochemical systems theory is a similar formalism, though with rather different objectives. Both are evolutions of an earlier theoretical analysis by Joseph Higgins. Control coefficients A control coefficient measures the relative steady-state change in a system variable, e.g. pathway flux ($J$) or metabolite concentration ($S$), in response to a relative change in a parameter, e.g. enzyme activity or the steady-state rate ($v_i$) of step $i$. The two main control coefficients are the flux and concentration control coefficients. Flux control coefficients are defined by $C^J_{v_i} = \frac{dJ/J}{dv_i/v_i} = \frac{d\ln J}{d\ln v_i}$ and concentration control coefficients by $C^S_{v_i} = \frac{dS/S}{dv_i/v_i} = \frac{d\ln S}{d\ln v_i}$. Summation theorems The flux control summation theorem was discovered independently by the Kacser/Burns group and the Heinrich/Rapoport group in the early 1970s and late 1960s. The flux control summation theorem implies that metabolic fluxes are systemic properties and that their control is shared by all reactions in the system. When a single reaction changes its control of the flux, this is compensated by changes in the control of the same flux by all other reactions. Elasticity coefficients The elasticity coefficient measures the local response of
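A standard worked instance from the MCA literature (not stated in this excerpt): for a two-step pathway $X_0 \xrightarrow{v_1} S \xrightarrow{v_2} X_1$, the flux summation theorem $C^J_{v_1} + C^J_{v_2} = 1$ combined with the connectivity theorem $C^J_{v_1}\,\varepsilon^{v_1}_S + C^J_{v_2}\,\varepsilon^{v_2}_S = 0$ yields

$$C^J_{v_1} = \frac{\varepsilon^{v_2}_S}{\varepsilon^{v_2}_S - \varepsilon^{v_1}_S}, \qquad C^J_{v_2} = \frac{-\varepsilon^{v_1}_S}{\varepsilon^{v_2}_S - \varepsilon^{v_1}_S},$$

showing concretely how the systemic control coefficients are fixed by the purely local elasticities.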
https://en.wikipedia.org/wiki/Vacuum%20flange
A vacuum flange is a flange at the end of a tube used to connect vacuum chambers, tubing and vacuum pumps to each other. Vacuum flanges are used for scientific and industrial applications to allow various pieces of equipment to interact via physical connections and to allow vacuum maintenance, monitoring, and manipulation from outside the vacuum chamber. Several flange standards exist, with differences in ultimate attainable pressure, size, and ease of attachment. Vacuum flange types Several vacuum flange standards exist, and the same flange types are called by different names by different manufacturers and standards organizations. KF/QF The ISO standard quick-release flange is known by the names Quick Flange (QF) or Kleinflansch (KF, German for "small flange"). The KF designation has been adopted by ISO, DIN, and Pneurop. KF flanges are made with a chamfered back surface and are attached with a circular clamp and an elastomeric O-ring (AS568 specification) mounted in a metal centering ring. Standard sizes are indicated by the nominal inner diameter in millimeters for flanges 10 through 50 mm in diameter. Sizes 10, 20 and 32 are less common sizes (see Renard numbers). Some sizes share their flange dimensions with their respective larger neighbor and use the same clamp size. This means a DN10KF can mate to a DN16KF by using an adaptive centering ring. The same applies for DN20KF to DN25KF and DN32KF to DN40KF. ISO The ISO large flange standard is known as LF, LFB, MF or sometimes just ISO flange. As in KF flanges, the flanges are joined by a centering ring and an elastomeric O-ring. An extra spring-loaded circular clamp is often used around the large-diameter O-rings to prevent them from rolling off the centering ring during mounting. The ISO large flanges come in two varieties. The ISO-K (or ISO LF) flanges are joined with double-claw clamps, which clamp to a circular groove on the tubing side of the flange. The ISO-F (or ISO
https://en.wikipedia.org/wiki/Read%E2%80%93modify%E2%80%93write
In computer science, read–modify–write is a class of atomic operations (such as test-and-set, fetch-and-add, and compare-and-swap) that both read a memory location and write a new value into it simultaneously, either with a completely new value or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization. Maurice Herlihy (1991) ranks atomic operations by their consensus numbers, as follows: ∞: memory-to-memory move and swap, augmented queue, compare-and-swap, fetch-and-cons, sticky byte, load-link/store-conditional (LL/SC); 2n − 2: n-register assignment; 2: test-and-set, swap, fetch-and-add, queue, stack; 1: atomic read and atomic write. It is impossible to implement an operation that requires a given consensus number with only operations of lower consensus number, no matter how many of such operations one uses. Read–modify–write instructions often produce unexpected results when used on I/O devices, as a write operation may not affect the same internal register that would be accessed in a read operation. This term is also associated with RAID levels that perform actual write operations as atomic read–modify–write sequences. Such RAID levels include RAID 4, RAID 5 and RAID 6. See also Linearizability Read–erase–modify–write References Concurrency control Computer memory
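A sketch of compare-and-swap semantics and the classic CAS retry loop; CPython exposes no CAS primitive, so the "atomicity" here is emulated with a lock, purely for illustration (real CAS is a single hardware instruction):

```python
import threading

class AtomicCell:
    """Lock-emulated atomic cell illustrating read-modify-write semantics."""
    def __init__(self, value: int):
        self._value = value
        self._lock = threading.Lock()

    def load(self) -> int:
        with self._lock:
            return self._value

    def compare_and_swap(self, expected: int, new: int) -> int:
        """Atomically write `new` if the cell holds `expected`; return old."""
        with self._lock:
            old = self._value
            if old == expected:
                self._value = new
            return old

def atomic_increment(cell: AtomicCell) -> None:
    while True:                                      # classic CAS retry loop
        old = cell.load()
        if cell.compare_and_swap(old, old + 1) == old:
            return                                   # no one raced us; done

cell = AtomicCell(0)
threads = [threading.Thread(target=lambda: [atomic_increment(cell)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.load())  # 4000: no lost updates despite the races
```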
https://en.wikipedia.org/wiki/Verilog-AMS
Verilog-AMS is a derivative of the Verilog hardware description language that includes analog and mixed-signal extensions (AMS) in order to define the behavior of analog and mixed-signal systems. It extends the event-based simulator loop of Verilog/SystemVerilog/VHDL with a continuous-time simulator, which solves the differential equations of the analog domain. The two domains are coupled: analog events can trigger digital actions and vice versa. Overview The Verilog-AMS standard was created with the intent of enabling designers of analog and mixed-signal systems and integrated circuits to create and use modules that encapsulate high-level behavioral descriptions as well as structural descriptions of systems and components. Verilog-AMS is an industry-standard modeling language for mixed-signal circuits. It provides both continuous-time and event-driven modeling semantics, and so is suitable for analog, digital, and mixed analog/digital circuits. It is particularly well suited for verification of very complex analog, mixed-signal and RF integrated circuits. Verilog and Verilog-AMS are not procedural programming languages, but event-based hardware description languages (HDLs). As such, they provide sophisticated and powerful language features for the definition and synchronization of parallel actions and events. On the other hand, many actions defined in HDL program statements can run in parallel (somewhat similar to threads and tasklets in procedural languages, but much more fine-grained). However, Verilog-AMS can be coupled with procedural languages like ANSI C using the Verilog Procedural Interface of the simulator, which eases test-suite implementation and allows interaction with legacy code or test-bench equipment. The original intention of the Verilog-AMS committee was a single language for both analog and digital design; however, due to delays in the merger process, it remains at Accellera while Verilog evolved into SystemVerilog and went to the IEEE. Code
https://en.wikipedia.org/wiki/Torque%20motor
A torque motor is a specialized form of DC electric motor which can operate indefinitely while stalled, without incurring damage. In this mode of operation, the motor applies a steady torque to the load (hence the name). A torque motor that cannot perform a complete rotation is known as a limited-angle torque motor. Brushless torque motors are available; the elimination of commutators and brushes allows higher-speed operation. Construction Torque motors normally use toroidal construction, allowing them to have a wider diameter, more torque, and better dissipation of heat. They differ from other motors in their higher torque, thermal performance, and ability to operate while drawing high current in a stalled state. Linear versions An analogous device, moving linearly rather than rotating, is described as a force motor. These are widely used for refrigeration compressors and ultra-quiet air compressors, where the force motor produces simple harmonic motion in conjunction with a restoring spring. Applications Tape recorders A common application of a torque motor would be the supply- and take-up reel motors in a tape drive. In this application, driven from a low voltage, the characteristics of these motors allow a relatively constant light tension to be applied to the tape whether or not the capstan is feeding tape past the tape heads. Driven from a higher voltage (and so delivering a higher torque), the torque motors can also achieve fast-forward and rewind operation without requiring any additional mechanics such as gears or clutches. Computer games In the computer gaming world, torque motors are used in force feedback steering wheels. Throttle control Another common application is the control of the throttle of an internal combustion engine in conjunction with an electronic governor. In this usage, the motor works against a return spring to move the throttle in accordance with the output of the governor. The latter monitors engine speed by counting electrical p
https://en.wikipedia.org/wiki/WKMJ-TV
WKMJ-TV (channel 68) is a PBS member television station in Louisville, Kentucky, United States. It is the flagship station for KET2, the second television service of Kentucky Educational Television (KET), which is owned by the Kentucky Authority for Educational Television. The station's master control and internal operations are located at KET's main studios at the O. Leonard Press Telecommunications Center in Lexington. WKMJ's transmitter, like those of several other Louisville stations including main KET transmitter WKPC-TV, is located at the Kentuckiana Tower Farm at Floyds Knobs, in Floyd County, Indiana. WKMJ and WKPC are the only KET-owned stations whose transmitters are outside Kentucky's borders. History As KET's original Louisville station When Kentucky Educational Television began broadcasting in 1968, it was built to provide the widest statewide coverage with the fewest transmitters possible. Network officials expected that the transmitters in Elizabethtown (WKZT-TV, channel 23) and Owenton (WKON-TV, channel 54) would provide sufficient service in the Louisville area. Reception, however, was poorer than expected, prompting KET in March 1969 to announce plans to file for UHF channel 68 and strike a deal with NBC affiliate WAVE-TV for a new tower, which would also house a stronger WKPC-TV. The station, with the call sign WKMJ (the -TV suffix was added in 1983), began test broadcasts on August 17, 1970, and full service began two weeks later. Channel 68 originally went off the air whenever the rest of KET's stations were airing the same programming as WKPC-TV. Duplication remained low, and at the end of 1982, an agreement was reached for WKPC-TV to be the primary PBS outlet in Louisville. However, after this arrangement, duplication returned. In 1995, after WKPC-TV experienced a series of financial reversals caused by for-profit ventures intended to bolster station income, talks about merging the two stations began, with channel 15—with its str
https://en.wikipedia.org/wiki/ANSI/ISA-95
ANSI/ISA-95, or ISA-95 as it is more commonly referred to, is an international standard from the International Society of Automation for developing an automated interface between enterprise and control systems. This standard has been developed for global manufacturers. It was developed to be applied in all industries, and in all sorts of processes, such as batch, continuous, and repetitive processes. The objectives of ISA-95 are to provide consistent terminology that is a foundation for supplier and manufacturer communications, to provide consistent information models, and to provide consistent operations models, which are a foundation for clarifying application functionality and how information is to be used. The ISA-95 standard has five parts. ANSI/ISA-95.00.01-2000, Enterprise-Control System Integration Part 1: Models and Terminology consists of standard terminology and object models, which can be used to decide which information should be exchanged. The models help define boundaries between the enterprise systems and the control systems. They help address questions like which tasks can be executed by which function and what information must be exchanged between applications. The ISA-95 models comprise: context models (the scheduling and control (Purdue) hierarchy and the equipment hierarchy); a functional data flow model (manufacturing functions and data flows); object models (objects, object relationships, and object attributes); and operations activity models (the operations elements PO, MO, QO, IO; an operations data flow model; operations functions; and operations flows). ANSI/ISA-95.00.02-2001, Enterprise-Control System Integration Part 2: Object Model Attributes consists of attributes for every object that is defined in part 1. The objects and attributes of Part 2 can be used for the exchange of information between different systems, but these objects and attributes can also be used as the basis for relational databases. ANSI/ISA-95.00.03-2005, Enterprise-Control System Integration, Part 3: Models
https://en.wikipedia.org/wiki/Saprophagy
Saprophages are organisms that obtain nutrients by consuming decomposing dead plant or animal biomass. They are distinguished from detritivores in that saprophages are sessile consumers while detritivores are mobile. Typical saprophagic animals include sedentary polychaetes such as amphitrites (Amphitritinae, worms of the family Terebellidae) and other terebellids. The eating of wood, whether live or dead, is known as xylophagy. The activity of animals feeding only on dead wood is called sapro-xylophagy and those animals, sapro-xylophagous. Ecology In food webs, saprophages generally play the roles of decomposers. There are two main branches of saprophages, broken down by nutrient source. There are necrophages which consume dead animal biomass, and thanatophages which consume dead plant biomass. See also Detritivore Decomposer Saprotrophic nutrition Consumer-resource systems References Eating behaviors Mycology Soil biology
https://en.wikipedia.org/wiki/Indiglo
Indiglo is a product feature on watches marketed by Timex, incorporating an electroluminescent panel as a backlight for even illumination of the watch dial. The brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo, as the original watches featuring the technology emitted a green-blue light. History The Indiglo name was originally developed by Austin Innovations Inc. Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including men's and women's watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995. From 2006 to 2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, microprocessor-controlled movement. To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework, and used hands treated with Super-LumiNova luminescent pigment for low-light legibility rather than Indiglo technology. When the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities as the TX brand, at a much lower price point, incorporating Indiglo technology rather than the Super-LumiNova pigments. Design Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB, use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display. References External links How does an Indiglo watch work? at HowStuffWorks Overview of electroluminescent display technology,
https://en.wikipedia.org/wiki/Wisconsin%20Integrally%20Synchronized%20Computer
The Wisconsin Integrally Synchronized Computer (WISC) was an early digital computer designed and built at the University of Wisconsin–Madison. Operational in 1954, it was the first digital computer in the state. Pioneering computer designer Gene Amdahl drafted the WISC's design as his PhD thesis. The computer was built over the period 1951-1954. It had 1,024 50-bit words (equivalent to about 6 KB) of drum memory, with an operation time of 1/15 second and throughput of 60 operations per second, which was achieved by an early form of instruction pipeline. It was capable of both fixed and floating point operation. It weighed about . Part of it was at the Computer History Museum until about 2020, when it was moved to an unknown location. References External links Oral history interview with Gene M. Amdahl. Charles Babbage Institute, University of Minnesota, Minneapolis. Amdahl starts by describing his early life and education, recalling his experiences teaching in the Advanced Specialized Training Program during and after World War II. Amdahl discusses his graduate work at the University of Wisconsin and his direction of the design and construction of the Wisconsin Integrally Synchronized Computer. Describes his role in the design of several computers for IBM including the STRETCH, IBM 701, 701A, and IBM 704. He discusses his work with Nathaniel Rochester and IBM's management of the design process for computers. He also mentions his work with Ramo-Wooldridge, Aeronutronic, and Computer Sciences Corporation. Contains Gene Amdahl's PhD thesis and WISC User's Manual Photos: Early computers One-of-a-kind computers
https://en.wikipedia.org/wiki/TJ-2
TJ-2 (Type Justifying Program) was published by Peter Samson in May 1963 and is thought to be the first page layout program. Although it lacks page numbering, page headers and footers, TJ-2 is the first word processor to provide a number of essential typographic alignment and automatic typesetting features: Columnation, indentation, margins, justification, and centering Word wrap, page breaks and automatic hyphenation Tab stop simulation Developed from two earlier Samson programs, Justify and TJ-1, TJ-2 was written for the PDP-1 that was donated to the Massachusetts Institute of Technology in 1961 by Digital Equipment Corporation. Taking English text as input, TJ-2 aligns left and right margins, justifying the output using white space and word hyphenation. Text is marked up with single lowercase characters combined with the PDP-1's overline character, carriage returns, and internal concise codes. The computer's six toggle switches control the input and output devices, enable and disable hyphenation, and stop the session. Words can be hyphenated with a light pen on the computer's CRT display and from the session's dictionary in memory. On-screen hyphenation has SAVE and FORGET commands and OOPS, the undo command. Comments in the code were quoted thirty years later: "The ways of God are just and can be justified to man" and "Girls who wear pants should be sure that the end justifies the jeans." TJ-2 was succeeded by TYPSET and RUNOFF, a pair of complementary programs written in 1964 for the CTSS operating system. TYPSET and RUNOFF soon evolved into runoff for Multics, which was in turn ported to Unix in the 1970s as roff. A similar program for the ITS PDP-6 and later the PDP-10 was TJ6. See also Colossal Typewriter Desktop publishing Expensive Typewriter Peter Samson Text editor Text Editor and Corrector (TECO) TYPSET and RUNOFF Notes References Transcription of the 1963 memo describing TJ-2, with annotations by Daniel P. B. Smith. Samson begins at 1:
https://en.wikipedia.org/wiki/Digital%20cross-connect%20system
A digital cross-connect system (DCS or DXC) is a piece of circuit-switched network equipment, used in telecommunications networks, that allows lower-level TDM bit streams, such as DS0 bit streams, to be rearranged and interconnected among higher-level TDM signals, such as DS1 bit streams. DCS units are available that operate on both older T-carrier/E-carrier bit streams, as well as newer SONET/SDH bit streams. DCS devices can be used for "grooming" telecommunications traffic, switching traffic from one circuit to another in the event of a network failure, supporting automated provisioning, and other applications. Having a DCS in a circuit-switched network provides important flexibility that can otherwise only be obtained at higher cost using manual "DSX" cross-connect patch panels. While DCS devices "switch" traffic, they are not packet switches—they switch circuits, not packets, and the circuit arrangements they are used to manage tend to persist over very long time spans, typically months or longer, as compared to packet switches, which can route every packet differently and operate on micro- or millisecond time spans. DCS units are also sometimes colloquially called "DACS" units, after a proprietary brand name of DCS units created and sold by AT&T's Western Electric division, now Alcatel-Lucent. Modern digital access and cross-connect systems are not limited to the T-carrier system, and may accommodate high data rates such as those of SONET. Transmuxing Transmuxing (transmux: transcode multiplexing) is a telecommunications signaling format change between two signaling methods, typically synchronous optical network signals, SONET, and various time-division multiplexing, TDM, signals. Transmuxing changes the “container” without changing the “contents.” Transmuxing provides the carrier the capability to embed a telecommunications signal from one logical TDM circuit to another within SONET without physically breaking down the
https://en.wikipedia.org/wiki/Inexact%20differential
An inexact differential or imperfect differential is a differential whose integral is path dependent. It is most often used in thermodynamics to express changes in path dependent quantities such as heat and work, but is defined more generally within mathematics as a type of differential form. In contrast, an integral of an exact differential is always path independent since the integral acts to invert the differential operator. Consequently, a quantity with an inexact differential cannot be expressed as a function of only the variables within the differential. I.e., its value cannot be inferred just by looking at the initial and final states of a given system. Inexact differentials are primarily used in calculations involving heat and work because they are path functions, not state functions. Definition An inexact differential is a differential for which the integral over some two paths with the same end points is different. Specifically, there exist integrable paths $\gamma_1, \gamma_2$ such that $\gamma_1(0) = \gamma_2(0)$, $\gamma_1(1) = \gamma_2(1)$ and $\int_{\gamma_1} \delta f \neq \int_{\gamma_2} \delta f$. In this case, we denote the integrals as $\Delta f_1$ and $\Delta f_2$ respectively to make explicit the path dependence of the change of the quantity we are considering as $\Delta f$. More generally, an inexact differential $\delta f$ is a differential form which is not an exact differential, i.e., for all functions $Q$, $\delta f \neq \mathrm{d}Q$. The fundamental theorem of calculus for line integrals requires path independence in order to express the values of a given vector field in terms of the partial derivatives of another function that is the multivariate analogue of the antiderivative. This is because there can be no unique representation of an antiderivative for inexact differentials since their variation is inconsistent along different paths. This stipulation of path independence is a necessary addendum to the fundamental theorem of calculus because in one-dimensional calculus there is only one path in between two points defined by a function. Notation Thermodynamics Instead of the differential symbol $\mathrm{d}$, the symbol $\delta$ is used, a convention wh
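To see path dependence concretely, the following Python sketch (an illustration added here, not part of the article) integrates the inexact form y dx and the exact form d(xy) = y dx + x dy along two different paths from (0,0) to (1,1); the path parametrizations and step count are arbitrary choices:

import numpy as np

def line_integral(P, Q, path, n=20001):
    # integrate P(x, y) dx + Q(x, y) dy along a path parametrized on t in [0, 1]
    t = np.linspace(0.0, 1.0, n)
    x, y = path(t)
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    return np.trapz(P(x, y) * dx + Q(x, y) * dy, t)

# two paths from (0, 0) to (1, 1): along x then y, and along y then x
path_a = lambda t: (np.minimum(2 * t, 1.0), np.maximum(2 * t - 1.0, 0.0))
path_b = lambda t: (np.maximum(2 * t - 1.0, 0.0), np.minimum(2 * t, 1.0))

# inexact form y dx: the two integrals differ (about 0 versus 1)
print(line_integral(lambda x, y: y, lambda x, y: 0 * x, path_a))
print(line_integral(lambda x, y: y, lambda x, y: 0 * x, path_b))

# exact form d(xy) = y dx + x dy: both integrals equal the state change xy = 1
print(line_integral(lambda x, y: y, lambda x, y: x, path_a))
print(line_integral(lambda x, y: y, lambda x, y: x, path_b))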
https://en.wikipedia.org/wiki/Heteroduplex%20analysis
Heteroduplex analysis (HDA) is a method in biochemistry used since 1992 to detect point mutations in DNA (deoxyribonucleic acid). Heteroduplexes are dsDNA molecules that have one or more mismatched base pairs; homoduplexes, on the other hand, are dsDNA molecules that are perfectly paired. The method depends upon the fact that heteroduplexes show reduced electrophoretic mobility relative to homoduplex DNA. Heteroduplexes are formed between different DNA alleles: in a mixture of amplified wild-type and mutant DNA, heteroduplexes form between mutant and wild-type strands, while homoduplexes re-form between perfectly complementary strands. There are two types of heteroduplexes, depending on the type and extent of the mutation in the DNA. Small deletions or insertions create bulge-type heteroduplexes, which are stable and have been verified by electron microscopy. Single base substitutions create less stable heteroduplexes called bubble-type heteroduplexes; because of their low stability they are difficult to visualize by electron microscopy. HDA is widely used for rapid screening for the 3 bp p.F508del deletion in the CFTR gene. References Biochemistry methods Biochemistry Molecular biology
https://en.wikipedia.org/wiki/Annexin%20A5%20affinity%20assay
In molecular biology, an annexin A5 affinity assay is a test to quantify the number of cells undergoing apoptosis. The assay uses the protein annexin A5 to tag apoptotic and dead cells, and the numbers are then counted using either flow cytometry or a fluorescence microscope. The annexin A5 protein binds in a calcium-dependent manner to phosphatidylserine-containing membrane surfaces, which are usually present only on the inner leaflet of the membrane. Background Apoptosis is a form of programmed cell death that is used by the body to remove unwanted, damaged, or senescent cells from tissues. Removal of apoptotic cells is carried out via phagocytosis by white blood cells such as macrophages and dendritic cells. Phagocytic white blood cells recognize apoptotic cells by their exposure of negatively charged phospholipids (phosphatidylserine) on the cell surface. In normal cells, the negative phospholipids reside on the inner side of the cellular membrane while the outer surface of the membrane is occupied by uncharged phospholipids. After a cell has entered apoptosis, the negatively charged phospholipids are transported to the outer cell surface by a hypothetical protein known as scramblase. Phagocytic white blood cells express a receptor that can bind to and detect the negatively charged phospholipids on the apoptotic cell surfaces. After detection the apoptotic cells are removed. Detection of cell death with annexin A5 In healthy individuals, apoptotic cells are rapidly removed by phagocytes. However, in pathological processes, the removal of apoptotic cells may be delayed or even absent. Dying cells in tissue can be detected with annexin A5. Labeling of annexin A5 with fluorescent or radioactive molecules makes it possible to detect binding of labeled annexin A5 to the cell surface of apoptotic cells. After binding to the phospholipid surface, annexin A5 assembles into a trimeric cluster. This trimer consists of three annexin A5 molecules that ar
https://en.wikipedia.org/wiki/Alcohol%20consumption%20recommendations
Recommendations for consumption of the drug alcohol (also known formally as ethanol) vary from recommendations to be alcohol-free to daily or weekly drinking "safe limits" or maximum intakes. Many governmental agencies and organizations have issued guidelines. These recommendations concerning maximum intake are distinct from any legal restrictions, for example countries with drunk driving laws or countries that have prohibited alcohol. General recommendations These guidelines apply to men, and women who are neither pregnant nor breastfeeding. Alcohol-free recommendations The World Health Organization published a statement in The Lancet Public Health in April 2023 that "there is no safe amount that does not affect health". The 2023 Nordic Nutrition Recommendations state "Since no safe limit for alcohol consumption can be provided, the recommendation in NNR2023 is that everyone should avoid drinking alcohol." The American Heart Association recommends that those who do not already consume alcoholic beverages should not start doing so because of the negative long-term effects of alcohol consumption. The Canadian Centre on Substance Use and Addiction states "Not drinking has benefits, such as better health, and better sleep." Alcohol intake recommendations by country Some governments set the same recommendation for both sexes, while others give separate limits. The guidelines give drink amounts in a variety of formats, such as standard drinks, fluid ounces, or milliliters, but have been converted to grams of ethanol for ease of comparison. Overall, the daily limits range from 10–37 g per day for men and 10–16 g per day for women. Weekly limits range from 27–170 g/week for men and 27–140 g/week for women. The weekly limits are lower than seven times the daily limits, meaning intake on a particular day may be higher than one-seventh of the weekly amount, but consumption on other days of the week should be correspondingly lower. The limits for women are consistently lower than those for m
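Because the guidelines publish limits in different units, comparing them means converting each drink to grams of ethanol, which is a simple volume-times-strength calculation. A hedged Python helper (the function name and example drink are illustrative; 0.789 g/mL is the density of pure ethanol):

ETHANOL_DENSITY_G_PER_ML = 0.789  # density of pure ethanol, grams per milliliter

def grams_of_ethanol(volume_ml: float, abv_percent: float) -> float:
    # grams of pure ethanol in a drink of the given volume and strength
    return volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML

# a 355 mL (12 US fl oz) beer at 5% ABV holds about 14 g of ethanol,
# which matches the US definition of one "standard drink"
print(round(grams_of_ethanol(355, 5.0), 1))  # ~14.0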
https://en.wikipedia.org/wiki/Minimum%20inhibitory%20concentration
In microbiology, the minimum inhibitory concentration (MIC) is the lowest concentration of a chemical, usually a drug, which prevents visible in vitro growth of bacteria or fungi. MIC testing is performed in both diagnostic and drug discovery laboratories. The MIC is determined by preparing a dilution series of the chemical, adding agar or broth, then inoculating with bacteria or fungi, and incubating at a suitable temperature. The value obtained is largely dependent on the susceptibility of the microorganism and the antimicrobial potency of the chemical, but other variables can affect results too. The MIC is often expressed in micrograms per milliliter (μg/mL) or milligrams per liter (mg/L). In diagnostic labs, MIC test results are used to grade the susceptibility of microbes. These grades are assigned based on agreed upon values called breakpoints. Breakpoints are published by standards development organizations such as the U.S. Clinical and Laboratory Standards Institute (CLSI), the British Society for Antimicrobial Chemotherapy (BSAC) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST). The purpose of measuring MICs and grading microbes is to enable physicians to prescribe the most appropriate antimicrobial treatment. The first step in drug discovery is often measurement of the MICs of biological extracts, isolated compounds or large chemical libraries against bacteria and fungi of interest. MIC values provide a quantitative measure of an extract or compound’s antimicrobial potency. The lower the MIC, the more potent the antimicrobial. When in vitro toxicity data is available, MICs can also be used to calculate selectivity index values, a measure of off-target to target toxicity. History After the discovery and commercialization of antibiotics, microbiologist, pharmacologist, and physician Alexander Fleming developed the broth dilution technique using the turbidity of the broth for assessment. This is commonly believed to be
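As an illustration of how an MIC is read from a dilution series, the following Python sketch (illustrative only, not a clinical procedure; the example concentrations are invented) picks the lowest concentration at which no visible growth was recorded:

def mic(results):
    # results maps concentration (ug/mL) -> True if visible growth was observed;
    # the MIC is read as the lowest concentration with no visible growth
    # (a real reading would also check that every higher well is clear)
    for concentration, growth in sorted(results.items()):
        if not growth:
            return concentration
    return None  # no tested concentration inhibited growth

two_fold_series = {0.25: True, 0.5: True, 1.0: True, 2.0: False, 4.0: False, 8.0: False}
print(mic(two_fold_series))  # 2.0 (ug/mL)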
https://en.wikipedia.org/wiki/CRON-diet
The CRON-diet (Calorie Restriction with Optimal Nutrition) is a nutrient-rich, reduced calorie diet developed by Roy Walford, Lisa Walford, and Brian M. Delaney. The CRON-diet involves calorie restriction in the hope that the practice will improve health and retard aging, while still attempting to provide the recommended daily amounts of various nutrients. Other names include CR-diet, Longevity diet, and Anti-Aging Plan. The Walfords and Delaney, among others, founded the CR Society International to promote the CRON-diet. Context There is no experimental evidence that calorie restriction can slow biological aging in humans. The biological mechanisms for the supposed antiaging effects are not determined, as of 2021. Origins The CRON-diet was developed from data Walford compiled during his participation in Biosphere 2 from 1991 to 1993. The subjects ate a diet low in fat and in calories but "nutrient-dense", derived from the food crops raised inside the Biosphere. Debate on effectiveness The writer Christopher Turner in The Telegraph reported that Walford claimed that the diet "will retard your rate of ageing, extend lifespan (up to perhaps 150 to 160 years, depending on when you start and how thoroughly you hold to it), and markedly decrease susceptibility to most major diseases." The same article noted however that the diet "failed to dramatically increase Walford's lifespan; he died in 2004 aged 79." A review of the effects of calorie restriction in humans by Anna Picca and colleagues in 2017 noted that direct evidence was limited to what had been "recorded from the members of the Calorie Restriction Society, who have imposed on themselves a regimen of severe CR with optimal nutrition (CRON), believing to extend in this way their healthy lifespan." The review noted that bone density was reduced but that bone strength was improved and maximal aerobic capacity per unit body mass was maintained or increased, while measures of quality of life including depressi
https://en.wikipedia.org/wiki/Quotition%20and%20partition
In arithmetic, quotition and partition are two ways of viewing fractions and division. In quotition division one asks "how many parts are there?", while in partition division one asks "what is the size of each part?". For example, the expression 6 ÷ 2 can be construed in either of two ways: "How many parts of size 2 must be added to get the amount 6?" (quotition division). One can write 6 = 2 + 2 + 2; since it takes 3 parts, the conclusion is that 6 ÷ 2 = 3. "What is the size of each of 2 equal parts whose sum is 6?" (partition division). One can write 6 = 3 + 3; since the size of each part is 3, the conclusion is that 6 ÷ 2 = 3. It is a fact of elementary theoretical mathematics that the numerical answer is always the same no matter which way you put it: 6 ÷ 2 = 3. This is essentially equivalent to the commutativity of multiplication. Division involves thinking about a whole in terms of its parts. One frequent special case, division into a natural number of equal parts, is known to teachers as partition or sharing: the whole entity is separated into an integer number of equal parts. Quotition is what results from removing the word "integer" from the last sentence: allow the number of parts to be any fraction and you may have a quotition instead of a partition. See also List of partition topics References External links A University of Melbourne web page shows what to do when the fraction is a ratio of integers or rational. Operations on numbers Division (mathematics)
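The two readings map directly onto code. A small Python sketch (an illustration, not from the article) computes quotition by repeated subtraction and partition by equal sharing:

def quotition(total, part_size):
    # "how many parts of size part_size make up total?" via repeated subtraction
    count = 0
    while total >= part_size:
        total -= part_size
        count += 1
    return count

def partition(total, num_parts):
    # "how big is each of num_parts equal parts of total?"
    return total / num_parts

print(quotition(6, 2))  # 3 parts of size 2
print(partition(6, 2))  # each of 2 parts has size 3.0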
https://en.wikipedia.org/wiki/Intrusion%20tolerance
Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attacks. In that sense, it is also a computer security approach. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure. Distributed computing In distributed computing there are two major variants of intrusion tolerance mechanisms: mechanisms based on redundancy, such as Byzantine fault tolerance, and mechanisms based on intrusion detection (as implemented in intrusion detection systems) and intrusion reaction. Intrusion-tolerant server architectures Intrusion tolerance has started to influence the design of server architectures in academic institutions and industry. Examples of such server architectures include KARMA, Splunk IT Service Intelligence (ITSI), project ITUA, and the practical Byzantine Fault Tolerance (pBFT) model. See also Intrusion detection system evasion techniques References Fault tolerance Computer security
https://en.wikipedia.org/wiki/Cryptogenic%20species
A cryptogenic species ("cryptogenic" being derived from Greek "κρυπτός", meaning hidden, and "γένεσις", meaning origin) is a species whose origins are unknown. A cryptogenic species can be an animal or a plant, or a member of another kingdom or domain, such as a fungus, alga, bacterium, or even a virus. In ecology, a cryptogenic species is one which may be either a native species or an introduced species, clear evidence for either origin being absent. An example is the Northern Pacific seastar (Asterias amurensis) in Alaska and Canada. In palaeontology, a cryptogenic species is one which appears in the fossil record without clear affinities to an earlier species. See also Cosmopolitan distribution Cryptozoology References Further reading Ecology terminology
https://en.wikipedia.org/wiki/Smart%20contract
A smart contract is a computer program or a transaction protocol that is intended to automatically execute, control or document events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of the need for trusted intermediaries, arbitration costs, and fraud losses, as well as the reduction of malicious and accidental exceptions. Smart contracts are commonly associated with cryptocurrencies, and the smart contracts introduced by Ethereum are generally considered a fundamental building block for decentralized finance (DeFi) and NFT applications. Vending machines are mentioned as the oldest piece of technology equivalent to smart contract implementation. The original Ethereum white paper by Vitalik Buterin in 2014 described the Bitcoin protocol as a weak version of the smart contract concept as originally defined by Nick Szabo, and proposed a stronger version based on the Solidity language, which is Turing complete. Since Bitcoin, various cryptocurrencies have supported programming languages which allow for more advanced smart contracts between untrusted parties. A smart contract should not be confused with a smart legal contract, which refers to a traditional, natural-language, legally binding agreement that has selected terms expressed and implemented in machine-readable code. Etymology Smart contracts were first proposed in the early 1990s by Nick Szabo, who coined the term, using it to refer to "a set of promises, specified in digital form, including protocols within which the parties perform on these promises". In 1998, the term was used to describe objects in the rights-management service layer of The Stanford Infobus, which was part of the Stanford Digital Library Project. Legal status of smart contracts A smart contract does not typically constitute a valid binding agreement at law, although a smart legal contract is intended to be both executable by a machine and legally enforceable. Smar
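As a loose illustration of terms-as-executable-logic, here is a hypothetical escrow written in ordinary Python (not Solidity, and not any real contract platform; all names are invented): the payout rule is decided entirely by the program rather than by an intermediary.

from dataclasses import dataclass

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: int
    delivered: bool = False

    def confirm_delivery(self):
        self.delivered = True

    def settle(self):
        # the agreement's terms are just program logic:
        # pay the seller on delivery, otherwise refund the buyer
        payee = self.seller if self.delivered else self.buyer
        return (payee, self.amount)

contract = Escrow(buyer="alice", seller="bob", amount=100)
contract.confirm_delivery()
print(contract.settle())  # ('bob', 100)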
https://en.wikipedia.org/wiki/Leeuwenhoek%20Lecture
The Leeuwenhoek Lecture is a prize lecture of the Royal Society to recognize achievement in microbiology. The prize was originally given in 1950 and awarded annually, but from 2006 to 2018 was given triennially. From 2018 it will be awarded biennially. The prize is named after the Dutch microscopist Antonie van Leeuwenhoek and was instituted in 1948 from a bequest from George Gabb. A gift of £2000 is associated with the lecture. Leeuwenhoek Lecturers The following is a list of Leeuwenhoek Lecture award winners along with the title of their lecture: 21st Century 2024 Joanne Webster, for her achievements in advancing control of disease in humans and animals caused by parasites in Asia and Africa 2022 Sjors Scheres, for ground-breaking contributions and innovations in image analysis and reconstruction methods in electron cryo-microscopy, enabling the structure determination of complex macromolecules of fundamental biological and medical importance to atomic resolution 2020 Geoffrey L. Smith, for his studies of poxviruses which has had major impact in wider areas, notably vaccine development, biotechnology, host-pathogen interactions and innate immunity 2018 Sarah Cleaveland, Can we make rabies history? Realising the value of research for the global elimination of rabies 2015 Jeffrey Errington, for his seminal discoveries in relation to the cell cycle and cell morphogenesis in bacteria 2012 Brad Amos, How new science is transforming the optical microscope 2010 Robert Gordon Webster, Pandemic Influenza: one flu over the cuckoo's nest 2006 Richard Anthony Crowther, Microscopy goes cold: frozen viruses reveal their structural secrets. 2005 Keith Chater, Streptomyces inside out: a new perspective on the bacteria that provide us with antibiotics. 2004 David Sherratt, A bugs life 2003 Brian Spratt, Bacterial populations and bacterial disease 2002 Stephen West, DNA repair from microbes to man 2001 Robin Weiss, From Pan to pandemic: animal to human infection
https://en.wikipedia.org/wiki/Hyper%20Sports
Hyper Sports, known in Japan as Hyper Olympic '84, is an Olympic-themed sports video game released by Konami for arcades in 1984. It is the sequel to 1983's Track & Field and features seven new Olympic events. Like its predecessor, Hyper Sports has two run buttons and one action button per player. The Japanese release of the game sported an official license for the 1984 Summer Olympics. Gameplay The gameplay is much the same as Track & Field in that the player competes in an event and tries to score the most points based on performance criteria, and also by beating the computer entrants in that event. Also, the player tries to exceed a qualification time, distance, or score to advance to the next event. In Hyper Sports, if all of the events are passed successfully, the player advances to the next round of the same events, which are faster and harder to qualify for. The events changed to include these new sports: Swimming - swimming speed is controlled by the two run buttons, and breathing is controlled by the action button when prompted by the swimmer on screen. There is one re-do if a player fouls due to launching before the gun, but only one "run" at the qualifying time. Skeet shooting - selecting left or right shot via the two run buttons while a clay-bird is in the sight. There are three rounds to attempt to pass the qualifying score. If a perfect score is attained then a different pattern follows allowing for a higher score. Long horse - speed to run at the horse is computer controlled; the player jumps and pushes off the horse via the action button, and rotates as many times as possible via the run buttons (and tries to land straight up on feet). There are three attempts at the qualifying score. Archery - firing of the arrow is controlled by the action button; the elevation angle is controlled by depressing the action button and releasing at the proper time. There are three attempts at passing the qualifying score. Triple jump - speed is controlled by the run buttons, jump and angle are controlled by actio
https://en.wikipedia.org/wiki/Proof%20of%20the%20Euler%20product%20formula%20for%20the%20Riemann%20zeta%20function
Leonhard Euler proved the Euler product formula for the Riemann zeta function in his thesis Variae observationes circa series infinitas (Various Observations about Infinite Series), published by St Petersburg Academy in 1737. The Euler product formula The Euler product formula for the Riemann zeta function reads $\zeta(s) = \prod_{p} \frac{1}{1-p^{-s}}$, where the left hand side equals the Riemann zeta function: $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}$, and the product on the right hand side extends over all prime numbers p. Proof of the Euler product formula This sketch of a proof makes use of simple algebra only. This was the method by which Euler originally discovered the formula. There is a certain sieving property that we can use to our advantage: $\zeta(s) = 1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \frac{1}{4^{s}} + \cdots$ and $\frac{1}{2^{s}}\zeta(s) = \frac{1}{2^{s}} + \frac{1}{4^{s}} + \frac{1}{6^{s}} + \cdots$. Subtracting the second equation from the first we remove all elements that have a factor of 2: $\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1 + \frac{1}{3^{s}} + \frac{1}{5^{s}} + \frac{1}{7^{s}} + \cdots$. Repeating for the next term: $\frac{1}{3^{s}}\left(1-\frac{1}{2^{s}}\right)\zeta(s) = \frac{1}{3^{s}} + \frac{1}{9^{s}} + \frac{1}{15^{s}} + \cdots$. Subtracting again we get: $\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1 + \frac{1}{5^{s}} + \frac{1}{7^{s}} + \frac{1}{11^{s}} + \cdots$, where all elements having a factor of 3 or 2 (or both) are removed. It can be seen that the right side is being sieved. Repeating infinitely for $\frac{1}{p^{s}}$ where $p$ is prime, we get: $\cdots\left(1-\frac{1}{5^{s}}\right)\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{2^{s}}\right)\zeta(s) = 1$. Dividing both sides by everything but the ζ(s) we obtain: $\zeta(s) = \frac{1}{\left(1-\frac{1}{2^{s}}\right)\left(1-\frac{1}{3^{s}}\right)\left(1-\frac{1}{5^{s}}\right)\cdots}$. This can be written more concisely as an infinite product over all primes p: $\zeta(s) = \prod_{p} \frac{1}{1-p^{-s}}$. To make this proof rigorous, we need only to observe that when $\operatorname{Re}(s) > 1$, the sieved right-hand side approaches 1, which follows immediately from the convergence of the Dirichlet series for $\zeta(s)$. The case s = 1 An interesting result can be found for ζ(1), the harmonic series: $\cdots\left(1-\frac{1}{5}\right)\left(1-\frac{1}{3}\right)\left(1-\frac{1}{2}\right)\zeta(1) = 1$, which can also be written as $\zeta(1) = \frac{1}{\left(1-\frac{1}{2}\right)\left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\cdots}$, which is $\zeta(1) = \frac{2}{1}\cdot\frac{3}{2}\cdot\frac{5}{4}\cdot\frac{7}{6}\cdot\frac{11}{10}\cdots$, as $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots = \frac{2\cdot 3\cdot 5\cdot 7\cdot 11\cdots}{1\cdot 2\cdot 4\cdot 6\cdot 10\cdots}$, thus the harmonic series equals an infinite product over the primes. While the series ratio test is inconclusive for the left-hand side it may be shown divergent by bounding logarithms. Similarly for the right-hand side the infinite coproduct of reals greater than one does not guarantee divergence, e.g., $\prod_{n=1}^{\infty}\left(1+\frac{1}{n^{2}}\right)$ converges. Instead, the denominator may be written in terms of the primorial numerator so that divergence is clear given the trivial composed logarithmic divergence of an inverse prime series. Another proof Each factor (for a given prime p) in the product above can be expanded to a geometric se
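The identity is easy to check numerically. In the Python sketch below (the truncation limits 100,000 and 100 are arbitrary illustrative choices), a partial sum of the series and a partial product over primes both approach the known value ζ(2) = π²/6:

import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

s = 2
partial_sum = sum(n ** -s for n in range(1, 100001))
partial_product = 1.0
for p in primes_up_to(100):
    partial_product *= 1.0 / (1.0 - p ** -s)

print(partial_sum, partial_product, math.pi ** 2 / 6)  # all approximately 1.6449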
https://en.wikipedia.org/wiki/AES51
AES51 is a standard first published by the Audio Engineering Society in June 2006 that specifies a method of carrying Asynchronous Transfer Mode (ATM) cells over Ethernet physical structure intended in particular for use with AES47 to carry AES3 digital audio transport structure. The purpose of this is to provide an open standard, Ethernet based approach to the networking of linear (uncompressed) digital audio with extremely high quality-of-service alongside standard Internet Protocol connections. This standard specifies a method, also known as "ATM-E", of carrying ATM cells over hardware specified for IEEE 802.3 (Ethernet). It is intended as a companion standard to AES47 (Transmission of digital audio over ATM networks), to provide a standard method of carrying ATM cells and real-time clock over hardware specified for Ethernet. References Networking standards Broadcast engineering Digital audio Audio network protocols Ethernet Audio Engineering Society standards Asynchronous Transfer Mode
https://en.wikipedia.org/wiki/MPEG%20elementary%20stream
An elementary stream (ES) as defined by the MPEG communication protocol is usually the output of an audio encoder or video encoder. An ES contains only one kind of data (e.g. audio, video, or closed caption). An elementary stream is often referred to as "elementary", "data", "audio", or "video" bitstreams or streams. The format of the elementary stream depends upon the codec or data carried in the stream, but will often carry a common header when packetized into a packetized elementary stream. Header for MPEG-2 video elementary stream General layout of MPEG-1 audio elementary stream The digitized sound signal is divided up into blocks of 384 samples in Layer I and 1152 samples in Layers II and III. The sound sample block is encoded within an audio frame consisting of: header, error check, audio data, ancillary data. The header of a frame contains general information such as the MPEG layer, the sampling frequency, the number of channels, whether the frame is CRC protected, and whether the sound is the original. Although most of this information may be the same for all frames, MPEG decided to give each audio frame such a header in order to simplify synchronization and bitstream editing. See also MP3 Packetized elementary stream MPEG program stream MPEG transport stream External links ISO/IEC 11172-3:1993: Information technology -- Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s -- Part 3: Audio MPEG
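For illustration, the fixed 32-bit frame header can be unpacked with shifts and masks. This Python sketch is a toy, not a full demuxer: it decodes only the MPEG-1 audio fields described above, ignores the bitrate index, padding, and mode-extension fields, and rejects reserved values with a KeyError.

def parse_mpeg1_audio_header(frame: bytes):
    h = int.from_bytes(frame[:4], "big")
    if (h >> 21) & 0x7FF != 0x7FF:
        raise ValueError("no frame sync")
    if (h >> 19) & 0x3 != 0x3:
        raise ValueError("not MPEG-1")
    layer = {0x3: "I", 0x2: "II", 0x1: "III"}[(h >> 17) & 0x3]
    crc_protected = ((h >> 16) & 0x1) == 0        # protection bit is inverted
    sample_rate = {0: 44100, 1: 48000, 2: 32000}[(h >> 10) & 0x3]  # index 3 reserved
    mono = ((h >> 6) & 0x3) == 0x3                # channel mode 0b11 = single channel
    return layer, crc_protected, sample_rate, mono

# 0xFFFB1064 is a typical MPEG-1 Layer III, 44.1 kHz, unprotected header
print(parse_mpeg1_audio_header(bytes.fromhex("FFFB1064")))  # ('III', False, 44100, False)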
https://en.wikipedia.org/wiki/Exciter%20%28effect%29
An exciter (also called a harmonic exciter or aural exciter) is an audio signal processing technique used to enhance a signal by dynamic equalization, phase manipulation, harmonic synthesis of (usually) high frequency signals, and through the addition of subtle harmonic distortion. Dynamic equalization involves variation of the equalizer characteristics in the time domain as a function of the input. Due to the varying nature, noise is reduced compared to static equalizers. Harmonic synthesis involves the creation of higher order harmonics from the fundamental frequency signals present in the recording. As noise is usually more prevalent at higher frequencies, the harmonics are derived from a purer frequency band resulting in clearer highs. Exciters are also used to synthesize harmonics of low frequency signals to simulate deep bass in smaller speakers. Originally made in valve (tube) based equipment, they are now implemented as part of a digital signal processor, often trying to emulate analogue exciters. Exciters are mostly found as plug-ins for sound editing software and in sound enhancement processors. Aphex aural exciter The Aphex aural exciter was one of the first exciter effects. The effect was developed in the mid-1970s by Aphex Electronics. The aural exciter adds phase shift and musically related synthesized harmonics to audio signals. The first Aural Exciter units were available in the mid-1970s, exclusively on the rental basis of $30 per minute of finished recorded time. In the 1970s, certain recording artists, including Anne Murray, Neil Diamond, Jackson Browne, The Four Seasons, Olivia Newton-John, Linda Ronstadt and James Taylor stated in their liner notes "This album was recorded using the Aphex Aural Exciter." Aphex started selling the professional units, and introduced two low-cost models: Type B and Type C. The Aural Exciter circuit is now licensed by a growing list of manufacturers, including Yamaha, MacKenzie, Gentner, E-mu Systems and Bogen.
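A deliberately crude Python sketch of the harmonic-synthesis idea (not the Aphex circuit; the filter, drive, and mix settings are arbitrary illustrative choices): isolate the treble band, derive new harmonics from it with a nonlinearity, and blend them back under the dry signal.

import numpy as np

def exciter(x, sr, cutoff=3000.0, drive=4.0, mix=0.15):
    # one-pole RC high-pass isolates the band the harmonics are derived from
    alpha = 1.0 / (1.0 + 2.0 * np.pi * cutoff / sr)
    hp = np.zeros_like(x)
    for i in range(1, len(x)):
        hp[i] = alpha * (hp[i - 1] + x[i] - x[i - 1])
    harmonics = np.tanh(drive * hp)  # waveshaping generates odd harmonics
    return x + mix * harmonics       # blend synthesized highs under the dry signal

sr = 48_000
t = np.arange(sr) / sr
dull = 0.5 * np.sin(2 * np.pi * 1000.0 * t)  # a plain 1 kHz tone
bright = exciter(dull, sr)                   # the same tone with synthesized highs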
https://en.wikipedia.org/wiki/HiSoft%20Systems
HiSoft Systems is a software company based in the UK, creators of a range of programming tools for microcomputers in the 1980s and 1990s. Products Their first products were Pascal and Assembler implementations for the NASCOM 1 and 2 kit-based computers, followed by Pascal and C for ZX Spectrum computers, as well as a BASIC compiler for this platform and a C compiler for CP/M. While compilers for the ZX Spectrum were typical products for this platform, with integrated editor, compiler and runtime environment fitting in RAM together with the program's source, the C compiler for CP/M was typical for this operating system: batch operated, with separate compilation and linking stages. Their most well-known products were the Devpac assembler IDE environments (earlier known as GenST and GenAm for the Atari ST and Amiga, respectively). The Devpac IDE was a full editor/assembler/debugger environment written entirely in 68k assembler and was a favourite tool among programmers on the Atari GEM platform. HiSoft also sold HiSoft BASIC and Power BASIC, HiSoft C Interpreter for the Atari ST, Aztec C, Personal Pascal, and FTL Modula-2. They also produced WERCS, the WIMP Environment Resource Construction Set. Background The business was created in 1980 and was based in Dunstable, Bedfordshire before relocating to the village of Greenfield in the same county. In November 2001, HiSoft's staff were employed by Maxon Computer Limited, the UK arm of MAXON Computer GmbH, to work on Cinema 4D. David Link, the founder and owner, ran a café in the village of Emsworth for a year until July 2007 and a restaurant/bar/guest house in Shanklin, Isle of Wight, from 2010 until January 2015. References External links HiSoft Systems World of Spectrum HiSoft archive ZX Spectrum Software companies of the United Kingdom Amiga Atari ST Atari ST software Software companies established in 1980 1980 establishments in the United Kingdom
https://en.wikipedia.org/wiki/Open%20book%20decomposition
In mathematics, an open book decomposition (or simply an open book) is a decomposition of a closed oriented 3-manifold M into a union of surfaces (necessarily with boundary) and solid tori. Open books have relevance to contact geometry, with a famous theorem of Emmanuel Giroux (given below) that shows that contact geometry can be studied from an entirely topological viewpoint. Definition and construction Definition. An open book decomposition of a 3-dimensional manifold M is a pair (B, π) where B is an oriented link in M, called the binding of the open book; π: M \ B → S1 is a fibration of the complement of B such that for each θ ∈ S1, π−1(θ) is the interior of a compact surface Σ ⊂ M whose boundary is B. The surface Σ is called the page of the open book. This is the special case m = 3 of an open book decomposition of an m-dimensional manifold, for any m. The definition for general m is similar, except that the surface with boundary (Σ, B) is replaced by an (m − 1)-manifold with boundary (P, ∂P). Equivalently, the open book decomposition can be thought of as a homeomorphism of M to the quotient space $P \times [0,1]\,/\sim$, where $(x, 1) \sim (f(x), 0)$ for all $x \in P$ and $(x, t) \sim (x, t')$ for all $x \in \partial P$ and $t, t' \in [0,1]$, and where f: P → P is a self-homeomorphism preserving the boundary. This quotient space is called a relative mapping torus. When Σ is an oriented compact surface with n boundary components and φ: Σ → Σ is a homeomorphism which is the identity near the boundary, we can construct an open book by first forming the mapping torus Σφ. Since φ is the identity on ∂Σ, ∂Σφ is the trivial circle bundle over a union of circles, that is, a union of tori; one torus for each boundary component. To complete the construction, solid tori are glued to fill in the boundary tori so that each circle S1 × {p} ⊂ S1×∂D2 is identified with the boundary of a page. In this case, the binding is the collection of n cores S1×{q} of the n solid tori glued into the mapping torus, for arbitrarily chosen q ∈ D2. It is known that any open book can be constructed this way. As the only information used in
https://en.wikipedia.org/wiki/Uranium%20in%20the%20environment
Uranium in the environment is a global health concern, and comes from both natural and man-made sources. Mining, phosphates in agriculture, weapons manufacturing, and nuclear power are sources of uranium in the environment. In the natural environment, radioactivity of uranium is generally low, but uranium is a toxic metal that can disrupt normal functioning of the kidney, brain, liver, heart, and numerous other systems. Chemical toxicity can cause public health issues when uranium is present in groundwater, especially if concentrations in food and water are increased by mining activity. The biological half-life (the average time it takes for the human body to eliminate half the amount in the body) for uranium is about 15 days. Uranium's radioactivity can present health and environmental issues in the case of nuclear waste produced by nuclear power plants or weapons manufacturing. Uranium is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238). The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects. Natural occurrence Uranium is a naturally occurring element found in low levels within all rock, soil, and water. This is the highest-numbered element to be found naturally in significant quantities on earth. According to the United Nations Scientific Committee on the Effects of Atomic Radiation the normal concentration of uranium in soil is 300 μg/kg to 11.7 mg/kg. It is considered to be more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten and is about as abundant as tin, arsenic or molybdenum. It is found in many minerals including uraninite (most common uranium ore), autunite, uranophane, torbernite, and coffinite. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals such as lignite, and monazite sands in uranium-rich ores (it is recovered commercial
https://en.wikipedia.org/wiki/List%20of%20object%E2%80%93relational%20mapping%20software
This is a list of well-known object–relational mapping software. Java Apache Cayenne, open-source for Java Apache OpenJPA, open-source for Java DataNucleus, open-source JDO and JPA implementation (formerly known as JPOX) Ebean, open-source ORM framework EclipseLink, Eclipse persistence platform Enterprise JavaBeans (EJB) Enterprise Objects Framework, Mac OS X/Java, part of Apple WebObjects Hibernate, open-source ORM framework, widely used Java Data Objects (JDO) JOOQ Object Oriented Querying (jOOQ) Kodo, commercial implementation of both Java Data Objects and Java Persistence API TopLink by Oracle iOS Core Data by Apple for Mac OS X and iOS .NET Base One Foundation Component Library, free or commercial Dapper, open source Entity Framework, included in .NET Framework 3.5 SP1 and above iBATIS, free open source, maintained by ASF but now inactive. LINQ to SQL, included in .NET Framework 3.5 NHibernate, open source nHydrate, open source Quick Objects, free or commercial Objective-C, Cocoa Enterprise Objects, one of the first commercial OR mappers, available as part of WebObjects Core Data, object graph management framework with several persistent stores, ships with Mac OS X and iOS Perl DBIx::Class PHP Laravel, framework that contains an ORM called "Eloquent", an ActiveRecord implementation Doctrine, open source ORM for PHP 5.2.3, 5.3.X, 7.4.X, free software (MIT) CakePHP, ORM and framework for PHP 5, open source (scalars, arrays, objects); based on database introspection, no class extending CodeIgniter, framework that includes an ActiveRecord implementation Yii, ORM and framework for PHP 5, released under the BSD license. Based on the ActiveRecord pattern FuelPHP, ORM and framework for PHP 5.3, released under the MIT license. Based on the ActiveRecord pattern. Laminas, framework that includes a table data gateway and row data gateway implementations Propel, ORM and query-toolkit for PHP 5, inspired by Apache Torque, free software, MIT Qcodo, ORM and framework
https://en.wikipedia.org/wiki/Moisture%20sorption%20isotherm
At equilibrium, the relationship between water content and equilibrium relative humidity of a material can be displayed graphically by a curve, the so-called moisture sorption isotherm. For each humidity value, a sorption isotherm indicates the corresponding water content value at a given, constant temperature. If the composition or quality of the material changes, then its sorption behaviour also changes. Because of the complexity of the sorption process the isotherms cannot be determined explicitly by calculation, but must be recorded experimentally for each product. The relationship between water content and water activity (aw) is complex. An increase in aw is usually accompanied by an increase in water content, but in a non-linear fashion. This relationship between water activity and moisture content at a given temperature is called the moisture sorption isotherm. These curves are determined experimentally and constitute the fingerprint of a food system. BET theory (Brunauer–Emmett–Teller) provides a calculation to describe the physical adsorption of gas molecules on a solid surface. Because of the complexity of the process, these calculations are only moderately successful; however, Stephen Brunauer was able to classify sorption isotherms into five generalized shapes as shown in Figure 2. He found that Type II and Type III isotherms require highly porous materials or desiccants, with first monolayer adsorption, followed by multilayer adsorption and finally leading to capillary condensation, explaining these materials' high moisture capacity at high relative humidity. Care must be used in extracting data from isotherms, as the representation for each axis may vary in its designation. Brunauer provided the vertical axis as moles of gas adsorbed divided by the moles of the dry material, and on the horizontal axis he used the ratio of the partial pressure of the gas just over the sample divided by its partial pressure at saturation. More modern isotherms showing the
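Since the BET model is a closed-form curve, model isotherms take one line of code. A Python sketch (the monolayer capacity v_m and BET constant c below are arbitrary illustrative values, not measured data):

def bet_adsorption(x, v_m, c):
    # BET isotherm: amount adsorbed at relative pressure x = p/p0,
    # for monolayer capacity v_m and BET energy constant c
    return v_m * c * x / ((1.0 - x) * (1.0 + (c - 1.0) * x))

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(x, round(bet_adsorption(x, v_m=1.0, c=20.0), 2))
# the steep rise as x approaches 1 reflects multilayer adsorption
# and capillary condensation at high relative humidity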
https://en.wikipedia.org/wiki/TDtv
TDtv combines IPWireless' commercial UMTS TD-CDMA solution and 3GPP Release 6 Multimedia Broadcast Multicast Service (MBMS) to deliver mobile TV. TDtv operates in the universal unpaired 3G spectrum bands that are available worldwide at 1900 MHz and 2100 MHz. It allows UMTS operators to fully utilize their existing spectrum and base stations to offer mobile TV and multimedia packages without affecting other voice and data 3G services. External links NextWave Wireless (dead link due to the merger of the company) Streaming television
https://en.wikipedia.org/wiki/Cooperative%20multitasking
Cooperative multitasking, also known as non-preemptive multitasking, is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process. Instead, in order to run multiple applications concurrently, processes voluntarily yield control periodically or when idle or logically blocked. This type of multitasking is called cooperative because all programs must cooperate for the scheduling scheme to work. In this scheme, the process scheduler of an operating system is known as a cooperative scheduler whose role is limited to starting the processes and letting them return control back to it voluntarily. This is related to the asynchronous programming approach. Usage Although it is rarely used as the primary scheduling mechanism in modern operating systems, it is widely used in memory-constrained embedded systems and also in specific applications such as CICS or the JES2 subsystem. Cooperative multitasking was the primary scheduling scheme for 16-bit applications employed by Microsoft Windows before Windows 95 and Windows NT, and by the classic Mac OS. Windows 9x used non-preemptive multitasking for 16-bit legacy applications, and the PowerPC versions of Mac OS X prior to Leopard used it for classic applications. NetWare, which is a network-oriented operating system, used cooperative multitasking up to NetWare 6.5. Cooperative multitasking is still used on RISC OS systems. Cooperative multitasking is used with await in languages, such as JavaScript or Python, that feature a single-threaded event loop in their runtime. This contrasts with operating system cooperative multitasking in that await is scoped only to the function or block, meaning other tasks may run concurrently in other parts of the code while a single function is waiting. In most modern languages, async and await are implemented as coroutines. Problems As a cooperatively multitasked system relies on each process regularly giving up t
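The await-based flavor described above fits in a few lines of Python; each await is the explicit point at which a task yields control back to the event loop (the task names and step counts below are arbitrary):

import asyncio

async def worker(name, steps=3):
    for i in range(steps):
        print(f"{name}: step {i}")
        # the explicit yield point: control returns to the event loop,
        # which may run the other task before resuming this one
        await asyncio.sleep(0)

async def main():
    # both workers run concurrently on one thread, interleaving at each await
    await asyncio.gather(worker("A"), worker("B"))

asyncio.run(main())

A worker that never awaits would monopolize the loop, which is exactly the failure mode of cooperative scheduling noted in the Problems section.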
https://en.wikipedia.org/wiki/Bretschneider%27s%20formula
In geometry, Bretschneider's formula is a mathematical expression for the area of a general quadrilateral. It works on both convex and concave quadrilaterals (but not crossed ones), whether it is cyclic or not. History The German mathematician Carl Anton Bretschneider discovered the formula in 1842. The formula was also derived in the same year by the German mathematician Karl Georg Christian von Staudt. Formulation Bretschneider's formula is expressed as: $K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd\cos^{2}\left(\frac{\alpha+\gamma}{2}\right)}$. Here, $a$, $b$, $c$, $d$ are the sides of the quadrilateral, $s$ is the semiperimeter, and $\alpha$ and $\gamma$ are any two opposite angles, since $\cos^{2}\left(\frac{\alpha+\gamma}{2}\right) = \cos^{2}\left(\frac{\beta+\delta}{2}\right)$ as long as $\alpha+\beta+\gamma+\delta = 2\pi$. Proof Denote the area of the quadrilateral by $K$. Then we have $K = \frac{ad\sin\alpha}{2} + \frac{bc\sin\gamma}{2}$. Therefore $4K^{2} = (ad)^{2}\sin^{2}\alpha + (bc)^{2}\sin^{2}\gamma + 2abcd\sin\alpha\sin\gamma$. The law of cosines implies that $a^{2} + d^{2} - 2ad\cos\alpha = b^{2} + c^{2} - 2bc\cos\gamma$, because both sides equal the square of the length of the diagonal $BD$. This can be rewritten as $\frac{(a^{2}+d^{2}-b^{2}-c^{2})^{2}}{4} = (ad\cos\alpha - bc\cos\gamma)^{2}$. Adding this to the above formula for $4K^{2}$ yields $4K^{2} + \frac{(a^{2}+d^{2}-b^{2}-c^{2})^{2}}{4} = (ad)^{2} + (bc)^{2} - 2abcd\cos(\alpha+\gamma)$. Note that: $\cos(\alpha+\gamma) = 2\cos^{2}\left(\frac{\alpha+\gamma}{2}\right) - 1$ (a trigonometric identity true for all $\frac{\alpha+\gamma}{2}$). Following the same steps as in Brahmagupta's formula, this can be written as $16K^{2} = (a+b+c-d)(a+b-c+d)(a-b+c+d)(-a+b+c+d) - 16abcd\cos^{2}\left(\frac{\alpha+\gamma}{2}\right)$. Introducing the semiperimeter $s = \frac{a+b+c+d}{2}$, the above becomes $16K^{2} = 16(s-a)(s-b)(s-c)(s-d) - 16abcd\cos^{2}\left(\frac{\alpha+\gamma}{2}\right)$, and Bretschneider's formula follows after taking the square root of both sides: $K = \sqrt{(s-a)(s-b)(s-c)(s-d) - abcd\cos^{2}\left(\frac{\alpha+\gamma}{2}\right)}$. The second form is given by using the cosine half-angle identity $\cos^{2}\left(\frac{\alpha+\gamma}{2}\right) = \frac{1+\cos(\alpha+\gamma)}{2}$, yielding $K = \sqrt{(s-a)(s-b)(s-c)(s-d) - \tfrac{1}{2}abcd\left[1+\cos(\alpha+\gamma)\right]}$. Emmanuel García has used the generalized half angle formulas to give an alternative proof. Related formulae Bretschneider's formula generalizes Brahmagupta's formula for the area of a cyclic quadrilateral, which in turn generalizes Heron's formula for the area of a triangle. The trigonometric adjustment in Bretschneider's formula for non-cyclicality of the quadrilateral can be rewritten non-trigonometrically in terms of the sides and the diagonals $p$ and $q$ to give $K = \frac{1}{4}\sqrt{4p^{2}q^{2} - (b^{2}+d^{2}-a^{2}-c^{2})^{2}}$. Notes References & further reading C. A. Bretschneider. Untersuchung der trigonometrischen Relationen des geradlinigen Viereckes. Archiv der Mathematik und Physik, Band 2, 1842, S. 225-261 (online copy, German) F. Strehlke: Zwei neue Sätze vom ebenen und sphärischen Viereck und Umkehrung des Ptolemaischen Lehrsatzes. Archiv der Mathematik
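The formula translates directly into code. A Python sketch (angles in radians; the unit-square check is an illustrative test, not from the article):

import math

def bretschneider_area(a, b, c, d, alpha, gamma):
    # area of a general quadrilateral with consecutive sides a, b, c, d
    # and opposite interior angles alpha and gamma (in radians)
    s = (a + b + c + d) / 2.0
    product = (s - a) * (s - b) * (s - c) * (s - d)
    correction = a * b * c * d * math.cos((alpha + gamma) / 2.0) ** 2
    return math.sqrt(product - correction)

# a unit square: all sides 1, opposite angles 90 degrees, area 1
print(bretschneider_area(1, 1, 1, 1, math.pi / 2, math.pi / 2))  # 1.0
# for a cyclic quadrilateral alpha + gamma = pi, the correction term
# vanishes, and the formula reduces to Brahmagupta's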
https://en.wikipedia.org/wiki/Echelon%20Corporation
Echelon Corporation was an American company which designed control networks to connect machines and other electronic devices, for the purposes of sensing, monitoring and control. Echelon is now owned by Adesto Technologies. History Echelon was founded in February 1988 in Palo Alto, California by Clifford "Mike" Markkula Jr. The chief executive was M. Kenneth Oshman. Echelon's LonWorks platform for control networking was released in 1990 for use in the building, industrial, transportation, and home automation markets. At their initial public offering on March 31, 1998, their shares were listed on the NASDAQ exchange with the symbol ELON. Started in 2003, Echelon's Networked Energy Services system was an open metering service. Echelon provides the underlying network technology for the world's largest Advanced Metering Infrastructure (AMI) in Italy with over 27 million connected electricity meters. Based on the experiences with this installation, Echelon developed the NES (Networked Energy Services) System (including smart meters, data concentrators and a head-end data collection system) in October 2014 with about 3.5 million devices installed. In August 2014, after quarterly revenues dropped from $24.8 million to $15 million, Echelon announced it was leaving the smart-grid business, shifting its entire corporate focus to the Internet of things as a market for its technology. Echelon committed to only support existing customers, but not grow the grid business, and to potentially seek the sale of its grid business. Echelon is based in Santa Clara, California, with international offices in China, France, Germany, Italy, Hong Kong, Japan, Korea, The Netherlands, and the United Kingdom. On June 29, 2018, Adesto Technologies announced its intention to acquire Echelon for $45 million. The acquisition was completed on September 14, 2018. References External links About Echelon Corporation LonMark International Networking companies of the United States Computer compa
https://en.wikipedia.org/wiki/Expensive%20Desk%20Calculator
Expensive Desk Calculator by Robert A. Wagner is thought to be computing's first interactive calculation program. The software first ran on the TX-0 computer loaned to the Massachusetts Institute of Technology (MIT) by Lincoln Laboratory. It was ported to the PDP-1 donated to MIT in 1961 by Digital Equipment Corporation. Friends from the MIT Tech Model Railroad Club, Wagner and a group of fellow students had access to these room-sized machines outside classes, signing up for time during off hours. Overseen by Jack Dennis, John McKenzie and faculty advisors, they were personal computer users as early as the late 1950s. The calculators Wagner needed to complete his numerical analysis homework were across campus and in short supply so he wrote one himself. Although the program has about three thousand lines of code and took months to write, Wagner received a grade of zero on his homework. His professor's reaction was, "You used a computer! This can't be right." Steven Levy wrote, "The professor would learn in time, as would everyone, that the world opened up by the computer was a limitless one." References See also PDP-1 Expensive Typewriter Expensive Planetarium Expensive Tape Recorder Calculators History of software
https://en.wikipedia.org/wiki/Subversive%20Proposal
The "Subversive Proposal" was an Internet posting by Stevan Harnad on June 27, 1994 (presented at the 1994 Network Services Conference in London) calling on all authors of "esoteric" research writings to archive their articles for free for everyone online (in anonymous FTP archives or websites). It initiated a series of online exchanges, many of which were collected and published as a book in 1995: Scholarly Journals at the Crossroads: A Subversive Proposal for Electronic Publishing. This led to the creation in 1997 of Cogprints, an open access archive for self-archived articles in the cognitive sciences and in 1998 to the creation of the American Scientist Open Access Forum (initially called the "September98 Forum" until the founding of the Budapest Open Access Initiative which first coined the term "open access"). The Subversive Proposal also led to the development of the GNU EPrints software used for creating OAI-compliant open access institutional repositories, and inspired CiteSeer, a tool to locate and index the resulting eprints. The proposal was updated gradually across the years, as summarized in the American Scientist Open Access Forum on its 10th anniversary. A retrospective was written by Richard Poynder. A self-critique was posted on its 15th anniversary in 2009. An online interview of Stevan Harnad was conducted by Richard Poynder on the occasion of the 20th anniversary of the subversive proposal. References Bosc, Hélène Les idées et la technique : une rétrospective de ces 15 dernières années Further reading Harnad, Stevan (1995): (2001/2003/2004) For Whom the Gate Tolls? Published as: (2003) Open Access to Peer-Reviewed Research Through Author/Institution Self-Archiving: Maximizing Research Impact by Maximizing Online Access. In: Law, Derek & Judith Andrews, Eds. Digital Libraries: Policy Planning and Practice. Ashgate Publishing 2003. (2003) Journal of Postgraduate Medicine 49: 337–342. (2004) Historical Social Research (HSR) 29:1 (2003) Ciélog
https://en.wikipedia.org/wiki/Motion%20camouflage
Motion camouflage is camouflage which provides a degree of concealment for a moving object, given that motion makes objects easy to detect however well their coloration matches their background or breaks up their outlines. The principal form of motion camouflage, and the type generally meant by the term, involves an attacker's mimicking the optic flow of the background as seen by its target. This enables the attacker to approach the target while appearing to remain stationary from the target's perspective, unlike in classical pursuit (where the attacker moves straight towards the target at all times, and often appears to the target to move sideways). The attacker chooses its flight path so as to remain on the line between the target and some landmark point. The target therefore does not see the attacker move from the landmark point. The only visible evidence that the attacker is moving is its looming, the change in size as the attacker approaches. Camouflage is sometimes facilitated by motion, as in the leafy sea dragon and some stick insects. These animals complement their passive camouflage by swaying like plants in the wind or ocean currents, delaying their recognition by predators. First discovered in hoverflies in 1995, motion camouflage by minimising optic flow has been demonstrated in another insect order, dragonflies, as well as in two groups of vertebrates, falcons and echolocating bats. Since bats hunting at night cannot be using the strategy for camouflage, it has been named, describing its mechanism, as constant absolute target direction. This is an efficient homing strategy, and it has been suggested that anti-aircraft missiles could benefit from similar techniques. Camouflage of approach motion Many animals are highly sensitive to motion; for example, frogs readily detect small moving dark spots but ignore stationary ones. Therefore, motion signals can be used to defeat camouflage. Moving objects with disruptive camouflage patterns remain harde
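A minimal simulation of the strategy (illustrative Python; the landmark position, speeds, and straight-line target track are arbitrary choices): the attacker always sits on the ray from the landmark through the target's current position and slides outward along it, so from the target's viewpoint the attacker holds a constant absolute direction while only its apparent size grows.

import numpy as np

def camouflaged_approach(target_track, landmark, closing_rate):
    landmark = np.asarray(landmark, dtype=float)
    r = 0.0  # attacker's distance from the landmark along the sight line
    positions = []
    for target in target_track:
        offset = np.asarray(target, dtype=float) - landmark
        distance = np.linalg.norm(offset)
        unit = offset / distance
        r = min(r + closing_rate, distance)  # creep outward along the ray
        positions.append(landmark + r * unit)
    return np.array(positions)

# the target moves in a straight line; the attacker launches from the landmark
track = [(5.0 + 0.2 * k, 10.0) for k in range(60)]
path = camouflaged_approach(track, landmark=(0.0, 0.0), closing_rate=0.4)
print(path[-1], track[-1])  # the attacker has closed on the target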
https://en.wikipedia.org/wiki/Collaborative%20working%20environment
A collaborative working environment (CWE) supports people, such as e-professionals, in their individual and cooperative work. Research in CWE involves focusing on organizational, technical, and social issues. Background Working practices in a collaborative working environment evolved from the traditional or geographical co-location paradigm. In a CWE, professionals work together regardless of their geographical location. In this context, e-professionals use a collaborative working environment to provide and share information and exchange views in order to reach a common understanding. Such practices enable an effective and efficient collaboration among different proficiencies. Description The following applications or services are considered elements of a CWE: E-mail Instant messaging Application sharing Video conferencing, Web conferencing Virtual workplace, document management and version control system Task and workflow management Wiki group or community effort to edit wiki pages (e.g. wiki pages describing concepts to enable a common understanding within a group or community) Blogging where entries are categorized by groups or communities or other concepts supporting collaboration Overview The concept of CWE is derived from the idea of virtual work-spaces, and is related to the concept of remote work. It extends the traditional concept of the professional to include any type of knowledge worker who intensively uses information and communications technology (ICT) environments and tools in their working practices. Typically, a group of e-professionals conduct their collaborative work through the use of collaborative working environments (CWE). CWE refers to online collaboration (such as virtual teams, mass collaboration, and massively distributed collaboration); online communities of practice (such as the open source community); and open innovation principles. Collaborative work systems A collaborative workin
https://en.wikipedia.org/wiki/Heijunka%20box
A heijunka box is a visual scheduling tool used in heijunka, a method originally created by Toyota for achieving a smoother production flow. While heijunka is the smoothing of production, the heijunka box is the name of a specific tool used in achieving the aims of heijunka. The heijunka box is generally a wall schedule which is divided into a grid of boxes or a set of 'pigeon-holes'/rectangular receptacles. With each column of boxes representing a specific period of time, lines are drawn down the schedule/grid to visually break the schedule into columns of individual shifts, days, or weeks. Coloured cards representing individual jobs (referred to as kanban cards) are placed on the heijunka box to provide a visual representation of the upcoming production runs. The heijunka box makes it easy to see what type of jobs are queued for production and for when they are scheduled. Workers on the process remove the kanban cards for the current period from the box in order to know what to do. These cards will be passed to another section when they process the related job. Implementation The heijunka box allows easy and visual control of a smoothed production schedule. A typical heijunka box has horizontal rows for each product. It has vertical columns for identical time intervals of production. In the illustration on the right, the time interval is thirty minutes. Production control kanban are placed in the pigeon-holes provided by the box in proportion to the number of items to be built of a given product type during a time interval. In this illustration, each time period builds an A and two Bs along with a mix of Cs, Ds and Es. What is clear from the box, from the simple repeating patterns of kanbans in each row, is that the production of each of these products is smooth. This ensures that production capacity is kept under a constant pressure, thereby eliminating many issues. See also Lean production Just In Time References Bibliography Japanese business terms Lean
https://en.wikipedia.org/wiki/Boilerplate%20code
In computer programming, boilerplate code, or simply boilerplate, are sections of code that are repeated in multiple places with little to no variation. When using languages that are considered verbose, the programmer must write a lot of boilerplate code to accomplish only minor functionality. The need for boilerplate can be reduced through high-level mechanisms such as metaprogramming (which has the computer automatically write the needed boilerplate code or insert it at compile time), convention over configuration (which provides good default values, reducing the need to specify program details in every project) and model-driven engineering (which uses models and model-to-code generators, eliminating the need for manual boilerplate code). Origin The term arose from the newspaper business. Columns and other pieces that were distributed by print syndicates were sent to subscribing newspapers in the form of prepared printing plates. Because of their resemblance to the metal plates used in the making of boilers, they became known as "boiler plates", and their resulting text—"boilerplate text". As the stories that were distributed by boiler plates were usually "fillers" rather than "serious" news, the term became synonymous with unoriginal, repeated text. A related term is bookkeeping code, referring to code that is not part of the business logic but is interleaved with it in order to keep data structures updated or handle secondary aspects of the program. Preamble One form of boilerplate consists of declarations which, while not part of the program logic or the language's essential syntax, are added to the start of a source file as a matter of custom. The following Perl example demonstrates boilerplate: #!/usr/bin/perl use warnings; use strict; The first line is a shebang, which identifies the file as a Perl script that can be executed directly on the command line on Unix/Linux systems. The other two are pragmas turning on warnings and strict mode, which are
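As a concrete illustration of the metaprogramming point above, here is a Python sketch (added for illustration; Python and its dataclasses module are one example among many, not something the article singles out) in which repetitive method definitions are generated by the language instead of written by hand:

# Without the decorator, __init__, __repr__ and __eq__ would each
# have to be written out by hand for every such class -- classic
# boilerplate. The decorator generates them from the field list.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

p = Point(1.0, 2.0)
print(p)                      # Point(x=1.0, y=2.0), auto-generated
print(p == Point(1.0, 2.0))   # True; __eq__ is auto-generated too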
https://en.wikipedia.org/wiki/Nokia%20Networks
Nokia Networks (formerly Nokia Solutions and Networks (NSN) and Nokia Siemens Networks (NSN)) is a multinational data networking and telecommunications equipment company headquartered in Espoo, Finland, and a wholly owned subsidiary of Nokia Corporation. It started as a joint venture between Nokia of Finland and Siemens of Germany known as Nokia Siemens Networks. Nokia Networks has operations in around 120 countries. In 2013, Nokia acquired 100% of Nokia Networks, buying all of Siemens' shares. In April 2014, the NSN name was phased out as part of a rebranding process. History The company was created as the result of a joint venture between Siemens Communications (minus its Enterprise business unit) and Nokia's Network Business. The formation of the company was publicly announced on 19 June 2006. Nokia Siemens Networks was officially launched at the 3GSM World Congress in Barcelona in February 2007. Nokia Siemens Networks then began full operations on 1 April 2007 and has its headquarters in Espoo, Greater Helsinki, Finland. In January 2008 Nokia Siemens Networks acquired the Israeli company Atrica, which builds carrier-class Ethernet transport systems for metro networks. The official release did not disclose terms; however, they are thought to be in the region of $100 million. In February 2008 Nokia Siemens Networks acquired Apertio, a Bristol, UK-based mobile network customer management tools provider, for €140 million. With this acquisition Nokia Siemens Networks gained customers in the subscriber management area including Orange, T-Mobile, O2, Vodafone, and Hutchison 3G. In 2009, according to Siemens, Siemens retained only a non-controlling financial interest in NSN, with the day-to-day operations residing with Nokia. On 19 July 2010, Nokia Siemens Networks announced it would acquire the wireless-network equipment business of Motorola. The acquisition was completed on 29 April 2011 for $975 million in cash. As of the transaction approximately 6,900 employees trans
https://en.wikipedia.org/wiki/Crop%20Trust
The Crop Trust, officially known as the Global Crop Diversity Trust, is an international nonprofit organization with a secretariat in Bonn, Germany. Its mission is to conserve and make available the world's crop diversity for food security. Established in 2004, the Crop Trust is the only organization whose sole mission is to safeguard the world’s crop diversity for future food security. Through an endowment fund for crop diversity, the Crop Trust provides financial support for key international and national genebanks that hold collections of diversity for food crops available under the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA). The organization also provides tools and support for the efficient management of genebanks, facilitates coordination between conserving institutions, and organizes final backup of crop seeds in the Svalbard Global Seed Vault. Since its establishment, the Crop Trust has raised more than USD 300 million for the Crop Diversity Endowment Fund and supports conservation work in over 80 countries. Mission Crop diversity is the biological foundation of agriculture, and is the raw material plant breeders and farmers use to adapt crop varieties to pests and diseases. In the future, this crop diversity will play a central role in helping agriculture adjust to climate change and adapt to water and energy constraints. History In 1996, the UN Food and Agriculture Organization (FAO) recognized the need for global coordination for the conservation of the world’s crop diversity. At a conference organized by the FAO, 150 countries launched a Global Plan of Action to coordinate efforts at halting the loss of the world’s agrobiodiversity. The Global Plan of Action formed a major pillar of what would become the International Treaty on Plant Genetic Resources for Food and Agriculture, known as the Plant Treaty. The Plant Treaty brings the diversity of 64 food and forage crops into a multilateral system where the gene
https://en.wikipedia.org/wiki/Intersection%20theorem
In projective geometry, an intersection theorem or incidence theorem is a statement concerning an incidence structure – consisting of points, lines, and possibly higher-dimensional objects and their incidences – together with a pair of objects A and B (for instance, a point and a line). The "theorem" states that, whenever a set of objects satisfies the incidences (i.e. can be identified with the objects of the incidence structure in such a way that incidence is preserved), then the objects A and B must also be incident. An intersection theorem is not necessarily true in all projective geometries; it is a property that some geometries satisfy but others don't. For example, Desargues' theorem can be stated using the following incidence structure: Points: O, A, B, C, A', B', C', P, Q, R Lines: OAA', OBB', OCC', ABR, A'B'R, BCP, B'C'P, ACQ, A'C'Q, PQ Incidences (in addition to obvious ones such as A being incident with the line OAA'): those encoded in the line names above. The implication is then that R is incident with PQ, i.e. that point R is incident with line PQ. Famous examples Desargues' theorem holds in a projective plane if and only if it is the projective plane over some division ring (skewfield) D. The projective plane is then called desarguesian. A theorem of Amitsur and Bergman states that, in the context of desarguesian projective planes, for every intersection theorem there is a rational identity such that the plane satisfies the intersection theorem if and only if the division ring satisfies the rational identity. Pappus's hexagon theorem holds in a desarguesian projective plane if and only if D is a field; it corresponds to the identity xy = yx. Fano's axiom (which states a certain intersection does not happen) holds in a desarguesian projective plane if and only if the underlying field has characteristic different from 2; it corresponds to the condition 1 + 1 ≠ 0. References Incidence geometry Theorems in projective geometry
https://en.wikipedia.org/wiki/Bevameter
A bevameter is a device used in terramechanics to measure the mechanical properties of soil. The bevameter technique was developed to measure terrain mechanical properties for the study of vehicle mobility. The bevameter test consists of a penetration test to measure normal loads and a shear test to determine the shear loads exerted by a vehicle. The bevameter contact area needs to match the size of the wheel or track. Discrete element method (DEM) analysis can take data from one size and simulate bevameter performance for a different size. External links Terrain Trafficability Characterization with a Mobile Robot, Ojeda, L., Borenstein, J., Witus, G. Soil science Measuring instruments Earth observation in-situ sensors
https://en.wikipedia.org/wiki/Continua%20Health%20Alliance
Continua Health Alliance is an international non-profit, open industry group of nearly 240 healthcare providers, communications, medical, and fitness device companies. Continua was a founding member of the Personal Connected Health Alliance, which was launched in February 2014 with other founding members mHealth SUMMIT and HIMSS. Overview Continua Health Alliance is an international not-for-profit industry organization enabling end-to-end, plug-and-play connectivity of devices and services for personal health management and healthcare delivery. Its mission is to empower information-driven health management and facilitate the incorporation of health and wellness into the day-to-day lives of consumers. Its activities include a certification and brand support program, events and collaborations to support technology and clinical innovation, as well as outreach to employers, payers, governments and care providers. With nearly 220 member companies reaching across the globe, Continua comprises technology, medical device and healthcare industry leaders and service providers dedicated to making personal connected health a reality. Continua Health Alliance is working toward establishing systems of interoperable telehealth devices and services in three major categories: chronic disease management, aging independently, and health and physical fitness. Devices and services Continua Health Alliance version 1 design guidelines are based on proven connectivity technical standards and include Bluetooth for wireless and USB for wired device connection. The group released the guidelines to the public in June 2009. The group is establishing a product certification program using its recognizable logo, the Continua Certified Logo program, signifying that the product is interoperable with other Continua-certified products. Products made under Continua Health Alliance guidelines will provide consumers with increased assurance of interoperability between devices, enabling them to more easi
https://en.wikipedia.org/wiki/Cyclin%20B
Cyclin B is a member of the cyclin family. Cyclin B is a mitotic cyclin. The amount of cyclin B (which binds to Cdk1) and the activity of the cyclin B-Cdk complex rise through the cell cycle until mitosis, where they fall abruptly due to degradation of cyclin B (Cdk1 is constitutively present). The complex of Cdk and cyclin B is called maturation promoting factor or mitosis promoting factor (MPF). Function Cyclin B is necessary for the progression of the cells into and out of M phase of the cell cycle. At the end of S phase the phosphatase cdc25c dephosphorylates tyrosine 15 and this activates the cyclin B/CDK1 complex. Upon activation the complex is shuttled to the nucleus where it serves as a trigger for entry into mitosis. However, if DNA damage is detected, alternative proteins are activated, which results in the inhibitory phosphorylation of cdc25c; therefore cyclin B/CDK1 is not activated. In order for the cell to progress out of mitosis, the degradation of cyclin B is necessary. The cyclin B/CDK1 complex also interacts with a variety of other key proteins and pathways which regulate cell growth and progression of mitosis. Cross-talk between many of these pathways links cyclin B levels indirectly to induction of apoptosis. The cyclin B/CDK1 complex plays a critical role in the expression of the survival signal survivin. Survivin is necessary for proper creation of the mitotic spindle, which strongly affects cell viability; therefore when cyclin B levels are disrupted, cells experience difficulty polarizing. A decrease in survivin levels and the associated mitotic disarray triggers apoptosis via a caspase 3 mediated pathway. Role in Cancer Cyclin B plays an integral role in many types of cancer. Hyperplasia (uncontrolled cell growth) is one of the hallmarks of cancer. Because cyclin B is necessary for cells to enter mitosis and therefore necessary for cell division, cyclin B levels are often de-regulated in tumors. When cyclin B levels are elevated, cells
https://en.wikipedia.org/wiki/DNA%20field-effect%20transistor
A DNA field-effect transistor (DNAFET) is a field-effect transistor which uses the field effect due to the partial charges of DNA molecules to function as a biosensor. The structure of DNAFETs is similar to that of MOSFETs, with the exception of the gate structure which, in DNAFETs, is replaced by a layer of immobilized ssDNA (single-stranded DNA) molecules which act as surface receptors. When complementary DNA strands hybridize to the receptors, the charge distribution near the surface changes, which in turn modulates current transport through the semiconductor transducer. Arrays of DNAFETs can be used for detecting single nucleotide polymorphisms (causing many hereditary diseases) and for DNA sequencing. Their main advantage compared to optical detection methods in common use today is that they do not require labeling of molecules. Furthermore, they work continuously and in (near) real time. DNAFETs are highly selective since only specific binding modulates charge transport. References Biosensors Biotechnology Field-effect transistors MOSFETs
https://en.wikipedia.org/wiki/Heterodont
In anatomy, a heterodont (from Greek, meaning 'different teeth') is an animal which possesses more than a single tooth morphology. In vertebrates, heterodont pertains to animals whose teeth are differentiated into different forms. For example, members of the Synapsida generally possess incisors, canines ("dogteeth"), premolars, and molars. The presence of heterodont dentition is evidence of some degree of feeding and/or hunting specialization in a species. In contrast, homodont or isodont dentition refers to a set of teeth that possess the same tooth morphology. In invertebrates, the term heterodont refers to a condition where teeth of differing sizes occur in the hinge plate, a part of the shell of the Bivalvia. References See also Diphodonty Zoology Dentition types
https://en.wikipedia.org/wiki/Distributed%20amplifier
Distributed amplifiers are circuit designs that incorporate transmission line theory into traditional amplifier design to obtain a larger gain-bandwidth product than is realizable by conventional circuits. History The design of the distributed amplifier was first formulated by William S. Percival in 1936. In that year Percival proposed a design by which the transconductances of individual vacuum tubes could be added linearly without lumping their element capacitances at the input and output, thus arriving at a circuit that achieved a gain-bandwidth product greater than that of an individual tube. Percival's design did not gain widespread awareness, however, until a publication on the subject was authored by Ginzton, Hewlett, Jasberg, and Noe in 1948. It is to this later paper that the term distributed amplifier can actually be traced. Traditionally, DA design architectures were realized using vacuum tube technology. Current technology More recently, III-V semiconductor technologies, such as GaAs and InP, have been used. These have superior performance resulting from higher bandgaps (higher electron mobility), higher saturated electron velocity, higher breakdown voltages and higher-resistivity substrates. The latter contributes much to the availability of higher quality-factor (Q-factor or simply Q) integrated passive devices in the III-V semiconductor technologies. To meet the marketplace demands on cost, size, and power consumption of monolithic microwave integrated circuits (MMICs), research continues in the development of mainstream digital bulk-CMOS processes for such purposes. The continuous scaling of feature sizes in current IC technologies has enabled microwave and mm-wave CMOS circuits to directly benefit from the resulting increased unity-gain frequencies of the scaled technology. This device scaling, along with the advanced process control available in today's technologies, has recently made it possible to reach a transition frequency (ft) of 170
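To see why adding transconductances linearly matters, here is a back-of-the-envelope Python sketch (added for illustration; it uses the common textbook expression A = n*gm*Z0/2 for an ideal distributed amplifier, and the component values are invented):

import math

n = 4        # number of stages (illustrative)
gm = 0.05    # transconductance per stage in siemens (illustrative)
Z0 = 50.0    # image impedance of the artificial lines in ohms

# In an ideal distributed amplifier the stage transconductances add
# linearly, with half the total current flowing toward the output load.
gain = n * gm * Z0 / 2
print(f"voltage gain: {gain:.1f}x = {20 * math.log10(gain):.1f} dB")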
https://en.wikipedia.org/wiki/N-vector
The n-vector representation (also called geodetic normal or ellipsoid normal vector) is a three-parameter non-singular representation well-suited for replacing geodetic coordinates (latitude and longitude) for horizontal position representation in mathematical calculations and computer algorithms. Geometrically, the n-vector for a given position on an ellipsoid is the outward-pointing unit vector that is normal to the ellipsoid at that position. For representing horizontal positions on Earth, the ellipsoid is a reference ellipsoid and the vector is decomposed in an Earth-centered Earth-fixed coordinate system. It behaves smoothly at all Earth positions, and it holds the mathematical one-to-one property. More generally, the concept can be applied to representing positions on the boundary of a strictly convex bounded subset of k-dimensional Euclidean space, provided that that boundary is a differentiable manifold. In this general case, the n-vector consists of k parameters. General properties A normal vector to a strictly convex surface can be used to uniquely define a surface position. The n-vector is an outward-pointing normal vector with unit length used as a position representation. For most applications the surface is the reference ellipsoid of the Earth, and thus the n-vector is used to represent a horizontal position. Hence, the angle between the n-vector and the equatorial plane corresponds to geodetic latitude. A surface position has two degrees of freedom, and thus two parameters are sufficient to represent any position on the surface. On the reference ellipsoid, latitude and longitude are common parameters for this purpose, but like all two-parameter representations, they have singularities. This is similar to orientation, which has three degrees of freedom, but all three-parameter representations have singularities. In both cases the singularities are avoided by adding an extra parameter, i.e. to use n-vector (three parameters) to rep
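The decomposition in an Earth-centered Earth-fixed frame is simple enough to sketch in code. The following Python functions are an illustration added here (the axis convention, with z toward the North Pole and x toward 0 degrees east, is one common choice stated in the comments rather than taken from the article):

import math

def n_vector(lat_deg, lon_deg):
    # Geodetic latitude/longitude in degrees -> unit normal vector,
    # decomposed in an Earth-centered Earth-fixed frame with the
    # z-axis toward the North Pole and the x-axis toward 0 deg E.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def to_lat_lon(n):
    # Inverse mapping; the n-vector itself is non-singular, while
    # longitude is undefined exactly at the poles, as it is for any
    # latitude/longitude pair.
    x, y, z = n
    return (math.degrees(math.atan2(z, math.hypot(x, y))),
            math.degrees(math.atan2(y, x)))

print(n_vector(90.0, 0.0))          # North Pole -> approx (0, 0, 1)
print(to_lat_lon((0.0, 1.0, 0.0)))  # -> (0.0, 90.0)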
https://en.wikipedia.org/wiki/Power%20automorphism
In mathematics, in the realm of group theory, a power automorphism of a group is an automorphism that takes each subgroup of the group to within itself. It is worth noting that the power automorphism of an infinite group may not restrict to an automorphism on each subgroup. For instance, the automorphism on rational numbers that sends each number to its double is a power automorphism even though it does not restrict to an automorphism on each subgroup. Alternatively, power automorphisms are characterized as automorphisms that send each element of the group to some power of that element. This explains the choice of the term power. The power automorphisms of a group form a subgroup of the whole automorphism group. This subgroup is denoted as where is the group. A universal power automorphism is a power automorphism where the power to which each element is raised is the same. For instance, each element may go to its cube. Here are some facts about the powering index: The powering index must be relatively prime to the order of each element. In particular, it must be relatively prime to the order of the group, if the group is finite. If the group is abelian, any powering index works. If the powering index 2 or -1 works, then the group is abelian. The group of power automorphisms commutes with the group of inner automorphisms when viewed as subgroups of the automorphism group. Thus, in particular, power automorphisms that are also inner must arise as conjugations by elements in the second group of the upper central series. References Subgroup lattices of groups by Roland Schmidt (PDF file) Group theory Group automorphisms
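The coprimality fact above can be checked mechanically for small cyclic groups. The following Python sketch is an illustration added here (written additively, so the universal power map x to x^k becomes x to k*x on Z/nZ; the function name is invented):

from math import gcd

def is_power_automorphism(n, k):
    # x -> k*x on the cyclic group Z/nZ (additive notation); the map
    # is always a homomorphism, and is an automorphism exactly when
    # it is a bijection.
    image = {(k * x) % n for x in range(n)}
    return len(image) == n

# The map is an automorphism iff the powering index k is coprime to
# the group order n, matching the fact stated in the article.
for k in range(1, 9):
    assert is_power_automorphism(8, k) == (gcd(k, 8) == 1)
print("on Z/8Z, powering indices 1, 3, 5, 7 give automorphisms")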
https://en.wikipedia.org/wiki/IA%20automorphism
In mathematics, in the realm of group theory, an IA automorphism of a group is an automorphism that acts as identity on the abelianization. The abelianization of a group is its quotient by its commutator subgroup. An IA automorphism is thus an automorphism that sends each coset of the commutator subgroup to itself. The IA automorphisms of a group form a normal subgroup of the automorphism group. Every inner automorphism is an IA automorphism. See also Torelli group References Group theory Group automorphisms
https://en.wikipedia.org/wiki/Class%20automorphism
In mathematics, in the realm of group theory, a class automorphism is an automorphism of a group that sends each element to within its conjugacy class. The class automorphisms form a subgroup of the automorphism group. Some facts: Every inner automorphism is a class automorphism. Every class automorphism is a family automorphism and a quotientable automorphism. Under a quotient map, class automorphisms go to class automorphisms. Every class automorphism is an IA automorphism, that is, it acts as identity on the abelianization. Every class automorphism is a center-fixing automorphism, that is, it fixes all points in the center. Normal subgroups are characterized as subgroups invariant under class automorphisms. For infinite groups, an example of a class automorphism that is not inner is the following: take the finitary symmetric group on countably many elements and consider conjugation by an infinitary permutation. This conjugation defines an outer automorphism on the group of finitary permutations. However, for any specific finitary permutation, we can find a finitary permutation whose conjugation has the same effect as this infinitary permutation. This is essentially because the infinitary permutation takes permutations of finite supports to permutations of finite support. For finite groups, the classical example is a group of order 32 obtained as the semidirect product of the cyclic ring on 8 elements, by its group of units acting via multiplication. Finding a class automorphism in the stability group that is not inner boils down to finding a cocycle for the action that is locally a coboundary but is not a global coboundary. Group theory Group automorphisms
https://en.wikipedia.org/wiki/Splice%20%28system%20call%29
splice is a Linux-specific system call that moves data between a file descriptor and a pipe without a round trip to user space. The related system call vmsplice moves or copies data between a pipe and user space. Ideally, splice and vmsplice work by remapping pages and do not actually copy any data, which may improve I/O performance. As linear addresses do not necessarily correspond to contiguous physical addresses, this may not be possible in all cases and on all hardware combinations. Workings With splice, one can move data from one file descriptor to another without incurring any copies from user space into kernel space, which is usually required to enforce system security and also to keep a simple interface for processes to read and write to files. splice works by using the pipe buffer. A pipe buffer is an in-kernel memory buffer that is opaque to the user space process. A user process can splice the contents of a source file into this pipe buffer, then splice the pipe buffer into the destination file, all without moving any data through userspace. Linus Torvalds described splice in a 2006 email, which was included in a KernelTrap article. Origins The Linux splice implementation borrows some ideas from an original proposal by Larry McVoy in 1998. The splice system calls first appeared in Linux kernel version 2.6.17 and were written by Jens Axboe. Prototype ssize_t splice(int fd_in, loff_t *off_in, int fd_out, loff_t *off_out, size_t len, unsigned int flags); Some constants that are of interest are: /* Splice flags (not laid down in stone yet). */ #ifndef SPLICE_F_MOVE #define SPLICE_F_MOVE 0x01 #endif #ifndef SPLICE_F_NONBLOCK #define SPLICE_F_NONBLOCK 0x02 #endif #ifndef SPLICE_F_MORE #define SPLICE_F_MORE 0x04 #endif #ifndef SPLICE_F_GIFT #define SPLICE_F_GIFT 0x08 #endif Example This is an example of splice in action: /* Transfer from disk to a log. */ int log_blocks (struct log_handle * handle, int fd, loff_t offset, size_t size) { int
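For a self-contained illustration of the same idea from a higher-level language, here is a hedged Python sketch (added here, not from the article; it assumes Linux and Python 3.10 or later, where the call is exposed as os.splice, and the file names are invented):

import os

def splice_copy(src_path, dst_path, chunk=65536):
    # Copy a file through an in-kernel pipe buffer using splice(2),
    # so the payload never makes a round trip through user space.
    r, w = os.pipe()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            n = os.splice(src.fileno(), w, chunk)   # file -> pipe
            if n == 0:
                break                               # end of file
            while n:
                n -= os.splice(r, dst.fileno(), n)  # pipe -> file
    os.close(r)
    os.close(w)

splice_copy("/tmp/in.dat", "/tmp/out.dat")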
https://en.wikipedia.org/wiki/Fixture%20%28tool%29
A fixture is a work-holding or support device used in the manufacturing industry. Fixtures are used to securely locate (position in a specific location or orientation) and support the work, ensuring that all parts produced using the fixture will maintain conformity and interchangeability. Using a fixture improves the economy of production by allowing smooth operation and quick transition from part to part, reducing the requirement for skilled labor by simplifying how workpieces are mounted, and increasing conformity across a production run. Compared with a jig A fixture differs from a jig in that when a fixture is used, the tool must move relative to the workpiece; a jig moves the piece while the tool remains stationary. Purpose A fixture's primary purpose is to create a secure mounting point for a workpiece, allowing for support during operation and increased accuracy, precision, reliability, and interchangeability in the finished parts. It also serves to reduce working time by allowing quick set-up, and by smoothing the transition from part to part. It frequently reduces the complexity of a process, allowing for unskilled workers to perform it and effectively transferring the skill of the tool maker to the unskilled worker. Fixtures also allow for a higher degree of operator safety by reducing the concentration and effort required to hold a piece steady. Economically speaking the most valuable function of a fixture is to reduce labor costs. Without a fixture, operating a machine or process may require two or more operators; using a fixture can eliminate one of the operators by securing the workpiece. Design Fixtures should be designed with economics in mind; the purpose of these devices is often to reduce costs, and so they should be designed in such a way that the cost reduction outweighs the cost of implementing the fixture. It is usually better, from an economic standpoint, for a fixture to result in a small cost reduction for a process in constant use, t
https://en.wikipedia.org/wiki/Acrocyanosis
Acrocyanosis is persistent blue or cyanotic discoloration of the extremities, most commonly occurring in the hands, although it also occurs in the feet and distal parts of the face. Although described over 100 years ago and not uncommon in practice, the nature of this phenomenon is still uncertain. The very term "acrocyanosis" is often applied inappropriately in cases when blue discoloration of the hands, feet, or parts of the face is noted. The principal (primary) form of acrocyanosis is that of a benign cosmetic condition, sometimes caused by a relatively benign neurohormonal disorder. Regardless of its cause, the benign form typically does not require medical treatment. A medical emergency can ensue if the extremities experience prolonged periods of exposure to the cold, particularly in children and patients with poor general health. However, frostbite differs from acrocyanosis because pain (via thermal nociceptors) often accompanies the former condition, while the latter is very rarely associated with pain. There are also a number of other conditions that affect the hands, feet, and parts of the face with associated skin color changes that need to be differentiated from acrocyanosis: Raynaud phenomenon, pernio, acrorygosis, erythromelalgia, and blue finger syndrome. The diagnosis may be challenging in some cases, especially when these syndromes co-exist. Acrocyanosis may be a sign of a more serious medical problem, such as connective tissue diseases and diseases associated with central cyanosis. Other causative conditions include infections, toxicities, antiphospholipid syndrome, cryoglobulinemia, and neoplasms. In these cases, the observed cutaneous changes are known as "secondary acrocyanosis". They may have a less symmetric distribution and may be associated with pain and tissue loss. Signs and symptoms Acrocyanosis is characterized by peripheral cyanosis: persistent cyanosis of the hands, feet, knees, or face. The extremities often are cold and clammy and may exhibi
https://en.wikipedia.org/wiki/Stability%20group
In mathematics, in the realm of group theory, the stability group of a subnormal series is the group of automorphisms that act as the identity on each quotient group. Group theory
https://en.wikipedia.org/wiki/Shape%20correction%20function
The shape correction function is the ratio of the surface area of a growing organism to that of an isomorph, as a function of the volume. The shape of the isomorph is taken to be equal to that of the organism for a given reference volume, so for that particular volume the surface areas are also equal and the shape correction function has value one. For a volume V and reference volume Vd, the shape correction function M equals: V0-morphs: M(V) = (Vd/V)^(2/3) V1-morphs: M(V) = (V/Vd)^(1/3) Isomorphs: M(V) = 1 Static mixtures between a V0- and a V1-morph can be found as a weighted sum: M(V) = f (Vd/V)^(2/3) + (1 - f) (V/Vd)^(1/3) for 0 <= f <= 1. The shape correction function is used in Dynamic Energy Budget theory to correct equations for isomorphs to organisms that change shape during growth. The conversion is necessary for accurately modelling food (substrate) acquisition and mobilization of reserve for use by metabolism. References Developmental biology Metabolism
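A small numerical check of these expressions (a Python sketch added for illustration; the function and variable names are invented, and the formulas follow the reconstruction given above) confirms that every variant reduces to one at the reference volume:

def shape_correction(V, Vd, kind):
    # Ratio of the actual surface area to that of an isomorph that
    # has the same shape at the reference volume Vd.
    if kind == "V0":                  # surface area stays constant
        return (Vd / V) ** (2.0 / 3.0)
    if kind == "V1":                  # surface area grows with volume
        return (V / Vd) ** (1.0 / 3.0)
    return 1.0                        # isomorph

for kind in ("V0", "iso", "V1"):
    print(kind, shape_correction(1.0, 1.0, kind),
          shape_correction(8.0, 1.0, kind))
# at V == Vd all variants give 1; at V = 8*Vd the V0-morph gives
# 0.25 and the V1-morph gives 2.0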
https://en.wikipedia.org/wiki/Panning%20%28audio%29
Panning is the distribution of an audio signal (either monaural or stereophonic pairs) into a new stereo or multi-channel sound field determined by a pan control setting. A typical physical recording console has a pan control for each incoming source channel. A pan control or pan pot (short for "panning potentiometer") is an analog control with a position indicator which can range continuously from the 7 o'clock position when fully left to the 5 o'clock position when fully right. Audio mixing software replaces pan pots with on-screen virtual knobs or sliders which function like their physical counterparts. Overview A pan pot has an internal architecture which determines how much of a source signal is sent to the left and right buses. "Pan pots split audio signals into left and right channels, each equipped with its own discrete gain (volume) control." This signal distribution is often called a taper or law. When centered (at 12 o'clock), the law can be designed to send −3, −4.5 or −6 decibels (dB) equally to each bus. "Signal passes through both the channels at an equal volume while the pan pot points directly north." If the two output buses are later recombined into a monaural signal, then a pan law of −6 dB is desirable. If the two output buses are to remain stereo then a law of −3 dB is desirable. A law of −4.5 dB at center is a compromise between the two. A pan control fully rotated to one side results in the source being sent at full strength (0 dB) to one bus (either the left or right channel) and zero strength (−∞ dB) to the other. Regardless of the pan setting, the overall sound power level remains (or appears to remain) constant. Because of the phantom center phenomenon, sound panned to the center position is perceived as coming from between the left and right speakers, but not in the center unless listened to with headphones, because of the head-related transfer function (HRTF). Panning in audio borrows its name from the panning action in moving image technology. An audio pan
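A constant-power pan law with −3 dB at center is easy to state in code. The following Python sketch is an illustration added here (the sine/cosine taper is one standard realization; the function name and the pan parameterization are invented):

import math

def constant_power_pan(pan):
    # pan in [-1, 1]: -1 = hard left, 0 = center, +1 = hard right.
    # Sine/cosine taper: left**2 + right**2 == 1 at every setting,
    # so the total sound power stays constant.
    theta = (pan + 1.0) * math.pi / 4.0   # sweeps 0 .. pi/2
    return math.cos(theta), math.sin(theta)

def db(gain):
    return 20 * math.log10(gain) if gain > 0 else float("-inf")

for pan in (-1.0, 0.0, 1.0):
    left, right = constant_power_pan(pan)
    print(f"pan {pan:+.0f}: L {db(left):.1f} dB, R {db(right):.1f} dB")
# the center position sends -3.0 dB to each bus; a hard pan sends
# 0 dB to one bus and -inf dB to the other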
https://en.wikipedia.org/wiki/Syntrophy
In biology, syntrophy, synthrophy, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the phenomenon of one species feeding on the metabolic products of another species to cope with energy limitations by electron transfer. In this type of biological interaction, metabolite transfer happens between two or more metabolically diverse microbial species that live in close proximity to each other. The growth of one partner depends on the nutrients, growth factors, or substrates provided by the other partner. Thus, syntrophism can be considered an obligatory interdependency and a mutualistic metabolism between two different bacterial species. Microbial syntrophy Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is primarily based on closely linked metabolic interactions to maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tract of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium. Mechanism of microbial syntrophy The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation
https://en.wikipedia.org/wiki/Maintenance%20of%20an%20organism
Maintenance of an organism is the collection of processes needed to stay alive, excluding production processes. The Dynamic Energy Budget theory delineates two classes. Somatic maintenance mainly comprises the turnover of structural mass (mainly proteins) and the maintenance of concentration gradients of metabolites across membranes (e.g., counteracting leakage). This is related to maintenance respiration. Maturity maintenance concerns the maintenance of defence systems (such as the immune system), and the preparation of the body for reproduction. The theory assumes that maturity maintenance costs can be reduced more easily during starvation than somatic maintenance costs. Under extreme starvation conditions, somatic maintenance costs are paid from structural mass, which causes shrinking. Some organisms manage to switch to a torpor state under starvation conditions, and so reduce their maintenance costs. Developmental biology
https://en.wikipedia.org/wiki/Tonnetz
In musical tuning and harmony, the Tonnetz (German for 'tone network') is a conceptual lattice diagram representing tonal space first described by Leonhard Euler in 1739. Various visual representations of the Tonnetz can be used to show traditional harmonic relationships in European classical music. History through 1900 The Tonnetz originally appeared in Leonhard Euler's 1739 Tentamen novae theoriae musicae. Euler's Tonnetz shows the triadic relationships of the perfect fifth and the major third: at the top of the image is the note F, and to the left underneath is C (a perfect fifth above F), and to the right is A (a major third above F). The Tonnetz was rediscovered in 1858 by Ernst Naumann, and was disseminated in an 1866 treatise of Arthur von Oettingen. Oettingen and the influential musicologist Hugo Riemann (not to be confused with the mathematician Bernhard Riemann) explored the capacity of the space to chart harmonic motion between chords and modulation between keys. Similar understandings of the Tonnetz appeared in the work of many late-19th century German music theorists. Oettingen and Riemann both conceived of the relationships in the chart being defined through just intonation, which uses pure intervals. One can extend out one of the horizontal rows of the Tonnetz indefinitely, to form a never-ending sequence of perfect fifths: F-C-G-D-A-E-B-F♯-C♯-G♯-D♯-A♯-E♯-B♯-F𝄪-C𝄪-G𝄪- (etc.) Starting with F, after 12 perfect fifths, one reaches E♯. Perfect fifths in just intonation are slightly larger than the compromised fifths used in equal temperament tuning systems more common in the present. This means that when one stacks 12 fifths starting from F, the E♯ we arrive at will not be seven octaves above the F we started with. Oettingen and Riemann's Tonnetz thus extended infinitely in every direction without actually repeating any pitches. In the twentieth century, composer-theorists such as Ben Johnston and James Tenney continued to develop theories and applications involving
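The arithmetic behind the non-repetition claim is easy to verify. The following Python sketch (added for illustration) stacks twelve just fifths and compares them with seven octaves; the leftover ratio is the Pythagorean comma:

import math
from fractions import Fraction

twelve_fifths = Fraction(3, 2) ** 12   # F up to E-sharp in just fifths
seven_octaves = Fraction(2, 1) ** 7
comma = twelve_fifths / seven_octaves  # the Pythagorean comma

print(comma)                                   # 531441/524288
print(f"{1200 * math.log2(comma):.2f} cents")  # about 23.46 cents sharp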
https://en.wikipedia.org/wiki/Sequencing%20batch%20reactor
Sequencing batch reactors (SBR) or sequential batch reactors are a type of activated sludge process for the treatment of wastewater. SBR reactors treat wastewater such as sewage or output from anaerobic digesters or mechanical biological treatment facilities in batches. Oxygen is bubbled through the mixture of wastewater and activated sludge to reduce the organic matter (measured as biochemical oxygen demand (BOD) and chemical oxygen demand (COD)). The treated effluent may be suitable for discharge to surface waters or possibly for use on land. Overview While there are several configurations of SBRs, the basic process is similar. The installation consists of one or more tanks that can be operated as plug flow or completely mixed reactors. The tanks have a “flow through” system, with raw wastewater (influent) coming in at one end and treated water (effluent) flowing out the other. In systems with multiple tanks, while one tank is in settle/decant mode the other is aerating and filling. In some systems, tanks contain a section known as the bio-selector, which consists of a series of walls or baffles which direct the flow either from side to side of the tank or under and over consecutive baffles. This helps to mix the incoming influent and the returned activated sludge (RAS), beginning the biological digestion process before the liquor enters the main part of the tank. Treatment stages There are five stages in the treatment process: Fill React Settle Decant Idle First, the inlet valve is opened and the tank is filled, while mixing is provided by mechanical means, but no air is added yet. This stage is also called the anoxic stage. During the second stage, aeration of the mixed liquor is performed by the use of fixed or floating mechanical pumps or by transferring air into fine bubble diffusers fixed to the floor of the tank. No aeration or mixing is provided in the third stage and the settling of suspended solids starts. During the fourth stage the outlet valve ope
https://en.wikipedia.org/wiki/Neural%20facilitation
Neural facilitation, also known as paired-pulse facilitation (PPF), is a phenomenon in neuroscience in which postsynaptic potentials (PSPs) (EPPs, EPSPs or IPSPs) evoked by an impulse are increased when that impulse closely follows a prior impulse. PPF is thus a form of short-term synaptic plasticity. The mechanisms underlying neural facilitation are exclusively pre-synaptic; broadly speaking, PPF arises due to increased presynaptic Ca2+ concentration leading to a greater release of neurotransmitter-containing synaptic vesicles. Neural facilitation may be involved in several neuronal tasks, including simple learning, information processing, and sound-source localization. Mechanisms Overview Ca2+ plays a significant role in transmitting signals at chemical synapses. Voltage-gated Ca2+ channels are located within the presynaptic terminal. When an action potential invades the presynaptic membrane, these channels open and Ca2+ enters. A higher concentration of Ca2+ enables synaptic vesicles to fuse to the presynaptic membrane and release their contents (neurotransmitters) into the synaptic cleft to ultimately contact receptors in the postsynaptic membrane. The amount of neurotransmitter released is correlated with the amount of Ca2+ influx. Therefore, short-term facilitation (STF) results from a build-up of Ca2+ within the presynaptic terminal when action potentials propagate close together in time. Facilitation of the excitatory post-synaptic current (EPSC) can be quantified as a ratio of subsequent EPSC strengths. Each EPSC is triggered by pre-synaptic calcium concentrations and can be approximated by: EPSC = k([Ca2+]presynaptic)^4 = k([Ca2+]rest + [Ca2+]influx + [Ca2+]residual)^4 where k is a constant. Facilitation = EPSC2 / EPSC1 = (1 + [Ca2+]residual / [Ca2+]influx)^4 - 1 Experimental evidence Early experiments by Del Castillo & Katz in 1954 and Dudel & Kuffler in 1968 showed that facilitation was possible at the neuromuscular junction even if transmitter release does not occur, indicating that facilitation is an
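The fourth-power dependence means that even a small residual calcium fraction produces sizable facilitation. Here is a Python sketch of the article's formula (added for illustration; the 10% residual value is an arbitrary example):

def facilitation(ca_residual, ca_influx):
    # Fractional EPSC increase predicted by the fourth-power calcium
    # model: (1 + residual/influx)**4 - 1.
    return (1.0 + ca_residual / ca_influx) ** 4 - 1.0

# A residual concentration only 10% of the influx already yields about
# 46% facilitation, reflecting the steep cooperativity of
# calcium-triggered transmitter release.
print(f"{facilitation(0.1, 1.0):.3f}")   # 0.464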
https://en.wikipedia.org/wiki/Chimera%20%28virus%29
A chimera or chimeric virus is a virus that contains genetic material derived from two or more distinct viruses. It is defined by the Center for Veterinary Biologics (part of the U.S. Department of Agriculture's Animal and Plant Health Inspection Service) as a "new hybrid microorganism created by joining nucleic acid fragments from two or more different microorganisms in which each of at least two of the fragments contain essential genes necessary for replication." The term genetic chimera had already been defined to mean: an individual organism whose body contained cell populations from different zygotes or an organism that developed from portions of different embryos. Chimeric flaviviruses have been created in an attempt to make novel live attenuated vaccines. Etymology In mythology, a chimera is a creature such as a hippogriff or a gryphon formed from parts of different animals, thus the name for these viruses. As a natural phenomenon Viruses are categorized into two types: In prokaryotes, the great majority of viruses possess double-stranded (ds) DNA genomes, with a substantial minority of single-stranded (ss) DNA viruses and only a limited presence of RNA viruses. In contrast, in eukaryotes, RNA viruses account for the majority of the virome diversity, although ssDNA and dsDNA viruses are common as well. In 2012, the first example of a naturally occurring RNA-DNA hybrid virus was unexpectedly discovered during a metagenomic study of the acidic extreme environment of Boiling Springs Lake in Lassen Volcanic National Park, California. The virus was named BSL-RDHV (Boiling Springs Lake RNA DNA Hybrid Virus). Its genome is related to a DNA circovirus, which usually infects birds and pigs, and an RNA tombusvirus, which infects plants. The study surprised scientists, because DNA and RNA viruses vary and the way the chi
https://en.wikipedia.org/wiki/Prosopography%20of%20Anglo-Saxon%20England
The Prosopography of Anglo-Saxon England (PASE) is a database and associated website that aims to construct a prosopography of individuals within Anglo-Saxon England. The PASE online database presents details (which it calls factoids) of the lives of every recorded individual who lived in, or was closely connected with, Anglo-Saxon England from 597 to 1087, with specific citations to (and often quotations from) each primary source describing each factoid. PASE was funded by the British Arts and Humanities Research Council from 2000 to 2008 as a major research project based at King's College London in the Department of History and the Centre for Computing in the Humanities (now the Department of Digital Humanities), and at the Department of Anglo-Saxon, Norse and Celtic, University of Cambridge. The first phase of the project (PASE1) was launched at the British Academy on 27 May 2005 and is freely available on the Internet at www.pase.ac.uk. This covers individuals named in written sources up to 1066, and contains 11,758 individuals. Each person is assigned a number, to aid the ready identification of individuals in future scholarship; e.g. King Alfred the Great is denoted as Alfred 8. Each named individual is accompanied by the various spellings of their name as it appears in the written sources, along with factoids on their career and personal relationships where these can be determined. A second phase (PASE2), released on 10 August 2010, added information drawn chiefly from Domesday Book to the database. This includes 19,807 named individuals. The landholdings of these individuals are mapped, along with a table illustrating their named landholdings. In cases where enough information is available, a small prose biography is provided. A number of publications have resulted from the creation of the PASE database; these are listed on the site. The PASE database is dedicated to Professor Nicholas Brooks and Ann Williams. Directors Dame Janet 'Jinty' Nelson Sim
https://en.wikipedia.org/wiki/Sage%20oil
Sage oils are essential oils that come in several varieties: Dalmatian sage oil Also called English, Garden, and True sage oil. Made by steam distillation of Salvia officinalis partially dried leaves. Yields range from 0.5 to 1.0%. A colorless to yellow liquid with a warm camphoraceous, thujone-like odor and sharp and bitter taste. The main components of the oil are thujone (50%), camphor, pinene, and cineol. Clary sage oil Sometimes called muscatel. Made by steam or water distillation of Salvia sclarea flowering tops and foliage. Yields range from 0.7 to 1.5%. A pale yellow to yellow liquid with a herbaceous odor and a winelike bouquet. Produced in large quantities in France, Russia and Morocco. The oil contains linalyl acetate, linalool and other terpene alcohols (sclareol), as well as their acetates. Spanish sage oil Made by steam distillation of Salvia lavandulifolia leaves and twigs. A colorless to pale yellow liquid with the characteristic camphoraceous odor. Unlike Dalmatian sage oil, Spanish sage oil contains no or only traces of thujone; camphor and eucalyptol are the major components. Greek sage oil Made by steam distillation of Salvia triloba leaves. Grows in Greece and Turkey. Yields range from 0.25% to 4%. The oil contains camphor, thujone, and pinene, the dominant component being eucalyptol. Judaean sage oil Made by steam distillation of Salvia judaica leaves. The oil contains mainly cubebene and ledol. References Essential oils Flavors
https://en.wikipedia.org/wiki/DNA%20footprinting
DNA footprinting is a method of investigating the sequence specificity of DNA-binding proteins in vitro. This technique can be used to study protein-DNA interactions both outside and within cells. The regulation of transcription has been studied extensively, and yet there is still much that is unknown. Transcription factors and associated proteins that bind promoters, enhancers, or silencers to drive or repress transcription are fundamental to understanding the unique regulation of individual genes within the genome. Techniques like DNA footprinting help elucidate which proteins bind to these associated regions of DNA and unravel the complexities of transcriptional control. History In 1978, David J. Galas and Albert Schmitz developed the DNA footprinting technique to study the binding specificity of the lac repressor protein. It was originally a modification of the Maxam-Gilbert chemical sequencing technique. Method The simplest application of this technique is to assess whether a given protein binds to a region of interest within a DNA molecule. Use the polymerase chain reaction (PCR) to amplify and label a region of interest that contains a potential protein-binding site; ideally the amplicon is between 50 and 200 base pairs in length. Add the protein of interest to a portion of the labeled template DNA; a portion should remain separate without protein, for later comparison. Add a cleavage agent to both portions of DNA template. The cleavage agent is a chemical or enzyme that will cut at random locations in a sequence-independent manner. The reaction should occur just long enough to cut each DNA molecule in only one location. A protein that specifically binds a region within the DNA template will protect the DNA it is bound to from the cleavage agent. Run both samples side by side using polyacrylamide gel electrophoresis. The portion of DNA template without protein will be cut at random locations, and thus when it is run on a gel, will produce a ladder-like distribution. Th
https://en.wikipedia.org/wiki/Far-western%20blot
The far-western blot, or far-western blotting, is a molecular biological method based on the technique of western blot to detect protein-protein interaction in vitro. Whereas western blot uses an antibody probe to detect a protein of interest, far-western blot uses a non-antibody probe which can bind the protein of interest. Thus, whereas western blotting is used for the detection of certain proteins, far-western blotting is employed to detect protein/protein interactions. Method In conventional western blot, gel electrophoresis is used to separate proteins from a sample; these proteins are then transferred to a membrane in a 'blotting' step. In a western blot, specific proteins are then identified using an antibody probe. Far-western blot employs non-antibody proteins to probe the protein of interest on the blot. In this way, binding partners of the probe (or the blotted) protein may be identified. The probe protein is often produced in E. coli using an expression cloning vector. The probe protein can then be visualized through the usual methods — it may be radiolabelled; it may bear a specific affinity tag like His or FLAG for which antibodies exist; or there may be a protein specific antibody (to the probe protein). Because cell extracts are usually completely denatured by boiling in detergent before gel electrophoresis, this approach is most useful for detecting interactions that do not require the native folded structure of the protein of interest. References External links Overview at piercenet.com Overview at utoronto.ca Molecular biology techniques Protein methods
https://en.wikipedia.org/wiki/One-compartment%20kinetics
One-compartment kinetics for a chemical compound specifies that the uptake into the compartment is proportional to the concentration outside the compartment, and the elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid). This model is used in the simplest versions of the DEBtox method for the quantification of effects of toxicants. References "One-compartment kinetics." British Journal of Anaesthesia. 1992 Oct;69(4):387-96. Biochemistry
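Written as a differential equation, the model reads dC/dt = k_u*C_out - k_e*C, which for a constant outside concentration has a closed-form solution. Here is a Python sketch (added for illustration; the rate constants and function name are invented):

import math

def internal_concentration(t, c_out, k_u, k_e, c0=0.0):
    # dC/dt = k_u * c_out - k_e * C, with C(0) = c0, gives
    # C(t) = C_inf + (c0 - C_inf) * exp(-k_e * t),
    # where C_inf = k_u * c_out / k_e is the steady state.
    c_inf = k_u * c_out / k_e
    return c_inf + (c0 - c_inf) * math.exp(-k_e * t)

for t in (0.0, 1.0, 5.0, 50.0):
    print(t, round(internal_concentration(t, 1.0, 0.2, 0.1), 3))
# the concentration rises toward the steady state 0.2*1.0/0.1 = 2.0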
https://en.wikipedia.org/wiki/ICONIX
ICONIX is a software development methodology which predates the Rational Unified Process (RUP), Extreme Programming (XP), and Agile software development. Like RUP, the ICONIX process is UML use case driven, but more lightweight than RUP. ICONIX provides more requirement and design documentation than XP, and aims to avoid analysis paralysis. The ICONIX Process uses only four UML-based diagrams in a four-step process that turns use case text into working code. A principal distinction of ICONIX is its use of robustness analysis, a method for bridging the gap between analysis and design. Robustness analysis reduces the ambiguity in use case descriptions by ensuring that they are written in the context of an accompanying domain model. This process makes the use cases much easier to design, test and estimate. The ICONIX Process is described in the book Use Case Driven Object Modeling with UML: Theory and Practice. Essentially, the ICONIX Process describes the core "logical" analysis and design modeling process. However, the process can be used without much tailoring on projects that follow different project management styles. Overview of the ICONIX Process The ICONIX process is split up into four milestones. At each stage the work for the previous milestone is reviewed and updated. Milestone 1: Requirements review Before beginning the ICONIX process there needs to have been some requirements analysis done. From this analysis use cases can be identified, a domain model produced and some prototype GUIs made. Milestone 2: Preliminary Design Review Once use cases have been identified, text can be written describing how the user and system will interact. A robustness analysis is performed to find potential errors in the use case text, and the domain model is updated accordingly. The use case text is important for identifying how the users will interact with the intended system. They also provide the developer with something to show the customer and verify that the
https://en.wikipedia.org/wiki/Index%20Translationum
The Index Translationum is UNESCO's database of book translations. Books have been translated for thousands of years, with no central record of the fact. The League of Nations established a record of translations in 1932. In 1946, the United Nations superseded the League and UNESCO was assigned the Index. In 1979, the records were computerised. Since the Index counts translations of individual books, authors with many books with few translations can rank higher than authors with a few books with more translations. So, for example, while the Bible is the single most translated book in the world, it does not rank in the top ten of the index. The Index counts the Walt Disney Company, which employs many writers, as a single writer. Authors with similar names are sometimes included as one entry; for example, the ranking for "Hergé" applies not only to the author of The Adventures of Tintin (Hergé), but also to B.R. Hergehahn, Elisabeth Herget, and Douglas Hergert. Hence, the top authors, as the Index presents them, are from a database query whose results require interpretation. According to the Index, Agatha Christie remains the most-translated individual author. Statistics The statistics pages of the Index (source: UNESCO) present top-10 rankings by author, by country, by target language, and by original language. See also UNESCO Collection of Representative Works, UNESCO's program for funding the translation of works List of literary works by number of translations References External links Index Translationum Index Translationum: Statistics - Search forms Online databases Indexes Translation databases
https://en.wikipedia.org/wiki/History%20of%20information%20theory
The decisive event which established the discipline of information theory, and brought it to immediate worldwide attention, was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. In this revolutionary and groundbreaking paper, work which Shannon had substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; and of course the bit - a new way of seeing the most fundamental unit of information. Before 1948 Early telecommunications Some of the oldest methods of telecommunications implicitly use many of the ideas that would later be quantified in information theory. Modern telegraphy, starting in the 1830s, used Morse code, in which more common letters (like "E", which is expressed as one "dot") are transmitted more quickly than less common letters (like "J", which is expressed by one "dot" followed by three "dashes"). The idea of encoding information in this manner is the cornerstone of lossless data compression. A hundred years later, frequency modulation illustrated that bandwidth can be considered merely another degree of freedom. The vocoder, now largely looked at as an audio engineering curiosity, was originally designed in 1