| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
50,318 | https://en.wikipedia.org/wiki/Symmetric%20multiprocessing | Symmetric multiprocessing or shared-memory multiprocessing (SMP) involves a multiprocessor computer hardware and software architecture where two or more identical processors are connected to a single, shared main memory, have full access to all input and output devices, and are controlled by a single operating system instance that treats all processors equally, reserving none for special purposes. Most multiprocessor systems today use an SMP architecture. In the case of multi-core processors, the SMP architecture applies to the cores, treating them as separate processors.
Professor John D. Kubiatowicz considers traditional SMP systems to contain processors without caches. Culler and Pal-Singh in their 1998 book "Parallel Computer Architecture: A Hardware/Software Approach" mention: "The term SMP is widely used but causes a bit of confusion. [...] The more precise description of what is intended by SMP is a shared memory multiprocessor where the cost of accessing a memory location is the same for all processors; that is, it has uniform access costs when the access actually is to memory. If the location is cached, the access will be faster, but cache access times and memory access times are the same on all processors."
SMP systems are tightly coupled multiprocessor systems with a pool of homogeneous processors running independently of each other. Each processor, executing different programs and working on different sets of data, has the capability of sharing common resources (memory, I/O device, interrupt system and so on) that are connected using a system bus or a crossbar.
Design
SMP systems have centralized shared memory called main memory (MM) operating under a single operating system with two or more homogeneous processors. Usually each processor has an associated private high-speed memory known as cache memory (or cache) to speed up the main memory data access and to reduce the system bus traffic.
Processors may be interconnected using buses, crossbar switches or on-chip mesh networks. The bottleneck in the scalability of SMP using buses or crossbar switches is the bandwidth and power consumption of the interconnect among the various processors, the memory, and the disk arrays. Mesh architectures avoid these bottlenecks, and provide nearly linear scalability to much higher processor counts at the sacrifice of programmability:
Serious programming challenges remain with this kind of architecture because it requires two distinct modes of programming; one for the CPUs themselves and one for the interconnect between the CPUs. A single programming language would have to be able to not only partition the workload, but also comprehend the memory locality, which is severe in a mesh-based architecture.
SMP systems allow any processor to work on any task no matter where the data for that task is located in memory, provided that each task in the system is not in execution on two or more processors at the same time. With proper operating system support, SMP systems can easily move tasks between processors to balance the workload efficiently.
History
The earliest production system with multiple identical processors was the Burroughs B5000, which was functional around 1961. However, at run time this was asymmetric, with one processor restricted to application programs while the other processor mainly handled the operating system and hardware interrupts. The Burroughs D825 first implemented SMP in 1962.
IBM offered dual-processor computer systems based on its System/360 Model 65 and the closely related Model 67 and 67–2. The operating systems that ran on these machines were OS/360 M65MP and TSS/360. Other software developed at universities, notably the Michigan Terminal System (MTS), used both CPUs. Both processors could access data channels and initiate I/O. In OS/360 M65MP, peripherals could generally be attached to either processor since the operating system kernel ran on both processors (though with a "big lock" around the I/O handler). The MTS supervisor (UMMPS) has the ability to run on both CPUs of the IBM System/360 model 67–2. Supervisor locks were small and used to protect individual common data structures that might be accessed simultaneously from either CPU.
Other mainframes that supported SMP included the UNIVAC 1108 II, released in 1965, which supported up to three CPUs, and the GE-635 and GE-645, although GECOS on multiprocessor GE-635 systems ran in a master-slave asymmetric fashion, unlike Multics on multiprocessor GE-645 systems, which ran in a symmetric fashion.
Starting with version 7.0 (1972), Digital Equipment Corporation's operating system TOPS-10 implemented SMP; the earliest system running SMP was the DECSystem 1077, a dual-KI10-processor system. Later KL10 systems could aggregate up to 8 CPUs in an SMP manner. In contrast, DEC's first multiprocessor VAX system, the VAX-11/782, was asymmetric, but later VAX multiprocessor systems were SMP.
Early commercial Unix SMP implementations included the Sequent Computer Systems Balance 8000 (released in 1984) and Balance 21000 (released in 1986). Both models were based on 10 MHz National Semiconductor NS32032 processors, each with a small write-through cache connected to a common memory to form a shared memory system. Another early commercial Unix SMP implementation was the NUMA based Honeywell Information Systems Italy XPS-100 designed by Dan Gielan of VAST Corporation in 1985. Its design supported up to 14 processors, but due to electrical limitations, the largest marketed version was a dual processor system. The operating system was derived and ported by VAST Corporation from AT&T 3B20 Unix SysVr3 code used internally within AT&T.
Earlier non-commercial multiprocessing UNIX ports existed, including a port named MUNIX created at the Naval Postgraduate School by 1975.
Uses
Time-sharing and server systems can often use SMP without changes to applications, as they may have multiple processes running in parallel, and a system with more than one process running can run different processes on different processors.
On personal computers, SMP is less useful for applications that have not been modified. If the system rarely runs more than one process at a time, SMP is useful only for applications that have been modified for multithreaded (multitasked) processing. Custom-programmed software can be written or modified to use multiple threads, so that it can make use of multiple processors.
Multithreaded programs can also be used in time-sharing and server systems that support multithreading, allowing them to make more use of multiple processors.
Advantages/disadvantages
In current SMP systems, all of the processors are tightly coupled inside the same box with a bus or switch; on earlier SMP systems, a single CPU took an entire cabinet. Some of the components that are shared are global memory, disks, and I/O devices. Only one copy of an OS runs on all the processors, and the OS must be designed to take advantage of this architecture. One of the basic advantages is a cost-effective way to increase throughput: to solve a given problem or task faster, SMP applies multiple processors to that one problem, an approach known as parallel programming.
However, there are a few limits on the scalability of SMP due to cache coherence and shared objects.
Programming
Uniprocessor and SMP systems require different programming methods to achieve maximum performance. Programs running on SMP systems may experience an increase in performance even when they have been written for uniprocessor systems. This is because a hardware interrupt that would normally suspend program execution can instead be handled by the kernel on an idle processor. The effect in most applications (e.g. games) is not so much a performance increase as the appearance that the program is running much more smoothly. Some applications, particularly building software and some distributed computing projects, run faster by a factor of (nearly) the number of additional processors. (Compilers by themselves are single-threaded, but, when building a software project with multiple compilation units, if each compilation unit is handled independently, this creates an embarrassingly parallel situation across the entire multi-compilation-unit project, allowing near-linear scaling of compilation time, as illustrated below. Distributed computing projects are inherently parallel by design.)
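As an illustration of the embarrassingly parallel case described above, the following minimal Python sketch distributes independent units of work across all available processors. The `compile_unit` function and file names are hypothetical placeholders, not part of any real build system.

```python
# Minimal sketch: independent tasks scheduled across an SMP machine's
# processors using Python's multiprocessing module. "compile_unit" and the
# file names are hypothetical stand-ins for independent compilation units.
from multiprocessing import Pool, cpu_count

def compile_unit(source_file):
    # Stand-in for an independent unit of work (e.g. compiling one file).
    return f"built {source_file}"

if __name__ == "__main__":
    units = [f"module_{i}.c" for i in range(32)]
    # Because each unit is independent, the operating system is free to run
    # the worker processes on any processor, giving near-linear scaling.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(compile_unit, units)
    print(f"{len(results)} units built on up to {cpu_count()} processors")
```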
Systems programmers must build support for SMP into the operating system, otherwise, the additional processors remain idle and the system functions as a uniprocessor system.
SMP systems can also add complexity regarding instruction sets. Because the processors are homogeneous, every processor typically must implement the extra registers and hardware for "special instructions" such as SIMD extensions (MMX, SSE, etc.), while a heterogeneous system can implement different types of hardware for different instructions/uses.
Performance
When more than one program executes at the same time, an SMP system has considerably better performance than a uniprocessor system, because different programs can run on different CPUs simultaneously. Conversely, asymmetric multiprocessing (AMP) usually allows only one processor to run a program or task at a time. For example, AMP can be used to assign specific tasks to particular CPUs based on the priority and importance of task completion. AMP predates SMP as an approach to handling multiple CPUs, which explains its lower performance in this example.
In cases where an SMP environment processes many jobs, administrators often experience a loss of hardware efficiency. Software programs have been developed to schedule jobs and other functions of the computer so that the processor utilization reaches its maximum potential. Good software packages can achieve this maximum potential by scheduling each CPU separately, as well as being able to integrate multiple SMP machines and clusters.
Access to RAM is serialized; this and cache coherency issues cause performance to lag slightly behind the number of additional processors in the system.
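A standard way to quantify this sub-linear scaling (a general model, not one given in the article) is Amdahl's law, where p is the fraction of the work that can run in parallel and N is the number of processors:

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
% Example: if p = 0.9 (10% of the time is serialized, e.g. memory access
% and coherency traffic) and N = 8, then
% S(8) = 1 / (0.1 + 0.9/8) \approx 4.7, well short of the ideal factor of 8.
```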
Alternatives
SMP uses a single shared system bus that represents one of the earliest styles of multiprocessor machine architectures, typically used for building smaller computers with up to 8 processors.
Larger computer systems might use newer architectures such as NUMA (Non-Uniform Memory Access), which dedicates different memory banks to different processors. In a NUMA architecture, processors may access local memory quickly and remote memory more slowly. This can dramatically improve memory throughput as long as the data are localized to specific processes (and thus processors). On the downside, NUMA makes the cost of moving data from one processor to another, as in workload balancing, more expensive. The benefits of NUMA are limited to particular workloads, notably on servers where the data are often associated strongly with certain tasks or users.
Finally, there is computer clustered multiprocessing (such as Beowulf), in which not all memory is available to all processors. Clustering techniques are used fairly extensively to build very large supercomputers.
Variable SMP
Variable Symmetric Multiprocessing (vSMP) is a specific mobile use case technology initiated by NVIDIA. This technology includes an extra fifth core in a quad-core device, called the Companion core, built specifically for executing tasks at a lower frequency during mobile active standby mode, video playback, and music playback.
Project Kal-El (Tegra 3), patented by NVIDIA, was the first SoC (System on Chip) to implement this new vSMP technology. This technology not only reduces mobile power consumption during active standby state, but also maximizes quad core performance during active usage for intensive mobile applications. Overall this technology addresses the need for increase in battery life performance during active and standby usage by reducing the power consumption in mobile processors.
Unlike current SMP architectures, the vSMP Companion core is OS-transparent, meaning that the operating system and the running applications are totally unaware of this extra core but are still able to take advantage of it. Some of the advantages of the vSMP architecture include cache coherency, OS efficiency, and power optimization. These advantages are explained below:
Cache coherency: There are no consequences for synchronizing caches between cores running at different frequencies since vSMP does not allow the companion core and the main cores to run simultaneously.
OS efficiency: Running multiple CPU cores at different, asynchronous frequencies is inefficient because it can lead to scheduling issues. With vSMP, the active CPU cores will run at similar frequencies to optimize OS scheduling.
Power optimization: In asynchronous clocking based architecture, each core is on a different power plane to handle voltage adjustments for different operating frequencies. The result of this could impact performance. vSMP technology is able to dynamically enable and disable certain cores for active and standby usage, reducing overall power consumption.
These advantages give the vSMP architecture a considerable benefit over other architectures using asynchronous clocking technologies.
See also
Asymmetric multiprocessing
Binary Modular Dataflow Machine
Cellular multiprocessing
Locale (computer hardware)
Massively parallel
Partitioned global address space
Simultaneous multithreading – where functional elements of a CPU core are allocated across multiple threads of execution
Software lockout
Xeon Phi
References
External links
History of Multi-Processing
Linux and Multiprocessing
AMD
Classes of computers
Flynn's taxonomy
Parallel computing | Symmetric multiprocessing | Technology | 2,733 |
58,032 | https://en.wikipedia.org/wiki/Control%20Data%20Corporation | Control Data Corporation (CDC) was a mainframe and supercomputer company that in the 1960s was one of the nine major U.S. computer companies, a group that also included IBM, the Burroughs Corporation, the Digital Equipment Corporation (DEC), the NCR Corporation, General Electric, Honeywell, RCA, and UNIVAC. For most of the 1960s, the strength of CDC was the work of the electrical engineer Seymour Cray, who developed a series of fast computers, then considered the fastest computing machines in the world; in the 1970s, Cray left the Control Data Corporation and founded Cray Research (CRI) to design and make supercomputers. In 1988, after much financial loss, the Control Data Corporation began withdrawing from making computers and sold the affiliated companies of CDC; in 1992, CDC established Control Data Systems, Inc. The remaining affiliate companies of CDC currently do business as the software company Dayforce.
Background: World War II – 1957
During World War II the U.S. Navy had built up a classified team of engineers to build codebreaking machinery for both Japanese and German electro-mechanical ciphers. A number of these were produced by a team dedicated to the task working in the Washington, D.C., area. With the post-war wind-down of military spending, the Navy grew increasingly worried that this team would break up and scatter into various companies, and it started looking for ways to keep the code-breaking team together.
Eventually they found their solution: John Parker, the owner of a Chase Aircraft affiliate named Northwestern Aeronautical Corporation located in St. Paul, Minnesota, was about to lose all his contracts due to the ending of the war. The Navy never told Parker exactly what the team did, since it would have taken too long to get top secret clearance. Instead they simply said the team was important, and they would be very happy if he hired them all. Parker was obviously wary, but after several meetings with increasingly high-ranking Naval officers it became apparent that whatever it was, they were serious, and he eventually agreed to give this team a home in his military glider factory.
The result was Engineering Research Associates (ERA). Formed in 1946, this contract engineering company worked on a number of seemingly unrelated projects in the early 1950s. Among these was the ERA Atlas, an early military stored program computer, the basis of the Univac 1101, which was followed by the 1102, and then the 36-bit ERA 1103 (UNIVAC 1103). The Atlas was built for the Navy, which intended to use it in their non-secret code-breaking centers. In the early 1950s a minor political debate broke out in Congress about the Navy essentially "owning" ERA, and the ensuing debates and legal wrangling left the company drained of both capital and spirit. In 1952, Parker sold ERA to Remington Rand.
Although Rand kept the ERA team together and developing new products, it was most interested in ERA's magnetic drum memory systems. Rand soon merged with Sperry Corporation to become Sperry Rand. In the process of merging the companies, the ERA division was folded into Sperry's UNIVAC division. At first this did not cause too many changes at ERA, since the company was used primarily to provide engineering talent to support a variety of projects. However, one major project was moved from UNIVAC to ERA, the UNIVAC II project, which led to lengthy delays and upsets to nearly everyone involved.
Since the Sperry "big company" mentality encroached on the decision-making powers of the ERA employees, a number left Sperry to form the Control Data Corp. in September 1957, setting up shop in an old warehouse across the river from Sperry's St. Paul laboratory, in Minneapolis at 501 Park Avenue. Of the members forming CDC, William Norris was the unanimous choice to become the chief executive officer of the new company. Seymour Cray soon became the chief designer, though at the time of CDC's formation he was still in the process of completing a prototype for the Naval Tactical Data System (NTDS), and he did not leave Sperry to join CDC until it was complete. The M-460 was Seymour's first transistor computer, though the power supply rectifiers were still tubes.
Early designs and Cray's big plan
CDC started business by selling subsystems, mostly drum memory systems, to other companies. Cray joined the next year, and he immediately built a small transistor-based 6-bit machine known as the "CDC Little Character" to test his ideas on large-system design and transistor-based machines. "Little Character" was a great success.
In 1959, CDC released a 48-bit transistorized re-design of the ERA 1103 under the name CDC 1604; the first machine was delivered to the U.S. Navy in 1960 at the Naval Postgraduate School in Monterey, California. Legend has it that the 1604 designation was chosen by adding CDC's first street address (501 Park Avenue) to Cray's former project, the ERA-Univac 1103.
A 12-bit cut-down version was also released as the CDC 160A in 1960, often considered among the first minicomputers. The 160A was particularly notable as it was built as a standard office desk, which was unusual packaging for that era. New versions of the basic 1604 architecture were rebuilt into the CDC 3000 series, which sold through the early and mid-1960s.
Cray immediately turned to the design of a machine that would be the fastest (or in the terminology of the day, largest) machine in the world, setting the goal at 50 times the speed of the 1604. This required radical changes in design, and as the project "dragged on" — it had gone on for about four years by then — the management got increasingly upset and it demanded greater oversight. Cray in turn demanded (in 1962) to have his own remote lab, saying that otherwise, he would quit. Norris agreed, and Cray and his team moved to Cray's home town, Chippewa Falls, Wisconsin. Not even Bill Norris, the founder and president of CDC, could visit Cray's laboratory without an invitation.
Peripherals business
In the early 1960s, the corporation moved to the Highland Park neighborhood of St. Paul where Norris lived. Through this period, Norris became increasingly worried that CDC had to develop a "critical mass" to compete with IBM. To do this, he started an aggressive program of buying up various companies to round out CDC's peripheral lineup. In general, they tried to offer a product to compete with any of IBM's, but running 10% faster and costing 10% less. This was not always easy to achieve.
One of its first peripherals was a tape transport, which led to some internal wrangling as the Peripherals Equipment Division attempted to find a reasonable way to charge other divisions of the company for supplying the devices. If the division simply "gave" them away at cost as part of a system purchase, they would never have a real budget of their own. Instead, a plan was established in which it would share profits with the divisions selling its peripherals, a plan eventually used throughout the company.
The tape transport was followed by the 405 Card Reader and the 415 Card Punch, followed by a series of tape drives and drum printers, all of which were designed in-house. The printer business was initially supported by Holley Carburetor in the Rochester, Michigan suburb outside of Detroit. They later formalized this by creating a jointly held company, Holley Computer Products. Holley later sold its stake back to CDC, the remainder becoming the Rochester Division.
Train printers and band printers in Rochester were developed in a joint venture with NCR and ICL, with CDC holding controlling interest. This joint venture was known as Computer Peripherals, Inc. (CPI). In the early 80s, it was merged with dot matrix computer printer manufacturer Centronics.
Norris was particularly interested in breaking out of the punched card–based workflow, where IBM held a stranglehold. He eventually decided to buy Rabinow Engineering, one of the pioneers of optical character recognition (OCR) systems. The idea was to bypass the entire punched card stage by having the operators simply type onto normal paper pages with an OCR-friendly typewriter font, and then submit those pages to the computer. Since a typewritten page contains much more information than a punched card (which has essentially one line of text from a page), this would offer savings all around. This seemingly simple task turned out to be much harder than anyone expected, and while CDC became a major player in the early days of OCR systems, OCR has remained a niche product to this day. Rabinow's plant in Rockville, MD was closed in 1976, and CDC left the business.
With the continued delays on the OCR project, it became clear that punched cards were not going to go away any time soon, and CDC had to address this as quickly as possible. Although the 405 remained in production, it was an expensive machine to build. So another purchase was made, Bridge Engineering, which offered a line of lower-cost as well as higher-speed card punches. All card-handling products were moved to what became the Valley Forge Division after Bridge moved to a new factory, with the tape transports to follow. Later, the Valley Forge and Rochester divisions were spun off to form a new joint company with National Cash Register (later NCR Corporation), Computer Peripherals Inc (CPI), to share development and production costs across the two companies. ICL later joined the effort. Eventually the Rochester Division was sold to Centronics in 1982.
Another side effect of Norris's attempts to diversify was the creation of a number of service bureaus that ran jobs on behalf of smaller companies that could not afford to buy computers. This was never very profitable, and in 1965, several managers suggested that the unprofitable centers be closed in a cost-cutting measure. Nevertheless, Norris was so convinced of the idea that he refused to accept this, and ordered an across-the-board "belt tightening" instead.
Control Data Institute
Control Data created an international technical/computer vocational school from the mid-1960s to the late 1980s. By the late 1970s there were sixty-nine learning centers worldwide, serving 18,000 students.
CDC 6600: defining supercomputing
Meanwhile, at the new Chippewa Falls lab, Seymour Cray, Jim Thornton, and Dean Roush put together a team of 34 engineers, which continued work on the new computer design. One of the ways they hoped to improve the CDC 1604 was to use better transistors, and Cray used the new silicon transistors made with the planar process developed by Fairchild Semiconductor. These were much faster than the germanium transistors in the 1604, without the drawbacks of the older mesa silicon transistors. The speed-of-light restriction forced a more compact design with refrigeration designed by Dean Roush. In 1964, the resulting computer was released onto the market as the CDC 6600, out-performing everything on the market by roughly ten times. It sold over 100 units at $8 million each and was considered a supercomputer.
The 6600 had a 100 ns, transistor-based CPU (central processing unit) with multiple asynchronous functional units and core memory, using 10 logical, external I/O processors to off-load many common tasks. That way, the CPU could devote all of its time and circuitry to processing actual data, while the other controllers dealt with mundane tasks like punching cards and running disk drives. Using late-model compilers, the machine attained a standard mathematical operations rate of 500 kiloFLOPS, but handcrafted assembly managed to deliver approximately 1 megaFLOPS. A simpler, albeit much slower and less expensive version, implemented using a more traditional serial processor design rather than the 6600's parallel functional units, was released as the CDC 6400, and a two-processor version of the 6400 was sold as the CDC 6500.
A FORTRAN compiler, known as MNF (Minnesota FORTRAN), was developed by Lawrence A. Liddiard and E. James Mundstock at the University of Minnesota for the 6600.
After the delivery of the 6600 IBM took notice of this new company. In 1965 IBM started an effort to build a machine that would be faster than the 6600, the ACS-1. Two hundred people were gathered on the U.S. West Coast to work on the project, away from corporate prodding, in an attempt to mirror Cray's off-site lab. The project produced interesting computer architecture and technology, but it was not compatible with IBM's hugely successful System/360 line of computers. The engineers were directed to make it 360-compatible, but that compromised its performance. The ACS was canceled in 1969, without ever being produced for customers. Many of the engineers left the company, leading to a brain-drain in IBM's high-performance departments.
In the meantime, IBM announced a new System/360 model, the Model 92, which would be just as fast as CDC's 6600. Although this machine did not exist, sales of the 6600 dropped drastically while people waited for the release of the mythical Model 92. Norris did not take this tactic, dubbed as fear, uncertainty and doubt (FUD), lying down, and in an extensive antitrust lawsuit launched against IBM a year later, he eventually won a settlement valued at $80 million. As part of the settlement, he picked up IBM's subsidiary, Service Bureau Corporation (SBC), which ran computer processing for other corporations on its own computers. SBC fitted nicely into CDC's existing service bureau offerings.
During the designing of the 6600, CDC had set up Project SPIN to supply the system with a high speed hard disk memory system. At the time it was unclear if disks would replace magnetic memory drums, or whether fixed or removable disks would become the more prevalent. SPIN explored all of these approaches, and eventually delivered a 28" diameter fixed disk and a smaller multi-platter 14" removable disk-pack system. Over time, the hard disk business pioneered in SPIN became a major product line.
CDC 7600 and 8600
In the same month it won its lawsuit against IBM, CDC announced its new computer, the CDC 7600 (previously referred to as the 6800 within CDC). This machine's hardware clock speed was almost four times that of the 6600 (36 MHz vs. 10 MHz), with a 27.5 ns clock cycle, and it offered considerably more than four times the total throughput, with much of the speed increase coming from extensive use of pipelining.
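As a quick consistency check on these figures (my arithmetic, not a claim from the article), the quoted cycle time matches the quoted clock rate, and the speed ratio works out to "almost four times":

```latex
f = \frac{1}{T} = \frac{1}{27.5\ \text{ns}} \approx 36.4\ \text{MHz},
\qquad \frac{36\ \text{MHz}}{10\ \text{MHz}} = 3.6
```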
The 7600 did not sell well because it was introduced during the 1969 downturn in the U.S. national economy. Its complexity had led to poor reliability. The machine was not totally compatible with the 6000-series and required a completely different operating system, which like most new OSs, was primitive. The 7600 project paid for itself, but damaged CDC's reputation. The 7600 memory had a split primary- and secondary-memory which required user management but was more than fast enough to make it the fastest uniprocessor from 1969 to 1976. A few dozen 7600s were the computers of choice at supercomputer centers around the world.
Cray then turned to the design of the CDC 8600. This design included four 7600-like processors in a single, smaller case. The smaller size and shorter signal paths allowed the 8600 to run at much higher clock speeds which, together with faster memory, provided most of the performance gains. The 8600, however, belonged to the "old school" in terms of its physical construction, and it used individual components soldered to circuit boards. The design was so compact that cooling the CPU modules proved effectively impossible, and access for maintenance difficult. An abundance of hot-running solder joints ensured that the machines did not work reliably; Cray recognized that a re-design was needed.
The STAR and the Cyber
In addition to the redesign of the 8600, CDC had another project called the CDC STAR-100 under way, led by Cray's former collaborator on the 6600/7600, Jim Thornton. Unlike the 8600's "four computers in one box" solution to the speed problem, the STAR was a new design using a unit that we know today as the vector processor. By highly pipelining mathematical instructions with purpose-built instructions and hardware, mathematical processing is dramatically improved in a machine that was otherwise slower than a 7600. Although the particular set of problems it would be best at solving was limited compared to the general-purpose 7600, it was for solving exactly these problems that customers would buy CDC machines.
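To illustrate the vector-processing idea in modern terms (an analogy, not a description of the STAR hardware), the sketch below contrasts a scalar loop with a single whole-vector operation; it assumes NumPy is installed.

```python
# Analogy for vector processing: one operation expressed over an entire
# vector of operands, rather than a loop issuing one scalar operation
# per element. (Illustrative only; assumes NumPy is available.)
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar style: one multiply handled per loop iteration.
scalar = [x * y for x, y in zip(a, b)]

# Vector style: a single expression over the full arrays, which the library
# executes as a streamed, pipelined loop in optimized machine code.
vector = a * b
```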
Since these two projects competed for limited funds during the late 1960s, Norris felt that the company could not support simultaneous development of the STAR and a complete redesign of the 8600. Therefore, Cray left CDC to form the Cray Research company in 1972. Norris remained, however, a staunch supporter of Cray, and invested money into Cray's new company. In 1974 CDC released the STAR, designated as the Cyber 203. It turned out to have "real world" performance that was considerably worse than expected. STAR's chief designer, Jim Thornton, then left CDC to form the Network Systems Corporation.
In 1975, a STAR-100 was placed into service in a Control Data service center, which was considered the first supercomputer in a data center. Founder William C. Norris presided at the podium for the press conference announcing the new service. Publicity was a key factor in making the announcement a success: the event was coordinated with Guinness, establishing the Star-100 as "the most powerful and fastest computer" in the Guinness Book of World Records. The late Duane Andrews, of Public Relations, was responsible for coordinating this event. Andrews successfully attracted many influential editors, including the research editor at Business Week, who chronicled this publicity release "... as the most exciting public event he attended in 20 years". Sharing the podium were William C. Norris, Boyd Jones (V.P.), and S. Steve Adkins, Data Center Manager; it was extremely rare for Bill Norris, a very private individual, to take the podium. During a lunch at a local country club, Norris signed a huge stack of certificates attesting to the record, which were printed by the Star-100 on printer paper produced at CDC's Lincoln, Nebraska plant; the paper included a half-tone photo of the Star-100. The main customers of the STAR-100 data center were oil companies running oil reservoir simulations, most notably a simulation controlled from a terminal in Texas that solved oil extraction problems for oil fields in Kuwait. A front-page Wall Street Journal news article resulted in a new user, Allis-Chalmers, which used the system to simulate a damaged hydroelectric turbine in a Norwegian mountain hydropower plant.
A variety of systems based on the basic 6600/7600 architecture were repackaged in different price/performance categories of the CDC Cyber, which became CDC's main product line in the 1970s. An updated version of the STAR architecture, the Cyber 205, had considerably better performance than the original. By this time, however, Cray's own designs, like the Cray-1, were using the same basic design techniques as the STAR, but were computing much faster. The Star 100 was able to process vectors up to 64K (65536) elements, versus 64 elements for the Cray-1, but the Star 100 took much longer for initiating the operation so the Cray-1 outperformed with short vectors.
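The short-vector comparison above can be made concrete with a simple timing model (a standard illustration, not taken from the article): if a vector operation costs a fixed start-up time plus a per-element streaming time, a machine with a large start-up cost only wins on long vectors.

```latex
t(n) = t_{\text{start}} + \frac{n}{r}
% n = vector length, r = per-element streaming rate, t_start = start-up cost.
% A machine with high r but large t_start (the STAR-100) beats a machine
% with small t_start (the Cray-1) only once n is large enough that n/r
% dominates t_start; for short vectors the low-start-up machine finishes first.
```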
Sales of the STAR were weak, but Control Data Corp. produced a successor system, the Cyber 200/205, that gave Cray Research some competition. CDC also embarked on a number of special projects for its clients, who produced an even smaller number of black project computers. The CDC Advanced Flexible Processor (AFP), also known as CYBER PLUS, was one such machine.
Another design direction was the "Cyber 80" project, which was aimed at release in 1980. This machine could run old 6600-style programs, and also had a completely new 64-bit architecture. The concept behind Cyber 80 was that current 6000-series users would migrate to these machines with relative ease. The design and debugging of these machines went on past 1980, and the machines were eventually released under other names.
CDC was also attempting to diversify its revenue from hardware into services and this included its promotion of the PLATO computer-aided learning system, which ran on Cyber hardware and incorporated many early computer interface innovations including bit-mapped touchscreen terminals.
Magnetic Peripherals Inc.
Meanwhile, several very large Japanese manufacturing firms were entering the market. The supercomputer market was too small to support more than a handful of companies, so CDC started looking for other markets. One of these was the hard disk drive (HDD) market.
Magnetic Peripherals Inc., later Imprimis Technology, was originally a joint venture with Honeywell formed in 1975 to manufacture HDDs for both companies. CII-Honeywell Bull later purchased a 3 percent interest in MPI from Honeywell. Sperry became a partner in 1983 with 17 percent, making the ownership split CDC (67%) and Honeywell (17%). MPI was a captive supplier to its parents. It sold on an OEM basis only to them, while CDC sold MPI product to third parties under its brand name.
It became a major player in the HDD market. It was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the late 1970s and early 1980s especially with its SMD (Storage Module Device) and CMD (Cartridge Module Drive), with its plant at Brynmawr in the South Wales valleys running 24/7 production. The Magnetic Peripherals division in Brynmawr had produced 1 million disks and 3 million magnetic tapes by October 1979. CDC was an early developer of the eight-inch drive technology with products from its MPI Oklahoma City Operation. Its CDC Wren series drives were particularly popular with high end users, although it was behind the capacity growth and performance curves of numerous startups such as Micropolis, Atasi, Maxtor, and Quantum. CDC also co-developed the now universal Advanced Technology Attachment (ATA) interface with Compaq and Western Digital, which was aimed at lowering the cost of adding low-performance drives.
CDC founded a separate division called Rigidyne in Simi Valley, California, to develop 3.5-inch drives using technology from the Wren series. These were marketed by CDC as the "Swift" series, and were among the first high-performance 3.5-inch drives on the market at their introduction in 1987.
In September 1988, CDC merged Rigidyne and MPI into the umbrella subsidiary of Imprimis Technology. The next year, Seagate Technology purchased Imprimis for $250 million in cash, 10.7 million in Seagate stock and a $50 million promissory note.
Investments
Control Data held interests in other companies including computer research company Arbitron, Commercial Credit Corporation and Ticketron.
Commercial Credit Corporation
In 1968, Commercial Credit Corporation was the target of a hostile takeover by Loews Inc. Loews had acquired nearly 10% of CCC, which it intended to break up on acquisition. To avoid the takeover, CCC forged a deal with CDC lending them the money to purchase control in CCC instead, and "That is how a computer company came to own a fleet of fishing boats in the Chesapeake Bay." By the 1980s, Control Data entered an unstable period, which resulted in the company liquidating many of their assets. In 1986, Sandy Weill convinced the Control Data management to spin off their Commercial Credit subsidiary to prevent the company's potential liquidation. Over a period of years, Weill used Commercial Credit to build an empire that became Citigroup. In 1999, Commercial Credit was renamed CitiFinancial, and in 2011, the full-service network of US CitiFinancial branches were renamed OneMain Financial.
Ticketron
In 1969, Control Data acquired 51% of Ticketron for $3.9 million from Cemp Investments. In 1970, Ticketron became the sole computerized ticketing provider in the United States. In 1973, Control Data increased the size of its investment.
Ticketron also provided ticketing terminals and back-end infrastructure for parimutuel betting, and provided similar services for a number of US lotteries, including those in New York, Illinois, Pennsylvania, Delaware, Washington and Maryland.
By the mid 1980s, Ticketron was CDC's most profitable business with revenue of $120 million and CDC, which was loss-making at the time, considered selling the business. In 1990 the majority of Ticketron's assets and business, with the exception of a small antitrust carve-out for Broadway's "Telecharge" business-unit, were bought by The Carlyle Group who sold it the following year to rival Ticketmaster.
ETA Systems, wind-down and sale of assets
CDC decided to fight for the high-performance niche, but Norris considered that the company had become moribund and unable to quickly design competitive machines. In 1983 he set up a spinoff company, ETA Systems, whose design goal was a machine processing data at 10 GFLOPs, about 40 times the speed of the Cray-1. The design never fully matured, and it was unable to reach its goals. Nevertheless, the product was one of the fastest computers on the market, and 7 liquid nitrogen-cooled and 27 smaller air cooled versions of the computers were sold during the next few years. They used the new CMOS chips, which produced much less heat. The effort ended after half-hearted attempts to sell ETA Systems. In 1989, most of the employees of ETA Systems were laid off, and the remaining ones were folded into CDC.
Despite having valuable technology, CDC still suffered huge losses in 1985 ($567 million) and 1986 while attempting to reorganize. As a result, in 1987 it sold its PathLab Laboratory Information System to 3M. While CDC was still making computers, it was decided that hardware manufacturing was no longer as profitable as it used to be, and so in 1988 it was decided to leave the industry, bit by bit. The first division to go was Imprimis. After that, CDC sold other assets such as VTC (a chip maker that specialized in mass-storage circuitry and was closely linked with MPI), and non-computer-related assets like Ticketron.
In 1992, the company separated into two independent companies – the computer businesses were spun out as Control Data Systems, Inc. (CDS), while the information service businesses became the Ceridian Corporation.
CDS later became owner of ICEM Technologies, makers of ICEM DDN and ICEM Surf software and sold the business to PTC for $40.6m in 1998. In 1999, CDS was bought out by Syntegra, a subsidiary of the BT Group, and merged into BT's Global Services organization.
Ceridian continues as a successful outsourced IT company focusing on human resources. CDC's Energy Management Division, was one of its most successful business units, providing control systems solutions that managed as much as 25% of all electricity on the planet, and went to Ceridian in the split. This division was renamed Empros and was sold to Siemens in 1993. In 1997, General Dynamics acquired the Computing Devices International Division of Ceridian, which was a defense electronics and systems integration business headquartered in Bloomington, Minnesota – originally Control Data's Government Systems Division.
In March 2001, Ceridian separated into two independent companies, with the old Ceridian Corporation renamed itself to Arbitron Inc. and the rest of the company (consisting of human resources services and Comdata business) took the Ceridian Corporation name. Ceridian was later split again in 2013, with formation of Ceridian HCM Holding Inc. (human resources services) and Comdata Inc. (payments business), marking the end of CDC assets split for good.
Timeline of systems releases
CDC 1604 et al – 1604, 1604-A, 1604-B, 1604-C, 924 (a "cut down" 1604 sibling)
CDC 160 series – 160, 160A (160-A), 160G (160-G)
CDC 3000 series – 3100, 3200, 3300, 3400, 3500, 3600, 3800
CDC 6000 series – 6200, 6400, 6500, 6700
CDC 6600
CDC 7600
CDC CYBER – 17, 18, 71, 72, 73, 74, 76, 170, 171, 172, 173, 174, 175, 176, 203, 205, Omega/480, 700
CDC STAR-100
1957 – Founding
1959 – 1604
1960 – 1604-B
1961 – 160
1962 – 924 (a 24-bit 1604)
1963 – 160A (160-A), 1604-A, 3400, 6600
1964 – 160G (160-G), 3100, 3200, 3600, 6400
1965 – 1604-C, 1700, 3300, 3500, 8050, 8090
1966 – 3800, 6200, 6500, Station 6000
1968 – 7600
1969 – 6700
1970 – STAR-100
1971 – Cyber 71, Cyber 72, Cyber 73, Cyber 74, Cyber 76
1972 – 5600, 8600
1973 – Cyber 170, Cyber 172, Cyber 173, Cyber 174, Cyber 175, Cyber 17
1976 – Cyber 18
1977 – Cyber 171, Cyber 176, Omega/480
1979 – Cyber 203, Cyber 720, Cyber 730, Cyber 740, Cyber 750, Cyber 760
1980 – Cyber 205
1982 – Cyber 815, Cyber 825, Cyber 835, Cyber 845, Cyber 855, Cyber 865, Cyber 875
1983 – ETA10
1984 – Cyber 810, Cyber 830, Cyber 840, Cyber 850, Cyber 860, Cyber 990, CyberPlus
1987 – Cyber 910, Cyber 930, Cyber 995
1988 – Cyber 960
1989 – Cyber 920, Cyber 2000
Note: The 8xx & 9xx Cyber models, introduced beginning in 1982, formed the 64-bit Cyber 180 series, and their Peripheral Processors (PPs) were 16-bit. The 180 series had virtual memory capability, using CDC's NOS/VE operating system. The more complete nomenclature for these was 180/xxx, although at times the shorter form (e.g. Cyber 990) was used.
Peripheral Systems Group
Control Data Corporation's Peripheral Systems Group was both a hardware and a software development unit that functioned in the 1970s and 1980s.
Their services included development and marketing of IBM-oriented (operating) systems software. One of the Peripheral Systems Group's software products was named CUPID, "Control Data's Program for Unlike Data Set Concatenation." Its focus was for customers of IBM's MVS operating system, and the intended audience was systems programmers. The product's General Information and Reference Manual included SysGen-like options and information about internal user-accessible control blocks.
Film and science fiction references
Mars Needs Women (1967) – a CDC 3400 is used for radio communication and to direct the actions of the military as they intercept the Martian spaceships.
Colossus: The Forbin Project (1970) – The title sequences to this film include tape drives and other early CDC equipment.
The Mad Bomber (1973) – The police department has a CDC 3100 that they use to profile the bomber.
The Adolescence of P-1 (1977), by Thomas Ryan – Control Data computers were very enticing to young P-1.
The New Avengers – In episode 2-10 (#23) ("Complex", 1977) Purdey uses a CDC card reader.
Mi-Sex – Computer Games: 1979 pop music video. The band enters the computer room in the Control Data North Sydney building and proceeds to play with CDC equipment.
Tron (1982) – In the wide screen version of the film, when Flynn and Lora sneak into Encom, a CDC 7600 is visible in the background, alongside a Cray-1. This scene was shot at the Lawrence Livermore National Laboratory.
Die Hard (1988) – The computer room shot up by one of the terrorists contained a number of working Cyber 180 computers and a mock-up of an ETA-10 supercomputer, along with a number of other peripheral devices, all provided by CDC Demonstration Services/Benchmark Lab. This equipment was requested on short notice after another computer manufacturer backed out at the last minute. Paul Derby, manager of the Benchmark Lab, arranged to send two van-loads of equipment to Hollywood for the shoot, accompanied by Jerry Sterns of the Benchmark Lab who supervised the equipment while it was on the set. After the machines were returned to Minnesota, they were inspected and tested, and as each machine was sold, a notation was made in the corporate records that the machine had appeared in the film.
They Live (1988), by John Carpenter – As Roddy Piper's character is trying on his new "sunglasses" that allow him to see the world as it is, he looks at an advertisement for Control Data Corporation and sees the word OBEY. The film's credits include "special thanks" to CDC.
References
Further reading
Lundstrom, David. A Few Good Men from Univac. Cambridge, Massachusetts: MIT Press, 1987. .
Misa, Thomas J., ed. Building the Control Data Legacy: The Career of Robert M. Price. Minneapolis: Charles Babbage Institute, 2012
Murray, Charles J. The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer. New York: John Wiley, 1997. .
Price, Robert M. The Eye for Innovation: Recognizing Possibilities and Managing the Creative Enterprise. New Haven: Yale University Press, 2005
Thornton, J. E. Design of a Computer: The Control Data 6600. Glenview, Ill.: Scott, Foresman, 1970
Worthy, James C. William C. Norris: Portrait of a Maverick. Ballinger Pub Co., May 1987.
External links
Control Data Corporation Records at the Charles Babbage Institute, University of Minnesota, Minneapolis; CDC records donated by Ceridian Corporation in 1991; finding guide contains historical timeline, product timeline, acquisitions list, and joint venture list.
Oral history interview with William Norris discusses ERA years, acquisition of ERA by Remington Rand, the Univac File computer, work as head of the Univac Division, and the formation of CDC. Charles Babbage Institute, University of Minnesota, Minneapolis.
Oral history interview with Willis K. Drake Discusses Remington-Rand, the Eckert-Mauchly Computer Company, ERA, and formation of Control Data Corporation. Charles Babbage Institute, University of Minnesota, Minneapolis.
Organized discussion moderated by Neil R. Lincoln with eighteen Control Data Corporation (CDC) engineers on computer architecture and design. Charles Babbage Institute, University of Minnesota, Minneapolis. Engineers include Robert Moe, Wayne Specker, Dennis Grinna, Tom Rowan, Maurice Hutson, Curt Alexander, Don Pagelkopf, Maris Bergmanis, Dolan Toth, Chuck Hawley, Larry Krueger, Mike Pavlov, Dave Resnick, Howard Krohn, Bill Bhend, Kent Steiner, Raymon Kort, and Neil R. Lincoln. Discussion topics include CDC 1604, CDC 6600, CDC 7600, CDC 8600, CDC STAR-100 and Seymour Cray.
Information about the spin out of Commercial Credit from Control Data by Sandy Weill
Information about the Control Data CDC 3800 Computer—on display at the National Air and Space Museum Steven F. Udvar-Hazy Center near Washington Dulles International Airport.
Private Collection of historical documents about CDC
Control Data User Manuals Library @ Computing History
Computing history describing the use of a range of CDC systems and equipment 1970–1985
A German collection of CDC, Cray and other large computer systems, some of them in operation
American companies established in 1957
American companies disestablished in 1992
Chippewa County, Wisconsin
Computer companies established in 1957
Computer companies disestablished in 1992
Defunct companies based in Minneapolis
Defunct companies based in Minnesota
Defunct computer companies of the United States
Defunct computer hardware companies
Defunct computer systems companies
Defunct software companies of the United States
Manufacturing companies based in Minnesota
Software companies based in Minnesota
Supercomputers
Technology companies established in 1957
Technology companies disestablished in 1992 | Control Data Corporation | Technology | 7,581 |
30,873,297 | https://en.wikipedia.org/wiki/Pentylenetetrazol | Pentylenetetrazol (PTZ), also known as pentylenetetrazole, leptazol, metrazol, pentetrazol (INN), pentamethylenetetrazol, Corazol, Cardiazol, or Deumacard, is a drug formerly used as a circulatory and respiratory stimulant. High doses cause convulsions, as discovered by Hungarian-American neurologist and psychiatrist Ladislas J. Meduna in 1934. It has been used in convulsive therapy, and was found to be effective—primarily for depression—but side effects such as uncontrolled seizures were difficult to avoid. In 1939, pentylenetetrazol was replaced by electroconvulsive therapy, which is easier to administer, as the preferred method for inducing seizures in England's mental hospitals. In the US, its approval by the Food and Drug Administration was revoked in 1982. It is used in Italy as a cardio-respiratory stimulant in combination with codeine in a cough suppressant drug.
Side effects
Pentylenetetrazol is anxiogenic and has been known to induce severe anxiety in humans.
Mechanism
The mechanism of pentylenetetrazol is not well understood, and it may have multiple mechanisms of action. In 1984, Squires et al. published a report analyzing pentylenetetrazol and several structurally related convulsant drugs. They found that in vivo convulsant potency was strongly correlated to in vitro affinity to the picrotoxin binding site on the GABA-A receptor complex. Many GABA-A ligands, such as the sedatives diazepam and phenobarbital, are effective anticonvulsants, but presumably pentylenetetrazol has the opposite effect when it binds to the GABA-A receptor.
Several studies have focused on the way pentylenetetrazol influences neuronal ion channels. A 1987 study found that pentylenetetrazol increases calcium influx and sodium influx, both of which depolarize the neuron. Because these effects were antagonized by calcium channel blockers, pentylenetetrazol apparently acts at calcium channels, and it causes them to lose selectivity and conduct sodium ions, as well.
Research
Pentylenetetrazol has been used experimentally to study seizure phenomena and to identify pharmaceuticals that may control seizure susceptibility. For instance, researchers can induce status epilepticus in animal models. Pentylenetetrazol is also a prototypical anxiogenic drug and has been extensively used in animal models of anxiety. Pentylenetetrazol produces a reliable discriminative stimulus, which is largely mediated by the GABAA receptor. Several classes of compounds can modulate the pentylenetetrazol discriminative stimulus, including 5-HT1A, 5-HT3, NMDA, glycine, and L-type calcium channel ligands.
Pentylenetetrazol is being studied as a wakefulness-promoting agent in the treatment of idiopathic hypersomnia and narcolepsy.
See also
List of investigational sleep drugs
GABAA receptor negative allosteric modulator
GABAA receptor § Ligands
References
Antidepressants
Anxiogenics
Convulsants
GABAA receptor negative allosteric modulators
Respiratory agents
Stimulants
Tetrazoles
Wakefulness-promoting agents
Withdrawn drugs | Pentylenetetrazol | Chemistry | 731 |
65,804,770 | https://en.wikipedia.org/wiki/Telephone%20game%20%28game%20theory%29 | The Telephone game, proposed by David Lewis, is an example of a coordination game potentially having more than one Nash equilibrium. The game was based on a convention in Lewis's home town of Oberlin, Ohio, that when a telephone call was cut off, the caller would redial the callee.
Equilibrium analysis
This game involves two players in a town having a telephone service with only one telephone line that cuts callers off after a set period of time (e.g., five minutes) if their call is not completed. Assuming one player (the caller) calls a second player (the callee) and is cut off, the players each have two potential strategies: wait for the other to dial them back, or redial to call the other. If both players wait, then no call will be completed, resulting in zero benefit to either player. If both players call each other, then they will get a busy signal, again resulting in zero benefit to either party. In a simple case where the cost of calling is negligible, it is equally optimal for both parties for one of the caller and the callee to wait whilst the other redials (represented as a benefit of 10 for both parties in Fig. 1), and as such this is a pure coordination game.
In a more complex version of the game (Fig. 2), if the cost of calling is high, then the players will prefer the waiting strategy with its resulting deadlock. If one player calls and the other waits then the player that waits will receive a benefit (say, 6) and the player that calls will receive a lesser benefit as they have to pay the cost of the call (say, 3). In this case there are two potential Nash equilibria.
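The figures referred to above are not reproduced here, but the payoff matrices they describe can be reconstructed from the text. The following sketch (my own reconstruction, with the busy-signal outcome assumed to pay zero to both players) enumerates the pure-strategy Nash equilibria of both games by brute force; in each game the equilibria are the two outcomes where exactly one player redials.

```python
# Reconstruction of the two 2x2 games described in the text, with a
# brute-force check for pure-strategy Nash equilibria. The (0, 0) payoff
# for the busy-signal outcome in the costly-call game is an assumption.

def pure_nash(payoffs):
    """Return the pure-strategy Nash equilibria of a two-player game.

    payoffs[(caller_move, callee_move)] = (caller payoff, callee payoff)
    """
    moves = ["wait", "call"]
    equilibria = []
    for caller in moves:
        for callee in moves:
            u_caller, u_callee = payoffs[(caller, callee)]
            # An outcome is an equilibrium if neither player can gain by
            # deviating unilaterally.
            caller_ok = all(payoffs[(m, callee)][0] <= u_caller for m in moves)
            callee_ok = all(payoffs[(caller, m)][1] <= u_callee for m in moves)
            if caller_ok and callee_ok:
                equilibria.append((caller, callee))
    return equilibria

# Fig. 1 as described: calling is free, successful coordination pays 10 to each.
free_call = {
    ("wait", "wait"): (0, 0), ("call", "call"): (0, 0),
    ("call", "wait"): (10, 10), ("wait", "call"): (10, 10),
}

# Fig. 2 as described: when exactly one player redials, the waiting player
# gets 6 and the calling player, who pays for the call, gets 3.
costly_call = {
    ("wait", "wait"): (0, 0), ("call", "call"): (0, 0),
    ("call", "wait"): (3, 6), ("wait", "call"): (6, 3),
}

print(pure_nash(free_call))    # [('wait', 'call'), ('call', 'wait')]
print(pure_nash(costly_call))  # [('wait', 'call'), ('call', 'wait')]
```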
References
Non-cooperative games | Telephone game (game theory) | Mathematics | 363 |
7,454,236 | https://en.wikipedia.org/wiki/Tetrathiafulvalene | Tetrathiafulvalene (TTF) is an organosulfur compound with the formula (H2C2S2C)2, i.e. C6H4S4. Studies on this heterocyclic compound contributed to the development of molecular electronics. TTF is related to the hydrocarbon fulvalene (C10H8) by replacement of four CH groups with sulfur atoms. Over 10,000 scientific publications discuss TTF and its derivatives.
Preparation
The high level of interest in TTFs has spawned the development of many syntheses of TTF and its analogues. Most preparations entail the coupling of cyclic building blocks such as 1,3-dithiole-2-thiones or the related 1,3-dithiole-2-ones. For TTF itself, the synthesis begins with the cyclic trithiocarbonate (1,3-dithiole-2-thione), which is S-methylated and then reduced to give the 1,3-dithiol-2-yl methyl thioether; coupling of the 1,3-dithiolium salt derived from this thioether then gives TTF itself.
Redox properties
Bulk TTF itself has unremarkable electrical properties. Distinctive properties are, however, associated with salts of its oxidized derivatives, such as salts derived from the radical cation TTF+.
The high electrical conductivity of TTF salts can be attributed to the following features of TTF:
its planarity, which allows π-π stacking of its oxidized derivatives,
its high symmetry, which promotes charge delocalization, thereby minimizing coulombic repulsions, and
its ability to undergo oxidation at mild potentials to give a stable radical cation. Electrochemical measurements show that TTF can be oxidized twice reversibly:
TTF ⇌ TTF•+ + e− (E = 0.34 V)
TTF•+ ⇌ TTF2+ + e− (E = 0.78 V, vs. Ag/AgCl in solution)
Each dithiolylidene ring in TTF has 7π electrons: 2 for each sulfur atom, 1 for each sp2 carbon atom. Thus, oxidation converts each ring to an aromatic 6π-electron configuration, consequently leaving the central double bond essentially a single bond, as all π-electrons occupy ring orbitals.
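The electron count stated above works out as follows (restated arithmetic, not additional data from the article):

```latex
% Per 1,3-dithiolylidene ring: two S atoms contribute 2 electrons each and
% three sp^2 C atoms contribute 1 each.
2 \times 2 + 3 \times 1 = 7\ \pi\text{-electrons per ring}
% The two reversible one-electron oxidations remove one electron per ring,
% leaving the aromatic sextet:
7 - 1 = 6\ \pi\text{-electrons per ring}
```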
History
A TTF salt was reported to be a semiconductor in 1972. Subsequently, the charge-transfer salt [TTF]TCNQ was shown to be a narrow band gap semiconductor. X-ray diffraction studies of [TTF][TCNQ] revealed stacks of partially oxidized TTF molecules adjacent to anionic stacks of TCNQ molecules. This "segregated stack" motif was unexpected and is responsible for the distinctive electrical properties, i.e. high and anisotropic electrical conductivity. Since these early discoveries, numerous analogues of TTF have been prepared. Well-studied analogues include tetramethyltetrathiafulvalene (Me4TTF), tetramethyltetraselenafulvalene (TMTSF), and bis(ethylenedithio)tetrathiafulvalene (BEDT-TTF, CAS [66946-48-3]). Several tetramethyltetrathiafulvalene salts (called Fabre salts) are of some relevance as organic superconductors.
See also
Bechgaard salt
References
Further reading
Physical properties of Tetrathiafulvalene from the literature.
Molecular electronics
Organic semiconductors
Dithioles | Tetrathiafulvalene | Chemistry,Materials_science | 690 |
54,112,223 | https://en.wikipedia.org/wiki/Transcriptomics%20technologies | Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. Transcriptomics technologies provide a broad account of which cellular processes are active and which are dormant.
A major challenge in molecular biology is to understand how a single genome gives rise to a variety of cells. Another is how gene expression is regulated.
The first attempts to study whole transcriptomes began in the early 1990s. Subsequent technological advances since the late 1990s have repeatedly transformed the field and made transcriptomics a widespread discipline in biological sciences. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA-Seq, which uses high-throughput sequencing to record all transcripts. As the technology improved, the volume of data produced by each transcriptome experiment increased. As a result, data analysis methods have steadily been adapted to more accurately and efficiently analyse increasingly large volumes of data. Transcriptome databases are getting bigger and more useful as transcriptomes continue to be collected and shared by researchers. It would be almost impossible to interpret the information contained in a transcriptome without the knowledge of previous experiments.
Measuring the expression of an organism's genes in different tissues or conditions, or at different times, gives information on how genes are regulated and reveals details of an organism's biology. It can also be used to infer the functions of previously unannotated genes. Transcriptome analysis has enabled the study of how gene expression changes in different organisms and has been instrumental in the understanding of human disease. An analysis of gene expression in its entirety allows detection of broad coordinated trends which cannot be discerned by more targeted assays.
History
Transcriptomics has been characterised by the development of new techniques which have redefined what is possible every decade or so and rendered previous technologies obsolete. The first attempt at capturing a partial human transcriptome was published in 1991 and reported 609 mRNA sequences from the human brain. In 2008, two human transcriptomes, composed of millions of transcript-derived sequences covering 16,000 genes, were published, and by 2015 transcriptomes had been published for hundreds of individuals. Transcriptomes of different disease states, tissues, or even single cells are now routinely generated. This explosion in transcriptomics has been driven by the rapid development of new technologies with improved sensitivity and economy.
Before transcriptomics
Studies of individual transcripts were being performed several decades before any transcriptomics approaches were available. Libraries of silkmoth mRNA transcripts were collected and converted to complementary DNA (cDNA) for storage using reverse transcriptase in the late 1970s. In the 1980s, low-throughput sequencing using the Sanger method was used to sequence random transcripts, producing expressed sequence tags (ESTs). The Sanger method of sequencing was predominant until the advent of high-throughput methods such as sequencing by synthesis (Solexa/Illumina). ESTs came to prominence during the 1990s as an efficient method to determine the gene content of an organism without sequencing the entire genome. Amounts of individual transcripts were quantified using Northern blotting, nylon membrane arrays, and later reverse transcriptase quantitative PCR (RT-qPCR) methods, but these methods are laborious and can only capture a tiny subsection of a transcriptome. Consequently, the manner in which a transcriptome as a whole is expressed and regulated remained unknown until higher-throughput techniques were developed.
Early attempts
The word "transcriptome" was first used in the 1990s. In 1995, one of the earliest sequencing-based transcriptomic methods was developed, serial analysis of gene expression (SAGE), which worked by Sanger sequencing of concatenated random transcript fragments. Transcripts were quantified by matching the fragments to known genes. A variant of SAGE using high-throughput sequencing techniques, called digital gene expression analysis, was also briefly used. However, these methods were largely overtaken by high throughput sequencing of entire transcripts, which provided additional information on transcript structure such as splice variants.
Development of contemporary techniques
The dominant contemporary techniques, microarrays and RNA-Seq, were developed in the mid-1990s and 2000s. Microarrays that measure the abundances of a defined set of transcripts via their hybridisation to an array of complementary probes were first published in 1995. Microarray technology allowed the assay of thousands of transcripts simultaneously, at a greatly reduced cost per gene and with considerable labour savings. Both spotted oligonucleotide arrays and Affymetrix high-density arrays were the method of choice for transcriptional profiling until the late 2000s. Over this period, a range of microarrays were produced to cover known genes in model or economically important organisms. Advances in design and manufacture of arrays improved the specificity of probes and allowed more genes to be tested on a single array. Advances in fluorescence detection increased the sensitivity and measurement accuracy for low abundance transcripts.
RNA-Seq is accomplished by reverse transcribing RNA in vitro and sequencing the resulting cDNAs. Transcript abundance is derived from the number of counts from each transcript. The technique has therefore been heavily influenced by the development of high-throughput sequencing technologies. Massively parallel signature sequencing (MPSS) was an early example based on generating 16–20 bp sequences via a complex series of hybridisations, and was used in 2004 to validate the expression of ten thousand genes in Arabidopsis thaliana. The earliest RNA-Seq work was published in 2006 with one hundred thousand transcripts sequenced using 454 technology. This was sufficient coverage to quantify relative transcript abundance. RNA-Seq began to increase in popularity after 2008 when new Solexa/Illumina technologies allowed one billion transcript sequences to be recorded. This yield now allows for the quantification and comparison of human transcriptomes.
Data gathering
Generating data on RNA transcripts can be achieved via either of two main principles: sequencing of individual transcripts (ESTs, or RNA-Seq) or hybridisation of transcripts to an ordered array of nucleotide probes (microarrays).
Isolation of RNA
All transcriptomic methods require RNA to first be isolated from the experimental organism before transcripts can be recorded. Although biological systems are incredibly diverse, RNA extraction techniques are broadly similar and involve mechanical disruption of cells or tissues, disruption of RNase with chaotropic salts, disruption of macromolecules and nucleotide complexes, separation of RNA from undesired biomolecules including DNA, and concentration of the RNA via precipitation from solution or elution from a solid matrix. Isolated RNA may additionally be treated with DNase to digest any traces of DNA. It is necessary to enrich messenger RNA as total RNA extracts are typically 98% ribosomal RNA. Enrichment for transcripts can be performed by poly-A affinity methods or by depletion of ribosomal RNA using sequence-specific probes. Degraded RNA may affect downstream results; for example, mRNA enrichment from degraded samples will result in the depletion of 5’ mRNA ends and an uneven signal across the length of a transcript. Snap-freezing of tissue prior to RNA isolation is typical, and care is taken to reduce exposure to RNase enzymes once isolation is complete.
Expressed sequence tags
An expressed sequence tag (EST) is a short nucleotide sequence generated from a single RNA transcript. RNA is first copied as complementary DNA (cDNA) by a reverse transcriptase enzyme before the resultant cDNA is sequenced. Because ESTs can be collected without prior knowledge of the organism from which they come, they can be made from mixtures of organisms or environmental samples. Although higher-throughput methods are now used, EST libraries commonly provided sequence information for early microarray designs; for example, a barley microarray was designed from 350,000 previously sequenced ESTs.
Serial and cap analysis of gene expression (SAGE/CAGE)
Serial analysis of gene expression (SAGE) was a development of EST methodology to increase the throughput of the tags generated and allow some quantitation of transcript abundance. cDNA is generated from the RNA but is then digested into 11 bp "tag" fragments using restriction enzymes that cut DNA at a specific sequence, and 11 base pairs along from that sequence. These cDNA tags are then joined head-to-tail into long strands (>500 bp) and sequenced using low-throughput, but long read-length methods such as Sanger sequencing. The sequences are then divided back into their original 11 bp tags using computer software in a process called deconvolution. If a high-quality reference genome is available, these tags may be matched to their corresponding gene in the genome. If a reference genome is unavailable, the tags can be directly used as diagnostic markers if found to be differentially expressed in a disease state.
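As a minimal sketch of the tag counting and deconvolution steps just described, the fragment below splits a concatenated read back into fixed-length tags and tallies how often each tag occurs. The 11 bp tag length follows the text; the example reads are invented, and a real pipeline would also handle the ditag/linker structure and match tags to a reference annotation, which is omitted here.

from collections import Counter

TAG_LENGTH = 11  # tag size used in the classic SAGE protocol (see text)

def deconvolve(concatemer, tag_length=TAG_LENGTH):
    # split a concatenated read into consecutive fixed-length tags
    tags = [concatemer[i:i + tag_length]
            for i in range(0, len(concatemer), tag_length)]
    return [t for t in tags if len(t) == tag_length]

def count_tags(reads):
    # tally tag occurrences across all sequenced concatemers
    counts = Counter()
    for read in reads:
        counts.update(deconvolve(read))
    return counts

# toy example: two short concatemers built from two 11 bp tags each
reads = ["AAAAAAAAAAACCCCCCCCCCC", "AAAAAAAAAAAGGGGGGGGGGG"]
print(count_tags(reads).most_common())  # the all-A tag appears twice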
The cap analysis gene expression (CAGE) method is a variant of SAGE that sequences tags from the 5’ end of an mRNA transcript only. Therefore, the transcriptional start site of genes can be identified when the tags are aligned to a reference genome. Identifying gene start sites is of use for promoter analysis and for the cloning of full-length cDNAs.
SAGE and CAGE methods produce information on more genes than was possible when sequencing single ESTs, but sample preparation and data analysis are typically more labour-intensive.
Microarrays
Principles and advances
Microarrays usually consist of a grid of short nucleotide oligomers, known as "probes", typically arranged on a glass slide. Transcript abundance is determined by hybridisation of fluorescently labelled transcripts to these probes. The fluorescence intensity at each probe location on the array indicates the transcript abundance for that probe sequence. Groups of probes designed to measure the same transcript (i.e., hybridizing a specific transcript in different positions) are usually referred to as "probesets".
Microarrays require some genomic knowledge from the organism of interest, for example, in the form of an annotated genome sequence, or a library of ESTs that can be used to generate the probes for the array.
Methods
Microarrays for transcriptomics typically fall into one of two broad categories: low-density spotted arrays or high-density short probe arrays. Transcript abundance is inferred from the intensity of fluorescence derived from fluorophore-tagged transcripts that bind to the array.
Spotted low-density arrays typically feature picolitre drops of a range of purified cDNAs arrayed on the surface of a glass slide. These probes are longer than those of high-density arrays and cannot identify alternative splicing events. Spotted arrays use two different fluorophores to label the test and control samples, and the ratio of fluorescence is used to calculate a relative measure of abundance. High-density arrays use a single fluorescent label, and each sample is hybridised and detected individually. High-density arrays were popularised by the Affymetrix GeneChip array, where each transcript is quantified by several short 25-mer probes that together assay one gene.
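To make the two-colour quantification concrete, the sketch below converts raw red/green fluorescence intensities from a spotted array into log2 ratios after a simple background subtraction. The spot names, intensities, and background values are invented for illustration; production analyses use dedicated normalisation methods rather than this bare calculation.

import math

def log_ratio(red, green, red_bg=0.0, green_bg=0.0, floor=1.0):
    # log2(test/control) for one spot; intensities are clamped to a small
    # floor after background subtraction so the logarithm is defined
    r = max(red - red_bg, floor)
    g = max(green - green_bg, floor)
    return math.log2(r / g)

# hypothetical spots: (probe, red intensity, green intensity)
spots = [("geneA", 5200.0, 1300.0),   # roughly 4-fold up in the test sample
         ("geneB", 800.0, 3200.0),    # roughly 4-fold down
         ("geneC", 1500.0, 1480.0)]   # essentially unchanged
for probe, red, green in spots:
    print(probe, round(log_ratio(red, green, red_bg=100, green_bg=100), 2))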
NimbleGen arrays were a high-density array produced by a maskless-photochemistry method, which permitted flexible manufacture of arrays in small or large numbers. These arrays had 100,000s of 45 to 85-mer probes and were hybridised with a one-colour labelled sample for expression analysis. Some designs incorporated up to 12 independent arrays per slide.
RNA-Seq
Principles and advances
RNA-Seq refers to the combination of a high-throughput sequencing methodology with computational methods to capture and quantify transcripts present in an RNA extract. The nucleotide sequences generated are typically around 100 bp in length, but can range from 30 bp to over 10,000 bp depending on the sequencing method used. RNA-Seq relies on deep sampling of the transcriptome with many short fragments, allowing computational reconstruction of the original RNA transcripts by aligning reads to a reference genome or to each other (de novo assembly). Both low-abundance and high-abundance RNAs can be quantified in an RNA-Seq experiment (a dynamic range of five orders of magnitude), a key advantage over microarray transcriptomes. In addition, input RNA amounts are much lower for RNA-Seq (nanogram quantities) than for microarrays (microgram quantities), which allows examination of the transcriptome even at single-cell resolution when combined with amplification of cDNA. Theoretically, there is no upper limit of quantification in RNA-Seq, and background noise is very low for 100 bp reads in non-repetitive regions.
RNA-Seq may be used to identify genes within a genome, or identify which genes are active at a particular point in time, and read counts can be used to accurately model the relative gene expression level. RNA-Seq methodology has constantly improved, primarily through the development of DNA sequencing technologies to increase throughput, accuracy, and read length. Since the first descriptions in 2006 and 2008, RNA-Seq has been rapidly adopted and overtook microarrays as the dominant transcriptomics technique in 2015.
The quest for transcriptome data at the level of individual cells has driven advances in RNA-Seq library preparation methods, resulting in dramatic advances in sensitivity. Single-cell transcriptomes are now well described and have even been extended to in situ RNA-Seq where transcriptomes of individual cells are directly interrogated in fixed tissues.
Methods
RNA-Seq was established in concert with the rapid development of a range of high-throughput DNA sequencing technologies. However, before the extracted RNA transcripts are sequenced, several key processing steps are performed. Methods differ in the use of transcript enrichment, fragmentation, amplification, single or paired-end sequencing, and whether to preserve strand information.
The sensitivity of an RNA-Seq experiment can be increased by enriching classes of RNA that are of interest and depleting known abundant RNAs. The mRNA molecules can be separated using oligonucleotide probes which bind their poly-A tails. Alternatively, ribo-depletion can be used to specifically remove abundant but uninformative ribosomal RNAs (rRNAs) by hybridisation to probes tailored to the taxon's specific rRNA sequences (e.g. mammal rRNA, plant rRNA). However, ribo-depletion can also introduce some bias via non-specific depletion of off-target transcripts. Small RNAs, such as micro RNAs, can be purified based on their size by gel electrophoresis and extraction.
Since mRNAs are longer than the read-lengths of typical high-throughput sequencing methods, transcripts are usually fragmented prior to sequencing. The fragmentation method is a key aspect of sequencing library construction. Fragmentation may be achieved by chemical hydrolysis, nebulisation, sonication, or reverse transcription with chain-terminating nucleotides. Alternatively, fragmentation and cDNA tagging may be done simultaneously by using transposase enzymes.
During preparation for sequencing, cDNA copies of transcripts may be amplified by PCR to enrich for fragments that contain the expected 5’ and 3’ adapter sequences. Amplification is also used to allow sequencing of very low input amounts of RNA, down to as little as 50 pg in extreme applications. Spike-in controls of known RNAs can be used for quality control assessment to check library preparation and sequencing, in terms of GC-content, fragment length, as well as the bias due to fragment position within a transcript. Unique molecular identifiers (UMIs) are short random sequences that are used to individually tag sequence fragments during library preparation so that every tagged fragment is unique. UMIs provide an absolute scale for quantification, the opportunity to correct for subsequent amplification bias introduced during library construction, and accurately estimate the initial sample size. UMIs are particularly well-suited to single-cell RNA-Seq transcriptomics, where the amount of input RNA is restricted and extended amplification of the sample is required.
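The fragment below sketches the simplest use of UMIs described above: reads that map to the same gene and carry the same UMI are collapsed into a single molecule before counting, so duplicates introduced by PCR amplification do not inflate the counts. The gene names and UMI sequences are invented; real tools additionally correct for sequencing errors within the UMI, which this toy version ignores.

from collections import defaultdict

def umi_counts(records):
    # records: iterable of (gene, umi) pairs, one per aligned read;
    # returns molecule counts per gene after collapsing duplicate UMIs
    molecules = defaultdict(set)
    for gene, umi in records:
        molecules[gene].add(umi)
    return {gene: len(umis) for gene, umis in molecules.items()}

# toy reads: geneX was amplified heavily but derives from only 2 molecules
reads = [("geneX", "ACGTAC"), ("geneX", "ACGTAC"), ("geneX", "ACGTAC"),
         ("geneX", "TTGCAA"), ("geneY", "GGGTTT")]
print(umi_counts(reads))  # {'geneX': 2, 'geneY': 1}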
Once the transcript molecules have been prepared they can be sequenced in just one direction (single-end) or both directions (paired-end). A single-end sequence is usually quicker to produce, cheaper than paired-end sequencing and sufficient for quantification of gene expression levels. Paired-end sequencing produces more robust alignments/assemblies, which is beneficial for gene annotation and transcript isoform discovery. Strand-specific RNA-Seq methods preserve the strand information of a sequenced transcript. Without strand information, reads can be aligned to a gene locus but do not inform in which direction the gene is transcribed. Stranded-RNA-Seq is useful for deciphering transcription for genes that overlap in different directions and to make more robust gene predictions in non-model organisms.
Legend: NCBI SRA – National center for biotechnology information sequence read archive.
Currently RNA-Seq relies on copying RNA molecules into cDNA molecules prior to sequencing; therefore, the subsequent platforms are the same for transcriptomic and genomic data. Consequently, the development of DNA sequencing technologies has been a defining feature of RNA-Seq. Direct sequencing of RNA using nanopore sequencing represents a current state-of-the-art RNA-Seq technique. Nanopore sequencing of RNA can detect modified bases that would be otherwise masked when sequencing cDNA and also eliminates amplification steps that can otherwise introduce bias.
The sensitivity and accuracy of an RNA-Seq experiment are dependent on the number of reads obtained from each sample. A large number of reads are needed to ensure sufficient coverage of the transcriptome, enabling detection of low abundance transcripts. Experimental design is further complicated by sequencing technologies with a limited output range, the variable efficiency of sequence creation, and variable sequence quality. Added to those considerations is that every species has a different number of genes and therefore requires a tailored sequence yield for an effective transcriptome. Early studies determined suitable thresholds empirically, but as the technology matured suitable coverage was predicted computationally by transcriptome saturation. Somewhat counter-intuitively, the most effective way to improve detection of differential expression in low expression genes is to add more biological replicates rather than adding more reads. The current benchmarks recommended by the Encyclopedia of DNA Elements (ENCODE) Project are for 70-fold exome coverage for standard RNA-Seq and up to 500-fold exome coverage to detect rare transcripts and isoforms.
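The coverage figures quoted above translate into read numbers through simple arithmetic: fold-coverage = (number of reads x read length) / target size. The sketch below applies this to a hypothetical 30 Mb exome with 100 bp reads; the target size is an assumption chosen for illustration, not an ENCODE specification.

def reads_needed(coverage, target_bp, read_length_bp):
    # number of reads required to reach a given fold-coverage of a target
    return int(coverage * target_bp / read_length_bp)

EXOME_BP = 30_000_000   # assumed exome size (~30 Mb), illustrative only
READ_LEN = 100          # short-read length used in the examples above
print(reads_needed(70, EXOME_BP, READ_LEN))   # 21,000,000 reads for 70-fold
print(reads_needed(500, EXOME_BP, READ_LEN))  # 150,000,000 reads for 500-fold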
Data analysis
Transcriptomics methods are highly parallel and require significant computation to produce meaningful data for both microarray and RNA-Seq experiments. Microarray data is recorded as high-resolution images, requiring feature detection and spectral analysis. Microarray raw image files are each about 750 MB in size, while the processed intensities are around 60 MB in size. Multiple short probes matching a single transcript can reveal details about the intron-exon structure, requiring statistical models to determine the authenticity of the resulting signal. RNA-Seq studies produce billions of short DNA sequences, which must be aligned to reference genomes composed of millions to billions of base pairs. De novo assembly of reads within a dataset requires the construction of highly complex sequence graphs. RNA-Seq operations are highly repetitious and benefit from parallelised computation but modern algorithms mean consumer computing hardware is sufficient for simple transcriptomics experiments that do not require de novo assembly of reads. A human transcriptome could be accurately captured using RNA-Seq with 30 million 100 bp sequences per sample. This example would require approximately 1.8 gigabytes of disk space per sample when stored in a compressed fastq format. Processed count data for each gene would be much smaller, equivalent to processed microarray intensities. Sequence data may be stored in public repositories, such as the Sequence Read Archive (SRA). RNA-Seq datasets can be uploaded via the Gene Expression Omnibus.
Image processing
Microarray image processing must correctly identify the regular grid of features within an image and independently quantify the fluorescence intensity for each feature. Image artefacts must be additionally identified and removed from the overall analysis. Fluorescence intensities directly indicate the abundance of each sequence, since the sequence of each probe on the array is already known.
The first steps of RNA-seq also include similar image processing; however, conversion of images to sequence data is typically handled automatically by the instrument software. The Illumina sequencing-by-synthesis method results in an array of clusters distributed over the surface of a flow cell. The flow cell is imaged up to four times during each sequencing cycle, with tens to hundreds of cycles in total. Flow cell clusters are analogous to microarray spots and must be correctly identified during the early stages of the sequencing process. In Roche’s pyrosequencing method, the intensity of emitted light determines the number of consecutive nucleotides in a homopolymer repeat. There are many variants on these methods, each with a different error profile for the resulting data.
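As a toy illustration of the pyrosequencing read-out mentioned above, the sketch below converts a series of per-flow light intensities into a called sequence: each flow probes one nucleotide, and the rounded signal gives the homopolymer length incorporated during that flow. The flow order and signal values are invented for the example, and real base callers model noise and signal droop far more carefully.

FLOW_ORDER = ["T", "A", "C", "G"]  # assumed cyclic nucleotide flow order

def call_bases(flow_signals, flow_order=FLOW_ORDER):
    # the light signal for a flow is roughly proportional to the number of
    # consecutive identical bases incorporated during that flow
    sequence = []
    for i, signal in enumerate(flow_signals):
        homopolymer_length = int(round(signal))
        sequence.append(flow_order[i % len(flow_order)] * homopolymer_length)
    return "".join(sequence)

# toy signals corresponding to 1 T, 0 A, 2 C, 1 G, 0 T, 3 A
print(call_bases([1.05, 0.08, 2.1, 0.97, 0.02, 2.9]))  # TCCGAAA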
RNA-Seq data analysis
RNA-Seq experiments generate a large volume of raw sequence reads which have to be processed to yield useful information. Data analysis usually requires a combination of bioinformatics software tools (see also List of RNA-Seq bioinformatics tools) that vary according to the experimental design and goals. The process can be broken down into four stages: quality control, alignment, quantification, and differential expression. Most popular RNA-Seq programs are run from a command-line interface, either in a Unix environment or within the R/Bioconductor statistical environment.
Quality control
Sequence reads are not perfect, so the accuracy of each base in the sequence needs to be estimated for downstream analyses. Raw data is examined to ensure: quality scores for base calls are high, the GC content matches the expected distribution, short sequence motifs (k-mers) are not over-represented, and the read duplication rate is acceptably low. Several software options exist for sequence quality analysis, including FastQC and FaQCs. Abnormalities may be removed (trimming) or tagged for special treatment during later processes.
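A minimal version of the checks listed above can be computed directly from FASTQ records: mean base quality from the Phred-encoded quality string, GC content, and the fraction of exact duplicate reads. The quality-score offset of 33 is the common Sanger/Illumina 1.8+ convention; the reads themselves are invented, and dedicated tools such as FastQC report these and many more metrics per position.

from collections import Counter

def mean_quality(qual_string, offset=33):
    # mean Phred quality of one read, assuming Sanger (offset 33) encoding
    return sum(ord(c) - offset for c in qual_string) / len(qual_string)

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def duplication_rate(seqs):
    counts = Counter(seqs)
    duplicates = sum(n - 1 for n in counts.values())
    return duplicates / len(seqs)

# toy reads as (sequence, quality string); 'I' encodes Q40 and '!' encodes Q0
reads = [("ACGTACGTAC", "IIIIIIIIII"),
         ("ACGTACGTAC", "IIIIIIIIII"),   # exact duplicate of the first read
         ("GGGGCCCCAT", "!!!!IIIIII")]   # first four bases are very low quality
seqs = [s for s, _ in reads]
print(round(mean_quality(reads[2][1]), 1))  # 24.0
print(round(gc_content(seqs[2]), 2))        # 0.8
print(round(duplication_rate(seqs), 2))     # 0.33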
Alignment
In order to link sequence read abundance to the expression of a particular gene, transcript sequences are aligned to a reference genome or de novo aligned to one another if no reference is available. The key challenges for alignment software include sufficient speed to permit billions of short sequences to be aligned in a meaningful timeframe, flexibility to recognise and deal with intron splicing of eukaryotic mRNA, and correct assignment of reads that map to multiple locations. Software advances have greatly addressed these issues, and increases in sequencing read length reduce the chance of ambiguous read alignments. A list of currently available high-throughput sequence aligners is maintained by the EBI.
Alignment of primary transcript mRNA sequences derived from eukaryotes to a reference genome requires specialised handling of intron sequences, which are absent from mature mRNA. Short read aligners perform an additional round of alignments specifically designed to identify splice junctions, informed by canonical splice site sequences and known intron splice site information. Identification of intron splice junctions prevents reads from being misaligned across splice junctions or erroneously discarded, allowing more reads to be aligned to the reference genome and improving the accuracy of gene expression estimates. Since gene regulation may occur at the mRNA isoform level, splice-aware alignments also permit detection of isoform abundance changes that would otherwise be lost in a bulked analysis.
De novo assembly can be used to align reads to one another to construct full-length transcript sequences without use of a reference genome. Challenges particular to de novo assembly include larger computational requirements compared to a reference-based transcriptome, additional validation of gene variants or fragments, and additional annotation of assembled transcripts. The first metrics used to describe transcriptome assemblies, such as N50, have been shown to be misleading and improved evaluation methods are now available. Annotation-based metrics are better assessments of assembly completeness, such as contig reciprocal best hit count. Once assembled de novo, the assembly can be used as a reference for subsequent sequence alignment methods and quantitative gene expression analysis.
Legend: RAM – random access memory; MPI – message passing interface; EST – expressed sequence tag.
Quantification
Quantification of sequence alignments may be performed at the gene, exon, or transcript level. Typical outputs include a table of read counts for each feature supplied to the software; for example, for genes in a general feature format file. Gene and exon read counts may be calculated quite easily using HTSeq, for example. Quantitation at the transcript level is more complicated and requires probabilistic methods to estimate transcript isoform abundance from short read information; for example, using cufflinks software. Reads that align equally well to multiple locations must be identified and either removed, aligned to one of the possible locations, or aligned to the most probable location.
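The simplest gene-level counting scheme described above can be sketched as an interval-overlap test: each aligned read is checked against a table of gene coordinates and the matching gene's counter is incremented, with ambiguous reads discarded. The gene model and read positions below are invented, and real counters such as HTSeq additionally handle strandedness, CIGAR strings, and configurable overlap rules.

from collections import Counter

# hypothetical gene model: gene -> (chromosome, start, end), 0-based half-open
GENES = {"geneA": ("chr1", 100, 500),
         "geneB": ("chr1", 450, 900),
         "geneC": ("chr2", 0, 300)}

def assign_read(chrom, pos, read_len, genes=GENES):
    # return the single overlapping gene, or None if zero or several match
    hits = [g for g, (c, s, e) in genes.items()
            if c == chrom and pos < e and pos + read_len > s]
    return hits[0] if len(hits) == 1 else None

def count_reads(alignments, read_len=100):
    counts = Counter()
    for chrom, pos in alignments:
        gene = assign_read(chrom, pos, read_len)
        if gene is not None:
            counts[gene] += 1
    return counts

# toy alignments: (chromosome, leftmost mapping position); the read at 470
# overlaps two genes and is therefore discarded as ambiguous
alignments = [("chr1", 120), ("chr1", 470), ("chr1", 600), ("chr2", 50)]
print(count_reads(alignments))  # one read each for geneA, geneB and geneC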
Some quantification methods can circumvent the need for an exact alignment of a read to a reference sequence altogether. The kallisto software method combines pseudoalignment and quantification into a single step that runs 2 orders of magnitude faster than contemporary methods such as those used by tophat/cufflinks software, with less computational burden.
Differential expression
Once quantitative counts of each transcript are available, differential gene expression is measured by normalising, modelling, and statistically analysing the data. Most tools will read a table of genes and read counts as their input, but some programs, such as cuffdiff, will accept binary alignment map format read alignments as input. The final outputs of these analyses are gene lists with associated pair-wise tests for differential expression between treatments and the probability estimates of those differences.
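A bare-bones version of the normalise-then-test workflow can be written with counts per million (CPM) as the normalisation and a two-sample t-test on log-CPM values as the statistic. The counts and group sizes below are invented, and dedicated tools use count-based models with dispersion estimation and multiple-testing correction rather than this plain t-test; the example only shows where normalisation, fold change, and the statistical test sit in the workflow.

import math
from scipy.stats import ttest_ind

def log_cpm(counts, pseudocount=1.0):
    # counts-per-million normalisation of one sample (dict gene -> raw count),
    # returned on a log2 scale with a pseudocount to avoid log(0)
    total = sum(counts.values())
    return {g: math.log2(c * 1_000_000 / total + pseudocount)
            for g, c in counts.items()}

# toy experiment: three control and three treated samples
control = [{"geneA": 100, "geneB": 5000}, {"geneA": 120, "geneB": 4800},
           {"geneA": 90,  "geneB": 5200}]
treated = [{"geneA": 400, "geneB": 5100}, {"geneA": 380, "geneB": 4900},
           {"geneA": 420, "geneB": 5050}]

for gene in ["geneA", "geneB"]:
    a = [log_cpm(s)[gene] for s in control]
    b = [log_cpm(s)[gene] for s in treated]
    result = ttest_ind(b, a)
    log2_fold_change = sum(b) / len(b) - sum(a) / len(a)
    print(gene, round(log2_fold_change, 2), round(result.pvalue, 4))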
Legend: mRNA - messenger RNA.
Validation
Transcriptomic analyses may be validated using an independent technique, for example, quantitative PCR (qPCR), which is recognisable and statistically assessable. Gene expression is measured against defined standards both for the gene of interest and control genes. The measurement by qPCR is similar to that obtained by RNA-Seq wherein a value can be calculated for the concentration of a target region in a given sample. qPCR is, however, restricted to amplicons smaller than 300 bp, usually toward the 3’ end of the coding region, avoiding the 3’UTR. If validation of transcript isoforms is required, an inspection of RNA-Seq read alignments should indicate where qPCR primers might be placed for maximum discrimination. The measurement of multiple control genes along with the genes of interest produces a stable reference within a biological context. qPCR validation of RNA-Seq data has generally shown that different RNA-Seq methods are highly correlated.
Functional validation of key genes is an important consideration for post transcriptome planning. Observed gene expression patterns may be functionally linked to a phenotype by an independent knock-down/rescue study in the organism of interest.
Applications
Diagnostics and disease profiling
Transcriptomic strategies have seen broad application across diverse areas of biomedical research, including disease diagnosis and profiling. RNA-Seq approaches have allowed for the large-scale identification of transcriptional start sites, uncovered alternative promoter usage, and novel splicing alterations. These regulatory elements are important in human disease and, therefore, defining such variants is crucial to the interpretation of disease-association studies. RNA-Seq can also identify disease-associated single nucleotide polymorphisms (SNPs), allele-specific expression, and gene fusions, which contributes to the understanding of disease causal variants.
Retrotransposons are transposable elements which proliferate within eukaryotic genomes through a process involving reverse transcription. RNA-Seq can provide information about the transcription of endogenous retrotransposons that may influence the transcription of neighboring genes by various epigenetic mechanisms that lead to disease. Similarly, the potential for using RNA-Seq to understand immune-related disease is expanding rapidly due to the ability to dissect immune cell populations and to sequence T cell and B cell receptor repertoires from patients.
Human and pathogen transcriptomes
RNA-Seq of human pathogens has become an established method for quantifying gene expression changes, identifying novel virulence factors, predicting antibiotic resistance, and unveiling host-pathogen immune interactions. A primary aim of this technology is to develop optimised infection control measures and targeted individualised treatment.
Transcriptomic analysis has predominantly focused on either the host or the pathogen. Dual RNA-Seq has been applied to simultaneously profile RNA expression in both the pathogen and host throughout the infection process. This technique enables the study of the dynamic response and interspecies gene regulatory networks in both interaction partners from initial contact through to invasion and the final persistence of the pathogen or clearance by the host immune system.
Responses to environment
Transcriptomics allows identification of genes and pathways that respond to and counteract biotic and abiotic environmental stresses. The non-targeted nature of transcriptomics allows the identification of novel transcriptional networks in complex systems. For example, comparative analysis of a range of chickpea lines at different developmental stages identified distinct transcriptional profiles associated with drought and salinity stresses, including identifying the role of transcript isoforms of AP2-EREBP. Investigation of gene expression during biofilm formation by the fungal pathogen Candida albicans revealed a co-regulated set of genes critical for biofilm establishment and maintenance.
Transcriptomic profiling also provides crucial information on mechanisms of drug resistance. Analysis of over 1000 isolates of Plasmodium falciparum, a virulent parasite responsible for malaria in humans, identified that upregulation of the unfolded protein response and slower progression through the early stages of the asexual intraerythrocytic developmental cycle were associated with artemisinin resistance in isolates from Southeast Asia.
The use of transcriptomics is also important to investigate responses in the marine environment. In marine ecology, "stress" and "adaptation" have been among the most common research topics, especially related to anthropogenic stress, such as global change and pollution. Most of the studies in this area have been done in animals, although invertebrates remain underrepresented. One remaining issue is the deficiency of functional genetic studies, which hampers gene annotation, especially for non-model species, and can lead to vague conclusions about the effects of the responses studied.
Gene function annotation
All transcriptomic techniques have been particularly useful in identifying the functions of genes and identifying those responsible for particular phenotypes. Transcriptomics of Arabidopsis ecotypes that hyperaccumulate metals correlated genes involved in metal uptake, tolerance, and homeostasis with the phenotype. Integration of RNA-Seq datasets across different tissues has been used to improve annotation of gene functions in commercially important organisms (e.g. cucumber) or threatened species (e.g. koala).
Assembly of RNA-Seq reads is not dependent on a reference genome and so is ideal for gene expression studies of non-model organisms with non-existing or poorly developed genomic resources. For example, a database of SNPs used in Douglas fir breeding programs was created by de novo transcriptome analysis in the absence of a sequenced genome. Similarly, genes that function in the development of cardiac, muscle, and nervous tissue in lobsters were identified by comparing the transcriptomes of the various tissue types without use of a genome sequence. RNA-Seq can also be used to identify previously unknown protein coding regions in existing sequenced genomes.
Non-coding RNA
Transcriptomics is most commonly applied to the mRNA content of the cell. However, the same techniques are equally applicable to non-coding RNAs (ncRNAs) that are not translated into a protein, but instead have direct functions (e.g. roles in protein translation, DNA replication, RNA splicing, and transcriptional regulation). Many of these ncRNAs affect disease states, including cancer, cardiovascular, and neurological diseases.
Transcriptome databases
Transcriptomics studies generate large amounts of data that have potential applications far beyond the original aims of an experiment. As such, raw or processed data may be deposited in public databases to ensure their utility for the broader scientific community. For example, as of 2018, the Gene Expression Omnibus contained millions of experiments.
Legend: NCBI – National Center for Biotechnology Information; EBI – European Bioinformatics Institute; DDBJ – DNA Data Bank of Japan; ENA – European Nucleotide Archive; MIAME – Minimum Information About a Microarray Experiment; MINSEQE – Minimum Information about a high-throughput nucleotide SEQuencing Experiment.
See also
omics
Genomics
Proteomics
Metabolomics
Interactomics
References
Notes
Further reading
Comparative Transcriptomics Analysis in Reference Module in Life Sciences
Software used in transcriptomics:
cufflinks
kallisto
tophat
Omics
Molecular biology | Transcriptomics technologies | Chemistry,Biology | 6,859 |
324,997 | https://en.wikipedia.org/wiki/Radiological%20warfare | Radiological warfare is any form of warfare involving deliberate radiation poisoning or contamination of an area with radiological sources.
Radiological weapons are normally classified as weapons of mass destruction (WMDs), although radiological weapons can also be specific in whom they target, such as the radiation poisoning of Alexander Litvinenko by the Russian FSB, using radioactive polonium-210.
Numerous countries have expressed an interest in radiological weapons programs, several have actively pursued them, and three have performed radiological weapons tests.
Salted nuclear weapons
A salted bomb is a nuclear weapon that is equipped with a large quantity of radiologically inert salting material. The radiological warfare agents are produced through neutron capture by the salting materials of the neutron radiation emitted by the nuclear weapon. This avoids the problems of having to stockpile the highly radioactive material, as it is produced when the bomb explodes. The result is a more intense fallout than from regular nuclear weapons and can render an area uninhabitable for a long period.
The cobalt bomb is an example of a radiological warfare weapon, in which cobalt-59 is converted to cobalt-60 by neutron capture. Initially, the gamma radiation of the nuclear fission products from an equivalent-sized "clean" fission-fusion-fission bomb (assuming equal amounts of radioactive dust particles are generated) is much more intense than that of cobalt-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter the fission products decay rapidly, so that cobalt-60 fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the cobalt-60 again after about 75 years.
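The relative-intensity figures above rest on simple decay arithmetic. A minimal sketch is shown below for the cobalt-60 side only: it computes the fraction of the initial cobalt-60 activity remaining at the time points mentioned, using the isotope's roughly 5.27-year half-life. Comparing this curve against fission-product fallout would additionally require a decay model for the mixed fission products (often approximated by an empirical t^-1.2 rule), which is not implemented here.

import math

CO60_HALF_LIFE_YEARS = 5.27  # approximate half-life of cobalt-60

def fraction_remaining(t_years, half_life=CO60_HALF_LIFE_YEARS):
    # exponential decay: N(t)/N(0) = exp(-ln(2) * t / half-life)
    return math.exp(-math.log(2) * t_years / half_life)

for label, t in [("1 hour", 1 / 8766), ("1 week", 7 / 365.25),
                 ("1 month", 1 / 12), ("6 months", 0.5),
                 ("1 year", 1.0), ("5 years", 5.0)]:
    print(label, round(fraction_remaining(t), 3))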
Other salted bomb variants that do not use cobalt have also been theorized. One example is salting with sodium-23, which transmutes to sodium-24; its 15-hour half-life results in intense but short-lived radiation.
Surface-burst nuclear weapons
An air burst is preferred if the effects of thermal radiation and the blast wave are to be maximized over an area (i.e., the area covered by direct line of sight with sufficient luminosity to cause burning, and the area of Mach stem formation, respectively). Both fission and fusion weapons irradiate the detonation site with neutron radiation, causing neutron activation of the material there. Fission bombs also contribute radioactive bomb-material residue. Air does not form isotopes useful for radiological warfare when neutron-activated. By detonating the weapon at or near the surface instead, the ground is vaporized and becomes radioactive, and when it cools down and condenses into particles it causes significant fallout.
Dirty bombs
A far lower-tech radiological weapon than those discussed above is a "dirty bomb" or radiological dispersal device, whose purpose is to disperse radioactive dust over an area. The release of radioactive material may involve no special "weapon" or accompanying blast, and may kill no one directly through its radiation source; rather, it could make whole areas or structures unusable or unfavorable for the support of human life. The radioactive material may be dispersed slowly over a large area, and it can be difficult for the victims to initially know that such a radiological attack is being carried out, especially if detectors for radioactivity are not installed beforehand.
Radiological warfare with dirty bombs could be used for nuclear terrorism, spreading or intensifying fear. In relation to these weapons, nation states can also spread rumor, disinformation and fear.
In July 2023, both Ukraine and Russia blamed each other for preparing to bomb the Zaporizhzhia nuclear power plant in Ukraine, in order to use the nuclear reactors as dirty bombs.
See also
Acute radiation syndrome
Area denial weapons
Depleted uranium
Neutron bomb
Nuclear detection
Nuclear warfare
Operation Peppermint
Scorched earth and "Salting the earth"
Yasser Arafat § Theories about the cause of death
Further reading
Kirby, R. (2020) Radiological Weapons: America's Cold War Experience.
References
External links
Radiological Weapons as Means of Attack. Anthony H. Cordesman
Radiological-weapons threats: case studies from the extreme right. BreAnne K. Fleer, 2020; The Nonproliferation Review
Radiobiology
Warfare by type
Nuclear terrorism
Radiological weapons | Radiological warfare | Chemistry,Biology | 897 |
1,812,809 | https://en.wikipedia.org/wiki/Pseudorandom%20generator | In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution.
Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size.
It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory.
Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions.
Definition
Let F be a class of functions f: {0,1}^n → {0,1}^*.
These functions are the statistical tests that the pseudorandom generator will try to fool, and they are usually algorithms.
Sometimes the statistical tests are also called adversaries or distinguishers. The notation {0,1}^* in the codomain of the functions is the Kleene star.
A function G: {0,1}^ℓ → {0,1}^n with ℓ ≤ n is a pseudorandom generator against F with bias ε if, for every f in F, the statistical distance between the distributions f(G(U_ℓ)) and f(U_n) is at most ε, where U_k denotes the uniform distribution on {0,1}^k.
The quantity ℓ is called the seed length and the quantity n − ℓ is called the stretch of the pseudorandom generator.
A pseudorandom generator against a family of adversaries F = {F_n} with bias ε(n) is a family of pseudorandom generators {G_n}, where G_n is a pseudorandom generator against F_n with bias ε(n) and seed length ℓ(n).
In most applications, the family F represents some model of computation or some set of algorithms, and one is interested in designing a pseudorandom generator with small seed length and bias, and such that the output of the generator can be computed by the same sort of algorithm.
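To make these quantities concrete, the sketch below measures the bias of a deliberately weak toy generator against a single statistical test by enumerating all seeds and all outputs exhaustively. For a test with a single output bit, the statistical distance reduces to the difference of two acceptance probabilities. The generator, the test, and the parameters are invented purely to illustrate the definition; they carry no cryptographic or complexity-theoretic meaning.

from itertools import product

def all_strings(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

def toy_generator(seed):
    # a weak "generator" stretching 3-bit seeds to 6-bit strings (seed repeated)
    return seed + seed

def parity_test(x):
    # single-bit statistical test: accept strings with even parity
    return x.count("1") % 2 == 0

def bias(generator, test, seed_len, out_len):
    # acceptance probability on generator output over a uniform random seed
    p_gen = sum(test(generator(s)) for s in all_strings(seed_len)) / 2 ** seed_len
    # acceptance probability on a truly uniform output string
    p_uniform = sum(test(x) for x in all_strings(out_len)) / 2 ** out_len
    return abs(p_gen - p_uniform)

# the parity test distinguishes with bias 0.5: every repeated string has even
# parity, while only half of all 6-bit strings do
print(bias(toy_generator, parity_test, seed_len=3, out_len=6))  # 0.5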
In cryptography
In cryptography, the class usually consists of all circuits of size polynomial in the input and with a single bit output, and one is interested in designing pseudorandom generators that are computable by a polynomial-time algorithm and whose bias is negligible in the circuit size.
These pseudorandom generators are sometimes called cryptographically secure pseudorandom generators (CSPRGs).
It is not known if cryptographically secure pseudorandom generators exist.
Proving that they exist is difficult since their existence implies P ≠ NP, which is widely believed but a famously open problem.
The existence of cryptographically secure pseudorandom generators is widely believed. This is because it has been proven that pseudorandom generators can be constructed from any one-way function, and one-way functions are believed to exist. Pseudorandom generators are necessary for many applications in cryptography.
The pseudorandom generator theorem shows that cryptographically secure pseudorandom generators exist if and only if one-way functions exist.
Uses
Pseudorandom generators have numerous applications in cryptography. For instance, pseudorandom generators provide an efficient analog of one-time pads. It is well known that in order to encrypt a message m in a way that the cipher text provides no information on the plaintext, the key k used must be random over strings of length |m|. Perfectly secure encryption is very costly in terms of key length. Key length can be significantly reduced using a pseudorandom generator if perfect security is replaced by semantic security. Common constructions of stream ciphers are based on pseudorandom generators.
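The sketch below shows the shape of that construction: a short key is expanded by a pseudorandom generator into a keystream, which is XORed with the message exactly as a one-time pad would be. Python's random module is used here only as a stand-in expander and is not cryptographically secure, so this must not be used for real encryption; a genuine stream cipher substitutes a cryptographically secure generator.

import random

def keystream(key, length):
    # expand a short integer key into `length` pseudorandom bytes;
    # random.Random is NOT a secure PRG and is used only to show the structure
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(length))

def xor_with_keystream(key, data):
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

message = b"attack at dawn"
key = 424242
ciphertext = xor_with_keystream(key, message)
recovered = xor_with_keystream(key, ciphertext)  # XOR with the same keystream inverts it
print(ciphertext.hex())
print(recovered)  # b'attack at dawn'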
Pseudorandom generators may also be used to construct symmetric key cryptosystems, where a large number of messages can be safely encrypted under the same key. Such a construction can be based on a pseudorandom function family, which generalizes the notion of a pseudorandom generator.
In the 1980s, simulations in physics began to use pseudorandom generators to produce sequences with billions of elements, and by the late 1980s, evidence had developed that a few common generators gave incorrect results in such cases as phase transition properties of the 3D Ising model and shapes of diffusion-limited aggregates. Then in the 1990s, various idealizations of physics simulations—based on random walks, correlation functions, localization of eigenstates, etc., were used as tests of pseudorandom generators.
Testing
NIST published the SP800-22 randomness tests to check whether a pseudorandom generator produces high-quality random bits. Yongge Wang showed that NIST testing is not sufficient to detect weak pseudorandom generators and developed the statistical-distance-based testing technique LILtest.
For derandomization
A main application of pseudorandom generators lies in the derandomization of computation that relies on randomness, without corrupting the result of the computation.
Physical computers are deterministic machines, and obtaining true randomness can be a challenge.
Pseudorandom generators can be used to efficiently simulate randomized algorithms using little or no randomness.
In such applications, the class F describes the randomized algorithm or class of randomized algorithms that one wants to simulate, and the goal is to design an "efficiently computable" pseudorandom generator against F whose seed length is as short as possible.
If a full derandomization is desired, a completely deterministic simulation proceeds by replacing the random input to the randomized algorithm with the pseudorandom string produced by the pseudorandom generator.
The simulation does this for all possible seeds and averages the output of the various runs of the randomized algorithm in a suitable way.
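The fully deterministic simulation just described can be phrased in a few lines: run the randomized algorithm once for every possible seed, feed it the generator's output in place of fresh random bits, and take a majority vote over the runs. The generator and the stand-in randomized algorithm below are toys chosen only so the loop structure is visible; the point of a short seed is that this enumeration over all seeds stays affordable.

from itertools import product

def toy_prg(seed_bits):
    # toy generator: stretch a short seed by repeating it (illustration only)
    return (seed_bits * 4)[:4 * len(seed_bits)]

def randomized_algorithm(x, random_bits):
    # stand-in for a randomized decision procedure whose answer may depend on
    # both the input and the random coins it is given
    return (x + sum(random_bits)) % 2 == 0

def derandomize(x, seed_len=4):
    votes = 0
    runs = 0
    for seed in product((0, 1), repeat=seed_len):   # enumerate every seed
        pseudorandom_coins = toy_prg(list(seed))    # expand the seed
        votes += randomized_algorithm(x, pseudorandom_coins)
        runs += 1
    return votes * 2 > runs                         # majority vote

print(derandomize(6))  # True
print(derandomize(7))  # False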
Constructions
For polynomial time
A fundamental question in computational complexity theory is whether all polynomial time randomized algorithms for decision problems can be deterministically simulated in polynomial time. The existence of such a simulation would imply that BPP = P. To perform such a simulation, it is sufficient to construct pseudorandom generators against the family F of all circuits of size s(n) whose inputs have length n and output a single bit, where s(n) is an arbitrary polynomial, the seed length of the pseudorandom generator is O(log n) and its bias is ⅓.
In 1991, Noam Nisan and Avi Wigderson provided a candidate pseudorandom generator with these properties. In 1997 Russell Impagliazzo and Avi Wigderson proved that the construction of Nisan and Wigderson is a pseudorandom generator assuming that there exists a decision problem that can be computed in time 2O(n) on inputs of length n but requires circuits of size 2Ω(n).
For logarithmic space
While unproven assumptions about circuit complexity are needed to prove that the Nisan–Wigderson generator works for time-bounded machines, it is natural to restrict the class of statistical tests further so that we need not rely on such unproven assumptions.
One class for which this has been done is the class of machines whose work space is bounded by O(log n).
Using a repeated squaring trick known as Savitch's theorem, it is easy to show that every probabilistic log-space computation can be simulated in space O(log^2 n).
Noam Nisan (1992) showed that this derandomization can actually be achieved with a pseudorandom generator of seed length O(log^2 n) that fools all O(log n)-space machines.
Nisan's generator has been used by Saks and Zhou (1999) to show that probabilistic log-space computation can be simulated deterministically in space O(log^(3/2) n).
This result was improved by William Hoza in 2021 to space O(log^(3/2) n / √(log log n)).
For linear functions
When the statistical tests consist of all multivariate linear functions over some finite field F, one speaks of epsilon-biased generators.
Known constructions achieve a seed length of O(log(n/ε)), which is optimal up to constant factors.
Pseudorandom generators for linear functions often serve as a building block for more complicated pseudorandom generators.
For polynomials
It has been shown that taking the sum of d independent small-bias generators fools polynomials of degree d.
The seed length remains logarithmic in n but grows exponentially with the degree d.
For constant-depth circuits
Pseudorandom generators have also been constructed for constant-depth circuits that produce a single output bit.
Limitations on provability
The pseudorandom generators used in cryptography and universal algorithmic derandomization have not been proven to exist, although their existence is widely believed. Proofs for their existence would imply proofs of lower bounds on the circuit complexity of certain explicit functions. Such circuit lower bounds cannot be proved in the framework of natural proofs assuming the existence of stronger variants of cryptographic pseudorandom generators.
References
Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach, Cambridge University Press (2009), .
Oded Goldreich, Computational Complexity: A Conceptual Perspective, Cambridge University Press (2008), .
Oded Goldreich, Foundations of Cryptography: Basic Tools, Cambridge University Press (2001), .
Algorithmic information theory
Pseudorandomness
Cryptography | Pseudorandom generator | Mathematics,Engineering | 1,791 |
20,695,220 | https://en.wikipedia.org/wiki/AMOLED | AMOLED (active-matrix organic light-emitting diode) is a type of OLED display device technology. OLED describes a specific type of thin-film-display technology in which organic compounds form the electroluminescent material, and active matrix refers to the technology behind the addressing of pixels.
Since 2007, AMOLED technology has been used in mobile phones, media players, TVs and digital cameras, and it has continued to make progress toward low-power, low-cost, high resolution and large size (for example, 88-inch and 8K resolution) applications.
Design
An AMOLED display consists of an active matrix of OLED pixels generating light (luminescence) upon electrical activation that have been deposited or integrated onto a thin-film transistor (TFT) array, which functions as a series of switches to control the current flowing to each individual pixel.
Typically, this continuous current flow is controlled by at least two TFTs at each pixel (to trigger the luminescence), with one TFT to start and stop the charging of a storage capacitor and the second to provide a voltage source at the level needed to create a constant current to the pixel, thereby eliminating the need for the very high currents required for passive-matrix OLED operation.
TFT backplane technology is crucial to the fabrication of AMOLED displays. The two primary TFT backplane technologies currently used in AMOLEDs, polycrystalline silicon (poly-Si) and amorphous silicon (a-Si), offer the potential for fabricating the active-matrix backplanes directly onto flexible plastic substrates at low temperatures (below 150 °C), enabling flexible AMOLED displays.
History
AMOLED was developed in 2006. Samsung SDI was one of the main investors in the technology, and many other display companies were also developing it. One of the earliest consumer electronics products with an AMOLED display was the BenQ-Siemens S88 mobile handset and, in 2007, the iriver Clix 2 portable media player. In 2008 it appeared on the Nokia N85 followed by the Samsung i7110 - both Nokia and Samsung Electronics were early adopters of this technology on their smartphones.
Future development
Manufacturers have developed in-cell touch panels, integrating the production of capacitive sensor arrays in the AMOLED module fabrication process. In-cell sensor AMOLED fabricators include AU Optronics and Samsung. Samsung has marketed its version of this technology as "Super AMOLED". Researchers at DuPont used computational fluid dynamics (CFD) software to optimize coating processes for a new solution-coated AMOLED display technology that is competitive in cost and performance with existing chemical vapor deposition (CVD) technology. Using custom modeling and analytic approaches, Samsung has developed short and long-range film-thickness control and uniformity that is commercially viable at large glass sizes.
Comparison to other display technologies
Compared to other display technologies, AMOLED screens have several advantages and disadvantages.
AMOLED displays can provide higher refresh rates than passive-matrix OLEDs, often have response times of less than a millisecond, and consume significantly less power. This advantage makes active-matrix OLEDs well suited for portable electronics, where power consumption is critical to battery life.
The amount of power the display consumes varies significantly depending on the color and brightness shown. As an example, one old QVGA OLED display consumes 0.3 watts while showing white text on a black background, but more than 0.7 watts showing black text on a white background, while an LCD may consume only a constant 0.35 watts regardless of what is being shown on screen.
A new FHD+ or WQHD+ display will consume much more.
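The content dependence described above can be approximated by treating each OLED subpixel's power draw as roughly proportional to its drive level, whereas an LCD backlight burns a constant amount regardless of the image. The per-subpixel coefficient below is an invented constant chosen only to put the toy numbers in a plausible range; it is not a measured value for any particular panel.

def oled_power_watts(pixels, watts_per_subpixel=3e-6):
    # very rough OLED panel power: proportional to summed subpixel drive (0.0-1.0)
    return sum(r + g + b for r, g, b in pixels) * watts_per_subpixel

LCD_POWER_WATTS = 0.35          # constant backlight figure quoted in the text

W, H = 320, 240                 # QVGA, matching the old display in the example
white_page = [(1.0, 1.0, 1.0)] * (W * H)
dark_page = ([(1.0, 1.0, 1.0)] * (W * H // 10)       # roughly 10% of pixels lit
             + [(0.0, 0.0, 0.0)] * (W * H - W * H // 10))

print(round(oled_power_watts(white_page), 2))  # ~0.69 W, worse than the LCD
print(round(oled_power_watts(dark_page), 2))   # ~0.07 W, far better than the LCD
print(LCD_POWER_WATTS)                         # constant, content-independent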
Because the black pixels turn completely off, AMOLED also has contrast ratios that are significantly higher than LCDs.
AMOLED displays may be difficult to view in direct sunlight compared with LCDs because of their reduced maximum brightness. Samsung's Super AMOLED technology addresses this issue by reducing the size of gaps between layers of the screen. Additionally, PenTile technology is often used for a higher resolution display while requiring fewer subpixels than needed otherwise, sometimes resulting in a display less sharp and more grainy than a non-PenTile display with the same resolution.
The organic materials used in AMOLED displays are very prone to degradation over a relatively short period of time, resulting in color shifts as one color fades faster than another, image persistence, or burn-in.
Many flagship smartphones sold in 2020 and 2021 used Super AMOLED displays. Super AMOLED displays, such as those on the Samsung Galaxy S21+ / S21 Ultra and Samsung Galaxy Note 20 Ultra, have often been compared to the IPS LCDs found in phones such as the Xiaomi Mi 10T, Huawei Nova 5T, and Samsung Galaxy A20e. For example, according to ABI Research, the AMOLED display in the Motorola Moto X draws just 92 mA during bright conditions and 68 mA while dim. On the other hand, compared with IPS LCDs, the yield rate of AMOLED panels is lower and their cost is higher.
Marketing terms
Super AMOLED
"Super AMOLED" is a marketing term created by Samsung for an AMOLED display with an integrated touch screen digitizer: the layer that detects touch is integrated into the display, rather than overlaid on top of it and cannot be separated from the display itself. Super AMOLED is a more advanced version and it integrates touch-sensors and the actual screen in a single layer. When compared with a regular LCD display an AMOLED display consumes less power, provides more vivid picture quality, and renders faster motion response as compared to other display technologies such as LCD. However, Super AMOLED is even better at this with 20% brighter screen, 20% lower power consumption and 80% less sunlight reflection. According to Samsung, Super AMOLED reflects one-fifth as much sunlight as the first generation AMOLED. The generic term for this technology is One Glass Solution (OGS).
Comparison
Below is a mapping table of marketing terms versus resolutions and sub-pixel types. Note how the pixel density relates to choices of sub-pixel type.
Future
Prototype displays exhibited by Samsung from 2011 to 2013 included flexible, 3D, and transparent Super AMOLED Plus displays at very high resolutions and in varying sizes for phones. These unreleased prototypes use a polymer substrate, removing the need for a glass cover, a metal backing, and a separate touch matrix by combining them into one integrated layer.
So far, Samsung plans on branding the newer displays as Youm, or y-octa.
Also planned for the future are 3D stereoscopic displays that use eye-tracking (via stereoscopic front-facing cameras) to provide full resolution 3D visuals.
See also
List of flat panel display manufacturers
microLED
OLED
References
External links
Mobile phones
Conductive polymers
Display technology
Molecular electronics
Optical diodes
Organic electronics | AMOLED | Chemistry,Materials_science,Engineering | 1,447 |
1,302,888 | https://en.wikipedia.org/wiki/BESS%20%28experiment%29 | BESS is a particle physics experiment carried by a balloon. BESS stands for Balloon-borne Experiment with Superconducting Spectrometer.
See also
BOOMERanG experiment
References
External links
BESS webpage on the NASA website
High energy particle telescopes
Cosmic-ray experiments
Balloon-borne experiments
Astronomical experiments in the Antarctic | BESS (experiment) | Physics,Astronomy | 65 |
6,375,012 | https://en.wikipedia.org/wiki/454%20Life%20Sciences | 454 Life Sciences was a biotechnology company based in Branford, Connecticut that specialized in high-throughput DNA sequencing. It was acquired by Roche in 2007 and shut down by Roche in 2013 when its technology became noncompetitive, although production continued until mid-2016.
History
454 Life Sciences was founded by Jonathan Rothberg and was originally known as 454 Corporation, a subsidiary of CuraGen. For their method for low-cost gene sequencing, 454 Life Sciences was awarded the Wall Street Journal's Gold Medal for Innovation in the Biotech-Medical category in 2005. The name 454 was the code name by which the project was referred to at CuraGen, and the numbers have no known special meaning.
In November 2006, Rothberg, Michael Egholm, and colleagues at 454 published a cover article with Svante Pääbo in Nature describing the first million base pairs of the Neanderthal genome, and initiated the Neanderthal Genome Project to complete the sequence of the Neanderthal genome by 2009.
In late March 2007, Roche Diagnostics acquired 454 Life Sciences for US$154.9 million. It remained a separate business unit.
In October 2013, Roche announced that it would shut down 454, and stop supporting the platform by mid-2016.
In May 2007, 454 published the results of Project "Jim": the sequencing of the genome of James Watson, co-discoverer of the structure of DNA.
Technology
454 Sequencing used a large-scale parallel pyrosequencing system capable of sequencing roughly 400-600 megabases of DNA per 10-hour run on the Genome Sequencer FLX with GS FLX Titanium series reagents.
The system relied on fixing nebulized and adapter-ligated DNA fragments to small DNA-capture beads in a water-in-oil emulsion. The DNA fixed to these beads was then amplified by PCR. Each DNA-bound bead was placed into a ~29 μm well on a PicoTiterPlate, a fiber optic chip. A mix of enzymes such as DNA polymerase, ATP sulfurylase, and luciferase was also packed into the well. The PicoTiterPlate was then placed into the GS FLX System for sequencing.
454 released the GS20 sequencing machine in 2005, the first next-generation DNA sequencer on the market. In 2008, 454 Sequencing launched the GS FLX Titanium series reagents for use on the Genome Sequencer FLX instrument, with the ability to sequence 400-600 million base pairs per run with 400-500 base pair read lengths. In late 2009, 454 Life Sciences introduced the GS Junior System, a bench top version of the Genome Sequencer FLX System.
DNA library preparation and emPCR
Genomic DNA was fractionated into smaller fragments (300-800 base pairs) and polished (made blunt at each end). Short adaptors were then ligated onto the ends of the fragments. These adaptors provided priming sequences for both amplification and sequencing of the sample-library fragments. One adaptor (Adaptor B) contained a 5'-biotin tag for immobilization of the DNA library onto streptavidin-coated beads. After nick repair, the non-biotinylated strand was released and used as a single-stranded template DNA (sstDNA) library. The sstDNA library was assessed for its quality, and the optimal amount (DNA copies per bead) needed for emPCR was determined by titration.
The sstDNA library was immobilized onto beads. Each bead containing a library fragment carried a single sstDNA molecule. The bead-bound library was emulsified with the amplification reagents in a water-in-oil mixture. Each bead was captured within its own microreactor, where PCR amplification occurred. This resulted in bead-immobilized, clonally amplified DNA fragments.
Sequencing
Single-stranded template DNA library beads were added to the DNA Bead Incubation Mix (containing DNA polymerase) and were layered with Enzyme Beads (containing sulfurylase and luciferase) onto a PicoTiterPlate device. The device was centrifuged to deposit the beads into the wells. The layer of Enzyme Beads ensured that the DNA beads remained positioned in the wells during the sequencing reaction. The bead-deposition process was designed to maximize the number of wells that contain a single amplified library bead.
The loaded PicoTiterPlate device was placed into the Genome Sequencer FLX Instrument. The fluidics sub-system delivered sequencing reagents (containing buffers and nucleotides) across the wells of the plate. The four DNA nucleotides were added sequentially in a fixed order across the PicoTiterPlate device during a sequencing run. During the nucleotide flow, millions of copies of DNA bound to each of the beads were sequenced in parallel. When a nucleotide complementary to the template strand was added into a well, the polymerase extended the existing DNA strand by adding nucleotide(s). Addition of one (or more) nucleotide(s) generated a light signal that was recorded by the CCD camera in the instrument. This technique, based on sequencing-by-synthesis, is called pyrosequencing. The signal strength was proportional to the number of nucleotides; for example, homopolymer stretches, incorporated in a single nucleotide flow, generated a greater signal than single nucleotides. However, the signal strength for homopolymer stretches was linear only up to eight consecutive nucleotides, after which the signal fell off rapidly. Data were stored in standard flowgram format (SFF) files for downstream analysis.
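The relationship between flows and called bases described above can be sketched in a few lines of Python. The flow order and the signal values below are invented for illustration only; real SFF files also carry read names, clipping information and per-base quality values that this sketch ignores.
# Minimal illustration of pyrosequencing base calling from a flowgram:
# nucleotides are flowed in a fixed, repeating order and the light signal of
# each flow is roughly proportional to the number of bases incorporated.
FLOW_ORDER = "TACG"  # one flow cycle, repeated for the whole run (assumed here)

def call_bases(flow_values):
    """flow_values: one light-signal intensity per nucleotide flow."""
    seq = []
    for i, signal in enumerate(flow_values):
        base = FLOW_ORDER[i % len(FLOW_ORDER)]
        count = int(round(signal))  # homopolymer length estimated from signal strength
        seq.append(base * count)
    return "".join(seq)

# Example: eight flows with made-up intensities.
print(call_bases([1.02, 0.0, 2.05, 1.1, 0.1, 0.95, 1.0, 0.0]))  # -> "TCCGAC"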
See also
DNA Sequencing
Notes
Defunct biotechnology companies of the United States
Defunct manufacturing companies based in Connecticut
Biotechnology companies established in 2000
Companies based in New Haven County, Connecticut
DNA sequencing
Genomics companies
Biotechnology companies disestablished in 2013
2000 establishments in Connecticut
2013 disestablishments in Connecticut | 454 Life Sciences | Chemistry,Biology | 1,252 |
75,209,749 | https://en.wikipedia.org/wiki/Double%20liner | A double liner is a fluid barrier system that incorporates two impermeable layers separated by a permeable drainage layer also called a leak detection layer. Typically the impermeable layers are made from geomembranes with a permeable layer in between. The uppermost layer is called the primary liner while the lower layer is called the secondary liner. This combination of layers is designed to prevent hydraulic head from building on the secondary liner, thereby limiting or preventing any permeation into the secondary liner. Due to the difficulty of constructing a single large scale impermeable layer without any defects, a double liner system is more robust, as it can deal with leakage through the primary liner. A double liner system is required by the United States EPA for landfill, surface impoundments, and waste piles.
History
The first double geomembrane liner system was designed by geosynthetics pioneer J.P. Giroud, and installed in 1974 in Le Pont-de-Claix, France to serve as a water reservoir; this is still in service today. This system was composed of an early form of a bituminous geomembrane as the secondary liner, gravel as the drainage layer, and a butyl rubber geomembrane as the primary liner.
References
Geosynthetics
Landfill | Double liner | Materials_science,Engineering | 272 |
360,876 | https://en.wikipedia.org/wiki/Supremacism | Supremacism is the belief that a certain group of people are superior to, and should have supreme authority over, all others. The presumed superior people can be defined by age, gender, race, ethnicity, religion, sexual orientation, language, social class, ideology, nationality, culture, generation or belong to any other part of a particular population.
Sexual
Male
Some feminist theorists have argued that in patriarchy, a standard of male "supremacism" is enforced through a variety of cultural, political, religious, sexual, and interpersonal strategies. Since the 19th century there have been a number of feminist movements opposed to male supremacism, usually aimed at achieving equal legal rights and protections for women in all cultural, political and interpersonal relations.
Female
Racial
White
Centuries of European colonialism in the Americas, Asia, Africa and Oceania were justified by Eurocentric attitudes as well as sometimes by white supremacist attitudes.
During the 19th century, "The White Man's Burden", the phrase which refers to the thought that whites have the obligation to make the societies of the other peoples more 'civilized', was widely used to justify colonial policies as a noble enterprise. Historian Thomas Carlyle, best known for his historical account of the French Revolution, The French Revolution: A History, argued that western policies were justified on the grounds that they provided the greatest benefit to "inferior" native peoples. However, even at the time of its publication in 1849, Carlyle's main work on the subject, the Occasional Discourse on the Negro Question, was poorly received by his contemporaries.
According to William Nicholls, religious antisemitism can be distinguished from racial antisemitism which is based on racial or ethnic grounds. "The dividing line was the possibility of effective conversion ... a Jew ceased to be a Jew upon baptism." However, with racial antisemitism, "Now the assimilated Jew was still a Jew, even after baptism ... . From the Enlightenment onward, it is no longer possible to draw clear lines of distinction between religious and racial forms of hostility towards Jews... Once Jews have been emancipated and secular thinking makes its appearance, without leaving behind the old Christian hostility towards Jews, the new term antisemitism becomes almost unavoidable, even before explicitly racist doctrines appear."
One of the first typologies used to classify various human races was invented by Georges Vacher de Lapouge (1854–1936), a theoretician of eugenics, who published L'Aryen et son rôle social (1899 – "The Aryan and his social role") in 1899. In his book, he divides humanity into various hierarchical races, starting with the highest, the "Aryan white race, dolichocephalic", and ending with the lowest, the "brachycephalic", "mediocre and inert" race best represented by Southern European, Catholic peasants. Between these, Vacher de Lapouge identified the "Homo europaeus" (Teutonic, Protestant, etc.), the "Homo alpinus" (Auvergnat, Turkish, etc.), and finally the "Homo mediterraneus" (Neapolitan, Andalus, etc.). Jews were brachycephalic just like the Aryans were, according to Lapouge; but he considered them dangerous for this exact reason; they were the only group, he thought, which was threatening to displace the Aryan aristocracy. Georges Vacher de Lapouge became one of the leading inspirations of Nazi antisemitism and Nazi racist ideology.
United States
White Americans who participated in the Atlantic slave trade believed in and justified their economic exploitation of African Americans by creating a scientific theory of white superiority and black inferiority. Thomas Jefferson, who was a believer in scientific racism and an enslaver of over 600 African Americans (regarded as property under the Articles of Confederation), wrote that blacks were "inferior to the whites in the endowments of body and mind."
A justification for the conquest of American Indian tribes emanated from their dehumanized perception as the "merciless Indian savages", as described in the United States Declaration of Independence.
Before the outbreak of the American Civil War, the Confederate States of America was founded with a constitution that contained clauses restricting the government's ability to limit or interfere with the institution of "negro" slavery. In the 1861 Cornerstone Speech, Confederate vice president Alexander Stephens declared that one of the Confederacy's foundational tenets was White Supremacy over African American slaves. Following the war, a hate group known as the Ku Klux Klan was founded in the American South. Its purpose was to maintain White, Protestant supremacy in the US after the Reconstruction period, which it did through violence and intimidation.
The Anti-Defamation League (ADL) and Southern Poverty Law Center condemn writings about "Jewish Supremacism" by Holocaust-denier, former Grand Wizard of the KKK, and conspiracy theorist David Duke as antisemitic – in particular, his book Jewish Supremacism: My Awakening to the Jewish Question. Kevin B. MacDonald, known for his theory of Judaism as a "group evolutionary strategy", has also been accused of being "antisemitic" and a "white supremacist" in his writings on the subject by the ADL and his own university psychology department.
Nazi Germany
From 1933 to 1945, Nazi Germany, under the rule of Adolf Hitler, promoted the belief in the existence of a superior, Aryan Herrenvolk, or master race. The state's propaganda advocated the belief that Germanic peoples, whom they called "Aryans", were a master race or a Herrenvolk whose members were superior to the Jews, Slavs, and Romani people, so-called "gypsies". Arthur de Gobineau, a French racial theorist and aristocrat, blamed the fall of the ancien régime in France on racial intermixing, which he believed had destroyed the purity of the Nordic race. Gobineau's theories, which attracted a large and strong following in Germany, emphasized the belief in the existence of an irreconcilable polarity between Aryan and Jewish cultures.
Russia
Black
Cornel West, an African-American philosopher, writes that black supremacist religious views arose in America as a part of black Muslim theology in response to white supremacy.
Hutu supremacism
Arab
In Africa, black Southern Sudanese allege that they are being subjected to a racist form of Arab supremacy, which they equate with the historic white supremacism of South Africa's apartheid. The alleged genocide and ethnic cleansing in the ongoing War in Darfur has been described as an example of Arab racism.
For example, in their analysis of the sources of the conflict, Julie Flint and Alex de Waal say that Colonel Gaddafi, the leader of Libya, sponsored "Arab supremacism" across the Sahara during the 1970s. Gaddafi supported the "Islamic Legion" and the Sudanese opposition "National Front, including the Muslim Brothers and the Ansar, the Umma Party's military wing." Gaddafi tried to use such forces to annex Chad from 1979 to 1981. Gaddafi supported the Sudanese government's war in the South during the early 1980s, and in return, he was allowed to use the Darfur region as a "back door to Chad". As a result, the first signs of an "Arab racist political platform" appeared in Darfur in the early 1980s.
India
In Asia, the people of ancient India considered all foreigners barbarians. The Muslim scholar Al-Biruni wrote that the Indians called foreigners impure. A few centuries later, Dubois observed that "Hindus look upon Europeans as barbarians totally ignorant of all principles of honour and good breeding... In the eyes of a Hindu, a Pariah (outcaste) and a European are on the same level." The Chinese likewise considered Europeans repulsive, ghost-like creatures, and even devils. Chinese writers also referred to foreigners as barbarians.
China
Religious
Christianity
Academics Carol Lansing and Edward D. English argue that Christian supremacism was a motivation for the Crusades in the Holy Land, as well as a motivation for crusades against Muslims and pagans throughout Europe. The blood libel is a widespread European conspiracy theory which led to centuries of pogroms and massacres of European Jewish minorities because it alleged that Jews required the pure blood of a Christian child in order to make matzah for Passover. Thomas of Cantimpré writes of the blood curse which the Jews put upon themselves and all of their generations at the court of Pontius Pilate where Jesus was sentenced to death: "A very learned Jew, who in our day has been converted to the (Christian) faith, informs us that one enjoying the reputation of a prophet among them, toward the close of his life, made the following prediction: 'Be assured that relief from this secret ailment, to which you are exposed, can only be obtained through Christian blood ("solo sanguine Christiano")." The Atlantic slave trade has also been partially attributed to Christian supremacism. The Ku Klux Klan has been described as a white supremacist Christian organization, as are many other white supremacist groups, such as the Posse Comitatus and the Christian Identity and Positive Christianity movements.
Islam
Academics Khaled Abou El Fadl, Ian Lague, and Joshua Cone note that, while the Quran and other Islamic scriptures express tolerant beliefs, such as Al-Baqara 256 "there is no compulsion in religion", there have also been numerous instances of Muslim or Islamic supremacism. Examples of how supremacists have interpreted Islam include the history of slavery in the Muslim world, Caliphate, Ottoman Empire, the early-20th-century pan-Islamism promoted by Abdul Hamid II, the jizya and supremacy of Sharia law, such as rules of marriage in Muslim countries being imposed on non-Muslims.
While non-violent proselytism of Islam (Dawah) is not Islamic supremacism, forced conversion to Islam is, as is the death penalty for apostasy in Islam.
Numerous massacres and ethnic cleansings of Jews, Christians and other non-Muslims occurred in some Muslim-majority countries, including Morocco, Libya, and Algeria, where eventually Jews were forced to live in ghettos. Decrees ordering the destruction of synagogues were enacted during the Middle Ages in Egypt, Syria, Iraq, and Yemen. At certain times in Yemen, Morocco, and Baghdad, Jews were forced to convert to Islam or face the Islamic death penalty. While there were antisemitic incidents before the 20th century, antisemitism increased after the Arab–Israeli conflict. Following the 1948 Arab–Israeli War, the Palestinian exodus, the creation of the State of Israel and Israeli victories during the wars of 1956 and 1967 were a severe humiliation to Israel's opponents, primarily Egypt, Syria, and Iraq. However, by the mid-1970s the vast majority of Jews had left Muslim-majority countries, moving primarily to Israel, France, and the United States. The reasons for the Jewish exodus are varied and disputed.
Judaism
Ilan Pappé, an expatriate Israeli historian, writes that the First Aliyah to Israel "established a society based on Jewish supremacy" within "settlement-cooperatives" that were Jewish owned and operated. Joseph Massad, a professor of Arab studies, holds that "Jewish supremacism" has always been a "dominating principle" in religious and secular Zionism.
Other
Social
Political
See also
Chauvinism
Colonialism
Rule according to higher law
Legislative supremacy
Judicial supremacy
Notes
Ethnic supremacy
Narcissism
Political theories
Prejudice and discrimination
Racism
Social concepts
Pejorative terms | Supremacism | Biology | 2,470 |
10,340,305 | https://en.wikipedia.org/wiki/Automatism%20%28toxicology%29 | Automatism, in toxicology, refers to a tendency to take a drug over and over again, forgetting each time that one has already taken the dose. This can lead to a cumulative overdose. A particular example is barbiturates which were once commonly used as hypnotic (sleep inducing) drugs. Among the current hypnotics, benzodiazepines, especially midazolam might show marked automatism, possibly through their intrinsic anterograde amnesia effect. Barbiturates are known to induce hyperalgesia, i.e. aggravation of pain and for sleeplessness due to pain, if barbiturates are used, more pain and more disorientation would follow leading to drug automation and finally a "pseudo"suicide. Such reports dominated the medical literature of 1960s and 1970s; a reason replacing the barbiturates with benzodiazepines when they became available.
References
Toxicology | Automatism (toxicology) | Chemistry,Environmental_science | 192 |
1,699,990 | https://en.wikipedia.org/wiki/Globus%20cruciger | The , also known as stavroforos sphaira () or "the orb and cross", is an orb surmounted by a cross. It has been a Christian symbol of authority since the Middle Ages, used on coins, in iconography, and with a sceptre as royal regalia.
The cross laid over the globus represents Christ's dominion over the world, literally held in the hand of a worthy earthly ruler. In the iconography of Western art, when Christ himself holds the globe, he is called Salvator Mundi (Latin for 'Saviour of the World'). For instance, the 16th-century Infant Jesus of Prague statue holds a globus cruciger in this manner.
History
Holding the world in one's hand, or, more ominously, under one's foot, has been a symbol since antiquity. To citizens of the Roman Empire, the plain spherical globe held by the god Jupiter represented the world or the universe, as the dominion held by the Emperor. A 2nd-century coin from the reign of Emperor Hadrian shows the Roman goddess Salus with her foot upon a globus, and a 4th-century coin from the reign of Emperor Constantine I shows him with a globus in hand. The orbis terrarum was central to the iconography of the Tetrarchy, in which it represented the Tetrarchs' restoration of security to the Roman world. Constantine I claimed to have had a vision of a symbol of Christ above the sun, with the words "In this sign, you shall conquer" (Latin: "In hoc signo vinces"), before the Battle of Milvian Bridge in AD 312. This symbol is usually assumed to be the "Chi-Rho (X-P)" symbol, but some think it was a cross. Consequently, his soldiers painted this symbol on their shields and then defeated their foe, Maxentius.
With the growth of Christianity in the 5th century, the orb (in Latin works orbis terrarum, the 'world of the lands', whence "orb" derives) was surmounted with a cross, hence globus cruciger, symbolizing the Christian God's dominion of the world. The Emperor held the world in his hand to show that he ruled it on behalf of God. To non-Christians already familiar with the pagan globe, the surmounting of a cross indicated the victory of Christianity over the world. In medieval iconography, the size of an object relative to those of nearby objects indicated its relative importance; therefore the orb was small and the one who held it was large to emphasize the nature of their relationship. Although the globe symbolized the whole Earth, many Christian rulers, some of them not even sovereign, who reigned over small territories of the Earth, used it symbolically.
The first known depiction in art of the symbol was probably in the early 5th century AD, possibly as early as AD 395, namely on the reverse side of the coinage of Emperor Arcadius, yet most certainly by AD 423 on the reverse side of the coinage of Emperor Theodosius II.
The globus cruciger was associated with powerful rulers and angels; it adorned portrayals of both emperors and kings, and also archangels. It remained popular throughout the Middle Ages in coinage, iconography, and royal regalia. For example, it was often used by Byzantine emperors in order to symbolize their authority and sovereignty over the Christian world, usually being done via coinage. The symbol was meant to demonstrate that the emperor ruled both politically and divinely. The papacy, which in the Middle Ages rivaled the Holy Roman Emperor in temporal power, also used the symbol on top of the Papal tiara, which consisted of a triple crown; the Pope did not use a separate orb as a symbol. The globus cruciger (made up of a monde and cross) was generally featured as the finial of European royal crowns, whether on physical crowns or merely in royal heraldry, for example, in Denmark, the Holy Roman Empire, Hungary, Italy, The Netherlands, Portugal, Romania, Spain, Sweden, and Yugoslavia. It is still depicted not only in the arms of European polities for which a monarchy survives, but also, since the end of communism in 1991, in the arms of some eastern European polities, despite the termination of their historical monarchies. Even in the modern era in the United Kingdom, the Sovereign's Orb symbolizes both the state and Church of England under the protection and domain of the monarchy.
Gallery
Use as an alchemical symbol
The globus cruciger was used as the alchemical symbol (♁) for antimony. It was also used as an alchemical symbol for "the grey wolf", supposedly used to purify alloyed metals into pure gold. Antimony sulphide (stibnite) was used to purify gold, as the sulphur in the antimony sulphide bonds to the metals alloyed with the gold, and these form a slag which can be removed. The gold remains dissolved in the metallic antimony, which can be boiled off to leave the purified gold.
See also
The Ball and the Cross
Holy Hand Grenade of Antioch
Monde (crown)
Earth symbol
Celestial spheres
T and O map
Apfelgroschen – coin depicting the orb and cross of the Holy Roman Empire
Venus symbol
Cintamani
References
Leslie Brubaker, Dictionary of the Middle Ages, vol 5, pg. 564,
Picture of the 10th century Orb, Scepter and Crown insignia of the Holy Roman Empire
External links
Christian iconography
Christian symbols
Cross symbols
Formal insignia
Latin religious words and phrases
Regalia
Religious symbols
Heraldic charges
Byzantine regalia
Spherical objects | Globus cruciger | Physics | 1,175 |
18,437,142 | https://en.wikipedia.org/wiki/SEIF%20SLAM | In robotics, the SEIF SLAM is the use of the sparse extended information filter (SEIF) to solve the simultaneous localization and mapping by maintaining a posterior over the robot pose and the map. Similar to GraphSLAM, the SEIF SLAM solves the SLAM problem fully, but is an online algorithm (GraphSLAM is offline).
References
Robot control | SEIF SLAM | Engineering | 74 |
19,138,238 | https://en.wikipedia.org/wiki/Hydnellum%20concrescens | Hydnellum concrescens is an inedible fungus, commonly known as the zoned hydnellum or zoned tooth fungus. As with other tooth fungi, the spores are produced on spines on the underside of the cap, rather than gills. It has a funnel-shaped cap, typically between in diameter, which has characteristic concentric zones of color. The cap may also have radial ridges extending from the center to the margins. The spines are pink in young specimens, but turn brown with age.
This species is very similar in appearance to Hydnellum scrobiculatum, and traditionally, largely unreliable microscopic characteristics such as spore size and ornamentation have been used to distinguish between the two. Recent research has demonstrated a way to discriminate the two species using DNA sequencing of the ITS regions.
References
External links
Index Fungorum synonyms
Roger's Mushrooms picture and description
healing-mushrooms.net description, bioactive compounds and medicinal properties
Inedible fungi
concrescens
Fungi of Europe
Fungi described in 1796
Fungus species | Hydnellum concrescens | Biology | 217 |
43,027,004 | https://en.wikipedia.org/wiki/3-subset%20meet-in-the-middle%20attack | The 3-subset meet-in-the-middle (hereafter shortened MITM) attack is a variant of the generic meet-in-the-middle attack, which is used in cryptology for hash and block cipher cryptanalysis. The 3-subset variant opens up the possibility to apply MITM attacks on ciphers, where it is not trivial to divide the keybits into two independent key-spaces, as required by the MITM attack.
The 3-subset variant relaxes the restriction for the key-spaces to be independent, by moving the intersecting parts of the keyspaces into a subset, which contains the keybits common between the two key-spaces.
History
The original MITM attack was first suggested in an article by Diffie and Hellman in 1977, where they discussed the cryptanalytic properties of DES. They argued that the keysize of DES was too small, and that reapplying DES multiple times with different keys could be a solution to the small keysize; however, they advised against using double-DES and suggested triple-DES as a minimum, due to MITM attacks. Double-DES is very susceptible to a MITM attack, as DES can easily be split into two subciphers (the first and second DES encryption) with keys independent of one another, allowing a basic MITM attack that reduces the computational complexity from 2^112 to roughly 2^57.
Many variations have emerged since Diffie and Hellman suggested MITM attacks. These variations either make MITM attacks more effective or allow them to be used in situations where the basic variant cannot. The 3-subset variant was shown by Bogdanov and Rechberger in 2011, and has shown its use in the cryptanalysis of ciphers such as the lightweight block-cipher family KTANTAN.
Procedure
As with general MITM attacks, the attack is split into two phases: A key-reducing phase and a key-verification phase. In the first phase, the domain of key-candidates is reduced, by applying the MITM attack. In the second phase, the found key-candidates are tested on another plain-/ciphertext pair to filter away the wrong key(s).
Key-reducing phase
In the key-reducing phase, the attacked cipher is split into two subciphers, each with its own independent keybits, as is normal with MITM attacks. Instead of having to conform to the limitation that the keybits of the two subciphers be independent, the 3-subset attack allows some of the bits to be used in both subciphers.
This is done by splitting the key into three subsets instead, namely:
A0 = the keybits the two subciphers have in common,
A1 = the keybits distinct to the first subcipher,
A2 = the keybits distinct to the second subcipher.
To now carry out the MITM attack, the 3 subsets are bruteforced individually, according to the procedure below:
For each guess of A0:
Calculate the intermediate value forwards from the plaintext, for all key-bit combinations in A1.
Calculate the intermediate value backwards from the ciphertext, for all key-bit combinations in A2.
Compare the two sets of intermediate values. When there is a match, store the corresponding key combination as a key-candidate.
Key-testing phase
Each key-candidate found in the key-reducing phase is now tested with another plain-/ciphertext pair. This is done simply by checking whether the encryption of the plaintext, P, yields the known ciphertext, C. Usually only a few other pairs are needed, which gives the 3-subset MITM attack a very low data complexity.
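The two phases can be made concrete with a toy example. The Python sketch below runs the attack against a deliberately weak 16-bit cipher with a 12-bit key that is invented purely for illustration; it is not KTANTAN or any real cipher, and the round functions are arbitrary invertible operations chosen only so that the key naturally splits into the three subsets.
import random

MASK16 = 0xFFFF

def rotl16(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK16

def rotr16(x, r):
    return ((x >> r) | (x << (16 - r))) & MASK16

# Toy sub-ciphers: A0 (4 bits) is shared, A1 (4 bits) is used only by the
# first half, A2 (4 bits) only by the second half.
def f1(block, a0, a1):
    return rotl16(block ^ ((a0 << 4) | a1), 3)

def f2(block, a0, a2):
    return rotl16(block, 5) ^ ((a0 << 4) | a2)

def f2_inv(ct, a0, a2):
    return rotr16(ct ^ ((a0 << 4) | a2), 5)

def encrypt(pt, a0, a1, a2):
    return f2(f1(pt, a0, a1), a0, a2)

def three_subset_mitm(pairs):
    """pairs: known (plaintext, ciphertext) tuples; returns surviving key candidates."""
    pt, ct = pairs[0]
    candidates = []
    for a0 in range(16):                      # guess the common keybits
        forward = {}
        for a1 in range(16):                  # key-reducing phase, from the plaintext
            forward.setdefault(f1(pt, a0, a1), []).append(a1)
        for a2 in range(16):                  # key-reducing phase, from the ciphertext
            for a1 in forward.get(f2_inv(ct, a0, a2), []):
                candidates.append((a0, a1, a2))
    # key-testing phase: filter the candidates with the remaining pairs
    return [k for k in candidates if all(encrypt(p, *k) == c for p, c in pairs[1:])]

key = tuple(random.randrange(16) for _ in range(3))
pairs = [(p, encrypt(p, *key)) for p in (0x0123, 0xBEEF)]
print(key in three_subset_mitm(pairs))        # expected: True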
Example
The following example is based on the attack done by Rechberger and Bogdanov on the KTANTAN cipher family. The naming conventions used in their paper are also used for this example. The attack reduces the computational complexity of KTANTAN32 below the 2^80 of a brute-force attack, but the reduced complexity is, as of 2014, still too high to be practical, and the attack is thus not computationally feasible in practice. The same goes for KTANTAN48 and KTANTAN64, whose complexities can be seen at the end of the example.
The attack is possible due to weaknesses exploited in KTANTAN's bit-wise key schedule. It is applicable to KTANTAN32, KTANTAN48 and KTANTAN64, since all the variants use the same key schedule. It is not applicable to the related KATAN family of block ciphers, due to the differences in the key schedule between KTANTAN and KATAN.
Overview of KTANTAN
KTANTAN is a lightweight block-cipher, meant for constrained platforms such as RFID tags, where a cryptographic primitive such as AES, would be either impossible (given the hardware) or too expensive to implement. It was invented by Canniere, Dunkelman and Knezevic in 2009. It takes a block size of either 32, 48 or 64 bits, and encrypts it using an 80-bit key over 254 rounds. Each round utilizes two bits of the key (selected by the key schedule) as round key.
Attack
Preparation
In preparation for the attack, weaknesses in the key schedule of KTANTAN that allow the 3-subset MITM attack were identified.
Since only two key-bits are used each round, the diffusion of the key per round is small - the safety lies in the number of rounds. Due to this structure of the key-schedule, it was possible to find a large number of consecutive rounds, which never utilized certain key-bits.
More precisely, the authors of the attack found that:
Rounds 1 to 111 never use the key-bits:
Rounds 131 to 254 never use the key-bits:
This characteristic of the key schedule is used for staging the 3-subset MITM attack, as it makes it possible to split the cipher into two blocks with independent key-bits. The parameters for the attack are thus:
A0 = the keybits used by both blocks (the remaining 68 bits not mentioned above)
A1 = the keybits used only by the first block (defined by rounds 1-111)
A2 = the keybits used only by the second block (defined by rounds 131-254)
Key-reducing phase
One may notice a problem with step 1.3 in the key-reducing phase: it is not possible to directly compare the two intermediate values, as the forward value is calculated at the end of round 111, while the backward value is calculated at the start of round 131. This is mitigated by another MITM technique called partial matching. By calculating forwards from the first intermediate value and backwards from the second, the authors found that at round 127, 8 bits were still unchanged in both values with probability one. They thus only compared part of the state, namely those 8 bits (8 bits at round 127 for KTANTAN32; 10 bits at round 123 and 47 bits at round 131 for KTANTAN48 and KTANTAN64, respectively). Doing this yields more false positives, but nothing that increases the complexity of the attack noticeably.
Key-testing phase
KTANTAN32 now requires on average 2 pairs to find the key candidate, due to the false positives that come from matching on only part of the state of the intermediate values. KTANTAN48 and KTANTAN64 on average still require only one plain-/ciphertext pair to test and find the correct key-candidates.
Results
For:
KTANTAN32, the computational complexity of the above attack is , compared to with an exhaustive key search. The data complexity is 3 plain-/ciphertext pairs.
KTANTAN48, the computational complexity is and 2 plain-/ciphertext pairs are needed.
KTANTAN64 it is and 2 plain-/ciphertext pairs are needed.
The results are taken from the article by Rechberger and Bogdanov.
This is no longer the best attack on KTANTAN. The best attack as of 2011 is attributed to Wei, Rechberger, Guo, Wu, Wang and Ling, who improved upon the MITM attack on the KTANTAN family. They arrived at a lower computational complexity with 4 chosen plain-/ciphertext pairs using indirect partial-matching and splice & cut MITM techniques.
Notes
Computer network security
Cryptographic attacks | 3-subset meet-in-the-middle attack | Technology,Engineering | 1,729 |
24,034,908 | https://en.wikipedia.org/wiki/RealTouch | The RealTouch was a first of a kind teledildonic male sexual stimulation device consisting of a sleeve fitted with "belts, jets, heating elements and other gadgetry" that fits over the penis and synchronizes sensations to a specially produced online video. It was created by AEBN in 2008. Representatives for the company demonstrated the device at the 2009 AVN Adult Entertainment Expo in Las Vegas and it was released in November 2009.
AEBN also produced a dildo with a capacitive touch sensor; the JoyStick. With these two products, it sought to offer access to interactive remote teledildonic services over the Internet through its RealTouch Interactive division.
Marketing the RealTouch involved significant patent licensing costs. AEBN withdrew from the market in 2015 when it could no longer sustain the cost of producing the product.
References
American inventions
Male sex toys
Machine sex
Ejaculation inducing devices
Sexuality and computing
Teledildonics | RealTouch | Physics,Technology,Biology | 196 |
3,840,294 | https://en.wikipedia.org/wiki/Halimione%20portulacoides | Halimione portulacoides, commonly known as sea purslane, is a shrub found in Eurasia.
Description
The perennial plant grows to in height. The leaves are thick and oval-shaped, with a powdery surface. In northern temperate climates it flowers from July to September. The flowers are small, borne in short clusters, monoecious, and pollinated by wind.
Taxonomy
Botanical synonyms include Atriplex portulacoides L. and Obione portulacoides (L.) Moq. Recent phylogenetic research revealed that Halimione is a distinct genus and cannot be included in Atriplex.
Distribution and habitat
Halimione portulacoides occurs at the sea shores of western and southern Europe, and from the Mediterranean Sea to western Asia. A halophyte, it is found in salt marshes and coastal dunes, and is usually flooded at high tide.
Ireland
Copeland Islands (County Down).
Uses
The edible leaves can be eaten raw in salads or cooked as a potherb. They are thick and succulent with a crunchy texture and a natural saltiness. The leaves are good for human and animal health as they contain important micronutrients like zinc, iron, copper, and cobalt.
References
External links
Chenopodioideae
Flora of Europe
Flora of Western Asia
Flora of North Africa
Plants described in 1753
Taxa named by Carl Linnaeus
Leaf vegetables | Halimione portulacoides | Chemistry | 289 |
61,594,757 | https://en.wikipedia.org/wiki/Papiine%20gammaherpesvirus%201 | Papiine gammaherpesvirus 1 (PaHV-1), commonly known as baboon lymphocryptovirus, is a species of virus in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales.
This species was the first Lymphocryptovirus isolated from a non-human primate to be described.
References
External links
Gammaherpesvirinae | Papiine gammaherpesvirus 1 | Biology | 102 |
63,591,823 | https://en.wikipedia.org/wiki/Simen%20%C3%85dn%C3%B8y%20Ellingsen | Simen Andreas Ådnøy Ellingsen (born 14 May 1981) is a Norwegian engineering physicist specializing in fluid mechanics, especially waves, turbulence, and quantum mechanics. He is a full professor at the Norwegian University of Science and Technology, at the Department of Energy and Process Engineering. He is known for having expanded Lord Kelvin's work known as Kelvinangle. He received the Royal Norwegian Society of Sciences and Letters Prize for Young Researchers in the Natural Sciences in 2011 and became a member of the Young Academy of Norway in 2019. He received a European Research Council Consolidator Grant in 2022.
He plays several instruments and has published music with the band Shamblemaths.
Education
Ellingsen has two doctoral degrees. The first from 2009 is Nuclear Terrorism and Rational Choice from King's College London. The second from 2011 is Dispersion forces in Micromechanics: Casimir and Casimir-Polder forces affected by geometry and non-zero temperature from the Norwegian University of Science and Technology.
Publications (selection)
(The Norwegian Scientific Index)
Membership and honours
In 2011 he was the winner of the Royal Norwegian Society of Sciences and Letters Prize for Young Researchers in the Natural Sciences.
Ellingsen became one of 12 new members of the Young Academy of Norway in 2019, and is member of the Royal Norwegian Society of Sciences and Letters.
References
External links
Home page
1981 births
Engineering academics
Engineering educators
Fluid dynamicists
Living people
Norwegian physicists
Academic staff of the Norwegian University of Science and Technology
Quantum physicists
Royal Norwegian Society of Sciences and Letters | Simen Ådnøy Ellingsen | Physics,Chemistry | 312 |
30,204,464 | https://en.wikipedia.org/wiki/Sunstone%20%28medieval%29 | The sunstone () is a type of mineral attested in several 13th–14th-century written sources in Iceland, one of which describes its use to locate the Sun in a completely overcast sky. Sunstones are also mentioned in the inventories of several churches and one monastery in 14th–15th-century Iceland and Germany.
A theory exists that the sunstone had polarizing attributes and was used as a navigational instrument by seafarers in the Viking Age. A stone found in 2002 off Alderney, in the wreck of a 16th-century warship, may lend evidence of the existence of sunstones as navigational devices.
Sources
One medieval source in Iceland, Rauðúlfs þáttr, mentions the sunstone as a mineral by means of which the sun could be located in an overcast and snowy sky by holding it up and noting where it emitted, reflected or transmitted light (hvar geislaði úr honum). Sunstones are also mentioned in Hrafns saga Sveinbjarnarsonar (13th century) and in church and monastic inventories (14th–15th century) without discussing their attributes. The sunstone texts of Hrafns saga Sveinbjarnarsonar were copied to all four versions of the medieval hagiography Guðmundar saga góða.
Thorsteinn Vilhjalmsson translates the Icelandic description in Rauðúlfs þáttr of the use of the sunstone as follows:
Allegorical nature of the medieval texts
Two of the original medieval texts on the sunstone are allegorical. Hrafns saga Sveinbjarnarsonar contains a burst of purely allegorical material associated with Hrafn’s slaying. This involves a celestial vision with three highly cosmological knights, recalling the horsemen of the Apocalypse. It has been suggested that the horsemen of Hrafns saga contain allegorical allusions to the winter solstice and the four elements as an omen of Hrafn's death, where the sunstone also appears.
"Rauðúlfs þáttr", a tale of Saint Olav, and the only medieval source mentioning how the sunstone was used, is a thoroughly allegorical work. A round and rotating house visited by Olav has been interpreted as a model of the cosmos and the human soul, as well as a prefiguration of the Church. The intention of the author was to achieve an apotheosis of St. Olav, through placing him in the symbolic seat of Christ. The house belongs to the genre of "abodes of the sun," which seemed widespread in medieval literature. St. Olav used the sunstone to confirm the time reckoning skill of his host right after leaving this allegorical house. He held the sunstone up against the snowy and completely overcast sky and noted where light was emitted from it (the Icelandic words used do not make it clear whether the light was reflected by the stone, emitted by it or transmitted through it). It has been suggested that in "Rauðúlfs þáttr" the sunstone was used as a symbol of the Virgin, following a widespread tradition in which the virgin birth of Christ is compared with glass letting a ray of the sun through.
The allegories of the above-mentioned texts exploit the symbolic value of the sunstone, but the church and monastic inventories, however, show that something called sunstones did exist as physical objects in Iceland. The presence of the sunstone in "Rauðúlfs þáttr" may be entirely symbolic but its use is described in sufficient detail to show that the idea of using a stone to find the sun's position in overcast conditions was commonplace.
Possibility of use for orientation and navigation
Danish archaeologist Thorkild Ramskou posited that the "sunstone" could have been one of the minerals (cordierite or Iceland spar) that polarize light and by which the azimuth of the sun can be determined in a partly overcast sky or when the sun is just below the horizon. The principle is used by some insects. Polarization is a phenomenon that occurs when light encounters an obstacle, such as a shiny surface or a fog bank, and deviates to a particular orientation; bees, for example, are known to be able to detect the polarization of sunlight. Polar flights applied the idea before more advanced techniques became available. Ramskou further conjectured that Iceland spar could have aided navigation in the open sea in the Viking period. This idea has become very popular, and research as to how a "sunstone" could be used in nautical navigation continues, often in the context of the Uunartoq disc.
Research in 2011 confirms that one can identify the direction of the sun to within a few degrees in both cloudy and twilight conditions using Iceland spar and the naked eye. The process involves moving the stone across the visual field to reveal a yellow entoptic pattern on the fovea of the eye. Alternatively, a dot can be placed on top of the crystal so that when you look at it from below, two dots appear, because the light is "depolarised" and fractured along different axes. The crystal can then be rotated until the two points have the same luminosity. The angle of the top face now gives the direction of the sun. Attempts to replicate this work in both Scotland and off the coast of Turkey by science journalist Matt Kaplan and mineralogists at the British Geological Survey in 2014 failed. Kaplan communicated with Ropars, and neither could understand why the samples of Iceland spar that were being used during the trials did not reveal the sun's direction, with the author hypothesizing that the stones require some experience to be handled effectively.
The recovery of a piece of Iceland spar from an Elizabethan ship that sank near Alderney in 1592 suggests the possibility that this navigational technology may have persisted after the invention of the magnetic compass. Although the stone was found near a navigational instrument, its use remains uncertain.
Beyond nautical navigation, a polarizing crystal would have been useful as a sundial, especially at high latitudes with extended hours of twilight, in mountainous areas, or in partly overcast conditions. This would have required the polarizing crystal to be used in conjunction with known landmarks. Churches and monasteries would have valued such an object as an aid to keep track of the canonical hours.
A Hungarian team proposed that a sun compass artifact with crystals might also have allowed Vikings to guide their boats at night. A type of crystal they called sunstone can use scattered sunlight from below the horizon as a guide. What they suggest is that Iceland spar crystals were used in combination with Haidinger's brush. If so, Vikings could have used them in the northern latitudes where it never becomes completely dark in summer. In areas of confused magnetic deviation (such as the Labrador coast), a sunstone could have been a more reliable guide than a magnetic compass.
See also
Allegory in the Middle Ages
Pfund sky compass
Solar compass
References
External links
The Fabled Viking Sunstone
The Viking Sunstone Is the legend of the Sun-Stone true ?
Culture of Iceland
History of navigation
Navigational equipment
Polarization (waves)
Gemstones in culture
sv:Islandsspat#Solsten | Sunstone (medieval) | Physics | 1,488 |
690,110 | https://en.wikipedia.org/wiki/Ctags | Ctags is a programming tool that generates an index file (or tag file) of names found in source and header files of various programming languages to aid code comprehension. Depending on the language, functions, variables, class members, macros and so on may be indexed. These tags allow definitions to be quickly and easily located by a text editor, a code search engine, or other utility. Alternatively, there is also an output mode that generates a cross reference file, listing information about various names found in a set of language files in human-readable form.
The original Ctags was introduced in BSD Unix 2.0 and was written by Ken Arnold, with Fortran support by Jim Kleckner and Pascal support by Bill Joy. It is part of the initial release of Single Unix Specification and XPG4 of 1992.
Editors that support ctags
Tag index files are supported by many source code editors, including:
Atom
BBEdit 8+
CodeLite (via built-in ctagsd language server)
Cloud9 IDE (uses it internally but does not expose it)
CygnusEd
Emacs and XEmacs
EmEditor Professional
Far Manager (via Ctags Source Navigator plugin)
Geany
Gedit (via gedit-symbol-browser-plugin)
JED
jEdit (via plugins CodeBrowser, Tags, ClassBrowser, CtagsSideKick, or Jump)
JOE
KDevelop
Kate
mcedit (Midnight Commander builtin editor)
NEdit
Notepad++ (via OpenCTags plug-in)
QDevelop
TSE (via macro)
TextMate (via CodeBrowser-PlugIn)
UltraEdit
TextPad
VEDIT
vi (and derivatives such as Elvis, Nvi, Vim, vile, etc.)
Visual Studio Code
Xedit (X11)
Variants of ctags
There are a few other implementations of the ctags program:
Etags
GNU Emacs comes with two ctags utilities, etags and ctags, which are compiled from the same source code. Etags generates a tag table file for Emacs, while the ctags command is used to create a similar table in a format understood by vi. They have different sets of command line options:
etags does not recognize and ignores options which only make sense for vi style tag files produced by the ctags command.
Exuberant Ctags
Exuberant Ctags, written and maintained by Darren Hiebert until 2009, was initially distributed with Vim, but became a separate project upon the release of Vim 6. It includes support for generating Emacs-compatible etags output.
Exuberant Ctags includes support for over 40 programming languages with the ability to add support for even more using regular expressions.
Universal Ctags
Universal Ctags is a fork of Exuberant Ctags, with the objective of continuing its development. A few parsers are rewritten to better support the languages.
Language-specific
Hasktags creates ctags-compatible tag files for Haskell source files. It includes support for creating Emacs etags files.
Jsctags is a ctags-compatible code indexing solution for JavaScript. It is specialized for JavaScript and uses the CommonJS packaging system. It outperforms Exuberant Ctags for JavaScript code, finding more tags than the latter.
Tags file formats
There are multiple tag file formats. Some of them are described below. In the following, \xNN represents the byte with hexadecimal representation NN. Every line ends with a line feed (LF, \x0a).
Ctags and descendants
The original ctags and the Exuberant/Universal descendants have similar file formats:
Ctags
This is the format used by vi and various clones. The tags file is normally named "tags".
The tags file is a list of lines, each line in the format:
{tagname}\t{tagfile}\t{tagaddress}
The fields are specified as follows:
{tagname} – Any identifier, not containing white space.
\t – Exactly one tab character, although many versions of vi can handle any amount of white space.
{tagfile} – The name of the file where {tagname} is defined, relative to the current directory.
{tagaddress} – An ex mode command that will take the editor to the location of the tag. For POSIX implementations of vi this may only be a search or a line number, providing added security against arbitrary command execution.
The tags file is sorted on the {tagname} field, which allows for fast searching of the tags file.
Extended Ctags
This is the format used by Vim's Exuberant Ctags and Universal Ctags. These programs can generate an original ctags file format or an extended format that attempts to retain backward compatibility.
The extended tags file is a list of lines, each line in the format:
{tagname}\t{tagfile}\t{tagaddress}[;"\t{tagfield...}]
The fields up to and including {tagaddress} are the same as for ctags above.
Optional additional fields are indicated by square brackets ("[...]") and include:
;" – semicolon + double quote: Ends the {tagaddress} in a way that looks like the start of a comment to vi or ex.
{tagfield...} – extension fields: tab-separated "key:value" pairs for more information.
This format is compatible with non-POSIX vi as the additional data is interpreted as a comment. POSIX implementations of vi must be changed to support it, however.
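A minimal Python sketch of a reader for this file layout is shown below. It accepts both the original and the extended format (the extension marker is simply absent in the former), skips the "!_TAG_" pseudo-tag header lines that Exuberant/Universal Ctags write at the top of the file, and exploits the sorted order for lookups; field handling is simplified, so this is illustrative rather than a complete implementation.
import bisect

def parse_tags(path="tags"):
    entries = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.rstrip("\n")
            if not line or line.startswith("!_TAG_"):   # pseudo-tag header lines
                continue
            name, tagfile, rest = line.split("\t", 2)
            address, _, ext = rest.partition(';"\t')    # extension fields are optional
            fields = dict(f.split(":", 1) for f in ext.split("\t") if ":" in f)
            entries.append({"name": name, "file": tagfile,
                            "address": address, "fields": fields})
    return entries

def lookup(entries, name):
    # The file is sorted on the tag name, so a binary search finds matches quickly.
    names = [e["name"] for e in entries]
    i = bisect.bisect_left(names, name)
    return [e for e in entries[i:] if e["name"] == name]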
Etags
This is the format used by Emacs etags. The tags file is normally named "TAGS".
An etags file consists of multiple sections, one section per input source file. Sections are plain text, with several non-printable ASCII characters used for special purposes. These characters are represented as hexadecimal codes below.
A section starts with a two line header (the first two bytes make up a magic number):
\x0c
{src_file},{size_of_tag_definition_data_in_bytes}
The header is followed by tag definitions, one definition per line, with the format:
{tag_definition_text}\x7f{tagname}\x01{line_number},{byte_offset}
{tagname} can be omitted if the name of the tag can be deduced from the text at the tag definition.
Example
Given a single line test.c source code:
#define CCC(x)
The TAGS (etags) file would look like this:
\x0c
test.c,21
#define CCC(\x7fCCC\x011,0
The tags (ctags) file may look like:
CCC( test.c 1
or more flexibly using a search:
CCC( test.c /^#define CCC(/
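For comparison, the following Python sketch parses an Emacs TAGS file such as the one shown above. It handles only the basic layout described in this article; real TAGS files may contain include directives and other extensions that the sketch ignores, and the fallback used when the tag name is omitted is a simplification.
def parse_etags(path="TAGS"):
    tags = []
    with open(path, "rb") as fh:
        data = fh.read().decode("utf-8", errors="replace")
    for section in data.split("\x0c\n")[1:]:            # each section starts with a form feed
        header, _, body = section.partition("\n")
        src_file = header.rsplit(",", 1)[0]              # "{src_file},{size}"
        for line in body.splitlines():
            if "\x7f" not in line:
                continue
            text, _, rest = line.partition("\x7f")
            if "\x01" in rest:                            # explicit tag name present
                name, _, pos = rest.partition("\x01")
            else:                                         # name omitted; fall back to the text
                name, pos = text.strip(), rest
            lineno, _, offset = pos.partition(",")
            tags.append((src_file, name, int(lineno), int(offset or 0), text))
    return tags

# With the TAGS file above this returns [('test.c', 'CCC', 1, 0, '#define CCC(')].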
See also
GNU GLOBAL
References
External links
Universal Ctags homepage
Exuberant Ctags homepage
Ctags on VMS
source code for Emacs vtags.el module
Code comprehension tools
Code navigation tools
Free computer programming tools
Unix programming tools
Standard Unix programs
Unix SUS2008 utilities
Software using the BSD license
fr:Ctags | Ctags | Technology | 1,478 |
12,637,359 | https://en.wikipedia.org/wiki/Carbon%20dioxide%20scrubber | A carbon dioxide scrubber is a piece of equipment that absorbs carbon dioxide (CO2). It is used to treat exhaust gases from industrial plants or from exhaled air in life support systems such as rebreathers or in spacecraft, submersible craft or airtight chambers. Carbon dioxide scrubbers are also used in controlled atmosphere (CA) storage and carbon capture and storage processes.
Technologies
Amine scrubbing
The primary application for CO2 scrubbing is the removal of CO2 from the exhaust of coal- and gas-fired power plants and from the enclosed atmosphere of nuclear submarines. The technology involves the use of various amines, e.g. monoethanolamine. Cold solutions of these organic compounds bind CO2, but the binding is reversed at higher temperatures:
CO2 + 2 HOCH2CH2NH2 ↔ HOCH2CH2NH3+ + HOCH2CH2NHCO2−
So far, this technology has only been lightly implemented in coal-fired power plants because of the capital costs of installing the facility and the operating costs of utilizing it. However, the technology has been utilized as a primary part of atmosphere control in nuclear submarines since the late 1950s.
Minerals and zeolites
Several minerals and mineral-like materials reversibly bind CO2. Most often, these minerals are oxides or hydroxides, and often the CO2 is bound as carbonate. Carbon dioxide reacts with quicklime (calcium oxide) to form limestone (calcium carbonate), in a process called carbonate looping. Other minerals include serpentinite, a magnesium silicate hydroxide, and olivine. Molecular sieves also function in this capacity.
Various (cyclical) scrubbing processes have been proposed to remove CO2 from the air or from flue gases and release it in a controlled environment, regenerating the scrubbing agent. These usually involve a variant of the Kraft process, which may be based on sodium hydroxide. The CO2 is absorbed into such a solution, transferred to lime (via a process called causticization) and released again through the use of a kiln. With some modifications to the existing processes (mainly changing to an oxygen-fired kiln), the resulting exhaust becomes a concentrated stream of CO2, ready for storage or use in fuels. An alternative to this thermo-chemical process is an electrical one which releases the CO2 by electrolyzing the carbonate solution. While simpler, this electrical process consumes more energy, as electrolysis also splits water. Early incarnations of environmentally motivated CO2 capture used electricity as the energy source and were therefore dependent on green energy. Some thermal CO2 capture systems use heat generated on-site, which reduces the inefficiencies resulting from off-site electricity production, but such systems still need a source of (green) heat, which nuclear power or concentrated solar power could provide.
Sodium hydroxide
Zeman and Lackner outlined a specific method of air capture.
First, CO2 is absorbed by an alkaline NaOH solution to produce dissolved sodium carbonate. The absorption reaction is a gas liquid reaction, strongly exothermic, here:
2NaOH(aq) + CO2(g) → Na2CO3(aq) + H2O(l)
Na2CO3(aq) + Ca(OH)2(s) → 2NaOH(aq) + CaCO3(s)
ΔH° = −114.7 kJ/mol
Causticization is performed ubiquitously in the pulp and paper industry and readily transfers 94% of the carbonate ions from the sodium to the calcium cation. Subsequently, the calcium carbonate precipitate is filtered from solution and thermally decomposed to produce gaseous CO2. The calcination reaction is the only endothermic reaction in the process and is shown here:
CaCO3(s) → CaO(s) + CO2(g)
ΔH° = +179.2 kJ/mol
The thermal decomposition of calcite is performed in a lime kiln fired with oxygen in order to avoid an additional gas separation step. Hydration of the lime (CaO) completes the cycle. Lime hydration is an exothermic reaction that can be performed with water or steam. Using water, it is a liquid/solid reaction as shown here:
CaO(s) + H2O(l) → Ca(OH)2(s)
ΔH° = −64.5 kJ/mol
Lithium hydroxide
Other strong bases such as soda lime, sodium hydroxide, potassium hydroxide, and lithium hydroxide are able to remove carbon dioxide by chemically reacting with it. In particular, lithium hydroxide was used aboard spacecraft, such as in the Apollo program, to remove carbon dioxide from the atmosphere. It reacts with carbon dioxide to form lithium carbonate. Recently lithium hydroxide absorbent technology has been adapted for use in anesthesia machines. Anesthesia machines which provide life support and inhaled agents during surgery typically employ a closed circuit necessitating the removal of carbon dioxide exhaled by the patient. Lithium hydroxide may offer some safety and convenience benefits over the older calcium based products.
2 LiOH(s) + 2 H2O(g) → 2 LiOH·H2O(s)
2 LiOH·H2O(s) + CO2(g) → Li2CO3(s) + 3 H2O(g)
The net reaction is:
2 LiOH(s) + CO2(g) → Li2CO3(s) + H2O(g)
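As a rough illustration of what this stoichiometry implies for sizing a scrubber, the sketch below (not from the source; the molar masses and the exhaled-CO2 figure are approximate, illustrative values) computes the mass of lithium hydroxide consumed per kilogram of CO2 absorbed, assuming the net 2:1 molar ratio above.

```python
# Minimal stoichiometry sketch for the net reaction 2 LiOH + CO2 -> Li2CO3 + H2O.
# Values are approximate and illustrative, not taken from the article.
M_CO2 = 44.01    # g/mol, carbon dioxide
M_LIOH = 23.95   # g/mol, lithium hydroxide (approximate)

def lioh_required(co2_mass_g: float) -> float:
    """Mass of LiOH (g) consumed to absorb co2_mass_g grams of CO2,
    assuming complete reaction at the 2:1 molar ratio of the net equation."""
    mol_co2 = co2_mass_g / M_CO2
    return 2 * mol_co2 * M_LIOH

if __name__ == "__main__":
    # An adult at rest exhales very roughly 1 kg of CO2 per day (illustrative figure).
    print(f"LiOH needed per kg of CO2: {lioh_required(1000) / 1000:.2f} kg")  # ~1.09 kg
```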
Lithium peroxide can also be used as it absorbs more CO2 per unit weight with the added advantage of releasing oxygen.
In recent years lithium orthosilicate has attracted much attention towards CO2 capture, as well as energy storage. This material offers considerable performance advantages although it requires high temperatures for the formation of carbonate to take place.
Regenerative carbon dioxide removal system
The regenerative carbon dioxide removal system (RCRS) on the Space Shuttle orbiter used a two-bed system that provided continuous removal of carbon dioxide without expendable products. Regenerable systems allowed a shuttle mission a longer stay in space without having to replenish its sorbent canisters. Older lithium hydroxide (LiOH)-based systems, which are non-regenerable, were replaced by regenerable metal-oxide-based systems. A system based on metal oxide primarily consisted of a metal oxide sorbent canister and a regenerator assembly. It worked by removing carbon dioxide using a sorbent material and then regenerating the sorbent material. The metal-oxide sorbent canister was regenerated by pumping heated air through it at a standard flow rate for 10 hours.
Activated carbon
Activated carbon can be used as a carbon dioxide scrubber. Air with high carbon dioxide content, such as air from fruit storage locations, can be blown through beds of activated carbon and the carbon dioxide will adhere to the activated carbon [adsorption]. Once the bed is saturated it must then be "regenerated" by blowing low carbon dioxide air, such as ambient air, through the bed. This will release the carbon dioxide from the bed, and it can then be used to scrub again, leaving the net amount of carbon dioxide in the air the same as when the process was started.
Metal-organic frameworks (MOFs)
Metal-organic frameworks are well-studied for carbon dioxide capture and sequestration via adsorption. No large-scale commercial technology exists. In one set of tests MOFs were able to separate 90% of the CO2 from the flue gas stream using a vacuum pressure swing process. The cost of energy is estimated to increase by 65% if MOFs were used vs an increase of 81% for amines as the capturing agent.
Extend air cartridge
An extend air cartridge (EAC) is a make or type of pre-loaded one-use absorbent canister that can be fitted into a recipient cavity in a suitably-designed rebreather.
Other methods
Many other methods and materials have been discussed for scrubbing carbon dioxide.
Adsorption
Regenerative carbon dioxide removal system (RCRS)
Algae filled bioreactors
Membrane gas separations
Reversing heat exchangers
See also
References
Scrubbers
Carbon dioxide
Space suit components
Spacecraft life support systems
Gas technologies
Carbon capture and storage | Carbon dioxide scrubber | Chemistry,Engineering | 1,661 |
32,727,729 | https://en.wikipedia.org/wiki/List%20of%20online%20digital%20musical%20document%20libraries | This is a list of online digital musical document libraries. Each source listed below offers access to collections of digitized music documents (typically originating from printed or manuscript musical sources). They may contain scanned images, fully encoded scores, or encodings designed for music playback (e.g., via MIDI). Some (e.g., KernScores) are adapted for music analysis.
See also
Virtual Library of Musicology
List of online music databases
References
Projects
Digital library projects
Digital Musical Document Libraries
Digital library projects | List of online digital musical document libraries | Technology | 105 |
3,286,331 | https://en.wikipedia.org/wiki/Macaulay%20Institute | The Macaulay Institute, formally the Macaulay Land Use Research Institute and sometimes referred to simply as The Macaulay, was a research institute based at Aberdeen in Scotland, which is now part of the James Hutton Institute. Its work covered aspects such as landscape, soil and water conservation and climate change.
History
The Macaulay Institute for Soil Research was founded in 1930. A benefaction of £10,000 from one of Canada's Scottish sons, Thomas Bassett Macaulay, of the Sun Life Assurance Company of Canada was used to purchase 50 acres and buildings at Craigiebuckler in Aberdeen. Macaulay's aim was to improve the productivity of Scottish agriculture. Thomas Bassett Macaulay was a descendant of Macaulay family of Lewis, who were centred on the Hebridean Isle of Lewis. He was true to his Hebridean roots throughout his life, often giving large donations to Lewis, which funded various projects including a new library and a new wing at Lewis hospital.
The new Macaulay Institute opened on a site near Bucksburn in April 1987. It was formed by the merger of the Macaulay Institute for Soil Research and the Hill Farm Research Organisation. The proposed merger was announced in December 1985 with the government anticipating that it would result in cost savings. It was established to carry out research in support of the agricultural industry, taking account of the interaction between the industry and other land users, and set in the context of the environmental objectives of the UK Government and the European Union.
In April 2011, the Macaulay Institute merged with SCRI (Scottish Crop Research Institute) in Dundee to form the James Hutton Institute. The chief executive of the new institute is Professor Iain Gordon.
Research
It is an international centre for research and consultancy on the environmental and social consequences of rural land uses. Interdisciplinary research across the environmental and social sciences aims to support the protection of natural resources, the creation of integrated land use systems, and the development of sustainable rural communities.
With an annual income from research and consultancy of over £11million, the Macaulay Institute is the largest interdisciplinary research organisation of its kind in Europe.
It is one of the main research providers to the Scottish Government and currently about 75% of the Macaulay's income is related to commissioned research programmes, principally on "Land Use and Rural Stewardship". The 300 staff and postgraduate students are drawn from over 25 countries, and conduct research in Scotland, across Europe and internationally, with a wide range of partner organisations. Their goal is that the research they undertake provides evidence that will help shape future environmental and rural-development policy both in Scotland and internationally.
The Macaulay Land Use Research Institute had been a registered charity since 1931. Commercial services were delivered through Macaulay Scientific Consulting Ltd, its subsidiary consultancy company.
The mineral Macaulayite is named after the institute.
Notable Directors
William Ogg Gammie FRSE LLB (1930–1943)
Donald McArthur FRSE (1948–1958)
Robert Lyell Mitchell FRSE (1968–1975)
Thomas Summers West CBE, FRS, FRSE (1975–1987)
Head of Microbiology
Donald Webley FRSE (1945–1975)
Affiliations
The Macaulay Institute is a member of the Aberdeen Research Consortium which also includes:
University of Aberdeen
FRS Marine Laboratory, Aberdeen
Rowett Research Institute
Robert Gordon University
Scottish Agricultural College
Current work
LADSS and AGRIGRID are examples of projects that are being undertaken at the institute.
See also
AGRIGRID
LADSS
References
External links
Macaulay Institute Official Site
Macaulay Scientific Consulting Ltd
James Hutton Institute
1987 establishments in Scotland
2011 disestablishments in Scotland
Agriculture in Scotland
Agricultural research institutes in the United Kingdom
Charities based in Aberdeen
Economy Directorates
Environmental research institutes
Environment of Scotland
Government research
Research institutes established in 1987
Research institutes disestablished in 2011
Research institutes in Scotland
Public bodies of the Scottish Government | Macaulay Institute | Environmental_science | 776 |
19,875,582 | https://en.wikipedia.org/wiki/Population%20balance%20equation | Population balance equations (PBEs) have been introduced in several branches of modern science, mainly in Chemical Engineering, to describe the evolution of a population of particles. This includes topics like crystallization, leaching (metallurgy), liquid–liquid extraction, gas-liquid dispersions like water electrolysis, liquid-liquid reactions, comminution, aerosol engineering, biology (where the separate entities are cells based on their size or intracellular proteins), polymerization, etc. Population balance equations can be said to be derived as an extension of the Smoluchowski coagulation equation, which describes only the coalescence of particles. PBEs, more generally, define how populations of separate entities develop in specific properties over time. They are a set of integro-partial differential equations which give the mean-field behavior of a population of particles from the analysis of the behavior of a single particle in local conditions.
Particulate systems are characterized by the birth and death of particles. For example, consider precipitation process (formation of solid from liquid solution) which has the subprocesses nucleation, agglomeration, breakage, etc., that result in the increase or decrease of the number of particles of a particular radius (assuming formation of spherical particles). Population balance is nothing but a balance on the number of particles of a particular state (in this example, size).
Formulation of PBE
Consider the average number density of particles, denoted f(x,r,t), with particle state vector (x,r), where x corresponds to particle properties like size, density, etc. (also known as internal coordinates) and r corresponds to spatial position (the external coordinates), dispersed in a continuous phase defined by a phase vector Y(r,t) (which again is a function of all such vectors which denote the phase properties at various locations). Hence f gives the particle characteristics in the property and space domains. Let h(x,r,Y,t) denote the birth rate of particles per unit volume of particle state space, so the number conservation can be written as
This is a generalized form of PBE.
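The general equation itself is not reproduced above; for orientation, one commonly quoted form (following Ramkrishna's formulation, with growth-rate terms Ẋ and Ṙ for the internal and external coordinates introduced here for illustration, since they are not defined in the text) is:

```latex
\frac{\partial f(\mathbf{x},\mathbf{r},t)}{\partial t}
+ \nabla_{\mathbf{x}} \cdot \big[\dot{\mathbf{X}}(\mathbf{x},\mathbf{r},\mathbf{Y},t)\, f(\mathbf{x},\mathbf{r},t)\big]
+ \nabla_{\mathbf{r}} \cdot \big[\dot{\mathbf{R}}(\mathbf{x},\mathbf{r},\mathbf{Y},t)\, f(\mathbf{x},\mathbf{r},t)\big]
= h(\mathbf{x},\mathbf{r},\mathbf{Y},t)
```

Breakage and aggregation contributions are usually folded into the source term h as birth-minus-death integrals over the particle state space.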
Solution to PBE
Monte Carlo methods, discretization methods and moment methods are mainly used to solve these equations. The choice depends on the application and computing infrastructure.
References
Partial differential equations
Mathematical and theoretical biology | Population balance equation | Mathematics | 480 |
3,193,769 | https://en.wikipedia.org/wiki/Lucien%20Rudaux | Lucien Rudaux (; 1874–1947) was a French artist and astronomer, who created famous paintings of space themes in the 1920s and 1930s.
The Rudaux crater on Mars and the Lucien Rudaux Memorial Award are named in his honor. The asteroid 3574 Rudaux is also named for him.
Biography
Lucien Rudaux was the son of the painter Edmond Rudaux, and grandfather by marriage of the French physicist Francis Rocard.
In 1892, he joined the Société astronomique de France. In 1894, he founded an observatory in Donville. In 1895–1896, he completed his military service at Granville.
From 1903, he was a science writer and artist for Nature and, from 1905, for L'Illustration.
He was in military service from August 1914 in the 79th Territorial Infantry Regiment. In 1915 he joined the 10th nursing section until 1917.
In 1936, he lived in 113 Boulevard Saint-Michel in Paris.
In 1912 he was appointed an Officer of Public Instruction. He was a member of the Astronomical Society of France and the National Meteorological Office. In 1936, he was awarded a knighthood (Chevalier) in the Legion of Honour.
Astronomical activities
He was the director of a small observatory, Donville-les-Bains in Normandy, and contributed to the establishment of the "Astronomy" in the "Palais de la découverte".
Books
L. Rudaux, G. Vaucouleurs; Astronomy (1962)
Publications in French
, illustrated by Lucien Rudaux.
(later editions 1952, with collaborator Gérard de Vaucouleurs)
(later edition. 1990)
(later editions. 1952, 1956)
Notes and references
http://iaaa.org/gallery/rudaux/
http://www.fabiofeminofantascience.org/PAUL/PAUL2.html
1874 births
1947 deaths
Space artists
20th-century French astronomers
19th-century French painters
French male painters
20th-century French painters
20th-century French male artists
19th-century French astronomers
19th-century French male artists | Lucien Rudaux | Astronomy | 422 |
31,187,897 | https://en.wikipedia.org/wiki/Mumford%27s%20compactness%20theorem | In mathematics, Mumford's compactness theorem states that the space of compact Riemann surfaces of fixed genus g > 1 with no closed geodesics of length less than some fixed ε > 0 in the Poincaré metric is compact. It was proved by David Mumford as a consequence of a theorem about the compactness of sets of discrete subgroups of semisimple Lie groups generalizing Mahler's compactness theorem.
References
Riemann surfaces
Kleinian groups
Compactness theorems | Mumford's compactness theorem | Mathematics | 98 |
39,783,039 | https://en.wikipedia.org/wiki/Function%20of%20several%20real%20variables | In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument, with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex. However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions, by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article.
The domain of a function of n variables is the subset of ℝ^n for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of ℝ^n.
General definition
A real-valued function of n real variables is a function that takes as input n real numbers, commonly represented by the variables x1, x2, …, xn, for producing another real number, the value of the function, commonly denoted f(x1, x2, …, xn). For simplicity, in this article a real-valued function of several real variables will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified.
Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset X of ℝ^n, the domain of the function, which is always supposed to contain an open subset of ℝ^n. In other words, a real-valued function of n real variables is a function
f : X → ℝ
such that its domain X is a subset of ℝ^n that contains a nonempty open set.
An element of ℝ^n being an n-tuple (x1, x2, …, xn) (usually delimited by parentheses), the general notation for denoting functions would be f((x1, x2, …, xn)). The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write f(x1, x2, …, xn).
It is also common to abbreviate the n-tuple (x1, x2, …, xn) by using a notation similar to that for vectors, like boldface x, an underline, or an overarrow. This article will use bold x.
A simple example of a function in two variables could be:
V(A, h) = Ah/3,
which is the volume V of a cone with base area A and height h measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive.
For an example of a function in two variables:
z = f(x, y) = ax + by,
where a and b are real non-zero constants. Using the three-dimensional Cartesian coordinate system, where the xy plane is the domain ℝ² and the z axis is the codomain ℝ, one can visualize the image to be a two-dimensional plane, with a slope of a in the positive x direction and a slope of b in the positive y direction. The function is well-defined at all points (x, y) in ℝ². The previous example can be extended easily to higher dimensions:
z = f(x1, x2, …, xn) = a1x1 + a2x2 + ⋯ + anxn,
for non-zero real constants a1, a2, …, an, which describes an n-dimensional hyperplane.
The Euclidean norm:
f(x) = ‖x‖ = √(x1² + ⋯ + xn²)
is also a function of n variables which is everywhere defined, while
g(x) = 1/f(x)
is defined only for x ≠ (0, 0, …, 0).
For a non-linear example function in two variables:
which takes in all points in , a disk of radius "punctured" at the origin in the plane , and returns a point in . The function does not include the origin , if it did then would be ill-defined at that point. Using a 3d Cartesian coordinate system with the xy-plane as the domain , and the z axis the codomain , the image can be visualized as a curved surface.
The function can be evaluated at the point in :
However, the function couldn't be evaluated at, say
since these values of and do not satisfy the domain's rule.
Image
The image of a function is the set of all values of when the -tuple runs in the whole domain of . For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function.
The preimage of a given real number is called a level set. It is the set of the solutions of the equation .
Domain
The domain of a function of several real variables is a subset of ℝ^n that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often (but not always) not harmful to identify f and f|Y, and to omit the restrictor |Y.
Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation.
Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function , it may be difficult to specify the domain of the function If is a multivariate polynomial, (which has as a domain), it is even difficult to test whether the domain of is also . This is equivalent to test whether a polynomial is always positive, and is the object of an active research area (see Positive polynomial).
Algebraic structure
The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way:
For every real number , the constant function is everywhere defined.
For every real number and every function , the function: has the same domain as (or is everywhere defined if ).
If and are two functions of respective domains and such that contains a nonempty open subset of , then and are functions that have a domain containing .
It follows that the functions of variables that are everywhere defined and the functions of variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (-algebras). This is a prototypical example of a function space.
One may similarly define
which is a function only if the set of the points in the domain of such that contains an open subset of . This constraint implies that the above two algebras are not fields.
Univariable functions associated with a multivariable function
One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if is a point of the interior of the domain of the function , we can fix the values of to respectively, to get a univariable function
whose domain contains an interval centered at . This function may also be viewed as the restriction of the function to the line defined by the equations for .
Other univariable functions may be defined by restricting to any line passing through . These are the functions
where the are real numbers that are not all zero.
In next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true.
Continuity and limit
Until the second part of 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth to define this notion without reference to the general notion of continuous maps between topological space.
For defining the continuity, it is useful to consider the distance function of , which is an everywhere defined function of real variables:
A function is continuous at a point which is interior to its domain, if, for every positive real number , there is a positive real number such that for all such that . In other words, may be chosen small enough for having the image by of the ball of radius centered at contained in the interval of length centered at . A function is continuous if it is continuous at every point of its domain.
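A compact restatement of this definition in symbols (using the Euclidean distance on ℝ^n, with the point written a = (a1, …, an)) is:

```latex
f \text{ is continuous at } \mathbf{a} \iff
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
\|\mathbf{x}-\mathbf{a}\| < \delta \implies |f(\mathbf{x}) - f(\mathbf{a})| < \varepsilon,
\qquad
\|\mathbf{x}-\mathbf{a}\| = \sqrt{\textstyle\sum_{i=1}^{n}(x_i - a_i)^2}.
```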
If a function is continuous at a = (a1, …, an), then all the univariate functions that are obtained by fixing all the variables except one at the value given by a are continuous at a. The converse is false; this means that all these univariate functions may be continuous for a function that is not continuous at a. For an example, consider the function f such that f(0, 0) = 0, and is otherwise defined by
f(x, y) = xy / (x² + y²).
The functions x ↦ f(x, 0) and y ↦ f(0, y) are both constant and equal to zero, and are therefore continuous. The function f is not continuous at (0, 0), because, if y = x and x ≠ 0, we have f(x, x) = 1/2, even if |x| is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through (0, 0) are also continuous. In fact, we have
f(x, λx) = λ / (1 + λ²)
for x ≠ 0.
The limit at a point of a real-valued function of several real variables is defined as follows. Let be a point in topological closure of the domain of the function . The function, has a limit when tends toward , denoted
if the following condition is satisfied:
For every positive real number , there is a positive real number such that
for all in the domain such that
If the limit exists, it is unique. If is in the interior of the domain, the limit exists if and only if the function is continuous at . In this case, we have
When is in the boundary of the domain of , and if has a limit at , the latter formula allows to "extend by continuity" the domain of to .
Symmetry
A symmetric function is a function that is unchanged when two variables and are interchanged:
where and are each one of . For example:
is symmetric in since interchanging any pair of leaves unchanged, but is not symmetric in all of , since interchanging with or or gives a different function.
Function composition
Suppose the functions
or more compactly , are all defined on a domain . As the -tuple varies in , a subset of , the -tuple varies in another region a subset of . To restate this:
Then, a function of the functions defined on ,
is a function composition defined on , in other terms the mapping
Note the numbers and do not need to be equal.
For example, the function
defined everywhere on can be rewritten by introducing
which is also everywhere defined in to obtain
Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations.
Calculus
Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus.
Partial derivatives
Partial derivatives can be defined with respect to each variable:
∂f/∂xj (x1, …, xn) = lim_{h → 0} [ f(x1, …, xj + h, …, xn) − f(x1, …, xn) ] / h,  for j = 1, …, n.
Partial derivatives themselves are functions, each of which represents the rate of change of parallel to one of the axes at all points in the domain (if the derivatives exist and are continuous—see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number.
For real-valued functions of a real variable, , its ordinary derivative is geometrically the gradient of the tangent line to the curve at all points in the domain. Partial derivatives extend this idea to tangent hyperplanes to a curve.
The second order partial derivatives can be calculated for every pair of variables:
Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes.
This leads to a variety of possible stationary points: global or local maxima, global or local minima, and saddle points—the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization.
In general, partial derivatives of higher order have the form:
where are each integers between and such that , using the definitions of zeroth partial derivatives as identity operators:
The number of possible partial derivatives increases with , although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives. This reduces the number of partial derivatives to calculate for some .
Multivariable differentiability
A function is differentiable in a neighborhood of a point if there is an -tuple of numbers dependent on in general, , so that:
where as . This means that if is differentiable at a point , then is continuous at , although the converse is not true - continuity in the domain does not imply differentiability in the domain. If is differentiable at then the first order partial derivatives exist at and:
for , which can be found from the definitions of the individual partial derivatives, so the partial derivatives of exist.
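One standard way to write the differentiability condition sketched in the preceding paragraphs (the coefficient names c1, …, cn are chosen here for illustration; they turn out to be the first-order partial derivatives) is:

```latex
f(\mathbf{a}+\mathbf{h}) = f(\mathbf{a}) + \sum_{i=1}^{n} c_i h_i + \alpha(\mathbf{h})\,\|\mathbf{h}\|,
\qquad \alpha(\mathbf{h}) \to 0 \text{ as } \mathbf{h} \to \mathbf{0},
\qquad c_i = \frac{\partial f}{\partial x_i}(\mathbf{a}).
```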
Assuming an -dimensional analogue of a rectangular Cartesian coordinate system, these partial derivatives can be used to form a vectorial linear differential operator, called the gradient (also known as "nabla" or "del") in this coordinate system:
used extensively in vector calculus, because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus.
Then substituting the gradient (evaluated at with a slight rearrangement gives:
where denotes the dot product. This equation represents the best linear approximation of the function at all points within a neighborhood of . For infinitesimal changes in and as :
which is defined as the total differential, or simply differential, of , at . This expression corresponds to the total infinitesimal change of , by adding all the infinitesimal changes of in all the directions. Also, can be construed as a covector with basis vectors as the infinitesimals in each direction and partial derivatives of as the components.
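In symbols, the gradient and the total differential referred to above read (a standard formulation, stated here for reference):

```latex
\nabla f = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right),
\qquad
df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,dx_i = \nabla f \cdot d\mathbf{x}.
```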
Geometrically is perpendicular to the level sets of , given by which for some constant describes an -dimensional hypersurface. The differential of a constant is zero:
in which is an infinitesimal change in in the hypersurface , and since the dot product of and is zero, this means is perpendicular to .
In arbitrary curvilinear coordinate systems in dimensions, the explicit expression for the gradient would not be so simple - there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1.
Differentiability classes
If all first order partial derivatives evaluated at a point a in the domain:
∂f/∂xj (a),  j = 1, …, n,
exist and are continuous for all a in the domain, f has differentiability class C¹. In general, if all order-p partial derivatives evaluated at a point a:
∂^p f / (∂x1^{p1} ∂x2^{p2} ⋯ ∂xn^{pn}) (a),  with p1 + p2 + ⋯ + pn = p,
exist and are continuous, where p1, p2, …, pn and p are as above, for all a in the domain, then f is differentiable to order p throughout the domain and has differentiability class C^p.
If f is of differentiability class C^∞, f has continuous partial derivatives of all orders and is called smooth. If f is an analytic function and equals its Taylor series about any point in the domain, the notation C^ω denotes this differentiability class.
Multiple integration
Definite integration can be extended to multiple integration over the several real variables with the notation;
where each region is a subset of or all of the real line:
and their Cartesian product gives the region to integrate over as a single set:
an -dimensional hypervolume. When evaluated, a definite integral is a real number if the integral converges in the region of integration (the result of a definite integral may diverge to infinity for a given region, in such cases the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration.
The integral of a real-valued function of a real variable with respect to has geometric interpretation as the area bounded by the curve and the -axis. Multiple integrals extend the dimensionality of this concept: assuming an -dimensional analogue of a rectangular Cartesian coordinate system, the above definite integral has the geometric interpretation as the -dimensional hypervolume bounded by and the axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent).
While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if is some scalar density field and are the position vector coordinates, i.e. some scalar quantity per unit n-dimensional hypervolume, then integrating over the region gives the total amount of quantity in . The more formal notions of hypervolume is the subject of measure theory. Above we used the Lebesgue measure, see Lebesgue integration for more on this topic.
Theorems
With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions. Evaluating a mixture of integrals and partial derivatives can be done by using theorem differentiation under the integral sign.
Vector calculus
One can collect a number of functions each of several real variables, say
into an -tuple, or sometimes as a column vector or row vector, respectively:
all treated on the same footing as an -component vector field, and use whichever form is convenient. All the above notations have a common compact notation . The calculus of such vector fields is vector calculus. For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus.
Implicit functions
A real-valued implicit function of several real variables is not written in the form "y = f(x1, x2, …, xn)". Instead, the mapping is from the space ℝ^(n+1) to the zero element in ℝ (just the ordinary zero 0):
φ(x1, x2, …, xn, y) = 0
is an equation in all the variables. Implicit functions are a more general way to represent functions, since if:
y = f(x1, x2, …, xn)
then we can always define:
φ(x1, x2, …, xn, y) = y − f(x1, x2, …, xn) = 0
but the converse is not always possible, i.e. not all implicit functions have an explicit form.
For example, using interval notation, let
Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin with constant semi-major axes , along the positive x, y and z axes respectively. In the case , we have a sphere of radius centered at the origin. Other conic section examples which can be described similarly include the hyperboloid and paraboloid, more generally so can any 2D surface in 3D Euclidean space. The above example can be solved for , or ; however it is much tidier to write it in an implicit form.
For a more sophisticated example:
for non-zero real constants , this function is well-defined for all , but it cannot be solved explicitly for these variables and written as "", "", etc.
The implicit function theorem of more than two real variables deals with the continuity and differentiability of the function, as follows. Let be a continuous function with continuous first order partial derivatives, and let ϕ evaluated at a point be zero:
and let the first partial derivative of with respect to evaluated at be non-zero:
Then, there is an interval containing , and a region containing , such that for every in there is exactly one value of in satisfying , and is a continuous function of so that . The total differentials of the functions are:
Substituting into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of with respect to in terms of the derivatives of the original function, each as a solution of the linear equation
for .
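Writing the solved-for variable as xn (a labeling choice made here for illustration; the text leaves the variable names implicit), the resulting first-order partial derivatives are:

```latex
\frac{\partial \phi}{\partial x_i} + \frac{\partial \phi}{\partial x_n}\,\frac{\partial x_n}{\partial x_i} = 0
\quad\Longrightarrow\quad
\frac{\partial x_n}{\partial x_i} = -\,\frac{\partial \phi / \partial x_i}{\partial \phi / \partial x_n},
\qquad i = 1, \ldots, n-1.
```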
Complex-valued function of several real variables
A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values.
If is such a complex valued function, it may be decomposed as
where and are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions.
This reduction works for the general properties. However, for an explicitly given function, such as:
the computation of the real and the imaginary part may be difficult.
Applications
Multivariable functions of real variables arise inevitably in engineering and physics, because observable physical quantities are real numbers (with associated units and dimensions), and any one physical quantity will generally depend on a number of other quantities.
Examples of real-valued functions of several real variables
Examples in continuum mechanics include the local mass density of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), , and time :
Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields.
Another example is the velocity field, a vector field, which has components of velocity that are each multivariable functions of spatial coordinates and time similarly:
Similarly for other physical vector fields such as electric fields and magnetic fields, and vector potential fields.
Another important example is the equation of state in thermodynamics, an equation relating pressure P, temperature T, and volume V of a fluid; in general it has an implicit form:
φ(P, V, T) = 0
The simplest example is the ideal gas law:
PV − nRT = 0
where n is the number of moles, constant for a fixed amount of substance, and R is the gas constant. Much more complicated equations of state have been empirically derived, but they all have the above implicit form.
Real-valued functions of several real variables appear pervasively in economics. In the underpinnings of consumer theory, utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions, each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory, a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production.
Examples of complex-valued functions of several real variables
Some "physical quantities" may be actually complex valued - such as complex impedance, complex permittivity, complex permeability, and complex refractive index. These are also functions of real variables, such as frequency or time, as well as temperature.
In two-dimensional fluid mechanics, specifically in the theory of the potential flows used to describe fluid motion in 2d, the complex potential
is a complex valued function of the two spatial coordinates and , and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function.
The spherical harmonics occur in physics and engineering as the solution to Laplace's equation, as well as the eigenfunctions of the z-component angular momentum operator, which are complex-valued functions of real-valued spherical polar angles:
In quantum mechanics, the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time :
where each is related by a Fourier transform.
See also
Real coordinate space
Real analysis
Complex analysis
Function of several complex variables
Multivariate interpolation
Scalar fields
References
Mathematical analysis
Real numbers
Multivariable calculus | Function of several real variables | Mathematics | 4,841 |
7,109,264 | https://en.wikipedia.org/wiki/Model%20engine | A model engine is a small internal combustion engine typically used to power a radio-controlled aircraft, radio-controlled car, radio-controlled boat, free flight, control line aircraft, or ground-running tether car model.
Because of the square–cube law, the behaviour of many engines does not always scale up or down at the same rate as the machine's size; usually at best causing a dramatic loss of power or efficiency, and at worst causing them not to work at all. Methanol and nitromethane are common fuels.
Overview
The fully functional, albeit small, engines vary from the most common single-cylinder two-stroke to the exotic single and multiple-cylinder four-stroke, the latter taking shape in boxer, v-twin, inline and radial form, a few Wankel engine designs are also used. Most model engines run on a blend of methanol, nitromethane, and lubricant (either castor or synthetic oil).
Two-stroke model engines, most often designed since 1970 with Schnuerle porting for best performance, typically range in size from .12 cubic inches (2 cubic centimeters) to 1.2 ci (19.6 cc) and generate between .5 horsepower (370 watts) and 5 hp (3.7 kW); they can be as small as .010 ci (.16 cc) and as large as 3–4 ci (49–66 cc). Four-stroke model engines have been made in sizes as small as 0.20 in3 (3.3 cc) for the smallest single-cylinder models, all the way up to 3.05 in3 (50 cc) for the largest single-cylinder units, with twin- and multi-cylinder engines on the market being as small as 10 cc for opposed-cylinder twins and ranging somewhat above 50 cc, even to well above 200 cc, for some model boxer opposed-piston, inline and radial engines. While methanol- and nitromethane-blended "glow fuel" engines are the most common, many larger (especially above 15 cc/0.90 ci displacement) model engines, both two-stroke and a growing number of four-stroke examples, use spark ignition and are primarily fueled with gasoline. Some two- and four-stroke glow-plug methanol aeromodeling engines can, with aftermarket upgrades, have battery-powered, electronically controlled spark ignition systems replace the glow plugs normally used. Model engines refitted in such a manner often run more efficiently on methanol-based glow plug engine fuels, often with the ability to exclude nitromethane altogether from their fuel formulas.
This article concerns itself with the methanol engines; gasoline-powered model engines are similar to those built for use in string trimmers, chainsaws, and other yard equipment, unless they happen to be purpose-built for aeromodeling use, being especially true for four-stroke gasoline-fueled model engines. Such engines usually use a fuel that contains a small percentage of motor oil as a two-stroke engine uses for lubrication purposes, as most model four-stroke engines — be they glow plug or spark ignition — have no built-in reservoir for motor oil in their crankcase or engine block design.
The majority of model engines have used, and continue to use, the two-stroke cycle principle to avoid needing valves in the combustion chamber, but a growing number of model engines use the four-stroke cycle design instead. Both reed valve and rotary valve-type two-strokes are common, with four-stroke model engines using either conventional poppet valve, and rotary valve formats for induction and exhaust.
The engine shown to the right has its carburetor in the center of the zinc alloy casting to the left. (It uses a flow restriction, like the choke on an old car engine, because the venturi effect is not effective on such a small scale.) The valve reed, cross shaped above its retainer spring, is still beryllium copper alloy, in this old engine. The glow plug is built into the cylinder head. Large production volume makes it possible to use a machined cylinder and an extruded crank case (cut away by hand in the example shown). These Cox Bee reed valve engines are notable for their low cost and ability to survive crashes. The components of the engine shown come from several different engines.
Comparison of engines
Images of a glowplug engine and a "diesel" engine are shown below for comparison. The most obvious external difference is seen on top of the cylinder head. The glowplug engine's glow plug has a pinlike terminal for its center contact, which is an electrical connector for the glowplug. The "diesel" engine has a T-bar which is used for adjusting the compression. The cylindrical object behind the glowplug engine is an exhaust silencer or muffler.
Glowplug engines
Glow plugs are used for starting as well as continuing the power cycle. The glow plug consists of a durable, mostly platinum, helically wound wire filament, within a cylindrical pocket in the plug body, exposed to the combustion chamber. A small direct-current voltage (around 1.5 volts) is applied to the glow plug, the engine is then started, and the voltage is removed. The burning of the fuel/air mixture in a glow-plug model engine, which requires methanol for the glow plug to work in the first place, and sometimes uses nitromethane for greater power output and a steadier idle, occurs due to the catalytic reaction of the methanol vapor with the platinum in the filament, which causes the ignition. This keeps the plug's filament glowing hot, and allows it to ignite the next charge.
Since the ignition timing is not controlled electrically, as in a spark ignition engine or by fuel injection, as in an ordinary diesel, it must be adjusted by the richness of the mixture, the ratio of nitromethane to methanol, the compression ratio, the cooling of the cylinder head, the type of glow plug, etc. A richer mixture will tend to cool the filament and so retard ignition, slowing the engine, and a rich mixture also eases starting. After starting the engine can easily be leaned (by adjusting a needle valve in the spraybar) to obtain maximum power. Glowplug engines are also known as nitro engines. Nitro engines require a 1.5 volt ignitor to light the glow plug in the heat sink. Once primed, pulling the starter with the ignitor in will start the engine.
Diesel engines
Diesel engines are an alternative to methanol glow plug engines. These "diesels" run on a mixture of kerosene, ether, castor oil or vegetable oil, and cetane or amyl nitrate booster. Despite their name, their use of compression ignition, and the use of a kerosene fuel that is similar to diesel, model diesels share very little with full-size diesel engines.
Full-size diesel engines, such as those found in a truck, are fuel injected and either two-stroke or four-stroke. They use compression ignition to ignite the mixture: the compression within the cylinder heats the inlet charge sufficiently to cause ignition, without requiring an applied ignition source. A fundamental feature of such engines, unlike petrol (gasoline) engines, is that they draw in air alone and the fuel is only mixed by being injected into the combustion chamber separately. Model diesel engines are instead a carbureted two-stroke using the crankcase for compression. The carburetor supplies a mixture of fuel and air into the engine, with the proportions kept fairly constant and their total volume throttled to control the engine power.
Apart from sharing the diesel's use of compression ignition, their construction has more in common with a small two-stroke motorcycle or lawnmower engine. In addition, model diesels have variable compression ratios. This variable compression is achieved by a "contra-piston" at the top of the cylinder, which can be adjusted by a screwed "T-bar". The swept volume of the engine remains the same, but as the volume of the combustion chamber at top dead centre is changed by adjusting the contra-piston, the compression ratio ((swept volume + combustion chamber volume) / combustion chamber volume) changes accordingly.
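A short sketch of this relationship is given below (the engine sizes and clearance volumes are hypothetical, illustrative values, not figures from the article); it shows how screwing the contra-piston down, which shrinks the combustion-chamber volume, raises the compression ratio.

```python
# Compression ratio of a variable-compression model diesel:
#   CR = (swept volume + combustion-chamber volume) / combustion-chamber volume
def compression_ratio(swept_cc: float, chamber_cc: float) -> float:
    return (swept_cc + chamber_cc) / chamber_cc

# Hypothetical 2.5 cc engine: reducing the clearance volume from 0.15 cc to
# 0.09 cc raises the ratio from roughly 17.7:1 to roughly 28.8:1.
for chamber in (0.15, 0.09):
    print(f"chamber {chamber} cc -> CR {compression_ratio(2.5, chamber):.1f}:1")
```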
Model diesels are found to produce more torque than glow engines of the same displacement, and are thought to get better fuel efficiency, because the same power is produced at a lower rpm, and in a smaller displacement engine. However, the specific power may not be significantly superior to a glow engine, due to the heavier construction needed to assure that the engine can withstand the much higher compression ratio, sometimes reaching 30:1. Diesels also run significantly quieter, due to the more rapid combustion, unlike two-stroke glow engines, in which combustion may still be occurring when the exhaust ports are uncovered, causing a significant amount of noise.
Recent developments in model engineering have produced true diesel model engines, with a traditional injector and injector pump, and these engines operate in the same way as a large diesel engine.
See also
Four-stroking
Glow plug (model engine)
Glow fuel
Nitro engine
Schnuerle porting, used on model two-stroke engines since the 1970s
Makers
Bullitt Engines
Cox Model Engines
Enya Model Engines (two and four-stroke model engines)
FOX Manufacturing
FX Royal Racing Engines
K&B Manufacturing
Laser Engines
LRP electronic (rebranded OS Engines)
Mantua Models
GAUI GPOWER
MECOA
Motori Cipolla
Ninja Engine
Novarossi
nVision
O.S. Engines (two and four-stroke model engines)
OPS (engine)
Picco Micromotori
RB Products
rcvengines
Reds Racing
Saito Seisakusho (four-stroke and model steam engine specialist)
Team Orion
Thunder Tiger
Webra
Yamada Engines (YS) (two and four-stroke model engines)
References
External links
K&B Manufacturing
Yamada Engines
Saito Seisakusho
WEBRA
FOX Manufacturing
MECOA
COX Hobbies
Engine technology
Model engines
Radio control
Scale modeling | Model engine | Physics,Technology | 2,103 |
25,863,044 | https://en.wikipedia.org/wiki/Mesowear | Mesowear is a method used in different branches and fields of biology. The method can be applied to both extant and extinct animals, according to the scope of the study. Mesowear is based on studying an animal's tooth-wear fingerprint. In brief, each animal has particular feeding habits, which cause a characteristic pattern of tooth wear. Rough feeds cause heavy tooth abrasion, while soft feeds cause only moderate abrasion, so browsers have moderately worn teeth and grazers have heavily worn teeth. Scoring systems can quantify tooth-abrasion observations and ease comparisons between individuals.
Mesowear definition
The mesowear method or tooth wear scoring method is a quick and inexpensive process of determining the lifelong diet of a taxon (grazer or browser) and was first introduced in the year 2000.
The mesowear technique can be extended to extinct and also extant animals.
Mesowear analyses require large sample populations (>20), which can be problematic for some localities, but the method yields an accurate depiction of an animal's average lifelong diet. Mesowear analysis is based on the physical properties of ungulate foods as reflected in the relative amounts of attritive and abrasive wear that they cause on the dental enamel of the occlusal surfaces. Mesowear is recorded by examining the buccal apices of molar tooth cusps. Apices are characterized as sharp, rounded, or blunt, and the valleys between them as either high or low. The method has been developed only for selenodont and trilophodont molars, but the principle is readily extendable to other crown types. In collecting the data, the teeth are inspected at close range using a hand lens. Mesowear analysis is insensitive to wear stage as long as the very early and very late stages are excluded.
Mesowear analysis follows standard protocols. Specimens are digitally photographed in labial view so that cusp shape and occlusal relief can be scored.
This method helps zoologists and nutritionists prepare a suitable kind of hay for captive wild herbivores with unknown feeding habits in zoos.
Gravity acting toward the lower teeth causes more abrasion on the lower teeth than on the upper teeth; this fact is part of the basis of the mesowear method.
Shape definition
Sharp: a sharp cusp terminates in a point and has practically no rounded area between the mesial and distal phase I facets.
Round: a rounded cusp has a distinctly rounded tip (apex) without planar facet wear but retains facets on the lower slopes.
Blunt: a blunt cusp lacks distinct facets altogether.
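A minimal illustrative sketch of how such cusp-shape and relief scores might be tallied for a sample is given below. This is an assumed, simplified summary for illustration only; the published mesowear scoring conventions and cluster analyses differ in detail, and the sample values are hypothetical.

```python
# Tally cusp-shape ('sharp'/'round'/'blunt') and relief ('high'/'low') scores
# for a sample of specimens and report the percentages of each category.
from collections import Counter

def mesowear_summary(cusp_shapes, reliefs):
    shapes = Counter(cusp_shapes)
    relief = Counter(reliefs)
    n = sum(shapes.values())
    shape_pct = {k: round(100 * v / n, 1) for k, v in shapes.items()}
    return shape_pct, relief

# Hypothetical sample of 25 specimens (mesowear needs >20 individuals).
shapes = ["sharp"] * 14 + ["round"] * 9 + ["blunt"] * 2
reliefs = ["high"] * 18 + ["low"] * 7
print(mesowear_summary(shapes, reliefs))
# Many sharp cusps with high relief suggest attrition-dominated (browser-like)
# wear; many blunt cusps with low relief suggest abrasion-dominated (grazer) wear.
```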
Terminology
Attrition: this kind of dental wear results from tooth-to-tooth contact, with no external material causing the enamel abrasion. Browsers' feed usually contains fewer abrasive materials (such as silica) because of their selective feeding behaviour, so attrition dominates the wear of browsing ungulates in most cases.
Abrasion: this kind of tooth wear is caused by food rubbing against the tooth; it is more prominent in grazing animals than in browsers.
References
External links
Mesowear Equilibrium in Zoo Animals
Tooth wear as a tool of reconstructing diet in fossil ungulates
EMD - Equus Mesowear Datenbase
Morphological and trophic distinction in the dentitions of two early alcelaphine bovids from Langebaanweg (genus Damalacra)
Biological techniques and tools
Dental anatomy
Ecology
Evolutionary biology | Mesowear | Biology | 730 |
240,960 | https://en.wikipedia.org/wiki/Hydration%20reaction | In chemistry, a hydration reaction is a chemical reaction in which a substance combines with water. In organic chemistry, water is added to an unsaturated substrate, which is usually an alkene or an alkyne. This type of reaction is employed industrially to produce ethanol, isopropanol, and butan-2-ol.
Organic chemistry
Any unsaturated organic compound is susceptible to hydration.
Epoxides to glycol
Several million tons of ethylene glycol are produced annually by the hydration of oxirane, a cyclic compound also known as ethylene oxide:
C2H4O + H2O → HO–CH2CH2–OH
Acid catalysts are typically used.
Alkenes
The general chemical equation for the hydration of alkenes is the following:
RRC=CH2 + H2O → RRC(OH)-CH3
A hydroxyl group (OH−) attaches to one carbon of the double bond, and a proton (H+) adds to the other. The reaction is highly exothermic. In the first step, the alkene acts as a nucleophile and attacks the proton, following Markovnikov's rule. In the second step an H2O molecule bonds to the other, more highly substituted carbon. The oxygen atom at this point has three bonds and carries a positive charge (i.e., the molecule is an oxonium). Another water molecule comes along and takes up the extra proton. This reaction tends to yield many undesirable side products, (for example diethyl ether in the process of creating ethanol) and in its simple form described here is not considered very useful for the production of alcohol.
Two approaches are taken. Traditionally the alkene is treated with sulfuric acid to give alkyl sulphate esters. In the case of ethanol production, this step can be written:
H2SO4 + C2H4 → C2H5-O-SO3H
Subsequently, this sulphate ester is hydrolyzed to regenerate sulphuric acid and release ethanol:
C2H5-O-SO3H + H2O → H2SO4 + C2H5OH
This two step route is called the "indirect process".
In the "direct process," the acid protonates the alkene, and water reacts with this incipient carbocation to give the alcohol. The direct process is more popular because it is simpler. The acid catalysts include phosphoric acid and several solid acids.
An example is the acid-catalyzed hydration of 1-methylcyclohexene to 1-methylcyclohexanol.
Many alternative routes are available for producing alcohols, including the hydroboration–oxidation reaction, the oxymercuration–reduction reaction, the Mukaiyama hydration, the reduction of ketones and aldehydes and as a biological method fermentation.
Alkynes
Acetylene hydrates to give acetaldehyde. The process typically relies on mercury catalysts and has been discontinued in the West but is still practiced in China. The Hg2+ center binds to the C≡C bond, which is then attacked by water. The reaction is:
H2O + C2H2 → CH3CHO
Aldehydes and ketones
Aldehydes, and to some extent even ketones, hydrate to geminal diols. The reaction is especially dominant for formaldehyde, which, in the presence of water, exists significantly as dihydroxymethane.
Conceptually similar reactions include hydroamination and hydroalkoxylation, which involve adding amines and alcohols to alkenes.
Nitriles
Nitriles are susceptible to hydration to amides:
RC≡N + H2O → RC(O)NH2
This reaction requires catalysts. Enzymes are used for the commercial production of acrylamide from acrylonitrile.
Inorganic and materials chemistry
Hydration is an important process in many other applications; one example is the production of Portland cement by the crosslinking of calcium oxides and silicates that is induced by water. Hydration is the process by which desiccants function.
See also
Aquation
References
Addition reactions
General chemistry | Hydration reaction | Chemistry | 881 |
536,858 | https://en.wikipedia.org/wiki/New%20Haven%20Coliseum | New Haven Coliseum, formally known as New Haven Veterans Memorial Coliseum, was a sports and entertainment arena located in downtown New Haven, Connecticut. Construction began in 1968 and was completed in 1972. The Coliseum was officially closed on September 1, 2002, by Mayor John DeStefano Jr., and demolished by implosion on January 20, 2007.
The arena's formal name was New Haven Veterans Memorial Coliseum, but most locals simply referred to it as "New Haven Coliseum". The Coliseum held 11,497 people at full capacity, and occupied 4.5 acres (18,000 m2) of land next to the Knights of Columbus Building and faced the Oak Street Connector/Route 34 downtown spur.
Hosted events
The Coliseum hosted the New Haven Knights of the United Hockey League, New Haven Nighthawks, New Haven Senators, and Beast of New Haven of the American Hockey League, as well as the 1984 Metro Atlantic Athletic Conference and Yale University's 2002 National Invitational Tournament men's college basketball tournament opening round games. Also, it was home of the Connecticut Coasters roller hockey team in 1993, the Connecticut Pride of the IBL during the 2000–01 season, and the New Haven Ninjas af2 team in 2002. The UConn Huskies men's basketball team played home games at the arena as their part-time home from 1978 to 1987. Ice Capades also performed at the Coliseum. New Haven Coliseum was also second home to Yale University Hockey, playing games sporadically at the Coliseum over the years. The U.S.A. Women's Olympic Squad played an exhibition game vs. Sweden on December 15, 2001.
The Coliseum was also known for hosting many concerts during its existence, notable performers included Grateful Dead, Rush, Elvis Presley, Frank Sinatra, Lynyrd Skynyrd, Bee Gees, Queen, Aerosmith, Black Sabbath, Jethro Tull, Pat Benatar, Judas Priest, Bon Jovi, KISS, Phish, and Guns N' Roses.
Most notably, in 1986, the Coliseum served as the setting for Van Halen's multi-platinum concert film Live Without a Net. Many of the era's most prominent musical stars also appeared at the Coliseum. It was also where Eddie Van Halen performed his famous "Eruption" solo, at the concert of August 27, 1986.
The pilot episode of WWE SmackDown was filmed at the Coliseum on April 27, 1999, and aired on UPN two days later.
Tool was the final musical act on August 20, 2002.
The final event held at the Coliseum was a professional wrestling show held by World Wrestling Entertainment, one of the original attractions in the arena since 1972. The WWE considered the Coliseum its home arena, as it was—for much of its history—the closest venue to WWE's headquarters in Stamford, Connecticut. Most matches were broadcast, first on WTNH, as well as on local UHF stations.
History
Construction
The Coliseum was built to replace the New Haven Arena, New Haven's prior indoor sports and entertainment venue. The Coliseum, as well as the neighboring Knights of Columbus building, was designed by the architect Kevin Roche of Roche-Dinkeloo. One interesting aspect of the arena's design was that the parking garage was built on top of the actual Coliseum structure; this was necessitated by a high water table in the area which made it overly difficult to construct sub-surface parking facilities. Though an interesting solution, this design proved unpopular because of the quarter-mile helical ramps required to access the parking. Vincent Scully, the revered architectural historian at nearby Yale University, often referred to the design as "Structural Exhibitionism" in his modern architecture lectures. Other features of the design, such as street storefronts and an exhibition hall, were never completed.
Deterioration and closure
During the 1980s, the structure of the parking garages had deteriorated to the point where large canvas panels had to be attached to the outside to catch pieces of concrete that would occasionally drop off onto the sidewalk below. Renovations were made to correct that problem. The city shut down the facility in 2002 after concluding that it was a drain on city coffers. However, the city did not hold any public hearings, referendum votes, or conduct any surveys, and several groups, local stakeholders, and the Coalition to Save Our Coliseum mounted a campaign to save and renovate the Coliseum, to no avail. Others in the community supported the plan to demolish the arena. Despite Mayor DeStefano's plan to close and demolish the building within six months, it ultimately took more than four years.
Among the reasons for the Coliseum's demise was the construction or renovation (often with state money) in the 1990s of alternative comparably sized venues within the southern Connecticut market. The Arena at Harbor Yard in Bridgeport attracted a minor league hockey team, the Bridgeport Sound Tigers. The Mohegan Sun Arena was built about an hour away, and became the home of the Connecticut Sun. Many musical acts started booking the Oakdale Theatre in the city of Wallingford, Connecticut, after it was upgraded and expanded. Even though the state gave $5.5 million to the arena for new paint, signage, and scoreboards, the Coliseum simply could not compete with newer facilities. Even as early as 1980 the Coliseum was decried as a "white elephant". Mayor DeStefano also had staked out a strategy of investing city resources into arts and cultural activities rather than attracting sports teams to the city.
Demolition
Actual demolition work began in late October 2005 with removal of most of the arena area. At 7:50 a.m. on January 20, 2007, after years of wrangling and delay, the Coliseum was finally imploded, using more than 2,000 pounds of explosive. It was said that the implosion could be heard all the way to Meriden and Northford. As it came down, a massive cloud of dust and smoke covered the surrounding area, but blew away quickly toward the shoreline. Upwards of 20,000 people watched from the nearby Temple Street Garage and other buildings, and residents of nearby apartments were evacuated. The two helical ramps were not imploded, and were subsequently destroyed by conventional methods.
The city has tentative plans to replace the Coliseum with a new downtown/Long Wharf redevelopment plan, including a relocated Long Wharf Theatre and a new campus for Gateway Community College.
A temporary 400-space parking lot opened on the former Coliseum site on December 4, 2007, but plans are advancing to redevelop the site with a hotel, hundreds of housing units and approximately of commercial space. The master developer, LiveWorkLearnPlay had the project approved in 2013, but construction has been delayed due to cost issues related to the moving of utilities and conflicts related to planned highway improvements.
On January 12, 2009, the Knights of Columbus filed a lawsuit against the City of New Haven, Stamford Wrecking Company and Demolition Dynamics Company. The lawsuit seeks repayment for damages incurred to the Knights of Columbus Building and Knights of Columbus Museum across the street from the Coliseum.
After demolition
A poster archiving all concerts held at the Coliseum was installed on the parking lot, on March 4, 2021. It was a commemoration of the Coliseum and rock 'n roll culture.
See also
Last Days of the Coliseum (2010 documentary film)
References
External links
Last Days of the Coliseum 2010 documentary on the history and demolition of the Coliseum
WTNH News Story on Implosion
New Haven Coliseum Implosion Pictures and Videos
Defunct college basketball venues in the United States
Ice hockey venues in Connecticut
Sports venues in New Haven, Connecticut
New Haven Nighthawks
Sports venues completed in 1972
2002 disestablishments in Connecticut
Sports venues demolished in 2007
Defunct indoor arenas in the United States
Demolished sports venues in Connecticut
Indoor soccer venues in the United States
Roche-Dinkeloo buildings
Defunct sports venues in Connecticut
UConn Huskies basketball venues
1972 establishments in Connecticut
Buildings and structures demolished by controlled implosion | New Haven Coliseum | Engineering | 1,624 |
53,992,606 | https://en.wikipedia.org/wiki/Common%20Electrical%20I/O | The Common Electrical I/O (CEI) refers to a series of influential Interoperability Agreements (IAs) that have been published by the Optical Internetworking Forum (OIF). CEI defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s electrical interfaces.
CEI, the Common Electrical I/O
The Common Electrical I/O (CEI) Interoperability Agreement published by the OIF defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s SerDes interfaces. This CEI specification has defined SerDes interfaces for the industry since 2004, and it has been highly influential. The development of electrical interfaces at the OIF began with SPI-3 in 2000, and the first differential interface was published in 2003. The seventh generation electrical interface, CEI-56G, defines five reaches of 56 Gbit/s interfaces. The OIF completed work on its eighth generation through its CEI-112G project. The OIF has launched its ninth generation with its CEI-224G project. CEI has influenced or has been adopted or adapted in many other serial interface standards by many different standards organizations over its long lifetime. SerDes interfaces have been developed based on CEI for most ASIC and FPGA products.
CEI direct predecessors
Throughout the 2000s, the OIF produced an important series of interfaces that influenced the development of multiple generations of devices. Beginning with the donation of the PL-3 interface by PMC-Sierra in 2000, the OIF produced the System Packet Interface (SPI) family of packet interfaces. SPI-3 and SPI-4.2 defined two generations of devices before they were supplanted by the closely related Interlaken standard in the SPI-5 generation in 2006.
The OIF also defined the SerDes Framer Interface (SFI) family of specifications in parallel with SPI. As a part of the SPI-5 and SFI-5 development, a common electrical interface was developed termed SxI-5. SxI-5 abstracted the electrical I/O interface away from the individual SPI and SFI documents. This abstraction laid the groundwork for the highly successful CEI family of Interoperability Agreements and was incorporated in the original release of CEI 1.0 a generation later.
Generations of OIF Electrical Interfaces
Two earlier generations in this development path were defined by some of the same individuals at the ATM Forum in 1994 and 1995. These specifications were called UTOPIA Level 1 and 2. These operated at 25 Mbit/s (0.025 Gbit/s) and 50 Mbit/s per wire single ended and were used in OC-3 (155 Mbit/s) applications. PL-3 was a packet extension of the cells carried by those earlier interfaces.
Public demonstrations
Compliant implementations to the draft CEI-56G IAs were demonstrated in the OIF booth at the Optical Fiber Conference in 2015, 2016 and 2017.
References
Digital electronics
Ethernet
Synchronous optical networking
Fiber-optic communications | Common Electrical I/O | Engineering | 645 |
56,767,513 | https://en.wikipedia.org/wiki/Zariski%27s%20finiteness%20theorem | In algebra, Zariski's finiteness theorem gives a positive answer to Hilbert's 14th problem for the polynomial ring in two variables, as a special case. Precisely, it states:
Given a normal domain A, finitely generated as an algebra over a field k, if L is a subfield of the field of fractions of A containing k such that the transcendence degree of L over k is at most two, then the k-subalgebra L ∩ A is finitely generated.
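In symbols, writing tr.deg for transcendence degree, this restates as (the intersection L ∩ A being taken inside the field of fractions of A):
\[
\operatorname{tr.deg}_k L \le 2 \;\Longrightarrow\; L \cap A \text{ is a finitely generated } k\text{-algebra}.
\]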
References
Hilbert's problems
Invariant theory
Commutative algebra | Zariski's finiteness theorem | Physics,Mathematics | 101 |
20,630,639 | https://en.wikipedia.org/wiki/Oral%20ecology | Oral ecology is the microbial ecology of the microorganisms found in mouths. Oral ecology, like all forms of ecology, involves the study of the living things found in oral cavities as well as their interactions with each other and with their environment. Oral ecology is frequently investigated from the perspective of oral disease prevention, often focusing on conditions such as dental caries (or "cavities"), candidiasis ("thrush"), gingivitis, periodontal disease, and others. However, many of the interactions between the microbiota and oral environment protect from disease and support a healthy oral cavity. Interactions between microbes and their environment can result in the stabilization or destabilization of the oral microbiome, with destabilization believed to result in disease states. Destabilization of the microbiome can be influenced by several factors, including diet changes, drugs or immune system disorders.
History
Bacteria were first detected under the microscope of Dutch scientist Anton van Leeuwenhoek in the late 17th century from his own healthy human oral sample. After using this technology on a healthy sample, Leeuwenhoek applied his tool to the decayed tooth matter of his wife, where he noted that the organisms present were highly similar to those found in cheese. These are believed to have likely been lactic acid bacteria; however, the link between bacterial acid production and tooth decay was not further uncovered until much later. After this discovery and the further development of microscopy, bacteria were found within tooth cavities by multiple scientists throughout the 19th century. Willoughby Miller was the first recorded oral microbiologist, and he performed much of his foundational microbiology research in the laboratory of famed microbiologist Robert Koch. During this time, Miller developed the chemo-parasitic (also referred to as "acidogenic") theory of caries, which proposed that tooth decay is initiated by bacterial acid production on the surface of teeth. This theory is considered to be foundational to the field of dentistry as well as oral ecology, by drawing connections between the activities of microbial entities and their effects on the non-living microscopic environment.
In ecological terms, early work in oral microbiology largely falls into a category of microbial research now described as "reductionist", generally meaning it focused heavily on the isolation of individual microbes before observation or testing. It was not until the late 20th century that "holistic" approaches to oral microbiology came into the mainstream, and thus microbial ecology was intentionally studied. Holistic microbiology considers not only an organism of interest but also the biological and abiotic context in which the organism is naturally found. Scientist Philip Marsh is credited with developing the ecological plaque hypothesis in 1994, in which he proposed that dental plaque can be both normal and healthy as well as "cariogenic" (cavity-creating), depending on the microbial community (or "consortia") present in the biofilm and the community's stability. Furthermore, in his theory, Marsh links the exposure of the microbial community to nonliving environmental influences to the selection and change in microbial constituents that can cause cariogenic conditions.
Oral environment
Teeth, saliva, and oral tissues are the major components of the oral environment in which the oral microbiome resides. Like most environments, some oral environments, such as teeth and saliva, are abiotic (non-living), and some are living, such as the host immune system or host mouth mucosal tissues- including gums, cheek ("buccal") and tongue (when present).
Abiotic
Saliva holds multiple roles in oral ecology. For example, it creates a physical disturbance to microbes through a washing action. Increase in saliva flow via stimulation (i.e. chewing gum) has been shown to diminish cariogenic plaque formation. Saliva is also largely responsible for environmental pH, water content, nutrients, and host-produced immune cells and antimicrobials. One major antimicrobial found in saliva (as well as mucus) is lysozyme, an enzyme that shears bacterial cells. Another critical role that saliva plays in the microscopic environment is supplying the glycoproteins bacteria use to cling to the surface of teeth.
Teeth are another example of the abiotic environmental factors involved in oral ecology. Bacteria settle on the tooth surface as a solid substrate on which they grow. Compared to floating in saliva, bacteria on teeth gain environmental stability so that they experience a consistent environment of temperature, relative oxygen exposure, nutrient density, physical disturbances, etc. While teeth provide stability to the microbial community, the overgrowth of bacteria is known to result in tooth decay primarily due to acid production from sugar-consuming fermentative metabolisms. Some organisms associated with this condition are lactobacilli, which produce the lactic acid that breaks down tooth enamel. Host diet therefore also influences the ecology of the mouth by altering saliva pH and nutrient content, and in this way the microbial life interacts with the oral environment.
Oxygen content is a major variable that can influence the type of microbial flora present in the oral cavity. This variable is slightly unique to the oral cavity due to its exposure to the outside of the host body. In ecology, niches are a set of conditions that can be associated with the presence of a certain organism. Thus, oxygen concentration variation throughout the mouth can be a factor in niche differentiation within this environment. At the microscopic scale, oxygen concentration can dictate where in the mouth aerobic, anaerobic, facultative anaerobic, aerotolerant, or microaerophilic microbes grow or form biofilm. Biofilms themselves can help regulate oxygen exposure and keep anaerobic organisms at the interior, adding to the complexity of the niches within the oral cavity.
Another abiotic environmental influence on oral ecology is the use of drugs, especially antibiotics, particularly those administered orally. Antibiotics can kill oral bacteria as well as cause secondary environmental effects such as a decrease in saliva, leading to further changes in the abiotic microenvironment. The destabilization of the bacteria in a microbiome which results in disease is known as bacterial dysbiosis. For example, the destabilization of the bacterial community in the mouth can lead to a bloom in fungal communities, resulting in diseases such as thrush. Furthermore, the development of antibiotic-resistant populations in response to the treatment can result in an overpopulation of the resistant bacteria after treatment is completed, disturbing the relative abundances found pre-treatment.
Biotic (non-bacterial)
The host of the oral cavity in which the oral ecology is studied is also of importance. This is an example of a biotic, or living, environmental factor. General host health and immune system function are critical to oral microflora, as they determine which microbes are able to survive in the mouth. The innate immune system, which operates in animals continuously regardless of the presence of disease, is most relevant due to its constant role in oral ecology in both healthy and unhealthy hosts. This includes the production of free-floating antibodies, macrophages, and other immune cells present in saliva. At a healthy, stable state, the host immune system permits the colonization of certain microbes by not targeting them. This can be described as "immune equilibrium", the condition in which the host and the microbiota of the oral microbiome coexist symbiotically.
Human
Bacterial
In microbial ecology, the principle of priority effect refers to the competitive advantage some microorganisms gain by colonizing a surface first. It is generally believed that primary colonization occurs by transmission from the mother or her breast milk (vertical transmission), as well as the environment of the newborn (horizontal transmission). It has been found that at different locations in the oral cavity, different microbes are early colonizers. The very initial colonizers of teeth are considered to be Streptococcus, a genus of bacteria that are usually facultative anaerobes that can grow in both aerobic and anaerobic conditions. This is advantageous in an environment that is variably exposed to oxygen throughout the day as well as throughout the oral cavity. Despite over 700 unique species of bacteria being associated with the human mouth, in tooth plaque only between seven and nine "major players" have been repeatedly identified as early colonizers, including Actinomyces, Streptococcus, Neisseria, and Veillonella species. It is believed that the colonization of these specific genera of bacteria influences the stability and homeostasis of the resulting oral microflora. This colonization occurs by the construction of and adhesion to a pellicle made of glycoproteins from host saliva. Upon adhesion to the pellicle, early colonizing bacteria begin to produce the biofilm intended to anchor the colony to the tooth. As is common in microbiomes, this biofilm does not remain a single genus or species. In fact, the vast majority of relevant microbes perform co-aggregation within a biofilm. However, it is understood that not all microbes will co-aggregate together, and amensal activity does occur between specific species, such as S. mutans and P. gingivalis. The interbacterial interactions, as well as the interactions with the host teeth, oxygen conditions, and saliva, are what compose bacterial oral ecology.
Nonbacterial
Bacteria, while being the most abundant, are not the only kind of microbiota present in the oral cavity. Fungal/yeast cells are also present, particularly including the genus Candida. The yeast species C. albicans and C. tropicalis are known as commensals in the human mouth, meaning that they are part of the normal flora and typically live on the host without causing harm. They are the most abundant non-bacterial microbes isolated from the human mouth. As described in the above section, co-aggregation within a biofilm is not uncommon, including the cohabitation of yeasts with bacteria. Candida albicans is known to selectively participate in "dual-species" biofilms with certain species of Streptococcus bacteria through the actual attachment of the yeast to the bacterial cell surface. This allows the yeast to be anchored to the tooth surface indirectly to gain stability.
Some other, but significantly less abundant, non-bacterial microbes in the human mouth include the fungi genera Cryptococcus, Aspergillus, and Fusarium.
References
Further reading
Ecology
Microbiology | Oral ecology | Chemistry,Biology | 2,191 |
77,645,677 | https://en.wikipedia.org/wiki/ORG-24598 | ORG-24598 is a selective inhibitor of the type 1 glycine transporter.
Potential uses
Alcohol use disorder
A test in rats showed that combining varenicline, bupropion, and an indirect glycine agonist (such as ORG-24598) could be beneficial for treatment of alcohol use disorder.
Schizophrenia
Studies have shown that glycine re-uptake inhibitors selective for the type 1 transporter may be useful for the treatment of certain schizophrenia symptoms.
References
Tertiary amines
Amino acids
Trifluoromethyl compounds
Phenol ethers
Glycine reuptake inhibitors | ORG-24598 | Chemistry | 127 |
8,940,450 | https://en.wikipedia.org/wiki/Foundations%20of%20Physics | Foundations of Physics is a monthly journal "devoted to the conceptual bases and fundamental theories of modern physics and cosmology, emphasizing the logical, methodological, and philosophical premises of modern physical theories and procedures". The journal publishes results and observations based on fundamental questions from all fields of physics, including: quantum mechanics, quantum field theory, special relativity, general relativity, string theory, M-theory, cosmology, thermodynamics, statistical physics, and quantum gravity.
Foundations of Physics has been published since 1970. Its founding editors were Henry Margenau and Wolfgang Yourgrau. The 1999 Nobel laureate Gerard 't Hooft was editor-in-chief from January 2007. At that stage, it absorbed the associated journal for shorter submissions Foundations of Physics Letters, which had been edited by Alwyn Van der Merwe since its foundation in 1988. Past editorial board members (which include several Nobel laureates) include Louis de Broglie, Robert H. Dicke, Murray Gell-Mann, Abdus Salam, Ilya Prigogine and Nathan Rosen. Carlo Rovelli was announced as new editor-in-chief in February 2016.
Einstein–Cartan–Evans theory
Between 2003 and 2005, Foundations of Physics Letters published a series of papers by Myron W. Evans claiming to make obsolete well-established results of quantum field theory and general relativity. In 2008, an editorial was written by the new Editor-in-Chief Gerard 't Hooft distancing the journal from the topic of Einstein–Cartan–Evans theory.
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.276. The journal is abstracted and indexed in the following databases:
References
External links
Physics journals
Philosophy of physics
Monthly journals | Foundations of Physics | Physics | 363 |
56,705,877 | https://en.wikipedia.org/wiki/Sheena%20Cruickshank | Sheena Margaret Cruickshank is a British immunologist and Professor in Biomedical Sciences and Public Engagement at the University of Manchester. She researches how immune responses of the gut are started as a result of infection and/or inflammation. Cruickshank is a science communicator.
Education
Cruickshank completed a Bachelor's degree in Biochemistry & Immunology at Strathclyde University. She earned a PhD in immunology in 1998 from the University of Leeds for research investigating the effects of pleiotropic cytokines on liver cells.
Research and career
Since 2007 Cruickshank has worked in the Department of Immunology at the University of Manchester. She uses in vitro and in vivo approaches to characterise crosstalk between immune cells, commensal bacteria, pathogens and epithelial cells. These experiments make use of infectious models, including Toxoplasma gondii and Trichuris muris, to understand immunity regulation in the skin and gut. By identifying how the skin and gut recognise and respond to the microbiome, they are starting to understand how it affects cell function. Cruickshank won the Northwest BioNow award for her test for the management and assessment of Inflammatory Bowel Disease.
In 2018, BBC Radio 4 broadcast Sheena Cruickshank's reflections on her life and career in the series The Life Scientific. Her brother was passionate about science, and particularly marine biology, which inspired her to take an interest in science from an early age. His subsequent illness with cancer and death at a young age helped shape her curiosity about the immune system and why diseases happen.
Public engagement
In 2009, Cruickshank co-created the Worm Wagon, "an interactive program that merges art and science activities to promote awareness of parasitic worm infection". She later created the Wiggling Rangoli, exploring parasites and how parasitic infections impacts people around the world. Cruickshank created the citizen science app Britain Breathing, which teaches the impacts of air pollution. The app maps the incidence of allergies and asthma, and uses data to explore why allergies are increasing and the role of air pollution in allergy development. She is a trustee of and the Public Engagement Secretary for the British Society for Immunology. She has appeared on the BBC and CNN. Cruickshank is interested in ways to empower migrant communities with science using English classes.
She acts as academic lead for public engagement at the University of Manchester, where she has developed its public engagement strategy and is working to enhance public engagement support and development. She blogs about public engagement at the University of Manchester.
Awards and honours
In 2013 she won the Royal Society of Biology Communicator of the Year award and the Manchester International Women’s Day Award in Women & Science, Technology, Engineering and Mathematics. She was a finalist in the 2014 NCCPE Engage and 2016 Biotechnology and Biological Sciences Research Council (BBSRC) Innovator competitions. In 2014 she spoke at QEDcon, a two-day skepticism and pop-science conference. In 2016 she introduced Manchester's Science in the City festival. She was a keynote speaker at the 2017 Bluedot Festival. She spoke at New Scientist Live 2017. Cruickshank was featured as a 2017 Cosmic Superhero in a photographic exhibition at Conway Hall Ethical Society. She spoke about the microbiome at TEDx Manchester in Bridgewater Hall; her talk was called Eat Yourself Healthy.
She won the 2017 Better World and Making a Difference Award for Social Responsibility. She is a 2017-18 American Association for the Advancement of Science (AAAS) Leshner Leadership Institute Public Engagement Fellow.
References
Living people
British immunologists
Women immunologists
Women biochemists
Alumni of the University of Leeds
Academics of Manchester Metropolitan University
Year of birth missing (living people) | Sheena Cruickshank | Chemistry | 790 |
3,837,568 | https://en.wikipedia.org/wiki/CANopen | CANopen is a communication protocol stack and device profile specification for embedded systems used in automation. In terms of the OSI model, CANopen implements the layers above and including the network layer. The CANopen standard consists of an addressing scheme, several small communication protocols and an application layer defined by a device profile. The communication protocols have support for network management, device monitoring and communication between nodes, including a simple transport layer for message segmentation/desegmentation. The lower level protocol implementing the data link and physical layers is usually Controller Area Network (CAN), although devices using some other means of communication (such as Ethernet Powerlink, EtherCAT) can also implement the CANopen device profile.
The basic CANopen device and communication profiles are given in the CiA 301 specification released by CAN in Automation. Profiles for more specialized devices are built on top of this basic profile, and are specified in numerous other standards released by CAN in Automation, such as CiA 401 for I/O-modules and CiA 402 for motion control.
Device model
Every CANopen device has to implement certain standard features in its controlling software.
A communication unit implements the protocols for messaging with the other nodes in the network.
Starting and resetting the device is controlled via a state machine. It must contain the states Initialization, Pre-operational, Operational and Stopped. The transitions between states are made by issuing a network management (NMT) communication object to the device.
The object dictionary is an array of variables with a 16-bit index. Additionally, each variable can have an 8-bit subindex. The variables can be used to configure the device and reflect its environment, i.e. contain measurement data.
The application part of the device actually performs the desired function of the device, after the state machine is set to the operational state. The application is configured by variables in the object dictionary and the data are sent and received through the communication layer.
Object dictionary
CANopen devices must have an object dictionary, which is used for configuration and communication with the device. An entry in the object dictionary is defined by:
Index, the 16-bit address of the object in the dictionary
Object name (Object Type/Size), a symbolic type of the object in the entry, such as an array, record, or simple variable
Name, a string describing the entry
Type, gives the datatype of the variable (or the datatype of all variables of an array)
Attribute, which gives information on the access rights for this entry, this can be read/write, read-only or write-only
The Mandatory/Optional field (M/O) defines whether a device conforming to the device specification has to implement this object or not
The basic datatypes for object dictionary values such as booleans, integers and floats are defined in the standard (their size in bits is optionally stored in the related type definition, index range 0x0001–0x001F), as well as composite datatypes such as strings, arrays and records (defined in index range 0x0040–0x025F). The composite datatypes can be subindexed with an 8-bit index; the value in subindex 0 of an array or record indicates the number of elements in the data structure, and is of type UNSIGNED8.
For example, the device communication parameters, standardized in the basic device profile CiA 301, are mapped in the index range 0x1000–0x1FFF ("communication profile area"). The first few entries in this area include the device type (0x1000) and the error register (0x1001).
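A minimal Python sketch of how a few such entries might be modelled is shown below. The entries and values used (0x1000 device type, 0x1001 error register, 0x1017 producer heartbeat time) are illustrative assumptions and do not reproduce the full CiA 301 table.

```python
# Minimal, illustrative model of a CANopen object dictionary in Python.
# The example entries (0x1000 device type, 0x1001 error register,
# 0x1017 producer heartbeat time) are assumptions for illustration,
# not a complete CiA 301 listing.

from dataclasses import dataclass

@dataclass
class ODEntry:
    index: int       # 16-bit index
    subindex: int    # 8-bit subindex
    name: str
    datatype: str    # e.g. "UNSIGNED32"
    access: str      # "ro", "rw" or "wo"
    value: int

object_dictionary = {
    (0x1000, 0): ODEntry(0x1000, 0, "Device type", "UNSIGNED32", "ro", 0x00000000),
    (0x1001, 0): ODEntry(0x1001, 0, "Error register", "UNSIGNED8", "ro", 0x00),
    (0x1017, 0): ODEntry(0x1017, 0, "Producer heartbeat time", "UNSIGNED16", "rw", 1000),
}

def read(index: int, subindex: int = 0) -> int:
    """Return the stored value of an object dictionary entry."""
    return object_dictionary[(index, subindex)].value

print(read(0x1017))   # 1000
```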
Given suitable tools, the content of the object dictionary of a device, based on an electronic data sheet (EDS), can be customized to a device configuration file (DCF) to integrate the device into a specific CANopen network. According to CiA 306, the format of the EDS-file is the INI file format. There is an upcoming XML-style format, that is described in CiA 311.
Communication
Communication objects
CAN bus, the data link layer of CANopen, can only transmit short packages consisting of an 11-bit id, a remote transmission request (RTR) bit and 0 to 8 bytes of data. The CANopen standard divides the 11-bit CAN frame id into a 4-bit function code and 7-bit CANopen node ID. This limits the number of devices in a CANopen network to 127 (0 being reserved for broadcast). An extension to the CAN bus standard (CAN 2.0 B) allows extended frame ids of 29 bits, but in practice CANopen networks big enough to need the extended id range are rarely seen.
In CANopen the 11-bit id of a CAN-frame is known as communication object identifier, or COB-ID. In case of a transmission collision, the bus arbitration used in the CAN bus allows the frame with the smallest id to be transmitted first and without a delay. Using a low code number for time critical functions ensures the lowest possible delay.
A CANopen frame thus consists of the 11-bit COB-ID, the RTR bit and 0 to 8 bytes of data.
The data frame with an 11-bit identifier is also called "base frame format".
The default CAN-ID mapping sorts frames by attributing a function code (NMT, SYNC, EMCY, PDO, SDO...) to the first 4 bits, so that critical functions are given priority. This mapping can however be customized for special purposes (except for NMT and SDO, required for basic communication).
The standard reserves certain CAN-IDs to network management and SDO transfers. Some function codes and CAN-IDs have to be mapped to standard functionality after device initialization, but can be configured for other uses later.
Predefined Connection Set
For simple network structures, CANopen supports a predefined allocation of message identifiers.
The transmit and receive directions are from the device's point of view. So a query to a device on the network would send a 0x600+nodeid and get back a 0x580+nodeid.
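The following Python sketch computes these pre-defined identifiers for a given node ID. The base values used (0x080 for SYNC and EMCY, 0x180/0x200 for the first TPDO/RPDO pair, 0x580/0x600 for the SDO channel, 0x700 for the heartbeat) follow commonly published summaries of the pre-defined connection set and should be checked against the current CiA 301 text.

```python
# Illustrative computation of the pre-defined CANopen COB-IDs for one node.
# Base identifiers follow commonly published summaries of the pre-defined
# connection set (assumed here); verify against CiA 301 before relying on them.

def predefined_cob_ids(node_id: int) -> dict:
    assert 1 <= node_id <= 127, "CANopen node IDs range from 1 to 127"
    return {
        "NMT (broadcast)":       0x000,
        "SYNC":                  0x080,
        "EMCY":                  0x080 + node_id,
        "TPDO1":                 0x180 + node_id,
        "RPDO1":                 0x200 + node_id,
        "SDO transmit (server)": 0x580 + node_id,
        "SDO receive (server)":  0x600 + node_id,
        "Heartbeat":             0x700 + node_id,
    }

for name, cob_id in predefined_cob_ids(5).items():
    print(f"{name:24s} 0x{cob_id:03X}")
```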
Communication models
Different kinds of communication models are used in the messaging between CANopen nodes.
In a master/slave relationship, one CANopen node is designated as the master, which sends or requests data from the slaves. The NMT protocol is an example of a master/slave communication model.
A client/server relationship is implemented in the SDO protocol, where the SDO client sends data (the object dictionary index and subindex) to an SDO server, which replies with one or more SDO packages containing the requested data (the contents of the object dictionary at the given index).
A producer/consumer model is used in the Heartbeat and Node Guarding protocols. In the push-model of producer/consumer, the producer sends data to the consumer without a specific request, whereas in the pull model, the consumer has to request the data from the producer.
Protocols
Network management (NMT) protocols
The NMT protocols are used to issue state machine change commands (e.g. to start and stop the devices), detect remote device bootups and error conditions.
The Module control protocol is used by the NMT master to change the state of the devices. The CAN-frame COB-ID of this protocol is always 0, meaning that it has a function code 0 and node ID 0, which means that every node in the network will process this message. The actual node ID, to which the command is meant to, is given in the data part of the message (at the second byte). This can also be 0, meaning that all the devices on the bus should go to the indicated state.
The Heartbeat protocol is used to monitor the nodes in the network and verify that they are alive. A heartbeat producer (usually a slave device) periodically sends a message with the binary function code of 1110 and its node ID (COB-ID = 0x700 + node ID). The data part of the frame contains a byte indicating the node status. The heartbeat consumer reads these messages. If the messages fail to arrive within a certain time limit (defined in the object dictionary of the devices) the consumer can take action to, for example, reset the device or indicate an error.
The frame format is a single CAN message with COB-ID 0x700 + node ID and one data byte carrying the node status.
CANopen devices are required to make the transition from the state Initializing to Pre-operational automatically during bootup. When this transition is made, a single heartbeat message is sent to the bus. This is the bootup protocol.
A response/reply-style (pull model) protocol, called node guarding, exists for slave monitoring.
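A minimal Python sketch of the module control (NMT) and heartbeat frames described above is given below. The NMT command codes (0x01 start, 0x02 stop, 0x80 enter pre-operational, 0x81 reset node) and the heartbeat state values (0x00 boot-up, 0x04 stopped, 0x05 operational, 0x7F pre-operational) are taken from common CANopen references and are assumptions for illustration.

```python
# Illustrative construction of raw NMT and heartbeat frames as
# (COB-ID, data) tuples.  Command codes and state values follow common
# CANopen references and are assumptions for illustration.

NMT_START, NMT_STOP, NMT_PREOP, NMT_RESET = 0x01, 0x02, 0x80, 0x81
STATE_BOOTUP, STATE_STOPPED, STATE_OPERATIONAL, STATE_PREOP = 0x00, 0x04, 0x05, 0x7F

def nmt_command(command: int, node_id: int) -> tuple:
    # COB-ID 0: every node evaluates the frame; node_id 0 addresses all nodes.
    return 0x000, bytes([command, node_id])

def heartbeat(node_id: int, state: int) -> tuple:
    # COB-ID 0x700 + node ID with a single data byte carrying the node state.
    return 0x700 + node_id, bytes([state])

print(nmt_command(NMT_START, 5))        # (0, b'\x01\x05')
print(heartbeat(5, STATE_OPERATIONAL))  # (1797, b'\x05')
```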
Service Data Object (SDO) protocol
The SDO protocol is used for setting and for reading values from the object dictionary of a remote device. The device whose object dictionary is accessed is the SDO server and the device accessing the remote device is the SDO client. The communication is always initiated by the SDO client. In CANopen terminology, communication is viewed from the SDO server, so that a read from an object dictionary results in an SDO upload and a write to a dictionary entry is an SDO download.
Because the object dictionary values can be larger than the eight bytes limit of a CAN frame, the SDO protocol implements segmentation and desegmentation of longer messages. Actually, there are two of these protocols: SDO download/upload and SDO Block download/upload. The SDO block transfer is a newer addition to standard, which allows large amounts of data to be transferred with slightly less protocol overhead.
The COB-IDs of the respective SDO transfer messages from client to server and server to client can be set in the object dictionary. Up to 128 SDO servers can be set up in the object dictionary at addresses 0x1200 - 0x127F. Similarly, the SDO client connections of the device can be configured with variables at 0x1280 - 0x12FF. However the pre-defined connection set defines an SDO channel which can be used even just after bootup (in the Pre-operational state) to configure the device. The COB-IDs of this channel are 0x600 + node ID for receiving and 0x580 + node ID for transmitting.
To initiate a download, the SDO client sends a CAN message with the 'receive' COB-ID of the SDO channel, containing a command byte, the 16-bit index, the 8-bit subindex and up to four data bytes; the individual fields are described below, and a sketch of such a frame follows the list.
ccs is the client command specifier of the SDO transfer, this is 0 for SDO segment download, 1 for initiating download, 2 for initiating upload, 3 for SDO segment upload, 4 for aborting an SDO transfer, 5 for SDO block upload and 6 for SDO block download
n is the number of bytes in the data part of the message which do not contain data, only valid if e and s are set
e, if set, indicates an expedited transfer, i.e. all data exchanged are contained within the message. If this bit is cleared then the message is a segmented transfer where the data does not fit into one message and multiple messages are used.
s, if set, indicates that the data size is specified in n (if e is set) or in the data part of the message
index is the object dictionary index of the data to be accessed, encoded in little endian
subindex is the subindex of the object dictionary variable
data contains the data to be uploaded in the case of an expedited transfer (e is set), or the size of the data to be uploaded (s is set, e is not set), often encoded in little endian
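For illustration, the following Python sketch assembles an expedited download initiate request (all data fit into a single frame) according to the field descriptions above. The example write to object 0x1017:00 (producer heartbeat time) is an assumption chosen only to show the encoding.

```python
# Illustrative encoding of an expedited SDO download initiate request,
# following the field descriptions above: command byte, 16-bit index
# (little endian), 8-bit subindex, up to 4 data bytes.  The target object
# 0x1017:00 (producer heartbeat time) is an assumption used as an example.

import struct

def sdo_expedited_download(node_id: int, index: int, subindex: int,
                           data: bytes) -> tuple:
    assert 1 <= len(data) <= 4, "expedited transfers carry at most 4 data bytes"
    n = 4 - len(data)                             # unused bytes in the data field
    command = (1 << 5) | (n << 2) | (1 << 1) | 1  # ccs=1 (download), e=1, s=1
    payload = struct.pack("<BHB", command, index, subindex) + data.ljust(4, b"\x00")
    return 0x600 + node_id, payload               # client -> server COB-ID

# Write 1000 ms (0x03E8, little endian) to object 0x1017:00 on node 5.
cob_id, frame = sdo_expedited_download(5, 0x1017, 0x00, struct.pack("<H", 1000))
print(hex(cob_id), frame.hex())                   # 0x605 2b171000e8030000
```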
Process Data Object (PDO) protocol
The Process Data Object protocol is used to process real time data among various nodes. You can transfer up to 8 bytes (64 bits) of data per one PDO either from or to the device. One PDO can contain multiple object dictionary entries and the objects within one PDO are configurable using the mapping and parameter object dictionary entries.
There are two kinds of PDOs: transmit and receive PDOs (TPDO and RPDO). The former is for data coming from the device (the device is a data producer) and the latter is for data going to the device (the device is a data consumer); that is, with RPDO you can send data to the device and with TPDO you can read data from the device. In the pre-defined connection set there are identifiers for four TPDOs and four RPDOs available. With configuration, 512 PDOs are possible.
PDOs can be sent synchronously or asynchronously. Synchronous PDOs are sent after the SYNC message whereas asynchronous messages are sent after internal or external trigger. For example, you can make a request to a device to transmit TPDO that contains data you need by sending an empty TPDO with the RTR flag (if the device is configured to accept TPDO requests).
With RPDOs you can, for example, start two devices simultaneously. You only need to map the same RPDO into two or more different devices and make sure those RPDOs are mapped with the same COB-ID.
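A small Python sketch of one PDO mapping entry is shown below. The 32-bit layout used (object index, subindex, length in bits) follows commonly published CANopen documentation, and the mapped object 0x6041:00 is only an illustrative assumption.

```python
# Illustrative encoding of a single PDO mapping entry.  The 32-bit layout
# (object index, subindex, length in bits) follows commonly published
# CANopen documentation; the mapped object 0x6041:00 with 16 bits is only
# an example and is not taken from this article.

def pdo_mapping_entry(index: int, subindex: int, bit_length: int) -> int:
    return (index << 16) | (subindex << 8) | bit_length

print(hex(pdo_mapping_entry(0x6041, 0x00, 16)))   # 0x60410010
```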
Synchronization Object (SYNC) protocol
The Sync-Producer provides the synchronization-signal for the Sync-Consumer. When the Sync-Consumer receive the signal they start carrying out their synchronous tasks.
In general, the fixing of the transmission time of synchronous PDO messages coupled with the periodicity of transmission of the Sync Object guarantees that sensor devices may arrange to sample process variables and that actuator devices may apply their actuation in a coordinated fashion.
The identifier of the Sync Object is available at index 1005h.
Time Stamp Object (TIME) protocol
Usually the Time-Stamp object represents a time as a 6-byte field: a count of milliseconds after midnight (at most 27 bits, stored in a 32-bit field), and an unsigned 16-bit number of days since January 1, 1984. (This will overflow on 7 June 2163.)
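A Python sketch of this encoding is shown below; the little-endian packing of the two fields is an assumption consistent with CANopen's usual byte order.

```python
# Illustrative encoding of the TIME_OF_DAY value described above:
# milliseconds after midnight (32-bit field, at most 28 bits used) followed
# by the number of days since 1 January 1984 (unsigned 16 bits).
# Little-endian packing is assumed, matching CANopen's usual byte order.

import struct
from datetime import datetime, date

def encode_time_of_day(now: datetime) -> bytes:
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    ms_after_midnight = int((now - midnight).total_seconds() * 1000)
    days_since_1984 = (now.date() - date(1984, 1, 1)).days
    return struct.pack("<IH", ms_after_midnight, days_since_1984)

print(encode_time_of_day(datetime(2024, 1, 1, 12, 0, 0)).hex())
```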
Some time critical applications especially in large networks with reduced transmission rates require very accurate synchronization; it may be necessary to synchronize the local clocks with an accuracy in the order of microseconds. This is achieved by using the optional high resolution synchronization protocol which employs a special form of timestamp message to adjust the inevitable drift of the local clocks.
The high-resolution timestamp is encoded as unsigned32 with a resolution of 1 microsecond which means that the time counter restarts every 72 minutes. It is configured by mapping the high resolution time-stamp (object 1013h) into a PDO.
Emergency Object (EMCY) protocol
Emergency messages are triggered by the occurrence of a device internal fatal error situation and are transmitted from the concerned application device to the other devices with high priority. This makes them suitable for interrupt type error alerts. An Emergency Telegram may be sent only once per ‘error event’, i.e. the emergency messages must not be repeated. As long as no new errors occur on a device no further emergency message must be sent.
By means of CANopen Communication Profile defined emergency error codes, the error register and device specific additional information are specified in the device profiles.
Initialization
Sample trace of communications between a master and two pressure transducer slaves configured as node ID 1 and node ID 2.
Electronic Data Sheet
Electronic Data Sheet (EDS) is a file format, defined in CiA306, that describes the communication behaviour and the object dictionary entries of a device. This allows tools such as service tools, configuration tools, development tools, and others to handle the devices properly.
Those EDS files are mandatory for passing the CiA CANopen conformance test.
Since the end of 2007, a new XML-based format called XDD has been defined in CiA311. XDD is conformant to ISO standard 15745.
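Because EDS files use the INI format, they can be inspected with an ordinary INI parser, as in the Python sketch below. The fragment shown is a simplified, hypothetical excerpt; its section and key names are illustrative and are not copied from CiA 306.

```python
# Parsing a simplified, hypothetical EDS fragment with Python's standard
# INI parser.  Section and key names are illustrative only and are not
# copied from the CiA 306 specification.

import configparser

EDS_FRAGMENT = """
[DeviceInfo]
VendorName=ExampleVendor
ProductName=ExampleDevice

[1017]
ParameterName=Producer heartbeat time
AccessType=rw
DefaultValue=0
"""

eds = configparser.ConfigParser()
eds.read_string(EDS_FRAGMENT)
print(eds["DeviceInfo"]["ProductName"])   # ExampleDevice
print(eds["1017"]["ParameterName"])       # Producer heartbeat time
```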
Glossary of CANopen terms
PDO: Process Data Object - Inputs and outputs. Values such as rotational speed, voltage, frequency, electric current, etc.
SDO: Service Data Object - Configuration settings, possibly node ID, baud rate, offset, gain, etc.
COB-ID: Communication object identifier
CAN ID: CAN Identifier. This is the 11-bit CAN message identifier which is at the beginning of every CAN message on the bus.
EDS: Electronic Data Sheet. This is an INI style or XML style formatted file.
DCF: Device Configuration File. This is a modified EDS file with settings for node ID and baud rate.
See also
Controller area network is an article on the CAN bus.
J1939
DeviceNet
IEEE 1451
TransducerML
References
CiA 301 CANopen application layer specification, free downloadable from CAN in Automation
CiA 306 CANopen Electronic Data Sheet (EDS) specification
CiA 311 CANopen XML-EDS specification
Predefined Connection Set from CANopen Basics
CiA 401 CANopen device profile specification for generic I/O modules, free downloadable from CAN in Automation
CiA 402 CANopen device profile for motion controllers and drives (same as IEC 61800-7-201/301)
External links
CANopen Origins - Esprit project ASPIC 1993 (Bosch, Newcastle University, University of Applied Science in Reutlingen)
About CANopen (canopensolutions.com)
Identifier usage in CANopen networks
CanFestival - An open source CANopen multiplatform framework
CanOpenNode - An open source CANopen framework for microcontrollers and Linux
Lely CANopen - An open source CANopen library for masters and slaves
openCANopen - An open source CANopen master
CANopen Stack Project - A flexible open source CANopen stack for microcontroller
CANopen for Python
CANnewsletter-Information on CAN, CANopen and J1939
CANopen educational pages
Introduction to CANopen Fundamentals (in www.canopen-solutions.com)
Wiki of the CANopen-Lift Community
CANeds: Free editor EDA and XDD files
Online portal by CAN in Automation
CANopen - Application layer and general communication profile
CAN bus
Network protocols
Industrial automation | CANopen | Technology,Engineering | 3,744 |
22,775,557 | https://en.wikipedia.org/wiki/Liquid%20rheostat | A liquid rheostat or water rheostat or salt water rheostat is a type of variable resistor.
This may be used as a dummy load or as a starting resistor for large slip ring motors.
In the simplest form it consists of a tank containing brine or other electrolyte solution, in which electrodes are submerged to create an electrical load. The electrodes may be raised or lowered into the liquid to respectively increase or decrease the electrical resistance of the load. To stabilize the load, the mixture must not be allowed to boil.
Modern designs use stainless steel electrodes, and sodium carbonate, or other salts, and do not use the container as one electrode. In some designs the electrodes are fixed and the liquid is raised and lowered by an external cylinder or pump. Motor start systems used for frequent and rapid starts and re-starts, thus a high heat load to the rheostats, may include water circulation to external heat exchangers. In such cases anti-freeze and anti-corrosion additives must be carefully chosen to not change the resistance or support the growth of algae or bacteria.
The salt water rheostat operates at unity power factor and presents a resistance with negligible series inductance compared to a wire-wound equivalent, and it was widely used as a matter of course by generator assemblers until about 20 years ago. Such rheostats are still sometimes constructed on-site for the commissioning of large diesel generators in remote places, where discarded oil drums and scaffold tubes may form an improvised tank and electrodes.
Description
Typically a traditional liquid rheostat consists of a steel cylinder (the negative), about in size, standing on insulators, in which was suspended a hollow steel cylinder. This acted as the positive electrode and was supported by a steel rope and insulator from an adjustable pulley. The water pipe connection included an insulated section. The tank contained salt water, but not at the concentration that could be described as “brine”. The whole device was fenced off for safety.
Operation was very simple, as adding more salt or more water, or varying the height of the centre electrode, would vary the load. The load proved to be quite stable, varying only slightly as the water heated up, though it never came to a boil. Power dissipation was about 1 megawatt, at a potential of about 700 volts and a current of about 1,500 amperes.
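These figures are consistent with a simple Ohm's-law estimate:
\[
P = VI \approx 700\,\mathrm{V} \times 1500\,\mathrm{A} \approx 1.05\,\mathrm{MW}, \qquad R = \frac{V}{I} \approx \frac{700\,\mathrm{V}}{1500\,\mathrm{A}} \approx 0.47\,\Omega.
\]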
Advantages and disadvantages
An advantage is silent operation, with none of the fan noise of current resistive grid designs.
Disadvantages include:
corrosion to the copper connection cables and to the wire rope
lack of insulation from ground which may trip a ground detection system
Uses
Railways commonly used salt water load banks in the 1950s to test the output power of diesel-electric locomotives. They were subsequently replaced by specially designed resistive load banks. Some early three-phase AC electric locomotives also used liquid rheostats for starting up the motors and balancing load between multiple locomotives.
Liquid rheostats were sometimes used in large (thousands of kilowatts/horsepower) wound rotor motor drives, to control the rotor circuit resistance and so the speed of the motor. Electrode position could be adjusted with a small electrically operated winch or a pneumatic cylinder. A cooling pump and heat exchanger were provided to allow slip energy to be dissipated into process water or other water system.
Massive rheostats were once used for dimming theatrical lighting, but solid-state components have taken their place in most high-wattage applications.
Current use
High voltage distribution networks use fixed electrolyte resistors to ground the neutral, to provide a current limiting action, so that the voltage across the ground during fault is kept to a safe level. Unlike a solid resistor, the liquid resistor is self healing in the event of overload. Normally the resistance is set up during commissioning, and then left fixed.
Modern motor starters are totally enclosed and the electrode movement is servo motor controlled. Typically a 1 tonne tank will start a 1 megawatt slip ring type motor, but there is considerable variation in start time depending on application.
Safety issues with older designs
The traditional salt-water load bank dates from an earlier, less regulated and less litigious era. To pass current safety legislation, a more enclosed design is required.
They are no more dangerous than electrode heaters, which work on the same principle, but with plain water, or electrical immersion heaters, provided the correct precautions are used. This requires connecting the container to both ground and neutral and breaking all poles with a linked over-current circuit breaker. If in the open, safety barriers are required.
See also
Liquid resistor
Electrode boiler
BS 7671
References
Electric power
Resistive components
Nondestructive testing
Electrochemistry | Liquid rheostat | Physics,Chemistry,Materials_science,Engineering | 1,044 |
29,162,655 | https://en.wikipedia.org/wiki/Depauperate%20ecosystem | A depauperate ecosystem can be defined as an ecosystem that lacks species richness or species diversity. This results in low biodiversity, often due to a lack of the resources necessary to support life. As a result, depauperate ecosystems often exhibit less complexity, having fewer predators and competing species. Because of this, depauperate ecosystems are often slower-growing than more complex ecosystems.
Islands are often an example of depauperate ecosystems because they exhibit lower species diversity and less complex food webs than continental ecosystems. This can make colonization of depauperate ecosystems more difficult, as the colonizing species can exhibit drastic trophic level alterations.
References
Further reading
Ecosystems
Ecology terminology
Habitat | Depauperate ecosystem | Biology | 142 |
3,088,675 | https://en.wikipedia.org/wiki/Profinet | Profinet (usually styled as PROFINET, as a portmanteau for Process Field Network) is an industry technical standard for data communication over Industrial Ethernet, designed for collecting data from, and controlling equipment in industrial systems, with a particular strength in delivering data under tight time constraints. The standard is maintained and supported by Profibus and Profinet International, an umbrella organization headquartered in Karlsruhe, Germany.
Functionalities
Overview
Profinet implements the interfacing to peripherals. It defines the communication with field connected peripheral devices. Its basis is a cascading real-time concept. Profinet defines the entire data exchange between controllers (called "IO-Controllers") and the devices (called "IO-Devices"), as well as parameter setting and diagnosis. IO-Controllers are typically a PLC, DCS, or IPC; whereas IO-Devices can be varied: I/O blocks, drives, sensors, or actuators. The Profinet protocol is designed for the fast data exchange between Ethernet-based field devices and follows the provider-consumer model. Field devices in a subordinate Profibus line can be integrated in the Profinet system seamlessly via an IO-Proxy (representative of a subordinate bus system).
Conformance Classes (CC)
Applications with Profinet can be divided according to the international standard IEC 61784-2 into four conformance classes:
In Conformance Class A (CC-A), only the devices are certified. A manufacturer certificate is sufficient for the network infrastructure. This is why structured cabling or a wireless local area network for mobile subscribers can also be used. Typical applications can be found in infrastructure (e.g. motorway or railway tunnels) or in building automation.
Conformance Class B (CC-B) stipulates that the network infrastructure also includes certified products and is structured according to the guidelines of Profinet. Shielded cables increase robustness and switches with management functions facilitate network diagnostics and allow the network topology to be captured as desired for controlling a production line or machine. Process automation requires increased availability, which can be achieved through media and system redundancy. For a device to adhere to Conformance Class B, it must communicate successfully via Profinet, have two ports (integrated switch), and support SNMP.
With Conformance Class C (CC-C), positioning systems can be implemented with additional bandwidth reservation and application synchronization. Conformance Class C devices additionally communicate via Profinet IRT.
For Conformance Class D (CC-D), Profinet is used via Time-Sensitive Networking (TSN). The same functions can be achieved as with CC-C. In contrast to CC-A and CC-B, the complete communication (cyclic and acyclic) between controller and device takes place on Ethernet layer 2. The Remote Service Interface (RSI) was introduced for this purpose.
Device types
A Profinet system consists of the following devices:
The IO-Controller, which controls the automation task.
The IO-Device, which is a field device, monitored and controlled by an IO-Controller. An IO-Device may consist of several modules and sub-modules.
The IO-Supervisor is software typically based on a PC for setting parameters and diagnosing individual IO-Devices.
System structure
A minimal Profinet IO-System consists of at least one IO-Controller that controls one or more IO-Devices. In addition, one or more IO-Supervisors can optionally be switched on temporarily for the engineering of the IO-Devices if required.
If two IO-Systems are in the same IP network, the IO-Controllers can also share an input signal as shared input, in which they have read access to the same submodule in an IO-Device. This simplifies the combination of a PLC with a separate safety controller or motion control. Likewise, an entire IO-Device can be shared as a shared device, in which individual submodules of an IO-Device are assigned to different IO-Controllers.
Each automation device with an Ethernet interface can simultaneously fulfill the functionality of an IO-Controller and an IO-Device. If a controller for a partner controller acts as an IO-Device and simultaneously controls its periphery as an IO-Controller, the tasks between controllers can be coordinated without additional devices.
Relations
An Application Relation (AR) is established between an IO-Controller and an IO-Device. These ARs are used to define Communication Relations (CR) with different characteristics for the transfer of parameters, cyclic exchange of data and handling of alarms.
Engineering
The project engineering of an IO system is nearly identical to the Profibus in terms of "look and feel":
The properties of an IO-Device are described by the device manufacturer in a GSD file (General Station Description). The language used for this is GSDML (GSD Markup Language) - an XML-based language. The GSD file serves an engineering environment as a basis for planning the configuration of a Profinet IO system.
All Profinet field devices determine their neighbors. This means that field devices can be exchanged in the event of a fault without additional tools and prior knowledge. By reading out this information, the plant topology can be displayed graphically for better clarity.
The engineering can be supported by tools such as PROFINET Commander or PRONETA.
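Since GSDML is XML-based, such files can be inspected with ordinary XML tooling, as in the Python sketch below. The fragment shown is a deliberately simplified, hypothetical example; its element and attribute names are invented for illustration and do not follow the real GSDML schema.

```python
# Reading a deliberately simplified, hypothetical GSDML-like fragment with
# Python's standard XML parser.  Element and attribute names are invented
# for illustration and do not follow the real GSDML schema.

import xml.etree.ElementTree as ET

FRAGMENT = """
<DeviceDescription>
  <Module Name="DigitalInput8" OrderNumber="EX-DI8"/>
  <Module Name="AnalogOutput4" OrderNumber="EX-AO4"/>
</DeviceDescription>
"""

root = ET.fromstring(FRAGMENT)
for module in root.findall("Module"):
    print(module.get("Name"), module.get("OrderNumber"))
```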
Dependability
Profinet is also increasingly being used in critical applications. There is always a risk that the required functions cannot be fulfilled. This risk can be reduced by specific measures as identified by a dependability analyses. The following objectives are in the foreground:
Safety: Ensuring functional safety. The system should go into a safe state in the event of a fault.
Availability: Increasing the availability. In the event of a fault, the system should still be able to perform the minimum required function.
Security: Information security is intended to ensure the integrity of the system.
These goals can interfere with or complement each other.
Functional safety: Profisafe
Profisafe defines how safety-related devices (emergency stop buttons, light grids, overfill prevention devices, ...) communicate with safety controllers via Profinet in such a safe way that they can be used in safety-related automation tasks up to Safety Integrity Level 3 (SIL) according to IEC 61508, Performance Level "e" (PL) according to ISO 13849, or Category 4 according to EN 954-1.
Profisafe implements safe communication via a profile, i.e. via a special format of the user data and a special protocol. It is designed as a separate layer on top of the fieldbus application layer to reduce the probability of data transmission errors. Profisafe messages travel over the standard fieldbus cabling and telegrams. They do not depend on the error detection mechanisms of the underlying transmission channels, and thus support securing of whole communication paths, including backplanes inside controllers or remote I/O. The Profisafe protocol uses error and failure detection mechanisms such as:
Consecutive numbering
Timeout monitoring
Source/destination authentication
Cyclic redundancy checking (CRC)
and is defined in the IEC 61784-3-3 standard.
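The listed mechanisms can be illustrated with a toy safety container: a consecutive number guards against lost or repeated messages, a receive-time check implements the timeout monitoring, and a CRC protects integrity. The layout and field sizes below are purely illustrative and are not the real Profisafe PDU format.

```python
import time
import zlib

def build_safety_pdu(seq: int, payload: bytes) -> bytes:
    # Toy safety container: 2-byte consecutive number + payload + CRC32.
    body = seq.to_bytes(2, "big") + payload
    return body + zlib.crc32(body).to_bytes(4, "big")

def check_safety_pdu(pdu: bytes, expected_seq: int, last_rx_time: float,
                     timeout_s: float = 0.1) -> bytes:
    # Raise on CRC error, wrong consecutive number or watchdog timeout.
    body, crc = pdu[:-4], int.from_bytes(pdu[-4:], "big")
    if zlib.crc32(body) != crc:
        raise ValueError("CRC mismatch")
    seq = int.from_bytes(body[:2], "big")
    if seq != expected_seq:
        raise ValueError(f"consecutive number {seq}, expected {expected_seq}")
    if time.monotonic() - last_rx_time > timeout_s:
        raise TimeoutError("safety watchdog expired")
    return body[2:]

pdu = build_safety_pdu(seq=7, payload=b"\x01")   # e.g. "e-stop released" bit
print(check_safety_pdu(pdu, expected_seq=7, last_rx_time=time.monotonic()))
```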
Increased availability
High availability is one of the most important requirements in industrial automation, both in factory and process automation. The availability of an automation system can be increased by adding redundancy for critical elements. A distinction can be made between system and media redundancy.
System redundancy
System redundancy can also be implemented with Profinet to increase availability. In this case, two IO-Controllers that control the same IO-Device are configured. The active IO-Controller marks its output data as primary. Output data that is not marked is ignored by an IO-Device in a redundant IO-System. In the event of an error, the second IO-Controller can therefore take control of all IO-Devices without interruption by marking its output data as primary. How the two IO-Controllers synchronize their tasks is not defined in Profinet and is implemented differently by the various manufacturers offering redundant control systems.
Media redundancy
Profinet offers two media redundancy solutions. The Media Redundancy Protocol (MRP) allows the creation of a protocol-independent ring topology with a switching time of less than 50 ms. This is often sufficient for standard real-time communication with Profinet. To switch over the redundancy in the event of an error without time delay, the "Media Redundancy for Planned Duplication" (MRPD) must be used as a seamless media redundancy concept. In the MRPD, the cyclic real-time data is transmitted in both directions in the ring-shaped topology. A time stamp in the data packet allows the receiver to remove the redundant duplicates.
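With seamless redundancy, each cyclic frame travels both ways around the ring and the receiver keeps only the first copy that arrives. The sketch below shows that duplicate-elimination idea in a generic form, using a per-frame cycle counter as the discriminator; it does not reproduce the actual MRPD frame layout or filtering rules.

```python
class DuplicateFilter:
    """Accept the first copy of each cycle counter value, drop later copies."""
    def __init__(self):
        self.last_accepted = None

    def accept(self, cycle_counter: int) -> bool:
        if cycle_counter == self.last_accepted:
            return False            # second copy from the other ring direction
        self.last_accepted = cycle_counter
        return True

f = DuplicateFilter()
# Copies of cycles 1 and 2 arrive from both directions of the ring:
for counter in [1, 1, 2, 2]:
    print(counter, "accepted" if f.accept(counter) else "dropped")
```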
Security
The IT security concept for Profinet assumes a defense-in-depth approach. In this approach, the production plant is protected against attacks, particularly from outside, by a multi-level perimeter, including firewalls. Further protection is possible within the plant by dividing it into zones using firewalls. In addition, a security component test ensures that the Profinet components are resistant to overload to a defined extent. This concept is supported by organizational measures in the production plant within the framework of a security management system according to ISO 27001.
Application Profiles
For the devices involved in an automation solution to interact smoothly, their basic functions and services must correspond. Standardization is achieved by "profiles" with binding specifications for functions and services. The possible communication functions of Profinet are restricted, and additional specifications regarding the function of the field device are prescribed. These can be properties that cut across device classes, such as safety-relevant behavior (Common Application Profiles), or device-class-specific properties (Specific Application Profiles). A distinction is made between
Device profiles for e.g. robots, drives (PROFIdrive), process devices, encoders, pumps
Industry Profiles for e.g. laboratory technology or rail vehicles
Integration Profiles for the integration of subsystems such as IO-Link systems
Drives
PROFIdrive is the modular device profile for drive devices. It was jointly developed by manufacturers and users in the 1990s and since then, in conjunction with Profibus and, from version 4.0, also with Profinet, it has covered the entire range from the simplest to the most demanding drive solutions.
Energy
Another profile is PROFIenergy, which includes services for real-time monitoring of energy demand. This was requested in 2009 by the AIDA group of German automotive manufacturers (Audi, BMW, Mercedes-Benz, Porsche and Volkswagen), who wished to have a standardised way of actively managing energy usage in their plants. High-energy devices and sub-systems such as robots, lasers and even paint lines are the target for this profile, which will help reduce a plant's energy costs by intelligently switching the devices into 'sleep' modes to take account of production breaks, both foreseen (e.g. weekends and shut-downs) and unforeseen (e.g. breakdowns).
Process automation
Modern process devices have their own intelligence and can take over part of the information processing or even the overall functionality in automation systems. For integration into a Profinet system, process automation requires two-wire Ethernet in addition to increased availability.
Process devices
The PA Devices profile defines, for different classes of process devices, all functions and parameters typically used along the signal flow from the sensor signal in the process to the pre-processed process value, which is read out to the control system together with a measured-value status. The PA Devices profile contains device data sheets for
Pressure and differential pressure
Level, temperature and flow rate
Analog and digital inputs and outputs
Valves and actuators
Analysis equipment
Advanced Physical Layer
Ethernet Advanced Physical Layer (Ethernet-APL) describes a physical layer for Ethernet communication technology that was developed especially for the requirements of the process industries. The development of Ethernet-APL was driven by the need for communication at high speeds and over long distances, the supply of power and communication signals via a common single twisted-pair (two-wire) cable, as well as protective measures for safe use within explosion-hazardous areas. Ethernet-APL opens the possibility for Profinet to be incorporated into process instruments.
Technology
Profinet protocols
Profinet uses the following protocols in the different layers of the OSI model:
Layers 1-2: Mainly full-duplex connections at 100 Mbit/s, electrical (100BASE-TX) or optical (100BASE-FX) according to IEEE 802.3, are recommended as device connections. Autocrossover is mandatory for all connections so that the use of crossover cables can be avoided. From IEEE 802.1Q, VLAN priority tagging is used. All real-time data are thus given priority 6 and are therefore forwarded by a switch with a minimum delay.
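The 802.1Q tag carries the priority in the upper three bits of its Tag Control Information field. The following sketch builds such a tag for priority 6, as used for Profinet real-time frames according to the paragraph above; it is a generic illustration of the tag layout, not code from a Profinet stack.

```python
def vlan_tag(priority: int, vlan_id: int = 0) -> bytes:
    # Build a 4-byte 802.1Q tag: TPID 0x8100, then PCP(3) | DEI(1) | VID(12).
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | vlan_id      # DEI bit left at 0
    return bytes([0x81, 0x00]) + tci.to_bytes(2, "big")

print(vlan_tag(priority=6).hex())   # -> 8100c000
```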
The Profinet protocol can be recorded and displayed with any Ethernet analysis tool. Wireshark is capable of decoding Profinet telegrams.
The Link Layer Discovery Protocol (LLDP) has been extended with additional parameters, so that in addition to the detection of neighbors, the propagation time of the signals on the connection lines can be communicated.
Layers 3-6: Either the Remote Service Interface (RSI) protocol or the Remote Procedure Call (RPC) protocol is used for the connection setup and the acyclic services. The RPC protocol is used via User Datagram Protocol (UDP) and Internet Protocol (IP) with the use of IP addresses. The Address Resolution Protocol (ARP) is extended for this purpose with the detection of duplicate IP addresses. The Discovery and basic Configuration Protocol (DCP) is mandatory for the assignment of IP addresses. Optionally, the Dynamic Host Configuration Protocol (DHCP) can also be used for this purpose. No IP addresses are used with the RSI protocol. Thus, IP can be used in the operating system of the field device for other protocols such as OPC Unified Architecture (OPC UA).
Layer 7: Various protocols are defined to access the services of the Fieldbus Application Layer (FAL). The RT (Real-Time) protocol serves class A and B applications with cycle times in the range of 1-10 ms. The IRT (Isochronous Real-Time) protocol for application class C allows cycle times below 1 ms for drive technology applications. This can also be achieved with the same services via Time-Sensitive Networking (TSN).
Technology of Conformance Classes
The functionalities of Profinet IO are realized with different technologies and protocols:
Technology of Class A (CC-A)
The basic function of Profinet is the cyclic data exchange between the IO-Controller as producer and several IO-Devices as consumers of the output data, and the IO-Devices as producers and the IO-Controller as consumer of the input data. Each IO data CR between the IO-Controller and an IO-Device defines the amount of data and the cycle times.
All Profinet IO-Devices must support device diagnostics and the safe transmission of alarms via the alarm communication relation (Alarm CR).
In addition, device parameters can be read and written for each Profinet device via the acyclic communication relation (Record Data CR). The data set for the unique identification of an IO-Device, the Identification and Maintenance Data Set 0 (I&M 0), must be supported by all Profinet IO-Devices. Optionally, further information can be stored in a standardized format as I&M 1-4.
For real-time data (cyclic data and alarms), the Profinet Real-Time (RT) telegrams are transmitted directly via Ethernet. UDP/IP is used for the transmission of acyclic data.
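The cyclic producer/consumer exchange and its supervision by a watchdog can be pictured as follows. This is a timing illustration only in plain Python, with no real network I/O; the cycle time and watchdog factor are example numbers, not values mandated by the specification.

```python
import time

CYCLE_S = 0.004            # example 4 ms IO cycle
WATCHDOG_FACTOR = 3        # example: data must arrive within 3 cycles

last_frame_time = time.monotonic()

def on_cyclic_frame():
    # Called whenever a cyclic data frame from the peer arrives.
    global last_frame_time
    last_frame_time = time.monotonic()

def watchdog_expired() -> bool:
    return time.monotonic() - last_frame_time > WATCHDOG_FACTOR * CYCLE_S

# Simulate a few cycles in which frames keep arriving, then silence:
for _ in range(3):
    time.sleep(CYCLE_S)
    on_cyclic_frame()
    print("watchdog expired?", watchdog_expired())
time.sleep(WATCHDOG_FACTOR * CYCLE_S + CYCLE_S)
print("after silence, watchdog expired?", watchdog_expired())
```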
Management of the Application Relations (AR)
The Application Relation (AR) is established between an IO-Controller and every IO-Device to be controlled. The required CRs are defined inside the AR. The Profinet AR life cycle consists of address resolution, connection establishment, parameterization, process IO data exchange / alarm handling, and termination.
Address resolution: A Profinet IO-Device is identified on the Profinet network by its station name. Connection establishment, parameterization and alarm handling are implemented with User Datagram Protocol (UDP), which requires that the device also be assigned an IP address. After identifying the device by its station name, the IO-Controller assigns the pre-configured IP address to the device.
Connection establishment: Connection establishment starts with the IO-Controller sending a connect request to the IO-Device. The connect request establishes an Application Relationship (AR) containing a number of Communication Relationships (CRs) between the IO-Controller and IO-Device. In addition to the AR and CRs, the connect request specifies the modular configuration of the IO-Device, the layout of the process IO data frames, the cyclic rate of IO data exchange and the watchdog. Acknowledgement of the connect request by the IO-Device allows parameterization to follow. From this point forward, both the IO-Device and IO-Controller start exchanging cyclic process I/O data frames. The process I/O data frames don't contain valid data at this point, but they start serving as keep-alive to keep the watchdog from expiring.
Parameterization: The IO-Controller writes parameterization data to each IO-Device sub-module in accordance with the General Station Description Mark-up Language (GSDML) file. Once all sub-modules have been configured, the IO-Controller signals that parameterization has ended. The IO-Device responds by signaling application readiness, which allows process IO data exchange and alarm handling to ensue.
Process IO data exchange / alarm handling: The IO-Device followed by the IO-Controller start to cyclically refresh valid process I/O data. The IO-Controller processes the inputs and controls the outputs of the IO-Device. Alarm notifications are exchanged acyclically between the IO-Controller and IO-Device as events and faults occur.
Termination: The connection between the IO-Device and IO-Controller terminates when the watchdog expires. Watchdog expiry is the result of a failure to refresh cyclic process I/O data by the IO-Controller or the IO-Device. Unless the connection was intentionally terminated at the IO-Controller, the IO-Controller will try to restart the Profinet Application Relation.
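The AR life cycle described above can be modelled as a small state machine. The phase names follow the text; everything else (the class, method names and events) is invented for illustration and is not part of any Profinet API.

```python
PHASES = ["address-resolution", "connect", "parameterize",
          "data-exchange", "terminated"]

class ARLifecycle:
    def __init__(self):
        self.phase = PHASES[0]

    def advance(self):
        # Move to the next phase: resolution -> connect -> parameterize -> data exchange.
        if self.phase not in ("data-exchange", "terminated"):
            self.phase = PHASES[PHASES.index(self.phase) + 1]

    def watchdog_timeout(self):
        # A missed refresh of cyclic process data terminates the AR.
        self.phase = "terminated"

ar = ARLifecycle()
for _ in range(3):
    ar.advance()
print("running phase:", ar.phase)    # data-exchange
ar.watchdog_timeout()
print("after timeout:", ar.phase)    # terminated
```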
Technology of Class B (CC-B)
In addition to the basic Class A functions, Class B devices must support additional functionalities. These functionalities primarily support the commissioning, operation and maintenance of a Profinet IO system and are intended to increase the availability of the Profinet IO system.
Support of network diagnostics with the Simple Network Management Protocol (SNMP) is mandatory. Likewise, the Link Layer Discovery Protocol (LLDP) for neighborhood detection including the extensions for Profinet must be supported by all Class B devices. This also includes the collection and provision of Ethernet port-related statistics for network maintenance. With these mechanisms, the topology of a Profinet IO network can be read out at any time and the status of the individual connections can be monitored. If the network topology is known, automatic addressing of the nodes can be activated by their position in the topology. This considerably simplifies device replacement during maintenance, since no more settings need to be made.
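Reading out the per-port neighbor information yields the physical topology as a graph. The sketch below shows the idea with a hand-written neighbor table of the kind that could be collected via LLDP and SNMP; the device and port names are invented.

```python
# Invented neighbor table: (device, local port) -> (remote device, remote port)
neighbours = {
    ("plc-1",    "port-1"): ("switch-1", "port-3"),
    ("switch-1", "port-1"): ("io-dev-a", "port-1"),
    ("switch-1", "port-2"): ("io-dev-b", "port-1"),
}

# Turn the table into an undirected edge list describing the topology:
edges = sorted({tuple(sorted((local[0], remote[0])))
                for local, remote in neighbours.items()})
for a, b in edges:
    print(a, "<->", b)
```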
High availability of the IO system is particularly important for applications in process automation and process engineering. For this reason, special procedures have been defined for Class B devices using the existing relations and protocols. This allows system redundancy with two IO-Controllers accessing the same IO-Devices simultaneously. In addition, a prescribed procedure, Dynamic Reconfiguration (DR), defines how the configuration of an IO-Device can be changed with the help of these redundant relations without losing control of the IO-Device.
Technology of Class C (CC-C)
For the functionalities of Conformance Class C (CC-C) the Isochronous Real-Time (IRT) protocol is mainly used.
With the bandwidth reservation, a part of the available transmission bandwidth of 100 Mbit/s is reserved exclusively for real-time tasks. A procedure similar to a time-multiplexing method is used. The bandwidth is divided into fixed cycle times, which in turn are divided into phases. The red phase is reserved exclusively for class C real-time data, in the orange phase the time-critical messages are transmitted, and in the green phase the other Ethernet messages are transparently passed through. To ensure that maximum-size Ethernet telegrams can still be passed through transparently, the green phase must be at least 125 μs long. Thus, cycle times under 250 μs are not possible in combination with unchanged Ethernet.
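The 125 μs figure follows from the transmission time of a maximum-size Ethernet frame on a 100 Mbit/s link. A small check of that arithmetic, purely illustrative:

```python
# Transmission time of a maximum-size tagged Ethernet frame at 100 Mbit/s,
# which motivates the >= 125 microsecond green phase mentioned above.
MAX_FRAME_BYTES = 1522 + 8 + 12      # frame + preamble/SFD + inter-frame gap
LINK_BIT_RATE = 100e6                # 100BASE-TX / 100BASE-FX

tx_time_us = MAX_FRAME_BYTES * 8 / LINK_BIT_RATE * 1e6
print(f"max frame needs about {tx_time_us:.1f} microseconds")   # roughly 123 us

GREEN_PHASE_US = 125
# With at least 125 us of every cycle kept free for ordinary Ethernet traffic,
# the shortest cycle quoted in the text for unfragmented Ethernet is:
print("minimum cycle:", 2 * GREEN_PHASE_US, "microseconds")
```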
In order to achieve shorter cycle times down to 31.25 μs, the Ethernet telegrams of the green phase are optionally broken down into fragments. These short fragments are now transmitted via the green phase. This fragmentation mechanism is transparent to the other participants on the Ethernet and therefore not recognizable.
In order to implement these bus cycles for bandwidth reservation, precise clock synchronization of all participating devices including the switches is required with a maximum deviation of 1 μs. This clock synchronization is implemented with the Precision Time Protocol (PTP) according to the IEEE 1588-2008 (1588 V2) standard. All devices involved in the bandwidth reservation must therefore be in the same time domain.
For position control applications for several axes or for positioning processes according to the PROFIdrive drive profile of application classes 4 - 6, not only must communication be timely, but the actions of the various drives on a Profinet must also be coordinated and synchronized. The clock synchronization of the application program to the bus cycle allows control functions to be implemented that are executed synchronously on distributed devices.
If several Profinet devices are connected in a line (daisy chain), it is possible to further optimize the cyclic data exchange with Dynamic Frame Packing (DFP). For this purpose, the controller puts the output data for all devices into a single IRT frame. As the IRT frame passes, each device takes out the data intended for it, so the IRT frame becomes shorter and shorter. For the data from the different devices to the controller, the IRT frame is dynamically assembled. The great efficiency of DFP lies in the fact that the IRT frame is always only as long as necessary and that the data from the controller to the devices can be transmitted in full duplex simultaneously with the data from the devices to the controller.
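The shrinking of the frame along the line can be illustrated with a toy model in which each device strips its own output-data slice from the front of the passing frame. The slice sizes and the function name are invented for the example; real DFP framing is more involved.

```python
from typing import List

def pass_through_line(frame: bytes, slice_sizes: List[int]) -> List[int]:
    # Each device removes its own data slice from the front of the frame;
    # return the frame length seen at each hop (it gets shorter and shorter).
    lengths = []
    for size in slice_sizes:
        lengths.append(len(frame))
        frame = frame[size:]          # this device consumed its slice
    lengths.append(len(frame))        # what remains after the last device
    return lengths

# Controller packs output data for three devices (8, 4 and 12 bytes) in one frame:
frame = bytes(8) + bytes(4) + bytes(12)
print(pass_through_line(frame, [8, 4, 12]))   # [24, 16, 12, 0]
```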
Technology of Class D (CC-D)
Class D offers the same services to the user as Class C, with the difference that these services are provided using the mechanisms of Time-Sensitive Networking (TSN) defined by IEEE.
The Remote Service Interface (RSI) is used as a replacement for the Internet protocol suite. Thus, this application class D is implemented independently of IP addresses. The protocol stack will be smaller and independent of future Internet versions (IPv6).
TSN is not a single, self-contained protocol definition, but a collection of different protocols with different characteristics that can be combined almost arbitrarily for each application. For use in industrial automation, a subset is compiled in the IEC/IEEE 60802 standard "Joint Profile TSN for Industrial Automation". A subset is used in Profinet specification version 2.4 for implementing class D.
In this specification, a distinction is made between two applications:
Isochronous, cyclic data exchange with short, limited latency time (Isochronous Cyclic Real Time) for applications in motion control and distributed control technology
Cyclic data exchange with limited latency time (Cyclic Real Time) for general automation tasks
For the isochronous data exchange the clocks of the participants must be synchronized. For this purpose, the specifications of the Precision Time Protocol according to IEC 61588 for time synchronization with TSN are adapted accordingly.
The telegrams are arranged in queues according to the priorities provided in the VLAN tag. The Time-Aware Shaper (TAS) specifies a clock pulse with which the individual queues are processed in a switch. This leads to a time-slot procedure in which the isochronous cyclic data is transmitted with the highest priority and the cyclic data with the second-highest priority, ahead of all acyclic data. This reduces the latency and also the jitter for the cyclic data. If a low-priority data telegram takes too long, it can be interrupted by a high-priority cyclic data telegram and its transmission resumed afterwards. This procedure is called Frame Preemption and is mandatory for CC-D.
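The queueing behaviour can be sketched as strict-priority selection over per-priority queues, with the higher traffic classes always served first. This is a much-simplified model of the mechanisms named above, not an implementation of the IEEE TSN standards; the priority values and frame labels are examples.

```python
from collections import deque
from typing import Optional

# One queue per traffic class; a higher number means higher priority (as in the VLAN PCP).
queues = {6: deque(), 5: deque(), 0: deque()}

def enqueue(priority: int, frame: str) -> None:
    queues[priority].append(frame)

def dequeue_one() -> Optional[str]:
    # Strict priority: always serve the highest non-empty queue first.
    for prio in sorted(queues, reverse=True):
        if queues[prio]:
            return queues[prio].popleft()
    return None

enqueue(0, "file transfer")
enqueue(5, "cyclic data")
enqueue(6, "isochronous data")
for _ in range(3):
    print(dequeue_one())   # isochronous data, then cyclic data, then file transfer
```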
Implementation of Profinet interface
For the realization of a Profinet interface as a controller or device, Profinet (CC-A and CC-B) imposes no hardware requirements that cannot be met by a common Ethernet interface (100BASE-TX or 100BASE-FX). To enable a simpler line topology, the integration of a switch with two ports in the device is recommended.
For the realization of class C (CC-C) devices, an extension of the hardware with time synchronization with the Precision Time Protocol (PTP) and the functionalities of bandwidth reservation is required. For class D (CC-D) devices, the hardware must support the required functionalities of Time-Sensitive Networking (TSN) according to IEEE standards.
The method of implementation depends on the design and performance of the device and the expected quantities. The alternatives are
Development in-house or with a service provider
Use of ready-made building blocks or individual design
Execution as a fixed-design ASIC, as a reconfigurable FPGA implementation, as a plug-in module, or as a software component.
History
At the general meeting of the Profibus user organisation in 2000, the first concrete discussions for a successor to Profibus based on Ethernet took place. Just one year later, the first specification of Component Based Automation (CBA) was published and presented at the Hanover Fair. In 2002, the Profinet CBA became part of the international standard IEC 61158 / IEC 61784-1.
A Profinet CBA system consists of different automation components. One component comprises all mechanical, electrical and information-technology aspects. The components may have been created with the usual programming tools. To describe a component, a Profinet Component Description (PCD) file is created in XML. A planning tool loads these descriptions and allows the logical connections between the individual components to be created to implement a plant.
The basic idea behind Profinet CBA was that in many cases it is possible to divide an entire automation system into autonomously operating - and thus manageable - subsystems. The structure and functionality may well be found in several plants in identical or slightly modified form. Such so-called Profinet components are normally controlled by a manageable number of input signals. Within the component, a control program written by the user executes the required functionality and sends the corresponding output signals to another controller. The communication of a component-based system is planned instead of programmed. Communication with Profinet CBA was suitable for bus cycle times of approx. 50 to 100 ms.
Individual systems showed how these concepts could be successfully implemented in practice. However, Profinet CBA did not find the expected acceptance in the market and has no longer been listed in the IEC 61784-1 standard since the 4th edition of 2014.
In 2003 the first specification of Profinet IO (IO = Input Output) was published. The application interface of Profibus DP (DP = Decentralized Periphery), which was successful on the market, was adopted and supplemented with current protocols from the Internet. In the following year, the extension with isochronous transmission followed, which made Profinet IO suitable for motion control applications. Profisafe was adapted so that it could also be used via Profinet. With the clear commitment of AIDA to Profinet in 2004, acceptance in the market was established. In 2006 Profinet IO became part of the international standard IEC 61158 / IEC 61784-2.
In 2007, according to the neutral count, 1 million Profinet devices had already been installed; in the following year this number doubled to 2 million. By 2019, a total of 26 million devices sold by the various manufacturers was reported.
In 2019, the specification for Profinet was completed with Time-Sensitive Networking (TSN), thus introducing the CC-D conformance class.
Further reading
Notes
References
External links
PROFIBUS & PROFINET International (PI)
PROFINET Technology Page
PROFIBUS International
PROFIsafe web portal
PROFINET University
wireshark PROFINET Wiki
PROFINET Community Stack
p-net - An open-source PROFINET device stack
Industrial Ethernet | Profinet | Engineering | 5,961 |
23,881,177 | https://en.wikipedia.org/wiki/D-xylose%20absorption%20test | D-xylose absorption test is a medical test performed to diagnose conditions that present with malabsorption of the proximal small intestine due to defects in the integrity of the gastrointestinal mucosa. D-xylose is a monosaccharide, or simple sugar, that does not require enzymes for digestion prior to absorption. Its absorption requires an intact mucosa only. In contrast, polysaccharides require enzymes, such as amylase, to break them down so that they can eventually be absorbed as monosaccharides. This test was previously in use but has been made redundant by antibody tests.
In normal individuals, a 25 g oral dose of D-xylose will be absorbed and excreted in the urine at approximately 4.5 g in 5 hours. A decreased urinary excretion of D-xylose is seen in conditions involving the gastrointestinal mucosa, such as small intestinal bacterial overgrowth and Whipple's disease. In cases of bacterial overgrowth, the values of D-xylose absorption return to normal after treatment with antibiotics. In contrast, if the D-xylose urinary excretion is not normal after a course of antibiotics, then the problem must be due to a non-infectious cause of malabsorption (i.e., celiac disease).
References
Carbohydrate methods
Diagnostic gastroenterology
Urine tests
Blood tests | D-xylose absorption test | Chemistry,Biology | 310 |
76,268,064 | https://en.wikipedia.org/wiki/Electrofreezing | Electrofreezing is the tendency of a material to solidify upon being exposed to an external electric field. Electrofreezing was first introduced by Dufour in 1892. Examples are the electrofreezing of liquid ammonia, supposed to occur naturally during electrical storms in Jupiter-like planets, and ice χ, supposedly a form of high-pressure ice.
Depending on the material, freezing occurs only at certain field intensities, above which the electric fields are strong enough to induce chemical reactions.
References
Physical phenomena
Phase transitions | Electrofreezing | Physics,Chemistry | 103 |
77,821,413 | https://en.wikipedia.org/wiki/Ioxynil | Ioxynil is a post-emergent selective nitrile herbicide. It is used in Australia, New Zealand and Japan to control broadleaf weeds via the inhibition of photosynthesis. It is used notably on onion crops, among others, normally at 300–900 g/ha. It was introduced in 1966. As of 2019, the supply of ioxynil is decreasing, but the herbicide remains effective.
History
Ioxynil and bromoxynil (along with 2,4-DB and MCPB) were patented by Louis Wain as joint head of the chemistry department at Wye College, and coincidentally discovered independently by May & Baker in England screening spare nitriles for herbicide activity, and by Amchem Products Inc in America doing similar screening, all in 1963. Commercial prospects were promising, as cereals could tolerate large amounts, over 2 lbs/ac; even 4 lbs/ac caused only temporary scorching.
Wain theorised the nitrile (-CN) group herbicides ioxynil and bromoxynil because of the group's chemical similarity to a nitro (NO2) group, and following their success, the -SO2CH3 group was explored, leading to the discovery of the methylsulphone herbicides.
May & Baker, a subsidiary of Rhone-Poulenc, began production of ioxynil and the very similar bromoxynil in Norwich in 1965, where it continued for over 40 years. By 1968, ioxynil (as "Buctril") was registered for use in the USA, Canada, the UK, Australia, New Zealand, Japan, the West Indies and most of Europe.
In the 2010s, ioxynil was produced in South Africa to alleviate shortages.
Regulations
Today, ioxynil is banned in the EU and used in Brazil, China (as octanoate), New Zealand, Australia, South Africa and Japan.
The UK followed the EU's ban (taking effect 1 September 2015) and banned the sale of ioxynil, the European approval lapsing under Regulation (EC) 1107/2009.
India and Colombia raised concerns about the EU's maximum allowed residue for ioxynil (amongst other pesticides), saying the EU's stance was too precautionary and not based on evidence, which is as yet inconclusive on genotoxicity. Bayer, Syngenta and others launched a lawsuit against the 2022 ban on exporting EU-prohibited pesticides from the EU to nations where they are legal; however, the French Constitutional Court has upheld the ban.
Properties
Ioxynil is a flammable solid with a weak phenolic smell and decays under UV light. Its ester, ioxynil octanoate (4-cyano-2,6-diiodophenyl octanoate), is likewise a colourless insoluble solid and hydrolyses to ioxynil in basic conditions.
The taste of ioxynil is "slight, not characteristic."
Mechanism and effect
Ioxynil acts via photosynthesis inhibition. It and bromoxynil uncouple oxidative phosphorylation and inhibit photosynthetic phosphorylation. Ioxynil additionally breaks down into iodide ions, which further inhibit plant growth. Ioxynil may also inhibit photoreduction of ferricyanide, fixation of carbon dioxide, and photoreduction of NADP or of endogenous plastoquinone. Ioxynil acts as an electron transport inhibitor and uncoupling agent.
Symptoms on weeds appear after a few hours or days. Areas of collapsed tissue appear, rapidly becoming necrotic. In good conditions on small plants, necrosis may complete within two days but some weeds can take up to three weeks to die. Effectiveness is enhanced if any times of high humidity occur 1 or 2 days after application. Light and temperature speed up herbicidal action.
Ioxynil is a Group C (Australia), C3 (global) or Group 6 (numeric) resistance class herbicide.
Toxicology
Ioxynil is toxic to mammals, with an oral LD50 of 110 mg/kg (rats), a dermal LD50 of 800 mg/kg, and an inhalative LC50 of 0.38 mg/L over four hours. Ioxynil is toxic to fish, with a 96-hour LC50 of 6.8 mg/L, and 3.9 mg/L for daphnia. Plankton and bloodworms are also affected. The oral LD50s in mice, guinea-pigs, rabbits and dogs respectively are 230, 76, 180 and > 100 mg/kg.
Ioxynil can affect the human thyroid via binding to transthyretin, a thyroid hormone-binding protein which transports thyroid hormone in the blood. It can provoke thyroid tumors in rats, and can disrupt heart development in zebrafish.
Environmental fate
Ioxynil is a contact herbicide and has no residual soil activity or translocation, so spray coverage must be thorough as unsprayed weeds will not be controlled; large enough weeds may even contain surviving portions that resprout, and resistance can occur at later growth stages. Translocated chemical may produce chlorosis but is unlikely to be lethal.
Ioxynil bioaccumulates, although it does not linger long in the environment. Ioxynil, bromoxynil, and their octanoate variants leave negligible residues after use on crops, in all cases under 0.01 ppm, the limit of detection, though some inactive content may be adsorbed into the soil.
Lists
Ioxynil is or has been sold under these tradenames: Ioxynil, Unyunox, Totril, Toxynil, Hawk, Hocks, Sanoxynil, Iotril, Certrol, Actril, Actrilawn, Bentrol, Belgran, Bronx, Cipotril, Dantril, Oxytril, Mextrol-Biox, Sanoxynil, Shamseer-2, Stellox, Iotox, Iconix and Trevespan. Some products include multiple active ingredients.
Ioxynil has been sold in formulations also containing bromoxynil and isoproturon.
It is used to control these weeds: bellvine, burr medic, capeweed, chickweed, climbing buckwheat, common heliotrope, common sowthistle, corn gromwell, dandelion, dead nettle, fat-hen, fumitory, green amaranth, green crumbleweed, bittercress, ox tongue, pigweed, potato weed, saffron thistle, scarlet pimpernel, shepherd's purse, slender celery, smallflower mallow, stagger weed, threecornered Jack, three flowered nightshade, turnip weed, Ward's weed, wild radish, wild turnip, wireweed, annual sowthistle, cornbind, musky storksbill, willow weed, buttercup, field pansy, groundsel, plantain, speedwell, stinking mayweed, the knotweed family broadly, in particular tartary buckwheat, the composite or sunflower family, chamomile, mayweed, some borages, fiddlenecks, gromwells and prickly paddy melon.
Crop situations in which ioxynil has been used include: onions, spring onions, welsh onions, garlic onions, cereals, leeks, garlic, shallots, flax, sugarcane, forage grasses, lawns and turf. Peas, oats, maize, sorghum and rice show high tolerance. Limited resistance is seen in lucerne, clover and carrot.
References
External links
Herbicides
Nitriles
Products introduced in 1966
Phenols
Iodoarenes | Ioxynil | Chemistry,Biology | 1,684 |
358,677 | https://en.wikipedia.org/wiki/Obedience | Obedience, in human behavior, is a form of "social influence in which a person yields to explicit instructions or orders from an authority figure". Obedience is generally distinguished from compliance, which some authors define as behavior influenced by peers while others use it as a more general term for positive responses to another individual's request, and from conformity, which is behavior intended to match that of the majority. Depending on context, obedience can be seen as moral, immoral, or amoral. For example, in psychological research, individuals are usually confronted with immoral demands designed to elicit an internal conflict. If individuals still choose to submit to the demand, they are acting obediently.
Humans have been shown to be obedient in the presence of perceived legitimate authority figures, as shown by the Milgram experiment in the 1960s, which was carried out by Stanley Milgram to find out how the Nazis managed to get ordinary people to take part in the mass murders of the Holocaust. The experiment showed that obedience to authority was the norm, not the exception. Regarding obedience, Milgram said that "Obedience is as basic an element in the structure of social life as one can point to. Some system of authority is a requirement of all communal living, and it is only the man dwelling in isolation who is not forced to respond, through defiance or submission, to the commands of others." A similar conclusion was reached in the Stanford prison experiment.
Experimental studies
Classical methods and results
Although other fields have studied obedience, social psychology has been primarily responsible for the advancement of research on obedience. It has been studied experimentally in several different ways.
Milgram's experiment
In one classical study, Stanley Milgram (as part of the Milgram experiment) created a highly controversial yet often replicated study. Like many other experiments in psychology, Milgram's setup involved deception of the participants. In the experiment, subjects were told they were going to take part in a study of the effects of punishment on learning. In reality, the experiment focuses on people's willingness to obey malevolent authority. Each subject served as a teacher of associations between arbitrary pairs of words. After meeting the "teacher" at the beginning of the experiment, the "learner" (an accomplice of the experimenter) sat in another room and could be heard, but not seen. Teachers were told to give the "learner" electric shocks of increasing severity for each wrong answer. If subjects questioned the procedure, the "researcher" (again, an accomplice of Milgram) would encourage them to continue. Subjects were told to ignore the agonized screams of the learner, his desire to be untied and stop the experiment, and his pleas that his life was at risk and that he suffered from a heart condition. The experiment, the "researcher" insisted, had to go on. The dependent variable in this experiment was the voltage amount of shocks administered.
Zimbardo's experiment
The other classical study on obedience was conducted at Stanford University during the 1970s. Phillip Zimbardo was the main psychologist responsible for the experiment. In the Stanford Prison Experiment, college age students were put into a pseudo prison environment in order to study the impacts of "social forces" on participants behavior. Unlike the Milgram study in which each participant underwent the same experimental conditions, here using random assignment half the participants were prison guards and the other half were prisoners. The experimental setting was made to physically resemble a prison while simultaneously inducing "a psychological state of imprisonment".
Results
The Milgram study found that most participants would obey orders even when obedience posed severe harm to others. With encouragement from a perceived authority figure, about two-thirds of the participants were willing to administer the highest level of shock to the learner. This result was surprising to Milgram because he thought that "subjects have learned from childhood that it is a fundamental breach of moral conduct to hurt another person against his will". Milgram attempted to explain how ordinary people were capable of performing potentially lethal acts against other human beings by suggesting that participants may have entered into an agentic state, where they allowed the authority figure to take responsibility for their own actions. Another unanticipated discovery was the tension that the procedure caused. Subjects expressed signs of tension and emotional strain especially after administering the powerful shocks. 3 of the subjects had full-blown uncontrollable seizures, and on one occasion the experiment was stopped.
Zimbardo obtained similar results as the guards in the study obeyed orders and turned aggressive. Prisoners likewise were hostile to and resented their guards. The cruelty of the "guards" and the consequent stress of the "prisoners," forced Zimbardo to terminate the experiment prematurely, after 6 days.
Modern methods and results
The previous two studies greatly influenced how modern psychologists think about obedience. Milgram's study in particular generated a large response from the psychology community. In a modern study, Jerry Burger replicated Milgram's method with a few alterations. Burger's method was identical to Milgram's except when the shocks reached 150 volts, participants decided whether or not they wanted to continue and then the experiment ended (base condition). To ensure the safety of the participants, Burger added a two-step screening process; this was to rule out any participants that may react negatively to the experiment. In the modeled refusal condition, two confederates were used, where one confederate acted as the learner and the other was the teacher. The teacher stopped after going up to 90 volts, and the participant was asked to continue where the confederate left off. This methodology was considered more ethical because many of the adverse psychological effects seen in previous studies' participants occurred after moving past 150 volts. Additionally, since Milgram's study only used men, Burger tried to determine if there were differences between genders in his study and randomly assigned equal numbers of men and women to the experimental conditions.
Using data from his previous study, Burger probed participant's thoughts about obedience. Participants' comments from the previous study were coded for the number of times they mentioned "personal responsibility and the learner's well being". The number of prods the participants used in the first experiment were also measured.
Another study that used a partial replication of Milgram's work changed the experimental setting. In one of the Utrecht University studies on obedience, participants were instructed to make a confederate who was taking an employment test feel uncomfortable. Participants were told to make all of the instructed stress remarks to the confederate that ultimately made him fail in the experimental condition, but in the control condition they were not told to make stressful remarks. The dependent measurements were whether or not the participant made all of the stress remarks (measuring absolute obedience) and the number of stress remarks (relative obedience).
Following the Utrecht studies, another study used the stress remarks method to see how long participants would obey authority. The dependent measures for this experiment were the number of stress remarks made and a separate measure of personality designed to measure individual differences.
Neuroscience has only recently begun to approach the question of obedience, bringing novel but complementary perspectives on how obeying or issuing commands impacts brain functioning, fostering conditions for moral transgressions. The experimental protocol, inspired by Milgram, does not rely on deception and involves real behaviors. A participant assigned the role of agent must either freely decide or receive orders from the experimenter to deliver or withhold a mildly painful electric shock to another participant (the "victim") in exchange for €0.05. In a study conducted in 2020, fMRI results indicated that seeing the shock delivered to the victim triggered activations in the anterior cingulate cortex (ACC) and the anterior insula (AI), key brain regions associated with empathy. However, such activations were lower in the coerced condition compared to the free-choice condition, consistent with participants' subjective perception of the victim’s pain. Activity in brain regions associated with the interpersonal feeling of guilt was also reduced when participants obeyed orders compared to acting freely. Other studies showed that the sense of agency, as measured through the implicit task of time perception, was reduced in the coerced compared to the free-choice condition, suggesting that the sense of agency diminishes when individuals obey orders compared to acting freely. These neuroscience studies highlight how obeying orders alters our natural aversion to hurting others.
Results
Burger's first study had results similar to the ones found in Milgram's previous study. The rates of obedience were very similar to those found in the Milgram study, showing that participants' tendency to obey has not declined over time. Additionally, Burger found that both genders exhibited similar behavior, suggesting that obedience will occur in participants independent of gender.
In Burger's follow-up study, he found that participants that worried about the well-being of the learner were more hesitant to continue the study. He also found that the more the experimenter prodded the participant to continue, the more likely they were to stop the experiment.
The Utrecht University study also replicated Milgram's results. They found that although participants indicated they did not enjoy the task, over 90% of them completed the experiment.
The Bocchiaro and Zimbardo study had similar levels of obedience compared to the Milgram and Utrecht studies. They also found that participants would either stop the experiment at the first sign of the learner's pleas or would continue until the end of the experiment (called "the foot in the door scenario").
In addition to the above studies, additional research using participants from different cultures (including Spain, Australia, and Jordan) also found participants to be obedient.
Implications
One of the major assumptions of obedience research is that the effect is caused only by the experimental conditions, and Thomas Blass' research contests this point, as in some cases participant factors involving personality could potentially influence the results.
In one of Blass' reviews on obedience, he found that participant's personalities can impact how they respond to authority, as people that were high in authoritarian submission were more likely to obey. He replicated this finding in his own research, as in one of his experiments, he found that when watching portions of the original Milgram studies on film, participants placed less responsibility on those punishing the learner when they scored high on measures of authoritarianism.
In addition to personality factors, participants who are resistant to obeying authority had high levels of social intelligence.
Other research
Obedience can also be studied outside of the Milgram paradigm in fields such as economics or political science. One economics study that compared obedience to a tax authority in the lab versus at home found that participants were much more likely to pay participation tax when confronted in the lab. This finding implies that even outside of experimental settings, people will forgo potential financial gain to obey authority.
Another study involving political science measured public opinion before and after a Supreme Court case debating whether or not states can legalize physician-assisted suicide. They found that participants' tendency to obey authorities was not as important to public opinion polling numbers as religious and moral beliefs. Although prior research has demonstrated that the tendency to obey persists across settings, this finding suggests that personal factors like religion and morality can limit how much people obey authority.
Other experiments
The Hofling hospital experiment
Both the Milgram and Stanford experiments were conducted in research settings. In 1966, psychiatrist Charles K. Hofling published the results of a field experiment on obedience in the nurse–physician relationship in its natural hospital setting. Nurses, unaware they were taking part in an experiment, were ordered by unknown doctors to administer dangerous doses of a (fictional) drug to their patients. Although several hospital rules disallowed administering the drug under the circumstances, 21 out of the 22 nurses would have given the patient an overdose.
Cultural attitudes
Many traditional cultures regard obedience as a virtue; historically, societies have expected children to obey their elders (compare patriarchy or matriarchy), slaves their owners, serfs their lords in feudal society, lords their king, and everyone God. Even long after slavery ended in the United States, the Black codes required black people to obey and submit to whites, on pain of lynching. Compare the religious ideal of surrender and its importance in Islam (the word Islam can literally mean "surrender").
In some Christian weddings, obedience was formally included along with honor and love as part of the bride's (but not the bridegroom's) marriage vow. This came under attack with women's suffrage and the feminist movement. The inclusion of this promise to obey has become optional in some denominations.
In the Catholic Church, obedience is seen as one of the evangelical counsels, "undertaken in a spirit of faith and love in the following of Christ".
Learning to obey adult rules is a major part of the socialization process in childhood, and many techniques are used by adults to modify the behavior of children. Additionally, extensive training is given in armies to make soldiers capable of obeying orders in situations where an untrained person would not be willing to follow orders. Soldiers are initially ordered to do seemingly trivial things, such as picking up the sergeant's hat off the floor, marching in just the right position, or marching and standing in formation. The orders gradually become more demanding, until an order to the soldiers to place themselves into the midst of gunfire gets an instinctively obedient response.
Factors affecting obedience
Embodiment of prestige or power
When the Milgram experimenters were interviewing potential volunteers, the participant selection process itself revealed several factors that affected obedience, outside of the actual experiment.
Interviews for eligibility were conducted in an abandoned complex in Bridgeport, Connecticut. Despite the dilapidated state of the building, the researchers found that the presence of a Yale professor as stipulated in the advertisement affected the number of people who obeyed. This was not further researched to test obedience without a Yale professor because Milgram had not intentionally staged the interviews to discover factors that affected obedience. A similar conclusion was reached in the Stanford prison experiment.
In the actual experiment, prestige or the appearance of power was a direct factor in obedience—particularly the presence of men dressed in gray laboratory coats, which gave the impression of scholarship and achievement and was thought to be the main reason why people complied with administering what they thought was a painful or dangerous shock. A similar conclusion was reached in the Stanford prison experiment.
Raj Persaud, in an article in the BMJ, commented on Milgram's attention to detail in his experiment.
Despite the fact that prestige is often thought of as a separate factor, it is, in fact, merely a subset of power as a factor. Thus, the prestige conveyed by a Yale professor in a laboratory coat is only a manifestation of the experience and status associated with it and/or the social status afforded by such an image.
Agentic state and other factors
According to Milgram, "the essence of obedience consists in the fact that a person comes to view himself as the instrument for carrying out another person's wishes, and he therefore no longer sees himself as responsible for his actions. Once this critical shift of viewpoint has occurred in the person, all of the essential features of obedience follow." Thus, "the major problem for the subject is to recapture control of his own regnant processes once he has committed them to the purposes of the experimenter." Besides this hypothetical agentic state, Milgram proposed the existence of other factors accounting for the subject's obedience: politeness, awkwardness of withdrawal, absorption in the technical aspects of the task, the tendency to attribute impersonal quality to forces that are essentially human, a belief that the experiment served a desirable end, the sequential nature of the action, and anxiety.
Belief perseverance
Another explanation of Milgram's results invokes belief perseverance as the underlying cause. What "people cannot be counted on is to realize that a seemingly benevolent authority is in fact malevolent, even when they are faced with overwhelming evidence which suggests that this authority is indeed malevolent. Hence, the underlying cause for the subjects' striking conduct could well be conceptual, and not the alleged 'capacity of man to abandon his humanity ... as he merges his unique personality into larger institutional structures.'"
See also
In animals:
Animal training
Obedience training (for dogs)
Horse breaking
References
External links
Science Aid: Obedience High school level Psychology
Catholic Encyclopedia article on obedience
Authority
Human behavior
Conformity
Social influence
Virtue | Obedience | Biology | 3,347 |
19,030,472 | https://en.wikipedia.org/wiki/Standoff%20distance | Standoff distance is a security term that refers to measures to prevent unscreened and potentially threatening people and vehicles from approaching within a certain distance of a building, car, or other shelter, roadblock or other location, or to a person such as a law enforcement officer or VIP, or to a friendly area / location.
Standoff distance is used when a violent criminal is in a fortified position, when hostages are under armed threat from kidnappers, when a bomb is believed to have been placed, or when other unspecified dangers may be lurking. It is a measure of distance used by government, law enforcement, or military operatives handling the situation to protect their own agents and civilians from physical injury or death while the situation is resolved.
Standoff distance may be ensured using fixed physical barriers such as fences or bollards; temporary placement of items to block access (e.g., using law enforcement vehicles or police tape to block a road or bridge); physical features other than barriers (these may appear innocuous, such as the White House lawn or adding an ornamental pond); armed guards or positions (e.g., a police sniper in overwatch); or deploying police officers with carbines such as an M-4, instead of just a service sidearm. When police officers have carbines the standoff distance is increased because an attacker who poses a threat can be fired upon from greater distances.
Firearms
When an armed and violent criminal is sheltered in a location not easily reachable by a tranquilizer round or disabling shot (or lethal ammunition, if authorized by mission leaders), police, military, and counterterrorism officers maintain distance, often out of the direct line of sight and behind cover, while often using a megaphone to call for backup, to call for the subject's arrest, or to take him or her into custody.
Sniper coverage is used often in these situations, and standard procedure for officers or operatives (or citizens taking part in a citizen's arrest) is to call for heavily armored backup while maintaining cover themselves. In the wake of active shooter scenarios, some law enforcement agencies have switched to moving in on the suspects, to prevent the gunmen from harming civilians. Therapeutic interventions or diplomatic techniques may be used to talk down the suspects or identified threats and assailants.
Hostage situations
In a hostage situation, the primary goal is the safe recovery of the hostages, who are usually held under threat of violence or other prolonged physical harm (starvation, poisoning, bleeding, illness) from kidnappers. Thus the situation is treated similarly to situations with other armed attackers under cover, but with even more caution. Snipers are often employed to attempt to provide leverage against the hostage-takers or to fire at the hostage takers if an imminent risk of harm to the hostages is identified.
Unless all kidnappers can be hit and killed by sniper gunfire almost simultaneously, generally extreme prejudice (e.g., shooting at gunmen) is not used as freely due to the danger of other kidnappers killing the hostages, as in the 1972 Munich example.
This is not true in lone wolf situations, where the hostage taker is often shot by a sniper with armor-piercing or wall-piercing ammunition if resolution through talk or negotiation is impossible. In all situations the preferred method is to talk the kidnappers into releasing the hostages for ransom or otherwise talking them down using therapeutic or diplomatic techniques, to protect the safety of the hostages and, ideally, have the suspect surrender peacefully.
Explosive Threats
An explosion is an extremely rapid release of energy in the form of light, heat, sound, and a shock wave. A shock wave consists of highly compressed air traveling radially outward from the source at supersonic velocities. As the shock wave expands, pressures decrease rapidly and, when it meets a surface that is in line-of-sight of the explosion, it is reflected and amplified. Pressures also decay rapidly over time and have a very brief span of existence, measured typically in thousandths of a second, or milliseconds. Diffraction effects, caused by corners of a building or structure, may act to confine the air-blast, the airborne shock wave that results from the detonation of the explosives, prolonging its duration. Late in the explosive event, the shock wave becomes negative, creating suction. Behind the shock wave, where a vacuum has been created, air rushes in, creating a powerful wind or drag pressure on all surfaces of the building. This wind picks up and carries flying debris, acting as fragmentation, in the vicinity of the detonation. In an external explosion, a portion of the energy is also imparted to the ground, creating a crater and generating a ground shock wave analogous to a high-intensity, short-duration earthquake.
Note that the severity of an air-blast event is directly dependent on the explosive, the distance, and its confinement. The chances of survival dramatically increase as the distance from an explosive threat increases. Note that the majority of deaths associated with explosives occur among those within the immediate vicinity and those critically injured by debris generated by material near the explosion.
Standoff Distance for Explosives / Bombs
With explosive threats or bombs, the standoff distance used by law enforcement officers depends on the size and type of the bomb. The smallest standoff distances, about 70 feet (21 m) from the threat, are used for small pipe bombs with about five (5) pounds (2.25 kg) of explosives. A human suicide bomber with about 20 pounds (9 kg) of explosives strapped to his/her body has a standoff distance of 110 feet (33.5 m). A briefcase or suitcase bomb with about 50 pounds (22.67 kg) of explosives has a 150-foot (46 m) standoff distance. Larger car bombs or truck bombs have a much larger standoff distance, as the blast radius is bigger. A car bomb with a 500-pound (226.79 kg) charge has a 320-foot (97.5 m) standoff distance. A small delivery truck-based truck bomb with a 1,000 pound (453.59 kg) charge has a 640-foot (195 m) standoff distance. A huge 18-wheeler truck-sized truck bomb with over 60,000 pounds (27,215.5 kg) of explosives has a 1,570-foot (478.5 m) standoff distance. Published guidance tabulates these figures together with mandatory and preferred evacuation distances for people inside and outside of buildings; as a word of caution, the mandatory evacuation distance does not necessarily ensure safety, and evacuees should proceed to the preferred evacuation distance where possible.
Standoff distance is also intended to deter terrorists from using car bombs by making it more difficult for them to cause catastrophic damage. In the wake of the Oklahoma City bombing, many high-risk federal buildings began enforcing standoff distances. It is based on the concept that a blast shock load is essentially a high-pressure front that moves out radially and decays very quickly; because blast falloff is closer to exponential than linear, even a modest standoff distance helps increase survival chances for passersby and minimizes danger, though shrapnel reduces this benefit if present.
Hydraulic roadblocks (sometimes wedge-shaped), or bollards can be raised to block approaching vehicles; these can be designed to prevent even a heavy, fast-moving truck from getting through. Jersey barriers and concrete planters filled with dirt have also been used to maintain separation between screened and unscreened traffic. Certain infrastructure at risk of terrorist attack, such as bridges, may not be well-suited to standoff distances since their purpose is for traffic to travel along them.
The effects of various long duration blast overpressures and the associated effect on structures and the human body are summarized below. Note that this data assumes that the structures and personnel affected by an explosive threat are not protected from debris.
Notes
References
External links
http://www.asisonline.org/councils/BlastResistantStandards_1.pdf
Counterterrorism
Law enforcement
Security
Architecture | Standoff distance | Engineering | 1,664 |
1,417,287 | https://en.wikipedia.org/wiki/Kilju | Kilju is the Finnish word for a mead-like homemade alcoholic beverage made from a source of carbohydrates (such as cane sugar or honey), yeast, and water, making it inexpensive to produce. The ABV is around 15–17%, and since it does not contain a sweet reserve it is completely dry. Crude product may be distilled into moonshine. Kilju intended for direct consumption is usually clarified and stabilized to avoid wine faults. It is a flax-colored alcoholic beverage with no discernible taste other than that of ethanol. It can be used as an ethanol base for drink mixers.
Cultural aspects
Kilju is commonly associated with the punk subculture.
Kilju is a well-established part of Finnish alcohol and counter-culture, as witnessed even in an old video on its making and use from the country's leading engineering school. "Four thousand litres of gases are generated. They are led to the neighbours' delight." The drink tends to invite such deadpan black humour.
The first commercially produced kilju was introduced in 2022.
Production
The process is similar to that of homebrewing wine. If done slowly, it requires rigorous hygiene and filtering of the product. If brewed fast, specialized dried yeasts are available in quantities large enough to drive the fermentation through, in about three days, before bacterial infiltration can take place. In Finnish, the latter are called pikahiiva (lit. quick-yeast), and they are sold dry in packs of about a hundred grammes, as opposed to the standard 50 g wet pack of live brewer's yeast.
Properly made kilju is a clear, colorless, or off-white liquid with no discernible taste other than that of ethanol. It can be produced by natural settling of the yeast over time, but nowadays various fining agents are used to hasten the process as well.
Kilju is often produced improperly by home brewers who allow contaminants to disrupt fermentation, do not adequately filter or rack the liquid, or do not use a fining agent. The latter mistakes result in yeast being suspended, causing the mixture to be cloudy rather than clear. The yeast is not harmful, but can yield an unpleasant taste and intestinal discomfort. It is also a common mistake to leave the carbon dioxide produced by fermentation in the suspension, where the yeast uses it as nucleation sites, keeping the yeast suspended in the solution. Proper technique calls for airing the product after fermentation, stirring, and perhaps for fining agents such as microsilica or various semipolar proteinaceous or carbohydrate agents.
Ingredients
An easy way to produce fermented water is to obtain a turbo yeast kit (containing a Saccharomyces cerevisiae yeast strain, enzymes, vitamins, and minerals) whose package instructions state the quantities of white sugar and tap water needed.
Inverted sugar syrup
Water
Sugars in wine: White sugar (crystallized sucrose) is cheap and common. Partially refined sugars such as brown sugar should be avoided; molasses, for example, is what gives rum its distinct flavor. Using plain sugar is also preferable to whole fruit, since methanol occurs mainly in fruit spirits.
Yeast in winemaking: The most common yeast associated with winemaking is Saccharomyces cerevisiae, which is excellent at producing ethanol. Yeast is dependent on a few nutrients (often included in yeast kit sachets) to produce as much ethanol as possible; the most important ones are:
Invertase is an enzyme that cleaves the glycosidic linkage between the glucose and fructose molecules in sucrose. This helps the yeast metabolize the sugars faster.
Thiamine: Increases the resistance of the yeast Saccharomyces cerevisiae against oxidative, osmotic and thermal stress.
Yeast assimilable nitrogen (YAN), is the combination of free amino nitrogen (FAN), ammonia (NH3) and ammonium (NH4+) that is available for the wine yeast Saccharomyces cerevisiae to use during fermentation. Outside the sugars in wine, nitrogen is the most important nutrient needed to carry out a successful fermentation that doesn't end prior to the intended point of dryness or sees the development of off-odors and related wine faults.
Inverted sugar syrup
Inverted sugar syrup for fermented water is usually home-made by fully dissolving sugar in cold tap water. Yeast requires oxygen-rich water that does not exceed 25 degrees Celsius.
A common manual way to dissolve refined sugar is to mix it with water in a half-filled container, which is then sealed and shaken. Alternatively, a mixer or blender may be used to dissolve the sugar, in batches if necessary.
Yeast
Yeast, and yeast nutrition, is mixed in the syrup. One gram pure yeast consumes approximately 0.2 grams sugar.
Yeasts will usually die out once the alcohol level reaches about 15% due to the toxicity of alcohol on the yeast cells' physiology while the more alcohol tolerant Saccharomyces species take over. In addition to S. cerevisiae, Saccharomyces bayanus is a species of yeast that can tolerate alcohol levels of 17–20%.
Alcohol measurement
To make plain crude kilju, the must weight must be zero: a fermentation lock should indicate less than one bubble per minute. Then the sugar reserve is measured with a must weight refractometer or hydrometer. If there is sugar left, more yeast should be added to consume it, and this measurement process should be repeated. A solution with sugar is not fermented water, but fermented syrup.
Clarification: The solution is clarified, typically with a fining agent such as bentonite.
Alcohol by volume: Only when the must weight is zero and the solution has been clarified will an alcohol hydrometer, or an ethanol-type refractometer, display an accurate alcohol volume. A leftover sugar reserve will give false values.
Alcohol adjustment
Since fermented water contains no flavors, water may be added to cut down the ABV if desired.
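As a minimal sketch of the dilution arithmetic implied here, assuming the usual C1·V1 = C2·V2 relation and ignoring the small volume contraction of ethanol–water mixing (the example volumes are arbitrary):

```python
def water_to_add(volume_l: float, abv_current: float, abv_target: float) -> float:
    """Litres of water to add so the batch reaches the target ABV,
    using the simple dilution relation C1*V1 = C2*V2."""
    if not 0 < abv_target < abv_current:
        raise ValueError("target ABV must be positive and below the current ABV")
    final_volume = volume_l * abv_current / abv_target
    return final_volume - volume_l

# Example: diluting 10 L of 17% kilju down to 7% for an alcopop-style drink
print(f"{water_to_add(10, 17, 7):.1f} L of water")  # ~14.3 L
```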
Post-process
Fermented water has an alcohol content similar to that of wine, as both beverages are fermented with yeast; however, fermented water differs from wine and other fermented beverages in that it contains no fruit juice or residual sugar after manufacture.
Kilju can be produced by fermenting sugar, yeast, and water, but it was illegal in Finland before March 2018; therefore, grain, potatoes, fruits or berries were used during fermentation to avoid legal problems and to flavor the drink. Oranges and lemons are a popular choice for this purpose.
Undistilled
Flavoring
It often has additives such as citrus fruits, apples, berry juices, or artificial flavorings. Kilju flavored with fruit, for example, does not necessarily have to be sweet, as long as all sugar is consumed by the yeast.
Kilju (15–17% ABV) contains roughly 2.4–2.7 times as much liquid per unit of alcohol as a 40% distilled spirit. Since kilju is approximately 85% water, it can be mixed with concentrates such as a drink mixer, fruit syrup, or squash concentrate.
Carbonation (alcopop)
Alternatively, it can be made as a carbonated soft drink by two methods.
The first method is to serve it before the fermentation process is complete. Kilju made this way is high in sugar and carbon dioxide (CO2) content and has little to no alcohol, being similar to a sweet lemon soda. It is a family tradition for many, and the simple production process also makes it accessible to underage drinkers. Cf. sima, commonly seasoned with lemon and unpurified cane sugar, yielding a small beer or a light mead.
The second method is to make a homemade alcopop (typically 3–7% ABV): after the fermentation process is complete, water is added to dilute the ABV accordingly, the solution is carbonated with a soda machine, and soft drink syrup (which lowers the ABV by approximately a further 10%) is added.
Distillation (moonshine)
Kilju can be refined into moonshine by means of distillation to vodka or rectified spirit, but this is illegal in most countries. The result is distinct from rum, which is typically made from molasses (a byproduct of the sugar refining process) or fresh sugar cane juice, both of which have a discernible taste of their own.
Moonshine by country, often distilled from fermented water:
Cuba: Gualfarina
Finland: Pontikka
Latvia: Kandža
Nicaragua: Cususa
Poland: Bimber
Russia: Samogon
Saudi Arabia: Aragh
Sweden: Hembränt (HB)
Legality
Winemaking is legal in most countries. However, kilju is fermented from pure carbohydrates like white sugar (a plant extract) instead of grapes.
Finland
The Finnish Alcoholic Beverages Act, which came into force on 1 March 2018, legalized the manufacture of fermented water as well as wine made from fruits, berries and other carbohydrate sources, without the pretense of making proper wine.
Sweden
In Sweden, it is legal to produce if the final product is not distilled.
Consumption
Kilju is often mixed with juice or some other beverage to mask off tastes, of which there can be several.
Compared to wines, kilju most closely resembles Beaujolais nouveau, which is drunk after only a few weeks of fermentation. However, properly made kilju will not easily turn into vinegar, lacking the nutrients necessary for further fermentation. It is possible to drink kilju years after it was made if it has been properly stored. In fact, like white wines, it ages well for two to three years, especially when made from impure cane sugar with molasses included (fariinisokeri), or if brewed partially from oat malt and hops, as an extra strong beer.
Binge drinking
Kilju is regarded as a low-quality drink that is primarily consumed for its alcohol content, mainly associated with binge drinking. Due to its low cost, potential wine fault (when not clarified enough), and simple production process, kilju is mostly drunk by low-income people.
History
When homebrewing grew in popularity during the economic depression that followed the Finnish banking crisis of the early 1990s, yeast strains known as "turbo yeast" ("turbohiiva", "pikahiiva") were introduced to the market. These yeast strains enable a very rapid fermentation to full cask strength, in some cases in as little as three days (compared to several weeks required by traditional wine yeast strains). Such a short production time naturally does not allow the yeast to settle out as lees. The introduction of turbo yeast reinforced the public's view of kilju as an easy method of procuring cheap alcohol.
See also
Fermented tea
Free Beer
Mead
Fruit wine
Pruno
Tharra
References
External links
Kiljun valmistus ja käyttö (in Finnish: "The making and use of kilju")
Alcopops
Ethanol
Fermentation
Fermented drinks
Finnish alcoholic drinks
Moonshine
Sugar-based alcoholic drinks | Kilju | Chemistry,Biology | 2,446 |
76,135,218 | https://en.wikipedia.org/wiki/Ion%20network | An ion network is an interconnected network or structure composed of ions in a solution. The term "ion network" was coined by Cho and coworkers in 2014. The notion of extended ion aggregates in electrolyte solutions, however, can be found in an earlier report. The ion network is particularly relevant in high-salt solutions where ions can aggregate and interact strongly and it has been investigated in an increasing number of research and review articles.
In high-salt solutions, ions can form clusters or aggregates due to their electrostatic interactions. These aggregates may further organize into spatially more extensive networks, where ions are connected through electrostatic forces and possibly other types of interactions, such as hydrogen bonding.
The formation of percolating ion networks can significantly affect the surrounding solvent molecules, particularly the water hydrogen-bonding networks in aqueous solutions that become intertwined with morphologically complementary ion networks. The presence of ion networks can disrupt the hydrogen-bonding network of water molecules, altering the structure and properties of the solution. This disruption in water structure may have implications for various phenomena, including solvation dynamics, ion transport, and chemical reactions occurring in the solution.
Overall, the concept of an ion network highlights the complex and dynamic interactions between ions and solvent molecules in solution, and its understanding is crucial for elucidating the behavior of electrolyte solutions in various contexts, ranging from biological systems to industrial processes, including lithium-ion batteries.
Research
The study of ion networks and their implications in solution chemistry is an active and interdisciplinary field that has attracted attention from researchers across various disciplines, including chemistry, physics, materials science, and biology. Here are some key research subjects and activities in this field:
Electrolyte Solutions and Ionic Liquids: Electrolyte solutions, which contain dissolved ions, and ionic liquids, which are essentially molten salts at room temperature, are important systems for studying ion networks. Researchers have investigated the structure and dynamics of ion networks in these systems using a variety of experimental and theoretical techniques.
Molecular Dynamics (MD) Simulations: Molecular dynamics simulations play a crucial role in understanding ion networks at the molecular level. By simulating the behavior of individual ions and solvent molecules over time, researchers can explore the formation, structure, and dynamics of ion networks in solution.
Spectroscopic Techniques: Experimental techniques such as infrared spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and X-ray scattering are commonly used to study ion networks in solution. These techniques provide valuable information about the structure, composition, and dynamics of ion networks.
Hofmeister Effect: The Hofmeister effect refers to the phenomenon where the addition of specific ions to a solution can significantly alter the solubility, stability, and other properties of solutes. Understanding the Hofmeister effect is essential for elucidating the role of ion networks in solution chemistry.
Soft Matter Physics: Ion networks in solution are also of interest in the field of soft matter physics, where researchers study the behavior of complex fluids and materials. Understanding the structure and dynamics of ion networks is crucial for designing new materials with tailored properties.
Graph Theory Analysis: Ions often self-assemble into large and polydisperse aggregates in solution. Graph-theoretical approaches have been applied to quantitatively study morphological characteristics of these structural patterns including ion networks. In this approach, the aggregate structures taken from MD trajectories are treated as mathematical structures called graphs, and their properties, such as graph spectrum, degree distribution, clustering coefficient, minimum path length, and graph entropy, are calculated and analyzed. For example, this approach has been used to identify two morphologically different ion aggregates, namely localized clusters and extended networks, in high-salt solutions of the Hofmeister series of ions.
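A minimal sketch of this kind of analysis, assuming ion coordinates from a single MD snapshot and a simple distance cutoff as the criterion for a "contact"; the cutoff value, box size, and use of the networkx library are illustrative assumptions rather than details taken from the studies mentioned above:

```python
import itertools
import networkx as nx
import numpy as np

def ion_aggregate_graph(positions: np.ndarray, cutoff: float) -> nx.Graph:
    """Build a graph whose nodes are ions and whose edges join ion pairs
    closer than the chosen cutoff distance (a stand-in for 'contact')."""
    g = nx.Graph()
    g.add_nodes_from(range(len(positions)))
    for i, j in itertools.combinations(range(len(positions)), 2):
        if np.linalg.norm(positions[i] - positions[j]) < cutoff:
            g.add_edge(i, j)
    return g

# Toy snapshot: 50 random "ion" positions in a 20 Å box, 4 Å contact cutoff
rng = np.random.default_rng(0)
g = ion_aggregate_graph(rng.uniform(0.0, 20.0, size=(50, 3)), cutoff=4.0)

components = list(nx.connected_components(g))
largest = max(components, key=len)
print("number of aggregates:", len(components))
print("largest aggregate size:", len(largest))          # extended network vs. localized cluster
print("mean clustering coefficient:", nx.average_clustering(g))
print("degree distribution:", sorted(d for _, d in g.degree()))
```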
References
Electrolytes
Liquids | Ion network | Physics,Chemistry | 765 |
3,916,819 | https://en.wikipedia.org/wiki/Capillary%20pressure | In fluid statics, capillary pressure (pc) is the pressure between two immiscible fluids in a thin tube (see capillary action), resulting from the interactions of forces between the fluids and solid walls of the tube. Capillary pressure can serve as either an opposing or a driving force for fluid transport and is a significant property for research and industrial purposes (namely microfluidic design and oil extraction from porous rock). It is also observed in natural phenomena.
Definition
Capillary pressure is defined as:
pc = pnw − pw
where:
pc is the capillary pressure
pnw is the pressure of the non-wetting phase
pw is the pressure of the wetting phase
The wetting phase is identified by its ability to preferentially spread across (wet) the capillary walls before the non-wetting phase. The "wettability" of a fluid depends on its surface tension, the force that drives a fluid's tendency to take up the minimal amount of space possible, and it is determined by the contact angle of the fluid. A fluid's "wettability" can be controlled by varying capillary surface properties (e.g. roughness, hydrophilicity). In oil-water systems, water is typically the wetting phase, while for gas-oil systems, oil is typically the wetting phase. Regardless of the system, a pressure difference arises at the resulting curved interface between the two fluids.
Equations
Capillary pressure formulas are derived from the pressure relationship between two fluid phases in a capillary tube in equilibrium, which is that force up = force down. These forces are described as:
These forces can be described by the interfacial tension and contact angle of the fluids, and the radius of the capillary tube. An interesting phenomenon, the capillary rise of water, provides a good example of how these properties come together to drive flow through a capillary tube and how these properties are measured in a system. There are two general equations that describe the force up and force down relationship of two fluids in equilibrium.
The Young–Laplace equation is the force up description of capillary pressure, and the most commonly used variation of the capillary pressure equation:
pc = 2γ cos θ / r
where:
γ is the interfacial tension
r is the effective radius of the interface
θ is the wetting angle of the liquid on the surface of the capillary
The force down formula for capillary pressure is seen as:
pc = (ρw − ρnw) g h
where:
h is the height of the capillary rise
g is the gravitational acceleration
ρw is the density of the wetting phase
ρnw is the density of the non-wetting phase
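Equating the force-up and force-down expressions above gives the familiar capillary-rise estimate h = 2γ cos θ / ((ρw − ρnw) g r). The following is a minimal sketch with typical textbook values for water and air; the property values are assumptions for illustration, not figures from this article:

```python
import math

def capillary_rise(gamma: float, theta_deg: float, radius: float,
                   rho_wetting: float, rho_nonwetting: float,
                   g: float = 9.81) -> float:
    """Height of capillary rise from 2*gamma*cos(theta)/r = (rho_w - rho_nw)*g*h."""
    pc = 2.0 * gamma * math.cos(math.radians(theta_deg)) / radius
    return pc / ((rho_wetting - rho_nonwetting) * g)

# Water/air in a 0.5 mm glass capillary (gamma ~0.072 N/m, contact angle ~0 deg)
h = capillary_rise(gamma=0.072, theta_deg=0.0, radius=0.5e-3,
                   rho_wetting=1000.0, rho_nonwetting=1.2)
print(f"predicted rise: {h * 100:.1f} cm")  # roughly 2.9 cm
```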
Applications
Microfluidics
Microfluidics is the study and design of the control or transport of small volumes of fluid flow through porous material or narrow channels for a variety of applications (e.g. mixing, separations). Capillary pressure is one of many geometry-related characteristics that can be altered in a microfluidic device to optimize a certain process. For instance, as the capillary pressure increases, a wettable surface in a channel will pull the liquid through the conduit. This eliminates the need for a pump in the system, and can make the desired process completely autonomous. Capillary pressure can also be utilized to block fluid flow in a microfluidic device.
The capillary pressure in a microchannel can be described as:
pc = −γ [ (cos θb + cos θt)/d + (cos θl + cos θr)/w ]
where:
γ is the surface tension of the liquid
θb is the contact angle at the bottom
θt is the contact angle at the top
θl is the contact angle at the left side of the channel
θr is the contact angle at the right side of the channel
d is the depth
w is the width
Thus, the capillary pressure can be altered by changing the surface tension of the fluid, contact angles of the fluid, or the depth and width of the device channels. To change the surface tension, one can apply a surfactant to the capillary walls. The contact angles vary by sudden expansion or contraction within the device channels. A positive capillary pressure represents a valve on the fluid flow while a negative pressure represents the fluid being pulled into the microchannel.
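A minimal sketch of the rectangular-channel relation given above, with illustrative values for an aqueous liquid in a hydrophilic channel; the numbers are assumptions chosen only to show the sign convention (negative pressure pulls liquid in):

```python
import math

def microchannel_capillary_pressure(gamma: float, theta_bottom: float,
                                    theta_top: float, theta_left: float,
                                    theta_right: float, depth: float,
                                    width: float) -> float:
    """Capillary pressure (Pa) in a rectangular microchannel; angles in degrees,
    depth and width in metres. Negative values draw liquid into the channel."""
    cos = lambda deg: math.cos(math.radians(deg))
    return -gamma * ((cos(theta_bottom) + cos(theta_top)) / depth
                     + (cos(theta_left) + cos(theta_right)) / width)

# Water (gamma ~0.072 N/m) in a 100 um x 30 um channel with 30-degree contact angles
p = microchannel_capillary_pressure(0.072, 30, 30, 30, 30,
                                    depth=30e-6, width=100e-6)
print(f"{p:.0f} Pa")  # negative: the liquid is drawn into the channel
```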
Measurement Methods
Methods for taking physical measurements of capillary pressure in a microchannel have not been thoroughly studied, despite the need for accurate pressure measurements in microfluidics. The primary issue with measuring the pressure in microfluidic devices is that the volume of fluid is too small to be used in standard pressure measurement tools. Some studies have presented the use of microballoons, which are size-changing pressure sensors. Servo-nulling, which is historically used for measuring blood pressure, has also been demonstrated to provide pressure information in microfluidic channels with the assistance of a LabVIEW control system. Essentially, a micropipette is immersed in the microchannel fluid and is programmed to respond to changes in the fluid meniscus. A displacement in the meniscus of the fluid in the micropipette induces a voltage drop, which triggers a pump to restore the original position of the meniscus. The pressure exerted by the pump is interpreted as the pressure within the microchannel.
Examples
Current research in microfluidics is focused on developing point-of-care diagnostics and cell sorting techniques (see lab-on-a-chip), and understanding cell behavior (e.g. cell growth, cell aging). In the field of diagnostics, the lateral flow test is a common microfluidic device platform that utilizes capillary forces to drive fluid transport through a porous membrane. The most famous lateral flow test is the take home pregnancy test, in which bodily fluid initially wets and then flows through the porous membrane, often cellulose or glass fiber, upon reaching a capture line to indicate a positive or negative signal. An advantage to this design, and several other microfluidic devices, is its simplicity (for example, its lack of human intervention during operation) and low cost. However, a disadvantage to these tests is that capillary action cannot be controlled after it has started, so the test time cannot be sped up or slowed down (which could pose an issue if certain time-dependent processes are to take place during the fluid flow).
Another example of point-of-care work involving a capillary pressure-related design component is the separation of plasma from whole blood by filtration through porous membrane. Efficient and high-volume separation of plasma from whole blood is often necessary for infectious disease diagnostics, like the HIV viral load test. However, this task is often performed through centrifugation, which is limited to clinical laboratory settings. An example of this point-of-care filtration device is a packed-bed filter, which has demonstrated the ability to separate plasma and whole blood by utilizing asymmetric capillary forces within the membrane pores.
Petrochemical industry
Capillary pressure plays a vital role in extracting sub-surface hydrocarbons (such as petroleum or natural gas) from underneath porous reservoir rocks. Its measurements are utilized to predict reservoir fluid saturations and cap-rock seal capacity, and for assessing relative permeability (the ability of a fluid to be transported in the presence of a second immiscible fluid) data. Additionally, capillary pressure in porous rocks has been shown to affect phase behavior of the reservoir fluids, thus influencing extraction methods and recovery. It is crucial to understand these geological properties of the reservoir for its development, production, and management (e.g. how easy it is to extract the hydrocarbons).
The Deepwater Horizon oil spill is an example of why capillary pressure is significant to the petrochemical industry. It is believed that upon the Deepwater Horizon oil rig’s explosion in the Gulf of Mexico in 2010, methane gas had broken through a recently implemented seal, and expanded up and out of the rig. Although capillary pressure studies (or potentially a lack thereof) do not necessarily sit at the root of this particular oil spill, capillary pressure measurements yield crucial information for understanding reservoir properties that could have influenced the engineering decisions made in the Deepwater Horizon event.
Capillary pressure, as seen in petroleum engineering, is often modeled in a laboratory where it is recorded as the pressure required to displace some wetting phase by a non-wetting phase to establish equilibrium. For reference, capillary pressures between air and brine (which is a significant system in the petrochemical industry) have been shown to range between 0.67 and 9.5 MPa. There are various ways to predict, measure, or calculate capillary pressure relationships in the oil and gas industry. These include the following:
Leverett J-function
The Leverett J-function serves to provide a relationship between the capillary pressure and the pore structure (see Leverett J-function).
Mercury Injection
This method is well suited to irregular rock samples (e.g. those found in drill cuttings) and is typically used to understand the relationship between capillary pressure and the porous structure of the sample. In this method, the pores of the sample rock are evacuated, followed by mercury filling the pores with increasing pressure. Meanwhile, the volume of mercury at each given pressure is recorded and given as a pore size distribution, or converted to relevant oil/gas data. One pitfall to this method is that it does not account for fluid-surface interactions. However, the entire process of injecting mercury and collecting data occurs rapidly in comparison to other methods.
Porous Plate Method
The Porous Plate Method is an accurate way to understand capillary pressure relationships in fluid-air systems. In this process, a sample saturated with water is placed on a flat plate, also saturated with water, inside a gas chamber. Gas is injected at increasing pressures, thus displacing the water through the plate. The pressure of the gas represents the capillary pressure, and the amount of water ejected from the porous plate is correlated to the water saturation of the sample.
Centrifuge Method
The centrifuge method relies on the following relationship between capillary pressure and gravity:
pc = (ρw − ρnw) g h
where:
h is the height of the capillary rise
g is gravity
ρw is the density of the wetting phase
ρnw is the density of the non-wetting phase
The centrifugal force essentially serves as an applied capillary pressure for small test plugs, often composed of brine and oil. During the centrifugation process, a given amount of brine is expelled from the plug at certain centrifugal rates of rotation. A glass vial measures the amount of fluid as it is being expelled, and these readings result in a curve that relates rotation speeds with drainage amounts. The rotation speed is correlated to capillary pressure by the following equation (a worked example follows the list of symbols below):
pc = ½ Δρ ω² (r1² − r2²)
where:
r1 is the radius of rotation of the bottom of the core sample
r2 is the radius of rotation of the top of the core sample
ω is the rotational speed
Δρ is the density difference between the wetting and non-wetting phases
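A minimal sketch of the rotation-speed relation above, assuming the form pc = ½ Δρ ω² (r1² − r2²) with the rotational speed converted from revolutions per minute to radians per second; the plug dimensions, densities, and speed are arbitrary illustrative values:

```python
import math

def centrifuge_capillary_pressure(rpm: float, rho_wetting: float,
                                  rho_nonwetting: float,
                                  r_bottom: float, r_top: float) -> float:
    """Applied capillary pressure (Pa) for a core plug spun at `rpm`,
    using pc = 0.5 * (rho_w - rho_nw) * omega**2 * (r_bottom**2 - r_top**2)."""
    omega = rpm * 2.0 * math.pi / 60.0           # rev/min -> rad/s
    return 0.5 * (rho_wetting - rho_nonwetting) * omega ** 2 * (r_bottom ** 2 - r_top ** 2)

# Brine (1050 kg/m3) and oil (850 kg/m3) in a 5 cm plug whose bottom sits
# 15 cm from the rotation axis, spun at 3000 rpm -> roughly 123 kPa
print(f"{centrifuge_capillary_pressure(3000, 1050, 850, 0.15, 0.10):.0f} Pa")
```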
The primary benefits to this method are that it's rapid (producing curves in a matter of hours) and is not restricted to being performed at certain temperatures.
Other methods include the Vapor Pressure Method, Gravity-Equilibrium Method, Dynamic Method, Semi-dynamic Method, and the Transient Method.
Correlations
In addition to measuring the capillary pressure in a laboratory setting to model that of an oil/natural gas reservoir, there exist several relationships to describe the capillary pressure given specific rock and extraction conditions. For example, R. H. Brooks and A. T. Corey developed a relationship for capillary pressure during the drainage of oil from an oil-saturated porous medium experiencing a gas invasion (a worked example follows the list of parameters below):
pcgo = pt [(So − Sor)/(1 − Sor)]^(−1/λ)
where:
pcgo is the capillary pressure between oil and gas phases
So is the oil saturation
Sor is the residual oil saturation that remains trapped in the pore at high capillary pressure
pt is the threshold pressure (the pressure at which the gas phase is allowed to flow)
λ is a parameter that is related to the distribution of pore sizes:
λ is large for narrow distributions
λ is small for wide distributions
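A minimal sketch of the Brooks–Corey form above, with arbitrary parameter values chosen only to show how the drainage capillary pressure rises as the oil saturation approaches its residual value:

```python
def brooks_corey_pc(s_oil: float, s_oil_residual: float,
                    p_threshold: float, lam: float) -> float:
    """Gas-oil drainage capillary pressure from the Brooks-Corey form
    pc = pt * Se**(-1/lambda), with Se the normalized (effective) oil saturation."""
    se = (s_oil - s_oil_residual) / (1.0 - s_oil_residual)
    if se <= 0.0:
        raise ValueError("oil saturation is at or below the residual value")
    return p_threshold * se ** (-1.0 / lam)

# Threshold pressure 5 kPa, residual oil saturation 0.2, pore-size index lambda = 2
for s in (0.9, 0.6, 0.3):
    print(s, round(brooks_corey_pc(s, 0.2, 5e3, 2.0)), "Pa")
```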
Additionally, R. G. Bentsen and J. Anli developed a correlation for the capillary pressure during the drainage from a porous rock sample in which an oil phase displaces saturated water:
where:
is the capillary pressure between oil and water phases
is a parameter that controls the shape of the capillary pressure function
is the normalized wetting-phase saturation
is the saturation of the wetting phase
is the irreducible wetting-phase saturation
Averaging capillary pressure vs. water saturation curves
It has been shown that as reservoir simulators use the primary drainage capillary pressure data for saturation-height modeling calculations, primary drainage capillary pressure data should be averaged in the same manner that water saturations are averaged. Also, as reservoir simulators use the imbibition and secondary drainage capillary pressure data for fluids displacement calculations, these capillary pressures should not be averaged like primary drainage capillary pressure data. These can be averaged by Leverett J-function. The averaging equations are as follows
averaging primary drainage capillary pressure vs. normalized saturation data
in which is the number of core samples, is the effective porosity, is the bulk volume of sample, and is the primary drainage capillary pressure data vs. normalized water saturation.
averaging imbibition and secondary drainage capillary pressure vs. normalized saturation data
in which is the number of core samples, is the effective porosity, is the absolute permeability, is the interfacial tension or IFT, and is the imbibition or secondary drainage capillary pressure data vs. normalized water saturation.
In nature
Needle ice
In addition to being manipulated for medical and energy applications, capillary pressure is the cause behind various natural phenomena as well. For example, needle ice, seen in cold soil, occurs via capillary action. The first major contributions to the study of needle ice, or simply, frost heaving were made by Stephen Taber (1929) and Gunnar Beskow (1935), who independently aimed to understand soil freezing. Taber’s initial work was related to understanding how the size of pores within the ground influenced the amount of frost heave. He also discovered that frost heave is favorable for crystal growth and that a gradient of soil moisture tension drives water upward toward the freezing front near the top of the ground. In Beskow’s studies, he defined this soil moisture tension as “capillary pressure” (and soil water as “capillary water”). Beskow determined that the soil type and effective stress on the soil particles influenced frost heave, where effective stress is the sum of pressure from above ground and the capillary pressure.
In 1961, D.H. Everett elaborated on Taber and Beskow’s studies to understand why pore spaces filled with ice continue to experience ice growth. He utilized thermodynamic equilibrium principles, a piston cylinder model for ice growth and the following equation to understand the freezing of water in porous media (directly applicable to the formation of needle ice):
where:
is the pressure of the solid crystal
is the pressure in the surrounding liquid
is the interfacial tension between the solid and the liquid
is the surface area of the phase boundary
is the volume of the crystal
is the mean curvature of the solid/liquid interface
With this equation and model, Everett noted the behavior of water and ice given different pressure conditions at the solid-liquid interface. Everett determined that if the pressure of the ice is equal to the pressure of the liquid underneath the surface, ice growth is unable to continue into the capillary. Thus, with additional heat loss, it is most favorable for water to travel up the capillary and freeze in the top cylinder (as needle ice continues to grow atop itself above the soil surface). As the pressure of the ice increases, a curved interface between the solid and liquid arises and the ice will either melt, or equilibrium will be reestablished so that further heat loss again leads to ice formation. Overall, Everett determined that frost heaving (analogous to the development of needle ice) occurs as a function of the pore size in the soil and the energy at the interface of ice and water. Unfortunately, the downside to Everett's model is that he did not consider soil particle effects on the surface.
Circulatory system
Capillaries in the circulatory system are vital to providing nutrients and excreting waste throughout the body. There exist pressure gradients (due to hydrostatic and oncotic pressures) in the capillaries that control blood flow at the capillary level, and ultimately influence the capillary exchange processes (e.g. fluid flux). Due to limitations in technology and bodily structure, most studies of capillary activity are done in the retina, lip and skin, historically through cannulation or a servo-nulling system. Capillaroscopy has been used to visualize capillaries in the skin in 2D, and has been reported to observe an average range of capillary pressure of 10.5 to 22.5 mmHg in humans, and an increase in pressure among people with type 1 diabetes and hypertension. Relative to other components of the circulatory system, capillary pressure is low, as to avoid rupturing, but sufficient for facilitating capillary functions.
See also
Capillary action
Capillary number
Disjoining pressure
Leverett J-function
Young–Laplace equation
Laplace pressure
Surface tension
Microfluidics
Water retention curve
TEM-function
USBM wettability index
References
Fluid dynamics | Capillary pressure | Chemistry,Engineering | 3,626 |
73,843,019 | https://en.wikipedia.org/wiki/Tungsten%20hexabromide | Tungsten hexabromide, also known as tungsten(VI) bromide, is a chemical compound of tungsten and bromine with the formula WBr6. It is an air-sensitive dark grey powder that decomposes above 200 °C to tungsten(V) bromide and bromine.
Production and reactions
Tungsten hexabromide is mainly produced by the reaction of metallic tungsten and bromine at temperatures around 100 °C in a nitrogen atmosphere:
W + 3 Br2 → WBr6
Another method of producing this compound is by the reaction of tungsten hexacarbonyl and bromine at room temperature, releasing carbon monoxide. It can also be produced by the metathesis reaction of boron tribromide and tungsten hexachloride.
WBr6 is reduced with elemental antimony at elevated temperatures, consecutively producing WBr5, WBr4, W4Br10, W5Br12, and finally WBr2 at 350 °C. This reaction produces antimony tribromide as a side product. Any of these bromides can be reverted to the hexabromide by oxidation with bromine at 160 °C.
Tungsten hexabromide is hydrolyzed in water, producing tungsten pentoxide and releasing bromine.
Tungsten(VI) oxytetrabromide is produced by the reaction of tungsten hexabromide and tungsten(VI) oxide:
2 WBr6 + WO3 → 3 WOBr4
Structure
The trigonal crystal structure of WBr6 consists of isolated WBr6 octahedra and is isostructural with α-WCl6.
References
Tungsten halides
Bromides | Tungsten hexabromide | Chemistry | 352 |
21,928,636 | https://en.wikipedia.org/wiki/Irish%20units%20of%20measurement | Early Irish law texts record a wide variety of units of measurement, organised into various systems. These were used from Early Christian Ireland (Middle Ages) or perhaps earlier, before being displaced by Irish measure from the 16th century onward.
Length
A troighid ("foot") was the length of a man's foot, divided into twelve ordlach, "thumb-lengths". These figures assume a man's foot to measure .
A magh-space was a unit set at the distance from which a cock-crow or bell could be heard. Other units such as inntrit and lait appear in documents; their value is uncertain, perhaps being equivalent to 1 and 2 fertachs respectively.
Ancient Laws of Ireland reads ceithri orlaighi i mbais, teora basa i troighid (4 thumb-lengths in a palm, 3 palms in a foot), and "Catalogue of the Irish manuscripts in the British Museum v.1" gives ceithri gráine an t-órdlach (4 grains in the thumb-length).
Stair Ercuil ocus a bás: the life and death of Hercules mentions ceim curadh (warrior's paces).
Area
The basic unit of area was the tir-cumaile, "land of three cows", as it was an area of land that was at some point worth three cows. It is sometimes erroneously interpreted as the area needed to graze three cows, but it is far too large for that; in modern Ireland, a cow grazes on about 0.4 ha, so twenty or more could graze a tir-cumaile. Ireland in total covered about 870,000 tir-cumaile.
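The arithmetic behind these figures can be checked directly; the sketch below assumes the island of Ireland covers roughly 8.4 million hectares, a figure not given in the text:

```python
IRELAND_HECTARES = 8.4e6       # approximate area of the island (assumption)
TIR_CUMAILE_COUNT = 870_000    # figure quoted above
HA_PER_COW = 0.4               # modern grazing figure quoted above

tir_cumaile_ha = IRELAND_HECTARES / TIR_CUMAILE_COUNT
print(f"one tir-cumaile ~ {tir_cumaile_ha:.1f} ha")                # ~9.7 ha
print(f"cows it could graze ~ {tir_cumaile_ha / HA_PER_COW:.0f}")  # ~24, i.e. 'twenty or more'
```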
Capacity
A hen's eggshell was used as a standard unit, roughly 55 ml.
Mass
The Manners and Customs of Ireland lists two types of unge: unge mór at 20 pennyweights (31.1 g) and unge beg at 10 pennyweights (15.6 g).
A (gold scruple) was used for measuring gold weight and was equal to a quarter-ounce (7 g).
Time
A night (oídhche) was used as a measure for time in preference to a day. As was normal in Islam and Judaism, and in line with the Bible ("It was evening and morning of the first day"), the Irish held that a new day began at sunset, not at sunrise, so that a Wednesday night would precede the day of Wednesday.
See also
List of obsolete units of measurement
Metrication in Ireland
Irish measure
References
Irish units of measurement
Obsolete units of measurement | Irish units of measurement | Mathematics | 545 |
37,865,554 | https://en.wikipedia.org/wiki/Narcotics%20and%20Psychotropics%20Control%20Law | The Narcotics and Psychotropics Control Law (麻薬及び向精神薬取締法 Mayaku oyobi kouseishin'yaku torishimari hō) is a law enacted in Japan in 1953 to control most narcotic and psychotropic drugs. It was enacted in 1953 under the name of Narcotics Control Law (麻薬取締法 Mayaku torishimari hō) and was renamed current title in 1990 along with Japan's ratification of Convention on Psychotropic Substances in the same year. It is often abbreviated to Makōhō (麻向法).
Japan has four separate laws to regulate drugs. There is one for marijuana, one for stimulants, and one for opium; all remaining drugs fall under the category of "narcotics and psychotropics". All of these laws were written in the 1950s, although some were revised in the Heisei period in accordance with the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Marijuana was unregulated before the American occupation of Japan; opium was banned during the Meiji Restoration. Stimulants, most commonly methamphetamine, were widely administered to soldiers and workers in the 1940s and 1950s.
The restrictions laid out by this law are comparable to Schedule II of the US's Controlled Substances Act.
References
Drug control law
Japanese criminal law
Drug policy of Japan | Narcotics and Psychotropics Control Law | Chemistry | 289 |
372,399 | https://en.wikipedia.org/wiki/Opposite%20category | In category theory, a branch of mathematics, the opposite category or dual category Cop of a given category C is formed by reversing the morphisms, i.e. interchanging the source and target of each morphism. Doing the reversal twice yields the original category, so the opposite of an opposite category is the original category itself. In symbols, (Cop)op = C.
Examples
An example comes from reversing the direction of inequalities in a partial order. So if X is a set and ≤ a partial order relation, we can define a new partial order relation ≤op by
x ≤op y if and only if y ≤ x.
The new order is commonly called dual order of ≤, and is mostly denoted by ≥. Therefore, duality plays an important role in order theory and every purely order theoretic concept has a dual. For example, there are opposite pairs child/parent, descendant/ancestor, infimum/supremum, down-set/up-set, ideal/filter etc. This order theoretic duality is in turn a special case of the construction of opposite categories as every ordered set can be understood as a category.
Given a semigroup (S, ·), one usually defines the opposite semigroup as (S, ·)op = (S, *) where x*y ≔ y·x for all x,y in S. So also for semigroups there is a strong duality principle. Clearly, the same construction works for groups, as well, and is known in ring theory, too, where it is applied to the multiplicative semigroup of the ring to give the opposite ring. Again this process can be described by completing a semigroup to a monoid, taking the corresponding opposite category, and then possibly removing the unit from that monoid.
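A minimal sketch of the opposite-operation construction described above, using matrix multiplication (a noncommutative product) so that the reversal is visible; the helper name `opposite` and the sample matrices are illustrative choices, not taken from the article:

```python
import numpy as np

def opposite(op):
    """Given a binary operation op(x, y), return the opposite operation x * y := op(y, x)."""
    return lambda x, y: op(y, x)

matmul = lambda a, b: a @ b
matmul_op = opposite(matmul)

a = np.array([[1, 2], [3, 4]])
b = np.array([[0, 1], [1, 0]])

print(np.array_equal(matmul(a, b), matmul_op(b, a)))   # True: x*y in the opposite equals y.x
print(np.array_equal(matmul(a, b), matmul_op(a, b)))   # False in general: the two semigroups differ
```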
The category of Boolean algebras and Boolean homomorphisms is equivalent to the opposite of the category of Stone spaces and continuous functions.
The category of affine schemes is equivalent to the opposite of the category of commutative rings.
The Pontryagin duality restricts to an equivalence between the category of compact Hausdorff abelian topological groups and the opposite of the category of (discrete) abelian groups.
By the Gelfand–Naimark theorem, the category of localizable measurable spaces (with measurable maps) is equivalent to the category of commutative Von Neumann algebras (with normal unital homomorphisms of *-algebras).
Properties
Opposite preserves products:
(C × D)op = Cop × Dop (see product category)
Opposite preserves functors:
Funct(C, D)op ≅ Funct(Cop, Dop) (see functor category, opposite functor)
Opposite preserves slices:
(F ↓ G)op ≅ (Gop ↓ Fop) (see comma category)
See also
Dual object
Dual (category theory)
Duality (mathematics)
Adjoint functor
Contravariant functor
Opposite functor
References
Category theory | Opposite category | Mathematics | 579 |
166,980 | https://en.wikipedia.org/wiki/Incidence%20algebra | In order theory, a field of mathematics, an incidence algebra is an associative algebra, defined for every locally finite partially ordered set
and commutative ring with unity. Subalgebras called reduced incidence algebras give a natural construction of various types of generating functions used in combinatorics and number theory.
Definition
A locally finite poset is one in which every closed interval
[a, b] = {x : a ≤ x ≤ b}
is finite.
The members of the incidence algebra are the functions f assigning to each nonempty interval [a, b] a scalar f(a, b), which is taken from the ring of scalars, a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is a convolution defined by (f * g)(a, b) = Σ f(a, x) g(x, b), where the sum is over all x with a ≤ x ≤ b.
An incidence algebra is finite-dimensional if and only if the underlying poset is finite.
Related concepts
An incidence algebra is analogous to a group algebra; indeed, both the group algebra and the incidence algebra are special cases of a category algebra, defined analogously; groups and posets being special kinds of categories.
Upper-triangular matrices
Consider the case of a partial order ≤ over an n-element set P. We enumerate P as p1, ..., pn, in such a way that the enumeration is compatible with the order ≤ on P, that is, pi ≤ pj implies i ≤ j, which is always possible.
Then, functions f as above, from intervals to scalars, can be thought of as n×n matrices M, where Mij = f(pi, pj) whenever pi ≤ pj, and Mij = 0 otherwise. Since we arranged P in a way consistent with the usual order on the indices of the matrices, they will appear as upper-triangular matrices with a prescribed zero-pattern determined by the incomparable elements in P under ≤.
The incidence algebra of ≤ is then isomorphic to the algebra of upper-triangular matrices with this prescribed zero-pattern and arbitrary (including possibly zero) scalar entries everywhere else, with the operations being ordinary matrix addition, scaling and multiplication.
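A minimal sketch of this matrix picture for the divisibility order on {1, ..., 8}: the zeta function becomes an upper-triangular 0/1 matrix, its inverse is the Möbius function discussed below, and its square counts the elements of each interval. The choice of poset is illustrative, and numpy is used purely for the matrix algebra:

```python
import numpy as np

n = 8
elements = list(range(1, n + 1))              # enumeration compatible with divisibility
leq = lambda a, b: b % a == 0                 # the partial order: a <= b iff a divides b

# Zeta function as an upper-triangular 0/1 matrix
zeta = np.array([[1 if leq(a, b) else 0 for b in elements] for a in elements])

# Möbius function = inverse of zeta (its entries are integers)
mobius = np.rint(np.linalg.inv(zeta)).astype(int)

print(mobius[0])          # mu(1, k) for k = 1..8 -> [1, -1, -1, 0, -1, 1, -1, 0]
print((zeta @ zeta)[0])   # zeta^2 counts the elements of each interval [1, k]
```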
Special elements
The multiplicative identity element of the incidence algebra is the delta function, defined by δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise.
The zeta function of an incidence algebra is the constant function ζ(a, b) = 1 for every nonempty interval [a, b]. Multiplying by ζ is analogous to integration.
One can show that ζ is invertible in the incidence algebra (with respect to the convolution defined above). (Generally, a member h of the incidence algebra is invertible if and only if h(x, x) is invertible for every x.) The multiplicative inverse of the zeta function is the Möbius function μ(a, b); every value of μ(a, b) is an integral multiple of 1 in the base ring.
The Möbius function can also be defined inductively by the following relation: μ(a, a) = 1 for every a, and μ(a, b) = −Σ μ(a, x) for a < b, where the sum is over all x with a ≤ x < b.
Multiplying by μ is analogous to differentiation, and is called Möbius inversion.
The square of the zeta function gives the number of elements in an interval: ζ²(x, y) = |[x, y]|.
Examples
Positive integers ordered by divisibility
The convolution associated to the incidence algebra for intervals [1, n] becomes the Dirichlet convolution, hence the Möbius function is μ(a, b) = μ(b/a), where the second "μ" is the classical Möbius function introduced into number theory in the 19th century.
Finite subsets of some set E, ordered by inclusion
The Möbius function is
μ(S, T) = (−1)^|T \ S|
whenever S and T are finite subsets of E with S ⊆ T, and Möbius inversion is called the principle of inclusion-exclusion.
Geometrically, this is a hypercube: 2^E = {0, 1}^E.
Natural numbers with their usual order
The Möbius function is μ(x, y) = 1 if y = x, μ(x, y) = −1 if y = x + 1, and μ(x, y) = 0 otherwise, and Möbius inversion is called the (backwards) difference operator.
Geometrically, this corresponds to the discrete number line.
The convolution of functions in the incidence algebra corresponds to multiplication of formal power series: see the discussion of reduced incidence algebras below. The Möbius function corresponds to the sequence (1, −1, 0, 0, 0, ... ) of coefficients of the formal power series 1 − t, and the zeta function corresponds to the sequence of coefficients (1, 1, 1, 1, ...) of the formal power series 1 + t + t² + t³ + ⋯ = 1/(1 − t), which is its inverse. The delta function in this incidence algebra similarly corresponds to the formal power series 1.
Finite sub-multisets of some multiset E, ordered by inclusion
The above three examples can be unified and generalized by considering a multiset E, and finite sub-multisets S and T of E. The Möbius function is μ(S, T) = (−1)^|T \ S| if T \ S is a set (that is, contains no repeated elements), and μ(S, T) = 0 otherwise (that is, if T \ S is a proper multiset).
This generalizes the positive integers ordered by divisibility by a positive integer corresponding to its multiset of prime factors with multiplicity, e.g., 12 corresponds to the multiset {2, 2, 3}.
This generalizes the natural numbers with their usual order by a natural number corresponding to a multiset of one underlying element and cardinality equal to that number, e.g., 3 corresponds to the multiset {x, x, x}.
Subgroups of a finite p-group G, ordered by inclusion
The Möbius function is μ(H, G) = (−1)^k p^(k(k−1)/2) if H is a normal subgroup of G with G/H elementary abelian of rank k, i.e. G/H ≅ (Z/pZ)^k, and it is 0 otherwise. This is a theorem of Weisner (1935).
Partitions of a set
Partially order the set of all partitions of a finite set by saying σ ≤ τ if σ is a finer partition than τ. In particular, let τ have t blocks which respectively split into s1, ..., st finer blocks of σ, which has a total of s = s1 + ⋅⋅⋅ + st blocks. Then the Möbius function is: μ(σ, τ) = (−1)^(s−t) (s1 − 1)! (s2 − 1)! ⋯ (st − 1)!.
Euler characteristic
A poset is bounded if it has smallest and largest elements, which we call 0 and 1 respectively (not to be confused with the 0 and 1 of the ring of scalars). The Euler characteristic of a bounded finite poset is μ(0,1). The reason for this terminology is the following: If P has a 0 and 1, then μ(0,1) is the reduced Euler characteristic of the simplicial complex whose faces are chains in P \ {0, 1}. This can be shown using Philip Hall's theorem, relating the value of μ(0,1) to the number of chains of length i.
Reduced incidence algebras
The reduced incidence algebra consists of functions which assign the same value to any two intervals which are equivalent in an appropriate sense, usually meaning isomorphic as posets. This is a subalgebra of the incidence algebra, and it clearly contains the incidence algebra's identity element and zeta function. Any element of the reduced incidence algebra that is invertible in the larger incidence algebra has its inverse in the reduced incidence algebra. Thus the Möbius function is also in the reduced incidence algebra.
Reduced incidence algebras were introduced by Doubilet, Rota, and Stanley to give a natural construction of various rings of generating functions.
Natural numbers and ordinary generating functions
For the poset of natural numbers with their usual order, the reduced incidence algebra consists of functions f invariant under translation, f(a + k, b + k) = f(a, b) for all k ≥ 0, so as to have the same value on isomorphic intervals [a+k, b+k] and [a, b]. Let t denote the function with t(a, a+1) = 1 and t(a, b) = 0 otherwise, a kind of invariant delta function on isomorphism classes of intervals. Its powers in the incidence algebra are the other invariant delta functions, t^n(a, a+n) = 1 and t^n(x, y) = 0 otherwise. These form a basis for the reduced incidence algebra, and we may write any invariant function as f = Σn≥0 f(0, n) t^n. This notation makes clear the isomorphism between the reduced incidence algebra and the ring of formal power series over the scalars R, also known as the ring of ordinary generating functions. We may write the zeta function as ζ = 1 + t + t² + t³ + ⋯ = 1/(1 − t), the reciprocal of the Möbius function μ = 1 − t.
Subset poset and exponential generating functions
For the Boolean poset of finite subsets ordered by inclusion, the reduced incidence algebra consists of invariant functions defined to have the same value on isomorphic intervals [S,T] and [S′,T ′] with |T \ S| = |T ′ \ S′|. Again, let t denote the invariant delta function with t(S,T) = 1 for |T \ S| = 1 and t(S,T) = 0 otherwise. Its powers are:
t^n(S, T) = Σ t(S0, S1) t(S1, S2) ⋯ t(Sn−1, Sn), where the sum is over all chains S = S0 ⊆ S1 ⊆ ⋯ ⊆ Sn = T, and the only non-zero terms occur for saturated chains with |Si \ Si−1| = 1; since these correspond to orderings of the n elements of T \ S, we get the unique non-zero value n!. Thus, the invariant delta functions are the divided powers t^n/n!, and we may write any invariant function as f = Σn f(∅, [n]) t^n/n!, where [n] = {1, . . . , n}. This gives a natural isomorphism between the reduced incidence algebra and the ring of exponential generating functions. The zeta function is ζ = Σn≥0 t^n/n! = e^t, with Möbius function:
μ = 1/ζ = e^(−t) = Σn≥0 (−1)^n t^n/n!. Indeed, this computation with formal power series proves that μ(S, T) = (−1)^|T \ S|. Many combinatorial counting sequences involving subsets or labeled objects can be interpreted in terms of the reduced incidence algebra, and computed using exponential generating functions.
Divisor poset and Dirichlet series
Consider the poset D of positive integers ordered by divisibility, denoted by a | b. The reduced incidence algebra consists of functions f that are invariant under multiplication: f(ka, kb) = f(a, b) for all k. (This multiplicative equivalence of intervals is a much stronger relation than poset isomorphism; e.g., for primes p, the two-element intervals [1,p] are all inequivalent.) For an invariant function, f(a,b) depends only on b/a, so a natural basis consists of invariant delta functions δn defined by δn(a, b) = 1 if b/a = n and 0 otherwise; then any invariant function can be written f = Σn≥1 f(1, n) δn.
The product of two invariant delta functions is:
(δn δm)(a, b) = Σ δn(a, c) δm(c, b) = δnm(a, b), where the sum is over all c with a | c | b,
since the only non-zero term comes from c = na and b = mc = nma. Thus, we get an isomorphism from the reduced incidence algebra to the ring of formal Dirichlet series by sending δn to n^(−s), so that f corresponds to the Dirichlet series Σn≥1 f(1, n) n^(−s).
The incidence algebra zeta function ζD(a,b) = 1 corresponds to the classical Riemann zeta function ζ(s) = Σn≥1 n^(−s), having reciprocal 1/ζ(s) = Σn≥1 μ(n) n^(−s), where μ(n) is the classical Möbius function of number theory. Many other arithmetic functions arise naturally within the reduced incidence algebra, and equivalently in terms of Dirichlet series. For example, the divisor function σ0(n) is the square of the zeta function, ζD²(1, n) = σ0(n), a special case of the above result that the square of the zeta function gives the number of elements in the interval [x,y]; equivalently, ζ(s)² = Σn≥1 σ0(n) n^(−s).
The product structure of the divisor poset facilitates the computation of its Möbius function. Unique factorization into primes implies D is isomorphic to an infinite Cartesian product ℕ × ℕ × ⋯, with the order given by coordinatewise comparison: n = p1^e1 p2^e2 ⋯, where pk is the kth prime, corresponds to its sequence of exponents (e1, e2, ...). Now the Möbius function of D is the product of the Möbius functions for the factor posets, computed above, giving the classical formula: μ(n) = (−1)^k if n is a product of k distinct primes (i.e., n is squarefree), and μ(n) = 0 otherwise.
The product structure also explains the classical Euler product for the zeta function. The zeta function of D corresponds to a Cartesian product of zeta functions of the factors, computed above as 1/(1 − t), so that ζD = Πk 1/(1 − tk), where the right side is a Cartesian product. Applying the isomorphism which sends t in the kth factor to pk^(−s), we obtain the usual Euler product ζ(s) = Πk 1/(1 − pk^(−s)).
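As a numerical illustration of this correspondence (truncating both the Dirichlet series and the Euler product, so the agreement with ζ(2) = π²/6 ≈ 1.6449 is only approximate):

```python
def primes_up_to(limit: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p, flag in enumerate(sieve) if flag]

s, N = 2.0, 100_000
zeta_partial = sum(n ** -s for n in range(1, N + 1))          # truncated Dirichlet series
euler_product = 1.0
for p in primes_up_to(N):                                     # truncated Euler product
    euler_product *= 1.0 / (1.0 - p ** -s)

print(zeta_partial, euler_product)   # both approach pi^2 / 6 ~ 1.6449
```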
See also
Graph algebra
Incidence coalgebra
Path algebra
Literature
Incidence algebras of locally finite posets were treated in a number of papers of Gian-Carlo Rota beginning in 1964, and by many later combinatorialists. Rota's 1964 paper was "On the foundations of combinatorial theory I: Theory of Möbius functions", published in Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete.
N. Jacobson, Basic Algebra. I, W. H. Freeman and Co., 1974. See section 8.6 for a treatment of Möbius functions on posets.
Further reading
Algebraic combinatorics
Order theory | Incidence algebra | Mathematics | 2,416 |
15,786,775 | https://en.wikipedia.org/wiki/Exosite | An exosite is a secondary binding site, remote from the active site, on an enzyme or other protein.
This is similar to an allosteric site, but differs in that the exosite typically must be occupied for the enzyme to be active. Exosites have recently become a topic of increased interest in biomedical research as potential drug targets.
References
External links
Enzymes
Catalysis | Exosite | Chemistry | 85 |
650,086 | https://en.wikipedia.org/wiki/Contour%20line | A contour line (also isoline, isopleth, isoquant or isarithm) of a function of two variables is a curve along which the function has a constant value, so that the curve joins points of equal value. It is a plane section of the three-dimensional graph of the function parallel to the -plane. More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value.
In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness or gentleness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.
The gradient of the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. A level set is a generalization of a contour line for functions of any number of variables.
Contour lines are curved, straight or a mixture of both lines on a map describing the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer the relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of the surface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from the estimated surface elevations, as when a computer program threads contours through a network of observation points of area centroids. In the latter case, the method of interpolation affects the reliability of individual isolines and their portrayal of slope, pits and peaks.
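A minimal sketch of machine-drawn contour lines for an arbitrary function of two variables, using matplotlib's contouring routines; the surface and the number of levels are illustrative choices, not taken from the article:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample a function of two variables on a grid and join points of equal value
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
z = np.hypot(x, y) + 0.5 * np.sin(2 * x)          # arbitrary "elevation" surface

fig, ax = plt.subplots()
cs = ax.contour(x, y, z, levels=10)               # contour lines at 10 equal values
ax.clabel(cs, inline=True, fontsize=8)            # label each line with its constant value
ax.set_aspect("equal")
plt.savefig("contours.png")
```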
History
The idea of lines that join points of equal value was rediscovered several times. The oldest known isobath (contour line of constant depth) is found on a map dated 1584 of the river Spaarne, near Haarlem, by Dutchman Pieter Bruinsz. In 1701, Edmond Halley used such lines (isogons) on a chart of magnetic variation. The Dutch engineer Nicholas Cruquius drew the bed of the river Merwede with lines of equal depth (isobaths) at intervals of 1 fathom in 1727, and Philippe Buache used them at 10-fathom intervals on a chart of the English Channel that was prepared in 1737 and published in 1752. Such lines were used to describe a land surface (contour lines) in a map of the Duchy of Modena and Reggio by Domenico Vandelli in 1746, and they were studied theoretically by Ducarla in 1771, and Charles Hutton used them in the Schiehallion experiment. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the French Corps of Engineers, Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects for Rocca d'Anfo, now in northern Italy, under Napoleon.
By around 1843, when the Ordnance Survey started to regularly record contour lines in Great Britain and Ireland, they were already in general use in European countries. Isobaths were not routinely used on nautical charts until those of Russia from 1834, and those of Britain from 1838.
As different uses of the technique were invented independently, cartographers began to recognize a common theme, and debated what to call these "lines of equal value" generally. The word isogram was proposed by Francis Galton in 1889 for lines indicating equality of some physical condition or quantity, though isogram can also refer to a word without a repeated letter. As late as 1944, John K. Wright still preferred isogram, but it never attained wide usage. During the early 20th century, isopleth was being used by 1911 in the United States, while isarithm had become common in Europe. Additional alternatives, including the Greek-English hybrid isoline and isometric line, also emerged. Despite attempts to select a single standard, all of these alternatives have survived to the present.
When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop are air quality and noise pollution contour maps, which first appeared in the United States in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters.
Types
Contour lines are often given specific names beginning with "iso-" according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix "iso-" can be replaced with "isallo-" to specify a contour line connecting points where a variable changes at the same rate during a given time period.
An isogon is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the term isogon has specific meanings which are described below. An isocline is a line joining points with equal slope. In population dynamics and in geomagnetics, the terms isocline and isoclinic line have specific meanings which are described below.
Equidistant points
A curve of equidistant points is a set of points all at the same distance from a given point, line, or polyline. In this case the function whose value is being held constant along a contour line is a distance function.
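A minimal sketch of such a distance function is given below; the polyline coordinates are hypothetical, and the contour of equidistant points at distance d is simply the set of points where this function equals d.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment ab (all 2-tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:                        # degenerate segment
        return math.hypot(px - ax, py - ay)
    # Projection parameter clamped to the segment
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy            # closest point on the segment
    return math.hypot(px - cx, py - cy)

def distance_to_polyline(p, vertices):
    """The function whose contour lines are curves of equidistant points."""
    return min(point_segment_distance(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))

polyline = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0)]
print(distance_to_polyline((2.0, 1.5), polyline))   # 1.5: lies on the d = 1.5 contour
```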
Isopleths
In 1944, John K. Wright proposed that the term isopleth be used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area, as opposed to isometric lines for variables that could be measured at a point; this distinction has since been followed generally. An example of an isopleth is population density, which can be calculated by dividing the population of a census district by the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process of interpolation. The idea of an isopleth map can be compared with that of a choropleth map.
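The two steps described above — computing the areal value at each district centroid and interpolating between centroids — can be illustrated with a short sketch. All district names, populations, areas, and coordinates below are invented for the example.

```python
# Hypothetical census districts: (name, population, area in km^2, centroid x-coordinate).
districts = [
    ("A", 12000, 40.0, 0.0),
    ("B", 30000, 25.0, 10.0),
]

# Step 1: the isopleth variable (population density) is computed per district
# and assigned to the district centroid.
densities = [(x, pop / area) for (_, pop, area, x) in districts]

# Step 2: linear interpolation between centroids locates a chosen isopleth value.
def isopleth_position(target, p0, p1):
    (x0, v0), (x1, v1) = p0, p1
    frac = (target - v0) / (v1 - v0)       # assumes target lies between v0 and v1
    return x0 + frac * (x1 - x0)

# Where does the 500 people/km^2 isopleth cross the line between the centroids?
print(isopleth_position(500.0, densities[0], densities[1]))   # about 2.22
```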
In meteorology, the word isopleth is used for any type of contour line.
Meteorology
Meteorological contour lines are based on interpolation of the point data received from weather stations and weather satellites. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available.
Meteorological contour maps may present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future.
Thermodynamic diagrams use multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system.
Barometric pressure
An isobar is a line of equal or constant pressure on a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. In meteorology, the barometric pressures shown are reduced to sea level, not the surface pressures at the map locations. The distribution of isobars is closely related to the magnitude and direction of the wind field, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting.
Isallobars are lines joining points of equal pressure change during a specific time interval. These can be divided into anallobars, lines joining points of equal pressure increase during a specific time interval, and katallobars, lines joining points of equal pressure decrease. In general, weather systems move along an axis joining high and low isallobaric centers. Isallobaric gradients are important components of the wind as they increase or decrease the geostrophic wind.
An isopycnal is a line of constant density. An isoheight or isohypse is a line of constant geopotential height on a constant-pressure surface chart. On such upper-air charts, isohypses play a role analogous to that of isobars on a constant-height map.
Temperature and related subjects
An isotherm is a line that connects points on a map that have the same temperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated. An isotherm at 0 °C is called the freezing level. The term lignes isothermes (or lignes d'égale chaleur) was coined by the Prussian geographer and naturalist Alexander von Humboldt, who as part of his research into the geographical distribution of plants published the first map of isotherms in Paris, in 1817. According to Thomas Hankins, the Scottish engineer William Playfair's graphical developments greatly influenced Alexander von Humboldt's invention of the isotherm. Humboldt later used his visualizations and analyses to contradict theories by Kant and other Enlightenment thinkers that non-Europeans were inferior due to their climate.
An isocheim is a line of equal mean winter temperature, and an isothere is a line of equal mean summer temperature.
An isohel is a line of equal or constant solar radiation.
An isogeotherm is a line of equal temperature beneath the Earth's surface.
Rainfall and air moisture
An isohyet or isohyetal line is a line on a map joining points of equal rainfall in a given period. A map with isohyets is called an isohyetal map.
An isohume is a line of constant relative humidity, while an isodrosotherm is a line of equal or constant dew point.
An isoneph is a line indicating equal cloud cover.
An isochalaz is a line of constant frequency of hail storms, and an isobront is a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously.
Snow cover is frequently shown as a contour-line map.
Wind
An isotach is a line joining points with constant wind speed.
In meteorology, the term isogon refers to a line of constant wind direction.
Freeze and thaw
An isopectic line denotes equal dates of ice formation each winter, and an isotac denotes equal dates of thawing.
Physical geography and oceanography
Elevation and depth
Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. They are used at a variety of scales, from large-scale engineering drawings and architectural plans, through topographic maps and bathymetric charts, up to continental-scale maps.
"Contour line" is the most common usage in cartography, but isobath for underwater depths on bathymetric maps and isohypse for elevations are also used.
In cartography, the contour interval is the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived.
Interpretation
There are several rules to note when interpreting terrain contour lines:
The rule of Vs: sharp-pointed vees usually are in stream valleys, with the drainage channel passing through the point of the vee, with the vee pointing upstream. This is a consequence of erosion.
The rule of Os: closed loops are normally uphill on the inside and downhill on the outside, and the innermost loop is the highest area. If a loop instead represents a depression, some maps note this by short lines called hachures which are perpendicular to the contour and point in the direction of the low. (The concept is similar to but distinct from hachures used in hachure maps.)
Spacing of contours: close contours indicate a steep slope; distant contours a shallow slope. Two or more contour lines merging indicates a cliff. By counting the number of contours that cross a segment of a stream, the stream gradient can be approximated.
Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is normally stated in the map key. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used with hypsometric tints on a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases.
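As a worked example of reading elevation change and stream gradient from a contour map (all figures below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Reading elevation change and stream gradient from a contour map.
contour_interval_m = 20          # stated in the map key
contours_crossed = 7             # contour lines crossed along a stream segment
segment_length_m = 3500          # horizontal length of that segment on the ground

elevation_change_m = contours_crossed * contour_interval_m    # 140 m
stream_gradient = elevation_change_m / segment_length_m       # 0.04 = 4 % slope

print(f"elevation change: {elevation_change_m} m")
print(f"approximate stream gradient: {stream_gradient:.2%}")
```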
Electrostatics
An isopotential map is a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing that electric potential. The term equipotential line or isopotential line refers to a curve of constant electric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. In three dimensions, equipotential surfaces may be depicted with a two dimensional cross-section, showing equipotential lines at the intersection of the surfaces and the cross-section.
The general mathematical term level set is often used to describe the full collection of points having a particular potential, especially in higher dimensional space.
Magnetism
In the study of the Earth's magnetic field, the term isogon or isogonic line refers to a line of constant magnetic declination, the variation of magnetic north from geographic north. An agonic line is drawn through points of zero magnetic declination. An isoporic line refers to a line of constant annual variation of magnetic declination.
An isoclinic line connects points of equal magnetic dip, and an aclinic line is the isoclinic line of magnetic dip zero.
An isodynamic line (from the Greek dynamis, meaning 'power') connects points with the same intensity of magnetic force.
Oceanography
Besides ocean depth, oceanographers use contours to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. In particular, isobathytherms are lines showing depths of water with equal temperature, isohalines show lines of equal ocean salinity, and isopycnals are surfaces of equal water density.
Geology
Various geological data are rendered as contour maps in structural geology, sedimentology, stratigraphy and economic geology. Contour maps are used to show the below ground surface of geologic strata, fault surfaces (especially low angle thrust faults) and unconformities. Isopach maps use isopachs (lines of equal thickness) to illustrate variations in thickness of geologic units.
Environmental science
In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps with isoplats. Some of the most widespread applications of environmental science contour maps involve mapping of environmental noise (where lines of equal sound pressure level are denoted isobels), air pollution, soil contamination, thermal pollution and groundwater contamination. By contour planting and contour ploughing, the rate of water runoff and thus soil erosion can be substantially reduced; this is especially important in riparian zones.
Ecology
An isoflor is an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity.
Social sciences
In economics, contour lines can be used to describe features which vary quantitatively over space. An isochrone shows lines of equivalent drive time or travel time to a given location and is used in the generation of isochrone maps. An isotim shows equivalent transport costs from the source of a raw material, and an isodapane shows equivalent cost of travel time.
Contour lines are also used to display non-geographic information in economics. Indifference curves are used to show bundles of goods to which a person would assign equal utility. An isoquant is a curve of equal production quantity for alternative combinations of input usages, and an isocost curve shows alternative usages having equal production costs.
In political science an analogous method is used in understanding coalitions (for example the diagram in Laver and Shepsle's work).
In population dynamics, an isocline shows the set of population sizes at which the rate of change, or partial derivative, for one population in a pair of interacting populations is zero.
Statistics
In statistics, isodensity lines or isodensanes are lines that join points with the same value of a probability density. Isodensanes are used to display bivariate distributions. For example, for a bivariate elliptical distribution the isodensity lines are ellipses.
Thermodynamics, engineering, and other sciences
Various types of graphs in thermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs. Common examples in thermodynamics are some types of phase diagrams.
Isoclines are used to solve ordinary differential equations.
In interpreting radar images, an isodop is a line of equal Doppler velocity, and an isoecho is a line of equal radar reflectivity.
In the case of hybrid contours, the energies of hybrid orbitals and the energies of pure atomic orbitals are plotted; the resulting graph is called a hybrid contour.
Other phenomena
isochasm: equal frequency of aurora occurrence
isochor: volume
isodose: absorbed dose of radiation
isophene: biological events occurring with coincidence such as plants flowering
isophote: illuminance
mobile telephony: mobile received power and cell coverage area
Algorithms
Finding boundaries of level sets after image segmentation (a minimal marching-squares sketch is given after this list)
Edge detection
Level-set method
Boundary tracing
Active contour model
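As referenced in the list above, a minimal sketch of the marching-squares idea is shown below. It is illustrative only: it scans each 2×2 cell of a scalar grid, finds where the chosen level crosses the cell edges by linear interpolation, and connects the crossing points into line segments (ambiguous saddle cells are paired in a fixed, arbitrary way).

```python
def marching_squares(grid, level):
    """Return contour line segments for one level of a 2-D scalar grid.

    grid[j][i] is the value at grid-index coordinates (x=i, y=j).  Each
    segment is a pair of (x, y) points in index space.
    """
    def interp(p, q, vp, vq):
        t = (level - vp) / (vq - vp)           # where the level crosses the edge
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    segments = []
    rows, cols = len(grid), len(grid[0])
    for j in range(rows - 1):
        for i in range(cols - 1):
            corners = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
            values = [grid[y][x] for (x, y) in corners]
            crossings = []
            for k in range(4):                  # the four cell edges
                p, q = corners[k], corners[(k + 1) % 4]
                vp, vq = values[k], values[(k + 1) % 4]
                if (vp < level) != (vq < level):
                    crossings.append(interp(p, q, vp, vq))
            # A cell has 0, 2 or 4 crossings; connect them in consecutive pairs.
            for a, b in zip(crossings[0::2], crossings[1::2]):
                segments.append((a, b))
    return segments

# Contour f(x, y) = x**2 + y**2 at level 4 on a small grid; the printed
# segments trace a circle of radius 2 in grid-index coordinates (centre (3, 3)).
grid = [[x * x + y * y for x in range(-3, 4)] for y in range(-3, 4)]
for seg in marching_squares(grid, 4.0):
    print(seg)
```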
Graphical design
To maximize readability of contour maps, there are several design choices available to the map creator, principally line weight, line color, line type and method of numerical marking.
Line weight is simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enables the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, on a topographic map, the even hundred foot elevations may be shown in a different weight from the twenty foot intervals.
Line color is the choice of any number of pigments that suit the display. Sometimes a sheen or gloss is used as well as color to set the contour lines apart from the base map. Line colour can be varied to show other information.
Line type refers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred.
Numerical marking is the manner of denoting the arithmetical values of contour lines. This can be done by placing numbers along some of the contour lines, typically using interpolation for intervening lines. Alternatively a map key can be produced associating the contours with their values.
If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope.
Plan view versus profile view
Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile are air pollutant concentrations and sound levels. In each of those cases it may be important to analyze (air pollutant concentrations or sound levels) at varying heights so as to determine the air quality or noise health effects on people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used in air pollution and noise pollution studies.
Labeling contour maps
Labels are a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. If numbers are placed close to each other, it means that the terrain is steep. Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy. Contour labels can be oriented so a reader is facing uphill when reading the label.
Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions, a task called automatic label placement.
See also
Aeronautical chart
Bathymetry
Dymaxion map
Fall line (topography)
Geologic map
Marching squares
Planform
Tensor field
TERCOM
References
External links
Forthright's Phrontistery
Cartography
Curves
Multivariable calculus
Topography
Relief maps | Contour line | Mathematics | 4,769 |
1,695,675 | https://en.wikipedia.org/wiki/Technophilia | Technophilia (from Greek τέχνη - technē, "art, skill, craft" and φίλος - philos, "beloved, dear, friend") refers generally to a strong attraction for technology, especially new technologies such as personal computers, the Internet, mobile phones, and home cinema. The term is used in sociology to examine individuals' interactions with society and is contrasted with technophobia.
On a psychodynamic level, technophilia generates the expression of its opposite, technophobia. Technophilia and technophobia are the two extremes of the relationship between technology and society. The technophile regards most or all technology positively, adopts new forms of technology enthusiastically, sees it as a means to improve life, and may even view it as a means to combat social problems.
Unlike technophobes, technophiles do not fear the effects of technological advancement on society. Technological determinism is the theory that humanity has little power to resist the influence that technology has on society.
Etymology
The word technophile is said to have originated in the 1960s as an "unflattering word introduced by technophobes". The idea of technophilia draws attention to how new technologies can generate strongly positive, innovation-oriented feelings; on the other hand, such enthusiasm can obscure an accurate view of technology's environmental and social impact. Unlike technophobes, technophiles are not afraid of the effects that today's developed technologies have on society.
Narcissism through technophilia
Many forms of technology are seen as venerable because the user experiences them as the embodiment of their own narcissism. Technophiles enjoy using technology and focus on the egocentric benefits of technology rather than seeing the potential issues associated with using technology too frequently. The notion of addiction is often negatively associated with technophilia, and describes technophiles who become too dependent on the forms of technology they possess.
Technological utopia
Technophiles may view technology's interaction with society as creating a utopia, cyber or otherwise, and a strong indescribable futuristic feeling. "In the utopian stories, technologies are seen as natural societal developments, improvements to daily life, or as forces that will transform reality for the better. Dystopian reactions emphasize fears of losing control, becoming dependent, and being unable to stop change". Both utopian and dystopian streams are woven into Aldous Huxley's Brave New World (1932) and George Orwell's Nineteen Eighty-Four (1949).
See also
Technocracy
Technological determinism
Technophobia
Transhumanism
References
Technology in society | Technophilia | Technology | 562 |
40,313,828 | https://en.wikipedia.org/wiki/Time-resolved%20mass%20spectrometry | Time-resolved mass spectrometry (TRMS) is a strategy in analytical chemistry that uses a mass spectrometry platform to collect data with temporal resolution. Implementation of TRMS builds on the ability of mass spectrometers to process ions within sub-second duty cycles. It often requires the use of customized experimental setups, although these can normally incorporate commercial mass spectrometers. As a concept in analytical chemistry, TRMS encompasses instrumental developments (e.g. interfaces, ion sources, mass analyzers), methodological developments, and applications.
Applications
An early application of TRMS was in the observation of flash photolysis process. It took advantage of a time-of-flight mass analyzer.
TRMS currently finds applications in the monitoring of organic reactions, formation of reactive intermediates, enzyme-catalyzed reactions, convection, protein folding, extraction, and other chemical and physical processes.
Temporal resolution
TRMS is typically implemented to monitor processes that occur on second to millisecond time scale. However, there exist reports from studies in which sub-millisecond resolutions were achieved.
References
Analytical chemistry
Biochemistry
Laboratory techniques
Mass spectrometry
Scientific techniques | Time-resolved mass spectrometry | Physics,Chemistry,Biology | 241 |
23,980,788 | https://en.wikipedia.org/wiki/C19H24N2 | The molecular formula C19H24N2 (molar mass: 280.40 g/mol) may refer to:
4-ANPP
Bamipine
Daledalin
Histapyrrodine
Ibogamine
Imipramine
Propazepine
Yohimban
Molecular formulas | C19H24N2 | Physics,Chemistry | 73 |
745,073 | https://en.wikipedia.org/wiki/F%C3%AAte%20de%20la%20Musique | The Fête de la Musique, also known in English as Music Day, Make Music Day, or World Music Day, is an annual music celebration that takes place on 21 June. On Music Day, citizens and residents are urged to play music outside in their neighborhoods or in public spaces and parks. Free concerts are also organized, where musicians play for fun and not for payment.
The first all-day musical celebration on the day of the summer solstice was originated by Jack Lang, then Minister of Culture of France, as well as by Maurice Fleuret; it was celebrated in Paris in 1982. Music Day later became celebrated in 120 countries around the world.
History
In October 1981, Maurice Fleuret became Director of Music and Dance at the French Ministry of Culture at Jack Lang's request. He applied his reflections to the musical practice and its evolution: "the music everywhere and the concert nowhere". When he discovered, in a 1982 study on the cultural habits of the French, that five million people, one young person out of two, played a musical instrument, he began to dream of a way to bring people out on the streets. It first took place in 1982 in Paris as the Fête de la Musique.
Ever since, the festival has become an international phenomenon, celebrated on the same day in more than 700 cities in 120 countries, including India, Germany, Italy, Greece, Russia, Australia, Peru, Brazil, Ecuador, Mexico, Canada, the United States, the UK, and Japan.
In the Anglosphere, the day has become known as Music Day, Make Music Day and World Music Day.
Purpose
Fête de la Musique's purpose is to promote music. Amateur and professional musicians are encouraged to perform in the streets, under the slogan "Faites de la musique" ("Make music"), a homophone of Fête de la musique. Thousands of free concerts are staged throughout the day, making all genres of music accessible to the public.
In France, all concerts must be free to the public, and all performers donate their time free of charge. This is true of most participating cities as well.
In France
Although the general public is largely tolerant of amateurs performing music in public areas after the usual hours, noise restrictions still apply and can prevent some establishments from remaining open and broadcasting music out of their doors without prior authorization. This means that the prefectures in France can still forbid individuals, groups, or establishments to install any audio hardware in the street.
Reach and impact
In recent years, 120 countries have participated in Fête de la Musique, with over 1,000 cities taking part across the world in a single year. In 2023, events were held on most continents.
Italy's Festa della Musica began in 1985, and became national in 1994.
The UK Event began as National Music Day in 1992. Make Music Day UK became an independent organization in 2022.
Ukraine has held the event in Lviv since 2013, and it has continued despite the Russian invasion of Ukraine.
In the United States, the Make Music Alliance was formed in 2014 to help coordinate efforts across the country. In 2023 there were 4,791 free concerts held across 117 U.S. cities, with over 100 in Cincinnati, Madison, New York City, Philadelphia, and Salem.
In Australia, Make Music Day Australia was initiated in 2018 by the Australian Music Association (AMA), and as of 2022 was co-hosted by the AMA and National Association of Music Merchants (NAMM). In 2023, a huge international project called "Make Music, Make Friends" partnered 10 Australian schools with schools around the world to share music and greet one another on Make Music Day.
Turkiye and Ghana held their first Make Music Days in 2022, and South Africa in 2023.
See also
Make Music Day UK
World music
References
External links
The French Culture Ministry's website on the Fête de la Musique (in French, international section also available in English)
1982 establishments in France
June observances
Music festivals in France
Music festivals established in 1982
Events in Paris
Summer solstice | Fête de la Musique | Astronomy | 834 |
70,397,447 | https://en.wikipedia.org/wiki/U%20band | The U band is a range of frequencies contained in the microwave region of the electromagnetic spectrum. Common usage places this range between 40 and 60 GHz, but may vary depending on the source using the term.
References
Microwave bands
Satellite broadcasting | U band | Engineering | 47 |
11,633,991 | https://en.wikipedia.org/wiki/Alfred%20Brousseau | Brother Alfred Brousseau, F.S.C. (February 17, 1907 – May 31, 1988), was an educator, photographer and mathematician and was known mostly as a founder of the Fibonacci Association and as an educator.
Biography
Brother Alfred Brousseau was born in North Beach, San Francisco, as one of six children. On August 14, 1920, Brousseau entered the juniorate of the De La Salle Christian Brothers (Brothers of the Christian Schools), a religious institute of teachers in the Roman Catholic Church. He was accepted into the Christian Brothers novitiate on 31 July 1923 and advanced to the scholasticate on the campus of St. Mary's College in 1924.
Academic career
In 1926, while still a college student, Brousseau began teaching at Sacred Heart High School in San Francisco, California. He continued teaching at the secondary level until 1930, when he was assigned to teach at St. Mary's College while pursuing a doctorate in physics, which he received from the University of California, Berkeley, in 1937. In 1941 Brousseau was appointed principal of Sacred Heart High School in San Francisco, and later was appointed provincial of the Christian Brothers of the District of California. He returned to St. Mary's College in 1959 and became chair of the School of Science. Between this period and 1978, Brousseau served as both president and treasurer of the Northern Section of the California Mathematics Council and later as president of the entire State Council.
In 1963, with the American mathematician Verner E. Hoggatt, Brousseau founded the Fibonacci Association with the intention of promoting research into the Fibonacci numbers and related fields. In 1969 Brousseau commented on the Fibonacci Association (and its associated journal, the Fibonacci Quarterly) in the April edition of Time magazine, "We got a group of people together in 1963, and just like a bunch of nuts, we started a mathematics magazine ... [People] tend to find an esthetic satisfaction in it. They think that there's some kind of mystical connection between these numbers and the universe."
Photography
Brousseau was a keen photographer and amassed a collection of in excess of 20,000 color 35 mm transparencies recording the native flora of California.
References
External links
Brother Alfred Brousseau
Selected photographs by Brousseau
California Mathematics Council
1907 births
1988 deaths
People from San Francisco
De La Salle Brothers
Roman Catholic religious brothers
Saint Mary's College of California alumni
UC Berkeley College of Letters and Science alumni
Educators from California
Fibonacci numbers
Photographers from California
20th-century American mathematicians
Scientists from California | Alfred Brousseau | Mathematics | 530 |
31,497,986 | https://en.wikipedia.org/wiki/Joseph%20W.%20Greig | Joseph W. Greig (1895–1977) was a Canadian-born American geochemist and physical chemist, a pioneer in high temperature phase equilibria and immiscibility investigations of oxides and sulfides. His name has been assigned to a new magnetic mineral, greigite (Fe3S4), discovered in 1963, increasing to nine the number of minerals known to have been named after Queen's geologists.
Career
Greig was born in Ontario, Canada in 1895. He studied geology and mineralogy at Queen's University before graduating from Columbia University. He received his Ph.D. from Harvard University and then worked at the Carnegie Institute for thirty-eight years. Once he retired in 1960, he became a visiting professor at Pennsylvania State University. He also served in World War I with the Canadian Expeditionary Force, and in the Second World War with the United States Bomber Command in the Pacific Theatre.
He was known and appreciated for his critical mind, which was very helpful for reviewing scientific papers and improving research proposals. As he applied this criticism to his own work even more than to that of others, it was also an obstacle to his own publications, and many of his works remain unpublished for this reason.
Greigite (Fe3S4)
In 1963, a newly discovered mineral was named "greigite" in his honor and in recognition of his contributions to mineralogy and physical chemistry. The new mineral, Fe3S4, a magnetic iron sulfide and the sulfur equivalent of magnetite (Fe3O4), was discovered in San Bernardino County, California, by the US Geological Survey.
References
External links
Geology at Queens
Memorial of Joseph W. Greig
1895 births
1977 deaths
Canadian emigrants to the United States
American physical chemists
American geochemists
Harvard University alumni
Queen's University at Kingston alumni
Columbia University alumni
20th-century American chemists | Joseph W. Greig | Chemistry | 373 |
11,593,538 | https://en.wikipedia.org/wiki/Severe%20weather | Severe weather is any dangerous meteorological phenomenon with the potential to cause damage, serious social disruption, or loss of human life. These vary depending on the latitude, altitude, topography, and atmospheric conditions. High winds, hail, excessive precipitation, and wildfires are forms and effects, as are thunderstorms, downbursts, tornadoes, waterspouts, tropical cyclones, and extratropical cyclones. Regional and seasonal phenomena include blizzards, snowstorms, ice storms, and dust storms.
Severe weather is one type of extreme weather, which includes unexpected, unusual, severe, or unseasonal weather and is by definition rare for that location or time of the year. Due to the effects of climate change, the frequency and intensity of some of the extreme weather events are increasing, for example, heatwaves and droughts.
Terminology
Meteorologists have generally defined severe weather as any aspect of the weather that poses risks to life or property or requires the intervention of authorities. A narrower definition of severe weather is any weather phenomenon relating to severe thunderstorms.
According to the World Meteorological Organization (WMO), severe weather can be categorized into two groups: general severe weather and localized severe weather. Nor'easters, European wind storms, and the phenomena that accompany them form over wide geographic areas. These occurrences are classified as general severe weather. Downbursts and tornadoes are more localized and therefore have a more limited geographic effect. These forms of weather are classified as localized severe weather.
The term severe weather is technically not the same phenomenon as extreme weather. Extreme weather describes unusual weather events that are at the extremes of the historical distribution for a given area.
Causes
Organized severe weather occurs under the same conditions that generate ordinary thunderstorms: atmospheric moisture, lift (often from thermals), and instability. A wide variety of conditions cause severe weather. Several factors can convert thunderstorms into severe weather. For example, a pool of cold air aloft may aid in the development of large hail from an otherwise innocuous-appearing thunderstorm. The most severe hail and tornadoes are produced by supercell thunderstorms, and the worst downbursts and derechos (straight-line winds) are produced by bow echoes. Both of these types of storms tend to form in environments with high wind shear.
Floods, hurricanes, tornadoes, and thunderstorms are considered to be the most destructive weather-related natural disasters. Although these weather phenomena are all related to cumulonimbus clouds, they form and develop under different conditions and geographic locations. The relationship between these weather events and their formation requirements is used to develop models to predict the most frequent and possible locations. This information is used to notify affected areas and save lives.
Categories
Severe thunderstorms can be assessed in three different categories. These are "approaching severe", "severe", and "significantly severe".
Approaching severe is defined as hail between 0.5 and 1 inch (1.3 and 2.5 cm) in diameter or winds between 50 and 58 mph (50 knots, 80–93 km/h). In the United States, such storms will usually warrant a Significant Weather Alert.
Severe is defined as hail 1 inch (2.5 cm) in diameter or larger, winds 58 mph (50 knots, 93 km/h) or greater, or a tornado.
Significant severe is defined as hail 2 inches (5 cm) in diameter or larger, winds 75 mph (65 knots, 120 km/h) or more, or a tornado of strength EF2 or stronger.
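The three categories above can be expressed as a simple threshold check. The sketch below is illustrative only and uses the hail, wind, and tornado thresholds as summarized in this section; it is not an official warning criterion, and the function name and argument units are arbitrary choices.

```python
def classify_storm(hail_diameter_in=0.0, wind_mph=0.0, tornado_ef=None):
    """Rough classification following the three categories described above."""
    if (hail_diameter_in >= 2.0 or wind_mph >= 75
            or (tornado_ef is not None and tornado_ef >= 2)):
        return "significant severe"
    if hail_diameter_in >= 1.0 or wind_mph >= 58 or tornado_ef is not None:
        return "severe"
    if hail_diameter_in >= 0.5 or wind_mph >= 50:
        return "approaching severe"
    return "below severe criteria"

print(classify_storm(hail_diameter_in=1.25))                 # severe
print(classify_storm(wind_mph=80))                           # significant severe
print(classify_storm(wind_mph=52, hail_diameter_in=0.25))    # approaching severe
```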
Both severe and significant severe events warrant a severe thunderstorm warning from the United States National Weather Service (excludes flash floods), the Environment Canada, the Australian Bureau of Meteorology, the Meteorological Service of New Zealand and the Meteorological Office UK, if the event occurs in those countries. If a tornado is occurring (a tornado has been seen by spotters) or is imminent (Doppler weather radar has observed strong rotation in a storm, indicating an incipient tornado), the severe thunderstorm warning will be superseded by a tornado warning in the United States and Canada.
A severe weather outbreak is typically considered to be when ten or more tornadoes, some of which will likely be long-tracked and violent, and many large hail or damaging wind reports occur within one or more consecutive days. Severity is also dependent on the size of the geographic area affected, whether it covers hundreds or thousands of square kilometers.
High winds
High winds are known to cause damage, depending upon their strength.
Even relatively low wind speeds may lead to power outages when tree branches fall and disrupt power lines. Some species of trees are more vulnerable to winds. Trees with shallow roots are more prone to uproot, and brittle trees such as eucalyptus, sea hibiscus, and avocado are more prone to branch damage.
Wind gusts may cause poorly designed suspension bridges to sway. When wind gusts harmonize with the frequency of the swaying bridge, the bridge may fail as occurred with the Tacoma Narrows Bridge in 1940.
Hurricane-force winds, caused by individual thunderstorms, thunderstorm complexes, derechos, tornadoes, extratropical cyclones, or tropical cyclones can destroy mobile homes and structurally damage buildings with foundations. Winds of this strength due to downslope winds off terrain have been known to shatter windows and sandblast paint from cars.
At the far higher wind speeds found within strong tropical cyclones and tornadoes, homes completely collapse, and significant damage is done to larger buildings. Total destruction of man-made structures occurs at the most extreme wind speeds. The Saffir–Simpson scale for cyclones and Enhanced Fujita scale (TORRO scale in Europe) for tornadoes were developed to help estimate wind speed from the damage they cause.
Tornado
A dangerous rotating column of air in contact with both the surface of the earth and the base of a cumulonimbus cloud (thundercloud) or a cumulus cloud, in rare cases. Tornadoes come in many sizes but typically form a visible condensation funnel whose narrowest end reaches the earth and surrounded by a cloud of debris and dust.
Tornadoes' wind speeds generally average between 40 mph (64 km/h) and 110 mph (180 km/h). They are approximately 250 feet (76 m) across and travel a few miles (kilometers) before dissipating. Some attain wind speeds in excess of 300 mph (480 km/h), may stretch more than two miles (3.2 km) across, and maintain contact with the ground for dozens of miles (more than 100 km). The Enhanced Fujita Scale and the TORRO Scale are two examples of scales used to rate the strength, intensity and/or damage of a tornado.
Tornadoes, despite being one of the most destructive weather phenomena, are generally short-lived. A long-lived tornado generally lasts no more than an hour, but some have been known to last for 2 hours or longer (for example, the Tri-State Tornado). Due to their relatively short duration, less information is known about the development and formation of tornadoes.
Waterspout
Waterspouts are generally defined as tornadoes or non-supercell tornadoes that develop over bodies of water.
Waterspouts typically do not do much damage because they occur over open water, but they are capable of traveling over land. Vegetation, weakly constructed buildings, and other infrastructure may be damaged or destroyed by waterspouts. Waterspouts do not generally last long over terrestrial environments as the friction produced easily dissipates the winds. Strong horizontal winds will cause waterspouts to dissipate as they disturb the vortex. While not generally as dangerous as "classic" tornadoes, waterspouts can overturn boats, and they can cause severe damage to larger ships.
Downburst and derecho
Downbursts are created within thunderstorms by significantly rain-cooled air, which, upon reaching ground level, spreads out in all directions and produce strong winds. Unlike winds in a tornado, winds in a downburst are not rotational but are directed outwards from the point where they strike land or water. "Dry downbursts" are associated with thunderstorms with very little precipitation, while wet downbursts are generated by thunderstorms with large amounts of rainfall. Microbursts are very small downbursts with winds that extend up to 2.5 miles (4 km) from their source, while macrobursts are large-scale downbursts with winds that extend in excess of 2.5 miles (4 km). The heat burst is created by vertical currents on the backside of old outflow boundaries and squall lines where rainfall is lacking. Heat bursts generate significantly higher temperatures due to the lack of rain-cooled air in their formation. Derechos are longer, usually stronger, forms of downburst winds characterized by straight-lined windstorms.
Downbursts create vertical wind shear or microbursts, which are dangerous to aviation. These convective downbursts can produce damaging winds, lasting 5 to 30 minutes, strong enough to cause tornado-like damage on the ground. Downbursts also occur much more frequently than tornadoes, with ten downburst damage reports for every one tornado.
Squall line
A squall line is an elongated line of severe thunderstorms that can form along or ahead of a cold front. The squall line typically contains heavy precipitation, hail, frequent lightning, strong straight line winds, and possibly tornadoes or waterspouts. Severe weather in the form of strong straight-line winds can be expected in areas where the squall line forms a bow echo, in the farthest portion of the bow. Tornadoes can be found along waves within a line echo wave pattern (LEWP) where mesoscale low-pressure areas are present. Intense bow echoes responsible for widespread, extensive wind damage are called derechos, and move quickly over large territories. A wake low or a mesoscale low-pressure area forms behind the rain shield (a high pressure system under the rain canopy) of a mature squall line and is sometimes associated with a heat burst.
Squall lines often cause severe straight-line wind damage, and most non-tornadic wind damage is caused from squall lines. Although the primary danger from squall lines is straight-line winds, some squall lines also contain weak tornadoes.
Tropical cyclone
Very high winds can be caused by mature tropical cyclones (called hurricanes in the United States and Canada and typhoons in eastern Asia). A tropical cyclone's heavy surf created by such winds may cause harm to marine life either close to or upon the surface of the water, such as coral reefs. Coastal regions usually take more serious wind damage than inland, due to rapid dissipation upon landfall, though heavy rain from their remnants may flood either.
Strong extratropical cyclones
European windstorms are severe local windstorms that develop from winds off the North Atlantic. These windstorms are commonly associated with destructive extratropical cyclones and their low pressure frontal systems. European windstorms occur mainly in the seasons of autumn and winter. Severe European windstorms are often characterized by heavy precipitation as well.
A synoptic-scale extratropical storm along the upper East Coast of the United States and Atlantic Canada is called a Nor'easter. They are so named because their winds come from the northeast, especially in the coastal areas of the Northeastern United States and Atlantic Canada. More specifically, it describes a low-pressure area whose center of rotation is just off the upper East Coast and whose leading winds in the left forward quadrant rotate onto land from the northeast. Nor'easters may cause coastal flooding, coastal erosion, heavy rain or snow, and hurricane-force winds. The precipitation pattern of Nor'easters is similar to other mature extratropical storms. Nor'easters can cause heavy rain or snow, either within their comma-head precipitation pattern or along their trailing cold or stationary front. Nor'easters can occur at any time of the year but are mostly known for their presence in the winter season.
Dust storm
A dust storm is an unusual form of windstorm that is characterized by the existence of large quantities of sand and dust particles carried by the wind. Dust storms frequently develop during periods of droughts, or over arid and semi-arid regions.
Dust storms have numerous hazards and are capable of causing deaths. Visibility may be reduced dramatically, so risks of vehicle and aircraft crashes are possible. Additionally, the particulates may reduce oxygen intake by the lungs, potentially resulting in suffocation. Damage can also be inflicted upon the eyes due to abrasion.
Dust storms can cause many issues for agricultural industries as well. Soil erosion is one of the most common hazards and decreases arable lands. Dust and sand particles can cause severe weathering of buildings and rock formations. Nearby bodies of water may be polluted by settling dust and sand, killing aquatic organisms. Decreased exposure to sunlight can affect plant growth, and reduced infrared radiation reaching the surface may lower temperatures.
Wildfires
The most common cause of wildfires varies throughout the world. In the United States, Canada, and Northwest China, lightning is the major source of ignition. In other parts of the world, human involvement is a major contributor. For instance, in Mexico, Central America, South America, Africa, Southeast Asia, Fiji, and New Zealand, wildfires can be attributed to human activities such as animal husbandry, agriculture, and land-conversion burning. Human carelessness is a major cause of wildfires in China and in the Mediterranean Basin. In Australia, the source of wildfires can be traced to both lightning strikes and human activities such as machinery sparks and cast-away cigarette butts. Wildfires have a rapid forward rate of spread (FROS) when burning through dense, uninterrupted fuels. They can move quickly through forests and even faster through open grasslands. Wildfires can advance tangential to the main front to form a flanking front, or burn in the opposite direction of the main front by backing.
Wildfires may also spread by jumping or spotting as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks. Torching and fires in tree canopies encourage spotting, and dry ground fuels that surround a wildfire are especially vulnerable to ignition from firebrands. Spotting can create spot fires as hot embers and firebrands ignite fuels downwind from the fire. In Australian bushfires, spot fires are known to occur many kilometres ahead of the fire front. Since the mid-1980s, earlier snowmelt and associated warming have also been associated with an increase in the length and severity of the wildfire season in the Western United States.
Hail
Any form of thunderstorm that produces precipitating hailstones is known as a hailstorm. Hailstorms are generally capable of developing in any geographic area where thunderclouds (cumulonimbus) are present, although they are most frequent in tropical and monsoon regions. The updrafts and downdrafts within cumulonimbus clouds cause water molecules to freeze and solidify, creating hailstones and other forms of solid precipitation. Due to their larger density, these hailstones become heavy enough to overcome the density of the cloud and fall towards the ground. The downdrafts in cumulonimbus clouds can also cause increases in the speed of the falling hailstones. The term hailstorm is usually used to describe the existence of significant quantities or size of hailstones.
Hailstones can cause serious damage, notably to automobiles, aircraft, skylights, glass-roofed structures, livestock, and crops. Rarely, massive hailstones have been known to cause concussions or fatal head trauma. Hailstorms have been the cause of costly and deadly events throughout history. One of the earliest recorded incidents occurred around the 12th century in Wellesbourne, Britain. The largest hailstone in terms of maximum circumference and length ever recorded in the United States fell in 2003 in Aurora, Nebraska, USA. The hailstone had a diameter of 7 inches (18 cm) and a circumference of 18.75 inches (47.6 cm).
Heavy rainfall and flooding
Heavy rainfall can lead to a number of hazards, most of which are floods or hazards resulting from floods. Flooding is the inundation of areas that are not normally under water. It is typically divided into three classes: River flooding, which relates to rivers rising outside their normal banks; flash flooding, which is the process where a landscape, often in urban and arid environments, is subjected to rapid floods; and coastal flooding, which can be caused by strong winds from tropical or non-tropical cyclones. Meteorologically, excessive rains occur within a plume of air with high amounts of moisture (also known as an atmospheric river), which is directed around an upper level cold-core low or a tropical cyclone.
Flash flooding frequently occurs with slow-moving thunderstorms and is usually caused by the heavy liquid precipitation that accompanies them. Flash floods are most common in densely populated urban environments, where fewer plants and bodies of water are present to absorb and contain the extra water. Flash flooding can be hazardous to small infrastructure, such as bridges, and to weakly constructed buildings. Plants and crops in agricultural areas can be destroyed by the force of raging water. Automobiles parked within affected areas can also be displaced. Soil erosion can occur as well, exposing risks of landslides. Like all forms of flooding, flash flooding can also spread waterborne and insect-borne diseases caused by microorganisms. Flash flooding can be caused by extensive rainfall released by tropical cyclones of any strength or by the sudden thawing effect of ice dams.
Monsoons
Seasonal wind shifts lead to long-lasting wet seasons, which produce the bulk of annual precipitation in areas such as Southeast Asia, Australia, Western Africa, eastern South America, Mexico, and the Philippines. Widespread flooding occurs if rainfall is excessive, which can lead to landslides and mudflows in mountainous areas. Floods cause rivers to exceed their capacity with nearby buildings becoming submerged. Flooding may be exacerbated if there are fires during the previous dry season. This may cause soils that are sandy or composed of loam to become hydrophobic and repel water.
Government organizations help their residents deal with wet-season floods though floodplain mapping and information on erosion control. Mapping is conducted to help determine areas that may be more prone to flooding. Erosion control instructions are provided through outreach over the telephone or the internet.
Flood waters that occur during monsoon seasons can often host numerous protozoa, bacterial, and viral microorganisms. Mosquitoes and flies will lay their eggs within the contaminated bodies of water. These disease agents may cause infections of foodborne and waterborne diseases. Diseases associated with exposure to flood waters include malaria, cholera, typhoid, hepatitis A, and the common cold. Possible trench foot infections may also occur when personnel are exposed for extended periods of time within flooded areas.
Tropical cyclone
A tropical cyclone is a rapidly rotating storm system characterized by a low-pressure center, a closed low-level atmospheric circulation, strong winds, and a spiral arrangement of thunderstorms that produce heavy rain or squalls. A tropical cyclone feeds on heat released when moist air rises, resulting in condensation of water vapor contained in the moist air. Tropical cyclones may produce torrential rain, high waves, and damaging storm surge. Heavy rains produce significant inland flooding. Storm surges may produce extensive coastal flooding that can extend well inland from the coastline.
Although cyclones take an enormous toll in lives and personal property, they are also important factors in the precipitation regimes of areas they affect. They bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage. Tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere.
Severe winter weather
Heavy snowfall
When extratropical cyclones deposit heavy, wet snow with a snow-water equivalent (SWE) ratio of between 6:1 and 12:1, and a weight in excess of 10 pounds per square foot (~50 kg/m2) piles onto trees or electricity lines, significant damage may occur on a scale usually associated with strong tropical cyclones. An avalanche can occur with a sudden thermal or mechanical impact on snow that has accumulated on a mountain, which causes the snow to rush downhill suddenly. Preceding an avalanche is a phenomenon known as an avalanche wind, caused by the approaching avalanche itself, which adds to its destructive potential. Large amounts of snow that accumulate on top of man-made structures can lead to structural failure. During snowmelt, acidic precipitation that previously fell in the snow pack is released and harms marine life.
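The snow-load figure quoted above can be related to snow depth and the SWE ratio with a back-of-the-envelope calculation. The sketch below is illustrative only: the conversion of roughly 5.2 pounds per square foot for each inch of liquid water follows from the density of water, and the example depth and ratio are arbitrary choices.

```python
# Estimating the weight of snow on a flat surface from depth and the
# snow-to-water (SWE) ratio.  One inch of water over one square foot
# weighs about 5.2 lb (144 in^3 of water at ~62.4 lb/ft^3).

def snow_load_lb_per_sqft(snow_depth_in, swe_ratio):
    water_equivalent_in = snow_depth_in / swe_ratio    # inches of liquid water
    return water_equivalent_in * 5.2

# Two feet of heavy, wet snow at a 6:1 ratio:
print(round(snow_load_lb_per_sqft(24, 6), 1))   # about 20.8 lb/sq ft, well above 10
```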
Lake-effect snow is produced in the winter in the shape of one or more elongated bands. This occurs when cold winds move across long expanses of warmer lake water, providing energy and picking up water vapor, which then freezes and is deposited on the lee shores. For more information on this effect see the main article.
Conditions within blizzards often include large quantities of blowing snow and strong winds that may significantly reduce visibility. Reduced visibility can cause personnel on foot to become disoriented, resulting in extended exposure to the blizzard and an increased chance of becoming lost. The strong winds associated with blizzards create wind chill that can result in frostbite and hypothermia. The strong winds present in blizzards are capable of damaging plants and may cause power outages, frozen pipes, and cut-off fuel lines.
Ice storm
Ice storms are also known as a Silver storm, referring to the color of the freezing precipitation. Ice storms are caused by liquid precipitation which freezes upon cold surfaces and leads to the gradual development of a thickening layer of ice.
The accumulations of ice during the storm can be extremely destructive. Trees and vegetation can be destroyed and in turn may bring down power lines, causing the loss of heat and communication lines. Roofs of buildings and automobiles may be severely damaged. Gas pipes can become frozen or even damaged causing gas leaks. Avalanches may develop due to the extra weight of the ice present. Visibility can be reduced dramatically. The aftermath of an ice storm may result in severe flooding due to sudden thawing, with large quantities of displaced water, especially near lakes, rivers, and bodies of water.
Heat and drought
Drought
Another form of severe weather is drought, which is a prolonged period of persistently dry weather (that is, absence of precipitation). Although droughts do not develop or progress as quickly as other forms of severe weather, their effects can be just as deadly; in fact, droughts are classified and measured based upon these effects. Droughts have a variety of severe effects; they can cause crops to fail, and they can severely deplete water resources, sometimes interfering with human life. A drought in the 1930s known as the Dust Bowl affected 50 million acres of farmland in the central United States. In economic terms, they can cost many billions of dollars: a drought in the United States in 1988 caused over $40 billion in losses, exceeding the economic totals of Hurricane Andrew, the Great Flood of 1993, and the 1989 Loma Prieta earthquake. In addition to the other severe effects, the dry conditions caused by droughts also significantly increase the risk of wildfires.
Heat waves
Although official definitions vary, a heat wave is generally defined as a prolonged period with excessive heat. Although heat waves do not cause as much economic damage as other types of severe weather, they are extremely dangerous to humans and animals: according to the United States National Weather Service, the average total number of heat-related fatalities each year is higher than the combined total fatalities for floods, tornadoes, lightning strikes, and hurricanes. In Australia, heat waves cause more fatalities than any other type of severe weather. The dry conditions that may accompany heat waves can also severely affect plant life as the plants lose moisture and die. Heat waves are often more severe when combined with high humidity.
See also
List of natural disasters by death toll
List of severe weather phenomena
Storm chasing
References
External links
Design Discussion Primer – Severe Storms residential building design strategies.
Weather hazards | Severe weather | Physics | 4,917 |
8,960,168 | https://en.wikipedia.org/wiki/PalmPilot%20Professional | The PalmPilot Professional is a personal digital assistant. The PalmPilot was released on March 10, 1997 as an updated version of the Pilot 5000, although general availability of the Professional model in the marketplace was delayed.
It was marketed with a compact design, a back-lit display and the ability to quickly connect to a Microsoft Windows or Macintosh personal computer. It has the ability to synchronize via its cradle (or through a modem, which was sold separately) with a computer, making it possible to send e-mails, set appointments with others, and manage contact information. Various third party applications, such as upIRC, enabled connecting to various messaging systems, including most popular instant messaging services. An optional memory card with an IR port was available as an upgrade directly from Palm.
System Details
Operating System: Palm OS 2.0, upgradeable to Palm OS 2.0.5 with 1MB or Palm OS 3.0 if 2MB memory upgrade is installed.
Processor: Motorola MC68328 DragonBall
Internal RAM: 1 MB
Screen Resolution: 160x160 pixels
Battery Type: 2 AAAs
Battery Life: 30 hours
Size: 117 x 81 x 17 mm - 4.6 x 3.2 x 0.7 in.
Weight:
References
Palm OS devices
Computer-related introductions in 1997
Products introduced in 1997
68k-based mobile devices | PalmPilot Professional | Technology | 280 |
71,410,644 | https://en.wikipedia.org/wiki/Wickerhamomycetaceae | The Wickerhamomycetaceae are a family of yeasts in the order Saccharomycetales that reproduce by budding. Species in the family have a widespread distribution.
The genus Wickerhamomyces used to be placed within the family Phaffomycetaceae, until 2008 when it was separated and placed within its own family, Wickerhamomycetaceae.
Beneficially, various Wickerhamomyces species have been used in a number of biotechnological applications, such as in the environmental, food and beverage industries (including wine making), biofuel, medicine and agriculture.
Description
These fungi reproduce asexually, with budding that is multilateral on a narrow base. The cells are spherical, ovoid, or elongate in shape. Pseudohyphae and true hyphae (long, branching, filamentous structures) are produced by some species. In sexual reproduction, the asci (spore-bearing cells) may be unconjugated or show conjugation between a cell and its bud or between independent cells. Some species are heterothallic (the sexes reside in different individuals). Asci may be persistent or deliquescent and form one to four ascospores that may be hat-shaped or spherical with an equatorial ledge.
These yeasts can be found in soils, on plant material (such as the phylloplane of rice), and also as opportunistic pathogens of humans and animals.
Genera
According to GBIF, the United States Department of Agriculture, and the Agricultural Research Service:
Figures in brackets are the approximate number of species per genus.
Wickerhamomyces anomalus is normally found on plants, but has been found in sugar, dry salted beans, sauerkraut and in cucumber brines.
References
Other sources
Kurtzman, C. P., C. J. Robnett, and E. Basehoar-Powers. 2008. Phylogenetic relationships among species of Pichia, Issatchenkia and Williopsis determined from multigene sequence analysis, and the proposal of Barnettozyma gen. nov., Lindnera gen. nov. and Wickerhamomyces gen. nov. FEMS Yeast Res 8:939-54.
Yeasts
Saccharomycetes
Ascomycota families | Wickerhamomycetaceae | Biology | 491 |
16,086,895 | https://en.wikipedia.org/wiki/Independent%20electron%20approximation | In condensed matter physics, the independent electron approximation is a simplification used in complex systems, consisting of many electrons, that approximates the electron–electron interaction in crystals as null. It is a requirement for both the free electron model and the nearly-free electron model, where it is used alongside Bloch's theorem. In quantum mechanics, this approximation is often used to simplify a quantum many-body problem into single-particle approximations.
While this simplification holds for many systems, electron–electron interactions may be very important for certain properties in materials. For example, the theory covering much of superconductivity is BCS theory, in which the attraction of pairs of electrons to each other, termed "Cooper pairs", is the mechanism behind superconductivity. One major effect of electron–electron interactions is that electrons distribute around the ions so that they screen the ions in the lattice from other electrons.
Quantum treatment
For an example of the independent electron approximation's usefulness in quantum mechanics, consider an N-atom crystal with one free electron per atom (each atom with atomic number Z). Neglecting spin, the Hamiltonian of the system takes the form:
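A hedged sketch of a standard form of this Hamiltonian, written here in SI units (the article's exact convention is an assumption), consistent with the description that follows:

\[
H \;=\; \sum_{i=1}^{N}\left( -\frac{\hbar^{2}}{2m_e}\nabla_i^{2} \;-\; \sum_{I=1}^{N}\frac{Ze^{2}}{4\pi\varepsilon_0\,\lvert \mathbf{r}_i-\mathbf{R}_I\rvert} \;+\; \frac{1}{2}\sum_{j\neq i}\frac{e^{2}}{4\pi\varepsilon_0\,\lvert \mathbf{r}_i-\mathbf{r}_j\rvert} \right)
\]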
where ħ is the reduced Planck constant, e is the elementary charge, m_e is the electron rest mass, and ∇_i is the gradient operator for electron i. The capitalized R_I is the Ith lattice location (the equilibrium position of the Ith nucleus) and the lowercase r_i is the ith electron position.
The first term in parentheses is called the kinetic energy operator while the last two are simply the Coulomb interaction terms for electron–nucleus and electron–electron interactions, respectively. If the electron–electron term were negligible, the Hamiltonian could be decomposed into a set of N decoupled Hamiltonians (one for each electron), which greatly simplifies analysis. The electron–electron interaction term, however, prevents this decomposition by ensuring that the Hamiltonian for each electron will include terms for the position of every other electron in the system. If the electron–electron interaction term is sufficiently small, however, the Coulomb interaction terms can be approximated by an effective potential term, which neglects electron–electron interactions. This is known as the independent electron approximation. Bloch's theorem relies on this approximation by setting the effective potential term to a periodic potential V(r) that satisfies V(r + K) = V(r), where K is any reciprocal lattice vector (see Bloch's theorem). This approximation can be formalized using methods from the Hartree–Fock approximation or density functional theory.
See also
Strongly correlated material
References
Omar, M. Ali (1994). Elementary Solid State Physics, 4th ed. Addison Wesley. .
Electron | Independent electron approximation | Chemistry | 552 |
11,305,143 | https://en.wikipedia.org/wiki/Power%20distribution%20unit | A power distribution unit (PDU) is a device fitted with multiple outputs designed to distribute electric power, especially to racks of computers and networking equipment located within a data center. Data centers face challenges in power protection and management solutions. This is why many data centers rely on PDU monitoring to improve efficiency, uptime, and growth. For data center applications, the power requirement is typically much larger than that of home or office style power strips, with power inputs as large as 22 kVA or even greater. Most large data centers utilize PDUs with 3-phase power input and 1-phase power output. There are two main categories of PDUs: basic PDUs and intelligent (networked) PDUs, or iPDUs. Basic PDUs simply provide a means of distributing power from the input to a plurality of outlets. Intelligent PDUs normally have an intelligence module that allows remote management of power metering information, power outlet on/off control, and/or alarms. Some advanced PDUs allow users to manage external sensors such as temperature, humidity, and airflow.
Form factors
PDUs vary from simple and inexpensive rack-mounted power strips to larger floor-mounted PDUs with multiple functions, including power filtering to improve power quality, intelligent load balancing, and remote monitoring and control by LAN or SNMP. The latter kind of PDU offers intelligent capabilities such as power metering at the inlet, outlet, and PDU branch circuit level, and support for environment sensors.
Newer generation of intelligent PDUs allow for IP consolidation, which means many PDUs can be linked in an array under a single IP address. Next-generation models also offer integration with electronic locks, providing the ability to network and manage PDUs and locks through the same appliance.
In data centers, larger PDUs are needed to power multiple server cabinets. Each server cabinet or row of cabinets may require multiple high current circuits, possibly from different phases of incoming power or different UPSs. Standalone cabinet PDUs are self-contained units that include main circuit breakers, individual circuit breakers, and power monitoring panels. The cabinet provides internal bus bars for neutral and grounding. Prepunched top and bottom panels allow for safe cable entry.
See also
Uninterruptible power supply
AC power plugs and sockets
References
External links
Out-of-band management
Data centers
Mains power connectors | Power distribution unit | Technology | 479 |
5,732,881 | https://en.wikipedia.org/wiki/Stewart%27s%20theorem | In geometry, Stewart's theorem yields a relation between the lengths of the sides and the length of a cevian in a triangle. Its name is in honour of the Scottish mathematician Matthew Stewart, who published the theorem in 1746.
Statement
Let a, b, c be the lengths of the sides of a triangle. Let d be the length of a cevian to the side of length a. If the cevian divides the side of length a into two segments of length m and n, with m adjacent to c and n adjacent to b, then Stewart's theorem states that
A common mnemonic used by students to memorize this equation (after rearranging the terms) is:
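A hedged sketch of the relation, stated in the usual textbook form with the labels above (a, b, c the sides, d the cevian, m, n the segments):

\[ b^{2} m + c^{2} n \;=\; a\,(d^{2} + mn), \]

which, rearranged as

\[ d\,a\,d + m\,a\,n \;=\; b\,m\,b + c\,n\,c, \]

is commonly memorized with the phrase "a man and his dad put a bomb in the sink" (man + dad = bmb + cnc).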
The theorem may be written more symmetrically using signed lengths of segments. That is, take the length AB to be positive or negative according to whether A is to the left or right of B in some fixed orientation of the line. In this formulation, the theorem states that if A, B, C are collinear points, and P is any point, then
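A hedged sketch of the symmetric identity, with overlines denoting signed lengths (the exact notation is an assumption):

\[ \overline{PA}^{2}\cdot\overline{BC} \;+\; \overline{PB}^{2}\cdot\overline{CA} \;+\; \overline{PC}^{2}\cdot\overline{AB} \;+\; \overline{BC}\cdot\overline{CA}\cdot\overline{AB} \;=\; 0. \]

A quick numeric check with collinear points A = 0, B = 1, C = 3 on the x-axis and P = (0, 1) gives 1·2 + 2·(−3) + 10·1 + 2·(−3)·1 = 0.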
In the special case where the cevian is a median (meaning it divides the opposite side into two segments of equal length), the result is known as Apollonius' theorem.
Proof
The theorem can be proved as an application of the law of cosines.
Let θ be the angle between m and d and θ′ the angle between n and d. Then θ′ is the supplement of θ, and so cos θ′ = −cos θ. Applying the law of cosines in the two small triangles using angles θ and θ′ produces
Multiplying the first equation by and the third equation by and adding them eliminates . One obtains
which is the required equation.
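A condensed sketch of the algebra, under the labeling assumed above (m adjacent to c, θ the angle between d and m); the original presented three displayed relations, compressed here into two:

\[
\begin{aligned}
c^{2} &= m^{2} + d^{2} - 2md\cos\theta,\\
b^{2} &= n^{2} + d^{2} - 2nd\cos\theta' \;=\; n^{2} + d^{2} + 2nd\cos\theta.
\end{aligned}
\]

Multiplying the first equation by n, the second by m, and adding cancels the cosine terms:

\[
b^{2} m + c^{2} n \;=\; mn(m+n) + d^{2}(m+n) \;=\; (m+n)(d^{2}+mn) \;=\; a(d^{2}+mn).
\]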
Alternatively, the theorem can be proved by drawing a perpendicular from the vertex of the triangle to the base and using the Pythagorean theorem to write the distances , , in terms of the altitude. The left and right hand sides of the equation then reduce algebraically to the same expression.
History
According to , Stewart published the result in 1746 when he was a candidate to replace Colin Maclaurin as Professor of Mathematics at the University of Edinburgh. state that the result was probably known to Archimedes around 300 B.C.E. They go on to say (mistakenly) that the first known proof was provided by R. Simson in 1751. state that the result is used by Simson in 1748 and by Simpson in 1752, and its first appearance in Europe given by Lazare Carnot in 1803.
See also
Mass point geometry
Notes
References
Further reading
I.S Amarasinghe, Solutions to the Problem 43.3: Stewart's Theorem (A New Proof for the Stewart's Theorem using Ptolemy's Theorem), Mathematical Spectrum, Vol 43(03), pp. 138 – 139, 2011.
External links
Euclidean plane geometry
Theorems about triangles
Articles containing proofs | Stewart's theorem | Mathematics | 549 |
15,173,758 | https://en.wikipedia.org/wiki/SN%202003H | SN 2003H was a supernova that appeared halfway between the colliding NGC 2207 and IC 2163 galaxies. It was discovered on January 8, 2003, by the Lick Observatory and Tenagra Supernova Searches (LOTOSS).
References
External links
Spectra on the Open Supernova Catalog
Simbad
Canis Major
20030108
Supernovae | SN 2003H | Chemistry,Astronomy | 74 |
21,758,647 | https://en.wikipedia.org/wiki/Abell%20370 | Abell 370 is a galaxy cluster located nearly 5 billion light-years away from the Earth (at redshift z = 0.375), in the constellation Cetus. Its core is made up of several hundred galaxies. It was catalogued by George Abell, and is the most distant of the clusters he catalogued.
In the 1980s astronomers of Toulouse Observatory discovered a gravitational lens in space between Earth and Abell 370 using the Canada-France-Hawaii Telescope. A curious arc had been observed earlier near the cluster, but the astronomers were able to recognize it as this phenomenon.
Gravitational lensing
Abell 370 appears to include several arcs of light, including the largest ever discovered, about 30 arcseconds long. It was originally referred to as the Giant Arc, but was later renamed the Dragon Arc. These arcs or deformations are mirages caused by gravitational lensing of distant galaxies by the massive galaxy cluster located between the observer and the magnified galaxies. This cluster shows an apparent magnitude of +22.
In 2002, astronomers used this lensing effect to discover a galaxy, HCM-6A, 12.8 billion light years away from Earth. At the time it was the furthest known galaxy.
In 2009, an HST study in the field of Abell 370 revealed the 30-arcsecond-long arc in greater detail; with its appearance of a dragon, it was hence rebranded as The Dragon by NASA scientists. Its head is composed of a spiral galaxy, with another image of the spiral composing the tail. Several other images form the body of the dragon, all overlapping. These galaxies all lie approximately 5 billion light years away.
See also
Abell 2218
Abell catalogue
List of Abell clusters
References
Grossman, S. A. & Narayan, R., Gravitationally lensed images in Abell 370
Image of Abell 370 released by STScI/HST in May 2017: A Lot of Galaxies Need Guarding in This NASA Hubble View
External links
A lot of galaxies need guarding in this NASA Hubble view
Galaxy clusters
370
Abell richness class 0
Cetus | Abell 370 | Astronomy | 417 |
52,080,361 | https://en.wikipedia.org/wiki/List%20of%20computer-animated%20television%20series | This is a list of released animated television series made mainly with computer animation.
1990s
2000s
2010s
2020s
Upcoming
See also
List of computer-animated films
References
Lists of animated television series
Computing-related lists | List of computer-animated television series | Technology | 41 |
765,860 | https://en.wikipedia.org/wiki/Menninger%20Foundation | The Menninger Foundation was founded in 1919 by the Menninger family in Topeka, Kansas. The Menninger Foundation, known locally as Menninger's, consists of a clinic, a sanatorium, and a school of psychiatry, all of which bear the Menninger name. Menninger's consisted of a campus at 5800 S.W. 6th Avenue in Topeka, Kansas which included a pool as well as the other aforementioned buildings. In 2003, the Menninger Clinic moved to Houston. The foundation was started in 1919 by Dr. Charles F. Menninger and his sons, Drs. Karl and William Menninger. It represented the first group psychiatry practice. "We had a vision," Dr. C. F. Menninger said, "of a better kind of medicine and a better kind of world."
History
The Menninger Clinic, also known as the C. F. Menninger Memorial Hospital, was founded in the 1920s in Topeka, Kansas. The Menninger Sanitarium was founded in 1925. The Menninger Clinic established the Southard School for children in 1926. The school fostered treatment programs for children and adolescents that were recognized worldwide. In the 1930s the Menningers expanded training programs for psychiatrists, psychologists, and other mental health professionals.
The Menninger Foundation was established in 1941. The Menninger School of Psychiatry was established in 1946. It quickly became the largest training center in the country, driven by the country's demand for psychiatrists to treat military veterans.
Menninger announced its affiliation with Baylor College of Medicine and The Methodist Hospital in December 2002. The concept was that Menninger would perform treatment while Baylor would oversee research and education.
Moves
The Menninger Clinic moved in June 2003 from Topeka, Kansas to Houston, Texas. The Menninger Clinic again moved to its new location at 12301 S. Main St., Houston, Texas, 77035 in May 2012.
Current facilities
As of May 2012, The Menninger Clinic offers: Adolescent Treatment Program, a Professionals Program, the Compass Program for Young Adults, the Comprehensive Psychiatric Assessment & Stabilization Program, an Assessments Service and the Hope Program for Adults.
Revolution in psychiatric education
The Menninger School of Psychiatry and the local Veterans Administration Hospital represented the center of a psychiatric education revolution. The Clinic and the School became the hub for training professionals in the bio-psycho-social approach. This approach integrated the foundations of medical, psychodynamic, developmental, and family systems to focus on the overall health of patients. For patients, this way of treatment attended to their physical, emotional, and social needs.
Dr. Otto Fleischmann, head of the psychoanalytic institute from 1956 to 1963, was doing psychotherapy behind a one-way vision screen, in full view of all the students.
In 1960 Otto Kernberg joined the Clinic and later became its director until 1965.
Karl Menninger
Dr. Karl Menninger's first book, The Human Mind (1930), became a bestseller and familiarized the American public with human behavior. Many Americans also read his subsequent books, including The Vital Balance, Man Against Himself and Love Against Hate.
Will Menninger
Dr. Will Menninger made a major contribution to the field of psychiatry when he developed a system of hospital treatment known as milieu therapy. This approach involved a patient's total environment in treatment. Dr. Menninger served as Chief of the Army Medical Corps' Psychiatric Division during World War II. Under his leadership, the Army reduced losses in personnel due to psychological impairment. In 1945, the Army promoted Dr. Menninger to brigadier general. After the war, Dr. Menninger led a national revolution to reform state sanitariums. In 1948, Time magazine featured Dr. Menninger on its cover, lauding him as "psychiatry’s U.S. sales manager."
Activities
At the Menninger Clinic, staff proceeded to launch new treatment approaches and open specialty programs. The Menninger Foundation gained a reputation for intensive, individualized treatment, particularly for patients with complex or long-standing symptoms. The treatment approach was multidimensional, addressing a patient's medical, psychological, and social needs. Numerous independent organizations recognized the Menninger Foundation as a world leader in psychiatric and behavioral health treatment.
In 2020, US News & World Report listed Houston’s Menninger Clinic #5 in Psychiatry on their annual list of best hospitals.
The Menninger Clinic remains one of the primary North American settings supporting psychodynamically informed research on clinical diagnosis, assessment, and treatment. It has organized efforts around the construct of mentalizing, a concept integrating research activities related to attachment, theory of mind, internal representations, and neuroscience.
In the 1960s the Menninger Clinic studied Swami Rama, a noted yogi, specifically investigating his ability to exercise voluntary control of bodily processes (such as heartbeat) which are normally considered involuntary (autonomic), as well as Yoga Nidra. It was part of Gardner Murphy's research program into creativity and the paranormal, funded by the Ittleson Family Foundation.
See also
Roy W. Menninger
W. Walter Menninger
Harriet Lerner
Riley Gardner
The New York Foundation
References
Lawrence Jacob Friedman, Menninger: The Family and the Clinic, University Press of Kansas, 1992 (Reprint)
Robert S. Wallerstein, Forty-two lives in treatment : a study of psychoanalysis and psychotherapy : the report of the Psychotherapy Research Project of the Menninger Foundation, 1954-1982, New York : Other Press, 2000
External links
Menninger Clinic official website
Bulletin of the Menninger Clinic
The Topeka Capital Journal's in-depth coverage of Menninger leaving Topeka - index page
U.S. News & World Report psychiatric hospital rankings
Menninger Foundation Archives from Kansas State Historical Society
Access Menninger photographs and documents on Kansas Memory, the Kansas State Historical Society's digital portal
ERICA GOODE - Famed Psychiatric Clinic Abandons Prairie Home - New York Times Article 2003
Staff of Psychotherapy Research Project at Menninger in Topeka, Kansas, 1959, at Kansas Memory, not in PD
Biomedical research foundations
Mental health organizations based in Kansas
History of psychiatry
Psychoanalysis in the United States
Mental health organizations based in Texas
Medical and health foundations in the United States | Menninger Foundation | Engineering,Biology | 1,306 |
30,870,475 | https://en.wikipedia.org/wiki/Mobile%203D%20Graphics%20API | The Mobile 3D Graphics API, commonly referred to as M3G, is an open source graphics API and file format specification for developing Java ME applications that produce 3D computer graphics on embedded devices such as mobile phones and PDAs.
History
Originally developed at Nokia Research Center Tampere in 2003-2005, M3G was standardized under the Java Community Process as JSR 184 on 22 December 2003. The latest released version of M3G is 1.1, although version 2.0 was drafted as JSR 297 in April 2009. In 2010, the M3G 1.1 JNI source code and the related Symbian OS Java Runtime Environment were released into open source through the Symbian Foundation.
Rendering
M3G is an object-oriented interface consisting of 30 classes that can be used to draw complex animated three-dimensional scenes. It provides two ways for developers to draw 3D graphics: immediate mode and retained mode.
In immediate mode, graphics commands are issued directly into the graphics pipeline and the rendering engine executes them immediately. When using this method, the developer must write code that specifically tells the rendering engine what to draw for each animation frame. A camera and a set of lights are also associated with the scene, but are not necessarily part of it. In immediate mode it is possible to display single objects, as well as entire scenes (or worlds, with a camera, lights, and background as parts of the scene).
Retained mode always uses a scene graph that links all geometric objects in the 3D world in a tree structure, and also specifies the camera, lights, and background. Higher-level information about each object—such as its geometric structure, position, and appearance—is retained from frame to frame. In retained mode, data are not serialized by Java's own serialization mechanism. They are optimized by the M3G serialization mechanism, which produces and loads data streams conforming to the .m3g file format specification for 3D model data, including animation data format. This allows developers to create content on desktop computers that can be loaded by M3G on mobile devices.
Emulation
After the development of M3G was discontinued, emulation was achieved in 2020 by an open source Android application called "JL-Mod".
Further reading
Aarnio, Callow, Miettinen and Vaarala: Developing Mobile 3D Applications With OpenGL ES and M3G, SIGGRAPH 2005: Courses
Alessio Malizia: Mobile 3D Graphics, Springer, 2006,
Kari Pulli, Tomi Aarnio, Ville Miettinen, Kimmo Roimela, Jani Vaarala: Mobile 3D Graphics with OpenGL ES and M3G, Morgan Kaufmann, 2007,
Claus Höfele: Mobile 3D Graphics: Learning 3D Graphics with the Java Micro Edition, Thomson Course Technology PTR, 2007,
Carlos Morales, David Nelson: Mobile 3D Game Development: From Start to Market, Charles River Media, 2007,
References
External links
Java Community Process
JSR 184 (Mobile 3D Graphics API for J2ME 1.0, 1.1 Final Release 2)
JSR 297 (Mobile 3D Graphics API 2.0 Proposed Final Draft)
JSR 239 (Java Bindings for OpenGL ES) – related Java ME graphics specification
Specifications
JSR-000184 Mobile 3D Graphics API for J2ME(TM) 1.1 Maintenance Release
JSR 184 1.1 Specification (Mobile 3D Graphics API Technical Specification, Version 1.1, June 22 2005)
Getting Started With the Mobile 3D Graphics API for J2ME
3D graphics for Java mobile devices: Part 1 and Part 2
list of compatible devices
JSR 184 compatible devices (Performance listing of most mobile 3D devices)
Source code released by Symbian Foundation on GitHub
SymbianSource/oss.FCL.sf.app.JRT
3D graphics file formats
3D scenegraph APIs
Cross-platform free software
Java device platform
Java specification requests
Java APIs
History of software
2003 software
Mobile software
Nokia platforms
Discontinued software | Mobile 3D Graphics API | Technology | 815 |
1,474,305 | https://en.wikipedia.org/wiki/Synapsin | The synapsins are a family of proteins that have long been implicated in the regulation of neurotransmitter release at synapses. Specifically, they are thought to be involved in regulating the number of synaptic vesicles available for release via exocytosis at any one time. Synapsins are present in invertebrates and vertebrates and are strongly conserved across all species. They are expressed at highest concentration in the nervous system, although they are also expressed in other body systems such as the reproductive organs, including both eggs and spermatozoa. Synapsin function also increases as the organism matures, reaching its peak at sexual maturity.
Current studies suggest the following hypothesis for the role of synapsin: synapsins bind synaptic vesicles to components of the cytoskeleton which prevents them from migrating to the presynaptic membrane and releasing neurotransmitter. During an action potential, synapsins are phosphorylated by PKA (cAMP dependent protein kinase), releasing the synaptic vesicles and allowing them to move to the membrane and release their neurotransmitter.
Gene knockout studies in mice (where the mouse is unable to produce synapsin) have had some surprising results. Consistently, knockout studies have shown that mice lacking one or more synapsins have defects in synaptic transmission induced by high‐frequency stimulation, suggesting that the synapsins may be one of the factors boosting release probability in synapses at high firing rates, such as by aiding the recruitment of vesicles from the reserve pool. Furthermore, mice lacking all three synapsins are prone to seizures, and experience learning defects. These results suggest that while synapsins are not essential for synaptic function, they do serve an important modulatory role. Lastly, observed effects seemed to vary between inhibitory and excitatory synapses, suggesting synapsins may play a slightly different role in each type.
Family members
Humans and most other vertebrates possess three genes encoding three different synapsin proteins. Each gene in turn is alternatively spliced to produce at least two different protein isoforms for a total of six isoforms:
Different neuron terminals will express varying amounts of each of these synapsin proteins and collectively these synapsins will comprise 1% of the total expressed protein at any one time. Synapsin Ia has been implicated in bipolar disorder and schizophrenia.
References
Molecular neuroscience
Protein families
Peripheral membrane proteins | Synapsin | Chemistry,Biology | 523 |
541,351 | https://en.wikipedia.org/wiki/Champernowne%20constant | In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933. The number is defined by concatenating the base-10 representations of the positive integers:
C10 = 0.12345678910111213141516…
Champernowne constants can also be constructed in other bases similarly; for example,
C2 = 0.11011100101110111… (in base 2) and
C3 = 0.1210111220212210… (in base 3).
The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits: 123456789101112131415161718192021…
More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order.
For instance, the binary Champernowne sequence in shortlex order is 0 1 00 01 10 11 000 001 010 011 100 101 110 111 0000 0001 …
where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated.
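A minimal sketch (assuming Python; the function name and digit limit are illustrative, not from the article) of how such a word is built by concatenating base-b representations:

def champernowne_word(base=10, n_digits=60):
    """Concatenate the base-`base` representations of 1, 2, 3, ... and
    return the first `n_digits` digits of the resulting Champernowne word."""
    digits = []
    k = 1
    while len(digits) < n_digits:
        rep = []
        m = k
        while m:                       # convert k to the requested base
            rep.append("0123456789abcdef"[m % base])
            m //= base
        digits.extend(reversed(rep))   # most significant digit first
        k += 1
    return "".join(digits[:n_digits])

print(champernowne_word(10, 30))  # 123456789101112131415161718192
print(champernowne_word(2, 20))   # 11011100101110111100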
Properties
A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. A number x is said to be normal in base b if its digits in base b follow a uniform distribution.
If we denote a digit string as [a0, a1, ...], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], ..., [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number.
Champernowne proved that C10 is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that Cb is normal in base b for any base b. It is an open problem whether Cb is normal in other bases. For example, it is not known if C10 is normal in base 9. The first 54 digits of C10 are 0.123456789101112131415161718192021222324252627282930313; when this value is written in base 9, its digits no longer follow the simple concatenation pattern.
Kurt Mahler showed that the constant is transcendental.
The irrationality measure of C10 is 10, and more generally it is b for Cb in any base b ≥ 2.
The Champernowne word is a disjunctive sequence. A disjunctive sequence is an infinite sequence (over a finite alphabet of characters) in which every finite string appears as a substring.
Series
The definition of the Champernowne constant immediately gives rise to an infinite series representation involving a double sum,
where is the number of digits between the decimal point and the first contribution from an -digit base-10 number; these expressions generalize to an arbitrary base by replacing 10 and 9 with and respectively. Alternative forms are
and
where and denote the floor and ceiling functions.
Returning to the first of these series, both the summand of the outer sum and the expression for can be simplified using the closed form for the two-dimensional geometric series:
The resulting expression for is
while the summand of the outer sum becomes
Summing over all gives
Observe that in the summand, the expression in parentheses is approximately for and rapidly approaches that value as grows, while the exponent grows exponentially with . As a consequence, each additional term provides an exponentially growing number of correct digits even though the number of digits in the numerators and denominators of the fractions comprising these terms grows only linearly. For example, the first few terms of are
Continued fraction expansion
The simple continued fraction expansion of Champernowne's constant does not terminate (because the constant is not rational) and is aperiodic (because it is not an irreducible quadratic). A simple continued fraction is a continued fraction in which all the numerators are 1. The simple continued fraction expansion of Champernowne's constant exhibits extremely large terms appearing between many small ones. For example, in base 10,
C10 = [0; 8, 9, 1, 149083, 1, 1, 1, 4, 1, 1, 1, 3, 4, 1, 1, 1, 15, 4 57540 11139 10310 76483 64662 82429 56118 59960 39397 10457 55500 06620 04393 09026 26592 56314 93795 32077 47128 65631 38641 20937 55035 52094 60718 30899 84575 80146 98631 48833 59214 17830 10987, 6, 1, 1, ...].
The large number at position 18 has 166 digits, and the next very large term at position 40 of the continued fraction has 2504 digits. That there are such large numbers as terms of the continued fraction expansion means that the convergents obtained by stopping before these large numbers provide an exceptionally good approximation of the Champernowne constant. For example, truncating just before the 4th partial quotient, gives
which matches the first term in the rapidly converging series expansion of the previous section and which approximates Champernowne's constant with an error of about . Truncating just before the 18th partial quotient gives an approximation that matches the first two terms of the series, that is, the terms up to the term containing ,
which approximates Champernowne's constant with error approximately .
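A minimal sketch (assuming Python; the cutoff of 200 concatenated integers and the function names are illustrative assumptions) showing how the initial partial quotients above can be reproduced from an exact rational truncation of C10:

from fractions import Fraction

def champernowne_fraction(last_integer):
    """Exact rational approximation of C10 obtained by concatenating 1..last_integer."""
    s = "".join(str(k) for k in range(1, last_integer + 1))
    return Fraction(int(s), 10 ** len(s))

def continued_fraction(x, max_terms):
    """First partial quotients of the simple continued fraction of a positive Fraction."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # floor
        terms.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return terms

approx = champernowne_fraction(200)    # roughly 492 correct decimal digits
print(continued_fraction(approx, 6))   # [0, 8, 9, 1, 149083, 1]

Because the truncation error is far smaller than the reciprocal of the product of consecutive convergent denominators at this depth, the first few partial quotients of the truncation agree with those of the true constant.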
The first and second incrementally largest terms ("high-water marks") after the initial zero are 8 and 9, respectively, and occur at positions 1 and 2. Sikora (2012) noticed that the number of digits in the high-water marks starting with the fourth display an apparent pattern. Indeed, the high-water marks themselves grow doubly-exponentially, and the number of digits in the nth mark for are
6, 166, 2504, 33102, 411100, 4911098, 57111096, 651111094, 7311111092, ...
whose pattern becomes obvious starting with the 6th high-water mark. The number of terms can be given by
However, it is still unknown as to whether or not there is a way to determine where the large terms (with at least 6 digits) occur, or their values. The high-water marks themselves are located at positions
1, 2, 4, 18, 40, 162, 526, 1708, 4838, 13522, 34062, ....
See also
Copeland–Erdős constant, a similar normal number, defined using the prime numbers
Liouville's constant, another constant defined by its decimal representation
Smarandache–Wellin number, another number obtained through concatenation a representation in a given base.
References
.
.
Mathematical constants
Number theory
Real transcendental numbers
Sequences and series | Champernowne constant | Mathematics | 1,483 |
12,133,394 | https://en.wikipedia.org/wiki/Testicular%20receptor | The testicular receptor proteins are members of the nuclear receptor family of intracellular transcription factors. There are two forms of the receptor, TR2 and TR4, each encoded by a separate gene (NR2C1 and NR2C2, respectively).
References
External links
Intracellular receptors
Transcription factors | Testicular receptor | Chemistry,Biology | 54 |
11,422,283 | https://en.wikipedia.org/wiki/SroC%20RNA | The bacterial SroC RNA is a non-coding RNA gene of around 160 nucleotides in length. SroC is found in several enterobacterial species. This RNA interacts with the Hfq protein.
SroC acts as a ‘sponge,’ and base pairs with and regulates activity of the sRNA GcvB. This interaction triggers the degradation of GcvB by RNase E, alleviating the GcvB-mediated mRNA repression of other amino acid-related transport and metabolic genes.
References
External links
Non-coding RNA | SroC RNA | Chemistry | 115 |
312,255 | https://en.wikipedia.org/wiki/Partition%20function%20%28quantum%20field%20theory%29 | In quantum field theory, partition functions are generating functionals for correlation functions, making them key objects of study in the path integral formalism. They are the imaginary time versions of statistical mechanics partition functions, giving rise to a close connection between these two areas of physics. Partition functions can rarely be solved for exactly, although free theories do admit such solutions. Instead, a perturbative approach is usually implemented, this being equivalent to summing over Feynman diagrams.
Generating functional
Scalar theories
In a d-dimensional field theory with a real scalar field φ and action S[φ], the partition function is defined in the path integral formalism as the functional
where J is a fictitious source current. It acts as a generating functional for arbitrary n-point correlation functions
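A hedged sketch of the definitions just described, in one common convention (factors of i and the overall normalization vary between texts):

\[
Z[J] \;=\; \int \mathcal{D}\phi\; \exp\!\Big( iS[\phi] \;+\; i\!\int d^{d}x\, J(x)\,\phi(x) \Big),
\]
\[
\langle \phi(x_1)\cdots\phi(x_n)\rangle \;=\; \frac{1}{Z[0]}\,(-i)^{n}\,\frac{\delta^{n} Z[J]}{\delta J(x_1)\cdots\delta J(x_n)}\bigg|_{J=0}.
\]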
The derivatives used here are functional derivatives rather than regular derivatives since they are acting on functionals rather than regular functions. From this it follows that an equivalent expression for the partition function reminiscent to a power series in source currents is given by
In curved spacetimes there is an added subtlety that must be dealt with due to the fact that the initial vacuum state need not be the same as the final vacuum state. Partition functions can also be constructed for composite operators in the same way as they are for fundamental fields. Correlation functions of these operators can then be calculated as functional derivatives of these functionals. For example, the partition function for a composite operator is given by
Knowing the partition function completely solves the theory since it allows for the direct calculation of all of its correlation functions. However, there are very few cases where the partition function can be calculated exactly. While free theories do admit exact solutions, interacting theories generally do not. Instead the partition function can be evaluated at weak coupling perturbatively, which amounts to regular perturbation theory using Feynman diagrams with source insertions on the external legs. The symmetry factors for these types of diagrams differ from those of correlation functions since all external legs have identical insertions that can be interchanged, whereas the external legs of correlation functions are all fixed at specific coordinates and are therefore not interchangeable.
By performing a Wick transformation, the partition function can be expressed in Euclidean spacetime as
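A hedged sketch of the Euclidean form, again in one common convention (the source term is sometimes omitted):

\[
Z[J] \;=\; \int \mathcal{D}\phi\; \exp\!\Big( -S_E[\phi] \;+\; \int d^{d}x_E\, J(x)\,\phi(x) \Big).
\]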
where S_E is the Euclidean action and x_E are the Euclidean coordinates. This form is closely connected to the partition function in statistical mechanics, especially since the Euclidean Lagrangian is usually bounded from below in which case it can be interpreted as an energy density. It also allows for the interpretation of the exponential factor as a statistical weight for the field configurations, with larger fluctuations in the gradient or field values leading to greater suppression. This connection with statistical mechanics also lends additional intuition for how correlation functions should behave in a quantum field theory.
General theories
Most of the same principles of the scalar case hold for more general theories with additional fields. Each field requires the introduction of its own fictitious current, with antiparticle fields requiring their own separate currents. Acting on the partition function with a derivative of a current brings down its associated field from the exponential, allowing for the construction of arbitrary correlation functions. After differentiation, the currents are set to zero when correlation functions in a vacuum state are desired, but the currents can also be set to take on particular values to yield correlation functions in non-vanishing background fields.
For partition functions with Grassmann valued fermion fields, the sources are also Grassmann valued. For example, a theory with a single Dirac fermion requires the introduction of two Grassmann currents and so that the partition function is
Functional derivatives with respect to give fermion fields while derivatives with respect to give anti-fermion fields in the correlation functions.
Thermal field theories
A thermal field theory at temperature T is equivalent in Euclidean formalism to a theory with a compactified temporal direction of length β = 1/T. Partition functions must be modified appropriately by imposing periodicity conditions on the fields and the Euclidean spacetime integrals
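A hedged sketch for a scalar field (bosonic fields are taken periodic in Euclidean time; fermionic fields would instead be antiperiodic):

\[
Z(\beta) \;=\; \int_{\phi(\tau,\,\mathbf{x})\,=\,\phi(\tau+\beta,\,\mathbf{x})} \mathcal{D}\phi\;\; \exp\!\Big( -\!\int_0^{\beta}\! d\tau \int d^{\,d-1}x\;\mathcal{L}_E \Big).
\]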
This partition function can be taken as the definition of the thermal field theory in imaginary time formalism. Correlation functions are acquired from the partition function through the usual functional derivatives with respect to currents
Free theories
The partition function can be solved exactly in free theories by completing the square in terms of the fields. Since a shift by a constant does not affect the path integral measure, this allows for separating the partition function into a constant of proportionality arising from the path integral, and a second term that only depends on the current. For the scalar theory this yields
where Δ_F(x − y) is the position space Feynman propagator
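A hedged sketch of both expressions in a common Minkowski-signature textbook convention (signs and factors of i differ between conventions):

\[
Z[J] \;=\; Z[0]\,\exp\!\Big( -\tfrac{1}{2}\!\int d^{d}x\, d^{d}y\; J(x)\,\Delta_F(x-y)\,J(y) \Big),
\qquad
\Delta_F(x-y) \;=\; \int \frac{d^{d}p}{(2\pi)^{d}}\; \frac{i\,e^{-ip\cdot(x-y)}}{p^{2}-m^{2}+i\epsilon}.
\]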
This partition function fully determines the free field theory.
In the case of a theory with a single free Dirac fermion, completing the square yields a partition function of the form
where is the position space Dirac propagator
References
Further reading
Ashok Das, Field Theory: A Path Integral Approach, 2nd edition, World Scientific (Singapore, 2006); paperback .
Kleinert, Hagen, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 4th edition, World Scientific (Singapore, 2004); paperback (also available online: PDF-files).
Jean Zinn-Justin (2009), Scholarpedia, 4(2): 8674.
Quantum field theory | Partition function (quantum field theory) | Physics | 1,038 |
45,716,143 | https://en.wikipedia.org/wiki/Chrysocolla%20%28gold-solder%29 | Chrysocolla (gold-solder, Greek ; Latin chrȳsocolla, oerugo, santerna; Syriac "tankar" (Bar Bahlul), alchemical symbol 🜸), also known as "goldsmith's solder" and "solder of Macedonia" (Pseudo-Democritus), denotes:
The soldering of gold.
The materials used for soldering gold, as well as certain gold alloys, still used by goldsmiths. Martin Ruland (Lexicon alchemiae) explains chrysocolla as molybdochalkos, a copper-lead alloy. In Leyden papyrus X recipe 31 chrysocolla is an alloy composed of 4 parts copper, 2 parts asem (a kind of tin-copper alloy) and 1 part gold. Argyrochrysocolla appears to designate an alloy of gold and silver.
A mix of copper and iron salts, produced by the dissolution of a metallic vein by water, either spontaneously or by introducing water into a mine from winter to summer, and letting the mass dry during summer, which results in a yellow product.
Malachite (green carbonate of copper), and other alkaline copper salts of green colour. Azurite, the blue congener of malachite, was known as armenion, as it was mined in Armenia. On heating, malachite decomposes to carbon dioxide and copper, the latter inducing the soldering effect. According to an older opinion, chrysocolla was borax, which had been found in ancient gold foundries and is still used for soldering gold. Aristoteles (De mirabilibus) mentions that the Chalcedonian island Demonesus has a mine of cyan () and chrysocolla. Theophrastus (De lapidibus) describes chrysocolla as a kind of "false emerald" found in gold and copper mines, used for soldering gold. Pliny (Historia Naturalis) and Celsus mention that chrysocolla is extracted along with gold, and is used as a pigment and medicament. Dioscorides (De materia medica) describes the purification of the ore and its use in healing wounds, also noting its poisonous effect.
Greenish copper salts obtained by boiling infant's urine and natron in copper vessels. The resulting copper salts were scraped off and used for soldering gold. Infant's urine (Greek , Latin ) appears in many ancient recipes (Dioscorides, Pliny, Celsus, etc.) as a source of phosphates and ammonia.
A particular copper hydrosilicate is named chrysocolla by modern mineralogists.
See also
Chrysoberyl
Chrysolite
Chrysoprase
Chrysotile
Sarcocolla
References
History of metallurgy
Alchemical substances | Chrysocolla (gold-solder) | Chemistry,Materials_science | 594 |
10,851,309 | https://en.wikipedia.org/wiki/Acidity%20function | An acidity function is a measure of the acidity of a medium or solvent system, usually expressed in terms of its ability to donate protons to (or accept protons from) a solute (Brønsted acidity). The pH scale is by far the most commonly used acidity function, and is ideal for dilute aqueous solutions. Other acidity functions have been proposed for different environments, most notably the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media. The term acidity function is also used for measurements made on basic systems, and the term basicity function is uncommon.
Hammett-type acidity functions are defined in terms of a buffered medium containing a weak base B and its conjugate acid BH+:
where pKa is the dissociation constant of BH+. They were originally measured by using nitroanilines as weak bases or acid-base indicators and by measuring the concentrations of the protonated and unprotonated forms with UV-visible spectroscopy. Other spectroscopic methods, such as NMR, may also be used. The function H− is defined similarly for strong bases:
Here BH is a weak acid used as an acid-base indicator, and B− is its conjugate base.
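Hedging on the standard definitions, the two functions just described take the form

\[
H_0 \;=\; \mathrm{p}K_{\mathrm{a}}(\mathrm{BH^+}) \;+\; \log_{10}\frac{[\mathrm{B}]}{[\mathrm{BH^+}]},
\qquad
H_- \;=\; \mathrm{p}K_{\mathrm{a}}(\mathrm{BH}) \;+\; \log_{10}\frac{[\mathrm{B^-}]}{[\mathrm{BH}]}.
\]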
Comparison of acidity functions with aqueous acidity
In dilute aqueous solution, the predominant acid species is the hydrated hydrogen ion H3O+ (or more accurately [H(OH2)n]+). In this case H0 and H− are equivalent to pH values determined by the buffer equation or Henderson-Hasselbalch equation.
However, an H0 value of −21 (a 25% solution of SbF5 in HSO3F) does not imply a hydrogen ion concentration of 10²¹ mol/dm³: such a "solution" would have a density more than a hundred times greater than a neutron star. Rather, H0 = −21 implies that the reactivity (protonating power) of the solvated hydrogen ions is 10²¹ times greater than the reactivity of the hydrated hydrogen ions in an aqueous solution of pH 0. The actual reactive species are different in the two cases, but both can be considered to be sources of H+, i.e. Brønsted acids. The hydrogen ion H+ never exists on its own in a condensed phase, as it is always solvated to a certain extent. The high negative value of H0 in SbF5/HSO3F mixtures indicates that the solvation of the hydrogen ion is much weaker in this solvent system than in water. Another way of expressing the same phenomenon is to say that SbF5·FSO3H is a much stronger proton donor than H3O+.
References
Acids
Chemical properties
Solvents | Acidity function | Chemistry | 598 |
2,853,478 | https://en.wikipedia.org/wiki/Orellanine | Orellanine or orellanin is a mycotoxin found in a group of mushrooms known as the Orellani within the family Cortinariaceae. Structurally, it is a bipyridine N-oxide compound somewhat related to the herbicide diquat.
History
Orellanine first came to people's attention in 1952 when a mass poisoning of 102 people in Konin, Poland, resulted in 11 deaths. Orellanine comes from a class of mushrooms that fall under the genus Cortinarius, and has been found in the species C. orellanus, rubellus, henrici, rainerensis and bruneofulvus. Poisonings related to these mushrooms have occurred predominately in Europe where mushroom foraging was common, though cases of orellanine poisoning have been reported in North America and Australia as well. There are several reported cases of people ingesting orellanine-containing mushrooms after mistaking them for edible or hallucinogenic mushrooms.
Orellanine was first isolated in 1962, when Stanisław Grzymala extracted and isolated orellanine from the mushroom C. orellanus. Grzymala was also able to demonstrate the nephrotoxicity of C. orellanus and determine various physical and chemical properties of orellanine. He found that the toxicity of the mushroom was due to both delayed and acute kidney injury.
The chemical structure of orellanine was first deduced by Antkowiak and Gessner in 1979, who identified it as 3,3',4,4'-tetrahydroxy-2,2'-bipyridine-1,1'-dioxide.
The first successful synthesis of orellanine was reported in 1985. The total synthesis of orellanine starting with the bromination of 3-hydroxypyridine was reported a year later in 1986.
Synthesis
The first synthesis of orellanine was reported in 1985 by Dehmlow and Schulz, and required ten steps starting from 3-aminopyridine. The following year, Tiecco et al. reported the total synthesis of orellanine in nine steps starting from 3-hydroxypyridine.
Structure
Orellanine is a bipyridine N-oxide. Orellanine displays tautomerism, with the more stable tautomer being the pyridine N-oxide form. The chemical structure of synthetically produced orellanine has been confirmed by X-ray crystallography. In this crystal structure, the two pyridine rings are nearly perpendicular to each other, making orellanine chiral. However, samples of orellanine extracted from mushrooms are optically inactive racemic mixtures, likely due to racemization during the extraction process.
Toxicity
Orellanine displays a wide spectrum of toxin effects in plants, animals, and microorganisms. Although the mechanism of toxicity of orellanine is not yet fully understood, it likely targets cellular processes found in both prokaryotes and eukaryotes. Orellanine has been found to inhibit the synthesis of biomolecules such as proteins, RNA, and DNA, and promote non-competitive inhibition of several enzymes such as alkaline phosphatase, γ-glutamyltransferase, and leucyl aminopeptidase. In addition, orellanine has also been shown to interfere with the production of adenosine triphosphatase.
Orellanine is a bipyridine with positively charged nitrogen atoms, and chemically resembles the bipyridine herbicides paraquat and diquat. Like orellanine, paraquat and diquat are toxic not only to plants, but also to humans and livestock. Bipyridine compounds with charged nitrogen atoms disrupt important redox reactions in organisms, 'stealing' one or two electrons and sometimes passing the electrons along into other, often undesirable, redox reactions. The terminal products of these reactions can be harmful reactive oxygen species such as peroxide or superoxide ions, the latter of which are harmful to cells. It is thought that orellanine produces oxidative stress in a similar manner to paraquat and diquat.
In humans, a characteristic of poisoning by the nephrotoxin orellanine is the long latency; the first symptoms usually do not appear until 2–4 to 14 days after ingestion. The latent period decreases with the quantity of mushrooms consumed. The first symptoms of orellanine poisoning are similar to the common flu (nausea, vomiting, stomach pains, headaches, myalgia, etc.), these symptoms are followed by early stages of kidney failure (immense thirst, frequent urination, pain on and around the kidneys) and eventually decreased or nonexistent urine output and other symptoms of kidney failure occur. If left untreated death will follow.
The LD50 of orellanine in mice is 12 to 20 mg per kg body weight; this is the dose which leads to death within two weeks. From cases of orellanine-related mushroom poisoning in humans it seems that the lethal dose for humans is considerably lower.
Treatment
There is no known antidote against orellanine poisoning. Treatment consists mainly of supportive care and hemodialysis, if needed. Complete recovery of renal function occurs in only 30% of poisoned patients. There are reports of cases where treatment using corticosteroids and antioxidants led to improved clinical outcomes.
Research
This compound is currently in clinical trials as a potential treatment for various forms of renal cancer.
See also
Lethal webcaps
Cortinarius
Nephrotoxin
Diquat
References
External links
Cortinarius rubellus Pacific Northwest Fungi, Featured Fungus Number 4''
Mycotoxins found in Basidiomycota
Alkaloids
Amine oxides
Bipyridines
Nephrotoxins
Experimental cancer drugs
Drugs with unknown mechanisms of action | Orellanine | Chemistry | 1,225 |
71,466,674 | https://en.wikipedia.org/wiki/Hydrogen%20transport | Hydrogen transport involves the use of technology to transport hydrogen from the point of generation to the point of use.
Techniques
Hydrogen can be transported in a variety of forms.
Gas
Hydrogen can be transported in gaseous form, typically in a pipeline. Because hydrogen gas is highly reactive, the pipeline or other container must be able to resist interacting with the gas. Hydrogen's low density at atmospheric pressure means that gas transport is suitable only for low volume requirements.
Liquid
Hydrogen switches to the liquid phase at about −253 °C (20 K) at atmospheric pressure. Thus, transporting liquid hydrogen requires sophisticated refrigeration technologies such as cryogenic tanker trucks and liquefaction plants.
Compound
Hydrogen can be reacted with other elements to form a variety of compounds. This allows it to be transported in either liquid (e.g., water) or solid form. One variation on this concept is to transport atomic silicon, produced using renewable energy. Mixing silicon with water separates water's oxygen from its hydrogen without requiring additional energy. The hydrogen can then be oxidized with the oxygen (or air) to produce energy (with water as the only byproduct).
Mechanochemical
Mechanochemistry refers to chemical reactions triggered by mechanical forces as opposed to heat, light, or electric potential. Ball milling can crush material such as boron nitride or graphene, allowing hydrogen gas to be absorbed by the powder, storing the hydrogen. The hydrogen can be released by heating the powder. These techniques offer the potential of substantial net energy savings.
Safety
Hydrogen transport must address various safety threats.
It is highly flammable, requiring little energy to ignite. However, it has a low density (0.0837 g/L), which allows leaked gas to dissipate rapidly, rather than accumulate as a higher-density gas such as chlorine (3.214 g/L) might.
Liquid hydrogen requires such low temperatures that leaks may solidify other air components such as nitrogen and oxygen. Solid oxygen can mix with liquid hydrogen, forming a mixture that could self-ignite. A jet fire can also ignite.
At high concentrations, hydrogen gas is an asphyxiant, but is not otherwise toxic.
ISO Technical Committee 197 is developing standards governing hydrogen applications. Standards are available for onboard systems, fuel tanks and vehicle refueling systems, and for production (including electrolysis and steam methane reformers).
Individual jurisdictions such as Italy have developed additional standards.
See also
Hydrogen transportation
References
External links
Hazardous materials
Energy in transport | Hydrogen transport | Physics,Chemistry,Technology | 501 |
63,642,672 | https://en.wikipedia.org/wiki/Adrian%20Hooke | Adrian Hooke (died January 7, 2013) was an aerospace telecommunications engineer, and a cofounder of the Consultative Committee for Space Data Systems.
Biography
Adrian Hooke held a B.Sc in Electronic and Electrical Engineering from the University of Birmingham, England.
He worked on the Apollo program and other NASA programs as a young engineer. In 1982, he cofounded the Consultative Committee for Space Data Systems (CCSDS), an international consortium of space agencies, and remained active in the organization until 2012. Hooke helped develop standards published by the CCSDS, including the Space Communications Protocol Specifications (SCPS). He was involved in the Interplanetary Internet and Delay Tolerant Networking efforts to bring more computer networking into NASA telecommunications.
Awards
NASA Exceptional Service Medal (twice)
NASA Exceptional Achievement Medal
Special CCSDS Lifetime Leader Award, 2012
References
Astronautics
Consultative Committee for Space Data Systems
Telecommunications engineers
Electronics engineers
2013 deaths | Adrian Hooke | Engineering | 186 |
39,328,862 | https://en.wikipedia.org/wiki/Sexual%20selection%20in%20spiders | Sexual selection in spiders shows how sexual selection explains the evolution of phenotypic traits in spiders. Male spiders have many complex courtship rituals and have to avoid being eaten by the females, with the males of most species surviving only a few matings and consequently having short life-spans.
Pre-copulatory mate choice processes have been observed in a wide range of spider species, including Stegodyphus lineatus, Argiope aurantia, Schizocosa floridana, Hygrolycosa rubrofasciata, and Schizocosa stridulans.
Sexual selection occurs after copulation as well as before copulation. Post-copulatory sexual selection involves sperm competition and cryptic female choice. Sperm competition occurs when the sperm of more than one male competes to fertilize the egg of the female. Cryptic female choice involves the expelling of a male's sperm during or after copulation.
Male to male competition
Size is a factor in the reproductive success of males. Species such as Stegodyphus lineatus, Argiope aurantia and Argyroneta aquatica show sexual dimorphism that favors larger males, which are stronger and more aggressive and fight off smaller males using their large chelicerae and forelegs. This leads to a decrease in paternal success for smaller males, since they are unable to gain access to females. Argiope aurantia males can lose legs in combat, with the loss more prevalent in smaller males, evidence that larger males are favored in male-to-male competition. In the water spider Argyroneta aquatica, where males and females permanently live in the water, the males are larger, indicating sexual selective pressures for large body size. The large male water spiders are more mobile, helping them obtain more females.
Sexual selection can also benefit smaller male spiders under certain conditions. In species such as Misumena vatia and Nephila clavipes, smaller males climb faster to reach their mates, an advantage explained by the gravity hypothesis, and so outcompete larger males and have more reproductive success, especially when females live in high patches of flowers; where females live in low-lying areas, larger males are favored.
In spider families such as Tetragnathidae, Araneidae, Thomisidae and Pholcidae there is an optimal body size that favors climbing speed: smaller males have an advantage over the largest males of the species, but the smallest male is not the fastest climber. Males of the same species can also express different phenotypes, using weapons such as chelicerae, teeth or even legs to fight off oncoming rivals, with larger-bodied spiders having larger chelicerae. In most cases body size correlates with mating success. This is observed in Lyssomanes viridis, whose males display weapons that are much more pronounced than those of females and are selected to help males fight off competition.
Development time is also crucial to the overall fitness of a spider, but this does not mean that larger males will always have better fitness. In Latrodectus hasselti, larger males outcompete smaller males by getting to the female's web first. However, these large males have long development times, meaning they need more time before being able to copulate. Smaller males tend to have a quick development time, which gives them an advantage in mating with a female; this advantage correlates with high paternal success in Latrodectus hasselti. Larger males are thus able to outcompete smaller males yet may fail to mate, while smaller males risk being outcompeted but are more likely to have paternal success.
Sperm competition
Sperm competition occurs in many species, such as Unicorn catleyi, Nephila pilipes and Argiope aurantia, with males acting to limit it by guarding the female, inserting parts of the male genitalia into the female's reproductive organs, or using mating plugs formed from the male's seminal fluid. This process is observed in the species Unicorn catleyi, for example. In this species, males plug a female's insemination duct with a portion of their palp that contains the ejaculatory duct, called the embolus. The embolus found in the female's posterior receptaculum suggests that males are trying to limit sperm competition.
In some spider species, such as the Nephila pilipes, multiple males try to mate with only one female. This can be harmful to the female, because it forces her to participate in energy costly matings. In response to this polyandry, the female produces mating plugs of her own to prevent too many males from copulating with her.
The mating plugs transferred to females by the males are believed to be a possible cause of monogyny. For example, in the spider species Argiope aurantia, males will sometimes plug a female with both pedipalps to prevent sperm competition. When this occurs, the male loses his ability to mate with more than one female.
Mate choice
Mate choice is typically displayed by females, but males can be choosy as well. Traits associated with winning competitive bouts are more likely to be chosen by females. As body size affects male-to-male competition, females will choose the male with the more efficient body size. In the wolf spider Schizocosa floridana, females assess males based on their ability to cope with a changing environment, observing the way males adapt to differences in food availability at different times. Males who are able to adapt to the changes in food availability are well conditioned and usually show courtship displays such as tapping their forelegs and waving. Females choose the males who express these courtship displays and are larger in size, based on predictions of the male's foraging history.
Courtship displays, such as degrees of ornamentation, colors, and movements, are commonly expressed in individuals of a species to attract the opposite sex. The male Hygrolycosa rubrofasciata spider displays certain signals, known as drumming, where a male taps his legs on a rough surface such as a leaf to signal he is ready to mate, with its speed influencing female choice towards faster drummers. Once the female chooses the male, her body starts to shake, a signal that she is ready to mate too. Males who exhibit better drumming behavior typically are more viable.
Schizocosa stridulans males have ornamentation traits in their forelegs which affect their mating success. When courtship rates are high, ornamentation does not increase the reproductive rates of males because of the correlation between the aggressiveness of a spider and the degrees of ornamentation. Due to this correlation it is hypothesized that females choose males without ornamentation to avoid aggression from the males. Females are able to be choosy when courtship rates are high because they do not have to worry about missing out on copulations if there are plenty of male spiders to mate with. When courtship rates are low, males with high degrees of ornamentation are able to get to the female more quickly, thus giving them an advantage over non-ornamented males.
Sometimes facial color or leg brightness can play a role in mate choice. In several species of jumping spiders, including Habronattus pyrrithrix and Cosmophasis umbratica, males show different brightness and color of body parts prior to copulation. These colors can be used to the male's advantage in attracting a mate. In the species Habronattus pyrrithrix, males that have red faces and non-bright green legs are more likely to attract a mate than males that do not, indicating that females prefer males with those particular traits.
Although females from the species Hygrolycosa rubrofasciata, Schizocosa floridana and Schizocosa stridulans tend to be the choosier sex, it is not uncommon to observe males of other spider species, such as Zygiella x-notata and Latrodectus hesperus, being choosy as well. In the orb-weaving spider Zygiella x-notata, reproduction rates are affected by male choice under different conditions. These external conditions depend on the amount of competition between males of the species. When competition rates are low, males mate opportunistically with as many females as possible. When competition between males is high, larger males choose to mate with a large female, as opposed to smaller males, who choose to mate with any female. The belief is that the advantages of larger size in competition give the larger males an opportunity to increase their paternal success by allowing them to be more selective of females.
Sometimes males choose females who are large and better conditioned to avoid being eaten. Choosing a malnourished female can result in a male being cannibalized before copulation. Cannibalism by females is often expressed as a way for females to get nutrition from their mates after copulation. This cannibalistic behavior by females makes males more selective with whom to mate. The males from the species Latrodectus hesperus show high mate preference for better conditioned females. By choosing well nourished females, males are able to increase their mating success while limiting their chance of being consumed. This is because well nourished females are less likely to eat their mates than malnourished females.
Cryptic female choice
Cryptic female choice is a post-copulatory process of mate choice. This process is observed in numerous spider species such as Physocyclus globosus and Argiope bruennichi. For example, in the Argiope bruennichi species, males produce energetic courtship displays prior to copulation. Regardless of the displays, females are observed to mate with multiple males. Once copulation is over, the offspring of the female are more likely than not to have the courtship-display phenotype. The females of this species must be cryptically discarding sperm from the non-courting males while keeping the other males' sperm for fertilization. This allows a female to copulate with as many males as she wants while remaining choosy about whose sperm she retains afterwards. Discarding the sperm of a male who does not perform courtship displays indicates that females treat males who perform courtship displays as having the greatest fitness.
References
spiders, Sexual selection in | Sexual selection in spiders | Biology | 2,165 |
1,008,028 | https://en.wikipedia.org/wiki/Sudo | sudo () is a program for Unix-like computer operating systems that enables users to run programs with the security privileges of another user, by default the superuser. It originally stood for "superuser do", as that was all it did, and this remains its most common usage; however, the official Sudo project page lists it as "su 'do'". The current Linux manual pages for su define it as "substitute user", making the correct meaning of sudo "substitute user, do", because sudo can run a command as other users as well.
Unlike the similar command su, users must, by default, supply their own password for authentication, rather than the password of the target user. After authentication, and if the configuration file (typically /etc/sudoers) permits the user access, the system invokes the requested command. The configuration file offers detailed access permissions, including enabling commands only from the invoking terminal; requiring a password per user or group; requiring re-entry of a password every time or never requiring a password at all for a particular command line. It can also be configured to permit passing arguments or multiple commands.
History
Robert Coggeshall and Cliff Spencer wrote the original subsystem around 1980 at the Department of Computer Science at SUNY/Buffalo. Robert Coggeshall brought sudo with him to the University of Colorado Boulder. Between 1986 and 1993, the code and features were substantially modified by the IT staff of the University of Colorado Boulder Computer Science Department and the College of Engineering and Applied Science, including Todd C. Miller. The current version has been publicly maintained by OpenBSD developer Todd C. Miller since 1994, and has been distributed under an ISC-style license since 1999.
In November 2009 Thomas Claburn, in response to concerns that Microsoft had patented sudo, characterized such suspicions as overblown. The claims were narrowly framed to a particular GUI, rather than to the sudo concept.
The logo is a reference to an xkcd strip, where an order for a sandwich is accepted when preceded with 'sudo'.
Design
Unlike the command su, users supply their personal password to sudo (if necessary) rather than that of the superuser or other account. This allows authorized users to exercise altered privileges without compromising the secrecy of the other account's password. Users must be in a certain group to use the sudo command, typically either the wheel group or the sudo group. After authentication, and if the configuration file permits the user access, the system invokes the requested command. sudo retains the user's invocation rights through a grace period (typically 5 minutes) per pseudo terminal, allowing the user to execute several successive commands as the requested user without having to provide a password again.
As a security and auditing feature, sudo may be configured to log each command run. When a user attempts to invoke sudo without being listed in the configuration file, an exception indication is presented to the user indicating that the attempt has been recorded. If configured, the root user will be alerted via mail. By default, an entry is recorded in the system log.
Configuration
The /etc/sudoers file contains a list of users or user groups with permission to execute a subset of commands while having the privileges of the root user or another specified user. It is recommended that the file be edited using the command sudo visudo. Sudo contains several configuration options such as allowing commands to be run as sudo without a password, changing which users can use sudo, and changing the message displayed upon entering an incorrect password. Sudo features an easter egg that can be enabled from the configuration file that will display an insult every time an incorrect password is entered.
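As an illustration of the kind of rules such a file can hold, below is a minimal, hypothetical /etc/sudoers fragment; the user name, group, and command path are invented examples and would differ between systems.

    # Hypothetical /etc/sudoers fragment -- edit only via "visudo"

    # Members of the "sudo" group may run any command as any user and group
    %sudo    ALL=(ALL:ALL) ALL

    # The user "alice" may run one specific command as root without a password
    alice    ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx

    # Enable the "insults" easter egg mentioned above (if compiled in)
    Defaults insults

A rule like the first one is what many Linux distributions ship by default, which is why membership in the sudo or wheel group is typically what grants access to the command.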
Impact
In some system distributions, sudo has largely supplanted the default use of a distinct superuser login for administrative tasks, most notably in some Linux distributions as well as Apple's macOS. This allows for more secure logging of admin commands and prevents some exploits.
RBAC
In association with SELinux, sudo can be used to transition between roles in role-based access control (RBAC).
Tools and similar programs
visudo is a command-line utility that allows editing the sudo configuration file in a fail-safe manner. It prevents multiple simultaneous edits with locks and performs sanity and syntax checks.
Sudoedit is a program that symlinks to the sudo binary. When sudo is run via its sudoedit alias, sudo behaves as if the -e flag has been passed and allows users to edit files that require additional privileges to write to.
Microsoft released its own version of sudo for Windows in February 2024. It functions similarly to its Unix counterpart by giving the ability to run elevated commands from an unelevated console session. The program runas provides comparable functionality in Windows, but it cannot pass current directories, environment variables or long command lines to the child. And while it supports running the child as another user, it does not support simple elevation. Hamilton C shell also includes true su and sudo for Windows that can pass all of that state information and start the child either elevated or as another user (or both).
Graphical user interfaces exist for sudo – notably gksudo – but are deprecated in Debian and no longer included in Ubuntu. Other user interfaces are not directly built on sudo, but provide similar temporary privilege elevation for administrative purposes, such as pkexec in Unix-like operating systems, User Account Control in Microsoft Windows and Mac OS X Authorization Services.
doas, available since OpenBSD 5.8 (October 2015), has been written in order to replace sudo in the OpenBSD base system, with the latter still being made available as a port.
gosu is a tool similar to sudo that is popular in containers where the terminal may not be fully functional or where there are undesirable effects from running sudo in a containerized environment.
See also
chroot
doas
runas
Comparison of privilege authorization features
References
External links
Computer security software
System administration
Unix user management and support-related utilities
Software using the ISC license | Sudo | Technology,Engineering | 1,275 |
6,727,548 | https://en.wikipedia.org/wiki/Foreign-language%20writing%20aid | A foreign language writing aid is a computer program or any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Assisted aspects of writing include: lexical, syntactic (syntactic and semantic roles of a word's frame), lexical semantic (context/collocation-influenced word choice and user-intention-driven synonym choice) and idiomatic expression transfer, etc. Different types of foreign language writing aids include automated proofreading applications, text corpora, dictionaries, translation aids and orthography aids.
Background
The four major components in the acquisition of a language are listening, speaking, reading and writing. While most people have no difficulties in exercising these skills in their native language, doing so in a second or foreign language is not that easy. In the area of writing, research has found that foreign language learners find it painstaking to compose in the target language, producing less eloquent sentences and encountering difficulties in the revisions of their written work. However, these difficulties are not attributed to their linguistic abilities.
Many language learners experience foreign language anxiety, feelings of apprehensiveness and nervousness, when learning a second language. In the case of writing in a foreign language, this anxiety can be alleviated via foreign language writing aids as they assist non-native language users in independently producing decent written work at their own pace, hence increasing confidence about themselves and their own learning abilities.
With advancements in technology, aids in foreign language writing are no longer restricted to traditional mediums such as teacher feedback and dictionaries. Known as computer-assisted language learning (CALL), use of computers in language classrooms has become more common, and one example would be the use of word processors to assist learners of a foreign language in the technical aspects of their writing, such as grammar. In comparison with correction feedback from the teacher, the use of word processors is found to be a better tool in improving the writing skills of students who are learning English as a foreign language (EFL), possibly because students find it more encouraging to learn their mistakes from a neutral and detached source. Apart from learners' confidence in writing, their motivation and attitudes will also improve through the use of computers.
Foreign language learners' awareness of the conventions in writing can be improved through reference to guidelines showing the features and structure of the target genre. At the same time, interactions and feedback help to engage the learners and expedite their learning, especially with active participation. In online writing situations, learners are isolated without face-to-face interaction with others. Therefore, a foreign language writing aid should provide interaction and feedback so as to ease the learning process. This complements communicative language teaching (CLT); which is a teaching approach that highlights interaction as both the means and aim of learning a language.
Automation of proofreading process
In accordance with the simple view of writing, both lower-order and higher-order skills are required. Lower-order skills involve those of spelling and transcription, whereas higher-order skills involve that of ideation, which refers to idea generation and organisation. Proofreading is helpful for non-native language users in minimising errors while writing in a foreign language. Spell checkers and grammar checkers are two applications that aid in the automatic proofreading process of written work.
Spelling check and applications
To achieve writing competence in a non-native language, especially in an alphabetic language, spelling proficiency is of utmost importance. Spelling proficiency has been identified as a good indicator of a learner’s acquisition and comprehension of alphabetic principles in the target language. Documented data on misspelling patterns indicate that the majority of misspellings fall under the four categories of letter insertion, deletion, transposition and substitution. In languages where the pronunciation of certain sequences of letters may be similar, misspellings may occur when the non-native language learner relies heavily on the sounds of the target language because they are unsure about the correct spelling of the words. The spell checker application is a type of writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.
Operating modes
In general, spell checkers can operate in one of two modes: interactive spell checking or batch spell checking. In the interactive mode, the spell checker detects and marks misspelled words with a squiggly underline as the words are being typed. In batch mode, spell checking is performed on a batch-by-batch basis when the appropriate command is entered. Spell checkers, such as those used in Microsoft Word, can operate in either mode.
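A batch-mode checker of the simplest kind can be sketched in a few lines of Python using only the standard library; the tiny word list and the sample sentence below are hypothetical, and a real spell checker would load a full dictionary for the target language and handle morphology.

    import difflib

    # Hypothetical mini-dictionary; a real checker would load a full word list.
    DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

    def check_batch(text):
        """Return (misspelled word, suggestions) pairs for a whole text at once."""
        results = []
        for raw in text.lower().split():
            word = raw.strip(".,!?;:")
            if word and word not in DICTIONARY:
                # Suggest dictionary entries that are close in edit distance.
                suggestions = difflib.get_close_matches(word, list(DICTIONARY), n=3, cutoff=0.6)
                results.append((word, suggestions))
        return results

    print(check_batch("The quik brown fox jmups over the lazi dog."))

An interactive checker would run essentially the same lookup after each completed word instead of over the finished text.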
Evaluation
Although spell checkers are commonplace in numerous software products, errors specifically made by learners of a target language may not be sufficiently catered for. This is because generic spell checkers function on the assumption that their users are competent speakers of the target language, whose misspellings are primarily due to accidental typographical errors. The majority of misspellings were found to be attributed to systematic competence errors instead of accidental typographical ones, with up to 48% of these errors failing to be detected or corrected by the generic spell checker used.
In view of the deficiency of generic spell checkers, programs have been designed to gear towards non-native misspellings, such as FipsCor and Spengels. In FipsCor, a combination of methods, such as the alpha-code method, phonological reinterpretation method and morphological treatment method, has been adopted in an attempt to create a spell checker tailored to French language learners. On the other hand, Spengels is a tutoring system developed to aid Dutch children and non-native Dutch writers of English in accurate English spelling.
Grammar check and applications
Grammar (syntactical and morphological) competency is another indicator of a non-native speaker’s proficiency in writing in the target language. Grammar checkers are a type of computerised application which non-native speakers can make use of to proofread their writings as such programs endeavor to identify syntactical errors. Grammar and style checking is recognized as one of the seven major applications of Natural Language Processing and every project in this field aims to build grammar checkers into a writing aid instead of a robust man-machine interface.
Evaluation
Currently, grammar checkers are incapable of inspecting the linguistic or even syntactic correctness of text as a whole. They are restricted in their usefulness in that they are only able to check a small fraction of all the possible syntactic structures. Grammar checkers are unable to detect semantic errors in a correctly structured syntax order; i.e. grammar checkers do not register the error when the sentence structure is syntactically correct but semantically meaningless.
Although grammar checkers have largely been concentrated on ensuring grammatical writing, the majority of them are modelled after native writers, neglecting the needs of non-native language users. Much research has attempted to tailor grammar checkers to the needs of non-native language users. Granska, a Swedish grammar checker, has been worked on extensively by numerous researchers investigating grammar-checking properties for foreign language learners. The Universidad Nacional de Educación a Distancia has a computerised grammar checker for native Spanish speakers of EFL to help identify and correct grammatical mistakes without feedback from teachers.
Dichotomy between spell and grammar checkers
Theoretically, the functions of a conventional spell checker can be incorporated into a grammar checker entirely and this is likely the route that the language processing industry is working towards. In reality, internationally available word processors such as Microsoft Word have difficulties combining spell checkers and grammar checkers due to licensing issues; various proofing instrument mechanisms for a certain language would have been licensed under different providers at different times.
Corpora
Electronic corpora in the target language provide non-native language users with authentic examples of language use rather than fixed examples, which may not be reflected in daily interactions. The contextualised grammatical knowledge acquired by non-native language users through exposure to authentic texts in corpora allows them to grasp the manner of sentence formation in the target language, enabling effective writing.
Acquisition of lexico-grammatical patterns
Concordances set up through concordancing programs of corpora allow non-native language users to conveniently grasp lexico-grammatical patterns of the target language. Collocational frequencies of words (i.e. word-pairing frequencies) provide non-native language users with information about accurate grammar structures which can be used when writing in the target language. Collocational information also enables non-native language users to make clearer distinctions between words and expressions commonly regarded as synonyms. In addition, corpora information about semantic prosody, i.e. appropriate choices of words to be used in positive and negative co-texts, is available as a reference for non-native language users in writing. The corpora can also be used to check for the acceptability or syntactic "grammaticality" of their written work.
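As a rough illustration of how collocational (word-pairing) frequencies are extracted, the Python sketch below counts adjacent word pairs in a tiny, made-up corpus; real concordancers work over corpora of millions of words and use association measures beyond raw counts.

    from collections import Counter

    # Hypothetical miniature corpus; real tools query far larger text collections.
    corpus = [
        "she made a decision to stay",
        "he made a mistake in the report",
        "they made a decision quickly",
    ]

    def bigram_counts(sentences):
        """Count how often each adjacent word pair (bigram) occurs."""
        counts = Counter()
        for sentence in sentences:
            words = sentence.lower().split()
            counts.update(zip(words, words[1:]))
        return counts

    for pair, freq in bigram_counts(corpus).most_common(5):
        print(pair, freq)

Even this toy count shows "made a" and "a decision" recurring, the kind of pattern a learner can use to prefer "make a decision" over a paraphrase that does not collocate.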
Evaluation
A survey conducted on English as a Second Language (ESL) students revealed corpus activities to be generally well received and thought to be especially useful for learning word usage patterns and improving writing skills in the foreign language. It was also found that students' writings became more natural after using two online corpora in a 90-minute training session. In recent years, there were also suggestions to incorporate the applications of corpora into EFL writing courses in China to improve the writing skills of learners.
Dictionaries
Dictionaries of the target learning languages are commonly recommended to non-native language learners. They serve as reference tools by offering definitions, phonetic spelling, word classes and sample sentences. It was found that the use of a dictionary can help learners of a foreign language write better if they know how to use them. Foreign language learners can make use of grammar-related information from the dictionary to select appropriate words, check the correct spelling of a word and look up synonyms to add more variety to their writing. Nonetheless, learners have to be careful when using dictionaries as the lexical-semantic information contained in dictionaries might not be sufficient with regards to language production in a particular context and learners may be misled into choosing incorrect words.
Presently, many notable dictionaries are available online and basic usage is usually free. These online dictionaries allow learners of a foreign language to find references for a word much faster and more conveniently than with a manual version, thus minimising the disruption to the flow of writing. Online dictionaries available can be found under the list of online dictionaries.
Different types of dictionaries
Dictionaries come in different levels of proficiency, such as advanced, intermediate and beginner, which learners can choose according to the level best suited to them. There are many different types of dictionaries available, such as thesauruses or bilingual dictionaries, which cater to the specific needs of a learner of a foreign language. In recent years, specialised dictionaries for foreign language learners have also appeared; these employ natural language processing tools to assist in the compilation of dictionary entries by generating feedback on the vocabulary that learners use and automatically providing inflectional and/or derivational forms for referencing items in the explanations.
Thesaurus
The word thesaurus, meaning 'treasury' or 'storehouse' in Greek and Latin, is used to refer to several varieties of language resources; it is most commonly known as a book that groups words in synonym clusters and related meanings. Its original sense of 'dictionary or encyclopedia' has been overshadowed by the emergence of the Roget-style thesaurus, and it is considered a writing aid as it helps writers with the selection of words. The differences between a Roget-style thesaurus and a dictionary lie in the indexing and the information given; the words in a thesaurus are grouped by meaning, usually without definitions, while those in a dictionary are ordered alphabetically with definitions. When users are unable to find a word in a dictionary, it is usually due to the constraint of searching alphabetically by common and well-known headwords, and the use of a thesaurus eliminates this issue by allowing users to search for a word through another word based on concept.
Foreign language learners can make use of thesaurus to find near synonyms of a word to expand their vocabulary skills and add variety to their writing. Many word processors are equipped with a basic function of thesaurus, allowing learners to change a word to another similar word with ease. However, learners must be mindful that even if the words are near synonyms, they might not be suitable replacements depending on the context.
Spelling dictionaries
Spelling dictionaries are referencing materials that specifically aid users in finding the correct spelling of a word. Unlike common dictionaries, spelling dictionaries do not typically provide definitions and other grammar-related information about the words. While typical dictionaries can be used to check or search for correct spellings, new and improved spelling dictionaries can assist users in finding the correct spelling of words even when the first letter is unknown or known only imperfectly. This circumvents the alphabetic ordering limitations of a classic dictionary. These spelling dictionaries are especially useful to foreign language learners, as the inclusion of concise definitions and suggestions for commonly confused words helps learners to choose the correct spellings of words that sound alike or are pronounced wrongly by them.
Personal spelling dictionary
A personal spelling dictionary, being a collection of a single learner’s regularly misspelled words, is tailored to the individual and can be expanded with new entries that the learner does not know how to spell or contracted when the learner had mastered the words. Learners also use the personal spelling dictionary more than electronic spellcheckers, and additions can be easily made to better enhance it as a learning tool as it can include things like rules for writing and proper nouns, which are not included in electronic spellcheckers. Studies also suggest that personal spelling dictionaries are better tools for learners to improve their spelling as compared to trying to memorize words that are unrelated from lists or books.
Bilingual dictionaries
Current research has shown that language learners utilise dictionaries predominantly to check for meanings and that bilingual dictionaries are preferred over monolingual dictionaries for these uses. Bilingual dictionaries have proved to be helpful for learners of a new language, although in general they hold less extensive coverage of information than monolingual dictionaries. Nonetheless, good bilingual dictionaries capitalize on the fact that they are useful for learners by integrating helpful information about commonly known errors, false friends and contrastive predicaments from the two languages.
Studies have shown that learners of English have benefited from the use of bilingual dictionaries on their production and comprehension of unknown words. When using bilingual dictionaries, learners also tend to read entries in both native and target languages and this helps them to map the meanings of the target word in the foreign language onto its counterpart in their native language. It was also found that the use of bilingual dictionaries improves the results of translation tasks by learners of ESL, thus showing that language learning can be enhanced with the use of bilingual dictionaries.
The use of bilingual dictionaries in foreign language writing tests remains a debate. Some studies support the view that the use of a dictionary in a foreign language examination increases the mean score of the test, and this was one of the factors that influenced the decision to ban the use of dictionaries in several foreign language tests in the UK. More recent studies, however, suggest that further research into the use of bilingual dictionaries during writing tests has shown no significant differences in test scores that can be attributed to the use of a dictionary. Nevertheless, from the perspective of foreign language learners, being able to use a bilingual dictionary during a test is reassuring and increases their confidence.
Translations aids
There are many free translation aids online, also known as machine translation (MT) engines, such as Google Translate and Babel Fish (now defunct), that allow foreign language learners to translate between their native language and the target language quickly and conveniently. Of the three major categories of computerised translation tools (computer-assisted translation (CAT), terminology data banks, and machine translation), machine translation is the most ambitious, as it is designed to handle the whole process of translation entirely without the intervention of human assistance.
Studies have shown that translation into the target language can be used to improve the linguistic proficiency of foreign language learners. Machine translation aids help beginner learners of a foreign language to write more and produce better quality work in the target language; writing directly in the target language without any aid requires more effort on the learners' part, resulting in the difference in quantity and quality.
However, teachers advise learners against the use of machine translation aids because their output is highly misleading and unreliable, producing the wrong answers most of the time. Over-reliance on the aids also hinders the development of learners' writing skills and is viewed as an act of plagiarism, since the language used is technically not produced by the student.
Orthography aids
The orthography of a language is the usage of a specific script to write a language according to a conventionalised usage. One’s ability to read in a language is further enhanced by a concurrent learning of writing. This is because writing is a means of helping the language learner recognise and remember the features of the orthography, which is particularly helpful when the orthography has irregular phonetic-to-spelling mapping. This, in turn, helps the language learner to focus on the components which make up the word.
Online
Online orthography aids provide language learners with a step-by-step process on learning how to write characters. These are especially useful for learners of languages with logographic writing systems, such as Chinese or Japanese, in which the ordering of strokes for characters are important. Alternatively, tools like Skritter provide an interactive way of learning via a system similar to writing tablets albeit on computers, at the same time providing feedback on stroke ordering and progress.
Handwriting recognition is supported on certain programs, which help language learners in learning the orthography of the target language. Practice of orthography is also available in many applications, with tracing systems in place to help learners with stroke orders.
Offline
Apart from online orthography programs, offline orthography aids for language learners of logographic languages are also available. Character cards, which contain lists of frequently used characters of the target language, serve as a portable form of visual writing aid for language learners of logographic languages who may face difficulties in recalling the writing of certain characters.
Evaluation
Studies have shown that tracing logographic characters improves the word recognition abilities of foreign language learners, as well as their ability to map the meanings onto the characters. This, however, does not improve their ability to link pronunciation with characters, which suggests that these learners need more than orthography aids to help them in mastering the language in both writing and speech.
See also
Computer-assisted language learning
Foreign-language reading aid
Language education
Second language
References
Language education materials
Language software | Foreign-language writing aid | Technology | 3,949 |
40,946,235 | https://en.wikipedia.org/wiki/Mycena%20alphitophora | Mycena alphitophora is a species of agaric fungus in the family Mycenaceae. Its small, white, delicate fruit bodies are characterized by the powdery coatings on the surfaces of both the cap and stipe. The stipe base is not swollen or disk-like. The stipe surface is less hairy than that of Mycena adscendens.
Taxonomy
The species was first described as Agaricus alphitophorus by Miles Joseph Berkeley in 1877, based on specimens collected in 1873 from the Devonshire Marsh, a peatland in Bermuda. Pier Andrea Saccardo transferred it to the genus Mycena in 1887. William Alphonso Murrill placed the species in Prunulus in 1916. Jakob Emanuel Lange's Mycena osmundicola, published in 1914, is a synonym. P. Manimohan and K.M. Leelavathy defined the varieties distincta and globispora from southern India in 1989. It is classified in the section Saccharifera of Mycena.
Similar species
Mycena adscendens has a swollen or disk-like stipe base; also, the stipe surface is more densely hairy with caulocystida. Mycena stylobates has a pruinose stipe that arises from a basal disc, but the cap is up to 10 mm and lacks white granules. White Hemimycena species lack granules and all have inamyloid spores.
References
alphitophora
Fungi described in 1875
Fungi of Europe
Fungi of North America
Taxa named by Miles Joseph Berkeley
Fungus species | Mycena alphitophora | Biology | 329 |
2,623,640 | https://en.wikipedia.org/wiki/Religious%20fanaticism | Religious fanaticism (or the prefix ultra- being used with a religious term (such as ultra-Orthodox Judaism), or (especially when violence is involved) religious extremism) is a pejorative designation used to indicate uncritical zeal or obsessive enthusiasm that is related to one's own, or one's group's, devotion to a religion – a form of human fanaticism that could otherwise be expressed in one's other involvements and participation, including employment, role, and partisan affinities. In psychiatry, the term hyperreligiosity is used. Historically, the term was applied in Christian antiquity to denigrate non-Christian religions, and subsequently acquired its current usage with the Age of Enlightenment.
Features
Lloyd Steffen cites several features associated with religious fanaticism or extremism:
Spiritual needs: Human beings have a spiritual longing for understanding and meaning, and given the mystery of existence, that spiritual quest can only be fulfilled through some kind of relationship with ultimacy, whether or not that takes the form as a "transcendent other". Religion has power to meet this need for meaning and transcendent relationship.
Attractiveness: It presents itself in such a way that those who find their way into it come to express themselves in ways consistent with the particular vision of ultimacy at the heart of this religious form.
A 'live' option: It is present to the moral consciousness as a live option that addresses spiritual need and satisfies human longing for meaning, power, and belonging.
Examples of religious fanaticism
Christianity
Ever since Christianity was established, some of those in authority have sought to expand and control the church, often through the fanatical use of force. Grant Shafer says, "Jesus of Nazareth is best known as a preacher of nonviolence".
J. Harold Ellens states that the start of Christian fanatic rule came with the Roman Emperor Constantine I, saying, "When Christianity came to power in the empire of Constantine, it proceeded to almost viciously repress all non-Christians and all Christians who did not line up with official Orthodox ideology, policy, and practice". An example of Christians who didn't line up with Orthodox ideology is the Donatists, who "refused to accept repentant clergy who had formerly given way to apostasy when persecuted".
Fanatical Christian activity continued into the Middle Ages with the Crusades. These religious wars were attempts by the Catholics, sanctioned by the Pope, to conquer the Holy Land from the Muslims. However many Catholics see the crusades as a just war. Charles Selengut, in his book Sacred Fury: Understanding Religious Violence, said:
The Crusades were very much holy wars waged to maintain Christianity's theological and social control. On their way to conquering the Holy Land from the Muslims by force of arms, the crusaders destroyed dozens of Jewish communities and killed thousands because the Jews would not accept the Christian faith. Jews had to be killed in the religious campaign because their very existence challenged the sole truth espoused by the Christian Church.
Shafer adds that, "When the crusaders captured Jerusalem in 1099, they killed Muslims, Jews, and native Christians indiscriminately". Contrary to what Shafer alleges, however, no eyewitness source refers to Crusaders killing native Christians in Jerusalem, and early Eastern Christian sources (Matthew of Edessa, Anna Comnena, Michael the Syrian, etc.) make no such allegation about the Crusaders in Jerusalem. According to the Syriac Chronicle, all the Christians had already been expelled from Jerusalem before the Crusaders arrived. Presumably this would have been done by the Fatimid governor to prevent their possible collusion with the Crusaders.
Another prominent form of fanaticism according to some came a few centuries later with the Spanish Inquisition. The Inquisition was the monarchy's way of making sure their people stayed within Catholic Christianity. Selengut said, "The inquisitions were attempts at self-protection and targeted primarily "internal enemies" of the church". The driving force of the Inquisition was the Inquisitors, who were responsible for spreading the truth of Christianity. Selengut continues, saying:
The inquisitors generally saw themselves as educators helping people maintain correct beliefs by pointing out errors in knowledge and judgment... Punishment and death came only to those who refused to admit their errors ... during the Spanish Inquisitions of the fifteenth century, the clear distinction between confession and innocence and remaining in error became muddled.... The investigators had to invent all sorts of techniques, including torture, to ascertain whether ... new converts' beliefs were genuine.
During the Reformation, Christian fanaticism increased between Catholics and the recently formed Protestants. Many Christians were killed for having rival viewpoints. The Reformation set off a chain of sectarian wars between the Catholics and the sectarian Protestants, culminating in the wars of religion.
Islam
Islamic extremism dates back to the early history of Islam with the emergence of the Kharijites in the 7th century CE. The original schism between Kharijites, Sunnīs, and Shīʿas among Muslims was disputed over the political and religious succession to the guidance of the Muslim community (Ummah) after the death of the Islamic prophet Muhammad. From their essentially political position, the Kharijites developed extreme doctrines that set them apart from both mainstream Sunnī and Shīʿa Muslims. Shīʿas believe ʿAlī ibn Abī Ṭālib is the true successor to Muhammad, while Sunnīs consider Abu Bakr to hold that position. The Kharijites broke away from both the Shīʿas and the Sunnīs during the First Fitna (the first Islamic Civil War); they were particularly noted for adopting a radical approach to takfīr (excommunication), whereby they declared both Sunnī and Shīʿa Muslims to be either infidels (kuffār) or false Muslims (munāfiḳūn), and therefore deemed them worthy of death for their perceived apostasy (ridda).
Sayyid Qutb, an Egyptian Islamist ideologue and prominent figurehead of the Muslim Brotherhood in Egypt, was influential in promoting the Pan-Islamist ideology in the 1960s. When he was executed by the Egyptian government under the regime of Gamal Abdel Nasser, Ayman al-Zawahiri formed the organization Egyptian Islamic Jihad to replace the government with an Islamic state that would reflect Qutb's ideas for the Islamic revival that he yearned for. The Qutbist ideology has been influential on jihadist movements and Islamic terrorists that seek to overthrow secular governments, most notably Osama bin Laden and Ayman al-Zawahiri of al-Qaeda, as well as the Salafi-jihadi terrorist group ISIL/ISIS/IS/Daesh. Moreover, Qutb's books have frequently been cited by Osama bin Laden and Anwar al-Awlaki.
Since Osama bin Laden's fatwa in 1998, jihad has increasingly become an internationally recognized term. Bin Laden's concept, though, is very different from the actual meaning of the term. In the religious context, jihad most nearly means "working urgently for a certain godly objective, generally an imperialist one". The word jihad in Arabic means 'struggle'. The struggle can be a struggle of implementing the Islamic values in daily activities, a struggle with others to counter arguments against Islam, or self-defense when physically attacked because of belief in Islam. According to Steffen, there are portions of the Quran where military jihad is used. As Steffen says, though, "Jihad in these uses is always defensive. Not only does 'jihad' not endorse acts of military aggression, but 'jihad' is invoked in Qur'anic passages to indicate how uses of force are always subject to restraint and qualification". This kind of jihad differs greatly from the kind most commonly discussed today.
Thomas Farr, in an essay titled Islam's Way to Freedom, states that "Even though most Muslims reject violence, the extremists' use of sacred texts lends their actions authenticity and recruiting power". (Freedom 24) He goes on to say, "The radicals insist that their central claim – God's desire for Islam's triumph – requires no interpretation. According to them, true Muslims will pursue it by any means necessary, including dissimulation, civil coercion, and the killing of innocents". (Freedom 24)
According to certain observers this disregard for others and rampant use of violence is markedly different from the peaceful message that jihad is meant to employ. Although fanatic jihadists have committed many terroristic acts throughout the world, perhaps the best known is the September 11 attacks. According to Ellens, the al-Qaeda members who took part in the terrorist attacks did so out of their belief that, by doing it, they would "enact a devastating blow against the evil of secularized and non-Muslim America. They were cleansing this world, God's temple".
Hinduism
Violence based on communalist ideologies has been quite prevalent in the Indian subcontinent, especially since the British Raj, culminating in the partition of British India along religious lines in response to demands for a separate Muslim homeland.
Judaism
Bibliography
Teaching in a World of Violent Extremism. N.p., Wipf & Stock Publishers, 2021.
See also
Antitheism
Cult suicide
Extremism
Religious fundamentalism
Hyperreligiosity
Religious ecstasy
Religious order
Religious intolerance
Just war theory
Mass suicide
Nonviolent extremism
Religious terrorism
Religious violence
Religious war
Sectarian violence
Violent extremism
Hindu terrorism
Hindutva
Hindu nationalism
Violence against Muslims in independent India
Violence against Christians in India
Citations
Further reading
Moran, Seán Farrell, "Patrick Pearse and Patriotic Soteriology," in Yonah Alexander and Alan O'Day, The Irish Terrorism Experience, Aldershot: Dartmouth, 17–30.
Religious practices
Pejorative terms
Age of Enlightenment | Religious fanaticism | Biology | 2,064 |
20,939,260 | https://en.wikipedia.org/wiki/Hemagglutinin | Hemagglutinins (alternatively spelt haemagglutinin, from the Greek , 'blood' + Latin , 'glue') are homotrimeric glycoproteins present on the envelope surface of viruses in the Paramyxoviridae and Orthomyxoviridae families. Hemagglutinins are responsible for binding to receptors, sialic acid residues, on host cell membranes to initiate virus docking and infection.
Specifically, they recognize cell-surface glycoconjugates containing sialic acid on the surface of host red blood cells with a low affinity and use them to enter the endosome of host cells. Hemagglutinins tend to recognize α-2,6-linked sialic acids of the host cells in humans and α-2,3-linked sialic acids in avian species, although there is evidence that hemagglutinin specificity can vary. This correlates with the fact that Influenza A typically establishes infections in the upper respiratory tract in humans, where many of these α-2,6-linked sialic acids are present. There are various subtypes of hemagglutinins, of which H1, H2, and H3 are known to infect humans. It is the variation in hemagglutinin (and neuraminidase) subtypes that requires health organizations (e.g. WHO) to constantly update and surveil the known circulating flu viruses in human and animal populations (e.g. H5N1).
In the endosome, hemagglutinins undergo conformational changes due to a drop in pH to 5–6.5, enabling membrane fusion through a fusion peptide.
Virologist George K. Hirst discovered agglutination and hemagglutinins in 1941. Alfred Gottschalk proved in 1957 that hemagglutinins bind a virus to a host cell by attaching to sialic acids on carbohydrate side chains of cell-membrane glycoproteins and glycolipids.
The name "hemagglutinin" comes from the protein's ability to cause red blood cells (erythrocytes) to clump together ("agglutinate") in vitro.
Types
Influenza hemagglutinin: a homotrimeric glycoprotein that is found on the surface of influenza viruses which is responsible for their infectivity. Influenza strains are named for the specific hemagglutinin variant they produce, along with the specific variant of another surface protein, neuraminidase.
These hemagglutinins are subject to rapid evolution via antigenic shift and drift in the influenza avian reservoir. This results in new subtype of hemagglutinins being created frequently, and is the cause of seasonal influenza outbreaks in humans.
Measles hemagglutinin: a hemagglutinin produced by the measles virus, which encodes six structural proteins; the hemagglutinin and fusion proteins are surface glycoproteins involved in attachment and entry.
Parainfluenza hemagglutinin-neuraminidase: a type of hemagglutinin-neuraminidase produced by parainfluenza, which is closely associated with both human and veterinary disease.
Mumps hemagglutinin-neuraminidase: a kind of hemagglutinin that the mumps virus (MuV) produces.
Hemagglutinin: the PH-E form of phytohaemagglutinin.
Structure
Hemagglutinins are small proteins that extend from the surface of the virus membrane as spikes that are 135 Angstroms (Å) in length and 30-50 Å in diameter. Each spike is composed of three identical monomer subunits, making the protein a homotrimer. Each monomer is formed of two disulphide-linked glycopeptides: the membrane-distal HA1 and the smaller, membrane-proximal HA2. X-ray crystallography, NMR spectroscopy, and cryo-electron microscopy were used to solve the protein's structure, the majority of which is α-helical. In addition to the homotrimeric core structure, hemagglutinins have four subdomains: the membrane-distal receptor-binding R subdomain, the vestigial domain E, which functions as a receptor-destroying esterase, the fusion domain F, and the membrane anchor subdomain M. The membrane anchor subdomain forms elastic protein chains linking the hemagglutinin to the ectodomain.
Step-By-Step Mechanism (Influenza Hemagglutinin)
On the surface of influenza A and B virions, hemagglutinin is initially inactive. Only when cleaved by host proteases does each monomer polypeptide of the homotrimer become a pair of subunits, HA1 and HA2, attached by disulfide bridges. The HA1 subunit is responsible for docking the virion onto the host cell by binding to sialic acid residues present on the surface of host respiratory cells. This binding triggers endocytosis. The pH in the endosomal compartment then decreases from proton influx, and this causes a conformational change in HA that forces the HA2 subunit to “flip outward.” The HA2 subunit is responsible for membrane fusion. It binds to the endosomal membrane, pulling the viral envelope and the endosomal membrane tightly together, eventually forming a pore through which the viral genome can enter the host cell cytoplasm. From here, the virus can use host machinery to proliferate.
Uses in serology
Hemagglutination Inhibition Assay: A serologic assay which can be used either to screen for antibodies using RBCs with known surface antigens, or to identify RBC surface antigens such as viruses or bacteria using a panel of known antibodies. This method, first performed by George K. Hirst in 1942, consists of mixing virus samples with serum dilutions so that antibodies bind to the virus before RBCs are added to the mix. Consequently, viruses bound by antibodies are unable to cross-link RBCs, so hemagglutination is inhibited and the test is read as positive for the antibody; conversely, if hemagglutination occurs, the test is read as negative (a sketch of how the titer is read from such a dilution series follows this list).
Hemagglutination blood typing detection: This method consists of measuring the blood’s reflectance spectrum alone (non-agglutination), and that of blood mixed with antibody reagents (agglutination) using a waveguide-mode sensor. As a result, some differences in reflectance between the samples are observed. Once antibodies are added, blood types and Rh(D) typing can be determined using the waveguide-mode sensor. This technique is able to detect weak agglutinations that are almost impossible to detect with the human eye.
ABO blood group determination: Using anti-A and anti-B antibodies that bind specifically to either the A or to the B blood group surface antigens on RBCs, it is possible to test a small sample of blood and determine the ABO blood type of an individual. It does not identify the Rh(D) antigen (Rh blood type).
The bedside card method of blood grouping relies on visual agglutination to determine an individual's blood group. The card contains dried blood group antibody reagents fixed onto its surface. A drop of the individual's blood is placed on each blood group area on the card. The presence or absence of flocculation (visual agglutination) enables a quick and convenient method of determining the ABO and Rhesus status of the individual. As this technique depends on human eyes, it is less reliable than the blood typing based on waveguide-mode sensors.
The agglutination of red blood cells is used in the Coombs test in diagnostic immunohematology to test for autoimmune hemolytic anemia.
In the case of red blood cells, transformed cells are known as kodecytes. Kode technology exposes exogenous antigens on the surface of cells, allowing antibody-antigen responses to be detected by the traditional hemagglutination test.
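Returning to the hemagglutination inhibition assay described at the start of this list, the titer is conventionally reported as the reciprocal of the highest serum dilution that still inhibits agglutination. A minimal Python sketch of that read-out step, with entirely hypothetical well readings:

    # Hypothetical two-fold dilution series (1:10 recorded as 10) and visual readings:
    # True  = agglutination inhibited (antibody still effective)
    # False = agglutination occurred
    dilutions = [10, 20, 40, 80, 160, 320, 640]
    inhibited = [True, True, True, True, False, False, False]

    def hi_titer(dilutions, inhibited):
        """Reciprocal of the highest dilution that still inhibits agglutination."""
        titer = None
        for d, ok in zip(dilutions, inhibited):
            if ok:
                titer = d
        return titer

    print("HI titer:", hi_titer(dilutions, inhibited))  # prints 80 for these readings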
See also
Cold agglutinin disease
Hemagglutination assay
Neuraminidase
Influenza hemagglutinin (HA)
Agglutination
References
External links
Hematology
Immunologic tests
Viral structural proteins | Hemagglutinin | Biology | 1,809 |
60,591,756 | https://en.wikipedia.org/wiki/Microphysiometry | Microphysiometry is the in vitro measurement of the functions and activities of life or of living matter (such as organs, tissues, or cells) and of the physical and chemical phenomena involved on a very small (micrometer) scale. The term microphysiometry emerged in the scientific literature at the end of the 1980s.
The primary parameters assessed in microphysiometry comprise pH and the concentration of dissolved oxygen, glucose, and lactic acid, with an emphasis on the first two. Measuring these parameters experimentally in combination with a fluidic system for cell culture maintenance and a defined application of drugs or toxins provides the quantitative output parameters extracellular acidification rates (EAR), oxygen uptake rates (OUR), and rates of glucose consumption or lactate release to characterize the metabolic situation.
Due to the label-free nature of sensor-based measurements, dynamic monitoring of cells or tissues for several days or even longer is feasible. On an extended timescale, a dynamic analysis of a cell's metabolic response to an experimental treatment can distinguish acute effects (e.g., one hour after a treatment), early effects (e.g., at 24 hours), and delayed, chronic responses (e.g., at 96 hours). As stated by Alajoki et al., "The concept is that it is possible to detect receptor activation and other physiological changes in living cells by monitoring the activity of energy metabolism".
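As a rough illustration of how one of these output parameters might be derived from raw sensor readings, the sketch below (Python) estimates an extracellular acidification rate as the slope of a least-squares line fitted to pH against time. The function name, the measurement interval, and the synthetic data are assumptions for the example; real instruments also account for buffer capacity and chamber volume, which is omitted here.

import numpy as np

def acidification_rate(times_s, ph_values):
    # Slope of a straight-line fit to pH versus time, in pH units per second.
    # A negative slope means the medium is acidifying (the cells release acid).
    slope, _intercept = np.polyfit(np.asarray(times_s, dtype=float),
                                   np.asarray(ph_values, dtype=float), 1)
    return slope

# Synthetic example: pH drifts from 7.40 to about 7.37 over a 120 s measurement interval
t = np.linspace(0.0, 120.0, 13)
ph = 7.40 - 2.5e-4 * t
print(f"EAR ~ {acidification_rate(t, ph):.2e} pH units per second")  # ~ -2.5e-04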
See also
Organ-on-a-chip
References
Biotechnology
Research methods
Laboratory techniques | Microphysiometry | Chemistry,Biology | 311 |
723,125 | https://en.wikipedia.org/wiki/Incidence%20structure | In mathematics, an incidence structure is an abstract system consisting of two types of objects and a single relationship between these types of objects. Consider the points and lines of the Euclidean plane as the two types of objects and ignore all the properties of this geometry except for the relation of which points are incident on which lines for all points and lines. What is left is the incidence structure of the Euclidean plane.
Incidence structures are most often considered in the geometrical context where they are abstracted from, and hence generalize, planes (such as affine, projective, and Möbius planes), but the concept is very broad and not limited to geometric settings. Even in a geometric setting, incidence structures are not limited to just points and lines; higher-dimensional objects (planes, solids, n-spaces, conics, etc.) can be used. The study of finite structures is sometimes called finite geometry.
Formal definition and terminology
An incidence structure is a triple C = (P, L, I) where P is a set whose elements are called points, L is a distinct set whose elements are called lines, and I ⊆ P × L is the incidence relation. The elements of I are called flags. If (p, l) is in I then one may say that point p "lies on" line l or that the line l "passes through" point p. A more "symmetric" terminology, to reflect the symmetric nature of this relation, is that "p is incident with l" or that "l is incident with p", and uses the notation p I l synonymously with (p, l) ∈ I.
In some common situations L may be a set of subsets of P, in which case incidence will be containment (p I l if and only if p is a member of l). Incidence structures of this type are called set-theoretic. This is not always the case; for example, if P is a set of vectors and L a set of square matrices, we may define I = {(v, M) : vector v is an eigenvector of matrix M}.
This example also shows that while the geometric language of points and lines is used, the object types need not be these geometric objects.
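A minimal sketch of such a non-set-theoretic incidence, taking a vector to be incident with a matrix exactly when it is an eigenvector of that matrix; the helper name and the numerical tolerance are choices made for this illustration.

import numpy as np

def incident(v, M, tol=1e-9):
    # A non-zero vector v is "incident" with a square matrix M when v is an
    # eigenvector of M, i.e. M @ v is a scalar multiple of v.  This is tested
    # by checking that the two columns v and M @ v span a one-dimensional space.
    v = np.asarray(v, dtype=float)
    w = np.asarray(M, dtype=float) @ v
    return np.linalg.matrix_rank(np.column_stack([v, w]), tol=tol) == 1

M = np.array([[2.0, 0.0],
              [0.0, 3.0]])
print(incident([1, 0], M), incident([1, 1], M))  # True False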
Examples
An incidence structure is uniform if each line is incident with the same number of points. Each of these examples, except the second, is uniform with three points per line.
Graphs
Any graph (which need not be simple; loops and multiple edges are allowed) is a uniform incidence structure with two points per line. For these examples, the vertices of the graph form the point set, the edges of the graph form the line set, and incidence means that a vertex is an endpoint of an edge.
Linear spaces
Incidence structures are seldom studied in their full generality; it is typical to study incidence structures that satisfy some additional axioms. For instance, a partial linear space is an incidence structure that satisfies:
Any two distinct points are incident with at most one common line, and
Every line is incident with at least two points.
If the first axiom above is replaced by the stronger:
Any two distinct points are incident with exactly one common line,
the incidence structure is called a linear space.
Nets
A more specialized example is a k-net. This is an incidence structure in which the lines fall into k parallel classes, so that two lines in the same parallel class have no common points, but two lines in different classes have exactly one common point, and each point belongs to exactly one line from each parallel class. An example of a k-net is the set of points of an affine plane together with k parallel classes of affine lines.
Dual structure
If we interchange the role of "points" and "lines" in C = (P, L, I), we obtain the dual structure C* = (L, P, I*), where I* is the converse relation of I. It follows immediately from the definition that C** = C.
This is an abstract version of projective duality.
A structure that is isomorphic to its dual is called self-dual. The Fano plane above is a self-dual incidence structure.
Other terminology
The concept of an incidence structure is very simple and has arisen in several disciplines, each introducing its own vocabulary and specifying the types of questions that are typically asked about these structures. Incidence structures use a geometric terminology, but in graph theoretic terms they are called hypergraphs and in design theoretic terms they are called block designs. They are also known as a set system or family of sets in a general context.
Hypergraphs
Each hypergraph or set system can be regarded as an incidence structure in which the universal set plays the role of "points", the corresponding family of subsets plays the role of "lines", and the incidence relation is set membership "∈". Conversely, every incidence structure can be viewed as a hypergraph by identifying the lines with the sets of points that are incident with them.
Block designs
A (general) block design is a set X together with a family F of subsets of X (repeated subsets are allowed). Normally a block design is required to satisfy numerical regularity conditions. As an incidence structure, X is the set of points and F is the set of lines, usually called blocks in this context (repeated blocks must have distinct names, so F is actually a set and not a multiset). If all the subsets in F have the same size, the block design is called uniform. If each element of X appears in the same number of subsets, the block design is said to be regular. The dual of a uniform design is a regular design and vice versa.
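A small sketch of these two regularity notions: given a point set and a list of blocks, the helpers below (names invented for the example) check whether the design is uniform and whether it is regular.

from collections import Counter

def is_uniform(blocks):
    # Uniform: every block contains the same number of points.
    return len({len(b) for b in blocks}) <= 1

def is_regular(points, blocks):
    # Regular: every point appears in the same number of blocks.
    counts = Counter(p for b in blocks for p in b)
    return len({counts.get(p, 0) for p in points}) <= 1

points = {1, 2, 3, 4}
blocks = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]  # a 4-cycle, viewed as a block design
print(is_uniform(blocks), is_regular(points, blocks))  # True True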
Example: Fano plane
Consider the block design/hypergraph given by the point set P = {1, 2, 3, 4, 5, 6, 7} and the line (block) set L = {{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}}.
This incidence structure is called the Fano plane. As a block design it is both uniform and regular.
In the labeling given, the lines are precisely the subsets of the points that consist of three points whose labels add up to zero using nim addition. Alternatively, each number, when written in binary, can be identified with a non-zero vector of length three over the binary field. Three distinct non-zero vectors that generate a two-dimensional subspace form a line; in this case, that is equivalent to their vector sum being the zero vector.
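The nim-addition description makes the line set easy to generate mechanically; a short Python sketch (the variable names are arbitrary):

from itertools import combinations

# Three distinct labels in {1, ..., 7} form a line of the Fano plane exactly when
# their bitwise XOR (nim addition) is zero.
points = range(1, 8)
lines = [set(t) for t in combinations(points, 3) if t[0] ^ t[1] ^ t[2] == 0]
print(lines)
# [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]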
Representations
Incidence structures may be represented in many ways. If the sets and are finite these representations can compactly encode all the relevant information concerning the structure.
Incidence matrix
The incidence matrix of a (finite) incidence structure is a (0, 1) matrix that has its rows indexed by the points {p_i} and columns indexed by the lines {l_j}, where the (i, j)-th entry is 1 if p_i I l_j and 0 otherwise. An incidence matrix is not uniquely determined since it depends upon the arbitrary ordering of the points and the lines.
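The definition translates directly into code for the common case in which each line is given as the set of points incident with it (the function name is an arbitrary choice):

import numpy as np

def incidence_matrix(points, lines):
    # Rows are indexed by the points, columns by the lines; the (i, j) entry is 1
    # exactly when the i-th point lies on the j-th line.
    return np.array([[1 if p in line else 0 for line in lines] for p in points])

# Tiny example: a triangle (three points, three two-point lines)
print(incidence_matrix([1, 2, 3], [{1, 2}, {2, 3}, {1, 3}]))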
The non-uniform incidence structure pictured above (example number 2) is given by:
An incidence matrix for this structure is:
which corresponds to the incidence table:
If an incidence structure C has an incidence matrix M, then the dual structure C* has the transpose matrix M^T as its incidence matrix (and is defined by that matrix).
An incidence structure is self-dual if there exists an ordering of the points and lines so that the incidence matrix constructed with that ordering is a symmetric matrix.
With the labels as given in the Fano plane example above, with the points ordered 1, 2, ..., 7 and the lines ordered as listed there, the Fano plane has the incidence matrix:
1 1 1 0 0 0 0
1 0 0 1 1 0 0
1 0 0 0 0 1 1
0 1 0 1 0 1 0
0 1 0 0 1 0 1
0 0 1 1 0 0 1
0 0 1 0 1 1 0
Since this is a symmetric matrix, the Fano plane is a self-dual incidence structure.
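The symmetry is easy to verify mechanically. The following self-contained check uses the point and line orderings listed above; it is an illustration of this particular ordering, not a claim that it is the only ordering that works.

import numpy as np

points = range(1, 8)
lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
A = np.array([[1 if p in line else 0 for line in lines] for p in points])
print(np.array_equal(A, A.T))  # True: this ordering gives a symmetric incidence matrix,
                               # so the Fano plane is self-dual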
Pictorial representations
An incidence figure (that is, a depiction of an incidence structure), is constructed by representing the points by dots in a plane and having some visual means of joining the dots to correspond to lines. The dots may be placed in any manner, there are no restrictions on distances between points or any relationships between points. In an incidence structure there is no concept of a point being between two other points; the order of points on a line is undefined. Compare this with ordered geometry, which does have a notion of betweenness. The same statements can be made about the depictions of the lines. In particular, lines need not be depicted by "straight line segments" (see examples 1, 3 and 4 above). As with the pictorial representation of graphs, the crossing of two "lines" at any place other than a dot has no meaning in terms of the incidence structure; it is only an accident of the representation. These incidence figures may at times resemble graphs, but they aren't graphs unless the incidence structure is a graph.
Realizability
Incidence structures can be modelled by points and curves in the Euclidean plane with the usual geometric meaning of incidence. Some incidence structures admit representation by points and (straight) lines. Structures that can be so represented are called realizable. If no ambient space is mentioned then the Euclidean plane is assumed. The Fano plane (example 1 above) is not realizable since it needs at least one curve. The Möbius–Kantor configuration (example 4 above) is not realizable in the Euclidean plane, but it is realizable in the complex plane. On the other hand, examples 2 and 5 above are realizable and the incidence figures given there demonstrate this. Steinitz (1894) has shown that n_3-configurations (incidence structures with n points and n lines, three points per line and three lines through each point) are either realizable or require the use of only one curved line in their representations. The Fano plane is the unique (7_3) configuration and the Möbius–Kantor configuration is the unique (8_3).
Incidence graph (Levi graph)
Each incidence structure C corresponds to a bipartite graph called the Levi graph or incidence graph of the structure. As any bipartite graph is two-colorable, the Levi graph can be given a black and white vertex coloring, where black vertices correspond to points and white vertices correspond to lines of C. The edges of this graph correspond to the flags (incident point/line pairs) of the incidence structure. The original Levi graph was the incidence graph of the generalized quadrangle of order two (example 3 above), but the term has been extended by H.S.M. Coxeter to refer to an incidence graph of any incidence structure.
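A minimal construction of the Levi graph as a plain adjacency map, with point and line vertices tagged so the two color classes stay distinct (the tuple tagging and the function name are choices made for this sketch). For the Fano plane it produces a graph with 14 vertices and 21 edges.

def levi_graph(points, lines):
    # Vertices: ('p', x) for each point and ('l', i) for each line (by index).
    # Edges: one per flag, i.e. one per incident point/line pair.
    adj = {('p', p): set() for p in points}
    adj.update({('l', i): set() for i in range(len(lines))})
    for i, line in enumerate(lines):
        for p in line:
            adj[('p', p)].add(('l', i))
            adj[('l', i)].add(('p', p))
    return adj

fano_lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
g = levi_graph(range(1, 8), fano_lines)
print(len(g), sum(len(nbrs) for nbrs in g.values()) // 2)  # 14 vertices, 21 edges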
Levi graph examples
The Levi graph of the Fano plane is the Heawood graph. Since the Heawood graph is connected and vertex-transitive, there exists an automorphism (such as the one defined by a reflection about the vertical axis in the figure of the Heawood graph) interchanging black and white vertices. This, in turn, implies that the Fano plane is self-dual.
The specific representation of the Levi graph of the Möbius–Kantor configuration (example 4 above) illustrates that a rotation of 180° about the center of the diagram (either clockwise or counterclockwise) interchanges the blue and red vertices and maps edges to edges. That is to say that there exists a color interchanging automorphism of this graph. Consequently, the incidence structure known as the Möbius–Kantor configuration is self-dual.
Generalization
It is possible to generalize the notion of an incidence structure to include more than two types of objects. A structure with k types of objects is called an incidence structure of rank k or a rank k geometry. Formally, these are defined as (k + 1)-tuples T = (P_1, P_2, ..., P_k, I) with P_i ∩ P_j = ∅ for i ≠ j and I ⊆ ⋃_{i<j} P_i × P_j.
The Levi graph for these structures is defined as a multipartite graph with vertices corresponding to each type being colored the same.
See also
Incidence (geometry)
Incidence geometry
Projective configuration
Abstract polytope
Notes
References
Bibliography
Further reading
CRC Press (2000). Handbook of Discrete and Combinatorial Mathematics, Chapter 12.2.
Harold L. Dorwart (1966) The Geometry of Incidence, Prentice Hall
Families of sets
Combinatorics
Finite geometry
Incidence geometry | Incidence structure | Mathematics | 2,286 |
14,777,439 | https://en.wikipedia.org/wiki/PRKAR1B | cAMP-dependent protein kinase type I-beta regulatory subunit is an enzyme that in humans is encoded by the PRKAR1B gene.
Clinical significance
Mutations in PRKAR1B cause a neurodegenerative disorder.
Interactions
PRKAR1B has been shown to interact with AKAP1 and PRKAR1A.
References
Further reading
External links
Genes on human chromosome 7 | PRKAR1B | Chemistry | 75 |
41,672,648 | https://en.wikipedia.org/wiki/Alifedrine | Alifedrine (; developmental code name D-13625) is a drug described as a sympathomimetic and cardiotonic or positive inotropic agent which was never marketed. It is a β-adrenergic receptor partial agonist and was studied in the treatment of heart failure. The drug is taken by mouth or intravenously. It is a β-hydroxylated substituted amphetamine derivative.
See also
Prenalterol
Xamoterol
References
Abandoned drugs
Beta-adrenergic agonists
Beta-Hydroxyamphetamines
Cardiac stimulants
Drugs acting on the cardiovascular system
Inotropic agents
Sympathomimetics | Alifedrine | Chemistry | 141 |
25,409 | https://en.wikipedia.org/wiki/Reptile | Reptiles, as commonly defined, are a group of tetrapods with an ectothermic ('cold-blooded') metabolism and amniotic development. Living traditional reptiles comprise four orders: Testudines (turtles), Crocodilia (crocodilians), Squamata (lizards and snakes), and Rhynchocephalia (the tuatara). As of May 2023, about 12,000 living species of reptiles are listed in the Reptile Database. The study of the traditional reptile orders, customarily in combination with the study of modern amphibians, is called herpetology.
Reptiles have been subject to several conflicting taxonomic definitions. In Linnaean taxonomy, reptiles are gathered together under the class Reptilia ( ), which corresponds to common usage. Modern cladistic taxonomy regards that group as paraphyletic, since genetic and paleontological evidence has determined that birds (class Aves), as members of Dinosauria, are more closely related to living crocodilians than to other reptiles, and are thus nested among reptiles from an evolutionary perspective. Many cladistic systems therefore redefine Reptilia as a clade (monophyletic group) including birds, though the precise definition of this clade varies between authors. Others prioritize the clade Sauropsida, which typically refers to all amniotes more closely related to modern reptiles than to mammals.
The earliest known proto-reptiles originated from the Carboniferous period, having evolved from advanced reptiliomorph tetrapods which became increasingly adapted to life on dry land. The earliest known eureptile ("true reptile") was Hylonomus, a small and superficially lizard-like animal which lived in Nova Scotia during the Bashkirian age of the Late Carboniferous. Genetic and fossil data argue that the two largest lineages of reptiles, Archosauromorpha (crocodilians, birds, and kin) and Lepidosauromorpha (lizards, and kin), diverged during the Permian period. In addition to the living reptiles, there are many diverse groups that are now extinct, in some cases due to mass extinction events. In particular, the Cretaceous–Paleogene extinction event wiped out the pterosaurs, plesiosaurs, and all non-avian dinosaurs alongside many species of crocodyliforms and squamates (e.g., mosasaurs). Modern non-bird reptiles inhabit all the continents except Antarctica.
Reptiles are tetrapod vertebrates, creatures that either have four limbs or, like snakes, are descended from four-limbed ancestors. Unlike amphibians, reptiles do not have an aquatic larval stage. Most reptiles are oviparous, although several species of squamates are viviparous, as were some extinct aquatic clades – the fetus develops within the mother, using a (non-mammalian) placenta rather than being contained in an eggshell. As amniotes, reptile eggs are surrounded by membranes for protection and transport, which adapt them to reproduction on dry land. Many of the viviparous species feed their fetuses through various forms of placenta analogous to those of mammals, with some providing initial care for their hatchlings. Extant reptiles range in size from a tiny gecko, Sphaerodactylus ariasae, which can grow up to about 17 mm (0.7 in), to the saltwater crocodile, Crocodylus porosus, which can exceed 6 m (20 ft) in length and weigh over 1,000 kg (2,200 lb).
Classification
Research history
In the 13th century, the category of reptile was recognized in Europe as consisting of a miscellany of egg-laying creatures, including "snakes, various fantastic monsters, lizards, assorted amphibians, and worms", as recorded by Vincent of Beauvais in his Mirror of Nature.
In the 18th century, the reptiles were, from the outset of classification, grouped with the amphibians. Linnaeus, working from species-poor Sweden, where the common adder and grass snake are often found hunting in water, included all reptiles and amphibians in the class Amphibia in his Systema Naturæ.
The terms reptile and amphibian were largely interchangeable, reptile (from Latin repere, 'to creep') being preferred by the French. J.N. Laurenti was the first to formally use the term Reptilia for an expanded selection of reptiles and amphibians basically similar to that of Linnaeus. Today, the two groups are still commonly treated under the single heading herpetology.
It was not until the beginning of the 19th century that it became clear that reptiles and amphibians are, in fact, quite different animals, and P.A. Latreille erected the class Batracia (1825) for the latter, dividing the tetrapods into the four familiar classes of reptiles, amphibians, birds, and mammals. The British anatomist T.H. Huxley made Latreille's definition popular and, together with Richard Owen, expanded Reptilia to include the various fossil "antediluvian monsters", including dinosaurs and the mammal-like (synapsid) Dicynodon he helped describe. This was not the only possible classification scheme: In the Hunterian lectures delivered at the Royal College of Surgeons in 1863, Huxley grouped the vertebrates into mammals, sauroids, and ichthyoids (the latter containing the fishes and amphibians). He subsequently proposed the names of Sauropsida and Ichthyopsida for the latter two groups. In 1866, Haeckel demonstrated that vertebrates could be divided based on their reproductive strategies, and that reptiles, birds, and mammals were united by the amniotic egg.
The terms Sauropsida ("lizard faces") and Theropsida ("beast faces") were used again in 1916 by E.S. Goodrich to distinguish between lizards, birds, and their relatives on the one hand (Sauropsida) and mammals and their extinct relatives (Theropsida) on the other. Goodrich supported this division by the nature of the hearts and blood vessels in each group, and other features, such as the structure of the forebrain. According to Goodrich, both lineages evolved from an earlier stem group, Protosauria ("first lizards") in which he included some animals today considered reptile-like amphibians, as well as early reptiles.
In 1956, D.M.S. Watson observed that the first two groups diverged very early in reptilian history, so he divided Goodrich's Protosauria between them. He also reinterpreted Sauropsida and Theropsida to exclude birds and mammals, respectively. Thus his Sauropsida included Procolophonia, Eosuchia, Millerosauria, Chelonia (turtles), Squamata (lizards and snakes), Rhynchocephalia, Crocodilia, "thecodonts" (paraphyletic basal Archosauria), non-avian dinosaurs, pterosaurs, ichthyosaurs, and sauropterygians.
In the late 19th century, a number of definitions of Reptilia were offered. The biological traits listed by Lydekker in 1896, for example, include a single occipital condyle, a jaw joint formed by the quadrate and articular bones, and certain characteristics of the vertebrae. The animals singled out by these formulations, the amniotes other than the mammals and the birds, are still those considered reptiles today.
The synapsid/sauropsid division supplemented another approach, one that split the reptiles into four subclasses based on the number and position of temporal fenestrae, openings in the sides of the skull behind the eyes. This classification was initiated by Henry Fairfield Osborn and elaborated and made popular by Romer's classic Vertebrate Paleontology. Those four subclasses were:
Anapsida – no fenestrae – cotylosaurs and chelonia (turtles and relatives)
Synapsida – one low fenestra – pelycosaurs and therapsids (the 'mammal-like reptiles')
Euryapsida – one high fenestra (above the postorbital and squamosal) – protorosaurs (small, early lizard-like reptiles) and the marine sauropterygians and ichthyosaurs, the latter called Parapsida in Osborn's work.
Diapsida – two fenestrae – most reptiles, including lizards, snakes, crocodilians, dinosaurs and pterosaurs.
The composition of Euryapsida was uncertain. Ichthyosaurs were, at times, considered to have arisen independently of the other euryapsids, and given the older name Parapsida. Parapsida was later discarded as a group for the most part (ichthyosaurs being classified as incertae sedis or with Euryapsida). However, four (or three if Euryapsida is merged into Diapsida) subclasses remained more or less universal for non-specialist work throughout the 20th century. It has largely been abandoned by recent researchers: In particular, the anapsid condition has been found to occur so variably among unrelated groups that it is not now considered a useful distinction.
Phylogenetics and modern definition
By the early 21st century, vertebrate paleontologists were beginning to adopt phylogenetic taxonomy, in which all groups are defined in such a way as to be monophyletic; that is, groups which include all descendants of a particular ancestor. The reptiles as historically defined are paraphyletic, since they exclude both birds and mammals. These respectively evolved from dinosaurs and from early therapsids, both of which were traditionally called "reptiles". Birds are more closely related to crocodilians than the latter are to the rest of extant reptiles. Colin Tudge wrote:
Mammals are a clade, and therefore the cladists are happy to acknowledge the traditional taxon Mammalia; and birds, too, are a clade, universally ascribed to the formal taxon Aves. Mammalia and Aves are, in fact, subclades within the grand clade of the Amniota. But the traditional class Reptilia is not a clade. It is just a section of the clade Amniota: The section that is left after the Mammalia and Aves have been hived off. It cannot be defined by synapomorphies, as is the proper way. Instead, it is defined by a combination of the features it has and the features it lacks: reptiles are the amniotes that lack fur or feathers. At best, the cladists suggest, we could say that the traditional Reptilia are 'non-avian, non-mammalian amniotes'.
Despite the early proposals for replacing the paraphyletic Reptilia with a monophyletic Sauropsida, which includes birds, that term was never adopted widely or, when it was, was not applied consistently.
When Sauropsida was used, it often had the same content or even the same definition as Reptilia. In 1988, Jacques Gauthier proposed a cladistic definition of Reptilia as a monophyletic node-based crown group containing turtles, lizards and snakes, crocodilians, and birds, their common ancestor and all its descendants. While Gauthier's definition was close to the modern consensus, nonetheless, it became considered inadequate because the actual relationship of turtles to other reptiles was not yet well understood at this time. Major revisions since have included the reassignment of synapsids as non-reptiles, and classification of turtles as diapsids. Gauthier 1994 and Laurin and Reisz 1995's definition of Sauropsida defined the scope of the group as distinct and broader than that of Reptilia, encompassing Mesosauridae as well as Reptilia sensu stricto.
A variety of other definitions were proposed by other scientists in the years following Gauthier's paper. The first such new definition, which attempted to adhere to the standards of the PhyloCode, was published by Modesto and Anderson in 2004. Modesto and Anderson reviewed the many previous definitions and proposed a modified definition, which they intended to retain most traditional content of the group while keeping it stable and monophyletic. They defined Reptilia as all amniotes closer to Lacerta agilis and Crocodylus niloticus than to Homo sapiens. This stem-based definition is equivalent to the more common definition of Sauropsida, which Modesto and Anderson synonymized with Reptilia, since the latter is better known and more frequently used. Unlike most previous definitions of Reptilia, however, Modesto and Anderson's definition includes birds, as they are within the clade that includes both lizards and crocodiles.
Taxonomy
General classification of extinct and living reptiles, focusing on major groups.
Reptilia/Sauropsida
Parareptilia
Eureptilia
Captorhinidae
Diapsida
Araeoscelidia
Neodiapsida
Drepanosauromorpha (placement uncertain)
Younginiformes (paraphyletic)
Ichthyosauromorpha (placement uncertain)
Thalattosauria (placement uncertain)
Sauria
Lepidosauromorpha
Lepidosauriformes
Rhynchocephalia (tuatara)
Squamata (lizards and snakes)
Choristodera (placement uncertain)
Sauropterygia (placement uncertain)
Pantestudines (turtles and kin, placement uncertain)
Archosauromorpha
Protorosauria (paraphyletic)
Rhynchosauria
Allokotosauria
Archosauriformes
Phytosauria
Archosauria
Pseudosuchia
Crocodilia (crocodilians)
Avemetatarsalia/Ornithodira
Pterosauria
Dinosauria
Ornithischia
Saurischia (including birds (Aves))
Phylogeny
The cladogram presented here illustrates the "family tree" of reptiles, and follows a simplified version of the relationships found by M.S. Lee, in 2013. All genetic studies have supported the hypothesis that turtles are diapsids; some have placed turtles within Archosauromorpha, though a few have recovered turtles as Lepidosauromorpha instead. The cladogram below used a combination of genetic (molecular) and fossil (morphological) data to obtain its results.
The position of turtles
The placement of turtles has historically been highly variable. Classically, turtles were considered to be related to the primitive anapsid reptiles. Molecular work has usually placed turtles within the diapsids. As of 2013, three turtle genomes have been sequenced. The results place turtles as a sister clade to the archosaurs, the group that includes crocodiles, non-avian dinosaurs, and birds. However, in their comparative analysis of the timing of organogenesis, Werneburg and Sánchez-Villagra (2009) found support for the hypothesis that turtles belong to a separate clade within Sauropsida, outside the saurian clade altogether.
Evolutionary history
Origin of the reptiles
The origin of the reptiles lies about 310–320 million years ago, in the steaming swamps of the late Carboniferous period, when the first reptiles evolved from advanced reptiliomorphs.
The oldest known animal that may have been an amniote is Casineria (though it may have been a temnospondyl). A series of footprints from the fossil strata of Nova Scotia shows typical reptilian toes and imprints of scales. These tracks are attributed to Hylonomus, the oldest unquestionable reptile known.
It was a small, lizard-like animal with numerous sharp teeth, indicating an insectivorous diet. Other examples include Westlothiana (for the moment considered a reptiliomorph rather than a true amniote) and Paleothyris, both of similar build and presumably similar habit.
However, microsaurs have been at times considered true reptiles, so an earlier origin is possible.
Rise of the reptiles
The earliest amniotes, including stem-reptiles (those amniotes closer to modern reptiles than to mammals), were largely overshadowed by larger stem-tetrapods, such as Cochleosaurus, and remained a small, inconspicuous part of the fauna until the Carboniferous Rainforest Collapse. This sudden collapse affected several large groups. Primitive tetrapods were particularly devastated, while stem-reptiles fared better, being ecologically adapted to the drier conditions that followed. Primitive tetrapods, like modern amphibians, need to return to water to lay eggs; in contrast, amniotes, like modern reptiles – whose eggs possess a shell that allows them to be laid on land – were better adapted to the new conditions. Amniotes acquired new niches at a faster rate than before the collapse and at a much faster rate than primitive tetrapods. They acquired new feeding strategies including herbivory and carnivory, previously only having been insectivores and piscivores. From this point forward, reptiles dominated communities and had a greater diversity than primitive tetrapods, setting the stage for the Mesozoic (known as the Age of Reptiles). One of the best known early stem-reptiles is Mesosaurus, a genus from the Early Permian that had returned to water, feeding on fish.
A 2021 examination of reptile diversity in the Carboniferous and the Permian suggests a much higher degree of diversity than previously thought, comparable or even exceeding that of synapsids. Thus, the "First Age of Reptiles" was proposed.
Anapsids, synapsids, diapsids, and sauropsids
It was traditionally assumed that the first reptiles retained an anapsid skull inherited from their ancestors. This type of skull has a skull roof with only holes for the nostrils, eyes and a pineal eye. The discoveries of synapsid-like openings (see below) in the skull roof of the skulls of several members of Parareptilia (the clade containing most of the amniotes traditionally referred to as "anapsids"), including lanthanosuchoids, millerettids, bolosaurids, some nycteroleterids, some procolophonoids and at least some mesosaurs made it more ambiguous and it is currently uncertain whether the ancestral amniote had an anapsid-like or synapsid-like skull. These animals are traditionally referred to as "anapsids", and form a paraphyletic basic stock from which other groups evolved. Very shortly after the first amniotes appeared, a lineage called Synapsida split off; this group was characterized by a temporal opening in the skull behind each eye giving room for the jaw muscle to move. These are the "mammal-like amniotes", or stem-mammals, that later gave rise to the true mammals. Soon after, another group evolved a similar trait, this time with a double opening behind each eye, earning them the name Diapsida ("two arches"). The function of the holes in these groups was to lighten the skull and give room for the jaw muscles to move, allowing for a more powerful bite.
Turtles have been traditionally believed to be surviving parareptiles, on the basis of their anapsid skull structure, which was assumed to be a primitive trait. The rationale for this classification has been disputed, with some arguing that turtles are diapsids that evolved anapsid skulls, improving their armor. Later morphological phylogenetic studies with this in mind placed turtles firmly within Diapsida. All molecular studies have strongly upheld the placement of turtles within diapsids, most commonly as a sister group to extant archosaurs.
Permian reptiles
With the close of the Carboniferous, the amniotes became the dominant tetrapod fauna. While primitive, terrestrial reptiliomorphs still existed, the synapsid amniotes evolved the first truly terrestrial megafauna (giant animals) in the form of pelycosaurs, such as Edaphosaurus and the carnivorous Dimetrodon. In the mid-Permian period, the climate became drier, resulting in a change of fauna: The pelycosaurs were replaced by the therapsids.
The parareptiles, whose massive skull roofs had no postorbital holes, continued and flourished throughout the Permian. The pareiasaurian parareptiles reached giant proportions in the late Permian, eventually disappearing at the close of the period (the turtles being possible survivors).
Early in the period, the modern reptiles, or crown-group reptiles, evolved and split into two main lineages: the Archosauromorpha (forebears of turtles, crocodiles, and dinosaurs) and the Lepidosauromorpha (predecessors of modern lizards and tuataras). Both groups remained lizard-like and relatively small and inconspicuous during the Permian.
Mesozoic reptiles
The close of the Permian saw the greatest mass extinction known (see the Permian–Triassic extinction event), an event prolonged by the combination of two or more distinct extinction pulses. Most of the earlier parareptile and synapsid megafauna disappeared, being replaced by the true reptiles, particularly archosauromorphs. These were characterized by elongated hind legs and an erect pose, the early forms looking somewhat like long-legged crocodiles. The archosaurs became the dominant group during the Triassic period, though it took 30 million years before their diversity was as great as the animals that lived in the Permian. Archosaurs developed into the well-known dinosaurs and pterosaurs, as well as the ancestors of crocodiles. Since reptiles, first rauisuchians and then dinosaurs, dominated the Mesozoic era, the interval is popularly known as the "Age of Reptiles". The dinosaurs also developed smaller forms, including the feather-bearing smaller theropods. In the Cretaceous period, these gave rise to the first true birds.
The sister group to Archosauromorpha is Lepidosauromorpha, containing lizards and tuataras, as well as their fossil relatives. Lepidosauromorpha contained at least one major group of the Mesozoic sea reptiles: the mosasaurs, which lived during the Cretaceous period. The phylogenetic placement of other main groups of fossil sea reptiles – the ichthyopterygians (including ichthyosaurs) and the sauropterygians, which evolved in the early Triassic – is more controversial. Different authors linked these groups either to lepidosauromorphs or to archosauromorphs, and ichthyopterygians were also argued to be diapsids that did not belong to the least inclusive clade containing lepidosauromorphs and archosauromorphs.
Cenozoic reptiles
The close of the Cretaceous period saw the demise of the Mesozoic era reptilian megafauna (see the Cretaceous–Paleogene extinction event, also known as K-T extinction event). Of the large marine reptiles, only sea turtles were left; and of the non-marine large reptiles, only the semi-aquatic crocodiles and broadly similar choristoderes survived the extinction, with last members of the latter, the lizard-like Lazarussuchus, becoming extinct in the Miocene. Of the great host of dinosaurs dominating the Mesozoic, only the small beaked birds survived. This dramatic extinction pattern at the end of the Mesozoic led into the Cenozoic. Mammals and birds filled the empty niches left behind by the reptilian megafauna and, while reptile diversification slowed, bird and mammal diversification took an exponential turn. However, reptiles were still important components of the megafauna, particularly in the form of large and giant tortoises.
After the extinction of most archosaur and marine reptile lines by the end of the Cretaceous, reptile diversification continued throughout the Cenozoic. Squamates took a massive hit during the K–Pg event, only recovering ten million years after it, but they underwent a great radiation event once they recovered, and today squamates make up the majority of living reptiles (> 95%). Approximately 10,000 extant species of traditional reptiles are known, with birds adding about 10,000 more, almost twice the number of mammals, represented by about 5,700 living species (excluding domesticated species).
Morphology and physiology
Circulation
All lepidosaurs and turtles have a three-chambered heart consisting of two atria, one variably partitioned ventricle, and two aortas that lead to the systemic circulation. The degree of mixing of oxygenated and deoxygenated blood in the three-chambered heart varies depending on the species and physiological state. Under different conditions, deoxygenated blood can be shunted back to the body or oxygenated blood can be shunted back to the lungs. This variation in blood flow has been hypothesized to allow more effective thermoregulation and longer diving times for aquatic species, but has not been shown to be a fitness advantage.
For example, iguana hearts, like the majority of squamate hearts, are composed of three chambers with two aortas and one ventricle, and of involuntary cardiac muscle. The main structures of the heart are the sinus venosus, the pacemaker, the left atrium, the right atrium, the atrioventricular valve, the cavum venosum, cavum arteriosum, the cavum pulmonale, the muscular ridge, the ventricular ridge, pulmonary veins, and paired aortic arches.
Some squamate species (e.g., pythons and monitor lizards) have three-chambered hearts that become functionally four-chambered hearts during contraction. This is made possible by a muscular ridge that subdivides the ventricle during ventricular diastole and completely divides it during ventricular systole. Because of this ridge, some of these squamates are capable of producing ventricular pressure differentials that are equivalent to those seen in mammalian and avian hearts.
Crocodilians have an anatomically four-chambered heart, similar to birds, but also have two systemic aortas and are therefore capable of bypassing their pulmonary circulation. In turtles, the ventricle is not perfectly divided, so a mix of aerated and nonaerated blood can occur.
Metabolism
Modern non-avian reptiles exhibit some form of cold-bloodedness (i.e. some mix of poikilothermy, ectothermy, and bradymetabolism) so that they have limited physiological means of keeping the body temperature constant and often rely on external sources of heat. Due to a less stable core temperature than birds and mammals, reptilian biochemistry requires enzymes capable of maintaining efficiency over a greater range of temperatures than is the case for warm-blooded animals. The optimum body temperature range varies with species but is typically below that of warm-blooded animals; extreme heat-adapted species, like the American desert iguana Dipsosaurus dorsalis, can nonetheless have optimal physiological temperatures in the mammalian range. While the optimum temperature is often encountered when the animal is active, the low basal metabolism makes body temperature drop rapidly when the animal is inactive.
As in all animals, reptilian muscle action produces heat. In large reptiles, like leatherback turtles, the low surface-to-volume ratio allows this metabolically produced heat to keep the animals warmer than their environment even though they do not have a warm-blooded metabolism. This form of homeothermy is called gigantothermy; it has been suggested as having been common in large dinosaurs and other extinct large-bodied reptiles.
The benefit of a low resting metabolism is that it requires far less fuel to sustain bodily functions. By using temperature variations in their surroundings, or by remaining cold when they do not need to move, reptiles can save considerable amounts of energy compared to endothermic animals of the same size. A crocodile needs from a tenth to a fifth of the food necessary for a lion of the same weight and can live half a year without eating. Lower food requirements and adaptive metabolisms allow reptiles to dominate the animal life in regions where net calorie availability is too low to sustain large-bodied mammals and birds.
It is generally assumed that reptiles are unable to produce the sustained high energy output necessary for long distance chases or flying. Higher energetic capacity might have been responsible for the evolution of warm-bloodedness in birds and mammals. However, investigation of correlations between active capacity and thermophysiology show a weak relationship. Most extant reptiles are carnivores with a sit-and-wait feeding strategy; whether reptiles are cold blooded due to their ecology is not clear. Energetic studies on some reptiles have shown active capacities equal to or greater than similar sized warm-blooded animals.
Respiratory system
All reptiles breathe using lungs. Aquatic turtles have developed more permeable skin, and some species have modified their cloaca to increase the area for gas exchange. Even with these adaptations, breathing is never fully accomplished without lungs. Lung ventilation is accomplished differently in each main reptile group. In squamates, the lungs are ventilated almost exclusively by the axial musculature. This is also the same musculature that is used during locomotion. Because of this constraint, most squamates are forced to hold their breath during intense runs. Some, however, have found a way around it. Varanids, and a few other lizard species, employ buccal pumping as a complement to their normal "axial breathing". This allows the animals to completely fill their lungs during intense locomotion, and thus remain aerobically active for a long time. Tegu lizards are known to possess a proto-diaphragm, which separates the pulmonary cavity from the visceral cavity. While not actually capable of movement, it does allow for greater lung inflation, by taking the weight of the viscera off the lungs.
Crocodilians actually have a muscular diaphragm that is analogous to the mammalian diaphragm. The difference is that the muscles for the crocodilian diaphragm pull the pubis (part of the pelvis, which is movable in crocodilians) back, which brings the liver down, thus freeing space for the lungs to expand. This type of diaphragmatic setup has been referred to as the "hepatic piston". The airways form a number of double tubular chambers within each lung. On inhalation and exhalation air moves through the airways in the same direction, thus creating a unidirectional airflow through the lungs. A similar system is found in birds, monitor lizards and iguanas.
Most reptiles lack a secondary palate, meaning that they must hold their breath while swallowing. Crocodilians have evolved a bony secondary palate that allows them to continue breathing while remaining submerged (and protect their brains against damage by struggling prey). Skinks (family Scincidae) also have evolved a bony secondary palate, to varying degrees. Snakes took a different approach and extended their trachea instead. Their tracheal extension sticks out like a fleshy straw, and allows these animals to swallow large prey without suffering from asphyxiation.
Turtles and tortoises
How turtles breathe has been the subject of much study. To date, only a few species have been studied thoroughly enough to get an idea of how those turtles breathe. The varied results indicate that turtles have found a variety of solutions to this problem.
The difficulty is that most turtle shells are rigid and do not allow for the type of expansion and contraction that other amniotes use to ventilate their lungs. Some turtles, such as the Indian flapshell (Lissemys punctata), have a sheet of muscle that envelops the lungs. When it contracts, the turtle can exhale. When at rest, the turtle can retract the limbs into the body cavity and force air out of the lungs. When the turtle protracts its limbs, the pressure inside the lungs is reduced, and the turtle can suck air in. Turtle lungs are attached to the inside of the top of the shell (carapace), with the bottom of the lungs attached (via connective tissue) to the rest of the viscera. By using a series of special muscles (roughly equivalent to a diaphragm), turtles are capable of pushing their viscera up and down, resulting in effective respiration, since many of these muscles have attachment points in conjunction with their forelimbs (indeed, many of the muscles expand into the limb pockets during contraction).
Breathing during locomotion has been studied in three species, and they show different patterns. Adult female green sea turtles do not breathe as they crutch along their nesting beaches. They hold their breath during terrestrial locomotion and breathe in bouts as they rest. North American box turtles breathe continuously during locomotion, and the ventilation cycle is not coordinated with the limb movements. This is because they use their abdominal muscles to breathe during locomotion. The last species to have been studied is the red-eared slider, which also breathes during locomotion, but takes smaller breaths during locomotion than during small pauses between locomotor bouts, indicating that there may be mechanical interference between the limb movements and the breathing apparatus. Box turtles have also been observed to breathe while completely sealed up inside their shells.
Sound production
Compared with frogs, birds, and mammals, reptiles are less vocal. Sound production is usually limited to hissing, which is produced merely by forcing air through a partly closed glottis and is not considered to be a true vocalization. The ability to vocalize exists in crocodilians, some lizards and turtles, and typically involves vibrating fold-like structures in the larynx or glottis. Some geckos and turtles possess true vocal cords, which have elastin-rich connective tissue.
Hearing in snakes
Hearing in humans relies on three parts of the ear: the outer ear that directs sound waves into the ear canal, the middle ear that transmits incoming sound waves to the inner ear, and the inner ear that enables hearing and balance. Unlike humans and other mammals, snakes do not possess an outer ear, a middle ear, or a tympanum, but have an inner ear structure with cochleas directly connected to their jawbone. They are able to feel the vibrations generated by sound waves in their jaw as they move on the ground. This is done by the use of mechanoreceptors, sensory nerves that run along the body of snakes, directing the vibrations along the spinal nerves to the brain. Snakes have sensitive auditory perception and can tell from which direction a sound is coming, so that they can sense the presence of prey or predators, but it is still unclear how sensitive snakes are to sound waves traveling through the air.
Skin
Reptilian skin is covered in a horny epidermis, making it watertight and enabling reptiles to live on dry land, in contrast to amphibians. Compared to mammalian skin, that of reptiles is rather thin and lacks the thick dermal layer that produces leather in mammals.
Exposed parts of reptiles are protected by scales or scutes, sometimes with a bony base (osteoderms), forming armor. In lepidosaurs, such as lizards and snakes, the whole skin is covered in overlapping epidermal scales. Such scales were once thought to be typical of the class Reptilia as a whole, but are now known to occur only in lepidosaurs. The scales found in turtles and crocodiles are of dermal, rather than epidermal, origin and are properly termed scutes. In turtles, the body is hidden inside a hard shell composed of fused scutes.
Lacking a thick dermis, reptilian leather is not as strong as mammalian leather. It is used in leather-wares for decorative purposes for shoes, belts and handbags, particularly crocodile skin.
Shedding
Reptiles shed their skin through a process called ecdysis which occurs continuously throughout their lifetime. In particular, younger reptiles tend to shed once every five to six weeks while adults shed three to four times a year. Younger reptiles shed more because of their rapid growth rate. Once they reach full size, the frequency of shedding drastically decreases. The process of ecdysis involves forming a new layer of skin under the old one. Proteolytic enzymes and lymphatic fluid are secreted between the old and new layers of skin, lifting the old skin from the new one and allowing shedding to occur. Snakes shed from the head to the tail, while lizards shed in a "patchy pattern". Dysecdysis, a common skin disease in snakes and lizards, occurs when ecdysis, or shedding, fails. There are numerous reasons why shedding fails; they can be related to inadequate humidity and temperature, nutritional deficiencies, dehydration and traumatic injuries. Nutritional deficiencies decrease the proteolytic enzymes, while dehydration reduces the lymphatic fluid needed to separate the skin layers. Traumatic injuries, on the other hand, form scars that will not allow new scales to form and disrupt the process of ecdysis.
Excretion
Excretion is performed mainly by two small kidneys. In diapsids, uric acid is the main nitrogenous waste product; turtles, like mammals, excrete mainly urea. Unlike the kidneys of mammals and birds, reptile kidneys are unable to produce liquid urine more concentrated than their body fluid. This is because they lack a specialized structure called a loop of Henle, which is present in the nephrons of birds and mammals. Because of this, many reptiles use the colon to aid in the reabsorption of water. Some are also able to take up water stored in the bladder. Excess salts are also excreted by nasal and lingual salt glands in some reptiles.
In all reptiles, the urinogenital ducts and the rectum both empty into an organ called a cloaca. In some reptiles, a midventral wall in the cloaca may open into a urinary bladder, but not in all. It is present in all turtles and tortoises as well as most lizards, but is lacking in the monitor lizards and the legless lizards. It is absent in the snakes, alligators, and crocodiles.
Many turtles and lizards have proportionally very large bladders. Charles Darwin noted that the Galapagos tortoise had a bladder which could store up to 20% of its body weight. Such adaptations are the result of environments such as remote islands and deserts where water is very scarce. Other desert-dwelling reptiles have large bladders that can store a long-term reservoir of water for up to several months and aid in osmoregulation.
Turtles have two or more accessory urinary bladders, located lateral to the neck of the urinary bladder and dorsal to the pubis, occupying a significant portion of their body cavity. Their bladder is also usually bilobed with a left and right section. The right section is located under the liver, which prevents large stones from remaining in that side while the left section is more likely to have calculi.
Digestion
Most reptiles are insectivorous or carnivorous and have simple and comparatively short digestive tracts due to meat being fairly simple to break down and digest. Digestion is slower than in mammals, reflecting their lower resting metabolism and their inability to divide and masticate their food. Their poikilotherm metabolism has very low energy requirements, allowing large reptiles like crocodiles and large constrictors to live from a single large meal for months, digesting it slowly.
While modern reptiles are predominantly carnivorous, during the early history of reptiles several groups produced some herbivorous megafauna: in the Paleozoic, the pareiasaurs; and in the Mesozoic several lines of dinosaurs. Today, turtles are the only predominantly herbivorous reptile group, but several lines of agamas and iguanas have evolved to live wholly or partly on plants.
Herbivorous reptiles face the same problems of mastication as herbivorous mammals but, lacking the complex teeth of mammals, many species swallow rocks and pebbles (so called gastroliths) to aid in digestion: The rocks are washed around in the stomach, helping to grind up plant matter. Fossil gastroliths have been found associated with both ornithopods and sauropods, though whether they actually functioned as a gastric mill in the latter is disputed. Salt water crocodiles also use gastroliths as ballast, stabilizing them in the water or helping them to dive. A dual function as both stabilizing ballast and digestion aid has been suggested for gastroliths found in plesiosaurs.
Nerves
The reptilian nervous system contains the same basic part of the amphibian brain, but the reptile cerebrum and cerebellum are slightly larger. Most typical sense organs are well developed with certain exceptions, most notably the snake's lack of external ears (middle and inner ears are present). There are twelve pairs of cranial nerves. Due to their short cochlea, reptiles use electrical tuning to expand their range of audible frequencies.
Vision
Most reptiles are diurnal animals. The vision is typically adapted to daylight conditions, with color vision and more advanced visual depth perception than in amphibians and most mammals.
Reptiles usually have excellent vision, allowing them to detect shapes and motions at long distances. They often have poor vision in low-light conditions. Birds, crocodiles and turtles have three types of photoreceptor: rods, single cones and double cones, which gives them sharp color vision and enables them to see ultraviolet wavelengths. The lepidosaurs appear to have lost the duplex retina and only have a single class of receptor that is cone-like or rod-like depending on whether the species is diurnal or nocturnal. In many burrowing species, such as blind snakes, vision is reduced.
Many lepidosaurs have a photosensory organ on the top of their heads called the parietal eye, which is also known as the third eye, pineal eye or pineal gland. This "eye" does not work the same way as a normal eye does, as it has only a rudimentary retina and lens and thus cannot form images. It is, however, sensitive to changes in light and dark and can detect movement.
Some snakes have extra sets of visual organs (in the loosest sense of the word) in the form of pits sensitive to infrared radiation (heat). Such heat-sensitive pits are particularly well developed in the pit vipers, but are also found in boas and pythons. These pits allow the snakes to sense the body heat of birds and mammals, enabling pit vipers to hunt rodents in the dark.
Most reptiles, as well as birds, possess a nictitating membrane, a translucent third eyelid which is drawn over the eye from the inner corner. In crocodilians, it protects its eyeball surface while allowing a degree of vision underwater. However, many squamates, geckos and snakes in particular, lack eyelids, which are replaced by a transparent scale. This is called the brille, spectacle, or eyecap. The brille is usually not visible, except for when the snake molts, and it protects the eyes from dust and dirt.
Reproduction
Reptiles generally reproduce sexually, though some are capable of asexual reproduction. All reproductive activity occurs through the cloaca, the single exit/entrance at the base of the tail where waste is also eliminated. Most reptiles have copulatory organs, which are usually retracted or inverted and stored inside the body. In turtles and crocodilians, the male has a single median penis, while squamates, including snakes and lizards, possess a pair of hemipenes, only one of which is typically used in each session. Tuatara, however, lack copulatory organs, and so the male and female simply press their cloacas together as the male discharges sperm.
Most reptiles lay amniotic eggs covered with leathery or calcareous shells. An amnion, chorion, and allantois are present during embryonic life. The eggshell protects the embryo and keeps it from drying out, but it is flexible enough to allow gas exchange. The chorion aids in gas exchange between the inside and outside of the egg, allowing carbon dioxide to exit the egg and oxygen to enter it. The albumin further protects the embryo and serves as a reservoir for water and protein. The allantois is a sac that collects the metabolic waste produced by the embryo. The amniotic sac contains amniotic fluid which protects and cushions the embryo, and the amnion aids in osmoregulation and serves as a saltwater reservoir. The yolk sac surrounding the yolk contains protein- and fat-rich nutrients that are absorbed by the embryo via vessels that allow the embryo to grow and metabolize. The air space provides the embryo with oxygen while it is hatching, ensuring that it does not suffocate. There are no larval stages of development. Viviparity and ovoviviparity have evolved in squamates and many extinct clades of reptiles. Among squamates, many species, including all boas and most vipers, use this mode of reproduction. The degree of viviparity varies; some species simply retain the eggs until just before hatching, others provide maternal nourishment to supplement the yolk, and yet others lack any yolk and provide all nutrients via a structure similar to the mammalian placenta. The earliest documented case of viviparity in reptiles is the Early Permian mesosaurs, although some individuals or taxa in that clade may also have been oviparous because a putative isolated egg has also been found. Several groups of Mesozoic marine reptiles also exhibited viviparity, such as mosasaurs, ichthyosaurs, and Sauropterygia, a group that includes pachypleurosaurs and Plesiosauria.
Asexual reproduction has been identified in squamates in six families of lizards and one snake. In some species of squamates, a population of females is able to produce a unisexual diploid clone of the mother. This form of asexual reproduction, called parthenogenesis, occurs in several species of gecko, and is particularly widespread in the teiids (especially Aspidoscelis) and lacertids (Lacerta). In captivity, Komodo dragons (Varanidae) have reproduced by parthenogenesis.
Parthenogenetic species are suspected to occur among chameleons, agamids, xantusiids, and typhlopids.
Some reptiles exhibit temperature-dependent sex determination (TDSD), in which the incubation temperature determines whether a particular egg hatches as male or female. TDSD is most common in turtles and crocodiles, but also occurs in lizards and tuatara. To date, there has been no confirmation of whether TDSD occurs in snakes.
Longevity
Giant tortoises are among the longest-lived vertebrate animals (over 100 years by some estimates) and have been used as a model for studying longevity. DNA analysis of the genomes of Lonesome George, the iconic last member of Chelonoidis abingdonii, and the Aldabra giant tortoise Aldabrachelys gigantea detected lineage-specific variants affecting DNA repair genes that may contribute to these species' increased lifespans.
Cognition
Reptiles are generally considered less intelligent than mammals and birds. The size of their brains relative to their bodies is much smaller than in mammals, the encephalization quotient being about one tenth of that of mammals, though larger reptiles can show more complex brain development. Larger lizards, like the monitors, are known to exhibit complex behavior, including cooperation and cognitive abilities allowing them to optimize their foraging and territoriality over time. Crocodiles have relatively larger brains and show a fairly complex social structure. The Komodo dragon is even known to engage in play, as are turtles, which are also considered to be social creatures and which sometimes switch between monogamy and promiscuity in their sexual behavior. One study found that wood turtles were better than white rats at learning to navigate mazes. Another study found that giant tortoises are capable of learning through operant conditioning and visual discrimination, and that they retain learned behaviors in long-term memory. Sea turtles have been regarded as having simple brains, but their flippers are used for a variety of foraging tasks (holding, bracing, corralling) in common with marine mammals.
There is evidence that reptiles are sentient and able to feel emotions including anxiety and pleasure.
Defense mechanisms
Many small reptiles, such as snakes and lizards, that live on the ground or in the water are vulnerable to being preyed on by all kinds of carnivorous animals. Thus, avoidance is the most common form of defense in reptiles. At the first sign of danger, most snakes and lizards crawl away into the undergrowth, and turtles and crocodiles will plunge into water and sink out of sight.
Camouflage and warning
Reptiles tend to avoid confrontation through camouflage. Two major groups of reptile predators are birds and other reptiles, both of which have well-developed color vision. Thus the skins of many reptiles have cryptic coloration of plain or mottled gray, green, and brown to allow them to blend into the background of their natural environment. Aided by the reptiles' capacity for remaining motionless for long periods, the camouflage of many snakes is so effective that people or domestic animals are most typically bitten because they accidentally step on them.
When camouflage fails to protect them, blue-tongued skinks will try to ward off attackers by displaying their blue tongues, and the frill-necked lizard will display its brightly colored frill. These same displays are used in territorial disputes and during courtship. If danger arises so suddenly that flight is useless, crocodiles, turtles, some lizards, and some snakes hiss loudly when confronted by an enemy. Rattlesnakes ward off approaching danger by rapidly vibrating the tip of the tail, which is composed of a series of nested, hollow beads.
In contrast to the normal drab coloration of most reptiles, the lizards of the genus Heloderma (the Gila monster and the beaded lizard) and many of the coral snakes have high-contrast warning coloration, warning potential predators that they are venomous. A number of non-venomous North American snake species have colorful markings similar to those of the coral snake, an oft-cited example of Batesian mimicry.
Alternative defense in snakes
Camouflage does not always fool a predator. When caught out, snake species adopt different defensive tactics and use a complicated set of behaviors when attacked. Some species, like cobras or hognose snakes, first elevate their head and spread out the skin of their neck in an effort to look large and threatening. Failure of this strategy may lead to other measures, practiced particularly by cobras, vipers, and closely related species, which use venom to attack. The venom is modified saliva, delivered through fangs from a venom gland. Some non-venomous snakes, such as American hognose snakes or the European grass snake, play dead when in danger; some, including the grass snake, exude a foul-smelling liquid to deter attackers.
Defense in crocodilians
When a crocodilian is concerned about its safety, it will gape to expose its teeth and tongue. If this does not work, the crocodilian becomes more agitated and typically begins to make hissing sounds. After this, it will change its posture dramatically to make itself look more intimidating, inflating its body to increase apparent size. If absolutely necessary, it may attack the enemy.
Some species try to bite immediately. Some will use their heads as sledgehammers and literally smash an opponent, some will rush or swim toward the threat from a distance, even chasing the opponent onto land or galloping after it. The main weapon in all crocodiles is the bite, which can generate very high bite force. Many species also possess canine-like teeth. These are used primarily for seizing prey, but are also used in fighting and display.
Shedding and regenerating tails
Geckos, skinks, and some other lizards that are captured by the tail will shed part of the tail structure through a process called autotomy and thus be able to flee. The detached tail will continue to thrash, creating a deceptive sense of continued struggle and distracting the predator's attention from the fleeing prey animal. The detached tails of leopard geckos can wiggle for up to 20 minutes. The tail grows back in most species, but some, like crested geckos, lose their tails for the rest of their lives. In many species the tail is of a contrasting and dramatically more intense color than the rest of the body, so as to encourage potential predators to strike for the tail first. In the shingleback skink and some species of geckos, the tail is short and broad and resembles the head, so that predators may attack it rather than the more vulnerable front part.
Reptiles that are capable of shedding their tails can partially regenerate them over a period of weeks. The new section will however contain cartilage rather than bone, and will never grow to the same length as the original tail. It is often also distinctly discolored compared to the rest of the body and may lack some of the external sculpting features seen in the original tail.
Relations with humans
In cultures and religions
Dinosaurs have been widely depicted in culture since the English palaeontologist Richard Owen coined the name dinosaur in 1842. As early as 1854, the Crystal Palace Dinosaurs were on display to the public in south London. One dinosaur appeared in literature even earlier: Charles Dickens placed a Megalosaurus in the first chapter of his novel Bleak House in 1852.
The dinosaurs featured in books, films, television programs, artwork, and other media have been used for both education and entertainment. The depictions range from the realistic, as in the television documentaries of the 1990s and first decade of the 21st century, to the fantastic, as in the monster movies of the 1950s and 1960s.
The snake or serpent has played a powerful symbolic role in different cultures. In Egyptian history, the Nile cobra adorned the crown of the pharaoh. It was worshipped as one of the gods and was also used for sinister purposes: the murder of an adversary and ritual suicide (Cleopatra). In Greek mythology, snakes are associated with deadly antagonists as a chthonic symbol, roughly translated as earthbound. The nine-headed Lernaean Hydra that Hercules defeated and the three Gorgon sisters are children of Gaia, the earth. Medusa was one of the three Gorgon sisters whom Perseus defeated; she is described as a hideous mortal with snakes instead of hair and the power to turn men to stone with her gaze. After killing her, Perseus gave her head to Athena, who fixed it to her shield, the Aegis. The Titans are depicted in art with their legs replaced by bodies of snakes for the same reason: they are children of Gaia, and so are bound to the earth.
In Hinduism, snakes are worshipped as gods, with many women pouring milk on snake pits. The cobra is seen on the neck of Shiva, while Vishnu is often depicted sleeping on a seven-headed snake or within the coils of a serpent. There are temples in India devoted solely to cobras, sometimes called Nagraj (King of Snakes), and snakes are believed to be symbols of fertility. In the annual Hindu festival of Nag Panchami, snakes are venerated and prayed to.
In religious terms, the snake and the jaguar are arguably the most important animals in ancient Mesoamerica: "In states of ecstasy, lords dance a serpent dance; great descending snakes adorn and support buildings from Chichen Itza to Tenochtitlan, and the Nahuatl word coatl meaning serpent or twin, forms part of primary deities such as Mixcoatl, Quetzalcoatl, and Coatlicue." In Christianity and Judaism, a serpent appears in Genesis to tempt Adam and Eve with the forbidden fruit from the Tree of Knowledge of Good and Evil.
The turtle has a prominent position as a symbol of steadfastness and tranquility in religion, mythology, and folklore from around the world. A tortoise's longevity is suggested by its long lifespan and its shell, which was thought to protect it from any foe. In the cosmological myths of several cultures a World Turtle carries the world upon its back or supports the heavens.
Medicine
Deaths from snakebites are uncommon in many parts of the world, but are still counted in tens of thousands per year in India. Snakebite can be treated with antivenom made from the venom of the snake. To produce antivenom, a mixture of the venoms of different species of snake is injected into the body of a horse in ever-increasing dosages until the horse is immunized. Blood is then extracted; the serum is separated, purified and freeze-dried. The cytotoxic effect of snake venom is being researched as a potential treatment for cancers.
Lizards such as the Gila monster produce toxins with medical applications. Gila toxin reduces plasma glucose; the substance is now synthesised for use in the anti-diabetes drug exenatide (Byetta). Another toxin from Gila monster saliva has been studied for use as an anti-Alzheimer's drug.
Geckos have also been used as medicine, especially in China. Turtles have been used in Chinese traditional medicine for thousands of years, with every part of the turtle believed to have medical benefits; however, there is little scientific evidence linking these claimed benefits to turtle consumption. Growing demand for turtle meat has placed pressure on vulnerable wild populations of turtles.
Commercial farming
Crocodiles are protected in many parts of the world, and are farmed commercially. Their hides are tanned and used to make leather goods such as shoes and handbags; crocodile meat is also considered a delicacy. The most commonly farmed species are the saltwater and Nile crocodiles. Farming has resulted in an increase in the saltwater crocodile population in Australia, as eggs are usually harvested from the wild, so landowners have an incentive to conserve their habitat. Crocodile leather is made into wallets, briefcases, purses, handbags, belts, hats, and shoes. Crocodile oil has been used for various purposes.
Snakes are also farmed, primarily in East and Southeast Asia, and their production has become more intensive in the last decade. Snake farming has raised conservation concerns in the past, as it can lead to overexploitation of wild snakes and their natural prey to supply the farms. However, farming snakes can limit the hunting of wild snakes, while reducing the slaughter of higher-order vertebrates like cows. The energy efficiency of snakes is higher than expected for carnivores, due to their ectothermy and low metabolism. Waste protein from the poultry and pig industries is used as feed in snake farms. Snake farms produce meat, snake skin, and antivenom.
Turtle farming is another known but controversial practice. Turtles have been farmed for a variety of reasons, ranging from food to traditional medicine, the pet trade, and scientific conservation. Demand for turtle meat and medicinal products is one of the main threats to turtle conservation in Asia. Though commercial breeding would seem to insulate wild populations, it can stoke demand for them and increase wild captures. Even the potentially appealing concept of raising turtles on a farm for release into the wild is questioned by some veterinarians with experience of farm operations, who caution that this may introduce into wild populations infectious diseases that occur on farms but are not (yet) present in the wild.
Reptiles in captivity
A herpetarium is a zoological exhibition space for reptiles and amphibians.
In the Western world, some snakes (especially relatively docile species such as the ball python and corn snake) are sometimes kept as pets. Numerous species of lizard are kept as pets, including bearded dragons, iguanas, anoles, and geckos (such as the popular leopard gecko and the crested gecko).
Turtles and tortoises are increasingly popular pets, but keeping them can be challenging due to their particular requirements, such as temperature control, the need for UV light sources, and a varied diet. The long lifespans of turtles and especially tortoises mean they can potentially outlive their owners. Good hygiene and significant maintenance are necessary when keeping reptiles, due to the risks of Salmonella and other pathogens. Regular hand-washing after handling is an important measure to prevent infection.
See also
Amphibian and reptile tunnel
List of reptiles
Lists of reptiles by region
Reptile Database
Notes
References
Further reading
Duellman, William E.; Berg, Barbara (1962). Type Specimens of Amphibians and Reptiles in the Museum of Natural History, the University of Kansas.
External links
— an online full text copy of a 22 volume 13,000 page summary of the state of reptile research.
Extant Pennsylvanian first appearances
Paraphyletic groups
Articles containing video clips | Reptile | Biology | 12,928 |
70,870,585 | https://en.wikipedia.org/wiki/Endococcus%20hafellneri | Endococcus hafellneri is a species of lichenicolous (lichen-dwelling) fungus in the family Verrucariaceae. It is found in North Asia and the Russian Far East, Estonia, and Japan, where it grows on the lobes of the lichens Flavocetraria cucullata and Cetraria islandica.
Taxonomy
The fungus was formally described as a new species in 2009 by Mikhail Zhurbenko. He placed the species provisionally in the genus Stigmidium, but unlike all other species of that genus, the new fungus has coloured (brown) ascospores. The species epithet honours German lichenologist Josef Hafellner, "in recognition of his important contribution to the knowledge of lichenicolous fungi".
In 2019, Zhurbenko transferred the taxon to the genus Endococcus. Having had the opportunity to collect and observe more specimens, he noted the constancy of the coloured spores and concluded that the characteristics of the fungus are better aligned with the traits of the genus Endococcus.
Description
Endococcus hafellneri produces ascomata with a perithecioid morphology: more or less rounded, with an ostiole. They are black and shiny and protrude slightly from the surface of the host lichen, measuring up to 50 μm in diameter. Infection by the fungus causes grey and sometimes perforated patches in the host lichen up to across, sometimes with a dark greyish-brown rim around the margin of the patch.
Habitat and distribution
In Asian Russia, Endococcus hafellneri has been recorded from Buryatia, Sakha, the Magadan Oblast, and the Caucasus. It was reported from Kihnu island (Estonia) in 2015, and from Hokkaido, Japan, in 2019. Known hosts for the fungus are Flavocetraria cucullata and Cetraria islandica.
References
Verrucariales
Fungi described in 2009
Fungi of Asia
Fungi of Europe
Lichenicolous fungi
Taxa named by Mikhail Petrovich Zhurbenko
Fungus species | Endococcus hafellneri | Biology | 429 |