id: int64 (values 39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (values 3 to 71.8k)
subcategories: list (lengths 0 to 30)
351,080
https://en.wikipedia.org/wiki/Transparency%20%28projection%29
A transparency, also known variously as a viewfoil or foil (from the French word "feuille", or sheet), or viewgraph, is a thin sheet of transparent flexible material, typically polyester (historically cellulose acetate), onto which figures can be drawn. These are then placed on an overhead projector for display to an audience. Many companies and small organizations use a system of projectors and transparencies in meetings and other groupings of people, though this system is being largely replaced by video projectors and interactive whiteboards. Printing Transparencies can be printed using a variety of technologies. In the 1960s and 70s, the GAF OZALID "projecto-viewfoil" used a diazo process to make a clear sheet framed in cardboard and protected by a rice paper cover. In the 1980s, laser printers and copiers could make foil sheets using standard xerographic processes. Specialist transparencies are available for use with laser printers that are better able to handle the high temperatures present in the fuser unit. For inkjet printers, coated transparencies are available that can absorb and hold the liquid ink, although care must be taken to avoid excessive exposure to moisture, which can cause the transparency to become cloudy; they must also be loaded correctly into the printer, as they are usually coated on only one side. Uses Uses for transparencies are as varied as the organizations that use them. Certain classes, such as those associated with mathematics or history and geography, use transparencies to illustrate a point or problem. Until the advent of LaTeX, math classes in particular used rolls of acetate to illustrate sufficiently long problems and to display mathematical symbols missing from common computer keyboards. Aerospace companies, like Boeing and Beechcraft, used transparencies for years in management meetings in order to brief engineers and relevant personnel about new aircraft designs and changes to existing designs, as well as to bring up illustrated problems. Some churches and other religious organizations used them to show sermon outlines and illustrate certain topics such as Old Testament battles and Jewish artifacts during worship services, as well as to outline business meetings. Spatial light modulators (SLMs) Many overhead projectors are used with a flat-panel LCD which, when used this way, is referred to as a spatial light modulator or SLM. Data projectors are often based on some form of SLM in a projection path. An LCD is a transmissive SLM, whereas other technologies such as Texas Instruments' DLP are reflective SLMs. Not all projectors use SLMs (e.g., some use devices that produce their own light rather than function as transparencies). Examples of non-SLM systems are organic light-emitting diodes (OLEDs). See also Presentation slide Projection panel Reversal film References External links Transparency (projection) – semanticscholar.org Display technology Office equipment Presentation
Transparency (projection)
[ "Technology", "Engineering" ]
586
[ "Multimedia", "Electronic engineering", "Presentation", "Display technology" ]
351,088
https://en.wikipedia.org/wiki/Transparency%20%28telecommunication%29
In telecommunications, transparency can refer to: The property of an entity that allows another entity to pass through it without altering either of the entities. The property that allows a transmission system or channel to accept, at its input, unmodified user information, and deliver corresponding user information at its output, unchanged in form or information content. The user information may be changed internally within the transmission system, but it is restored to its original form prior to the output without the involvement of the user. The quality of a data communications system or device that uses a bit-oriented link protocol that does not depend on the bit sequence structure used by the data source. Some communication systems are not transparent. Non-transparent communication systems have one or both of the following problems: user data may be incorrectly interpreted as internal commands. For example, modems with a Time Independent Escape Sequence or 20th century Signaling System No. 5 and R2 signalling telephone systems, which occasionally incorrectly interpreted user data (from a "blue box") as commands. output "user data" may not always be the same as input user data. For example, many early email systems were not 8-bit clean; they seemed to transfer typical short text messages properly, but converted "unusual" characters (the control characters, the "high ASCII" characters) in an irreversible way into some other "usual" character. Many of these systems also changed user data in other irreversible ways – such as inserting linefeeds to make sure each line is less than some maximum length, and inserting a ">" at the beginning of every line that begins with "From ". Until 8BITMIME, a variety of binary-to-text encoding techniques have been overlaid on top of such systems to restore transparency – to make sure that any possible file can be transferred so that the final output "user data" is actually identical to the original user data. References See also In-band signaling out-of-band communication Telecommunications engineering
Transparency (telecommunication)
[ "Engineering" ]
409
[ "Electrical engineering", "Telecommunications engineering" ]
351,091
https://en.wikipedia.org/wiki/Transparency%20%28human%E2%80%93computer%20interaction%29
Any change in a computing system, such as a new feature or new component, is transparent if the system after the change adheres to its previous external interface as much as possible while changing its internal behavior. The purpose is to shield the change from all systems (or human users) on the other end of the interface. Confusingly, the term refers to the overall invisibility of the component; it does not refer to the visibility of the component's internals (as in a white box or open system). The term transparent is widely used in computing marketing as a substitute for the term invisible, since the term invisible has a bad connotation (usually seen as something that the user can't see and has no control over), while the term transparent has a good connotation (usually associated with not hiding anything). The vast majority of the time, the term transparent is used in a misleading way to refer to the actual invisibility of a computing process, which is also described by the term opaque, especially with regard to data structures. Because of this misleading and counter-intuitive definition, modern computer literature tends to prefer use of "agnostic" over "transparent". The term is used particularly often with regard to an abstraction layer that is invisible either from its upper or lower neighboring layer. The term was also used, around 1969, in IBM and Honeywell programming manuals to refer to a particular computer programming technique. Application code was transparent when it was free of low-level detail (such as device-specific management) and contained only the logic for solving the main problem. It was achieved through encapsulation – putting the code into modules that hid internal details, making them invisible to the main application. Examples For example, the Network File System is transparent, because it provides access to files stored remotely on the network in a way that is uniform with previous local access to a file system, so the user might not even notice it while using the folder hierarchy. The early File Transfer Protocol (FTP) is considerably less transparent, because it requires each user to learn how to access files through an FTP client. Similarly, some file systems allow transparent compression and decompression of data, enabling users to store more files on a medium without any special knowledge; some file systems encrypt files transparently. This approach does not require running a compression or encryption utility manually. In software engineering, it is also considered good practice to develop or use abstraction layers for database access, so that the same application will work with different databases; here, the abstraction layer allows other parts of the program to access the database transparently (see Data Access Object, for example). In object-oriented programming, transparency is facilitated through the use of interfaces that hide actual implementations done with different underlying classes. Types of transparency in distributed systems Transparency means that any form of distributed system should hide its distributed nature from its users, appearing and functioning as a normal centralized system. There are many types of transparency: Access transparency – Regardless of how resource access and representation has to be performed on each individual computing entity, the users of a distributed system should always access resources in a single, uniform way. 
Example: SQL Queries Location transparency – Users of a distributed system should not have to be aware of where a resource is physically located. Example: Pages in the Web Migration transparency – Users should not be aware of whether a resource or computing entity possesses the ability to move to a different physical or logical location. Relocation transparency – Should a resource move while in use, this should not be noticeable to the end user. Replication transparency – If a resource is replicated among several locations, it should appear to the user as a single resource. Concurrent transparency – While multiple users may compete for and share a single resource, this should not be apparent to any of them. Failure transparency – Always try to hide any failure and recovery of computing entities and resources. Persistence transparency – Whether a resource lies in volatile or permanent memory should make no difference to the user. Security transparency – Negotiation of cryptographically secure access of resources must require a minimum of user intervention, or users will circumvent the security in preference of productivity. Formal definitions of most of these concepts can be found in RM-ODP, the Open Distributed Processing Reference Model (ISO 10746). The degree to which these properties can or should be achieved may vary widely. Not every system can or should hide everything from its users. For instance, due to the existence of a fixed and finite speed of light there will always be more latency on accessing resources distant from the user. If one expects real-time interaction with the distributed system, this may be very noticeable. References https://lightcast.io/open-skills/skills/KS441HX6SDYW15ZBFJNJ/transparency-human-computer-interaction Human–computer interaction Distributed computing architecture Software architecture
Transparency (human–computer interaction)
[ "Engineering" ]
993
[ "Human–computer interaction", "Human–machine interaction" ]
351,131
https://en.wikipedia.org/wiki/Virtual%20file%20system
A virtual file system (VFS) or virtual filesystem switch is an abstract layer on top of a more concrete file system. The purpose of a VFS is to allow client applications to access different types of concrete file systems in a uniform way. A VFS can, for example, be used to access local and network storage devices transparently without the client application noticing the difference. It can be used to bridge the differences in Windows, classic Mac OS/macOS and Unix filesystems, so that applications can access files on local file systems of those types without having to know what type of file system they are accessing. A VFS specifies an interface (or a "contract") between the kernel and a concrete file system. Therefore, it is easy to add support for new file system types to the kernel simply by fulfilling the contract. The terms of the contract might change incompatibly from release to release, which would require that concrete file system support be recompiled, and possibly modified before recompilation, to allow it to work with a new release of the operating system; or the supplier of the operating system might make only backward-compatible changes to the contract, so that concrete file system support built for a given release of the operating system would work with future versions of the operating system. Implementations One of the first virtual file system mechanisms on Unix-like systems was introduced by Sun Microsystems in SunOS 2.0 in 1985. It allowed Unix system calls to access local UFS file systems and remote NFS file systems transparently. For this reason, Unix vendors who licensed the NFS code from Sun often copied the design of Sun's VFS. Other file systems could be plugged into it also: there was an implementation of the MS-DOS FAT file system developed at Sun that plugged into the SunOS VFS, although it wasn't shipped as a product until SunOS 4.1. The SunOS implementation was the basis of the VFS mechanism in System V Release 4. John Heidemann developed a stacking VFS under SunOS 4.0 for the experimental Ficus file system. This design provided for code reuse among file system types with differing but similar semantics (e.g., an encrypting file system could reuse all of the naming and storage-management code of a non-encrypting file system). Heidemann adapted this work for use in 4.4BSD as a part of his thesis research; descendants of this code underpin the file system implementations in modern BSD derivatives including macOS. Other Unix virtual file systems include the File System Switch in System V Release 3, the Generic File System in Ultrix, and the VFS in Linux. In OS/2 and Microsoft Windows, the virtual file system mechanism is called the Installable File System. The Filesystem in Userspace (FUSE) mechanism allows userland code to plug into the virtual file system mechanism in Linux, NetBSD, FreeBSD, OpenSolaris, and macOS. In Microsoft Windows, virtual filesystems can also be implemented through userland Shell namespace extensions; however, they do not support the lowest-level file system access application programming interfaces in Windows, so not all applications will be able to access file systems that are implemented as namespace extensions. KIO and GVfs/GIO provide similar mechanisms in the KDE and GNOME desktop environments (respectively), with similar limitations, although they can be made to use FUSE techniques and therefore integrate smoothly into the system. 
Single-file virtual file systems Sometimes Virtual File System refers to a file or a group of files (not necessarily inside a concrete file system) that acts as a manageable container which should provide the functionality of a concrete file system through the usage of software. Examples of such containers are CBFS Storage or a single-file virtual file system in an emulator like PCTask or so-called WinUAE, Oracle's VirtualBox, Microsoft's Virtual PC, VMware. The primary benefit for this type of file system is that it is centralized and easy to remove. A single-file virtual file system may include all the basic features expected of any file system (virtual or otherwise), but access to the internal structure of these file systems is often limited to programs specifically written to make use of the single-file virtual file system (instead of implementation through a driver allowing universal access). Another major drawback is that performance is relatively low when compared to other virtual file systems. Low performance is mostly due to the cost of shuffling virtual files when data is written or deleted from the virtual file system. Implementation of single-file virtual filesystems Direct examples of single-file virtual file systems include emulators, such as PCTask and WinUAE, which encapsulate not only the filesystem data but also emulated disk layout. This makes it easy to treat an OS installation like any other piece of software—transferring it with removable media or over the network. PCTask The Amiga emulator PCTask emulated an Intel PC 8088 based machine clocked at 4.77MHz (and later an 80486SX clocked at 25 MHz). Users of PCTask could create a file of large size on the Amiga filesystem, and this file would be virtually accessed from the emulator as if it were a real PC Hard Disk. The file could be formatted with the FAT16 filesystem to store normal MS-DOS or Windows files. WinUAE The UAE for Windows, WinUAE, allows for large single files on Windows to be treated as Amiga file systems. In WinUAE this file is called a hardfile. UAE could also treat a directory on the host filesystem (Windows, Linux, macOS, AmigaOS) as an Amiga filesystem. See also 9P (protocol) a distributed file system protocol that maps directly to the VFS layer of Plan 9, making all file system access network-transparent Synthetic file system a hierarchical interface to non-file objects that appear as if they were regular files in the tree of a disk-based file system Notes Emulation on Amiga Comparison between PCX and PCTask, Amiga PC emulators. See also This article explaining how it works PCTask. Help About WinUAE (See Hardfile section). Help About WinUAE (See Add Directory section) References Linux kernel's Virtual File System The Linux VFS, Chapter 4 of Linux File Systems by Moshe Bar (McGraw-Hill, 2001). Chapter 12 of Understanding the Linux Kernel by Daniel P. Bovet, Marco Cesati (O'Reilly Media, 2005). The Linux VFS Model: Naming structure External links Anatomy of the Linux virtual file system switch Computer file systems Virtualization
Virtual file system
[ "Engineering" ]
1,409
[ "Computer networks engineering", "Virtualization" ]
351,216
https://en.wikipedia.org/wiki/Stripboard
Stripboard is the generic name for a widely used type of electronics prototyping material for circuit boards characterized by a pre-formed regular (rectangular) grid of holes, with wide parallel strips of copper cladding running in one direction all the way along one side of an insulating bonded paper board. It is commonly also known by the name of the original product Veroboard, which is a trademark, in the UK, of British company Vero Technologies Ltd and Canadian company Pixel Print Ltd. It was originated and developed in the early 1960s by the Electronics Department of Vero Precision Engineering Ltd (VPE). It was introduced as a general-purpose material for use in constructing electronic circuits - differing from purpose-designed printed circuit boards (PCBs) in that a variety of electronics circuits may be constructed using a standard wiring board. In using the board, breaks are made in the tracks, usually around holes, to divide the strips into multiple electrical nodes. With care, it is possible to break between holes to allow for components that have two pin rows only one position apart such as twin row headers for IDCs. Stripboard is not designed for surface-mount components, though it is possible to mount many such components on the track side, particularly if tracks are cut/shaped with a knife or small cutting disc in a rotary tool. The first single-size Veroboard product was the forerunner of the numerous types of prototype wiring board which, with worldwide use over five decades, have become known as stripboard. The generic terms 'veroboard' and 'stripboard' are now taken to be synonymous. History By the mid-1950s, the printed circuit board (PCB) had become commonplace in electronics production. In early 1959, the VPE Electronics Department was formed when managing director Geoffrey Verdon-Roe hired two former Saunders-Roe Ltd employees, Peter H Winter (aircraft design department) and Terry Fitzpatrick (electronics division). After the failure of a project to develop machine tool control equipment, the department remained operative as a result of success with the invention and development of the new material. New equipment using PCBs was displayed at the 1959 Radio and Electronics Components Manufacturers Federation (RECMF) Exhibition held in The Dorchester Hotel, Park Lane, London. The usual configuration for most of the PCBs of that time had components placed in a regular pattern with the circuit formed by maze-like conductive pathways. An interesting alternative, proposed by Fitzpatrick after visiting the RECMF Exhibition on behalf of VPE, envisaged a standard circuit board carrying straight-line conductors on which the components could be suitably dispersed and connected to the conductors to produce the required circuit. A patent application was immediately filed 25 May 1959 and the invention was developed for Vero by associates Winter, Fitzpatrick and machine shop engineers. The advent of the Arduino integrated development environment, designed to introduce computer programming to newcomers unfamiliar with software development, presents a new opportunity to use Veroboard. Arduino development regularly involves the use of 'shields', which plug into the main Arduino board using standard 0.1 in header connections and carry project-specific I/O hardware. However the Arduino design makes this difficult, as one of the four header sockets is offset from the 0.1 in spacing of the others by 0.05 in. 
The British company Vero Technologies Ltd currently holds the UK trademark for Veroboard. In the Americas the Veroboard trademark is now held by the Canadian company Pixel Print Ltd. of Vancouver. Hole spacing Stripboard holes are drilled on 0.1 in (2.54 mm) centers. This spacing allows components having pins with 0.1 in spacing to be inserted. Compatible parts include DIP ICs, sockets for ICs, some types of connectors, and other devices. Stripboards have evolved over time into several variants and related products. For example, a larger version using a 0.15 inch (3.81 mm) grid and larger holes is available, but is generally less popular (presumably because it does not match up with standard IC pin spacing). Board dimensions Stripboard is available in a variety of sizes. Assemblies The components are usually placed on the plain side of the board, with their leads protruding through the holes. The leads are then soldered to the copper tracks on the other side of the board to make the desired connections, and any excess wire is cut off. The continuous tracks may be easily and neatly cut as desired to form breaks between conductors using a 3 mm (⅛") twist drill, a hand cutter made for the purpose, or a knife. Tracks may be linked up on either side of the board using wire. With practice, very neat and reliable assemblies can be created, though such a method is labour-intensive and therefore unsuitable for production assemblies except in very small quantity. External wire connections to the board are made either by soldering the wires through the holes or, for wires too thick to pass through the holes, by soldering them to specially made pins called Veropins which fit tightly into the holes. Alternatively, some types of connectors have a suitable pin spacing to be inserted directly into the board. Production Production of the proposed new product, Veroboard, was undertaken by the VPE machine tool department. Bought-in sheets of 1.6 mm (0.06 in) copper-clad SRBP printed circuit material were cut to give 122 mm x 456 mm (4.8 in x 18 in) size boards, with the individual boards then being machined to form the final product according to the original Veroboard specification. A multiple milling cutter tool, which comprised a bank of side-and-face cutters with suitably shaped cutting teeth, was fabricated, to be used in removing part of the bonded copper on each board, leaving 21 conductive strips. For a second operation, a special tool with 63 hardened punch bits 1.35 mm (0.052 in) in diameter mounted on a solid base block was constructed to repeat-punch a regularly spaced matrix of holes through the copper strips and the base board. Many dimensional, material quality, and tooling problems were encountered before finished boards of acceptable quality could be produced in quantity. These machining problems were encountered due to the non-availability, in 1960, of advanced printed circuit board milling and drilling techniques or facilities for chemical milling (etching) the copper strips. In 1961, as production rates improved with experience, Vero Electronics Ltd was formed as a separate company to market the increasing sales of Veroboard. Use As with other stripboards, in using Veroboard, components are suitably positioned and soldered to the conductors to form the required circuit. Breaks can be made in the tracks, usually around holes, to divide the strips into multiple electrical nodes, enabling increased circuit complexity. 
This type of wiring board may be used for initial electronic circuit development, to construct prototypes for bench testing, or in the production of complete electronic units in small quantity. Veroboard was first used for prototype construction within the Vero Electronics Department in 1961. The images of a binary decade counter sub-unit clearly show both the assembled components and the copper conductors with the required discontinuities. A number of these sub-units were interconnected through connectors mounted on a motherboard similar to that shown in the Veroboard Display image and comprised a very early PCB-based backplane system. Each sub-unit had a digital capacity equivalent to 1/2 byte of data storage, i.e. 2,000,000 such sub-units would be required to store 1 megabyte. Two forms of Veroboard are produced, with hole pitches of 2.54 mm (0.1 in) or 3.81 mm (0.15 in). The larger pitch is and was considered easier to assemble, especially at a time when many constructors were still more familiar with valves and tag strips. The increasingly popular integrated circuits in dual in-line packages would only fit the 0.1 in boards. Very soon the 0.1 in pitch became by far the dominant form. Integrated circuits and the common layout of short parallel strips protruding from the sides of an IC package encouraged the development of specialist boards such as Verostrip. This was a long, thin board with the copper strips arranged transversely, rather than the usual lengthwise. A ready-cut central gap was provided to isolate the sides of the IC. A 1979 Vero Electronics Ltd production drawing shows a special Veroboard product made for RS Components Ltd. The versatility of the veroboard/stripboard type of product is demonstrated by the large number of design examples currently to be found on the Internet. Variations Stripboard is available from many vendors. All versions have copper strips on one side. Some are made using printed circuit board etching and drilling techniques, although some have milled strips and punched holes. The original Veroboard used FR-2 synthetic-resin-bonded paper (SRBP) (also known as phenolic board) as the base board material. Some versions of stripboard now use higher quality FR-4 (fiberglass-reinforced epoxy laminate) material. Comparison with other systems For high density prototyping, especially of digital circuits, wire wrap is faster and more reliable than stripboard for experienced personnel. Veroboard is similar in concept and usage to a plug-in breadboard, but is cheaper and more permanent: connections are soldered, and while some limited reuse may be possible, more than a few cycles of soldering and desoldering are likely to render both the components and the board unusable. In contrast, breadboard connections are held by friction, and the breadboard can be reused many times. However, a breadboard is not very suitable for prototyping that needs to remain in a set configuration for an appreciable period of time, nor for physical mock-ups containing a working circuit, nor for any environment subject to vibration or movement. Stripboards have further evolved into a larger class of prototype boards, available in different shapes and sizes, with different conductive trace layouts. For example, one variant is called a TriPad board. This is similar to stripboard, except that the conductive tracks do not run continuously along the board but are broken into sections, each of which spans three holes. 
This allows the legs of two or three components to be easily linked together in the circuit conveniently without the need for track breaks to be made. However, in order to link more than three holes together, wire links or bridges must be formed and this can result in a less compact layout than is possible with ordinary stripboard. Another variant is Perf+. This is best described as a selective stripboard. Instead of having all the holes connected together in a strip, a Perf+ board can have holes connected to the bus using a small dab of solder. On the other side the busses run in another direction, allowing compact layouts of complicated circuits by passing signals over each other on different layers of the board. Other prototype board variants have generic layouts to simplify building prototypes with integrated circuits, typically in DIP shapes, or with transistors (pads forming triangles). In particular, some boards mimic the layout of breadboards, to simplify moving a non-permanent prototype on a breadboard to a permanent construction on a PCB. Some types of boards have patterns for connectors on the periphery, like DB9 or IDC headers, to allow connectors with non-standard pin spacings to be easily used. Some come in special physical shapes, to be used to prototype plug-in boards for computer bus systems. See also Point-to-point construction Breadboard Perfboard References Electronics substrates British inventions Brands that became generic Electronics prototyping de:Leiterplatte#Prototypen
Stripboard
[ "Engineering" ]
2,404
[ "Electronic engineering", "Electronics substrates" ]
4,228,914
https://en.wikipedia.org/wiki/Lydia%20Fairchild
Lydia Fairchild (born 1976) is an American woman who exhibits chimerism, having two distinct populations of DNA among the cells of her body. She was pregnant with her third child when she and the father of her children, Jamie Townsend, separated. When Fairchild applied for enforcement of child support in 2002, providing DNA evidence of Townsend's paternity was a routine requirement. While the results showed Townsend to certainly be the children's father, they seemed to rule out her being their mother. Fairchild stood accused of fraud, by either claiming benefits for other people's children or taking part in a surrogacy scam, and records of her prior births were similarly put in doubt. Prosecutors called for her two children to be taken away from her, believing them not to be hers. As the time came for her to give birth to her third child, the judge ordered that an observer be present at the birth, ensure that blood samples were immediately taken from both the child and Fairchild, and be available to testify. Two weeks later, DNA tests seemed to indicate that she was also not the mother of that child. A breakthrough came when her defense attorney, Alan Tindell, learned of Karen Keegan, a chimeric woman in Boston, from an article about her in the New England Journal of Medicine. Realizing that Fairchild's case might also be caused by chimerism, he suggested that possibility in her defense and introduced the article. As in Keegan's case, DNA samples were taken from members of the extended family. The DNA of Fairchild's children matched that of Fairchild's mother to the extent expected of a grandmother. They also found that, although the DNA in Fairchild's skin and hair did not match her children's, the DNA from a cervical smear test did match. Fairchild was carrying two different sets of DNA, the defining characteristic of chimerism. Other examples of chimerism Taylor Muhl Karen Keegan Foekje Dillema See also Mater semper certa est References Further reading ABC News: She's Her Own Twin Article on Fairchild Kids' DNA Tested, Parent Informed The DNA Is Not A Match Article on Fairchild's case The Stranger Within New Scientist Article on Karen Keegan's case Genetic Mosaics Discussion on Tetragametic Humans DNA Tests Shed Light on 'Hybrid Humans' NPR recording 20th-century American women 21st-century American women 1976 births Applied genetics Chimerism Living people 20th-century American people
Lydia Fairchild
[ "Biology" ]
499
[ "Chimerism", "Behavior", "Reproduction" ]
4,229,206
https://en.wikipedia.org/wiki/NGC%205102
NGC 5102, also known as Iota's Ghost, is a lenticular galaxy in the Centaurus A/M83 Group of galaxies. It was discovered by John Herschel in 1835. Distance measurements At least two techniques have been used to measure the distance to NGC 5102. The surface brightness fluctuations distance measurement technique estimates distances to spiral galaxies based on the graininess of the appearance of their bulges. The distance measured to NGC 5102 using this technique is 13.0 ± 0.8 Mly (4.0 ± 0.2 Mpc). However, NGC 5102 is close enough that the tip of the red giant branch (TRGB) method may be used to estimate its distance. The estimated distance to NGC 5102 using this technique is 11.1 ± 1.3 Mly (3.40 ± 0.39 Mpc). Averaged together, these distance measurements give a distance estimate of 12.1 ± 0.7 Mly (3.70 ± 0.23 Mpc). References External links Lenticular galaxies Unbarred lenticular galaxies Centaurus Centaurus A/M83 Group 5102 Astronomical objects discovered in 1835
NGC 5102
[ "Astronomy" ]
240
[ "Centaurus", "Constellations" ]
4,229,296
https://en.wikipedia.org/wiki/NGC%205164
NGC 5164 is a barred spiral galaxy in the constellation Ursa Major. It was discovered by William Herschel on April 14, 1789. References External links Barred spiral galaxies Ursa Major 5164 08458 047124
NGC 5164
[ "Astronomy" ]
48
[ "Ursa Major", "Constellations" ]
4,229,382
https://en.wikipedia.org/wiki/NGC%205253
NGC 5253 is an irregular galaxy in the constellation Centaurus. It was discovered by William Herschel on 15 March 1787. Properties NGC 5253 is located within the M83 Subgroup of the Centaurus A/M83 Group, a relatively nearby galaxy group that includes the radio galaxy Centaurus A and the spiral galaxy M83 (the Southern Pinwheel Galaxy). NGC 5253 is considered a dwarf starburst galaxy and also a blue compact galaxy. Supernovae Two supernovae have been observed in NGC 5253: SN 1895B (type unknown, mag. 8) was discovered by Williamina Fleming on 7 July 1895. SN 1972E (type Ia, mag. 8.5), the second-brightest recent supernova visible from Earth, was discovered by Charles Kowal on 6 May 1972. With a peak apparent magnitude of 8.5, the only brighter supernova observed in the 20th century was SN 1987A. Contents NGC 5253 contains a giant dust cloud hiding a cluster (believed to be a super star cluster) of more than one million stars, among them up to 7,000 O-type stars. The cluster is 3 million years old and has a total luminosity of more than one billion suns. It is the site of efficient star formation, with a rate at least 10 times higher than comparable regions in the Milky Way. Image gallery References External links 17870315 Discoveries by William Herschel Irregular galaxies Peculiar galaxies Centaurus A/M83 Group Centaurus 5253 048334 369 -05-32-060 445-004 13370-3123
NGC 5253
[ "Astronomy" ]
336
[ "Centaurus", "Constellations" ]
4,229,421
https://en.wikipedia.org/wiki/NGC%205408
NGC 5408 is an irregular galaxy in the constellation Centaurus. It was discovered by John Herschel on June 5, 1834. Galaxy group information NGC 5408 is located near the M83 Subgroup of the Centaurus A/M83 Group, a relatively nearby group of galaxies. However, it is unclear as to whether NGC 5408 is part of the group. References External links Irregular galaxies Dwarf irregular galaxies Dwarf barred irregular galaxies Centaurus 5408 50073 Virgo Supercluster
NGC 5408
[ "Astronomy" ]
102
[ "Galaxy stubs", "Centaurus", "Astronomy stubs", "Constellations" ]
4,229,687
https://en.wikipedia.org/wiki/Iron-56
Iron-56 (56Fe) is the most common isotope of iron. About 91.754% of all iron is iron-56. Of all nuclides, iron-56 has the lowest mass per nucleon. With 8.8 MeV binding energy per nucleon, iron-56 is one of the most tightly bound nuclei. The high nuclear binding energy for 56Fe represents the point where further nuclear reactions become energetically unfavorable. Because of this, it is among the heaviest elements formed in stellar nucleosynthesis reactions in massive stars. These reactions fuse lighter elements like magnesium, silicon, and sulfur to form heavier elements. Among the heavier elements formed is 56Ni, which subsequently decays to 56Co and then 56Fe. Relationship to nickel-62 Nickel-62, a relatively rare isotope of nickel, has a higher nuclear binding energy per nucleon; even so, it also has a higher mass per nucleon than iron-56, because nickel-62 has a greater proportion of neutrons, which are slightly more massive than protons. (See the nickel-62 article for more). Light elements undergoing nuclear fusion and heavy elements undergoing nuclear fission release energy as their nucleons bind more tightly, so 62Ni might be expected to be common. However, during stellar nucleosynthesis the competition between photodisintegration and alpha capturing causes more 56Ni to be produced than 62Ni (56Fe is produced later in the star's ejection shell as 56Ni decays). Although nickel-62 has a higher binding energy per nucleon, the conversion of 28 atoms of nickel-62 into 31 atoms of iron-56 releases energy. As the universe ages, matter will slowly convert to ever more tightly bound nuclei, approaching 56Fe, ultimately leading to the formation of iron stars over approximately 10^1500 years, assuming an expanding universe without proton decay. See also Isotopes of iron Iron star References Isotopes of iron
Iron-56
[ "Chemistry" ]
404
[ "Isotopes of iron", "Isotopes" ]
4,229,712
https://en.wikipedia.org/wiki/Statutory%20liquidity%20ratio
In India, the statutory liquidity ratio (SLR) is the government term for the reserve requirement that commercial banks are required to maintain in the form of cash, gold reserves, government bonds and other Reserve Bank of India (RBI)-approved securities before providing credit to customers. The SLR to be maintained by banks is determined by the RBI in order to control liquidity expansion. The SLR is determined as a percentage of total demand and time liabilities. Time liabilities refer to the liabilities which the commercial banks are liable to repay to the customers after an agreed period, and demand liabilities are customer deposits which are repayable on demand. An example of a time liability is a six-month fixed deposit which is not payable on demand but only after six months. An example of a demand liability is a deposit maintained in a savings account or current account that is payable on demand. The SLR is commonly used to control inflation and fuel growth, by decreasing or increasing the money supply. Indian banks' holdings of government securities are now close to the statutory minimum that banks are required to hold to comply with existing regulation. When measured in rupees, such holdings decreased for the first time in a little less than 40 years (since the nationalisation of banks in 1969) in 2005–06. The ratio was 18.00 percent as of June 2020. Usage The SLR is used by bankers and indicates the minimum percentage of deposits that the bank has to maintain in the form of gold, cash or other approved securities. Thus, it is the ratio of liquid assets (cash, gold and approved securities) to liabilities (deposits). It regulates credit growth in India. The liabilities that the banks are liable to pay within one month's time, due to completion of the maturity period, are also considered time liabilities. The maximum limit of the SLR is 40% and the minimum limit is 0%. In India, the Reserve Bank of India determines the percentage of the SLR. There are some statutory requirements for temporarily placing the money in government bonds. Following this requirement, the Reserve Bank of India fixes the level of the SLR. However, as most banks currently keep an SLR higher than required (>26%) due to a lack of credible lending options, near-term reductions are unlikely to increase liquidity and are more symbolic. The SLR is fixed for a number of reasons. The chief driving force is increasing or decreasing liquidity, which can result in a desired outcome. A few uses of mandating the SLR are: Controlling the expansion of bank credit. By changing the level of the SLR, the Reserve Bank of India can increase or decrease bank credit expansion. Ensuring the solvency of commercial banks. By reducing the level of the SLR, the RBI can increase liquidity with the commercial banks, resulting in increased investment. This is done to fuel growth and demand. Compelling the commercial banks to invest in government securities like government bonds. If any Indian bank fails to maintain the required level of the statutory liquidity ratio, it becomes liable to pay a penalty to the Reserve Bank of India. The defaulter bank pays penal interest at the rate of 3% per annum above the bank rate, on the shortfall amount for that particular day. However, according to the Circular released by the Department of Banking Operations and Development, Reserve Bank of India, if the defaulter bank continues to default on the next working day, the rate of penal interest can be increased to 5% per annum above the bank rate. 
This restriction is imposed by the RBI on banks to make funds available to customers on demand as soon as possible. Gold and government securities (or gilts) are included along with cash because they are highly liquid and safe assets. The RBI can increase the SLR to control inflation, absorb liquidity from the market, and tighten the measure to safeguard customers' money. A decrease in the SLR rate is made to encourage growth. In a growing economy, banks would like to invest in the stock market rather than in government securities or gold, as the latter would yield lower returns. One more reason is that long-term government securities (or any bonds) are sensitive to interest rate changes. However, in an emerging economy, interest rate change is a common activity. Value and formula The quantum is specified as some percentage of the total demand and time liabilities (i.e. the liabilities of the bank which are payable on demand at any time, and those liabilities which fall due in one month's time owing to maturity) of a bank. SLR rate = (liquid assets / (demand + time liabilities)) × 100% This percentage is fixed by the Reserve Bank of India. The maximum limit for the SLR was 40% in India. Following the amendment of the Banking Regulation Act (1949) in January 2017, the floor rate of 20.75% for the SLR was removed. From April 11, 2020, the rate of SLR is 18.00%. See also Bank rate Basel Accords Capital adequacy Cash reserve ratio References Further reading SLR Historical Chart Banking Monetary policy Financial ratios Capital requirement
Statutory liquidity ratio
[ "Mathematics" ]
1,030
[ "Financial ratios", "Quantity", "Metrics" ]
4,229,946
https://en.wikipedia.org/wiki/Soil%20contamination
Soil contamination, soil pollution, or land pollution as a part of land degradation is caused by the presence of xenobiotic (human-made) chemicals or other alteration in the natural soil environment. It is typically caused by industrial activity, agricultural chemicals or improper disposal of waste. The most common chemicals involved are petroleum hydrocarbons, polynuclear aromatic hydrocarbons (such as naphthalene and benzo(a)pyrene), solvents, pesticides, lead, and other heavy metals. Contamination is correlated with the degree of industrialization and the intensity of chemical use. The concern over soil contamination stems primarily from health risks, from direct contact with the contaminated soil, vapour from the contaminants, or from secondary contamination of water supplies within and underlying the soil. Mapping of contaminated soil sites and the resulting clean-ups are time-consuming and expensive tasks, and require expertise in geology, hydrology, chemistry, computer modelling, and GIS in Environmental Contamination, as well as an appreciation of the history of industrial chemistry. In North America and South-Western Europe the extent of contaminated land is best known, as many of the countries in these areas have a legal framework to identify and deal with this environmental problem. Developing countries tend to be less tightly regulated despite some of them having undergone significant industrialization. Causes Soil pollution can be caused by the following (non-exhaustive list): Microplastics Oil spills Mining and activities by other heavy industries Accidental spills during industrial and other activities Corrosion of underground storage tanks (including piping used to transmit the contents) Acid rain Intensive farming Agrochemicals, such as pesticides, herbicides and fertilizers Petrochemicals Industrial accidents Road debris Construction activities Exterior lead-based paints Drainage of contaminated surface water into the soil Ammunitions, chemical agents, and other agents of war Waste disposal Oil and fuel dumping Nuclear wastes Direct discharge of industrial wastes to the soil Discharge of sewage Landfill and illegal dumping Coal ash Electronic waste Contamination from rocks containing large amounts of toxic elements Contamination by Pb from vehicle exhaust, and by Cd and Zn from tire wear Contamination by air pollutants from the incineration of fossil raw materials The most common chemicals involved are petroleum hydrocarbons, solvents, pesticides, lead, and other heavy metals. Any activity that leads to other forms of soil degradation (erosion, compaction, etc.) may indirectly worsen the contamination effects in that soil remediation becomes more tedious. Historical deposition of coal ash used for residential, commercial, and industrial heating, as well as for industrial processes such as ore smelting, was a common source of contamination in areas that were industrialized before about 1960. Coal naturally concentrates lead and zinc during its formation, as well as other heavy metals to a lesser degree. When the coal is burned, most of these metals become concentrated in the ash (the principal exception being mercury). Coal ash and slag may contain sufficient lead to qualify as a "characteristic hazardous waste", defined in the US as containing more than 5 mg/L of extractable lead using the TCLP procedure. 
In addition to lead, coal ash typically contains variable but significant concentrations of polynuclear aromatic hydrocarbons (PAHs; e.g., benzo(a)anthracene, benzo(b)fluoranthene, benzo(k)fluoranthene, benzo(a)pyrene, indeno(cd)pyrene, phenanthrene, anthracene, and others). These PAHs are known human carcinogens and the acceptable concentrations of them in soil are typically around 1 mg/kg. Coal ash and slag can be recognised by the presence of off-white grains in soil, gray heterogeneous soil, or (coal slag) bubbly, vesicular pebble-sized grains. Treated sewage sludge, known in the industry as biosolids, has become controversial as a "fertilizer". As it is the byproduct of sewage treatment, it generally contains more contaminants such as organisms, pesticides, and heavy metals than other soil. In the European Union, the Urban Waste Water Treatment Directive allows sewage sludge to be sprayed onto land. The volume is expected to double to 185,000 tons of dry solids in 2005. This has good agricultural properties due to the high nitrogen and phosphate content. In 1990/1991, 13% wet weight was sprayed onto 0.13% of the land; however, this is expected to rise 15 fold by 2005. Advocates say there is a need to control this so that pathogenic microorganisms do not get into water courses and to ensure that there is no accumulation of heavy metals in the top soil. Pesticides and herbicides A pesticide is a substance used to kill a pest. A pesticide may be a chemical substance, biological agent (such as a virus or bacteria), antimicrobial, disinfectant or device used against any pest. Pests include insects, plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms) and microbes that compete with humans for food, destroy property, spread or are a vector for disease or cause a nuisance. Although there are benefits to the use of pesticides, there are also drawbacks, such as potential toxicity to humans and other organisms. Herbicides are used to kill weeds, especially on pavements and railways. They are similar to auxins and most are biodegradable by soil bacteria. However, one group derived from trinitrotoluene (2:4 D and 2:4:5 T) have the impurity dioxin, which is very toxic and causes fatality even in low concentrations. Another herbicide is Paraquat. It is highly toxic but it rapidly degrades in soil due to the action of bacteria and does not kill soil fauna. Insecticides are used to rid farms of pests which damage crops. The insects damage not only standing crops but also stored ones and in the tropics it is reckoned that one third of the total production is lost during food storage. As with fungicides, the first insecticides used in the nineteenth century were inorganic e.g. Paris Green and other compounds of arsenic. Nicotine has also been used since 1690. There are now two main groups of synthetic insecticides: 1. Organochlorines include DDT, Aldrin, Dieldrin and BHC. They are cheap to produce, potent and persistent. DDT was used on a massive scale from the 1930s, with a peak of 72,000 tonnes used 1970. Then usage fell as the harmful environmental effects were realized. It was found worldwide in fish and birds and was even discovered in the snow in the Antarctic. It is only slightly soluble in water but is very soluble in the bloodstream. It affects the nervous and endocrine systems and causes the eggshells of birds to lack calcium causing them to be easily breakable. 
It is thought to be responsible for the decline of the numbers of birds of prey like ospreys and peregrine falcons in the 1950s – they are now recovering. As well as increased concentration via the food chain, it is known to enter via permeable membranes, so fish get it through their gills. As it has low water solubility, it tends to stay at the water surface, so organisms that live there are most affected. DDT found in fish that formed part of the human food chain caused concern, but the levels found in the liver, kidney and brain tissues was less than 1 ppm and in fat was 10 ppm, which was below the level likely to cause harm. However, DDT was banned in the UK and the United States to stop the further buildup of it in the food chain. U.S. manufacturers continued to sell DDT to developing countries, who could not afford the expensive replacement chemicals and who did not have such stringent regulations governing the use of pesticides. 2. Organophosphates, e.g. parathion, methyl parathion and about 40 other insecticides are available nationally. Parathion is highly toxic, methyl-parathion is less so and Malathion is generally considered safe as it has low toxicity and is rapidly broken down in the mammalian liver. This group works by preventing normal nerve transmission as cholinesterase is prevented from breaking down the transmitter substance acetylcholine, resulting in uncontrolled muscle movements. Agents of war The disposal of munitions, and a lack of care in manufacture of munitions caused by the urgency of production, can contaminate soil for extended periods. There is little published evidence on this type of contamination largely because of restrictions placed by governments of many countries on the publication of material related to war effort. However, mustard gas stored during World War II has contaminated some sites for up to 50 years and the testing of Anthrax as a potential biological weapon contaminated the whole island of Gruinard. Human health Exposure pathways Contaminated or polluted soil directly affects human health through direct contact with soil or via inhalation of soil contaminants that have vaporized; potentially greater threats are posed by the infiltration of soil contamination into groundwater aquifers used for human consumption, sometimes in areas apparently far removed from any apparent source of above-ground contamination. Toxic metals can also make their way up the food chain through plants that reside in soils containing high concentrations of heavy metals. This tends to result in the development of pollution-related diseases. Most exposure is accidental, and exposure can happen through: Ingesting dust or soil directly Ingesting food or vegetables grown in contaminated soil or with foods in contact with contaminants Skin contact with dust or soil Vapors from the soil Inhaling clouds of dust while working in soils or windy environments However, some studies estimate that 90% of exposure is through eating contaminated food. Consequences Health consequences from exposure to soil contamination vary greatly depending on pollutant type, the pathway of attack, and the vulnerability of the exposed population. Researchers suggest that pesticides and heavy metals in soil may harm cardiovascular health, including inflammation and change in the body's internal clock. 
Chronic exposure to chromium, lead, and other metals, petroleum, solvents, and many pesticide and herbicide formulations can be carcinogenic, can cause congenital disorders, or can cause other chronic health conditions. Industrial or human-made concentrations of naturally occurring substances, such as nitrate and ammonia associated with livestock manure from agricultural operations, have also been identified as health hazards in soil and groundwater. Chronic exposure to benzene at sufficient concentrations is known to be associated with a higher incidence of leukemia. Mercury and cyclodienes are known to induce higher incidences of kidney damage and some irreversible diseases. PCBs and cyclodienes are linked to liver toxicity. Organophosphates and carbonates can cause a chain of responses leading to neuromuscular blockage. Many chlorinated solvents induce liver changes, kidney changes, and depression of the central nervous system. There is an entire spectrum of further health effects such as headache, nausea, fatigue, eye irritation and skin rash for the above cited and other chemicals. At sufficient dosages a large number of soil contaminants can cause death by exposure via direct contact, inhalation or ingestion of contaminants in groundwater contaminated through soil. The Scottish Government has commissioned the Institute of Occupational Medicine to undertake a review of methods to assess risk to human health from contaminated land. The overall aim of the project is to work up guidance that should be useful to Scottish Local Authorities in assessing whether sites represent a significant possibility of significant harm (SPOSH) to human health. It is envisaged that the output of the project will be a short document providing high level guidance on health risk assessment with reference to existing published guidance and methodologies that have been identified as being particularly relevant and helpful. The project will examine how policy guidelines have been developed for determining the acceptability of risks to human health and propose an approach for assessing what constitutes unacceptable risk in line with the criteria for SPOSH as defined in the legislation and the Scottish Statutory Guidance. Ecosystem effects Not unexpectedly, soil contaminants can have significant deleterious consequences for ecosystems. There are radical soil chemistry changes which can arise from the presence of many hazardous chemicals even at low concentration of the contaminant species. These changes can manifest in the alteration of metabolism of endemic microorganisms and arthropods resident in a given soil environment. The result can be virtual eradication of some of the primary food chain, which in turn could have major consequences for predator or consumer species. Even if the chemical effect on lower life forms is small, the lower pyramid levels of the food chain may ingest alien chemicals, which normally become more concentrated for each consuming rung of the food chain. Many of these effects are now well known, such as the concentration of persistent DDT materials for avian consumers, leading to weakening of egg shells, increased chick mortality and potential extinction of species. Effects occur to agricultural lands which have certain types of soil contamination. Contaminants typically alter plant metabolism, often causing a reduction in crop yields. This has a secondary effect upon soil conservation, since the languishing crops cannot shield the Earth's soil from erosion. 
Some of these chemical contaminants have long half-lives and in other cases derivative chemicals are formed from decay of primary soil contaminants. Potential effects of contaminants to soil functions Heavy metals and other soil contaminants can adversely affect the activity, species composition and abundance of soil microorganisms, thereby threatening soil functions such as biochemical cycling of carbon and nitrogen. However, soil contaminants can also become less bioavailable by time, and microorganisms and ecosystems can adapt to altered conditions. Soil properties such as pH, organic matter content and texture are very important and modify mobility, bioavailability and toxicity of pollutants in contaminated soils. The same amount of contaminant can be toxic in one soil but totally harmless in another soil. This stresses the need for soil-specific risks assessment and measures. Cleanup options Cleanup or environmental remediation is analyzed by environmental scientists who utilize field measurement of soil chemicals and also apply computer models (GIS in Environmental Contamination) for analyzing transport and fate of soil chemicals. Various technologies have been developed for remediation of oil-contaminated soil and sediments There are several principal strategies for remediation: Excavate soil and take it to a disposal site away from ready pathways for human or sensitive ecosystem contact. This technique also applies to dredging of bay muds containing toxins. Aeration of soils at the contaminated site (with attendant risk of creating air pollution) Thermal remediation by introduction of heat to raise subsurface temperatures sufficiently high to volatilize chemical contaminants out of the soil for vapor extraction. Technologies include ISTD, electrical resistance heating (ERH), and ET-DSP. Bioremediation, involving microbial digestion of certain organic chemicals. Techniques used in bioremediation include landfarming, biostimulation and bioaugmentating soil biota with commercially available microflora. Extraction of groundwater or soil vapor with an active electromechanical system, with subsequent stripping of the contaminants from the extract. Containment of the soil contaminants (such as by capping or paving over in place). Phytoremediation, or using plants (such as willow) to extract heavy metals. Mycoremediation, or using fungus to metabolize contaminants and accumulate heavy metals. Remediation of oil contaminated sediments with self-collapsing air microbubbles. Surfactant leaching Interfacial solar evaporation to extract heavy metal ions from moist soil By country Various national standards for concentrations of particular contaminants include the United States EPA Region 9 Preliminary Remediation Goals (U.S. PRGs), the U.S. EPA Region 3 Risk Based Concentrations (U.S. EPA RBCs) and National Environment Protection Council of Australia Guideline on Investigation Levels in Soil and Groundwater. People's Republic of China The immense and sustained growth of the People's Republic of China since the 1970s has exacted a price from the land in increased soil pollution. The Ministry of Ecology and Environment believes it to be a threat to the environment, to food safety and to sustainable agriculture. 
According to a scientific sampling, 150 million mu (100,000 square kilometres) of China's cultivated land have been polluted, with contaminated water being used to irrigate a further 32.5 million mu (21,670 square kilometres) and another 2 million mu (1,300 square kilometres) covered or destroyed by solid waste. In total, the area accounts for one-tenth of China's cultivatable land, and is mostly in economically developed areas. An estimated 12 million tonnes of grain are contaminated by heavy metals every year, causing direct losses of 20 billion yuan ($2.57 billion USD). A recent survey shows that 19% of agricultural soils are contaminated with heavy metals and metalloids, and that the concentrations of these heavy metals in the soil have increased dramatically. European Union According to data received from Member States, the number of estimated potentially contaminated sites in the European Union is more than 2.5 million, and around 342 thousand contaminated sites have been identified. Municipal and industrial wastes contribute most to soil contamination (38%), followed by the industrial/commercial sector (34%). Mineral oil and heavy metals are the main contaminants, contributing around 60% to soil contamination. In terms of budget, the management of contaminated sites is estimated to cost around 6 billion Euros (€) annually. United Kingdom Generic guidance commonly used in the United Kingdom includes the Soil Guideline Values published by the Department for Environment, Food and Rural Affairs (DEFRA) and the Environment Agency. These are screening values that indicate the minimal acceptable level of a substance; above this level there can be no assurance that significant risk of harm to human health is absent. These have been derived using the Contaminated Land Exposure Assessment Model (CLEA UK). Certain input parameters such as Health Criteria Values, age and land use are fed into CLEA UK to obtain a probabilistic output. Guidance by the Inter Departmental Committee for the Redevelopment of Contaminated Land (ICRCL) has been formally withdrawn by DEFRA, for use as a prescriptive document to determine the potential need for remediation or further assessment. The CLEA model published by DEFRA and the Environment Agency (EA) in March 2002 sets a framework for the appropriate assessment of risks to human health from contaminated land, as required by Part IIA of the Environmental Protection Act 1990. As part of this framework, generic Soil Guideline Values (SGVs) have currently been derived for ten contaminants to be used as "intervention values". These values should not be considered as remedial targets but values above which further detailed assessment should be considered; see Dutch standards. Three sets of CLEA SGVs have been produced for three different land uses, namely residential (with and without plant uptake), allotments, and commercial/industrial. It is intended that the SGVs replace the former ICRCL values. The CLEA SGVs relate to assessing chronic (long term) risks to human health and do not apply to the protection of ground workers during construction, or other potential receptors such as groundwater, buildings, plants or other ecosystems. The CLEA SGVs are not directly applicable to a site completely covered in hardstanding, as there is no direct exposure route to contaminated soils. To date, the first ten of fifty-five contaminant SGVs have been published, for the following: arsenic, cadmium, chromium, lead, inorganic mercury, nickel, selenium, ethyl benzene, phenol and toluene. 
Draft SGVs for benzene, naphthalene and xylene have been produced but their publication is on hold. Toxicological data (Tox) has been published for each of these contaminants as well as for benzo[a]pyrene, benzene, dioxins, furans and dioxin-like PCBs, naphthalene, vinyl chloride, 1,1,2,2 tetrachloroethane and 1,1,1,2 tetrachloroethane, 1,1,1 trichloroethane, tetrachloroethene, carbon tetrachloride, 1,2-dichloroethane, trichloroethene and xylene. The SGVs for ethyl benzene, phenol and toluene are dependent on the soil organic matter (SOM) content (which can be calculated from the total organic carbon (TOC) content). As an initial screen, the SGVs for 1% SOM are considered to be appropriate. Canada As of February 2021, there are a total of more than 2,500 contaminated sites in Canada. One infamous contaminated site is located near a nickel-copper smelting site in Sudbury, Ontario. A study investigating heavy metal pollution in the vicinity of the smelter revealed elevated levels of nickel and copper in the soil, with values as high as 5,104 ppm Ni and 2,892 ppm Cu within a 1.1 km range of the smelter location. Other metals were also found in the soil; such metals include iron, cobalt, and silver. Furthermore, examination of the vegetation surrounding the smelter showed that it too had been affected: the plants contained nickel, copper and aluminium as a result of soil contamination. India In March 2009, the issue of uranium poisoning in Punjab attracted press coverage. It was alleged to be caused by fly ash ponds of thermal power stations, which reportedly led to severe birth defects in children in the Faridkot and Bhatinda districts of Punjab. The news reports claimed the uranium levels were more than 60 times the maximum safe limit. In 2012, the Government of India confirmed that the groundwater in the Malwa belt of Punjab has uranium metal that is 50% above the trace limits set by the United Nations' World Health Organization (WHO). Scientific studies, based on over 1000 samples from various sampling points, could not trace the source to fly ash or to any sources from thermal power plants or industry, as originally alleged. The study also revealed that the uranium concentration in the groundwater of the Malwa district is not 60 times the WHO limits, but only 50% above the WHO limit in 3 locations. The highest concentration found in the samples was less than those found naturally in groundwaters currently used for human purposes elsewhere, such as in Finland. Research is underway to identify natural or other sources for the uranium. See also Contamination control Dutch pollutant standards Environmental policy in China#Soil pollution GIS in environmental contamination Groundwater pollution Habitat destruction Index of waste management articles Land degradation Landfill List of solid waste treatment technologies List of waste management companies Litter Pesticide drift Plasticulture Plastic-eating organisms Remediation of contaminated sites with cement Triangle of death (Italy) Water pollution References Further reading External links Portal for soil and water management in Europe Independent information gateway originally funded by the European Commission for topics related to soil and water, including contaminated land, soil and water management. 
European Soil Portal: Soil Contamination At EU-level, the issue of contaminated sites (local contamination) and contaminated land (diffuse contamination) has been considered by: European Soil Data Centre (ESDAC). Article on soil contamination in China Arsenic in groundwater Book on arsenic in groundwater by IAH's Netherlands Chapter and the Netherlands Hydrological Society Environmental chemistry Environmental issues with soil Pollution Soil chemistry
Soil contamination
[ "Chemistry", "Environmental_science" ]
4,932
[ "Environmental chemistry", "Soil chemistry", "Soil contamination", "nan", "Environmental soil science", "Environmental issues with soil" ]
4,230,269
https://en.wikipedia.org/wiki/HD%2073526
HD 73526 is a star in the southern constellation of Vela. With an apparent visual magnitude of +8.99, it is much too faint to be viewed with the naked eye. The star is located at a distance of approximately from the Sun based on parallax, and is drifting further away with a radial velocity of +26 km/s. It is a member of the thin disk population. The stellar classification of HD 73526 is G6 V, indicating this is a G-type main-sequence star that, like the Sun, is generating energy through core hydrogen fusion. Based on its properties, it may be starting to evolve off the main sequence. This star has slightly more mass than the Sun and a 53% greater radius. The abundance of iron in its atmosphere suggests the star's metallicity – what astronomers term the abundance of elements with higher atomic number than helium – is 70% greater than in the Sun. It is a much older star with an estimated age of nearly ten billion years, and is spinning slowly with a projected rotational velocity of 1.7 km/s. The star is radiating more than double the luminosity of the Sun from its photosphere at an effective temperature of 5,564 K. Planetary system On June 13, 2002, a 2.1 MJ planet, HD 73526 b, was announced orbiting HD 73526 in an orbit just a little smaller than that of Venus around the Sun. This planet receives an insolation 3.65 times that of Earth, or 1.89 times that of Venus. This was a single-planet system until 2006, when a 2.3 MJ second planet, HD 73526 c, was discovered. This planet forms a 2:1 orbital resonance with planet b. In fact, the two planets seem to be in a very deep resonance with very-long-timescale stability, owing to an apsidal corotation resonance (ACR) that they appear to satisfy. Although these are minimum masses, as the inclinations of these planets are unknown, orbital stability analysis indicates that the orbital inclinations of both planets are likely to be near 90°, making the minimum masses very close to the true masses of the planets. See also List of extrasolar planets Gliese 876 References External links Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona G-type main-sequence stars Planetary systems with two confirmed planets Vela (constellation) Durchmusterung objects 073526 042282
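The quoted insolation ratio can be reproduced, at least approximately, from the inverse-square law: the flux a planet receives relative to Earth is (L/L☉)/(a/AU)². The sketch below is illustrative only; the luminosity and semi-major axis values it uses are assumed round numbers chosen to land near the article's 3.65 figure, not data taken from the article itself.

```python
# Illustrative sketch: stellar flux received by a planet, relative to Earth,
# assuming the inverse-square law: S / S_Earth = (L / L_Sun) / (a / 1 AU)^2.

def insolation_ratio(luminosity_solar: float, semimajor_axis_au: float) -> float:
    """Insolation relative to Earth for a stellar luminosity (in L_Sun)
    and an orbital semi-major axis (in AU)."""
    return luminosity_solar / semimajor_axis_au ** 2

if __name__ == "__main__":
    # Assumed, illustrative values (not quoted from the article):
    L_star = 1.5       # stellar luminosity in solar units
    a_b = 0.64         # semi-major axis of planet b, in AU
    a_venus = 0.723    # Venus, for comparison

    s_b = insolation_ratio(L_star, a_b)
    s_venus = insolation_ratio(1.0, a_venus)
    print(f"Planet b insolation: {s_b:.2f} x Earth")            # ~3.7
    print(f"Relative to Venus:   {s_b / s_venus:.2f} x Venus")  # ~1.9
```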
HD 73526
[ "Astronomy" ]
513
[ "Vela (constellation)", "Constellations" ]
4,230,480
https://en.wikipedia.org/wiki/Wideband%20materials
Wideband material refers to material that can convey microwave signals (light/sound) over a wide range of wavelengths. These materials possess exemplary attenuation and dielectric constants, and are excellent dielectrics for semiconductor gates. Examples of such materials include gallium nitride (GaN) and silicon carbide (SiC). SiC has been used extensively in the creation of lasers for several years. However, it performs poorly (providing limited brightness) because it has an indirect band gap. GaN has a wide band gap (~3.4 eV), which usually results in high energies for structures with electrons in the conduction band. References External links UCSB.edu – Wideband Gap Semiconductors Materials science
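Since a band gap sets the minimum photon energy for a band-to-band transition, the corresponding wavelength follows from λ = hc/E. The short sketch below is a rough illustration of that relationship only; the band-gap values it uses are typical textbook figures assumed for the example, not data from this article.

```python
# Rough sketch: convert a semiconductor band gap (eV) to the photon
# wavelength (nm) of a band-to-band transition, via lambda = h*c / E.

H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def gap_to_wavelength_nm(band_gap_ev: float) -> float:
    return H_C_EV_NM / band_gap_ev

# Illustrative band gaps (typical literature values, assumed here):
for name, eg in [("GaN", 3.4), ("SiC (4H)", 3.26), ("Si", 1.12)]:
    print(f"{name}: Eg = {eg} eV -> ~{gap_to_wavelength_nm(eg):.0f} nm")
# GaN's ~3.4 eV gap corresponds to ~365 nm, i.e. near-ultraviolet emission.
```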
Wideband materials
[ "Physics", "Materials_science", "Engineering" ]
150
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "Condensed matter physics", "nan", "Condensed matter stubs" ]
4,230,598
https://en.wikipedia.org/wiki/Optical%20flat
An optical flat is an optical-grade piece of glass lapped and polished to be extremely flat on one or both sides, usually within a few tens of nanometres (billionths of a metre). They are used with a monochromatic light to determine the flatness (surface accuracy) of other surfaces (whether optical, metallic, ceramic, or otherwise), by means of wave interference. When an optical flat is placed on another surface and illuminated, the light waves reflect off both the bottom surface of the flat and the surface it is resting on. This causes a phenomenon similar to thin-film interference. The reflected waves interfere, creating a pattern of interference fringes visible as light and dark bands. The spacing between the fringes is smaller where the gap is changing more rapidly, indicating a departure from flatness in one of the two surfaces. This is comparable to the contour lines one would find on a map. A flat surface is indicated by a pattern of straight, parallel fringes with equal spacing, while other patterns indicate uneven surfaces. Two adjacent fringes indicate a difference in elevation of one-half wavelength of the light used, so by counting the fringes, differences in elevation of the surface can be measured to better than one micrometre. Usually only one of the two surfaces of an optical flat is made flat to the specified tolerance, and this surface is indicated by an arrow on the edge of the glass. Optical flats are sometimes given an optical coating and used as precision mirrors or optical windows for special purposes, such as in a Fabry–Pérot interferometer or laser cavity. Optical flats have uses in spectrophotometry as well. Flatness testing An optical flat is usually placed upon a flat surface to be tested. If the surface is clean and reflective enough, rainbow colored bands of interference fringes will form when the test piece is illuminated with white light. However, if a monochromatic light is used to illuminate the work piece, such as helium, low-pressure sodium, or a laser, then a series of dark and light interference fringes will form. These interference fringes determine the flatness of the work piece, relative to the optical flat, to within a fraction of the wavelength of the light. If both surfaces are perfectly the same flatness and parallel to each other, no interference fringes will form. However, there is usually some air trapped between the surfaces. If the surfaces are flat, but a tiny optical wedge of air exists between them, then straight, parallel interference fringes will form, indicating the angle of the wedge (i.e.: more, thinner fringes indicate a steeper wedge while fewer but wider fringes indicate less of a wedge). The shape of the fringes also indicate the shape of the test surface, because fringes with a bend, a contour, or rings indicate high and low points on the surface, such as rounded edges, hills or valleys, or convex and concave surfaces. Preparation Both the optical flat and the surface to be tested need to be extremely clean. The tiniest bit of dust settling between the surfaces can ruin the results. Even the thickness of a streak or a fingerprint on the surfaces can be enough to change the width of the gap between them. Before the test, the surfaces are usually cleaned very thoroughly. Most commonly, acetone is used as the cleaning agent, because it dissolves most oils and it evaporates completely, leaving no residue. 
Typically, the surface will be cleaned using the "drag" method, in which a lint-free, scratch-free tissue is wetted, stretched, and dragged across the surface, pulling any impurities along with it. This process is usually performed dozens of times, ensuring that the surface is completely free of impurities. A new tissue will need to be used each time, to prevent recontamination of the surfaces from previously removed dust and oils. Testing is often done in a clean-room or another dust-free environment, keeping the dust from settling on the surfaces between cleaning and assembly. Sometimes, the surfaces may be assembled by sliding them together, helping to scrape off any dust that might happen to land on the flat. The testing is usually done in a temperature-controlled environment to prevent any distortions in the glass, and needs to be performed on a very stable work-surface. After testing, the flats are usually cleaned again and stored in a protective case, and are often kept in a temperature-controlled environment until used again. Lighting For the best test-results, a monochromatic light, consisting of only a single wavelength, is used to illuminate the flats. To show the fringes properly, several factors need to be taken into account when setting up the light source, such as the angle of incidence between the light and the observer, the angular size of the light source in relation to the pupil of the eye, and the homogeneity of the light source when reflected off of the glass. Many sources for monochromatic light can be used. Most lasers emit light of a very narrow bandwidth, and often provide a suitable light source. A helium–neon laser emits light at 632 nanometres (red), while a frequency doubled Nd:YAG laser emits light at 532 nm (green). Various laser diodes and diode-pumped solid-state lasers emit light in red, yellow, green, blue or violet. Dye lasers can be tuned to emit nearly any color. However, lasers also experience a phenomenon called laser speckle, which shows up in the fringes. Several gas or metal-vapor lamps can also be used. When operated at low pressure and current, these lamps generally produce light in various spectral lines, with one or two lines being most predominant. Because these lines are very narrow, the lamps can be combined with narrow-bandwidth filters to isolate the strongest line. A helium-discharge lamp will produce a line at 587.6 nm (yellow), while a mercury-vapor lamp produces a line at 546.1 (yellowish green). Cadmium vapor produces a line at 643.8 nm (red), but low pressure sodium produces a line at 589.3 nm (yellow). Of all the lights, low pressure sodium is the only one that produces a single line, requiring no filter. The fringes only appear in the reflection of the light source, so the optical flat must be viewed from the exact angle of incidence that the light shines upon it. If viewed from a zero degree angle (from directly above), the light must also be at a zero degree angle. As the viewing angle changes, the lighting angle must also change. The light must be positioned so that its reflection can be seen covering the entire surface. Also, the angular size of the light source needs to be many times greater than the eye. For example, if an incandescent light is used, the fringes may only show up in the reflection of the filament. By moving the lamp much closer to the flat, the angular size becomes larger and the filament may appear to cover the entire flat, giving clearer readings. 
Sometimes, a diffuser may be used, such as the powder coating inside frosted bulbs, to provide a homogeneous reflection off the glass. Typically, the measurements will be more accurate when the light source is as close to the flat as possible, but the eye is as far away as possible. How interference fringes form The diagram shows an optical flat resting on a surface to be tested. Unless the two surfaces are perfectly flat, there will be a small gap between them (shown), which will vary with the contour of the surface. Monochromatic light (red) shines through the glass flat and reflects from both the bottom surface of the optical flat and the top surface of the test piece, and the two reflected rays combine and superpose. However, the ray reflecting off the bottom surface travels a longer path. The additional path length is equal to twice the gap between the surfaces. In addition, the ray reflecting off the bottom surface undergoes a 180° phase reversal, while the internal reflection of the other ray from the underside of the optical flat causes no phase reversal. The brightness of the reflected light depends on the difference in the path length of the two rays: where the two reflected waves arrive in phase they reinforce and produce a bright band (constructive interference), and where they arrive half a wavelength out of phase they cancel and produce a dark band (destructive interference). If the gap between the surfaces is not constant, this interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing the height differences of the bottom test surface. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so the difference in height between two fringes is half that, or 350 nm, about 1/100 the diameter of a human hair. Mathematical derivation The variation in brightness of the reflected light as a function of gap width can be found by deriving the formula for the sum of the two reflected waves. Assume that the z-axis is oriented in the direction of the reflected rays. Assume for simplicity that the amplitude A of the two reflected light rays is the same (this is almost never true, but the result of differences in intensity is just a smaller contrast between light and dark fringes). The equation for the electric field of the sinusoidal light ray reflected from the top surface traveling along the z-axis is E1 = A cos(2πz/λ − ωt), where A is the peak amplitude, λ is the wavelength, and ω is the angular frequency of the wave. The ray reflected from the bottom surface will be delayed by the additional path length and the 180° phase reversal at the reflection, causing a phase shift with respect to the top ray: E2 = A cos(2πz/λ − ωt + φ), where φ is the phase difference between the waves in radians. The two waves will superpose and add: the sum of the electric fields of the two waves is E1 + E2 = A cos(2πz/λ − ωt) + A cos(2πz/λ − ωt + φ). Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written E1 + E2 = 2A cos(φ/2) cos(2πz/λ − ωt + φ/2). This represents a wave at the original wavelength whose amplitude is proportional to the cosine of φ/2, so the brightness of the reflected light is an oscillating, sinusoidal function of the gap width d. 
The phase difference φ is equal to the sum of the phase shift due to the path length difference 2d and the additional 180° phase shift at the reflection, φ = (2π/λ)(2d) + π, so the electric field of the resulting wave will be E1 + E2 = 2A cos(2πd/λ + π/2) cos(2πz/λ − ωt + 2πd/λ + π/2). This represents an oscillating wave whose magnitude varies sinusoidally between 2A and zero as the gap width d increases. Constructive interference: The brightness will be maximum where cos(2πd/λ + π/2) = ±1, which occurs when 2d = (N + 1/2)λ, for N = 0, 1, 2, ... Destructive interference: The brightness will be zero (or in the more general case minimum) where cos(2πd/λ + π/2) = 0, which occurs when 2d = Nλ, for N = 0, 1, 2, ... Thus the bright and dark fringes alternate, with the separation between two adjacent bright or dark fringes representing a change in the gap length of one half wavelength (λ/2). Precision and errors Counterintuitively, the fringes do not exist within the gap or the flat itself. The interference fringes actually form when the light waves all converge at the eye or camera, forming the image. Because the image is the compilation of all converging wavefronts interfering with each other, the flatness of the test piece can only be measured relative to the flatness of the optical flat. Any deviations on the flat will be added to the deviations on the test surface. Therefore, a surface polished to a flatness of λ/4 cannot be effectively tested with a λ/4 flat, as it is not possible to determine where the errors lie, but its contours can be revealed by testing with more accurate surfaces like a λ/20 or λ/50 optical flat. This also means that both the lighting and viewing angle have an effect on the accuracy of the results. When lighted or viewed at an angle, the distance that the light must travel across the gap is longer than when viewed and illuminated straight on. Thus, as the angle of incidence becomes steeper, the fringes will also appear to move and change. A zero degree angle of incidence is usually the most desirable angle, both for lighting and viewing. Unfortunately, this is usually impossible to achieve with the naked eye. Many interferometers use beamsplitters to obtain such an angle. Because the results are relative to the wavelength of the light, accuracy can also be increased by using light of shorter wavelengths, although the 632 nm line from a helium–neon laser is often used as the standard. No surface is ever completely flat. Therefore, any errors or irregularities that exist on the optical flat will affect the results of the test. Optical flats are extremely sensitive to temperature changes, which can cause temporary surface deviations resulting from uneven thermal expansion. The glass often experiences poor thermal conduction, taking a long time to reach thermal equilibrium. Merely handling the flats can transfer enough heat to offset the results, so glasses such as fused silica or borosilicate are used, which have very low coefficients of thermal expansion. The glass needs to be hard and very stable, and is usually very thick to prevent flexing. When measuring on the nanometre scale, the slightest bit of pressure can cause the glass to flex enough to distort the results. Therefore, a very flat and stable work-surface is also needed, on which the test can be performed, preventing both the flat and the test-piece from sagging under their combined weight. Often, a precision-ground surface plate is used as a work surface, providing a steady table-top for testing upon. To provide an even flatter surface, sometimes the test may be performed on top of another optical flat, with the test surface sandwiched in the middle. 
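As a quick numerical restatement of the relationships derived above, the sketch below evaluates the reflected intensity as a function of gap width and converts a counted number of fringes into a height difference. The helium–neon wavelength is used purely as an example.

```python
import math

# Reflected intensity vs. gap width d, from the derivation above:
# amplitude ∝ cos(2*pi*d/lambda + pi/2), intensity ∝ amplitude^2.
def relative_intensity(gap_nm: float, wavelength_nm: float = 632.8) -> float:
    return math.cos(2 * math.pi * gap_nm / wavelength_nm + math.pi / 2) ** 2

# Height change corresponding to N fringe spacings
# (each fringe corresponds to lambda/2 of gap change).
def height_from_fringes(n_fringes: float, wavelength_nm: float = 632.8) -> float:
    return n_fringes * wavelength_nm / 2.0

print(relative_intensity(0.0))        # 0.0 -> dark fringe at contact (d = 0)
print(relative_intensity(632.8 / 4))  # 1.0 -> bright fringe at d = lambda/4
print(height_from_fringes(3))         # 949.2 nm of height change across 3 fringes
```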
Absolute flatness Absolute flatness is the flatness of an object when measured against an absolute scale, in which the reference flat (standard) is completely free of irregularities. The flatness of any optical flat is relative to the flatness of the original standard that was used to calibrate it. Therefore, because both surfaces have some irregularities, there are few ways to know the true, absolute flatness of any optical flat. The only surface that can achieve nearly absolute flatness is a liquid surface, such as mercury, and can sometimes achieve flatness readings to within λ/100, which equates to a deviation of only 6.32 nm (632 nm/100). However, liquid flats are very difficult to use and align properly, so they are typically only used when preparing a standard flat for calibrating other flats. The other method for determining absolute flatness is the "three-flat test." In this test, three flats of equal size and shape are tested against each other. By analyzing the patterns and their different phase shifts, the absolute contours of each surface can be extrapolated. This usually requires at least twelve individual tests, checking each flat against every other flat in at least two different orientations. To eliminate any errors, the flats sometimes may be tested while resting on edge, rather than lying flat, helping to prevent sagging. Wringing Wringing occurs when nearly all of the air becomes forced out from between the surfaces, causing the surfaces to lock together, partly through the vacuum between them. The flatter the surfaces; the better they will wring together, especially when the flatness extends all the way to the edges. If two surfaces are very flat, they may become wrung together so tightly that a lot of force may be needed to separate them. The interference fringes typically only form once the optical flat begins to wring to the testing surface. If the surfaces are clean and very flat, they will begin to wring almost immediately after the first contact. After wringing begins, as air is slowly forced out from between the surfaces, an optical wedge forms between the surfaces. The interference fringes form perpendicular to this wedge. As the air is forced out, the fringes will appear to move toward the thickest gap, spreading out and becoming wider but fewer. As the air is forced out, the vacuum holding the surfaces together becomes stronger. The optical flat should usually never be allowed to fully wring to the surface, otherwise it can be scratched or even broken when separating them. In some cases, if left for many hours, a block of wood may be needed to knock them loose. Testing flatness with an optical flat is typically done as soon a viable interference pattern develops, and then the surfaces are separated before they can fully wring. Because the angle of the wedge is extremely shallow and the gap extremely small, wringing may take a few hours to complete. Sliding the flat in relation to the surface can speed up wringing, but trying to press the air out will have little effect. If the surfaces are insufficiently flat, if any oil films or impurities exist on the surface, or if slight dust-particles land between the surfaces, they may not wring at all. Therefore, the surfaces must be very clean and free of debris to get an accurate measurement. Determining surface shape The fringes act very much like the lines on a topography map, where the fringes are always perpendicular to the wedge between the surfaces. 
When wringing first begins, there is a large angle in the air wedge and the fringes will resemble grid topography-lines. If the fringes are straight; then the surface is flat. If the surfaces are allowed to fully wring and become parallel, the straight fringes will widen until only a dark fringe remains, and they will disappear completely. If the surface is not flat, the grid lines will have some bends in them, indicating the topography of the surface. Straight fringes with bends in them may indicate a raised elevation or a depression. Straight fringes with a "V" shape in the middle indicate a ridge or valley running across the center, while straight fringes with curves near the ends indicate edges that are either rounded-off or have a raised lip. If the surfaces are not completely flat, as wringing progresses the fringes will widen and continue to bend. When fully wrung, they will resemble contour topography-lines, indicating the deviations on the surface. Rounded fringes indicate gentle sloping or slightly cylindrical surfaces, while tight corners in the fringes indicate sharp angles in the surface. Small, round circles may indicate bumps or depressions, while concentric circles indicate a conical shape. Unevenly spaced concentric circles indicate a convex or concave surface. Before the surfaces fully wring, these fringes will be distorted due to the added angle of the air wedge, changing into the contours as the air is slowly pushed out. A single dark-fringe has the same gap thickness, following a line that runs the entire length of the fringe. The adjacent bright-fringe will indicate a thickness which is either 1/2 of the wavelength narrower or 1/2 of the wavelength wider. The thinner and closer the fringes are; the steeper the slope is, while wider fringes, spaced further apart, show a shallower slope. Unfortunately, it is impossible to tell whether the fringes are indicating an uphill or downhill slope from just a single view of the fringes alone, because the adjacent fringes can be going either way. A ring of concentric circles can indicate that the surface is either concave or convex, which is an effect similar to the hollow-mask illusion. There are three ways to test the surface for shape, but the most common is the "finger-pressure test." In this test, slight pressure is applied to the flat, to see which way the fringes move. The fringes will move away from the narrow end of the wedge. If the testing surface is concave, when pressure is applied to the center of the rings, the flat will flex a little and the fringes will appear to move inward. However, if the surface is convex, the flat will be in point-contact with the surface in that spot, so it will have no room to flex. Thus, the fringes will remain stationary, merely growing a little wider. If pressure is applied to the edge of the flat something similar happens. If the surface is convex the flat will rock a little, causing the fringes to move toward the finger. However, if the surface is concave the flat will flex a little, and the fringes will move away from the finger toward the center. Although this is called a "finger" pressure test, a wooden stick or some other instrument is often used to avoid heating the glass (with the mere weight of a toothpick often being enough pressure). Another method involves exposing the flat to white light, allowing rainbow fringes to form, and then pressing in the center. If the surface is concave, there will be point-contact along the edge, and the outer fringe will turn dark. 
If the surface is convex, there will be point-contact in the center, and the central fringe will turn dark. Much like tempering colors of steel, the fringes will be slightly brownish at the narrower side of the fringe and blue on the wider side, so if the surface is concave the blue will be on the inside of the rings, but if convex the blue will be on the outside. The third method involves moving the eye in relation to the flat. When moving the eye from a zero-degree angle of incidence to an oblique angle, the fringes will appear to move. If the testing surface is concave, the fringes will appear to move toward the center. If the surface is convex, the fringes will move away from the center. To get a truly accurate reading of the surface, the test should usually be performed in at least two different directions. As grid lines, the fringes only represent part of a grid, so a valley running across the surface may only show as a slight bend in the fringe if it is running parallel to the valley. However, if the optical flat is rotated 90 degrees and retested, the fringes will run perpendicular to the valley and it will show up as a row of V- or U-shaped contours in the fringes. By testing in more than one orientation, a better map of the surface can be made. Long-term stability During reasonable care and use, optical flats need to maintain their flatness over long periods of time. Therefore, hard glasses with low coefficients of thermal expansion, such as fused silica, are often used for the manufacturing material. However, a few laboratory measurements of room temperature, fused-silica optical-flats have shown a motion consistent with a material viscosity on the order of 1017–1018 Pa·s. This equates to a deviation of a few nanometres over the period of a decade. Because the flatness of an optical flat is relative to the flatness of the original test flat, the true (absolute) flatness at the time of manufacture can only be determined by performing an interferometer test using a liquid flat, or by performing a "three flat test", in which the interference patterns produced by three flats are computer-analyzed. A few tests that have been carried out have shown that a deviation sometimes occurs on the fused silica's surface. However, the tests show that the deformation may be sporadic, with only some of the flats deforming during the test period, some partially deforming, and others remaining the same. The cause of the deformation is unknown and would never be visible to the human eye during a lifetime. (A λ/4 flat has a normal surface-deviation of 158 nanometres, while a λ/20 flat has a normal deviation of over 30 nm.) This deformation has only been observed in fused silica, while soda-lime glass still shows a viscosity of 1041Pa·s, which is many orders of magnitude higher. See also Newton's rings Optical contact bonding Gauge block, another type of component designed for flatness Surface plate References Optical devices
Optical flat
[ "Materials_science", "Engineering" ]
4,936
[ "Glass engineering and science", "Optical devices" ]
4,231,031
https://en.wikipedia.org/wiki/Canavanine
L-(+)-(S)-Canavanine is a non-proteinogenic amino acid found in certain leguminous plants. It is structurally related to the proteinogenic α-amino acid L-arginine, the sole difference being the replacement of a methylene bridge (–CH2– unit) in arginine with an oxa group (i.e., an oxygen atom) in canavanine. Canavanine is accumulated primarily in the seeds of the organisms which produce it, where it serves both as a highly deleterious defensive compound against herbivores (due to cells mistaking it for arginine) and a vital source of nitrogen for the growing embryo. The related L-canaline is similar to ornithine. Toxicity The mechanism of canavanine's toxicity is that organisms that consume it typically mistakenly incorporate it into their own proteins in place of L-arginine, thereby producing structurally aberrant proteins that may not function properly. Cleavage by arginase also produces canaline, a potent insecticide. The toxicity of canavanine may be enhanced under conditions of protein starvation, and canavanine toxicity, resulting from consumption of Hedysarum alpinum seeds with a concentration of 1.2% canavanine weight/weight, has been implicated in the death of a malnourished Christopher McCandless. (McCandless was the subject of Jon Krakauer's book (and subsequent movie) Into the Wild). In mammals NZB/W F1, NZB, and DBA/2 mice fed L-canavanine develop a syndrome similar to systemic lupus erythematosus, while BALB/c mice fed a steady diet of protein containing 1% canavanine showed no change in lifespan. Alfalfa seeds and sprouts contain L-canavanine. The L-canavanine in alfalfa has been linked to lupus-like symptoms in primates, including humans, and other auto-immune diseases. Often, stopping consumption reverses the problem. Tolerance Some specialized herbivores tolerate L-canavanine either because they metabolize it efficiently (cf. L-canaline) or avoid its incorporation into their own nascent proteins. By metabolic detoxification Herbivores may be able to metabolize canavanine efficiently. The beetle Caryedes brasiliensis is able to break canavanine down to canaline, then further detoxifies canaline by reductive deamination to form homoserine and ammonia. As a result, the beetle not only tolerates the chemical, but uses it as a source of nitrogen to synthesize its other amino acids to allow it to develop. By selectivity An example of this ability can be found in the larvae of the tobacco budworm Heliothis virescens, which can tolerate large (lethal concentration 50 or LC50 300 mM) amounts of dietary canavanine. These larvae fastidiously avoid incorporation of L-canavanine into their nascent proteins due to gastrointestinal expression of canavanine hydrolase, an enzyme that cleaves L-canavanine into L-homoserine and hydroxyguanidine, and L-arginine kinase, which phosphorylates L-canavanine. In contrast, larvae of the tobacco hornworm Manduca sexta can only tolerate tiny amounts (1.0 microgram per kilogram of fresh body weight) of dietary canavanine because their arginine-tRNA ligase has little, if any, discriminatory capacity. The arginine-tRNA synthetases of these organisms have not been examined experimentally, but comparative studies of the incorporation of radiolabeled L-arginine and L-canavanine have shown that in Manduca sexta, the ratio of incorporation is about 3 to 1. Dioclea megacarpa seeds contain high levels of canavanine. The beetle Caryedes brasiliensis is able to tolerate this, however, as it has the most highly discriminatory arginine-tRNA ligase known (as of 1982). 
In this insect, the level of radiolabeled L-canavanine incorporated into newly synthesized proteins is barely measurable. Moreover, this beetle uses canavanine as a nitrogen source (see above). See also Canaline Arginine References Alpha-Amino acids Toxic amino acids Non-proteinogenic amino acids Plant toxins Oxime ethers
Canavanine
[ "Chemistry" ]
954
[ "Chemical ecology", "Plant toxins" ]
4,231,059
https://en.wikipedia.org/wiki/Pleurotus%20eryngii
Pleurotus eryngii (also known as king trumpet mushroom, French horn mushroom, eryngi, king oyster mushroom, king brown mushroom, boletus of the steppes, trumpet royale, aliʻi oyster) is an edible mushroom native to Mediterranean regions of Europe, the Middle East, and North Africa, but also grown in many parts of Asia. Taxonomy Its species name is derived from the fact that it grows in association with the roots of Eryngium campestre or other Eryngium plants (English names: 'sea holly' or 'eryngo'). P. eryngii is a species complex, and a number of varieties have been described, with differing plant associates in the carrot family (Apiaceae). Pleurotus eryngii var. eryngii (DC.) Quél 1872 – associated with Eryngium spp. Pleurotus eryngii var. ferulae (Lanzi) Sacc. 1887 – associated with Ferula communis Pleurotus eryngii var. tingitanus Lewinsohn 2002 – associated with Ferula tingitana Pleurotus eryngii var. elaeoselini Venturella, Zervakis & La Rocca 2000 – associated with Elaeoselinum asclepium Pleurotus eryngii var. thapsiae Venturella, Zervakis & Saitta 2002 – associated with Thapsia garganica Other specimens of P. eryngii have been reported in association with plants in the genera Ferulago, Cachrys, Laserpitium, and Diplotaenia, all in Apiaceae. Molecular studies have shown Pleurotus nebrodensis to be closely related to, but distinct from, P. eryngii. Pleurotus fossulatus may be another closely related species. Description Pleurotus eryngii is the largest species in the oyster mushroom genus, Pleurotus, which also contains the oyster mushroom Pleurotus ostreatus. It has a thick, meaty white stem and a small tan cap (in young specimens). Its natural range extends from the Atlantic Ocean through the Mediterranean Basin and Central Europe into Western Asia and India. Unlike other species of Pleurotus, which are primarily wood-decay fungi, the P. eryngii complex are also weak parasites on the roots of herbaceous plants in the carrot family (Apiaceae), although they may also be cultured on organic wastes. Verification Sequence analysis of the ITS1–5.8S rDNA–ITS2 of P. eryngii and the control strains P. ostreatus and P. ferulae demonstrated that the DNA regions share almost 99% sequence identity, indicating closely related mushroom strains. ITS1–5.8S rDNA–ITS2 sequence analysis is DNA sequencing used to confirm the mushroom species at hand, although it does distinguish variants in the mushroom species. RAPD is superior to DNA sequence-based methods for distinguishing strains within a species. To verify the mushroom strains, RAPD was used, and DNA fragments were amplified from the total cellular DNA. Verification of Pleurotus eryngii strains was assessed using ITS sequence analysis and RAPD fingerprinting. Analysis of the DNA fragment pattern showed that the 22 P. eryngii strains were clearly distinguished from the control strains P. ostreatus and P. ferulae, and could be categorized into five subgroups: Group 1 – commonly showed widely spaced gills under the convex cap. They tended to form small fruiting bodies. These strains came from Eastern Europe, with optimum growth at 24–25 °C. Group 2 – funnel-shaped cap phenotype with a stout stem. Members in this group grew faster than other mushrooms. They required 15–16 days from fructification to harvest, whereas the others required 18–21 days. Group 3 – shared similar morphological characteristics; they formed thin fruiting bodies with a small convex cap. This group included strains KNR2514 and KNR2522. Group 4 – resembled group 1 mushrooms morphologically but grew at around 27 °C. 
Group 5 – was collected from Iran; they grew as mycelia but hardly formed fruiting bodies. In this group, we only succeeded in generating fruiting bodies for KNR2517, which had a wide, white, convex cap. Their optimal growth temperature was the lowest among the strains tested (19–21 °C), which may reflect their geographical origin. Phylogeny Pleurotus populations growing on umbellifers seem to have recently diverged through a sympatric speciation process that is based on both intrinsic reproductive barriers and extrinsic ecogeographical factors. Pleurotus eryngii is a saprotrophic fungus. Saprotrophic fungi use the process of chemoheterotrophic extracellular digestion involved in the processing of decayed organic matter. It is also a nematode-trapping fungus (NTF), which survives by trapping and digesting nematodes, working as a natural pesticide. These fungi produce trapping devices to capture, kill, and digest nematodes as food sources. Traps are not only the weapons that NTF use to capture and infect nematodes but also an important indicator of their switch from a saprophytic to a predacious lifestyle. Pleurotus eryngii can live both saprophytically on organic matter and as a predator by capturing tiny animals. The development of traps shows their evolutionary importance. They play a crucial role in obtaining nutrients and may confer competitive advantages over non-predatory fungi. This fungal carnivorism diverged from saprophytism about 419 million years ago (Mya), after the origin of nematodes about 550–600 Mya. The fact that the fungi evolved this ability after the nematodes appeared suggests co-evolution of the species. Phylogenetic analysis suggested that NTF have a common ancestor and that the ability to capture nematodes has been an important trait for speciation and diversification within the clade. P. eryngii extract reduced the number of Panagrellus sp. larvae after 24 h by 90%. The P. eryngii fungus has predatory activity against Panagrellus sp. larvae due to toxin production and negatively affects Meloidogyne javanica eggs and juvenile development. Uses The mushroom has a good shelf life and is cultivated widely. It has little flavor or aroma when raw. When cooked, it develops rich umami flavor and a meaty texture. In cultivation, random amplified polymorphic DNA (RAPD) can be used in the mushroom industry for the classification and maintenance of high-quality mushroom spawns. P. eryngii is a commercially produced edible mushroom, making up 30% of the Korean edible mushroom market since its introduction in 1995. It is commonly used as a meat substitute in many vegan recipes. Pleurotus eryngii may contain chemicals that stimulate the immune system. Dietary intake of Pleurotus eryngii may function as a cholesterol-lowering dietary agent. Like some other Pleurotus species, P. eryngii attacks nematodes and may provide a control method for these parasites when they infect cats and dogs. It is very frequently used in Apulian cuisine, for example served on top of orecchiette. See also Medicinal fungi List of Pleurotus species Notes References Sources External links Pleurotus eryngii photos Pleurotaceae Fungi of Europe Edible fungi Parasitic fungi Carnivorous fungi Fungi in cultivation Fungus species
Pleurotus eryngii
[ "Biology" ]
1,601
[ "Fungi", "Fungus species" ]
4,231,527
https://en.wikipedia.org/wiki/Rubik%27s%20Snake
A Rubik's Snake (also Rubik's Twist, Rubik's Transformable Snake, Rubik’s Snake Puzzle) is a toy with 24 wedges that are right isosceles triangular prisms. The wedges are connected by spring bolts, so that they can be twisted, but not separated. By being twisted, the Rubik's Snake can be made to resemble a wide variety of objects, animals, or geometric shapes. Its "ball" shape in its packaging is a non-uniform concave rhombicuboctahedron. The snake was invented by Ernő Rubik, better known as the inventor of the Rubik's Cube. Rubik's Snake was released during 1981 at the height of the Rubik's Cube craze. According to Ernő Rubik: "The snake is not a problem to be solved; it offers infinite possibilities of combination. It is a tool to test out ideas of shape in space. Speaking theoretically, the number of the snake's combinations is limited. But speaking practically, that number is limitless, and a lifetime is not sufficient to realize all of its possibilities." Other manufacturers have produced versions with more pieces than the original. Structure The 24 prisms are aligned in a row with an alternating orientation (normal and upside down). Each prism can adopt 4 different positions, each with an offset of 90°. Usually the prisms have alternating colors. Notation Twisting instructions The steps needed to make an arbitrary shape or figure can be described in a number of ways. One common starting configuration is a straight bar with alternating upper and lower prisms, with the rectangular faces facing up and down, and the triangular faces facing towards the player. The 12 lower prisms are numbered 1 through 12 starting from the left, with the left and right sloping faces of these prisms labeled L and R respectively. The last of the upper prisms is on the right, so the L face of prism 1 does not have an adjacent prism. The four possible positions of the adjacent prism on each L and R sloping face are numbered 0, 1, 2 and 3 (representing the number of twists between the bottom prism and the L or R adjacent prism). Numbering is based on always twisting the adjacent prism so it swings towards the player: position 1 turns the adjacent blocks towards them, position 2 makes a 90° turn, and position 3 turns the adjacent block away from the player. Position 0 is the starting position, therefore it is not explicitly noted in step-by-step instructions. Using these rules, a twist can be simply described as: Number of the downward-facing prism (from the left): 1 to 12 Left or right sloping side of the prism: L or R Position of the twist: 1, 2 or 3 Machine processing The position of the 23 turning areas can also be written directly after each other. Here the positions 0, 1, 2 and 3 are always based on the degrees of twist between the right-hand prisms relative to the left-hand prism, when viewed from the right of the axis of rotation. However, this notation is impractical for human readers, because it is difficult to determine the order of the twists. Fiore method Rather than numbers, Albert Fiore uses letters to refer to the direction the second (rightward) section is turned in relation to the first (leftward) section: D, L, U, and R. These are listed consecutively rather than numbered, so that a completely straight figure, rather than being presumed as a starting point, is notated DDDDDDDDDDDDDDDDDDDDDDD. Mathematical formulation The number of different shapes of the Rubik's Snake is at most 4^23 ≈ 7×10^13 (70 trillion), i.e. 23 turning areas with 4 positions each. 
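To make the counting argument concrete, a configuration can be modelled as a sequence of 23 twist values, each in {0, 1, 2, 3}. The sketch below simply evaluates that upper bound and shows one way to represent a configuration; it ignores self-intersection, which is why the true number of reachable shapes is smaller, as discussed next.

```python
from itertools import product

POSITIONS = (0, 1, 2, 3)   # 0°, 90°, 180°, 270° twists at each joint
NUM_JOINTS = 23            # 24 prisms -> 23 turning areas

# Upper bound on the number of configurations, ignoring collisions:
upper_bound = len(POSITIONS) ** NUM_JOINTS
print(upper_bound)         # 70368744177664, i.e. about 7e13

# A configuration is just a 23-tuple of twist values; the straight bar is all zeros.
straight = (0,) * NUM_JOINTS

# Enumerating every configuration of a shorter, 4-joint toy version (4^4 = 256 states):
small_states = list(product(POSITIONS, repeat=4))
print(len(small_states))   # 256
```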
The real number of different shapes is lower, since some configurations are spatially impossible (because they would require multiple prisms to occupy the same region of space). Peter Aylett computed by an exhaustive search that approximately 1.3×10^13 (13 trillion) positions are possible when prohibiting prism collisions, or passing through a collision to reach another position; or approximately 6.7×10^12 when mirror images (defined as the same sequence of turns, but from the other end of the snake) are counted as one position, and likewise for rotational symmetries in loops (where the sequence of turns in a loop is cycled). World Record The world record for the "Fastest time to Solve a Snake Rubik's Cube" was set by Lim Kai Yi on January 7, 2024. He completed the puzzle in 2.19 seconds. See also Combination puzzles Mechanical puzzles Nonplanar flexagons References External links Rubiks Snake Fansite, collection of shapes and figures of Rubik's Snake glsnake - open-source cross-platform implementation of Rubik's Snake (also ported to XScreenSaver) RecbixSnake - open-source cross-platform implementation of Rubik's Snake in Rec Room's visual programming language “Circuits” magicsnake - open-source javascript implementation with no blocks limitation 1980s toys Mechanical puzzles Combination puzzles Hungarian inventions Novelty items
Rubik's Snake
[ "Mathematics" ]
1,084
[ "Recreational mathematics", "Mechanical puzzles" ]
4,231,552
https://en.wikipedia.org/wiki/Clome%20oven
A clome oven (or cloam oven) is a type of masonry oven with a removable door made of clay or cast iron. It was a standard fitting for most kitchen fireplaces in Cornwall and Devon. The oven would be built into the side of the chimney breast, often appearing as a round bulge in the chimney. This bulge was the masonry surrounding the oven, and was intended to be dismantled should the oven ever need to be replaced. During installation, they are surrounded by packed clay to prevent the actual oven cracking. To use a clome oven, one must enter the fireplace and build a fire within the oven. Dried gorse or blackthorn was traditionally used. As the oven has no internal chimney, the smoke is allowed to escape through the oven door, and into the adjacent fireplace where it leaves through the main chimney. Once the oven is white hot, the hot ashes are either raked out or pushed aside, then the item to be baked is put in, and the door propped up to the opening. As cast-iron range cookers were brought into common use, it became standard practice to build a dividing wall to split the fireplace into two separate fireplaces, thus allowing access to the clome oven, as well as providing a space of the correct dimensions to fit a Cornish range or similar. Bricks were the most common building material for this task, since the installation of a Cornish range required a brick flue to be built up the back of the fireplace. Many clome ovens were preserved in situ in this way. References See also Horno Fireplaces Masonry Ovens Cornish cuisine
Clome oven
[ "Engineering" ]
326
[ "Construction", "Masonry" ]
4,231,766
https://en.wikipedia.org/wiki/Megastructures%20%28TV%20series%29
Megastructures is a documentary television series appearing on the National Geographic Channel in the United States and the United Kingdom, Channel 5 in the United Kingdom, France 5 in France, and 7mate in Australia. Each episode is an educational look of varying depth into the construction, operation, and staffing of various structures or construction projects, but not ordinary construction products. Generally containing interviews with designers and project managers, it presents the problems of construction and the methodology or techniques used to overcome obstacles. In some cases (such as the Akashi-Kaikyo Bridge and Petronas Towers) this involved the development of new materials or products that are now in general use within the construction industry. Megastructures focuses on constructions that are extreme, in the sense that they are the biggest, tallest, longest, or deepest in the world. Alternatively, a project may appear if it has an element of novelty or is a world first (such as Dubai's Palm Islands). This type of project is known as a megaproject. The series covers similar subjects to the History Channel's Modern Marvels and Discovery Channel's Extreme Engineering, covering areas of architecture, transport, construction and manufacturing. Episodes Season 1 (2004) Season 2 (2005) Season 3 (2006) Season 4 (2007–2008) Season 5 (2009–2010) Season 6 (2011) Unknown season Unknown season 2 Spin-offs Megastructures: Built from Disaster "Megastructures: Built from Disaster – Bridges" // Wednesday, 26 August 2009 8–9pm on Channel 5 "Megastructures: Built from Disaster – Ships" // Thursday, 3 September 2009 8–9pm on Channel 5 "Megastructures: Built from Disaster – Tunnels" // Thursday, 10 September 2009 8–9pm on Channel 5 "Megastructures: Built from Disaster – Stadiums" // Thursday, 24 September 2009 8–9pm on Channel 5 "Megastructures: Built from Disaster – Trains" // Thursday, 8 October 2009 8–9pm on Channel 5 "Megastructures: Built from Disaster – Skyscrapers" // Thursday, 15 October 2009 8–9pm on Channel 5 Ancient Megastructures "Ancient Megastructures: The Great Pyramid" "Ancient Megastructures: The Colosseum" "Ancient Megastructures: Chartres Cathedral" "Ancient Megastructures: Istanbul's Hagia Sophia" "Ancient Megastructures: Machu Picchu" "Ancient Megastructures: Angkor Wat" "Ancient Megastructures: Petra Cathedral" "Ancient Megastructures: St Paul's Cathedral" "Ancient Megastructures: The Alhambra" International broadcasts In January 2020, the Indonesian TV channel NET expressed interest in broadcasting Megastructures in Indonesia in July 2020. See also Mega Builders Monster Moves Ultimate Factories Nazi Megastructures References External links Megastructures official site on National Geographic https://web.archive.org/web/20091119041617/http://www.locatetv.com/tv/ultimate-factories/2105893/episode-guide http://www.twofourbroadcast.com/news-250809-megastructures.asp Megastructures 2004 American television series debuts 2000s American documentary television series 2010s American documentary television series Channel 5 (British TV channel) documentary series Construction Documentary television series about aviation American aviation television series Documentary television series about industry Documentary television series about technology National Geographic (American TV channel) original programming
Megastructures (TV series)
[ "Technology", "Engineering" ]
763
[ "Construction", "Exploratory engineering", "Megastructures" ]
4,231,780
https://en.wikipedia.org/wiki/Methacrylate
Methacrylates are derivatives of methacrylic acid. These derivatives are mainly used to make poly(methyl methacrylate) and related polymers. Monomers Methyl methacrylate Ethyl methacrylate Butyl methacrylate Hydroxyethyl methacrylate Glycidyl methacrylate Carboxylate anions Monomers Methacrylate esters
Methacrylate
[ "Chemistry", "Materials_science" ]
93
[ "Monomers", "Polymer chemistry" ]
4,231,961
https://en.wikipedia.org/wiki/Plasma%20etching
Plasma etching is a form of plasma processing used to fabricate integrated circuits. It involves a high-speed stream of glow discharge (plasma) of an appropriate gas mixture being shot (in pulses) at a sample. The plasma source, known as etch species, can be either charged (ions) or neutral (atoms and radicals). During the process, the plasma generates volatile etch products at room temperature from the chemical reactions between the elements of the material etched and the reactive species generated by the plasma. Eventually the atoms of the shot element embed themselves at or just below the surface of the target, thus modifying the physical properties of the target. Mechanisms Plasma generation A plasma is a highly energetic state in which many processes can occur. These processes happen because of electrons and atoms. To form the plasma, electrons have to be accelerated to gain energy. Highly energetic electrons transfer the energy to atoms by collisions. Three different processes can occur because of these collisions: excitation, dissociation, and ionization. Different species are present in the plasma, such as electrons, ions, radicals, and neutral particles, and these species constantly interact with each other. Two processes occur during plasma etching: generation of chemical species, and interaction with the surrounding surfaces. Without a plasma, all those processes would occur at a higher temperature. There are different ways to change the plasma chemistry and get different kinds of plasma etching or plasma deposition. One way to form a plasma is RF excitation by a power source at 13.56 MHz, a frequency allocated for this application in the ISM bands. The mode of operation of the plasma system will change if the operating pressure changes. Also, it is different for different structures of the reaction chamber. In the simple case, the electrode structure is symmetrical, and the sample is placed upon the grounded electrode. Influences on the process The key to developing successful complex etching processes is to find the appropriate etch gas chemistry that will form volatile products with the material to be etched. For some difficult materials (such as magnetic materials), the volatility can only be obtained when the wafer temperature is increased. The main factors that influence the plasma process are the electron source, pressure, gas species, and vacuum. Surface interaction The reaction of the products depends on the likelihood of dissimilar atoms, photons, or radicals reacting to form chemical compounds. The temperature of the surface also affects the reaction of products. Adsorption happens when a substance is able to gather at the surface in a condensed layer of varying thickness (usually a thin, oxidized layer). Volatile products desorb in the plasma phase and help the plasma etching process as the material interacts with the sample's walls. If the products are not volatile, a thin film will form at the surface of the material. Different principles that affect a sample's suitability for plasma etching include volatility, adsorption, chemical affinity, ion bombardment, and sputtering. Plasma etching can change the surface contact angles, such as hydrophilic to hydrophobic, or vice versa. Argon plasma etching has been reported to increase the contact angle from 52° to 68°, and oxygen plasma etching to reduce it from 52° to 19°, for CFRP composites used in bone plate applications. 
Plasma etching has been reported to reduce the surface roughness of metals from hundreds of nanometers to as low as 3 nm. Types Pressure influences the plasma etching process. For plasma etching to happen, the chamber has to be under low pressure, less than 100 Pa. In order to generate low-pressure plasma, the gas has to be ionized. The ionization is achieved by a glow discharge. The excitation is provided by an external source, which can deliver up to 30 kW at frequencies ranging from 50 Hz (dc), through pulsed dc, up to radio and microwave frequencies (MHz–GHz). Microwave plasma etching Microwave plasma etching uses an excitation source in the microwave frequency range, i.e. between MHz and GHz. Hydrogen plasma etching One form of plasma etching that uses a single process gas is hydrogen plasma etching, which is carried out in a dedicated experimental apparatus. Plasma etcher A plasma etcher, or etching tool, is a tool used in the production of semiconductor devices. A plasma etcher produces a plasma from a process gas, typically oxygen or a fluorine-bearing gas, using a high frequency electric field, typically 13.56 MHz. A silicon wafer is placed in the plasma etcher, and the air is evacuated from the process chamber using a system of vacuum pumps. Then a process gas is introduced at low pressure, and is excited into a plasma through dielectric breakdown.
When used in conjunction with photolithography, silicon dioxide can be selectively applied or removed to trace paths for circuits. For the formation of integrated circuits it is necessary to structure various layers. This can be done with a plasma etcher. Before etching, a photoresist is deposited on the surface, illuminated through a mask, and developed. The dry etch is then performed so that structured etching is achieved. After the process, the remaining photoresist has to be removed. This is also done in a special plasma etcher, called an asher. Dry etching allows a reproducible, uniform etching of all materials used in silicon and III-V semiconductor technology. By using inductively coupled plasma/reactive ion etching (ICP/RIE), even the hardest materials, such as diamond, can be nanostructured. Plasma etchers are also used for de-layering integrated circuits in failure analysis. Printed circuit boards Plasma is used to etch printed circuit boards, including the desmearing of vias. See also Plasma cleaning References External links http://stage.iupac.org/publications/pac/pdf/1990/pdf/6209x1699.pdf Plasma processing Semiconductor device fabrication
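The plasma-confinement criterion mentioned above — a slot is sealed off when the Debye sheath length is at least half the slot width — can be made concrete with the standard Debye length formula, lambda_D = sqrt(eps0 * kB * Te / (ne * e^2)). The sheath is typically a few Debye lengths thick, so the sketch below uses the Debye length only as a rough proxy; the electron temperature, density, and slot width are assumed, typical-order-of-magnitude values, not figures taken from this article.

import math

# Physical constants (SI units)
EPS0 = 8.854e-12   # vacuum permittivity, F/m
KB   = 1.381e-23   # Boltzmann constant, J/K
E    = 1.602e-19   # elementary charge, C

def debye_length(electron_temp_ev: float, electron_density_m3: float) -> float:
    """Electron Debye length (m) for a plasma with the given
    electron temperature (eV) and electron density (m^-3)."""
    te_joule = electron_temp_ev * E          # k_B * T_e expressed in joules
    return math.sqrt(EPS0 * te_joule / (electron_density_m3 * E ** 2))

def slot_is_confined(slot_width_m: float, sheath_length_m: float) -> bool:
    """Criterion from the text: the sheath closes off the slot when the
    sheath length is at least half the slot width."""
    return sheath_length_m >= slot_width_m / 2

if __name__ == "__main__":
    te_ev = 3.0      # electron temperature, eV   (assumed example value)
    ne    = 1e16     # electron density, m^-3     (assumed example value)
    slot  = 0.5e-3   # slot width in a quartz part, m (assumed example value)

    ld = debye_length(te_ev, ne)
    print(f"Debye length: {ld * 1e6:.0f} micrometres")
    print("Slot confined:", slot_is_confined(slot, ld))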
Plasma etching
[ "Materials_science" ]
1,666
[ "Semiconductor device fabrication", "Microtechnology" ]
4,232,047
https://en.wikipedia.org/wiki/Attenuator%20%28electronics%29
An attenuator is a passive broadband electronic device that reduces the power of a signal without appreciably distorting its waveform. An attenuator is effectively the opposite of an amplifier, though the two work by different methods. While an amplifier provides gain, an attenuator provides loss, or gain less than unity. An attenuator is often referred to as a "pad" in audio electronics. Construction and usage Attenuators are usually passive devices made from simple voltage divider networks. Switching between different resistances forms adjustable stepped attenuators and continuously adjustable ones using potentiometers. For higher frequencies precisely matched low voltage standing wave ratio (VSWR) resistance networks are used. Fixed attenuators in circuits are used to lower voltage, dissipate power, and to improve impedance matching. In measuring signals, attenuator pads or adapters are used to lower the amplitude of the signal a known amount to enable measurements, or to protect the measuring device from signal levels that might damage it. Attenuators are also used to 'match' impedance by lowering apparent SWR (Standing Wave Ratio). Attenuator circuits Basic circuits used in attenuators are pi (Π) pads (π-type) and T pads. These may be required to be balanced or unbalanced networks depending on whether the line geometry with which they are to be used is balanced or unbalanced. For instance, attenuators used with coaxial lines would be the unbalanced form while attenuators for use with twisted pair are required to be the balanced form. Four fundamental attenuator circuit diagrams are given in the figures on the left. Since an attenuator circuit consists solely of passive resistor elements, it is both linear and reciprocal. If the circuit is also made symmetrical (this is usually the case since it is usually required that the input and output impedance Z1 and Z2 are equal), then the input and output ports are not distinguished, but by convention the left and right sides of the circuits are referred to as input and output, respectively. Various tables and calculators are available that provide a means of determining the appropriate resistor values for achieving particular loss values, such as that published by the NAB in 1960 for losses ranging from 1/2 to 40 dB, for use in 600 ohm circuits. Attenuator characteristics Key specifications for attenuators are: Attenuation expressed in decibels of relative power. A 3 dB pad reduces power to one half, 6 dB to one fourth, 10 dB to one tenth, 20 dB to one hundredth, 30 dB to one thousandth and so on. When input and output impedances are the same, voltage attenuation will be the square root of power attenuation, so, for example, a 6 dB attenuator that reduces power to one fourth will reduce the voltage (and the current) by half. Nominal impedance, for example 50 ohm Frequency bandwidth, for example DC-18 GHz Power dissipation depends on mass and surface area of resistance material as well as possible additional cooling fins. SWR is the standing wave ratio for input and output ports Accuracy Repeatability RF attenuators Radio frequency attenuators are typically coaxial in structure with precision connectors as ports and coaxial, micro strip or thin-film internal structure. Above SHF special waveguide structure is required. The flap attenuator is designed for use in waveguides to attenuate the signal. Important characteristics are: accuracy, low SWR, flat frequency-response and repeatability. 
The size and shape of the attenuator depends on its ability to dissipate power. RF attenuators are used as terminating loads, as sources of known attenuation, and for protective dissipation of power when measuring RF signals. Audio attenuators A line-level attenuator in the preamp or a power attenuator after the power amplifier uses electrical resistance to reduce the amplitude of the signal that reaches the speaker, reducing the volume of the output. A line-level attenuator has lower power handling, such as a 1/2-watt potentiometer or voltage divider, and controls preamp-level signals, whereas a power attenuator has higher power handling capability, such as 10 watts or more, and is used between the power amplifier and the speaker. Power attenuator (guitar) Guitar amplifier Component values for resistive pads and attenuators This section concerns pi-pads, T-pads and L-pads made entirely from resistors and terminated on each port with a purely real resistance. All impedances, currents, voltages and two-port parameters will be assumed to be purely real. For practical applications, this assumption is often close enough. The pad is designed for a particular load impedance, ZLoad, and a particular source impedance, ZS. The impedance seen looking into the input port will be ZS if the output port is terminated by ZLoad. The impedance seen looking into the output port will be ZLoad if the input port is terminated by ZS. Reference figures for attenuator component calculation The attenuator two-port is generally bidirectional. However, in this section it will be treated as though it were one-way. In general, either of the two figures applies, but the first figure (which depicts the source on the left) will be tacitly assumed most of the time. In the case of the L-pad, the second figure will be used if the load impedance is greater than the source impedance. Each resistor in each type of pad discussed is given a unique designation to decrease confusion. The L-pad component value calculation assumes that the design impedance for port 1 (on the left) is equal to or higher than the design impedance for port 2. Terms used Pad will include pi-pad, T-pad, L-pad, attenuator, and two-port. Two-port will include pi-pad, T-pad, L-pad, attenuator, and two-port. Input port will mean the input port of the two-port. Output port will mean the output port of the two-port. Symmetric means a case where the source and load have equal impedance. Loss means the ratio of power entering the input port of the pad divided by the power absorbed by the load. Insertion Loss means the ratio of power that would be delivered to the load if the load were directly connected to the source divided by the power absorbed by the load when connected through the pad. Symbols used Passive, resistive pads and attenuators are bidirectional two-ports, but in this section they will be treated as unidirectional. ZS = the output impedance of the source. ZLoad = the input impedance of the load. Zin = the impedance seen looking into the input port when ZLoad is connected to the output port. Zin is a function of the load impedance. Zout = the impedance seen looking into the output port when ZS is connected to the input port. Zout is a function of the source impedance. Vs = source open circuit or unloaded voltage. Vin = voltage applied to the input port by the source. Vout = voltage applied to the load by the output port. Iin = current entering the input port from the source. Iout = current entering the load from the output port.
Pin = Vin × Iin = power entering the input port from the source. Pout = Vout × Iout = power absorbed by the load from the output port. Pdirect = the power that would be absorbed by the load if the load were connected directly to the source. Lpad = 10 log10 (Pin / Pout), always. Further, if ZS = ZLoad, then Lpad = 20 log10 (Vin / Vout). Note that, as defined, Loss ≥ 0 dB. Linsertion = 10 log10 (Pdirect / Pout). Further, if ZS = ZLoad, then Linsertion = Lpad. Loss ≡ Lpad; Loss is defined to be Lpad. Symmetric T pad resistor calculation (see Valkenburg, p. 11-3). Symmetric pi pad resistor calculation (see Valkenburg, p. 11-3). L-pad for impedance matching resistor calculation If a source and load are both resistive (i.e. Z1 and Z2 have zero or very small imaginary part) then a resistive L-pad can be used to match them to each other. As shown, either side of the L-pad can be the source or load, but the Z1 side must be the side with the higher impedance. Large positive numbers mean the loss is large. The loss is a monotonic function of the impedance ratio: higher ratios require higher loss. Converting T-pad to pi-pad This is the Y-Δ transform. Converting pi-pad to T-pad This is the Δ-Y transform. Conversion between two-ports and pads T-pad to impedance parameters It is always possible to represent a resistive T-pad as a passive two-port, and the representation is particularly simple using impedance parameters. Impedance parameters to T-pad The preceding equations are trivially invertible, but if the loss is not enough, some of the T-pad components will have negative resistances. Impedance parameters to pi-pad The preceding T-pad parameters can be algebraically converted to pi-pad parameters. Pi-pad to admittance parameters It is always possible to represent a resistive pi-pad as a passive two-port, and the representation is particularly simple using admittance parameters. Admittance parameters to pi-pad The preceding equations are trivially invertible, but if the loss is not enough, some of the pi-pad components will have negative resistances. General case, determining impedance parameters from requirements Because the pad is entirely made from resistors, it must have a certain minimum loss to match source and load if they are not equal. The minimum loss is a monotonic function of the ratio of the source and load impedances. Although a passive matching two-port can have less loss, if it does it will not be convertible to a resistive attenuator pad. Once these parameters have been determined, they can be implemented as a T or pi pad as discussed above. See also RF and microwave variable attenuators Optical attenuator Notes References External links Guitar amp power attenuator FAQ Basic attenuator circuits Explanation of attenuator types, impedance matching, and very useful calculator Resistive components Microwave technology Audio engineering
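The section above defines the pad loss in decibels and points to standard references (Valkenburg) for the symmetric T-pad, symmetric pi-pad, and matching L-pad resistor values. Since the numerical formulas themselves are only referenced, the sketch below collects the widely published textbook expressions as a convenience; the function names, variable names, and example values are illustrative and are not taken from this article. Results should be checked against a reference such as the Valkenburg pages cited above. Note that, consistent with the decibel examples given earlier, a 6 dB pad corresponds to a voltage ratio of about 2.

import math

def db_to_voltage_ratio(loss_db: float) -> float:
    """Voltage ratio Vin/Vout for a matched pad with the given loss in dB."""
    return 10 ** (loss_db / 20)

def symmetric_t_pad(loss_db: float, z0: float):
    """Resistor values for a symmetric T pad of impedance z0 (loss_db > 0).
    Returns (each series arm, shunt resistor)."""
    k = db_to_voltage_ratio(loss_db)
    r_series = z0 * (k - 1) / (k + 1)
    r_shunt = 2 * z0 * k / (k * k - 1)
    return r_series, r_shunt

def symmetric_pi_pad(loss_db: float, z0: float):
    """Resistor values for a symmetric pi pad of impedance z0 (loss_db > 0).
    Returns (each shunt arm, series resistor)."""
    k = db_to_voltage_ratio(loss_db)
    r_shunt = z0 * (k + 1) / (k - 1)
    r_series = z0 * (k * k - 1) / (2 * k)
    return r_shunt, r_series

def l_pad_match(z1: float, z2: float):
    """Minimum-loss resistive L pad matching z1 (higher) to z2 (lower).
    Returns (series R on the z1 side, shunt R across z2, minimum loss in dB)."""
    if z1 <= z2:
        raise ValueError("z1 must be the higher impedance")
    r_series = math.sqrt(z1 * (z1 - z2))
    r_shunt = z2 * math.sqrt(z1 / (z1 - z2))
    ratio = z1 / z2
    loss_db = 20 * math.log10(math.sqrt(ratio - 1) + math.sqrt(ratio))
    return r_series, r_shunt, loss_db

if __name__ == "__main__":
    print(symmetric_t_pad(6, 50))    # ~ (16.6, 66.9) ohms
    print(symmetric_pi_pad(6, 50))   # ~ (150.5, 37.4) ohms
    print(l_pad_match(75, 50))       # ~ (43.3, 86.6, 5.7 dB)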
Attenuator (electronics)
[ "Physics", "Engineering" ]
2,244
[ "Physical quantities", "Resistive components", "Electrical engineering", "Audio engineering", "Electrical resistance and conductance" ]
4,232,119
https://en.wikipedia.org/wiki/Attenuator%20%28genetics%29
In genetics, attenuation is a regulatory mechanism for some bacterial operons that results in premature termination of transcription. The canonical example of attenuation used in many introductory genetics textbooks is ribosome-mediated attenuation of the trp operon. Ribosome-mediated attenuation of the trp operon relies on the fact that, in bacteria, transcription and translation proceed simultaneously. Attenuation involves a provisional stop signal (attenuator), located in the DNA segment that corresponds to the leader sequence of the mRNA. During attenuation, the ribosome becomes stalled (delayed) in the attenuator region in the mRNA leader. Depending on the metabolic conditions, the attenuator either stops transcription at that point or allows read-through to the structural gene part of the mRNA and synthesis of the appropriate protein. Attenuation is a regulatory feature found throughout Archaea and Bacteria, causing premature termination of transcription. Attenuators are 5'-cis acting regulatory regions which fold into one of two alternative RNA structures that determine the success of transcription. The folding is modulated by a sensing mechanism producing either a Rho-independent terminator, resulting in interrupted transcription and a non-functional RNA product, or an anti-terminator structure, resulting in a functional RNA transcript. There are now many equivalent examples where translation, not transcription, is prevented by sequestering the Shine-Dalgarno sequence (ribosomal binding site) in a hairpin-loop structure. While not meeting the previous definition of (transcriptional) attenuation, these are now considered to be variants of the same phenomenon and are included in this article. Attenuation is an ancient regulatory system, prevalent in many bacterial species, providing fast and sensitive regulation of operons, and is commonly used to repress genes in the presence of their own product (or a downstream metabolite). Classes of attenuators Attenuators may be classified according to the type of molecule which induces the change in RNA structure. It is likely that transcription-attenuation mechanisms developed early, perhaps prior to the archaea/bacteria separation, and have since evolved to use a number of different sensing molecules (the tryptophan biosynthetic operon has been found to use three different mechanisms in different organisms). Ribosome-mediated attenuation In this situation RNA polymerase is dependent on (lagging) ribosome activity; if the ribosome pauses due to insufficient charged tRNA then the anti-terminator structure is favoured. The canonical attenuator example of the trp operon uses this mechanism in E. coli. Similar regulatory mechanisms have been found in many amino acid biosynthetic operons. Small-molecule-mediated attenuation (riboswitches) Riboswitch sequences (in the mRNA leader transcript) bind molecules such as amino acids, nucleotides, sugars, vitamins, metal ions and other small ligands which cause a conformational change in the mRNA. Most of these attenuators are inhibitory and are employed by genes for biosynthetic enzymes or transporters whose expression is inversely related to the concentration of their corresponding metabolites. Examples include cobalamin biosynthesis, the cyclic AMP-GMP switch, lysine biosynthesis, glycine biosynthesis, and the fluoride switch. T-boxes These elements are bound by specific uncharged tRNAs and modulate the expression of corresponding aminoacyl-tRNA synthetase operons.
High levels of uncharged tRNA promote the anti-terminator sequence, leading to increased concentrations of charged tRNA. These are considered by some to be a separate family of riboswitches but are significantly more complex than the previous class of attenuators. Protein-mediated attenuation Protein-RNA interactions may prevent or stabilize the formation of an anti-terminator structure. RNA thermometers Temperature-dependent loop formations introduce temperature dependence into the expression of downstream operons. All such elements act in a translation-dependent manner by controlling the accessibility of the Shine-Dalgarno sequence, for example the expression of pathogenicity islands of some bacteria upon entry to a host. Recent data predict the existence of temperature-dependent alternative secondary structures (including Rho-independent terminators) upstream of cold shock proteins in E. coli. Discovery Attenuation was first observed by Charles Yanofsky in the trp operon of E. coli. The first observation was linked to two separate scientific facts. Mutations which knocked out the trpR (repressor) gene still showed some regulation of the trp operon (these mutants were not fully induced/repressed by tryptophan). The total range of trp operon regulation is about 700× (on/off). When the trp repressor was knocked out, about 10× regulation by the absence or presence of tryptophan was still observed. When the sequence of the beginning of the trp operon was determined, an unusual open reading frame (ORF) was seen immediately preceding the ORFs for the known structural genes for the tryptophan biosynthetic enzymes. The general structural information shown below was observed from the sequence of the trp operon. First, Yanofsky observed that the ORF contained two tandem Trp codons and the encoded protein had a Trp percent composition about ten times normal. Second, the mRNA in this region contained regions of dyad symmetry which would allow it to form two mutually exclusive secondary structures. One of the structures looked exactly like a rho-independent transcription termination signal. The other secondary structure, if formed, would prevent the formation of the terminator. This other structure is called the "preemptor". The trp operon An example is the trp operon in bacteria. When there is a high level of tryptophan in the region, it is inefficient for the bacterium to synthesize more. When the RNA polymerase binds and transcribes the trp operon, the ribosome will start translating. (This differs from eukaryotic cells, where RNA must exit the nucleus before translation starts.) The attenuator sequence, which is located between the mRNA leader sequence (5' UTR) and the trp operon gene sequence, contains four domains, where domain 3 can pair with domain 2 or domain 4. The attenuator sequence at domain 1 encodes a short leader peptide whose synthesis requires tryptophan. A high level of tryptophan will permit ribosomes to translate the attenuator sequence domains 1 and 2, allowing domains 3 and 4 to form a hairpin structure, which results in termination of transcription of the trp operon. Since the protein-coding genes are not transcribed due to rho-independent termination, no tryptophan is synthesised. In contrast, a low level of tryptophan means that the ribosome will stall at domain 1, causing domains 2 and 3 to form a different hairpin structure that does not signal termination of transcription.
Therefore, the rest of the operon will be transcribed and translated, so that tryptophan can be produced. Thus, domain 4 is an attenuator. Without domain 4, translation can continue regardless of the level of tryptophan. The attenuator sequence has its codons translated into a leader peptide, but is not part of the trp operon gene sequence. The attenuator allows more time for the attenuator sequence domains to form loop structures, but does not produce a protein that is used in later tryptophan synthesis. Attenuation is a second mechanism of negative feedback in the trp operon. While the TrpR repressor decreases transcription by a factor of 70, attenuation can further decrease it by a factor of 10, thus allowing accumulated repression of about 700-fold. Attenuation is made possible by the fact that in prokaryotes (which have no nucleus), the ribosomes begin translating the mRNA while RNA polymerase is still transcribing the DNA sequence. This allows the process of translation to directly affect transcription of the operon. At the beginning of the transcribed genes of the trp operon is a sequence of 140 nucleotides termed the leader transcript (trpL). This transcript includes four short sequences designated 1–4. Sequence 1 is partially complementary to sequence 2, which is partially complementary to sequence 3, which is partially complementary to sequence 4. Thus, three distinct secondary structures (hairpins) can form: 1–2, 2–3 or 3–4. The hybridization of strands 1 and 2 to form the 1–2 structure prevents the formation of the 2–3 structure, while the formation of 2-3 prevents the formation of 3–4. The 3–4 structure is a transcription termination sequence, once it forms RNA polymerase will disassociate from the DNA and transcription of the structural genes of the operon will not occur. Part of the leader transcript codes for a short polypeptide of 14 amino acids, termed the leader peptide. This peptide contains two adjacent tryptophan residues, which is unusual, since tryptophan is a fairly uncommon amino acid (about one in a hundred residues in a typical E. coli protein is tryptophan). If the ribosome attempts to translate this peptide while tryptophan levels in the cell are low, it will stall at either of the two trp codons. While it is stalled, the ribosome physically shields sequence 1 of the transcript, thus preventing it from forming the 1-2 secondary structure. Sequence 2 is then free to hybridize with sequence 3 to form the 2-3 structure, which then prevents the formation of the 3-4 termination hairpin. RNA polymerase is free to continue transcribing the entire operon. If tryptophan levels in the cell are high, the ribosome will translate the entire leader peptide without interruption and will only stall during translation termination at the stop codon. At this point the ribosome physically shields both sequences 1 and 2. Sequences 3 and 4 are thus free to form the 3-4 structure which terminates transcription. The result is that the operon will be transcribed only when tryptophan is unavailable for the ribosome, while the trpL transcript is constitutively expressed. To ensure that the ribosome binds and begins translation of the leader transcript immediately following its synthesis, a pause site exists in the trpL sequence. Upon reaching this site, RNA polymerase pauses transcription and apparently waits for translation to begin. This mechanism allows for synchronization of transcription and translation, a key element in attenuation. 
A similar attenuation mechanism regulates the synthesis of histidine, phenylalanine and threonine. Mechanism in the trp operon The proposed mechanism by which this mRNA secondary structure and the trp leader peptide regulate transcription of the trp biosynthetic enzymes is as follows. RNAP initiates transcription of the trp promoter. RNAP pauses at about nucleotide 90 at a secondary structure. Ribosomes engage this nascent mRNA and initiate translation of the leader peptide. RNAP is then "released" from its pause and continues transcription. When RNAP reaches the region of the potential terminator, whether it continues or not is dependent on the position of the ribosome "trailing behind". If the ribosome stalls at the tandem Trp codons, waiting for the appropriate tRNA, region 1 is sequestered within the ribosome and thus cannot base pair with region 2. This means that regions 2 and 3 become base paired before region 4 can be transcribed. This forces region 4, when it is made, to remain single stranded, preventing the formation of the region 3/4 terminator structure. Transcription will then continue. If the ribosome translates the leader peptide with no hesitation, it then covers a portion of region 2, preventing it from base pairing with region 3. Then when region 4 is transcribed, it forms a stem and loop with region 3 and transcription is terminated, generating a transcript of about 140 bases. This mechanism of control measures the amount of available charged Trp-tRNA. The location of the ribosome determines which of the alternative secondary structures forms. Other operons controlled by attenuation The discovery of this type of mechanism to control the expression of genes in a biosynthetic operon led to its identification in a wide variety of such operons for which repressors had never been discovered. Attenuation in eukaryotes Although an attenuation mechanism that involves translation while transcription is ongoing, similar to the mechanism of the trp operon (and some other amino acid biosynthetic operons), would not work in eukaryotes, there is evidence for attenuation in eukaryotes. Research conducted on microRNA processing provides evidence of eukaryotic attenuation; after co-transcriptional endonucleolytic cleavage by Drosha, the 5'→3' exonuclease XRN2 may terminate further transcription by a torpedo mechanism. References Genes VI, pp. 374–380. M. Ballarino, "Coupled RNA Processing and Transcription of Intergenic Primary MicroRNAs", Molecular and Cellular Biology, Oct. 2009, pp. 5632–5638. Gene expression
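The four-domain decision logic described above — the ribosome's position determines whether the 2-3 anti-terminator or the 3-4 terminator hairpin forms — can be summarized as a tiny decision sketch. This is purely illustrative pseudologic restating the rules given in the text, not a biochemical simulation; the function name and the string inputs are my own simplification.

def trp_attenuation_outcome(trp_level: str) -> str:
    """Decision logic of trp-operon attenuation as described in the text.

    trp_level: 'high' or 'low' (availability of charged Trp-tRNA).
    Returns which hairpin forms and whether transcription terminates."""
    if trp_level == "low":
        # Ribosome stalls at the tandem Trp codons in domain 1, so domain 2
        # is free to pair with domain 3 (the anti-terminator).
        hairpin, terminates = "2-3 anti-terminator", False
    elif trp_level == "high":
        # Ribosome translates the whole leader peptide, covering domains 1
        # and 2, so domain 3 pairs with domain 4 (Rho-independent terminator).
        hairpin, terminates = "3-4 terminator", True
    else:
        raise ValueError("trp_level must be 'high' or 'low'")
    action = "terminates" if terminates else "continues"
    return f"{hairpin} forms; transcription {action}"

if __name__ == "__main__":
    for level in ("high", "low"):
        print(level, "tryptophan ->", trp_attenuation_outcome(level))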
Attenuator (genetics)
[ "Chemistry", "Biology" ]
2,819
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
4,232,358
https://en.wikipedia.org/wiki/Hackers%20%26%20Painters
Hackers & Painters: Big Ideas from the Computer Age is a collection of essays from Paul Graham discussing hacking, programming languages, start-up companies, and many other technological issues. "Hackers & Painters" is also the title of one of those essays. The image on its cover is 'The Tower of Babel' by Pieter Bruegel. Table of contents Why Nerds Are Unpopular Hackers and Painters What You Can't Say Good Bad Attitude The Other Road Ahead How to Make Wealth Mind the Gap A Plan for Spam Taste for Makers Programming Languages Explained The Hundred-Year Language Beating the Averages Revenge of the Nerds The Dream Language Design and Research Publication data References External links Paul Graham's essays online, including some of the essays in the book Slashdot Review: Hackers & Painters Dabblers and Blowhards: Essay by a hacker and painter. Computer books American essay collections O'Reilly Media books
Hackers & Painters
[ "Technology" ]
188
[ "Works about computing", "Computer books" ]
4,232,526
https://en.wikipedia.org/wiki/Solar%20tracker
A solar tracker is a device that orients a payload toward the Sun. Payloads are usually solar panels, parabolic troughs, Fresnel reflectors, lenses, or the mirrors of a heliostat. For flat-panel photovoltaic systems, trackers are used to minimize the angle of incidence between the incoming sunlight and a photovoltaic panel, sometimes known as the cosine error. Reducing this angle increases the amount of energy produced from a fixed amount of installed power-generating capacity. In standard photovoltaic applications, it was predicted in 2008–2009 that trackers could be used in at least 85% of commercial installations greater than one megawatt from 2009 to 2012. As the pricing, reliability, and performance of single-axis trackers have improved, the systems have been installed in an increasing percentage of utility-scale projects. According to data from WoodMackenzie/GTM Research, global solar tracker shipments hit a record 14.5 gigawatts in 2017. This represents growth of 32 percent year-over-year, with similar or greater growth projected as large-scale solar deployment accelerates. In concentrator photovoltaics (CPV) and concentrated solar power (CSP) applications, trackers are used to orient the optical components of the CPV and CSP systems. The optics in concentrated solar applications accept the direct component of sunlight and therefore must be oriented appropriately to collect energy. Tracking systems are found in all concentrator applications because such systems collect the sun's energy with maximum efficiency when the optical axis is aligned with the incident solar radiation. Basic concept Sunlight has two components: the "direct beam" that carries about 90% of the solar energy and the "diffuse sunlight" that carries the remainder – the diffuse portion is the blue sky on a clear day, and is a larger proportion of the total on cloudy days. As the majority of the energy is in the direct beam, maximizing collection requires the Sun to be visible to the panels for as long as possible. However, on cloudier days the ratio of direct vs. diffuse light can be as low as 60:40 or even lower. The energy contributed by the direct beam drops off with the cosine of the angle between the incoming light and the panel. In addition, the reflectance (averaged across all polarizations) is approximately constant for angles of incidence up to around 50°, beyond which reflectance increases rapidly. For example, trackers that have accuracies of ± 5° can capture more than 99.6% of the energy delivered by the direct beam plus 100% of the diffuse light. As a result, high-accuracy tracking is not typically used in non-concentrating PV applications. The purpose of a tracking mechanism is to follow the Sun as it moves across the sky. In the following sections, in which each of the main factors is described in a little more detail, the complex path of the Sun is simplified by considering its daily east-west motion separately from its yearly north-south variation with the seasons of the year. Solar energy intercepted The amount of solar energy available for collection from the direct beam is the amount of light intercepted by the panel. This is given by the area of the panel multiplied by the cosine of the angle of incidence of the direct beam. Put another way, the energy intercepted is equivalent to the area of the shadow cast by the panel onto a surface perpendicular to the direct beam.
This cosine relationship is very closely related to the observation formalized in 1760 by Lambert's cosine law. This describes that the observed brightness of an object is proportional to the cosine of the angle of incidence of the light illuminating it. Reflective losses Not all of the intercepted light is transmitted into the panel; some is reflected at its surface. The amount reflected depends on both the refractive index of the surface material and the angle of incidence of the incoming light. The amount reflected also differs depending on the polarization of the incoming light. Incoming sunlight is a mixture of all polarizations, with equal amounts in direct sunlight. Averaged over all polarizations, the reflective losses are approximately constant at angles of incidence up to around 50°, beyond which they increase rapidly. See for example the accompanying graph, appropriate for glass. Solar panels are often coated with an anti-reflective coating, which is one or more thin layers of substances with refractive indices intermediate between those of silicon and air. This causes destructive interference in the reflected light, diminishing the reflected amount. Photovoltaic manufacturers have been working to decrease reflectance with improved anti-reflective coatings and with textured glass. Daily east-west motion of the Sun The Sun travels through 360° east to west per day, but from the perspective of any fixed location, the visible portion is 180° during an average half-day period (more in summer, slightly less in spring and fall, and significantly less in winter). Local horizon effects reduce this somewhat, making the effective motion about 150°. A solar panel in a fixed orientation between the dawn and sunset extremes will see a motion of 75° to either side, and thus, according to the table above, will lose over 75% of the energy in the morning and evening. Rotating the panels to the east and west can help recapture those losses. A tracker that only attempts to compensate for the east-west movement of the Sun is known as a single-axis tracker. Seasonal north-south motion of the Sun Due to the tilt of the Earth's axis, the Sun also moves through 46° north and south during a year. The same set of panels set at the midpoint between the two local extremes will thus see the Sun move 23° on either side. Thus according to the above table, an optimally aligned single-axis tracker (see polar aligned tracker below) will only lose 8.3% at the summer and winter seasonal extremes, or around 5% averaged over a year. Conversely a vertically- or horizontally-aligned single-axis tracker will lose considerably more as a result of these seasonal variations in the Sun's path. For example, a vertical tracker at a site at 60° latitude will lose up to 40% of the available energy in summer, while a horizontal tracker located at 25° latitude will lose up to 33% in winter. A tracker that accounts for both the daily and seasonal motions is known as a dual-axis tracker. Generally speaking, the losses due to seasonal angle changes are complicated by changes in the length of the day, increasing collection in the summer in northern or southern latitudes. This biases collection toward the summer, so if the panels are tilted closer to the average summer angles, the total yearly losses are reduced compared to a system tilted at the spring/fall equinox angle (which is the same as the site's latitude). 
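The cosine relationship described above is easy to turn into numbers, and doing so reproduces the figures quoted in this article (about 99.6% of the direct beam captured at ±5° misalignment, and under 10% direct-beam loss at 25°). The sketch below is a simple illustration of Lambert's cosine law applied to panel misalignment; it ignores reflective losses and diffuse light.

import math

def direct_beam_fraction(misalignment_deg: float) -> float:
    """Fraction of the direct beam intercepted by a flat panel that is
    misaligned from the Sun by the given angle (Lambert's cosine law)."""
    return math.cos(math.radians(misalignment_deg))

if __name__ == "__main__":
    for angle in (0, 1, 5, 10, 25, 50, 75):
        frac = direct_beam_fraction(angle)
        print(f"{angle:>2} deg off-Sun -> {frac * 100:5.1f}% of direct beam "
              f"({(1 - frac) * 100:4.1f}% loss)")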
There is considerable argument within the industry about whether the small difference in yearly collection between single- and dual-axis trackers makes the added complexity of a two-axis tracker worthwhile. A recent review of actual production statistics from southern Ontario suggested the difference was about 4% in total, which was far less than the added costs of the dual-axis systems. This compares unfavorably with the 24–32% improvement between a fixed-array and single-axis tracker. Other factors Clouds The above models assume uniform likelihood of cloud cover at different times of day or year. In different climate zones cloud cover can vary with seasons, affecting the averaged performance figures described above. Alternatively, for example in an area where cloud cover on average builds up during the day, there can be particular benefits in collecting morning sun. Atmosphere The distance that sunlight travels through the atmosphere increases as the sun approaches the horizon, as the sunlight travels diagonally through the atmosphere. As the path length through the atmosphere increases, the solar intensity reaching the collector decreases. This increasing path length is referred to as the air mass (AM) or air mass coefficient, where AM0 is at the top of the atmosphere, AM1 refers to the direct vertical path down to sea-level with Sun overhead, and AM greater than 1 refers to diagonal paths as the Sun approaches the horizon. Even though the sun may not feel particularly hot in the early mornings or during the winter months, the diagonal path through the atmosphere has a less than expected impact on the solar intensity. Even when the sun is only 15° above the horizon the solar intensity can be around 60% of its maximum value, around 50% at 10° and 25% at only 5° above the horizon. Therefore, if trackers can follow the Sun from horizon to horizon, then their solar panels can collect a significant amount of energy. Solar cell efficiency The underlying power conversion efficiency of a photovoltaic cell has a major influence on the end result, regardless of whether tracking is employed. Temperature Photovoltaic solar cell efficiency decreases with increasing temperature, at the rate of about 0.4%/°C. For example, there is about 20% higher efficiency at 10 °C in early morning or winter than at 60 °C in the heat of the day or summer. Therefore, trackers can deliver additional benefit by collecting early morning and winter energy when the cells are operating at their highest efficiency. Summary Trackers for concentrating collectors must employ high-accuracy tracking so as to keep the collector at the focus point. Trackers for non-concentrating flat-panel do not need high accuracy tracking: low power loss: under 10% loss even at 25° misalignment reflectance consistent even to around 50° misalignment diffuse sunlight contributes 10% independent of orientation, and a larger proportion on cloudy days. The benefits of tracking non-concentrating flat-panel collectors flow from the following: power loss rises rapidly beyond about 30° misalignment significant power is available even when the Sun is very close to the horizon, e.g. 
around 60% of full power at 15° above the horizon, around 50% at 10°, and even 25% at only 5° above the horizon – of particular relevance at high latitudes and/or during the winter months photovoltaic panels are around 20% more efficient in the cool of the early mornings as compared with during the heat of the day; similarly, they are more efficient in winter than summer – and effectively capturing early morning and winter sun requires tracking. Types of solar collector Solar collectors may be non-concentrating flat-panels, usually photovoltaic or hot-water, or concentrating systems, of a variety of types. Solar collector mounting systems may be fixed (manually aligned) or tracking. Different types of solar collector and their location (latitude) require different types of tracking mechanism. Tracking systems may be configured as a fixed collector / moving mirror – a Heliostat – or as a moving collector Non-tracking fixed mount Residential and small-capacity commercial or industrial rooftop solar panels and solar water heater panels are usually fixed, often flush-mounted on an appropriately-facing pitched roof. Advantages of fixed mounts over trackers include the following: Mechanical Advantages: Simple to manufacture, lower installation and maintenance costs. Wind-loading: it is easier and cheaper to provision a sturdy mount; all mounts other than fixed flush-mounted panels must be carefully designed having regard to wind loading due to greater exposure. Indirect light: approximately 10% of the incident solar radiation is diffuse light, available at any angle of misalignment with the Sun. Tolerance to misalignment: effective collection area for a flat panel is relatively insensitive to quite high levels of misalignment with the Sun – see the table and diagram at Basic concept section above – for example even a 25° misalignment reduces the direct solar energy collected by less than 10%. Fixed mounts are usually used in conjunction with non-concentrating systems; however, an important class of non-tracking concentrating collectors, of particular value in the third world, are portable solar cookers. These use relatively low levels of concentration, typically around 2 to 8 Suns and are manually aligned. Trackers Even though a fixed flat panel can be set to collect a high proportion of available noon-time energy, significant power is also available in the early mornings and late afternoons when the misalignment with a fixed panel becomes too excessive to collect a reasonable proportion of the available energy. For example, even when the Sun is only 10° above the horizon, the available energy can be around half the noon-time energy levels (or even greater depending on latitude, season, and atmospheric conditions). Thus the primary benefit of a tracking system is to collect solar energy for the longest period of the day, and with the most accurate alignment as the Sun's position shifts with the seasons. In addition, the greater the level of concentration employed, the more important accurate tracking becomes, because the proportion of energy derived from direct radiation is higher, and the region where that concentrated energy is focused becomes smaller. Fixed collector / moving mirror Many collectors cannot be moved, such as high-temperature collectors where the energy is recovered as hot liquid or gas (e.g. steam). Other examples include direct heating and lighting of buildings and fixed in-built solar cookers, such as Scheffler reflectors. 
In such cases it is necessary to employ a moving mirror so that, regardless of where the Sun is positioned in the sky, the Sun's rays are redirected onto the collector. Due to the complicated motion of the Sun across the sky, and the level of precision required to correctly aim the Sun's rays onto the target, a heliostat mirror generally employs a dual axis tracking system, with at least one axis mechanized. In different applications, mirrors may be flat or concave. Moving collector Trackers can be grouped into classes by the number and orientation of the tracker's axes. Compared to a fixed mount, a single-axis tracker increases annual output by approximately 30%, and a dual axis tracker an additional 10–20%. Photovoltaic trackers can be classified into two types: standard photovoltaic (PV) trackers and concentrated photovoltaic (CPV) trackers. Each of these tracker types can be further categorized by the number and orientation of their axes, their actuation architecture and drive type, their intended applications, their vertical supports, and foundation. Floating mount Floating islands of solar panels are being installed on reservoirs and lakes in the Netherlands, China, the UK, and Japan. The sun-tracking system controlling the direction of the panels operates automatically according to the time of year, changing position by means of ropes attached to buoys. Floating ground mount Solar trackers can be built using a "floating" foundation, which sits on the ground without the need for invasive concrete foundations. Instead of placing the tracker on concrete foundations, the tracker is placed on a gravel pan that can be filled with a variety of materials, such as sand or gravel, to secure the tracker to the ground. These "floating" trackers can sustain the same wind load as a traditional fixed mounted tracker. The use of floating trackers increases the number of potential sites for commercial solar projects since they can be placed on top of capped landfills or in areas where excavated foundations are not feasible. Motion-Free Optical Tracking Solar trackers can be built without the need for mechanical tracking equipment. These are called motion-free optical tracking. Renkube pioneered a glass based design to redirect light using motion-free optical tracking technology. Non-concentrating photovoltaic (PV) trackers Photovoltaic panels accept both direct and diffuse light from the sky. The panels on standard photovoltaic trackers gather both the available direct and diffuse light. The tracking functionality in standard photovoltaic trackers is used to minimize the angle of incidence between incoming light and the photovoltaic panel. This increases the amount of energy gathered from the direct component of the incoming sunlight. The physics behind standard photovoltaic trackers works with all standard photovoltaic module technologies. These include all types of crystalline silicon panels (either mono-Si, or multi-Si) and all types of thin film panels (amorphous silicon, CdTe, CIGS, microcrystalline). Concentrator photovoltaic (CPV) trackers The optics in CPV modules accept the direct component of the incoming light and therefore must be oriented appropriately to maximize the energy collected. In low-concentration applications, a portion of the diffuse light from the sky can also be captured. The tracking functionality in CPV modules is used to orient the optics such that the incoming light is focused to a photovoltaic collector. 
CPV modules that concentrate in one dimension must be tracked normal to the Sun in one axis. CPV modules that concentrate in two dimensions must be tracked normal to the Sun in two axes. Accuracy requirements The physics behind CPV optics requires that tracking accuracy increases as the system's concentration ratio increases. However, for a given concentration, nonimaging optics provide the widest possible acceptance angles, which may be used to reduce tracking accuracy. In typical high-concentration systems, tracking accuracy must be in the ± 0.1° range to deliver approximately 90% of the rated power output. In low concentration systems, tracking accuracy must be in the ± 2.0° range to deliver 90% of the rated power output. As a result, high-accuracy tracking systems are typical. Technologies supported Concentrated photovoltaic trackers are used with refractive and reflective concentrator systems. There are a range of emerging photovoltaic cell technologies used in these systems. These range from conventional, crystalline-silicon-based photovoltaic receivers to germanium-based triple junction receivers. Single-axis trackers Single-axis trackers have one degree of freedom that acts as an axis of rotation. The axis of rotation of single-axis trackers is typically aligned along a true North meridian. It is possible to align them in any cardinal direction with advanced tracking algorithms. There are several common implementations of single-axis trackers. These include horizontal single-axis trackers (HSAT), horizontal single-axis tracker with tilted modules (HTSAT), vertical single-axis trackers (VSAT), tilted single-axis trackers (TSAT), and polar-aligned single-axis trackers (PSAT). The orientation of the module with respect to the tracker axis is important when modeling performance. Horizontal Horizontal single axis tracker (HSAT) The axis of rotation for a horizontal single-axis tracker is horizontal with respect to the ground, and the axis can be on either a north-south line or a east-west line. The posts at either end of the axis of rotation of a horizontal single-axis tracker can be shared between trackers to lower the installation cost. This type of solar tracker is most appropriate for low-latitude regions. Field layouts with horizontal single-axis trackers are very flexible. The simple geometry means that keeping all of the axes of rotation parallel to one another is all that is required for appropriately positioning the trackers with respect to one another. Appropriate spacing can maximize the ratio of energy production to cost, with this being dependent upon local terrain and shading conditions and the time-of-day value of the energy produced. Backtracking is one means of computing the disposition of panels. Horizontal trackers typically have the face of the module oriented parallel to the axis of rotation. As a module tracks, it sweeps a cylinder that is rotationally symmetric around the axis of rotation. In single-axis horizontal trackers, a long horizontal tube is supported on bearings mounted upon pylons or frames. Panels are mounted upon the tube, and the tube will rotate on its axis to track the apparent motion of the Sun through the day. The tracking aims to minimize the angle between the beam light and the normal of the panel at any instant. Horizontal single-axis tracker with tilted modules (HTSAT) In HSATs, the modules are mounted flat at 0°, while in HTSATs, the modules are installed at a certain tilt. 
It works on the same principle as HSAT, keeping the axis of tube horizontal in north-south line and rotates the solar modules from east to west throughout the day. These trackers are usually suitable in high-latitude locations but do not take as much land space as vertical single-axis trackers (VSATs). Therefore, it brings the advantages of VSATs in a horizontal tracker and minimizes the overall cost of solar project. Vertical Vertical single-axis tracker (VSAT) The axis of rotation for vertical single-axis trackers is vertical with respect to the ground. These trackers rotate from east to west over the course of the day. Such trackers are more effective at high latitudes than horizontal single-axis trackers are. Field layouts must consider shading to avoid unnecessary energy losses and to optimize land use. Also, optimization for dense packing is limited due to the nature of the shading over the course of a year. Vertical single-axis trackers typically have the face of the module oriented at an angle with respect to the axis of rotation. As a module tracks, it sweeps a cone that is rotationally symmetric around the axis of rotation. Tilted Tilted single-axis tracker (TSAT) All trackers with axes of rotation between horizontal and vertical are considered tilted single-axis trackers. Tracker tilt angles are often limited to reduce the wind profile and decrease the elevated end height. With backtracking, they can be packed without shading perpendicular to their axes of rotation at any density. However, the packing parallel to their axes of rotation is limited by the tilt angle and the latitude. Tilted single-axis trackers typically have the face of the module oriented parallel to the axis of rotation. As a module tracks, it sweeps a cylinder that is rotationally symmetric around the axis of rotation. Dual-axis trackers Dual-axis trackers have two degrees of freedom that act as axes of rotation. These axes are typically normal to one another. The axis that is fixed with respect to the ground can be considered a primary axis. The axis that is referenced to the primary axis can be considered a secondary axis. There are several common implementations of dual-axis trackers. They are classified by the orientation of their primary axes with respect to the ground. Two common implementations are tip-tilt dual-axis trackers (TTDAT) and azimuth-altitude dual-axis trackers (AADAT). The orientation of the module with respect to the tracker axis is important when modeling performance. Dual-axis trackers typically have modules oriented parallel to the secondary axis of rotation. Dual-axis trackers allow for optimum solar energy levels due to their ability to follow the Sun vertically and horizontally. No matter where the Sun is in the sky, dual-axis trackers are able to angle themselves to point directly at the Sun. Tip-tilt A tip-tilt dual-axis tracker (TTDAT) is so named because the panel array is mounted on the top of a pole. On top of the pole is a two axis universal joint that provides both the effective horizontal rotation and vertical tilt of the panels and provides the dead load bearing capacity for the array. The tipping and tilting are managed by externally placed actuators. Movement around the horizon is driven by rolling the array around the top of the pole. This allows for great flexibility of the payload connection to the ground mounted equipment because there is no twisting of the cabling around the pole. 
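For a horizontal single-axis tracker of the kind described above, the rotation angle that minimizes the angle between the beam and the panel normal can be computed from the solar elevation and azimuth. The sketch below uses the common idealized geometry for a true north-south horizontal axis, with no axis tilt, no azimuth offset, and no backtracking; it is a simplification under those stated assumptions, not a formula quoted from this article.

import math

def hsat_rotation_deg(sun_elevation_deg: float, sun_azimuth_deg: float) -> float:
    """Ideal rotation angle (degrees from horizontal, positive toward the east)
    for a horizontal single-axis tracker whose axis runs true north-south.

    sun_azimuth_deg is measured clockwise from north. Axis tilt, axis azimuth
    offsets, and backtracking are ignored."""
    elev = math.radians(sun_elevation_deg)
    az = math.radians(sun_azimuth_deg)
    # East and up components of the solar unit vector.
    east = math.cos(elev) * math.sin(az)
    up = math.sin(elev)
    return math.degrees(math.atan2(east, up))

if __name__ == "__main__":
    # Morning sun low in the east: tracker rolls steeply toward the east.
    print(round(hsat_rotation_deg(10, 90), 1))    # ~ +80.0
    # Solar noon with the sun due south and high: tracker is nearly flat.
    print(round(hsat_rotation_deg(60, 180), 1))   # ~ 0.0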
The simple geometry means that keeping the axes of rotation parallel to one another is all that is required for appropriately positioning the trackers with respect to one another. Normally the trackers would have to be positioned at fairly low density to avoid one tracker casting a shadow on others when the Sun is low in the sky. Properly spacing trackers in an array is the only way to make sure that the morning/evening solar energy can be harvested. The morning/evening solar energy harvest is what sets the 2 axis tracker apart from fixed or 1 axis tracking. One axis trackers use "Backtracking" to account for self-shading, but this doesn't need to be an issue for 2 axis tracking. If one is going to the expense of putting up a 2 axis tracker why cut corners by limiting the evening sun, space the trackers properly and enjoy a maximized harvest. The early generation tracker axes of rotation of many tip-tilt dual-axis trackers are typically aligned either along a true north meridian or an east-west line of latitude. The sun following solar tracker described in this paragraph has a horizontal primary axis of rotation and a secondary axis of rotation that remains orthogonal to the primary axis at all times. There is no array rotation about the vertical axis (pole mount). The net rotation about the primary and secondary axes allows the array to "roll" about the vertical axis (top of pole). Given the unique capabilities of this tip-tilt configuration and controller, a totally-automatic tracking is possible for use on portable or fixed platforms. This "sun following" tracker only responds to the location of the sun or brightest area of a clouded sky (diffuse lighting). Consequently, it can follow the sun around the Horizon as it moves throughout the Arctic 24 hour summer day. There is no need for an astronomical calculation to locate the sun's position and the orientation of the tracker axes is of no particular importance and can be placed as needed. Azimuth-altitude An azimuth-altitude (or alt-azimuth) dual axis tracker (AADAT) has its primary axis (the azimuth axis) vertical to the ground. The secondary axis, often called elevation axis, is then typically normal to the primary axis. They are similar to tip-tilt systems in operation, but they differ in the way the array is rotated for daily tracking. Instead of rotating the array around the top of the pole, AADAT systems can use a large ring mounted on the ground with the array mounted on a series of rollers. The main advantage of this arrangement is the weight of the array is distributed over a portion of the ring, as opposed to the single loading point of the pole in the TTDAT. This allows AADAT to support much larger arrays. Unlike the TTDAT, however, the AADAT system cannot be placed closer together than the diameter of the ring, which may reduce the system density, especially considering inter-tracker shading. Construction and (Self-)Build As described later, the economic balance between the costs of panels and trackers. The steep drop in cost for solar panels in the early 2010s made it more challenging to find a sensible solution. As can be seen in the attached media files, most constructions use industrial and/or heavy materials unsuitable for small or craft workshops. Even commercial offers may have rather unsuitable solutions (a big rock) for stabilization. 
For a small (amateur/enthusiast) construction, the criteria that must be met include economy, stability of end product against elemental hazards, ease of handling materials, and joinery. Tracker type selection The selection of tracker type is dependent on many factors including installation size, electric rates, government incentives, land constraints, latitude, and local weather. Horizontal single-axis trackers are typically used for large distributed generation projects and utility scale projects. The combination of energy improvement, lower product cost, and lower installation complexity results in compelling economics in large deployments. In addition, the strong afternoon performance is particularly desirable for large grid-tied photovoltaic systems so that production will match the peak demand time. Horizontal single-axis trackers also add a substantial amount of productivity during the spring and summer seasons when the Sun is high in the sky. The inherent robustness of their supporting structure and the simplicity of the mechanism also result in high reliability which keeps maintenance costs low. Since the panels are horizontal, they can be compactly placed on the axle tube without danger of self-shading and are also readily accessible for cleaning. A vertical-axis tracker pivots only about a vertical axle, with the panels at a fixed, adjustable, or tracked elevation angle. Such trackers with fixed or (seasonally) adjustable angles are suitable for high latitudes, where the apparent solar path is not especially high, but which leads to long days in summer, with the Sun traveling through a long arc. Dual-axis trackers are typically used in smaller residential installations and locations with very high government feed in tariffs. Of course, that will change when the industries associated with solar realize the significance of the typical 30% loss of energy harvest at peak demand periods. Incentives for producing solar when it is needed most will drive the renewed interest in dual axis trackers. Multi-mirror concentrating PV This device uses multiple mirrors in a horizontal plane to reflect sunlight upward to a high-temperature system requiring concentrated solar power. Structural problems and expense are greatly reduced since the mirrors are not significantly exposed to wind loads. Through the employment of a patented mechanism, only two drive systems are required for each device. Because of the configuration of the device, it is especially suited for use on flat roofs and at lower latitudes. The units illustrated each produce approximately 200 peak DC watts. A multiple-mirror reflective system combined with a central power tower was employed at the Sierra SunTower, located in Lancaster, California. This generation plant, operated by eSolar, operated from 2009 to 2014. This system, which used multiple heliostats in a north-south alignment, used pre-fabricated parts and construction as a way of decreasing startup and operating costs. Drive types Active tracker Active trackers use motors and gear trains to perform solar tracking. They can use microprocessors and sensors, date-and-time-based algorithms, or a combination of both to detect the position of the sun. To control and manage the movement of these massive structures, special slewing drives are designed and rigorously tested. 
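Active trackers that rely on the "date-and-time-based algorithms" mentioned above compute the Sun's position astronomically rather than sensing it. The sketch below uses a rough textbook approximation (Cooper's declination formula plus the hour-angle relation), good only to a degree or two; real controllers use much more accurate ephemeris algorithms such as NREL's Solar Position Algorithm. The function names, inputs, and accuracy remark are assumptions of this sketch, not details from the article.

import math

def solar_position(day_of_year: int, solar_hour: float, latitude_deg: float):
    """Rough solar elevation and azimuth (degrees) from day of year, local
    solar time (hours), and latitude, using Cooper's declination formula and
    the hour-angle relation. Accuracy is only of the order of 1-2 degrees."""
    decl = math.radians(23.45 * math.sin(math.radians(360 * (284 + day_of_year) / 365)))
    hour_angle = math.radians(15 * (solar_hour - 12))   # negative before solar noon
    lat = math.radians(latitude_deg)

    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(sin_elev)

    cos_az = ((math.sin(decl) - math.sin(elev) * math.sin(lat))
              / (math.cos(elev) * math.cos(lat)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if hour_angle > 0:      # afternoon: the Sun is in the western half of the sky
        az = 360 - az
    return math.degrees(elev), az   # azimuth measured clockwise from north

if __name__ == "__main__":
    # Summer solstice, solar noon, 40 deg N: elevation should be about 73.4 deg.
    print(solar_position(172, 12.0, 40.0))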
The technologies used to direct the tracker are constantly evolving, and recent developments at Google and Eternegy have included the use of wire-ropes and winches to replace some of the more costly and more fragile components. Counter-rotating slewing drives sandwiching a fixed-angle support can be applied to create a "multi-axis" tracking method which eliminates rotation relative to longitudinal alignment. This method, if placed on a column or pillar, will generate more electricity than fixed PV, and its PV array will never rotate into a parking lot drive lane. It will also allow for maximum solar generation in virtually any parking lot lane/row orientation, including circular or curvilinear. Active two-axis trackers are also used to orient heliostats – movable mirrors that reflect sunlight toward the absorber of a central power station. As each mirror in a large field will have an individual orientation, these are controlled programmatically through a central computer system, which also allows the system to be shut down when necessary. Light-sensing trackers typically have two or more photosensors, such as photodiodes, configured differentially so that they output a null when receiving the same light flux. Mechanically, they should be omnidirectional (i.e. flat) and are aimed 90 degrees apart. This causes the sensors to balance on the steepest part of their cosine transfer functions, which translates into maximum sensitivity. For more information about controllers, see active daylighting. Since the motors consume energy, one wants to use them only as necessary. So instead of continuous motion, the heliostat is moved in discrete steps. Also, if the light is below some threshold, there would not be enough power generated to warrant reorientation. This is also true when there is not enough difference in light level from one direction to another, such as when clouds are passing overhead. Care must be taken to keep the tracker from wasting energy during cloudy periods.
Passive tracker
The most common passive trackers use a low-boiling-point compressed gas that is driven to one side or the other (by solar heat creating gas pressure) to cause the tracker to move in response to an imbalance. As this orientation is imprecise, it is unsuitable for certain types of concentrating photovoltaic collectors but works fine for common PV panel types. These will have viscous dampers to prevent excessive motion in response to wind gusts. Shader/reflectors are used to reflect early morning sunlight to "wake up" the panel and tilt it toward the Sun, which can take some hours, depending on shading conditions. The time to do this can be greatly reduced by adding a self-releasing tiedown that positions the panel slightly past the zenith (so that the fluid does not have to overcome gravity) and using the tiedown in the evening. (A slack-pulling spring will prevent release in windy overnight conditions.) A newly emerging type of passive tracker for photovoltaic solar panels uses a hologram behind stripes of photovoltaic cells, so that sunlight passes through the transparent part of the module and reflects off the hologram. This allows sunlight to hit the cell from behind, thereby increasing the module's efficiency. Also, the panel does not have to move, since the hologram always reflects sunlight from the correct angle towards the cells.
Manual tracking
In some developing nations, drives have been replaced by operators who adjust the trackers.
This has the benefits of robustness, having staff available for maintenance, and creating employment for the population in the vicinity of the site.
Rotating buildings
In Freiburg im Breisgau, Germany, Rolf Disch built the Heliotrop in 1996, a residential building that rotates with the sun and has an additional dual-axis photovoltaic sail on the roof. It produces four times the amount of energy the building consumes. The Gemini house is a unique example of a vertical-axis tracker. This cylindrical house in Austria (latitude above 45 degrees north) rotates in its entirety to track the Sun, with vertical solar panels mounted on one side of the building, rotating independently, allowing control of the natural heating from the Sun. ReVolt House is a rotating, floating house designed by TU Delft students for the Solar Decathlon Europe competition in Madrid. The house was completed in September 2012. An opaque façade turns itself towards the Sun in summer to prevent the interior from heating up. In winter, a glass façade faces the Sun for passive solar heating of the house. Since the house is floating frictionlessly on water, rotating it does not require much energy.
Disadvantages
Trackers add cost and maintenance to the system – if they add 25% to the cost, and improve the output by 25%, then the same performance can be obtained by making the system 25% larger, eliminating the additional maintenance. Tracking was very cost effective in the past, when photovoltaic modules were expensive compared to today. Because they were expensive, it was important to use tracking to minimize the number of panels used in a system with a given power output. But as panels get cheaper, the cost effectiveness of tracking versus using a greater number of panels decreases. However, in off-grid installations where batteries store power for overnight use, a tracking system reduces the hours that stored energy is used, thus requiring less battery capacity. As the batteries themselves are expensive (either traditional lead-acid stationary cells or newer lithium-ion batteries), their cost needs to be included in the cost analysis. Tracking is also not suitable for typical residential rooftop photovoltaic installations. Since tracking requires that panels tilt or otherwise move, provisions must be made to allow this. This requires that panels be offset a significant distance from the roof, which requires expensive racking and increases wind load. Also, such a setup would not make for an aesthetically pleasing installation on residential rooftops. Because of this (and the high cost of such a system), tracking is not used on residential rooftop installations, and is unlikely to ever be used in such installations. This is especially true as the cost of photovoltaic modules continues to decrease, which makes increasing the number of modules for more power the more cost-effective option. Tracking can be (and sometimes is) used for residential ground-mount installations, where greater freedom of movement is possible. Tracking can also cause shading problems. As the panels move during the course of the day, it is possible that, if the panels are located too close to one another, they may shade one another due to profile angle effects. As an example, if one has several panels in a row from east to west, there will be no shading during solar noon, but in the afternoon, panels could be shaded by their west neighboring panel if they are sufficiently close.
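As a rough geometric check (a simplified illustration, not taken from the article's sources, assuming flat ground, straight parallel rows, a collector of slant width W tilted at angle β toward the sun, and a solar profile angle α measured in the vertical plane perpendicular to the rows), the row-to-row pitch P needed to avoid mutual shading is approximately

P ≥ W·cos β + (W·sin β) / tan α

For example, with W = 2 m, β = 45° and α = 20°, this gives P ≥ 2(0.707) + 2(0.707)/0.364 ≈ 5.3 m, more than two and a half times the collector width; backtracking, described below, trades a small deviation from perfect tracking for a tighter pitch.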
This means that panels must be spaced sufficiently far apart to prevent shading in systems with tracking, which can reduce the available power from a given area during the peak Sun hours. This is not a big problem if there is sufficient land area to space the panels widely. But it will reduce output during certain hours of the day (i.e. around solar noon) compared to a fixed array. Optimizing this trade-off mathematically is called backtracking. Further, single-axis tracking systems are prone to becoming unstable at relatively modest wind speeds (galloping). This is due to their torsional instability. Anti-galloping measures such as automatic stowing and external dampers must be implemented.
See also
Air mass coefficient
Bifacial solar cells - vertical bifacial solar array
Heliostat
Solar energy
Sun path
Nextracker
References
Solar energy Tracking Photovoltaics
Solar tracker
[ "Technology" ]
7,688
[ "Tracking", "Wireless locating" ]
4,232,656
https://en.wikipedia.org/wiki/Trakhtenbrot%27s%20theorem
In logic, finite model theory, and computability theory, Trakhtenbrot's theorem (due to Boris Trakhtenbrot) states that the problem of validity in first-order logic on the class of all finite models is undecidable. In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable). Trakhtenbrot's theorem implies that Gödel's completeness theorem (which is fundamental to first-order logic) does not hold in the finite case. It also seems counter-intuitive that being valid over all structures is 'easier' than being valid over just the finite ones. The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes".
Mathematical formulation
We follow the formulation of Ebbinghaus and Flum.
Theorem
Satisfiability for finite structures is not decidable in first-order logic. That is, the set {φ | φ is a sentence of first-order logic that is satisfied in some finite structure} is undecidable. (p. 127, Th. 7.2.1 in Ebbinghaus and Flum)
Corollary
Let σ be a relational vocabulary with at least one relation symbol of arity at least 2. The set of σ-sentences valid in all finite structures is not recursively enumerable.
Remarks
This implies that Gödel's completeness theorem fails in the finite, since completeness implies recursive enumerability.
It follows that there is no recursive function f such that: if φ has a finite model, then it has a model of size at most f(φ). In other words, there is no effective analogue of the Löwenheim–Skolem theorem in the finite.
Intuitive proof
This proof is taken from Chapter 10, sections 4 and 5, of Mathematical Logic by H.-D. Ebbinghaus.
As in the most common proof of Gödel's first incompleteness theorem, via the undecidability of the halting problem, for each Turing machine M there is a corresponding arithmetical sentence φM, effectively derivable from M, such that φM is true if and only if M halts on the empty tape. Intuitively, φM asserts "there exists a natural number that is the Gödel code for the computation record of M on the empty tape that ends with halting".
If the machine M does halt in finitely many steps, then the complete computation record is also finite, and there is a finite initial segment of the natural numbers such that the arithmetical sentence φM is also true on this initial segment. Intuitively, this is because in this case establishing φM requires the arithmetic properties of only finitely many numbers.
If the machine M does not halt in finitely many steps, then φM is false in any finite model, since there is no finite computation record of M that ends with halting.
Thus, if M halts, φM is true in some finite model. If M does not halt, φM is false in all finite models. So, M does not halt if and only if ¬φM is true over all finite models. The set of machines that do not halt is not recursively enumerable, so the set of valid sentences over finite models is not recursively enumerable.
Alternative proof
In this section we exhibit a more rigorous proof from Libkin. Note that in the above statement the corollary also entails the theorem, and this is the direction we prove here.
Theorem
For every relational vocabulary τ with at least one binary relation symbol, it is undecidable whether a sentence φ of vocabulary τ is finitely satisfiable.
Proof
According to the previous lemma, we can in fact use finitely many binary relation symbols. The idea of the proof is similar to the proof of Fagin's theorem, and we encode Turing machines in first-order logic.
What we want to prove is that for every Turing machine M we can construct a sentence φM of vocabulary τ such that φM is finitely satisfiable if and only if M halts on the empty input; this is equivalent to the halting problem and therefore undecidable.
Let M = ⟨Q, Σ, Δ, δ, q0, Qa, Qr⟩ be a deterministic Turing machine with a single infinite tape, where Q is the set of states, Σ is the input alphabet, Δ is the tape alphabet, δ is the transition function, q0 is the initial state, and Qa and Qr are the sets of accepting and rejecting states. Since we are dealing with the problem of halting on an empty input, we may assume w.l.o.g. that Δ = {0,1} and that 0 represents a blank, while 1 represents some tape symbol.
We define τ so that we can represent computations:
τ := {<, min, T0(⋅,⋅), T1(⋅,⋅), (Hq(⋅,⋅))(q ∈ Q)}
where:
< is a linear order and min is a constant symbol for the minimal element with respect to < (our finite domain will be associated with an initial segment of the natural numbers).
T0 and T1 are tape predicates. Ti(s,t) indicates that position s at time t contains i, where i ∈ {0,1}.
Hq's are head predicates. Hq(s,t) indicates that at time t the machine is in state q, and its head is in position s.
The sentence φM states that (i) <, min, the Ti's and the Hq's are interpreted as above and (ii) that the machine eventually halts. The halting condition is equivalent to saying that Hq∗(s, t) holds for some s, t and q∗ ∈ Qa ∪ Qr and that, after that point, the configuration of the machine does not change. The configurations of a halting computation (a non-halting computation is not finite) can be represented by a finite τ-structure which satisfies the sentence.
The sentence φM is: φM ≡ α ∧ β ∧ γ ∧ η ∧ ζ ∧ θ. We break it down by components:
α states that < is a linear order and that min is its minimal element.
γ defines the initial configuration of M: it is in state q0, the head is in the first position, and the tape contains only zeros: γ ≡ Hq0(min,min) ∧ ∀s T0(s, min).
η states that in every configuration of M, each tape cell contains exactly one element of Δ: ∀s∀t(T0(s, t) ↔ ¬T1(s, t)).
β imposes a basic consistency condition on the predicates Hq's: at any time the machine is in exactly one state.
ζ states that at some point M is in a halting state.
θ consists of a conjunction of sentences stating that the Ti's and Hq's are well behaved with respect to the transitions of M. As an example, let δ(q,0) = (q', 1, left), meaning that if M is in state q reading 0, then it writes 1, moves the head one position to the left and goes into the state q'. We represent this condition by the disjunction of θ0 and θ1, which are built from auxiliary formulas θ2 and θ3; here s-1 and t+1 are first-order definable abbreviations for the predecessor and successor according to the ordering <. The sentence θ0 assures that the tape content in position s changes from 0 to 1, the state changes from q to q', the rest of the tape remains the same, and the head moves to s-1 (i.e. one position to the left), assuming s is not the first position on the tape. If it is, then everything is handled by θ1: everything is the same, except the head does not move to the left but stays put.
If φM has a finite model, then such a model represents a computation of M that starts with the empty tape (i.e. a tape containing all zeros) and ends in a halting state.
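For concreteness, the consistency condition β and the halting condition ζ, which are described above only in words, can be written out. The following is one standard formulation offered for illustration (Libkin's exact formulas may differ in detail); the big conjunctions and disjunctions range over the finitely many states in Q:

β ≡ (∀t ∃s ⋁q∈Q Hq(s, t)) ∧ (∀t ∀s ∀s′ ⋀q≠q′ ¬(Hq(s, t) ∧ Hq′(s′, t))) ∧ (∀t ∀s ∀s′ ⋀q∈Q ((Hq(s, t) ∧ Hq(s′, t)) → s = s′))

ζ ≡ ∃s ∃t ⋁q∈Qa∪Qr Hq(s, t)

The first conjunct of β says that at every time the machine is in at least one state, the second that it is in at most one state, and the third that its head occupies at most one position.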
If M halts on the empty input, then the set of all configurations of the halting computation of M (coded with <, the Ti's and the Hq's) is a model of φM, which is finite, since the set of all configurations of a halting computation is finite. It follows that M halts on the empty input iff φM has a finite model. Since halting on the empty input is undecidable, the question of whether φM has a finite model (equivalently, whether φM is finitely satisfiable) is also undecidable (recursively enumerable, but not recursive). This concludes the proof.
Corollary
The set of finitely satisfiable sentences is recursively enumerable.
Proof
Enumerate all pairs (𝔄, φ) where 𝔄 is a finite structure, φ is a sentence, and 𝔄 satisfies φ.
Corollary
For any vocabulary containing at least one binary relation symbol, the set of all finitely valid sentences is not recursively enumerable.
Proof
From the previous corollary, the set of finitely satisfiable sentences is recursively enumerable. Assume that the set of all finitely valid sentences is recursively enumerable. Since ¬φ is finitely valid iff φ is not finitely satisfiable, we conclude that the set of sentences which are not finitely satisfiable is recursively enumerable. If both a set A and its complement are recursively enumerable, then A is recursive. It follows that the set of finitely satisfiable sentences is recursive, which contradicts Trakhtenbrot's theorem.
References
Boolos, Burgess, Jeffrey. Computability and Logic, Cambridge University Press, 2002.
Simpson, S. "Theorems of Church and Trakhtenbrot". 2001.
Finite model theory Computability theory Undecidable problems
Trakhtenbrot's theorem
[ "Mathematics" ]
2,107
[ "Mathematical theorems", "Foundations of mathematics", "Mathematical logic", "Computational problems", "Finite model theory", "Undecidable problems", "Model theory", "Computability theory", "Mathematical problems", "Theorems in the foundations of mathematics" ]
4,233,727
https://en.wikipedia.org/wiki/Aurea%20Alexandrina
Aurea Alexandrina was an ancient opiate. It is called Aurea from the gold which enters its composition, and Alexandrina for the physician Nicolaus Myresus Alexandrinus, who invented it. It was considered to be a good preservative against colic and apoplexy. References Opioids Antidotes
Aurea Alexandrina
[ "Chemistry" ]
73
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
4,233,899
https://en.wikipedia.org/wiki/Dialogo%20de%20Cecco%20di%20Ronchitti%20da%20Bruzene%20in%20perpuosito%20de%20la%20stella%20Nuova
Dialogo de Cecco di Ronchitti da Bruzene in perpuosito de la stella Nuova (Dialogue of Cecco di Ronchitti of Brugine concerning the New star) is the title of an early 17th-century pseudonymous pamphlet ridiculing the views of an aspiring Aristotelian philosopher, Antonio Lorenzini da Montepulciano, on the nature and properties of Kepler's Supernova, which had appeared in October 1604. The pseudonymous Dialogue was written in the coarse language of a rustic Paduan dialect, and first published in about March, 1605, in Padua. A second edition was published later the same year in Verona. Antonio Favaro republished the contents of the pamphlet in its original language in 1881, with annotations and a commentary in Italian. He republished it again in Volume 2 of the National Edition of Galileo's works in 1891, along with a translation into standard Italian. An English translation was published by Stillman Drake in 1976. The Dialogo is dedicated to Antonio Querenghi. Scholars agree that the pamphlet was written either by Galileo Galilei or one of his followers, Girolamo Spinelli, or by both in collaboration, but do not agree on the extent of the contribution—if any—made by each of them to its composition. Footnotes Bibliography 1605 books Astronomy pamphlets Historical physics publications History of astronomy 1605 in science Philosophy of science books
Dialogo de Cecco di Ronchitti da Bruzene in perpuosito de la stella Nuova
[ "Astronomy" ]
291
[ "History of astronomy", "Works about astronomy", "Astronomy stubs", "Astronomy book stubs", "Astronomy pamphlets" ]
4,234,010
https://en.wikipedia.org/wiki/3D%20Systems
3D Systems Corporation is an American company based in Rock Hill, South Carolina, that engineers, manufactures, and sells 3D printers, 3D printing materials, 3D printed parts, and application engineering services. The company creates product concept models, precision and functional prototypes, master patterns for tooling, as well as production parts for direct digital manufacturing. It uses proprietary processes to fabricate physical objects using input from computer-aided design and manufacturing software, or 3D scanning and 3D sculpting devices. 3D Systems' technologies and services are used in the design, development, and production stages of many industries, including aerospace, automotive, healthcare, dental, entertainment, and durable goods. The company offers a range of professional- and production-grade 3D printers, as well as software, materials, and the online rapid part printing service on demand. It is notable within the 3D printing industry for developing stereolithography and the STL file format. Chuck Hull, CTO and former president, pioneered stereolithography and obtained a patent for the technology in 1986. As of 2020, 3D Systems employed over 2,400 people in 25 offices worldwide. History 3D Systems was founded in Valencia, California, by Chuck Hull, the inventor and patent-holder of the first stereolithography (SLA) rapid prototyping system. Prior to Hull's introduction of SLA rapid prototyping, concept models required extensive time and money to produce. The innovation of SLA reduced these resource expenditures while increasing the quality and accuracy of the resulting model. Early SLA systems were complex and costly, and required extensive redesigns before achieving commercial viability. Primary issues concerned hydrodynamic and chemical complications. In 1996, the introduction of solid-state lasers permitted Hull and his team to reformulate their materials. Engineers in transportation, healthcare, and consumer products helped fuel early phases of 3D Systems' rapid prototyping research and development. These industries remain key followers of 3D Systems' technology. In late 2001, 3D Systems began an acquisitions program that expanded the company's technology through ownership of software, materials, printers, and printable content, as well as access to the skills of engineers and designers. The rate of 3D Systems' acquisitions (16 in 2011) raised questions with regard to the task facing the company's management team. Other onlookers pointed to the encompassing scope of the acquisitions as indicating calculated steps by 3D Systems to consolidate the 3D printing industry under one roof and logo, and to become capable of servicing each link in the scan/create-to-print chain. In 2003, Hull was succeeded by Avi Reichental. Both Reichental and Hull are listed among the top twenty most influential people in rapid technologies by TCT Magazine. Hull remains an active member of 3D Systems' board and serves as the company's Chief Technology Officer and Executive Vice President. In 2005, 3D Systems relocated its headquarters to Rock Hill, South Carolina, citing a favorable business climate, a sustained lower cost of doing business, and significant investment and tax benefits as reasons for the move. In May 2011, 3D Systems transferred from Nasdaq (TDSC) to the New York Stock Exchange (DDD). In January 2012 3D Systems acquired Z Corporation for US$137 million. 
That same year a Gray Wolf Report predicted 3D Systems' rate of growth to be unsustainable, pointing to inflated impressions from acquisitions as a corporate misstatement of organic growth. 3D Systems responded to this article on November 19, 2012, claiming it to "contain materially false statements and erroneous conclusions that we believe defamed the company and its reputation and resulted in losses to our shareholders". In January 2014 it was announced that 3D Systems had acquired the Burbank, CA-based collectibles company Gentle Giant Studios, which designs, develops, and manufactures three-dimensional representations of characters from a variety of globally recognized franchises, including Marvel, Disney, AMC’s The Walking Dead, Avatar, Harry Potter and Star Wars. In July 2014, 3D Systems announced the acquisition of Israeli medical imaging company Simbionix for . In September 2014, 3D Systems acquired the Leuven, Belgium-based LayerWise, a principal provider of direct metal 3D printing and manufacturing services spun off from KU Leuven. The terms of the acquisition were not disclosed by either company. In January 2015, 3D Systems acquired the 3D printer manufacturer botObjects, the first company to commercialize a full-color printer using the fused filament fabrication technique. botObjects was founded by Martin Warner (CEO) and Mike Duma (CTO). botObjects' proprietary 5-color CMYKW cartridge system was claimed to be able to generate color combinations and gradients by mixing primary printing colors. There was some skepticism about botObjects' claims. In April 2015, 3D Systems announced its acquisition of the Chinese Easyway Group, creating 3D Systems China. Easyway is a Chinese 3D printing sales and service provider, with key operations in Shanghai, Wuxi, Beijing, Guangdong, and Chongqing. In October 2015, Reichental stepped down as the president and CEO of 3D Systems, Inc. and was replaced on an interim basis by the company's chief legal officer Andrew Johnson. Vyomesh Joshi (VJ) was appointed as president and CEO on April 4, 2016. On May 14, 2020, the 3D Systems board named Jeff Graves as president and CEO, effective May 26. He remains the CEO as of February 17, 2023. Technology 3D Systems manufactures stereolithography (SLA), fused deposition modeling (FDM), selective laser sintering (SLS), color-jet printing (CJP), multi-jet printing (MJP), and direct metal printing (DMP, a version of SLS that uses metal powder) systems. Each technology uses digital 3D data to create parts through an additive layer-by-layer process. The systems vary in their materials, print capacities, and applications. Color jet printing uses inkjet technology to deposit a liquid binder across a bed of powder. Powder is released and spread with a roller to form each new layer. This technology was originally developed by Z Corporation. Multi-jet printing refers to the process of depositing liquid photopolymers onto a build surface using inkjet technology. A high resolution is attainable, with a support material that can be easily removed in post-processing. Products and patents As part of 3D Systems' effort to consolidate 3D printing under one company, its products span a range of 3D printers and print products to target users of its technologies across industries. 3D Systems offers both professional and production printers. In addition to printers, 3D Systems offers content creation software, including reverse engineering software and organic 3D modeling software. 
Following a razor and blades model, 3D Systems offers more than one hundred materials to be used with its printers, including waxes, rubber-like materials, metals, composites, plastics and nylons. 3D Systems is a closed-source company, using in-house technologies for product development and patents to protect their technologies from competitors. Critics of the closed-source model have blamed seemingly slow development and innovation in 3D printing not on a lack of technology, but on a lack of open information sharing within the industry, and supporters argue that the right to patents inspires and motivates higher-quality innovations, leading to a better and more impressive final product. In November 2012, 3D Systems filed a lawsuit against prosumer 3D printer company Formlabs and the Kickstarter crowdfunding website over Formlabs' attempt to fund a printer which it claimed infringed its patent on "Simultaneous multiple layer curing in stereolithography." The legal procedure lasted more than two years and was significant enough to be covered in a Netflix documentary about 3D printing, called "Print the Legend". 3D Systems has applied for patents for the following innovations and technologies: the rapid prototyping and manufacturing system and method; radiation-curable compositions useful in image projection systems; compensation of actinic radiation intensity profiles for 3D modelers; apparatus and methods for cooling laser-sintered parts; radiation-curable compositions useful in solid freeform fabrication systems; apparatus for 3D printing using imaged layers; compositions and methods for selective deposition modeling; edge smoothness with low-resolution projected images for use in solid imaging; an elevator and method for tilting a solid image build platform for reducing air entrapment and for build release; selective deposition modeling methods for improved support-object interface; region-based supports for parts produced by solid freeform fabrication; additive manufacturing methods for improved curl control and sidewall quality; support and build material and applications. Applications and industries 3D Systems' products and services are used across industries to assist, either in part or in full, the design, manufacture and/or marketing processes. 3D Systems' technologies and materials are used for prototyping and the production of functional end-use parts, in addition to fast, precise design communication. Current 3D Systems-reliant industries include automotive, aerospace and defense, architecture, dental and healthcare, consumer goods, and manufacturing. Examples of industry-specific applications include: Aerospace, for the manufacture and tooling of complex, durable and lighter-weight flight parts Architecture, for structure verification, design review, client concept communication, reverse structure engineering, and expedited scaled modeling Automotive, for design verification, difficult visualizations, and new engine development Defense, for lightweight flight and surveillance parts and the reduction of inventory with on-demand printing Dentistry, for restorations, molds and treatments. Invisalign orthodontics devices use 3D Systems' technologies. Education, for equation and geometry visualizations, art education, and design initiatives Entertainment, for the manufacture and prototyping of action figures, toys, games and game components; printing of sustainable guitars and basses, multifunction synthesizers, etc. 
Healthcare, for customized hearing aids and prosthetics, improved medicine delivery methods, respiratory devices, therapeutics, and flexible endoscopy and laparoscopy devices for improved procedures and recovery times
Manufacturing, for faster product development cycles, mold production, prototypes, and design troubleshooting
For industries such as aerospace and automotive, 3D Systems' technologies have reduced the time needed to incorporate design drafts and enabled the production of more efficient parts of lighter weight. Because 3D printing builds layer-by-layer according to design, it does not need to accommodate the traditional manufacturing tools of subtractive methods, often resulting in lighter parts and more efficient geometries.
Operations
In 2007, the company consolidated its offices, operations, and research and development functions into a new global headquarters in Rock Hill, South Carolina, US. About half of the headquarters consists of research and development laboratories, including a Rapid Manufacturing Center (RMC) with 3D Systems' rapid prototyping, rapid manufacturing and 3D printing systems at work. With customers in 80 countries, 3D Systems has over 2100 employees in 25 worldwide locations, including San Francisco, Leuven, France, Germany, Italy, Switzerland, South Korea, Brazil, the United Kingdom, China and Japan. The company has more than 359 U.S. and foreign patents. In 2019, the company consolidated resources within its On Demand domestic rapid printing service locations into Littleton, Seattle, Lawrenceburg, and Wilsonville. Restructuring and additions were made to the Lawrenceburg facility for future expansions and growth, which nearly doubled its size.
Community involvement and partnerships
3D Systems is involved in a multi-year agreement with the Smithsonian Institution as part of an effort to strengthen collections' stewardship and increase collection accessibility through 3D representations. In 2012, 3D Systems began partnering with the Scholastic Art & Writing Awards in the Future New category, where three winners are awarded a $1000 scholarship in addition to the prizes and recognition granted to winners by the Scholastic Awards, and contributed two production-grade 3D printers to the National Network for Manufacturing Innovation (NNMI), which aims to re-localize manufacturing and increase US manufacturing competitiveness. 3D Systems is also a corporate underwriter of the National Children's Oral Health Foundation (NCOHF), which delivers educational, preventative and treatment oral health services to children in at-risk populations. On February 18, 2014, Ekso Bionics debuted the first-ever 3D-printed hybrid exoskeleton in collaboration with 3D Systems.
See also
List of 3D printer manufacturers
References
External links
1986 establishments in California Companies listed on the New York Stock Exchange 3D printer companies Computer-aided design Manufacturing companies based in South Carolina Technology companies established in 1986 American companies established in 1986 Manufacturing companies established in 1986 Multinational companies headquartered in the United States Technology companies of the United States Fused filament fabrication Rock Hill, South Carolina
3D Systems
[ "Engineering" ]
2,598
[ "Computer-aided design", "Design engineering" ]
4,234,170
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare%20Safety%20Scanner
Windows Live OneCare Safety Scanner (formerly Windows Live Safety Center and codenamed Vegas) was an online scanning, PC cleanup, and diagnosis service that helped remove viruses, spyware/adware, and other malware. It was a free web service that was part of Windows Live. On November 18, 2008, Microsoft announced the discontinuation of Windows Live OneCare, offering users a new free anti-malware suite, Microsoft Security Essentials, which became available in the second half of 2009. However, Windows Live OneCare Safety Scanner, under the same branding as Windows Live OneCare, was not discontinued at that time. The service was officially discontinued on April 15, 2011, and replaced with Microsoft Safety Scanner.
Overview
Windows Live OneCare Safety Scanner offered free online scanning and protection from threats. The scanner had to be downloaded and installed on the computer it was to scan. The "Full Service Scan" looked for common PC health issues such as viruses, temporary files, and open network ports. It searched for and removed viruses, improved a computer's performance, and removed unnecessary clutter from the PC's hard disk. The user could choose between a "Full Scan" (which could be customized) and a "Quick Scan". The "Full Scan" checked for viruses (comprehensive scan or quick scan), hard disk performance (disk fragmentation scan and/or disk cleanup scan) and network safety (open port scan). The "Quick Scan" checked for viruses only, and only in specific areas of the computer; it was faster than the full scan, hence the name. The service also provided a virus database, information about online threats, and general computer security documentation and tools.
Limits
The virus scanner on the Windows Live OneCare Safety Scanner site ran a scan of the user's computer only when the site was visited. It did not run periodic scans of the system, and did not provide features to prevent viruses from infecting the computer at the time or thereafter; it simply resolved detected infections. Many users who posted on the Product Feedback forum reported script errors relating to Internet Explorer 7 (Internet Explorer being the only browser the service supported). The OneCare Safety Scanner team actively worked on these problems, many of them registry-related.
References
OneCare Safety Scanner Computer security software Web applications 2006 software
Windows Live OneCare Safety Scanner
[ "Engineering" ]
483
[ "Cybersecurity engineering", "Computer security software" ]
4,234,662
https://en.wikipedia.org/wiki/Raymond%20Daudel
Raymond Daudel (2 February 1920 – 20 June 2006) was a French theoretical and quantum chemist. Trained as a physicist, he was an assistant to Irène Joliot-Curie at the Radium Institute. Daudel spent almost the entirety of his career as a professor at the Sorbonne and director of a laboratory of the Centre National de la Recherche Scientifique (CNRS). He is quoted as saying that the latter "was much better because the CNRS was very rich". This allowed Daudel to attract many co-workers from elsewhere in France and internationally. Raymond Daudel was Officier de la Légion d'honneur and Officier de l'Ordre National du Mérite. He served as President of the European Academy of Arts, Sciences and Humanities, in Paris, France. Daudel was a founding member and Honorary President of the International Academy of Quantum Molecular Science. An author as well as an academic, Raymond Daudel authored several books, including Quantum chemistry, originally with R. Lefebvre and C. Moser in 1959 (Interscience Publishers, Inc., New York) and later with G. Leroy, D. Peeters, and M. Sana, published by Wiley in 1983. He was responsible for the organization of the first International Congress in Quantum Chemistry, held in Menton, France, in 1973.
References
20th-century French chemists 1920 births 2006 deaths Academic staff of the University of Paris Theoretical chemists Members of the International Academy of Quantum Molecular Science Members of the French Academy of Sciences Officers of the Legion of Honour Research directors of the French National Centre for Scientific Research
Raymond Daudel
[ "Chemistry" ]
337
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
4,234,672
https://en.wikipedia.org/wiki/Roland%20Fra%C3%AFss%C3%A9
Roland Fraïssé (; 12 March 1920 – 30 March 2008) was a French mathematical logician. Life Fraïssé received his doctoral degree from the University of Paris in 1953. In his thesis, Fraïssé used the back-and-forth method to determine whether two model-theoretic structures were elementarily equivalent. This method of determining elementary equivalence was later formulated as the Ehrenfeucht–Fraïssé game. Fraïssé worked primarily in relation theory. Another of his important works was the Fraïssé construction of a Fraïssé limit of finite structures. He also formulated Fraïssé's conjecture on order embeddings, and introduced the notion of compensor in the theory of posets. Most of his career was spent as Professor at the University of Provence in Marseille, France. Selected publications Sur quelques classifications des systèmes de relations, thesis, University of Paris, 1953; published in Publications Scientifiques de l'Université d'Alger, series A 1 (1954), 35–182. Cours de logique mathématique, Paris: Gauthier-Villars Éditeur, 1967; second edition, 3 vols., 1971–1975; tr. into English and ed. by David Louvish as Course of Mathematical Logic, 2 vols., Dordrecht: Reidel, 1973–1974. Theory of relations, tr. into English by P. Clote, Amsterdam: North-Holland, 1986; rev. ed. 2000. References French logicians Model theorists Academic staff of the University of Provence 20th-century French mathematicians 21st-century French mathematicians 1920 births 2008 deaths Mathematical logicians French male non-fiction writers 20th-century French philosophers 20th-century French male writers University of Paris alumni
Roland Fraïssé
[ "Mathematics" ]
355
[ "Model theorists", "Mathematical logic", "Model theory", "Mathematical logicians" ]
4,234,786
https://en.wikipedia.org/wiki/Volvariella%20volvacea
Volvariella volvacea (also known as paddy straw mushroom or straw mushroom) is a species of edible mushroom cultivated throughout East and Southeast Asia and used extensively in Asian cuisine. They are often available fresh in regions where they are cultivated, but elsewhere are more frequently found canned or dried. Worldwide, straw mushrooms are the third-most-consumed mushroom.
Description
In their button stage, straw mushrooms resemble poisonous death caps, but can be distinguished by several mycological features, including their pink spore print (spore prints of death caps are white). The two mushrooms have different distributions, with the death cap generally not found where the straw mushroom grows natively, but immigrants, particularly those from Southeast Asia to California and Australia, have been poisoned due to misidentification.
Uses
Straw mushrooms are grown on rice straw beds and are most commonly picked when immature (often labelled "unpeeled"), during their button or egg phase, and before the veil ruptures. They are adaptable, taking four to five days to mature, and are most successfully grown in subtropical climates with high annual rainfall. No record has been found of their cultivation before the 19th century.
Nutrition
One cup of straw mushrooms is nutritionally dense and provides food energy, 27.7 μg selenium (50.36% of RDA), 699 mg sodium (46.60%), 2.6 mg iron (32.50%), 0.242 mg copper (26.89%), 69 μg vitamin B9 (folate) (17.25%), 111 mg phosphorus (15.86%), 0.75 mg vitamin B5 (pantothenic acid) (15.00%), 6.97 g protein (13.94%), 4.5 g total dietary fiber (11.84%), and 1.22 mg zinc (11.09%).
References
External links
Straw Mushroom
http://www.indexfungorum.org/Names/SynSpecies.asp?RecordID=307802
http://www.indexfungorum.org/Names/NamesRecord.asp?RecordID=307802
https://doi.org/10.1016/j.jfma.2019.09.008
Pluteaceae Chinese edible mushrooms Fungi described in 1786 Fungi in cultivation Fungi of Asia Fungus species
Volvariella volvacea
[ "Biology" ]
502
[ "Fungi", "Fungus species" ]
4,234,887
https://en.wikipedia.org/wiki/AP%20Computer%20Science
The Advanced Placement (AP) Computer Science (shortened to AP Comp Sci or APCS) program includes two Advanced Placement courses and examinations covering the field of computer science. They are offered by the College Board to high school students as an opportunity to earn college credit for college-level courses. The program consists of two current courses (Computer Science Principles and Computer Science A) and one discontinued course (Computer Science AB). AP Computer Science was taught using Pascal for the 1984–1998 exams, C++ for 1999–2003, and Java since 2004. Courses There are two AP computer science courses currently offered. Computer Science Principles is considered to be a more "big picture" course than the programming-intensive Computer Science A. AP Computer Science A AP Computer Science A is a programming-based course, equivalent to a first-semester–level college course. AP CSA emphasizes object-oriented programming and is taught using the programming language of Java. The course has an emphasis on problem-solving using data structures and algorithms. AP Computer Science Principles AP Computer Science Principles is an introductory college-level course in computer science with an emphasis on computational thinking and the impacts of computing. The course has no designated programming language, and teaches algorithms and programming, complementing Computer Science A. AP Computer Science AB (discontinued) AP Computer Science AB included all the topics of AP Computer Science A, as well as a more formal and a more in-depth study of algorithms, data structures, and data abstraction. For example, binary trees were studied in AP Computer Science AB but not in AP Computer Science A. The use of recursive data structures and dynamically allocated structures were fundamental to AP Computer Science AB. AP Computer Science AB was equivalent to a full-year college course. Due to low numbers of students taking the exam, AP Computer Science AB was discontinued following the May 2009 exam administration. See also Computer science education References Further reading Computer Science Computer science education
AP Computer Science
[ "Technology" ]
390
[ "Computer science education", "Computer science" ]
4,234,894
https://en.wikipedia.org/wiki/AP%20Calculus
Advanced Placement (AP) Calculus (also known as AP Calc, Calc AB / BC, AB / BC Calc or simply AB / BC) is a set of two distinct Advanced Placement calculus courses and exams offered by the American nonprofit organization College Board. AP Calculus AB covers basic introductions to limits, derivatives, and integrals. AP Calculus BC covers all AP Calculus AB topics plus additional topics (including integration by parts, Taylor series, parametric equations, vector calculus, and polar coordinate functions). AP Calculus AB AP Calculus AB is an Advanced Placement calculus course. It is traditionally taken after precalculus and is the first calculus course offered at most schools except for possibly a regular or honors calculus class. The Pre-Advanced Placement pathway for math helps prepare students for further Advanced Placement classes and exams. Purpose According to the College Board: Topic outline The material includes the study and application of differentiation and integration, and graphical analysis including limits, asymptotes, and continuity. An AP Calculus AB course is typically equivalent to one semester of college calculus. Analysis of graphs (predicting and explaining behavior) Limits of functions (one and two sided) Asymptotic and unbounded behavior Continuity Derivatives Concept At a point As a function Applications Higher order derivatives Techniques Integrals Interpretations Properties Applications Techniques Numerical approximations Fundamental theorem of calculus Antidifferentiation L'Hôpital's rule Separable differential equations AP Calculus BC AP Calculus BC is equivalent to a full year regular college course, covering both Calculus I and II. After passing the exam, students may move on to Calculus III (Multivariable Calculus). Purpose According to the College Board, Topic outline AP Calculus BC includes all of the topics covered in AP Calculus AB, as well as the following: Convergence tests for series Taylor series Parametric equations Polar functions (including arc length in polar coordinates and calculating area) Arc length calculations using integration Integration by parts Improper integrals Differential equations for logistic growth Using partial fractions to integrate rational functions It can be seen from the tables that the pass rate (score of 3 or higher) of AP Calculus BC is higher than AP Calculus AB. It can also be noted that about 1/3 as many take the BC exam as take the AB exam. A possible explanation for the higher scores on BC is that students who take AP Calculus BC are more prepared and advanced in math. The 5-rate is consistently over 40% (much higher than almost all the other AP exams). AB sub-score distribution AP Exam The College Board intentionally schedules the AP Calculus AB exam at the same time as the AP Calculus BC exam to make it impossible for a student to take both tests in the same academic year, though the College Board does not make Calculus AB a prerequisite class for Calculus BC. Some schools do this, though many others only require precalculus as a prerequisite for Calculus BC. The AP awards given by College Board count both exams. However, they do not count the AB sub-score piece of the BC exam. Format The structures of the AB and BC exams are identical. Both exams are three hours and fifteen minutes long, comprising a total of 45 multiple choice questions and six free response questions. They are usually administered on a Monday or Tuesday morning in May. The two parts of the multiple choice section are timed and taken independently. 
Students are required to put away their calculators after 30 minutes have passed during the Free-Response section, and only at that point may they begin Section II Part B. However, students may continue to work on Section II Part A during the entire Free-Response time, although without a calculator during the later two thirds.
Scoring
The multiple choice section is scored by computer; each correct answer receives 1 point, while omitted and incorrect answers do not affect the raw score. This total is multiplied by 1.2 to calculate the adjusted multiple-choice score. The free response section is hand-graded by hundreds of AP teachers and professors each June. The raw score is then added to the adjusted multiple-choice score to produce a composite score. This total is compared to a composite-score scale for that year's exam and converted into an AP score of 1 to 5. For the Calculus BC exam, an AB sub-score is included in the score report to reflect the student's proficiency in the fundamental topics of introductory calculus. The AB sub-score is based on the number of correct answers to questions pertaining to AB material only.
See also
AP Physics C: Mechanics and AP Physics C: Electricity and Magnetism
AP Precalculus
Glossary of calculus
Mathematics education in the United States
Stand and Deliver (1988 film)
References
External links
AP Calculus AB
College Board description of the AP Calculus AB course content
College Board description of the AP Calculus AB examination
AP Calculus BC
College Board description of the AP Calculus BC course content
College Board description of the AP Calculus BC examination
Further reading
AP courses in mathematics Calculus Advanced Placement
AP Calculus
[ "Mathematics" ]
1,005
[ "Calculus" ]
4,235,196
https://en.wikipedia.org/wiki/Sodium%20tartrate
Sodium tartrate (Na2C4H4O6) is a salt used as an emulsifier and a binding agent in food products such as jellies, margarine, and sausage casings. As a food additive, it is known by the E number E335. It is made by reacting baking soda (sodium bicarbonate, NaHCO₃) with tartaric acid; the overall reaction is 2 NaHCO₃ + C₄H₆O₆ → Na₂C₄H₄O₆ + 2 H₂O + 2 CO₂. Because its crystal structure captures a very precise amount of water, it is also a common primary standard for Karl Fischer titration, a technique used to assay water content.
See also
Monosodium tartrate
References
External links
Properties of Sodium Tartrate at linanwindow
Properties of Sodium Tartrate at JTBaker
Tartrates Organic sodium salts Food additives E-number additives
Sodium tartrate
[ "Chemistry" ]
171
[ "Salts", "Organic compounds", "Organic sodium salts", "Organic compound stubs", "Organic chemistry stubs" ]
4,235,381
https://en.wikipedia.org/wiki/Bernard%20Pullman
Bernard Pullman (19 March 1919, Włocławek Poland – 9 June 1996) was a French theoretical quantum chemist and quantum biochemist. Pullman studied at the Sorbonne, then spent the Second World War as a French Army officer in Africa and the Middle East. Returning to Paris in 1946, he completed his licence ès sciences in 1946 and the Docteur-es-Science in 1948. From 1946 to 1954, he worked at the Centre National de la Recherche Scientifique (CNRS). In 1954 he was appointed Professor at the Sorbonne. In 1959, he became Director of the Department of Quantum Biochemistry at the Institut de biologie physico-chimique. In 1963, he was promoted to Director of the Institute. He was a founding member of the International Academy of Quantum Molecular Science. Over the course of his career, Pullman published about 400 scientific papers and 5 books, three with his wife Alberte Pullman, his lifelong collaborator. In joint work published in the 1950s and 1960s, they founded the new field of quantum biochemistry. They also pioneered the application of quantum chemistry to predicting the carcinogenic properties of aromatic hydrocarbons. After his 1989 retirement, he wrote The Atom in the History of Human Thought (Paris: Fayard, 1995), a work approachable by general readers. References International Journal of Quantum Chemistry 75(3), 1999, Special Issue: In Memory of Bernard Pullman. Books by Pullman 1963 (with Alberte Pullman). Quantum Biochemistry. New York: John Wiley Interscience. ; . 1965 (with M. Weissbluth). Molecular Biophysics. New York: Academic Press, New York. 1998. The Atom in the History of Human Thought, trans. by Axel Reisinger. Oxford Univ. Press. External links His International Academy of Quantum Molecular Science page An interview with Mme Prof. Dr. Alberte Pullman 1919 births Polish emigrants to France 20th-century French chemists Members of the International Academy of Quantum Molecular Science Theoretical chemists University of Paris alumni Members of the French Academy of Sciences 1996 deaths
Bernard Pullman
[ "Chemistry" ]
432
[ "Quantum chemistry", "Theoretical chemistry", "Theoretical chemists", "Physical chemists" ]
4,236,111
https://en.wikipedia.org/wiki/Radia%20Perlman
Radia Joy Perlman (; born December 18, 1951) is an American computer programmer and network engineer. She is a major figure in assembling the networks and technology that enabled what we now know as the internet. She is most famous for inventing the Spanning Tree Protocol (STP), which is fundamental to the operation of network bridges, while working for Digital Equipment Corporation; this earned her the nickname "Mother of the Internet". Her innovations have made a huge impact on how networks self-organize and move data. She also made large contributions to many other areas of network design and standardization: for example, enabling today's link-state routing protocols to be more robust, scalable, and easy to manage. Perlman was elected a member of the National Academy of Engineering in 2019 for contributions to Internet routing and bridging protocols. She holds over 100 issued patents. She was elected to the Internet Hall of Fame in 2014, and to the National Inventors Hall of Fame in 2016. She received lifetime achievement awards from USENIX in 2006 and from the Association for Computing Machinery's SIGCOMM in 2010. More recently she invented the TRILL protocol to correct some of the shortcomings of spanning trees, allowing Ethernet to make optimal use of bandwidth. As of 2022, she was a Fellow at Dell Technologies.
Early life
Perlman was born in 1951 in Portsmouth, Virginia. She grew up in Loch Arbour, New Jersey. She is Jewish. Both of her parents worked as engineers for the US government. Her father worked on radar and her mother was a mathematician by training who worked as a computer programmer. During her school years Perlman found math and science to be "effortless and fascinating", but had no problem achieving top grades in other subjects as well. She enjoyed playing the piano and French horn. While her mother helped her with her math homework, they mainly talked about literature and music. But she did not feel that she fit the stereotype of an "engineer", as she did not take computers apart. Despite being the best science and math student in her school, it was only when Perlman took a programming class in high school that she started to consider a career that involved computers. She was the only woman in the class and later reflected "I was not a hands-on type person. It never occurred to me to take anything apart. I assumed I'd either get electrocuted, or I'd break something". She graduated from Ocean Township High School in 1969.
Education
As an undergraduate at MIT Perlman learned programming for a physics class. She was given her first paid job in 1971 as a part-time programmer for the LOGO Lab at the (then) MIT Artificial Intelligence Laboratory, programming system software such as debuggers. Working under the supervision of Seymour Papert, she developed a child-friendly version of the educational robotics language LOGO, called TORTIS ("Toddler's Own Recursive Turtle Interpreter System"). During research performed in 1974–76, young children (the youngest aged 3½ years) programmed a LOGO educational robot called a Turtle. Perlman has been described as a pioneer of teaching young children computer programming. Afterwards, she was inspired to create a new programming language, similar to Logo, that would teach much younger children, using special "keyboards" and input devices. This project was abandoned because "being the only woman around, I wanted to be taken seriously as a 'scientist' and was a little embarrassed that my project involved cute little kids".
An MIT media project later tracked her down and told her that she had started a new field, called tangible user interfaces, from the leftovers of her abandoned project. As a mathematics graduate student at MIT she needed to find an adviser for her thesis, and she joined the MIT group at BBN Technologies. There she first got involved with designing network protocols. Perlman obtained a B.S. and M.S. in Mathematics and a Ph.D. in Computer Science from MIT in 1988. Her doctoral thesis on routing in environments where malicious network failures are present serves as the basis for much of the work that now exists in this area. When studying at MIT in the late 1960s she was one of about 50 women students in a class of roughly 1,000. At first MIT had only one women's dormitory, limiting the number of women students who could attend. When the men's dorms at MIT became coed, Perlman moved out of the women's dorm into a mixed dorm, where she became the "resident female". She later said that she was so used to the gender imbalance that it seemed normal; only when she saw other women students among a crowd of men did she notice that "it kind of looked weird".
Career
After graduation, she accepted a position with Bolt, Beranek, and Newman (BBN), a government contractor that developed software for network equipment. While working for BBN, Perlman made an impression on a manager for Digital Equipment Corp and was offered a job, joining the firm in 1980. During her time at Digital, she quickly produced a solution that did exactly what the team wanted: the Spanning Tree Protocol. It allows a network to deliver data reliably by making it possible to design the network with redundant links. This setup provides automatic backup paths if an active link fails, and disables the links that are not part of the tree. This leaves a single, active path between any pair of network nodes. She is most famous for STP, which is fundamental to the operation of network bridges in many smaller networks. Perlman is the author of a textbook on networking called "Interconnections: Bridges, Routers, Switches, and Internetworking Protocols" and coauthor of another on network security called "Network Security: Private Communication in a Public World", which is now a popular college textbook. Her contributions to network security include trust models for Public Key Infrastructure, data expiration, and distributed algorithms resilient despite malicious participants. She left Digital in 1993 and joined Novell. Then, in 1997, she left Novell and joined Sun Microsystems. Over the course of her career she has earned over 200 patents, 40 of them while working for Sun Microsystems, where in 2007 she held the title of Distinguished Engineer. She has taught courses at the University of Washington, Harvard University, MIT, and Texas A&M, and has been the keynote speaker at events all over the world. Perlman is the recipient of awards such as Lifetime Achievement awards from USENIX and the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM).
Spanning Tree Protocol
Perlman invented the spanning tree algorithm and protocol. While working as a consulting engineer at Digital Equipment Corporation (DEC) in 1984 she was tasked with developing a straightforward protocol that enabled network bridges to locate loops in a local area network (LAN). It was required that the protocol use a constant amount of memory when implemented on the network devices, regardless of how large the network was.
Building and expanding bridged networks was difficult because loops, where more than one path leads to the same destination, could result in the collapse of the network. Redundant paths in the network meant that a bridge could forward a frame in multiple directions. Therefore, loops could cause Ethernet frames to fail to reach their destination, thus flooding the network. Perlman utilized the fact that bridges had unique 48-bit MAC addresses and devised a network protocol so that bridges within the LAN communicated with one another. The algorithm, implemented on all bridges in the network, allowed the bridges to designate one root bridge in the network. Each bridge then mapped the network and determined the shortest path to the root bridge, deactivating other redundant paths. Despite Perlman's concerns that it took the spanning tree protocol about a minute to react when changes in the network topology occurred, during which time a loop could bring down the network, it was standardized as 802.1d by the Institute of Electrical and Electronics Engineers (IEEE). Perlman said that the benefits of the protocol amount to the fact that "you don't have to worry about topology" when changing the way a LAN is connected. Perlman has, however, criticized changes which were made in the course of the standardization of the protocol. Perlman published a poem on STP, called 'Algorhyme'. Other network protocols Perlman was the principal designer of the DECnet IV and V protocols, and of IS-IS, the OSI equivalent of OSPF. She also made major contributions to the Connectionless Network Protocol (CLNP). Perlman has collaborated with Yakov Rekhter on developing network routing standards, such as the OSI Inter-Domain Routing Protocol (IDRP), the OSI equivalent of BGP. At DEC she also oversaw the transition from distance-vector to link-state routing protocols. Link-state routing protocols had the advantage that they adapted to changes in the network topology faster, and DEC's link-state routing protocol was second only to the link-state routing protocol of the Advanced Research Projects Agency Network (ARPANET). While working on the DECnet project, Perlman also helped to improve the intermediate-system to intermediate-system routing protocol, known as IS-IS, so that it could route the Internet Protocol (IP), AppleTalk and the Internetwork Packet Exchange (IPX) protocol. The Open Shortest Path First (OSPF) protocol relied in part on Perlman's research on fault-tolerant broadcasting of routing information. Perlman subsequently worked as a network engineer for Sun Microsystems, now Oracle. She specialized in network and security protocols and, while working for Oracle, obtained more than 50 patents. When standardizing her work on TRILL, a combined bridging and routing protocol that proposes to supersede STP, she included version 2 of the earlier "Algorhyme". Awards Fellow of the Association for Computing Machinery (class of 2016) National Inventors Hall of Fame induction (2016) Internet Hall of Fame induction (2014) SIGCOMM Award (2010) USENIX Lifetime Achievement Award (2006) Recipient of the first Anita Borg Institute Women of Vision Award for Innovation in 2005 Silicon Valley Intellectual Property Law Association Inventor of the year (2003) Honorary Doctorate, Royal Institute of Technology (June 28, 2000) Twice named as one of the 20 most influential people in the industry by Data Communications magazine: in the 20th anniversary issue (January 15, 1992) and the 25th anniversary issue (January 15, 1997).
Perlman is the only person to be named in both issues. IEEE Fellow in 2008 for contributions to network routing and security protocols Fellow of the Association for Computing Machinery, class of 2016 Bibliography References External links Inventor of the Week archive at MIT: Spanning Tree Protocol 1951 births Living people American computer scientists Internet pioneers American women inventors Women Internet pioneers Computer systems researchers Computer security academics Digital Equipment Corporation people Massachusetts Institute of Technology School of Science alumni American women computer scientists 2016 fellows of the Association for Computing Machinery People in information technology People from Loch Arbour, New Jersey People from Portsmouth, Virginia Scientists from Virginia Sun Microsystems people Network topology Jewish American scientists Jewish women scientists Ocean Township High School alumni 21st-century American Jews 21st-century American women
Radia Perlman
[ "Mathematics", "Technology" ]
2,300
[ "Network topology", "People in information technology", "Information technology", "Topology" ]
4,236,528
https://en.wikipedia.org/wiki/Dissolved%20organic%20carbon
Dissolved organic carbon (DOC) is the fraction of organic carbon operationally defined as that which can pass through a filter with a pore size typically between 0.22 and 0.7 micrometers. The fraction remaining on the filter is called particulate organic carbon (POC). Dissolved organic matter (DOM) is a closely related term often used interchangeably with DOC. While DOC refers specifically to the mass of carbon in the dissolved organic material, DOM refers to the total mass of the dissolved organic matter. So DOM also includes the mass of other elements present in the organic material, such as nitrogen, oxygen and hydrogen. DOC is a component of DOM and there is typically about twice as much DOM as DOC. Many statements that can be made about DOC apply equally to DOM, and vice versa. DOC is abundant in marine and freshwater systems and is one of the greatest cycled reservoirs of organic matter on Earth, accounting for the same amount of carbon as in the atmosphere and up to 20% of all organic carbon. In general, organic carbon compounds are the result of decomposition processes from dead organic matter, including plants and animals. DOC can originate from within or outside any given body of water. DOC originating from within the body of water is known as autochthonous DOC and typically comes from aquatic plants or algae, while DOC originating outside the body of water is known as allochthonous DOC and typically comes from soils or terrestrial plants. When water originates from land areas with a high proportion of organic soils, these components can drain into rivers and lakes as DOC. The marine DOC pool is important for the functioning of marine ecosystems because it sits at the interface between the chemical and the biological worlds. DOC fuels marine food webs and is a major component of the Earth's carbon cycling. Overview DOC is a basic nutrient that supports the growth of microorganisms and plays an important role in the global carbon cycle through the microbial loop. In some organisms (stages) that do not feed in the traditional sense, dissolved matter may be the only external food source. Moreover, DOC is an indicator of organic loadings in streams, as well as supporting terrestrial processing (e.g., within soil, forests, and wetlands) of organic matter. Dissolved organic carbon has a high proportion of biodegradable dissolved organic carbon (BDOC) in first-order streams compared to higher-order streams. In the absence of extensive wetlands, bogs, or swamps, baseflow concentrations of DOC in undisturbed watersheds generally range from approximately 1 to 20 mg/L carbon. Carbon concentrations vary considerably across ecosystems. For example, the Everglades may be near the top of the range and the middle of the oceans may be near the bottom. Occasionally, high concentrations of organic carbon indicate anthropogenic influences, but most DOC originates naturally. The BDOC fraction consists of organic molecules that heterotrophic bacteria can use as a source of energy and carbon. Some subset of DOC constitutes the precursors of disinfection byproducts for drinking water. BDOC can contribute to undesirable biological regrowth within water distribution systems. The dissolved fraction of total organic carbon (TOC) is an operational classification. Many researchers use the term "dissolved" for compounds that pass through a 0.45 μm filter, but 0.22 μm filters have also been used to remove higher colloidal concentrations.
A practical definition of dissolved typically used in marine chemistry is all substances that pass through a GF/F filter, which has a nominal pore size of approximately 0.7 μm (Whatman glass microfiber filter, 0.6–0.8 μm particle retention). The recommended procedure is the high-temperature catalytic oxidation (HTCO) technique, which calls for filtration through pre-combusted glass fiber filters, typically of the GF/F classification. Labile and recalcitrant Dissolved organic matter can be classified as labile or as recalcitrant, depending on its reactivity. Recalcitrant DOC is also called refractory DOC, and these terms seem to be used interchangeably in the context of DOC. Depending on the origin and composition of DOC, its behavior and cycling differ; the labile fraction of DOC decomposes rapidly through microbially or photochemically mediated processes, whereas refractory DOC is resistant to degradation and can persist in the ocean for millennia. In the coastal ocean, organic matter from terrestrial plant litter or soils appears to be more refractory and thus often behaves conservatively. In addition, refractory DOC is produced in the ocean by the bacterial transformation of labile DOC, which reshapes its composition. Due to the continuous production and degradation in natural systems, the DOC pool contains a spectrum of reactive compounds, each with its own reactivity, that have been divided into fractions ranging from labile to recalcitrant depending on their turnover times. This wide range in turnover or degradation times has been linked with chemical composition, structure and molecular size, but degradation also depends on environmental conditions (e.g., nutrients), prokaryote diversity, redox state, iron availability, mineral-particle associations, temperature, sunlight exposure, biological production of recalcitrant compounds, and the effect of priming or dilution of individual molecules. For example, lignin can be degraded in aerobic soils but is relatively recalcitrant in anoxic marine sediments. This example shows that bioavailability varies as a function of the ecosystem's properties. Accordingly, even normally ancient and recalcitrant compounds, such as petroleum and carboxyl-rich alicyclic molecules, can be degraded in the appropriate environmental setting. Terrestrial ecosystems Soil Dissolved organic matter (DOM) is one of the most active and mobile carbon pools and has an important role in global carbon cycling. In addition, dissolved organic carbon (DOC) affects the soil's negative electrical charges, the denitrification process, acid-base reactions in the soil solution, retention and translocation of nutrients (cations), and immobilization of heavy metals and xenobiotics. Soil DOM can be derived from different sources (inputs), such as atmospheric carbon dissolved in rainfall, litter and crop residues, manure, root exudates, and decomposition of soil organic matter (SOM). In the soil, DOM availability depends on its interactions with mineral components (e.g., clays, Fe and Al oxides) modulated by adsorption and desorption processes. It also depends on SOM fractions (e.g., stabilized organic molecules and microbial biomass) through mineralization and immobilization processes. In addition, the intensity of these interactions changes according to inherent soil properties, land use, and crop management. During the decomposition of organic material, most carbon is lost as CO2 to the atmosphere by microbial oxidation.
Soil type and landscape slope, leaching, and runoff are also important processes associated with DOM losses in the soil. In well-drained soils, leached DOC can reach the water table and release nutrients and pollutants that can contaminate groundwater, whereas runoff transports DOM and xenobiotics to other areas, rivers, and lakes. Groundwater Precipitation and surface water leach dissolved organic carbon (DOC) from vegetation and plant litter, and it percolates through the soil column to the saturated zone. The concentration, composition, and bioavailability of DOC are altered during transport through the soil column by various physicochemical and biological processes, including sorption, desorption, biodegradation and biosynthesis. Hydrophobic molecules are preferentially partitioned onto soil minerals and have a longer retention time in soils than hydrophilic molecules. The hydrophobicity and retention time of colloids and dissolved molecules in soils are controlled by their size, polarity, charge, and bioavailability. Bioavailable DOM is subjected to microbial decomposition, resulting in a reduction in size and molecular weight. Novel molecules are synthesized by soil microbes, and some of these metabolites enter the DOC reservoir in groundwater. Freshwater ecosystems Aquatic carbon occurs in different forms. Firstly, a division is made between organic and inorganic carbon. Organic carbon is a mixture of organic compounds originating from detritus or primary producers. It can be divided into POC (particulate organic carbon; particles > 0.45 μm) and DOC (dissolved organic carbon; particles < 0.45 μm). DOC usually makes up 90% of the total amount of aquatic organic carbon. Its concentration ranges from 0.1 to >300 mg L−1. Likewise, inorganic carbon also consists of a particulate phase (PIC) and a dissolved phase (DIC). PIC mainly consists of carbonates (e.g., CaCO₃), while DIC consists of carbonate (CO₃²⁻), bicarbonate (HCO₃⁻), CO₂ and a negligibly small fraction of carbonic acid (H₂CO₃). The inorganic carbon compounds exist in a pH-dependent equilibrium (CO₂ + H₂O ⇌ H₂CO₃ ⇌ H⁺ + HCO₃⁻ ⇌ 2 H⁺ + CO₃²⁻). DIC concentrations in freshwater range from about zero in acidic waters to 60 mg C L−1 in areas with carbonate-rich sediments. POC can be degraded to form DOC; DOC can become POC by flocculation. Inorganic and organic carbon are linked through aquatic organisms. CO₂ is used in photosynthesis (P) by, for instance, macrophytes, is produced by respiration (R), and is exchanged with the atmosphere. Organic carbon is produced by organisms and is released during and after their life; e.g., in rivers, 1–20% of the total amount of DOC is produced by macrophytes. Carbon can enter the system from the catchment and is transported to the oceans by rivers and streams. There is also exchange with carbon in the sediments, e.g., burial of organic carbon, which is important for carbon sequestration in aquatic habitats. Aquatic systems are very important in global carbon sequestration; e.g., when different European ecosystems are compared, inland aquatic systems form the second largest carbon sink (19–41 Tg C y−1); only forests take up more carbon (125–223 Tg C y−1). Marine ecosystems Sources In marine systems DOC originates from either autochthonous or allochthonous sources. Autochthonous DOC is produced within the system, primarily by plankton organisms and, in coastal waters, additionally by benthic microalgae, benthic fluxes, and macrophytes, whereas allochthonous DOC is mainly of terrestrial origin, supplemented by groundwater and atmospheric inputs.
In addition to soil derived humic substances, terrestrial DOC also includes material leached from plants exported during rain events, emissions of plant materials to the atmosphere and deposition in aquatic environments (e.g., volatile organic carbon and pollens), and also thousands of synthetic human-made organic chemicals that can be measured in the ocean at trace concentrations. Dissolved organic carbon (DOC) represents one of the Earth's major carbon pools. It contains a similar amount of carbon as the atmosphere and exceeds the amount of carbon bound in marine biomass by more than two-hundred times. DOC is mainly produced in the near-surface layers during primary production and zooplankton grazing processes. Other sources of marine DOC are dissolution from particles, terrestrial and hydrothermal vent input, and microbial production. Prokaryotes (bacteria and archaea) contribute to the DOC pool via release of capsular material, exopolymers, and hydrolytic enzymes, as well as via mortality (e.g. viral shunt). Prokaryotes are also the main decomposers of DOC, although for some of the most recalcitrant forms of DOC very slow abiotic degradation in hydrothermal systems or possibly sorption to sinking particles may be the main removal mechanism. Mechanistic knowledge about DOC-microbe-interactions is crucial to understand the cycling and distribution of this active carbon reservoir. Phytoplankton Phytoplankton produces DOC by extracellular release commonly accounting between 5 and 30% of their total primary production, although this varies from species to species. Nonetheless, this release of extracellular DOC is enhanced under high light and low nutrient levels, and thus should increase relatively from eutrophic to oligotrophic areas, probably as a mechanism for dissipating cellular energy. Phytoplankton can also produce DOC by autolysis during physiological stress situations e.g., nutrient limitation. Other studies have demonstrated DOC production in association with meso- and macro-zooplankton feeding on phytoplankton and bacteria. Zooplankton Zooplankton-mediated release of DOC occurs through sloppy feeding, excretion and defecation which can be important energy sources for microbes. Such DOC production is largest during periods with high food concentration and dominance of large zooplankton species. Bacteria and viruses Bacteria are often viewed as the main consumers of DOC, but they can also produce DOC during cell division and viral lysis. The biochemical components of bacteria are largely the same as other organisms, but some compounds from the cell wall are unique and are used to trace bacterial derived DOC (e.g., peptidoglycan). These compounds are widely distributed in the ocean, suggesting that bacterial DOC production could be important in marine systems. Viruses are the most abundant life forms in the oceans infecting all life forms including algae, bacteria and zooplankton. After infection, the virus either enters a dormant (lysogenic) or productive (lytic) state. The lytic cycle causes disruption of the cell(s) and release of DOC. Macrophytes Marine macrophytes (i.e., macroalgae and seagrass) are highly productive and extend over large areas in coastal waters but their production of DOC has not received much attention. 
Macrophytes release DOC during growth with a conservative estimate (excluding release from decaying tissues) suggesting that macroalgae release between 1-39% of their gross primary production, while seagrasses release less than 5% as DOC of their gross primary production. The released DOC has been shown to be rich in carbohydrates, with rates depending on temperature and light availability. Globally the macrophyte communities have been suggested to produce ~160 Tg C yr−1 of DOC, which is approximately half the annual global river DOC input (250 Tg C yr−1). Marine sediments Marine sediments represent the main sites of OM degradation and burial in the ocean, hosting microbes in densities up to 1000 times higher than found in the water column. The DOC concentrations in sediments are often an order of magnitude higher than in the overlying water column. This concentration difference results in a continued diffusive flux and suggests that sediments are a major DOC source releasing 350 Tg C yr−1, which is comparable to the input of DOC from rivers. This estimate is based on calculated diffusive fluxes and does not include resuspension events which also releases DOC and therefore the estimate could be conservative. Also, some studies have shown that geothermal systems and petroleum seepage contribute with pre-aged DOC to the deep ocean basins, but consistent global estimates of the overall input are currently lacking. Globally, groundwaters account for an unknown part of the freshwater DOC flux to the oceans. The DOC in groundwater is a mixture of terrestrial, infiltrated marine, and in situ microbially produced material. This flux of DOC to coastal waters could be important, as concentrations in groundwater are generally higher than in coastal seawater, but reliable global estimates are also currently lacking. Sinks The main processes that remove DOC from the ocean water column are: (1) Thermal degradation in e.g., submarine hydrothermal systems; (2) bubble coagulation and abiotic flocculation into microparticles or sorption to particles; (3) abiotic degradation via photochemical reactions; and (4) biotic degradation by heterotrophic marine prokaryotes. It has been suggested that the combined effects of photochemical and microbial degradation represent the major sinks of DOC. Thermal degradation Thermal degradation of DOC has been found at high-temperature hydrothermal ridge-flanks, where outflow DOC concentrations are lower than in the inflow. While the global impact of these processes has not been investigated, current data suggest it is a minor DOC sink. Abiotic DOC flocculation is often observed during rapid (minutes) shifts in salinity when fresh and marine waters mix. Flocculation changes the DOC chemical composition, by removing humic compounds and reducing molecular size, transforming DOC to particulate organic flocs which can sediment and/or be consumed by grazers and filter feeders, but it also stimulates the bacterial degradation of the flocculated DOC. The impacts of flocculation on the removal of DOC from coastal waters are highly variable with some studies suggesting it can remove up to 30% of the DOC pool, while others find much lower values (3–6%;). Such differences could be explained by seasonal and system differences in the DOC chemical composition, pH, metallic cation concentration, microbial reactivity, and ionic strength. 
CDOM The colored fraction of DOC (CDOM) absorbs light in the blue and UV range and therefore influences plankton productivity both negatively, by absorbing light that would otherwise be available for photosynthesis, and positively, by protecting plankton organisms from harmful UV light. However, as the impact of UV damage and the ability to repair it are extremely variable, there is no consensus on how UV-light changes might impact overall plankton communities. The absorption of light by CDOM initiates a complex range of photochemical processes, which can affect nutrient, trace metal and DOC chemical composition, and promote DOC degradation. Photodegradation Photodegradation involves the transformation of CDOM into smaller and less colored molecules (e.g., organic acids), or into inorganic carbon (CO, CO₂) and nutrient salts (NH₄⁺, HPO₄²⁻). Photodegradation therefore generally transforms recalcitrant DOC into labile molecules that can be rapidly used by prokaryotes for biomass production and respiration. However, it can also increase CDOM through the transformation of compounds such as triglycerides into more complex aromatic compounds, which are less degradable by microbes. Moreover, UV radiation can produce, for example, reactive oxygen species, which are harmful to microbes. The impact of photochemical processes on the DOC pool also depends on its chemical composition, with some studies suggesting that recently produced autochthonous DOC becomes less bioavailable while allochthonous DOC becomes more bioavailable to prokaryotes after sunlight exposure, although others have found the contrary. Photochemical reactions are particularly important in coastal waters, which receive high loads of terrestrially derived CDOM, with an estimated ~20–30% of terrestrial DOC being rapidly photodegraded and consumed. Global estimates also suggest that in marine systems photodegradation of DOC produces ~180 Tg C yr−1 of inorganic carbon, with an additional 100 Tg C yr−1 of DOC made more available to microbial degradation. Another attempt at global ocean estimates suggests that photodegradation (210 Tg C yr−1) is approximately the same as the annual global input of riverine DOC (250 Tg C yr−1), while others suggest that direct photodegradation exceeds the riverine DOC inputs. Recalcitrant DOC DOC is conceptually divided into labile DOC, which is rapidly taken up by heterotrophic microbes, and the recalcitrant DOC reservoir, which has accumulated in the ocean (following a definition by Hansell). As a consequence of its recalcitrance, the accumulated DOC reaches average radiocarbon ages between 1,000 and 4,000 years in surface waters, and between 3,000 and 6,000 years in the deep ocean, indicating that it persists through several deep ocean mixing cycles of between 300 and 1,400 years each. Behind these average radiocarbon ages, a large spectrum of ages is hidden. Follett et al. showed that DOC comprises a fraction of modern radiocarbon age, as well as DOC reaching radiocarbon ages of up to 12,000 years. Distribution More precise measurement techniques developed in the late 1990s have allowed for a good understanding of how dissolved organic carbon is distributed in marine environments both vertically and across the surface. It is now understood that dissolved organic carbon in the ocean spans a range from very labile to very recalcitrant (refractory).
The labile dissolved organic carbon is mainly produced by marine organisms and is consumed in the surface ocean, and consists of sugars, proteins, and other compounds that are easily used by marine bacteria. Recalcitrant dissolved organic carbon is evenly spread throughout the water column and consists of high molecular weight and structurally complex compounds that are difficult for marine organisms to use such as the lignin, pollen, or humic acids. As a result, the observed vertical distribution consists of high concentrations of labile DOC in the upper water column and low concentrations at depth. In addition to vertical distributions, horizontal distributions have been modeled and sampled as well. In the surface ocean at a depth of 30 meters, the higher dissolved organic carbon concentrations are found in the South Pacific Gyre, the South Atlantic Gyre, and the Indian Ocean. At a depth of 3,000 meters, highest concentrations are in the North Atlantic Deep Water where dissolved organic carbon from the high concentration surface ocean is removed to depth. While in the northern Indian Ocean high DOC is observed due to high fresh water flux and sediments. Since the time scales of horizontal motion along the ocean bottom are in the thousands of years, the refractory dissolved organic carbon is slowly consumed on its way from the North Atlantic and reaches a minimum in the North Pacific. As emergent Dissolved organic matter is a heterogeneous pool of thousands, likely millions, of organic compounds. These compounds differ not only in composition and concentration (from pM to μM), but also originate from various organisms (phytoplankton, zooplankton, and bacteria) and environments (terrestrial vegetation and soils, coastal fringe ecosystems) and may have been produced recently or thousands of years ago. Moreover, even organic compounds deriving from the same source and of the same age may have been subjected to different processing histories prior to accumulating within the same pool of DOM. Interior ocean DOM is a highly modified fraction that remains after years of exposure to sunlight, utilization by heterotrophs, flocculation and coagulation, and interaction with particles. Many of these processes within the DOM pool are compound- or class-specific. For example, condensed aromatic compounds are highly photosensitive, whereas proteins, carbohydrates, and their monomers are readily taken up by bacteria. Microbes and other consumers are selective in the type of DOM they utilize and typically prefer certain organic compounds over others. Consequently, DOM becomes less reactive as it is continually reworked. Said another way, the DOM pool becomes less labile and more refractory with degradation. As it is reworked, organic compounds are continually being added to the bulk DOM pool by physical mixing, exchange with particles, and/or production of organic molecules by the consumer community. As such, the compositional changes that occur during degradation are more complex than the simple removal of more labile components and resultant accumulation of remaining, less labile compounds. Dissolved organic matter recalcitrance (i.e., its overall reactivity toward degradation and/or utilization) is therefore an emergent property. The perception of DOM recalcitrance changes during organic matter degradation and in conjunction with any other process that removes or adds organic compounds to the DOM pool under consideration. 
The surprising resistance of high concentrations of DOC to microbial degradation has been addressed by several hypotheses. The prevalent notion is that the recalcitrant fraction of DOC has certain chemical properties that prevent decomposition by microbes (the "intrinsic stability hypothesis"). An alternative or additional explanation is given by the "dilution hypothesis": all compounds are labile but individually exist at concentrations too low to sustain microbial populations, while collectively forming a large pool (for instance, many thousands of distinct compounds, each present at only picomolar concentrations, can together add up to the bulk DOC concentrations observed). The dilution hypothesis has found support in recent experimental and theoretical studies. DOM isolation and analysis DOM occurs in nature at concentrations too low for direct analysis with NMR or MS. Moreover, DOM samples often contain high concentrations of inorganic salts that are incompatible with such techniques. A concentration and isolation step of the sample is therefore necessary. The most used isolation techniques are ultrafiltration, reverse osmosis, and solid-phase extraction. Among these, solid-phase extraction is considered the cheapest and easiest technique. See also Blackwater river Dissolved inorganic carbon Foam line Microbial loop Total organic carbon References External links Hansell DA and Carlson CA (Eds.) (2014) Biogeochemistry of Marine Dissolved Organic Matter, Second edition, Academic Press. Environmental chemistry Water quality indicators Water chemistry
Dissolved organic carbon
[ "Chemistry", "Environmental_science" ]
5,206
[ "Environmental chemistry", "Water quality indicators", "nan", "Water pollution" ]
4,236,543
https://en.wikipedia.org/wiki/Ashvini
Ashvini (अश्विनी, ) is the first nakshatra (lunar mansion) in Indian astronomy having a spread from 0°-0'-0" to 13°-20', corresponding to the head of Aries, including the stars β and γ Arietis. The name aśvinī is used by Varahamihira (6th century). The older name of the asterism, found in the Atharvaveda (AVS 19.7; in the dual) and in Panini (4.3.36), was aśvayúja, "harnessing horses". This nakshatra belongs to Mesha Rasi. Notable personalities born in this nakshatra are Sania Mirza, Bhimsen Joshi, Yukta Mookhey. Astrology Ashvini is ruled by Ketu, the descending lunar node. In electional astrology, Ashvini is classified as a small constellation, meaning that it is believed to be advantageous to begin works of a precise or delicate nature while the moon is in Ashvini. Ashvini is ruled by the Ashvinas, the heavenly twin brother gods who served as physicians to the gods and goddesses. Ashvini is represented by the bee hive. Traditional Indian names are determined by which pada (quarter) of a nakshatra the Ascendant was in at the time of birth. In the case of Ashvini, the given name would begin with the following syllables: Chu, Che, Cho, La. See also List of Nakshatras References Nakshatra
Ashvini
[ "Astronomy" ]
332
[ "Nakshatra", "Constellations" ]
4,236,583
https://en.wikipedia.org/wiki/Visual%20search
Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors). Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of using visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food among piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally? Much previous literature on visual search used reaction time in order to measure the time it takes to detect the target amongst its distractors. An example of this could be a green square (the target) amongst a set of red circles (the distractors). However, reaction time measurements do not always distinguish between the role of attention and other factors: a long reaction time might be the result of difficulty directing attention to the target, or slowed decision-making processes or slowed motor responses after attention is already directed to the target and the target has already been detected. Many visual search paradigms have therefore used eye movement as a means to measure the degree of attention given to stimuli. However, eyes can move independently of attention, and therefore eye movement measures do not completely capture the role of attention. Search types Feature search Feature search (also known as "disjunctive" or "efficient" search) is a visual search process that focuses on identifying a previously requested target amongst distractors that differ from the target by a unique visual feature such as color, shape, orientation, or size. An example of a feature search task is asking a participant to identify a white square (target) surrounded by black squares (distractors). In this type of visual search, the distractors are characterized by the same visual features. The efficiency of feature search in regards to reaction time (RT) and accuracy depends on the "pop out" effect, bottom-up processing, and parallel processing. However, the efficiency of feature search is unaffected by the number of distractors present. The "pop out" effect is an element of feature search that characterizes the target's ability to stand out from surrounding distractors due to its unique feature. Bottom-up processing, which is the processing of information that depends on input from the environment, explains how one utilizes feature detectors to process characteristics of the stimuli and differentiate a target from its distractors. This draw of visual attention towards the target due to bottom-up processes is known as "saliency." Lastly, parallel processing is the mechanism that then allows one's feature detectors to work simultaneously in identifying the target. Conjunction search Conjunction search (also known as inefficient or serial search) is a visual search process that focuses on identifying a previously requested target surrounded by distractors possessing no distinct features from the target itself. An example of a conjunction search task is having a person identify a green X (target) amongst distractors composed of purple Xs (same shape) and green Os (same color). 
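As the following sections discuss, reaction times in conjunction search typically grow with the number of distractors, whereas feature search is largely unaffected by set size. The toy simulation below is an illustration rather than an established model; the timing constants and the assumption of a serial, self-terminating scan are arbitrary simplifications used only to contrast the two search types described here.

```python
# Toy contrast between a parallel "pop out" check (feature search) and a serial,
# self-terminating scan (conjunction search); all constants are arbitrary.
import random

def simulated_rt(search_type, set_size, base_ms=200.0, per_item_ms=30.0):
    """Return a simulated reaction time in milliseconds."""
    if search_type == "feature":
        return base_ms                          # all items are checked in parallel
    # serial scan: on average about half of the items are inspected before the target
    return base_ms + per_item_ms * random.randint(1, set_size)

random.seed(0)
for set_size in (4, 8, 16, 32):
    feature = simulated_rt("feature", set_size)
    conjunction = sum(simulated_rt("conjunction", set_size) for _ in range(1000)) / 1000
    print(f"set size {set_size:2d}: feature ≈ {feature:.0f} ms, conjunction ≈ {conjunction:.0f} ms")
```

Fitting a straight line to the conjunction-search times against set size gives the reaction time slope discussed below, while the flat feature-search function corresponds to a slope near zero.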
Unlike feature search, conjunction search involves distractors (or groups of distractors) that may differ from each other but exhibit at least one common feature with the target. The efficiency of conjunction search in regards to reaction time (RT) and accuracy is dependent on the distractor-ratio and the number of distractors present. As the distractors represent the differing individual features of the target more equally amongst themselves (distractor-ratio effect), reaction time(RT) increases and accuracy decreases. As the number of distractors present increases, the reaction time (RT) increases and the accuracy decreases. However, with practice the original reaction time (RT) restraints of conjunction search tend to show improvement. In the early stages of processing, conjunction search utilizes bottom-up processes to identify pre-specified features amongst the stimuli. These processes are then overtaken by a more serial process of consciously evaluating the indicated features of the stimuli in order to properly allocate one's focal spatial attention towards the stimulus that most accurately represents the target. In many cases, top-down processing affects conjunction search by eliminating stimuli that are incongruent with one's previous knowledge of the target-description, which in the end allows for more efficient identification of the target. An example of the effect of top-down processes on a conjunction search task is when searching for a red 'K' among red 'Cs' and black 'Ks', individuals ignore the black letters and focus on the remaining red letters in order to decrease the set size of possible targets and, therefore, more efficiently identify their target. Real world visual search In everyday situations, people are most commonly searching their visual fields for targets that are familiar to them. When it comes to searching for familiar stimuli, top-down processing allows one to more efficiently identify targets with greater complexity than can be represented in a feature or conjunction search task. In a study done to analyze the reverse-letter effect, which is the idea that identifying the asymmetric letter among symmetric letters is more efficient than its reciprocal, researchers concluded that individuals more efficiently recognize an asymmetric letter among symmetric letters due to top-down processes. Top-down processes allowed study participants to access prior knowledge regarding shape recognition of the letter N and quickly eliminate the stimuli that matched their knowledge. In the real world, one must use prior knowledge everyday in order to accurately and efficiently locate objects such as phones, keys, etc. among a much more complex array of distractors. Despite this complexity, visual search with complex objects (and search for categories of objects, such as "phone", based on prior knowledge) appears to rely on the same active scanning processes as conjunction search with less complex, contrived laboratory stimuli, although global statistical information available in real-world scenes can also help people locate target objects. While bottom-up processes may come into play when identifying objects that are not as familiar to a person, overall top-down processing highly influences visual searches that occur in everyday life. Familiarity can play especially critical roles when parts of objects are not visible (as when objects are partly hidden from view because they are behind other objects). 
Visual information from hidden parts can be recalled from long-term memory and used to facilitate search for familiar objects. Reaction time slope It is also possible to measure the role of attention within visual search experiments by calculating the slope of reaction time over the number of distractors present. Generally, when high levels of attention are required when looking at a complex array of stimuli (conjunction search), the slope increases as reaction times increase. For simple visual search tasks (feature search), the slope decreases due to reaction times being fast and requiring less attention. However, the use of a reaction time slope to measure attention is controversial because non-attentional factors can also affect reaction time slope. Visual orienting and attention One obvious way to select visual information is to turn towards it, also known as visual orienting. This may be a movement of the head and/or eyes towards the visual stimulus, called a saccade. Through a process called foveation, the eyes fixate on the object of interest, making the image of the visual stimulus fall on the fovea of the eye, the central part of the retina with the sharpest visual acuity. There are two types of orienting: Exogenous orienting is the involuntary and automatic movement that occurs to direct one's visual attention toward a sudden disruption in his peripheral vision field. Attention is therefore externally guided by a stimulus, resulting in a reflexive saccade. Endogenous orienting is the voluntary movement that occurs in order for one to focus visual attention on a goal-driven stimulus. Thus, the focus of attention of the perceiver can be manipulated by the demands of a task. A scanning saccade is triggered endogenously for the purpose of exploring the visual environment. Visual search relies primarily on endogenous orienting because participants have the goal to detect the presence or absence of a specific target object in an array of other distracting objects. Early research suggested that attention could be covertly (without eye movement) shifted to peripheral stimuli, but later studies found that small saccades (microsaccades) occur during these tasks, and that these eye movements are frequently directed towards the attended locations (whether or not there are visible stimuli). These findings indicate that attention plays a critical role in understanding visual search. Subsequently, competing theories of attention have come to dominate visual search discourse. The environment contains a vast amount of information. We are limited in the amount of information we are able to process at any one time, so it is therefore necessary that we have mechanisms by which extraneous stimuli can be filtered and only relevant information attended to. In the study of attention, psychologists distinguish between pre-attentive and attentional processes. Pre-attentive processes are evenly distributed across all input signals, forming a kind of "low-level" attention. Attentional processes are more selective and can only be applied to specific preattentive input. A large part of the current debate in visual search theory centres on selective attention and what the visual system is capable of achieving without focal attention. Theory Feature integration theory (FIT) A popular explanation for the different reaction times of feature and conjunction searches is the feature integration theory (FIT), introduced by Treisman and Gelade in 1980. 
This theory proposes that certain visual features are registered early, automatically, and are coded rapidly in parallel across the visual field using pre-attentive processes. Experiments show that these features include luminance, colour, orientation, motion direction, and velocity, as well as some simple aspects of form. For example, a red X can be quickly found among any number of black Xs and Os because the red X has the discriminative feature of colour and will "pop out." In contrast, this theory also suggests that in order to integrate two or more visual features belonging to the same object, a later process involving integration of information from different brain areas is needed and is coded serially using focal attention. For example, when locating an orange square among blue squares and orange triangles, neither the colour feature "orange" nor the shape feature "square" is sufficient to locate the search target. Instead, one must integrate information about both colour and shape to locate the target. Evidence that attention, and thus later visual processing, is needed to integrate two or more features of the same object is shown by the occurrence of illusory conjunctions, that is, when features do not combine correctly. For example, if a display of a green X and a red O is flashed on a screen so briefly that the later visual process of a serial search with focal attention cannot occur, the observer may report seeing a red X and a green O. The FIT is a dichotomy because of the distinction between its two stages: the preattentive and attentive stages. Preattentive processes are those performed in the first stage of the FIT model, in which the simplest features of the object are analyzed, such as colour, size, and arrangement. The second, attentive stage of the model incorporates cross-dimensional processing, in which the actual identification of an object is carried out and information about the target object is put together. This theory has not always been what it is today; there have been disagreements and problems with its proposals that have allowed the theory to be amended and altered over time, and this criticism and revision has allowed it to become more accurate in its description of visual search. There have been disagreements over whether or not there is a clear distinction between feature detection and other searches that use a master map accounting for multiple dimensions in order to search for an object. Some psychologists support the idea that feature integration is completely separate from this type of master map search, whereas many others have decided that feature integration incorporates the use of a master map in order to locate an object in multiple dimensions. The FIT also explains that there is a distinction between the brain's processes that are used in a parallel versus a focal attention task. Chan and Hayward have conducted multiple experiments supporting this idea by demonstrating the role of dimensions in visual search. While exploring whether or not focal attention can reduce the costs caused by dimension-switching in visual search, they explained that the results collected supported the mechanisms of the feature integration theory in comparison to other search-based approaches.
They discovered that single dimensions allow for a much more efficient search regardless of the size of the area being searched, but once more dimensions are added it is much more difficult to efficiently search, and the bigger the area being searched the longer it takes for one to find the target. Guided search model A second main function of preattentive processes is to direct focal attention to the most "promising" information in the visual field. There are two ways in which these processes can be used to direct attention: bottom-up activation (which is stimulus-driven) and top-down activation (which is user-driven). In the guided search model by Jeremy Wolfe, information from top-down and bottom-up processing of the stimulus is used to create a ranking of items in order of their attentional priority. In a visual search, attention will be directed to the item with the highest priority. If that item is rejected, then attention will move on to the next item and the next, and so forth. The guided search theory follows that of parallel search processing. An activation map is a representation of visual space in which the level of activation at a location reflects the likelihood that the location contains a target. This likelihood is based on preattentive, featural information of the perceiver. According to the guided search model, the initial processing of basic features produces an activation map, with every item in the visual display having its own level of activation. Attention is demanded based on peaks of activation in the activation map in a search for the target. Visual search can proceed efficiently or inefficiently. During efficient search, performance is unaffected by the number of distractor items. The reaction time functions are flat, and the search is assumed to be a parallel search. Thus, in the guided search model, a search is efficient if the target generates the highest, or one of the highest activation peaks. For example, suppose someone is searching for red, horizontal targets. Feature processing would activate all red objects and all horizontal objects. Attention is then directed to items depending on their level of activation, starting with those most activated. This explains why search times are longer when distractors share one or more features with the target stimuli. In contrast, during inefficient search, the reaction time to identify the target increases linearly with the number of distractor items present. According to the guided search model, this is because the peak generated by the target is not one of the highest. Biological basis During visual search experiments the posterior parietal cortex has elicited much activation during functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) experiments for inefficient conjunction search, which has also been confirmed through lesion studies. Patients with lesions to the posterior parietal cortex show low accuracy and very slow reaction times during a conjunction search task but have intact feature search remaining to the ipsilesional (the same side of the body as the lesion) side of space. Ashbridge, Walsh, and Cowey in (1997) demonstrated that during the application of transcranial magnetic stimulation (TMS) to the right parietal cortex, conjunction search was impaired by 100 milliseconds after stimulus onset. This was not found during feature search. 
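Returning to the guided search model described above, the ranking of display items by an activation map can be sketched in a few lines; the feature weights, noise values and display items here are invented for illustration and are not parameters of Wolfe's model.

```python
# Sketch of the activation-map idea: items are attended in order of decreasing activation.
import random

def activation(item, target, weights, noise):
    """Sum the weight of every feature the item shares with the target, plus random noise."""
    shared = sum(weights[f] for f, v in item.items() if target.get(f) == v)
    return shared + random.uniform(0.0, noise)

def items_inspected(display, target, weights, noise):
    """Return the position of the target in the attention ranking (1 = inspected first)."""
    ranked = sorted(display, key=lambda it: activation(it, target, weights, noise), reverse=True)
    return 1 + ranked.index(target)

random.seed(1)
target = {"colour": "red", "orientation": "horizontal"}
display = [{"colour": "red", "orientation": "vertical"},
           {"colour": "green", "orientation": "horizontal"}] * 8 + [target]
weights = {"colour": 1.0, "orientation": 1.0}

for noise in (0.1, 2.0):   # low noise: the target's peak dominates; high noise: search degrades
    runs = [items_inspected(display, target, weights, noise) for _ in range(500)]
    print(f"noise {noise}: about {sum(runs) / len(runs):.1f} items inspected on average")
```

With little noise the target generates the highest activation peak and is inspected first, corresponding to efficient search; with more noise, or when distractors share the target's features more strongly, more items must be inspected before the target is reached, corresponding to inefficient search.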
Nobre, Coull, Walsh and Frith (2003) identified using functional magnetic resonance imaging (fMRI) that the intraparietal sulcus located in the superior parietal cortex was activated specifically to feature search and the binding of individual perceptual features as opposed to conjunction search. Conversely, the authors further identify that for conjunction search, the superior parietal lobe and the right angular gyrus elicit bilaterally during fMRI experiments. In contrast, Leonards, Sunaert, Vam Hecke and Orban (2000) identified that significant activation is seen during fMRI experiments in the superior frontal sulcus primarily for conjunction search. This research hypothesises that activation in this region may in fact reflect working memory for holding and maintaining stimulus information in mind in order to identify the target. Furthermore, significant frontal activation including the ventrolateral prefrontal cortex bilaterally and the right dorsolateral prefrontal cortex were seen during positron emission tomography for attentional spatial representations during visual search. The same regions associated with spatial attention in the parietal cortex coincide with the regions associated with feature search. Furthermore, the frontal eye field (FEF) located bilaterally in the prefrontal cortex, plays a critical role in saccadic eye movements and the control of visual attention. Moreover, research into monkeys and single cell recording found that the superior colliculus is involved in the selection of the target during visual search as well as the initiation of movements. Conversely, it also suggested that activation in the superior colliculus results from disengaging attention, ensuring that the next stimulus can be internally represented. The ability to directly attend to a particular stimuli during visual search experiments has been linked to the pulvinar nucleus (located in the midbrain) while inhibiting attention to unattended stimuli. Conversely, Bender and Butter (1987) found that during testing on monkeys, no involvement of the pulvinar nucleus was identified during visual search tasks. There is evidence for the V1 Saliency Hypothesis that the primary visual cortex (V1) creates a bottom-up saliency map to guide attention exogenously, and this V1 saliency map is read out by the superior colliculus which receives monosynaptic inputs from V1. Evolution There is a variety of speculation about the origin and evolution of visual search in humans. It has been shown that during visual exploration of complex natural scenes, both humans and nonhuman primates make highly stereotyped eye movements. Furthermore, chimpanzees have demonstrated improved performance in visual searches for upright human or dog faces, suggesting that visual search (particularly where the target is a face) is not peculiar to humans and that it may be a primal trait. Research has suggested that effective visual search may have developed as a necessary skill for survival, where being adept at detecting threats and identifying food was essential. The importance of evolutionarily relevant threat stimuli was demonstrated in a study by LoBue and DeLoache (2008) in which children (and adults) were able to detect snakes more rapidly than other targets amongst distractor stimuli. However, some researchers question whether evolutionarily relevant threat stimuli are detected automatically. 
Face recognition Over the past few decades there have been vast amounts of research into face recognition, specifying that faces endure specialized processing within a region called the fusiform face area (FFA) located in the mid fusiform gyrus in the temporal lobe. Debates are ongoing whether both faces and objects are detected and processed in different systems and whether both have category specific regions for recognition and identification. Much research to date focuses on the accuracy of the detection and the time taken to detect the face in a complex visual search array. When faces are displayed in isolation, upright faces are processed faster and more accurately than inverted faces, but this effect was observed in non-face objects as well. When faces are to be detected among inverted or jumbled faces, reaction times for intact and upright faces increase as the number of distractors within the array is increased. Hence, it is argued that the 'pop out' theory defined in feature search is not applicable in the recognition of faces in such visual search paradigm. Conversely, the opposite effect has been argued and within a natural environmental scene, the 'pop out' effect of the face is significantly shown. This could be due to evolutionary developments as the need to be able to identify faces that appear threatening to the individual or group is deemed critical in the survival of the fittest. More recently, it was found that faces can be efficiently detected in a visual search paradigm, if the distracters are non-face objects, however it is debated whether this apparent 'pop out' effect is driven by a high-level mechanism or by low-level confounding features. Furthermore, patients with developmental prosopagnosia, who have impaired face identification, generally detect faces normally, suggesting that visual search for faces is facilitated by mechanisms other than the face-identification circuits of the fusiform face area. Patients with forms of dementia can also have deficits in facial recognition and the ability to recognize human emotions in the face. In a meta-analysis of nineteen different studies comparing normal adults with dementia patients in their abilities to recognize facial emotions, the patients with frontotemporal dementia were seen to have a lower ability to recognize many different emotions. These patients were much less accurate than the control participants (and even in comparison with Alzheimer's patients) in recognizing negative emotions, but were not significantly impaired in recognizing happiness. Anger and disgust in particular were the most difficult for the dementia patients to recognize. Face recognition is a complex process that is affected by many factors, both environmental and individually internal. Other aspects to be considered include race and culture and their effects on one's ability to recognize faces. Some factors such as the cross-race effect can influence one's ability to recognize and remember faces. Considerations Ageing Research indicates that performance in conjunctive visual search tasks significantly improves during childhood and declines in later life. More specifically, young adults have been shown to have faster reaction times on conjunctive visual search tasks than both children and older adults, but their reaction times were similar for feature visual search tasks. 
This suggests that there is something about the process of integrating visual features or serial searching that is difficult for children and older adults, but not for young adults. Studies have suggested numerous mechanisms involved in this difficulty in children, including peripheral visual acuity, eye movement ability, ability of attentional focal movement, and the ability to divide visual attention among multiple objects. Studies have suggested similar mechanisms in the difficulty for older adults, such as age related optical changes that influence peripheral acuity, the ability to move attention over the visual field, the ability to disengage attention, and the ability to ignore distractors. A study by Lorenzo-López et al. (2008) provides neurological evidence for the fact that older adults have slower reaction times during conjunctive searches compared to young adults. Event-related potentials (ERPs) showed longer latencies and lower amplitudes in older subjects than young adults at the P3 component, which is related to activity of the parietal lobes. This suggests the involvement of the parietal lobe function with an age-related decline in the speed of visual search tasks. Results also showed that older adults, when compared to young adults, had significantly less activity in the anterior cingulate cortex and many limbic and occipitotemporal regions that are involved in performing visual search tasks. Alzheimer's disease Research has found that people with Alzheimer's disease (AD) are significantly impaired overall in visual search tasks. People with AD manifest enhanced spatial cueing, but this benefit is only obtained for cues with high spatial precision. Abnormal visual attention may underlie certain visuospatial difficulties in patients with (AD). People with AD have hypometabolism and neuropathology in the parietal cortex, and given the role of parietal function for visual attention, patients with AD may have hemispatial neglect, which may result in difficulty with disengaging attention in visual search. An experiment conducted by Tales et al. (2000) investigated the ability of patients with AD to perform various types of visual search tasks. Their results showed that search rates on "pop-out" tasks were similar for both AD and control groups, however, people with AD searched significantly slower compared to the control group on a conjunction task. One interpretation of these results is that the visual system of AD patients has a problem with feature binding, such that it is unable to communicate the different feature descriptions for the stimulus efficiently. Binding of features is thought to be mediated by areas in the temporal and parietal cortex, and these areas are known to be affected by AD-related pathology. Another possibility for the impairment of people with AD on conjunction searches is that there may be some damage to general attentional mechanisms in AD, and therefore any attention-related task will be affected, including visual search. Tales et al. (2000) detected a double dissociation with their experimental results on AD and visual search. Earlier work was carried out on patients with Parkinson's disease (PD) concerning the impairment patients with PD have on visual search tasks. In those studies, evidence was found of impairment in PD patients on the "pop-out" task, but no evidence was found on the impairment of the conjunction task. 
As discussed, AD patients show the exact opposite of these results: normal performance was seen on the "pop-out" task, but impairment was found on the conjunction task. This double dissociation provides evidence that PD and AD affect the visual pathway in different ways, and that the pop-out task and the conjunction task are differentially processed within that pathway. Autism Studies have consistently shown that autistic individuals perform better, with lower reaction times, on feature and conjunctive visual search tasks than matched controls without autism. Several explanations for these observations have been suggested. One possibility is that people with autism have enhanced perceptual capacity. This means that autistic individuals are able to process larger amounts of perceptual information, allowing for superior parallel processing and hence faster target location. Second, autistic individuals show superior performance in discrimination tasks between similar stimuli and therefore may have an enhanced ability to differentiate between items in the visual search display. A third suggestion is that autistic individuals may have stronger top-down target excitation processing and stronger distractor inhibition processing than controls. Keehn et al. (2008) used an event-related functional magnetic resonance imaging design to study the neurofunctional correlates of visual search in autistic children and matched typically developing control children. Autistic children showed superior search efficiency and increased neural activation patterns in the frontal, parietal, and occipital lobes when compared to the typically developing children. Thus, autistic individuals' superior performance on visual search tasks may be due to enhanced discrimination of items on the display, which is associated with occipital activity, and increased top-down shifts of visual attention, which is associated with the frontal and parietal areas. Consumer psychology In the past decade, there has been extensive research into how companies can maximise sales using psychological techniques derived from visual search to determine how products should be positioned on shelves. Pieters and Warlop (1999) used eye tracking devices to assess the saccades and fixations of consumers while they visually scanned or searched an array of products on a supermarket shelf. Their research suggests that consumers specifically direct their attention to products with eye-catching properties such as shape, colour or brand name. The effect is attributed to a time-pressured visual search in which eye movements accelerate and saccades become shorter, so that the consumer quickly chooses a product with a 'pop out' effect. This study suggests that efficient search predominates, and that consumers do not focus on items that share very similar features. The more distinct or maximally visually different a product is from surrounding products, the more likely the consumer is to notice it. Janiszewski (1998) discussed two types of consumer search. One is goal-directed search, which takes place when somebody uses stored knowledge of the product in order to make a purchase choice. The second is exploratory search, which occurs when the consumer has minimal previous knowledge about how to choose a product. It was found that for exploratory search, individuals would pay less attention to products that were placed in visually competitive areas such as the middle of the shelf at an optimal viewing height.
This was primarily due to competition for attention, which meant that less information about these products was maintained in visual working memory. References Neuropsychology Perception Cognitive psychology
Visual search
[ "Biology" ]
5,831
[ "Behavioural sciences", "Behavior", "Cognitive psychology" ]
4,236,766
https://en.wikipedia.org/wiki/Foot%E2%80%93pound%E2%80%93second%20system%20of%20units
The foot–pound–second system (FPS system) is a system of units built on three fundamental units: the foot for length, the (avoirdupois) pound for either mass or force (see below), and the second for time. Variants Collectively, the variants of the FPS system were the most common system in technical publications in English until the middle of the 20th century. Errors can be avoided and translation between the systems facilitated by labelling all physical quantities consistently with their units. Especially in the context of the FPS system this is sometimes known as the Stroud system after William Stroud, who popularized it. Pound as mass unit When the pound is used as a unit of mass, the core of the coherent system is similar and functionally equivalent to the corresponding subsets of the International System of Units (SI), using metre, kilogram and second (MKS), and the earlier centimetre–gram–second system of units (CGS). This system is often called the Absolute English System. In this sub-system, the unit of force is a derived unit known as the poundal. The international standard symbol for the pound as a unit of mass rather than force is lb. Everett (1861) proposed the metric dyne and erg as the units of force and energy in the FPS system. Latimer Clark's (1891) "Dictionary of Measures" contains celo (acceleration), vel or velo (velocity) and pulse (momentum) as proposed names for FPS absolute units. Pound as force unit The technical or gravitational FPS system or British gravitational system is a coherent variant of the FPS system that is most common among engineers in the United States. It takes the pound-force as a fundamental unit of force instead of the pound as a fundamental unit of mass. In this sub-system, the unit of mass is a derived unit known as the slug. In the context of the gravitational FPS system, the pound-force (lbf) is sometimes referred to as the pound (lb). Pound-force as force unit and pound-mass as mass unit Another variant of the FPS system uses both the pound-mass and the pound-force, but neither the slug nor the poundal. The resulting system is sometimes also known as the English engineering system. Despite its name, the system is based on United States customary units of measure; it is not used in England. Other units Molar units The unit of substance in the FPS system is the pound-mole (lb-mol) = 453.59237 mol. Until the SI decided to adopt the gram-mole, the mole was directly derived from the mass unit as (mass unit)/(atomic mass unit). The unit (lbf⋅s2/ft)-mol also appears in a former definition of the atmosphere. Electromagnetic units The electrostatic and electromagnetic systems are derived mainly from units of length and force. As such, they are ready extensions of any system containing units of length, mass, and time. Stephen Dresner gives the derived electrostatic and electromagnetic units in both the foot–pound–second and foot–slug–second systems. In practice, these are most associated with the centimetre–gram–second system. The 1929 "International Critical Tables" gives, in its symbols and systems, fpse = FPS electrostatic system and fpsm = FPS electromagnetic system. Under the conversions for charge, the following are given. The CRC Handbook of Chemistry and Physics 1979 (Edition 60) also lists fpse and fpsm as standard abbreviations.
Electromagnetic FPS (EMU, ab-) 1 fpsm unit = 117.581866 cgsm unit (Biot-second) Electrostatic FPS (ESU, stat-) 1 fpse unit = 3583.8953 cgse unit (Franklin) 1 fpse unit = 1.1954588×10−7 abcoulomb Units of light The candle and the foot-candle were the first defined units of light, defined in the Metropolitan Gas Act (1860). The foot-candle is the intensity of light at one foot from a standard candle. The units were internationally recognized in 1881, and adopted into the metric system. Conversions Together with the fact that the term "weight" is used for the gravitational force in some technical contexts (physics, engineering) and for mass in others (commerce, law), and that the distinction often does not matter in practice, the coexistence of variants of the FPS system causes confusion over the nature of the unit "pound". Its relation to international metric units is expressed in kilograms, not newtons, though, and in earlier times it was defined by means of a mass prototype to be compared with a two-pan balance, which is agnostic of local gravitational differences. In July 1959, the various national foot and avoirdupois pound standards were replaced by the international foot of precisely 0.3048 m and the international pound of precisely 0.45359237 kg, making conversion between the systems a matter of simple arithmetic. The conversion for the poundal is given by 1 pdl = 1 lb·ft/s2 = 0.138254954376 N (precisely). To convert between the absolute and gravitational FPS systems one needs to fix the standard acceleration g which relates the pound to the pound-force. While g strictly depends on one's location on the Earth's surface, since 1901 in most contexts it is fixed conventionally at precisely g0 = 9.80665 m/s2 ≈ 32.174 ft/s2. See also Metre–tonne–second system of units (MTS) FFF system Mars Climate Orbiter References Systems of units Imperial units Customary units of measurement in the United States
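To make the conversions above concrete, the following is a minimal sketch (my own illustration, not part of the original article) that derives the poundal, pound-force and slug in SI terms from the exact definitions of the international foot, the international pound and the conventional standard gravity quoted above.

# Sketch of FPS <-> SI conversions built from the exact definitions above.
FOOT_M = 0.3048            # international foot in metres (exact)
POUND_KG = 0.45359237      # international avoirdupois pound in kilograms (exact)
G0 = 9.80665               # conventional standard gravity in m/s^2 (exact)

POUNDAL_N = POUND_KG * FOOT_M       # 1 pdl = 1 lb*ft/s^2 expressed in newtons
POUND_FORCE_N = POUND_KG * G0       # 1 lbf expressed in newtons
SLUG_KG = POUND_FORCE_N / FOOT_M    # 1 slug = 1 lbf*s^2/ft expressed in kilograms

def poundals_per_pound_force() -> float:
    """How many absolute FPS force units (poundals) make one gravitational unit (lbf)."""
    return POUND_FORCE_N / POUNDAL_N   # equals g0 expressed in ft/s^2, about 32.174

if __name__ == "__main__":
    print(f"1 poundal     = {POUNDAL_N:.12f} N")
    print(f"1 pound-force = {POUND_FORCE_N:.8f} N")
    print(f"1 slug        = {SLUG_KG:.8f} kg")
    print(f"1 lbf         = {poundals_per_pound_force():.6f} pdl")

The last printed value, roughly 32.174 pdl per lbf, is simply g0 re-expressed in ft/s2, which is the conversion factor relating the absolute and gravitational variants discussed above.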
Foot–pound–second system of units
[ "Mathematics" ]
1,156
[ "Quantity", "Systems of units", "Units of measurement" ]
4,237,146
https://en.wikipedia.org/wiki/Catamorphism
In functional programming, the concept of catamorphism (from the Ancient Greek κατά "downwards" and μορφή "form, shape") denotes the unique homomorphism from an initial algebra into some other algebra. Catamorphisms provide generalizations of folds of lists to arbitrary algebraic data types, which can be described as initial algebras. The dual concept is that of anamorphism, which generalizes unfolds. A hylomorphism is the composition of an anamorphism followed by a catamorphism. Definition Consider an initial F-algebra (A, in) for some endofunctor F of some category into itself. Here in is a morphism from FA to A. Since it is initial, we know that whenever (X, f) is another F-algebra, i.e. f is a morphism from FX to X, there is a unique homomorphism h from (A, in) to (X, f). By the definition of the category of F-algebras, this h corresponds to a morphism from A to X, conventionally also denoted cata f, such that cata f = h. In the context of F-algebras, the uniquely specified morphism from the initial object is denoted by cata f and hence characterized by the following relationship: h = cata f if and only if h ∘ in = f ∘ F(h). Terminology and history Another notation found in the literature is ⦇f⦈. The open brackets used are known as banana brackets, after which catamorphisms are sometimes referred to as bananas, as mentioned in Erik Meijer et al. One of the first publications to introduce the notion of a catamorphism in the context of programming was the paper “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire”, by Erik Meijer et al., which was in the context of the Squiggol formalism. The general categorical definition was given by Grant Malcolm. Examples We give a series of examples, and then a more global approach to catamorphisms, in the Haskell programming language. Catamorphism for Maybe-algebra Consider the functor Maybe defined in the below Haskell code: data Maybe a = Nothing | Just a -- Maybe type class Functor f where -- class for functors fmap :: (a -> b) -> (f a -> f b) -- action of functor on morphisms instance Functor Maybe where -- turn Maybe into a functor fmap g Nothing = Nothing fmap g (Just x) = Just (g x) The initial object of the Maybe-algebra is the set of all objects of natural number type Nat together with the morphism ini defined below: data Nat = Zero | Succ Nat -- natural number type ini :: Maybe Nat -> Nat -- initial object of Maybe-algebra (with slight abuse of notation) ini Nothing = Zero ini (Just n) = Succ n The cata map can be defined as follows: cata :: (Maybe b -> b) -> (Nat -> b) cata g Zero = g (fmap (cata g) Nothing) -- Notice: fmap (cata g) Nothing = Nothing and Zero = ini(Nothing) cata g (Succ n) = g (fmap (cata g) (Just n)) -- Notice: fmap (cata g) (Just n) = Just (cata g n) and Succ n = ini(Just n) As an example consider the following morphism: g :: Maybe String -> String g Nothing = "go!" g (Just str) = "wait... " ++ str Then cata g ((Succ . Succ . Succ) Zero) will evaluate to "wait... wait... wait... go!".
List fold For a fixed type a, consider the functor MaybeProd a defined by the following: data MaybeProd a b = Nothing | Just (a, b) -- (a,b) is the product type of a and b class Functor f where -- class for functors fmap :: (a -> b) -> (f a -> f b) -- action of functor on morphisms instance Functor (MaybeProd a) where -- turn MaybeProd a into a functor, the functoriality is only in the second type variable fmap g Nothing = Nothing fmap g (Just (x,y)) = Just (x, g y) The initial algebra of MaybeProd a is given by the lists of elements with type a together with the morphism ini defined below: data List a = EmptyList | Cons a (List a) ini :: MaybeProd a (List a) -> List a -- initial algebra of MaybeProd a ini Nothing = EmptyList ini (Just (n,l)) = Cons n l The cata map can be defined by: cata :: (MaybeProd a b -> b) -> (List a -> b) cata g EmptyList = g (fmap (cata g) Nothing) -- Note: ini Nothing = EmptyList cata g (Cons s l) = g (fmap (cata g) (Just (s,l))) -- Note: Cons s l = ini (Just (s,l)) Notice also that cata g (Cons s l) = g (Just (s, cata g l)). As an example consider the following morphism: g :: MaybeProd Int Int -> Int g Nothing = 3 g (Just (x,y)) = x*y cata g (Cons 10 EmptyList) evaluates to 30. This can be seen by expanding cata g (Cons 10 EmptyList) = g (Just (10, cata g EmptyList)) = 10*(cata g EmptyList) = 10*(g Nothing) = 10*3. In the same way it can be shown that cata g (Cons 10 (Cons 100 (Cons 1000 EmptyList))) will evaluate to 10*(100*(1000*3)) = 3,000,000. The cata map is closely related to the right fold (see Fold (higher-order function)) of lists foldrList. The morphism lift defined by lift :: (a -> b -> b) -> b -> (MaybeProd a b -> b) lift g b0 Nothing = b0 lift g b0 (Just (x,y)) = g x y relates cata to the right fold foldrList of lists via: foldrList :: (a -> b -> b) -> b -> List a -> b foldrList fun b0 = cata (lift fun b0) The definition of cata implies that foldrList is the right fold and not the left fold. As an example: foldrList (+) 1 (Cons 10 (Cons 100 (Cons 1000 EmptyList))) will evaluate to 1111 and foldrList (*) 3 (Cons 10 (Cons 100 (Cons 1000 EmptyList))) to 3,000,000. Tree fold For a fixed type a, consider the functor mapping types b to a type that contains a copy of each term of a as well as all pairs of b's (terms of the product type of two instances of the type b). An algebra consists of a function to b, which either acts on an a term or two b terms. This merging of a pair can be encoded as two functions of type a -> b resp. b -> b -> b. type TreeAlgebra a b = (a -> b, b -> b -> b) -- the "two cases" function is encoded as (f, g) data Tree a = Leaf a | Branch (Tree a) (Tree a) -- which turns out to be the initial algebra foldTree :: TreeAlgebra a b -> (Tree a -> b) -- catamorphisms map from (Tree a) to b foldTree (f, g) (Leaf x) = f x foldTree (f, g) (Branch left right) = g (foldTree (f, g) left) (foldTree (f, g) right) treeDepth :: TreeAlgebra a Integer -- an f-algebra to numbers, which works for any input type treeDepth = (const 1, \i j -> 1 + max i j) treeSum :: (Num a) => TreeAlgebra a a -- an f-algebra, which works for any number type treeSum = (id, (+)) General case Deeper category theoretical studies of initial algebras reveal that the F-algebra obtained from applying the functor to its own initial algebra is isomorphic to it. Strong type systems enable us to abstractly specify the initial algebra of a functor f as its fixed point a = f a.
The recursively defined catamorphisms can now be coded in a single line, where the case analysis (like in the different examples above) is encapsulated by the fmap. Since the domain of the latter consists of objects in the image of f, the evaluation of the catamorphisms jumps back and forth between a and f a. type Algebra f a = f a -> a -- the generic f-algebras newtype Fix f = Iso { invIso :: f (Fix f) } -- gives us the initial algebra for the functor f cata :: Functor f => Algebra f a -> (Fix f -> a) -- catamorphism from Fix f to a cata alg = alg . fmap (cata alg) . invIso -- note that invIso and alg map in opposite directions Now again the first example, but now via passing the Maybe functor to Fix. Repeated application of the Maybe functor generates a chain of types, which, however, can be united by the isomorphism from the fixed point theorem. We introduce the term zero, which arises from Maybe's Nothing, and identify a successor function with repeated application of Just. This way the natural numbers arise. type Nat = Fix Maybe zero :: Nat zero = Iso Nothing -- every 'Maybe a' has a term Nothing, and Iso maps it into a Nat successor :: Nat -> Nat successor = Iso . Just -- Just maps a to 'Maybe a' and Iso maps back to a new term pleaseWait :: Algebra Maybe String -- again the silly f-algebra example from above pleaseWait (Just string) = "wait.. " ++ string pleaseWait Nothing = "go!" Again, the following will evaluate to "wait.. wait.. wait.. wait.. go!": cata pleaseWait (successor . successor . successor . successor $ zero) And now again the tree example. For this we must provide the tree container data type so that we can set up the fmap (we didn't have to do it for the Maybe functor, as it's part of the standard prelude). data Tcon a b = TconL a | TconR b b instance Functor (Tcon a) where fmap f (TconL x) = TconL x fmap f (TconR y z) = TconR (f y) (f z) type Tree a = Fix (Tcon a) -- the initial algebra end :: a -> Tree a end = Iso . TconL meet :: Tree a -> Tree a -> Tree a meet l r = Iso $ TconR l r treeDepth :: Algebra (Tcon a) Integer -- again, the treeDepth f-algebra example treeDepth (TconL x) = 1 treeDepth (TconR y z) = 1 + max y z The following will evaluate to 4: cata treeDepth $ meet (end "X") (meet (meet (end "YXX") (end "YXY")) (end "YY")) See also Morphism Morphisms of F-algebras From a coalgebra to a final coalgebra: Anamorphism An anamorphism followed by a catamorphism: Hylomorphism Extension of the idea of catamorphisms: Paramorphism Extension of the idea of anamorphisms: Apomorphism References Further reading External links Catamorphisms at HaskellWiki Catamorphisms by Edward Kmett Catamorphisms in F# (Part 1, 2, 3, 4, 5, 6, 7) by Brian McNamara Catamorphisms in Haskell Category theory Recursion schemes Functional programming Morphisms Iteration in programming
Catamorphism
[ "Mathematics" ]
2,724
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Mathematical relations", "Morphisms" ]
4,237,207
https://en.wikipedia.org/wiki/Error%20correction%20code
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code (ECC). The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of errors. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth. The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast. Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks. FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics. FEC information is added to mass storage (magnetic, optical and solid state/flash based) devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability. The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite length frame. Method ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic. A simplistic example of ECC is to transmit each data bit three times, which is known as a (3,1) repetition code. Through a noisy channel, a receiver might see any of eight versions of each transmitted triplet: 000, 001, 010, 100, 011, 101, 110, or 111.
This allows an error in any one of the three samples to be corrected by "majority vote", or "democratic voting". The correcting ability of this ECC is: up to one bit of a triplet in error, or up to two bits of a triplet omitted (cases not listed above). Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits (typically in groups of two to eight bits). Averaging noise to reduce errors ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data. Because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it. This all-or-nothing tendency – the cliff effect – becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit. Interleaving ECC-coded data can reduce the all-or-nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data. Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed. Types The two main categories of ECC codes are block codes and convolutional codes. Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Practical block codes can generally be hard-decoded in time polynomial in their block length. Convolutional codes work on bit or symbol streams of arbitrary length. They are most often soft-decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code that is terminated is also a 'block code' in that it encodes a block of input data, but the block size of a convolutional code is generally arbitrary, while block codes have a fixed size dictated by their algebraic characteristics. Types of termination for convolutional codes include "tail-biting" and "bit-flushing". There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, Multidimensional parity, and Hamming codes. Hamming ECC is commonly used to correct NAND flash memory errors. This provides single-bit error correction and 2-bit error detection. Hamming codes are only suitable for more reliable single-level cell (SLC) NAND.
Denser multi-level cell (MLC) NAND may use multi-bit correcting ECC such as BCH or Reed–Solomon. NOR flash typically does not use any error correction. Classical block codes are usually decoded using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process (discretized) analog signals, and which allow for much higher error-correction performance than hard-decision decoding. Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes. In contrast to classical block codes that often specify an error-detecting or error-correcting ability, many modern block codes such as LDPC codes lack such guarantees. Instead, modern codes are evaluated in terms of their bit error rates. Most forward error correction codes correct only bit-flips, but not bit-insertions or bit-deletions. In this setting, the Hamming distance is the appropriate way to measure the bit error rate. A few forward error correction codes are designed to correct bit-insertions and bit-deletions, such as Marker Codes and Watermark Codes. The Levenshtein distance is a more appropriate way to measure the bit error rate when using such codes. Code-rate and the tradeoff between reliability and data rate The fundamental principle of ECC is to add redundant bits in order to help the decoder to find out the true message that was encoded by the transmitter. The code-rate of a given ECC system is defined as the ratio between the number of information bits and the total number of bits (i.e., information plus redundancy bits) in a given communication package. The code-rate is hence a real number between 0 and 1. A low code-rate close to zero implies a strong code that uses many redundant bits to achieve a good performance, while a large code-rate close to 1 implies a weak code. The redundant bits that protect the information have to be transferred using the same communication resources that they are trying to protect. This causes a fundamental tradeoff between reliability and data rate. At one extreme, a strong code (with low code-rate) can induce a substantial increase in the effective receiver SNR (signal-to-noise ratio), decreasing the bit error rate, at the cost of reducing the effective data rate. At the other extreme, not using any ECC (i.e., a code-rate equal to 1) uses the full channel for information transfer purposes, at the cost of leaving the bits without any additional protection. One interesting question is the following: how efficient in terms of information transfer can an ECC be that has a negligible decoding error rate? This question was answered by Claude Shannon with his second theorem, which says that the channel capacity is the maximum bit rate achievable by any ECC whose error rate tends to zero: for a channel with bandwidth B and signal-to-noise ratio S/N, the capacity is C = B log2(1 + S/N) bits per second. His proof relies on Gaussian random coding, which is not suitable for real-world applications. The upper bound given by Shannon's work inspired a long journey in designing ECCs that can come close to the ultimate performance boundary. Various codes today can attain almost the Shannon limit. However, capacity-achieving ECCs are usually extremely complex to implement. The most popular ECCs strike a trade-off between performance and computational complexity.
Usually, their parameters give a range of possible code rates, which can be optimized depending on the scenario. Usually, this optimization is done in order to achieve a low decoding error probability while minimizing the impact on the data rate. Another criterion for optimizing the code rate is to balance a low error rate against the number of retransmissions in order to minimize the energy cost of the communication. Concatenated ECC codes for improved performance Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed–Solomon) with larger symbol size and block length "mops up" any errors made by the convolutional decoder. Single-pass decoding with this family of error correction codes can yield very low error rates, but for long-range transmission conditions (like deep space) iterative decoding is recommended. Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna. Low-density parity-check (LDPC) Low-density parity-check (LDPC) codes are a class of highly efficient linear block codes made from many single parity check (SPC) codes. They can provide performance very close to the channel capacity (the theoretical maximum) using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel. LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960, but due to the computational effort in implementing encoder and decoder and the introduction of Reed–Solomon codes, they were mostly ignored until the 1990s. LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2 (Digital Video Broadcasting – Satellite – Second Generation), WiMAX (IEEE 802.16e standard for microwave communications), High-Speed Wireless LAN (IEEE 802.11n), 10GBase-T Ethernet (802.3an) and G.hn/G.9960 (ITU-T Standard for networking over power lines, phone lines and coaxial cable). Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS (see fountain codes). Turbo codes Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, they now provide similar performance. One of the earliest commercial applications of turbo coding was the CDMA2000 1x (TIA IS-2000) digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO (TIA IS-856). Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers (Verizon's marketing name for 1xEV-DO is Broadband Access, Sprint's consumer and business marketing names for 1xEV-DO are Power Vision and Mobile Broadband, respectively).
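As a rough numerical illustration of the rate-versus-reliability trade-off discussed earlier in this article, the short sketch below (my own example, not from the article) computes the code rate of two simple codes and, for comparison, the Shannon capacity of an additive-white-Gaussian-noise channel; the bandwidth and signal-to-noise values are arbitrary example figures.

import math

def code_rate(information_bits: int, total_bits: int) -> float:
    # Code rate = information bits / total transmitted bits (between 0 and 1).
    return information_bits / total_bits

def awgn_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    # Shannon-Hartley capacity C = B * log2(1 + S/N) for an AWGN channel.
    return bandwidth_hz * math.log2(1.0 + snr_linear)

if __name__ == "__main__":
    print("(3,1) repetition code rate:", round(code_rate(1, 3), 3))   # strong but slow
    print("(7,4) Hamming code rate:   ", round(code_rate(4, 7), 3))   # weaker but faster
    # Example channel: 1 MHz of bandwidth at 10 dB SNR (a linear factor of 10).
    print("AWGN channel capacity:", round(awgn_capacity_bps(1e6, 10.0)), "bit/s")

No practical code reaches the printed capacity exactly; the point made above is that modern codes such as LDPC, turbo and polar codes can approach it closely at the cost of decoder complexity.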
Local decoding and testing of codes Sometimes it is only necessary to decode single bits of the message, or to check whether a given signal is a codeword, and do so without looking at the entire signal. This can make sense in a streaming setting, where codewords are too large to be classically decoded fast enough and where only a few bits of the message are of interest for now. Such codes have also become an important tool in computational complexity theory, e.g., for the design of probabilistically checkable proofs. Locally decodable codes are error-correcting codes for which single bits of the message can be probabilistically recovered by only looking at a small (say constant) number of positions of a codeword, even after the codeword has been corrupted at some constant fraction of positions. Locally testable codes are error-correcting codes for which it can be checked probabilistically whether a signal is close to a codeword by only looking at a small number of positions of the signal. Not all locally decodable codes (LDCs) are locally testable codes (LTCs), nor are they all locally correctable codes (LCCs); q-query LCCs are bounded exponentially in length, while LDCs can have subexponential lengths. Interleaving Interleaving is frequently used in digital communication and storage systems to improve the performance of forward error correcting codes. Many communication channels are not memoryless: errors typically occur in bursts rather than independently. If the number of errors within a code word exceeds the error-correcting code's capability, it fails to recover the original code word. Interleaving alleviates this problem by shuffling source symbols across several code words, thereby creating a more uniform distribution of errors. Therefore, interleaving is widely used for burst error-correction. The analysis of modern iterated codes, like turbo codes and LDPC codes, typically assumes an independent distribution of errors. Systems using LDPC codes therefore typically employ additional interleaving across the symbols within a code word. For turbo codes, an interleaver is an integral component and its proper design is crucial for good performance. The iterative decoding algorithm works best when there are no short cycles in the factor graph that represents the decoder; the interleaver is chosen to avoid short cycles. Interleaver designs include: rectangular (or uniform) interleavers (similar to the method using skip factors described above); convolutional interleavers; random interleavers (where the interleaver is a known random permutation); S-random interleavers (where the interleaver is a known random permutation with the constraint that no input symbols within distance S appear within a distance of S in the output); and contention-free quadratic permutation polynomial (QPP) interleavers. An example of use is in the 3GPP Long Term Evolution mobile telecommunication standard. In multi-carrier communication systems, interleaving across carriers may be employed to provide frequency diversity, e.g., to mitigate frequency-selective fading or narrowband interference. Example Consider a message made up of several four-bit codewords, each able to correct a single bit error, transmitted without interleaving. A burst error spanning four consecutive positions alters one codeword in only one bit, which can be corrected, but alters the adjacent codeword in three bits, so either it cannot be decoded at all or it might be decoded incorrectly.
With interleaving, the codewords are transmitted in a shuffled order, so after deinterleaving the same burst is spread across several codewords. Each affected codeword is then altered in only one bit, so a one-bit error-correcting code will decode everything correctly. The same idea applies to plain text: transmitted without interleaving, a burst error leaves one word of the sentence mostly unintelligible and difficult to correct, whereas with interleaving no word is completely lost and the missing letters can be recovered with minimal guesswork. Disadvantages of interleaving Use of interleaving techniques increases total delay. This is because the entire interleaved block must be received before the packets can be decoded. Interleavers also hide the structure of errors; without an interleaver, more advanced decoding algorithms can take advantage of the error structure and achieve more reliable communication than a simpler decoder combined with an interleaver. An example of such an algorithm is based on neural network structures. Software for error-correcting codes Simulating the behaviour of error-correcting codes (ECCs) in software is a common practice to design, validate and improve ECCs. The upcoming wireless 5G standard raises a new range of applications for software ECCs: Cloud Radio Access Networks (C-RAN) in a software-defined radio (SDR) context. The idea is to use software ECCs directly in the communications. For instance, in 5G, the software ECCs could be located in the cloud and the antennas connected to these computing resources, thereby improving the flexibility of the communication network and potentially increasing the energy efficiency of the system. In this context, various open-source software packages are available, listed below (non-exhaustive). AFF3CT (A Fast Forward Error Correction Toolbox): a full communication chain in C++ (many supported codes like Turbo, LDPC, Polar codes, etc.), very fast and specialized in channel coding (can be used as a program for simulations or as a library for the SDR). IT++: a C++ library of classes and functions for linear algebra, numerical optimization, signal processing, communications, and statistics. OpenAir: implementation (in C) of the 3GPP specifications concerning the Evolved Packet Core Networks. List of error-correcting codes AN codes Algebraic geometry code BCH code, which can be designed to correct any arbitrary number of errors per code block. Barker code, used for radar, telemetry, ultrasound, Wi-Fi, DSSS mobile phone networks, GPS, etc.
Berger code Constant-weight code Convolutional code Expander codes Group codes Golay codes, of which the Binary Golay code is of practical interest Goppa code, used in the McEliece cryptosystem Hadamard code Hagelbarger code Hamming code Latin square based code for non-white noise (prevalent for example in broadband over powerlines) Lexicographic code Linear Network Coding, a type of erasure correcting code across networks instead of point-to-point links Long code Low-density parity-check code, also known as Gallager code, as the archetype for sparse graph codes LT code, which is a near-optimal rateless erasure correcting code (Fountain code) m of n codes Nordstrom–Robinson code, used in geometry and group theory Online code, a near-optimal rateless erasure correcting code Polar code (coding theory) Raptor code, a near-optimal rateless erasure correcting code Reed–Solomon error correction Reed–Muller code Repeat-accumulate code Repetition codes, such as Triple modular redundancy Spinal code, a rateless, nonlinear code based on pseudo-random hash functions Tornado code, a near-optimal erasure correcting code, and the precursor to Fountain codes Turbo code Walsh–Hadamard code Cyclic redundancy checks (CRCs), which can correct 1-bit errors for messages up to a maximum length determined by the degree of the generator polynomial Locally Recoverable Codes See also Burst error-correcting code Code rate Erasure codes Error detection and correction Error-correcting codes with feedback Linear code Quantum error correction Soft-decision decoder References Further reading "Error Correction Code in Single Level Cell NAND Flash memories" 2007-02-16 "Error Correction Code in NAND Flash memories" 2004-11-29 Observations on Errors, Corrections, & Trust of Dependent Systems, by James Hamilton, 2012-02-26 Sphere Packings, Lattices and Groups, by J. H. Conway and N. J. A. Sloane, Springer Science & Business Media, 2013-03-09, 682 pages. External links Error correction zoo: database of error-correcting codes. lpdec: library for LP decoding and related things (Python) Error detection and correction
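To tie together the (3,1) repetition code and the interleaving example described above, the following is a small self-contained sketch (my own illustration; the message length, interleaver dimensions and burst position are arbitrary choices): it triples each data bit, optionally passes the stream through a rectangular block interleaver, flips a burst of consecutive bits, and then decodes by majority vote.

import random

def encode_repetition(bits):
    # (3,1) repetition code: transmit each data bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode_repetition(coded):
    # Majority vote over each received triplet.
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

def interleave(stream, rows):
    # Rectangular (block) interleaver: write row-wise, read column-wise.
    cols = len(stream) // rows
    return [stream[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(stream, rows):
    cols = len(stream) // rows
    return [stream[c * rows + r] for r in range(rows) for c in range(cols)]

def burst_flip(stream, start, length):
    # Corrupt a burst of consecutive bits.
    out = list(stream)
    for i in range(start, min(start + length, len(out))):
        out[i] ^= 1
    return out

if __name__ == "__main__":
    data = [random.randint(0, 1) for _ in range(12)]
    coded = encode_repetition(data)                      # 36 transmitted bits

    # Without interleaving, a 3-bit burst wipes out one whole triplet.
    plain_rx = burst_flip(coded, 6, 3)
    print("decoded correctly without interleaving:", decode_repetition(plain_rx) == data)

    # With a 6-row block interleaver, the same burst lands in three different triplets.
    tx = interleave(coded, 6)
    rx = burst_flip(tx, 6, 3)
    decoded = decode_repetition(deinterleave(rx, 6))
    print("decoded correctly with interleaving:   ", decoded == data)

With these parameters the uninterleaved transmission mis-decodes the data bit whose entire triplet falls inside the burst, while the interleaved transmission spreads the same burst across three different triplets, each of which the majority vote can still correct.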
Error correction code
[ "Engineering" ]
4,705
[ "Error detection and correction", "Reliability engineering" ]
1,034,781
https://en.wikipedia.org/wiki/Vulgarity
Vulgarity is the quality of being common, coarse, or unrefined. This judgement may refer to language, visual art, social class, or social climbers. John Bayley claims the term can never be self-referential, because to be aware of vulgarity is to display a degree of sophistication which thereby elevates the subject above the vulgar. Evolution of the term From the fifteenth to seventeenth centuries, "vulgar" simply described the common language or vernacular of a country. From the mid-seventeenth century onward, it began to take on a pejorative aspect: "having a common and offensively mean character, coarsely commonplace; lacking in refinement or good taste; uncultured; ill bred". In the Victorian age, vulgarity broadly described many activities, such as wearing ostentatious clothing. In a George Eliot novel, one character could be vulgar for talking about money, a second because he criticizes the first for doing so, and a third for being fooled by the excessive refinement of the second. The effort to avoid vulgar phrasing could leave characters at a loss for words. In George Meredith's Beauchamp's Career, an heiress does not wish to make the commonplace statement that she is "engaged", nor "betrothed", "affianced", or "plighted". Though such words are not vulgarity in the vulgar sense, they nonetheless could stigmatize the user as a member of a socially inferior class. Even favored euphemisms such as toilet eventually become stigmatized like the words they replace (the so-called euphemism treadmill), and currently favored words serve as a sort of "cultural capital". Language Vulgarity, in the sense of vulgar speech, can refer to language which is offensive or obscene. The word most associated with the verbal form of vulgarity is "cursing." However, there are many subsections of vulgar words. American psychologist Timothy Jay classifies "dirty words" into types because doing so "allows people interested in language to define the different types of reference or meaning that dirty words employ. One can see that what is considered taboo or obscene revolves around a few dimensions of human experience [and] that there is a logic behind dirty word usage." One of the most commonly used vulgar terms in the English language is fuck. References External links Aesthetics Cultural concepts Etiquette Language
Vulgarity
[ "Biology" ]
493
[ "Etiquette", "Behavior", "Human behavior" ]
1,034,794
https://en.wikipedia.org/wiki/Oneirology
In the field of psychology, the subfield of oneirology (from the Greek oneiron, "dream") is the scientific study of dreams. Research seeks correlations between dreaming and knowledge about the functions of the brain, as well as an understanding of how the brain works during dreaming as it pertains to memory formation and mental disorders. The study of oneirology can be distinguished from dream interpretation in that the aim is to quantitatively study the process of dreams instead of analyzing the meaning behind them. History In the 19th century, two advocates of this discipline were the French sinologist Marquis d'Hervey de Saint Denys and Alfred Maury. The field gained momentum in 1952, when Nathaniel Kleitman and his student Eugene Aserinsky discovered regular cycles in sleep. A further experiment by Kleitman and William C. Dement, then another medical student, identified the particular period of sleep during which electrical brain activity, as measured by an electroencephalograph (EEG), closely resembled that of waking, and in which the eyes dart about actively. This kind of sleep became known as rapid eye movement (REM) sleep, and Kleitman and Dement's experiment found a correlation of 0.80 between REM sleep and dreaming. Field of work Research into dreams includes exploration of the mechanisms of dreaming, the influences on dreaming, and disorders linked to dreaming. Work in oneirology overlaps with neurology and can vary from quantifying dreams to analyzing brain waves during dreaming, to studying the effects of drugs and neurotransmitters on sleeping or dreaming. Though debate continues about the purpose and origins of dreams, there could be great gains from studying dreams as a function of brain activity. For example, knowledge gained in this area could have implications for the treatment of certain mental illnesses. Mechanisms of dreaming Dreaming occurs mainly during REM sleep, and brain scans recording brain activity have shown heavy activity in the limbic system and the amygdala during this period. Though current research has reversed the myth that dreaming occurs only during REM sleep, it has also shown that the dreams reported in non-rapid eye movement (NREM) and REM sleep differ qualitatively and quantitatively, suggesting that the mechanisms that control each are different. During REM sleep, researchers theorize that the brain goes through a process known as synaptic efficacy refreshment. This is observed as brain waves self-firing during sleep, in slow cycles at a rate of around 14 Hz, and is believed to serve the purpose of consolidating recent memories and reinforcing old memories. In this type of brain stimulation, the dreaming that occurs is a by-product of the process. Stages of sleep During normal sleep cycles, humans alternate between non-rapid eye movement (NREM) sleep and REM sleep. The brain waves characteristic of dreaming that are observed during REM sleep are the most commonly studied in dream research because most dreaming occurs during REM sleep. REM sleep In 1952, Eugene Aserinsky discovered REM sleep while working in the laboratory of his PhD advisor. Aserinsky noticed that the sleepers' eyes fluttered beneath their closed eyelids, and later used a polygraph machine to record their brain waves during these periods. In one session, he awakened a subject who was wailing and crying out during REM and confirmed his suspicion that dreaming was occurring. In 1953, Aserinsky and his advisor published the ground-breaking study in Science.
Accumulated observation shows that dreams are strongly associated with REM sleep, during which an electroencephalogram shows brain activity to be most like wakefulness. While REMS is associated with dreaming, not all REMS periods result in reported dreams, and not all dreams occur during REMS. Participant-nonremembered dreams during NREM are normally more mundane in comparison. During a typical lifespan, a human spends a total of about six years dreaming (which is about two hours each night). Most dreams last only 5 to 20 minutes. It is unknown where in the brain dreams originate, if there is a single origin for dreams, if multiple portions of the brain are involved, or what the purpose of dreaming is for the body or mind. During REM sleep, the release of certain neurotransmitters is completely suppressed. As a result, motor neurons are not stimulated, a condition known as REM atonia. This prevents dreams from resulting in dangerous movements of the body. Animals have complex dreams and are able to retain and recall long sequences of events while they are asleep. Studies show that various species of mammals and birds experience REM during sleep, and follow the same series of sleeping states as humans. The discovery that dreams take place primarily during a distinctive electrophysiological state of sleep (REM), which can be identified by objective criteria, led to rebirth of interest in this phenomenon. When REM sleep episodes were timed for their duration and subjects awakened to make reports before major editing or forgetting could take place, it was determined that subjects accurately matched the length of time they judged the dream narrative to occupy with the length of REM sleep that preceded the awakening. This close correlation of REM sleep and dream experience was the basis of the first series of reports describing the nature of dreaming: that it is a regular nightly occurrence, rather than an occasional phenomenon, and that it is a high-frequency activity within each sleep period occurring at predictable intervals of approximately every 60–90 minutes in all humans throughout the life span. REM sleep episodes and the dreams that accompany them lengthen progressively across the night, with the first episode the shortest, of approximately 10–12 minutes duration, and the second and third episodes increasing to 15–20 minutes. Dreams at the end of the night may last typically 15 minutes, although these may be experienced as several distinct stories due to momentary arousals interrupting sleep as the night ends. Dream reports can normally be made 50% of the time when an awakening occurs prior to the end of the first REM period. This rate of retrieval is increased to about 99% when awakenings occur during the last REM period of the night. This increase in the ability to recall appears to be related to intensification across the night in the vividness of dream imagery, colors and emotions. The dream story itself in the last REM period is farthest from reality, containing more bizarre elements, and it is these properties, coupled with the increased likelihood of morning waking review to take place, that heighten the chance of recall of the last dream. 
Definition of a dream The definition of a dream used in quantitative research rests on four base components: a dream is a form of thinking that occurs under minimal brain direction, when external stimuli are blocked and the part of the brain that recognizes the self shuts down; it is a form of experience that we believe we experience through our senses; it is something memorable; and it involves some interpretation of the experience by the self. In summary, a dream, as defined by G. William Domhoff and Adam Schneider, is "a report of a memory of a cognitive experience that happens under the kinds of conditions that are most frequently produced in a state called 'sleep'." Commonplace bizarreness in dreaming Certain kinds of bizarre cognitions, such as disjunctive cognitions and interobjects, are common in dreams. Interobject Interobjects, like disjunctive cognitions, are a commonplace bizarreness of dreamlife. Interobjects are a kind of dream condensation that creates a new object that could not occur in waking life. It may have a vague structure that is described as "something between an X and a Y". Hobson dreamt of "a piece of hardware, something like the lock of a door or perhaps a pair of paint-frozen hinges." Authentic dreaming Authentic dreams are defined by their tendency to occur "within the realm of experience" and reflect actual memories or experiences the dreamer can relate to. Authentic dreams are believed to be a side effect of synaptic efficacy refreshment that occurs without errors. Research suggests that the brain stimulation that occurs during the dreaming of authentic dreams is significant in reinforcing neurological pathways, serving as a method for the mind to "rehearse" certain things during sleep. Illusory dreaming Illusory dreams are defined as dreams that contain impossible, incongruent, or bizarre content and are hypothesized to stem from memory circuits accumulating efficacy errors. In theory, old memories that have undergone synaptic efficacy refreshment multiple times throughout one's lifetime accumulate errors that manifest as illusory dreams when stimulated. Qualities of illusory dreaming have been linked to delusions observed in mental disorders. Illusory dreams are believed to stem most likely from older memories that experience this accumulation of errors, in contrast to authentic dreams, which stem from more recent experiences. Influences on dreaming One aspect of dreaming studied is the capability to externally influence the contents of dreams with various stimuli. One such successful connection was made to the olfactory system, where the emotions of dreams were influenced through a smell stimulus. This research showed that the introduction of a positive-smelling stimulus (roses) induced positive dreams, while a negative-smelling stimulus (rotten eggs) induced negative dreams. Memories and experience Though there is much debate within the field about the purpose of dreaming, a leading theory involves the consolidation of memories and experiences that occurs during REM sleep. The involuntary electrical stimulation the brain undergoes during sleep is believed to be a basis for the majority of dreaming. Research suggests that dreams, especially during REM sleep, help consolidate memories by integrating new information with existing memories. This process may prioritize emotionally significant or unresolved experiences. The link between memory, sleep, and dreams becomes more significant in studies analyzing memory consolidation during sleep.
Research has shown that NREM sleep is responsible for the consolidation of facts and episodes, in contrast to REM sleep, which consolidates the more emotionally related aspects of memory. The correlation between REM and emotional consolidation could be interpreted as the reason why dreams are of such an emotional nature and produce strong reactions from humans. Interpersonal attachment In addition to the role that people are consciously aware of memory and experience playing in dreaming, unconscious factors such as the health of relationships also influence the types of dreams the brain produces. Of the people analyzed, those suffering from "insecure attachments" were found to dream more frequently and more vividly than those who were evaluated to have "secure attachments". Drugs affecting dreaming Correlations between drug use and dreaming have been documented; in particular, drugs such as sedatives suppress dreaming because they disrupt the cycles and stages of sleep and prevent the user from reaching REM. Drugs used for their stimulating properties (cocaine, methamphetamine, and ecstasy) have been shown to also decrease the restorative properties of REM sleep and its duration. Dreaming disorders Dreaming disorders are difficult to quantify due to the ambiguous nature of dreaming. However, dreaming disorders can be linked to psychological disorders such as post-traumatic stress disorder, expressed as nightmares. Research into dreaming also suggests similarities and links between illusory dreaming and delusions. Post-traumatic stress disorder Diagnostic symptoms include re-experiencing the original trauma(s), by means of flashbacks or nightmares; avoidance of stimuli associated with the trauma; and increased arousal, such as difficulty falling or staying asleep, anger, and hypervigilance. Links between post-traumatic stress disorder (PTSD) and dreaming have been made by studying the flashbacks or nightmares that sufferers experience. Measurements of the brain waves exhibited by subjects experiencing these episodes showed great similarity to those of dreaming. The drugs used to treat those suffering from these symptoms of flashbacks and nightmares would suppress not only the traumatic episodes but also any other sort of dreaming function. Schizophrenia The symptoms of schizophrenia involve abnormalities in the perception or expression of reality, primarily focused on delusions and hallucinations. The delusions experienced by those with schizophrenia have been likened to illusory dreams that have come to be interpreted by the subject as actual experiences. Additional research has shown that medication used to suppress the symptoms of schizophrenia also influences the REM cycle of those taking it and, as a result, their patterns of sleep and dreaming. See also The Lathe of Heaven Carl Jung Sigmund Freud Dream Dreamwork Dreams in analytical psychology Dreaming (journal) Lucid dreaming Oneiromancy Oneironautics Unconscious mind International Association for the Study of Dreams (IASD) References Further reading Carl Jung Sigmund Freud Dream Lucid dreams Analytical psychology Psychoanalytic theory Symbols Sleep physiology
Oneirology
[ "Mathematics", "Biology" ]
2,541
[ "Oneirology", "Behavior", "Sleep physiology", "Dream", "Symbols", "Sleep" ]
1,034,826
https://en.wikipedia.org/wiki/Secure%20file%20transfer%20program
sftp is a command-line interface client program to transfer files using the SSH File Transfer Protocol (SFTP), which runs inside the encrypted Secure Shell connection. It provides an interactive interface similar to that of traditional command-line FTP clients. One common implementation of sftp is part of the OpenSSH project. There are other command-line SFTP clients that use different names, such as lftp, PSFTP and PSCP (from the PuTTY package) and WinSCP. See also Comparison of SSH servers Comparison of SSH clients References Command-line software SSH File Transfer Protocol clients
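The article describes interactive use of an SFTP client; for illustration, the short Python sketch below drives the same protocol programmatically through the third-party paramiko library (not mentioned in the article and used here only as an assumed example), mirroring the typical ls/get/put subcommands of an interactive sftp session. The host name, credentials and file paths are placeholders.

```python
# Minimal sketch of an SFTP session using the third-party "paramiko" library.
# The host, credentials and paths below are hypothetical placeholders.
import paramiko

def sftp_demo():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys (demo only)
    client.connect("example.com", username="alice", password="secret")  # SFTP runs inside this SSH connection

    sftp = client.open_sftp()
    print(sftp.listdir("."))             # roughly equivalent to the interactive "ls" subcommand
    sftp.get("remote.txt", "local.txt")  # like "get remote.txt"
    sftp.put("local.txt", "copy.txt")    # like "put local.txt copy.txt"
    sftp.close()
    client.close()

if __name__ == "__main__":
    sftp_demo()
```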
Secure file transfer program
[ "Technology" ]
125
[ "Command-line software", "Computing commands", "Windows commands" ]
1,034,969
https://en.wikipedia.org/wiki/Levelling
Levelling or leveling (American English; see spelling differences) is a branch of surveying, the object of which is to establish or verify or measure the height of specified points relative to a datum. It is widely used in geodesy and cartography to measure vertical position with respect to a vertical datum, and in construction to measure height differences of construction artifacts. Optical levelling Optical levelling, also known as spirit levelling and differential levelling, employs an optical level, which consists of a precision telescope with crosshairs and stadia marks. The crosshairs are used to establish the level point on the target, and the stadia allow range-finding; stadia are usually at ratios of 100:1, in which case one metre between the stadia marks on the level staff (or rod) corresponds to a distance of 100 metres to the target. The complete unit is normally mounted on a tripod, and the telescope can freely rotate 360° in a horizontal plane. The surveyor adjusts the instrument's level by coarse adjustment of the tripod legs and fine adjustment using three precision levelling screws on the instrument to make the rotational plane horizontal. The surveyor does this with the use of a bull's eye level built into the instrument mount. Procedure The surveyor looks through the eyepiece of the telescope while an assistant holds a vertical level staff which is graduated in inches or centimeters. The level staff is placed vertically, using a level, with its foot on the point for which the level measurement is required. The telescope is rotated and focused until the level staff is plainly visible in the crosshairs. In the case of a high-accuracy manual level, the fine level adjustment is made by an altitude screw, using a high-accuracy bubble level fixed to the telescope. This can be viewed by a mirror whilst adjusting, or the ends of the bubble can be displayed within the telescope, which also allows assurance of the accurate level of the telescope whilst the sight is being taken. However, in the case of an automatic level, altitude adjustment is done automatically by a suspended prism due to gravity, as long as the coarse levelling is accurate within certain limits. When level, the staff graduation reading at the crosshairs is recorded, and an identifying mark or marker placed where the level staff rested on the object or position being surveyed. A typical procedure for a linear track of levels from a known datum is as follows. Set up the instrument within sighting distance of a point of known or assumed elevation. A rod or staff is held vertical on that point and the instrument is used manually or automatically to read the rod scale. This gives the height of the instrument above the starting (backsight) point and allows the height of the instrument (H.I.) above the datum to be computed. The rod is then held on an unknown point and a reading is taken in the same manner, allowing the elevation of the new (foresight) point to be computed. The difference between these two readings equals the change in elevation, which is why this method is also called differential levelling. The procedure is repeated until the destination point is reached. It is usual practice to perform either a complete loop back to the starting point or else close the traverse on a second point whose elevation is already known. The closure check guards against blunders in the operation, and allows residual error to be distributed in the most likely manner among the stations.
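To make the backsight/foresight bookkeeping just described concrete, here is a small Python sketch of a differential-levelling reduction; the rod readings and the starting benchmark elevation are invented for illustration and are not taken from any survey mentioned in the article.

```python
# Differential (spirit) levelling reduction: height of instrument (H.I.) method.
# All readings (in metres) and the benchmark elevation are hypothetical examples.

def reduce_levels(start_elevation, setups):
    """Each setup is a (backsight, foresight) pair of rod readings in metres."""
    elevation = start_elevation
    for backsight, foresight in setups:
        hi = elevation + backsight        # height of instrument above the datum
        elevation = hi - foresight        # elevation of the new (foresight) point
    return elevation

if __name__ == "__main__":
    setups = [(1.523, 0.887), (1.105, 1.430), (0.943, 1.210)]   # invented readings
    end = reduce_levels(100.000, setups)
    print(f"Elevation of final point: {end:.3f} m")
    # Equivalent check: the total rise equals sum(backsights) - sum(foresights).
    rise = sum(b for b, _ in setups) - sum(f for _, f in setups)
    print(f"Total change in elevation: {rise:+.3f} m")
```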
Some instruments provide three crosshairs which allow stadia measurement of the foresight and backsight distances. These also allow use of the average of the three readings (3-wire leveling) as a check against blunders and for averaging out the error of interpolation between marks on the rod scale. The two main types of levelling are single-levelling as already described, and double-levelling (double-rodding). In double-levelling, a surveyor takes two foresights and two backsights and makes sure the difference between the foresights and the difference between the backsights are equal, thereby reducing the amount of error. Double-levelling costs twice as much as single-levelling. Turning a level When using an optical level, the endpoint may be out of the effective range of the instrument. There may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning is a term used when referring to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is being kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level and the height difference is used to find the new elevation of the level gun. This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet high, allowing the level to be set much higher than the base of the rod. Trigonometric levelling The other standard method of levelling in construction and surveying is called trigonometric levelling, which is preferred when levelling "out" to a number of points from one stationary point. This is done by using a total station, or any other instrument that reads the vertical (zenith) angle to the rod, and the change in elevation is calculated using trigonometric functions (see example below). At greater distances (typically 1,000 feet and greater), the curvature of the Earth and the refraction of the line of sight through the air must be taken into account in the measurements as well (see section below). For example: an instrument at Point A reading to a rod at Point B a zenith angle of 88°15'22" (degrees, minutes, seconds of arc) and a slope distance of 305.50 feet, not factoring rod or instrument height, would be calculated thus: cos(88°15'22") × 305.50 ≈ 9.30 ft, meaning an elevation change of approximately 9.30 feet between Points A and B. So if Point A is at 1,000 feet of elevation, then Point B would be at approximately 1,009.30 feet of elevation, as the reference line (0°) for zenith angles is straight up, going clockwise one complete revolution, and so an angle reading of less than 90 degrees (horizontal or flat) would be looking uphill and not down (and the opposite for angles greater than 90 degrees), and so would gain elevation. Refraction and curvature The curvature of the earth means that a line of sight that is horizontal at the instrument will be higher and higher above a spheroid at greater distances. The effect may be insignificant for some work at distances under 100 meters. The increase in height of a straight line above the spheroid over a distance d is approximately h = d²/(2R), where R is the radius of the earth.
The line of sight is horizontal at the instrument, but is not a straight line because of atmospheric refraction. The change of air density with elevation causes the line of sight to bend toward the earth. The combined correction for refraction and curvature is approximately h ≈ 0.067 d² metres (with the sight distance d in kilometres) or h ≈ 0.021 d² feet (with d in thousands of feet). For precise work these effects need to be calculated and corrections applied. For most work it is sufficient to keep the foresight and backsight distances approximately equal so that the refraction and curvature effects cancel out. Refraction is generally the greatest source of error in leveling. For short level lines the effects of temperature and pressure are generally insignificant, but the effect of the temperature gradient dT / dh can lead to errors. Levelling loops and gravity variations Assuming error-free measurements, if the Earth's gravity field were completely regular and gravity constant, leveling loops would always close precisely: the sum of the measured height differences, Σ Δh_i, would be zero around a loop. In the real gravity field of the Earth, this happens only approximately; on small loops typical of engineering projects, the loop closure is negligible, but on larger loops covering regions or continents it is not. Instead of height differences, geopotential differences do close around loops: Σ g_i Δh_i = 0, where g_i stands for gravity at the leveling interval i. For precise leveling networks on a national scale, the latter formula should always be used: geopotential differences g_i Δh_i, rather than raw height differences, should be used in all computations, producing geopotential values for the benchmarks of the network. High precision levelling, especially when conducted over long distances as used for the establishment and maintenance of vertical datums, is called geodetic levelling.
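As a numerical illustration of the trigonometric-levelling example and of the curvature-and-refraction correction discussed above, the following Python sketch reproduces the 9.30 ft zenith-angle reduction and applies the approximate combined correction; the coefficient used for the correction is the common rule-of-thumb value assumed above, not a figure taken from any particular survey standard.

```python
# Trigonometric levelling reduction plus an approximate curvature-and-refraction
# correction, following the worked example in the text above.
import math

def zenith_to_rise(zenith_deg, zenith_min, zenith_sec, slope_distance_ft):
    """Elevation change = slope distance * cos(zenith angle)."""
    zenith = math.radians(zenith_deg + zenith_min / 60 + zenith_sec / 3600)
    return slope_distance_ft * math.cos(zenith)

def curvature_refraction_ft(distance_ft, coefficient=0.021):
    """Approximate combined correction in feet; 'coefficient' is the rule-of-thumb
    value per (1000 ft)^2 of sight distance assumed in the text above."""
    return coefficient * (distance_ft / 1000.0) ** 2

if __name__ == "__main__":
    rise = zenith_to_rise(88, 15, 22, 305.50)
    print(f"Elevation change A -> B: {rise:.2f} ft")   # about +9.30 ft
    print(f"Elevation of B: {1000 + rise:.2f} ft")     # about 1009.30 ft
    # Curvature and refraction only matter over longer sights, e.g. 2000 ft:
    print(f"Correction over 2000 ft: {curvature_refraction_ft(2000):.3f} ft")
```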
Three level screws are used to level the instrument, as opposed to the four screws historically found in dumpy levels. Laser level Laser levels project a beam which is visible and/or detectable by a sensor on the leveling rod. This style is widely used in construction work but not for more precise control work. An advantage is that one person can perform the levelling independently, whereas other types require one person at the instrument and one holding the rod. The sensor can be mounted on earth-moving machinery to allow automated grading. See also Astrogeodetic levelling Dynamic height Glossary of levelling terms Hydrostatic levelling Land levelling Orthometric height Physical geodesy Survey camp References External links USALandSurveyor Differential leveling video tutorials E-Learning-site with online-exercise for differential levelling Differential levelling online calculation Civil engineering Geomatics engineering Surveying Vertical position
Levelling
[ "Physics", "Engineering" ]
2,304
[ "Vertical position", "Physical quantities", "Distance", "Construction", "Surveying", "Civil engineering" ]
1,034,972
https://en.wikipedia.org/wiki/Natural%20border
A natural border is a border between states or their subdivisions which is concomitant with natural formations such as rivers or mountain ranges. The "doctrine of natural boundaries" developed in Western culture in the 18th century being based upon the "natural" ideas of Jean-Jacques Rousseau and developing concepts of nationalism. The similar concept in China developed earlier from natural zones of control. Natural borders have historically been strategically useful because they are easily defended. Natural borders remain meaningful in modern warfare even though military technology and engineering have somewhat reduced their strategic value. Expanding until natural borders are reached, and maintaining those borders once conquered, have been a major policy goal for a number of states. For example, the Roman Republic, and later, the Roman Empire expanded continuously until it reached certain natural borders: first the Alps, later the Rhine river, the Danube river and the Sahara desert. From the Middle Ages onwards until the 19th century, France sought to expand its borders towards the Alps, the Pyrenees, and the Rhine River. Natural borders can be a source of territorial disputes when they shift. One such example is the Rio Grande, which defines part of the border between the United States and Mexico, whose movement has led to multiple conflicts. Natural borders are not to be confused with landscape borders, which are also geographical features that demarcate political boundaries. Although landscape borders, like natural borders, also take forms of forests, water bodies, and mountains, they are manmade instead of natural. Installing a landscape border, usually motivated by demarcating treaty-designated political boundaries, goes against nature by modifying the borderland's natural geography. For one, China's Song Dynasty built an extensive defensive forest in its northern border to thwart the nomadic Khitan people. Criticism In Chapter IV of his 1916 book The New Europe: Essays in Reconstruction, British historian Arnold J. Toynbee criticized the concept of natural borders. Specifically, Toynbee criticized this concept as providing a justification for launching additional wars so that countries can attain their natural borders. Toynbee also pointed out how once a country attained one set of natural borders, it could subsequently aim to attain another, further set of natural borders; for instance, the German Empire set its western natural border at the Vosges Mountains in 1871 but during World War I, some Germans began to advocate for even more western natural borders—specifically ones that extend all of the way up to Calais and the English Channel—conveniently justifying the permanent German retention of those Belgian and French territories that Germany had just conquered during World War I. As an alternative to the idea of natural borders, Toynbee proposes making free trade, partnership, and cooperation between various countries with interconnected economies considerably easier so that there would be less need for countries to expand even further—whether to their natural borders or otherwise. In addition, Toynbee advocated making national borders based more on the principle of national self-determination—as in, based on which country the people in a particular area or territory actually wanted to live in. See also Natural borders of France References Borders Main Nationalism
Natural border
[ "Physics" ]
621
[ "Spacetime", "Borders", "Space" ]
1,035,039
https://en.wikipedia.org/wiki/Smooth%20number
In number theory, an n-smooth (or n-friable) number is an integer whose prime factors are all less than or equal to n. For example, a 7-smooth number is a number in which every prime factor is at most 7. Therefore, 49 = 7^2 and 15750 = 2 × 3^2 × 5^3 × 7 are both 7-smooth, while 11 and 702 = 2 × 3^3 × 13 are not 7-smooth. The term seems to have been coined by Leonard Adleman. Smooth numbers are especially important in cryptography, which relies on factorization of integers. 2-smooth numbers are simply the powers of 2, while 5-smooth numbers are also known as regular numbers. Definition A positive integer is called B-smooth if none of its prime factors are greater than B. For example, 1,620 has prime factorization 2^2 × 3^4 × 5; therefore 1,620 is 5-smooth because none of its prime factors are greater than 5. This definition includes numbers that lack some of the smaller prime factors; for example, both 10 and 12 are 5-smooth, even though they miss out the prime factors 3 and 5, respectively. All 5-smooth numbers are of the form 2^a × 3^b × 5^c, where a, b and c are non-negative integers. The 3-smooth numbers have also been called "harmonic numbers", although that name has other more widely used meanings. 5-smooth numbers are also called regular numbers or Hamming numbers; 7-smooth numbers are also called humble numbers, and sometimes called highly composite, although this conflicts with another meaning of highly composite numbers. Here, note that B itself is not required to appear among the factors of a B-smooth number. If the largest prime factor of a number is p then the number is B-smooth for any B ≥ p. In many scenarios B is prime, but composite numbers are permitted as well. A number is B-smooth if and only if it is p-smooth, where p is the largest prime less than or equal to B. Applications An important practical application of smooth numbers is in the fast Fourier transform (FFT) algorithms (such as the Cooley–Tukey FFT algorithm), which operate by recursively breaking down a problem of a given size n into problems the size of its factors. By using B-smooth numbers, one ensures that the base cases of this recursion are small primes, for which efficient algorithms exist. (Large prime sizes require less-efficient algorithms such as Bluestein's FFT algorithm.) 5-smooth or regular numbers play a special role in Babylonian mathematics. They are also important in music theory (see Limit (music)), and the problem of generating these numbers efficiently has been used as a test problem for functional programming. Smooth numbers have a number of applications to cryptography. While most applications center around cryptanalysis (e.g. the fastest known integer factorization algorithms, such as the general number field sieve), the VSH hash function is another example of a constructive use of smoothness to obtain a provably secure design. Distribution Let Ψ(x, y) denote the number of y-smooth integers less than or equal to x (the de Bruijn function). If the smoothness bound B is fixed and small, there is a good estimate for Ψ(x, B): Ψ(x, B) ≈ (1 / π(B)!) × the product, over all primes p ≤ B, of (log x / log p), where π(B) denotes the number of primes less than or equal to B. Otherwise, define the parameter u as u = log x / log y; that is, x = y^u. Then Ψ(x, y) = x · ρ(u) · (1 + o(1)), where ρ(u) is the Dickman function. For any fixed k, almost all natural numbers will not be k-smooth. If m = m1 · m2, where m1 is B-smooth and m2 is not (or m2 is equal to 1), then m1 is called the B-smooth part of m. The relative size of the x^ε-smooth part of a random integer less than or equal to x is known to decay much more slowly than ε.
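To make the definitions above concrete, here is a small illustrative Python sketch (not from any cited source) that tests B-smoothness by trial division, extracts the B-smooth part of an integer, and counts Ψ(x, y) by brute force for small arguments.

```python
# Brute-force illustrations of B-smoothness, the B-smooth part, and the
# counting function Psi(x, y). Intended only for small inputs.

def smooth_part(m, B):
    """Return the B-smooth part of m (product of all prime-power factors with prime <= B)."""
    part = 1
    p = 2
    while p <= B and p * p <= m:
        while m % p == 0:
            part *= p
            m //= p
        p += 1
    if 1 < m <= B:   # leftover factor that is itself a prime <= B
        part *= m
    return part

def is_smooth(m, B):
    return smooth_part(m, B) == m

def psi(x, y):
    """Psi(x, y): number of y-smooth integers n with 1 <= n <= x."""
    return sum(1 for n in range(1, x + 1) if is_smooth(n, y))

if __name__ == "__main__":
    print(is_smooth(15750, 7), is_smooth(702, 7))   # True, False (matches the examples above)
    print(smooth_part(702, 7))                      # 54 = 2 * 3^3, the 7-smooth part of 702
    print(psi(100, 5))                              # count of 5-smooth (regular) numbers up to 100
```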
Powersmooth numbers Further, m is called n-powersmooth (or n-ultrafriable) if every prime power p^k dividing m satisfies p^k ≤ n. For example, 720 (= 2^4 × 3^2 × 5^1) is 5-smooth but not 5-powersmooth (because there are several prime powers greater than 5, e.g. 3^2 = 9 and 2^4 = 16). It is 16-powersmooth since its greatest prime power factor is 2^4 = 16. The number is also 17-powersmooth, 18-powersmooth, etc. Unlike n-smooth numbers, for any positive integer n there are only finitely many n-powersmooth numbers; in fact, the n-powersmooth numbers are exactly the positive divisors of the least common multiple of 1, 2, 3, …, n. For example, the 9-powersmooth numbers (also the 10-powersmooth numbers) are exactly the positive divisors of 2520. n-smooth and n-powersmooth numbers have applications in number theory, such as in Pollard's p − 1 algorithm and ECM. Such applications are often said to work with "smooth numbers," with no n specified; this means the numbers involved must be n-powersmooth, for some unspecified small number n. As n increases, the performance of the algorithm or method in question degrades rapidly. For example, the Pohlig–Hellman algorithm for computing discrete logarithms has a running time of O(n^(1/2)) for groups of n-smooth order. Smooth over a set A Moreover, m is said to be smooth over a set A if there exists a factorization of m where the factors are powers of elements in A. For example, since 12 = 4 × 3, 12 is smooth over the sets A1 = {4, 3} and A2 = {2, 3}; however, it would not be smooth over the set A3 = {3, 5}, as 12 contains the factor 4 = 2^2, and neither 4 nor 2 are in A3. Note the set A does not have to be a set of prime factors, but it is typically a proper subset of the primes, as seen in the factor base of Dixon's factorization method and the quadratic sieve. Likewise, it is what the general number field sieve uses to build its notion of smoothness, under the homomorphism φ : Z[θ] → Z/nZ. See also Highly composite number Rough number Round number Størmer's theorem Unusual number Notes and references Bibliography G. Tenenbaum, Introduction to analytic and probabilistic number theory, (AMS, 2015) A. Granville, Smooth numbers: Computational number theory and beyond, Proc. of MSRI workshop, 2008 External links The On-Line Encyclopedia of Integer Sequences (OEIS) lists B-smooth numbers for small Bs: 2-smooth numbers: A000079 (2^i) 3-smooth numbers: A003586 (2^i 3^j) 5-smooth numbers: A051037 (2^i 3^j 5^k) 7-smooth numbers: A002473 (2^i 3^j 5^k 7^l) 11-smooth numbers: A051038 (etc...) 13-smooth numbers: A080197 17-smooth numbers: A080681 19-smooth numbers: A080682 23-smooth numbers: A080683 Analytic number theory Integer sequences
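The equivalence noted above, that the n-powersmooth numbers are exactly the divisors of lcm(1, 2, …, n), is easy to check computationally; the following Python sketch (purely illustrative, requiring Python 3.9+ for math.lcm with multiple arguments) tests n-powersmoothness directly and verifies it against the lcm-divisor characterization for a small n.

```python
# Illustrative check that the n-powersmooth numbers are exactly the positive
# divisors of lcm(1, 2, ..., n), for small n. Requires Python 3.9+.
from math import lcm

def prime_power_factors(m):
    """Yield the maximal prime-power divisors p^k of m."""
    p = 2
    while p * p <= m:
        if m % p == 0:
            q = 1
            while m % p == 0:
                q *= p
                m //= p
            yield q
        p += 1
    if m > 1:
        yield m

def is_powersmooth(m, n):
    return all(q <= n for q in prime_power_factors(m))

if __name__ == "__main__":
    n = 9
    L = lcm(*range(1, n + 1))                      # lcm(1..9) = 2520
    divisors = {d for d in range(1, L + 1) if L % d == 0}
    powersmooth = {m for m in range(1, L + 1) if is_powersmooth(m, n)}
    print(L, divisors == powersmooth)              # expected: 2520 True
    print(is_powersmooth(720, 5), is_powersmooth(720, 16))   # False True, as in the text
```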
Smooth number
[ "Mathematics" ]
1,513
[ "Sequences and series", "Analytic number theory", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
1,035,337
https://en.wikipedia.org/wiki/Alfentanil
Alfentanil (R-39209), sold under the brand name Alfenta among others, is a potent but short-acting synthetic opioid analgesic drug used for anesthesia in surgery. It is an analogue of fentanyl with around one-fourth to one-tenth the potency, one-third the duration of action, and an onset of action four times faster than that of fentanyl. Alfentanil has a pKa of approximately 6.5, which leads to a very high proportion of the drug being uncharged at physiologic pH, a characteristic responsible for its rapid-onset. It is an agonist of the μ-opioid receptor. While alfentanil tends to cause fewer cardiovascular complications than other similar drugs such as fentanyl and remifentanil, it tends to give stronger respiratory depression and so requires careful monitoring of breathing and vital signs. Almost exclusively used by anesthesia providers during portions of a case where quick, fast-acting (though not long-lasting) pain control is needed (as, for example, during nerve blocks), alfentanil is administered by the parenteral (injected) route for fast-onset and precise control of dosage. Discovered at Janssen Pharmaceutica in 1976, alfentanil is classified as a Schedule II drug in the United States. Side effects of fentanyl analogs are similar to those of fentanyl itself and include itching, nausea and potentially life-threatening respiratory depression. Fentanyl analogs have killed hundreds of people throughout Europe and the former Soviet republics since the most recent resurgence in use began in Estonia in the early 2000s, and novel derivatives continue to appear. References External links Medline Plus Patient Information - 09/01/2010 https://www.fda.gov/Drugs/DevelopmentApprovalProcess/DevelopmentResources/DrugInteractionsLabeling/ucm093664.htm February 2017 Genf interaction table- https://www.hug.ch/sites/interhug/files/structures/pharmacologie_et_toxicologie_cliniques/carte_cytochromes_2016_final.pdf February 2017 General anesthetics Synthetic opioids Piperidines Tetrazoles Ureas Ethers Lactams Anilides Mu-opioid receptor agonists Janssen Pharmaceutica Belgian inventions Fentanyl
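The claim above about alfentanil's low pKa leading to a large un-ionized fraction can be made quantitative with the standard Henderson–Hasselbalch relationship for a weak base. The short Python sketch below is an illustration using that textbook formula, not a calculation reported in this article; the fentanyl pKa used for comparison is a commonly cited literature value that the article itself does not give.

```python
# Fraction of a weak base that is un-ionized at a given pH, via the
# Henderson-Hasselbalch relationship: [base] / [conjugate acid] = 10^(pH - pKa).
def unionized_fraction(pka, ph=7.4):
    ratio = 10 ** (ph - pka)          # un-ionized / ionized
    return ratio / (1 + ratio)

if __name__ == "__main__":
    # Alfentanil pKa ~6.5 (as stated above); fentanyl pKa ~8.4 (assumed literature value).
    for name, pka in [("alfentanil", 6.5), ("fentanyl", 8.4)]:
        print(f"{name}: ~{unionized_fraction(pka) * 100:.0f}% un-ionized at pH 7.4")
```

Run as written, this gives roughly 89% un-ionized alfentanil versus roughly 9% for fentanyl at physiologic pH, which is consistent with the rapid-onset argument made in the text.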
Alfentanil
[ "Chemistry" ]
518
[ "Organic compounds", "Functional groups", "Ethers", "Ureas" ]
1,035,450
https://en.wikipedia.org/wiki/In%20silico
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation software. The phrase is pseudo-Latin for 'in silicon' (the correct Latin form would be in silicio), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology (especially systems biology). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature. History The earliest known use of the phrase was by Christopher Langton to describe artificial life, in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico, by Pedro Miramontes, a mathematician from the National Autonomous University of Mexico (UNAM), presenting the report "DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation. In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute. The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically. Drug discovery with virtual screening In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expensive lab work and clinical trials. One way to achieve this is by producing and screening drug candidates more effectively. In 2010, for example, using the protein docking algorithm EADock (see Protein-ligand docking), researchers found potential inhibitors to an enzyme associated with cancer activity in silico. Fifty percent of the molecules were later shown to be active inhibitors in vitro. This approach differs from the use of expensive high-throughput screening (HTS) robotic labs to physically test thousands of diverse compounds a day, often with an expected hit rate on the order of 1% or less, with still fewer expected to be real leads following further testing (see drug discovery). As an example, the technique was utilized for a drug repurposing study in order to search for potential cures for COVID-19 (SARS-CoV-2). Cell models Efforts have been made to establish computer models of cellular behavior. For example, in 2007 researchers developed an in silico model of tuberculosis to aid in drug discovery, its prime benefit being simulated growth rates far faster than real time, allowing phenomena of interest to be observed in minutes rather than months. More work can be found that focuses on modeling a particular cellular process, such as the growth cycle of Caulobacter crescentus. These efforts fall far short of an exact, fully predictive computer model of a cell's entire behavior.
Limitations in the understanding of molecular dynamics and cell biology, as well as the absence of available computer processing power, force large simplifying assumptions that constrain the usefulness of present in silico cell models. Genetics Digital genetic sequences obtained from DNA sequencing may be stored in sequence databases, be analyzed (see Sequence analysis), be digitally altered or be used as templates for creating new actual DNA using artificial gene synthesis. Other examples In silico computer-based modeling technologies have also been applied in: Whole cell analysis of prokaryotic and eukaryotic hosts e.g. E. coli, B. subtilis, yeast, CHO- or human cell lines Discovery of potential cure for COVID-19. Bioprocess development and optimization e.g. optimization of product yields Simulation of oncological clinical trials exploiting grid computing infrastructures, such as the European Grid Infrastructure, for improving the performance and effectiveness of the simulations. Analysis, interpretation and visualization of heterologous data sets from various sources e.g. genome, transcriptome or proteome data Validation of taxonomic assignment steps in herbivore metagenomics study. Protein design. One example is RosettaDesign, a software package under development and free for academic use. See also Virtual screening Computational biology Computational biomodeling Computer experiment Folding@home Exscalate4Cov Cellular model Nonclinical studies Organ-on-a-chip In silico molecular design programs In silico medicine Dry lab References External links World Wide Words: In silico CADASTER Seventh Framework Programme project aimed to develop in silico computational methods to minimize experimental tests for REACH Registration, Evaluation, Authorisation and Restriction of Chemicals In Silico Biology. Journal of Biological Systems Modeling and Simulation In Silico Pharmacology Pharmaceutical industry Latin biological phrases Alternatives to animal testing Animal test conditions
In silico
[ "Chemistry", "Biology" ]
1,062
[ "Animal testing", "Pharmacology", "Life sciences industry", "Pharmaceutical industry", "Alternatives to animal testing", "Latin biological phrases", "Animal test conditions" ]
1,035,507
https://en.wikipedia.org/wiki/Radiopharmacology
Radiopharmacology is radiochemistry applied to medicine and thus the pharmacology of radiopharmaceuticals (medicinal radiocompounds, that is, pharmaceutical drugs that are radioactive). Radiopharmaceuticals are used in the field of nuclear medicine as radioactive tracers in medical imaging and in therapy for many diseases (for example, brachytherapy). Many radiopharmaceuticals use technetium-99m (Tc-99m) which has many useful properties as a gamma-emitting tracer nuclide. In the book Technetium a total of 31 different radiopharmaceuticals based on Tc-99m are listed for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood and tumors. The term radioisotope, which in its general sense refers to any radioactive isotope (radionuclide), has historically been used to refer to all radiopharmaceuticals, and this usage remains common. Technically, however, many radiopharmaceuticals incorporate a radioactive tracer atom into a larger pharmaceutically-active molecule, which is localized in the body, after which the radionuclide tracer atom allows it to be easily detected with a gamma camera or similar gamma imaging device. An example is fludeoxyglucose in which fluorine-18 is incorporated into deoxyglucose. Some radioisotopes (for example gallium-67, gallium-68, and radioiodine) are used directly as soluble ionic salts, without further modification. This use relies on the chemical and biological properties of the radioisotope itself, to localize it within the body. History See nuclear medicine. Production Production of a radiopharmaceutical involves two processes: The production of the radionuclide on which the pharmaceutical is based. The preparation and packaging of the complete radiopharmaceutical. Radionuclides used in radiopharmaceuticals are mostly radioactive isotopes of elements with atomic numbers less than that of bismuth, that is, they are radioactive isotopes of elements that also have one or more stable isotopes. These may be roughly divided into two classes: Those with more neutrons in the nucleus than those required for stability are known as proton-deficient, and tend to be most easily produced in a nuclear reactor. The majority of radiopharmaceuticals are based on proton deficient isotopes, with technetium-99m being the most commonly used medical isotope, and therefore nuclear reactors are the prime source of medical radioisotopes. Those with fewer neutrons in the nucleus than those required for stability are known as neutron-deficient, and tend to be most easily produced using a proton accelerator such as a medical cyclotron. Practical use Because radiopharmeuticals require special licenses and handling techniques, they are often kept in local centers for medical radioisotope storage, often known as radiopharmacies. A radiopharmacist may dispense them from there, to local centers where they are handled at the practical medicine facility. Drug nomenclature for radiopharmaceuticals As with other pharmaceutical drugs, there is standardization of the drug nomenclature for radiopharmaceuticals, although various standards coexist. The International Nonproprietary Name (INN) gives the base drug name, followed by the radioisotope (as mass number, no space, element symbol) in parentheses with no superscript, followed by the ligand (if any). It is common to see square brackets and superscript superimposed onto the INN name, because chemical nomenclature (such as IUPAC nomenclature) uses those. 
The United States Pharmacopeia (USP) name gives the base drug name, followed by the radioisotope (as element symbol, space, mass number) with no parentheses, no hyphen, and no superscript, followed by the ligand (if any). The USP style is not the INN style, despite their being described as one and the same in some publications (e.g., AMA, whose style for radiopharmaceuticals matches the USP style). The United States Pharmacopeial Convention is a sponsor organization of the USAN Council, and the USAN for a given drug is often the same as the USP name. See also Radioactive tracer Nuclear medicine References Further reading Notes for guidance on the clinical administration of radiopharmaceuticals and use of sealed radioactive sources. Administration of radioactive substances advisory committee. March 2006. Produced by the Health Protection Agency. Malabsorption. In: The Merck Manual of Geriatrics, chapter 111. Leukoscan summary of product characteristics (Tc99m-Sulesomab). Schwochau, Klaus. Technetium. Wiley-VCH (2000). External links National Isotope Development Center U.S. Government resources for isotopes - production, distribution, and information Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program sponsoring isotope production and production research and development Radiobiology Radiation therapy Medicinal chemistry Medicinal radiochemistry
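As an illustration of the two naming conventions described above, the small Python sketch below formats the same drug/isotope pair in both the INN-like and USP-like styles; it is a toy formatter based only on the patterns stated in this article, not on the official nomenclature rules in full.

```python
# Toy formatter for the two radiopharmaceutical naming patterns described above.
# INN style:  base name, then "(<mass number><element symbol>)" in parentheses.
# USP style:  base name, then "<element symbol> <mass number>" with no parentheses.

def inn_name(base, symbol, mass_number, ligand=""):
    name = f"{base} ({mass_number}{symbol})"
    return f"{name} {ligand}".strip()

def usp_name(base, symbol, mass_number, ligand=""):
    name = f"{base} {symbol} {mass_number}"
    return f"{name} {ligand}".strip()

if __name__ == "__main__":
    # Fludeoxyglucose labelled with fluorine-18, as mentioned earlier in the article.
    print(inn_name("fludeoxyglucose", "F", 18))   # fludeoxyglucose (18F)
    print(usp_name("fludeoxyglucose", "F", 18))   # fludeoxyglucose F 18
```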
Radiopharmacology
[ "Chemistry", "Biology" ]
1,103
[ "Medicinal radiochemistry", "Radiobiology", "Radiopharmaceuticals", "nan", "Medicinal chemistry", "Biochemistry", "Chemicals in medicine", "Radioactivity" ]
1,035,528
https://en.wikipedia.org/wiki/Astrological%20age
An astrological age is a time period which, according to astrology, parallels major changes in the development of human society, culture, history, and politics. There are twelve astrological ages corresponding to the twelve zodiacal signs in western astrology. One cycle of the twelve astrological ages is called a Great Year, comprising 25,772 solar years, at the end of which another cycle begins. Some astrologers believe that during a given age, some events are directly caused or indirectly influenced by the astrological sign associated with that age, while other astrologers believe that astrological ages do not influence events in any way. Astrologers do not agree upon exact dates for the beginning or ending of the ages, with given dates varying by hundreds of years. Overview There are three broad perspectives on the astrological ages: Archeoastronomers do not necessarily believe in astrology as a science, but rather study the cultural traditions of societies that did refer extensively to astrology. Astrologers have been interested in relating world history to the astrological ages since the late 19th century; however, most astrologers study horoscopes, not astrological ages. The pop-culture concept of the Age of Aquarius referring to major societal changes of the 1960s, popularized in the 1967 musical (and subsequent 1979 film) Hair. The following table of astrological ages was compiled by Neil Mann, giving commonly cited durations for each era, as well as developments in human history typically believed to have been influenced by the vernal equinox sign of their age. He notes that the claims of zodiac influences on human history are biased, relying on widely varying dates for events and selective cherry picking of evidence. Contentious aspects Definitive details on the astrological ages are lacking or disputed. The 20th-century British astrologer Charles Carter stated that "It is probable that there is no branch of Astrology upon which more nonsense has been poured forth than the doctrine of the precession of the equinoxes." Neil Spencer in his 2000 book True as the Stars Above expressed a similar opinion about the astrological ages. Spencer characterizes the concept as being "fuzzy", "speculative", and the least-defined area of astrological lore. Derek and Julia Parker state that it is impossible to state the exact date for the start of any astrological age and acknowledge that many astrologers believe the Age of Aquarius has arrived while many say the world is at the end of the Age of Pisces. Ray Grasse states in Signs of the Times that "there is considerable dispute over the exact starting and ending times for the different Great Ages." Paul Wright in The Great Ages and Other Astrological Cycles says that much of the uncertainty related to the astrological ages is because many astrologers have a poor understanding of the meaning of the astrological symbolism and "even poorer historical knowledge". Consensus approach Though so many issues are contentious or disputed, there are two aspects of the astrological ages that have virtually unanimous consensus—firstly, the theorized link of the astrological ages to the axial precession of the Earth and commonly referred to as precession of the equinoxes; secondly, that, due to the nature of the precession of the equinoxes, the progression of the ages proceeds in reverse direction through the zodiacal signs. Ages of equal or variable lengths Astrologers use many ways to divide the Great Year into twelve astrological ages. 
There are two popular methods. One method is to divide the Great Year into twelve astrological ages of approximately equal lengths of around 2156 years per age based on the vernal equinox (also known as vernal point) moving through the sidereal zodiac. Another method is to significantly vary the duration of each astrological age based on the passage of the vernal equinox measured against the actual zodiacal constellations. Each of those twelve sections of the Great Year can be called either an astrological age, Precessional Age, or a Great Month. The method based on the zodiacal constellations has a flaw in that from the reckoning of classical-era astronomer/astrologers like Claudius Ptolemy, many constellations overlap, a problem only eliminated in the past 200 years by the adoption of official constellation boundaries. For example, by 2700 CE the vernal point will have moved into Aquarius, but from a classical-era point of view, the vernal point will also point to Pisces due to the pre-boundary overlap. Age transitions Many astrologers consider the entrance into a new astrological age a gradual transition called a "cusp". For example, Ray Grasse states that an astrological age does not begin at an exact day or year. Paul Wright states that a transition effect does occur at the border of the astrological ages. Consequently, the beginning of any age cannot be defined to a single year or a decade but blend its influences with the previous age for a period of time until the new age can stand in its own right. In Nicholas Campion's The Book of World Horoscopes there are six pages listing researchers and their proposed dates for the start of the Age of Aquarius indicating that many researchers believe that each age commences at an exact date. Other opinions Ages exactly 2,000 years each Many astrologers find ages too erratic based on either the vernal point moving through the randomly sized zodiacal constellations or sidereal zodiac and, instead, round all astrological ages to exactly 2000 years each. In this approach the ages are usually neatly aligned so that the Aries age is found from 2000 BC to AD 1, Pisces age AD 1 to AD 2000, the Aquarian Age AD 2000 – AD 4000, and so on. This approach is inconsistent with the precession of the equinoxes. Based on precession of the equinoxes, there is a one-degree shift approximately every 72 years, so a 30-degree movement requires 2160 years to complete. Ages involving the opposite sign An established school of thought is that an age is also influenced by the sign opposite to the one of the astrological age. Referring back to the precession of the Equinoxes, as the Sun crosses one constellation in the Northern Hemisphere's spring Equinox (21 March), it will cross the opposite sign in the spring Equinox in the Southern Hemisphere (21 September). For instance, the Age of Pisces is complemented by its opposite astrological sign of Virgo (the Virgin); so a few refer to the Piscean age as the 'Age of Pisces-Virgo'. Adopting this approach, the Age of Aquarius would become the Age of Aquarius-Leo. In his writings Ray Grasse also espouses the link between each sign of the zodiac and its opposite sign. History Hipparchus and the discovery of the precession of the equinoxes Hipparchus of Nicaea (c. 190–120 BCE) is often credited with the discovery of the precession of the equinoxes, a fundamental astronomical phenomenon that plays a crucial role in the concept of astrological ages. 
Precession refers to the gradual shift in the orientation of Earth's axis of rotation, which causes the positions of the equinoxes to move slowly westward along the ecliptic, completing a full cycle approximately every 26,000 years. Hipparchus made this discovery while comparing his observations of the positions of stars with records from earlier astronomers, particularly those from Babylon. He noticed that the positions of certain fixed stars had shifted relative to the equinoxes over time, an observation that could not be explained by the prevailing astronomical models of his time. In his work, Hipparchus noted that the position of the vernal equinox had shifted by about 2° relative to the stars over the course of a century, which implied a slow, continuous motion of the celestial sphere. This discovery was groundbreaking because it revealed that the celestial sphere was not as fixed as previously thought. Hipparchus' calculation of the precession rate was remarkably close to the modern value, estimating it at roughly 1° per century, which is only slightly different from the current measurement of approximately 1° every 72 years. Hipparchus' findings were later documented by the Alexandrian astronomer Claudius Ptolemy in his seminal work, the Almagest (c. 150 CE), where he further refined and expanded upon Hipparchus' observations. Ptolemy's Almagest became the standard reference for astronomers for many centuries and solidified the concept of precession in the astronomical canon. The recognition of precession had profound implications for astrology, particularly in the development of the concept of astrological ages. As the equinoxes precess through the zodiac, they mark the beginning and end of these ages, each lasting roughly 2,160 years, based on the 12 zodiacal constellations. The shift from one age to another is thought to bring about significant cultural and spiritual changes, a belief that has influenced astrological thought since antiquity. Post-Hipparchus Trepidation In the early post-Hipparchus period, two schools of thought developed about the slow shift of the fixed sphere of stars as discovered by Hipparchus. One school believed that at 1 degree shift per 100 years, the sphere of fixed stars would return to its starting point after 36,000 years. The trepidation school believed that the fixed stars first moved one way, then moved the other way – similar to a giant pendulum. It was believed that the 'swinging' stars first moved 8 degrees one direction, then reversed this 8 degrees travelling the other direction. Theon of Alexandria in the 4th century AD includes trepidation when he wrote Small Commentary to the Handy Tables. In the 5th century AD, the Greek Neoplatonist philosopher Proclus mentions that both theories were being discussed. The Indians around the 5th century AD preferred the trepidation theory but because they had observed the movement of the fixed stars by 25 degrees since ancient times (since around 1325 BC), they considered that trepidation swung back and forth around 27 degrees. The significant early exponent of the 'circular 36,000' years method was Ptolemy and, due to the status placed upon Ptolemy by later scholars, the Christian and Muslim astronomers of the Middle Ages accepted the Great Year of 36,000 years rather than trepidation. However some scholars gave credence to both theories based on the addition of another sphere which is represented in the Alfonsine tables produced by the Toledo School of Translators in the 12th and 13th centuries. 
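The difference between Hipparchus' figure and the modern rate can be made explicit with a little arithmetic; the Python lines below are a simple illustration, using only the rates quoted in this article, that convert each rate into the implied length of a full precessional cycle.

```python
# Length of a full 360-degree precessional cycle implied by different rates.
def cycle_years(years_per_degree):
    return 360 * years_per_degree

print(cycle_years(100))   # 36000 years, from the ancient value of 1 degree per century
print(cycle_years(72))    # 25920 years, close to the modern figure of about 25,772 years
```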
The Alfonsine tables computed the positions of the sun, moon, and planets relative to the fixed stars. The Italian astronomer Cecco d'Ascoli, professor of astrology at the University of Bologna in the early 14th century, continued to have faith in trepidation but believed it swung 10 degrees in either direction. Copernicus refers to trepidation in De revolutionibus orbium coelestium published in 1543. Mithraism Hipparchus' discovery of precession of the equinoxes may have created the Mithraic Mysteries, colloquially also known as Mithraism, a 1st – 4th century neo-Platonic mystery cult of the Roman god Mithras. The near-total lack of written descriptions or scripture necessitates a reconstruction of beliefs and practices from the archaeological evidence, such as that found in Mithraic temples (in modern times called mithraea), which were real or artificial caves representing the cosmos. Until the 1970s most scholars followed Franz Cumont in identifying Mithras as a continuation of the Persian god Mithra. Cumont's continuity hypothesis led him to believe that the astrological component was a late and unimportant accretion. Cumont's views are no longer followed. Today, the cult and its beliefs are recognized as a synthesis of late-classical Greco-Roman thought, with an astrological component even more astrology-centric than Roman beliefs generally were during the early Roman Empire. The details remain debated. As far as axial precession is concerned, one scholar of Mithraism, David Ulansey, has interpreted Mithras as a personification of the force responsible for precession. He argues that the cult was a religious response to Hipparchus's discovery of precession, which – from the ancient geocentric perspective – amounted to the discovery that the entire cosmos (i.e., the outermost celestial sphere of the fixed stars) was moving in a previously unknown way. Ulansey's analysis is based on the tauroctony: the image of Mithras killing a bull that was placed at the center of every Mithraic temple. In the standard tauroctony, Mithras and the bull are accompanied by a dog, a snake, a raven, a scorpion and two identical young men, with torches. According to Ulansey, the tauroctony is a schematic star chart. The bull is Taurus, a constellation of the zodiac. In the astrological age that preceded the time of Hipparchus, the vernal equinox had taken place when the Sun was in the constellation of Taurus, and during that previous epoch the constellations of Canis Minor (The Dog), Hydra (The Snake), Corvus (The Raven), and Scorpius (The Scorpion) – that is, the constellations that correspond to the animals depicted in the tauroctony – all lay on the celestial equator (the location of which is shifted by the precession) and thus had privileged positions in the sky during that epoch. Mithras himself represents the constellation Perseus, which is located directly above Taurus the Bull: The same location occupied by Mithras in the tauroctony image. Mithras' killing of the Bull, by this reasoning, represented the power possessed by this new god to shift the entire cosmic structure, turning the cosmic sphere so that the location of the spring equinox left the constellation of Taurus (a transition symbolized by the killing of the Bull), and the Dog, Snake, Raven, and Scorpion likewise lost their privileged positions on the celestial equator. 
The iconography also contains two torch-bearing twins (Cautes and Cautopates) framing the bull-slaying image – one holding a torch pointing up and the other a torch pointing down. These torch-bearers are sometimes depicted with one of them (torch up) holding or associated with a Bull and a tree with leaves, and the other (torch down) holding or associated with a Scorpion and a tree with fruit. Ulansey interprets these torch-bearers as representing the spring equinox (torch up, tree with leaves, Bull) and the autumn equinox (torch down, tree with fruit, Scorpion) in Taurus and Scorpius respectively, which is where the equinoxes were located during the preceding "Age of Taurus" symbolized in the tauroctony as a whole. From this, Ulansey concludes that Mithraic iconography was an "astronomical code" whose secret was the existence of a new cosmic divinity, unknown to those outside the cult, whose fundamental attribute was his ability to shift the structure of the entire cosmos, and thereby to control the astrological forces believed at that time to determine human existence. That gave him the power to grant his devotees success during life and salvation after death (i.e., a safe journey through the planetary spheres and a subsequent immortal existence in the sphere of the stars). Rate of precession Though the one degree per hundred years calculated for precession of the equinoxes as defined by Hipparchus and promulgated by Ptolemy was too slow, another rate of precession that was too fast also gained popularity in the 1st millennium AD. By the fourth century AD, Theon of Alexandria assumed a changing rate (trepidation) of one degree per 66 years. The tables of the Shah (Zij-i Shah) originate in the sixth century, but are lost, but many later Arabic and Persian astronomers and astrologers refer to them and also use this value. These later astronomers-astrologers or sources include: Al-Khwarizmi, Zij al Sindhind or "Star Tables Based on the Indian Calculation Method"(c. 800); "Tabulae probatae" or "az-Zig al-mumtan" (c. 830); Al-Battani, Albategnius, al-Zij (c. 880); and al-Sufi, Azophi (c. 965); Al Biruni (973–1048), "al Canon al Masud" or "The Masʿūdic Canon"; Arabic fixed star catalogue of 1 October 1112 (ed. Paul Kunitzsch); and "Libros del Saber de Astronomía" by Alfonso X of Castile (1252–1284). Anno Domini There exists evidence that the modern calendar developed by Dionysius Exiguus in the 6th century AD commencing with the birth of Jesus Christ at AD 1 was influenced by precession of the equinoxes and astrological ages. Dionysius' desire to replace Diocletian years (Diocletian persecuted Christians) with a calendar based on the incarnation of Christ was to prevent people from believing the imminent end of the world. At the time it was believed that the Resurrection and end of the world would occur 500 years after the birth of Jesus. The current Anno Mundi calendar theoretically commenced with the creation of the world based on information in the Old Testament. It was believed that based on the Anno Mundi calendar Jesus was born in the year 5500 (or 5500 years after the world was created) with the year 6000 of the Anno Mundi calendar marking the end of the world. Anno Mundi 6000 (approximately AD 500) was thus equated with the Second Coming of Christ and the end of the world. Since this date had already passed in the time of Dionysius, he therefore searched for a new end of the world at a later date. 
He was heavily influenced by ancient cosmology, in particular the doctrine of the Great Year that places a strong emphasis on planetary conjunctions. This doctrine says that when all the planets were in conjunction that this cosmic event would mark the end of the world. Dionysius accurately calculated that this conjunction would occur in May AD 2000. Dionysius then applied another astronomical timing mechanism based on precession of the equinoxes. Though incorrect, some oriental astronomers at the time believed that the precessional cycle was 24,000 years which included twelve astrological ages of 2,000 years each. Dionysius believed that if the planetary alignment marked the end of an age (i.e. the Pisces age), then the birth of Jesus Christ marked the beginning of the Age of Pisces 2,000 years earlier. He therefore deducted 2,000 years from the May 2000 conjunction to produce AD 1 for the incarnation of Christ. Mashallah ibn Athari The renowned Persian Jewish astronomer and astrologer Masha'Allah (c.740 – 815 CE) employed precession of the equinoxes for calculating the period "Era of the Flood" dated as 3360 BCE or 259 years before the Indian Kali Yuga, believed to have commenced in 3101 BCE. Giovanni Pico della Mirandola The 15th century Italian Renaissance philosopher Giovanni Pico della Mirandola published a massive attack on astrological predictions, but he did not object to all of astrology and he commented on the position of the vernal point in his day. Pico was aware of the effects of precession of the equinoxes and knew that the first point of Aries no longer existed in the constellation of Aries. Pico not only knew that the vernal point had shifted back into Pisces, he stated that in his time, the vernal point (zero degrees tropical Aries) was located at 2 degrees (sidereal) Pisces. This suggests that by whatever method of calculation he was employing, Pico expected the vernal point to shift into (sidereal) Aquarius age 144 years later as a one degree shift takes 72 years. Isaac Newton Isaac Newton (1642 – 1726–27 ) determined the cause of precession and established the rate of precession at 1 degree per 72 years, very close to the best value measured today, thus demonstrating the magnitude of the error in the earlier value of 1 degree per century. Calculation aspects The Earth, in addition to its diurnal (daily) rotation upon its axis and annual rotation around the Sun, incurs a precessional motion involving a slow periodic shift of the axis itself: approximately one degree every 72 years. This motion, which is caused mostly by the Moon's gravity, gives rise to the precession of the equinoxes in which the Sun's position on the ecliptic at the time of the vernal equinox, measured against the background of fixed stars, gradually changes with time. In graphical terms, the Earth behaves like a spinning top, and tops tend to wobble as they spin. The spin of the Earth is its daily (diurnal) rotation. The spinning Earth slowly wobbles over a period slightly less than 26,000 years. From our perspective on Earth, the stars are ever so slightly 'moving' from west to east at the rate of one degree approximately every 72 years. One degree is about twice the diameter of the Sun or Moon as viewed from Earth. The easiest way to notice this slow movement of the stars is at any fixed time each year. The most common fixed time is at the vernal equinox around 21 March each year. 
In astrology, an astrological age has usually been defined by the constellation or superimposed sidereal zodiac in which the Sun actually appears at the vernal equinox. This is the method that Hipparchus appears to have applied around 127 BC when he calculated precession. Since each sign of the zodiac is composed of 30 degrees, each astrological age might be thought to last about 72 (years) × 30 (degrees) = about 2160 years. This means the Sun crosses the equator at the vernal equinox moving backward against the fixed stars from one year to the next at the rate of one degree in seventy-two years, one constellation (on average) in about 2148 years, and the whole twelve signs in about 25,772 years, sometimes called a Platonic Year. However the length of the ages are decreasing with time as the rate of precession is increasing. Therefore, no two ages are of equal length. First point of Aries alignment – the fiducial point Approximately every 26,000 years the zodiacal constellations, the associated sidereal zodiac, and the tropical zodiac used by western astrologers basically align. Technically this is when the tropical and sidereal "first point in Aries" (Aries 0°) coincided. This alignment is often called the fiducial point and, if the fiducial point could be found, fairly exact timeframes of all the astrological ages could be accurately determined if the method used to determine the astrological ages is based on the equal-sized 30 degrees per age and do not correspond to the exact constellation configuration in the sky. However this fiducial point is difficult to determine because while there is no ambiguity about the tropical zodiac used by western astrologers, the same cannot be said of the sidereal zodiac used by Vedic astrologers. Vedic astrologers do not have unanimity on the exact location in space of their sidereal zodiac. This is because the sidereal zodiac is superimposed upon the irregular zodiacal constellation, and there are no unambiguous boundaries of the zodiacal constellations. Modern day astronomers have defined boundaries, but this is a recent development by astronomers who are divorced from astrology, and cannot be assumed to be correct from the astrological perspective. While most astronomers and some astrologers agree that the fiducial point occurred in or around the 3rd to 5th centuries AD, there is no consensus on any exact date or tight timeframe within these three centuries. A number of dates are proposed by various astronomers and even wider timeframes by astrologers. (For an alternative approach to calibrating precession, see Alternative approach to calibrating precession in New, alternative, and fringe theories section below). As an example of a mystic contemporary approach to precession, in Max Heindel's astrology writings, it is described, that last time the starting-point of the sidereal zodiac agreed with the tropical zodiac occurred in AD 498. A year after these points were in exact agreement, the Sun crossed the equator about fifty seconds of space into the constellation Pisces. The year following it was one minute and forty seconds into Pisces, and so it has been creeping backward ever since, until at the present time the Sun crosses the equator in about nine degrees in the constellation Pisces. Based on this approach, it will thus be about 600 years before it actually crosses the celestial equator in the constellation Aquarius. However this is only one of many approaches and so this must remain speculation at this point of time. 
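As a worked check of the figures in this section, the following Python sketch (illustrative only) derives the average length of an astrological age and of the full cycle from the quoted rate of one degree per 72 years, and also shows the slightly faster average rate implied by the 25,772-year Platonic Year figure.

```python
# Working through the precession arithmetic quoted in the text above.
YEARS_PER_DEGREE = 72        # approximate rate quoted in this section
DEGREES_PER_SIGN = 30
SIGNS = 12

age_years = YEARS_PER_DEGREE * DEGREES_PER_SIGN
print(age_years)             # 2160 years per (average) age
print(age_years * SIGNS)     # 25920 years for the full cycle
# Conversely, the 25,772-year Platonic Year implies a slightly faster rate:
print(25772 / 360)           # about 71.6 years per degree
print(25772 / SIGNS)         # about 2148 years per average age
```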
Present and future ages Age of Pisces (Piscean Age) Symbol for Pisces: ♓ When the March equinox occurs in Pisces. Timeframes Zodiacal 30 degrees: Neil Mann interpretation: began c. AD 1 and ends c. AD 2150 Heindel-Rosicrucian interpretation: began c. AD 498 and ends c. AD 2654 Age of Aquarius (Aquarian Age) Symbol for Aquarius: ♒ When the March equinox occurs in Aquarius. Timeframes In 1928, at the Conference of the International Astronomical Union (IAU) in Leiden, the Netherlands, the boundaries of the 88 official constellations were defined in astronomical terms. The boundary established between Pisces and Aquarius places the beginning of the Aquarian Age around the year 2600. The Austrian astronomer Hermann Haupt examined the question of when the Age of Aquarius begins in an article published in 1992 by the Austrian Academy of Science, under a German title translating as "The Start of the Aquarian Age, an Astronomical Question?". Based on the boundaries accepted by the IAU in 1928, Haupt's article investigates the start of the Age of Aquarius by calculating the entry of the spring equinox point over the parallel circle (d = −4°) between the constellations Pisces and Aquarius and reaches, using the usual formula of precession (Gliese, 1982), the year 2595. However, Haupt concludes that the beginning of the Age of Aquarius cannot be sharply determined by astronomical means. Zodiacal 30 degrees: Neil Mann interpretation: begins AD 2150. Dane Rudhyar's interpretation states that the Age of Aquarius will begin in AD 2062. Nicholas Campion, in The Book of World Horoscopes, indicates that he has collected over 90 dates provided by researchers for the start of the Age of Aquarius; these dates span more than 2,000 years, commencing in the 15th century AD and ranging from 1447 to 3621. Constellation boundary year: Hermann Haupt interpretation: begins c. AD 2595. Astrological predictions There is an expectation that the Aquarian Age will usher in a period of group consciousness. Marcia Moore and Mark Douglas write that the artificial lighting of the earth by electricity is a sign of the Age of Aquarius. Sub-periods of ages Many research astrologers believe that the astrological ages can be divided into smaller sections along the lines of 'wheels within wheels'. The most common method is to divide each astrological age into twelve sub-periods. There are two common ways of undertaking this process and two ways of applying these sub-periods. Furthermore, some astrologers divide the ages in different ways. For example, David Williams employs a decanate sub-division whereby each age is divided into three equal sections. Aries to Pisces sub-periods The most popular method of sub-dividing astrological ages is to divide each age equally into twelve sub-periods, with the first sub-period Aries, followed by Taurus, Gemini, and so on, until the last sub-division, Pisces. Charles Carter was an early advocate of this approach. Technically this approach is based on the twelfth harmonic of the zodiacal signs. Dwadasamsa sub-periods The alternative approach is to apply a method commonly used in Vedic astrology but with long antecedents also in Western astrology. This method also divides each astrological age into twelve sub-periods, but the first sub-period for each sign is the same as the sign itself, followed by the remaining sub-periods in natural order. For example, the twelve dwadasamsa of Aquarius are Aquarius, Pisces, Aries, Taurus, and so on, until the last dwadasamsa – Capricorn. 
Technically this approach is based on attributes of both the twelfth and thirteenth harmonics of the zodiacal signs and can be considered to sit halfway between the 12th and 13th harmonics. Sub-period direction (forward or retrograde?) There are two ways of applying the above sub-periods to the astrological ages. Natural Order – The most common way is to arrange the sub-periods so that they go forward in the natural order. Therefore, if the Aries to Pisces method is adopted, for example in the Aquarian Age, the first sub-period is Aries, followed by Taurus, Gemini, and so on, until the last sub-division – Pisces. This is the approach taken by Charles Carter. If the dwadasamsa sub-period method is adopted, the sub-periods also progress in the natural order of the signs. For example, the twelve dwadasamsa of Aquarius are Aquarius, Pisces, Aries, Taurus, and so on, until the last dwadasamsa – Capricorn. Geometric Order (Retrograde) – The other approach is to arrange the sub-periods geometrically and reverse their direction in line with the retrograde order of the astrological ages. For example, if applying the Aries to Pisces method, the first sub-period of any astrological age is Pisces, followed by Aquarius, Capricorn, and so on, until the last sub-period – Aries. Charles Carter indicated there was some merit to this approach. If applying the dwadasamsa sub-period system geometrically, for example, the first sub-period in the Aquarian Age is Capricorn, followed by Sagittarius, Scorpio, and so on, until the last sub-period – Aquarius. This approach is adopted by Terry MacKinnell and Patrizia Norelli-Bachelet, and David Williams applied his decanate (threefold division) geometrically, thus supporting this approach. New, alternative, and fringe theories Due to the lack of consensus on almost all aspects of the astrological ages, except for their relationship to precession of the equinoxes and the retrograde order of the ages, there are alternative, esoteric, innovative, fringe, and newly expressed ideas about the astrological ages which have not established credibility in the wider astrological community or among archaeoastronomers. Alternative approach to calibrating precession Terry MacKinnell has developed an alternative approach to calibrating precession of the equinoxes to determine the astrological age. His major point of departure from the traditional modern approach is how he applies the vernal equinox to the zodiacal constellations. Instead of referring to the position of the Sun at the vernal equinox (a 'modern' mathematical technique developed by the Greeks in the late 1st millennium BC), he refers to the heliacal rising constellation on the day of the vernal equinox. This approach is based on the ancient approach to astronomical observation (from the same ancient period that also saw the invention of the zodiacal constellations), prior to the development of mathematical astronomy by the ancient Greeks in the 1st millennium BC. All ancient astronomical observations were based on visual techniques. Of the key techniques used in ancient times, the most common in Babylon (most likely the source of astrology) and most other ancient cultures were based on phenomena that occurred close to the eastern or western horizon. The heliacal rising constellation at the vernal equinox is the last zodiacal constellation to rise above the eastern horizon just before dawn, before the light of the approaching Sun obliterates the stars on the eastern horizon. 
Currently, the constellation of Aquarius has been the heliacal rising constellation at the vernal equinox for some centuries. The stars disappear about one hour before dawn, depending upon magnitude, latitude, and date. This one hour represents a difference of approximately 15 degrees compared to the contemporary method based on the position of the Sun among the zodiacal constellations. Each age is composed of 30 degrees; therefore, 15 degrees represents about half an age, or about 1,080 years. Based on the heliacal rising method, the Age of Aquarius arrived about 1,080 years earlier than under the modern system. John H. Rogers, in part one of his paper Origins of the ancient constellations, also states that using the ancient heliacal rising method, compared to the (modern) solar method, produces a result that is approximately 1,000 years in advance. Using this approach, the astrological ages arrive about half an age earlier than under the common contemporary approach to calibrating precession based on modern mathematical techniques. Thus, MacKinnell has the Aquarian Age arriving in the 15th century, while most astrologers have the Age of Aquarius arriving in the 27th century, almost 700 years in the future. See also References Citations Works cited Further reading Precession
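As a rough check on the figures in this section, the offset implied by the heliacal rising method can be worked out directly: the stars fade roughly one hour before dawn, one hour of the Earth's rotation corresponds to about 15 degrees, and each degree of precession takes about 72 years. The Python snippet below only illustrates that arithmetic, using the approximate values given in the text.
HOURS_BEFORE_DAWN = 1                      # stars fade roughly an hour before sunrise
DEGREES_PER_HOUR = 360 / 24                # Earth's rotation: 15 degrees per hour
YEARS_PER_DEGREE = 72                      # approximate precession rate
offset_degrees = HOURS_BEFORE_DAWN * DEGREES_PER_HOUR   # about 15 degrees
offset_years = offset_degrees * YEARS_PER_DEGREE        # about 1080 years
print(offset_degrees, offset_years)        # 15.0 1080.0  (roughly half a 2160-year age)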
Astrological age
[ "Physics" ]
7,086
[ "Astrological ages", "Physical quantities", "Time", "Precession", "Spacetime", "Wikipedia categories named after physical quantities" ]
1,035,534
https://en.wikipedia.org/wiki/Domain%20Technologie%20Control
Domain Technologie Control (DTC) was a web hosting control panel aimed at providing a graphics-oriented interface for managing commercial hosting of web servers, intended for shared web hosting servers, virtual private servers (VPSes), and dedicated servers. Domain Technologie Control was free software released under the GNU LGPL v2.1 license. It was fully skinnable and translated into several languages. Domain Technologie Control allowed the administrator to create web hosting plans that provided email and FTP accounts, domain purchasing, subdomains, SSH, and MySQL databases to end users, under configurable quotas, for the web sites those users owned. DTC also automated billing, generated backup scripts, and monitored traffic (per user and per service) using a single system UID or GID. Also integrated into DTC were a support ticket system and customizable HTTP error pages. DTC itself managed its own MySQL database to store its setup configuration, web hosting plans, and users. It supported many other free software packages: MySQL, Apache, PHP, qmail, Postfix, Courier, Dovecot, ProFTPd, Webalizer, mod-log-sql, and more. It also connected to dtc-xen to manage and monitor the usage of virtual private servers (VPS). DTC was fully open source (LGPL). DTC was also the first web hosting control panel to be included in major distributions such as Debian (since Lenny in 2009), Ubuntu (since 2008) and FreeBSD. See also Comparison of web hosting control panels Footnotes and References External links http://www.gplhost.com/about-us/our-gpl-software/dtc-domain-technologie-control.html Unix Internet software Web server management software Software using the GNU Lesser General Public License
Domain Technologie Control
[ "Technology" ]
398
[ "Computing stubs", "World Wide Web stubs" ]
1,035,562
https://en.wikipedia.org/wiki/Radio%20scanner
A scanner (also referred to as a radio scanner) is a radio receiver that can automatically tune, or scan, two or more discrete frequencies, stopping when it finds a signal on one of them and then continuing to scan other frequencies when the initial transmission ceases. The term scanner generally refers to a communications receiver that is primarily intended for monitoring VHF and UHF landmobile radio systems, as opposed to, for instance, a receiver used to monitor international shortwave transmissions, although these may be classified as scanners too. More often than not, these scanners can also tune to different types of modulation (AM, FM, SSB, etc.). Early scanners were slow, bulky, and expensive. Today, modern microprocessors have enabled scanners to store thousands of channels and monitor hundreds of channels per second. Recent models can follow trunked radio systems and decode APCO-P25 digital transmissions. Both hand-held and desktop models are available. Scanners are often used to monitor police, fire and emergency medical services. Radio scanning also serves an important role in the fields of journalism and crime investigation, as well as a hobby for many people around the world. History and use Scanners developed from earlier tunable and fixed-frequency radios that received one frequency at a time. Non-broadcast radio systems, such as those used by public safety agencies, do not transmit continuously. With a radio fixed on a single frequency, much time could pass between transmissions, while other frequencies might be active. A scanning radio will sequentially monitor multiple programmed channels, or sweep between user-defined frequency limits in user-defined frequency steps. The scanner will stop on an active frequency strong enough to break the radio's squelch setting and resume scanning other frequencies when that activity ceases (a simple sketch of this scan-and-hold behaviour appears at the end of this article). Scanners first became popular and widely available at the height of CB radio's popularity in the 1970s. The first scanners often had between four and ten channels and required the purchase of a separate crystal for each frequency received. A US patent was issued to Peter W. Pflasterer on June 1, 1976. An early 1976 US entry was the Tennelec MCP-1, sold at the January 1976 Consumer Electronics Show in Chicago. Features Many recent models allow scanning for the specific DCS or CTCSS code used on a given frequency should it have multiple users. Memory banks are also common. For example, one memory bank can be assigned to air traffic control, another to local marine communications, and yet another to local police frequencies. These can be switched on and off depending on the user's preference. Most scanners also have a weather radio band, allowing the listener to tune into weather radio broadcasts from a NOAA transmitter. Some scanners are equipped with fire tone-out, which decodes Quick Call-type tones and acts as a pager when the correct sequence of tones is detected. Modern scanners allow hundreds or thousands of frequencies to be entered via a keypad and stored in various "memory banks", and can scan at a rapid rate for activity thanks to modern microprocessors. Active frequencies can be found by searching the internet and frequency reference books, or can be discovered through a programmable scanner's search function. Antenna modifications may also be used. 
For example, an external antenna for a desktop scanner, or an extendable antenna for a hand-held unit, will provide greater performance than the original-equipment "stock" antennas supplied by manufacturers. Uses Scanners are often used by hobbyists, railfans, aviation enthusiasts, auto race fans, siren enthusiasts, off-duty emergency services personnel, and reporters. Many scanner clubs exist to allow members to share information about frequencies, codes, and operations. Many have an internet presence, such as websites, email lists or web forums. Legislation Australia It is legal to possess a scanner in Australia and to listen to any transmission that is not classified as telecommunication (i.e. anything not connected to the telephone network). Phone app police scanners are also legal. Austria Possession of a radio scanner is legal. However, article 93 of the Telekommunikationsgesetz prohibits the intentional reception of signals by third parties without authorization from the user. Brazil In Brazil, it is legal to have a scanner, but the user is required to have an amateur radio license. Individuals are prohibited from spreading or recording any information obtained through scanning. Belgium In Belgium it is legal to possess and listen to a scanner, although only on frequencies that the listener has permission to monitor. Without such permission, it is only permitted to listen to amateur (ham) radio or other so-called 'free to listen' channels. Canada In Canada, according to the Radiocommunication Act, it is completely legal to install, operate or possess a radio apparatus that is capable only of the reception of broadcasting (digital and analogue, but not encrypted data), provided that private information is not passed on or disclosed to any other person or party. An incident that occurred in the Toronto area on 28 June 2011 involving York Regional Police officer Constable Garrett Styles was picked up by scanners, and online streams of the communications between the fatally injured officer and police dispatch were monitored by local media. The tragedy was widely reported before the officer's family was notified, and several media outlets rebroadcast the recorded emergency transmission. A police initiative pressuring the government to create legislation to stop online streaming of scanner-captured police communications was announced in April 2012. Although it is currently legal to stream information from a scanner in Canada, using the information for profit is not legal. Some Canadian police forces use encrypted communications, which cannot legally be decrypted and streamed onto the Internet. Applications are available permitting anyone with an Internet-ready computer or smartphone to access scanner communications that are streamed onto the Internet by private individuals who possess the appropriate scanner and computer equipment. 
This prohibition was previously included in the Telekommunikationsgesetz, but was moved to the TTSDG as a part of the German telecom law reform in 2021. Until 2016, the Telekommunikationsgesetz only prohibited the act of listening to other classes of transmissions. This was broadened as a response to a decision of the Cologne Administrative Court, which in 2008 questioned whether the mere reception and decoding of aircraft transponder signals to display aircraft movements on a screen could be considered listening, as it lacks an acoustic element. This updated wording was carried over to the TTDSG in 2021. Ireland Unlicensed possession of a wireless telegraphy apparatus is generally prohibited under Section 3 of the Wireless Telegraphy Act 1926, subject to exemptions. One such exemption covers most apparatuses only capable of reception, including radio scanners. Moreover, Section 11(2) of the Act states that "no person shall improperly divulge the purport of any message, communication, or signal sent or proposed to be sent by wireless telegraphy." The aforementioned exemption echoes this wording as a condition of use of covered receive-only apparatuses. No further information regarding the scope of this prohibition is provided. The Airport Bye-Laws for the Cork Airport and the Dublin Airport specifically ban monitoring air traffic control or airport or airline operational frequencies with radio receiving or recording equipment. Italy Owning a scanner that is able to intercept the frequencies of law enforcement is illegal and carries a jail sentence from eighteen months to five years, as per Article 617 of the Civil Penal Code. Japan It is legal to possess, install and operate a scanner in Japan. The radio law prohibits from disclosing or passing on information received to other persons and using the information to gain personal profit. It is illegal to listen to telephone communication and those transmitted using tapping devices. An amateur radio license is required when amateur radio apparatus is used to listen to radio. Mexico In Mexico, it is legal to have an unblocked scanner and listen to any radio spectrum frequencies, including encrypted and cellular band. According to the Federal Law of General Ways of Communication, individuals are prohibited from spreading any information obtained via a scanner. Netherlands In the Netherlands, it is legal to listen to any radio spectrum frequency because of the "freedom of information"-doctrine. However, if a "special" (i.e., unusual) effort is needed to intercept the information on a frequency (such as decrypting encrypted traffic or using an unauthorized or bootleg radio), then it is considered illegal. In 2008, the Dutch Supreme Court ruled that receivers that can solely be used to detect certain frequencies (such as radar detectors) are illegal because they cannot be used to "convey knowledge or thoughts" and thus are not covered by the aforementioned doctrine. New Zealand In New Zealand, according to section 133A of the Radiocommunications Act of 1989, it is legal to possess and use a scanner at any time to tune to any private voice radio (but not encrypted data), provided that private information is not passed on or disclosed to any other person(s) or party(s). Switzerland Possession of a radio scanner is legal in Switzerland. However, it may only be used to listen to public radio traffic such as CB radio and amateur radio, as well as airband frequencies. 
United Kingdom In the United Kingdom, it is not illegal to own or use a scanner except in particular circumstances. For example, certain transmissions or frequencies may only be listened to with authorization; examples are UK aviation frequencies and police radio, which in many other countries may be publicly listened to (and are even available to be streamed online) but in the UK are restricted. Many emergency services have now switched to digital encrypted radio systems, so that it is more difficult for the general public to listen to them. United States The legality of radio scanners in the United States varies considerably between jurisdictions, although it is a federal crime to monitor encrypted cellular phone calls. Five U.S. states restrict the use of a scanner in an automobile. Although scanners capable of following trunked radio systems and demodulating some digital radio systems such as APCO Project 25 are available, decryption-capable scanners would be a violation of United States law and possibly laws of other countries. A law passed by the Congress of the United States, under pressure from cellular telephone interests, prohibited scanners sold after a certain date from receiving frequencies allocated to the Cellular Radio Service. The law was later amended to make it illegal to modify radios to receive those frequencies, and also to sell radios that could be easily modified to do so. This law remains in effect even though no cellular subscribers still use analog technology. There are Canadian and European unblocked versions available, but these are illegal to import into the U.S. Frequencies used by early cordless phones at 43.720–44.480 MHz, 46.610–46.930 MHz, and 902.000–906.000 MHz can still be picked up by many commercially available scanners, however. The proliferation of scanners led most cordless phone manufacturers to produce cordless handsets operating on a more secure 2.4 GHz system using spread-spectrum technology. Certain states in the United States, such as New York and Florida, prohibit the use of scanners in a vehicle unless the operator has a radio license issued by the Federal Communications Commission (FCC) (amateur radio, etc.) or the operator's job requires the use of a scanner in a vehicle (e.g., police, fire, utilities). Many scanner user manuals include a warning that, while it is legal to listen to almost every transmission a scanner can receive, there are some transmissions that should not intentionally be listened to under the Electronic Communications Privacy Act (such as telephone conversations, pager transmissions, or any scrambled or encrypted transmissions), and that modifications to receive them are illegal. In some parts of the United States, there are extra penalties for the possession of a scanner during a crime, and some states, such as Michigan, also prohibit the possession of a scanner by a person who has been convicted of a felony in the last five years. It is illegal to use police scanners while driving in Florida, Indiana, Kentucky, New York, and Minnesota. It is also illegal to use police scanners in furtherance of a crime in California, New Jersey, Michigan, Oklahoma, Rhode Island, South Dakota, Vermont, Virginia, Nebraska and West Virginia. Many people, including siren enthusiasts and aviation enthusiasts, use scanner audio or footage and post it online or live-stream it. 
Older members of these groups (mainly siren enthusiasts) have said that putting siren activation tones in videos is either illegal or dangerous. Their reasoning is that in 2017 a very large siren system in Dallas, Texas was hacked and all of the sirens in Dallas County went off in the middle of the night. According to some siren enthusiasts, the hack was carried out using a two-way radio and an online video containing activation tones from Dallas County's dispatch center. The hacker transmitted the tones from the video over the dispatch frequency, which set off all of the sirens in Dallas. Similar hacks occurred in cities such as Cincinnati, Ohio, and Milwaukee, Wisconsin. After this, many siren enthusiasts stopped putting activation tones in videos so that they could not be used maliciously, and considerable argument arose within the siren community following these hacks. Some enthusiasts began altering or pitch-shifting tones so that they do not sound like the real activation tones; others still include them but add a disclaimer in the video description stating they will not be held responsible for misuse of the activation tones. Activation tones are included in videos in the first place to alert enthusiasts that the siren in question is about to go off. With this in mind, some sources address the question of putting scanner audio (including tones) in videos. Section 705 of the Communications Act states that: "No person not being authorized by the sender shall intercept any radio communication and divulge or publish the existence, contents, substance, purport, effect, or meaning of such intercepted communication to any person." 47 U.S.C. § 605(a). The penalties for violating this section are severe: a fine of not more than $2,000, imprisonment, or both, or, where such violation is willful and "for purposes of direct or indirect commercial advantage or private financial gain", a fine of up to $50,000 and imprisonment of not more than two years for the first such conviction, and up to $100,000 and five years for subsequent convictions. In addition, the statute provides a private civil remedy to any person aggrieved by a violation of this section. The FCC regulations implementing this section more specifically provide that messages originated by "privately-owned non-broadcast stations . . . may be broadcast only upon receipt of prior permission from the non-broadcast licensee." Some people have read this as meaning that putting scanner broadcasts online is illegal. This is not the case, because the restriction concerns re-broadcasting over the air rather than publishing recordings: it is still legal to put scanner audio in videos, but not to re-broadcast it over the frequency in question. Since most police, fire, EMS, and public safety frequencies are public and publicly listed in the FCC database, audio from these frequencies can be included in videos regardless of its content. In the United States, amateur radio operators holding a valid FCC license may possess amateur radio transceivers capable of reception beyond the amateur radio bands, per an FCC Memorandum & Order known as FCC Docket PR91-36 (also known as FCC 93-410). See also Dispatcher Police radio Communications receiver Uniden References External links Intro to the police or radio scanner at YouTube (2014) Police Scanner Radio Resources & Learning Center Are Police Scanners Legal in the US? Radio Reference Website Radio hobbies Receiver (radio)
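The scan-and-hold behaviour described in the History and use section above can be pictured as a simple polling loop. The Python sketch below is purely illustrative: the channel list, the squelch threshold, and the read_signal_strength function are made-up stand-ins rather than any real scanner's API, and the signal readings are simulated instead of being taken from hardware.
import itertools
import random
import time

CHANNELS_MHZ = [154.250, 155.475, 460.125, 462.950]   # hypothetical programmed channels
SQUELCH_DBM = -110.0                                   # signals weaker than this stay muted

def read_signal_strength(freq_mhz):
    """Stand-in for the receiver front end: returns a simulated RSSI in dBm."""
    return random.uniform(-130.0, -90.0)

def scan(channels, sweeps=3, dwell_s=0.01):
    """Step through the channel list, pausing on any signal that breaks squelch."""
    for freq in itertools.islice(itertools.cycle(channels), sweeps * len(channels)):
        if read_signal_strength(freq) > SQUELCH_DBM:
            print(f"activity on {freq:.3f} MHz - holding")
            while read_signal_strength(freq) > SQUELCH_DBM:
                time.sleep(dwell_s)        # hold until the transmission ceases
        # otherwise move straight on to the next channel

scan(CHANNELS_MHZ)
Real scanners implement this loop in firmware and add features such as priority channels and trunking, but the stop-on-signal, resume-on-silence logic is the same.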
Radio scanner
[ "Engineering" ]
3,411
[ "Radio electronics", "Receiver (radio)" ]
1,035,597
https://en.wikipedia.org/wiki/Testbed
A testbed (also spelled test bed) is a platform for conducting rigorous, transparent, and replicable testing of scientific theories, computing tools, and new technologies. The term is used across many disciplines to describe experimental research and new product development platforms and environments. They may range from hands-on prototype development in manufacturing industries, such as automobiles (where prototypes are known as "mules") and aircraft engines or systems, to intellectual property refinement in fields such as computer software development, shielded from the hazards of testing live. Software development In software development, testbedding is a method of testing a particular module (function, class, or library) in an isolated fashion. It may be used as a proof of concept or when a new module is tested apart from the program or system it will later be added to. A skeleton framework is implemented around the module so that the module behaves as if it were already part of the larger program (a minimal sketch of this pattern is given below). A typical testbed could include software, hardware, and networking components. In software development, the specified hardware and software environment can be set up as a testbed for the application under test. In this context, a testbed is also known as the test environment, made up of: Testing hardware (test bench, optical table, custom testing rig, dummy equipment that simulates an actual product or its counterpart, and external environmental equipment such as showers, heaters, fans, vacuum chambers, and anechoic chambers). Computing equipment (processing units, data centers, in-line FPGAs, environment simulation equipment). Testing software (DAQ/oscilloscopes, visualisation and testing software, environment software to feed dummy equipment with data). Testbeds are also pages on the Internet where the public can test CSS or HTML they have created and preview the results. For example: The Arena web browser was created by the World Wide Web Consortium (W3C) and CERN for testing HTML3, Cascading Style Sheets (CSS), Portable Network Graphics (PNG) and the libwww library. The Line Mode Browser gained a new role as a sample and test application interacting with the libwww library. The libwww library was also created to test network communication protocols under development and to experiment with new protocols. Aircraft development Testbeds are also used in aircraft development; for example, new aircraft engines are fitted to a testbed aircraft for flight testing. Such usage of testbeds was originally pioneered by Rolls-Royce in its development of jet engines. See also Iron bird (aviation) References External links PlanetLab Europe, the European portion of the publicly available PlanetLab testbed CMU's eRulemaking Testbed US National Science Foundation GENI - Global Environment for Network Innovations Initiative Helsinki Testbed (meteorology) Collaborative Adaptive Sensing of the Atmosphere (CASA) IP1 test bed Hardware testing Software testing
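As a minimal illustration of the software testbedding described above, in which one module is exercised in isolation inside a skeleton framework, the Python sketch below wraps a small, made-up parsing function in a unit-test harness, with a fake sensor standing in for the eventual hardware. The function, the fake sensor, and the readings are hypothetical examples, not part of any real system.
import unittest

def parse_temperature(reading):
    """Module under test: convert a sensor string such as '21.5C' or '212F' to Celsius."""
    value, unit = reading[:-1], reading[-1].upper()
    celsius = float(value)
    return celsius if unit == "C" else (celsius - 32.0) * 5.0 / 9.0

class FakeSensor:
    """Skeleton stand-in for the hardware the module will later be attached to."""
    def __init__(self, readings):
        self._readings = iter(readings)
    def read(self):
        return next(self._readings)

class ParseTemperatureTestbed(unittest.TestCase):
    def test_celsius_passthrough(self):
        self.assertAlmostEqual(parse_temperature(FakeSensor(["21.5C"]).read()), 21.5)
    def test_fahrenheit_conversion(self):
        self.assertAlmostEqual(parse_temperature(FakeSensor(["212F"]).read()), 100.0)

if __name__ == "__main__":
    unittest.main()
The fake sensor plays the role of the skeleton framework: it lets the module be exercised exactly as it will be used later, without the real program or hardware being present.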
Testbed
[ "Engineering" ]
595
[ "Software engineering", "Software testing" ]
1,035,712
https://en.wikipedia.org/wiki/Counts%20per%20minute
The measurement of ionizing radiation is sometimes expressed as a rate of counts per unit time as registered by a radiation monitoring instrument, for which counts per minute (cpm) and counts per second (cps) are commonly used quantities. Count rate measurements are associated with the detection of particles, such as alpha particles and beta particles. However, for gamma ray and X-ray dose measurements a unit such as the sievert is normally used. Both cpm and cps are the rate of detection events registered by the measuring instrument, not the rate of emission from the source of radiation. For radioactive decay measurements, these must not be confused with disintegrations per minute (dpm), which represents the rate of atomic disintegration events at the source of the radiation. Count rates The count rates of cps and cpm are generally accepted and convenient practical rate measurements. They are not SI units, but are de facto radiological units of measure in widespread use. Counts per minute (abbreviated to cpm) is a measure of the detection rate of ionization events per minute. Counts are only manifested in the reading of the measuring instrument and are not an absolute measure of the strength of the source of radiation. Whilst an instrument will display a rate in cpm, it does not have to detect counts for one minute, as it can infer the total per minute from a smaller sampling period. Counts per second (abbreviated to cps) is used when higher count rates are being encountered, or if hand-held radiation survey instruments are being used, which can be subject to rapid changes of count rate when the instrument is moved over a source of radiation in a survey area. Conversion to dose rate Count rate does not universally equate to dose rate, and there is no simple universal conversion factor. Any conversion is instrument-specific. The count is the number of events detected, but the dose rate relates to the amount of ionising energy deposited in the sensor of the radiation detector. The conversion calculation depends on the radiation energy levels, the type of radiation being detected, and the radiometric characteristics of the detector. The continuous-current ion chamber instrument can easily measure dose but cannot measure counts. The Geiger counter, however, can measure counts but not the energy of the radiation, so a technique known as energy compensation of the detector tube is used to produce a dose reading. This modifies the tube characteristic so that each count resulting from a particular radiation type is equivalent to a specific quantity of deposited dose. More can be found on radiation dose and dose rate at absorbed dose and equivalent dose. Count rates versus disintegration rates Disintegrations per minute (dpm) and disintegrations per second (dps) are measures of the activity of the source of radioactivity. The SI unit of radioactivity, the becquerel (Bq), is equivalent to one disintegration per second. This unit should not be confused with cps, which is the number of counts received by an instrument from the source. The quantity dps (dpm) is the number of atoms that have decayed in one second (one minute), not the number of atoms that have been measured as decayed. The efficiency of the radiation detector and its position relative to the source of radiation must be accounted for when relating cpm to dpm. This is known as the counting efficiency. The factors affecting counting efficiency are shown in the accompanying diagram. 
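The relationship between cpm and dpm described above amounts to dividing the count rate by the counting efficiency. The Python sketch below is illustrative only; the efficiency value is an assumed example, and in practice it must be determined for the specific detector, geometry, and radiation type.
def cpm_to_dpm(cpm, efficiency):
    """Convert an instrument count rate (cpm) to a source disintegration rate (dpm).

    efficiency is the overall counting efficiency (0 to 1), bundling detector
    efficiency and source-detector geometry for the particular setup.
    """
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be between 0 and 1")
    return cpm / efficiency

# Example with an assumed 25% counting efficiency (illustrative figure only):
print(cpm_to_dpm(1500.0, 0.25))   # 6000.0 dpm at the source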
Surface emission rate The surface emission rate (SER) is used as a measure of the rate of particles emitted from a radioactive source which is being used as a calibration standard. When the source is of plate or planar construction and the radiation of interest is emitted from one face, it is known as "2π emission". When the emissions are from a "point source" and the radiation of interest is emitted from all faces, it is known as "4π emission". These terms correspond to the spherical geometry over which the emissions are being measured. The SER is the measured emission rate from the source and is related to, but different from, the source activity. This relationship is affected by the type of radiation being emitted and the physical nature of the radioactive source. Sources with 4π emissions will nearly always have a lower SER than the Bq activity due to self-shielding within the active layer of the source. Sources with 2π emissions are subject to self-shielding or backscatter, so the SER is variable and can individually be greater than or less than 50% of the Bq activity, depending on construction and the particle types being measured. Backscatter reflects particles off the backing plate of the active layer and increases the rate; beta particle plate sources usually have significant backscatter, whereas alpha plate sources usually have none. However, alpha particles are easily attenuated if the active layer is made too thick. The SER is established by measurement using calibrated equipment, normally traceable to a national standard source of radiation. Ratemeters and scalers In radiation protection practice, an instrument which reads a rate of detected events is normally known as a ratemeter; the count-rate meter was first developed by Robley D. Evans in 1939. This mode of operation provides a real-time dynamic indication of the radiation rate, and the principle has found widespread application in radiation survey meters used in health physics. An instrument which totalises the events detected over a time period is known as a scaler. This colloquial name comes from the early days of automatic radiation counting, when a pulse-dividing circuit was required to "scale down" a high count rate to a speed which mechanical counters could register. This technique was developed by C. E. Wynn-Williams at the Cavendish Laboratory and first published in 1932. The original counters used a cascade of "Eccles-Jordan" divide-by-two circuits, today known as flip-flops. Early count readings were therefore binary numbers and had to be manually converted into decimal values. Later, with the development of electronic indicators, which started with the introduction of the Dekatron readout tube in the 1950s and culminated in the modern digital indicator, totalised readings came to be indicated directly in decimal notation. SI units for radioactive disintegration One becquerel (Bq) is equal to one disintegration per second, and 1 becquerel (Bq) is equal to 60 dpm. One curie (Ci), an old non-SI unit, is equal to 3.7 × 10¹⁰ Bq or dps, which is equal to 2.22 × 10¹² dpm. References External links Units of radioactivity Units of frequency Radiation protection
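The unit relationships quoted in the SI units section above translate directly into conversion factors. The short Python sketch below simply restates those figures (1 Bq = 1 dps = 60 dpm; 1 Ci = 3.7 × 10¹⁰ dps) and is not a substitute for a proper units library.
DPM_PER_BQ = 60.0           # 1 Bq = 1 disintegration per second = 60 dpm
DPS_PER_CI = 3.7e10         # 1 curie = 3.7 x 10^10 disintegrations per second

def bq_to_dpm(bq):
    return bq * DPM_PER_BQ

def ci_to_dpm(ci):
    return ci * DPS_PER_CI * 60.0

print(bq_to_dpm(1.0))            # 60.0 dpm
print(f"{ci_to_dpm(1.0):.3g}")   # 2.22e+12 dpm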
Counts per minute
[ "Chemistry", "Mathematics" ]
1,341
[ "Quantity", "Units of radioactivity", "Radioactivity", "Units of frequency", "Units of measurement" ]
1,035,767
https://en.wikipedia.org/wiki/Stockholm%20Convention%20on%20Persistent%20Organic%20Pollutants
Stockholm Convention on Persistent Organic Pollutants is an international environmental treaty, signed on 22 May 2001 in Stockholm and effective from 17 May 2004, that aims to eliminate or restrict the production and use of persistent organic pollutants (POPs). History In 1995, the Governing Council of the United Nations Environment Programme (UNEP) called for global action to be taken on POPs, which it defined as "chemical substances that persist in the environment, bio-accumulate through the food web, and pose a risk of causing adverse effects to human health and the environment". Following this, the Intergovernmental Forum on Chemical Safety (IFCS) and the International Programme on Chemical Safety (IPCS) prepared an assessment of the 12 worst offenders, known as the dirty dozen. The Intergovernmental Negotiating Committee (INC) met five times between June 1998 and December 2000 to elaborate the convention, and delegates adopted the Stockholm Convention on POPs at the Conference of the Plenipotentiaries convened from 22 to 23 May 2001 in Stockholm, Sweden. The negotiations for the convention were completed on 23 May 2001 in Stockholm. The convention entered into force on 17 May 2004 with ratification by an initial 128 parties and 151 signatories. Co-signatories agree to outlaw nine of the dirty dozen chemicals, limit the use of DDT to malaria control, and curtail inadvertent production of dioxins and furans. Parties to the convention have agreed to a process by which persistent toxic compounds can be reviewed and added to the convention, if they meet certain criteria for persistence and transboundary threat. The first set of new chemicals to be added to the convention was agreed at a conference in Geneva on 8 May 2009. As of September 2022, there are 186 parties to the convention (185 states and the European Union). Notable non-ratifying states include the United States, Israel, and Malaysia. The Stockholm Convention was adopted into EU legislation in Regulation (EC) No 850/2004. In 2019, the latter was replaced by Regulation (EU) 2019/1021. Summary of provisions Key elements of the Convention include the requirement that developed countries provide new and additional financial resources, and measures to eliminate production and use of intentionally produced POPs, eliminate unintentionally produced POPs where feasible, and manage and dispose of POPs wastes in an environmentally sound manner. Precaution is exercised throughout the Stockholm Convention, with specific references in the preamble, the objective, and the provision on identifying new POPs. Persistent Organic Pollutants Review Committee When adopting the convention, provision was made for a procedure to identify additional POPs and the criteria to be considered in doing so. At the first meeting of the Conference of the Parties (COP1), held in Punta del Este, Uruguay, from 2–6 May 2005, the POPRC was established to consider additional candidates nominated for listing under the convention. The committee is composed of 31 experts nominated by parties from the five United Nations regional groups and reviews nominated chemicals in three stages. The Committee first determines whether the substance fulfills POP screening criteria detailed in Annex D of the convention, relating to its persistence, bioaccumulation, potential for long-range environmental transport (LRET), and toxicity. 
If a substance is deemed to fulfill these requirements, the Committee then drafts a risk profile according to Annex E to evaluate whether the substance is likely, as a result of its LRET, to lead to significant adverse human health and/or environmental effects and therefore warrants global action. Finally, if the POPRC finds that global action is warranted, it develops a risk management evaluation, according to Annex F, reflecting socioeconomic considerations associated with possible control measures. Based on this, the POPRC decides to recommend that the COP list the substance under one or more of the annexes to the convention. The POPRC has met annually in Geneva, Switzerland, since its establishment. The seventh meeting of the Persistent Organic Pollutants Review Committee (POPRC-7) of the Stockholm Convention on Persistent Organic Pollutants (POPs) took place from 10 to 14 October 2011 in Geneva. POPRC-8 was held from 15 to 19 October 2012 in Geneva, POPRC-9 to POPRC-15 were held in Rome, while POPRC-16 needed to be held online. Listed substances There were initially twelve distinct chemicals ("dirty dozen") listed in three categories. Two chemicals, hexachlorobenzene and polychlorinated biphenyls, were listed in both categories A and C. Currently, five chemicals are listed in both categories. Chemicals newly proposed for inclusion in Annexes A, B, C POPRC-7 considered three proposals for listing in Annexes A, B and/or C of the convention: chlorinated naphthalenes (CNs), hexachlorobutadiene (HCBD) and pentachlorophenol (PCP), its salts and esters. The proposal is the first stage of the POPRC's work in assessing a substance, and requires the POPRC to assess whether the proposed chemical satisfies the criteria in Annex D of the convention. The criteria for forwarding a proposed chemical to the risk profile preparation stage are persistence, bioaccumulation, potential for long-range environmental transport (LRET), and adverse effects. POPRC-8 proposed hexabromocyclododecane for listing in Annex A, with specific exemptions for production and use in expanded polystyrene and extruded polystyrene in buildings. This proposal was agreed at the sixth Conference of Parties on 28 April-10 May 2013. POPRC-9 proposed di-, tri-, tetra-, penta-, hexa-, hepta- and octa-chlorinated naphthalenes, and hexachlorobutadiene for listing in Annexes A and C. It also set up further work on pentachlorophenol, its salts and esters, and decabromodiphenyl ether, perfluorooctanesulfonic acid, its salts and perfluorooctane sulfonyl chloride. POPRC-15 proposed PFHxS for listing in Annex A without specific exemptions. Currently, chlorpyrifos, long-chain perfluorocarboxylic acids and medium-chain chlorinated paraffins are under review. Controversies Although some critics have alleged that the treaty is responsible for the continuing death toll from malaria, in reality the treaty specifically permits the public health use of DDT for the control of mosquitoes (the malaria vector). There are also ways to prevent high amounts of DDT consumed by using other malaria controls such as window screens. As long as there are specific measures taken, such as use of DDT indoors, then the limited amount of DDT can be used in a regulated fashion. From a developing country perspective, a lack of data and information about the sources, releases, and environmental levels of POPs hampers negotiations on specific compounds, and indicates a strong need for research. 
Another controversy concerns certain POPs (which remain active, particularly in Arctic biota) that are mentioned in the Stockholm Convention but were not part of the dirty dozen, such as perfluorooctane sulfonate (PFOS). PFOS has many general uses, such as in stain repellents, but it has properties that can make it dangerous, because PFOS is highly resistant to environmental breakdown. PFOS can be toxic, with effects including increased offspring death, decreased body weight, and disruption of neurological systems. What makes this compound controversial is the economic and political impact it can have among various countries and businesses. Related conventions and other ongoing negotiations regarding pollution Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade Convention on Long-Range Transboundary Air Pollution (CLRTAP) Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal Minamata Convention on Mercury Ongoing negotiations Intergovernmental Forum on Chemical Safety (IFCS) Strategic Approach to International Chemicals Management (SAICM) References Further reading Chasek, Pam, David L. Downie, and J.W. Brown (2013). Global Environmental Politics, 6th Edition, Boulder: Westview Press. Downie, D., Krueger, J. and Selin, H. (2005). "Global Policy for Toxic Chemicals", in R. Axelrod, D. Downie and N. Vig (eds.) The Global Environment: Institutions, Law & Policy, 2nd Edition, Washington: CQ Press. Downie, David and Jessica Templeton (2013). "Persistent Organic Pollutants." The Routledge Handbook of Global Environmental Politics. New York: Routledge. Porta, M., Gasull, M., López, T., Pumarega, J. Distribution of blood concentrations of persistent organic pollutants in representative samples of the general population. United Nations Environment Programme – Regional Activity Centre for Cleaner Production (CP/RAC) Annual Technical Publication 2010, vol. 9, pp. 24–31 (PDF). Selin, H. (2010). Global Governance of Hazardous Chemicals: Challenges of Multilevel Management, Cambridge: The MIT Press. 
External links Stockholm Convention Secretariat Text of the Convention Ratifications Earth Negotiation Bulletin coverage of Stockholm Convention Meetings Introduction to the POPs Convention Environmental treaties Biodegradable waste management Chemical safety Obsolete pesticides Treaties concluded in 2001 Treaties entered into force in 2004 History of Stockholm Waste treaties Regulation of chemicals 2004 in the environment Treaties of Afghanistan Treaties of Albania Treaties of Algeria Treaties of Angola Treaties of Antigua and Barbuda Treaties of Argentina Treaties of Armenia Treaties of Australia Treaties of Austria Treaties of Azerbaijan Treaties of the Bahamas Treaties of Bahrain Treaties of Bangladesh Treaties of Barbados Treaties of Belarus Treaties of Belgium Treaties of Belize Treaties of Benin Treaties of Bolivia Treaties of Bosnia and Herzegovina Treaties of Botswana Treaties of Brazil Treaties of Bulgaria Treaties of Burkina Faso Treaties of Burundi Treaties of Cambodia Treaties of Cameroon Treaties of Canada Treaties of Cape Verde Treaties of the Central African Republic Treaties of Chad Treaties of Chile Treaties of the People's Republic of China Treaties of Colombia Treaties of the Comoros Treaties of the Republic of the Congo Treaties of the Cook Islands Treaties of Costa Rica Treaties of Ivory Coast Treaties of Croatia Treaties of Cuba Treaties of Cyprus Treaties of the Czech Republic Treaties of North Korea Treaties of the Democratic Republic of the Congo Treaties of Denmark Treaties of Djibouti Treaties of Dominica Treaties of the Dominican Republic Treaties of Ecuador Treaties of Egypt Treaties of El Salvador Treaties of Eritrea Treaties of Estonia Treaties of Ethiopia Treaties of Fiji Treaties of Finland Treaties of France Treaties of Gabon Treaties of the Gambia Treaties of Georgia (country) Treaties of Germany Treaties of Ghana Treaties of Greece Treaties of Guatemala Treaties of Guinea Treaties of Guinea-Bissau Treaties of Guyana Treaties of Honduras Treaties of Hungary Treaties of Iceland Treaties of India Treaties of Indonesia Treaties of Iran Treaties of Iraq Treaties of Ireland Treaties of Italy Treaties of Jamaica Treaties of Japan Treaties of Jordan Treaties of Kazakhstan Treaties of Kenya Treaties of Kiribati Treaties of Kuwait Treaties of Kyrgyzstan Treaties of Laos Treaties of Latvia Treaties of Lebanon Treaties of Lesotho Treaties of Liberia Treaties of the Libyan Arab Jamahiriya Treaties of Liechtenstein Treaties of Lithuania Treaties of Luxembourg Treaties of Madagascar Treaties of Malawi Treaties of the Maldives Treaties of Mali Treaties of Malta Treaties of the Marshall Islands Treaties of Mauritania Treaties of Mauritius Treaties of Mexico Treaties of the Federated States of Micronesia Treaties of Monaco Treaties of Mongolia Treaties of Montenegro Treaties of Morocco Treaties of Mozambique Treaties of Myanmar Treaties of Namibia Treaties of Nauru Treaties of Nepal Treaties of the Netherlands Treaties of New Zealand Treaties of Nicaragua Treaties of Niger Treaties of Nigeria Treaties of Niue Treaties of Norway Treaties of Oman Treaties of Pakistan Treaties of Palau Treaties of the State of Palestine Treaties of Panama Treaties of Papua New Guinea Treaties of Paraguay Treaties of Peru Treaties of the Philippines Treaties of Poland Treaties of Portugal Treaties of Qatar Treaties of South Korea Treaties of Moldova Treaties of Romania Treaties of Russia Treaties of Rwanda Treaties of Samoa Treaties of São Tomé and Príncipe Treaties of Senegal Treaties of 
Serbia Treaties of Seychelles Treaties of Sierra Leone Treaties of Singapore Treaties of Slovakia Treaties of Slovenia Treaties of the Solomon Islands Treaties of the Transitional Federal Government of Somalia Treaties of South Africa Treaties of Spain Treaties of Sri Lanka Treaties of Saint Kitts and Nevis Treaties of Saint Lucia Treaties of Saint Vincent and the Grenadines Treaties of the Republic of the Sudan (1985–2011) Treaties of Suriname Treaties of Eswatini Treaties of Sweden Treaties of Switzerland Treaties of Syria Treaties of Tajikistan Treaties of Thailand Treaties of North Macedonia Treaties of Timor-Leste Treaties of Togo Treaties of Tonga Treaties of Trinidad and Tobago Treaties of Tunisia Treaties of Turkey Treaties of Tuvalu Treaties of Uganda Treaties of Ukraine Treaties of the United Arab Emirates Treaties of the United Kingdom Treaties of Tanzania Treaties of Uruguay Treaties of Vanuatu Treaties of Venezuela Treaties of Vietnam Treaties of Yemen Treaties of Zambia Treaties of Zimbabwe Treaties entered into by the European Union United Nations treaties 2001 in Sweden Treaties extended to the Faroe Islands Treaties extended to Hong Kong Treaties extended to Macau
Stockholm Convention on Persistent Organic Pollutants
[ "Chemistry" ]
2,687
[ "Chemical accident", "Biodegradable waste management", "Regulation of chemicals", "Biodegradation", "nan", "Chemical safety" ]
1,035,900
https://en.wikipedia.org/wiki/Saber-toothed%20predator
A saber-tooth (alternatively spelled sabre-tooth) is any member of various extinct groups of predatory therapsids, predominantly carnivoran mammals, that are characterized by long, curved saber-shaped canine teeth which protruded from the mouth when closed. Among the earliest animals that can be described as "sabertooths" are the gorgonopsids, a group of non-mammalian therapsids that lived during the Middle-Late Permian, around 270-252 million years ago. Saber-toothed mammals have been found almost worldwide from the Eocene epoch to the end of the Pleistocene epoch (42 million years ago – 11,000 years ago). One of the best-known genera is the machairodont or "saber-toothed cat" Smilodon, the species of which, especially S. fatalis, are popularly referred to as "saber-toothed tigers", although they are not closely related to tigers (Panthera). Despite some similarities, not all saber-tooths are closely related to saber-toothed cats or felids in-general. Instead, many members are classified into different families of Feliformia, such as Barbourofelidae and Nimravidae; the oxyaenid "creodont" genera Machaeroides and Apataelurus; and two extinct lineages of metatherian mammals, the thylacosmilids of Sparassodonta, and deltatheroideans, which are more closely related to marsupials. In this regard, these saber-toothed mammals can be viewed as examples of convergent evolution. This convergence is remarkable due not only to the development of elongated canines, but also a suite of other characteristics, such as a wide gape and bulky forelimbs, which is so consistent that it has been termed the "saber-tooth suite." Of the feliform lineages, the family Nimravidae is the oldest, entering the landscape around 42 mya and becoming extinct by 7.2 mya. Barbourofelidae entered around 16.9 mya and were extinct by 9 mya. These two would have shared some habitats. Morphology The different groups of saber-toothed predators evolved their saber-toothed characteristics entirely independently. They are most known for having maxillary canines which extended down from the mouth when the mouth was closed. Saber-toothed cats were generally more robust than today's cats and were quite bear-like in build. They are believed to have been excellent hunters, taking animals such as sloths, mammoths, and other large prey. Evidence from the numbers found at the La Brea Tar Pits suggests that Smilodon, like modern lions, was a social carnivore. The first saber-tooths to appear were non-mammalian synapsids, such as the gorgonopsids; they were one of the first groups of animals within Synapsida to experience the specialization of saber teeth, and many had long canines. Some had two pairs of upper canines with two jutting down from each side, but most had one pair of upper extreme canines. Because of their primitiveness, they are extremely easy to tell from machairodonts. Several defining characteristics are a lack of a coronoid process, many sharp "premolars" more akin to pegs than scissors, and very long skulls. Despite their large canines, however, most gorgonopsians probably lacked the other specializations found in true saber-toothed predator ecomorphs. Two gorgonopsians, Smilesaurus and Inostrancevia, had exceptionally large canines and may have been closer functional analogues to later sabertooths. The second appearance is in Deltatheroida, a lineage of Cretaceous metatherians. 
At least one genus, Lotheridium, possessed long canines, and given both the predatory habits of the clade as well as the generally incomplete material, this may have been a more widespread adaptation. The third appearance of long canines is Thylacosmilus, which is the most distinctive of the saber-tooth mammals and is also easy to tell apart. It differs from machairodonts in possessing a very prominent flange and a tooth that is triangular in cross section. The root of the canines is more prominent than in machairodonts and a true sagittal crest is absent. The fourth instance of saber-teeth is from the clade Oxyaenidae. The small and slender Machaeroides bore canines that were thinner than in the average machairodont. Its muzzle was longer and narrower. The fifth saber-tooth appearance is the ancient feliform (carnivoran) family Nimravidae. Both groups have short skulls with tall sagittal crests, and their general skull shape is very similar. Some have distinctive flanges, and some have none at all, so this confuses the matter further. Machairodonts were almost always bigger, though, and their canines were longer and more stout for the most part, but exceptions do appear. The sixth appearance is the barbourofelids. These feliform carnivorans are very closely related to actual cats. The best-known barbourofelid is the eponymous Barbourofelis, which differs from most machairodonts by having a much heavier and more stout mandible, smaller orbits, massive and almost knobby flanges, and canines that are farther back. The average machairodont had well-developed incisors, but barbourofelids' were more extreme. The seventh and last saber-toothed group to evolve were the machairodonts themselves. Diet The evolution of enlarged canines in Tertiary carnivores was a result of large mammals being the source of prey for saber-toothed predators. The development of the saber-toothed condition appears to represent a shift in function and killing behavior, rather than one in predator-prey relations. Many hypotheses exist concerning saber-tooth killing methods, some of which include attacking soft tissue such as the belly and throat, where biting deep was essential to generate killing blows. The elongated teeth also aided with strikes reaching major blood vessels in these large mammals. However, the precise functional advantage of the saber-tooth's bite, particularly in relation to prey size, is a mystery. A new point-to-point bite model is introduced in the article by Andersson et al., showing that for saber-tooth cats, the depth of the killing bite decreases dramatically with increasing prey size. The extended gape of saber-toothed cats results in a considerable increase in bite depth when biting into prey with a radius of less than 10 cm. For the saber-tooth, this size-reversed functional advantage suggests predation on species within a similar size range to those attacked by present-day carnivorans, rather than "megaherbivores" as previously believed. A disputing view of the cat's hunting technique and ability is presented by C. K. Brain in The Hunters or the Hunted?, in which he attributes the cat's prey-killing abilities to its large neck muscles rather than its jaws. Large cats use both the upper and lower jaw to bite down and bring down the prey. The strong bite of the jaw is accredited to the strong temporalis muscle that attach from the skull to the coronoid process of the jaw. The larger the coronoid process, the larger the muscle that attaches there, so the stronger the bite. As C.K. 
Brain points out, the saber-toothed cats had a greatly reduced coronoid process and therefore a disadvantageously weak bite. The cat did, however, have an enlarged mastoid process, a muscle attachment at the base of the skull, which attaches to neck muscles. According to C.K. Brain, the saber-tooth would use a "downward thrust of the head, powered by the neck muscles" to drive the large upper canines into the prey. This technique was "more efficient than those of true cats". Biology The similarity in all these unrelated families involves the convergent evolution of the saber-like canines as a hunting adaptation. Meehan et al. note that it took around 8 million years for a new type of saber-toothed cat to fill the niche of an extinct predecessor in a similar ecological role; this has happened at least four times with different families of animals developing this adaptation. Although the adaptation of the saber-like canines made these creatures successful, it seems that the shift to obligate carnivorism, along with co-evolution with large prey animals, led the saber-toothed cats of each time period to extinction. As per Van Valkenburgh, the adaptations that made saber-toothed cats successful also made the creatures vulnerable to extinction. In her example, trends toward an increase in size, along with greater specialization, acted as a "macro-evolutionary ratchet": when large prey became scarce or extinct, these creatures would be unable to adapt to smaller prey or consume other sources of food, and would be unable to reduce their size so as to need less food. More recently, it has been suggested that Thylacosmilus differed radically from its placental counterparts in possessing differently shaped canines and lacking incisors. This suggests that it was not ecologically analogous to other saber-teeth and possibly an entrail specialist. Another study has found that other saber toothed species similarly had diverse lifestyles and that superficial anatomical similarities obscure them. Phylogeny of feliform saber-tooths The following cladogram shows the relationships between the feliform saber-tooths, including the Nimravidae, Barbourofelidae and Machairodontinae.Piras P, Maiorino L, Teresi L, Meloro C, Lucci F, Kotsakis T, Raia P (2013) Data from: Bite of the cats: relationships between functional integration and mechanical performance as revealed by mandible geometry. Dryad Digital Repository. https://dx.doi.org/10.5061/dryad.kp8t3 Saber-toothed groups are marked with background colors. Saber-tooth genera Saber-tooth taxonomy All saber-toothed mammals lived between 33.7 million and 9,000 years ago, but the evolutionary lines that led to the various saber-tooth genera started to diverge much earlier. It is thus a polyphyletic grouping. The lineage that led to Thylacosmilus was the first to split off, in the late Cretaceous. It is a metatherian, and thus more closely related to kangaroos and opossums than the felines. The hyaenodonts diverged next, possibly before Laurasiatheria, then the oxyaenids, and then the nimravids, before the diversification of the truly feline saber-tooths. 
Clade Therapsida Clade: †Gorgonopsia †Inostrancevia †Smilesaurus Class: Mammalia Clade: Metatheria (diverged ?, in the Cretaceous) Order: †Deltatheroida (an extinct group of metatherian carnivores) Family: †Deltatheridiidae †Lotheridium Order: †Sparassodonta (an extinct group of metatherian carnivores) Family: †Thylacosmilidae †Patagosmilus †Anachlysictis †Thylacosmilus Subclass: Placentalia Order: †Hyaenodonta †Boualitomus Family: †Sinopidae Genus: †Sinopa Superfamily: †Hyaenodontoidea Family: †Hyaenodontidae Subfamily: †Hyaenodontinae Tribe: †Hyaenodontini Genus: †Hyaenodon Family: †Proviverridae †Parvagula Superfamily: †Hyainailouroidea Family: †Hyainailouridae (paraphyletic family) Subfamily: †Hyainailourinae (paraphyletic subfamily) Tribe: †Leakitheriini Genus: †Leakitherium Tribe: †Metapterodontini Genus: †Metapterodon Order: †Oxyaenodonta Family: †Oxyaenidae Subfamily: †Machaeroidinae Genus: †Apataelurus Genus: †Machaeroides Order Carnivora Family †Nimravidae (diverged from the feliforms 48–55 Ma BP, in the late Eocene) Subfamily †Nimravinae (Dinictis) Subfamily †Hoplophoninae Suborder Feliformia ('cat-like' carnivores) Family †Barbourofelidae (sister taxa to Felidae) Family Felidae (true cats) Subfamily †Machairodontinae (diverged ?, in the ?) Tribe †Homotherini †Homotherium †Machairodus †Xenosmilus Tribe †Metailurini †Dinofelis †Metailurus Tribe †Smilodontini †Megantereon †Paramachairodus †Smilodon References Further reading Mol, D., W. v. Logchem, K. v. Hooijdonk, R. Bakker. The Saber-Toothed Cat. DrukWare, Norg 2008. . External links Saber-toothed Cats at the Illinois State Museum Saber-toothed Cats at the UC Berkeley Museum of Paleontology Prehistoric cats and prehistoric cat-like creatures See also Sabertooth salmon Saber-toothed whale Saber-toothed squirrel Apex predators Convergent evolution Saber-toothed cats
Saber-toothed predator
[ "Biology" ]
2,857
[ "Convergent evolution", "Evolutionary biology concepts" ]
1,035,915
https://en.wikipedia.org/wiki/Sieve%20theory
Sieve theory is a set of general techniques in number theory, designed to count, or more realistically to estimate the size of, sifted sets of integers. The prototypical example of a sifted set is the set of prime numbers up to some prescribed limit X. Correspondingly, the prototypical example of a sieve is the sieve of Eratosthenes, or the more general Legendre sieve. The direct attack on prime numbers using these methods soon reaches apparently insuperable obstacles, in the way of the accumulation of error terms. In one of the major strands of number theory in the twentieth century, ways were found of avoiding some of the difficulties of a frontal attack with a naive idea of what sieving should be. One successful approach is to approximate a specific sifted set of numbers (e.g. the set of prime numbers) by another, simpler set (e.g. the set of almost prime numbers), which is typically somewhat larger than the original set, and easier to analyze. More sophisticated sieves also do not work directly with sets per se, but instead count them according to carefully chosen weight functions on these sets (options for giving some elements of these sets more "weight" than others). Furthermore, in some modern applications, sieves are used not to estimate the size of a sifted set, but to produce a function that is large on the set and mostly small outside it, while being easier to analyze than the characteristic function of the set. The term sieve was first used by the Norwegian mathematician Viggo Brun in 1915. However Brun's work was inspired by the works of the French mathematician Jean Merlin who died in the World War I and only two of his manuscripts survived. Basic sieve theory For information on notation see at the end. We follow the Ansatz from Opera de Cribro by John Friedlander and Henryk Iwaniec. We start with some countable sequence of non-negative numbers . In the most basic case this sequence is just the indicator function of some set we want to sieve. However this abstraction allows for more general situations. Next we introduce a general set of prime numbers called the sifting range and their product up to as a function . The goal of sieve theory is to estimate the sifting function In the case of this just counts the cardinality of a subset of numbers, that are coprime to the prime factors of . The inclusion–exclusion principle For define and for each prime denote the subset of multiples and let be the cardinality. We now introduce a way to calculate the cardinality of . For this the sifting range will be a concrete example of primes of the form . If one wants to calculate the cardinality of , one can apply the inclusion–exclusion principle. This algorithm works like this: first one removes from the cardinality of the cardinality and . Now since one has removed the numbers that are divisible by and twice, one has to add the cardinality . In the next step one removes and adds and again. Additionally one has now to remove , i.e. the cardinality of all numbers divisible by and . This leads to the inclusion–exclusion principle Notice that one can write this as where is the Möbius function and the product of all primes in and . Legendre's identity We can rewrite the sifting function with Legendre's identity by using the Möbius function and some functions induced by the elements of Example Let and . 
The Möbius function is negative for every prime, so we get Approximation of the congruence sum One assumes then that can be written as where is a density, meaning a multiplicative function such that and is an approximation of and is some remainder term. The sifting function becomes or in short One tries then to estimate the sifting function by finding upper and lower bounds for respectively and . The partial sum of the sifting function alternately over- and undercounts, so the remainder term will be huge. Brun's idea to improve this was to replace in the sifting function with a weight sequence consisting of restricted Möbius functions. Choosing two appropriate sequences and and denoting the sifting functions with and , one can get lower and upper bounds for the original sifting functions Since is multiplicative, one can also work with the identity Notation: a word of caution regarding the notation, in the literature one often identifies the set of sequences with the set itself. This means one writes to define a sequence . Also in the literature the sum is sometimes notated as the cardinality of some set , while we have defined to be already the cardinality of this set. We used to denote the set of primes and for the greatest common divisor of and . Types of sieving Modern sieves include the Brun sieve, the Selberg sieve, the Turán sieve, the large sieve, the larger sieve and the Goldston–Pintz–Yıldırım sieve. One of the original purposes of sieve theory was to try to prove conjectures in number theory such as the twin prime conjecture. While the original broad aims of sieve theory still are largely unachieved, there have been some partial successes, especially in combination with other number theoretic tools. Highlights include: Brun's theorem, which shows that the sum of the reciprocals of the twin primes converges (whereas the sum of the reciprocals of all primes diverges); Chen's theorem, which shows that there are infinitely many primes p such that p + 2 is either a prime or a semiprime (the product of two primes); a closely related theorem of Chen Jingrun asserts that every sufficiently large even number is the sum of a prime and another number which is either a prime or a semiprime. These can be considered to be near-misses to the twin prime conjecture and the Goldbach conjecture respectively. The fundamental lemma of sieve theory, which asserts that if one is sifting a set of N numbers, then one can accurately estimate the number of elements left in the sieve after iterations provided that is sufficiently small (fractions such as 1/10 are quite typical here). This lemma is usually too weak to sieve out primes (which generally require something like iterations), but can be enough to obtain results regarding almost primes. The Friedlander–Iwaniec theorem, which asserts that there are infinitely many primes of the form . Zhang's theorem , which shows that there are infinitely many pairs of primes within a bounded distance. The Maynard–Tao theorem generalizes Zhang's theorem to arbitrarily long sequences of primes. Techniques of sieve theory The techniques of sieve theory can be quite powerful, but they seem to be limited by an obstacle known as the parity problem, which roughly speaking asserts that sieve theory methods have extreme difficulty distinguishing between numbers with an odd number of prime factors and numbers with an even number of prime factors. This parity problem is still not very well understood. 
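Several of the inline formulas in the "Basic sieve theory" section above appear to have been lost in extraction. For reference, the standard definitions that the prose describes, in the Friedlander–Iwaniec conventions the section cites, are restated below; the symbols (the sequence A, the sifting range P, P(z), the sifting function S, the density g, the approximation X and remainders r_d) are reconstructed from standard usage rather than quoted from the original, and minor conventions (such as p < z versus p ≤ z) vary by author.

```latex
% Standard sifting setup in the Friedlander–Iwaniec conventions; the symbols
% below are a reconstruction of the lost inline formulas, not a verbatim quote.
\[
  \mathcal{A} = (a_n)_{n \le x},\quad a_n \ge 0, \qquad
  P(z) = \prod_{\substack{p \in \mathcal{P} \\ p < z}} p,
\]
\[
  S(\mathcal{A}, \mathcal{P}, z) = \sum_{\substack{n \le x \\ (n,\,P(z)) = 1}} a_n
  \qquad\text{(the sifting function).}
\]
% Legendre's identity, with A_d the subsequence supported on multiples of d:
\[
  S(\mathcal{A}, \mathcal{P}, z) = \sum_{d \mid P(z)} \mu(d)\,|\mathcal{A}_d|,
  \qquad
  |\mathcal{A}_d| = \sum_{\substack{n \le x \\ d \mid n}} a_n .
\]
% Writing |A_d| = g(d) X + r_d with a multiplicative density g (0 <= g(p) < 1)
% separates a main term from a remainder:
\[
  S(\mathcal{A}, \mathcal{P}, z)
  = X \prod_{\substack{p \in \mathcal{P} \\ p < z}} \bigl(1 - g(p)\bigr)
    + \sum_{d \mid P(z)} \mu(d)\, r_d .
\]
```

Upper- and lower-bound sieves such as Brun's then replace the Möbius function in Legendre's identity with truncated weight sequences, exactly as sketched in the section above.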
Compared with other methods in number theory, sieve theory is comparatively elementary, in the sense that it does not necessarily require sophisticated concepts from either algebraic number theory or analytic number theory. Nevertheless, the more advanced sieves can still get very intricate and delicate (especially when combined with other deep techniques in number theory), and entire textbooks have been devoted to this single subfield of number theory; a classic reference is Halberstam and Richert's Sieve Methods, and a more modern text is Friedlander and Iwaniec's Opera de Cribro. The sieve methods discussed in this article are not closely related to the integer factorization sieve methods such as the quadratic sieve and the general number field sieve. Those factorization methods use the idea of the sieve of Eratosthenes to determine efficiently which members of a list of numbers can be completely factored into small primes. Literature External links References
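As a concrete illustration of the Eratosthenes–Legendre style of counting with which the article opens, the sketch below counts the integers up to x that are free of prime factors below z, once directly and once through Legendre's identity with the Möbius function. It is an illustrative sketch under my own naming and toy parameters, not code from any reference above.

```python
from math import gcd, prod
from itertools import combinations

def sifting_function_direct(x, z):
    """Count n <= x with gcd(n, P(z)) == 1, where P(z) is the product of primes < z."""
    primes = [p for p in range(2, z) if all(p % q for q in range(2, int(p**0.5) + 1))]
    P = prod(primes) if primes else 1
    return sum(1 for n in range(1, x + 1) if gcd(n, P) == 1)

def sifting_function_legendre(x, z):
    """Same count via Legendre's identity: sum over squarefree d | P(z) of mu(d) * floor(x/d)."""
    primes = [p for p in range(2, z) if all(p % q for q in range(2, int(p**0.5) + 1))]
    total = 0
    for k in range(len(primes) + 1):            # choosing k primes gives mu(d) = (-1)^k
        for combo in combinations(primes, k):
            d = prod(combo) if combo else 1
            total += (-1) ** k * (x // d)       # |A_d| = floor(x/d) for A = {1, ..., x}
    return total

if __name__ == "__main__":
    x, z = 100, 10
    assert sifting_function_direct(x, z) == sifting_function_legendre(x, z)
    print(sifting_function_direct(x, z))  # integers up to 100 with no prime factor below 10
```

The number of divisors d of P(z) grows exponentially with the number of sifting primes, which is precisely the accumulation of remainder terms that Brun-type and later sieves are designed to control.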
Sieve theory
[ "Mathematics" ]
1,622
[ "Sieve theory", "Combinatorics" ]
1,036,008
https://en.wikipedia.org/wiki/Hugs%20and%20kisses
Hugs and kisses, abbreviated in the Anglosphere as XO or XOXO, is an informal term used for expressing sincerity, faith, love, or good friendship at the end of a written letter, email or text message. Origins The earliest attestation of the use of either x or o to indicate kisses identified by the Oxford English Dictionary appears in the English novellist Florence Montgomery's 1878 book Seaforth, which mentions "This letter [...] ends with the inevitable row of kisses,—sometimes expressed by × × × × ×, and sometimes by o o o o o o, according to the taste of the young scribbler". Here it appears that x and o are both ways to indicate a kiss. (Earlier versions of the dictionary identified an example from 1763, one Gil. White signing off a letter with "I am with many a xxxxxxx and many a Pater noster and Ave Maria, Gil. White". This has, however, since been reinterpreted as an indication of blessings rather than kisses, perhaps evoking the Christian sign of the cross.) Nothing more is known about the origins of x and o meaning 'hugs' or 'kisses'. A 2014 article in The Washington Post that drew on interviews with scholars noted that "the Internet abounds with origin theories" yet that "there is no definitive answer to how a cross came to mean a kiss" and even that "less is known about how 'o' came to signify a hug". Speculations include that the use of x to indicate a kiss was transferred from earlier symbolic uses of the letter. Allegedly, in the Middle Ages, a Christian cross might be drawn on documents or letters to mean sincerity, faith, and honesty; the sign was certainly sometimes used in place of a signature. Unscholarly speculations sometimes extend to the idea that after a cross was written at the end of a document, the writer might kiss it as a show of their sincerity. The Greek word for Christ, ΧΡΙΣΤΟΣ, gave rise to the practice of using the Latin letter X as an abbreviation for 'Christ' (similar to the more elaborate Chi Rho symbol). Supposedly, this was then kissed in this tradition of displaying a sacred oath. There is speculation on the Internet that the 'O' is of North American descent: when arriving in the United States, Jewish immigrants, most of whose first language was Yiddish, would use an 'O' to sign documents, thus not using the sign of the cross, and shop keepers would often use an 'O' when signing documents, in place of an 'X'. See also Signature P.S. References Intimate relationships Interpersonal relationships Emoticons
Hugs and kisses
[ "Mathematics", "Biology" ]
561
[ "Behavior", "Symbols", "Emoticons", "Interpersonal relationships", "Human behavior" ]
1,036,106
https://en.wikipedia.org/wiki/Flash%20mob%20computing
Flash mob computing or flash mob computer is a temporary ad hoc computer cluster running specific software to coordinate the individual computers into one single supercomputer. A flash mob computer is distinct from other types of computer clusters in that it is set up and broken down on the same day or during a similar brief amount of time and involves many independent owners of computers coming together at a central physical location to work on a specific problem and/or social event. Flash mob computer derives its name from the more general term flash mob which can mean any activity involving many people co-ordinated through virtual communities coming together for brief periods of time for a specific task or event. Flash mob computing is a more specific type of flash mob for the purpose of bringing people and their computers together to work on a single task or event. History The first flash mob computer was created on April 3, 2004 at the University of San Francisco using software written at USF called FlashMob (not to be confused with the more general term flash mob). The event, called FlashMob I, was a success. There was a call for computers on the computer news website Slashdot. An article in The New York Times "Hey, Gang, Let’s Make Our Own Supercomputer" brought a lot of attention to the effort. More than 700 computers were brought to the gym at the University of San Francisco, and were wired to a network donated by Foundry Networks. At FlashMob I the participants were able to run a benchmark on 256 of the computers, and achieved a peak rate of 180 Gflops (billions of calculations per second), though this computation stopped three quarters of the way due to a node failure. The best, complete run used 150 computers and resulted in 77 Gflops. FlashMob I was run off a bootable CD-ROM that ran a copy of Morphix Linux, which was only available for the x86 platform. Despite these efforts, the project was unable to achieve its original goal of running a cluster momentarily fast enough to enter the (November 2003) Top 500 list of supercomputers. The system would have had to provide at least 402.5 Gflops to match a Chinese cluster of 256 Intel Xeon nodes. For comparison, the fastest super computer at the time, Earth Simulator, provided 35,860 Gflops. Creators of flash mob computing Pat Miller was a research scientist at a national lab and adjunct professor at USF. His class on Do-It-Yourself Supercomputers evolved into FlashMob I from the original idea of every student bringing a commodity CPU or an Xbox to class to make an evanescent cluster at each meeting. Pat worked on all aspects of the FlashMob software. Greg Benson, USF Associate Professor of Computer Science, invented the name "flash mob computing", and proposed the first idea of wireless flash mob computers. Greg worked on the core infrastructure of the FlashMob run time environment. John Witchel (Stuyvesant High School '86) was a USF graduate student in computer science during 2004. After talking to Greg about the challenges of networking a stadium of wireless computers and listening to Pat lecture on what it takes to break the Top 500, John asked, "Couldn't we just invite people off the street and get enough power to break the Top 500?" FlashMob I and the FlashMob software was John's master's thesis. See also Flash mob Supercomputer References Markoff, J. (2004). "Hey, Gang, Let’s Make Our Own Supercomputer", The New York Times, February 23, 2004. 
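As a rough cross-check of the benchmark figures quoted above, the back-of-envelope sketch below restates the arithmetic behind the shortfall. The input figures are the ones reported in the article; treating per-node throughput as constant is an illustrative simplification, since real clusters do not scale linearly.

```python
# Back-of-envelope check of the FlashMob I figures quoted above (illustrative only).
peak_gflops, peak_nodes = 180.0, 256        # partial run that stopped on a node failure
best_gflops, best_nodes = 77.0, 150         # best complete run
top500_cutoff_gflops = 402.5                # November 2003 Top 500 entry point cited above

per_node = best_gflops / best_nodes                     # roughly 0.5 Gflops per commodity PC
nodes_needed = top500_cutoff_gflops / per_node          # roughly 780 similar nodes
print(f"{per_node:.2f} Gflops per node; about {nodes_needed:.0f} such nodes needed "
      f"to reach the {top500_cutoff_gflops} Gflops cutoff")
```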
External links San Francisco Flashmob Attempts Supercomputer Original Slashdot article FlashMobComputing.org Supercomputers University of San Francisco
Flash mob computing
[ "Technology" ]
780
[ "Supercomputers", "Supercomputing" ]
1,036,232
https://en.wikipedia.org/wiki/Waterfall%20rail%20accident
The Waterfall rail accident was a train accident that occurred on 31 January 2003 near Waterfall, New South Wales, Australia. The train derailed, killing seven people aboard, including the train driver, and injuring 40. The accident is famously remembered by systems engineers due to the poorly designed safety systems. Incident On the day of the disaster, a Tangara interurban train service, set G7, which had come from Sydney Central station at 6:24 am, departed Sydney Waterfall railway station moving south towards Port Kembla station via Wollongong. At approximately 7:15 am, the driver suffered a sudden heart attack and lost control of the train. The train was thus travelling at as it approached a curve in the tracks through a small cutting. The curve is rated for speeds no greater than . The train derailed, overturned and collided with the rocky walls of the cutting in a remote area south of the station. It was reported that the rescuers had to carry heavy lifting equipment for more than to reach the site. Two of the carriages landed on their sides and another two were severely damaged in the accident. In addition to the seven fatalities, many more passengers were injured. The subsequent official inquiry discovered the deadman's brake had not been applied. The train guard's solicitor stated that the guard was in a microsleep for as much as 30 seconds, just prior to the accident. The human-factors accident investigator determined the organisational culture had the driver firmly in charge, making it psychologically more difficult for the guard to act. Causes of the accident Tangara trains have a number of safety and vigilance devices installed, such as a deadman's brake, to address problems when the driver becomes incapacitated. If the driver releases pressure from this brake, the train will safely come to a halt. The train in question was a four-car Outer Suburban Tangara set, numbered G7 and fitted with a Mitsubishi Electric alternating current traction system for evaluation purposes. The driver was in the leading driving carriage and the guard was in the rear driving carriage, in between which were two non-driving motor cars. On this service, the guard, who could have applied the emergency brake, and the deadman's brake were the main safety mechanisms in place. The train was later found to be travelling in excess of as it approached the curve where the accident occurred. Neither the deadman's brake nor the guard had intervened in this situation, and this excessive speed was found to be the direct cause of the accident. Deficient training of train staff was also found to be a contributing factor in the accident. Train G7 did not re-enter service. It was scrapped in 2005 due to the damage sustained in the accident as all four cars were damaged beyond repair. These were the official findings of the NSW Ministry of Transport investigation of the accident. A report of the accident, managed by Commissioner Peter McInerney, was released in January 2004. Systemic causes and ignored technical problems It was reported that G7 was said to have been reported for technical problems "possibly half a dozen times" and had developed a reputation amongst the mechanical operations branch, saying the problems were "normal" for the set in question. During the six months leading up to the accident, three reports of technical problems were made. 
The inquiry found a number of flaws in the deadman's handle (which was not implicated in the accident) and in the deadman's pedal: the dead weight of the unconscious, overweight driver's leg after his heart attack appeared to be enough to hold the deadman's pedal down and so defeat it, and an estimated 44% of Sydney train drivers had legs of sufficient mass to do the same. The deadman's pedal therefore did not appear to operate as intended for drivers above a certain weight. Marks near the deadman's pedal also indicated that some drivers had been wedging a conveniently sized red signalling flag against it to defeat the pedal, both to prevent their legs from cramping in the poorly configured footwell and to give themselves freedom of movement in the cabin. The technical problems reported for Tangaras generally included brake failures and reported power surges. After the accident, these were often blamed by some as the cause of the accident. Many of the survivors of the accident mentioned a large acceleration before the accident occurred. Furthermore, there was an understanding that the emergency brake should seldom be used because the train would continue to accelerate for some distance before the brake came into effect. It was noted that the G7 trainset was the only train in the Tangara fleet to use 3-phase induction motors, and that these are not able to "run away". Furthermore, the majority of braking and traction system components were thoroughly examined and tested by experts from Australia and overseas, and found to be working normally. Those damaged in the crash were examined and were also found not to have had pre-existing damage able to cause such an accident. Official findings into the accident also blamed an "underdeveloped safety culture". There has been criticism of the way CityRail managed safety issues, resulting in what the NSW Ministry of Transport termed "a reactive approach to risk management". At the inquiry, Paul Webb, Queen's Counsel, representing the guard on the train, said the guard was in a microsleep at the time in question, for as much as 30 seconds, which would have removed the opportunity for the guard to halt the train. Webb also proposed that prevailing attitudes held the driver to be completely in charge of the train, so that speeding was not regarded as an acceptable reason for the guard to slow or halt it; this would have been a contributing factor in the accident. Prior to this derailment, neither training nor procedures required the guard to exercise control over the speed of the train by using the emergency brake pipe cock ("the tail"). Apart from the driver being considered the sole operator of the train, the emergency brake pipe cock does not provide the same degree of control over the automatic brake as a proper brake valve. The consensus among train crews was that a sudden emergency application from the rear could cause a breakaway (which is in fact not possible, as the cock does not apply the brakes solely to the rear car but rather uniformly along the full length of the train), and there was some evidence from previous accidents to support such an opinion; however, these did not involve the modern multiple-unit train design of which the Tangara is an example. Since this derailment, CityRail training and operational procedures now emphasise the guard's responsibility to monitor the train's speed and, if necessary, open the emergency brake pipe tap to stop the train. 
Changes implemented All Sydney and Intercity NSW TrainLink trains now have an additional safety feature, which has been fitted since the accident. In addition to the deadman handle and foot pedal, the trains are fitted with "task linked vigilance", which resets a timer every time the driver activates certain controls. If there is no change in control, a lamp flashes and then a buzzer sounds, and the driver is required to acknowledge the alarm by pressing a vigilance button. If the train's driver neither uses the controls nor acknowledges the vigilance alarm, the vigilance system is activated and makes an emergency brake application. All trains have also been fitted with data loggers to record the driver's and guard's actions as they work the train, as well as the train's speed. Such a system had been fitted to G7, but it was in the early stage of fleet roll-out and hence had not been commissioned and switched on at the time of the accident. Rescue workers who attended the scene were impeded from accessing the trapped passengers on the train because they did not have the keys required to open the emergency exit doors. Emergency exit mechanisms have all been modified to allow them to be used without requiring a key. RailCorp has installed internal emergency door release mechanisms on all new trains; however, many passengers found their own way out, since the train was broken into three pieces during the accident. CityRail/RailCorp incorporated emergency door releases on the insides of the new Waratah trains as a result of the inquiries into this disaster, enabling passengers to open the doors themselves in an emergency where the crew are incapacitated and the train is at a standstill. The 2004 changes to medical assessments of rail workers were developed in response to the incident. Overseen by the National Transport Commission, cardiac assessments are mandatory for certification and re-certification, with a prescribed mandatory checklist as part of the national standard, in the interest of ensuring public safety (the intended purpose of the health assessments); previously the health assessments had a clinical rather than an occupational-risk focus. References External links Special Commission of Inquiry into the Waterfall Rail Accident 2003 in New South Wales Derailments in Australia Disasters in Sydney January 2003 events in Australia Railway accidents in 2003 Runaway train disasters Railway accidents and incidents in New South Wales 2003 disasters in Australia
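The "task linked vigilance" behaviour described above is essentially a watchdog timer. The sketch below is a hypothetical, simplified model of that logic; the class name, timings, and staging are invented for illustration and are not taken from the actual NSW equipment or the inquiry report.

```python
import time

class TaskLinkedVigilance:
    """Simplified, hypothetical model of a task-linked vigilance watchdog.

    All timing parameters are illustrative assumptions, not real equipment values.
    """

    def __init__(self, warn_after=45.0, buzzer_after=10.0, brake_after=5.0):
        self.warn_after = warn_after        # seconds of control inactivity before the lamp flashes
        self.buzzer_after = buzzer_after    # further seconds before the buzzer sounds
        self.brake_after = brake_after      # further seconds before an emergency brake application
        self.last_activity = time.monotonic()

    def control_operated(self):
        """Any driver control input, or pressing the vigilance button, resets the timer."""
        self.last_activity = time.monotonic()

    def state(self):
        idle = time.monotonic() - self.last_activity
        if idle < self.warn_after:
            return "normal"
        if idle < self.warn_after + self.buzzer_after:
            return "lamp flashing"
        if idle < self.warn_after + self.buzzer_after + self.brake_after:
            return "buzzer sounding"
        return "emergency brake applied"
```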
Waterfall rail accident
[ "Technology", "Engineering" ]
1,833
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
1,036,238
https://en.wikipedia.org/wiki/Nicotinamide%20adenine%20dinucleotide%20phosphate
Nicotinamide adenine dinucleotide phosphate, abbreviated NADP or, in older notation, TPN (triphosphopyridine nucleotide), is a cofactor used in anabolic reactions, such as the Calvin cycle and lipid and nucleic acid syntheses, which require NADPH as a reducing agent ('hydrogen source'). NADPH is the reduced form, whereas NADP is the oxidized form. NADP is used by all forms of cellular life. NADP is essential for life because it is needed for cellular respiration. NADP differs from NAD by the presence of an additional phosphate group on the 2' position of the ribose ring that carries the adenine moiety. This extra phosphate is added by NAD+ kinase and removed by NADP+ phosphatase. Biosynthesis NADP In general, NADP+ is synthesized before NADPH is. Such a reaction usually starts with NAD+ from either the de-novo or the salvage pathway, with NAD+ kinase adding the extra phosphate group. ADP-ribosyl cyclase allows for synthesis from nicotinamide in the salvage pathway, and NADP+ phosphatase can convert NADPH back to NADH to maintain a balance. Some forms of the NAD+ kinase, notably the one in mitochondria, can also accept NADH to turn it directly into NADPH. The prokaryotic pathway is less well understood, but with all the similar proteins the process should work in a similar way. NADPH NADPH is produced from NADP+. The major source of NADPH in animals and other non-photosynthetic organisms is the pentose phosphate pathway, by glucose-6-phosphate dehydrogenase (G6PDH) in the first step. The pentose phosphate pathway also produces pentose, another important part of NAD(P)H, from glucose. Some bacteria also use G6PDH for the Entner–Doudoroff pathway, but NADPH production remains the same. Ferredoxin–NADP reductase, present in all domains of life, is a major source of NADPH in photosynthetic organisms including plants and cyanobacteria. It appears in the last step of the electron chain of the light reactions of photosynthesis. It is used as reducing power for the biosynthetic reactions in the Calvin cycle to assimilate carbon dioxide and help turn the carbon dioxide into glucose. It has functions in accepting electrons in other non-photosynthetic pathways as well: it is needed in the reduction of nitrate into ammonia for plant assimilation in nitrogen cycle and in the production of oils. There are several other lesser-known mechanisms of generating NADPH, all of which depend on the presence of mitochondria in eukaryotes. The key enzymes in these carbon-metabolism-related processes are NADP-linked isoforms of malic enzyme, isocitrate dehydrogenase (IDH), and glutamate dehydrogenase. In these reactions, NADP+ acts like NAD+ in other enzymes as an oxidizing agent. The isocitrate dehydrogenase mechanism appears to be the major source of NADPH in fat and possibly also liver cells. These processes are also found in bacteria. Bacteria can also use a NADP-dependent glyceraldehyde 3-phosphate dehydrogenase for the same purpose. Like the pentose phosphate pathway, these pathways are related to parts of glycolysis. Another carbon metabolism-related pathway involved in the generation of NADPH is the mitochondrial folate cycle, which uses principally serine as a source of one-carbon units to sustain nucleotide synthesis and redox homeostasis in mitochondria. Mitochondrial folate cycle has been recently suggested as the principal contributor to NADPH generation in mitochondria of cancer cells. NADPH can also be generated through pathways unrelated to carbon metabolism. 
The ferredoxin reductase is such an example. Nicotinamide nucleotide transhydrogenase transfers the hydrogen between NAD(P)H and NAD(P)+, and is found in eukaryotic mitochondria and many bacteria. There are versions that depend on a proton gradient to work and ones that do not. Some anaerobic organisms use NADP+-linked hydrogenase, ripping a hydride from hydrogen gas to produce a proton and NADPH. Like NADH, NADPH is fluorescent. NADPH in aqueous solution excited at the nicotinamide absorbance of ~335 nm (near UV) has a fluorescence emission which peaks at 445-460 nm (violet to blue). NADP has no appreciable fluorescence. Function NADPH provides the reducing agents, usually hydrogen atoms, for biosynthetic reactions and the oxidation-reduction involved in protecting against the toxicity of reactive oxygen species (ROS), allowing the regeneration of glutathione (GSH). NADPH is also used for anabolic pathways, such as cholesterol synthesis, steroid synthesis, ascorbic acid synthesis, xylitol synthesis, cytosolic fatty acid synthesis and microsomal fatty acid chain elongation. The NADPH system is also responsible for generating free radicals in immune cells by NADPH oxidase. These radicals are used to destroy pathogens in a process termed the respiratory burst. It is the source of reducing equivalents for cytochrome P450 hydroxylation of aromatic compounds, steroids, alcohols, and drugs. Stability NADH and NADPH are very stable in basic solutions, but NAD+ and NADP+ are degraded in basic solutions into a fluorescent product that can be used conveniently for quantitation. Conversely, NADPH and NADH are degraded by acidic solutions while NAD+/NADP+ are fairly stable to acid. Enzymes that use NADP(H) as a coenzyme Many enzymes that bind NADP share a common super-secondary structure named named the "Rossmann fold". The initial beta-alpha-beta (βαβ) fold is the most conserved segment of the Rossmann folds. This segment is in contact with the ADP portion of NADP. Therefore, it is also called an "ADP-binding βαβ fold". Adrenodoxin reductase: This enzyme is present ubiquitously in most organisms. It transfers two electrons from NADPH to FAD. In vertebrates, it serves as the first enzyme in the chain of mitochondrial P450 systems that synthesize steroid hormones. Enzymes that use NADP(H) as a substrate In 2018 and 2019, the first two reports of enzymes that catalyze the removal of the 2' phosphate of NADP(H) in eukaryotes emerged. First the cytoplasmic protein MESH1 (), then the mitochondrial protein nocturnin were reported. Of note, the structures and NADPH binding of MESH1 (5VXA) and nocturnin (6NF0) are not related. References Carbohydrate metabolism Nucleotides Coenzymes Pyridinium compounds
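For reference, the oxidative phase of the pentose phosphate pathway, named above as the major NADPH source in animals and other non-photosynthetic organisms, can be summarised by its net reaction. The stoichiometry below is the standard textbook form, added here as an illustration rather than quoted from the article.

```latex
% Net reaction of the oxidative phase of the pentose phosphate pathway,
% the major NADPH source discussed above (standard textbook stoichiometry):
\[
  \text{glucose 6-phosphate} \;+\; 2\,\mathrm{NADP^{+}} \;+\; \mathrm{H_2O}
  \;\longrightarrow\;
  \text{ribulose 5-phosphate} \;+\; 2\,\mathrm{NADPH} \;+\; 2\,\mathrm{H^{+}} \;+\; \mathrm{CO_2}
\]
```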
Nicotinamide adenine dinucleotide phosphate
[ "Chemistry" ]
1,492
[ "Carbohydrate metabolism", "Coenzymes", "Organic compounds", "Carbohydrate chemistry", "Metabolism" ]
1,036,259
https://en.wikipedia.org/wiki/Refrigerator
A refrigerator, commonly shortened to fridge, is a commercial and home appliance consisting of a thermally insulated compartment and a heat pump (mechanical, electronic or chemical) that transfers heat from its inside to its external environment so that its inside is cooled to a temperature below the room temperature. Refrigeration is an essential food storage technique around the world. The low temperature reduces the reproduction rate of bacteria, so the refrigerator lowers the rate of spoilage. A refrigerator maintains a temperature a few degrees above the freezing point of water. The optimal temperature range for perishable food storage is . A freezer is a specialized refrigerator, or portion of a refrigerator, that maintains its contents’ temperature below the freezing point of water. The refrigerator replaced the icebox, which had been a common household appliance for almost a century and a half. The United States Food and Drug Administration recommends that the refrigerator be kept at or below and that the freezer be regulated at . The first cooling systems for food involved ice. Artificial refrigeration began in the mid-1750s, and developed in the early 1800s. In 1834, the first working vapor-compression refrigeration system, using the same technology seen in air conditioners, was built. The first commercial ice-making machine was invented in 1854. In 1913, refrigerators for home use were invented. In 1923 Frigidaire introduced the first self-contained unit. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s. Home freezers as separate compartments (larger than necessary just for ice cubes) were introduced in 1940. Frozen foods, previously a luxury item, became commonplace. Freezer units are used in households as well as in industry and commerce. Commercial refrigerator and freezer units were in use for almost 40 years prior to the common home models. The freezer-over-refrigerator style had been the basic style since the 1940s, until modern, side-by-side refrigerators broke the trend. A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. Newer refrigerators may include automatic defrosting, chilled water, and ice from a dispenser in the door. Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest are Peltier-type refrigerators designed to chill beverages. A large domestic refrigerator stands as tall as a person and may be about wide with a capacity of . Refrigerators and freezers may be free standing, or built into a kitchen. The refrigerator allows the modern household to keep food fresh for longer than before. Freezers allow people to buy perishable food in bulk and eat it at leisure, and make bulk purchases. History Technology development Ancient origins Ancient Iranians were among the first to invent a form of cooler utilizing the principles of evaporative cooling and radiative cooling called yakhchāls. These complexes used subterranean storage spaces, a large thickly insulated above-ground domed structure, and outfitted with badgirs (wind-catchers) and series of qanats (aqueducts). Pre-electric refrigeration In modern times, before the invention of the modern electric refrigerator, icehouses and iceboxes were used to provide cool storage for most of the year. Placed near freshwater lakes or packed with snow and ice during the winter, they were once very common. Natural means are still used to cool foods today. 
On mountainsides, runoff from melting snow is a convenient way to cool drinks, and during the winter one can keep milk fresh much longer just by keeping it outdoors. The word "refrigeratory" was used at least as early as the 17th century. Artificial refrigeration The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time. In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum. In 1820, the British scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate in Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system. It was a closed-cycle device that could operate continuously. A similar attempt was made in 1842, by American physician, John Gorrie, who built a working prototype, but it was a commercial failure. American engineer Alexander Twining took out a British patent in 1850 for a vapor compression system that used ether. The first practical vapor compression refrigeration system was built by James Harrison, a Scottish Australian. His 1856 patent was for a vapor compression system using ether, alcohol or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapor-compression refrigeration to breweries and meat packing houses, and by 1861, a dozen of his systems were in operation. The first gas absorption refrigeration system (compressor-less and powered by a heat-source) was developed by Edward Toussaint of France in 1859 and patented in 1860. It used gaseous ammonia dissolved in water ("aqua ammonia"). Carl von Linde, an engineering professor at the Technological University Munich in Germany, patented an improved method of liquefying gases in 1876, creating the first reliable and efficient compressed-ammonia refrigerator. His new process made possible the use of gases such as ammonia (NH3), sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants, which were widely used for that purpose until the late 1920s despite safety concerns. In 1895 he discovered the refrigeration cycle. Electric refrigerators In 1894, Hungarian inventor and industrialist István Röck started to manufacture a large industrial ammonia refrigerator which was powered by electric compressors (together with the Esslingen Machine Works). Its electric compressors were manufactured by the Ganz Works. At the 1896 Millennium Exhibition, Röck and the Esslingen Machine Works presented a 6-tonne capacity artificial ice producing plant. In 1906, the first large Hungarian cold store (with a capacity of 3,000 tonnes, the largest in Europe) opened in Tóth Kálmán Street, Budapest, the machine was manufactured by the Ganz Works. Until nationalisation after the Second World War, large-scale industrial refrigerator production in Hungary was in the hands of Röck and Ganz Works. Commercial refrigerator and freezer units, which go by many other names, were in use for almost 40 years prior to the common home models. 
They used gas systems such as ammonia (R-717) or sulfur dioxide (R-764), which occasionally leaked, making them unsafe for home use. Practical household refrigerators were introduced in 1915 and gained wider acceptance in the United States in the 1930s as prices fell and non-toxic, non-flammable synthetic refrigerants such as Freon-12 (R-12) were introduced. However, R-12 proved to be damaging to the ozone layer, causing governments to issue a ban on its use in new refrigerators and air-conditioning systems in 1994. The less harmful replacement for R-12, R-134a (tetrafluoroethane), has been in common use since 1990, but R-12 is still found in many old systems. Refrigeration, continually operated, typically consumes up to 50% of the energy used by a supermarket. Doors, made of glass to allow inspection of contents, improve efficiency significantly over open display cases, which use 1.3 times the energy. Residential refrigerators In 1913, the first electric refrigerators for home and domestic use were invented and produced by Fred W. Wolf of Fort Wayne, Indiana, with models consisting of a unit that was mounted on top of an ice box. His first device, produced over the next few years in several hundred units, was called DOMELRE. In 1914, engineer Nathaniel B. Wales of Detroit, Michigan, introduced an idea for a practical electric refrigeration unit, which later became the basis for the Kelvinator. A self-contained refrigerator, with a compressor on the bottom of the cabinet was invented by Alfred Mellowes in 1916. Mellowes produced this refrigerator commercially but was bought out by William C. Durant in 1918, who started the Frigidaire company to mass-produce refrigerators. In 1918, Kelvinator company introduced the first refrigerator with any type of automatic control. The absorption refrigerator was invented by Baltzar von Platen and Carl Munters from Sweden in 1922, while they were still students at the Royal Institute of Technology in Stockholm. It became a worldwide success and was commercialized by Electrolux. Other pioneers included Charles Tellier, David Boyle, and Raoul Pictet. Carl von Linde was the first to patent and make a practical and compact refrigerator. These home units usually required the installation of the mechanical parts, motor and compressor, in the basement or an adjacent room while the cold box was located in the kitchen. There was a 1922 model that consisted of a wooden cold box, water-cooled compressor, an ice cube tray and a compartment, and cost $714. (A 1922 Model-T Ford cost about $476.) By 1923, Kelvinator held 80 percent of the market for electric refrigerators. Also in 1923 Frigidaire introduced the first self-contained unit. About this same time porcelain-covered metal cabinets began to appear. Ice cube trays were introduced more and more during the 1920s; up to this time freezing was not an auxiliary function of the modern refrigerator. The first refrigerator to see widespread use was the General Electric "Monitor-Top" refrigerator introduced in 1927, so-called, by the public, because of its resemblance to the gun turret on the ironclad warship USS Monitor of the 1860s. The compressor assembly, which emitted a great deal of heat, was placed above the cabinet, and enclosed by a decorative ring. Over a million units were produced. 
As the refrigerating medium, these refrigerators used either sulfur dioxide, which is corrosive to the eyes and may cause loss of vision, painful skin burns and lesions, or methyl formate, which is highly flammable, harmful to the eyes, and toxic if inhaled or ingested. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s and provided a safer, low-toxicity alternative to previously used refrigerants. Separate freezers became common during the 1940s; the term for the unit, popular at the time, was deep freeze. These devices, or appliances, did not go into mass production for use in the home until after World War II. The 1950s and 1960s saw technical advances like automatic defrosting and automatic ice making. More efficient refrigerators were developed in the 1970s and 1980s, even though environmental issues led to the banning of very effective (Freon) refrigerants. Early refrigerator models (from 1916) had a cold compartment for ice cube trays. From the late 1920s fresh vegetables were successfully processed through freezing by the Postum Company (the forerunner of General Foods), which had acquired the technology when it bought the rights to Clarence Birdseye's successful fresh freezing methods. Styles of refrigerators The majority of refrigerators were white in the early 1950s, but between the mid-1950s and the present, manufacturers and designers have added color. Pastel colors, such as pink and turquoise, gained popularity in the late 1950s and early 1960s. Certain versions also had brushed chrome plating, which is akin to a stainless steel appearance. During the latter part of the 1960s and the early 1970s, earth tone colors were popular, including Harvest Gold, Avocado Green and almond. In the 1980s, black became fashionable. In the late 1990s stainless steel came into vogue. Since 1961 the Color Marketing Group has attempted to coordinate the colors of appliances and other consumer goods. Freezer Freezer units are used in households and in industry and commerce. Food stored at or below is safe indefinitely. Most household freezers maintain temperatures from , although some freezer-only units can achieve and lower. Refrigerator freezers generally do not achieve lower than , since the same coolant loop serves both compartments: Lowering the freezer compartment temperature excessively causes difficulties in maintaining above-freezing temperature in the refrigerator compartment. Domestic freezers can be included as a separate compartment in a refrigerator, or can be a separate appliance. Domestic freezers may be either upright, resembling a refrigerator, or chest freezers, wider than tall with the lid or door on top, sacrificing convenience for efficiency and partial immunity to power outages. Many modern upright freezers come with an ice dispenser built into their door. Some upscale models include thermostat displays and controls. Home freezers as separate compartments (larger than necessary just for ice cubes), or as separate units, were introduced in the United States in 1940. Frozen foods, previously a luxury item, became commonplace. In 1955 the domestic deep freezer, which was cold enough to allow the owners to freeze fresh food themselves rather than buying food already frozen with Clarence Birdseye's process, went on sale. Walk-in freezer There are walk in freezers, as the name implies, they allow for one to walk into the freezer. 
Safety regulations requires an emergency releases and employers should check to ensure no one will trapped inside when the unit gets locked as hypothermia is possible if one is in freezer for longer periods of time. Refrigerator technologies Compressor refrigerators A vapor compression cycle is used in most household refrigerators, refrigerator–freezers and freezers. In this cycle, a circulating refrigerant such as R134a enters a compressor as low-pressure vapor at or slightly below the temperature of the refrigerator interior. The vapor is compressed and exits the compressor as high-pressure superheated vapor. The superheated vapor travels under pressure through coils or tubes that make up the condenser; the coils or tubes are passively cooled by exposure to air in the room. The condenser cools the vapor, which liquefies. As the refrigerant leaves the condenser, it is still under pressure but is now only slightly above room temperature. This liquid refrigerant is forced through a metering or throttling device, also known as an expansion valve (essentially a pin-hole sized constriction in the tubing) to an area of much lower pressure. The sudden decrease in pressure results in explosive-like flash evaporation of a portion (typically about half) of the liquid. The latent heat absorbed by this flash evaporation is drawn mostly from adjacent still-liquid refrigerant, a phenomenon known as auto-refrigeration. This cold and partially vaporized refrigerant continues through the coils or tubes of the evaporator unit. A fan blows air from the compartment ("box air") across these coils or tubes and the refrigerant completely vaporizes, drawing further latent heat from the box air. This cooled air is returned to the refrigerator or freezer compartment, and so keeps the box air cold. Note that the cool air in the refrigerator or freezer is still warmer than the refrigerant in the evaporator. Refrigerant leaves the evaporator, now fully vaporized and slightly heated, and returns to the compressor inlet to continue the cycle. Modern domestic refrigerators are extremely reliable because motor and compressor are integrated within a welded container, "sealed unit", with greatly reduced likelihood of leakage or contamination. By comparison, externally-coupled refrigeration compressors, such as those in automobile air conditioning, inevitably leak fluid and lubricant past the shaft seals. This leads to a requirement for periodic recharging and, if ignored, possible compressor failure. Dual compartment designs Refrigerators with two compartments need special design to control the cooling of refrigerator or freezer compartments. Typically, the compressors and condenser coils are mounted at the top of the cabinet, with a single fan to cool them both. This arrangement has a few downsides: each compartment cannot be controlled independently and the more humid refrigerator air is mixed with the dry freezer air. Multiple manufacturers offer dual compressor models. These models have separate freezer and refrigerator compartments that operate independently of each other, sometimes mounted within a single cabinet. Each has its own separate compressor, condenser and evaporator coils, insulation, thermostat, and door. A hybrid between the two designs is using a separate fan for each compartment, the Dual Fan approach. Doing so allows for separate control and airflow on a single compressor system. 
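A common way to summarise the heat-pumping cycle described above is the coefficient of performance (COP), the heat removed per unit of work input. The relation below is the idealised Carnot bound, included as a reference point; the example temperatures are assumptions for illustration, not data for any particular appliance.

```latex
% Idealised (Carnot) coefficient of performance for a cooling cycle such as the
% vapor-compression cycle described above; temperatures are absolute (kelvin).
\[
  \mathrm{COP}_{\mathrm{cooling}} \;=\; \frac{Q_{\mathrm{cold}}}{W_{\mathrm{in}}}
  \;\le\; \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}} - T_{\mathrm{cold}}}
\]
% Illustrative numbers (assumed): an evaporator at 255 K (about -18 C) rejecting
% heat to a 295 K (about 22 C) room gives an upper bound of 255 / (295 - 255) = 6.4;
% practical household machines achieve considerably less.
```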
Absorption refrigerators An absorption refrigerator works differently from a compressor refrigerator, using a source of heat, such as combustion of liquefied petroleum gas, solar thermal energy or an electric heating element. These heat sources are much quieter than the compressor motor in a typical refrigerator. A fan or pump might be the only mechanical moving parts; reliance on convection is considered impractical. Other uses of an absorption refrigerator (or "chiller") include large systems used in office buildings or complexes such as hospitals and universities. These large systems are used to chill a brine solution that is circulated through the building. Peltier effect refrigerators The Peltier effect uses electricity to pump heat directly; refrigerators employing this system are sometimes used for camping, or in situations where noise is not acceptable. They can be totally silent (if a fan for air circulation is not fitted) but are less energy-efficient than other methods. Ultra-low temperature refrigerators "Ultra-cold" or "ultra-low temperature (ULT)" (typically ) freezers, as used for storing biological samples, also generally employ two stages of cooling, but in cascade. The lower temperature stage uses methane, or a similar gas, as a refrigerant, with its condenser kept at around −40°C by a second stage which uses a more conventional refrigerant. For much lower temperatures, laboratories usually purchase liquid nitrogen (), kept in a Dewar flask, into which the samples are suspended. Cryogenic chest freezers can achieve temperatures of down to , and may include a liquid nitrogen backup. Other refrigerators Alternatives to the vapor-compression cycle not in current mass production include: Acoustic cooling Air cycle Magnetic cooling Malone engine Pulse tube Stirling cycle Thermoelectric cooling Thermionic cooling Vortex tube Water cycle systems. Layout Many modern refrigerator/freezers have the freezer on top and the refrigerator on the bottom. Most refrigerator-freezers—except for manual defrost models or cheaper units—use what appears to be two thermostats. Only the refrigerator compartment is properly temperature controlled. When the refrigerator gets too warm, the thermostat starts the cooling process and a fan circulates the air around the freezer. During this time, the refrigerator also gets colder. The freezer control knob only controls the amount of air that flows into the refrigerator via a damper system. Changing the refrigerator temperature will inadvertently change the freezer temperature in the opposite direction. Changing the freezer temperature will have no effect on the refrigerator temperature. The freezer control may also be adjusted to compensate for any refrigerator adjustment. This means the refrigerator may become too warm. However, because only enough air is diverted to the refrigerator compartment, the freezer usually re-acquires the set temperature quickly, unless the door is opened. When a door is opened, either in the refrigerator or the freezer, the fan in some units stops immediately to prevent excessive frost build up on the freezer's evaporator coil, because this coil is cooling two areas. When the freezer reaches temperature, the unit cycles off, no matter what the refrigerator temperature is. Modern computerized refrigerators do not use the damper system. The computer manages fan speed for both compartments, although air is still blown from the freezer. 
Features Newer refrigerators may include: Automatic defrosting A power failure warning that alerts the user by flashing a temperature display. It may display the maximum temperature reached during the power failure, and whether frozen food has defrosted or may contain harmful bacteria. Chilled water and ice from a dispenser in the door. Water and ice dispensing became available in the 1970s. In some refrigerators, the process of making ice is built-in so the user doesn't have to manually use ice trays. Some refrigerators have water chillers and water filtration systems. Cabinet rollers that lets the refrigerator roll out for easier cleaning Adjustable shelves and trays A status indicator that notifies when it is time to change the water filter An in-door ice caddy, which relocates the ice-maker storage to the freezer door and saves approximately of usable freezer space. It is also removable, and helps to prevent ice-maker clogging. A cooling zone in the refrigerator door shelves. Air from the freezer section is diverted to the refrigerator door, to cool milk or juice stored in the door shelf. A drop down door built into the refrigerator main door, giving easy access to frequently used items such as milk, thus saving energy by not having to open the main door. A Fast Freeze function to rapidly cool foods by running the compressor for a predetermined amount of time and thus temporarily lowering the freezer temperature below normal operating levels. It is recommended to use this feature several hours before adding more than 1 kg of unfrozen food to the freezer. For freezers without this feature, lowering the temperature setting to the coldest will have the same effect. Freezer Defrost: Early freezer units accumulated ice crystals around the freezing units. This was a result of humidity introduced into the units when the doors to the freezer were opened condensing on the cold parts, then freezing. This frost buildup required periodic thawing ("defrosting") of the units to maintain their efficiency. Manual Defrost (referred to as Cyclic) units are still available. Advances in automatic defrosting eliminating the thawing task were introduced in the 1950s, but are not universal, due to energy performance and cost. These units used a counter that only defrosted the freezer compartment (Freezer Chest) when a specific number of door openings had been made. The units were just a small timer combined with an electrical heater wire that heated the freezer's walls for a short amount of time to remove all traces of frost/frosting. Also, early units featured freezer compartments located within the larger refrigerator, and accessed by opening the refrigerator door, and then the smaller internal freezer door; units featuring an entirely separate freezer compartment were introduced in the early 1960s, becoming the industry standard by the middle of that decade. These older freezer compartments were the main cooling body of the refrigerator, and only maintained a temperature of around , which is suitable for keeping food for a week. Butter heater: In the early 1950s, the butter conditioner's patent was filed and published by the inventor Nave Alfred E. This feature was supposed to "provide a new and improved food storage receptacle for storing butter or the like which may quickly and easily be removed from the refrigerator cabinet for the purpose of cleaning." 
Because of the high interest in the invention, companies in the UK, New Zealand, and Australia began including the feature in mass-produced refrigerators, and it soon became a symbol of the local culture. However, it was removed from production not long afterwards: according to the manufacturers, this was the only way to meet new environmental regulations, and they considered it inefficient to have a heat-generating device inside a refrigerator. Later advances included automatic ice units and self-compartmentalized freezing units. Types of domestic refrigerators Domestic refrigerators and freezers for food storage are made in a range of sizes. Among the smallest is a Peltier refrigerator advertised as being able to hold 6 cans of beer. A large domestic refrigerator stands as tall as a person and may be about wide with a capacity of . Some models for small households fit under kitchen work surfaces, usually about high. Refrigerators may be combined with freezers, either stacked with refrigerator or freezer above, below, or side by side. A refrigerator without a frozen food storage compartment may have a small section just to make ice cubes. Freezers may have drawers to store food in, or they may have no divisions (chest freezers). Refrigerators and freezers may be free-standing, or built into a kitchen's cabinet. Three distinct classes of refrigerator are common: Compressor refrigerators Compressor refrigerators are by far the most common type; they make a noticeable noise, but are the most efficient and give the greatest cooling effect. Portable compressor refrigerators for recreational vehicle (RV) and camping use are expensive but effective and reliable. Refrigeration units for commercial and industrial applications can be made in various sizes, shapes and styles to fit customer needs. Commercial and industrial refrigerators may have their compressors located away from the cabinet (similar to split-system air conditioners) to reduce noise nuisance and reduce the load on air conditioning in hot weather. Absorption refrigerator Absorption refrigerators may be used in caravans and trailers, and dwellings lacking electricity, such as farms or rural cabins, where they have a long history. They may be powered by any heat source, gas (natural or propane) or kerosene being common. Models made for camping and RV use often have the option of running (inefficiently) on 12 volt battery power. Peltier refrigerators Peltier refrigerators are powered by electricity, usually 12 volt DC, but mains-powered wine coolers are available. Peltier refrigerators are inexpensive but inefficient and become progressively more inefficient with increased cooling effect; much of this inefficiency may be related to the temperature differential across the short distance between the "hot" and "cold" sides of the Peltier cell. Peltier refrigerators generally use heat sinks and fans to lower this differential; the only noise produced comes from the fan. Reversing the polarity of the voltage applied to the Peltier cells results in a heating rather than cooling effect. Other specialized cooling mechanisms may be used, but have not been applied to domestic or commercial refrigerators. Magnetic refrigerator Magnetic refrigerators are refrigerators that work on the magnetocaloric effect. The cooling effect is triggered by placing a metal alloy in a magnetic field. 
Acoustic refrigerators are refrigerators that use resonant linear reciprocating motors/alternators to generate a sound that is converted to heat and cold using compressed helium gas. The heat is discarded and the cold is routed to the refrigerator. Energy efficiency In a house without air-conditioning (space heating and/or cooling), refrigerators consume more energy than any other home device. In the early 1990s a competition was held among the major US manufacturers to encourage energy efficiency. Current US models that are Energy Star qualified use 50% less energy than the average 1974 model used. The most energy-efficient unit made in the US consumes about half a kilowatt-hour per day (equivalent to 20 W continuously). But even ordinary units are reasonably efficient; some smaller units use less than 0.2 kWh per day (equivalent to 8 W continuously). Larger units, especially those with large freezers and icemakers, may use as much as 4 kW·h per day (equivalent to 170 W continuously). The European Union uses a letter-based mandatory energy efficiency rating label, with A being the most efficient, instead of the Energy Star. For US refrigerators, the Consortium for Energy Efficiency (CEE) further differentiates between Energy Star qualified refrigerators. Tier 1 refrigerators are those that are 20% to 24.9% more efficient than the Federal minimum standards set by the National Appliance Energy Conservation Act (NAECA). Tier 2 are those that are 25% to 29.9% more efficient. Tier 3 is the highest qualification, for those refrigerators that are at least 30% more efficient than Federal standards. About 82% of the Energy Star qualified refrigerators are Tier 1, with 13% qualifying as Tier 2, and just 5% at Tier 3. Besides the standard style of compressor refrigeration used in ordinary household refrigerators and freezers, there are technologies such as absorption and magnetic refrigeration. Although these designs generally use much more energy than compressor refrigeration, other qualities such as silent operation or the ability to use gas can favor their use in small enclosures, a mobile environment, or in environments where failure of refrigeration must not be possible. Many refrigerators made in the 1930s and 1940s were far more efficient than most that were made later. This is partly due to features added later, such as auto-defrost, that reduced efficiency. Additionally, after World War II, refrigerator style became more important than efficiency. This was especially true in the US in the 1970s, when side-by-side models (known as American fridge-freezers outside of the US) with ice dispensers and water chillers became popular. The amount of insulation used was also often decreased to reduce refrigerator case size and manufacturing costs. Improvement Over time, standards of refrigerator energy efficiency have been introduced and tightened, which has driven steady improvement; 21st-century refrigerators are typically three times more energy-efficient than those of the 1930s. The efficiency of older refrigerators can be improved by regular defrosting (if the unit is manual defrost) and cleaning, replacing deteriorated door seals with new ones, not setting the thermostat colder than actually required (a refrigerator does not usually need to be colder than ), and replacing insulation, where applicable. Cleaning condenser coils to remove dust impeding heat flow, and ensuring that there is space for air flow around the condenser, can improve efficiency. 
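The continuous-power equivalents quoted above follow from simple arithmetic (daily energy divided by 24 hours). A short Python check, purely illustrative:

def avg_watts(kwh_per_day):
    # Average continuous power implied by a daily energy figure.
    return kwh_per_day * 1000 / 24

for kwh in (0.2, 0.5, 4.0):
    print(f"{kwh} kWh/day is roughly {avg_watts(kwh):.0f} W continuous")
# Prints about 8 W, 21 W and 167 W, matching the approximate equivalents given in the text.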
Auto defrosting Frost-free refrigerators and freezers use electric fans to cool the appropriate compartment. This could be called a "fan-forced" refrigerator, whereas manual-defrost units rely on colder air lying at the bottom and warmer air at the top to achieve adequate cooling. The air is drawn in through an inlet duct and passed through the evaporator, where it is cooled; the air is then circulated throughout the cabinet via a series of ducts and vents. Because the air passing over the evaporator is warm and moist, frost begins to form on the evaporator (especially on a freezer's evaporator). In cheaper and/or older models, the defrost cycle is controlled via a mechanical timer. This timer is set to shut off the compressor and fan and energize a heating element located near or around the evaporator for about 15 to 30 minutes every 6 to 12 hours. This melts any frost or ice build-up and allows the refrigerator to work normally once more. Frost-free units are believed to have a lower tolerance for frost, due to their air-conditioner-like evaporator coils. Therefore, if a door is left open accidentally (especially the freezer), the defrost system may not remove all the frost; in this case, the freezer (or refrigerator) must be defrosted. If the defrosting system melts all the ice before the timed defrosting period ends, then a small device (called a defrost limiter) acts like a thermostat and shuts off the heating element to prevent too large a temperature fluctuation; it also prevents hot blasts of air when the system starts again, should it finish defrosting early. On some early frost-free models, the defrost limiter also sends a signal to the defrost timer to start the compressor and fan as soon as it shuts off the heating element, before the timed defrost cycle ends. When the defrost cycle is completed, the compressor and fan are allowed to cycle back on. Frost-free refrigerators, including some early frost-free refrigerators/freezers that used a cold plate in their refrigerator section instead of airflow from the freezer section, generally do not shut off their refrigerator fans during defrosting. This allows consumers to leave food in the main refrigerator compartment uncovered, and also helps keep vegetables moist. This method also helps reduce energy consumption, because the refrigerator compartment is above freezing point and can pass the warmer-than-freezing air through the evaporator or cold plate to aid the defrosting cycle. Inverter With the advent of digital inverter compressors, energy consumption is reduced even further than with a single-speed induction motor compressor, and the refrigerator thus contributes far less to greenhouse gas emissions. The energy consumption of a refrigerator also depends on the type of refrigeration being done. For instance, inverter refrigerators consume comparatively less energy than typical non-inverter refrigerators. In an inverter refrigerator, the compressor is operated according to demand. For instance, an inverter refrigerator might use less energy during the winter than during the summer, because the compressor works for a shorter time. Further, newer models of inverter compressor refrigerators take into account various external and internal conditions to adjust the compressor speed and thus optimize cooling and energy consumption. 
Most of them use at least four sensors, which help detect variations in external temperature; internal temperature (owing to opening of the refrigerator door or placing new food inside); humidity; and usage patterns. Depending on the sensor inputs, the compressor adjusts its speed. For example, if the door is opened or new food is placed inside, the sensors detect an increase in temperature inside the cabinet and signal the compressor to increase its speed until a pre-determined temperature is attained; after that, the compressor runs at a minimum speed just to maintain the internal temperature. The compressor typically runs between 1200 and 4500 rpm. Inverter compressors not only optimize cooling but are also superior in terms of durability and energy efficiency. A device consumes maximum energy and undergoes maximum wear and tear when it switches itself on. As an inverter compressor never switches itself off and instead runs at varying speed, it minimizes wear and tear and energy usage. LG played a significant role in improving inverter compressors by reducing the friction points in the compressor, introducing linear inverter compressors. Conventionally, domestic refrigerators use a reciprocating drive connected to the piston; in a linear inverter compressor, the piston, which is a permanent magnet, is suspended between two electromagnets. The alternating current changes the magnetic poles of the electromagnets, which results in the push and pull that compresses the refrigerant. LG claims that this helps reduce energy consumption by 32% and noise by 25% compared to their conventional compressors. Form factor The physical design of refrigerators also plays a large part in their energy efficiency. The most efficient is the chest-style freezer, as its top-opening design minimizes convection when opening the doors, reducing the amount of warm moist air entering the freezer. On the other hand, in-door ice dispensers cause more heat leakage, contributing to an increase in energy consumption. Impact Global adoption The gradual global adoption of refrigerators marks a transformative era in food preservation and domestic convenience. Since the refrigerator's introduction in the 20th century, refrigerators have transitioned from luxury items to everyday commodities, altering food storage practices. Refrigerators have significantly affected daily life by providing food safety to people around the world across a wide variety of cultural and socioeconomic backgrounds. The global adoption of refrigerators has also changed how societies handle their food supply. The introduction of the refrigerator in different societies has contributed to monetized, industrialized mass food production systems, which are commonly linked to increased food waste, animal waste, and dangerous chemical waste entering ecosystems. In addition, refrigerators have made food easier to access for many people around the world, though many of the options promoted by commercialization lean towards foods of low nutritional value. After consumer refrigerators became financially viable for production and sale on a large scale, their prevalence around the globe expanded greatly. In the United States, an estimated 99.5% of households have a refrigerator. 
Refrigerator ownership is more common in developed Western countries, but has stayed relatively low in Eastern and developing countries despite its growing popularity. Throughout Eastern Europe and the Middle East, only 80% of the population own refrigerators, and 65% of the population in China is reported to have refrigerators. The distribution of consumer refrigerators is also skewed, with urban areas exhibiting higher ownership rates than rural areas. Supplantation of the ice trade The ice trade was a 19th- and 20th-century industry based on the harvesting, transportation, and sale of natural and artificial ice for the purposes of refrigeration and consumption. The majority of the ice used for trade was harvested in North America and transported globally, with some smaller operations working out of Norway. With the introduction of more affordable large- and home-scale refrigeration around the 1920s, large-scale ice harvesting and transportation were no longer needed, and the ice trade subsequently slowed and shrank to smaller-scale local services or disappeared altogether. Effect on diet and lifestyle The refrigerator allows households to keep food fresh for longer than before. The most notable improvement is for meat and other highly perishable wares, which previously needed to be preserved or otherwise processed for long-term storage and transport. This change in the supply chains of food products led to a marked increase in the quality of food in areas where refrigeration was being used. Additionally, the increased freshness and shelf life of food brought by refrigeration, together with growing global communication, has resulted in an increase in cultural exchange through food products from different regions of the world. There have also been claims that this increase in the quality of food is responsible for an increase in the height of United States citizens around the early 1900s. Refrigeration has also contributed to a decrease in the quality of food in some regions. By allowing, in part, for the phenomenon of globalization in the food sector, refrigeration has made the creation and transportation of ultra-processed foods and convenience foods inexpensive, leading to their prevalence, especially in lower-income regions. These regions of lessened access to higher-quality foods are referred to as food deserts. Freezers allow people to buy food in bulk and eat it at leisure, and bulk purchases may save money. Ice cream, a popular commodity of the 20th century, could previously only be obtained by traveling to where the product was made and eating it on the spot. Now it is a common food item. Ice on demand not only adds to the enjoyment of cold drinks, but is useful for first aid, and for cold packs that can be kept frozen for picnics or in case of emergency. Temperature zones and ratings Residential units The capacity of a refrigerator is measured in either liters or cubic feet. Typically, the volume of a combined refrigerator-freezer is split with 1/3 to 1/4 of the volume allocated to the freezer, although these values are highly variable. Temperature settings for refrigerator and freezer compartments are often given arbitrary numbers by manufacturers (for example, 1 through 9, warmest to coldest), but generally is ideal for the refrigerator compartment and for the freezer. Some refrigerators must be within certain external temperature parameters to run properly. 
This can be an issue when placing units in an unfinished area, such as a garage. Some refrigerators are now divided into four zones to store different types of food: (freezer) (meat zone) (cooling zone) (crisper) European freezers, and refrigerators with a freezer compartment, have a four-star rating system to grade freezers. Although both the three- and four-star ratings specify the same storage times and same minimum temperature of , only a four-star freezer is intended for freezing fresh food, and may include a "fast freeze" function (runs the compressor continually, down to as low as ) to facilitate this. Three (or fewer) stars are used for frozen food compartments that are only suitable for storing frozen food; introducing fresh food into such a compartment is likely to result in unacceptable temperature rises. This difference in categorization is shown in the design of the 4-star logo, where the "standard" three stars are displayed in a box using "positive" colours, denoting the same normal operation as a 3-star freezer, and the fourth star showing the additional fresh food/fast freeze function is prefixed to the box in "negative" colours or with other distinct formatting. Most European refrigerators include a moist cold refrigerator section (which does require (automatic) defrosting at irregular intervals) and a (rarely frost-free) freezer section. Commercial refrigeration temperatures (from warmest to coolest) Refrigerators , and not greater than maximum refrigerator temperature at Freezer, Reach-in Freezer, Walk-in Freezer, Ice Cream Cryogenics Cryocooler: below -153 °C (-243.4 °F) Dilution refrigerator: down to -273.148 °C (-459.6664 °F) Disposal An increasingly important environmental concern is the disposal of old refrigerators—initially because Freon coolant damages the ozone layer—but as older-generation refrigerators wear out, the destruction of CFC-bearing insulation also causes concern. Modern refrigerators usually use a refrigerant called HFC-134a (1,1,1,2-tetrafluoroethane), which does not deplete the ozone layer, unlike Freon. R-134a is becoming much rarer in Europe, where newer refrigerants are being used instead. The main refrigerant now used is R-600a (also known as isobutane), which has a smaller effect on the atmosphere if released. There have been reports of refrigerators exploding when leaking isobutane refrigerant is ignited by a spark. If the coolant leaks into the refrigerator at times when the door is not being opened (such as overnight), the concentration of coolant in the air within the cabinet can build up to form an explosive mixture, which can be ignited either by a spark from the thermostat or when the light comes on as the door is opened; documented cases include serious property damage, injury, and even death from the resulting explosion. Disposal of discarded refrigerators is regulated, often mandating the removal of doors for safety reasons. Children have been asphyxiated while playing with discarded refrigerators, particularly older models with latching doors. Since the 1950s, regulations in many places have banned the use of refrigerator doors that cannot be opened by pushing from inside. Modern units use a magnetic door gasket that holds the door sealed but allows it to be pushed open from the inside. This gasket was invented, developed and manufactured by Max Baermann (1903–1984) of Bergisch Gladbach, Germany. Regarding total life-cycle costs, many governments offer incentives to encourage recycling of old refrigerators. 
One example is the Phoenix refrigerator program launched in Australia. This government incentive picked up old refrigerators, paying their owners for "donating" the refrigerator. The refrigerator was then refurbished, with new door seals, a thorough cleaning, and the removal of items such as the cover that is strapped to the back of many older units. The resulting refrigerators, now over 10% more efficient, were then given to low-income families. The United States also has a program for collecting and replacing older, less-efficient refrigerators and other white goods. These programs seek to replace large appliances that are old and inefficient or faulty by newer, more energy-efficient appliances, to reduce the cost imposed on lower-income families, and reduce pollution caused by the older appliances. Gallery See also Auto-defrost Cold chain Continuous freezers Einstein refrigerator Home automation Ice cream maker Ice famine Smart refrigerator Kimchi refrigerator Home appliance Pot-in-pot refrigerator Refrigerator death Refrigerator magnet Solar-powered refrigerator Star rating Water dispenser Wine cellar References Further reading Rees, Jonathan. Refrigeration Nation: A History of Ice, Appliances, and Enterprise in America (Johns Hopkins University Press; 2013) 256 pages External links Refrigerating apparatus Refrigerating apparatus The History of the Refrigerator and Freezers Refrigerators, Canada Science and Technology Museum 20th-century inventions Articles containing video clips Australian inventions Cooling technology Food preservation Food storage Heat pumps Home appliances Home automation Kitchen
Refrigerator
[ "Physics", "Technology" ]
9,320
[ "Physical systems", "Home automation", "Machines", "Home appliances" ]
1,036,351
https://en.wikipedia.org/wiki/Bible%20citation
A citation from the Bible is usually referenced with the book name, chapter number and verse number. Sometimes, the name of the Bible translation is also included. There are several formats for doing so. Common formats A common format for biblical citations is Book chapter:verses, using a colon to delimit chapter from verse, as in: "In the beginning, God created the heaven and the earth" (Gen. 1:1). Or, stated more formally, Book chapter for a chapter (John 3); Book chapter1–chapter2 for a range of chapters (John 1–3); book chapter:verse for a single verse (John 3:16); book chapter:verse1–verse2 for a range of verses (John 3:16–17); book chapter:verse1,verse2 for multiple disjoint verses (John 6:14, 44). The range delimiter is an en-dash, and there are no spaces on either side of it. This format is the one accepted by the Chicago Manual of Style to cite scriptural standard works. The MLA style is similar, but replaces the colon with a period. Citations in the APA style add the translation of the Bible after the verse. For example, (John 3:16, New International Version). Translation names should not be abbreviated (e.g., write out King James Version instead of using KJV). Subsequent citations do not require the translation unless that changes. In APA 7th edition, the Bible is listed in the references at the end of the document, which has changed since previous versions. Citations in Turabian style requires that when referring to books or chapters, do not italicize or underline them. The book names must also be spelled out. For example, (The beginning of Genesis recounts the creation of our universe.) When referring directly to a particular passage, the abbreviated book name, chapter number, a colon, and verse number must be provided. Additionally, the Bible is not listed in the references at the end of the document and the edition of the Bible is required when citing inside parentheses. For example, (Eph. 2:10 [New International Version]). Punctuation When citations are used in run-in quotations, they should not, according to The Christian Writer's Manual of Style, contain the punctuation either from the quotation itself (such as a terminating exclamation mark or question mark) or from the surrounding prose. The full-stop at the end of the surrounding sentence belongs outside of the parentheses that surround the citation. For example: Take him away! Take him away! Crucify him! (John 19:15). The Christian Writer's Manual of Style also states that a citation that follows a block quotation of text may either be in parentheses flush against the text, or right-aligned following an em-dash on a new line. For example: These things I have spoken to you, so that in Me you may have peace. In the world you have tribulation, but take courage; I have overcome the world. (John 16:33 NASB) These things I have spoken to you, so that in Me you may have peace. In the world you have tribulation, but take courage; I have overcome the world. — John 16:33 NASB Abbreviating book names The names of the books of the Bible can be abbreviated. Most Bibles give preferred abbreviation guides in their tables of contents, or at the front of the book. Abbreviations may be used when the citation is a reference that follows a block quotation of text. Abbreviations should not be used, according to The Christian Writer's Manual of Style, when the citation is in running text. Instead, the full name should be spelled out. 
Hudson observes, however, that for scholarly or reference works that contain a large number of citations in running text, abbreviations may be used simply to reduce the length of the prose, and that a similar exception can be made for cases where a large number of citations are used in parentheses. There are two commonly accepted styles for abbreviating the book names, one used in general books and one used in scholarly works. Electronic editions of Bibles use internal abbreviations. Some of these abbreviation schemes are standardized. These include OSIS and ParaTExt USFM. Roman numerals Roman numerals are often used for the numbered books of the Bible. For example, Paul's First Epistle to the Corinthians may be written as "I Corinthians", using the Roman numeral "I" rather than the Arabic numeral "1". The Christian Writer's Manual of Style, however, recommends using Arabic numerals for numbered books, as in "2 Corinthians" rather than "II Corinthians". Editions The Student Supplement to the SBL Handbook of Style published by the Society of Biblical Literature states that for modern editions of the Bible, publishers information is not required in a citation. One should simply use the standard abbreviation of the version of the Bible (e.g. "KJV" for King James Version, "RSV" for Revised Standard Version, "NIV" for New International Version, and so forth). Multiple citations The Student Supplement to the SBL Handbook of Style recommends that multiple citations be given in the form of a list separated by a semicolon, without a conjunction before the final item in the list. When multiple consecutive citations reference the same book, the name of the book is omitted from the second and subsequent citations. For example: John 1–3; 3:16; 6:14, 44 Citing non-biblical text in Bibles Some Bibles, particularly study bibles, contain additional text that is not the biblical text. This includes footnotes, annotations, and special articles. The Student Supplement to the SBL Handbook of Style recommends that such text be cited in the form of a normal book citation, not as a Bible citation. For example: See also Books of the Bible Christian popular culture Notes References External links Search and read Bible passages at Bible Gateway (various versions) Summary of MLA rules at Purdue University's Online Writing Lab Citing the Bible at Grove City College's Henry Buhl Library A list of abbreviations for the books of the Bible Bible chapters Bible verses Grammar Referencing systems
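As an illustration of the Book chapter:verse format described above, the short Python sketch below parses common citation strings; it is illustrative only, covers only simple cases, and does not implement any of the cited style guides in full.

import re

CITATION = re.compile(
    r"(?P<book>(?:[1-3]\s)?[A-Za-z]+\.?)\s+"                  # book name, optionally numbered or abbreviated
    r"(?P<chapter>\d+)"                                       # chapter number
    r"(?::(?P<verse>\d+)(?:[-\u2013](?P<end_verse>\d+))?)?"   # optional verse or verse range (hyphen or en-dash)
)

for ref in ("John 3:16", "John 3:16\u201317", "2 Corinthians 5:17", "Gen. 1:1", "John 3"):
    match = CITATION.fullmatch(ref)
    print(ref, "->", match.groupdict() if match else "no match")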
Bible citation
[ "Technology" ]
1,316
[ "Referencing systems", "Information systems" ]
1,036,472
https://en.wikipedia.org/wiki/Maunsell%20Forts
The Maunsell Forts are towers built in the Thames and Mersey estuaries during the Second World War to help defend the United Kingdom. They were operated as army and navy forts, and named for their designer, Guy Maunsell. The forts were decommissioned during the late 1950s and later used for other activities including pirate radio broadcasting. One of the forts is managed by the unrecognised Principality of Sealand; boats visit the remaining forts occasionally, and a consortium named Project Redsands is planning to conserve the fort situated at Red Sands. The aesthetic attraction of the Maunsell forts has been considered to be associated with the aesthetics of decay, transience and nostalgia. During the summers of 2007 and 2008 Red Sands Radio, a station commemorating the pirate radio stations of the 1960s, operated from the Red Sands fort on 28-day Restricted Service Licences. The fort was subsequently declared unsafe, and Red Sands Radio has moved its operations ashore to Whitstable. Forts had been built in river mouths and similar locations to defend against ships, such as the Grain Tower Battery at the mouth of the Medway dating from 1855, Plymouth Breakwater Fort, completed 1865, the four Spithead Forts: Horse Sand Fort, No Mans Land and St Helens Forts which were built 1865–1880 and Spitbank Fort, built during the 1880s, the Humber Forts on Bull & Haile Sands, completed in late 1919, and the Nab Tower, intended as part of a World War I anti-submarine defense but only set in place in 1920. Maunsell naval forts The Maunsell naval forts were built in the Thames estuary and operated by the Royal Navy, to deter and report German air raids following the Thames as a landmark, and prevent attempts to lay mines by aircraft in this important shipping channel. There were four naval forts: Rough Sands (HM Fort Roughs) (U1) Sunk Head (U2) Tongue Sands (U3) Knock John (U4) This artificial naval installation is similar in some respects to early "fixed" offshore oil platforms. It consisted of a rectangular reinforced concrete pontoon base with a support superstructure of two tall, diameter hollow reinforced concrete towers, walls roughly thick; overall weight is estimated to have been approximately 4,500 tons. The twin concrete supporting towers were divided into seven floors, four for crew quarters; the remainder provided dining, operational, and storage areas for several generators, and for fresh water tanks and antiaircraft munitions. There was a steel framework at one end supporting a landing jetty and crane which was used to hoist supplies aboard; the wooden landing stage itself became known as a "dolphin". The towers were joined above the eventual waterline by a steel platform deck upon which other structures could be added; this became a gun deck, on which an upper deck and a central tower unit were constructed. QF 3.7 inch anti-aircraft guns were positioned at each end of this main deck, with a further two Bofors 40 mm anti-aircraft guns and the central tower radar installations atop a central living area that contained a galley, medical, and officers quarters. The design of these concrete structures is equal to a military grade bunker, due to the ends of the stilts (under water), that are locked into the ground. Many species of fish live near the forts because the forts create cover. They have provided landmark references for shipping. They were laid down in dry dock and assembled as complete units. 
They were then fitted out—the crews going aboard at the same time for familiarization—before being towed out and sunk onto their sand bank positions in 1942. The naval fort design was the latest of several that Maunsell had devised in response to Admiralty inquiries. Early ideas had considered forts in the English Channel able to combat enemy vessels. During World War II, the Thames estuary Navy forts destroyed one German E-Boat. Rough Sands Fort (U1) Rough Sands fort was built to protect the ports of Felixstowe, Harwich and the town of Ipswich from aerial and sea attack. It is situated on Rough Sands, a sandbar located approximately from the coast of Suffolk and from the coast of Essex. Fort Roughs or the "Rough Towers" was "the first of originally four naval forts designed by G. Maunsell to protect the Thames Estuary". The artificial sea fort was constructed in dry dock at Red Lion Wharf, Gravesend, and was commissioned "H.M. Fort Roughs" on 8 February 1942. After an eventful journey its grounding was supervised by Maunsell at 16:45 on 11 February 1942. With "almost 100 men" having earlier embarked at Tilbury docks, the fort began service immediately. In 1966 Paddy Roy Bates, who operated Radio Essex, and Ronan O'Rahilly, who operated Radio Caroline, landed on Fort Roughs and occupied it. However, after disagreements, Roy Bates seized the tower as his own. O'Rahilly attempted to storm the fort in 1967, but Roy Bates defended the fort with guns and petrol bombs and continued to occupy it. The British Royal Marines were alerted and the British authorities ordered Roy Bates to surrender. He and his son were arrested and charged, but the court dismissed the case as it did not have jurisdiction over international affairs: Roughs Tower lay beyond the territorial waters of Britain. Bates took this as de facto recognition of his country and seven years later issued a constitution, flag, and national anthem, among other things, for the Principality of Sealand (founded on 2 September 1967). Sunk Head Fort (U2) Sunk Head fort was situated approximately from the coast off Essex and was grounded on 1 June 1942. The fort was decommissioned on 14 June 1945 though maintained until 1956 when it was abandoned. Unlike some of the other forts, Sunk Head was clearly well outside territorial waters, and when the Marine, &c., Broadcasting (Offences) Act 1967 came into effect in August 1967 the Government was anxious to ensure that it would not be taken over again by an offshore broadcaster. On 18 August 1967 Sunk Head was boarded by a contingent of the 24th Field Squadron of Royal Engineers from Maidstone from the tug Collie, commanded by Major David Ives. The Fort was weakened by acetylene cutting torches and 3,200 pounds of explosives were set. On 21 August 1967 Sunk Head was blown, leaving 20 feet of the leg stumps remaining. Tongue Sands Fort (U3) Tongue Sands Fort was situated approximately from the coast off Margate, Kent and was grounded on 27 June 1942. On the night of 22/23 January 1945, fifteen German E-boats were seen on radar, with five close by. The S.119 or S.199 operating out of IJmuiden, Holland was just over 4 miles away and came under heavy fire from Tongue Sands Fort's 3.7-inch guns. The German E-Boat's captain was unsure of where the attack was coming from and manoeuvred to avoid being hit, ramming another E-Boat in the process. The captain scuttled his badly damaged vessel. 
The Tongue Sands Fort was decommissioned on 14 February 1945 and reduced to care and maintenance until 1949 when it was abandoned. The fort had settled badly when it was grounded and as a result became unstable. On 5 December 1947 the Fort shook violently and sections began falling into the sea. The caretaker crew sent a distress call and were rescued by HMS Uplifter. Divers later established that the foundations were solid, but in a later storm the fort took on a 15 degree list. During the mid-1960s under-scouring had further distorted the fort: large holes had appeared in east leg, sea water had flooded the lower levels and the platform had become detached with huge gaps between the deck. Tongue Sands Fort finally collapsed into the under-scouring hole during storms on 21/22 February 1996, leaving only a single 18 foot stump of the south leg remaining visible above sea level. Knock John Fort (U4) Knock John fort is situated approximately from the coast off Essex and was grounded on 1 August 1942. It was decommissioned on 14 June 1945 and evacuated on 25 June 1945. The platform was maintained until May 1956 when it was abandoned. In 2009, it was observed that there was a slight distortion of the legs when viewing the tower from west to east. It is thought that underscouring is the cause of this. Maunsell army forts Maunsell also designed forts for anti-aircraft defence. These were larger installations comprising seven connected steel platforms. Four towers arranged in a semicircle ahead of the control centre and accommodation each carried a QF 3.7-inch gun, a tower to the rear of the control centre mounted Bofors 40 mm guns, while the seventh tower, set to one side of the gun towers and further out, was the searchlight tower. Three forts were placed in Liverpool Bay: Queens AA Towers Formby AA Towers Burbo AA Towers and three in the Thames estuary: Nore (U5), Red Sands (U6) Shivering Sands (U7) The Mersey forts were constructed at Bromborough Dock and the Thames forts at Gravesend. Proposals to construct forts off the Humber, Portsmouth & Rosyth, Belfast & Londonderry came to nothing. During World War II, the Thames estuary forts shot down 22 aircraft and about 30 flying bombs; they were decommissioned by the Ministry of Defence during the late 1950s. Nore Fort (U5) Nore fort was the only one built in British territorial waters at the time it was established. Other forts were in international waters until the three-mile limit was extended to . The fort was damaged badly in 1953 during a storm, then later in the year a Norwegian ship, Baalbek, collided with it, destroying two of the towers, killing four civilians and destroying guns, radar equipment and supplies. The ruins were considered a hazard to shipping and dismantled in 1959–1960. Parts of the bases were towed ashore by the Cliffe fort at Alpha wharf near the village of Cliffe, Kent, where they were still visible at low tide. Red Sands Fort (U6) There are seven towers in the Red Sands group at the mouth of the Thames Estuary. The towers had been connected by metal grate walk-ways. In 1959 consideration was given to refloating the Red Sands Fort and bringing the towers ashore but the costs were prohibitive. Radio 390 (1965–1967) was a pirate radio station on Red Sands Fort. During that time, the fort was also used as the setting for the third season finale episode, "Not So Jolly Roger" (first aired on 7 April 1966), of the 1960's UK television series Danger Man (known as Secret Agent in the U.S.) 
starring Patrick McGoohan; that episode concerned a pirate radio station that was a front for spies passing on secrets, and included substantial scenes filmed on location at the fort. During the early 21st century, in response to proposals to demolish the fort, a group named Project Redsands was formed to try to preserve it. It was the only fort that could be visited safely, from a platform in between the legs of one of the towers. The fort was inspected by the structural engineering company Structural Repairs in 2021. They found that six of the towers had severe structural defects, with elements already lost to the sea; the seventh tower had the same defects, with elements liable to fall into the sea imminently. The fort could not be accessed safely in its present condition. Shivering Sands Fort (U7) This group was built near the Thames estuary for anti-aircraft defence and was made up of several towers north of Herne Bay from the nearest land. One of the seven towers collapsed in 1963 when fog caused the ship Ribersborg to stray off course and collide with one of the towers. In 1964, the Port of London Authority placed wind and tide monitoring equipment on the Shivering Sands searchlight tower, which was isolated from the rest of the fort by the demolished tower. This relayed data to the mainland via a radio link. In August and September 2005, artist Stephen Turner spent six weeks living alone in the searchlight tower of the Shivering Sands Fort in what he described as "an artistic exploration of isolation, investigating how one's experience of time changes in isolation, and what creative contemplation means in a 21st-century context". Liverpool Army Forts The Liverpool sea forts were constructed in the same way as the forts in the Thames estuary; they were designed to defend Liverpool and its industrial heartland from aerial attack from the west. Originally 38 towers were intended to be built, but only 21 towers (three forts) were built. The forts were built from October 1941. No fort engaged in enemy action during WWII. Demolition of the structures started during the 1950s, with these forts considered a priority over the Thames estuary forts because they were a hazard to shipping. Demolition was delayed in 1954 when the salvage ship working at Queens Fort was diverted to assist with the urgent demolition of the Nore Fort in the Thames Estuary, which had been damaged in a collision with a Norwegian ship, leaving remains considered hazardous to shipping in the area. Demolition of the three forts was completed in 1955. Illicit radio stations Various forts were re-occupied for pirate radio during the mid-1960s. In 1964, a few months after Radios Caroline and Atlanta began broadcasting, Screaming Lord Sutch installed Radio Sutch in one of the towers at Shivering Sands. Sutch soon became bored with the project and sold the station to Reginald Calvert, who had assisted in establishing it, renamed it Radio City, and expanded operations into all five of the towers that remained connected. Calvert's killing in a dispute concerning the station's ownership (found to be self-defence rather than murder) contributed to the Government passing legislation against the offshore stations in 1967. During the illicit radio era, the Port of London Authority frequently complained that its monitoring radio link was being disrupted by the nearby Radio City transmitter. 
Red Sands was likewise occupied by Radio Invicta, which was renamed KING Radio and then Radio 390, after its wavelength of approximately 390 metres. The station's managing director was ex-spy and thriller writer Ted Allbeury. The size of the Army forts made them ideal antenna platforms, since a large antenna could be based on the central tower and guyed from the surrounding towers. A small group of radio enthusiasts established Radio Tower on Sunk Head Naval fort, but the station had a small budget, had poor coverage and lasted only a few months. Claims by the group that they also intended to provide television service from the fort were never credible. In order to prevent further illicit broadcasting, a team of Royal Engineers laid 2,200 lbs of explosive charges on Sunk Head, commencing on 18 August 1967. At 4:18 PM on 21 August the charges were detonated, destroying the entire superstructure and most of the concrete legs above the waterline. Paddy Roy Bates occupied the Knock John Fort in 1965 and established Radio Essex, later renamed BBMS—Britain's Better Music Station, but was better known for his post-pirate activities. After the termination of BBMS in late 1966 he moved the station's equipment to Roughs Tower, further from the coast, but did not recommence broadcasting. He, or a representative, has lived in Roughs Tower since 1967, self-styling the tower as the Principality of Sealand. Cultural references The 1966 television series Danger Man episode "Not-so-Jolly Roger" was filmed partly at Redsands Army Sea Fort and includes an acknowledgement to Radio 390 in its closing credits. Redsands Fort was also used for the 1968 Doctor Who serial Fury from the Deep, in which the complex stood in for a North Sea gas refinery besieged by an intelligent seaweed creature. In the 2020 film Artemis Fowl, the Redsands towers, seen from the air, appear as the exterior of a secret MI6 interrogation centre. In the 2013 movie The Hunger Games: Catching Fire, as the characters travel in a train along the coast, two Sea Forts can be seen in the water. The Red Sands Fort and Radio City feature in the Glam Rock band, Slade's movie, Slade in Flame. The newly formed band, Flame, are interviewed by the pirate radio station, just as an attack is begun on the forts. The Shivering Sands Forts, filmed from a North Sea ferry, appeared in the 1984 music video for the song "A Sort of Homecoming", by the Irish popular music band U2. The 2002 video game Reign of Fire features the forts during the dragon campaign, where remnants of British Armed Forces make a last stand during a dragon apocalypse. The 2015 video game Stranded Deep includes abandoned Sea Forts that have the appearance of Maunsell Army Forts. These are difficult-to-find Easter Eggs built into the game for players to explore. The setting of the 2023 science fiction film Last Sentinel is based on a structure modelled after a single tower of the Maunsell army forts. The Red Sands Forts are seen in Episode 1 of Whitstable Pearl, mentioned as a drop-off and pick-up point for illicit drugs, as part of the story. See also Admiralty M-N Scheme Sea fort Palmerston Forts – including several Sea Forts built on pre-existing islands or rocky islets, as well as some built directly upon the seabed Humber Forts Texas Towers References Further reading (3 volumes, ; ; ) Kauffmann, J.E. and Jurga, Robert M. Fortress Europe: European Fortifications of World War II, Da Capo Press, 2002. 
External links Maunsell Sea Forts information Guide to Sea Forts from HerneBayOnline Project Redsand from project-redsand.com Red Sands Radio official website Maunsell Towers from undergroundkent.co.uk Map of the Forts from BenvenutiaSealand.it (English version via Google Translate) 20th-century forts in England Artificial islands of England British World War II defensive lines History of the Royal Navy History of the North Sea Geography of the River Thames North Sea offshore buildings and structures Pirate radio River Mersey Sea forts Thames Estuary Towers World War II sites in England 1942 establishments in England
Maunsell Forts
[ "Engineering" ]
3,699
[ "Structural engineering", "Towers" ]
1,036,490
https://en.wikipedia.org/wiki/Print%20Screen
Print Screen (often abbreviated Print Scrn, Prnt Scrn, Prnt Scr, Prt Scrn, Prt Scn, Prt Scr, Prt Sc, Pr Sc, or PS) is a key present on most PC keyboards. It is typically situated in the same section as the break key and scroll lock key. The print screen may share the same key as system request. Original use Under command-line based operating systems such as MS-DOS, this key causes the contents of the current text mode screen memory buffer to be copied to the standard printer port, usually LPT1. In essence, whatever is currently on the screen when the key is pressed will be printed. Pressing the key in combination with turns on and off the "printer echo" feature. When echo is in effect, any conventional text output to the screen will be copied ("echoed") to the printer. There is also a Unicode character for print screen, . Modern use Newer-generation operating systems using a graphical interface tend to save a bitmap image of the current screen, or screenshot, to their clipboard or comparable storage area. Some shells allow modification of the exact behavior using modifier keys such as the control key. In Microsoft Windows, pressing will capture the entire screen, while pressing the key in combination with will capture the currently selected window. The captured image can then be pasted into an editing program such as a graphics program or even a word processor. Pressing with both the left key and left pressed turns on a high contrast mode (this keyboard shortcut can be turned off by the user). Since Windows 8, pressing the key in combination with (and optionally in addition to the key) will save the captured image to disk (the default pictures location). This behavior is therefore backward compatible with users who learned Print Screen actions under operating systems such as MS-DOS. In Windows 10, the key can be configured to open the 'New' function of the Snip & Sketch tool. This allows the user to take a full screen, specific window, or defined area screenshot and copy it to clipboard. This behaviour can be enabled by going to Snip & Sketch, accessing Settings via the menu and enabling the 'Use the PrtScn button to open screen snipping'. In KDE and GNOME, very similar shortcuts are available, which open a screenshot tool (Spectacle or GNOME Screenshot respectively), giving options to save the screenshot, plus more options like manually picking a specific window, screen area, using a timeout, etc. Sending the image to many services (KDE), or even screen recording (GNOME), is built-in too. Macintosh does not use a print screen key; instead, key combinations are used that start with . These key combinations are used to provide more functionality including the ability to select screen objects. captures the whole screen, while allows for part of the screen to be selected. The standard print screen functions described above save the image to the desktop. However, using any of the key sequences described above, but additionally pressing the will modify the behavior to copy the image to the system clipboard instead. Notable keyboards On the IBM Model F keyboard, the key is labeled PrtSc and is located under . On the IBM Model M, it is located next to and is labeled Print Screen. References Computer keys Computing terminology
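For comparison with the built-in shortcuts above, both the screen and the clipboard image left by Print Screen can also be read programmatically. The sketch below uses the third-party Pillow library (not part of Windows itself), its behaviour is platform-dependent, and the file names are arbitrary.

from PIL import ImageGrab

screenshot = ImageGrab.grab()        # capture the entire screen directly
screenshot.save("screen.png")

clip = ImageGrab.grabclipboard()     # image placed on the clipboard, e.g. by pressing Print Screen
if clip is not None and hasattr(clip, "save"):   # may also return a list of file names, or None
    clip.save("clipboard.png")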
Print Screen
[ "Technology" ]
695
[ "Computing terminology" ]
1,036,508
https://en.wikipedia.org/wiki/The%20System%20of%20Nature
The System of Nature or, the Laws of the Moral and Physical World (French: ) is a 1770 work of philosophy by Paul-Henri Thiry, Baron d'Holbach. Overview The work was originally published under the name of Jean-Baptiste de Mirabaud, a deceased member of the French Academy of Science. D'Holbach wrote and published this book – possibly with the assistance of Denis Diderot but with the support of Jacques-André Naigeon – anonymously in 1770, describing the universe in terms of the principles of philosophical materialism: the mind is identified with the brain, there is no "soul" without a living body, the world is governed by strict deterministic laws, free will is an illusion, there are no final causes, and whatever happens takes place because it inexorably must. The work explicitly denies the existence of God, arguing that belief in a higher being is the product of fear, lack of understanding, and anthropomorphism. Though not a scientist himself, d'Holbach was scientifically literate and he tried to develop his philosophy in accordance with the known facts of nature and the scientific knowledge of the day, citing, for example, the experiments of John Needham as proof that life could develop autonomously without the intervention of a deity. It makes a critical distinction between mythology, as a more or less benign way of bringing law-ordered thought about society, nature and their powers to the masses, and theology. Theology, which, when it separates from mythology, raises the power of nature above nature itself and thus alienates the two (i.e. "nature", all that actually exists, from its power, now personified in a being outside nature), is by contrast a pernicious force in human affairs without parallel. Its principles are summed up in a more popular form in d'Holbach's . Criticism The book was considered extremely radical in its day, and the list of people writing refutations of the work was long. The Catholic theologian Nicolas-Sylvestre Bergier wrote a refutation titled Examen du matérialisme ("Materialism examined"). Voltaire, too, seized his pen to refute the philosophy of the in the article "Dieu" in his Dictionnaire philosophique, while Frederick the Great also drew up an answer to it. Commenting on the book, Frederick observed: It is speculated that Frederick was motivated to write a criticism of the System of Nature because the book contained an attack not just on religion, but also on monarchy. Appreciation and influence D'Holbach's friend Denis Diderot had enthusiastically endorsed the book: Percy Bysshe Shelley became an ardent atheist after reading The System of Nature, and proceeded to translate the book into English. According to Will Durant, the System of Nature contains the most comprehensive description of materialism and atheism in the entire history of philosophy. In his student days, Goethe had recoiled with revulsion at the contents of the book: "It appeared to us so grey, so Cimmerian, so corpse-like that we had difficulty in enduring its presence and shuddered before it as before a spectre"; in his old age he harbored similar views: "We belong to the laws of nature, even when we rebel against them." According to Voltaire, the book was very popular among the populace, including "scholars, the ignorant, and women". 
The System of Nature
[ "Physics" ]
755
[ "Materialism", "Matter" ]
1,036,810
https://en.wikipedia.org/wiki/Glossary%20of%20nautical%20terms%20%28A%E2%80%93L%29
This glossary of nautical terms is an alphabetical listing of terms and expressions connected with ships, shipping, seamanship and navigation on water (mostly though not necessarily on the sea). Some remain current, while many date from the 17th to 19th centuries. The word nautical derives from the Latin nauticus, from Greek nautikos, from nautēs: "sailor", from naus: "ship". Further information on nautical terminology may also be found at Nautical metaphors in English, and additional military terms are listed in the Multiservice tactical brevity code article. Terms used in other fields associated with bodies of water can be found at Glossary of fishery terms, Glossary of underwater diving terminology, Glossary of rowing terms, and Glossary of meteorology. A B C D E F G H I J K L See also Articles that link to this glossary List of ship directions References Sources (1848 edition) Further reading Nautical Shipbuilding Water transport Wikipedia glossaries using description lists
Glossary of nautical terms (A–L)
[ "Engineering" ]
205
[ "Naval architecture", "Shipbuilding", "Marine engineering" ]
1,037,139
https://en.wikipedia.org/wiki/Kingda%20Ka
Kingda Ka is a retired hydraulically launched steel roller coaster located at Six Flags Great Adventure in Jackson, New Jersey, United States. Manufactured by Intamin and designed by Werner Stengel, Kingda Ka opened as the in the world on May 21, 2005, surpassing Top Thrill Dragster. It was the second strata coaster ever built, exceeding in height. Both were made with similar designs, although Kingda Ka's layout added an airtime hill on the return portion of the track. The ride featured a hydraulic launch mechanism which accelerated the train to in 3.5 seconds. Its top hat tower element stands at , which cemented Kingda Ka as the tallest roller coaster in the world. It would retain this record for its entire operating lifetime, although its speed record was broken in 2010 by Formula Rossa at Ferrari World in Abu Dhabi, United Arab Emirates. On November 14, 2024, following rumors and speculation regarding the future of the attraction, Six Flags Great Adventure announced that Kingda Ka had permanently closed. , the ride has not yet been modified, though the park has applied for the work permit necessary to do so. History On September 29, 2004, it was announced that Kingda Ka would be added to the Six Flags Great Adventure amusement park in 2005. This announcement occurred at an event held for roller coaster enthusiasts and the media. The event revealed the park's goal to build "the tallest and fastest roller coaster on earth", reaching and accelerating up to in 3.5 seconds. The ride would be part of the Golden Kingdom, an themed area being developed at Six Flags Great Adventure. Six Flags CEO Kieran Burke said: "This is the first step in a process of really transforming Six Flags Great Adventure from the largest regional theme park in the world to a true regional destination." Intamin subcontracted Stakotra to assist with construction. On January 13, 2005, workers completed Kingda Ka's tower with a topping out ceremony. For the ceremony, one 50-story crane was used to hoist two workers to the top of the ride; another crane lifted a steel beam, with an American flag, to the ride's pinnacle. The ride was still under construction when the park opened for the season in March 2005. The attraction was originally scheduled to open on April 23, 2005, but its opening was delayed to May 21, as the park stated that more time was needed to complete testing. A media event was held two days prior on May 19, 2005. Upon its opening, Kingda Ka became the tallest and fastest roller coaster in the world, taking both world records from Top Thrill Dragster at Cedar Point. Intamin designed both Kingda Ka and Top Thrill Dragster, and the two share a similar design and layout that differs primarily by the theme and the additional hill featured on Kingda Ka. Both rides were built by Stakotra and installed by Martin & Vleminckx. Though Kingda Ka was popular among both the general public and roller coaster enthusiasts, its use of relatively new technology meant that Six Flags Great Adventure had to hire a dedicated maintenance team for the ride. Because of maintenance issues, the ride was closed for almost two months during its first season, and it was closed for an additional three weeks at the beginning of the 2006 season. Kingda Ka continued to be the world's fastest coaster until Formula Rossa at Ferrari World opened in November 2010. On August 29, 2013, Six Flags Great Adventure officially announced Zumanjaro: Drop of Doom for the 2014 season. The new attraction was attached to the Kingda Ka coaster. 
The drop tower features three gondolas integrated into the existing structure, which was also built by Intamin. Kingda Ka closed at the start of the 2014 season so that Zumanjaro: Drop of Doom could be constructed onto its tower. Kingda Ka reopened on weekends beginning Memorial Day weekend and fully reopened when Zumanjaro: Drop of Doom was completed on July 4, 2014. In late 2024, rumors began circulating that Kingda Ka was slated to be closed permanently following the 2024 season, though nothing was confirmed by the park. On November 14, 2024, Six Flags Great Adventure confirmed that the ride had permanently closed. Kingda Ka is to be removed to make way for a new "multi-record breaking launched roller coaster" with an anticipated opening in 2026. Along with Kingda Ka, the park would also close Zumanjaro: Drop of Doom, Green Lantern, the Parachute Drop ride, and Twister (a HUSS Top Spin flat ride), to make room for the new attraction. On December 18, 2024, about a month after the official closure announcement, the park applied to the local government for a work permit; the comment on the permit states "[demolition] of Kingda Ka / Zumanjaro ride." Later that month, the park sent out a project bid notice for "demolition and controlled implosion" of the ride. Ride experience Queue Kingda Ka originally featured a detailed and elaborate queue line that ran between the launch and brakes of the coaster. Guests would enter the ride, then walk down a narrow pathway where they would eventually cross under the launch track. A themed tunnel was built where guests crossed under the launch to ensure safety. Guests would then enter a series of three switchbacks, with the third being underneath a permanent structure. This structure featured poles with detailed carvings of animals to help immerse guests into the Golden Kingdom. Following this final series of switchbacks, guests would approach the station, where the line would divide in two to equally fill both sides of Kingda Ka's station. This queue was designed to handle the large crowds the park was anticipating Kingda Ka would draw. After an incident in the ride's opening year that occurred right where guests crossed under the launch, the decision was made not to use this queue, to ensure guest safety. From that point forward, the overflow queue was used as the permanent queue, and it remained in use until the ride's closure. Parts of the original queue are still visible from Kingda Ka's station. Guests pass under the jungle-themed entrance sign and enter the queue line, which is surrounded by bamboo that augments the jungle-themed music playing in the background. Along the way, there are safety and warning signs about the ride. Following a long straight section, guests turn left and head into a switchback section. Guests walk through some curved paths before entering the station. Layout After the train has been locked and checked, it moves slowly out of the station to the launch area, then passes through a switch track which allows four trains on two tracks to load simultaneously. When the signal is given to launch, the train rolls back slightly so that the catch car can latch on to the middle car, and the brakes retract on the launch track. As the brake fins are retracting, a recording announces: "Arms down, head back, hold on!" The train is launched approximately five seconds later. When the train is in position, the hydraulic launch mechanism accelerates it from 0 to 128 miles per hour (206 km/h) in 3.5 seconds. The hydraulic launch motor is capable of producing 20,800 peak horsepower (15.5 MW).
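The launch figures above lend themselves to a quick back-of-the-envelope check. The short Python sketch below treats the launch as a constant acceleration from 0 to 206 km/h over 3.5 seconds, which is a simplification (the real hydraulic launch profile is not perfectly uniform), and the train mass of roughly 10 tonnes is an assumed illustrative value rather than a figure from this article.

# Rough kinematics of the Kingda Ka launch, assuming constant acceleration.
v = 206 / 3.6                    # top speed in m/s (206 km/h)
t = 3.5                          # launch duration in seconds
a = v / t                        # average acceleration, about 16.3 m/s^2
g_force = a / 9.81               # about 1.7 g along the track during the launch
launch_length = 0.5 * a * t**2   # about 100 m of launch track under this assumption

train_mass = 10_000              # kg, assumed illustrative value
avg_power = 0.5 * train_mass * v**2 / t   # about 4.7 MW average, well below the 15.5 MW peak
print(a, g_force, launch_length, avg_power)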
At the end of the launch track, the train climbs the main tower (top hat) and rolls 90 degrees to the right before reaching a height of 456 feet (139 m). It then descends straight down through a 270-degree, clockwise spiral. It climbs the second hill of 129 feet (39 m), producing a moment of weightlessness before being smoothly brought to a stop by the magnetic brakes; it then makes a left-hand U-turn and enters the station. The ride lasts 28 seconds from the start of the launch. The track measures about 3,118 feet (950 m) long. Trains and station Kingda Ka's four trains are color-coded for easy identification (green, dark blue, teal, and orange) and are numbered; the four colors are also used for the seats and restraints. Each train seats 18 people (two per row). The rear car has one row, while the rest have two. The rear row of each car is positioned higher than its front row for better visibility. Kingda Ka's over-the-shoulder restraint system consists of a thick, rigid lap bar and two flexible over-the-shoulder restraints. Kingda Ka's station has two parallel tracks with switch tracks at the entrance and exit. Each of the station's tracks is designed to accommodate two trains, so each of the four trains can be operated from its own station. Because all of Kingda Ka's trains are mechanically identical and able to load and unload at each of the four individual station bays, the original plan was for all trains to operate at the same time, and for each train to load and unload at its own station. Trains on one side would be loaded while trains on the other side would be launched, with an employee directing riders in line to a particular side, where they could then choose to sit anywhere within the train. Theme Kingda Ka is located in the jungle-themed area of the park known as The Golden Kingdom. The ride portrays a mythical Bengal tiger, named after one that was housed in the nearby Temple of the Tiger attraction, an interactive exhibit that was closed in 2010. Rollbacks A train may occasionally experience a rollback following a launch. A rollback occurs when the train fails to make it over the top of the tower and descends back down the side from which it was launched. Kingda Ka includes retractable magnetic brakes on its launch track to prevent a train from rolling back all the way into the loading station (and potentially colliding with the next about-to-be-launched train). Incidents On June 8, 2005, a bolt failed inside a trough through which the launch cable travels. This caused the liner to come loose, creating friction on the cable and preventing the train from accelerating to the correct speed. The cable rubbing against the trough caused sparks and shards of metal to fly out from the bottom of the train. The ride was closed for almost two months following the incident. The launch cable was frayed and required replacement, and there was also minor damage to seals and brake fins. The incident caused stress on a number of fins, and Six Flags did not have enough replacement fins. Extra brake fins were ordered, and the ride had to undergo thorough testing following the repair. Kingda Ka reopened on August 4. Kingda Ka was struck by lightning in May 2009 and suffered serious damage. The ride was closed for three months for repairs and reopened on August 21, 2009. On August 27, 2011, Kingda Ka suffered unspecified damage shortly before Hurricane Irene, and Six Flags Great Adventure did not open.
It is unknown whether additional damage occurred due to the storm, but the coaster was damaged to the extent that it could not run before Irene. Kingda Ka remained closed until the start of the 2012 operating season on April 5. Shortly before 5:00 p.m. on July 26, 2012, a young boy was sent to the hospital after suffering minor injuries from being struck by a bird during normal operation. The ride resumed normal operation shortly after the incident. In 2019, a guest sued Six Flags and Intamin in U.S. federal court, claiming that tall riders could be subjected to "extreme speed and torqueing forces" and that the harnesses could also cause injuries. According to the guest, he had suffered multiple back injuries after riding Kingda Ka in 2017. The guest's height was three inches below the ride's posted maximum rider height. Both Six Flags and Intamin filed a motion to dismiss the lawsuit, which was partially granted and partially denied in 2020. Awards Records Notes References External links Official Kingda Ka page Kingda Ka Preview Article Roller coasters operated by Six Flags Roller coasters in New Jersey Roller coasters introduced in 2005 Six Flags Great Adventure 2005 establishments in New Jersey
Kingda Ka
[ "Engineering" ]
2,426
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,037,163
https://en.wikipedia.org/wiki/Whispering%20gallery
A whispering gallery is usually a circular, hemispherical, elliptical or ellipsoidal enclosure, often beneath a dome or a vault, in which whispers can be heard clearly in other parts of the gallery. Such galleries can also be set up using two parabolic dishes. Sometimes the phenomenon is detected in caves. Theory A whispering gallery is most simply constructed in the form of a circular wall, and allows whispered communication from any part of the internal side of the circumference to any other part. The sound is carried by waves, known as whispering-gallery waves, that travel around the circumference clinging to the walls, an effect that was discovered in the whispering gallery of St Paul's Cathedral in London. The extent to which the sound travels at St Paul's can also be judged by clapping in the gallery, which produces four echoes. Other historical examples are the Gol Gumbaz mausoleum in Bijapur, India and the Echo Wall of the Temple of Heaven in Beijing. A hemispherical enclosure will also guide whispering gallery waves. The waves carry the words so that others will be able to hear them from the opposite side of the gallery. The gallery may also be in the form of an ellipse or ellipsoid, with an accessible point at each focus. In this case, when a visitor stands at one focus and whispers, the line of sound emanating from this focus reflects directly to the focus at the other end of the gallery, where the whispers may be heard. In a similar way, two large concave parabolic dishes, serving as acoustic mirrors, may be erected facing each other in a room or outdoors to serve as a whispering gallery, a common feature of science museums. Egg-shaped galleries, such as the Golghar Granary at Bankipore, and irregularly shaped smooth-walled galleries in the form of caves, such as the Ear of Dionysius in Syracuse, also exist. Examples India The Gol Gumbaz in Bijapur, India. The Golghar Granary in Bankipore, India. The Victoria Memorial in Kolkata. United Kingdom St Paul's Cathedral in London is the place where whispering-gallery waves were first discovered by Lord Rayleigh . Gloucester Cathedral has a whispering gallery. The Berkeley Wetherspoons Bristol has a whispering gallery. United States Grand Central Terminal in New York City: a landing amid the Oyster Bar ramps, in front of the Oyster Bar restaurant Statuary Hall in the United States Capitol. Salt Lake Tabernacle in Salt Lake City, Utah Centennial fountain in front of Green Library at Stanford University in California Gates Circle, Buffalo, New York The Whispering Arch in St. Louis Union Station Charles Stover Bench, Central Park, New York, New York Waldo Hutchins Bench, Central Park, New York, New York Other parts of the world The Echo Wall in the Temple of Heaven in Beijing. Basilica of St. John Lateran, Rome. The Salle des Caryatides in the Louvre, Paris, France. Ear of Dionysius cave in Syracuse, Sicily. Banco dos Namorados (Lovers' bench) in Santiago de Compostela, Spain. In science The term whispering gallery has been borrowed in the physical sciences to describe other forms of whispering-gallery waves such as light or matter waves. See also Acoustic mirror Parabolic loudspeaker Room acoustics References External links Ear of Dionysius: visiting information, videos and sounds of this cave. Grand Central Station: visiting information, videos and sounds of the whispering gallery. St Paul's Cathedral: visiting information, videos and sounds of the whispering gallery. Acoustics Rooms
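The focusing behaviour of an elliptical gallery described above follows from the defining property of an ellipse: every point on the wall lies at the same total distance from the two foci, so a sound ray leaving one focus and reflecting off the wall always travels the same path length to the other focus, and the reflections arrive together. The following Python sketch verifies this numerically for an arbitrary illustrative ellipse; the 40 m by 25 m dimensions are invented for the example and do not correspond to any gallery mentioned here.

import math

a, b = 20.0, 12.5                  # semi-axes of an illustrative 40 m x 25 m elliptical room
c = math.sqrt(a**2 - b**2)         # distance from the centre to each focus
f1, f2 = (-c, 0.0), (c, 0.0)       # the two "whispering" points

for k in range(8):                 # sample points around the wall
    theta = 2 * math.pi * k / 8
    p = (a * math.cos(theta), b * math.sin(theta))
    d1 = math.dist(p, f1)          # focus 1 -> wall
    d2 = math.dist(p, f2)          # wall -> focus 2
    print(round(d1 + d2, 6))       # always 2*a = 40.0, so every reflected path has equal length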
Whispering gallery
[ "Physics", "Engineering" ]
746
[ "Rooms", "Classical mechanics", "Acoustics", "Architecture" ]
1,037,188
https://en.wikipedia.org/wiki/Persian%20red
Persian red is a deep reddish orange earth or pigment from the Persian Gulf composed of a silicate of iron and alumina, with magnesia. It is also called artificial vermillion. The first recorded use of Persian red as a color name in English was in 1895. Other colors associated with Persia include Persian pink, Persian rose, Persian orange, Persian blue and Persian green. In human culture Architecture Henry Hobson Richardson insisted upon a ground of Persian red for the murals John LaFarge executed lining the interior of Trinity Church, Boston. See also List of inorganic pigments References Persian red Inorganic pigments
Persian red
[ "Chemistry" ]
126
[ "Inorganic pigments", "Inorganic compounds" ]
1,037,197
https://en.wikipedia.org/wiki/Persian%20powder
The term Persian powder can also refer to a type of dry snow in the Zagros Mountains. Persian powder is an insecticide powder with natural pyrethrin as the active agent. It is also known as Persian pellitory, insect powder and internationally as pyrethrum. Biological pest control Persian powder is a green pesticide that has been used for centuries for the biological pest extermination of household insects, garden pests, and agricultural pests. It may first have been exported from Persia to Ancient Rome. Pyrethrin and pyrethroids are used indoors, in gardens and the horticulture industry, and in agriculture. It is produced from the powdered flowers of certain species of pyrethrum, plants in the genera Chrysanthemum and Tanacetum. In more recent times it has had formulations with brand names such as Zacherlin. Synthetic forms Pyrethroids are synthetic insecticides based on natural pyrethrum (pyrethrins), such as permethrin. A common formulation of pyrethrin is in preparations containing the synthetic chemical piperonyl butoxide: this has the effect of enhancing the toxicity to insects and speeding the effects when compared with pyrethrins used alone. These formulations are known as synergized pyrethrins. See also In the novel Anna Karenina, the character Kitty used Persian powder to sanitize beds in an unclean hotel. List of pest-repelling plants Organic gardening Organic farming References Pyrethroids Plant toxin insecticides Biopesticides Powders
Persian powder
[ "Physics", "Chemistry" ]
370
[ "Plant toxin insecticides", "Chemical ecology", "Materials", "Powders", "Matter" ]
1,037,221
https://en.wikipedia.org/wiki/Fructooligosaccharide
Fructooligosaccharides (FOS) also sometimes called oligofructose or oligofructan, are oligosaccharide fructans, used as an alternative sweetener. FOS exhibits sweetness levels between 30 and 50 percent of sugar in commercially prepared syrups. It occurs naturally, and its commercial use emerged in the 1980s in response to demand for healthier and calorie-reduced foods. Chemistry Two different classes of fructooligosaccharide (FOS) mixtures are produced commercially, based on inulin degradation or transfructosylation processes. FOS can be produced by degradation of inulin, or polyfructose, a polymer of D-fructose residues linked by β(2→1) bonds with a terminal α(1→2) linked D-glucose. The degree of polymerization of inulin ranges from 10 to 60. Inulin can be degraded enzymatically or chemically to a mixture of oligosaccharides with the general structure Glu–Frun (abbrev. GFn) and Frum (Fm), with n and m ranging from 1 to 7. This process also occurs to some extent in nature, and these oligosaccharides may be found in a large number of plants, especially in Jerusalem artichoke, chicory and the blue agave plant. The main components of commercial products are kestose (GF2), nystose (GF3), fructosylnystose (GF4), bifurcose (GF3), inulobiose (F2), inulotriose (F3), and inulotetraose (F4). The second class of FOS is prepared by the transfructosylation action of a β-fructosidase of Aspergillus niger or Aspergillus on sucrose. The resulting mixture has the general formula of GFn, with n ranging from 1 to 5. Contrary to the inulin-derived FOS, not only is there β(1→2) binding but other linkages do occur, however, in limited numbers. Because of the configuration of their glycosidic bonds, fructooligosaccharides resist hydrolysis by salivary and intestinal digestive enzymes. In the colon they are fermented by anaerobic bacteria. In other words, they have a lower caloric value, while contributing to the dietary fiber fraction of the diet. Fructooligosaccharides are more soluble than inulins and are, therefore, sometimes used as an additive to yogurt and other (dairy) products. Fructooligosaccharides are used specially in combination with high-intensity artificial sweeteners, whose sweetness profile and aftertaste it improves. Food sources FOS is extracted from the blue agave plant as well as fruits and vegetables such as bananas, onions, chicory root, garlic, asparagus, jícama, and leeks. Some grains and cereals, such as wheat and barley, also contain FOS. The Jerusalem artichoke and its relative yacón together with the blue agave plant have been found to have the highest concentrations of FOS of cultured plants. Health benefits FOS has been a popular sweetener in Japan and Korea for many years, even before 1990, when the Japanese government installed a "Functionalized Food Study Committee" of 22 experts to start to regulate "special nutrition foods or functional foods" that contain the categories of fortified foods (e.g., vitamin-fortified wheat flour), and is now becoming increasingly popular in Western cultures for its prebiotic effects. FOS serves as a substrate for microflora in the large intestine, increasing the overall gastrointestinal tract health. It has also been proposed as a supplement for treating yeast infections. Several studies have found that FOS and inulin promote calcium absorption in both the animal and the human gut. The intestinal microflora in the lower gut can ferment FOS, which results in a reduced pH. 
Calcium is more soluble in acid, and, therefore, more of it comes out of food and is available to move from the gut into the bloodstream. In a randomized controlled trial involving 36 twin pairs aged 60 and above, participants were given either a prebiotic (3.375 mg inulin and 3.488 mg FOS) or a placebo daily for 12 weeks along with resistance exercise and branched-chain amino acid (BCAA) supplementation. The trial, conducted remotely, showed that the prebiotic supplement led to changes in the gut microbiome, specifically increasing Bifidobacterium abundance. While there was no significant difference in chair rise time between the prebiotic and placebo groups, the prebiotic did improve cognition. The study suggests that simple gut microbiome interventions could enhance cognitive function in the elderly. FOS can be considered a small dietary fibre with (like all types of fibre) low caloric value. The fermentation of FOS results in the production of gases and short chain fatty acids. The latter provide some energy to the body. Side-effects All inulin-type prebiotics, including FOS, are generally thought to stimulate the growth of Bifidobacteria species. Bifidobacteria are considered beneficial bacteria. This effect has not been uniformly found in all studies, either for bifidobacteria or for other gut organisms. FOS are also fermented by numerous bacterial species in the intestine, including Klebsiella, E. coli and many Clostridium species, which can be pathogenic in the gut. These species are responsible mainly for the gas formation (hydrogen and carbon dioxide), which results after ingestion of FOS. Studies have shown that up to 20 grams/day is well tolerated. Regulation US FDA regulation FOS is classified as generally recognized as safe (GRAS). NZ FSANZ regulation The Food Safety Authority warned parents of babies that a major European baby-formula brand made in New Zealand does not comply with local regulations (because it contains fructo-oligosaccharides (FOS)), and urged them to stop using it. EU regulation FOS use has been approved in the European Union; allowing addition of FOS in restricted amounts to baby formula (for babies up to 6 months) and follow-on formula (for babies between 6 and 12 months). Infant and follow-on formula products containing FOS have been sold in the EU since 1999. Canadian regulations FOS is currently not approved for use in baby formula. See also Xylooligosaccharide (XOS) References Oligosaccharides Prebiotics (nutrition) Sugar substitutes
Fructooligosaccharide
[ "Chemistry" ]
1,431
[ "Oligosaccharides", "Carbohydrates" ]
1,037,401
https://en.wikipedia.org/wiki/Picatinny%20rail
The 1913 rail (MIL-STD-1913 rail) is an American rail integration system designed by Richard Swan that provides a mounting platform for firearm accessories. It forms part of the NATO standard STANAG 2324 rail. It was originally used for mounting of telescopic sights atop the receivers of larger caliber rifles. Once established as United States Military Standard, its use expanded to also attaching other accessories, such as: iron sights, tactical lights, laser sights, night-vision devices, reflex sights, holographic sights, foregrips, bipods, slings and bayonets. An updated version of the rail is adopted as a NATO standard as the STANAG 4694 NATO Accessory Rail. History Attempts to standardize the Weaver rail mount designs date from work by the A.R.M.S. company and Richard Swanson in the early 1980s. Specifications for the M16A2E4 rifle and the M4E1 carbine received type classification generic in December 1994. These were the M16A2 and the M4 modified with new upper receivers where rails replaced hand guards. The MIL-STD-1913 rail is commonly called the "Picatinny Rail", in reference to the Picatinny Arsenal in New Jersey. Picatinny Arsenal works as a contracting office for small arms design (they contracted engineers to work on the M4). Picatinny Arsenal requested Swan's help in developing the rail, but did not draft blueprints or request paperwork for a patent. That credit goes to Richard Swanson of A.R.M.S., who conducted research and development and acquired a patent for the rail in 1995. Swan has litigated in civil court against Colt and Troy industries regarding patent infringement. The courts found that Troy had developed rifles with rail mounting systems nearly identical to the MIL-STD-1913 rail. A metric-upgraded version of the 1913 rail, the STANAG 4694 NATO Accessory Rail, was designed in conjunction with weapon manufacturers like Aimpoint, Beretta, Colt, FN Herstal and Heckler & Koch, and was approved by the NATO Army Armaments Group (NAAG), Land Capability Group 1 Dismounted Soldier (LCG1-DS) on May 8, 2009. Many firearm manufacturers include a MIL-STD-1913 rail system from the factory, such as the Ruger Mini-14 Ranch Rifle. Design The rail consists of a strip undercut to form a "flattened T" with a hexagonal top cross-section, with cross slots interspersed with flats that allow accessories to be slid into place from the end of the rail and then locked in place. It is similar in concept to the earlier commercial Weaver rail mount used to mount telescopic sights, but is taller and has wider slots at regular intervals along the entire length. The MIL-STD-1913 locking slot width is . The spacing of slot centres is and the slot depth is . Comparison to Weaver rail The only significant difference between the MIL-STD-1913 rail and the similar Weaver rail mount are the size and shapes of the slots. Whereas the earlier Weaver rail is modified from a low, wide dovetail rail and has rounded slots, the 1913 rail has a more pronounced angular section and square-bottomed slots. This means that an accessory designed for a Weaver rail will fit onto a MIL-STD-1913 rail whereas the opposite might not be possible, unless the slots in the Weaver rail are modified to have square bottoms. While some accessories are designed to fit on both Weaver and 1913 rails, most 1913 compatible devices will not fit on Weaver rails. From May 2012, most mounting rails are cut to MIL-STD-1913 standards. Many accessories can be secured to a rail with a single spring-loaded retaining pin. 
Designed to mount heavy sights of various kinds, a great variety of accessories and attachments are now available and the rails are no longer confined to the rear upper surface (receiver) of long arms but are either fitted to or machine milled into the upper, side or lower surfaces of all manner of weapons from crossbows to pistols and long arms up to and including anti-materiel rifles. Impact Because of their many uses, 1913 rails and accessories have replaced iron sights in the design of many firearms and are available as aftermarket add-on parts for most actions that do not have them integrated, and they are also on the undersides of semi-automatic pistol frames and grips. Their usefulness has led to them being used in paintball, gel blasters and airsoft. See also Third Arm Weapon Interface System Warsaw Pact rail Zeiss rail References External links Picatinny Rail Specifications Firearm components Mechanical standards Military equipment introduced in the 1990s
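As a small worked example of how the standardized cross-slot geometry described in the Design section is used in practice, the sketch below counts how many slots fit on a rail of a given length. The 10.01 mm centre-to-centre slot spacing used here is a commonly quoted MIL-STD-1913 figure supplied as an assumption for the illustration, and the rail lengths are arbitrary example values.

# Rough count of MIL-STD-1913 cross slots on a rail of a given length, using the
# commonly quoted 10.01 mm centre-to-centre slot spacing as an assumed input.
SLOT_PITCH_MM = 10.01

def slot_count(rail_length_mm: float) -> int:
    # One slot per full pitch of rail length; end margins are ignored in this rough model.
    return int(rail_length_mm // SLOT_PITCH_MM)

print(slot_count(150))   # a 150 mm accessory rail -> about 14 slots under this assumption
print(slot_count(300))   # a 300 mm full-length handguard rail -> about 29 slots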
Picatinny rail
[ "Technology", "Engineering" ]
963
[ "Firearm components", "Mechanical standards", "Components", "Mechanical engineering" ]
1,037,442
https://en.wikipedia.org/wiki/124%20%28number%29
124 (one hundred [and] twenty-four) is the natural number following 123 and preceding 125. In mathematics 124 is an untouchable number, meaning that it is not the sum of proper divisors of any positive number. It is a stella octangula number, the number of spheres packed in the shape of a stellated octahedron. It is also an icosahedral number. There are 124 different polygons of length 12 formed by edges of the integer lattice, counting two polygons as the same only when one is a translated copy of the other. 124 is a perfectly partitioned number, meaning that it divides the number of partitions of 124. It is the first number to do so after 1, 2, and 3. References Integers
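The properties listed above are straightforward to check computationally. The Python sketch below verifies the figurate-number identities and the "perfectly partitioned" claim, and searches for a counterexample to 124 being untouchable; since a composite n has a proper divisor of at least the square root of n, its aliquot sum exceeds that square root, so searching n up to 124 squared (primes all have aliquot sum 1) covers every candidate.

# Aliquot sum (sum of proper divisors), computed via divisor pairs.
def aliquot(n):
    if n < 2:
        return 0
    s, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            s += d
            if d != n // d:
                s += n // d
        d += 1
    return s

# Untouchable: no n has aliquot sum 124 (search bound justified in the text above).
print(any(aliquot(n) == 124 for n in range(1, 124**2 + 1)))    # False

# Figurate identities at n = 4: stella octangula n(2n^2 - 1), icosahedral n(5n^2 - 5n + 2)/2.
n = 4
print(n * (2 * n**2 - 1), n * (5 * n**2 - 5 * n + 2) // 2)     # 124 124

# "Perfectly partitioned": 124 divides the number of partitions of 124.
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n, k=1):              # partitions of n into parts of size >= k
    if n == 0:
        return 1
    return sum(partitions(n - m, m) for m in range(k, n + 1))

print(partitions(124) % 124 == 0)    # True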
124 (number)
[ "Mathematics" ]
160
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
1,037,556
https://en.wikipedia.org/wiki/Certified%20health%20physicist
Certified Health Physicist is an official title granted by the American Board of Health Physics, the certification board for health physicists in the United States. A Certified Health Physicist is designated by the letters CHP or DABHP (Diplomate of the American Board of Health Physics) after his or her name. A certification by the ABHP is not a license to practice and does not confer any legal qualification to practice health physics. However, the certification is well respected and indicates a high level of achievement by those who obtain it. Certified Health Physicists are plenary or emeritus members of the American Academy of Health Physics (AAHP). In 2019, the AAHP web site listed over 1600 plenary and emeritus members. Professional responsibilities A person certified as a health physicist has a responsibility to uphold the professional integrity associated with the certification to promote the practice and science of radiation safety. It is expected that such a person will always give health physics information based on the highest standards of science and professional ethics. A certified individual has a responsibility to remain professionally active in the health physics field and remain technically competent in the scientific, technical and regulatory developments in the field. General requirements required to receive the certification The requirements for prospective candidates for certification are Academics. At least a bachelor's degree from an accredited college or university in physical sciences, engineering, or in a biological science, with a minimum of 20 semester hours in physical science. Experience. At least six years of professional experience in health physics. By permission of the Board, advanced degrees may substitute for one year (master's degree) or two years (doctorate) of the required experience. References. A reference from the immediate supervisor and from at least two other individuals, including one from a currently certified Health Physicist. Written Report. A written report that reflects a professional health physics effort. Examination. A two-part exam, which is currently given during one week of the year. Part I consists of 150 multiple choice questions in fundamental aspects of health physics. This portion of the test is three hours long, and can be taken without most of the above requirements. It is given at Pearson Vue testing centers throughout the world in the week before the Health Physics Society's annual meeting. Part II consists of open-ended written questions, which determine competency in applied health physics. This portion of the exam is six hours long, and can only be taken after having passed Part I, or immediately after having taken Part I the week before. It is given on the Monday of the Health Physics Society's annual meeting, and on the same day at other locations throughout the country. After passing Part I, the applicant must pass Part II within a period of seven years, or retake both parts. If a candidate scores particularly poorly on Part II, he or she will be barred from taking it the following year. Both parts include all of the topics below, but Part II requires candidates to answer only six mandatory questions and four of eight topic area questions. 
Examination topics Atomic structure/Radioactivity/Radioactive decay Interaction of radiation with matter Internal dosimetry/Internal dose calculations Biological effects of ionizing radiation NRC, OSHA, Regulations/Standards ICRU, ICRP, NCRP Radiation Risk, BEIR III, IV, V, VI External dose calculations/External dosimetry Statistics Instrumentation Low Level Wastes, Fuel Cycle, DOT Regulations Shielding and Activation Air sampling/Modeling/Environmental Health Physics Medical Health Physics and X-ray Protection Reactor Health Physics/Criticality Accelerator Health Physics Lasers, UV, Microwave, RF Radon Exam reference sources Health Physics Topics Johnson, T.E. (2017). Introduction to health physics. Introduction to health physics. Johnson, T.E. & Birky, B.K. (1998). Health Physics and Radiological Health. Knoll, G.F. (1979). Radiation Detection and Measurement. Turner, J.E. (2007). Atoms, Radiation, and Radiation Protection. Part I Bevelacqua, J.J. (1999). Basic Health Physics: Problems and Solutions. ABHP Part I Question and Solutions Part II Bevelacqua, J. J. (2009). Contemporary health physics: Problems and Solutions. Turner, J.E. (1988). Problems and Solutions in Radiation Protection. American Board of Health Physics ABHP Part II Question and Solutions References External links American Board of Health Physics Health Physics Society American Academy of Health Physics Medical physics Professional certification in science Health physicists
Certified health physicist
[ "Physics" ]
909
[ "Applied and interdisciplinary physics", "Medical physics" ]
1,037,781
https://en.wikipedia.org/wiki/Beta%20function%20%28physics%29
In theoretical physics, specifically quantum field theory, a beta function, β(g), encodes the dependence of a coupling parameter, g, on the energy scale, μ, of a given physical process described by quantum field theory. It is defined as β(g) = ∂g/∂(ln μ) = μ ∂g/∂μ and, because of the underlying renormalization group, it has no explicit dependence on μ, so it only depends on μ implicitly through g. This dependence on the energy scale thus specified is known as the running of the coupling parameter, a fundamental feature of scale-dependence in quantum field theory, and its explicit computation is achievable through a variety of mathematical techniques. The concept of the beta function was introduced by Ernst Stueckelberg and André Petermann in 1953. Scale invariance If the beta functions of a quantum field theory vanish, usually at particular values of the coupling parameters, then the theory is said to be scale-invariant. Almost all scale-invariant QFTs are also conformally invariant. The study of such theories is conformal field theory. The coupling parameters of a quantum field theory can run even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale invariance is anomalous. Examples Beta functions are usually computed in some kind of approximation scheme. An example is perturbation theory, where one assumes that the coupling parameters are small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs). Here are some examples of beta functions computed in perturbation theory: Quantum electrodynamics The one-loop beta function in quantum electrodynamics (QED) is β(e) = e³/(12π²) or, equivalently, written in terms of the fine structure constant α = e²/(4π) in natural units, β(α) = 2α²/(3π). This beta function tells us that the coupling increases with increasing energy scale, and QED becomes strongly coupled at high energy. In fact, the coupling apparently becomes infinite at some finite energy, resulting in a Landau pole. However, one cannot expect the perturbative beta function to give accurate results at strong coupling, and so it is likely that the Landau pole is an artifact of applying perturbation theory in a situation where it is no longer valid. Quantum chromodynamics The one-loop beta function in quantum chromodynamics with n_f flavours and n_s scalar colored bosons is β(g) = −(g³/(16π²)) (11 − (2/3)n_f − (1/6)n_s), or written in terms of αs = g²/(4π), β(αs) = −(αs²/(2π)) (11 − (2/3)n_f − (1/6)n_s). Assuming n_s = 0, if n_f ≤ 16, the ensuing beta function dictates that the coupling decreases with increasing energy scale, a phenomenon known as asymptotic freedom. Conversely, the coupling increases with decreasing energy scale. This means that the coupling becomes large at low energies, and one can no longer rely on perturbation theory. SU(N) Non-Abelian gauge theory While the (Yang–Mills) gauge group of QCD is SU(3), and determines 3 colors, we can generalize to any number of colors, N, with a gauge group G = SU(N). Then for this gauge group, with Dirac fermions in a representation R_f of G and with complex scalars in a representation R_s, the one-loop beta function is β(g) = −(g³/(16π²)) [ (11/3)C₂(G) − (4/3)n_f T(R_f) − (1/3)n_s T(R_s) ], where C₂(G) is the quadratic Casimir of G and T(R) is another Casimir invariant defined by Tr(T^a_R T^b_R) = T(R) δ^{ab} for generators T^a_R of the Lie algebra in the representation R. (For Weyl or Majorana fermions, replace 4/3 by 2/3, and for real scalars, replace 1/3 by 1/6.) For gauge fields (i.e. gluons), necessarily in the adjoint of G, C₂(G) = N; for fermions in the fundamental (or anti-fundamental) representation of G, T(R) = 1/2.
Then for QCD, with , the above equation reduces to that listed for the quantum chromodynamics beta function. This famous result was derived nearly simultaneously in 1973 by Politzer, Gross and Wilczek, for which the three were awarded the Nobel Prize in Physics in 2004. Unbeknownst to these authors, G. 't Hooft had announced the result in a comment following a talk by K. Symanzik at a small meeting in Marseilles in June 1972, but he never published it. Standard Model Higgs–Yukawa Couplings In the Standard Model, quarks and leptons have "Yukawa couplings" to the Higgs boson. These determine the mass of the particle. Most all of the quarks' and leptons' Yukawa couplings are small compared to the top quark's Yukawa coupling. These Yukawa couplings change their values depending on the energy scale at which they are measured, through running. The dynamics of Yukawa couplings of quarks are determined by the renormalization group equation: , where is the color gauge coupling (which is a function of and associated with asymptotic freedom) and is the Yukawa coupling. This equation describes how the Yukawa coupling changes with energy scale . The Yukawa couplings of the up, down, charm, strange and bottom quarks, are small at the extremely high energy scale of grand unification, GeV. Therefore, the term can be neglected in the above equation. Solving, we then find that is increased slightly at the low energy scales at which the quark masses are generated by the Higgs, GeV. On the other hand, solutions to this equation for large initial values cause the rhs to quickly approach smaller values as we descend in energy scale. The above equation then locks to the QCD coupling . This is known as the (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling. No matter what the initial starting value of the coupling is, if it is sufficiently large it will reach this quasi-fixed point value, and the corresponding quark mass is predicted. Minimal Supersymmetric Standard Model Renomalization group studies in the Minimal Supersymmetric Standard Model (MSSM) of grand unification and the Higgs–Yukawa fixed points were very encouraging that the theory was on the right track. So far, however, no evidence of the predicted MSSM particles has emerged in experiment at the Large Hadron Collider. See also Banks–Zaks fixed point Callan–Symanzik equation Quantum triviality References Further reading Peskin, M and Schroeder, D.; An Introduction to Quantum Field Theory, Westview Press (1995). A standard introductory text, covering many topics in QFT including calculation of beta functions; see especially chapter 16. Weinberg, Steven; The Quantum Theory of Fields, (3 volumes) Cambridge University Press (1995). A monumental treatise on QFT. Zinn-Justin, Jean; Quantum Field Theory and Critical Phenomena, Oxford University Press (2002). Emphasis on the renormalization group and related topics. Renormalization group Scaling symmetries
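The qualitative statements above, QED's coupling growing toward a Landau pole and QCD's shrinking toward asymptotic freedom, can be made concrete by integrating the one-loop equations, which are separable at this order. The Python sketch below does this for the QED case; the starting value α ≈ 1/137 at an electron-mass reference scale is the usual low-energy input, only the electron loop is kept (real-world running includes all charged fermions), and the result is only meaningful where perturbation theory applies, as noted above.

import math

def alpha_qed(mu, mu0=0.000511, alpha0=1 / 137.036):
    """One-loop running of the QED fine-structure constant (electron loop only).

    Integrating d(alpha)/d(ln mu) = 2*alpha^2/(3*pi) gives
    1/alpha(mu) = 1/alpha(mu0) - (2/(3*pi)) * ln(mu/mu0).
    Scales are in GeV; mu0 is taken near the electron mass as the reference point.
    """
    return 1.0 / (1.0 / alpha0 - (2.0 / (3.0 * math.pi)) * math.log(mu / mu0))

for mu in (0.000511, 91.19, 1e30):        # electron mass, Z mass, far beyond any collider
    print(mu, 1.0 / alpha_qed(mu))        # the inverse coupling shrinks as mu grows

# The (unphysical) one-loop Landau pole sits where 1/alpha would reach zero:
mu_pole = 0.000511 * math.exp(3 * math.pi * 137.036 / 2)
print(mu_pole)                            # an absurdly large scale, far beyond the theory's validity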
Beta function (physics)
[ "Physics" ]
1,406
[ "Symmetry", "Physical phenomena", "Critical phenomena", "Renormalization group", "Statistical mechanics", "Scaling symmetries" ]
1,037,854
https://en.wikipedia.org/wiki/Free%20electron%20model
In solid-state physics, the free electron model is a quantum mechanical model for the behaviour of charge carriers in a metallic solid. It was developed in 1927, principally by Arnold Sommerfeld, who combined the classical Drude model with quantum mechanical Fermi–Dirac statistics and hence it is also known as the Drude–Sommerfeld model. Given its simplicity, it is surprisingly successful in explaining many experimental phenomena, especially the Wiedemann–Franz law which relates electrical conductivity and thermal conductivity; the temperature dependence of the electron heat capacity; the shape of the electronic density of states; the range of binding energy values; electrical conductivities; the Seebeck coefficient of the thermoelectric effect; thermal electron emission and field electron emission from bulk metals. The free electron model solved many of the inconsistencies related to the Drude model and gave insight into several other properties of metals. The free electron model considers that metals are composed of a quantum electron gas where ions play almost no role. The model can be very predictive when applied to alkali and noble metals. Ideas and assumptions In the free electron model four main assumptions are taken into account: Free electron approximation: The interaction between the ions and the valence electrons is mostly neglected, except in boundary conditions. The ions only keep the charge neutrality in the metal. Unlike in the Drude model, the ions are not necessarily the source of collisions. Independent electron approximation: The interactions between electrons are ignored. The electrostatic fields in metals are weak because of the screening effect. Relaxation-time approximation: There is some unknown scattering mechanism such that the electron probability of collision is inversely proportional to the relaxation time , which represents the average time between collisions. The collisions do not depend on the electronic configuration. Pauli exclusion principle: Each quantum state of the system can only be occupied by a single electron. This restriction of available electron states is taken into account by Fermi–Dirac statistics (see also Fermi gas). Main predictions of the free-electron model are derived by the Sommerfeld expansion of the Fermi–Dirac occupancy for energies around the Fermi level. The name of the model comes from the first two assumptions, as each electron can be treated as free particle with a respective quadratic relation between energy and momentum. The crystal lattice is not explicitly taken into account in the free electron model, but a quantum-mechanical justification was given a year later (1928) by Bloch's theorem: an unbound electron moves in a periodic potential as a free electron in vacuum, except for the electron mass me becoming an effective mass m* which may deviate considerably from me (one can even use negative effective mass to describe conduction by electron holes). Effective masses can be derived from band structure computations that were not originally taken into account in the free electron model. From the Drude model Many physical properties follow directly from the Drude model, as some equations do not depend on the statistical distribution of the particles. Taking the classical velocity distribution of an ideal gas or the velocity distribution of a Fermi gas only changes the results related to the speed of the electrons. 
Mainly, the free electron model and the Drude model predict the same DC electrical conductivity σ for Ohm's law, that is J = σE with σ = n e² τ / m_e, where J is the current density, E is the external electric field, n is the electronic density (number of electrons/volume), τ is the mean free time and e is the electron electric charge. Other quantities that remain the same under the free electron model as under Drude's are the AC susceptibility, the plasma frequency, the magnetoresistance, and the Hall coefficient related to the Hall effect. Properties of an electron gas Many properties of the free electron model follow directly from equations related to the Fermi gas, as the independent electron approximation leads to an ensemble of non-interacting electrons. For a three-dimensional electron gas we can define the Fermi energy as E_F = (ħ²/2m_e)(3π²n)^(2/3), where ħ is the reduced Planck constant. The Fermi energy defines the energy of the highest energy electron at zero temperature. For metals the Fermi energy is on the order of a few electronvolts above the free electron band minimum energy. Density of states The 3D density of states (number of energy states, per energy per volume) of a non-interacting electron gas is given by: g(E) = (1/(2π²)) (2m_e/ħ²)^(3/2) √E, where E is the energy of a given electron. This formula takes into account the spin degeneracy but does not consider a possible energy shift due to the bottom of the conduction band. For 2D the density of states is constant and for 1D is inversely proportional to the square root of the electron energy. Fermi level The chemical potential μ of electrons in a solid is also known as the Fermi level and, like the related Fermi energy, often denoted E_F. The Sommerfeld expansion can be used to calculate the Fermi level (μ) at higher temperatures as: μ(T) ≈ E_F [1 − (π²/12)(T/T_F)²], where T is the temperature and we define T_F = E_F/k_B as the Fermi temperature (k_B is the Boltzmann constant). The perturbative approach is justified as the Fermi temperature is usually of about 10⁵ K for a metal, hence at room temperature or lower the Fermi energy and the chemical potential are practically equivalent. Compressibility of metals and degeneracy pressure The total energy per unit volume (at T = 0) can also be calculated by integrating over the phase space of the system; we obtain u(0) = (3/5) n E_F, which does not depend on temperature. Compare with the energy per electron of an ideal gas: (3/2) k_B T, which is null at zero temperature. For an ideal gas to have the same energy as the electron gas, the temperatures would need to be of the order of the Fermi temperature. Thermodynamically, this energy of the electron gas corresponds to a zero-temperature pressure given by P = −(∂U/∂V)|_{T,μ} = (2/3) u(0), where V is the volume and U is the total energy, the derivative performed at temperature and chemical potential constant. This pressure is called the electron degeneracy pressure and does not come from repulsion or motion of the electrons but from the restriction that no more than two electrons (due to the two values of spin) can occupy the same energy level. This pressure defines the compressibility or bulk modulus of the metal, B = −V (∂P/∂V) = (5/3) P = (2/3) n E_F. This expression gives the right order of magnitude for the bulk modulus for alkali metals and noble metals, which shows that this pressure is as important as other effects inside the metal. For other metals the crystalline structure has to be taken into account. Magnetic response According to the Bohr–Van Leeuwen theorem, a classical system at thermodynamic equilibrium cannot have a magnetic response. The magnetic properties of matter in terms of a microscopic theory are purely quantum mechanical.
For an electron gas, the total magnetic response is paramagnetic and its magnetic susceptibility given by where is the vacuum permittivity and the is the Bohr magneton. This value results from the competition of two contributions: a diamagnetic contribution (known as Landau's diamagnetism) coming from the orbital motion of the electrons in the presence of a magnetic field, and a paramagnetic contribution (Pauli's paramagnetism). The latter contribution is three times larger in absolute value than the diamagnetic contribution and comes from the electron spin, an intrinsic quantum degree of freedom that can take two discrete values and it is associated to the electron magnetic moment. Corrections to Drude's model Heat capacity One open problem in solid-state physics before the arrival of quantum mechanics was to understand the heat capacity of metals. While most solids had a constant volumetric heat capacity given by Dulong–Petit law of about at large temperatures, it did correctly predict its behavior at low temperatures. In the case of metals that are good conductors, it was expected that the electrons contributed also the heat capacity. The classical calculation using Drude's model, based on an ideal gas, provides a volumetric heat capacity given by . If this was the case, the heat capacity of a metals should be 1.5 of that obtained by the Dulong–Petit law. Nevertheless, such a large additional contribution to the heat capacity of metals was never measured, raising suspicions about the argument above. By using Sommerfeld's expansion one can obtain corrections of the energy density at finite temperature and obtain the volumetric heat capacity of an electron gas, given by: , where the prefactor to is considerably smaller than the 3/2 found in , about 100 times smaller at room temperature and much smaller at lower . Evidently, the electronic contribution alone does not predict the Dulong–Petit law, i.e. the observation that the heat capacity of a metal is still constant at high temperatures. The free electron model can be improved in this sense by adding the contribution of the vibrations of the crystal lattice. Two famous quantum corrections include the Einstein solid model and the more refined Debye model. With the addition of the latter, the volumetric heat capacity of a metal at low temperatures can be more precisely written in the form, , where and are constants related to the material. The linear term comes from the electronic contribution while the cubic term comes from Debye model. At high temperature this expression is no longer correct, the electronic heat capacity can be neglected, and the total heat capacity of the metal tends to a constant given by the Dulong–petit law. Mean free path Notice that without the relaxation time approximation, there is no reason for the electrons to deflect their motion, as there are no interactions, thus the mean free path should be infinite. The Drude model considered the mean free path of electrons to be close to the distance between ions in the material, implying the earlier conclusion that the diffusive motion of the electrons was due to collisions with the ions. The mean free paths in the free electron model are instead given by (where is the Fermi speed) and are in the order of hundreds of ångströms, at least one order of magnitude larger than any possible classical calculation. 
The mean free path is then not a result of electron–ion collisions but instead is related to imperfections in the material, either due to defects and impurities in the metal, or due to thermal fluctuations. Thermal conductivity and thermopower While Drude's model predicts a similar value for the electric conductivity as the free electron model, the models predict slightly different thermal conductivities. The thermal conductivity is given by for free particles, which is proportional to the heat capacity and the mean free path which depend on the model ( is the mean (square) speed of the electrons or the Fermi speed in the case of the free electron model). This implies that the ratio between thermal and electric conductivity is given by the Wiedemann–Franz law, where is the Lorenz number, given by The free electron model is closer to the measured value of V2/K2 while the Drude prediction is off by about half the value, which is not a large difference. The close prediction to the Lorenz number in the Drude model was a result of the classical kinetic energy of electron being about 100 smaller than the quantum version, compensating the large value of the classical heat capacity. However, Drude's mode predicts the wrong order of magnitude for the Seebeck coefficient (thermopower), which relates the generation of a potential difference by applying a temperature gradient across a sample . This coefficient can be showed to be , which is just proportional to the heat capacity, so the Drude model predicts a constant that is hundred times larger than the value of the free electron model. While the latter get as coefficient that is linear in temperature and provides much more accurate absolute values in the order of a few tens of μV/K at room temperature. However this models fails to predict the sign change of the thermopower in lithium and noble metals like gold and silver. Inaccuracies and extensions The free electron model presents several inadequacies that are contradicted by experimental observation. We list some inaccuracies below: Temperature dependence The free electron model presents several physical quantities that have the wrong temperature dependence, or no dependence at all like the electrical conductivity. The thermal conductivity and specific heat are well predicted for alkali metals at low temperatures, but fails to predict high temperature behaviour coming from ion motion and phonon scattering. Hall effect and magnetoresistance The Hall coefficient has a constant value in Drude's model and in the free electron model. This value is independent of temperature and the strength of the magnetic field. The Hall coefficient is actually dependent on the band structure and the difference with the model can be quite dramatic when studying elements like magnesium and aluminium that have a strong magnetic field dependence. The free electron model also predicts that the traverse magnetoresistance, the resistance in the direction of the current, does not depend on the strength of the field. In almost all the cases it does. Directional The conductivity of some metals can depend of the orientation of the sample with respect to the electric field. Sometimes even the electrical current is not parallel to the field. This possibility is not described because the model does not integrate the crystallinity of metals, i.e. the existence of a periodic lattice of ions. 
Diversity in the conductivity Not all materials are electrical conductors, some do not conduct electricity very well (insulators), some can conduct when impurities are added like semiconductors. Semimetals, with narrow conduction bands also exist. This diversity is not predicted by the model and can only by explained by analysing the valence and conduction bands. Additionally, electrons are not the only charge carriers in a metal, electron vacancies or holes can be seen as quasiparticles carrying positive electric charge. Conduction of holes leads to an opposite sign for the Hall and Seebeck coefficients predicted by the model. Other inadequacies are present in the Wiedemann–Franz law at intermediate temperatures and the frequency-dependence of metals in the optical spectrum. More exact values for the electrical conductivity and Wiedemann–Franz law can be obtained by softening the relaxation-time approximation by appealing to the Boltzmann transport equations. The exchange interaction is totally excluded from this model and its inclusion can lead to other magnetic responses like ferromagnetism. An immediate continuation to the free electron model can be obtained by assuming the empty lattice approximation, which forms the basis of the band structure model known as the nearly free electron model. Adding repulsive interactions between electrons does not change very much the picture presented here. Lev Landau showed that a Fermi gas under repulsive interactions, can be seen as a gas of equivalent quasiparticles that slightly modify the properties of the metal. Landau's model is now known as the Fermi liquid theory. More exotic phenomena like superconductivity, where interactions can be attractive, require a more refined theory. See also Bloch's theorem Electronic entropy Tight binding Two-dimensional electron gas Bose–Einstein statistics Fermi surface White dwarf Jellium References Citations References General Quantum models Condensed matter physics Electronic band structures Electron Arnold Sommerfeld
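To give a feel for the scales that appear in the electron-gas formulas and in the Wiedemann–Franz discussion above, the Python sketch below evaluates the Fermi energy, Fermi temperature, degeneracy pressure and the Sommerfeld Lorenz number. The electron density of about 8.5 × 10^28 per cubic metre is a typical textbook value for copper, used here purely as an assumed illustrative input rather than a figure taken from this article; the physical constants are standard CODATA values.

import math

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
k_B  = 1.380649e-23         # J/K
e    = 1.602176634e-19      # C (also used as the J-per-eV conversion)

n = 8.5e28                  # electrons per m^3, assumed copper-like density

E_F = (hbar**2 / (2 * m_e)) * (3 * math.pi**2 * n) ** (2 / 3)
T_F = E_F / k_B                         # Fermi temperature
u0  = 0.6 * n * E_F                     # energy per unit volume at T = 0
P   = (2 / 3) * u0                      # electron degeneracy pressure
B   = (5 / 3) * P                       # bulk modulus contribution

print(E_F / e)              # ~7 eV, matching the "few electronvolts" scale quoted above
print(T_F)                  # ~8e4 K, i.e. of order 10^5 K
print(P, B)                 # both of order 10^10 Pa

# Sommerfeld (free electron) Lorenz number L = kappa/(sigma*T) = (pi^2/3)(k_B/e)^2,
# compared with the classical Drude-style coefficient of 3/2 in place of pi^2/3.
L_sommerfeld = (math.pi**2 / 3) * (k_B / e) ** 2
L_classical  = (3 / 2) * (k_B / e) ** 2
print(L_sommerfeld)         # ~2.44e-8 V^2/K^2
print(L_classical)          # ~1.1e-8 V^2/K^2, roughly half the Sommerfeld value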
Free electron model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,083
[ "Electron", "Molecular physics", "Phases of matter", "Quantum mechanics", "Materials science", "Quantum models", "Electronic band structures", "Condensed matter physics", "Matter" ]
1,037,955
https://en.wikipedia.org/wiki/Ship%20breaking
Ship breaking (also known as ship recycling, ship demolition, ship scrapping, ship dismantling, or ship cracking) is a type of ship disposal involving the breaking up of ships either as a source of parts, which can be sold for re-use, or for the extraction of raw materials, chiefly scrap. Modern ships have a lifespan of 25 to 30 years before corrosion, metal fatigue and a lack of parts render them uneconomical to operate. Ship-breaking allows the materials from the ship, especially steel, to be recycled and made into new products. This lowers the demand for mined iron ore and reduces energy use in the steelmaking process. Fixtures and other equipment on board the vessels can also be reused. While ship-breaking is sustainable, there are concerns about its use by poorer countries without stringent environmental legislation. It is also labour-intensive, and considered one of the world's most dangerous industries. In 2012, roughly 1,250 ocean ships were broken down, and their average age was 26 years. In 2013, the world total of demolished ships amounted to 29,052,000 tonnes, 92% of which were demolished in Asia. As of January 2020, Alang Ship Breaking Yard in India has the largest global share at 30%, followed by Chittagong Ship Breaking Yard in Bangladesh and Gadani Ship Breaking Yard in Pakistan. The largest sources of ships are China, Greece, and Germany, although there is greater variation in the sources of carriers versus their disposal. The ship-breaking yards of India, Bangladesh, China and Pakistan employ 225,000 workers as well as providing many indirect jobs. In Bangladesh, the recycled steel covers 20% of the country's needs and in India it is almost 10%. As an alternative to ship breaking, ships may be sunk to create artificial reefs after legally mandated removal of hazardous materials (though this does not recycle any materials), or sunk in deep ocean waters. Storage is a viable temporary option, whether on land or afloat, though most ships will eventually be scrapped; some will be sunk, or preserved as museums. History Wooden-hulled ships were simply set on fire or "conveniently sunk". In Tudor times (1485–1603), ships were dismantled and the timber re-used. This procedure was no longer applicable with the advent of metal-hulled boats in the 19th century. In 1880 Denny Brothers of Dumbarton used forgings made from scrap maritime steel in their shipbuilding. Many other nations began to purchase British ships for scrap by the late 19th century, including Germany, Italy, the Netherlands and Japan. The Italian industry started in 1892, and the Japanese industry after the passing of an 1896 law to subsidise native shipbuilding. After suffering damage or disaster, liner operators did not want the name of a broken ship to tarnish the brand of their passenger services. Many Victorian ships made their final voyages with the final letter of their name chipped off. In the 1930s it became cheaper to "beach" a boat by running her ashore—as opposed to using a dry dock. The ship would have to weigh as little as possible and would run ashore at full speed. Dismantling operations required a rise of tide and close proximity to a steel-works. Electric shears, a wrecking ball and oxy-acetylene torches were used. The technique of the time closely resembles that used in developing countries . Thos. W. Ward Ltd., one of the largest breakers in the United Kingdom in the 1930s, would recondition and sell all furniture and machinery. 
Many historical artifacts were sold at public auctions: the Cunarder, sold as scrap, received high bids for her fittings worldwide. However, any weapons and military information, even if obsolete, were carefully removed by Navy personnel before turning over the ship for scrapping. In 2020, as the COVID-19 pandemic crippled the cruise ship trade, cruise vessels began to appear more frequently in ship breaking facilities. Location trends Until the late 20th century the majority of ship breaking activity took place in the port cities of industrialized countries such as the United Kingdom and the United States. Those dismantlers that still remain in the United States work primarily on government-surplus vessels. Starting in the mid-20th century, East Asian countries with lower labour costs began to dominate ship-breaking. As labour costs rose, centres of the ship-breaking industry moved—initially from countries such as Japan and Hong Kong, to Korea and Taiwan and then to China. For example, the southern port city of Kaohsiung in Taiwan operated as the world's leading dismantling site in the late 1960s and 1970s, breaking up 220 ships totaling 1.6 million tons in 1972 alone; in 1977 Taiwan continued to dominate the industry with more than half the market share, followed by Spain and Pakistan. At the time, Bangladesh had no capacity at all. However, the sector is volatile and fluctuates wildly, and Taiwan processed just two ships 13 years later as wages across East Asia rose. For comparison, depending on their profession, shipbreakers in Kaohsiung earned from (day labourer) to (torch operator) per day in 1973. In 1960, after a severe cyclone, the Greek ship M D Alpine was stranded on the shores of Sitakunda, Chittagong (then part of East Pakistan). It could not be re-floated and so remained there for several years. In 1965 the Chittagong Steel House bought the ship and had it scrapped. It took years to scrap the vessel, but the work gave birth to the industry in Bangladesh. Until 1980 the Gadani Ship Breaking Yard of Pakistan was the largest ship breaking yard in the world. Tightening environmental regulations resulted in increased hazardous waste disposal costs in industrialised countries in the 1980s, causing the export of retired ships to lower-income areas, chiefly in South Asia. This, in turn, created a far worse environmental problem, subsequently leading to the Basel Convention of 1989. In 2004 a Basel Convention decision officially classified old ships as "toxic waste", preventing them from leaving a country without the permission of the importing state. This has led to a resurgence of recycling in environmentally compliant locations in developed countries, especially in former shipbuilding yards. On 31 December 2005 the French Navy's Clemenceau left Toulon to be dismantled at the Alang Ship Breaking Yard, India—despite protests over improper disposal capabilities and facilities for the toxic wastes. On 6 January 2006 the Supreme Court of India temporarily denied access to Alang, and the French Conseil d'État ordered Clemenceau to return to French waters. Able UK in Hartlepool received a new disassembly contract to use accepted practices in scrapping the ship. The dismantling started on 18 November 2009 and the break-up was completed by the end of 2010; the event was considered a turning point in the treatment of redundant vessels. Europe and the United States have had a resurgence in ship scrapping since the 1990s. 
In 2009 the Bangladesh Environmental Lawyers Association won a legal case prohibiting all substandard ship breaking. For 14 months the industry could not import ships and thousands of jobs were lost before the ban was annulled. That same year, the global recession and lower demand for goods led to an increase in the supply of ships for decommissioning. The rate of scrapping is inversely correlated to the freight price, which collapsed in 2009. Technique The decommissioning process is entirely different in developed countries than it is in third world countries. In both cases, ship-breakers bid for the ship, and the highest bidder wins the contract. The ship-breaker then acquires the vessel from the international broker who deals in outdated ships. The price paid is approximately $400 per tonne; regions with more lax environmental legislation typically can offer higher prices. For the industry in Bangladesh, 69% of revenue is spent on purchasing vessels; only 2% is labour costs. The ship is taken to the decommissioning location either under its own power or with the use of tugs. Developing countries In developing countries, chiefly the Indian subcontinent, ships are run ashore on gently sloping sand tidal beaches at high tide so that they can be accessed for disassembly. In the beaching method, no external source of energy is used to pull the ship, as opposed to the dry dock method of ship recycling where a ship is floated into the dry dock using a substantial amount of energy. However, maneuvering a large ship onto a beach at high speed takes skill and daring even for a specialist captain, and is not always successful. Next, the anchor is dropped to steady the ship and the engine is shut down. It takes 50 labourers about three months to break down a normal-sized cargo vessel of about 40,000 tonnes. Before the decommissioning begins, various clearances and permissions are obtained from regulatory, pollution and customs authorities after a thorough inspection is conducted by them. The ship recycling process then begins with the draining of fuel, hydraulic fluid, coolant, lubricating oils and firefighting liquid which may be disposed of or sold to the trade. Any reusable fixtures are sold to the trade. Any kind of waste such as plastic, garbage, or oily sand is sent to waste treatment facilities, like the Common Hazardous Waste Treatment Storage Disposal Facility (CHW-TSDF) set up by the Gujarat Maritime Board in Alang. Any usable oil is sent to government authorized refineries where used oil is chemically treated. The next steps entail recovering unused and partially spent materials, disposal of bilge water, recovering and obtaining reusable materials, and safe disposal of bio-hazardous materials like asbestos and glass wool. Each of these materials are inspected and sent to regulated waste treatment facilities or to buyers for further use and processing. In recycling yards in the Indian subcontinent, specifically in Alang, upgraded facilities such as 100% impervious floors with drainage systems, heavy-lift cranes, yard and vessel-specific training for workers, and the development and implementation of Ship Recycling Facility Plans and Ship Recycling Plans (as per IMO's guidelines in Resolutions MEPC.210(63) and MEPC.196(62)) have been implemented. Developed countries In developed countries the dismantling process mirrors the technical guidelines for the environmentally sound management of the full and partial dismantling of ships, published by the Basel Convention in 2003. 
Recycling rates of 98% can be achieved in these facilities. Prior to dismantling, an inventory of dangerous substances is compiled. All hazardous materials and liquids, such as bilge water, are removed before disassembly. Holes are bored for ventilation and all flammable vapours are extracted. Vessels are initially taken to a dry dock or a pier, although a dry dock is considered more environmentally friendly because all spillage is contained and can easily be cleaned up. Floating is, however, cheaper than a dry dock. Stormwater discharge facilities will stop an overflow of toxic liquid into the waterways. The carrier is then secured to ensure its stability. Often the propeller is removed beforehand to allow the watercraft to be moved into shallower water. Workers must completely strip the ship down to a bare hull, with objects cut free using saws, grinders, abrasive cutting wheels, hand-held shears, plasma, and gas torches. Anything of value, such as spare parts and electronic equipment is sold for re-use, although labour costs mean that low-value items are not economical to sell. The Basel Convention demands that all yards separate hazardous and non-hazardous waste and have appropriate storage units, and this must be done before the hull is cut up. Asbestos, found in the engine room, is isolated and stored in custom-made plastic wrapping prior to being placed in secure steel containers, which are then landfilled. Many hazardous wastes can be recycled into new products. Examples include lead-acid batteries or electronic circuit boards. Another commonly used treatment is cement-based solidification and stabilization. Cement kilns are used because they can treat a range of hazardous wastes by improving physical characteristics and decreasing the toxicity and transmission of contaminants. Hazardous waste may also be "destroyed" by incinerating it at a high temperature; flammable wastes can sometimes be burned as energy sources. Some hazardous waste types may be eliminated using pyrolysis in a high-temperature electrical arc, in inert conditions to avoid combustion. This treatment method may be preferable to high-temperature incineration in some circumstances such as in the destruction of concentrated organic waste types, including PCBs, pesticides, and other persistent organic pollutants. Dangerous chemicals can also be permanently stored in landfills as long as leaching is prevented. Valuable metals, such as copper or aluminum in electric cable, that are mixed with other materials may be recovered by the use of shredders and separators in the same fashion as e-waste recycling. The shredders cut the electronics into metallic and non-metallic pieces. Metals are extracted using magnetic separators, air flotation separator columns, shaker tables, or eddy currents. Plastic almost always contains regulated hazardous waste (e.g., asbestos, PCBs, hydrocarbons) and cannot be melted down. Large objects, such as engine parts, are extracted and sold as they become accessible. The hull is cut into 300-tonne sections, starting with the upper deck and working slowly downwards. While oxy-acetylene gas torches are most commonly used, detonation charges can quickly remove large sections of the hull. These sections are transported to an electric arc furnace to be melted down into new ferrous products, though toxic paint must be stripped prior to heating. 
Historical techniques At Kaohsiung in the late 1960s and '70s, ships to be scrapped were tied up at berths in Dah Jen and Dah Lin Pu, at the southern end of Kaohsiung Harbor. There were a total of 24 breaking berths at Kaohsiung; each berth was rented by the scrapper from the Port Authority at a nominal rate per square foot per month, and additional area surrounding a berth could be rented as well. A typical 5,000-ton ship could be broken up in 25 to 30 days. The process began with "cleaning", a stage in which subcontractors would come on board the ship to strip it of loose and flammable items, which were often resold in second-hand shops. After that, the cutting crews would start to dismantle the hull, stern first; large sections were cut off the ship and moved via cranes and rigging taken from previously scrapped ships. Because the scrapping at Kaohsiung was done at the docks, scrap metal was placed on trucks waiting to transport it to Kaohsiung's mills. Conventions and regulations The Basel Convention The Basel Convention on the Control of Trans-boundary Movements of Hazardous Wastes and Their Disposal of 1989 was the first convention to environmentally govern the ship breaking industry. It has been ratified by 187 countries, including India and Bangladesh. It controls the international movement of hazardous wastes and provides for their environmentally sound management, mainly through consent for shipments between the authorities of the country exporting the hazardous wastes and the authorities of the importing country. Though the Basel Convention has notably reduced illegal exports of hazardous wastes to countries that are unable to process and dispose of them in an environmentally sound manner, it has failed to define minimum standards for environmentally sound recycling. It also completely ignores important aspects such as workers' safety and falls short in overcoming bureaucratic barriers when it comes to communication between exporting and importing countries. Furthermore, the decision to scrap a ship is often made in international waters, where the convention has no jurisdiction. The "Ban Amendment" to the Basel Convention was adopted in March 1994, prohibiting the export of hazardous wastes from OECD countries to non-OECD countries. The Amendment would enter into force 90 days after it has been ratified by at least three-quarters of the 87 countries that were Parties to the Convention at the time it was adopted. Croatia deposited the 66th ratification in September 2019, and the Ban Amendment entered into force 25 years after adoption on December 5, 2019. However, the European Union had already enacted the Ban Amendment unilaterally through the European Waste Shipment Regulation, which incorporated the Basel Convention and the Ban Amendment into European Union law. In 2006 the European Union replaced its previous regulation, dating from February 1993, with the Waste Shipment Regulation (EC) No 1013/2006 (the WSR), which also unilaterally implemented the Ban Amendment, prohibiting the export of hazardous wastes from European Union member states to any developing (i.e. non-OECD) countries and regulating their export to OECD countries through the Basel Convention's prior informed consent mechanism. When the European Commission attempted to apply the WSR to end-of-life ships, it encountered numerous obstacles and evasion. 
This is because, in enforcing the Ban Amendment, the European Waste WSR considers it illegal to recycle any ship that has started its last voyage from a European Union port in Bangladesh, China, India, or Pakistan, regardless of the flag the ship flies. These four non-OECD countries have consistently recycled around 95% of the world's tonnage. In fact, according to a study conducted by the European Commission in 2011, at least 91% of ships covered by the WSR disobeyed or circumvented its requirements. The European Commission admitted publicly that enforcing its own Waste Shipment Regulation to recycle ships had not been successful. The commission, unable to wait for the HKC to take effect, began developing new legislation to regulate the recycling of European-flagged ships. This led the European Commission in 2012 to propose the development of a new European Regulation on Ship Recycling. The Hong Kong Convention To overcome the difficulties of the Basel Convention in terms of the inordinate time and effort required in gaining the consent of all countries involved in its due time, and to highlight regulations that this convention left out, its governing body requested the International Maritime Organisation for a newer convention in 2004. Thus, the Hong Kong Convention came into existence. In essence, the Convention aims to ensure that ships, when being recycled after reaching the end of their operational lives, do not pose any unnecessary risks to human health, safety and the environment. The convention covers regulations including: the design, construction, operation and preparation of ships to facilitate safe and environmentally sound recycling, without compromising the safety and operational efficiency of ships; the operation of ship recycling facilities in a safe and environmentally sound manner; and the establishment of an appropriate enforcement mechanism for ship recycling (certification/reporting requirements). With much more sound standards of ship recycling, easier implementation and better supervision, the Hong Kong Convention was finally adopted in 2009. However, the convention will only come into universal force 24 months after the date on which the following conditions are met: ratification or accession by 15 States, the fleet of the States that have ratified or acceded to represent at least 40 percent of world merchant shipping by gross tonnage, and the combined maximum annual ship recycling volume of the States during the preceding 10 years to constitute not less than 3 percent of the gross tonnage of the combined merchant shipping of the same States. As of 2 April 2023, 20 countries have acceded to the HKC, making up 30.16% of the world's merchant shipping by gross tonnage, with a combined maximum annual ship recycling volume of the States at 2.6% of the gross tonnage of the combined merchant shipping of the same States. This leaves the second and third conditions yet to be fulfilled for the HKC to enter into force. Nearly 96 of India's 120 operational ship recycling yards have achieved Statements of Compliance (SoC) with the Hong Kong Convention by various IACS class societies—including ClassNK, IRClass, Lloyd's Register and RINA. In addition, a yard in Chattogram, Bangladesh has also become the first one to achieve an SoC by ClassNK in January 2020, having first achieved a RINA SoC in 2017. 
Furthermore, to encourage the growth of India's vital ship recycling sector, in November 2019 the Government of India acceded to the Hong Kong Convention for Safe and Environmentally Sound Recycling of Ships and became the only South Asian country and major ship recycling destination so far to take such a step. The EU Ship Recycling Regulation The work on the EU's Ship Recycling Regulation (SRR) was started in 2013, after the adoption of the requirements from the Hong Kong International Convention for the safe and environmentally sound recycling of ships (HKC). However, it differs from the HKC in the way yards are authorised and in its list of inventories of hazardous materials, or IHM. The argument for developing a dedicated regulation for ship recycling in the European Union was that the EU had noticed how many EU ships ended up in unsustainable recycling facilities. Europeans own around 40% of the world fleet, around 15,000 ships. Among these, around 10,000 fly an EU Member-State flag, but only 7% of the EU-flagged ships are dismantled in EU territory; the rest are mostly dismantled in South Asia. After the financial crisis in the early 2010s, many ship owners ended up with an unexpected overcapacity of ships and were selling off their vessels. The phasing out of single-hull oil tankers also prompted owners to renew their fleets. The EU's new FuelEU Maritime initiative is also incentivising decarbonization of the maritime sector, requiring ships to sail on renewable or less polluting fuels. This implies that the EU fleet has already undergone, and will continue to undergo, major changes and renewals in the coming years, leaving many ships outdated or phased out and raising concern over the end-of-life handling of these older vessels. The SRR aims to address the environmental and health hazards associated with ship dismantling by setting high standards for EU-flagged vessels at the end of their operational lives. One of the key components developed by the EU is the European List of Approved Ship Recycling Facilities, identifying the approved facilities in which all EU-flagged ships are to be recycled. For a ship recycling yard to be included in the list, the facility must comply with strict environmental and worker safety standards, reducing toxic waste release and promoting safe dismantling practices. Member States report to the Commission which facilities in their territory comply with the requirements, and these are thereby included on the list. Shipyards outside the EU can also be included on the European List but must apply to the Commission with proof of the yard's standards. To be included on the European List, ship recycling facilities must adhere to specific requirements set by the EU and aligned with the Hong Kong Convention and other international guidelines. Facilities need authorization, robust structural and operational standards, environmental safety protocols, and measures for monitoring health and safety risks to workers and nearby populations. This includes handling hazardous materials on impermeable surfaces, training workers and providing them with protective equipment, implementing emergency plans, and recording incidents. Operators must also submit recycling plans and completion reports, ensuring full compliance and minimizing environmental and health impacts during ship recycling activities. As of November 2024, the list contains 45 shipyards. 
Because the list works as a guarantee of a yard's safety and validity, shipyards can be removed from the list as well as added to it if they cease to comply with the regulation. In addition to the list of approved facilities, the SRR also mandates each ship to hold an Inventory of Hazardous Materials (IHM), listing hazardous substances used in each ship's construction. 'Hazardous material' here means any material or substance which is liable to create hazards to human health and/or the environment. New installations of materials such as asbestos and ozone-depleting substances are prohibited, and the occurrence of materials containing lead, mercury and radioactive substances, to name a few, must be reported and restricted. This inventory, which must be maintained throughout the ship's life, helps guide shipyards and recyclers on safe waste management and reduces accidental environmental contamination. Ships also report on operationally generated waste, meaning the wastewater and residues generated by the normal operation of ships. By EU standards, any EU ship going for dismantling, all new European ships, and third-country ships stopping in EU ports need to have an inventory of hazardous materials on board. The European List, as of 27 July 2023, contains 48 ship-recycling facilities, including 38 yards in Europe (EU, Norway and UK), 9 yards in Turkey and 1 yard in the USA. Several yards on the European List are also capable of recycling large vessels. The list excludes some of the largest ship recycling yards in India and Bangladesh, which have achieved SoCs with the HKC from various class societies. This exclusion has led to many ship owners changing the flag of their vessel before recycling, or selling the ship to cash buyers, in order to evade the regulations. Excluded countries strive towards bringing the HKC into force as the universal regulation, arguing that it would be irrational if international shipping were regulated by multiple and competing standards. Dangers Health risks Seventy percent of ships are simply run ashore in developing countries for disassembly, where (particularly in older vessels) potentially toxic materials such as asbestos, lead, polychlorinated biphenyls and heavy metals, along with lax industrial safety standards, pose a danger for the workers. Burns from explosions and fire, suffocation, mutilation from falling metal, cancer and disease from toxins are regular occurrences in the industry. Asbestos was used heavily in ship construction until it was finally banned in most of the developed world in the mid-1980s. Currently, the costs associated with removing asbestos, along with the potentially expensive insurance and health risks, have meant that ship breaking in most developed countries is no longer economically viable. Dangerous vapours and fumes from burning materials can be inhaled, and dusty asbestos-laden areas are commonplace. Removing the metal for scrap can potentially cost more than the value of the scrap metal itself. In the developing world, however, shipyards can operate without the risk of personal injury lawsuits or workers' health claims, meaning many of these shipyards may operate with high health risks. Protective equipment is sometimes absent or inadequate. The sandy beaches cannot sufficiently support the heavy equipment, which is thus prone to collapse. Many are injured from explosions when flammable gas is not removed from fuel tanks. In Bangladesh, a local watchdog group claims that, on average, one worker dies per week and one is injured per day. 
The problem is caused by negligence from national governments, shipyard operators and former ship owners disregarding the Basel Convention. According to the Institute for Global Labour and Human Rights, workers who attempt to unionize are fired and then blacklisted. The employees have no formal contract or any rights, and sleep in over-crowded hostels. The authorities produce no comprehensive injury statistics, so the problem is underestimated. Child labour is also widespread: 20% of Bangladesh's ship breaking workforce are below 15 years of age, mainly involved in cutting with gas torches. There has, however, been an active ship-breakers' union in Mumbai, India (the Mumbai Port Trust Dock and General Employees' Union) since 2003, with 15,000 members, which strikes to ensure fatality compensation. It has set up a sister branch in Alang, gaining paid holidays and safety equipment for workers since 2005. They hope to expand all along the South Asian coastline. The poor occupational safety record even at Alang, the world's largest ship recycling destination, underlines the risks of the ship breaking industry. Poor worker safety has led to a number of accidents and dozens of worker deaths at the Alang yards over the years. According to the IndustriALL Global Union affiliate, the Alang Sosiya Ship Recycling and General Workers' Association (ASSRGWA), between January 2009 and October 2012 at least 54 workers died in work-related accidents at the Alang shipbreaking yards. Besides attracting the attention of worker unions, such frequent accidents due to poor safety standards at Alang have also attracted EU scrutiny. In Alang, measures taken to mitigate the hazards of the work include safety awareness drives with hoardings, posters and films, training programmes for different categories of workers under the Safety Training and Labour Welfare Institute, safety evaluations by external teams, and personal protective equipment (PPE) such as gloves, gumboots, goggles and masks. In addition to this, the Gujarat Maritime Board (GMB) has also introduced regular medical examinations of workers exposed to bio-hazardous materials, provision of medical facilities at the Red Cross Hospital in Alang, mobile medical vans and health awareness programmes. Several United Nations committees are increasing their coverage of ship-breakers' human rights. In 2006, the International Maritime Organisation developed legally binding global legislation which concerns vessel design, vessel recycling and the enforcement of regulation thereof, as well as a 'Green Passport' scheme. Water-craft must have an inventory of hazardous material before they are scrapped, and the facilities must meet health & safety requirements. The International Labour Organization created a voluntary set of guidelines for occupational safety in 2003. Nevertheless, Greenpeace found that even pre-existing mandatory regulation has had little noticeable effect for labourers, due to government corruption, yard owner secrecy and a lack of interest from countries who prioritise economic growth. There are also guards who look out for any reporters. To safeguard worker health, the report recommends that developed countries create a fund to support workers' families, certify carriers as 'gas-free' (i.e. safe for cutting) and remove toxic materials in appropriate facilities before export. To supplement the international treaties, organisations such as the NGO Shipbreaking Platform, the Institute for Global Labour and Human Rights and ToxicsWatch Alliance are lobbying for improvements in the industry. 
Environmental risks In recent years, ship breaking has become an issue of environmental concern beyond the health of the yard workers. Many ship breaking yards operate in developing nations with lax or no environmental law, enabling large quantities of highly toxic materials to escape into the general environment and causing serious health problems among ship-breakers, the local population and wildlife. Environmental campaign groups such as Greenpeace have made the issue a high priority for their activities. Along the Indian subcontinent, ecologically important mangrove forests, a valuable source of protection from tropical storms and monsoons, have been cut down to provide space for water-craft disassembly. In Bangladesh, for example, 40,000 mangrove trees were illegally chopped down in 2009. The World Bank has found that the country's beaching locations are now at risk from sea level rise. Twenty-one fish and crustacean species have been wiped out in the country as a result of the industry as well. Lead, organotins such as tributyltin in anti-fouling paints, polychlorinated organic compounds, by-products of combustion such as polycyclic aromatic hydrocarbons, dioxins and furans are found in ships and pose a great danger to the environment. The Basel Convention on the Control of Trans-boundary Movements of Hazardous Wastes and Their Disposal of 1989 has been ratified by 166 countries, including India and Bangladesh, and in 2004, End of Life Ships were subjected to its regulations. It aims to stop the transportation of dangerous substances to less-developed countries and mandate the use of regulated facilities. Furthermore, the decision to scrap a ship is often made in international waters, where the convention has no jurisdiction. The Hong Kong Convention is a compromise. It allows ships to be exported for recycling, as long as various stipulations are met: All water-craft must have an inventory and every shipyard needs to publish a recycling plan to protect the environment. The Hong Kong Convention was adopted in 2009 but with few countries signing the agreement. However, nearly 96 of the 120 ship recycling yards in India have achieved Statements of Compliance (SoC) with the Hong Kong Convention by various IACS class societies—including ClassNK, IRClass, Lloyd's Register and RINA. In addition, a yard in Chattogram, Bangladesh has also become the first one to achieve an SoC by ClassNK in January 2020, having first achieved a RINA SoC in 2017. Furthermore, to encourage the growth of India's vital ship recycling sector, in November 2019 the Government of India acceded to the Hong Kong Convention for Safe and Environmentally Sound Recycling of Ships and became the only South Asian country and major ship recycling destination so far to take such a positive step. In March 2012 the European Commission proposed tougher regulations to ensure all parties take responsibility. Under these rules, if a vessel has a European flag, it must be disposed of in a shipyard on an EU "green list". The facilities would have to show that they are compliant, and it would be regulated internationally in order to bypass corrupt local authorities. However, there is evidence of ship owners changing the flag to evade the regulations. China's scrap industry has vehemently protested against the proposed European regulations. Although Chinese recycling businesses are less damaging than their South Asian counterparts, European and American ship-breakers comply with far more stringent legislation. 
That being said, ship recycling yard owners have made investments into upgrading their recycling facilities and safety infrastructure in the recent past, including 100% impervious floors with drainage systems, setting up of hazardous waste processing facilities like the Common Hazardous Wastes Treatment, Storage and Disposal Facility (CHW-TSDF) in Alang, and adherence to various internationally recognised conventions. The ship recycling industry also produces about 4.5 million tons of re-rollable steel per year. That comes up to nearly 2% of total steel produced in India, coming from a process that does not exploit natural resources and thereby saves non-renewable natural resources and energy. Recycling of one ton of scrap saves 1.1 tons of iron ore, 0.6–0.7 tons of coking coal and around 0.2–0.3 tons of fluxes. Specific energy consumption for production of steel through BF-BOF (primary) and EAF & IF (secondary) routes is 14 MJ/kg and 11.7 MJ/kg, respectively. Thus, it leads to savings in energy by 16–17%. It also reduces the water consumption and GHG emission by 40% and 58%, respectively. List of ship-breaking yards The following are some of the world's largest ship breaking yards: Bangladesh Chittagong Ship Breaking Yard at Chittagong Belgium Galloo, Ghent, formerly Van Heyghen Recycling China Changjiang Ship Breaking Yard, located in Jiangyin, China India As of January 2020, India has a 30% share of ship breaking. Once India passes the planned "Recycling of Ships Act, 2019" which ratifies the Hong Kong International Convention for the safe and environmentally sound recycling of ships, ships not coming for breaking to India from the treaty nations of USA, Europe and Japan could also begin arriving in India, with the potential to double its global share of ship breaking to 60%. If that happens, it would also double India's annual ship breaking revenue to US$2.2 billion. Alang Ship Breaking Yard Steel Industrials Kerala Limited Pakistan Gadani Ship Breaking Yard Turkey Aliaga Ship Breaking Yard, at Aliağa United Kingdom Able UK, Graythorpe Dock, Teesside United States SA Recycling, Brownsville, Texas International Shipbreaking, Brownsville, Texas Mare Island Dry Docks, Vallejo, California See also Bo'ness Clemenceau disposal controversy Flotsam, jetsam, lagan and derelict Marine debris Marine pollution Shipbreakers (film), by the National Film Board of Canada Ship decommissioning Ship Breaker, a young-adult novel by Paolo Bacigalupi Scrap Wrecking (shipwreck) List of dry docks List of the largest shipbuilding companies List of shipbuilders and shipyards Hardspace: Shipbreaker, a video game based on the ship breaking profession set in space Israel Shipyards References Further reading Contains an extensive section on the shipbreaking industry in India and Bangladesh. Ships scrapped include Mauretania and much of the German Fleet at Scapa Flow. Ships listed with owners and dates sold. Breaking Ships follows the demise of the Asian Tiger, a ship destroyed at one of the twenty ship-breaking yards along the beaches of Chittagong. BBC Bangladesh correspondent Roland Buerk takes us through the process—from beaching the vessel to its final dissemination, from wealthy shipyard owners to poverty-stricken ship cutters, and from the economic benefits for Bangladesh to the pollution of its once pristine beaches and shorelines. Analysis of the economics of shipbreaking, the status of worldwide reform efforts, and occupational health and safety of shipbreaking including results of interviewing Alang shipbreakers. Siddiquee, N.A. 2004. Impact of ship breaking on marine fish diversity of the Bay of Bengal. DFID SUFER Project, Dhaka, Bangladesh. 46 pp. Siddiquee, N. A., Parween, S., and Quddus, M. M. A., Barua, P., 2009. 'Heavy Metal Pollution in sediments at ship breaking area of Bangladesh'. Asian Journal of Water, Environment and Pollution, 6 (3): 7–12. External links NGO Platform on Shipbreaking OSHA Fact Sheet – Shipbreaking Bangladesh ship breaking photos The Ship-Breakers at National Geographic. Ship breaking by Drachinifel Demolition Ship disposal Vehicle recycling
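As a quick sanity check of the energy figures quoted in the recycling-benefits paragraph above (14 MJ/kg for the primary BF-BOF route versus 11.7 MJ/kg for the scrap-based secondary route), the short Python sketch below simply reproduces the stated percentages; it is illustrative only, and the midpoint values used for the raw-material ranges are assumptions.

```python
# Figures quoted in the article: specific energy consumption for steelmaking.
primary_mj_per_kg = 14.0    # BF-BOF (primary route)
secondary_mj_per_kg = 11.7  # EAF/IF (secondary route, using scrap)

saving = (primary_mj_per_kg - secondary_mj_per_kg) / primary_mj_per_kg
print(f"Energy saving from using scrap: {saving:.1%}")  # ~16.4%, consistent with the 16-17% cited

# Per-tonne raw-material savings quoted in the article (midpoints assumed for the ranges).
iron_ore_saved_t = 1.1
coking_coal_saved_t = 0.65   # midpoint of the 0.6-0.7 t range
fluxes_saved_t = 0.25        # midpoint of the 0.2-0.3 t range
total_saved_t = iron_ore_saved_t + coking_coal_saved_t + fluxes_saved_t
print(f"Raw materials saved per tonne of scrap: {total_saved_t:.2f} t")
```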
Ship breaking
[ "Engineering" ]
8,116
[ "Construction", "Demolition" ]
1,038,048
https://en.wikipedia.org/wiki/Iterated%20logarithm
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation: log* n := 0 if n ≤ 1, and log* n := 1 + log*(log n) if n > 1. In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e^(1/e) ≈ 1.444, not only for base 2 and base e. The "super-logarithm" function is "essentially equivalent" to the base-b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration. Analysis of algorithms The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as: Finding the Delaunay triangulation of a set of points knowing the Euclidean minimum spanning tree: randomized O(n log* n) time. Fürer's algorithm for integer multiplication: O(n log n 2^(O(log* n))). Finding an approximate maximum (element at least as large as the median): log* n − 1 ± 3 parallel operations. Richard Cole and Uzi Vishkin's distributed algorithm for 3-coloring an n-cycle: O(log* n) synchronous communication rounds. The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself, or repeats of it. This is because tetration grows much faster than iterated exponentiation, so its inverse, the iterated logarithm, grows much more slowly than the repeated logarithm. For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5. Higher bases give smaller iterated logarithms. Other applications The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times someone must replace the number by the sum of its digits before reaching its digital root, is O(log* n). In computational complexity theory, Santhanam shows that the computational resources DTIME — computation time for a deterministic Turing machine — and NTIME — computation time for a non-deterministic Turing machine — are distinct up to a bound involving log* n. See also Inverse Ackermann function, an even more slowly growing function also used in computational complexity theory References Asymptotic analysis Logarithms
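To make the recurrence concrete, here is a minimal Python sketch of the base-2 iterated logarithm; the function name lg_star and the sample values are chosen for illustration only and are not part of any standard library.

```python
import math

def lg_star(n: float) -> int:
    """Base-2 iterated logarithm: how many times log2 must be applied
    before the result is less than or equal to 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

# The function grows extremely slowly:
# lg_star(2) == 1, lg_star(4) == 2, lg_star(16) == 3, lg_star(65536) == 4,
# and lg_star of anything up to 2**65536 is at most 5.
if __name__ == "__main__":
    for value in (2, 4, 16, 65536, 2.0 ** 100):
        print(value, lg_star(value))
```

In practice, exact comparisons of this kind are usually done with integer arithmetic, since floating-point log2 can misplace the boundary for very large inputs; the sketch above ignores that detail for brevity.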
Iterated logarithm
[ "Mathematics" ]
581
[ "E (mathematical constant)", "Mathematical analysis", "Logarithms", "Asymptotic analysis" ]
1,038,051
https://en.wikipedia.org/wiki/Stroop%20effect
In psychology, the Stroop effect is the delay in reaction time between congruent and incongruent stimuli. The effect has been used to create a psychological test (the Stroop test) that is widely used in clinical practice and investigation. A basic task that demonstrates this effect occurs when there is a mismatch between the name of a color (e.g., "blue", "green", or "red") and the color it is printed in (i.e., the word "red" printed in blue ink instead of red ink, thus red). Typically, when a person is asked to name the color of the word, they take longer and are more prone to errors when the color of the ink does not match the name of the color. The effect is named after John Ridley Stroop, who first published the effect in English in 1935. The effect had previously been published in Germany in 1929 by other authors. The original paper by Stroop has been one of the most cited papers in the history of experimental psychology, leading to more than 700 Stroop-related articles in literature. Original experiment The effect was named after John Ridley Stroop, who published the effect in English in 1935 in an article in the Journal of Experimental Psychology entitled "Studies of interference in serial verbal reactions" that includes three different experiments. However, the effect was first published in 1929 in Germany by Erich Rudolf Jaensch, and its roots can be followed back to works of James McKeen Cattell and Wilhelm Maximilian Wundt in the nineteenth century. In his experiments, Stroop administered several variations of the same test for which three different kinds of stimuli were created: Names of colors appeared in black ink; Names of colors in a different ink than the color named; and Squares of a given color. In the first experiment, words and conflict-words were used. The task required the participants to read the written color names of the words independently of the color of the ink (for example, they would have to read "purple" no matter what the color of the font). In experiment 2, stimulus conflict-words and color patches were used, and participants were required to say the ink-color of the letters independently of the written word with the second kind of stimulus and also name the color of the patches. If the word "purple" was written in red font, they would have to say "red", rather than "purple". When the squares were shown, the participant spoke the name of the color. Stroop, in the third experiment, tested his participants at different stages of practice at the tasks and stimuli used in the first and second experiments, examining learning effects. Unlike researchers now using the test for psychological evaluation, Stroop used only the three basic scores, rather than more complex derivative scoring procedures. Stroop noted that participants took significantly longer to complete the color reading in the second task than they had taken to name the colors of the squares in Experiment 2. This delay had not appeared in the first experiment. Such interference were explained by the automation of reading, where the mind automatically determines the semantic meaning of the word (it reads the word "red" and thinks of the color "red"), and then must intentionally check itself and identify instead the color of the word (the ink is a color other than red), a process that is not automated. Experimental findings Stimuli in Stroop paradigms can be divided into three groups: neutral, congruent and incongruent. 
Neutral stimuli are those stimuli in which only the text (similarly to stimuli 1 of Stroop's experiment), or color (similarly to stimuli 3 of Stroop's experiment) are displayed. Congruent stimuli are those in which the ink color and the word refer to the same color (for example the word "pink" written in pink). Incongruent stimuli are those in which ink color and word differ. Three experimental findings are recurrently found in Stroop experiments. A first finding is semantic interference, which states that naming the ink color of neutral stimuli (e.g. when the ink color and word do not interfere with each other) is faster than in incongruent conditions. It is called semantic interference since it is usually accepted that the relationship in meaning between ink color and word is at the root of the interference. The second finding, semantic facilitation, explains the finding that naming the ink of congruent stimuli is faster (e.g. when the ink color and the word match) than when neutral stimuli are present (e.g. stimulus 3; when only a colored square is shown). The third finding is that both semantic interference and facilitation disappear when the task consists of reading the word instead of naming the ink color. It has been sometimes called Stroop asynchrony, and has been explained by a reduced automatization when naming colors compared to reading words. In the study of interference theory, the most commonly used procedure has been similar to Stroop's second experiment, in which subjects were tested on naming colors of incompatible words and of control patches. The first experiment in Stroop's study (reading words in black versus incongruent colors) has been discussed less. In both cases, the interference score is expressed as the difference between the times needed to read each of the two types of cards. Instead of naming stimuli, subjects have also been asked to sort stimuli into categories. Different characteristics of the stimulus such as ink colors or direction of words have also been systematically varied. None of all these modifications eliminates the effect of interference. Neuroanatomy Brain imaging techniques including magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET) have shown that there are two main areas in the brain that are involved in the processing of the Stroop task. They are the anterior cingulate cortex, and the dorsolateral prefrontal cortex. More specifically, while both are activated when resolving conflicts and catching errors, the dorsolateral prefrontal cortex assists in memory and other executive functions, while the anterior cingulate cortex is used to select an appropriate response and allocate attentional resources. The posterior dorsolateral prefrontal cortex creates the appropriate rules for the brain to accomplish the current goal. For the Stroop effect, this involves activating the areas of the brain involved in color perception, but not those involved in word encoding. It counteracts biases and irrelevant information, for instance, the fact that the semantic perception of the word is more striking than the color in which it is printed. Next, the mid-dorsolateral prefrontal cortex selects the representation that will fulfill the goal. The relevant information must be separated from irrelevant information in the task; thus, the focus is placed on the ink color and not the word. 
Furthermore, research has suggested that left dorsolateral prefrontal cortex activation during a Stroop task is related to an individual's’ expectation regarding the conflicting nature of the upcoming trial, and not so much on the conflict itself. Conversely, the right dorsolateral prefrontal cortex aims to reduce the attentional conflict and is activated after the conflict is over. Moreover, the posterior dorsal anterior cingulate cortex is responsible for what decision is made (i.e. whether someone will say the written word or the ink color). Following the response, the anterior dorsal anterior cingulate cortex is involved in response evaluation—deciding whether the answer is correct or incorrect. Activity in this region increases when the probability of an error is higher. Theories There are several theories used to explain the Stroop effect, which are commonly known as "race models". This is based on the underlying notion that both relevant and irrelevant information are processed in parallel, but that they "race" to enter the single central processor during response selection. They are: Processing speed This theory, also called Relative Speed of Processing Theory, suggests there is a lag in the brain's ability to recognize the color of the word since the brain reads words faster than it recognizes colors. This is based on the idea that word processing is significantly faster than color processing. In a condition where there is a conflict regarding words and colors (e.g., Stroop test), if the task is to report the color, the word information arrives at the decision-making stage before the color information which presents processing confusion. Conversely, if the task is to report the word, because color information lags after word information, a decision can be made ahead of the conflicting information. Selective attention The Selective Attention Theory suggests that color recognition, as opposed to reading a word, requires more attention. The brain needs to use more attention to recognize a color than to encode a word, so it takes a little longer. The responses lend much to the interference noted in the Stroop task. This may be a result of either an allocation of attention to the responses or to a greater inhibition of distractors that are not appropriate responses. Automaticity This theory is the most common theory of the Stroop effect. It suggests that since recognizing colors is not an "automatic process" there is hesitancy to respond, whereas, in contrast, the brain automatically understands the meanings of words as a result of habitual reading. This idea is based on the premise that automatic reading does not need controlled attention, but still uses enough attentional resources to reduce the amount of attention accessible for color information processing. Stirling (1979) introduced the concept of response automaticity. He demonstrated that changing the responses from colored words to letters that were not part of the colored words increased reaction time while reducing Stroop interference. Parallel distributed processing This theory suggests that as the brain analyzes information, different and specific pathways are developed for different tasks. Some pathways, such as reading, are stronger than others, therefore, it is the strength of the pathway and not the speed of the pathway that is important. 
In addition, automaticity is a function of the strength of each pathway, hence, when two pathways are activated simultaneously in the Stroop effect, interference occurs between the stronger (word reading) path and the weaker (color naming) path, more specifically when the pathway that leads to the response is the weaker pathway. Cognitive development In the neo-Piagetian theories of cognitive development, several variations of the Stroop task have been used to study the relations between speed of processing and executive functions with working memory and cognitive development in various domains. This research shows that reaction time to Stroop tasks decreases systematically from early childhood through early adulthood. These changes suggest that speed of processing increases with age and that cognitive control becomes increasingly efficient. Moreover, this research strongly suggests that changes in these processes with age are very closely associated with development in working memory and various aspects of thought. The Stroop task also shows the ability to control behavior. If asked to state the color of the ink rather than the word, the participant must overcome the initial, stronger impulse to read the word. This inhibition shows the brain's ability to regulate behavior. Uses The Stroop effect has been widely used in psychology. Among the most important uses is the creation of validated psychological tests based on the Stroop effect, which permit measurement of a person's selective attention capacity and skills, as well as their processing speed. It is also used in conjunction with other neuropsychological assessments to examine a person's executive processing abilities, and can help in the diagnosis and characterization of different psychiatric and neurological disorders. Researchers also use the Stroop effect during brain imaging studies to investigate regions of the brain that are involved in planning, decision-making, and managing real-world interference (e.g., texting and driving). Stroop test The Stroop effect has been used to investigate a person's psychological capacities; since its discovery during the twentieth century, it has become a popular neuropsychological test. There are different test variants commonly used in clinical settings, with differences between them in the number of subtasks, type and number of stimuli, times for the task, or scoring procedures. All versions have at least two subtasks. In the first trial, the written color name differs from the color of the ink it is printed in, and the participant must say the written word. In the second trial, the participant must name the ink color instead. However, there can be up to four different subtasks, adding in some cases stimuli consisting of groups of letters "X" or dots printed in a given color with the participant having to say the color of the ink; or names of colors printed in black ink that have to be read. The number of stimuli varies from fewer than twenty items to more than 150, being closely related to the scoring system used. While in some test variants the score is the number of items from a subtask read in a given time, in others it is the time that it took to complete each of the trials. The number of errors and various derived scores are also taken into account in some versions. This test is considered to measure selective attention, cognitive flexibility and processing speed, and it is used as a tool in the evaluation of executive functions. 
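The trial structure and scoring just described lend themselves to a simple computational illustration. The following Python sketch shows one way congruent and incongruent trials might be generated and a basic interference score computed from recorded response times; the stimulus set, field names and scoring formula are assumptions made for this example and do not correspond to any standardized clinical version of the test.

```python
import random
from statistics import mean

COLORS = ["red", "green", "blue", "yellow"]  # illustrative stimulus set

def make_trial(congruent: bool) -> dict:
    """Build one Stroop trial: a color word shown in matching or clashing ink."""
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "congruent": congruent}

def interference_score(results: list) -> float:
    """Mean response time on incongruent trials minus mean on congruent trials.

    Each result is a dict with keys 'congruent' (bool) and 'rt' (seconds);
    a larger positive value indicates greater Stroop interference."""
    congruent_rts = [r["rt"] for r in results if r["congruent"]]
    incongruent_rts = [r["rt"] for r in results if not r["congruent"]]
    return mean(incongruent_rts) - mean(congruent_rts)

# Generate a short alternating block of trials for display purposes.
trials = [make_trial(congruent=(i % 2 == 0)) for i in range(6)]
for t in trials:
    print(f"Say the ink color: the word '{t['word']}' shown in {t['ink']}")

# Example scoring with made-up response times (seconds).
recorded = [
    {"congruent": True, "rt": 0.62}, {"congruent": True, "rt": 0.66},
    {"congruent": False, "rt": 0.81}, {"congruent": False, "rt": 0.92},
]
print(f"Interference: {interference_score(recorded):.3f} s")
```

The difference-of-means score used here is only one of several possibilities; as noted above, clinical versions may instead count items completed in a fixed time or use derived scores.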
An increased interference effect is found in disorders such as brain damage, dementias and other neurodegenerative diseases, attention-deficit hyperactivity disorder, or a variety of mental disorders such as schizophrenia, addictions, and depression. Ergonomists have even shown a relationship between the ergonomic characteristics of educational furniture and the number of cognitive errors on the Stroop test: error percentages were lower when students used a separate chair and desk rather than a chair with an attached arm table. Variations The Stroop test has additionally been modified to include other sensory modalities and variables, to study the effect of bilingualism, or to investigate the effect of emotions on interference. Warped words For example, the warped words Stroop effect produces findings similar to the original Stroop effect. Much like in the Stroop task, the color named by the printed word differs from the ink color in which it is shown; however, the words are printed in such a way that they are more difficult to read (typically curve-shaped). The idea is that the way the words are printed slows down both the brain's reaction and processing time, making it harder to complete the task. Emotional The emotional Stroop effect serves as an information processing approach to emotions. In an emotional Stroop task, an individual is given negative emotional words like "grief", "violence", and "pain" mixed in with more neutral words like "clock", "door", and "shoe". Just like in the original Stroop task, the words are colored and the individual is supposed to name the color. Research has revealed that individuals who are depressed are slower to name the color of a negative word than the color of a neutral word. While both the emotional Stroop and the classic Stroop involve the need to suppress irrelevant or distracting information, there are differences between the two. The emotional Stroop effect emphasizes the conflict between the emotional relevance to the individual and the word; whereas the classic Stroop effect examines the conflict between the incongruent color and word. The emotional Stroop effect has been used in psychology to test implicit biases such as racial bias via an implicit-association test. A notable study of this is Project Implicit from Harvard University, which administered a test associating negative or positive emotions with pictures of race and measured the reaction time to determine racial preference. Spatial The spatial Stroop effect demonstrates interference between the location of the stimulus and the location information conveyed by the stimulus itself. In one version of the spatial Stroop task, an up or down-pointing arrow appears randomly above or below a central point. Despite being asked to discriminate the direction of the arrow while ignoring its location, individuals typically make faster and more accurate responses to congruent stimuli (i.e., a down-pointing arrow located below the fixation sign) than to incongruent ones (i.e., an up-pointing arrow located below the fixation sign). A similar effect, the Simon effect, uses non-spatial stimuli. Numerical The numerical Stroop effect demonstrates the close relationship between numerical values and physical sizes. Digits symbolize numerical values but they also have physical sizes. A digit can be presented as big or small (e.g., a large 5 vs. a small 5), irrespective of its numerical value. 
Comparing digits in incongruent trials (e.g., a physically larger 3 paired with a physically smaller 5) is slower than comparing digits in congruent trials (e.g., a physically larger 5 paired with a physically smaller 3), and the difference in reaction time is termed the numerical Stroop effect. The effect of irrelevant numerical values on physical comparisons (similar to the effect of irrelevant color words on responding to colors) suggests that numerical values are processed automatically (i.e., even when they are irrelevant to the task). Reverse Another variant of the classic Stroop effect is the reverse Stroop effect. It occurs during a pointing task. In a reverse Stroop task, individuals are shown a page with a black square containing an incongruently colored word in the middle (for instance, the word "red" written in green ink), with four smaller colored squares in the corners. One square would be colored green, one square would be red, and the two remaining squares would be other colors. Studies show that if the individual is asked to point to the square matching the written color (in this case, red), their response is delayed. Thus, incongruently colored words significantly interfere with pointing to the appropriate square. However, some research has shown there is very little interference from incongruent color words when the objective is to match the color of the word. In popular culture The Brain Age: Train Your Brain in Minutes a Day! software program, produced by Ryūta Kawashima for the Nintendo DS portable video game system, contains an automated Stroop test administrator module translated into game form. MythBusters used the Stroop effect test to see if males and females are cognitively impaired by having an attractive person of the opposite sex in the room. The "myth" (that is, hypothesis) was disproved. A Nova episode used the Stroop effect to illustrate subtle changes in the mental flexibility of Mount Everest climbers in relation to altitude. The 2024 horror game The Outlast Trials by Red Barrels features a "Stroop Test" minigame in which players compete against one another to see who has the better reaction time on the test, with the difficulty steadily increasing the longer they compete. In industry British automotive marque MINI released a vehicle with turn signal indicators designed as arrows that point in the opposite direction to the side on which the turn signal is blinking. References External links Online lesson with online demonstration of the Stroop effect via PsyToolkit An online test based on the Stroop effect will show the level of flexibility of the mind Training the flexibility of the mind based on the effect of the Stroop Cognitive tests Memory tests Perception Neuropsychological tests Psychophysics Cognitive biases
Stroop effect
[ "Physics" ]
3,981
[ "Psychophysics", "Applied and interdisciplinary physics" ]
1,038,273
https://en.wikipedia.org/wiki/Planetary%20engineering
Planetary engineering is the development and application of technology for the purpose of influencing the environment of a planet. Planetary engineering encompasses a variety of methods such as terraforming, seeding, and geoengineering. Widely discussed in the scientific community, terraforming refers to the alteration of other planets to create a habitable environment for terrestrial life. Seeding refers to the introduction of life from Earth to habitable planets. Geoengineering refers to the engineering of a planet's climate, and has already been applied on Earth. Each of these methods comprises varying approaches and carries differing levels of feasibility and ethical concern. Terraforming Terraforming is the process of modifying the atmosphere, temperature, surface topography or ecology of a planet, moon, or other body in order to replicate the environment of Earth. Technologies A common object of discussion on potential terraforming is the planet Mars. To terraform Mars, humans would need to create a new atmosphere, due to the planet's high carbon dioxide concentration and low atmospheric pressure. This might be done by introducing more greenhouse gases, derived from indigenous materials, to raise temperatures from below the freezing point of water. To terraform Venus, carbon dioxide would need to be converted to graphite since Venus receives twice as much sunlight as Earth. This process would only be possible if the greenhouse effect were removed with the use of "high-altitude absorbing fine particles" or a sun shield, creating a more habitable Venus. NASA has defined categories of habitability systems and technologies needed for terraforming to be feasible. These topics include creating power-efficient systems for preserving and packaging food for crews, preparing and cooking foods, dispensing water, and developing facilities for trash and recycling, as well as areas for crew hygiene and rest. Feasibility A variety of planetary engineering challenges stand in the way of terraforming efforts. The atmospheric terraforming of Mars, for example, would require "significant quantities of gas" to be added to the Martian atmosphere. This gas has been thought to be stored in solid and liquid form within Mars' polar ice caps and underground reservoirs. It is unlikely, however, that enough gas for sufficient atmospheric change is present within Mars' polar deposits, and liquid could only be present at warmer temperatures "deep within the crust". Furthermore, sublimating the entire volume of Mars' polar caps would increase its current atmospheric pressure to 15 millibar, whereas an increase to around 1000 millibar would be required for habitability. For reference, Earth's average sea-level pressure is 1013.25 mbar. First formally proposed by astrophysicist Carl Sagan, the terraforming of Venus has since been discussed through methods such as organic molecule-induced carbon conversion, sun reflection, increasing planetary spin, and various chemical means. Due to the high presence of sulfuric acid and solar wind on Venus, which are harmful to organic environments, organic methods of carbon conversion have been found unfeasible. Other methods, such as solar shading, hydrogen bombardment, and magnesium-calcium bombardment, are theoretically sound but would require large-scale resources and space technologies not yet available to humans. Ethical considerations While successful terraforming would allow life to prosper on other planets, philosophers have debated whether this practice is morally sound.
Certain ethics experts suggest that planets like Mars hold an intrinsic value independent of their utility to humanity and should therefore be free from human interference. Also, some argue that the steps necessary to make Mars habitable - such as fusion reactors, space-based solar-powered lasers, or spreading a thin layer of soot on Mars' polar ice caps - would deteriorate the aesthetic value that Mars currently possesses. This calls into question humanity's intrinsic ethical and moral values, as it raises the question of whether humanity is willing to eradicate the current ecosystem of another planet for its own benefit. Through this ethical framework, terraforming attempts on these planets could be seen to threaten their intrinsically valuable environments, rendering these efforts unethical. Seeding Environmental considerations Mars is the primary subject of discussion for seeding. Locations for seeding are chosen based on atmospheric temperature, air pressure, existence of harmful radiation, and availability of natural resources, such as water and other compounds essential to terrestrial life. Developing microorganisms for seeding Natural or engineered microorganisms must be created or discovered that can withstand the harsh environments of Mars. The first organisms used must be able to survive exposure to ionizing radiation and the high concentration of carbon dioxide present in the Martian atmosphere. Later organisms, such as multicellular plants, must be able to withstand the freezing temperatures, tolerate high carbon dioxide levels, and produce significant amounts of oxygen. Microorganisms provide significant advantages over non-biological mechanisms. They are self-replicating, which eliminates the need to transport large machinery to, or manufacture it on, the surface of Mars. They can also perform complicated chemical reactions with little maintenance to realize planet-scale terraforming. Climate engineering Climate engineering is a form of planetary engineering which involves the deliberate and large-scale alteration of the Earth's climate system to combat climate change. Examples of geoengineering are carbon dioxide removal (CDR), which removes carbon dioxide from the atmosphere, and solar radiation modification (SRM), which reflects solar energy back to space. Carbon dioxide removal (CDR) encompasses multiple practices, ranging from the simplest, reforestation, to more complex processes such as direct air capture. The latter is difficult to deploy on an industrial scale, as its high costs and substantial energy usage still need to be addressed. Examples of SRM include stratospheric aerosol injection (SAI) and marine cloud brightening (MCB). When a volcano erupts, small particles known as aerosols proliferate throughout the atmosphere, reflecting the sun's energy back into space. This results in a cooling effect, and humanity could conceivably inject similar aerosols into the stratosphere, spurring large-scale cooling. One proposal for MCB involves spraying a vapor into low-lying sea clouds, creating more cloud condensation nuclei. This would in theory result in the clouds becoming whiter and reflecting light more efficiently. See also Astroengineering Macro-engineering Megascale engineering Moving the Earth Virgin Earth Challenge References Further reading External links Geoengineering: A Worldchanging Retrospective – Overview of articles on geoengineering from the sustainability site Worldchanging Space colonization Engineering disciplines
Planetary engineering
[ "Engineering" ]
1,304
[ "Planetary engineering", "nan" ]
1,038,280
https://en.wikipedia.org/wiki/Climate%20engineering
Climate engineering (or geoengineering, climate intervention) is the intentional large-scale alteration of the planetary environment to counteract anthropogenic climate change. The term has been used as an umbrella term for both carbon dioxide removal and solar radiation modification when applied at a planetary scale. However, these two processes have very different characteristics, and are now often discussed separately. Carbon dioxide removal techniques remove carbon dioxide from the atmosphere, and are part of climate change mitigation. Solar radiation modification is the reflection of some sunlight (solar radiation) back to space to cool the earth. Some publications include passive radiative cooling as a climate engineering technology. The media tends to also use climate engineering for other technologies such as glacier stabilization, ocean liming, and iron fertilization of oceans. The latter would modify carbon sequestration processes that take place in oceans. Some types of climate engineering are highly controversial due to the large uncertainties around effectiveness, side effects and unforeseen consequences. Interventions at large scale run a greater risk of unintended disruptions of natural systems, resulting in a dilemma that such disruptions might be more damaging than the climate damage that they offset. However, the risks of such interventions must be seen in the context of the trajectory of climate change without them. The Union of Concerned Scientists warns that solar radiation modification could become an excuse to slow reductions in fossil fuel emissions and stall progress toward a low-carbon economy, as the technology does not address these root causes of climate change. Terminology Climate engineering (or geoengineering) has been used as an umbrella term for both carbon dioxide removal and solar radiation management, when applied at a planetary scale. However, these two methods have very different geophysical characteristics, which is why the Intergovernmental Panel on Climate Change no longer uses this term. This decision was communicated in around 2018, see for example the Special Report on Global Warming of 1.5 °C. According to climate economist Gernot Wagner the term geoengineering is "largely an artefact and a result of the term's frequent use in popular discourse" and "so vague and all-encompassing as to have lost much meaning". Specific technologies that fall into the climate engineering umbrella term include: Carbon dioxide removal Biochar: Biochar is a high-carbon, fine-grained residue that is produced via pyrolysis Bioenergy with carbon capture and storage (BECCS): the process of extracting bioenergy from biomass and capturing and storing the carbon, thereby removing it from the atmosphere. Direct air capture and carbon storage: a process of capturing carbon dioxide directly from the ambient air (as opposed to capturing from point sources, such as a cement factory or biomass power plant) and generating a concentrated stream of for sequestration or utilization or production of carbon-neutral fuel and windgas. Enhanced weathering: a process that aims to accelerate the natural weathering by spreading finely ground silicate rock, such as basalt, onto surfaces which speeds up chemical reactions between rocks, water, and air. It also removes carbon dioxide () from the atmosphere, permanently storing it in solid carbonate minerals or ocean alkalinity. The latter also slows ocean acidification. 
Solar Radiation Management Marine cloud brightening: a proposed technique that would make clouds brighter, reflecting a small fraction of incoming sunlight back into space in order to offset anthropogenic global warming. Mirrors in space (MIS): satellites that are designed to change the amount of solar radiation that impacts the Earth as a form of climate engineering. Since the conception of the idea by Hermann Oberth in 1923 (developed further in his works of 1929, 1957 and 1978), and renewed proposals in the 1980s, space mirrors have mainly been theorized as a way to deflect sunlight to counter global warming, and they were seriously considered in the 2000s. Stratospheric aerosol injection (SAI): a proposed method to introduce aerosols into the stratosphere to create a cooling effect via global dimming and increased albedo, an effect that occurs naturally after volcanic eruptions. The following methods are not termed climate engineering in the latest IPCC assessment report in 2022 but are included under this umbrella term by other publications on this topic: Passive daytime radiative cooling: this technology increases the Earth's solar reflectance and its thermal emittance in the atmospheric window. Ground-level albedo modification: a process of increasing Earth's albedo by altering features of the Earth's surface. Examples include planting light-colored plants to help reflect sunlight back into space. Glacier stabilization: proposals aiming to slow down or prevent sea level rise caused by the collapse of notable marine-terminating glaciers, such as Jakobshavn Glacier in Greenland or Thwaites Glacier and Pine Island Glacier in Antarctica. It may be possible to bolster some glaciers directly, but blocking the flow of ever-warming ocean water at a distance, allowing it more time to mix with the cooler water around the glacier, is likely to be far more effective. Ocean geoengineering (adding material such as lime or iron to the ocean to affect its ability to sequester carbon dioxide). Technologies Carbon dioxide removal Solar radiation modification Passive daytime radiative cooling Enhancing the solar reflectance and thermal emissivity of Earth in the atmospheric window through passive daytime radiative cooling has been proposed as an alternative or "third approach" to climate engineering that is "less intrusive" and more predictable or reversible than stratospheric aerosol injection. Ocean geoengineering Ocean geoengineering involves modifying the ocean to reduce the impacts of rising temperature. One approach is to add material such as lime or iron to the ocean to increase its ability to support marine life and/or sequester CO2. In 2021 the US National Academies of Sciences, Engineering, and Medicine (NASEM) requested $2.5 billion in funds for research over the following decade, specifically including field tests. Another idea is to reduce sea level rise by installing underwater "curtains" to protect Antarctic glaciers from warming waters, or by drilling holes in ice to pump out water and heat. Ocean liming Enriching seawater with calcium hydroxide (lime) has been reported to lower ocean acidity, which reduces pressure on marine life such as oysters, and to absorb CO2. The added lime raised the water's pH, capturing CO2 in the form of calcium bicarbonate or as carbonate deposited in mollusk shells. Lime is produced in volume for the cement industry. This was assessed in 2022 in an experiment in Apalachicola, Florida, in an attempt to halt declining oyster populations. pH levels increased modestly, and CO2 was reduced by 70 ppm.
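The chemistry behind such alkalinity additions can be sketched in simplified form; real seawater carbonate chemistry involves several coupled equilibria, so the net reactions below are only illustrative: Ca(OH)2 + 2 CO2 → Ca(HCO3)2 for hydrated lime, and NaOH + CO2 → NaHCO3 for sodium hydroxide. In both cases the added hydroxide consumes dissolved CO2 and raises pH, which is why alkalinity enhancement both counters acidification and draws additional CO2 from the atmosphere into the water.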
A 2014 experiment added sodium hydroxide (lye) to part of Australia's Great Barrier Reef. It raised pH levels to nearly preindustrial levels. However, producing alkaline materials typically releases large amounts of CO2, partially offsetting the sequestration. Alkaline additives become diluted and dispersed within one month, without durable effects, such that, if necessary, the program could be ended without leaving long-term effects. Ocean sulfur cycle enhancement Enhancing the natural marine sulfur cycle by fertilizing a small portion with iron—typically considered to be a greenhouse gas remediation method—may also increase the reflection of sunlight. Such fertilization, especially in the Southern Ocean, would enhance dimethyl sulfide production and consequently cloud reflectivity. This could potentially be used as regional SRM, to slow Antarctic ice from melting. Such techniques also tend to sequester carbon, but the enhancement of cloud albedo also appears to be a likely effect. Iron fertilization Submarine forest Another 2022 experiment attempted to sequester carbon using giant kelp planted off the Namibian coast. Whilst this approach has been called ocean geoengineering by the researchers, it is just another form of carbon dioxide removal via sequestration. Other terms used to describe this process are blue carbon management and marine geoengineering. Glacier stabilization Problems and risks Interventions at large scale run a greater risk of unintended disruptions of natural systems, alongside a greater potential for reducing the risks of warming. This raises a question of whether climate interventions might be more or less damaging than the climate damage that they offset. Matthew Watson, of the University of Bristol, led a £5m research study into the potential adverse effects of climate engineering and said in 2014, "We are sleepwalking to a disaster with climate change. Cutting emissions is undoubtedly the thing we should be focusing on but it seems to be failing. Although geoengineering is terrifying to many people, and I include myself in this, [its feasibility and safety] are questions that have to be answered". University of Oxford Professor Steve Rayner is also worried about the adverse effects of climate engineering, especially the potential for people to be too positive about the effects and stop trying to slow the actual problem of climate change. He says, though, that there is a potential reason for doing climate engineering: "People decry doing [climate engineering] as a band aid, but band aids are useful when you are healing". Climate engineering may reduce the urgency of reducing carbon emissions, a form of moral hazard. Also, some approaches would have only temporary effects, which implies rapid rebound if they are not sustained. The Union of Concerned Scientists points to the concern that the use of climate engineering technology might become an excuse not to address the root causes of climate change. However, several public opinion surveys and focus groups reported either a desire to increase emission cuts in the presence of climate engineering, or no effect. Other modelling work suggests that the prospect of climate engineering may in fact increase the likelihood of emissions reduction. If climate engineering can alter the climate, then this raises questions about whether humans have the right to deliberately change the climate, and under what conditions.
For example, using climate engineering to stabilize temperatures is not the same as doing so to optimize the climate for some other purpose. Some religious traditions express views on the relationship between humans and their surroundings that encourage (to conduct responsible stewardship) or discourage (to avoid hubris) explicit actions to affect climate. Society and culture Public perception A large 2018 study used an online survey to investigate public perceptions of six climate engineering methods in the United States, United Kingdom, Australia, and New Zealand. Public awareness of climate engineering was low; less than a fifth of respondents reported prior knowledge. Perceptions of the six climate engineering methods proposed (three from the carbon dioxide removal group and three from the solar radiation modification group) were largely negative and frequently associated with attributes like 'risky', 'artificial' and 'unknown effects'. Carbon dioxide removal methods were preferred over solar radiation modification. Public perceptions were remarkably stable with only minor differences between the different countries in the surveys. Some environmental organizations (such as Friends of the Earth and Greenpeace) have been reluctant to endorse or oppose solar radiation modification, but are often more supportive of nature-based carbon dioxide removal projects, such as afforestation and peatland restoration. Research and projects Several organizations have investigated climate engineering with a view to evaluating its potential, including the US Congress, the US National Academy of Sciences, Engineering, and Medicine, the Royal Society, the UK Parliament, the Institution of Mechanical Engineers, and the Intergovernmental Panel on Climate Change. In 2009, the Royal Society in the UK reviewed a wide range of proposed climate engineering methods and evaluated them in terms of effectiveness, affordability, timeliness, and safety (assigning qualitative estimates in each assessment). The key recommendations reports were that "Parties to the UNFCCC should make increased efforts towards mitigating and adapting to climate change, and in particular to agreeing to global emissions reductions", and that "[nothing] now known about geoengineering options gives any reason to diminish these efforts". Nonetheless, the report also recommended that "research and development of climate engineering options should be undertaken to investigate whether low-risk methods can be made available if it becomes necessary to reduce the rate of warming this century". In 2009, a review examined the scientific plausibility of proposed methods rather than the practical considerations such as engineering feasibility or economic cost. The authors found that "[air] capture and storage shows the greatest potential, combined with afforestation, reforestation and bio-char production", and noted that "other suggestions that have received considerable media attention, in particular, "ocean pipes" appear to be ineffective". They concluded that "[climate] geoengineering is best considered as a potential complement to the mitigation of emissions, rather than as an alternative to it". The IMechE report examined a small subset of proposed methods (air capture, urban albedo and algal-based capture techniques), and its main conclusions in 2011 were that climate engineering should be researched and trialed at the small scale alongside a wider decarbonization of the economy. 
In 2015, the US National Academy of Sciences, Engineering, and Medicine concluded a 21-month project to study the potential impacts, benefits, and costs of climate engineering. The differences between these two classes of climate engineering "led the committee to evaluate the two types of approaches separately in companion reports, a distinction it hopes carries over to future scientific and policy discussions." The resulting study titled Climate Intervention was released in February 2015 and consists of two volumes: Reflecting Sunlight to Cool Earth and Carbon Dioxide Removal and Reliable Sequestration. In June 2023 the US government released a report that recommended conducting research on stratospheric aerosol injection and marine cloud brightening. As of 2024 the Coastal Atmospheric Aerosol Research and Engagement (CAARE) project was launching sea salt into the marine sky in an effort to increase cloud "brightness" (reflective capacity). The sea salt is launched from the USS Hornet Sea, Air & Space Museum (based on the project's regulatory filings). See also Arctic geoengineering Climate justice Earth systems engineering and management Land surface effects on climate List of geoengineering topics Weather modification References Engineering Engineering Emissions reduction Engineering disciplines Planetary engineering
Climate engineering
[ "Chemistry", "Engineering" ]
2,877
[ "Planetary engineering", "Emissions reduction", "Geoengineering", "nan", "Greenhouse gases" ]
1,038,578
https://en.wikipedia.org/wiki/Tristar%20and%20Red%20Sector%20Incorporated
Tristar and Red Sector Incorporated (TRSI) is a demogroup which formed in 1990. It came about from the longest-running cooperation in scene history. RSI existed from 1985, before being joined by the "T" later on. Evolving from the Commodore 64 to the Amiga and later to PC and various game console platforms - like the PlayStation, Xbox, Nintendo - and set-ups like Arduino, Android or Blu-ray, TRSI released a number of digital productions, dedicated to experimenting in phreaking or network alteration. Its members were spread around the world and still contribute to computer scene art and code after more than 27 years of history. History 1985 to 1987 Red Sector Incorporated (RSI) was founded with a focus on the Commodore 64 as a group for cracks, fixes, trainers, packs, intros and demos. The founders were three suppliers from Canada: Bill Best, Greg and Kangol Kid. After the initial formation in the spring of 1985, The Skeleton and Baudsurfer set up RSI's first domain, "The Pirates Ship" BBS the following summer, which eventually became the "Dawn of Eternity" BBS. At the end of the year, Irata and Mister Zeropage were asked to join the group and set up a European section in Germany, with Irata being the group's main trader, followed by additional importers in the United States. Slogans at the time included "No risk, no fun - Red Sector Number One" and "Red Sector - The Leading Force". The Light Circle was a basis for several future European members of RSI, a coalition of Radwar Enterprises, Cracking Force Berlin and Flash Cracking Formation. Red Sector Incorporated first released on the Amiga 1000 in 1986. During the summer, RSI decided to concentrate on the Amiga and formed an Amiga division. In the beginning of 1987, Red Sector's Commodore 64 section became dormant, to bundle forces on the Amiga. At this time, RSI's first Amiga demo was coded by HQC and released to the Scene. It was followed by the second, "Twilight with Music" by Karsten Obarski. At the end of the year, a short-term cooperation was formed with Ghenna of Defjam, marking the first group cooperation on the Amiga. References Bibliography Tamas Polgar. 2005. Freax - The brief history of the computer Demoscene. CSW Verlag. Pages 107, 140, 155. Tamas Polgar. 2006. Freax - The Art album. CSW Verlag. Pages 125, 153, 154, 157. Denis Moschitto and Evrim Sen. 2007. Hackerland - Logfile of the Scene. Social Media Verlag. Glossar. External links RSI official website TRSI official website TRSI on Pouet Net TRSI on CSDb TRSI on Scenery Amiga TRSI PC cracktros on Defacto2 TRSI PC file and information repository on Defacto2 TRSI selected old school files on Scene. Org TRSI Recordz Discography Amiga Music Preservation All TRSI Amiga Musicians on AMP Red Sector Incorporated Megademo ROM4.LHA Article by Mop. R.O.M. diskmag, Issue 4 (requires an Amiga or Amiga emulator, such as UAE) TRSI History 1998 Demogroups Warez groups Computing and society
Tristar and Red Sector Incorporated
[ "Technology" ]
693
[ "Computing and society" ]
1,038,634
https://en.wikipedia.org/wiki/Snottite
Snottite, also snoticle, is a microbial mat of single-celled extremophilic bacteria which hang from the walls and ceilings of caves and are similar to small stalactites, but have the consistency of nasal mucus. In the Frasassi Caves in Italy, over 70% of the cells in snottites have been identified as Acidithiobacillus thiooxidans, with smaller populations including an archaeon in the uncultivated 'G-plasma' clade of Thermoplasmatales (>15%) and a bacterium in the Acidimicrobiaceae family (>5%). The bacteria derive their energy from chemosynthesis of volcanic sulfur compounds, including H2S carried in warm-water solutions dripping down from above, and they produce sulfuric acid in the process. Because of this, their waste products are highly acidic (approaching pH=0), with similar properties to battery acid. Researchers at the University of Texas have suggested that this sulfuric acid may be a more significant cause of cave formation than the usually offered explanation of carbonic acid formed from carbon dioxide dissolved in water. Snottites were brought to wider attention by researchers Diana Northup and Penny Boston, who studied them (and other organisms) in a toxic sulfur cave called Cueva de Villa Luz (Cave of the Lighted House), in Tabasco, Mexico. Snottites were first discovered in this cave by Jim Pisarowicz in 1986, who also coined the term. The BBC series Wonders of the Solar System saw Professor Brian Cox examining snottites and positing that if there is life on Mars, it may be similarly primitive and hidden beneath the surface of the Red Planet. See also Archaea References Additional sources Hose L. D., Pisarowicz J. A. (1999) "Cueva de Villa Luz, Tabasco, Mexico: reconnaissance study of an active sulfur spring cave and ecosystem". J Cave Karst Studies; 61:13–21 External links Cave slime at NASA The Subsurface Life in Mineral Environments (SLIME) Team Cave organisms Speleothems Sulphophiles
Snottite
[ "Biology" ]
441
[ "Cave organisms", "Organisms by habitat" ]
1,038,665
https://en.wikipedia.org/wiki/Tension-leg%20platform
A tension-leg platform (TLP) or extended tension leg platform (ETLP) is a vertically moored floating structure normally used for the offshore production of oil or gas, and is particularly suited for water depths greater than 300 metres (about 1000 ft) and less than 1500 metres (about 4900 ft). Use of tension-leg platforms has also been proposed for offshore wind turbines. The platform is permanently moored by means of tethers or tendons grouped at each of the structure's corners. A group of tethers is called a tension leg. A feature of the design of the tethers is that they have relatively high axial stiffness (low elasticity), such that virtually all vertical motion of the platform is eliminated. This allows the platform to have the production wellheads on deck (connected directly to the subsea wells by rigid risers), instead of on the seafloor. This allows a simpler well completion and gives better control over the production from the oil or gas reservoir, and easier access for downhole intervention operations. TLPs have been in use since the early 1980s. The first tension leg platform was built for Conoco's Hutton field in the North Sea in the early 1980s. The hull was built in the dry-dock at Highland Fabricator's Nigg yard in the north of Scotland, with the deck section built nearby at McDermott's yard at Ardersier. The two parts were mated in the Moray Firth in 1984. The Hutton TLP was originally designed for a service life of 25 years in North Sea depth of 100 to 1000 metres. It had 16 tension legs. Its weight varied between 46,500 and 55,000 tons when moored to the seabed, but up to 61,580 tons when floating freely. The total area of its living quarters was about 3,500 square metres and accommodated over 100 cabins though only 40 people were necessary to maintain the structure in place. The hull of the Hutton TLP has been separated from the topsides. Topsides have been redeployed to the Prirazlomnoye field in the Barents Sea, while the hull was reportedly sold to a project in the Gulf of Mexico (although the hull has been moored in Cromarty Firth since 2009). Larger TLPs will normally have a full drilling rig on the platform with which to drill and intervene on the wells. The smaller TLPs may have a workover rig, or with most recent TLPs, production wellheads located at remote drillcentres subsea. The deepest (E)TLPs measured from the sea floor to the surface are: Big Foot ETLP Magnolia ETLP. Its total height is some . Marco Polo TLP Neptune TLP Kizomba A TLP Ursa TLP. Its height above surface is making a total height of . Allegheny TLP W. Seno A TLP Use for wind turbines Although the Massachusetts Institute of Technology and the National Renewable Energy Laboratory explored the concept of TLPs for offshore wind turbines in September 2006, architects had studied the idea as early as 2003. Earlier offshore wind turbines cost more to produce, stood on towers dug deep into the ocean floor, were only possible in depths of at most , and generated 1.5 megawatts for onshore units and 3.5 megawatts for conventional offshore setups. In contrast, TLP installation was calculated to cost a third as much. TLPs float, and researchers estimate they can operate in depths between 100 and and farther away from land, and they can generate 5.0 megawatts. MIT and NREL researchers planned a half-scale prototype south of Cape Cod to prove the concept. Computer simulations project that in a hurricane TLPs would shift 0.9 m to 1.8 m and the turbine blades would cycle above wave peaks. 
Dampers could be used to reduce motion in the event of a natural disaster. Blue H Technologies of the Netherlands deployed the world's first floating wind turbine on a tension-leg platform, off the coast of Apulia, Italy in December 2007. The prototype was installed in waters deep in order to gather test data on wind and sea conditions, and was decommissioned at the end of 2008. The turbine utilized a tension-leg platform design and a two-bladed turbine. Seawind Ocean Technology B.V., which was established by Martin Jakubowski and Silvestro Caruso (the founders of Blue H Technologies), acquired the proprietary rights to the two-bladed floating turbine technology developed by Blue H Technologies. In literature A fictitious tension-leg platform anchored in the Gulf of Mexico is at the centre of the plot of the novel Seawitch (1977) by Alistair MacLean. At the time of publication there were no commercially active TLPs, and the plot involves a conspiracy to destroy Seawitch by competing oil companies. The prologue to the novel explains the principles of operation. See also Floating cable-stayed bridge Oil platform Magnolia (oil platform) Mars (oil platform) Olympus tension leg platform List of tallest structures References Further reading 2010 Worldwide Survey of TLPs (PDF) by Mustang Engineering for Offshore Magazine Fuentes, P. (2003) Reconversion d’une plate-forme offshore, Mémoire de TPFE, École d’architecture de Lille, France. Oil platforms Watercraft Petroleum production Offshore engineering
Tension-leg platform
[ "Chemistry", "Engineering" ]
1,099
[ "Oil platforms", "Structural engineering", "Offshore engineering", "Petroleum technology", "Construction", "Natural gas technology" ]
1,038,753
https://en.wikipedia.org/wiki/Cut%20rule
In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula A appears as a conclusion in one proof and a hypothesis in another, then another proof in which the formula A does not appear can be deduced. This applies to cases of modus ponens, such as how instances of "man" are eliminated from "Every man is mortal" and "Socrates is a man" to deduce "Socrates is mortal". Formal notation It is normally written in sequent calculus notation as $$\frac{\Gamma \vdash A, \Delta \qquad \Gamma', A \vdash \Delta'}{\Gamma, \Gamma' \vdash \Delta, \Delta'}\ \text{cut}$$ Elimination The cut rule is the subject of an important theorem, the cut-elimination theorem. It states that any sequent that has a proof in the sequent calculus making use of the cut rule also has a cut-free proof, that is, a proof that does not make use of the cut rule. References Rules of inference Logical calculi
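As an illustrative worked example (this instance is not spelled out in the article above, and the formulas chosen here are only for illustration), the Socrates argument can be read as a cut with empty side contexts. Write A for "Socrates is a man" and B for "Socrates is mortal"; the premise "Every man is mortal" justifies the sequent A ⊢ B, and the premise "Socrates is a man" gives ⊢ A. Cutting on A yields

$$\frac{\;\vdash A \qquad A \vdash B\;}{\vdash B}\ \text{cut}$$

so B, "Socrates is mortal", is derived without the formula A appearing in the final sequent, which is exactly the modus ponens pattern that the cut rule generalises.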
Cut rule
[ "Mathematics" ]
196
[ "Proof theory", "Mathematical logic", "Logical calculi", "Rules of inference", "Mathematical logic stubs" ]
1,038,844
https://en.wikipedia.org/wiki/Spring%20bloom
The spring bloom is a strong increase in phytoplankton abundance (i.e. stock) that typically occurs in the early spring and lasts until late spring or early summer. This seasonal event is characteristic of temperate North Atlantic, sub-polar, and coastal waters. Phytoplankton blooms occur when growth exceeds losses, however there is no universally accepted definition of the magnitude of change or the threshold of abundance that constitutes a bloom. The magnitude, spatial extent and duration of a bloom depends on a variety of abiotic and biotic factors. Abiotic factors include light availability, nutrients, temperature, and physical processes that influence light availability, and biotic factors include grazing, viral lysis, and phytoplankton physiology. The factors that lead to bloom initiation are still actively debated (see Critical depth). Classical mechanism In the spring, more light becomes available and stratification of the water column occurs as increasing temperatures warm the surface waters (referred to as thermal stratification). As a result, vertical mixing is inhibited and phytoplankton and nutrients are entrained in the euphotic zone. This creates a comparatively high nutrient and high light environment that allows rapid phytoplankton growth. Along with thermal stratification, spring blooms can be triggered by salinity stratification due to freshwater input, from sources such as high river runoff. This type of stratification is normally limited to coastal areas and estuaries, including Chesapeake Bay. Freshwater influences primary productivity in two ways. First, because freshwater is less dense, it rests on top of seawater and creates a stratified water column. Second, freshwater often carries nutrients that phytoplankton need to carry out processes, including photosynthesis. Rapid increases in phytoplankton growth, that typically occur during the spring bloom, arise because phytoplankton can reproduce rapidly under optimal growth conditions (i.e., high nutrient levels, ideal light and temperature, and minimal losses from grazing and vertical mixing). In terms of reproduction, many species of phytoplankton can double at least once per day, allowing for exponential increases in phytoplankton stock size. For example, the stock size of a population that doubles once per day will increase 1000-fold in just 10 days. In addition, there is a lag in the grazing response of herbivorous zooplankton at the start of blooms, which minimize phytoplankton losses. This lag occurs because there is low winter zooplankton abundance and many zooplankton, such as copepods, have longer generation times than phytoplankton. Spring blooms typically last until late spring or early summer, at which time the bloom collapses due to nutrient depletion in the stratified water column and increased grazing pressure by zooplankton. The most limiting nutrient in the marine environment is typically nitrogen (N). This is because most organisms are unable to fix atmospheric nitrogen into usable forms (i.e. ammonium, nitrite, or nitrate). However, with the exception of coastal waters, it can be argued, that iron (Fe) is the most limiting nutrient because it is required to fix nitrogen, but is only available in small quantities in the marine environment, coming from dust storms and leaching from rocks. Phosphorus can also be limiting, particularly in freshwater environments and tropical coastal regions. 
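To make the earlier doubling figure concrete (a simplified sketch that ignores losses to grazing, mixing, and sinking), exponential growth with one doubling per day gives

$$N(t) = N_0 \cdot 2^{t}, \qquad \frac{N(10)}{N_0} = 2^{10} = 1024 \approx 1000,$$

so a phytoplankton stock doubling daily does indeed grow roughly 1000-fold over 10 days, which is why blooms can develop so quickly once growth conditions become favourable.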
During winter, wind-driven turbulence and cooling water temperatures break down the stratified water column formed during the summer. This breakdown allows vertical mixing of the water column and replenishes nutrients from deep water to the surface waters and the rest of the euphotic zone. However, vertical mixing also causes high losses, as phytoplankton are carried below the euphotic zone (so their respiration exceeds primary production). In addition, reduced illumination (intensity and daily duration) during winter limits growth rates. Alternative mechanisms Historically, blooms have been explained by Sverdrup's critical depth hypothesis, which says blooms are caused by shoaling of the mixed layer. Similarly, Winder and Cloern (2010) described spring blooms as a response to increasing temperature and light availability. However, new explanations have been offered recently, including that blooms occur due to: Coupling between phytoplankton growth and zooplankton grazing. The onset of near surface stratification in the spring. Mixing of the water column, rather than stratification Low turbulence Increasing light intensity (in shallow water environments). Eddies (see ‘The role of eddies in the onset of the North Atlantic spring bloom’) The role of eddies in the onset of the North Atlantic spring bloom A 2012 study showed that the onset of the North Atlantic bloom is due to eddies. Eddies, or circular currents of water, are ubiquitous throughout the world’s ocean and play an important role in ocean mixing. In the North Atlantic, surface water is colder and denser farther north and warmer and lighter in the south. This sets up a horizontal density gradient. Earth’s rotation maintains this gradient by preventing the dense water from slipping underneath the light water. Eddies, however, can mix dense water underneath the lighter water, setting up a vertical stratification that limits the depth of vertical mixing (leading to a shallower mixed layer). Mechanisms that limit the depth of vertical mixing can be referred to as ‘restratifying mechanisms’ (e.g. eddies, solar heating), which compete against mechanisms that increase vertical mixing (and deepen the mixed layer). This includes convection and down-front winds. Convection is strongest in the winter when surface cooling is strongest. Convection increases the depth of vertical mixing, which can move phytoplankton away from the light they need to grow. When convection weakens and wind switches direction in the spring, the re-stratifying effect of eddies becomes dominant. Phytoplankton are trapped closer to the surface, increasing their exposure to light. This spurs phytoplankton growth, leading to the onset of the North Atlantic spring bloom 20-30 days earlier than would occur with thermal stratification alone. Northward progression At greater latitudes, spring blooms take place later in the year. This northward progression is because spring occurs later, delaying thermal stratification and increases in illumination that promote blooms. A study by Wolf and Woods (1988) showed evidence that spring blooms follow the northward migration of the 12 °C isotherm, suggesting that blooms may be controlled by temperature limitations, in addition to stratification. At high latitudes, the shorter warm season commonly results in one mid-summer bloom. These blooms tend to be more intense than spring blooms of temperate areas because there is a longer duration of daylight for photosynthesis to take place. 
Also, grazing pressure tends to be lower because the generally cooler temperatures at higher latitudes slow zooplankton metabolism. Species succession The spring bloom often consists of a series of sequential blooms of different phytoplankton species. Succession occurs because different species have optimal nutrient uptake at different ambient concentrations and reach their growth peaks at different times. Shifts in the dominant phytoplankton species are likely caused by biological and physical (i.e. environmental) factors. For instance, diatom growth rate becomes limited when the supply of silicate is depleted. Since silicate is not required by other phytoplankton, such as dinoflagellates, their growth rates continue to increase. For example, in oceanic environments, diatoms (cells diameter greater than 10 to 70 μm or larger) typically dominate first because they are capable of growing faster. Once silicate is depleted in the environment, diatoms are succeeded by smaller dinoflagellates. This scenario has been observed in Rhode Island, as well as Massachusetts and Cape Cod Bay. By the end of a spring bloom, when most nutrients have been depleted, the majority of the total phytoplankton biomass is very small phytoplankton, known as ultraphytoplankton (cell diameter <5 to 10 μm). Ultraphytoplankton can sustain low, but constant stocks, in nutrient depleted environments because they have a larger surface area to volume ratio, which offers a much more effective rate of diffusion. The types of phytoplankton comprising a bloom can be determined by examination of the varying photosynthetic pigments found in chloroplasts of each species. Variability and the influence of climate change Variability in the patterns (e.g., timing of onset, duration, magnitude, position, and spatial extent) of annual spring bloom events has been well documented. These variations occur due to fluctuations in environmental conditions, such as wind intensity, temperature, freshwater input, and light. Consequently, spring bloom patterns are likely sensitive to global climate change. Links have been found between temperature and spring bloom patterns. For example, several studies have reported a correlation between earlier spring bloom onset and temperature increases over time. Furthermore, in Long Island Sound and the Gulf of Maine, blooms begin later in the year, are more productive, and last longer during colder years, while years that are warmer exhibit earlier, shorter blooms of greater magnitude. Temperature may also regulate bloom sizes. In Narragansett Bay, Rhode Island, a study by Durbin et al. (1992) indicated that a 2 °C increase in water temperature resulted in a three-week shift in the maturation of the copepod, Acartia hudsonica, which could significantly increase zooplankton grazing intensity. Oviatt et al. (2002) noted a reduction in spring bloom intensity and duration in years when winter water temperatures were warmer. Oviatt et al. suggested that the reduction was due to increased grazing pressure, which could potentially become intense enough to prevent spring blooms from occurring altogether. Miller and Harding (2007) suggested climate change (influencing winter weather patterns and freshwater influxes) was responsible for shifts in spring bloom patterns in the Chesapeake Bay. They found that during warm, wet years (as opposed to cool, dry years), the spatial extent of blooms was larger and was positioned more seaward. 
Also, during these same years, biomass was higher and peak biomass occurred later in the spring. See also Algal bloom Critical depth Gordon Arthur Riley Plankton References Aquatic ecology Biological oceanography Marine biology Oceanography Fisheries science Planktology Barents Sea Algal blooms
Spring bloom
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
2,168
[ "Hydrology", "Algae", "Applied and interdisciplinary physics", "Water treatment", "Oceanography", "Water pollution", "Marine biology", "Water quality indicators", "Ecosystems", "Aquatic ecology", "Algal blooms" ]
1,038,989
https://en.wikipedia.org/wiki/SystemC
SystemC is a set of C++ classes and macros which provide an event-driven simulation interface (see also discrete event simulation). These facilities enable a designer to simulate concurrent processes, each described using plain C++ syntax. SystemC processes can communicate in a simulated real-time environment, using signals of all the datatypes offered by C++, some additional ones offered by the SystemC library, as well as user defined. In certain respects, SystemC deliberately mimics the hardware description languages VHDL and Verilog, but is more aptly described as a system-level modeling language. SystemC is applied to system-level modeling, architectural exploration, performance modeling, software development, functional verification, and high-level synthesis. SystemC is often associated with electronic system-level (ESL) design, and with transaction-level modeling (TLM). Language specification SystemC is defined and promoted by the Open SystemC Initiative (OSCI — now Accellera), and has been approved by the IEEE Standards Association as IEEE 1666-2011 - the SystemC Language Reference Manual (LRM). The LRM provides the definitive statement of the semantics of SystemC. OSCI also provide an open-source proof-of-concept simulator (sometimes incorrectly referred to as the reference simulator), which can be downloaded from the OSCI website. Although it was the intent of OSCI that commercial vendors and academia could create original software compliant to IEEE 1666, in practice most SystemC implementations have been at least partly based on the OSCI proof-of-concept simulator. Compared to HDLs SystemC has semantic similarities to VHDL and Verilog, but may be said to have a syntactical overhead compared to these when used as a hardware description language. On the other hand, it offers a greater range of expression, similar to object-oriented design partitioning and template classes. Although strictly a C++ class library, SystemC is sometimes viewed as being a language in its own right. Source code can be compiled with the SystemC library (which includes a simulation kernel) to give an executable. The performance of the OSCI open-source implementation is typically worse than commercial VHDL/Verilog simulators when used for register transfer level simulation. Versions SystemC version 1 included common hardware-description language features such as structural hierarchy and connectivity, clock-cycle accuracy, delta cycles, four-valued logic (0, 1, X, Z), and bus-resolution functions. SystemC version 2 onward focused on communication abstraction, transaction-level modeling, and virtual-platform modeling. It also added abstract ports, dynamic processes, and timed event notifications. Language features Modules SystemC has a notion of a container class called a module. This is a hierarchical entity that can have other modules or processes contained in it. Modules are the basic building blocks of a SystemC design hierarchy. A SystemC model usually consists of several modules which communicate via ports. The modules can be thought of as a building block of SystemC. Ports Ports allow communication from inside a module to the outside (usually to other modules) via channels. Signals SystemC supports resolved and unresolved signals. Resolved signals can have more than one driver (a bus) while unresolved signals can have only one driver. Exports Modules have ports through which they connect to other modules. SystemC supports single-direction and bidirectional ports. 
Exports incorporate channels and allow communication from inside a module to the outside (usually to other modules). Processes Processes are used to describe functionality. Processes are contained inside modules. SystemC provides three different process abstractions to be used by hardware and software designers. Processes are the main computation elements. They are concurrent. Channels Channels are the communication elements of SystemC. They can be either simple wires or complex communication mechanisms like FIFOs or bus channels. Elementary channels: signal: the equivalent of a wire buffer fifo mutex semaphore Interfaces Ports use interfaces to communicate with channels. Events Events allow synchronization between processes and must be defined during initialization. Data types SystemC introduces several data types which support the modeling of hardware. Extended standard types: sc_int<n> (n-bit signed integer) sc_uint<n> (n-bit unsigned integer) sc_bigint<n> (n-bit signed integer for n > 64) sc_biguint<n> (n-bit unsigned integer for n > 64) Logic types: sc_bit (2-valued single bit) sc_logic (4-valued single bit) sc_bv<n> (vector of length n of sc_bit) sc_lv<n> (vector of length n of sc_logic) Fixed point types: sc_fixed (templated signed fixed point) sc_ufixed (templated unsigned fixed point) sc_fix (untemplated signed fixed point) sc_ufix (untemplated unsigned fixed point) History 1999-09-27 Open SystemC Initiative announced 2000-03-01 SystemC V0.91 released 2000-03-28 SystemC V1.0 released 2001-02-01 SystemC V2.0 specification and V1.2 Beta source code released 2003-06-03 SystemC 2.0.1 LRM (language reference manual) released 2005-06-06 SystemC 2.1 LRM and TLM 1.0 transaction-level modeling standard released 2005-12-12 IEEE approves the IEEE 1666–2005 standard for SystemC 2007-04-13 SystemC v2.2 released 2008-06-09 TLM-2.0.0 library released 2009-07-27 TLM-2.0 LRM released, accompanied by TLM-2.0.1 library 2010-03-08 SystemC AMS extensions 1.0 LRM released 2011-11-10 IEEE approves the IEEE 1666–2011 standard for SystemC 2016-04-06 IEEE approves the IEEE 1666.1–2016 standard for SystemC AMS 2023-06-05 IEEE approves the IEEE 1666–2023 standard SystemC traces its origins to work on the Scenic programming language described in a DAC 1997 paper. ARM Ltd., CoWare, Synopsys and CynApps teamed up to develop SystemC (CynApps later became Forte Design Systems) and launched its first draft version in 1999. The chief competitor at the time was SpecC, another C-based open-source package developed by UC Irvine personnel and some Japanese companies. In June 2000, a standards group known as the Open SystemC Initiative was formed to provide an industry-neutral organization to host SystemC activities and to allow Synopsys' largest competitors, Cadence and Mentor Graphics, democratic representation in SystemC development. Example code Example code of an adder:

#include "systemc.h"

SC_MODULE(adder)          // module (class) declaration
{
  sc_in<int> a, b;        // ports
  sc_out<int> sum;

  void do_add()           // process
  {
    sum.write(a.read() + b.read()); //or just sum = a + b
  }

  SC_CTOR(adder)          // constructor
  {
    SC_METHOD(do_add);    // register do_add to kernel
    sensitive << a << b;  // sensitivity list of do_add
  }
};

Power and energy estimation in SystemC Power and energy estimation can be accomplished in SystemC by means of simulations. Powersim is a SystemC class library aimed at the calculation of power and energy consumption of hardware described at the system level. To this end, C++ operators are monitored and different energy models can be used for each SystemC data type. Simulations with Powersim do not require any change in the application source code.
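The adder above can be exercised from a small testbench; the following sketch is illustrative only (the instance and signal names, stimulus values, and the 1 ns run time are choices made here, not part of the original example), and it assumes the adder module shown above is visible in the same translation unit:

#include <iostream>
#include "systemc.h"

int sc_main(int argc, char* argv[]) {
    sc_signal<int> sig_a, sig_b, sig_sum;  // signal channels connecting testbench and module

    adder add1("add1");                    // instantiate the module with an instance name
    add1.a(sig_a);                         // bind each port to a signal
    add1.b(sig_b);
    add1.sum(sig_sum);

    sig_a.write(2);                        // drive the inputs
    sig_b.write(3);
    sc_start(1, SC_NS);                    // run the kernel; do_add fires on the input change

    std::cout << "sum = " << sig_sum.read() << std::endl;  // expected output: sum = 5
    return 0;
}

Compiling and linking this against an installed SystemC library (the Accellera proof-of-concept implementation or a commercial simulator) produces an executable whose run is itself the simulation, as described above.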
See also Accellera Chisel SpecC SystemRDL SystemVerilog Virtual machine Notes References T. Grötker, S. Liao, G. Martin, S. Swan, System Design with SystemC. Springer, 2002. A SystemC based Linux Live CD with C++/SystemC tutorial J. Bhasker, A SystemC Primer, Second Edition, Star Galaxy Publishing, 2004. D. C. Black, J. Donovan, SystemC: From the Ground Up, 2nd ed., Springer 2009. George Frazier, SystemC: Hardware-Oriented Constructs in C++ Frank Ghenassia (Editor), Transaction-Level Modeling with SystemC: TLM Concepts and Applications for Embedded Systems, Springer 2006. Stan Y. Liao, Steven W. K. Tjiang, Rajesh K. Gupta: An Efficient Implementation of Reactivity for Modeling Hardware in the Scenic Design Environment. DAC 1997: 70-75 External links SystemC Tutorial ESCUG - European SystemC Users Group NASCUG - North American SystemC User's Group LASCUG - Latin American SystemC User's Group ISCUG - Indian SystemC User's Group EDA Playground - Free web browser-based C++/SystemC IDE Hardware description languages Hardware verification languages System description languages C++ programming language family
SystemC
[ "Engineering" ]
1,822
[ "Hardware verification languages", "Electronic engineering", "Hardware description languages" ]
1,039,011
https://en.wikipedia.org/wiki/Chitosan
Chitosan is a linear polysaccharide composed of randomly distributed β-(1→4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). It is made by treating the chitin shells of shrimp and other crustaceans with an alkaline substance, such as sodium hydroxide. Chitosan has a number of commercial and possible biomedical uses. It can be used in agriculture as a seed treatment and biopesticide, helping plants to fight off fungal infections. In winemaking, it can be used as a fining agent, also helping to prevent spoilage. In industry, it can be used in a self-healing polyurethane paint coating. In medicine, it is useful in bandages to reduce bleeding and as an antibacterial agent; it can also be used to help deliver drugs through the skin. History In 1799, British chemist Charles Hatchett experimented with decalcifying the shells of various crustaceans, finding that a soft, yellow and cartilage-like substance was left behind that we now know to be chitin. In 1859, French physiologist Charles Marie Benjamin Rouget found that boiling chitin in potassium hydroxide solution could deacetylate it to produce a substance that was soluble in dilute organic acids, that he called chitine modifiée. In 1894, German chemist Felix Hoppe-Seyler named the substance chitosan. From 1894 to 1930 there was a period of debate and confusion over the exact composition of chitin and particularly whether animal and fungal forms where the same chemicals. In 1930 the first chitosan films and fibres were patented but competition from petroleum-derived polymers limited their uptake. It was not until the 1970s that there was renewed interest in the compound, spurred partly by laws that prevented the dumping of untreated shellfish waste. Manufacture Chitosan is produced commercially by deacetylation of chitin, which is the structural element in the exoskeleton of crustaceans (such as crabs and shrimp) and cell walls of fungi. A common method for obtaining chitosan is the deacetylation of chitin using sodium hydroxide in excess as a reagent and water as a solvent. The reaction follows first-order kinetics though it occurs in two steps; the activation energy barrier for the first stage is estimated at 48.8 kJ·mol−1 at and is higher than the barrier to the second stage. The degree of deacetylation (%) can be determined by NMR spectroscopy and the degree of deacetylation in commercially available chitosan ranges from 60 to 100%. On average, the molecular weight of commercially produced chitosan is 3800–20,000 daltons. Nanofibrils have been made using chitin and chitosan. Chemical modifications Chitosan contains the following three functional groups: C2-NH2, C3-OH, and C6-OH. C3-OH has a large spatial site resistance and therefore is relatively difficult to modify. C2-NH2 is highly reactive for fine modifications and is the most common modifying group in chitosan. In chitosan, although amino groups are more prone to nucleophilic reactions than hydroxyl groups, both can react non-selectively with electrophilic reagents such as acids, chlorides, and haloalkanes to functionalize them. Since chitosan contains a variety of functional groups, it can be functionalized in different ways such as phosphorylation, thiolation, and quaternization to adapt it to specific purposes. 
Phosphorylated chitosan Water-soluble phosphorylated chitosan can be obtained by the reaction of phosphorus pentoxide and chitosan under low-temperature conditions using methane sulfonic acid as the catalyst; phosphorylated chitosan with good antibacterial activity and ionic properties can be prepared by graft copolymerization of chitosan monophosphate. The good water solubility and metal chelating properties of phosphorylated chitosan and its derivatives make them widely used in tissue engineering, drug delivery carriers, tissue regeneration, and the food industry. In tissue engineering, phosphorylated chitosan exhibits improved swelling and ionic conductivity. Although its crystallinity is reduced, its tensile strength remains largely unchanged. These properties make it useful for creating scaffolds that can support bone tissue regeneration by binding growth factors and promoting stem cell differentiation into bone-forming cells. Additionally, to enhance the solubility of chitosan-based hydrogels at neutral or alkaline pH, the derivative N-methylene phosphonic acid chitosan (NMPC-GLU) has been developed. This material maintains good mechanical strength and improves cell proliferation, making it valuable for biomedical applications. Thiolated chitosan Thiolated chitosan is produced by attaching thiol groups to the amino groups of chitosan using a thiol-containing coupling agent. The primary site for this modification is the amino group at the 2nd position of chitosan's glucosamine units. During this process, thioglycolic acid and cysteine mediate the reaction, forming an amide bond that links the thiol-bearing reagent to chitosan. At a pH below 5, thiol activity is reduced, which limits disulfide bond formation. The modified chitosan exhibits improved adhesive properties and stability due to the covalent attachment of the thiol groups. Lower pH reduces oxidation, enhancing its adhesion properties. Additionally, thiolated chitosan can interact with cell membrane receptors, improving membrane permeability and showing potential for applications in bacterial adhesion prevention, for example for coating stainless steel. Ionic chitosan There are two main methods of chitosan quaternization: direct quaternization and indirect quaternization. Direct quaternization of chitosan's amino groups treats chitosan with haloalkanes under alkaline conditions. Another method is the reaction of chitosan with aldehydes first, followed by reduction, and finally with haloalkanes to obtain quaternized chitosan. The indirect quaternization method refers to introducing small molecules containing quaternary ammonium groups into chitosan, such as glycidyl trimethyl ammonium chloride, (5-bromopentyl) trimethyl ammonium bromide, etc. Quaternary ammonium groups can further be introduced into the chitosan backbone via azide-alkyne cycloaddition, or by dissolving chitosan in alkali and urea and then reacting it with 3-chloro-2-hydroxypropyl trimethylammonium chloride, which provides a simple and green route to chitosan functionalization. Cationic derivatives of chitosan have important roles in bioadhesion, absorption enhancement, anti-inflammatory, antibacterial and anti-tumor applications. Chitosan modified with quaternary ammonium groups is one of the most common cationic chitosan derivatives. Quaternized chitosan with a permanent positive charge has increased antimicrobial activity and solubility compared to normal chitosan.
Properties The amino group in chitosan has a pKa value of ~6.5, which leads to significant protonation in neutral solution; the degree of protonation increases with increased acidity (decreased pH) and depends on the %DA-value. This makes chitosan water-soluble and a bioadhesive which readily binds to negatively charged surfaces such as mucosal membranes. Chitosan can also bind effectively to other surfaces via hydrophobic interaction and/or cation-π interaction (with chitosan as the cation source) in aqueous solution. The free amine groups on chitosan chains can form crosslinked polymeric networks with dicarboxylic acids, improving chitosan's mechanical properties. Chitosan enhances the transport of polar drugs across epithelial surfaces, and is biocompatible and biodegradable. However, it is not approved by the FDA for drug delivery. Purified quantities of chitosan are available for biomedical applications. Physicochemical properties Chitosan has biological properties, such as biodegradability and biocompatibility. The biological properties of chitosan are closely related to its physicochemical structure, which includes the degree of deacetylation, water content, and molecular weight. Deacetylation refers to the process of removing the acetyl group from chitosan, and this process determines the content of free amine groups in chitosan. Studies have shown that chitosan has good solubility only when the degree of deacetylation is above 85%. The permeation-enhancing effect of chitosan is mainly due to the interaction of positively charged chitosan with cell membranes, activation of chloride–bicarbonate exchange channels, and reorganization of proteins associated with epithelial tight junctions, thus opening the tight junctions. Chitosan inhibits the growth of different bacteria and fungi by mechanisms involving several factors, including the degree of deacetylation, pH, divalent cations, and solvent type. Uses Agricultural and horticultural use The agricultural and horticultural uses for chitosan, primarily for plant defense and yield increase, are based on how this glucosamine polymer influences the biochemistry and molecular biology of the plant cell. The cellular targets are the plasma membrane and nuclear chromatin. Subsequent changes occur in cell membranes, chromatin, DNA, calcium, MAP kinase, oxidative burst, reactive oxygen species, callose, pathogenesis-related (PR) genes, and phytoalexins. Chitosan was first registered as an active ingredient (licensed for sale) in 1986.
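The protonation behaviour described under Properties above can be made concrete with the Henderson–Hasselbalch relation. This is a generic acid–base estimate, not a calculation taken from the cited sources; the only input is the quoted pKa of about 6.5.
f_{\mathrm{NH_3^+}} = \frac{[\mathrm{R{-}NH_3^+}]}{[\mathrm{R{-}NH_3^+}]+[\mathrm{R{-}NH_2}]} = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}}
At pH 7 this gives 1/(1 + 10^{0.5}) ≈ 0.24, i.e. roughly a quarter of the amino groups remain protonated in neutral solution, while at pH 5 the same expression gives about 0.97, consistent with chitosan's solubility in dilute acid.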
Chitosan active biopesticides represent a new tier of cost-effective biological control of crops for agriculture and horticulture. The biocontrol mode of action of chitosan elicits natural innate defense responses within plants to resist insects, pathogens, and soil-borne diseases when applied to foliage or the soil. Chitosan increases photosynthesis, promotes and enhances plant growth, stimulates nutrient uptake, increases germination and sprouting, and boosts plant vigor. When used as a seed treatment or seed coating on cotton, corn, seed potatoes, soybeans, sugar beets, tomatoes, wheat, and many other seeds, it elicits an innate immunity response in developing roots which destroys parasitic cyst nematodes without harming beneficial nematodes and organisms. Agricultural applications of chitosan can reduce environmental stress due to drought and soil deficiencies, strengthen seed vitality, improve stand quality, increase yields, and reduce fruit decay of vegetables, fruits and citrus crops. Horticultural application of chitosan increases blooms and extends the life of cut flowers and Christmas trees. The US Forest Service has conducted research on chitosan to control pathogens in pine trees and increase resin pitch outflow, which resists pine beetle infestation. Chitosan has been studied for applications in agriculture and horticulture dating back to the 1980s. By 1989, chitosan salt solutions were applied to crops for improved freeze protection or to crop seed for seed priming. Shortly thereafter, chitosan salt received the first ever biopesticide label from the EPA, followed by other intellectual property applications. Chitosan has been used to protect plants in space as well, exemplified by NASA's experiment to protect adzuki beans grown aboard the space shuttle and Mir space station in 1997. NASA results revealed that chitosan induces increased growth (biomass) and pathogen resistance due to elevated levels of β-(1→3)-glucanase enzymes within plant cells. NASA confirmed chitosan elicits the same effect in plants on Earth. In 2008, the EPA approved natural broad-spectrum elicitor status for an ultralow-molecular-weight active ingredient of 0.25% chitosan. A natural chitosan elicitor solution for agricultural and horticultural uses was granted an amended label for foliar and irrigation applications by the EPA in 2009. Given its low potential for toxicity and its abundance in the natural environment, chitosan does not harm people, pets, wildlife, or the environment when used according to label directions. Chitosan blends do not work against bark beetles when put on a tree's leaves or in its soil. Filtration Chitosan can be used in hydrology as a part of a filtration process. Chitosan causes fine sediment particles to bind together and is subsequently removed with the sediment during sand filtration. It also removes heavy minerals, dyes, and oils from the water. As an additive in water filtration, chitosan combined with sand filtration removes up to 99% of turbidity. Chitosan is among the biological adsorbents used for heavy-metal removal without negative environmental impacts. In combination with bentonite, gelatin, silica gel, isinglass, or other fining agents, it is used to clarify wine, mead, and beer. Added late in the brewing process, chitosan improves flocculation and removes yeast cells, fruit particles, and other detritus that cause hazy wine. Winemaking and fungal source chitosan Chitosan has a long history of use as a fining agent in winemaking.
Fungal source chitosan has shown an increase in settling activity, reduction of oxidized polyphenolics in juice and wine, chelation and removal of copper (post-racking), and control of the spoilage yeast Brettanomyces. These products and uses are approved for use in Europe under EU and OIV standards. Wound management Chitosan-based wound dressings have been widely explored for a variety of acute and chronic wounds. Chitosan has the ability to adhere to fibrinogen, which produces increased platelet adhesion, causing clotting of blood and hemostasis. Chitosan hemostatic agents are salts made from mixing chitosan with an organic acid (such as succinic or lactic acid). Chitosan may have other properties conducive to wound healing, including antibacterial and antifungal activity, which remain under preliminary research. Chitosan is used within some wound dressings to decrease bleeding. Upon contact with blood, the bandage becomes sticky, effectively sealing the laceration. Chitosan hydrogel-based wound dressings have also been found useful as burn dressings, and for the treatment of chronic diabetic wounds and hydrofluoric acid burns. Chitosan-containing wound dressings received approval for medical use in the United States in 2003. Temperature-sensitive hydrogels Chitosan dissolves in dilute organic acid solutions, but as the hydrogen ion concentration falls towards pH 6.5 and above it becomes insoluble and precipitates as a gel-like solid. Chitosan is positively charged through its amine groups, making it suitable for binding to negatively charged molecules. However, it has disadvantages such as low mechanical strength and a slow temperature response; it must be combined with other gelling agents to improve its properties. Using glycerophosphate salts (possessing a single anionic head) without chemical modification or cross-linking, the pH-dependent gelation properties can be converted to temperature-sensitive gelation properties. In the year 2000, Chenite was the first to design a temperature-sensitive chitosan hydrogel drug delivery system using chitosan and β-glycerol phosphate. This new system can remain in the liquid state at room temperature, while gelling as the temperature rises to the physiological temperature (37 °C). Phosphate salts cause a particular behaviour in chitosan solutions, allowing these solutions to remain soluble in the physiological pH range (pH 7) and to gel only at body temperature. When the liquid chitosan–glycerol phosphate solution containing the drug enters the body through a syringe injection, it becomes a water-insoluble gel at 37 °C. Drug particles entrapped between the hydrogel chains are then gradually released.
Pigmented chitosan objects can be recycled, with the option of reintroducing or discarding the dye at each recycling step, enabling reuse of the polymer independently of colorants. Unlike plant-based bioplastics (e.g. cellulose, starch), the main natural sources of chitosan are marine, so its production does not compete for land or other human resources. 3D bioprinting of tissue engineering scaffolds for creating artificial tissues and organs is another application where chitosan has gained popularity. Chitosan has high biocompatibility and biodegradability, along with antimicrobial, hemostatic, wound healing and immunomodulatory activities, which make it suitable for making artificial tissues. Weight loss Chitosan is marketed in tablet form as a "fat binder". Although the effect of chitosan on lowering cholesterol and body weight has been evaluated, the effect appears to have no or low clinical importance. Reviews from 2016 and 2008 found there was no significant effect, and no justification for overweight people to use chitosan supplements. In 2015, the U.S. Food and Drug Administration issued a public advisory about supplement retailers who made exaggerated claims concerning the supposed weight loss benefit of various products. Biodegradable antimicrobial food packaging Microbial contamination of food products accelerates the deterioration process and increases the risk of foodborne illness caused by potentially life-threatening pathogens. Ordinarily, food contamination originates at the surface, making surface treatment and packaging crucial factors in assuring food quality and safety. Biodegradable chitosan films have potential for preserving various food products, retaining their firmness and restricting weight loss due to dehydration. In addition, composite biodegradable films containing chitosan and antimicrobial agents are in development as safe alternatives to preserve food products. Battery electrolyte Chitosan is being investigated as an electrolyte for rechargeable batteries with good performance and low environmental impact due to rapid biodegradability, leaving recyclable zinc. The electrolyte has excellent physical stability up to 50 °C, electrochemical stability up to 2 V with zinc electrodes, and accommodates the redox reactions involved in the Zn-MnO2 alkaline system. Results were promising, but the battery required testing on a larger scale and under actual use conditions.
Chitosan
[ "Chemistry" ]
4,318
[ "Carbohydrates", "Polysaccharides" ]
1,039,022
https://en.wikipedia.org/wiki/Privilege%20separation
In computer programming and computer security, privilege separation (privsep) is one software-based technique for implementing the principle of least privilege. With privilege separation, a program is divided into parts which are limited to the specific privileges they require in order to perform a specific task. This is used to mitigate the potential damage of a computer security vulnerability. Implementation A common method to implement privilege separation is to have a computer program fork into two processes. The main program drops privileges, and the smaller program keeps privileges in order to perform a certain task. The two halves then communicate via a socket pair. Thus, any successful attack against the larger program will gain minimal access, even though the pair of programs will be capable of performing privileged operations. Privilege separation is traditionally accomplished by distinguishing a real user ID/group ID from the effective user ID/group ID, using the setuid(2)/setgid(2) and related system calls, which were specified by POSIX. If these calls are placed incorrectly, gaps can remain that allow widespread network penetration. Many network service daemons have to perform a specific privileged operation, such as opening a raw socket or binding an Internet socket in the well-known ports range. Administrative utilities can require particular privileges at run-time as well. Such software tends to separate privileges by revoking them completely once the critical section is done, changing the user it runs under to some unprivileged account in the process. This action is known as dropping root under Unix-like operating systems. The unprivileged part is usually run under the "nobody" user or an equivalent separate user account. Privilege separation can also be done by splitting the functionality of a single program into multiple smaller programs, and then assigning the extended privileges to particular parts using file system permissions. That way the different programs have to communicate with each other through the operating system, so the scope of the potential vulnerabilities is limited (since a crash in the less privileged part cannot be exploited to gain privileges, merely to cause a denial-of-service attack). Examples Dovecot The Dovecot email server was designed with privilege separation and security in mind. OpenBSD Separation of privileges is one of the major OpenBSD security features. OpenSSH OpenSSH uses privilege separation to ensure that pseudo-terminal (pty) creation happens in a secure part of the process, away from the per-connection processes that have network access. Postfix Postfix was implemented with a focus on comprehensive privilege separation. Solaris Solaris implements a separate set of functions for privilege bracketing. See also Capability-based security Confused deputy problem Privilege escalation Privilege revocation (computing) Defensive programming Sandbox (computer security) References Computer security procedures
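A minimal sketch of the fork-and-drop-privileges pattern described under Implementation above is given below. It is a generic POSIX illustration, not the actual code of any project named in the article; the "nobody" account and the socket-pair channel follow the description above, while everything else (the error-handling style and the particular calls used) is an assumption of the example.

#include <stdio.h>
#include <unistd.h>
#include <pwd.h>
#include <grp.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void) {
    int sv[2];
    /* Socket pair over which the privileged and unprivileged halves talk. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1) {
        perror("socketpair");
        return 1;
    }
    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: drop root, then handle untrusted input. */
        close(sv[0]);
        struct passwd *pw = getpwnam("nobody");      /* unprivileged account */
        if (pw == NULL ||
            setgroups(0, NULL) == -1 ||              /* clear supplementary groups */
            setgid(pw->pw_gid) == -1 ||              /* group first, while still root */
            setuid(pw->pw_uid) == -1) {              /* then user: root is gone for good */
            perror("drop privileges");
            _exit(1);
        }
        /* ... parse requests here; ask the parent over sv[1] whenever a
           privileged operation (e.g. binding a low port) is needed ... */
        _exit(0);
    }
    /* Parent: keeps its privileges and performs only the small set of
       privileged operations requested by the child over sv[0]. */
    close(sv[1]);
    /* ... serve privileged requests, then wait for the child ... */
    return 0;
}

Ordering matters in such a sketch: supplementary groups and the group ID are changed while the process still has root, and setuid() comes last, because once the real and effective user IDs are unprivileged the process can no longer regain them.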
Privilege separation
[ "Engineering" ]
552
[ "Cybersecurity engineering", "Computer security procedures" ]
1,039,033
https://en.wikipedia.org/wiki/Cope%27s%20rule
Cope's rule, named after American paleontologist Edward Drinker Cope, postulates that population lineages tend to increase in body size over evolutionary time. It was never actually stated by Cope, although he favoured the occurrence of linear evolutionary trends. It is sometimes also known as the Cope–Depéret rule, because Charles Depéret explicitly advocated the idea. Theodor Eimer had also done so earlier. The term "Cope's rule" was apparently coined by Bernhard Rensch, based on the fact that Depéret had "lionized Cope" in his book. While the rule has been demonstrated in many instances, it does not hold true at all taxonomic levels, or in all clades. Larger body size is associated with increased fitness for a number of reasons, although there are also some disadvantages both on an individual and on a clade level: clades comprising larger individuals are more prone to extinction, which may act to limit the maximum size of organisms. Function Effects of growth Directional selection appears to act on organisms' size, whereas it exhibits a far smaller effect on other morphological traits, though it is possible that this perception may be a result of sample bias. This selectional pressure can be explained by a number of advantages, both in terms of mating success and survival rate. For example, larger organisms find it easier to avoid or fight off predators and capture prey, to reproduce, to kill competitors, to survive temporary lean times, and to resist rapid climatic changes. They may also potentially benefit from better thermal efficiency, increased intelligence, and a longer lifespan. Offsetting these advantages, larger organisms require more food and water, and shift from r to K-selection. Their longer generation time means a longer period of reliance on the mother, and on a macroevolutionary scale restricts the clade's ability to evolve rapidly in response to changing environments. Capping growth Left unfettered, the trend of ever-larger size would produce organisms of gargantuan proportions. Therefore, some factors must limit this process. At one level, it is possible that the clade's increased vulnerability to extinction, as its members become larger, means that no taxon survives long enough for individuals to reach huge sizes. There are probably also physically imposed limits to the size of some organisms; for instance, insects must be small enough for oxygen to diffuse to all parts of their bodies, flying birds must be light enough to fly, and the length of giraffes' necks may be limited by the blood pressure it is possible for their hearts to generate. Finally, there may be a competitive element, in that changes in size are necessarily accompanied by changes in ecological niche. For example, terrestrial carnivores over 21 kg almost always prey on organisms larger, not smaller, than themselves. If such a niche is already occupied, competitive pressure may oppose the directional selection. The three Canidae clades (Hesperocyoninae, Borophaginae, and Caninae) all show a trend towards larger size, although the first two are now extinct. Validity Cope recognised that clades of Cenozoic mammals appeared to originate as small individuals, and that body mass increased through a clade's history. Blaire Van Valkenburgh of UCLA and coworkers have discussed the case of canid evolution in North America in this context. In some cases, the increase in size may represent a passive, rather than an active, trend.
In other words, the maximum size increases, but the minimum size does not; this is usually a result of size varying pseudo-randomly rather than directed evolution. This does not fall into Cope's rule sensu stricto, but is considered by many workers to be an example of "Cope's rule sensu lato". In other cases, an increase in size may in fact represent a transition to an optimal body size, and not imply that populations always develop to a larger size. However, many palaeobiologists are skeptical of the validity of Cope's rule, which may merely represent a statistical artefact. Purported examples of Cope's rule often assume that the stratigraphic age of fossils is proportional to their "clade rank", a measure of how derived they are from an ancestral state; this relationship is in fact quite weak. Counterexamples to Cope's rule are common throughout geological time; although size increase does occur more often than not, it is by no means universal. For example, among genera of Cretaceous molluscs, an increase in size is no more common than stasis or a decrease. In many cases, Cope's rule only operates at certain taxonomic levels (for example, an order may obey Cope's rule, while its constituent families do not), or more generally, it may apply to only some clades of a taxon. Giant dinosaurs appear to have evolved dozens of times, in response to local environmental conditions. Despite many counter-examples, Cope's rule is supported in many instances. For example, all marine invertebrate phyla except the molluscs show a size increase between the Cambrian and Permian. Collectively, dinosaurs exhibit an increase in body length over their evolution. Cope's rule also appears to hold in clades where a constraint on size is expected. For instance, one may expect the size of birds to be constrained, as larger masses mean more energy must be expended in flight. Birds have been suggested to follow Cope's law, although a subsequent reanalysis of the same data suggested otherwise. An extensive study published in 2015 supports the presence of a trend toward larger body size in marine animals during the Phanerozoic. However, this trend was present mainly in the Paleozoic and Cenozoic; the Mesozoic was a period of relative stasis. The trend is not attributable simply to neutral drift in body size from small ancestors, and was mainly driven by a greater rate of diversification in classes of larger mean size. A smaller component of the overall trend is due to trends of increasing size within individual families. Notes References Animal size Evolutionary biology Biological rules
Cope's rule
[ "Biology" ]
1,250
[ "Evolutionary biology", "Organism size", "Biological rules", "nan", "Animal size" ]
1,039,051
https://en.wikipedia.org/wiki/Springing
Springing as a nautical term refers to global (vertical) resonant hull girder vibrations induced by continuous wave loading. When the global hull girder vibrations occur as a result of an impulsive wave loading, for example a wave slam at the bow (bow-slamming) or stern (stern-slamming), the phenomenon is denoted by the term whipping. Springing is a resonance phenomenon, and it can occur when the natural frequency of the 2-node vertical vibration of the ship equals the wave encounter frequency or a multiple thereof. Whipping is a transient phenomenon of the same hull girder vibrations due to excessive impulsive loading in the bow or stern of the vessel. The 2-node natural frequency is the lowest and thereby the most dominant resonant mode leading to hull girder stress variations, though in theory higher vibration modes will be excited as well. Springing-induced vibrations can already be present in low or moderate sea states when resonant conditions occur between wave lengths present in the wave spectrum and the hull girder natural modes, while whipping typically requires rough sea states before the very localized slamming impact has sufficient energy to excite the global structural vibration modes. The hydrodynamic theory of springing is not yet fully understood due to the complex description of the interaction between surface waves and the structure. It is, however, well known that larger ships with longer resonant periods are more susceptible to this type of vibration. Ships of this type include very large crude carriers and bulk carriers, but possibly also container vessels. The first experience with this phenomenon was related to fatigue cracking on 700 ft Great Lakes bulk carriers during the 1950s. Later 1000 ft Great Lakes bulk carriers experienced the same problems even after strength specifications increased. The Great Lakes bulk carriers are typically rather blunt and slender ships (length to width ratio of 10) sailing at shallow draft, resulting in long natural periods of about 2 seconds. This mode can be excited by short waves in the wave spectrum. A rather complete overview of full-scale experience and the relevant literature on springing can be found in the references. The container ships are more slender, have higher service speeds and have more pronounced bow flares. Container ships are also known to experience significant whipping (transient) vibrations from bow impacts. Blunt ships may also experience whipping, especially with flat bottom impacts in the bow area. The bottom part of the bow, however, rarely exits from the water on such ships. Vibration from whipping may also increase the extreme loading of ships, potentially resulting in vessels breaking in two in severe storms. In extreme cases, springing may cause severe fatigue cracking of critical structural details, especially in moderate to rough head seas with low peak periods. Vibration is normally more easily excited by waves in ballast condition than in cargo condition. The converse may also be true since some ships experience more head wind and waves in ballast conditions, while other ships may experience more head wind and waves in cargo condition, thereby vibrating less overall. Ocean-going ships have not had this problem until recently, when high tensile strength steel was introduced as a common material in the whole ship to reduce initial costs. This makes the ships less stiff and the nominal stress level higher.
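The resonance condition described above can be written out explicitly using the standard deep-water encounter-frequency relation. This is a generic textbook formula rather than one taken from the article, and the symbols (wave frequency ω, ship speed U, heading angle β, acceleration of gravity g, 2-node natural frequency ω2) are introduced here purely for illustration.
\omega_e = \omega - \frac{\omega^{2} U}{g}\cos\beta
In this convention β = 180° corresponds to head seas, so the encounter frequency exceeds the wave frequency; linear springing occurs when ω_e ≈ ω2, while higher-order (sum-frequency) excitation can occur when an integer multiple of the encounter frequency matches the 2-node natural frequency.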
Today's ship specifications do not account for springing which may be the dominant fatigue factor for some vessels. References Naval architecture
Springing
[ "Engineering" ]
661
[ "Naval architecture", "Marine engineering" ]
1,039,065
https://en.wikipedia.org/wiki/Kombu
Konbu (from Japanese 昆布) is edible kelp mostly from the family Laminariaceae and is widely eaten in East Asia. It may also be referred to as dasima (Korean) or haidai (Chinese). Kelp features in the diets of many civilizations, including Chinese and Icelandic; however, the largest consumers of kelp are the Japanese, who have incorporated kelp and seaweed into their diets for over 1,500 years. Prominent species There are about eighteen edible species in Laminariaceae and most of them, but not all, are called kombu. Confusingly, species of Laminariaceae have multiple names in biology and in fisheries science. In the following list, fisheries science synonyms are in parentheses, and Japanese names follow them. Saccharina japonica (Laminaria japonica), Saccharina japonica var. religiosa (Laminaria religiosa), Saccharina japonica var. diabolica (Laminaria diabolica), Saccharina japonica var. ochotensis (Laminaria ochotensis) – commonly used for soup stocks, Saccharina latissima (Laminaria saccharina), Karafuto-kombu – contains mannitol and is considered sweeter, Saccharina angustata (Laminaria angustata) – commonly used in the making of dashi, Saccharina longissima (Laminaria longissima), Saccharina coriacea (Laminaria coriacea), Saccharina sculpera (Kjellmaniella sculpera), Saccharina longipedalis (Laminaria longipedalis), Enaga-kombu, Saccharina gyrata (Kjellmaniella gyrata), Saccharina cichorioides (Laminaria cichorioides), Chijimi-kombu, Arthrothamnus bifidus. Etymology Kombu is a loanword from Japanese. In Old Japanese, edible seaweed was generically called "me" (cf. wakame, arame) and kanji such as "軍布", 海藻 or "和布" were applied to transcribe the word. In particular, kombu was called hirome (from hiroi, wide) or ebisume (from ebisu). Sometime later the names konfu and kofu appeared respectively in two editions of the Iroha Jiruishō in the 12th–13th centuries. Various theories have been proposed for the origin of the name kombu, with the following two predominant today. One is that it originated from the on'yomi (Sino-Japanese reading) of the Chinese name 昆布 (kūnbù). The kanji itself can already be seen in the Shōsōin Monjo (8th century) and the Shoku Nihongi (797) in Japan, and can be traced back further in China, as early as the 3rd century, to the book Wupu Bencao (around 239). Li Shizhen discussed kūnbù in his Bencao Gangmu (1596). Another possibility to explain the association arises because descriptions of kūnbù in Chinese documents are vague and inconsistent, and it is impossible to identify to which seaweed the term might have applied. For instance, Chen Cangqi (681–757) noted: "kūnbù is produced in the South China Sea; its leaf is like a hand and the size is the same as a silver grass and a reed, is of red purple; the thin part of leaf is seaweed", which is similar to wakame, arame, kurome, or kajime (Ecklonia cava). The difficulty is that, at least at that time, kombu was produced in neither the East China Sea nor the South China Sea. Moreover, following Zhang Yxi, Li Shizhen classified kūnbù and haidai (which stands for kombu in Chinese) as different things, and this classification continues in China today. History Although archaeological evidence of seaweed is hard to find because of its easy decomposition, some plant remains of wakame seaweed have been found in ruins from the Jōmon Period, which leads to the supposition that kombu was also eaten at that time.
As to surviving documents, the letters 軍布 (in Sino-Japanese reading 軍 is gun/kun; 布 is fu/pu/bu) appeared in Man'yōshū and wood strips from Fujiwara-kyō, and may have indicated kombu. The Shoku Nihongi (797) reports that in 797 the Emishi (Ainu or Tohoku region people) stated that they had been offering up kombu, which grew there, as tribute to the Yamato court every year without fail. The Engishiki (927) also reports that kombu had been offered up by Mutsu. During the Muromachi period, a newly developed drying technique allowed kombu to be stored for more than a few days, and it became an important export from the Tohoku area. By the Edo period, as Hokkaidō was colonized and shipment routes were organized, the use of kombu became widespread throughout Japan. Traditional Okinawan cuisine relies heavily on kombu as a part of the diet; this practice began in the Edo period. Okinawa uses more kombu per household than any other prefecture. In the 20th century, a way to cultivate kombu was discovered and it became cheap and readily available. In 1867, the word "kombu" first appeared in an English-language publication—A Japanese and English Dictionary by James Curtis Hepburn. Umami, a basic taste, was first scientifically identified in 1908 by Kikunae Ikeda through his experimentation with kombu. He found that glutamic acid was responsible for the palatability of the dashi broth created from kombu, and was a distinct sensation from sweet, sour, bitter, and salty tastes. Ikeda named the newly discovered taste umami (うま味), from the Japanese word umai (うまい, "delicious"). Since the 1960s, dried kombu has been exported from Japan to many countries. It was available initially at Asian, and especially Japanese, food shops and restaurants, and can be found in supermarkets, health-food stores, and other nonspecializing suppliers. Cooking Japan Kombu is sold dried (dashi konbu) or pickled in vinegar (su konbu) or as a dried shred (oboro konbu, tororo konbu or shiraga konbu). It may also be eaten fresh in sashimi. Kombu is used extensively in Japanese cuisine as one of the three main ingredients needed to make dashi, a soup stock. Konbu dashi is made by putting either whole dried or powdered kombu in cold water and heating it to near-boiling. The softened kombu is commonly eaten after cooking or is sliced and used to make tsukudani, a dish that is simmered in soy sauce and mirin. Kombu may be pickled with sweet-and-sour flavoring, cut into small strips about 5 or 6 cm long and 2 cm wide. These are often eaten as a snack with green tea. It is often included when cooking beans, putatively to add nutrients and improve their digestibility. Konbu-cha or kobu-cha is a tea made by infusing kombu in hot water. What Americans call kombucha is called "kōcha kinoko" in Japan. Kombu is also used to prepare a seasoning for rice to be made into sushi. Nutrition and health effects Kombu is a good source of glutamic acid, an amino acid responsible for umami (the Japanese word used for a basic taste identified in 1908). Several foodstuffs in addition to kombu provide glutamic acid or glutamates. Kombu contains extremely high levels of iodine. While this element is essential for normal growth and development, the levels in kombu can cause overdoses; it has been blamed for thyroid problems after drinking large amounts of soy milk in which kombu was an additive. It is also a source of dietary fiber.
Algae including kombu also contain entire families of obscure enzymes that break down complex sugars that are normally indigestible to the human gut (thus gas-causing). It also contains the well-studied alpha-galactosidase and beta-galactosidase enzymes. Biofuel Genetically manipulated E. coli bacteria can digest kombu into ethanol, making it a possible maritime biofuel source. See also Notes References Davidson, Alan. Oxford Companion to Food (1999), "Kombu", p. 435 Culture of Kelp (Laminaria japonica) in China External links Kombu seaweed encyclopedia Plant common names Japanese condiments Japanese cuisine terms Laminariaceae Edible seaweeds Umami enhancers
Kombu
[ "Biology" ]
1,916
[ "Plant common names", "Common names of organisms", "Plants" ]
1,039,075
https://en.wikipedia.org/wiki/Aroma%20compound
An aroma compound, also known as an odorant, aroma, fragrance or flavoring, is a chemical compound that has a smell or odor. For an individual chemical or class of chemical compounds to impart a smell or fragrance, it must be sufficiently volatile for transmission via the air to the olfactory system in the upper part of the nose. As examples, various fragrant fruits have diverse aroma compounds, particularly strawberries which are commercially cultivated to have appealing aromas, and contain several hundred aroma compounds. Generally, molecules meeting this specification have molecular weights of less than 310. Flavors affect both the sense of taste and smell, whereas fragrances affect only smell. Flavors tend to be naturally occurring, and the term fragrances may also apply to synthetic compounds, such as those used in cosmetics. Aroma compounds can naturally be found in various foods, such as fruits and their peels, wine, spices, floral scent, perfumes, fragrance oils, and essential oils. For example, many form biochemically during the ripening of fruits and other crops. Wines have more than 100 aromas that form as byproducts of fermentation. Also, many of the aroma compounds play a significant role in the production of compounds used in the food service industry to flavor, improve, and generally increase the appeal of their products. An odorizer may add a detectable odor to a dangerous odorless substance, like propane, natural gas, or hydrogen, as a safety measure. Aroma compounds classified by structure Esters Linear terpenes Cyclic terpenes Note: Carvone, depending on its chirality, offers two different smells. Aromatic Amines Other aroma compounds Alcohols Furaneol (strawberry) 1-Hexanol (herbaceous, woody) cis-3-Hexen-1-ol (fresh cut grass) Menthol (peppermint) Aldehydes High concentrations of aldehydes tend to be very pungent and overwhelming, but low concentrations can evoke a wide range of aromas. Acetaldehyde (ethereal) Hexanal (green, grassy) cis-3-Hexenal (green tomatoes) Furfural (burnt oats) Hexyl cinnamaldehyde Isovaleraldehyde – nutty, fruity, cocoa-like Anisic aldehyde – floral, sweet, hawthorn. It is a crucial component of chocolate, vanilla, strawberry, raspberry, apricot, and others. Cuminaldehyde (4-propan-2-ylbenzaldehyde) – Spicy, cumin-like, green Esters Fructone (fruity, apple-like) Ethyl methylphenylglycidate (Strawberry) alpha-Methylbenzyl acetate (Gardenia) Ketones Cyclopentadecanone (musk-ketone) Dihydrojasmone (fruity woody floral) Oct-1-en-3-one (blood, metallic, mushroom-like) 2-Acetyl-1-pyrroline (fresh bread, jasmine rice) 6-Acetyl-2,3,4,5-tetrahydropyridine (fresh bread, tortillas, popcorn) Lactones gamma-Decalactone intense peach flavor gamma-Nonalactone coconut odor, popular in suntan lotions delta-Octalactone creamy note Jasmine lactone powerful fatty-fruity peach and apricot Massoia lactone powerful creamy coconut Wine lactone sweet coconut odor Sotolon (maple syrup, curry, fenugreek) Thiols Thioacetone (2-propanethione) A lightly studied organosulfur. Its smell is so potent it can be detected several hundred meters downwind mere seconds after a container is opened. 
Allyl thiol (2-propenethiol; allyl mercaptan; CH2=CHCH2SH) (garlic volatiles and garlic breath) (Methylthio)methanethiol (CH3SCH2SH), the "mouse thiol", found in mouse urine and functions as a semiochemical for female mice Ethanethiol, commonly called ethyl mercaptan (added to propane or other liquefied-petroleum gases used as fuel gases) 2-Methyl-2-propanethiol, commonly called tert-butyl mercaptan, is added as a blend of other components to natural gas used as fuel gas. Butane-1-thiol, commonly called butyl mercaptan, is a chemical intermediate. Grapefruit mercaptan (grapefruit) Methanethiol, commonly called methyl mercaptan (after eating Asparagus) Furan-2-ylmethanethiol, also called furfuryl mercaptan (roasted coffee) Benzyl mercaptan (leek or garlic-like) Miscellaneous compounds Methylphosphine and dimethylphosphine (garlic-metallic, two of the most potent odorants known) Phosphine (zinc phosphide poisoned bait) Diacetyl (butter flavor) Acetoin (butter flavor) Nerolin (orange flowers) Tetrahydrothiophene (added to natural gas) 2,4,6-Trichloroanisole (cork taint) Substituted pyrazines Aroma-compound receptors Animals that are capable of smell detect aroma compounds with their olfactory receptors. Olfactory receptors are cell-membrane receptors on the surface of sensory neurons in the olfactory system that detect airborne aroma compounds. Aroma compounds can then be identified by gas chromatography-olfactometry, which involves a human operator sniffing the GC effluent. In mammals, olfactory receptors are expressed on the surface of the olfactory epithelium in the nasal cavity. Safety and regulation In 2005–06, fragrance mix was the third-most-prevalent allergen in patch tests (11.5%). 'Fragrance' was voted Allergen of the Year in 2007 by the American Contact Dermatitis Society. An academic study in the United States published in 2016 has shown that "34.7 % of the population reported health problems, such as migraine headaches and respiratory difficulties, when exposed to fragranced products". The composition of fragrances is usually not disclosed in the label of the products, hiding the actual chemicals of the formula, which raises concerns among some consumers. In the United States, this is because the law regulating cosmetics protects trade secrets. In the United States, fragrances are regulated by the Food and Drug Administration if present in cosmetics or drugs, by the Consumer Products Safety Commission if present in consumer products. No pre-market approval is required, except for drugs. Fragrances are also generally regulated by the Toxic Substances Control Act of 1976 that "grandfathered" existing chemicals without further review or testing and put the burden of proof that a new substance is not safe on the EPA. The EPA, however, does not conduct independent safety testing but relies on data provided by the manufacturer. A 2019 study of the top-selling skin moisturizers found 45% of those marketed as "fragrance-free" contained fragrance. List of chemicals used as fragrances In 2010, the International Fragrance Association published a list of 3,059 chemicals used in 2011 based on a voluntary survey of its members, identifying about 90% of the world's production volume of fragrances. 
See also Aroma of wine Eau de toilette Flavour and Fragrance Journal Fragrances of the World Foodpairing Odor Odor detection threshold Odorizer, a device for adding an odorant to gas flowing through a pipe Olfaction Olfactory receptor Olfactory system Pheromone vabbing References Organic chemistry Olfaction Flavors Perfume ingredients
Aroma compound
[ "Chemistry" ]
1,604
[ "nan" ]