Storage virtualization

In computer science, storage virtualization is "the process of presenting a logical view of the physical storage resources to" a host computer system, "treating all storage media (hard disk, optical disk, tape, etc.) in the enterprise as a single pool of storage."
A "storage system" is also known as a storage array, disk array, or filer. Storage systems typically use special hardware and software along with disk drives in order to provide very fast and reliable storage for computing and data processing. Storage systems are complex, and may be thought of as a special purpose computer designed to provide storage capacity along with advanced data protection features. Disk drives are only one element within a storage system, along with hardware and special purpose embedded software within the system.
Storage systems can provide either block accessed storage, or file accessed storage. Block access is typically delivered over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access is often provided using NFS or SMB protocols.
Within the context of a storage system, there are two primary types of virtualization that can occur:
Block virtualization used in this context refers to the abstraction (separation) of logical storage (partition) from physical storage so that it may be accessed without regard to physical storage or heterogeneous structure. This separation allows the administrators of the storage system greater flexibility in how they manage storage for end users.
File virtualization addresses the NAS challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored. This provides opportunities to optimize storage use and server consolidation and to perform non-disruptive file migrations.
Block virtualization
Address space remapping
Virtualization of storage helps achieve location independence by abstracting the physical location of the data. The virtualization system presents to the user a logical space for data storage and handles the process of mapping it to the actual physical location.
It is possible to have multiple layers of virtualization or mapping, where the output of one layer of virtualization is used as the input for a higher layer. Virtualization maps space between back-end resources and front-end resources. In this instance, "back-end" refers to a logical unit number (LUN) that is not presented to a computer or host system for direct use. A "front-end" LUN or volume is presented to a host or computer system for use.
The actual form of the mapping will depend on the chosen implementation. Some implementations may limit the granularity of the mapping which may limit the capabilities of the device. Typical granularities range from a single physical disk down to some small subset (multiples of megabytes or gigabytes) of the physical disk.
In a block-based storage environment, a single block of information is addressed using a LUN identifier and an offset within that LUN known as the logical block address (LBA).
Metadata
The virtualization software or device is responsible for maintaining a consistent view of all the mapping information for the virtualized storage. This mapping information is often called metadata and is stored as a mapping table.
The address space may be limited by the capacity needed to maintain the mapping table. The level of granularity and the total addressable space both directly impact the size of the metadata, and hence the mapping table. For this reason, it is common to have trade-offs between the amount of addressable capacity and the granularity of access.
One common method to address these limits is to use multiple levels of virtualization. In several storage systems deployed today, it is common to utilize three layers of virtualization.
Some implementations do not use a mapping table, and instead calculate locations using an algorithm. These implementations utilize dynamic methods to calculate the location on access, rather than storing the information in a mapping table.
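The two styles can be contrasted with a small sketch in Python. Everything here is illustrative only: the extent size, disk names, and table entries are invented, not taken from any real product.

```python
EXTENT = 16 * 2**20   # assumed 16 MiB mapping granularity

# Table-based: an explicit per-extent lookup table (the metadata).
mapping_table = {
    ("vdisk1", 0): ("physical7", 0),
    ("vdisk1", 1): ("physical3", 42),
}

def lookup_table(vdisk, extent):
    # Each mapping must be stored, so table size grows with capacity.
    return mapping_table[(vdisk, extent)]

# Algorithmic: the location is computed on every access, e.g. simple
# round-robin striping over N physical disks; nothing needs storing.
N_DISKS = 4

def lookup_algorithmic(extent):
    # Which disk the extent lands on, and its index on that disk.
    return (f"physical{extent % N_DISKS}", extent // N_DISKS)
```

The table gives complete freedom of placement at the cost of metadata capacity; the algorithm costs nothing to store but fixes the layout.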
I/O redirection
The virtualization software or device uses the metadata to redirect I/O requests. It receives an incoming I/O request containing information about the location of the data in terms of the logical disk (vdisk) and translates this into a new I/O request to the physical disk location.
For example, the virtualization device may:
Receive a read request for vdisk LUN ID=1, LBA=32
Perform a metadata lookup for LUN ID=1, LBA=32, and find that this maps to physical LUN ID=7, LBA=0
Send a read request to physical LUN ID=7, LBA=0
Receive the data back from the physical LUN
Send the data back to the originator as if it had come from vdisk LUN ID=1, LBA=32
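The steps above can be sketched in a few lines of Python. The metadata table mirrors the example's LUN and LBA numbers; the helper names are invented for illustration.

```python
metadata = {(1, 32): (7, 0)}   # (vdisk LUN, LBA) -> (physical LUN, LBA)

def physical_read(lun, lba):
    # Stand-in for the real back-end I/O request.
    return f"data@{lun}:{lba}"

def virtual_read(vlun, vlba):
    plun, plba = metadata[(vlun, vlba)]   # steps 1-2: metadata lookup
    data = physical_read(plun, plba)      # steps 3-4: redirected physical I/O
    return data                           # step 5: returned as if from the vdisk

print(virtual_read(1, 32))   # data@7:0
```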
Capabilities
Most implementations allow for heterogeneous management of multi-vendor storage devices within the scope of a given implementation's support matrix. This means that the following capabilities are not limited to a single vendor's device (as with similar capabilities provided by specific storage controllers) and are in fact possible across different vendors' devices.
Replication
Data replication techniques are not limited to virtualization appliances and as such are not described here in detail. However most implementations will provide some or all of these replication services.
When storage is virtualized, replication services must be implemented above the software or device that is performing the virtualization. This is true because it is only above the virtualization layer that a true and consistent image of the logical disk (vdisk) can be copied. This limits the replication services that some implementations can provide, or makes them seriously difficult to implement. If the virtualization is implemented in the network or higher, this renders any replication services provided by the underlying storage controllers useless.
Remote data replication for disaster recovery
Synchronous Mirroring where I/O completion is only returned when the remote site acknowledges the completion. Applicable for shorter distances (<200 km)
Asynchronous Mirroring where I/O completion is returned before the remote site has acknowledged the completion. Applicable for much greater distances (>200 km)
Point-In-Time Snapshots to copy or clone data for diverse uses
When combined with thin provisioning, enables space-efficient snapshots
Pooling
The physical storage resources are aggregated into storage pools, from which the logical storage is created. More storage systems, which may be heterogeneous in nature, can be added as and when needed, and the virtual storage space will scale up by the same amount. This process is fully transparent to the applications using the storage infrastructure.
Disk management
The software or device providing storage virtualization becomes a common disk manager in the virtualized environment. Logical disks (vdisks) are created by the virtualization software or device and are mapped (made visible) to the required host or server, thus providing a common place or way for managing all volumes in the environment.
Enhanced features are easy to provide in this environment:
Thin Provisioning to maximize storage utilization
This is relatively easy to implement as physical storage is only allocated in the mapping table when it is used.
Disk expansion and shrinking
More physical storage can be allocated by adding to the mapping table (assuming the using system can cope with online expansion)
Similarly disks can be reduced in size by removing some physical storage from the mapping (uses for this are limited as there is no guarantee of what resides on the areas removed)
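A minimal sketch of these two features, thin provisioning and online expansion, is shown below in Python. The extent-based layout and the naive allocator are assumptions for illustration, not any particular product's design.

```python
class ThinVdisk:
    def __init__(self, logical_extents):
        self.logical_extents = logical_extents  # size the host sees
        self.table = {}                         # logical extent -> physical extent
        self.next_free = 0                      # naive allocator over a flat pool

    def write(self, extent, data):
        if extent >= self.logical_extents:
            raise IndexError("write beyond logical size")
        if extent not in self.table:
            # Thin provisioning: physical space enters the mapping
            # table only on first write.
            self.table[extent] = self.next_free
            self.next_free += 1
        # a real system would now issue the physical write to
        # physical extent self.table[extent]

    def expand(self, extra_extents):
        # Online expansion: just grow the logical size; no data moves.
        self.logical_extents += extra_extents

v = ThinVdisk(logical_extents=1000)
v.write(0, b"x")
v.write(999, b"y")
print(len(v.table))   # 2: only written extents consume physical space
```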
Benefits
Non-disruptive data migration
One of the major benefits of abstracting the host or server from the actual storage is the ability to migrate data while maintaining concurrent I/O access.
The host only knows about the logical disk (the mapped LUN), and so any changes to the metadata mapping are transparent to the host. This means the actual data can be moved or replicated to another physical location without affecting the operation of any client. When the data has been copied or moved, the metadata can simply be updated to point to the new location, freeing up the physical storage at the old location.
The process of moving the physical location is known as data migration. Most implementations allow for this to be done in a non-disruptive manner, that is concurrently while the host continues to perform I/O to the logical disk (or LUN).
The mapping granularity dictates how quickly the metadata can be updated, how much extra capacity is required during the migration, and how quickly the previous location is marked as free. The smaller the granularity, the faster the update, the less extra space required, and the quicker the old storage can be freed up.
There are many day to day tasks a storage administrator has to perform that can be simply and concurrently performed using data migration techniques.
Moving data off an over-utilized storage device.
Moving data onto a faster storage device as needs require
Implementing an Information Lifecycle Management policy
Migrating data off older storage devices (either being scrapped or off-lease)
Improved utilization
Utilization can be increased by virtue of the pooling, migration, and thin provisioning services. This allows users to avoid over-buying and over-provisioning storage solutions. In other words, storage from a shared pool can be easily and quickly allocated as it is needed, avoiding the constraints on storage capacity that often hinder application performance.
When all available storage capacity is pooled, system administrators no longer have to search for disks that have free space to allocate to a particular host or server. A new logical disk can be simply allocated from the available pool, or an existing disk can be expanded.
Pooling also means that all the available storage capacity can potentially be used. In a traditional environment, an entire disk would be mapped to a host. This may be larger than is required, thus wasting space. In a virtual environment, the logical disk (LUN) is assigned the capacity required by the using host.
Storage can be assigned where it is needed at that point in time, reducing the need to guess how much a given host will need in the future. Using Thin Provisioning, the administrator can create a very large thin provisioned logical disk, thus the using system thinks it has a very large disk from day one.
Fewer points of management
With storage virtualization, multiple independent storage devices, even if scattered across a network, appear to be a single monolithic storage device and can be managed centrally.
However, traditional storage controller management is still required. That is, the creation and maintenance of RAID arrays, including error and fault management.
Risks
Backing out a failed implementation
Once the abstraction layer is in place, only the virtualizer knows where the data actually resides on the physical medium. Backing out of a virtual storage environment therefore requires the reconstruction of the logical disks as contiguous disks that can be used in a traditional manner.
Most implementations will provide some form of back-out procedure, and with the data migration services backing out is at least possible, but time consuming.
Interoperability and vendor support
Interoperability is a key enabler to any virtualization software or device. It applies to the actual physical storage controllers and the hosts, their operating systems, multi-pathing software and connectivity hardware.
Interoperability requirements differ based on the implementation chosen. For example, virtualization implemented within a storage controller adds no extra overhead to host based interoperability, but will require additional support of other storage controllers if they are to be virtualized by the same software.
Switch-based virtualization may not require specific host interoperability if it uses packet cracking techniques to redirect the I/O.
Network based appliances have the highest level of interoperability requirements as they have to interoperate with all devices, storage and hosts.
Complexity
Complexity affects several areas:
Management of environment: Although a virtual storage infrastructure benefits from a single point of logical disk and replication service management, the physical storage must still be managed. Problem determination and fault isolation can also become complex, due to the abstraction layer.
Infrastructure design: Traditional design approaches may no longer apply, as virtualization brings a whole range of new ideas and concepts to think about
The software or device itself: Some implementations are more complex to design and code than others, especially network-based, in-band (symmetric) designs; these implementations actually handle the I/O requests themselves, and so latency becomes an issue.
Metadata management
Information is one of the most valuable assets in today's business environments. Once the storage is virtualized, the metadata are the glue that binds the logical view to the physical data. If the metadata are lost, so is all the actual data, as it would be virtually impossible to reconstruct the logical drives without the mapping information.
Any implementation must ensure its protection with appropriate levels of back-ups and replicas. It is important to be able to reconstruct the meta-data in the event of a catastrophic failure.
The metadata management also has implications on performance. Any virtualization software or device must be able to keep all the copies of the metadata atomic and quickly updateable. Some implementations restrict the ability to provide certain fast update functions, such as point-in-time copies and caching where super fast updates are required to ensure minimal latency to the actual I/O being performed.
Performance and scalability
In some implementations the performance of the physical storage can actually be improved, mainly due to caching. Caching however requires the visibility of the data contained within the I/O request and so is limited to in-band and symmetric virtualization software and devices. However these implementations also directly influence the latency of an I/O request (cache miss), due to the I/O having to flow through the software or device. Assuming the software or device is efficiently designed this impact should be minimal when compared with the latency associated with physical disk accesses.
Due to the nature of virtualization, the mapping of logical to physical requires some processing power and lookup tables. Therefore, every implementation will add some small amount of latency.
In addition to response time concerns, throughput has to be considered. The bandwidth into and out of the meta-data lookup software directly impacts the available system bandwidth. In asymmetric implementations, where the meta-data lookup occurs before the information is read or written, bandwidth is less of a concern as the meta-data are a tiny fraction of the actual I/O size. In-band, symmetric flow through designs are directly limited by their processing power and connectivity bandwidths.
Most implementations provide some form of scale-out model, where the inclusion of additional software or device instances provides increased scalability and potentially increased bandwidth. The performance and scalability characteristics are directly influenced by the chosen implementation.
Implementation approaches
Host-based
Storage device-based
Network-based
Host-based
Host-based virtualization requires additional software running on the host, as a privileged task or process. In some cases volume management is built into the operating system, and in other instances it is offered as a separate product. Volumes (LUNs) presented to the host system are handled by a traditional physical device driver; however, a software layer (the volume manager) resides above the disk device driver, intercepts the I/O requests, and provides the metadata lookup and I/O mapping.
Most modern operating systems have some form of logical volume management built-in (in Linux called Logical Volume Manager or LVM; in Solaris and FreeBSD, ZFS's zpool layer; in Windows called Logical Disk Manager or LDM), that performs virtualization tasks.
Note: Host based volume managers were in use long before the term storage virtualization had been coined.
Pros
Simple to design and code
Supports any storage type
Improves storage utilization without thin provisioning restrictions
Cons
Storage utilization optimized only on a per host basis
Replication and data migration only possible locally to that host
Software is unique to each operating system
No easy way of keeping host instances in sync with other instances
Traditional Data Recovery following a server disk drive crash is impossible
Specific examples
Technologies:
Logical volume management
File systems (e.g., hard links, SMB/NFS)
Automatic mounting (e.g., autofs)
Storage device-based
Like host-based virtualization, several categories have existed for years and have only recently been classified as virtualization. Simple data storage devices, like single hard disk drives, do not provide any virtualization. But even the simplest disk arrays provide a logical to physical abstraction, as they use RAID schemes to join multiple disks into a single array (and possibly later divide the array into smaller volumes).
Advanced disk arrays often feature cloning, snapshots and remote replication. Generally these devices do not provide the benefits of data migration or replication across heterogeneous storage, as each vendor tends to use their own proprietary protocols.
A new breed of disk array controllers allows the downstream attachment of other storage devices. For the purposes of this article we will only discuss the latter style, which does actually virtualize other storage devices.
Concept
A primary storage controller provides the services and allows the direct attachment of other storage controllers. Depending on the implementation these may be from the same or different vendors.
The primary controller will provide the pooling and metadata management services. It may also provide replication and migration services across the controllers that it virtualizes.
Pros
No additional hardware or infrastructure requirements
Provides most of the benefits of storage virtualization
Does not add latency to individual I/Os
Cons
Storage utilization optimized only across the connected controllers
Replication and data migration only possible across the connected controllers, and only to the same vendor's devices for long-distance support
Downstream controller attachment limited to the vendor's support matrix
I/O latency: non-cache hits require the primary storage controller to issue a secondary downstream I/O request
Increased storage infrastructure resources: the primary storage controller requires the same bandwidth as the secondary storage controllers to maintain the same throughput
Network-based
Network-based storage virtualization operates on a network device (typically a standard server or smart switch) and uses iSCSI or Fibre Channel (FC) networks to connect as a SAN. These types of devices are the most commonly available and implemented form of virtualization.
The virtualization device sits in the SAN and provides the layer of abstraction between the hosts performing the I/O and the storage controllers providing the storage capacity.
Pros
True heterogeneous storage virtualization
Caching of data (performance benefit) is possible when in-band
Single management interface for all virtualized storage
Replication services across heterogeneous devices
Cons
Complex interoperability matrices, limited by vendor support
Difficult to implement fast metadata updates in switch-based devices
Out-of-band requires specific host-based software
In-band may add latency to I/O
In-band is the most complicated to design and code
Appliance-based vs. switch-based
There are two commonly available implementations of network-based storage virtualization, appliance-based and switch-based. Both models can provide the same services, disk management, metadata lookup, data migration and replication. Both models also require some processing hardware to provide these services.
Appliance-based devices are dedicated hardware devices that provide SAN connectivity of one form or another. These sit between the hosts and storage and in the case of in-band (symmetric) appliances can provide all of the benefits and services discussed in this article. I/O requests are targeted at the appliance itself, which performs the metadata mapping before redirecting the I/O by sending its own I/O request to the underlying storage. The in-band appliance can also provide caching of data, and most implementations provide some form of clustering of individual appliances to maintain an atomic view of the metadata as well as cache data.
Switch-based devices, as the name suggests, reside in the physical switch hardware used to connect the SAN devices. These also sit between the hosts and storage but may use different techniques to provide the metadata mapping, such as packet cracking to snoop on incoming I/O requests and perform the I/O redirection. It is much more difficult to ensure atomic updates of metadata in a switched environment and services requiring fast updates of data and metadata may be limited in switched implementations.
In-band vs. out-of-band
In-band, also known as symmetric, virtualization devices actually sit in the data path between the host and storage. All I/O requests and their data pass through the device. Hosts perform I/O to the virtualization device and never interact with the actual storage device. The virtualization device in turn performs I/O to the storage device. Caching of data, statistics about data usage, replications services, data migration and thin provisioning are all easily implemented in an in-band device.
Out-of-band, also known as asymmetric, virtualization devices are sometimes called meta-data servers. These devices only perform the meta-data mapping functions. This requires additional software in the host which knows to first request the location of the actual data. Therefore, an I/O request from the host is intercepted before it leaves the host, a meta-data lookup is requested from the meta-data server (this may be through an interface other than the SAN) which returns the physical location of the data to the host. The information is then retrieved through an actual I/O request to the storage. Caching is not possible as the data never passes through the device.
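The difference between the two data paths can be sketched with toy stand-ins for each component (all names invented): the in-band device sees the data flow through it and can therefore cache, while the out-of-band host agent only asks for a location and then reads the storage directly.

```python
metadata = {("vdisk1", 8): ("lun7", 8)}   # vdisk location -> physical
physical = {("lun7", 8): b"blk"}          # stand-in for the back-end storage
cache = {}

def in_band_read(vdisk, lba):
    # The device sits in the data path: it can answer from its cache;
    # otherwise it looks up the mapping and performs the back-end I/O itself.
    if (vdisk, lba) in cache:
        return cache[(vdisk, lba)]
    plun, plba = metadata[(vdisk, lba)]
    data = physical[(plun, plba)]
    cache[(vdisk, lba)] = data
    return data

def out_of_band_read(vdisk, lba):
    # The host agent asks the metadata server only for the location
    # (control path), then reads the storage directly (data path);
    # the data never passes through the virtualizer, so no caching there.
    plun, plba = metadata[(vdisk, lba)]
    return physical[(plun, plba)]
```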
File-based virtualization
File-based virtualization is a type of storage virtualization that uses files as the basic unit of storage. This is in contrast to block-based storage virtualization, which uses blocks as the basic unit. It is a way to abstract away the physical details of storage and allow files to be stored on any type of storage device, without the need for specific drivers or other low-level configuration.
File-based virtualization can be used for storage consolidation, improved storage utilization, virtualization and disaster recovery. This can simplify storage administration and reduce the overall number of storage devices that need to be managed. System administrators and software developers administer the virtual storage through offline operations using built-in or third-party tools.
Storage allocation
The two predominant storage allocation schemes in file-based storage virtualization are:
Preallocation of the entire storage for the virtual disk upon creation, or
Dynamic growth of the storage on demand
Preallocated storage
The virtual disk is implemented either split over a collection of flat files, typically each 2 GB in size (collectively called a split flat file), or as a single, large monolithic flat file. The pre-allocated storage scheme is also referred to as a thick provisioning scheme.
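Locating a byte within a split flat file is simple integer arithmetic, sketched here assuming the 2 GB split size mentioned above (GiB used for exactness; the layout is an illustration, not any specific disk format).

```python
SPLIT = 2 * 2**30   # assumed 2 GiB per flat file

def locate(virtual_offset):
    # Returns (which split file, byte offset within that file).
    return virtual_offset // SPLIT, virtual_offset % SPLIT

print(locate(5 * 2**30))   # the byte at 5 GiB lives 1 GiB into file 2
```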
Dynamic storage growth
The virtual disk can again be implemented using split or monolithic files, except that storage is allocated on demand. Several Virtual Machine Monitor implementations initialize the storage with zeros before providing it to the virtual machine that is in operation. The dynamic growth storage scheme is also referred to as a thin provisioning scheme.
Benefits
File-based virtualization can also improve storage utilization by allowing files to be stored on devices that are not being used to their full capacity. For example, if a file server has a number of hard drives that are only partially filled, file-based virtualization can be used to store files on those drives, thereby increasing the utilization of the storage devices.
File-based virtualization can be used to create a virtual file server (or virtual NAS device), which is a storage system that appears to the user as a single file server but which is actually implemented as a set of files stored on a number of physical file servers.
See also
Archive
Automated tiered storage
Storage hypervisor
Backup
Computer data storage
Data proliferation
Disk storage
Information lifecycle management
Information repository
Magnetic tape data storage
Repository
Spindle
Weapons-grade nuclear material

Weapons-grade nuclear material is any fissionable nuclear material that is pure enough to make a nuclear weapon and has properties that make it particularly suitable for nuclear weapons use. Plutonium and uranium in grades normally used in nuclear weapons are the most common examples. (These nuclear materials have other categorizations based on their purity.)
Only fissile isotopes of certain elements have the potential for use in nuclear weapons. For such use, the concentration of fissile isotopes uranium-235 and plutonium-239 in the element used must be sufficiently high. Uranium from natural sources is enriched by isotope separation, and plutonium is produced in a suitable nuclear reactor.
Experiments have been conducted with uranium-233 (the fissile material at the heart of the thorium fuel cycle). Neptunium-237 and some isotopes of americium might be usable, but it is not clear that this has ever been implemented. The latter substances are part of the minor actinides in spent nuclear fuel.
Critical mass
Any weapons-grade nuclear material must have a critical mass that is small enough to justify its use in a weapon. The critical mass for any material is the smallest amount needed for a sustained nuclear chain reaction. Moreover, different isotopes have different critical masses, and the critical mass for many radioactive isotopes is infinite, because the mode of decay of one atom cannot induce similar decay of more than one neighboring atom. For example, the critical mass of uranium-238 is infinite, while the critical masses of uranium-233 and uranium-235 are finite.
The critical mass for any isotope is influenced by any impurities and the physical shape of the material. The shape with minimal critical mass and the smallest physical dimensions is a sphere. Bare-sphere critical masses at normal density of some actinides are listed in the accompanying table. Most information on bare sphere masses is classified, but some documents have been declassified.
Countries that have produced weapons-grade nuclear material
At least ten countries have produced weapons-grade nuclear material:
Five recognized "nuclear-weapon states" under the terms of the Nuclear Non-Proliferation Treaty (NPT): the United States (first nuclear weapon tested and two bombs used as weapons in 1945), Russia (first weapon tested in 1949), the United Kingdom (1952), France (1960), and China (1964)
Three other declared nuclear states that are not signatories of the NPT: India (not a signatory, weapon tested in 1974), Pakistan (not a signatory, weapon tested in 1998), and North Korea (withdrew from the NPT in 2003, weapon tested in 2006)
Israel, which is widely known to have developed nuclear weapons (likely first tested in the 1960s or 1970s) but has not openly declared its capability
South Africa, which also had enrichment capabilities and developed nuclear weapons (possibly tested in 1979), but disassembled its arsenal and joined the NPT in 1991
Weapons-grade uranium
Natural uranium is made weapons-grade through isotopic enrichment. Initially only about 0.7% of it is fissile U-235, with the rest being almost entirely uranium-238 (U-238). They are separated by their differing masses. Highly enriched uranium is considered weapons-grade when it has been enriched to about 90% U-235.
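The scale of the enrichment effort can be illustrated with the standard separative work unit (SWU) calculation. The value function and mass balance below are the textbook formulas; the 0.3% tails assay is an assumed but typical figure, not taken from the article.

```python
import math

def V(x):
    # Standard separative-work "value function" for assay x.
    return (2 * x - 1) * math.log(x / (1 - x))

def enrich(product_kg, xp, xf=0.00711, xt=0.003):
    # Mass balance: feed and separative work needed to turn natural
    # uranium (assay xf) into product at assay xp, with tails at xt.
    feed = product_kg * (xp - xt) / (xf - xt)
    tails = feed - product_kg
    swu = product_kg * V(xp) + tails * V(xt) - feed * V(xf)
    return feed, swu

feed, swu = enrich(1.0, 0.90)
print(round(feed), round(swu))   # ~218 kg of natural uranium and ~193 SWU per kg of 90% HEU
```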
U-233 is produced from thorium-232 by neutron capture. The U-233 produced thus does not require enrichment and can be relatively easily chemically separated from residual Th-232. It is therefore regulated as a special nuclear material only by the total amount present. U-233 may be intentionally down-blended with U-238 to remove proliferation concerns.
While U-233 would thus seem ideal for weaponization, a significant obstacle to that goal is the co-production of trace amounts of uranium-232 due to side reactions. U-232 hazards, a result of its highly radioactive decay products such as thallium-208, are significant even at 5 parts per million (ppm). Implosion nuclear weapons require U-232 levels below 50 ppm, above which the U-233 is considered "low grade"; by comparison, standard weapons-grade plutonium requires a Pu-240 content of no more than 6.5% (65,000 ppm), and the analogous Pu-238 was produced at levels of 0.5% (5,000 ppm) or less. Gun-type fission weapons would require both low U-232 levels and low levels of light-element impurities, on the order of 1 ppm.
Weapons-grade plutonium
Pu-239 is produced artificially in nuclear reactors when a neutron is absorbed by U-238, forming U-239, which then decays in a rapid two-step process into Pu-239. It can then be separated from the uranium in a nuclear reprocessing plant.
Weapons-grade plutonium is defined as being predominantly Pu-239, typically about 93% Pu-239. Pu-240 is produced when Pu-239 absorbs an additional neutron and fails to fission. Pu-240 and Pu-239 are not separated by reprocessing. Pu-240 has a high rate of spontaneous fission, which can cause a nuclear weapon to pre-detonate. This makes plutonium unsuitable for use in gun-type nuclear weapons. To reduce the concentration of Pu-240 in the plutonium produced, weapons program plutonium production reactors (e.g. B Reactor) irradiate the uranium for a far shorter time than is normal for a nuclear power reactor. More precisely, weapons-grade plutonium is obtained from uranium irradiated to a low burnup.
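The "rapid two-step process" mentioned above is beta decay of U-239 to Np-239, followed by beta decay of Np-239 to Pu-239. A short sketch of simple exponential decay shows why a few weeks suffice for essentially all of the intermediate Np-239 to become Pu-239. The half-life figures used are standard reference values, not taken from the article.

```python
import math

HALF_LIFE_U239_MIN = 23.45     # U-239 -> Np-239 (beta decay), minutes
HALF_LIFE_NP239_DAYS = 2.356   # Np-239 -> Pu-239 (beta decay), days

def fraction_remaining(t, half_life):
    # Fraction of a radionuclide left after time t (same units as half_life).
    return math.exp(-math.log(2) * t / half_life)

# After ten Np-239 half-lives (about 24 days), under 0.1% of the
# intermediate remains; the rest has decayed to Pu-239.
print(fraction_remaining(10 * HALF_LIFE_NP239_DAYS, HALF_LIFE_NP239_DAYS))
```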
This represents a fundamental difference between these two types of reactor. In a nuclear power station, high burnup is desirable. Power stations such as the obsolete British Magnox and French UNGG reactors, which were designed to produce either electricity or weapons material, were operated at low power levels with frequent fuel changes using online refuelling to produce weapons-grade plutonium. Such operation is not possible with the light water reactors most commonly used to produce electric power. In these the reactor must be shut down and the pressure vessel disassembled to gain access to the irradiated fuel.
Plutonium recovered from LWR spent fuel, while not weapons grade, can be used to produce nuclear weapons at all levels of sophistication, though in simple designs it may produce only a fizzle yield. Weapons made with reactor-grade plutonium would require special cooling to keep them in storage and ready for use. A 1962 test at the U.S. Nevada National Security Site (then known as the Nevada Proving Grounds) used non-weapons-grade plutonium produced in a Magnox reactor in the United Kingdom. The plutonium used was provided to the United States under the 1958 US–UK Mutual Defence Agreement. Its isotopic composition has not been disclosed, other than the description reactor grade, and it has not been disclosed which definition was used in describing the material this way. The plutonium was apparently sourced from the Magnox reactors at Calder Hall or Chapelcross. The content of Pu-239 in material used for the 1962 test was not disclosed, but has been inferred to have been at least 85%, much higher than typical spent fuel from currently operating reactors.
Occasionally, low-burnup spent fuel has been produced by a commercial LWR when an incident such as a fuel cladding failure has required early refuelling. If the period of irradiation has been sufficiently short, this spent fuel could be reprocessed to produce weapons grade plutonium.
References
External links
Reactor-Grade and Weapons-Grade Plutonium in Nuclear Explosives, Canadian Coalition for Nuclear Responsibility
Nuclear weapons and power-reactor plutonium, Amory B. Lovins, February 28, 1980, Nature, Vol. 283, No. 5750, pp. 817–823
Nuclear weapons
Nuclear materials
Plutonium
Uranium
Amateur radio repeater

An amateur radio repeater is an electronic device that receives a weak or low-level amateur radio signal and retransmits it at a higher level or higher power, so that the signal can cover longer distances without degradation. Many repeaters are located on hilltops or on tall buildings as the higher location increases their coverage area, sometimes referred to as the radio horizon, or "footprint". Amateur radio repeaters are similar in concept to those used by public safety entities (police, fire department, etc.), businesses, government, military, and more. Amateur radio repeaters may even use commercially packaged repeater systems that have been adjusted to operate within amateur radio frequency bands, but more often amateur repeaters are assembled from receivers, transmitters, controllers, power supplies, antennas, and other components, from various sources.
Introduction
In amateur radio, repeaters are typically maintained by individual hobbyists or local groups of amateur radio operators. Many repeaters are provided openly to other amateur radio operators and typically not used as a remote base station by a single user or group. In some areas multiple repeaters are linked together to form a wide-coverage network, such as the linked system provided by the Independent Repeater Association which covers most of western Michigan, or the Western Intertie Network System ("WINsystem") that now covers a great deal of California, and is in 17 other states, including Hawaii, along with parts of four other countries, Australia, Canada, Great Britain and Japan.
Frequencies
Repeaters are found mainly in the VHF 6-meter (50–54 MHz), 2-meter (144–148 MHz) and 1.25-meter (220–225 MHz) bands and the UHF 70-centimeter (420–450 MHz) band, but can be used on almost any frequency pair above 28 MHz. In some areas, 33 centimeters (902–928 MHz) and 23 centimeters (1.24–1.3 GHz) are also used for repeaters. Note that different countries have different rules; for example, in the United States, the two-meter band is 144–148 MHz, while in the United Kingdom (and most of Europe) it is 144–146 MHz.
Repeater frequency sets are known as "repeater pairs", and in the ham radio community most follow ad hoc standards for the difference between the two frequencies, commonly called the offset. In the USA two-meter band, the standard offset is 600 kHz (0.6 MHz), but sometimes unusual offsets, referred to as oddball splits, are used. The actual frequency pair used is assigned by a local frequency coordinating council.
In the days of crystal-controlled radios, these pairs were identified by the last portion of the transmit (Input) frequency followed by the last portion of the receive (Output) frequency that the ham would put into the radio. Thus "three-four nine-four" (34/94) meant that hams would transmit on 146.34 MHz and listen on 146.94 MHz (while the repeater would do the opposite, listening on 146.34 and transmitting on 146.94). In areas with many repeaters, "reverse splits" were common (i.e., 94/34), to prevent interference between systems.
Since the late 1970s, the use of synthesized, microprocessor-controlled radios, and widespread adoption of standard frequency splits have changed the way repeater pairs are described. In 1980, a ham might have been told that a repeater was on "22/82"—today they will most often be told "682 down". The 6 refers to the last digit of 146 MHz, so that the display will read "146.82" (the output frequency), and the radio is set to transmit "down" 600 kHz on 146.22 MHz. Another way of describing a repeater frequency pair is to give the repeater's output frequency, along with the direction of offset ("+" or "plus" for an input frequency above the output frequency, "−" or "minus" for a lower frequency) with the assumption that the repeater uses the standard offset for the band in question. For instance, a 2-meter repeater might be described as "147.36 with a plus offset", meaning that the repeater transmits on 147.36 MHz and receives on 147.96 MHz, 600 kHz above the output frequency.
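The offset arithmetic described above can be sketched in a short helper. This is a hypothetical illustration; the band names, table, and function are inventions for this sketch, not part of any standard library, and the offsets are the common US values mentioned in the text.

```python
# Standard US repeater offsets, in MHz, as described in the text.
STANDARD_OFFSET_MHZ = {
    "2m": 0.600,     # 144-148 MHz band
    "1.25m": 1.600,  # 220-225 MHz band
    "70cm": 5.000,   # 420-450 MHz band
}

def repeater_input(output_mhz: float, band: str, direction: str) -> float:
    """Return the input (user transmit) frequency for a repeater described
    as '<output> with a <plus/minus> offset', using the band's standard split."""
    offset = STANDARD_OFFSET_MHZ[band]
    if direction == "+":
        return round(output_mhz + offset, 4)
    if direction == "-":
        return round(output_mhz - offset, 4)
    raise ValueError("direction must be '+' or '-'")

# "147.36 with a plus offset": transmit 600 kHz above the output.
print(repeater_input(147.36, "2m", "+"))  # 147.96
# "682 down": output 146.82, users transmit 600 kHz below.
print(repeater_input(146.82, "2m", "-"))  # 146.22
```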
Services
Services provided by a repeater may include an autopatch connection to a POTS/PSTN telephone line to allow users to make telephone calls from their keypad-equipped radios. These advanced services may be limited to members of the group or club that maintains the repeater. Many amateur radio repeaters typically have a tone access control (CTCSS, also called CG or PL tone) implemented to prevent them from being keyed-up (operated) accidentally by interference from other radio signals. A few use a digital code system called DCS, DCG or DPL (a Motorola trademark). In the UK most repeaters also respond to a short burst of 1750 Hz tone to open the repeater.
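One common way a controller can detect a single access tone, such as the 1750 Hz burst used in the UK, is the Goertzel algorithm, which measures the power in one frequency bin. The sketch below is an illustrative pure-Python version under assumed parameters (8 kHz sample rate, synthetic tone); the function name is an invention for this sketch.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Relative power of one frequency bin (Goertzel algorithm) --
    a typical building block for access-tone detection."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A synthetic 1750 Hz burst shows much more power in its own bin
# than in an unrelated one.
rate = 8000
tone = [math.sin(2 * math.pi * 1750 * t / rate) for t in range(800)]
assert goertzel_power(tone, rate, 1750) > goertzel_power(tone, rate, 1000)
```

A real controller would run this repeatedly over short windows of receiver audio and open the repeater once the 1750 Hz bin exceeds a threshold.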
In many communities, a repeater has become a major on-the-air gathering spot for the local amateur radio community, especially during "drive time" (the morning or afternoon commuting time). In the evenings local public service nets may be heard on these systems and many repeaters are used by weather spotters. In an emergency or a disaster a repeater can sometimes help to provide needed communications between areas that could not otherwise communicate. Until cellular telephones became popular, it was common for community repeaters to have "drive time" monitoring stations so that mobile amateurs could call in traffic accidents via the repeater to the monitoring station who could relay it to the local police agencies via telephone. Systems with autopatches frequently had (and still have) most of the public safety agencies numbers programmed as speed-dial numbers.
US repeater coordination
Repeater coordination is not required by the Federal Communications Commission (FCC), nor does the FCC certify or otherwise regulate frequency coordination for the amateur radio bands.
Amateur radio repeater coordinators and coordination groups are volunteers and have no legal authority to assume jurisdictional or regional control in any area where the FCC regulates the Amateur Radio Service. Title 47 of the United States Code of Federal Regulations, Part 97, the rules under which the Amateur Radio Service is regulated, defines the role of the frequency coordinator.
The purpose of coordinating a repeater or frequency is to reduce harmful interference to other fixed operations. Coordinating a repeater or frequency with other fixed operations demonstrates good engineering and amateur practice.
UK repeaters
In the UK, the frequency allocations for repeaters are managed by the Emerging Technology Co-ordination Committee (ETCC) of the Radio Society of Great Britain and licensed by Ofcom, the industry regulator for communications in the UK. Each repeater has a NOV (Notice of Variation) licence issued to a particular amateur radio callsign (this person is normally known as the "repeater keeper") thus ensuring the licensing authority has a single point of contact for that particular repeater.
Each repeater in the UK is normally supported by a repeater group composed of local amateur radio enthusiasts who pay a nominal amount e.g. £10–15 a year each to support the maintenance of each repeater and to pay for site rents, electricity costs etc. Repeater groups do not receive any central funding from other organisations.
Such groups include the Central Scotland FM Group and the Scottish Borders Repeater Group.
Repeater equipment
The most basic repeater consists of an FM receiver on one frequency and an FM transmitter on another frequency usually in the same radio band, connected together so that when the receiver picks up a signal, the transmitter is keyed and rebroadcasts whatever is heard.
In order to run the repeater a repeater controller is necessary. A repeater controller can be a hardware solution or even be implemented in software.
Repeaters typically have a timer to cut off retransmission of a signal that goes too long. Repeaters operated by groups with an emphasis on emergency communications often limit each transmission to 30 seconds, while others may allow three minutes or even longer. The timer restarts after a short pause following each transmission, and many systems feature a beep or chirp tone to signal that the timeout timer has reset.
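The timer behaviour described above can be sketched as a small state holder. The class and method names are hypothetical; the 30-second limit is the emergency-net example from the text.

```python
class TimeoutTimer:
    """Sketch of a repeater time-out timer: cuts off a transmission that
    runs past the limit, and resets after each transmission ends."""

    def __init__(self, limit_s=180.0):
        self.limit = limit_s
        self.key_time = None       # when the current transmission started

    def key_up(self, now):
        if self.key_time is None:
            self.key_time = now    # start timing this transmission

    def timed_out(self, now):
        return self.key_time is not None and now - self.key_time > self.limit

    def unkey(self):
        self.key_time = None       # timer restarts for the next transmission

t = TimeoutTimer(limit_s=30)       # emergency-net style 30 s limit
t.key_up(0.0)
print(t.timed_out(25.0))  # False
print(t.timed_out(31.0))  # True -- transmitter would be cut off here
t.unkey()
print(t.timed_out(31.0))  # False -- timer has reset
```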
Repeater types
Conventional repeaters
Conventional repeaters, also known as in-band or same-band repeaters, retransmit signals within the same frequency band, and they only repeat signals using a particular modulation scheme, predominantly FM.
Standard repeaters require either the use of two antennas (one each for transmitter and receiver) or a duplexer to isolate the transmit and receive signals over a single antenna. The duplexer is a device which prevents the repeater's high-power transmitter (on the output frequency) from drowning out the users' signal on the repeater receiver (on the input frequency). A diplexer allows two transmitters on different frequencies to use one antenna, and is common in installations where one repeater on 2 m and a second on 440 MHz share one feedline up the tower and one antenna.
Most repeaters are remotely controlled through the use of audio tones on a control channel.
Cross-band repeaters
A cross-band repeater (also sometimes called a replexer), is a repeater that retransmits a specific mode on a frequency in one band to a specific mode on a frequency in a different band. This technique allows for a smaller and less complex repeater system. Repeating signals across widely separated frequency bands allows for simple filters to be used to allow one antenna to be used for both transmit and receive at the same time. This avoids the use of complex duplexers to achieve the required rejection for same band repeating.
Most modern dual-band amateur transceivers are capable of cross-band repeater access. A smaller subset are capable of being used themselves as a crossband repeater.
Amateur television repeaters
Amateur television (ATV) repeaters are used by amateur radio operators to transmit full motion video. The bands used by ATV repeaters vary by country, but in the US a typical configuration is as a cross-band system with an input on the 33 or 23 cm band and output on 421.25 MHz or, sometimes, 426.25 MHz (within the 70 cm band). These output frequencies happen to be the same as standard cable television channels 57 and 58, meaning that anyone with a cable-ready analog NTSC TV can tune them in without special equipment.
There are also digital amateur TV repeaters that retransmit digital video signals. Frequently DVB-S modulation is used for digital ATV, due to narrow bandwidth needs and high loss tolerances. These DATV repeaters are more prevalent in Europe currently, partially because of the availability of DVB-S equipment.
Satellite repeaters
In addition, amateur radio satellites have been launched with the specific purpose of operating as space-borne amateur repeaters. The worldwide amateur satellite organization AMSAT designs and builds many of the amateur satellites, which are also known as OSCARs. Several satellites with amateur radio equipment on board have been designed and built by universities around the world. Also, several OSCARs have been built for experimentation. For example, NASA and AMSAT coordinated the release of SuitSat which was an attempt to make a low cost experimental satellite from a discarded Russian spacesuit outfitted with amateur radio equipment.
The repeaters on board a satellite may be of any type; the key distinction is that they are in orbit around the Earth, rather than terrestrial in nature. The three most common types of OSCARs are linear transponders, cross-band FM repeaters, and digipeaters (also referred to as pacsats).
Linear transponders
Amateur transponder repeaters are most commonly used on amateur satellites. A specified band of frequencies, usually 20 to 800 kHz wide, is repeated from one band to another. Transponders are not mode specific and typically no demodulation occurs. Any signal with a bandwidth narrower than the transponder's pass-band will be repeated; however, for technical reasons, use of modes other than SSB and CW is discouraged. Transponders may be inverting or non-inverting. An example of an inverting transponder would be a 70 cm to 2 m transponder which receives on 432.000–432.100 MHz and transmits on 146.000–146.100 MHz by inverting the frequency range within the band. In this example, a signal received at 432.001 MHz would be transmitted on 146.099 MHz. Voice signals using upper sideband modulation on the input would result in LSB modulation on the output, and vice versa.
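The inverting mapping in the example above can be written out directly. This is a sketch; the function name and rounding are illustrative, and the passband edges default to the 70 cm to 2 m example in the text.

```python
def inverting_transponder(uplink_mhz,
                          up_lo=432.000, up_hi=432.100, down_lo=146.000):
    """Map an uplink frequency through an inverting transponder:
    the passband is flipped end-for-end into the downlink band."""
    if not (up_lo <= uplink_mhz <= up_hi):
        raise ValueError("outside transponder passband")
    return round(down_lo + (up_hi - uplink_mhz), 3)

# The example from the text: 432.001 MHz up comes out at 146.099 MHz down.
print(inverting_transponder(432.001))  # 146.099
print(inverting_transponder(432.100))  # 146.0 -- top of uplink maps to bottom
```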
Store-and-forward systems
Another class of repeaters does not retransmit a signal on a different frequency while receiving it. Instead, they operate in a store-and-forward manner, receiving a transmission and then retransmitting it on the same frequency after a short delay.
These systems may not be legally classified as "repeaters", depending on the definition set by a country's regulator. For example, in the US, the FCC defines a repeater as an "amateur station that simultaneously retransmits the transmission of another amateur station on a different channel or channels." (CFR 47 97.205(b)) Store-and-forward systems neither retransmit simultaneously, nor use a different channel. Thus, they must be operated under different rules than more conventional repeaters.
Simplex repeater
A type of system known as a simplex repeater uses a single transceiver and a short-duration voice recorder, which records whatever the receiver picks up for a set length of time (usually 30 seconds or less), then plays back the recording over the transmitter on the same frequency. A common name is a "parrot" repeater.
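The record-then-replay cycle of a parrot repeater can be sketched as follows. The `receive_audio` and `transmit` interfaces are hypothetical stand-ins for the radio hardware; the 30-second cap is the typical limit mentioned above.

```python
def parrot_cycle(receive_audio, transmit, max_seconds=30, sample_rate=8000):
    """Sketch of one simplex ('parrot') repeater cycle: record what the
    receiver hears (up to the limit), then replay it on the same frequency."""
    max_samples = max_seconds * sample_rate
    recording = receive_audio()[:max_samples]  # cap at the recording limit
    transmit(recording)                        # replay after the over ends
    return len(recording)

# Toy stand-ins for the radio hardware (hypothetical interfaces):
sent = []
n = parrot_cycle(lambda: [0.0] * (40 * 8000), sent.extend)
print(n)  # 240000 -- a 40-second transmission is truncated to the 30 s limit
```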
Digipeater
Another form of repeater used in amateur packet radio, a form of digital computer-to-computer communications, is dubbed a "digipeater" (for DIGItal rePEATER). Digipeaters are often used for activities and modes such as packet radio, Automatic Packet Reporting System, and D-STAR's digital data mode, as well as commercial digital modes such as DMR, P25 and NXDN. Some modes are full duplex and internet-linked.
SSTV repeater
An SSTV repeater is an amateur radio repeater station that relays slow-scan television signals. A typical SSTV repeater is equipped with a HF or VHF transceiver and a computer with a sound card, which serves as a demodulator/modulator of SSTV signals.
SSTV repeaters are used by amateur radio operators for exchanging pictures. If two stations cannot copy each other, they can still communicate through a repeater.
One type of SSTV repeater is activated by a station sending it a 1,750 Hz tone. The repeater sends K in morse code to confirm its activation, after which the station must start sending a picture within about 10 seconds. After reception, the received image is transmitted on the repeater's operation frequency. Another type is activated by the SSTV vertical synchronization signal (VIS code).
Depending on the software it uses (MMSSTV, JVComm32, MSCAN, for example), an SSTV repeater typically operates in common SSTV modes.
Repeater networks
Repeaters may be linked together in order to form what is known as a linked repeater system or linked repeater network. In such a system, when one repeater is keyed-up by receiving a signal, all the other repeaters in the network are also activated and will transmit the same signal. The connections between the repeaters are made via radio (usually on a different frequency from the published transmitting frequency) for maximum reliability. Some networks allow the user to turn additional repeaters and links on or off, typically using DTMF tones to control the network infrastructure. Such a system allows coverage over a wide area, enabling communication between amateurs often hundreds of miles (several hundred km) apart. These systems are used for area or regional communications, for example in Skywarn nets, where storm spotters relay severe weather reports. All the user has to know is which channel to use in which area.
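The key-up propagation and DTMF on/off control described above can be sketched as a small model. The class, method names, and site names are hypothetical.

```python
class LinkedNetwork:
    """Minimal sketch of a linked repeater system: when one repeater's
    receiver is keyed, every *enabled* repeater in the network transmits."""

    def __init__(self, names):
        self.enabled = {name: True for name in names}

    def dtmf(self, name, on):
        self.enabled[name] = on      # user turns a link on/off by tone command

    def key_up(self, source):
        # All enabled repeaters (including the source) repeat the signal.
        return sorted(name for name, on in self.enabled.items() if on)

net = LinkedNetwork(["valley", "ridge", "coast"])
net.dtmf("coast", False)             # drop the coast link via DTMF
print(net.key_up("valley"))  # ['ridge', 'valley']
```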
Voting systems
In order to get better receive coverage over a wide area, a similar linked setup can also be done with what is known as a voted receiver system. In a voted receiver, there are several satellite receivers set up to receive on the same frequency (the one that the users transmit on). All of the satellite receivers are linked to a voting selector panel that switches from receiver to receiver based on the best quieting (strongest) signal, and the output of the selector will actually trigger the central repeater transmitter. A properly adjusted voting system can switch many times a second and can actually "assemble" a multi-syllable word using a different satellite receiver for each syllable. Such a system can be used to widen coverage to low power mobile radios or handheld radios that otherwise would not be able to key up the central location, but can receive the signal from the central location without an issue. Voting systems require no knowledge or effort on the part of the user – the system just seems to have better-than-average handheld coverage.
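The per-frame selection a voting comparator performs can be sketched like this. The receiver names and SNR figures are invented for illustration; a real voter makes this choice many times per second, which is how a word can be assembled from several sites.

```python
def vote(frames):
    """Pick the best-quieting receiver for one audio frame.
    `frames` maps receiver name -> (snr_db, audio_fragment)."""
    best = max(frames, key=lambda rx: frames[rx][0])
    return best, frames[best][1]

# One frame's worth of reports from three satellite receivers:
frame = {
    "north_hill": (18.0, b"sy"),
    "downtown":   (31.5, b"l"),    # strongest (best-quieting) this frame
    "water_tank": (22.0, b"la"),
}
print(vote(frame)[0])  # downtown
```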
Internet linking
Repeaters may also be connected over the Internet using voice over IP (VoIP) techniques. VoIP links are a convenient way of connecting distant repeaters that would otherwise be unreachable by VHF/UHF radio propagation. Popular VoIP amateur radio network protocols include AllStarLink/HamVoIP, D-STAR, Echolink, IRLP, WIRES and eQSO. Digital Mobile Radio (DMR), D-STAR, Fusion, P25 and NXDN all have a codec in the user radio and, along with the encoded audio, also send and receive user number and destination information so one can talk to another specific user or a Talk Group. Two such worldwide networks are DMR-MARC and Brandmeister.
For example, a simplex gateway may be used to link a simplex repeater into a repeater network via the Internet.
Operating terms
Timing Out is the situation where a person talks too long and the repeater time-out timer (TOT) shuts off the repeater transmitter.
Kerchunking is transmitting a momentary signal to check a repeater without properly identifying. In many countries, such an act violates amateur radio regulations. The term "Kerchunk" can also apply to the sound a large FM transmitter makes when the operator switches it off and on.
References
External links
Repeater
Radio electronics
A23187

A23187 is a mobile ion-carrier that forms stable complexes with divalent cations (ions with a charge of +2). A23187 is also known as Calcimycin, Calcium Ionophore, Antibiotic A23187 and Calcium Ionophore A23187. It is produced during fermentation of Streptomyces chartreusensis.
Actions and uses
A23187 has antibiotic properties against gram positive bacteria and fungi. It also acts as a divalent cation ionophore, allowing these ions to cross cell membranes, which are usually impermeable to them. A23187 is most selective for Mn2+, somewhat less selective for Ca2+ and Mg2+, much less selective for Sr2+, and even less selective for Ba2+. The ionophore is used in laboratories to increase intracellular Ca2+ levels in intact cells. It also uncouples oxidative phosphorylation, the process cells use to synthesize Adenosine triphosphate, which they use for energy. In addition, A23187 inhibits mitochondrial ATPase activity. A23187 also induces apoptosis in some cells (e.g. mouse lymphoma cell line, or S49, and Jurkat cells) and prevents it in others (e.g. cells dependent on interleukin 3 that have had the factor withdrawn).
Inex Pharmaceuticals Corporation (Canada) reported an innovative application of A23187. Inex used A23187 as a molecular tool in order to make artificial liposomes loaded with anti-cancer drugs such as Topotecan.
In the IVF field, calcium ionophore can be used in cases of a low fertilization rate after an ICSI procedure, particularly with globozoospermia (round-headed sperm syndrome). The ionophore substitutes for the absent sperm acrosome and plays a role in oocyte activation after ICSI. A recommended protocol is 0.5 micrograms/ml applied twice for 10 minutes each, interrupted with fresh media, with a 30-minute incubation, followed by regular culture of the injected eggs for IVF.
Biosynthesis
The core biosynthetic enzymes are thought to include 3 proteins for the biosynthesis of the α-ketopyrrole moiety, 5 for modular type I polyketide synthases for the spiroketal ring, 4 for the biosynthesis of 3-hydroxyanthranilic acid, an N-methyltransferase tailoring enzyme, and a type II thioesterase.
Commercial availability
Commercially, A23187 is available as free acid, Ca2+ salt, and 4-brominated analog.
References
External links
A23187 from AG Scientific, another vendor
A23187 from BIOMOL, a vendor's product page
Calcimycin from Bioaustralis, a vendor's product page
Antibiotics
Ionophores
Benzoxazoles
Pyrroles
Uncouplers
Radio repeater

A radio repeater is a combination of a radio receiver and a radio transmitter that receives a signal and retransmits it, so that two-way radio signals can cover longer distances. A repeater sited at a high elevation can allow two mobile stations, otherwise out of line-of-sight propagation range of each other, to communicate. Repeaters are found in professional, commercial, and government mobile radio systems and also in amateur radio.
Repeater systems use two different radio frequencies; the mobiles transmit on one frequency, and the repeater station receives those transmissions and transmits on a second frequency. Since the repeater must transmit at the same time as the signal is being received, and may even use the same antenna for both transmitting and receiving, frequency-selective filters are required to prevent the receiver from being overloaded by the transmitted signal. Some repeaters use two different frequency bands to provide isolation between input and output or as a convenience.
In a communications satellite, a transponder serves a similar function, but the transponder does not necessarily demodulate the relayed signals.
Full duplex operation
A repeater is an automatic radio-relay station, usually located on a mountain top, tall building, or radio tower. It allows communication between two or more bases, mobile or portable stations that are unable to communicate directly with each other due to distance or obstructions between them.
The repeater receives on one radio frequency (the "input" frequency), demodulates the signal, and simultaneously re-transmits the information on its "output" frequency. All stations using the repeater transmit on the repeater's input frequency and receive on its output frequency. Since the repeater is usually located at an elevation higher than the other radios using it, their range is greatly extended.
Because the transmitter and receiver are on at the same time, isolation must exist to keep the repeater's own transmitter from degrading the repeater receiver. If the repeater transmitter and receiver are not isolated well, the repeater's own transmitter desensitizes the repeater receiver. The problem is similar to being at a rock concert and not being able to hear the weak signal of a conversation over the much stronger signal of the band.
In general, isolating the receiver from the transmitter is made easier by maximizing, as much as possible, the separation between input and output frequencies.
When operating through a repeater, mobile stations must transmit on a different frequency than the repeater output. Although the repeater site must be capable of simultaneous reception and transmission (on two different frequencies), mobile stations can operate in one mode at a time, alternating between receiving and transmitting; so, mobile stations do not need the bulky, and costly filters required at a repeater site. Mobile stations may have an option to select a "talk around" mode to transmit and receive on the same frequency; this is sometimes used for local communication within range of the mobile units.
Frequency separation: input to output
There is no set rule about spacing of input and output frequencies for all radio repeaters. Any spacing where the designer can get sufficient isolation between receiver and transmitter will work.
In some countries, under some radio services, there are agreed-on conventions or separations that are required by the system license. In the case of input and output frequencies in the United States, for example:
Amateur repeaters in the 144–148 MHz band usually use a 600 kHz (0.6 MHz) separation, in the 1.25-meter band use a 1.6 MHz separation, in the 420–450 MHz band use a 5 MHz separation, and in the 902–928 MHz band use a 25 MHz separation.
Systems in the 450–470 MHz band use a 5 MHz separation with the input on the higher frequency. Example: input is 456.900 MHz; output is 451.900 MHz.
Systems in the 806–869 MHz band use a 45 MHz separation with the input on the lower frequency. Example: input is 810.1875 MHz; output is 855.1875 MHz.
Military systems are suggested to use no less than a 10 MHz spacing.
These are just a few examples. There are many other separations or spacings between input and output frequencies in operational systems.
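The US separations listed above can be collected into a simple lookup table. This is a sketch covering only the bands named in the text; the table and function names are inventions for illustration.

```python
# Customary US input/output separations from the text, in MHz.
US_SEPARATIONS = [
    ((144.0, 148.0), 0.6),    # 2 m amateur
    ((220.0, 225.0), 1.6),    # 1.25 m amateur
    ((420.0, 450.0), 5.0),    # 70 cm amateur
    ((450.0, 470.0), 5.0),    # commercial; input on the higher frequency
    ((806.0, 869.0), 45.0),   # commercial; input on the lower frequency
    ((902.0, 928.0), 25.0),   # 33 cm amateur
]

def separation_for(freq_mhz):
    """Return the customary input/output spacing for a band, if listed."""
    for (lo, hi), sep in US_SEPARATIONS:
        if lo <= freq_mhz <= hi:
            return sep
    return None

print(separation_for(451.9))     # 5.0  -- e.g. input 456.900, output 451.900
print(separation_for(855.1875))  # 45.0 -- e.g. input 810.1875
```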
Same band frequencies
Same band repeaters operate with input and output frequencies in the same frequency band. For example, in US two-way radio, 30–50 MHz is one band and 150–174 MHz is another. A repeater with an input of 33.980 MHz and an output of 46.140 MHz is a same band repeater.
In same band repeaters, a central design problem is keeping the repeater's own transmitter from interfering with the receiver. Reducing the coupling between transmitter and input frequency receiver is called isolation.
Duplexer system
In same-band repeaters, isolation between transmitter and receiver can be created by using a single antenna and a device called a duplexer. The device is a tuned filter connected to the antenna. In this example, consider a type of device called a band-pass duplexer, which passes only a narrow range of frequencies.
There are two legs to the duplexer filter, one is tuned to pass the input frequency, the other is tuned to pass the output frequency. Both legs of the filter are coupled to the antenna. The repeater receiver is connected to the receive leg while the transmitter is connected to the transmit leg. The duplexer prevents degradation of the receiver sensitivity by the transmitter in two ways. First, the receive leg greatly attenuates the transmitter's carrier at the receiver input (typically by 90-100 dB), preventing the carrier from overloading (blocking) the receiver front end. Second, the transmit leg attenuates the transmitter broadband noise on the receiver frequency, also typically by 90-100 dB. By virtue of the transmitter and receiver being on different frequencies, they can operate at the same time on a single antenna.
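A rough decibel budget shows what 90–100 dB of isolation means in practice. The 50 W power level and the 90 dB figure below are illustrative assumptions for the arithmetic only, not specifications from the text.

```python
import math

def dbm(watts):
    """Convert power in watts to dBm (decibels relative to 1 milliwatt)."""
    return 10 * math.log10(watts * 1000)

tx_dbm = dbm(50)            # a 50 W transmitter is about +47 dBm
at_receiver = tx_dbm - 90   # after 90 dB of duplexer attenuation
print(round(tx_dbm), round(at_receiver))  # 47 -43
```

Even after 90 dB of attenuation the transmitter's carrier is far from negligible, which is why duplexer legs are sharply tuned and why the transmit-leg noise suppression matters as much as the carrier rejection.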
Combining system
There is often not enough tower space to accommodate a separate antenna for each repeater at crowded equipment sites. In same-band repeaters at engineered, shared equipment sites, repeaters can be connected to shared antenna systems. These are common in trunked systems, where up to 29 repeaters for a single trunked system may be located at the same site. (Some architectures such as iDEN sites may have more than 29 repeaters.)
In a shared system, a receive antenna is usually located at the top of the antenna tower. Putting the receive antenna at the top helps to capture weaker received signals than if it were the lower of the two. By splitting the received signal from the antenna, many receivers can work satisfactorily from a single antenna. Devices called receiver multicouplers split the signal from the antenna into many receiver connections. The multicoupler amplifies the signals reaching the antenna, then feeds them to several receivers, attempting to make up for losses in the power dividers (or splitters). These operate similarly to a cable TV splitter but must be built to higher quality standards so they work in environments where strong interfering signals are present.
On the transmitter side, a transmit antenna is installed somewhere below the receive antenna. There is an electrical relationship defined by the distance between transmit and receive antennas. A desirable null exists if the transmit antenna is located exactly below the receive antenna beyond a minimum distance. Almost the same isolation as a low-grade duplexer (about −60 decibels) can be accomplished by installing the transmit antenna below, and along the centerline of, the receive antenna. Several transmitters can be connected to the same antenna using filters called combiners. Transmitters usually have directional devices installed along with the filters that block any reflected power in the event the antenna malfunctions. The antenna must have a power rating that will handle the sum of energy of all connected transmitters at the same time.
Transmitter combining systems are lossy. As a rule of thumb, each leg of the combiner has a 50% (3 decibel) power loss. If two transmitters are connected to a single antenna through a combiner, half of each transmitter's power will reach the combiner output (assuming everything is working properly). If four transmitters are coupled to one antenna, a quarter of each transmitter's power will reach the output of the combining circuit. Part of this loss can be made up with increased antenna gain; with 3 decibels of antenna gain, fifty watts of transmitter power to the antenna produces a received signal strength at a distant mobile radio almost identical to that of 100 watts into a unity-gain antenna.
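The rule-of-thumb loss can be checked with a few lines of arithmetic; the function names are illustrative.

```python
import math

def combiner_output_w(tx_power_w, n_transmitters):
    """Power from one transmitter reaching the antenna after an ideal
    combiner: halved for each doubling of transmitters sharing it."""
    return tx_power_w / n_transmitters

def loss_db(n_transmitters):
    """The same combiner loss expressed in decibels."""
    return 10 * math.log10(n_transmitters)

print(combiner_output_w(100, 2))  # 50.0 -- half of each 100 W reaches antenna
print(combiner_output_w(100, 4))  # 25.0 -- a quarter with four transmitters
print(round(loss_db(4), 1))       # 6.0  -- dB, i.e. two 3 dB legs in cascade
```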
In trunked systems with many channels, a site design may include several transmit antennas to reduce combining network losses. For example, a six-channel trunked system may have two transmit antennas with three transmitters connected to each of the two transmit antennas. Because small variations affect every antenna, each antenna will have a slightly different directional pattern. Each antenna will interact with the tower and other nearby antennas differently. If one were to measure received signal levels, this would cause a variation among channels on a single trunked system. Variations in signal strength among channels on one trunked system can also be caused by:
failed parts in the combiner,
characteristics of the design,
loose connectors,
bad cables,
mistuned filters, or;
incorrectly installed components.
Modern
Cross-band repeaters are sometimes a part of government trunked radio systems. If one community is on a trunked system and the neighboring community is on a conventional system, a talk group or agency-fleet-subfleet may be designated to communicate with the other community. In an example where the community is on 153.755 MHz, transmitting on the trunked system talk group would repeat on 153.755 MHz. Signals received by a base station on 153.755 MHz would go over the trunked system on an assigned talk group.
In conventional government systems, cross-band repeaters are sometimes used to connect two agencies whose radio systems are on different bands. For example, a fire department in Colorado was on a 46 MHz channel while a police department was on a 154 MHz channel; the two agencies built a cross-band repeater to allow communication between them.
If one of the systems is simplex, the repeater must have logic preventing transmitter keying in both directions at the same time. Voting comparators with a transmitter keying matrix are sometimes used to connect incompatible base stations.
Historic
In looking at records of old systems, examples of cross-band commercial systems were found in every U.S. radio service where regulations allowed them. In California, specific systems using cross-band repeaters have existed at least since the 1960s. Historic examples of cross-band systems include:
Solano County Fire, (former Fire Radio Service): 46.240 input; 154.340 output. This system was dismantled in the 1980s and is now a same-band repeater.
Mid-Valley Fire District, Fresno, (former Fire Radio Service): 46.140 input; 154.445 output. This system was dismantled in the 1980s and is now a same-band repeater.
Santa Clara County Department of Parks and Recreation, (former Forestry Conservation Radio Service): 44.840 MHz input; 151.445 MHz output. This system was dismantled in the 1980s and is now a same-band repeater.
State of California, Governor's Office of Emergency Services, Fire, (former Fire Radio Service): 33.980 MHz input; 154.160 MHz output.
In commercial systems, manufacturers stopped making cross-band mobile radio equipment with acceptable specifications for public safety systems in the early 1980s. At the time, some systems were dismantled because new radio equipment was not available. Sporadic E ionospheric ducting can make frequencies at 46 MHz and below unworkable in summer.
As links
For decades, cross-band repeaters have been used as fixed links. The links can be used for remote control of base stations at distant sites or to send audio from a diversity (voting) receiver site back to the diversity combining system (voting comparator). Some legacy links occur in the US 150–170 MHz band. US Federal Communications Commission rule changes did not allow 150 MHz links after the 1970s. Newer links are more often seen on 72–76 MHz (Mid-band), 450–470 MHz interstitial channels, or 900 MHz links. These links, known as fixed stations in US licensing, typically connect an equipment site with a dispatching office.
Vehicular repeaters
Modern amateur radios sometimes include cross-band repeat capability native to the radio transceiver.
In commercial systems, cross-band repeaters are sometimes used in vehicular repeaters. For example, a 150 MHz hand held may communicate to a vehicle-mounted low-power transceiver. The low-power radio repeats transmissions from the portable over the vehicle's high power mobile radio, which has a much longer range. In these systems, the hand-held works so long as it is within range of the low power mobile repeater. The mobile radio is usually on a different band than the hand-held to reduce the chances of the mobile radio transmitter interfering with the transmission from the hand-held to the vehicle.
Motorola, for example, marketed a vehicular repeater system called PAC*RT. It was available for use with 150 MHz or 450 MHz hand-helds and interfaced with some Motorola mobile radios.
In the 1980s, General Electric Mobile Radio had a 463 MHz emergency medical services radio that featured a 453 MHz vehicular repeater link to a hand-held.
These systems pose a difficult engineering problem: if two vehicular repeaters end up at the same location, some protocol must be established so that one portable transmitting does not activate two or more mobile radio transmitters. Motorola's PAC*RT uses a hierarchy scheme: each repeater transmits a tone when it is turned on, and the last one on site to turn on is the one that gets used, so that several are not keyed at once.
Vehicular repeaters are complex but can be less expensive than designing a system that covers a large area and works with the weak signal levels of hand-held radios. Some models of radio signals suggest that the transmitters of hand-held radios create received signals at the base station one to two orders of magnitude (10 to 20 decibels or 10 to 100 times) weaker than a mobile radio with a similar transmitter output power.
Siting as part of system design
Radio repeaters are typically placed in locations which maximize their effectiveness for their intended purpose:
"Low-level" repeaters are used for local communications, and are placed at low altitude to reduce interference with other users of the same radio frequencies. Low-level systems are used for areas as large as an entire city, or as small as a single building.
"High-level" repeaters are placed on tall towers or mountaintops to maximize their area of coverage. With these systems, users with low-powered radios (such as hand-held "walkie-talkies") can communicate with each other over many miles.
Community repeater
Popular mainly in the UK, community-based radio systems usually consist of a community radio repeater (similar to an amateur repeater) for use by the community and local businesses, often for civic events, Shopwatch, PubWatch, Neighbourhood Watch, and community engagement. In larger towns, separate systems typically keep commercial and community use apart, whereas in smaller towns a single system typically serves the whole community.
Particular forms of RF repeaters
Broadcast relay station for broadcast television repeaters
Microwave radio relay for microwave RF telecommunications repeaters
Cellular repeater
Wireless repeater for WiFi
See also
Signal strength in telecommunications
External lists
UHF CB Australia – UHF CB news, information, and repeater locations
References
Radio electronics
Aphidicolin

Aphidicolin is a tetracyclic diterpene antibiotic isolated from the fungus Cephalosporium aphidicola with antiviral and antimitotic properties. Aphidicolin is a reversible inhibitor of eukaryotic nuclear DNA replication. It blocks the cell cycle at early S phase. It is a specific inhibitor of DNA polymerases α and δ in eukaryotic cells and in some viruses (vaccinia and herpesviruses), and an apoptosis inducer in HeLa cells. Natural aphidicolin is a secondary metabolite of the fungus Nigrospora oryzae.
Bibliography
References
Antibiotics
Transferase inhibitors
Diterpenes
Cyclopentanes
DNA polymerase inhibitors
National Science Advisor (Canada)

National Science Advisor to the prime minister was a post that existed from 2004 to 2008. Previously, in 2003, the Privy Council Office published A Framework for the Application of Precaution in Science-based Decision Making about Risk under the government of Prime Minister Jean Chrétien. It provided "a lens to assess whether precautionary decision making is in keeping with Canadians' social, environmental and economic values and priorities."
Arthur Carty officially started in the role on April 1, 2004. The advisor headed the Office of the National Science Advisor (ONSA), initially within Industry Canada and later moved to the Privy Council Office. Carty was previously the president of the National Research Council; when he retired on March 31, 2008, the position was eliminated under the government of Stephen Harper.
The 2015 Minister of Science mandate letter included a priority to create a new Chief Science Officer position, and on December 5, 2016, Minister of Science Kirsty Duncan announced the competition for the new position, to be called Chief Science Advisor. On September 26, 2017, Prime Minister Justin Trudeau announced that Mona Nemer would fill that role.
See also
The Council of Canadian Academies - An independent science advisory body for the Government of Canada
References
External links
Council of Science and Technology Advisors - "Members" (archive 10 April 2007)
Office of Science & Technology at the Embassy of Austria in Washington, DC. - "Arthur Carty: Science Advisor to the Canadian Prime Minister" (archive 15 April 2012)
Canada
2004 establishments in Canada
2008 disestablishments in Canada
History of science and technology in Canada
Political history of Canada
Veil Nebula

The Veil Nebula is a cloud of heated and ionized gas and dust in the constellation Cygnus.
It constitutes the visible portions of the Cygnus Loop, a supernova remnant, many portions of which have acquired their own individual names and catalogue identifiers. The source supernova was a star 20 times more massive than the Sun, which exploded between 10,000 and 20,000 years ago. At the time of the explosion, the supernova would have appeared brighter than Venus in the sky, and visible in the daytime. The remnants have since expanded to cover an area of the sky roughly 3 degrees in diameter (about 6 times the diameter, and 36 times the area, of the full Moon). While previous distance estimates have ranged from 1200 to 5800 light-years, a recent determination of 2400 light-years is based on direct astrometric measurements. (The distance estimates also affect the estimates of size and age.)
The Hubble Space Telescope captured several images of the nebula. The analysis of the emissions from the nebula indicates the presence of oxygen, sulfur, and hydrogen. The Cygnus Loop is also a strong emitter of radio waves and x-rays.
Components
In modern usage, the names Veil Nebula, Cirrus Nebula, and Filamentary Nebula generally refer to all the visible structure of the remnant, or even to the entire loop itself. The structure is so large that several NGC numbers were assigned to various arcs of the nebula. There are three main visual components:
The Western Veil (also known as Caldwell 34), consisting of NGC 6960 (the "Witch's Broom", Lacework Nebula, "Filamentary Nebula") near the foreground star 52 Cygni;
The Eastern Veil (also known as Caldwell 33), whose brightest area is NGC 6992, trailing off farther south into NGC 6995 (together with NGC 6992 also known as "Network Nebula") and IC 1340; and
Pickering's Triangle (or Pickering's Triangular Wisp), brightest at the north central edge of the loop, but visible in photographs continuing toward the central area of the loop.
NGC 6974 and NGC 6979 are luminous knots in a fainter patch of nebulosity on the northern rim between NGC 6992 and Pickering's Triangle.
Observation
The nebula was discovered on 5 September 1784 by William Herschel. He described the western end of the nebula as "Extended; passes thro' 52 Cygni... near 2 degree in length", and described the eastern end as "Branching nebulosity ... The following part divides into several streams uniting again towards the south."
When finely resolved, some parts of the nebula appear to be rope-like filaments. The standard explanation is that the shock waves are so thin, less than one part in 50,000 of the radius, that the shell is visible only when viewed exactly edge-on, giving the shell the appearance of a filament. At the estimated distance of 2400 light-years, the nebula has a radius of 65 light-years (a diameter of 130 light-years). The thickness of each filament is about 1/100,000th of the radius, or about 4 billion miles, roughly the distance from Earth to Pluto.
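The estimate above can be cross-checked numerically; a sketch assuming the standard miles-per-light-year conversion and a filament thickness of 1/100,000th of the radius (the fraction that reproduces the 4-billion-mile figure):

```python
LY_IN_MILES = 5.879e12          # miles per light-year (standard value)

radius_ly = 65                  # nebula radius from the distance estimate
radius_miles = radius_ly * LY_IN_MILES
filament_thickness = radius_miles / 100_000   # ~1/100,000th of the radius

print(f"{filament_thickness:.2e} miles")      # ~4e9 miles, about Earth-Pluto
```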
Undulations in the surface of the shell lead to multiple filamentary images, which appear to be intertwined.
Even though the nebula has a relatively bright integrated magnitude of 7, it is spread over so large an area that the surface brightness is quite low, so the nebula is notorious among astronomers as being difficult to see. However, an observer can see the nebula clearly in a telescope using an O-III astronomical filter (isolating the wavelength of light from doubly ionized oxygen), as almost all light from this nebula is emitted at this wavelength. A telescope equipped with an O-III filter shows the delicate lacework apparent in photographs. Smaller telescopes with an O-III filter can show the nebula as well, and some argue that it can be seen without any optical aid except an O-III filter held up to the eye.
The brighter segments of the nebula have the New General Catalogue designations NGC 6960, 6974, 6979, 6992, and 6995. The easiest segment to find is 6960, which runs behind 52 Cygni, a star that can be seen with the naked eye. NGC 6992 and 6995 are objects on the eastern side of the loop which are also relatively easy to see. NGC 6974 and NGC 6979 are visible as knots in an area of nebulosity along the northern rim. Pickering's Triangle is much fainter and has no NGC number (though 6979 is occasionally used to refer to it). It was discovered photographically in 1904 by Williamina Fleming (after the New General Catalogue was published), but credit went to Edward Charles Pickering, the director of her observatory, as was the custom of the day.
The Veil Nebula is expanding at a velocity of about 1.5 million kilometers per hour. Using images taken by the Hubble Space Telescope between 1997 and 2015, the expansion of the Veil Nebula has been directly observed.
See also
List of supernova remnants
References
External links
IC 1340, photograph – by David Malin, Australian Astronomical Observatory
"Uncovering the Veil Nebula" – spacetelescope.com, with several Hubble Space Telescope photos
APOD (2010-11-19) – Nebulae in the Northern Cross, showing Veil Nebula to scale in Cygnus
APOD (2010-09-16) – Photo of the entire Veil Nebula
APOD (2009-12-01) – NGC 6992: Filaments of the Veil Nebula
APOD (2003-01-18) – Filaments in the Cygnus Loop
APOD (1999-07-25) – Shockwaves in the Cygnus Loop (and underlying HST photo)
Cygnus Loop HST Photo Release – Bill Blair (Johns Hopkins University)
Photo combining optical and X-ray data – Bill Blair (Johns Hopkins University)
Bill Blair (Johns Hopkins University) – Overview photo of Cygnus Loop and Veil Nebula
Veil Nebula at Constellation Guide
17840905
033b
Cygnus (constellation)
NGC objects
Supernova remnants
Adapter (genetics)

An adapter or adaptor, in genetic engineering, is a short, chemically synthesized, double-stranded oligonucleotide that can be ligated to the ends of other DNA or RNA molecules. Double-stranded adapters differ from linkers in that they contain one blunt end and one sticky end. For instance, a double-stranded DNA adapter can be used to link the ends of two other DNA molecules whose ends are blunt, i.e., that lack "sticky ends" (complementary protruding single strands). An adapter may be used to add sticky ends to cDNA, allowing it to be ligated into a plasmid much more efficiently. Two adapters can base-pair with each other to form dimers.
Types of adapters
A conversion adapter is used to join a DNA insert cut with one restriction enzyme, say EcoRI, to a vector opened with another enzyme, say BamHI. The adapter converts the cohesive end produced by BamHI to one produced by EcoRI, or vice versa.
One application is ligating cDNA into a plasmid or other vector without using the terminal deoxynucleotidyl transferase enzyme to add a poly(A) tail to the cDNA fragment.
NGS (next-generation sequencing) adapters are short (~80 bp) fragments that bind to DNA to aid amplification during library preparation and to attach the DNA to the flow cell during sequencing. These adapters are made up of three parts that flank the DNA sequence of interest: a flow cell binding sequence, a primer binding site, and a tagged barcode region that allows pooled sequencing.
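As an illustration of the three-part layout, a toy sketch that flanks one end of an insert with the adapter parts named above (all sequences are hypothetical placeholders, not real platform adapters, and real adapters flank both ends of the insert):

```python
# Hypothetical sequences -- real adapter sequences are platform-specific.
FLOW_CELL_BINDING = "AATGATACGGCGACCA"   # attaches the fragment to the flow cell
PRIMER_SITE = "ACACTCTTTCCCTACACGAC"     # binding site for the sequencing primer
BARCODE = "ACGTACGT"                     # per-sample tag enabling pooled runs

def build_library_fragment(insert: str, barcode: str = BARCODE) -> str:
    """Prefix an insert with the three adapter parts described in the text."""
    adapter = FLOW_CELL_BINDING + PRIMER_SITE + barcode
    return adapter + insert

frag = build_library_fragment("TTGGCCAA")
print(frag)  # adapter followed by the insert of interest
```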
References
Genetic engineering
Clinical Information Access Portal

The Clinical Information Access Portal, commonly referred to as CIAP, is a project of the New South Wales Department of Health that provides online clinical resources for health professionals working within the New South Wales public health system (NSW Health).
Major resources available through CIAP include:
Australian Medicines Handbook
Harrison's Online
Journal databases – Medline, EMBASE, PsycINFO
MD Consult
MIMS Online
Therapeutic Guidelines
Micromedex
BMJ Best Practice
Various full text journals and eBooks
References
External links
CIAP website – password restricted to NSW Health employees
Healthcare in Australia
Health informatics
Centre de Sociologie de l'Innovation

The Centre de Sociologie de l'Innovation (CSI; "Center for the Sociology of Innovation") is a research center at the Mines Paris – PSL, France, and a research unit affiliated to the French National Centre for Scientific Research.
The CSI was created in 1967 and is known for its members' contributions to the field of science and technology studies and to actor–network theory. Prominent past and current members include academics such as Bruno Latour and Michel Callon.
References
External links
Centre de Sociologie de l'Innovation
Science and technology studies
Actor-network theory
Universities and colleges in Paris
Engineering universities and colleges in France
French National Centre for Scientific Research
Educational institutions established in 1967
1967 establishments in France
RRS John Biscoe (1956)

The RRS John Biscoe was a supply and research vessel used by the British Antarctic Survey between 1956 and 1991.
History
An earlier vessel of the same name operated from 1947 to 1956. Both were named after the English explorer John Biscoe, who discovered parts of Antarctica in the early 1830s.
John Biscoe II was replaced by a new vessel in 1991. After decommissioning, she was sold and eventually scrapped in 2004 under the name Fayza Express.
Command
Biscoe's first visit to Halley Research Station, in 1959/60, was under the veteran captain Bill Johnston.
From 1975, the joint masters of John Biscoe were Malcolm Phelps and Chris Elliott. Elliott had joined BAS as Third Officer on John Biscoe in 1967, becoming Second Officer in 1970. He established the successful Offshore Biological Programme cruises and helped superintend the building of her replacement. Elliott was awarded the Polar Medal in 2004 and an MBE in 2005. The sea passage between Adelaide Island and Jenny Island is named after him.
Footnotes
External links
Newsreel footage of a resupply voyage by the John Biscoe, 1964
History of Antarctica
Hydrography
Oceanographic instrumentation
Research vessels of the United Kingdom
1956 ships
British Antarctic Survey
Salem Community College

Salem Community College (SCC) is a public community college in Salem County in the U.S. state of New Jersey. Salem Community College's main campus is in Carneys Point Township. SCC is authorized to grant associate degrees, including the Associate in Arts, Associate in Fine Arts, Associate in Science, and Associate in Applied Science, as well as certificates. SCC also offers the only degree program in the US for scientific glassblowing.
Salem Community College was founded as Salem County Technical Institute in 1958. Recognizing the college-level caliber of the institute's programs, the Salem County Board of Chosen Freeholders requested approval to grant degree-awarding authority to the institute. The New Jersey Commission on Higher Education evaluated the institute's programs and granted the requested approval. On September 3, 1972, Salem Community College was established. It is accredited by the Middle States Commission on Higher Education.
Notable alumni
Evan Edinger (born 1990), American-born YouTuber based in London
Paul Joseph Stankard (born 1943), glass artist and flameworker
See also
New Jersey County Colleges
Lampworking
References
External links
Official website
- Main campus
- Salem Center
1958 establishments in New Jersey
Carneys Point Township, New Jersey
Universities and colleges established in 1958
Garden State Athletic Conference
Glassmaking schools
New Jersey County Colleges
NJCAA athletics
Two-year colleges in the United States
Universities and colleges in Salem County, New Jersey
Schofield equation

The Schofield equation, published in 1985, is a method of estimating the basal metabolic rate (BMR) of adult men and women.
This is the equation used by the WHO in their technical report series. The equation that is recommended to estimate BMR by the US Academy of Nutrition and Dietetics is the Mifflin-St. Jeor equation.
The equations for estimating BMR in kJ/day (kilojoules per day) from body mass (kg) are:
Men:
Women:
The equations for estimating BMR in kcal/day (kilocalories per day) from body mass (kg) are:
Men:
Women:
Key:
W = Body weight in kilograms
SEE = Standard error of estimation
The raw figure obtained from the equation should be adjusted up or down, within the confidence limits suggested by the quoted estimation errors, according to the following principles:
Subjects leaner and more muscular than usual require more energy than the average.
Obese subjects require less.
Patients at the young end of the age range for a given equation require more energy.
Patients at the high end of the age range for a given equation require less energy.
Effects of age and body mass may cancel out: an obese 30-year-old or an athletic 60-year-old may need no adjustment from the raw figure.
Physical activity levels
To find total body energy expenditure (the actual energy needed per day), the basal metabolic rate must be multiplied by a physical activity level (PAL) factor.
The FAO/WHO uses different PALs in its recommendations for calculating TEE; see Table 5.3 of its working document, Energy Requirements of Adults: Report of a Joint FAO/WHO/UNU Expert Consultation.
These equations were published in 1989 in the dietary guidelines and formed the basis of the RDAs for a number of years. The activity factor used by the USDA was 1.6; in the UK, a lower activity factor of 1.4 is used. The equation was replaced in the US by the Institute of Medicine equation in September 2002, but it is still used by the FAO/WHO/UNU.
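The PAL adjustment is a single multiplication; a minimal sketch using the 1.4 (UK) and 1.6 (USDA) activity factors mentioned above and an illustrative BMR value:

```python
def total_energy_expenditure(bmr_kcal_per_day: float, pal: float) -> float:
    """TEE = BMR x physical activity level (PAL) factor."""
    return bmr_kcal_per_day * pal

bmr = 1600.0                                # hypothetical BMR estimate, kcal/day
print(total_energy_expenditure(bmr, 1.4))   # UK activity factor
print(total_energy_expenditure(bmr, 1.6))   # USDA activity factor
```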
See also
Harris–Benedict equation
Institute of Medicine Equation
References
Mass
Nutrition
Obesity
Mathematics in medicine
E-UTRA

E-UTRA is the air interface of the 3rd Generation Partnership Project (3GPP) Long Term Evolution (LTE) upgrade path for mobile networks. It is an acronym for Evolved UMTS Terrestrial Radio Access, also known as Evolved Universal Terrestrial Radio Access in early drafts of the 3GPP LTE specification. E-UTRAN is the combination of E-UTRA, user equipment (UE), and a Node B (E-UTRAN Node B or Evolved Node B, eNodeB).
It is a radio access network (RAN) meant to be a replacement of the Universal Mobile Telecommunications System (UMTS), High-Speed Downlink Packet Access (HSDPA), and High-Speed Uplink Packet Access (HSUPA) technologies specified in 3GPP releases 5 and beyond. Unlike HSPA, LTE's E-UTRA is an entirely new air interface system, unrelated to and incompatible with W-CDMA. It provides higher data rates, lower latency and is optimized for packet data. It uses orthogonal frequency-division multiple access (OFDMA) radio-access for the downlink and single-carrier frequency-division multiple access (SC-FDMA) on the uplink. Trials started in 2008.
Features
EUTRAN has the following features:
Peak download rates of 299.6 Mbit/s for 4×4 antennas, and 150.8 Mbit/s for 2×2 antennas with 20 MHz of spectrum. LTE Advanced supports 8×8 antenna configurations with peak download rates of 2,998.6 Mbit/s in an aggregated 100 MHz channel.
Peak upload rates of 75.4 Mbit/s for a 20 MHz channel in the LTE standard, with up to 1,497.8 Mbit/s in an LTE Advanced 100 MHz carrier.
Low data transfer latencies (sub-5 ms latency for small IP packets in optimal conditions), lower latencies for handover and connection setup time.
Support for terminals moving at up to 350 km/h or 500 km/h depending on the frequency band.
Support for both FDD and TDD duplexes as well as half-duplex FDD with the same radio access technology
Support for all frequency bands currently used by IMT systems by ITU-R.
Flexible bandwidth: 1.4 MHz, 3 MHz, 5 MHz, 10 MHz, 15 MHz and 20 MHz are standardized. By comparison, UMTS uses fixed size 5 MHz chunks of spectrum.
Spectral efficiency 2–5 times greater than that of 3GPP (HSPA) release 6
Support of cell sizes from tens of meters of radius (femto and picocells) up to over 100 km radius macrocells
Simplified architecture: the network side of EUTRAN is composed only of eNodeBs
Support for inter-operation with other systems (e.g., GSM/EDGE, UMTS, CDMA2000, WiMAX, etc.)
Packet-switched radio interface.
Rationale for E-UTRA
Although UMTS, with HSDPA and HSUPA and their evolution, delivers high data transfer rates, wireless data usage is expected to continue increasing significantly over the next few years due to the increased offering of and demand for services and content on the move, and the continued reduction of costs for the final user. This increase is expected to require not only faster networks and radio interfaces but also higher cost-efficiency than is possible through the evolution of the current standards. Thus the 3GPP consortium set the requirements for a new radio interface (EUTRAN) and core network evolution (System Architecture Evolution, SAE) that would fulfill this need.
These improvements in performance allow wireless operators to offer quadruple play services: voice, high-speed interactive applications including large data transfer, and feature-rich IPTV with full mobility.
Starting with the 3GPP Release 8, E-UTRA is designed to provide a single evolution path for the GSM/EDGE, UMTS/HSPA, CDMA2000/EV-DO and TD-SCDMA radio interfaces, providing increases in data speeds, and spectral efficiency, and allowing the provision of more functionality.
Architecture
EUTRAN consists only of eNodeBs on the network side. The eNodeB performs tasks similar to those performed by the nodeBs and RNC (radio network controller) together in UTRAN. The aim of this simplification is to reduce the latency of all radio interface operations. eNodeBs are connected to each other via the X2 interface, and they connect to the packet switched (PS) core network via the S1 interface.
EUTRAN protocol stack
The EUTRAN protocol stack consists of:
Physical layer: Carries all information from the MAC transport channels over the air interface. Takes care of link adaptation (adaptive modulation and coding, AMC), power control, cell search (for initial synchronization and handover purposes) and other measurements (inside the LTE system and between systems) for the RRC layer.
MAC: The MAC sublayer offers a set of logical channels to the RLC sublayer, which it multiplexes into the physical layer transport channels. It also manages HARQ error correction, handles prioritization of the logical channels for the same UE, performs dynamic scheduling between UEs, etc.
RLC: Transports the PDCP's PDUs. It can work in three different modes, depending on the reliability required; depending on the mode, it can provide ARQ error correction, segmentation/concatenation of PDUs, reordering for in-sequence delivery, duplicate detection, etc.
PDCP: For the RRC layer, it provides transport of data with ciphering and integrity protection; for the IP layer, transport of IP packets with ROHC header compression, ciphering and, depending on the RLC mode, in-sequence delivery, duplicate detection and retransmission of its own SDUs during handover.
RRC: Among other things, it takes care of the broadcast of system information related to the access stratum, transport of non-access stratum (NAS) messages, paging, establishment and release of the RRC connection, security key management, handover, UE measurements related to inter-system (inter-RAT) mobility, QoS, etc.
Interfacing layers to the EUTRAN protocol stack:
NAS: The protocol between the UE and the MME on the network side (outside of EUTRAN). Among other things, it performs authentication of the UE and security control, and generates part of the paging messages.
IP
Physical layer (L1) design
E-UTRA uses orthogonal frequency-division multiplexing (OFDM), multiple-input multiple-output (MIMO) antenna technology depending on the terminal category and can also use beamforming for the downlink to support more users, higher data rates and lower processing power required on each handset.
In the uplink, LTE uses both OFDMA and a precoded version of OFDM called Single-Carrier Frequency-Division Multiple Access (SC-FDMA), depending on the channel. This compensates for a drawback of normal OFDM, which has a very high peak-to-average power ratio (PAPR). High PAPR requires more expensive and inefficient power amplifiers with high linearity requirements, which increases the cost of the terminal and drains the battery faster. For the uplink, releases 8 and 9 support multi-user MIMO / spatial division multiple access (SDMA); release 10 also introduces SU-MIMO.
In both OFDM and SC-FDMA transmission modes a cyclic prefix is appended to the transmitted symbols. Two different lengths of the cyclic prefix are available to support different channel spreads due to the cell size and propagation environment. These are a normal cyclic prefix of 4.7 μs, and an extended cyclic prefix of 16.6 μs.
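The two cyclic-prefix durations correspond to whole sample counts at LTE's common reference sampling rate; a sketch assuming the usual 20 MHz numerology (2048-point FFT at 30.72 Msps with 144- and 512-sample prefixes), which is not stated in the text:

```python
SAMPLE_RATE_HZ = 30.72e6   # 2048-point FFT x 15 kHz subcarrier spacing
NORMAL_CP_SAMPLES = 144    # all but the first OFDM symbol of each slot
EXTENDED_CP_SAMPLES = 512

normal_cp_us = NORMAL_CP_SAMPLES / SAMPLE_RATE_HZ * 1e6
extended_cp_us = EXTENDED_CP_SAMPLES / SAMPLE_RATE_HZ * 1e6
print(f"normal: {normal_cp_us:.2f} us, extended: {extended_cp_us:.2f} us")
# -> normal: 4.69 us, extended: 16.67 us (the 4.7/16.6 us figures above)
```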
LTE supports both Frequency-division duplex (FDD) and Time-division duplex (TDD) modes. While FDD makes use of paired spectra for UL and DL transmission separated by a duplex frequency gap, TDD splits one frequency carrier into alternating time periods for transmission from the base station to the terminal and vice versa. Both modes have their own frame structure within LTE and these are aligned with each other meaning that similar hardware can be used in the base stations and terminals to allow for economy of scale. The TDD mode in LTE is aligned with TD-SCDMA as well allowing for coexistence. Single chipsets are available which support both TDD-LTE and FDD-LTE operating modes.
Frames and resource blocks
The LTE transmission is structured in the time domain in radio frames. Each of these radio frames is 10 ms long and consists of 10 sub frames of 1 ms each. For non-Multimedia Broadcast Multicast Service (MBMS) subframes, the OFDMA sub-carrier spacing in the frequency domain is 15 kHz. Twelve of these sub-carriers together allocated during a 0.5 ms timeslot are called a resource block. A LTE terminal can be allocated, in the downlink or uplink, a minimum of 2 resources blocks during 1 subframe (1 ms).
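The subcarrier spacing and resource-block size above fix the downlink grid; a sketch that derives the occupied bandwidth for each standardized channel bandwidth (the resource-block counts per bandwidth are the conventional values, taken here as an assumption):

```python
SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12

# Conventional resource-block counts for the standardized channel bandwidths.
RBS_PER_BANDWIDTH_MHZ = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

for bw_mhz, n_rb in RBS_PER_BANDWIDTH_MHZ.items():
    occupied_mhz = n_rb * SUBCARRIERS_PER_RB * SUBCARRIER_SPACING_HZ / 1e6
    print(f"{bw_mhz} MHz channel: {n_rb} RBs, {occupied_mhz:.2f} MHz occupied")
# e.g. a 20 MHz channel carries 100 RBs = 1200 subcarriers = 18 MHz occupied,
# with the remainder left as guard band.
```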
Encoding
All L1 transport data is encoded using turbo coding and a contention-free quadratic permutation polynomial (QPP) turbo code internal interleaver. L1 HARQ with 8 (FDD) or up to 15 (TDD) processes is used for the downlink, and up to 8 processes for the uplink.
EUTRAN physical channels and signals
Downlink (DL)
In the downlink there are several physical channels:
The Physical Downlink Control Channel (PDCCH) carries, among other things, the downlink allocation information and uplink allocation grants for the terminal/UE.
The Physical Control Format Indicator Channel (PCFICH) is used to signal the CFI (control format indicator).
The Physical Hybrid ARQ Indicator Channel (PHICH) is used to carry the acknowledgements for uplink transmissions.
The Physical Downlink Shared Channel (PDSCH) is used for L1 transport data transmission. Supported modulation formats on the PDSCH are QPSK, 16QAM and 64QAM.
The Physical Multicast Channel (PMCH) is used for broadcast transmission using a single-frequency network.
The Physical Broadcast Channel (PBCH) is used to broadcast the basic system information within the cell.
And the following signals:
The synchronization signals (PSS and SSS) are meant for the UE to discover the LTE cell and do the initial synchronization.
The reference signals (cell specific, MBSFN, and UE specific) are used by the UE to estimate the DL channel.
Positioning reference signals (PRS), added in release 9, are meant to be used by the UE for OTDOA positioning (a type of multilateration).
Uplink (UL)
In the uplink there are three physical channels:
Physical Random Access Channel (PRACH) is used for initial access and when the UE loses its uplink synchronization,
Physical Uplink Shared Channel (PUSCH) carries the L1 UL transport data together with control information. Supported modulation formats on the PUSCH are QPSK, 16QAM and, depending on the user equipment category, 64QAM. PUSCH is the only channel which, because of its greater bandwidth, uses SC-FDMA.
Physical Uplink Control Channel (PUCCH) carries control information. Note that the uplink control information consists only of DL acknowledgements and CQI-related reports, since all the UL coding and allocation parameters are known to the network side and signaled to the UE on the PDCCH.
And the following signals:
Reference signals (RS) used by the eNodeB to estimate the uplink channel to decode the terminal uplink transmission.
Sounding reference signals (SRS) used by the eNodeB to estimate the uplink channel conditions for each user to decide the best uplink scheduling.
User Equipment (UE) categories
3GPP Release 8 defines five LTE user equipment categories depending on maximum peak data rate and MIMO capability support. With 3GPP Release 10, which is referred to as LTE Advanced, three new categories were introduced, followed by four more with Release 11, two more with Release 14, and five more with Release 15.
Note: Maximum data rates shown are for 20 MHz of channel bandwidth. Categories 6 and above include data rates from combining multiple 20 MHz channels. Maximum data rates will be lower if less bandwidth is utilized.
Note: These are L1 transport data rates not including the different protocol layers overhead. Depending on cell bandwidth, cell load (number of simultaneous users), network configuration, the performance of the user equipment used, propagation conditions, etc. practical data rates will vary.
Note: The 3.0 Gbit/s / 1.5 Gbit/s data rate specified as Category 8 is near the peak aggregate data rate for a base station sector. A more realistic maximum data rate for a single user is 1.2 Gbit/s (downlink) and 600 Mbit/s (uplink). Nokia Siemens Networks has demonstrated downlink speeds of 1.4 Gbit/s using 100 MHz of aggregated spectrum.
EUTRAN releases
Like the rest of the 3GPP standard, E-UTRA is structured in releases.
Release 8, frozen in 2008, specified the first LTE standard.
Release 9, frozen in 2009, included some additions to the physical layer, such as dual-layer (MIMO) beamforming transmission and positioning support.
Release 10, frozen in 2011, introduced several LTE Advanced features, such as carrier aggregation, uplink SU-MIMO and relays, aiming at a considerable L1 peak data rate increase.
All LTE releases have so far been designed with backward compatibility in mind. That is, a release 8 compliant terminal will work in a release 10 network, while release 10 terminals are able to use the network's extra functionality.
Frequency bands and channel bandwidths
Deployments by region
Technology demos
In September 2007, NTT Docomo demonstrated E-UTRA data rates of 200 Mbit/s with power consumption below 100 mW during the test.
In April 2008, LG and Nortel demonstrated E-UTRA data rates of 50 Mbit/s while travelling at 110 km/h.
On February 15, 2008, Skyworks Solutions released a front-end module for E-UTRAN.
See also
4G (IMT-Advanced)
List of interface bit rates
LTE
LTE-A
System Architecture Evolution (SAE)
UMTS
WiMAX
References
External links
EARFCN calculator and band reference
S1-AP procedures E-RAB Setup, modify and release
3GPP Long Term Evolution page
LTE 3GPP Encyclopedia
3G Americas - UMTS/HSPA Speeds Up the Wireless Technology Roadmap. 3G Americas Publishes White Paper on 3GPP Release 7 to Release 8. Bellevue, WA, July 10, 2007
LTE (telecommunication)
Mobile telecommunications
Mobile telecommunications standards
Telecommunications infrastructure
TERCOM
Terrain contour matching, or TERCOM, is a navigation system used primarily by cruise missiles. It uses a contour map of the terrain that is compared with measurements made during flight by an on-board radar altimeter. A TERCOM system considerably increases the accuracy of a missile compared with inertial navigation systems (INS). The increased accuracy allows a TERCOM-equipped missile to fly closer to obstacles and at generally lower altitudes, making it harder to detect by ground radar.
Description
Optical contour matching
The Goodyear Aircraft Corporation ATRAN (Automatic Terrain Recognition And Navigation) system for the MGM-13 Mace was the earliest known TERCOM system. In August 1952, Air Materiel Command initiated the mating of the Goodyear ATRAN with the MGM-1 Matador. This mating resulted in a production contract in June 1954. ATRAN was difficult to jam and was not range-limited by line of sight, but its range was restricted by the availability of radar maps. In time, it became possible to construct radar maps from topographic maps.
Preparation of the maps required the route to be flown by an aircraft. A radar on the aircraft was set to a fixed angle and made horizontal scans of the land in front. The timing of the return signal indicated the range to the landform and produced an amplitude modulated (AM) signal. This was sent to a light source and recorded on 35 mm film, advancing the film and taking a picture at indicated times. The film could then be processed and copied for use in multiple missiles.
In the missile, a similar radar produced the same signal. A second system scanned the frames of film against a photocell and produced a similar AM signal. By comparing the points along the scan where the brightness changed rapidly, which could be picked out easily by simple electronics, the system could compare the left-right path of the missile compared with that of the pathfinding aircraft. Errors between the two signals drove corrections in the autopilot needed to bring the missile back onto its programmed flight path.
Altitude matching
Modern TERCOM systems use a different concept, based on measuring the altitude of the ground over which the missile flies with the missile's radar altimeter and comparing those measurements to prerecorded terrain altitude maps stored in the missile's avionics memory. TERCOM "maps" consist of a series of squares of a selected size. Using a smaller number of larger squares saves memory, at the cost of decreased accuracy. A series of such maps are produced, typically from data from radar mapping satellites. When flying over water, contour maps are replaced by magnetic field maps.
As a radar altimeter measures the distance between the missile and the terrain, not the absolute altitude compared to sea level, the important measure in the data is the change in altitude from square to square. The missile's radar altimeter feeds measurements into a small buffer that periodically "gates" the measurements over a period of time and averages them out to produce a single measurement. The series of such numbers held in the buffer produce a strip of measurements similar to those held in the maps. The series of changes in the buffer is then compared with the values in the map, looking for areas where the changes in altitude are identical. This produces a location and direction. The guidance system can then use this information to correct the flight path of the missile.
During the cruise portion of the flight to the target, the accuracy of the system has to be enough only to avoid terrain features. This allows the maps to be a relatively low resolution in these areas. Only the portion of the map for the terminal approach has to be higher resolution, and would normally be encoded at the highest resolutions available to the satellite mapping system.
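The strip-matching idea described above can be illustrated with a toy sketch. This is our own simplified illustration, not a fielded TERCOM algorithm: a buffer of altitude changes is slid along a stored map row and scored by the sum of squared differences, so that only the shape of the terrain, not the absolute altitude, determines the match.

```python
# Toy illustration of TERCOM-style matching: compare a strip of measured
# altitude *changes* against a stored terrain-elevation map row and pick
# the offset where the changes agree best.

def deltas(profile):
    """Change in altitude from square to square."""
    return [b - a for a, b in zip(profile, profile[1:])]

def best_offset(map_row, measured):
    """Slide the measured delta strip along the map's delta strip and
    return the offset with the smallest sum of squared differences."""
    md = deltas(map_row)
    sd = deltas(measured)
    scores = []
    for off in range(len(md) - len(sd) + 1):
        scores.append(sum((m - s) ** 2 for m, s in zip(md[off:], sd)))
    return min(range(len(scores)), key=scores.__getitem__)

# Stored map row (terrain heights above sea level, metres per square).
map_row = [120, 125, 140, 160, 155, 130, 128, 150, 180, 175]
# Radar-altimeter-derived profile: same *shape* as squares 3..7 but with
# a constant offset, since the altimeter measures height above ground,
# not absolute altitude -- only the changes matter.
measured = [h + 37 for h in map_row[3:8]]

print(best_offset(map_row, measured))  # 3
```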
TAINS
Due to the limited amount of memory available in mass storage devices of the 1960s and 70s, and their slow access times, the amount of terrain data that could be stored in a missile-sized package was far too small to encompass the entire flight. Instead, small patches of terrain information were stored and periodically used to update a conventional inertial platform. These systems, combining TERCOM and inertial navigation, are sometimes known as TAINS, for TERCOM-Aided Inertial Navigation System.
Advantages
TERCOM systems have the advantage of offering accuracy that is not based on the length of the flight; an inertial system slowly drifts after a "fix", and its accuracy is lower for longer distances. TERCOM systems receive constant fixes during the flight, and thus do not have any drift. Their absolute accuracy, however, is based on the accuracy of the radar mapping information, which is typically in the range of meters, and the ability of the processor to compare the altimeter data to the map quickly enough as the resolution increases. This generally limits first generation TERCOM systems to targets on the order of hundreds of meters, limiting them to the use of nuclear warheads. Use of conventional warheads requires further accuracy, which in turn demands additional terminal guidance systems.
Disadvantages
The limited data storage and computing systems of the time meant that the entire route had to be pre-planned, including its launch point. If the missile was launched from an unexpected location or flew too far off-course, it would never fly over the features included in the maps, and would become lost. The INS system can help, allowing it to fly to the general area of the first patch, but gross errors simply cannot be corrected. This made early TERCOM-based systems much less flexible than more modern systems like GPS, which can be set to attack any location from any location, and do not require pre-recorded information which means they can be given their targets immediately before launch.
Improvements in computing and memory, combined with the availability of global digital elevation maps, have reduced this problem, as TERCOM data is no longer limited to small patches, and the availability of side-looking radar allows much larger areas of landscape contour data to be acquired for comparison with the stored contour data.
Comparison with other guidance systems
DSMAC, Digital Scene Matching Area Correlator
DSMAC was an early form of AI which could guide missiles in real time by using camera inputs to determine location. DSMAC was used in Tomahawk Block II onward, and proved itself successfully during the first Gulf War. The system worked by comparing camera inputs during flight to maps computed from spy satellite images. The DSMAC AI system computed contrast maps of images, which it then combined in a buffer and then averaged. It then compared the averages to stored maps computed beforehand by a large mainframe computer, which converted spy satellite pictures to simulate what routes and targets would look like from low level. Since the data were not identical and would change by season and from other unexpected changes and visual effects, the DSMAC system within the missiles had to be able to compare and determine if maps were the same, regardless of changes. It could successfully filter out differences in maps and use the remaining map data to determine its location. Due to its ability to visually identify targets instead of simply attacking estimated coordinates, its accuracy exceeded GPS guided weapons during the first Gulf War.
The massive improvements in memory and processing power from the 1950s, when these scene comparison systems were first invented, to the 1980s, when TERCOM was widely deployed, changed the nature of the problem considerably. Modern systems can store numerous images of a target as seen from different directions, and often the imagery can be calculated using image synthesis techniques. Likewise, the complexity of the live imaging systems has been greatly reduced through the introduction of solid-state technologies like CCDs. The combination of these technologies produced the digitized scene-mapping area correlator (DSMAC). DSMAC systems are often combined with TERCOM as a terminal guidance system, allowing point attack with conventional warheads.
MGM-31 Pershing II, SS-12 Scaleboard Temp-SM and OTR-23 Oka used an active radar homing version of DSMAC (digitized correlator unit DCU), which compared radar topographic maps taken by satellites or aircraft with information received from the onboard active radar regarding target topography, for terminal guidance.
Satellite navigation
Yet another way to navigate a cruise missile is by using a satellite positioning system as they are precise and cheap. Unfortunately, they rely on satellites. If the satellites are interfered with (e.g. destroyed) or if the satellite signal is interfered with (e.g. jammed), the satellite navigation system becomes inoperable. Therefore, the GPS/GLONASS/BeiDou/Galileo-based navigation is useful in a conflict with a technologically unsophisticated adversary. On the other hand, to be ready for a conflict with a technologically advanced adversary, one needs missiles equipped with TAINS and DSMAC.
Missiles that employ TERCOM navigation
The cruise missiles that employ a TERCOM system include:
Supersonic Low Altitude Missile project (early version of TERCOM was slated to be used in this never-built missile)
AGM-86B (United States)
AGM-129 ACM (United States)
BGM-109 Tomahawk (some versions, United States)
C-602 anti-ship & land attack cruise missile (China)
Kh-55 Granat NATO reporting name AS-15 Kent (Soviet Union)
Newer Russian cruise missiles, such as Kh-101 and Kh-555 are likely to have TERCOM navigation, but little information is available about these missiles
C-802 or YJ-82 NATO reporting name CSS-N-8 Saccade (China) – it is unclear if this missile employs TERCOM navigation
Hyunmoo III (South Korea)
DH-10 (China)
Babur (Pakistan) land attack cruise missile
Ra'ad (Pakistan) air-launched cruise missile
Naval Strike Missile (anti-ship and land attack missile, Norway)
SOM (missile) (air-launched cruise missile, Turkey)
HongNiao 1/2/3 cruise missiles
9K720 Iskander (short-range ballistic missile and cruise missile variants, Russia)
Storm Shadow cruise missile (UK/France)
See also
Missile guidance
TERPROM
References
External links
"Terrestrial Guidance Methods", Section 16.5.3 of Fundamentals of Naval Weapons Systems
More info at fas.org
Info at aeronautics.ru
Missile guidance
Aircraft instruments
Aerospace engineering
Health ecology
Health ecology (also known as eco-health) is an emerging field that studies the impact of ecosystems on human health. It examines alterations in the biological, physical, social, and economic environments to understand how these changes affect mental and physical human health. Health ecology focuses on a transdisciplinary approach to understanding all the factors which influence an individual's physiological, social, and emotional well-being.
Eco-health studies often involve environmental pollution. Some examples include an increase in asthma rates due to air pollution, or PCB contamination of game fish in the Great Lakes of the United States. However, health ecology is not necessarily tied to environmental pollution. For example, research has shown that habitat fragmentation is the main factor that contributes to increased rates of Lyme disease in human populations.
History
Ecosystem approaches to public health emerged as a defined field of inquiry and application in the 1990s, primarily through global research supported by the International Development Research Centre (IDRC) in Ottawa, Canada (Lebel, 2003). However, this was a resurrection of an approach to health and ecology traced back to Hippocrates in Western societies. It can also be traced back to earlier eras in Eastern societies. The approach remained popular among scientists in earlier centuries. However, it fell out of common practice in the twentieth century, when technical professionalism and expertise were assumed sufficient to manage health and disease. In this relatively brief era, evaluating the adverse impacts of environmental change (both the natural and artificial environment) on human health was assigned to medicine and environmental health.
Integrated approaches to health and ecology re-emerged in the 20th century. These revolutionary movements were built on a foundation laid by earlier scholars, including Hippocrates, Rudolf Virchow, and Louis Pasteur. In the 20th century, Calvin Schwabe coined the term "one medicine," recognizing that human and veterinary medicine share similar biological principles, and are interrelated. This one medicine approach, which had fairly clinical and individualistic connotations, was rebranded to "One Health," to reflect its goals of global human and animal health. Other integrated health approaches include ecological resilience, ecological integrity, and healthy communities.
Eco-health approaches, as currently practiced, are participatory, systems-based approaches to understanding and promoting public health and well-being in the context of social and ecological interactions. These approaches are differentiated from previous public health approaches by a firm grounding in complexity theory and post-normal science (Waltner-Toews, 2004; Waltner-Toews et al., 2008).
After a decade of international conferences in North America and Australia under the more contentious umbrella of "ecosystem health," the first "ecosystem approach to human health" (eco-health) forum was held in Montreal in 2003, followed by conferences and forums in Wisconsin, U.S., and Mérida, Mexico, all with major support from the IDRC. Since then, the International Association for Ecology and Health, and the journal Eco Health, have established the field as a legitimate scholarly and development activity.
Definition
Eco-health studies differ from traditional, single-discipline studies, which focus on one aspect of a complex issue. A traditional epidemiological study may show increasing rates of malaria in a region, but not address the reasons for the increasing rate; an environmental health study may recommend the application of a pesticide in specific amounts in certain areas to reduce spread; an economic analysis may calculate the cost and effectiveness of such a program. Alternatively, an eco-health study combines multiple disciplines, and familiarizes the specialists with the affected community. Through pre-study meetings, the group shares their knowledge and develops common understanding. These pre-study meetings often lead to creative and novel approaches and can lead to a more "socially robust" solution. Eco-health practitioners term this synergy "transdisciplinary" and differentiate it from multidisciplinary studies. Eco-health studies also value the participation of all active groups, including stakeholders and decision-makers. They believe issues of equity (between gender, socioeconomic classes, age, and even species) are essential to completely understand and solve the problem. Jean Lebel (2003) identified transdisciplinarity, participation, and equity as the three pillars of eco-health. The IDRC now defines six principles instead of three pillars: transdisciplinarity, participation, gender and social equity, systems thinking, sustainability, and research-to-action (Charron, 2011).
Examples
A practical example of health ecology is the management of malaria in Mexico. A multidisciplinary approach ended the use of harmful DDT while reducing malaria cases. This study reveals the complex nature of these problems, and the extent to which a successful solution must cross research disciplines. The solution involved creative thinking on the part of many individuals and produced a win-win situation for researchers, businesses, and, most importantly, the community. Although many of the dramatic effects of ecosystem change, and much of the research, are focused on developing countries, the ecosystem of the artificial environment in urban areas of the developed world is also a significant determinant of human health. Obesity, diabetes, asthma, and heart disease are all directly tied to environmental factors. In addition, urban design and planning determine automobile use, available food choices, air pollution levels, and the safety and walkability of the neighborhoods in which people live.
References
Further reading
External links
Conservation Biology
Eco Health
Ecosystem Health (March 1995 – December 2001)
Global Change & Human Health (March 2000 – March 2002)
Network for Ecosystem Sustainability and Health
Wilderness Medical Society Environmental Committee
The COHAB Initiative "Cooperation on Health and Biodiversity"
Millennium Ecosystem Assessment — A UN-led global project to assess the impacts of ecosystem change on human well-being; completed in 2005
Environmental health
Ecology
International sustainable development
Polytropic process
A polytropic process is a thermodynamic process that obeys the relation:
p V^n = C,
where p is the pressure, V is volume, n is the polytropic index, and C is a constant. The polytropic process equation describes expansion and compression processes which include heat transfer.
Particular cases
Some specific values of n correspond to particular cases:
n = 0 for an isobaric process,
n = ∞ for an isochoric process.
In addition, when the ideal gas law applies:
n = 1 for an isothermal process,
n = γ for an isentropic process,
where γ is the ratio of the heat capacity at constant pressure (cp) to the heat capacity at constant volume (cv).
Equivalence between the polytropic coefficient and the ratio of energy transfers
For an ideal gas in a closed system undergoing a slow process with negligible changes in kinetic and potential energy, the process is polytropic, such that
p V^n = C,
where C is a constant, K = δq/δw is the ratio of the specific heat transferred to the specific work done by the gas, γ = cp/cv, and the polytropic coefficient is n = γ − K(γ − 1).
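With the polytropic coefficient written as n = γ − K(γ − 1), where K is the ratio of heat transferred to work done by the gas (the relation assumed for this sketch), the limiting cases can be checked numerically:

```python
# Check the limiting cases of the polytropic index n = gamma - K*(gamma - 1),
# where K is the ratio of heat transferred to work done by the gas.

def polytropic_index(K, gamma):
    return gamma - K * (gamma - 1.0)

gamma = 5.0 / 3.0          # monatomic ideal gas

# K = 0: no heat transfer (adiabatic, hence isentropic here), so n = gamma.
assert polytropic_index(0.0, gamma) == gamma

# K = 1: all work is supplied as heat, so internal energy is constant
# (isothermal for an ideal gas) and n = 1.
assert abs(polytropic_index(1.0, gamma) - 1.0) < 1e-12

# Intermediate K gives an index between the two bounds: 1 < n < gamma.
n = polytropic_index(0.5, gamma)
print(1.0 < n < gamma)  # True
```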
Relationship to ideal processes
For certain values of the polytropic index, the process will be synonymous with other common processes. Some examples of the effects of varying index values are given in the following table.
When the index n is between any two of the former values (0, 1, γ, or ∞), it means that the polytropic curve will cut through (be bounded by) the curves of the two bounding indices.
For an ideal gas, 1 < γ ≤ 5/3, since by Mayer's relation cp = cv + R, so γ = 1 + R/cv > 1, with the monatomic-gas value cv = (3/2)R giving the upper bound γ = 5/3.
Other
A solution to the Lane–Emden equation using a polytropic fluid is known as a polytrope.
See also
Adiabatic process
Compressor
Internal combustion engine
Isentropic process
Isobaric process
Isochoric process
Isothermal process
Polytrope
Quasistatic equilibrium
Thermodynamics
Vapor-compression refrigeration
References
Thermodynamic processes
Caprolactone
ε-Caprolactone or simply caprolactone is a lactone (a cyclic ester) possessing a seven-membered ring. Its name is derived from caproic acid. This colorless liquid is miscible with most organic solvents and water. It was once produced on a large scale as a precursor to caprolactam.
Production and uses
Caprolactone is prepared industrially by Baeyer-Villiger oxidation of cyclohexanone with peracetic acid.
Caprolactone is a monomer used in the production of highly specialised polymers. Ring-opening polymerization, for example, gives polycaprolactone. Another polymer is polyglecaprone, used as suture material in surgery.
Reactions
Although no longer economical, caprolactone was once produced as a precursor to caprolactam. Caprolactone is treated with ammonia at elevated temperatures to give the lactam:
(CH2)5CO2 + NH3 → (CH2)5C(O)NH + H2O
Carbonylation of caprolactone gives, after hydrolysis, pimelic acid. The lactone ring is easily opened with nucleophiles, including alcohols and water, to give polylactones and eventually 6-hydroxyhexanoic acid.
Related compounds
Several other caprolactones are known, including α-, β-, γ-, and δ-caprolactones. All are chiral. (R)-γ-caprolactone is a component of floral scents and of the aromas of some fruits and vegetables, and is also produced by the Khapra beetle as a pheromone. δ-caprolactone is found in heated milk fat.
An ether of caprolactone, hydroxy-terminated caprolactone ether (HTCE), is used as a binder for AP/AN/Al rocket propellant.
Safety
Caprolactone hydrolyses rapidly and the resulting hydroxycarboxylic acid displays unexceptional toxicity, as is common for the other hydroxycarboxylic acids. It is known to cause severe eye irritation. Exposure may result in corneal injury.
References
Epsilon-lactones
Monomers
Oxepanes
Great-circle navigation
Great-circle navigation or orthodromic navigation (related to orthodromic course) is the practice of navigating a vessel (a ship or aircraft) along a great circle. Such routes yield the shortest distance between two points on the globe.
Course
The great circle path may be found using spherical trigonometry; this is the spherical version of the inverse geodetic problem.
If a navigator begins at P1 = (φ1,λ1) and plans to travel the great circle to a point P2 = (φ2,λ2) (see Fig. 1, φ is the latitude, positive northward, and λ is the longitude, positive eastward), the initial and final courses α1 and α2 are given by formulas for solving a spherical triangle:
tan α1 = cos φ2 sin λ12 / (cos φ1 sin φ2 − sin φ1 cos φ2 cos λ12),
tan α2 = cos φ1 sin λ12 / (−cos φ2 sin φ1 + sin φ2 cos φ1 cos λ12),
where λ12 = λ2 − λ1
and the quadrants of α1,α2 are determined by the signs of the numerator and denominator in the tangent formulas (e.g., using the atan2 function).
The central angle between the two points, σ12, is given by
tan σ12 = √[(cos φ1 sin φ2 − sin φ1 cos φ2 cos λ12)² + (cos φ2 sin λ12)²] / (sin φ1 sin φ2 + cos φ1 cos φ2 cos λ12).
(The numerator of this formula contains the quantities that were used to determine tan α1.)
The distance along the great circle will then be s12 = Rσ12, where R is the assumed radius
of the Earth and σ12 is expressed in radians.
Using the mean Earth radius, R = R1 ≈ 6371 km, yields results for
the distance s12 which are within 1% of the geodesic length for the WGS84 ellipsoid; see Geodesics on an ellipsoid for details.
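These inverse-problem formulas translate directly into code. A sketch (plain Python with the standard math module; the function name is ours), using atan2 for the quadrants and checked against the Valparaíso-to-Shanghai example worked later in this article:

```python
import math

def course_and_distance(phi1, lam1, phi2, lam2, R=6371.0):
    """Initial/final course (degrees) and great-circle distance (km) on a
    sphere of radius R, from the spherical-triangle formulas above."""
    p1, p2 = math.radians(phi1), math.radians(phi2)
    dl = math.radians(lam2 - lam1)          # lambda12
    a1 = math.atan2(math.cos(p2) * math.sin(dl),
                    math.cos(p1) * math.sin(p2)
                    - math.sin(p1) * math.cos(p2) * math.cos(dl))
    a2 = math.atan2(math.cos(p1) * math.sin(dl),
                    -math.cos(p2) * math.sin(p1)
                    + math.sin(p2) * math.cos(p1) * math.cos(dl))
    # Central angle sigma12, with the a1 numerator terms reused in hypot.
    s12 = math.atan2(
        math.hypot(math.cos(p1) * math.sin(p2)
                   - math.sin(p1) * math.cos(p2) * math.cos(dl),
                   math.cos(p2) * math.sin(dl)),
        math.sin(p1) * math.sin(p2)
        + math.cos(p1) * math.cos(p2) * math.cos(dl))
    return math.degrees(a1), math.degrees(a2), R * s12

# Valparaiso to Shanghai (the worked example in this article):
a1, a2, d = course_and_distance(-33.0, -71.6, 31.4, 121.8)
print(round(a1, 2), round(a2, 2), round(d))  # -94.41 -78.42 18743
```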
Relation to geocentric coordinate system
Detailed evaluation of the optimum direction is possible if the sea surface is approximated by a sphere surface. The standard computation places the ship at a geodetic latitude φs and geodetic longitude λs, where φs is considered positive if north of the equator, and where λs is considered positive if east of Greenwich. In the geocentric coordinate system centered at the center of the sphere, the Cartesian components are
s = (cos φs cos λs, cos φs sin λs, sin φs)
and the target position is
t = (cos φt cos λt, cos φt sin λt, sin φt).
The North Pole is at
N = (0, 0, 1).
The minimum distance d is the distance along a great circle that runs through s and t. It is calculated in a plane that contains the sphere center and the great circle,
d = R θ,
where θ is the angular distance of two points viewed from the center of the sphere, measured in radians. The cosine of the angle is calculated by the dot product of the two vectors
cos θ = s · t = sin φs sin φt + cos φs cos φt cos(λt − λs).
If the ship steers straight to the North Pole, the travel distance is
d = R (π/2 − φs).
If a ship starts at t and travels straight to the North Pole, the travel distance is
d = R (π/2 − φt).
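The dot-product route to the central angle can be sketched as follows (helper names are ours), reusing the coordinates of the worked example later in the article:

```python
import math

def unit_vector(lat_deg, lon_deg):
    """Geocentric Cartesian unit vector of a point on the unit sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def central_angle(p, q):
    """Angle between two position vectors, via the dot product."""
    dot = sum(a * b for a, b in zip(p, q))
    return math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding

s = unit_vector(-33.0, -71.6)    # ship (Valparaiso, from the example)
t = unit_vector(31.4, 121.8)     # target (Shanghai)
north_pole = (0.0, 0.0, 1.0)

print(round(math.degrees(central_angle(s, t)), 2))   # 168.56
# Straight to the North Pole: theta = pi/2 - phi_s, i.e. 90 - (-33) = 123 deg.
print(round(math.degrees(central_angle(s, north_pole)), 2))  # 123.0
```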
Derivation
The cosine formula of spherical trigonometry yields, for the angle α between the great circles through s that head to the North Pole on one hand and to t on the other hand,
sin φt = sin φs cos θ + cos φs sin θ cos α.
The sine formula yields
sin α / sin(λt − λs) = cos φt / sin θ.
Solving this for sin θ and insertion in the previous formula gives an expression for the tangent of the position angle,
tan α = cos φt sin(λt − λs) / (cos φs sin φt − sin φs cos φt cos(λt − λs)).
Further details
Because the brief derivation gives an angle between 0 and π which does not reveal the sign (west or east of north?), a more explicit derivation is desirable which yields separately the sine and the cosine of α, such that use of the correct branch of the inverse tangent allows producing an angle in the full range (−π, π].
The computation starts from a construction of the great circle between s and t. It lies in the plane that contains the sphere center, s and t, and is constructed by rotating s by the angle θ around an axis ω. The axis is perpendicular to the plane of the great circle and is computed by the normalized vector cross product of the two positions:
ω = (s × t) / sin θ.
A right-handed tilted coordinate system with the center at the center of the sphere is given by the
following three axes: the
axis s, the axis ω × s,
and the axis ω.
A position along the great circle at angular distance θ′ from s is
P(θ′) = s cos θ′ + (ω × s) sin θ′.
The compass direction is given by inserting the two vectors s and t and computing the gradient of the vector P with respect to θ′ at θ′ = 0.
The angle α is given by splitting this direction along two orthogonal directions in the plane tangential to the sphere at the point s. The two directions are given by the partial derivatives of s with respect to φs and with respect to λs, normalized to unit length:
n̂ = (−sin φs cos λs, −sin φs sin λs, cos φs),
ê = (−sin λs, cos λs, 0);
n̂ points north and ê points east at the position s.
The position angle α projects the great-circle direction ω × s
into these two directions,
ω × s = cos α · n̂ + sin α · ê,
where the positive sign means the positive position angles are defined to be north over east. The values of the cosine and sine of α are computed by multiplying this equation on both sides with the two unit vectors,
cos α = n̂ · (ω × s), sin α = ê · (ω × s).
Instead of inserting the convoluted expression of ω, the evaluation may employ that the triple product is invariant under a circular shift
of the arguments:
n̂ · (ω × s) = ω · (s × n̂), ê · (ω × s) = ω · (s × ê).
If atan2 is used to compute the value, one can reduce both expressions by dividing through by the common factor sin θ,
because this value is always positive and that operation does not change signs; then effectively
α = atan2(cos φt sin(λt − λs), cos φs sin φt − sin φs cos φt cos(λt − λs)).
Finding way-points
To find the way-points, that is the positions of selected points on the great circle between
P1 and P2, we first extrapolate the great circle back to its node A, the point
at which the great circle crosses the
equator in the northward direction: let the longitude of this point be λ0 — see Fig. 1. The azimuth at this point, α0, is given by
sin α0 = sin α1 cos φ1.
Let the angular distances along the great circle from A to P1 and P2 be σ01 and σ02 respectively. Then using Napier's rules we have
tan σ01 = tan φ1 / cos α1
(if φ1 = 0 and α1 = ½π, use σ01 = 0).
This gives σ01, whence σ02 = σ01 + σ12.
The longitude at the node is found from
tan λ01 = sin α0 tan σ01, with λ0 = λ1 − λ01.
Finally, calculate the position and azimuth at an arbitrary point, P (see Fig. 2), by the spherical version of the direct geodesic problem. Napier's rules give
tan φ = cos α0 sin σ / √(cos²σ + sin²α0 sin²σ),
tan(λ − λ0) = sin α0 sin σ / cos σ,
tan α = tan α0 / cos σ.
The atan2 function should be used to determine
σ01,
λ, and α.
For example, to find the midpoint of the path, substitute σ = ½(σ01 + σ02); alternatively, to find the point a distance d from the starting point, take σ = σ01 + d/R.
Likewise, the vertex, the point on the great
circle with greatest latitude, is found by substituting σ = +½π.
It may be convenient to parameterize the route in terms of the longitude using
Latitudes at regular intervals of longitude can be found and the resulting positions transferred to the Mercator chart
allowing the great circle to be approximated by a series of rhumb lines. The path determined in this way
gives the great ellipse joining the end points, provided the coordinates
are interpreted as geographic coordinates on the ellipsoid.
These formulas apply to a spherical model of the Earth. They are also used in solving for the great circle on the auxiliary sphere which is a device for finding the shortest path, or geodesic, on an ellipsoid of revolution; see the article on geodesics on an ellipsoid.
Example
Compute the great circle route from Valparaíso,
φ1 = −33°,
λ1 = −71.6°, to
Shanghai,
φ2 = 31.4°,
λ2 = 121.8°.
The formulas for course and distance give
λ12 = −166.6°,
α1 = −94.41°,
α2 = −78.42°, and
σ12 = 168.56°. Taking the earth radius to be
R = 6371 km, the distance is
s12 = 18743 km.
To compute points along the route, first find
α0 = −56.74°,
σ01 = −96.76°,
σ02 = 71.8°,
λ01 = 98.07°, and
λ0 = −169.67°.
Then to compute the midpoint of the route (for example), take
σ = ½(σ01 + σ02) = −12.48°, and solve
for
φ = −6.81°,
λ = −159.18°, and
α = −57.36°.
If the geodesic is computed accurately on the WGS84 ellipsoid, the results
are α1 = −94.82°, α2 = −78.29°, and
s12 = 18752 km. The midpoint of the geodesic is
φ = −7.07°, λ = −159.31°,
α = −57.45°.
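As a cross-check, the figures in this example can be reproduced numerically. The following Python sketch is not part of the source article; it applies the standard spherical-trigonometry relations for the initial course, the distance, the node, and the midpoint, with R = 6371 km as above.

```python
from math import radians, degrees, sin, cos, tan, atan2, asin, sqrt

R = 6371.0  # km, spherical Earth radius used in the text
phi1, lam1 = radians(-33.0), radians(-71.6)   # Valparaiso
phi2, lam2 = radians(31.4), radians(121.8)    # Shanghai

lam12 = lam2 - lam1
# Initial course alpha1 and central angle sigma12 between the endpoints:
alpha1 = atan2(cos(phi2) * sin(lam12),
               cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(lam12))
sigma12 = atan2(sqrt((cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(lam12)) ** 2
                     + (cos(phi2) * sin(lam12)) ** 2),
                sin(phi1) * sin(phi2) + cos(phi1) * cos(phi2) * cos(lam12))
s12 = R * sigma12

# Extrapolate back to the node A, where the great circle crosses the equator:
alpha0 = asin(sin(alpha1) * cos(phi1))        # azimuth at the node
sigma01 = atan2(tan(phi1), cos(alpha1))       # angular distance from A to P1
sigma02 = sigma01 + sigma12
lam01 = atan2(sin(alpha0) * sin(sigma01), cos(sigma01))
lam0 = lam1 - lam01                           # longitude of the node

# Midpoint of the route, at sigma halfway between sigma01 and sigma02:
sigma = (sigma01 + sigma02) / 2
phi = asin(cos(alpha0) * sin(sigma))
lam = lam0 + atan2(sin(alpha0) * sin(sigma), cos(sigma))

print(degrees(alpha1), degrees(sigma12), s12)  # ~ -94.41 deg, 168.56 deg, 18743 km
print(degrees(phi), degrees(lam))              # midpoint ~ (-6.81, -159.18)
```

The atan2 form of each relation keeps every result in the correct quadrant, matching the advice in the text.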
Gnomonic chart
A straight line drawn on a gnomonic chart is a portion of a great circle. When this is transferred to a Mercator chart, it becomes a curve. The positions are transferred at a convenient interval of longitude and this track is plotted on the Mercator chart for navigation.
See also
Compass rose
Great circle
Great-circle distance
Great ellipse
Geodesics on an ellipsoid
Geographical distance
Isoazimuthal
Loxodromic navigation
Map
Portolan map
Marine sandglass
Rhumb line
Spherical trigonometry
Windrose network
Notes
References
External links
Great Circle – from MathWorld. Description, figures, and equations. MathWorld, Wolfram Research, Inc., c. 1999
Great Circle Map Interactive tool for plotting great circle routes on a sphere.
Great Circle Mapper Interactive tool for plotting great circle routes.
Great Circle Calculator deriving (initial) course and distance between two points.
Great Circle Distance Graphical tool for drawing great circles over maps. Also shows distance and azimuth in a table.
Google assistance program for orthodromic navigation
Navigation
Circles
Spherical curves | Great-circle navigation | [
"Mathematics"
] | 1,955 | [
"Circles",
"Pi"
] |
4,151,837 | https://en.wikipedia.org/wiki/Synthetic%20vision%20system | A synthetic vision system (SVS) is a computer-mediated reality system for aerial vehicles, that uses 3D to provide pilots with clear and intuitive means of understanding their flying environment.
Functionality
Synthetic vision provides situational awareness to operators by using terrain, obstacle, geo-political, hydrological and other databases. A typical SVS application uses a set of databases stored on board the aircraft, an image-generator computer, and a display. The navigation solution is obtained through the use of GPS and inertial reference systems.
Highway In The Sky (HITS), or Path-In-The-Sky, is often used to depict the projected path of the aircraft in perspective view. Pilots acquire instantaneous understanding of the current as well as the future state of the aircraft with respect to the terrain, towers, buildings and other environment features.
History
A forerunner to such systems existed in the 1960s, with the debut into U.S. Navy service of the Grumman A-6 Intruder carrier-based medium-attack aircraft. Designed with a side-by-side seating arrangement for the crew, the Intruder featured an advanced navigation/attack system, called the Digital Integrated Attack and Navigation Equipment (DIANE), which linked the aircraft's radar, navigation and air data systems to a digital computer known as the AN/ASQ-61. Information from DIANE was displayed to both the Pilot and Bombardier/Navigator (BN) through cathode ray tube display screens. In particular, one of those screens, the AN/AVA-1 Vertical Display Indicator (VDI), showed the pilot a synthetic view of the world in front of the aircraft and, in Search Radar Terrain Clearance mode (SRTC), depicted the terrain detected by the radar, which was then displayed as coded lines that represented preset range increments. Called 'Contact Analog', this technology allowed the A-6 to be flown at night, in all weather conditions, at low altitude, and through rugged or mountainous terrain without the need for any visual references.
Synthetic vision was developed by NASA and the U.S. Air Force in the late 1970s and 1980s in support of advanced cockpit research, and in 1990s as part of the Aviation Safety Program. Development of the High Speed Civil Transport fueled NASA research in the 1980s and 1990s. In the early 1980s, the USAF recognized the need to improve cockpit situation awareness to support piloting ever more complex aircraft, and pursued SVS (also called pictorial format avionics) as an integrating technology for both crewed and remotely piloted systems.
Simulations and remotely piloted vehicles
In 1979, the FS1 Flight Simulator by Bruce Artwick for the Apple II microcomputer introduced recreational uses of synthetic vision.
NASA used synthetic vision for remotely piloted vehicles (RPVs), such as the High Maneuverability Aerial Testbed, or HiMAT. According to the NASA report, the aircraft was flown by a pilot in a remote cockpit on the ground, with control signals up-linked from the remote cockpit's flight controls to the aircraft and aircraft telemetry downlinked to the remote cockpit displays. The remote cockpit could be configured with either nose-camera video or a 3D synthetic vision display. Synthetic vision was also used for simulations of the HiMAT. Sarrafian reports that the test pilots found the visual display to be comparable to the output of the camera on board the RPV.
The 1986 RC Aerochopper simulation by Ambrosia Microcomputer Products, Inc. used synthetic vision to aid aspiring RC aircraft pilots in learning to fly. The system included joystick flight controls which would connect to an Amiga computer and display. The software included a three-dimensional terrain database for the ground as well as some man-made objects. This database was basic, representing the terrain with relatively small numbers of polygons by today's standards. The program simulated the dynamic three-dimensional position and attitude of the aircraft using the terrain database to create a projected 3D perspective display. The realism of this RPV pilot training display was enhanced by allowing the user to adjust the simulated control system delays and other parameters.
Similar research continued in the U.S. military services and at universities around the world. In 1995–1996, North Carolina State University flew a 17.5%-scale F-18 RPV using Microsoft Flight Simulator to create the three-dimensional projected terrain environment.
In flight
In 2005 a synthetic vision system was installed on a Gulfstream V test aircraft as part of NASA's "Turning Goals Into Reality" program. Much of the experience gained during that program led directly to the introduction of certified SVS on future aircraft. NASA initiated industry involvement in early 2000 with major avionics manufacturers.
Eric Theunissen, a researcher at Delft University of Technology in the Netherlands, contributed to the development of SVS technology.
At the end of 2007 and early 2008, the FAA certified the Gulfstream Synthetic Vision-Primary flight display (SV-PFD) system for the G350/G450 and G500/G550 business jet aircraft, displaying 3D color terrain images from the Honeywell EGPWS data overlaid with the PFD symbology.
It replaces the traditional blue-over-brown artificial horizon.
In 2017, Avidyne Corporation certified Synthetic Vision capability for its air navigation avionics.
Other glass cockpit systems such as the Garmin G1000 and the Rockwell Collins Pro Line Fusion offer synthetic terrain.
Lower-cost, non-certified avionics also offer synthetic vision, such as apps for Android and iPad tablet computers from ForeFlight, Garmin, Air Navigation Pro, and Hilton Software.
Regulations and standards
See also
Aircraft collision avoidance systems
Enhanced flight vision system
External vision system
Instrument landing system
References
External links
Avionics
Augmented reality
Aircraft collision avoidance systems
"Technology"
] | 1,185 | [
"Avionics",
"Aircraft collision avoidance systems",
"Aircraft instruments"
] |
4,152,010 | https://en.wikipedia.org/wiki/OpenAP | OpenAP holds a significant place in the history of open-source Linux distributions for wireless access points. OpenAP was one of the early open-source Linux distributions designed aiming to replace the factory firmware on a range of IEEE 802.11b wireless access points, specifically those based on the Eumitcom WL11000SA-N board.
Significance
A more comprehensive breakdown of OpenAP's significance includes:
OpenAP was one of the earliest initiatives in the early 2000s to provide open-source firmware as an alternative to the proprietary factory firmware shipped with many commercial wireless access points. This initiative paved the way for user customization and enhanced functionality.
OpenAP was introduced to the community by Instant802 Networks, which later became known as Devicescape Software. This introduction marked a crucial step in bringing open-source alternatives to the forefront of the wireless networking world.
One of the key principles of OpenAP was its commitment to open-source principles. The project made its full source code available under the GNU General Public License (GPL). This allowed the community to not only use the firmware but also modify and distribute it according to the principles of open-source software.
OpenAP went beyond providing just firmware; it also offered clear and detailed instructions on how to reprogram the flash memory on the supported devices. This approach ensured that users had a well-documented process for replacing their factory firmware.
To foster collaboration and discussions among enthusiasts and developers, OpenAP established a mailing list where users could exchange ideas, share experiences, and seek help with firmware-related issues.
While OpenAP may have been an early project, its legacy lives on. It laid the foundation for subsequent projects, such as OpenWrt and HyperWRT, which continued the tradition of open-source firmware for wireless access points. These projects expanded the range of supported devices and added new features.
References
External links
http://savannah.nongnu.org/projects/openap/
Wi-Fi
Free routing software
Custom firmware | OpenAP | [
"Technology"
] | 415 | [
"Operating system stubs",
"Computing stubs",
"Wireless networking",
"Wi-Fi"
] |
4,152,068 | https://en.wikipedia.org/wiki/Haplogroup%20JT | Haplogroup JT is a human mitochondrial DNA (mtDNA) haplogroup.
Origin
Haplogroup JT is descended from the macro-haplogroup R. It is the ancestral clade to the mitochondrial haplogroups J and T.
Distribution
JT (predominantly J) was found among the ancient Etruscans. The root level haplogroup JT* has been assigned to an ancient person found at the Colfiorito necropolis in Umbria in central Italy.
The haplogroup has also been found among Iberomaurusian specimens dating from the Epipaleolithic at the Taforalt prehistoric site. One ancient individual carried a haplotype, which correlates with either the JT clade or the haplogroup H subclade H14b1 (1/9; 11%).
Subclades
Tree
This phylogenetic tree of haplogroup JT subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation and subsequent published research.
R2'JT
JT
J
T
Health
Maternally inherited ancient mtDNA variants have a clear impact on the presentation of disease in a modern society. Superhaplogroup JT is an example of reduced risk of Parkinson's disease, and mitochondrial and mtDNA alterations continue to be promising disease biomarkers.
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
References
External links
General
Ian Logan's Mitochondrial DNA Site
Mannis van Oven's Phylotree
JT | Haplogroup JT | [
"Chemistry",
"Biology"
] | 326 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Bioinformatics",
"Bioinformatics stubs"
] |
4,152,321 | https://en.wikipedia.org/wiki/Reliability%20theory%20of%20aging%20and%20longevity | The reliability theory of aging is an attempt to apply the principles of reliability theory to create a mathematical model of senescence. The theory was published in Russian by Leonid A. Gavrilov and Natalia S. Gavrilova as Biologiia prodolzhitelʹnosti zhizni in 1986, and in English translation as The Biology of Life Span: A Quantitative Approach in 1991.
One of the models suggested in the book is based on an analogy with the reliability theory. The underlying hypothesis is based on the previously suggested premise that humans are born in a highly defective state. This is then made worse by environmental and mutational damage; exceptionally high redundancy due to the extremely high number of low-reliability components (e.g., cells) allows the organism to survive for a while.
The theory suggests an explanation of two aging phenomena for higher organisms: the Gompertz law of exponential increase in mortality rates with age and the "late-life mortality plateau" (mortality deceleration compared to the Gompertz law at higher ages).
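A toy redundancy calculation illustrates both phenomena. The sketch below is not the Gavrilovs' exact model and its parameter values are invented: it treats the organism as a parallel system of n identical components, each failing at a constant rate k, so the system dies only when its last component fails. The resulting mortality rate rises steeply with age and then levels off near k at extreme ages.

```python
import math

def system_hazard(x, n, k):
    """Mortality (hazard) rate at age x of a parallel system of n
    components, each with constant failure rate k; the system fails
    only when the last component fails."""
    p = 1 - math.exp(-k * x)       # probability one component has failed by age x
    survival = 1 - p ** n          # system survival function
    density = n * k * math.exp(-k * x) * p ** (n - 1)
    return density / survival

n, k = 10, 0.05
early = system_hazard(1, n, k)
mid = system_hazard(20, n, k)
late = system_hazard(300, n, k)
# Mortality accelerates by many orders of magnitude early in life...
print(mid / early)
# ...but plateaus near the component failure rate k at extreme ages.
print(late, k)
```

This toy model produces early power-law (Weibull) growth rather than a strict Gompertz exponential; the Gavrilovs obtain Gompertz behavior from the assumed initial damage load, but the late-life plateau appears in both.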
The book criticizes a number of hypotheses known at the time, discusses drawbacks of the hypotheses put forth by the authors themselves, and concludes that regardless of the suggested mathematical models, the underlying biological mechanisms remain unknown.
See also
DNA damage theory of aging
References
Systems theory
Reliability engineering
Failure
Survival analysis
Theories of biological ageing | Reliability theory of aging and longevity | [
"Engineering",
"Biology"
] | 286 | [
"Senescence",
"Systems engineering",
"Theories of biological ageing",
"Reliability engineering"
] |
4,152,503 | https://en.wikipedia.org/wiki/Credibility%20theory | Credibility theory is a branch of actuarial mathematics concerned with determining risk premiums. To achieve this, it uses mathematical models in an effort to forecast the (expected) number of insurance claims based on past observations. Technically speaking, the problem is to find the best linear approximation to the mean of the Bayesian predictive density, which is why credibility theory has many results in common with linear filtering as well as Bayesian statistics more broadly.
For example, in group health insurance an insurer is interested in calculating the risk premium, , (i.e. the theoretical expected claims amount) for a particular employer in the coming year. The insurer will likely have an estimate of historical overall claims experience, , as well as a more specific estimate for the employer in question, . Assigning a credibility factor, , to the overall claims experience (and the complement to the employer experience) allows the insurer to get a more accurate estimate of the risk premium in the following manner:
The credibility factor is derived by calculating the maximum likelihood estimate which would minimise the error of estimate. Assuming the variance of and are known quantities taking on the values and respectively, it can be shown that should be equal to:
Therefore, the more uncertainty the estimate has, the lower is its credibility.
Types of Credibility
In Bayesian credibility, we separate each class (B) and assign them a probability (Probability of B). Then we find how likely our experience (A) is within each class (Probability of A given B). Next, we find how likely our experience was over all classes (Probability of A). Finally, we can find the probability of our class given our experience. So going back to each class, we weight each statistic with the probability of the particular class given the experience.
Bühlmann credibility works by looking at the Variance across the population. More specifically, it looks to see how much of the Total Variance is attributed to the Variance of the Expected Values of each class (Variance of the Hypothetical Mean), and how much is attributed to the Expected Variance over all classes (Expected Value of the Process Variance). Say we have a basketball team with a high number of points per game. Sometimes they get 128 and other times they get 130 but always one of the two. Compared to all basketball teams this is a relatively low variance, meaning that they will contribute very little to the Expected Value of the Process Variance. Also, their unusually high point totals greatly increases the variance of the population, meaning that if the league booted them out, they'd have a much more predictable point total for each team (lower variance). So, this team is definitely unique (they contribute greatly to the Variance of the Hypothetical Mean). So we can rate this team's experience with a fairly high credibility. They often/always score a lot (low Expected Value of Process Variance) and not many teams score as much as them (high Variance of Hypothetical Mean).
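The basketball illustration can be made concrete with a small Bühlmann-style calculation. All scores below are invented, and the simple variance of the class means is used for the Variance of the Hypothetical Means rather than a bias-corrected estimator:

```python
from statistics import mean, variance

teams = {
    "A": [128, 130, 128, 130],   # the consistent high scorer from the text
    "B": [90, 110, 95, 105],
    "C": [80, 120, 100, 100],
}

# Expected Value of the Process Variance: average within-team variance.
epv = mean(variance(scores) for scores in teams.values())
# Variance of the Hypothetical Means: variance of the team averages.
vhm = variance([mean(scores) for scores in teams.values()])

n = 4                        # observations per team
z = n / (n + epv / vhm)      # Buhlmann credibility factor
print(round(z, 2))
```

Team A's low within-class variance (small contribution to the EPV) combined with its unusual mean (large contribution to the VHM) yields a credibility factor close to 1, matching the intuition in the text.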
A simple example
Suppose there are two coins in a box. One has heads on both sides and the other is a normal coin with 50:50 likelihood of heads or tails. You need to place a wager on the outcome after one is randomly drawn and flipped.
The probability of heads is .5 * 1 + .5 * .5 = .75. This is because there is a .5 chance of selecting the heads-only coin, which has a 100% chance of heads, and a .5 chance of selecting the fair coin, which has a 50% chance of heads.
Now the same coin is reused and you are asked to bet on the outcome again.
If the first flip was tails, there is a 100% chance you are dealing with a fair coin, so the next flip has a 50% chance of heads and 50% chance of tails.
If the first flip was heads, we must calculate the conditional probability that the chosen coin was heads-only as well as the conditional probability that the coin was fair, after which we can calculate the conditional probability of heads on the next flip. The probability that it came from a heads-only coin given that the first flip was heads is the probability of selecting a heads-only coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 * 1 / .75 = 2/3. The probability that it came from a fair coin given that the first flip was heads is the probability of selecting a fair coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 * .5 / .75 = 1/3. Finally, the conditional probability of heads on the next flip given that the first flip was heads is the conditional probability of a heads-only coin times the probability of heads for a heads-only coin plus the conditional probability of a fair coin times the probability of heads for a fair coin, or 2/3 * 1 + 1/3 * .5 = 5/6 ≈ .8333.
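The arithmetic of this two-coin example can be checked with exact fractions (a sketch of the worked example above, not library code):

```python
from fractions import Fraction

half = Fraction(1, 2)
# Prior: each coin is equally likely; the heads-only coin always shows heads.
p_heads_first = half * 1 + half * half                 # 3/4
# Posterior after observing heads on the first flip (Bayes' theorem):
p_headsonly_given_h = (half * 1) / p_heads_first       # 2/3
p_fair_given_h = (half * half) / p_heads_first         # 1/3
# Predictive probability of heads on the second flip:
p_heads_second = p_headsonly_given_h * 1 + p_fair_given_h * half   # 5/6
print(p_heads_first, p_heads_second)
```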
Actuarial credibility
Actuarial credibility describes an approach used by actuaries to improve statistical estimates. Although the approach can be formulated in either a frequentist or Bayesian statistical setting, the latter is often preferred because of the ease of recognizing more than one source of randomness through both "sampling" and "prior" information. In a typical application, the actuary has an estimate X based on a small set of data, and an estimate M based on a larger but less relevant set of data. The credibility estimate is ZX + (1-Z)M, where Z is a number between 0 and 1 (called the "credibility weight" or "credibility factor") calculated to balance the sampling error of X against the possible lack of relevance (and therefore modeling error) of M.
When an insurance company calculates the premium it will charge, it divides the policy holders into groups. For example, it might divide motorists by age, sex, and type of car; a young man driving a fast car being considered a high risk, and an old woman driving a small car being considered a low risk. The division is made balancing the two requirements that the risks in each group are sufficiently similar and the group sufficiently large that a meaningful statistical analysis of the claims experience can be done to calculate the premium. This compromise means that none of the groups contains only identical risks. The problem is then to devise a way of combining the experience of the group with the experience of the individual risk to calculate the premium better. Credibility theory provides a solution to this problem.
For actuaries, it is important to know credibility theory in order to calculate a premium for a group of insurance contracts. The goal is to set up an experience rating system to determine next year's premium, taking into account not only the individual experience with the group, but also the collective experience.
There are two extreme positions. One is to charge everyone the same premium estimated by the overall mean of the data. This makes sense only if the portfolio is homogeneous, which means that all risk cells have identical mean claims. However, if the portfolio is heterogeneous, it is not a good idea to charge a premium in this way (overcharging "good" people and undercharging "bad" risk people) since the "good" risks will take their business elsewhere, leaving the insurer with only "bad" risks. This is an example of adverse selection.
The other extreme is to charge each group its own average claims as the premium charged to the insured. This method is used if the portfolio is heterogeneous, provided the claims experience is fairly large. To compromise between these two extreme positions, we take the weighted average of the two:
The weight has the following intuitive meaning: it expresses how credible the individual experience of the cell is. If it is high, a larger weight is attached to the cell's own claims experience; in this case it is called a credibility factor, and such a premium is called a credibility premium.
If the group were completely homogeneous then it would be reasonable to set the weight to 0, while if the group were completely heterogeneous then it would be reasonable to set it to 1. Using intermediate values is reasonable to the extent that both individual and group history are useful in inferring future individual behavior.
For example, an actuary has an accident and payroll historical data for a shoe factory suggesting a rate of 3.1 accidents per million dollars of payroll. She has industry statistics (based on all shoe factories) suggesting that the rate is 7.4 accidents per million. With a credibility, Z, of 30%, she would estimate the rate for the factory as 30%(3.1) + 70%(7.4) = 6.1 accidents per million.
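The shoe-factory blend is a direct application of the estimate ZX + (1 − Z)M; a minimal sketch:

```python
def credibility_estimate(x, m, z):
    """Blend the individual estimate x with the collective estimate m
    using credibility weight z in [0, 1]."""
    return z * x + (1 - z) * m

# Rates are accidents per million dollars of payroll, from the text.
rate = credibility_estimate(x=3.1, m=7.4, z=0.30)
print(round(rate, 1))  # 6.1
```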
References
Further reading
Behan, Donald F. (2009) "Statistical Credibility Theory", Southeastern Actuarial Conference, June 18, 2009
Longley-Cook, L.H. (1962) An introduction to credibility theory, PCAS, 49, 194–221.
Whitney, A.W. (1918) The Theory of Experience Rating, Proceedings of the Casualty Actuarial Society, 4, 274-292 (This is one of the original casualty actuarial papers dealing with credibility. It uses Bayesian techniques, although the author uses the now archaic "inverse probability" terminology.)
Venter, Gary G. (2005) "Credibility Theory for Dummies"
Actuarial science
Credit risk | Credibility theory | [
"Mathematics"
] | 1,916 | [
"Applied mathematics",
"Actuarial science"
] |
4,152,892 | https://en.wikipedia.org/wiki/Domain%20engineering | Domain engineering is the entire process of reusing domain knowledge in the production of new software systems. It is a key concept in systematic software reuse and product line engineering. A key idea in systematic software reuse is the domain. Most organizations work in only a few domains. They repeatedly build similar systems within a given domain with variations to meet different customer needs. Rather than building each new system variant from scratch, significant savings may be achieved by reusing portions of previous systems in the domain to build new ones.
The process of identifying domains, bounding them, and discovering commonalities and variabilities among the systems in the domain is called domain analysis. This information is captured in models that are used in the domain implementation phase to create artifacts such as reusable components, a domain-specific language, or application generators that can be used to build new systems in the domain.
In product line engineering as defined by ISO 26550:2015, domain engineering is complemented by application engineering, which takes care of the life cycle of the individual products derived from the product line.
Purpose
Domain engineering is designed to improve the quality of developed software products through reuse of software artifacts. Domain engineering shows that most developed software systems are not new systems but rather variants of other systems within the same field. As a result, through the use of domain engineering, businesses can maximize profits and reduce time-to-market by using the concepts and implementations from prior software systems and applying them to the target system. The reduction in cost is evident even during the implementation phase. One study showed that the use of domain-specific languages allowed code size, in both number of methods and number of symbols, to be reduced by over 50%, and the total number of lines of code to be reduced by nearly 75%.
Domain engineering focuses on capturing knowledge gathered during the software engineering process. By developing reusable artifacts, components can be reused in new software systems at low cost and high quality. Because this applies to all phases of the software development cycle, domain engineering also focuses on the three primary phases: analysis, design, and implementation, paralleling application engineering. This produces not only a set of software implementation components relevant to the domain, but also reusable and configurable requirements and designs.
Given the growth of data on the Web and the growth of the Internet of Things, a domain engineering approach is becoming relevant to other disciplines as well. The emergence of deep chains of Web services highlights that the service concept is relative. Web services developed and operated by one organization can be utilized as part of a platform by another organization. As services may be used in different contexts and hence require different configurations, the design of families of services may benefit from a domain engineering approach.
Phases
Domain engineering, like application engineering, consists of three primary phases: analysis, design, and implementation. However, where software engineering focuses on a single system, domain engineering focuses on a family of systems. A good domain model serves as a reference to resolve ambiguities later in the process, a repository of knowledge about the domain characteristics and definition, and a specification to developers of products which are part of the domain.
Domain analysis
Domain analysis is used to define the domain, collect information about the domain, and produce a domain model. Through the use of feature models (initially conceived as part of the feature-oriented domain analysis method), domain analysis aims to identify the common points in a domain and the varying points in the domain. Through the use of domain analysis, the development of configurable requirements and architectures, rather than static configurations which would be produced by a traditional application engineering approach, is possible.
Domain analysis is significantly different from requirements engineering, and as such, traditional approaches to deriving requirements are ineffective for development of configurable requirements as would be present in a domain model. To effectively apply domain engineering, reuse must be considered in the earlier phases of the software development life cycle. Through the use of selection of features from developed feature models, consideration of reuse of technology is performed very early and can be adequately applied throughout the development process.
Domain analysis is derived primarily from artifacts produced from past experience in the domain. Existing systems, their artifacts (such as design documents, requirement documents and user manuals), standards, and customers are all potential sources of domain analysis input. However, unlike requirements engineering, domain analysis does not solely consist of collection and formalization of information; a creative component exists as well. During the domain analysis process, engineers aim to extend knowledge of the domain beyond what is already known and to categorize the domain into similarities and differences to enhance reconfigurability.
Domain analysis primarily produces a domain model, representing the common and varying properties of systems within the domain. The domain model assists with the creation of architectures and components in a configurable manner by acting as a foundation upon which to design these components. An effective domain model not only includes the varying and consistent features in a domain, but also defines the vocabulary used in the domain and defines concepts, ideas and phenomena, within the system. Feature models decompose concepts into their required and optional features to produce a fully formalized set of configurable requirements.
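A feature model and its configurable requirements can be sketched as a small validity check over feature sets. All feature names and constraints below are invented for illustration; real feature models typically also support alternative groups and richer cross-tree constraints:

```python
# Hypothetical feature model for a family of systems: mandatory features
# capture the domain's common points, optional features its variability.
MANDATORY = {"engine", "storage"}
OPTIONAL = {"reporting", "audit_log", "web_ui"}
# "Requires" constraints: selecting the key feature demands the value too.
REQUIRES = {"audit_log": "storage", "web_ui": "reporting"}

def is_valid(config: set) -> bool:
    """A configuration (one product of the family) is valid when it
    contains every mandatory feature, only known features, and satisfies
    all cross-feature constraints."""
    if not MANDATORY <= config:
        return False
    if not config <= MANDATORY | OPTIONAL:
        return False
    return all(dep in config for feat, dep in REQUIRES.items() if feat in config)

print(is_valid({"engine", "storage", "web_ui", "reporting"}))  # True
print(is_valid({"engine", "web_ui"}))                          # False: missing storage
```

Each valid feature selection corresponds to one fully formalized requirements configuration for a system in the domain.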
Domain design
Domain design takes the domain model produced during the domain analysis phase and aims to produce a generic architecture to which all systems within the domain can conform. In the same way that application engineering uses the functional and non-functional requirements to produce a design, the domain design phase of domain engineering takes the configurable requirements developed during the domain analysis phase and produces a configurable, standardized solution for the family of systems. Domain design aims to produce architectural patterns which solve a problem common across the systems within the domain, despite differing requirement configurations. In addition to the development of patterns during domain design, engineers must also take care to identify the scope of the pattern and the level to which context is relevant to the pattern. Limitation of context is crucial: too much context results in the pattern not being applicable to many systems, and too little context results in the pattern being insufficiently powerful to be useful. A useful pattern must be both frequently recurring and of high quality.
The objective of domain design is to satisfy as many domain requirements as possible while retaining the flexibility offered by the developed feature model. The architecture should be sufficiently flexible to satisfy all of the systems within the domain while rigid enough to provide a solid framework upon which to base the solution.
Domain implementation
Domain implementation is the creation of a process and tools for efficiently generating a customized program in the domain.
Criticism
Domain engineering has been criticized for focusing too much on "engineering-for-reuse" or "engineering-with-reuse" of generic software features rather than concentrating on "engineering-for-use" such that an individual's world-view, language, or context is integrated into the design of software.
See also
Domain-driven design
Product family engineering
References
Sources
Software design
Ontology (information science)
Software development process
Systems engineering
Business analysis | Domain engineering | [
"Engineering"
] | 1,422 | [
"Systems engineering",
"Design",
"Software design"
] |
4,152,952 | https://en.wikipedia.org/wiki/Carotenoid%20oxygenase |
Carotenoid oxygenases are a family of enzymes involved in the cleavage of carotenoids to produce, for example, retinol, commonly known as vitamin A. This family includes an enzyme known as RPE65, which is abundantly expressed in the retinal pigment epithelium, where it catalyzes the formation of 11-cis-retinol from all-trans-retinyl esters.
Carotenoids such as beta-carotene, lycopene, lutein and beta-cryptoxanthin are produced in plants and certain bacteria, algae and fungi, where they function as accessory photosynthetic pigments and as scavengers of oxygen radicals for photoprotection. They are also essential dietary nutrients in animals. Carotenoid oxygenases cleave a variety of carotenoids into a range of biologically important products, including apocarotenoids in plants that function as hormones, pigments, flavours, floral scents and defence compounds, and retinoids in animals that function as vitamins, chromophores for opsins and signalling molecules. Examples of carotenoid oxygenases include:
Beta-carotene 15,15'-monooxygenase (BCO1; ) from animals, which cleaves beta-carotene symmetrically at the central double bond to yield two molecules of retinal.
Beta-carotene-9',10'-dioxygenase (BCO2) from animals, which cleaves beta-carotene asymmetrically to apo-10'-beta-carotenal and beta-ionone, the latter being converted to retinoic acid. Lycopene is also oxidatively cleaved.
9-cis-epoxycarotenoid dioxygenase from plants, which cleaves 9-cis xanthophylls to xanthoxin, a precursor of the hormone abscisic acid. Yellow skin, a common phenotype in domestic chicken, results from the accumulation of carotenoids in the skin due to absence of the beta-carotene dioxygenase 2 (BCDO2, also known as BCO2) enzyme; expression of the BCO2 gene is inhibited by a regulatory mutation.
Apocarotenoid-15,15'-oxygenase from bacteria and cyanobacteria, which converts beta-apocarotenals rather than beta-carotene into retinal. This protein has a seven-bladed beta-propeller structure.
Retinal pigment epithelium 65 kDa protein (RPE65) from vertebrates which is important for the production of 11-cis retinal during visual opsin regeneration.
Members of the family use an iron(II) active center, usually held by four histidines.
Human proteins containing this domain
BCO2; BCO1; RPE65;
References
Further reading
Bioindicators
Carotenoids
Enzymes
Protein domains
Protein families
Peripheral membrane proteins | Carotenoid oxygenase | [
"Chemistry",
"Biology",
"Environmental_science"
] | 636 | [
"Biomarkers",
"Bioindicators",
"Environmental chemistry",
"Protein classification",
"Carotenoids",
"Protein domains",
"Protein families"
] |
4,152,953 | https://en.wikipedia.org/wiki/NlaIII | NlaIII is a type II restriction enzyme isolated from Neisseria lactamica. As part of the restriction modification system, NlaIII is able to prevent foreign DNA from integrating into the host genome by cutting double stranded DNA into fragments at specific sequences. This results in further degradation of the fragmented foreign DNA and prevents it from infecting the host genome.
NlaIII recognizes the palindromic and complementary DNA sequence of CATG/GTAC and cuts outside of the G-C base pairs. This cutting pattern results in sticky ends with GTAC overhangs at the 3' end.
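The cutting behaviour described above can be sketched as a simplified single-strand digest. Real digests act on double-stranded DNA and produce the 4-base sticky ends described; this illustrative function only splits one strand immediately after each CATG site:

```python
# Simplified single-strand sketch of an NlaIII digest: locate every CATG
# recognition site and cut just past it (the enzyme cleaves outside the
# recognition sequence, leaving 4-base sticky ends on real duplex DNA).
def nlaiii_fragments(seq: str) -> list[str]:
    site = "CATG"
    fragments, start = [], 0
    i = seq.find(site)
    while i != -1:
        cut = i + len(site)          # cut point immediately after the site
        fragments.append(seq[start:cut])
        start = cut
        i = seq.find(site, cut)
    fragments.append(seq[start:])    # trailing fragment after the last site
    return [f for f in fragments if f]

print(nlaiii_fragments("AACATGTTTCATGGG"))  # ['AACATG', 'TTTCATG', 'GG']
```

Note the fragments concatenate back to the input, as expected for a restriction digest.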
Characteristics
NlaIII from N. lactamica comprises two key components: a methylase and an endonuclease. The methylase is critical to recognition, while the endonuclease performs the cutting. The gene (NlaIIIR) is 693 bp long and encodes the 5’-CATG-3’-specific endonuclease. A homolog of NlaIIIR is iceA1 from Helicobacter pylori; in H. pylori, a similar methylase gene called hpyIM lies downstream of iceA1. ICEA1 is an endonuclease that also recognizes the 5’-CATG-3’ sequence, making IceA1 in H. pylori analogous to NlaIII in N. lactamica.
NlaIII contains an ICEA protein domain spanning the 4 to 225 amino acid region; H. pylori contains the same protein. H. pylori infection often leads to gastrointestinal conditions such as peptic ulcers, gastric adenocarcinoma and lymphoma. Researchers speculate that ICEA proteins may serve as potential markers for gastric cancer.
Isoschizomers
NlaIII isoschizomers recognize and cut the same recognition sequence 5’-CATG-3’. Endonucleases that cut at this sequence include:
FaeI
FatI
Hin1II
Hsp92II
CviAII
IceA1
Applications
NlaIII can be used in many different experimental procedures such as:
Serial analysis of gene expression
Molecular cloning
Restriction site mapping
Genotyping
Southern blotting
Restriction fragment length polymorphism (RFLP) analysis
References
Restriction enzymes
Bacterial enzymes | NlaIII | [
"Biology"
] | 480 | [
"Genetics techniques",
"Restriction enzymes"
] |
13,388,688 | https://en.wikipedia.org/wiki/Metitepine | Metitepine (; developmental code names Ro 8-6837 (maleate), VUFB-6276 (mesylate)), also known as methiothepin, is a drug described as a "psychotropic agent" of the tricyclic or tetracyclic group which was never marketed.
It acts as a non-selective antagonist of serotonin, dopamine, and adrenergic receptors, including the serotonin 5-HT1, 5-HT2, 5-HT5, 5-HT6, and 5-HT7 receptors. The drug has antipsychotic properties.
Pharmacology
Pharmacodynamics
Chemistry
Analogues
Metitepine is closely structurally related to certain other tetracyclic compounds including amoxapine, batelapine, clorotepine, clotiapine, clozapine, flumezapine, fluperlapine, loxapine, metiapine, olanzapine, oxyprothepin, perathiepin, perlapine, quetiapine, tampramine, and tenilapine.
Synthesis
The reduction of 2-(4-methylsulfanylphenyl)sulfanylbenzoic acid, CID:2733664 (1) gives [2-(4-methylsulfanylphenyl)sulfanylphenyl]methanol, CID:12853582 (2). Halogenation with thionyl chloride gives 1-(chloromethyl)-2-(4-methylsulfanylphenyl)sulfanylbenzene, CID:12853583 (3). Functional group interconversion with cyanide gives 2-[2-(4-methylsulfanylphenyl)sulfanylphenyl]acetonitrile, CID:12853584 (4). Alkali hydrolysis of the nitrile gives 2-[2-(4-methylsulfanylphenyl)sulfanylphenyl]acetic acid, CID:12383832 (5). Cyclization with polyphosphoric acid gives 3-methylsulfanyl-6H-benzo[b][1]benzothiepin-5-one, CID:827052 (6). Reduction with sodium borohydride gives 3-methylsulfanyl-5,6-dihydrobenzo[b][1]benzothiepin-5-ol, CID:13597048 (7). A second halogenation with thionyl chloride gives 5-chloro-3-methylsulfanyl-5,6-dihydrobenzo[b][1]benzothiepine, CID:12404411 (8). Alkylation with 1-methylpiperazine [109-01-3] completes the synthesis of metitepine (9).
References
External links
5-HT1A antagonists
5-HT1E antagonists
5-HT1F antagonists
5-HT2A antagonists
5-HT6 antagonists
5-HT7 antagonists
Abandoned drugs
Alpha-1 blockers
Antipsychotics
Dibenzothiepines
Dopamine antagonists
4-Methylpiperazin-1-yl compounds
Serotonin receptor antagonists
Thioethers | Metitepine | [
"Chemistry"
] | 724 | [
"Drug safety",
"Abandoned drugs"
] |
13,389,358 | https://en.wikipedia.org/wiki/Mesulergine | Mesulergine () (developmental code name CU-32085) is a drug of the ergoline group which was never marketed. It acts on serotonin and dopamine receptors. Specifically, it is an agonist of dopamine D2-like receptors and serotonin 5-HT6 receptors and an antagonist of serotonin 5-HT2A, 5-HT2B, 5-HT2C, and 5-HT7 receptors.. It also has affinity for the 5-HT1A, 5-HT1B, 5-HT1D, 5-HT1F, and 5-HT5A receptors. The compound had entered clinical trials for the treatment of Parkinson's disease; however, further development was halted due to adverse histological abnormalities in rats. It was also investigated for the treatment of hyperprolactinemia (high prolactin levels).
References
5-HT6 agonists
Abandoned drugs
Dopamine agonists
Ergolines
Prolactin inhibitors
Serotonin receptor antagonists
Sulfamides | Mesulergine | [
"Chemistry"
] | 239 | [
"Drug safety",
"Abandoned drugs"
] |
13,389,455 | https://en.wikipedia.org/wiki/Battenburg%20markings | Battenburg markings or Battenberg markings are a pattern of high-visibility markings developed in the United Kingdom in the 1990s and currently seen on many types of emergency service vehicles in the UK, Crown dependencies, British Overseas Territories and several other European countries including the Czech Republic, Iceland, Sweden, Germany, Romania, Spain, Ireland, and Belgium as well as in Hong Kong and Commonwealth nations including Australia, New Zealand, Pakistan, Trinidad and Tobago, and more recently, Canada. The name comes from its similarity in appearance to the cross-section of a Battenberg cake.
History
Battenburg markings were developed in the mid-1990s in the United Kingdom by the Police Scientific Development Branch (which later became the Home Office Centre for Applied Science and Technology) at the request of the national motorway policing sub-committee of the Association of Chief Police Officers. They were first developed for traffic patrol cars for United Kingdom police forces; private organisations and civil emergency services have also used them since then.
The brief was to design a livery for motorway and trunk road police vehicles that would maximise the vehicles' visibility, from a distance of up to , when stopped either in daylight or under headlights, and which distinctively marked them as police vehicles.
The primary objectives were to design markings that:
Made officers and vehicles more conspicuous (e.g. to prevent collisions when stopped)
Made police vehicles recognisable at a distance of up to in daylight
Assisted in high-visibility policing for public reassurance and deterrence of traffic violations
Made police vehicles nationally recognisable
Were an equal-cost option compared to existing markings
Were acceptable to at least 75% of the staff
Conspicuity
The Battenburg design uses a regular pattern and the contrast between a light and a dark colour to increase conspicuity to the human eye.
The lighter colour is daylight-fluorescent (such as fluorescent-yellow) for better visibility in daytime, dusk and dawn.
For night-time visibility, the complete pattern is retroreflective.
The Battenburg design typically has two rows of alternating rectangles, usually starting with yellow at the top corner, then the alternating colour, along the sides of a vehicle. Most cars use two block rows in the design (so-called full-Battenburg scheme). Some car designs use a single row (so-called half-Battenburg scheme) or one and a half rows.
Unless precautions are taken, pattern markings can have a camouflage effect, concealing a vehicle's outline, particularly in front of a cluttered background.
With Battenburg markings, this can be avoided by:
Making rectangles large enough for optical resolution from distance—at least 600 × 300 mm. A typical car pattern consists of seven blocks along the vehicle side. (An odd number of blocks also allows both top corner blocks to be the same fluorescent colour.)
Clearly marking cars' outlines in fluorescent colour along the roof pillars
Avoiding designs with more than two block rows (even for higher vehicles) by including a large area of plain or daylight-fluorescent colour.
Avoiding hybrid designs of Battenburg markings and other high-visibility patterns or check patterns.
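As an illustrative sketch of the block geometry described above (colour names and grid dimensions are simplified placeholders, not official specifications), a full-Battenburg panel alternates two colours across two rows, and an odd column count puts the fluorescent colour at both top corners:

```python
# Generate a full-Battenburg panel as a grid of colour names.
# Seven columns (odd) makes both top-corner blocks fluorescent yellow,
# matching the typical police car layout described in the text.
def battenburg(rows: int = 2, cols: int = 7) -> list[list[str]]:
    return [["yellow" if (r + c) % 2 == 0 else "blue" for c in range(cols)]
            for r in range(rows)]

panel = battenburg()
print(panel[0][0], panel[0][-1])  # yellow yellow — both top corners fluorescent
```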
The Battenburg livery is not used on the rear of vehicles; upward-facing chevrons of yellow and red are most commonly used there.
Sillitoe tartan
In the development of Battenburg markings, one of the key goals was to clearly identify vehicles associated with police. In this regard, the pattern was reminiscent of the Sillitoe tartan black-and-white or blue-and-white chequered markings first introduced by the City of Glasgow Police in the 1930s, which were subsequently adopted as a symbol of police services throughout the United Kingdom; they are also used by the Chicago Police Department and by police in Australia and New Zealand. (Although Sillitoe patterns identified vehicles associated with police and other emergency services, they were not highly visible.)
After the launch of Battenburg markings, police added retro-reflective Sillitoe tartan markings to their uniforms, usually in blue and white.
Safety
Battenburg side markings and chevron front-and-rear markings provide conspicuity for emergency vehicles, helping to reduce accidents, especially when they are in unusual traffic situations—e.g. stopped in fast-moving traffic, or moving at different speeds or in different directions.
Several criticisms of the Battenburg scheme were stated at the 3rd Annual US Emergency Medical Services (EMS) Safety Summit in October 2010 about their use on ambulances, including:
The difficulty of applying them to small, curved, and oddly-shaped surfaces
The high costs of adopting the markings
The confusing pattern caused when several parked Battenburg vehicles visually overlap
Obscuring the vehicle's shapes against complex backgrounds, or with open doors and hatches
Combinations other than police yellow-and-blue being less effective, and sometimes even making emergency personnel harder to see
Confronting the public with unfamiliar markings
The pattern's use by services other than UK police, and in other countries, was also criticised.
The high-visibility chevrons often used on the rear and front of Battenburg-marked vehicles, "through popular opinion rather than by a scientific process of testing and research", were found ineffective at reducing rear-end collisions. Stationary vehicles on high-speed roads were likely to be noticed, but not the fact that they were stopped. Parking at an angle was found a far more effective way of indicating the vehicles were stopped.
Usage by country
Australia
In Western Australia, St John Ambulance Western Australia uses green-and-yellow markings, while New South Wales Ambulance uses red-and-white Battenburg markings on ambulances and patient transport vehicles. Australian police utilise the similar Sillitoe tartan markings.
Barbados
The Barbados Police Service uses yellow-and-blue half-Battenburg markings on most of their fleet. However, some police vehicles in Barbados use white-and-blue half-Battenburg markings.
Belgium
In response to the terrorist attacks on 13 November 2015 in Paris and 22 March 2016 in Brussels, the Belgian federal government conducted an analysis on the functioning of the emergency services during terrorist attacks. The main issue identified regarding the emergency medical services was that their recognizability (of both vehicles and personnel) had to improve, so that emergency workers would be able to identify qualified medical providers more quickly during an intervention.
An agreement was made between the federal government and the communities and regions to implement the same new vehicle markings and uniforms. Specifically, emergency ambulances and response vehicles would keep the yellow base colour, whilst non-emergency ambulances would get a white base colour. Both types of vehicles would be marked with retroreflective yellow-and-green Battenburg markings, similar to British ambulances.
A new uniform for medical personnel was also introduced, with different colours for the Star of Life for the different types of workers.
Aside from medical vehicles, some new fire brigade, Civil Protection and highway services vehicles also use respectively yellow-and-red, blue-and-orange and yellow-and-black Battenburg markings.
Canada
In Canada, Battenburg markings on law enforcement vehicles are uncommon. However, in recent decades, Canada has slowly integrated some Battenburg markings on EMS vehicles, particularly in Ontario and Quebec.
Battenburg markings are used on plow trucks for transportation and infrastructure in some parts of Canada, primarily on the back to increase visibility and alert people driving on a highway during poor road conditions that there is a plow truck in use and they must slow down. The general colour scheme for a snowplow's rear reflective panel is yellow-green and black.
Ontario
The parts of Ontario that utilize Battenburg markings, which are generally used by EMS vehicles, include the Region of Niagara, Greater Sudbury, Peterborough, Lanark County, and Frontenac County.
Battenburg markings on police vehicles are not a common sight. The first regional police service to ever officially use Battenburg markings on its vehicles was the St. Thomas Police Service when it tested its new police interceptors with Battenburg markings, which were inspired by the UK's Battenburg design with the familiar blue and yellow reflective markings, in order to help enhance visibility within the city.
The Barrie Police Service later took a similar approach to redesigning its vehicle wraps, which was announced on 26 July 2022, when it unveiled a half-Battenburg marked police cruiser as part of a pilot project to evaluate its visibility within the community. This design featured the same blue and yellow reflective markings as those seen in the UK and Europe. As of 12 May 2023, the Barrie Police Service has officially adopted half-Battenburg markings on all of their fleets, eliminating stealthy dark navy body-colored vehicles and replacing them with white instead.
During the autumn of 2023, the Cobourg Police Service (CPS) announced it would be the third police service in Canada to adopt Battenburg markings. A high-visibility Ford Explorer police vehicle with the markings is to be used by the service as part of a pilot project for 24 months.
Quebec
In Quebec, Battenburg-style markings are used on various EMS vehicles, though some of the markings are reminiscent of Sillitoe tartan.
China
Hong Kong
Hong Kong was a British Dependent Territory until 1997. Some emergency vehicles and special vehicles in the Hong Kong Police Force, Hong Kong Fire Services Department, Auxiliary Medical Service, and Hong Kong St. John Ambulance use Battenburg markings.
Czech Republic
All Czech emergency vehicles, such as ambulances, use yellow-and-green Battenburg markings.
Denmark
Danish emergency vehicles can have one of two options: a series of diagonal lines, or a Battenburg pattern. The diagonal lines must be either red-and-white or red-and-yellow, at an angle of 45° ± 5°, and have a width of 100 mm ± 2.5 mm. At the front and rear of the vehicle, the markings must be arranged symmetrically so that traffic is led around the vehicle.
Vehicles may have reflective text in the above colours describing their function; for example ALARM 112, AMBULANCE, or similar text.
The above patterns are not obligatory. For example, the Danish Emergency Management Agency have chosen to simply not have any reflective marking on their vehicles.
Germany
All rescue vehicles in Bavaria procured under the state's uniform procurement scheme since 2017 carry a Battenburg-pattern foil livery. From 2019, the ambulance service in Schleswig-Holstein began adopting the design.
Iceland
In 2018 the Icelandic police started marking new police cars with blue and neon-yellow markings similar to the Battenburg markings used elsewhere in Europe; since then, police cars in the capital region have been made even more visible. In 2020, Icelandic ambulances were changed to look more like ambulances elsewhere in Europe, adopting yellow and green markings. Icelandic Search and Rescue began adopting Battenburg markings in 2016, with red and yellow markings similar to those of the fire services.
Ireland
Ireland's Garda Síochána first introduced blue and yellow Battenburg style markings in 2004 with the formation of the Garda Traffic Corps. This rollout was expanded in 2008 with the formation of Regional Support Units (later renamed to the Garda Armed Support Unit), equipped with Battenberg liveried Volvo XC70s with removable red "ARMED SUPPORT UNIT" lettering; this livery was changed in 2016 with the purchase of new Audi Q7 SUVs and BMW 3 Series estates to include permanent lettering and a red stripe running along both sides of the vehicle. Battenburg markings would be rolled out onto most new Garda vehicles (excluding vans) regardless of their role from 2021 onwards.
Ambulances in Ireland originally had similar striped markings to those in the United Kingdom. The Battenburg green and yellow markings and standard base yellow began to be adopted on Irish ambulances following the formation of the HSE National Ambulance Service in 2005. Notably, the Dublin Fire Brigade's ambulance operations and the Order of Malta Ambulance Corps use the same red and yellow Battenburg markings used on fire appliances.
Malta
Malta's first emergency vehicles with Battenburg style markings, 11 Fiat Ducatos for Mater Dei Hospital, were delivered between 2012 and 2014. Further ambulances supplied new or as second-hand imports from the United Kingdom would be liveried in Battenburg markings.
The Civil Protection Department took delivery of its first fire appliances, Iveco, MAN and Volvo based appliances, with an orange-and-yellow Battenburg-like scheme between 2018 and 2019, with some specialist appliances later built by UK-based EmergencyOne liveried in UK-style yellow and red markings. However, a new livery introduced for new Civil Protection Department fire appliances in 2021 retained the yellow/orange and red colour scheme but dispensed with the Battenburg pattern.
The Malta Police Force first began rolling out Battenburg style markings in 2021 amid investments in new fleet vehicles in line with the force's Transformation Strategy 2020-2025, replacing a silver/grey and black livery. The first new vehicles delivered in the new livery were 20 new Hyundai Tucsons for use as Rapid Intervention Units. The rollout continued in 2022 with the delivery of 12 SsangYong Mussos marked in the livery for use in rural areas, followed in 2024 with deliveries of new traffic police BMW motorcycles and MG5 electric neighbourhood police cars.
New Zealand
The New Zealand Police use yellow-and-blue Battenburg markings on some vehicles. Until October 2008 general duties vehicles were marked in orange and blue, with yellow and blue for highway patrol units; orange and blue was phased out in 2014. Vehicles of New Zealand's St John's Ambulance Service / Wellington Free Ambulance are marked with green-and-yellow Battenburg markings or rows of green-and-yellow half-chevrons. On 1 July 2017, New Zealand's urban and rural firefighting organisations amalgamated into Fire and Emergency New Zealand, with a new brand including Battenburg markings to be rolled out to the fleet.
Pakistan
In Pakistan, the National Highways & Motorways Police use yellow-and-blue Battenburg markings on most of their fleets.
Spain
Though many municipal police forces of the Autonomous communities of Spain, such as Castile and León, Catalonia, Galicia and the Basque Country, have adopted standardised liveries, some autonomous communities give their municipal police greater freedom to choose their vehicle liveries. As a result, municipal police forces of Alcobendas, Alcorcón, Colmenar Viejo and Rivas-Vaciamadrid in the Community of Madrid, the city of Seville, Benacazón and Paradas in the Province of Seville, Algeciras in Andalusía, and Barañáin in Navarre have adopted either blue-and-yellow Battenburg-style markings or a livery based on the markings.
Sweden
Originally Swedish Police vehicles were painted with black roofs and doors or black roofs, bonnet, and boot. During the 1980s the cars became white with the word written on the side. Later the livery became simply blue and white. In 2005 they began using a light blue and fluorescent yellow Battenburg livery. Swedish police cars have been Saabs, Volvos or Volkswagens, with the same livery all over Sweden. Many Swedish road agencies, contractors and consultants use Battenburg markings on road maintenance vehicles, with an orange-and-blue colour scheme, as in the UK rail response type shown above. This practice was established after a study in 2008 by the Swedish Road Administration, which showed a significant traffic calming effect when using orange-and-blue Battenburg marking to improve the visibility of road maintenance vehicles.
Switzerland
The first Swiss ambulance service with Battenburg markings was the emergency medical services in Zofingen. Since 2008, they have used Battenburg markings on their Volkswagen Crafters and Mercedes-Benz Sprinters. They use white-and-red markings on their ALS units.
Another Swiss service with Battenburg markings is the Swiss Border Guard agency, which uses yellow block markings on its vehicles.
Thailand
In Khon Kaen Province of Thailand, Khon Kaen Hospital features yellow-and-green Battenburg markings on its ambulances.
Trinidad and Tobago
The Trinidad and Tobago Police Service (T.T.P.S.) uses yellow-and-blue half-Battenburg reflective markings on some of its vehicles.
United Kingdom
In the United Kingdom, the majority of the emergency services have adopted the Battenburg style of markings; nearly half of all police forces adopted the markings within three years of their introduction, and over three quarters were using it by 2003.
In 2004, following the widespread adoption and recognition of the Battenburg markings on police vehicles, the Home Office recommended that all police vehicles, not just those on traffic duty, use "half-Battenburg" livery, formalising the practice of a number of forces.
In the United Kingdom each emergency service has been allocated a specified darker colour in addition to yellow, with the police continuing to use blue, ambulances using green, and the fire service their traditional red. Other government agencies such as immigration enforcement have adopted a variation, without using the reflective yellow.
The use of these colours in retro-reflective material is controlled by the Road Vehicle Lighting Regulations 1989, with vehicles only legally allowed the use of amber reflective material (and red near the rear of the vehicle). A number of civilian organisations have also adopted the pattern, which is not legally protected, and a number of these also use other reflective colours.
An alternative to the use of reflective materials is the use of fluorescent or other non-reflective markings, which may be used by any vehicle.
United States
Battenburg markings on emergency vehicles are generally uncommon in the United States, though some municipalities have begun using them in recent years.
The Miami Township Police Department in Ohio has previously used ones similar to those found in the UK on their police cars. Battenburg markings are also used in South Carolina's Charleston County for EMS vehicles.
From 2017 to 2021, the Pittsburgh Police used Sillitoe tartan markings on some of their fleets. The design was updated to include black-and-gold Battenburg markings in 2021 to represent the city's official colours. City authorities stated that the markings would also be applied to all future municipal vehicles.
The Chicago Police Department began using Sillitoe tartan markings on their police vehicles in 2018, while the hats of officers have used them since 1967.
Red and yellow Battenburg markings can be seen on most of the ambulances in the City of Chicago for the Chicago Fire Department.
See also
Sillitoe tartan
Aerial roof markings
Blues and twos
Panda car
Jam sandwich (police car)
Notes
References
External links
High Conspicuity Livery for Police Cars 14-04
High Conspicuity Livery for Police Motorcycles 47-06
Emergency vehicles
Vehicle markings
Visibility
British inventions | Battenburg markings | [
"Physics",
"Mathematics"
] | 3,849 | [
"Wikipedia categories named after physical quantities",
"Quantity",
"Physical quantities",
"Visibility"
] |
13,389,754 | https://en.wikipedia.org/wiki/XO-2Nb | XO-2Nb (or rarely XO-2Bb) is an extrasolar planet orbiting the star XO-2N, the fainter component of XO-2 wide binary star in the constellation Lynx. This planet was found by the transit method in 2007 by Burke et al. This was the second such planet found by the XO telescope.
Like most planets found by the transit method, it is a roughly Jupiter-sized planet that orbits very close to its host star; in this case, it has a surface temperature of about 1200 K, so it belongs to the group of exoplanets known as hot Jupiters. The planet takes 2.6 days to orbit the star at an average distance of 0.0369 AU. It has a mass of 57% and a radius of 97% of Jupiter's. The radius is relatively large for its mass, probably because intense heating from the nearby star bloats the planet's atmosphere. The large radius for its mass gives a low density of 820 kg/m3.
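The quoted density follows directly from the mass and radius ratios: scaling Jupiter's mean density (≈1326 kg/m3, a reference value assumed here, not stated in the source) by the mass and radius figures above reproduces it:

```python
# Back-of-envelope check: density scales as mass / radius^3 relative to Jupiter.
JUPITER_DENSITY = 1326.0          # kg/m^3, assumed reference value
mass_ratio, radius_ratio = 0.57, 0.97

density = JUPITER_DENSITY * mass_ratio / radius_ratio ** 3
print(round(density))  # ≈ 828 kg/m^3, consistent with the quoted ~820 kg/m^3
```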
See also
XO Telescope
References
External links
Lynx (constellation)
Hot Jupiters
Transiting exoplanets
Exoplanets discovered in 2007
Giant planets | XO-2Nb | [
"Astronomy"
] | 246 | [
"Lynx (constellation)",
"Constellations"
] |
13,389,799 | https://en.wikipedia.org/wiki/HD%204203 | HD 4203 is a single star in the equatorial constellation of Pisces, near the northern constellation border with Andromeda. It has a yellow hue and is too faint to be viewed with the naked eye, having an apparent visual magnitude of 8.70. The distance to this object is 266 light years based on parallax, but it is drifting closer to the Sun with a radial velocity of −14 km/s.
This object is an ordinary G-type main-sequence star with a stellar classification of G5V. It is a photometrically stable star with an inactive chromosphere, and has a much higher than normal metallicity. The star is roughly 6.3 billion years old and is spinning with a projected rotational velocity of 5.6 km/s. It has 12% more mass than the Sun and a 35% greater radius. HD 4203 is radiating 1.68 times the luminosity of the Sun from its photosphere at an effective temperature of 5,666 K.
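The quoted radius is consistent with the luminosity and temperature via the Stefan–Boltzmann law, L = 4πR²σT⁴. In solar units this reduces to R/R☉ = √(L/L☉) · (T☉/T)²; a quick check (the nominal solar effective temperature of 5772 K is assumed here):

```python
import math

# Stefan-Boltzmann consistency check in solar units:
# R/Rsun = sqrt(L/Lsun) * (Tsun/T)^2
T_SUN = 5772.0           # K, assumed nominal solar effective temperature
L, T = 1.68, 5666.0      # luminosity (Lsun) and Teff quoted in the text

radius = math.sqrt(L) * (T_SUN / T) ** 2
print(round(radius, 2))  # ≈ 1.35 Rsun, matching the "35% greater radius"
```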
Planetary system
Radial velocity observations of this star during 2000–2001 found a variability suggesting an orbiting sub-stellar companion, designated component 'b'. Additional observations led to a refined orbital period of 432 days with a relatively high eccentricity of 0.52 for a gas giant companion. The presence of a second companion was deduced from residuals in the data, then confirmed in 2014. However, the orbital elements for this companion, component 'c', are poorly constrained.
See also
HD 4208
HD 4308
List of extrasolar planets
Notes
External links
G-type main-sequence stars
Planetary systems with two confirmed planets
Pisces (constellation)
BD=+19 117
004203
003502 | HD 4203 | [
"Astronomy"
] | 356 | [
"Pisces (constellation)",
"Constellations"
] |
13,389,844 | https://en.wikipedia.org/wiki/HD%208574 | HD 8574 is a single star in the equatorial constellation of Pisces. It can be viewed with binoculars or a telescope, but not with the naked eye having a low apparent visual magnitude of +7.12. The distance to this object is 146 light years based on parallax, and it has an absolute magnitude of 3.88. The star is drifting further away from the Sun with a radial velocity of +18 km/s. It has a relatively high proper motion, advancing across the celestial sphere at the rate of 0.298 arc seconds per annum.
The star HD 8574 is named Bélénos. The name was selected in the NameExoWorlds campaign by France, during the 100th anniversary of the IAU. Bélénos was the god of light, of the Sun, and of health in Gaulish mythology.
This object is an F-type star with a stellar classification of F8 and unknown luminosity class. The star is five billion years old and is spinning with a projected rotational velocity of 6.6 km/s. It has 1.1 times the mass of the Sun and 1.4 times the Sun's radius. The star is radiating 2.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,065 K.
In 2001, an extrasolar planet in an eccentric orbit was announced by the European Southern Observatory. The discovery was published in 2003. This object has at least double the mass of Jupiter and has an eccentric orbit with a period of .
See also
List of extrasolar planets
References
External links
F-type main-sequence stars
Planetary systems with one confirmed planet
Pisces (constellation)
Durchmusterung objects
008574
006643 | HD 8574 | [
"Astronomy"
] | 362 | [
"Pisces (constellation)",
"Constellations"
] |
13,389,942 | https://en.wikipedia.org/wiki/HD%2023079 | HD 23079 is a star in the southern constellation of Reticulum. Since the star has an apparent visual magnitude of 7.12, it is not visible to the naked eye, but it should be easily visible in binoculars. Parallax measurements give a distance estimate of 109 light years from the Sun. It is slowly drifting further away with a radial velocity of +0.65 km/s.
This object is an inactive F-type main sequence star with a stellar classification of F9.5V; in between F8 and G0. This indicates it is generating energy through core hydrogen fusion. The star is similar to the Sun, but is slightly hotter and more massive. It is about 5.1 billion years old and it is spinning slowly with a projected rotational velocity of 1.3 km/s. The metallicity of this star is below solar, meaning the abundance of elements other than hydrogen and helium is lower than in the Sun.
The star HD 23079 is named Tupi. The name was selected in the NameExoWorlds campaigns by Brazil during the 100th anniversary of the IAU. The star is named after the Tupi people, an indigenous group.
Planetary system
In October 2001, a giant planet orbiting the star was announced. The orbit of this object is similar to that of Mars, and the presence of such a large planet would have a strong impact on an Earth-like planet in the habitable zone of this star. Any Earth-like planet would have to exist either as an exomoon or a Trojan planet of HD 23079 b.
References
External links
F-type main-sequence stars
Planetary systems with one confirmed planet
Reticulum
Durchmusterung objects
023079
017096 | HD 23079 | [
"Astronomy"
] | 358 | [
"Reticulum",
"Constellations"
] |
13,389,989 | https://en.wikipedia.org/wiki/HD%2023596 | HD 23596 is a star with an orbiting exoplanet companion in the constellation Perseus. It has an apparent visual magnitude of 7.25, which is too dim to be viewed with the naked eye. Based on parallax measurements, it is located at a distance of 169 light years from the Sun. The system is drifting closer with a radial velocity of −10 km/s.
The stellar classification of this star is F8, making it an F-type star with an undefined luminosity class. It is 20% more massive than the Sun and has 153% of the Sun's radius. The visual luminosity of the star is 2.63 times greater than the Sun's, which it is radiating from its photosphere at an effective temperature of 5,953 K. It has an estimated age of five billion years, and is spinning with a projected rotational velocity of 3.6 km/s. The star is considered metal-rich, having a higher surface abundance of iron compared to the Sun.
Planetary system
In June 2002, a massive long-period exoplanet orbiting the star was announced. In 2022, the inclination and true mass of HD 23596 b were measured via astrometry. It is orbiting at a distance of from the host star with an orbital period of 4.2 years and an eccentricity (ovalness) of 0.28. This body has a mass around 12 times that of the planet Jupiter.
References
External links
F-type stars
Planetary systems with one confirmed planet
Perseus (constellation)
023596
017747
Durchmusterung objects | HD 23596 | [
"Astronomy"
] | 337 | [
"Perseus (constellation)",
"Constellations"
] |
13,390,057 | https://en.wikipedia.org/wiki/HD%2030177 | HD 30177 is a single star with a pair of orbiting exoplanets in the southern constellation Dorado. Based on parallax measurements, it is located at a distance of 181 light years from the Sun. It has an absolute magnitude of 4.72, but at that distance the star is too faint to be viewed by the naked eye with an apparent visual magnitude of 8.41. The star is drifting further away with a radial velocity of 62.7 km/s.
The spectrum of HD 30177 matches a late G-type main-sequence star with a stellar classification of G8V. It is a yellow dwarf with a mass and radius similar to the Sun that is fusing hydrogen in its core. The chromosphere shows a negligible level of magnetic activity. The abundance of iron, an indicator of the star's metallicity, is more than double the Sun's. It is radiating a similar luminosity to the Sun from its photosphere at an effective temperature of 5,607 K.
A 2024 multiplicity survey, using astrometry from the Gaia spacecraft, identified a proper motion companion to HD 30177. This co-moving companion, a red dwarf star with around 10% of the mass of the Sun, is located 780" from HD 30177 at a position angle of 188°. The angular distance translates to a projected separation of 43,300 astronomical units.
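The quoted 43,300 AU figure follows directly from the small-angle relation: the projected separation in AU is the angular separation in arcseconds times the distance in parsecs. A minimal check using the article's numbers:

```python
LY_PER_PC = 3.2616        # light years per parsec

d_pc = 181 / LY_PER_PC    # 181 ly quoted for HD 30177
sep_au = 780 * d_pc       # 780" quoted angular separation
print(round(sep_au))      # ~43,300 AU, matching the article to rounding
```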
Planetary system
The Anglo-Australian Planet Search team announced the discovery of HD 30177 b, which has a minimum mass 8 times that of Jupiter, on June 13, 2002. The scientific paper describing the discovery was published in The Astrophysical Journal in 2003. A second massive gas giant planet was later discovered in an approximately 32 year orbit. In 2022, the inclination and estimated mass of both planets were measured via astrometry.
See also
List of extrasolar planets
Pi Mensae
References
External links
G-type main-sequence stars
Planetary systems with two confirmed planets
Dorado
Durchmusterung objects
030177
021850 | HD 30177 | [
"Astronomy"
] | 423 | [
"Dorado",
"Constellations"
] |
13,390,153 | https://en.wikipedia.org/wiki/HD%2033283 | HD 33283 is a star in the southern constellation Lepus with one planet and a co-moving stellar companion. With an apparent visual magnitude of 8.05, the star is too faint to be seen with the naked eye. It is located at a distance of 294 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +4.5 km/s.
This is an ordinary G-type main-sequence star with a stellar classification of G3/5V. It is about 3.6 billion years old and is chromospherically inactive. The star is spinning slowly with a projected rotational velocity of 1 km/s and an estimated rotation period of about 55.5 days. It is larger and more massive than the Sun. HD 33283 is radiating over four times the luminosity of the Sun from its photosphere at an effective temperature of 5,985 K.
In 2014, a co-moving red dwarf companion star, HD 33283 B, of spectral class M4–M5 was detected at an angular separation of , corresponding to a projected separation of .
Planetary system
In 2006, J. A. Johnson and associates found a jovian planet orbiting HD 33283 with the radial velocity method. It is orbiting at a distance of from the host star with a period of 18.2 days and an eccentricity (ovalness) of 0.4.
See also
HD 33564
HD 86081
HD 224693
List of extrasolar planets
References
External links
G-type main-sequence stars
M-type main-sequence stars
Planetary systems with one confirmed planet
Double stars
Lepus (constellation)
J05080100-2647509
Durchmusterung objects
033283
023889 | HD 33283 | [
"Astronomy"
] | 367 | [
"Lepus (constellation)",
"Constellations"
] |
13,390,276 | https://en.wikipedia.org/wiki/HD%2033564 | HD 33564 (K Camelopardalis) is a single star with an exoplanetary companion in the northern constellation of Camelopardalis. It has an apparent visual magnitude of 5.08, which means it is a 5th magnitude star that is faintly visible to the naked eye. The system is located at a distance of 68 light years from the Sun based on parallax, and it is drifting closer with a radial velocity of −11 km/s. It is a candidate member of the Ursa Major Moving Group.
This is an ordinary F-type main-sequence star with a stellar classification of F7V, indicating that the star is hotter and more massive than the Sun, giving it a yellow-white hue. The star is about two billion years old and is chromospherically quiet, with a projected rotational velocity of 14.3 km/s. It has about 1.5 times the radius and 1.3 times the mass of the Sun. The star is radiating 3.4 times the luminosity of the Sun from its photosphere at an effective temperature of 6,396 K.
Planetary system
In September 2005, a massive planet was found on an eccentric orbit about the star, based on radial velocity variations measured by the ELODIE spectrograph. An infrared excess had been detected at a wavelength of 60 μm, suggesting the star may host a circumstellar disk. However, the existence of a disk is unlikely because the infrared radiation is coming from a background galaxy.
See also
List of extrasolar planets
References
External links
HR 1686
obswww.unige.ch
CCDM J05227+7913
Image HD 33564
F-type main-sequence stars
Planetary systems with one confirmed planet
Camelopardalis
Durchmusterung objects
0196
033564
025110
1686 | HD 33564 | [
"Astronomy"
] | 386 | [
"Camelopardalis",
"Constellations"
] |
13,390,348 | https://en.wikipedia.org/wiki/HD%2050499 | HD 50499 is a star in the constellation of Puppis. With an apparent visual magnitude of 7.21, this star is too faint to be visible to the naked eye. It is located at a distance of 151 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +36.7 km/s.
This object is a G-type main-sequence star with a stellar classification of G0/2 V. It is positioned 0.6 magnitudes above the main sequence, which may be explained by a high metallicity and an older age. Vogt et al. (2005) estimated its age as about 6.2 billion years, although more recent estimates give a younger age of around 2.4 billion years. The star has 1.31 times the mass of the Sun and 1.42 times the Sun's radius. It is radiating 2.38 times the luminosity of the Sun from its photosphere at an effective temperature of 6,099 K. As of 2019, two exoplanets have been confirmed to be orbiting the star.
Planetary system
The first planet discovered, HD 50499 b, is a gas giant with a mass of 1.7 times that of Jupiter. It has a long orbital period, taking 6.8 years to orbit the star. The planet's eccentric orbit passes through the average distance of .
The planet was discovered by a four-member team including Steven Vogt in 2005 using the radial velocity method, which measures the changes in red- and blue-shifting of the star caused by the gravitational tug of orbiting planets. There is also a linear trend in the radial velocities, which may indicate an additional outer planet. The best two-planet model gives a different period and mass for the inner planet (9.8 years and 3.4 Jupiter masses), with an outer planet of 2.1 Jupiter masses in a 37-year orbit. However, the two-planet model does not represent a significant improvement over the model with one planet and a linear trend, so more observations are needed to constrain the parameters of the outer planet.
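For scale, the reflex wobble such a planet induces on its star can be estimated with the standard radial-velocity semi-amplitude relation for a circular orbit with the planet much lighter than the star; the 28.4 m/s coefficient is the pull Jupiter exerts on the Sun. This is a rough order-of-magnitude sketch using the article's figures, not the published measurement:

```python
def rv_semi_amplitude(m_p_jup, period_yr, m_star_sun):
    """Approximate K in m/s for a circular orbit, minimum-mass planet."""
    return 28.4 * m_p_jup * period_yr ** (-1 / 3) * m_star_sun ** (-2 / 3)

# HD 50499 b: ~1.7 Jupiter masses, 6.8-year period, 1.31 solar-mass star
k = rv_semi_amplitude(1.7, 6.8, 1.31)
print(round(k, 1))   # ~21 m/s, a small but detectable stellar wobble
```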
Rickman et al. (2019) gave an updated model of the planet and its orbit, and confirmed the presence of a second planet, HD 50499 c, with a period of about 24 years.
See also
HD 50554
References
G-type main-sequence stars
Planetary systems with two confirmed planets
Puppis
Durchmusterung objects
050499
032970 | HD 50499 | [
"Astronomy"
] | 515 | [
"Puppis",
"Constellations"
] |
13,390,384 | https://en.wikipedia.org/wiki/Bioenvironmental%20Engineering | Bioenvironmental Engineers (BEEs) within the United States Air Force (USAF) blend the understanding of fundamental engineering principles with a broad preventive medicine mission to identify, evaluate and recommend controls for hazards that could harm USAF Airmen, employees, and their families. The information from these evaluations help BEEs design control measures and make recommendations that prevent illness and injury across multiple specialty areas, to include: Occupational Health, Environmental Health, Radiation Safety, and Emergency Response. BEEs are provided both initial and advanced instruction at the United States Air Force School of Aerospace Medicine at Wright-Patterson Air Force Base in Dayton, Ohio.
History
During the 1970s, the United States Air Force (USAF) saw a need to implement measures to protect the health of their personnel. They took elements of Military Public Health and spun off a separate arm called Bioenvironmental Engineering. From that point on, Bioenvironmental Engineering has taken the lead in protecting the health of USAF workers.
The original group of Bioenvironmental Engineers (BEEs) came to the Air Force from the U.S. Army in 1947 when the Air Force was formed. They were an outgrowth of the U.S. Army Sanitary Corps. Until 1964, Air Force BEEs were called Sanitary and Industrial Hygiene Engineers. They were Medical Service Corps (MSC) officers until the Biomedical Sciences Corps (BSC) was created in 1965.
Between 1960 and 1970, the BEE field grew from around 100 to 150 members. However, beginning in 1970, with the formation of the Occupational Safety and Health Administration (OSHA), the U.S. Environmental Protection Agency (EPA), and the Nuclear Regulatory Commission, the career field experienced an exponential growth in Federal regulations. These laws required BEEs to monitor Air Force operations for their effects on personnel and the environment. Several major catastrophes and other events focused keen Congressional interest on environment, safety and occupational health (ESOH), leading to new, mandatory compliance programs. Love Canal, Bhopal, atmospheric ozone depletion, and other incidents spawned new laws governing the Installation Restoration Program; Hazard Communication; community-right-to-know; Process Safety Management; and hazardous material inventory, control, and reduction. These have continually driven additional, corresponding requirements for BEEs.
In the early 1980s, a major shift in functions occurred. The clinical and sanitary aspects of the BEE program, (communicable disease, sanitary surveys, vector control, and occupational medicine) were turned over to the newly forming environmental health officers. This enabled the BEE force to concentrate its efforts on the industrial work place and the environment.
The importance of ensuring Air Force compliance with ESOH requirements is now higher than ever. Public awareness/concern/disclosure, the recognition of risk analysis/communication/management, loss of sovereign immunity of federal agencies, and the personal liability of commanders for environmental infractions are all impacting BEE surveillance programs. Increased environmental pollution prevention and occupational health preventive medicine programs are shifting the emphasis to avoiding problems before they occur.
Occupational health
Bioenvironmental Engineers conduct health risk assessments (HRAs) in and around workplaces, protecting Airmen and employees from the hazards associated with their duties, very similar in nature to industrial or occupational hygiene. HRAs with recommendations to reduce or eliminate risk are sent to relevant parties for their consideration and to advise them on the impacts and risks to their subordinates and their mission(s). BEEs fundamentally analyze and recommend controls for identified occupational health (OH) risks to include employee exposure to Occupational Safety and Health Administration (OSHA) expanded standard chemicals listed under 29 CFR 1910 (Subpart Z), immediately dangerous to life or health (IDLH) conditions found within confined spaces, and musculoskeletal disorders introduced by ergonomic stresses (such as repetitive motion/vibration/biomechanical stresses). BEEs routinely monitor local exhaust ventilation systems controlling airborne hazards across an installation to limit exposures a worker may receive. In conjunction with ventilation, BEEs also oversee the Respiratory Protection Program associated with each installation; BEEs ensure personnel are trained on the proper wear of an occupationally-required respirator, have a respirator fit test conducted, and know how to properly don/doff their personal protective equipment to protect them from inhalation hazards imposed by their tasks. BEEs are the installation authority regarding hazardous materials and personal protective equipment certification for use on an Air Force Base. Though not required, common OH certifications attained by BEEs include: Certified Industrial Hygienist (CIH) through the Board for Global EHS Credentialing (formerly the American Board of Industrial Hygiene) and Certified Safety Professional (CSP) through the Board of Certified Safety Professionals.
Environmental health
Bioenvironmental Engineers serve as installation liaisons for federal, state, and local organizations regarding drinking water quality and assess for environmental contaminants on Air Force Bases, annually publishing a consumer confidence report to keep the base populace informed on the quality of their drinking water. A frequent concern on Air Force Bases is exposure to occupational noise hazards, as tinnitus is the most prevalent service-connected disability claimed by veterans through the United States Department of Veterans Affairs as of 2020, accounting for ~8% of all disabilities. To address this concern, BEEs routinely conduct noise dosimetry on personnel to identify and isolate excessive noise-producing equipment in the workplace. BEEs also conduct Occupational and Environmental Health Site Assessments (OEHSA) to identify and mitigate risks to personnel from their jobs, duties, and environment on an Air Force Base and its GSUs. Additionally, BEEs assess indoor air quality for airborne dusts, fumes, mists, fogs, vapors, and gases, frequently quantifying through exposure monitoring and documentation of worker exposures. Furthermore, BEEs routinely monitor for Thermal Stress (to include heat stress and cold stress) on an installation and publish flag conditions associated with recommended work-rest cycles and hydration guidelines, allowing supervisors and workers to remain safe.
Radiation safety
Bioenvironmental Engineers typically concurrently serve as Installation Radiation Safety Officers (IRSO) and Installation Laser Safety Officers (LSO) on an Air Force Base and its GSUs, overseeing and authorizing the transport and use of radioactive materials, Nuclear Regulatory Commission (NRC) Permits, ionizing and non-ionizing radiation sources, and lasers. A key component to protecting personnel from radiation is routine exposure monitoring, managed by the BEEs through a thermoluminescent dosimetry (TLD) program that maintains oversight of all radiation worker exposures installation-wide.
Emergency response
Bioenvironmental Engineers serve as emergency responders and health risk advisors for Chemical, Biological, Radiological, and Nuclear hazards, incidents, and their associated personal protective equipment (or clothing). BEEs are also HAZWOPER-certified, providing risk assessments and communication regarding hazardous materials. However, what BEEs are typically known for on an installation is the customer-oriented service they provide in the form of gas mask fit tests. BEEs routinely respond to emergencies alongside Emergency Management.
Significant examples of BEE support
2015 - 2021 - Operation Freedom's Sentinel
2012 - Water Sampling and Safety during Hurricane Sandy
2011 - Radiation Risk Communication and Dosimetry Support for Operation Tomadachi during the Fukushima nuclear disaster
2001 - 2014 - Operation Enduring Freedom
See also
United States Air Force Medical Service
Biomedical Sciences Corps
Exposure action value
Air Force Knowledge Now
Air Force Specialty Code
Occupational hygiene
References
External links
Air Force Careers (Bioenvironmental Engineering Apprentice)
United States Air Force
Military engineering of the United States
Environment and health
Environmental science
Industrial hygiene
Occupational safety and health | Bioenvironmental Engineering | [
"Environmental_science"
] | 1,540 | [
"nan"
] |
13,390,421 | https://en.wikipedia.org/wiki/HD%2050554 | HD 50554 is a single, Sun-like star with an exoplanetary companion in the northern constellation of Gemini. It has an apparent visual magnitude of +6.84, which makes it a 7th magnitude star; it is not visible to the naked eye, but can be viewed with binoculars or a telescope. The system is located at a distance of from the Sun based on parallax, but is drifting closer with a radial velocity of −4 km/s.
This is a yellow-white hued F-type main-sequence star with a stellar classification of F8V. Age estimates put it at around 2–3 billion years old. It has a Sun-like metallicity, a low level of chromospheric activity, and is spinning with a projected rotational velocity of 2.3 km/s. The star has a slightly higher mass and larger radius than the Sun. It is radiating 137% of the luminosity of the Sun from its photosphere at an effective temperature of 6,036 K.
Planetary system
In 2001, a giant planet was announced by the European Southern Observatory, who used the radial velocity method. The discovery was formally published in 2002 using observations from the Lick and Keck telescopes. In 2023, the inclination and true mass of HD 50554 b were determined via astrometry.
An infrared excess indicates a debris disk is orbiting the star at a distance of with a half-width of . This may be an analog of the Kuiper belt at an earlier stage of its evolution, which suggests a Neptune-like planet could be orbiting at its inner edge.
See also
HD 50499
List of extrasolar planets
References
F-type main-sequence stars
Planetary systems with one confirmed planet
Circumstellar disks
Gemini (constellation)
BD+24 1451
050554
033212 | HD 50554 | [
"Astronomy"
] | 375 | [
"Gemini (constellation)",
"Constellations"
] |
13,390,501 | https://en.wikipedia.org/wiki/NGC%202423-3 | NGC 2423-3 is a red giant star approximately 3,040 light-years away in the constellation of Puppis. The star is part of the NGC 2423 open cluster (hence the name NGC 2423-3). The star has an apparent magnitude of 10 and an absolute magnitude of zero, with a mass of 2.4 times that of the Sun. In 2007, it was proposed that an exoplanet orbits the star, but this is now doubtful.
Planetary system
NGC 2423-3 b is an exoplanet at least 10.6 times more massive than Jupiter. Only the minimum mass is known, since the orbital inclination is unknown, so it may instead be a brown dwarf. The planet orbits at 2.1 AU, taking 1.956 years to complete an eccentric orbit around the star. Its eccentricity is about the same as Mercury's, but less than Pluto's. The radial-velocity signal has a semi-amplitude of 71.5 m/s.
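The quoted semi-major axis, period, and stellar mass are mutually consistent with Kepler's third law (P² = a³/M when P is in years, a in AU, and M in solar masses). A quick check:

```python
a_au = 2.1        # semi-major axis from the article, in AU
m_star = 2.4      # stellar mass in solar masses
period_yr = (a_au ** 3 / m_star) ** 0.5
print(round(period_yr, 2))   # ~1.96 yr, close to the quoted 1.956 years
```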
This planet was discovered by Christophe Lovis and Michel Mayor in July 2007 by the radial velocity method. Lovis had also found three Neptune-mass planets orbiting HD 69830 in May 2006, also in Puppis.
However, a 2018 study with the same C. Lovis as an author found evidence that the radial velocity signal corresponding to the proposed planetary companion could be caused by stellar activity or stellar pulsations, and so the planet may not exist. Another study by the same team in 2023 confirms evidence for a stellar origin of the signal.
See also
NGC 4349-127
PSR B1620-26
References
External links
Puppis
M-type giants
BD-13 2130
Hypothetical planetary systems | NGC 2423-3 | [
"Astronomy"
] | 342 | [
"Puppis",
"Constellations"
] |
13,390,561 | https://en.wikipedia.org/wiki/HD%2089307 | HD 89307 is a star in the equatorial constellation of Leo. It is too faint to be viewed with the naked eye except under ideal conditions, having an apparent visual magnitude of 7.02. The star is located at a distance of from the Sun based on parallax, and is drifting further away with a radial velocity of +23 km/s.
This is an ordinary G-type main-sequence star with a stellar classification of G0V. It is chromospherically inactive and appears older than the Sun with a rotation period of 23.7 days. The star has about the same mass as the Sun and is 8% larger. It is radiating 1.35 times the Sun's luminosity from its photosphere at an effective temperature of 5,950 K.
Planetary system
In December 2004, using the radial velocity method, it was found to have a long-period giant planet in orbit around it. The parameters of HD 89307 b were updated in 2012, and in 2023 its inclination and true mass were determined via astrometry.
See also
List of extrasolar planets
List of stars in Leo
References
External links
G-type main-sequence stars
Planetary systems with one confirmed planet
Leo (constellation)
Durchmusterung objects
089307
050473 | HD 89307 | [
"Astronomy"
] | 265 | [
"Leo (constellation)",
"Constellations"
] |
13,390,625 | https://en.wikipedia.org/wiki/HD%20195019 | HD 195019 is a binary star system in the northern constellation of Delphinus. The brighter star has a close orbiting exoplanet companion. This system is located at a distance of 122 light years from the Sun based on parallax measurements, but it is drifting closer with a radial velocity of −91.3 km/s. Although it has an absolute magnitude of 4.01, at that distance the system is considered too faint to be viewed with the naked eye, having a combined apparent visual magnitude of 6.87. However, it should be readily visible with a pair of binoculars or a small telescope.
The spectrum of the primary member, designated component A, presents as a G-type main-sequence star with a stellar classification of G1V. An older stellar classification of G3 V/IV suggested it may be near the end of its main sequence lifespan and is evolving into a subgiant star. This is an older star with an estimated age of nearly 8 billion years and a low level of magnetic activity in its chromosphere. The abundance of iron is near solar. The star has a mass similar to the Sun but a larger radius. It is radiating 2.23 times the luminosity of the Sun from its photosphere at an effective temperature of 5,825 K.
The co-moving companion, component B, was first reported by G. W. Hough in 1881. As of 2016, it is located at an angular separation of along a position angle of 334° relative to the primary. This corresponds to a projected separation of . This is a K-type star with 70% of the mass of the Sun and is magnitude 10.60.
Planetary system
In 1998, a planet was discovered at Lick Observatory using the radial velocity method, orbiting the star HD 195019 A. A search of astrometric observations from Hipparcos suggested this may be a stellar object in a near-polar orbit. However, interferometric observations ruled out a stellar companion in this orbit with high likelihood.
See also
HD 190228
HD 196050
List of exoplanets discovered before 2000 - HD 195019 b
References
G-type main-sequence stars
Binary stars
Planetary systems with one confirmed planet
Delphinus
Durchmusterung objects
195019
100970
J20281860+1846103 | HD 195019 | [
"Astronomy"
] | 474 | [
"Delphinus",
"Constellations"
] |
13,390,753 | https://en.wikipedia.org/wiki/HD%20100777 | HD 100777 is a single star with a planetary companion in the equatorial constellation of Leo. With an apparent visual magnitude of 8.42 it is too faint to be viewed with the naked eye, although the absolute magnitude of 4.81 indicates it could be seen if it were just away. The distance to the star is approximately 162 light years based on parallax measurements.
The International Astronomical Union held the NameExoWorlds campaign in 2019. Nepal named the star Sagarmatha, the Nepali name for Mount Everest, and the exoplanet revolving around it was named Laligurans, the Nepali name of the rhododendron flower.
This is an ordinary G-type main-sequence star with a stellar classification of G8V. It has a similar mass, size, and luminosity to the Sun. The star is roughly five billion years old with an inactive chromosphere and is spinning with a projected rotational velocity of 1.7 km/s. A 2015 survey ruled out the existence of any additional stellar companions at projected distances from 18 to 369 astronomical units.
Planetary system
In 2007, a giant exoplanet companion was found using the radial velocity method. It is orbiting HD 100777 at a distance of with a period of 384 days and an eccentricity (ovalness) of 0.36. The inclination of the orbital plane of this body is unknown, so only a lower limit on the mass can be determined. It has at least 1.16 times the mass of Jupiter.
See also
HD 190647
HD 221287
List of extrasolar planets
List of stars in Leo
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Leo (constellation)
BD-03 3147
100777
056572 | HD 100777 | [
"Astronomy"
] | 372 | [
"Leo (constellation)",
"Constellations"
] |
13,390,863 | https://en.wikipedia.org/wiki/HD%20190647 | HD 190647 is a yellow-hued star with an exoplanetary companion, located in the southern constellation of Sagittarius. It has an apparent visual magnitude of 7.78, making this an 8th magnitude star that is much too faint to be readily visible to the naked eye. The star is located at a distance of 178 light years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −40 km/s. It is also called HIP 99115.
The stellar classification of this star is G5V, matching a G-type main-sequence star. However, the low gravity and high luminosity of this star may indicate it is slightly evolved. It is chromospherically inactive with a slow rotation, having a projected rotational velocity of 1.6 km/s. The star's metallicity is high, with nearly 1.5 times the abundance of iron compared to the Sun.
In 2007, a Jovian planet was found to be orbiting the star. It was detected using the radial velocity method with the HARPS spectrograph in Chile. The object is orbiting at a distance of from the host star with a period of and an eccentricity (ovalness) of 0.18. As the inclination of the orbital plane is unknown, only a lower bound on the planetary mass can be made. It has a minimum mass 1.9 times the mass of Jupiter.
See also
HD 100777
HD 221287
List of extrasolar planets
References
G-type main-sequence stars
G-type subgiants
Planetary systems with one confirmed planet
Sagittarius (constellation)
Durchmusterung objects
190647
099115 | HD 190647 | [
"Astronomy"
] | 351 | [
"Sagittarius (constellation)",
"Constellations"
] |
13,390,918 | https://en.wikipedia.org/wiki/HD%20221287 | HD 221287, named Poerava, is a star in the southern constellation of Tucana. It has a yellow-white hue but is too faint to be viewed with the naked eye, having an apparent visual magnitude of 7.82. This object is located at a distance of 183 light years from the Sun, as determined from its parallax. It is drifting closer with a radial velocity of −22 km/s.
This object is an F-type main-sequence star with a stellar classification of F7V. It is relatively young with age estimates of 763 million and 1.3 billion years, and possesses an active chromosphere. Cool spots on the surface are generating a radial-velocity signal that is modulated by the rotation period of around five days. The star is 18% larger and 20% more massive than the Sun. It is radiating 1.9 times the luminosity of the Sun from its photosphere at an effective temperature of 6,440 K.
Name
The star was given the designation "HD 221287" before being named Poerava by representatives of the Cook Islands in the IAU's 2019 NameExoWorlds contest, with the comment "Poerava is the word in the Cook Islands Maori language for a large mystical black pearl of utter beauty and perfection."
Planetary system
On March 5, 2007, the astronomer Dominique Naef used the HARPS spectrograph to uncover the exoplanetary companion designated HD 221287 b (among others). Using the amplitude from observations with HARPS, he calculated a minimum mass of 3.12 times that of Jupiter, making this a superjovian. This planet orbits 25% further from the star than Earth is from the Sun, with a low eccentricity. In 2024, astrometric measurements revealed that this object might be instead a brown dwarf, with a mass between at 68% confidence, or between at 99.5% confidence.
Stability analysis reveals that the orbits of Earth-sized planets in HD 221287 b's Trojan points, located 60 degrees ahead of and behind the planet in its orbit, would be stable for long periods of time.
See also
HD 100777
HD 164595
HD 190647
References
F-type main-sequence stars
Planetary systems with one confirmed planet
Tucana
Durchmusterung objects
221287
116084
Poerava | HD 221287 | [
"Astronomy"
] | 490 | [
"Tucana",
"Constellations"
] |
13,391,053 | https://en.wikipedia.org/wiki/Neozealandia | Neozealandia is a biogeographic province of the Antarctic Realm according to the classification developed by Miklos Udvardy in 1975.
Concept
Neozealandia consists primarily of the major islands of New Zealand, including North Island and South Island, as well as Chatham Island. The southernmost areas of Neozealandia overlap with the Insulantarctica province, which includes the New Zealand Subantarctic Islands.
Both New Zealand and the New Zealand Subantarctic Islands are remnants of a submerged subcontinent known as Zealandia, which gradually sank beneath the sea after breaking off from the Gondwanan land masses of Antarctica and Australia. Due to this isolation, the Zealandia archipelago has remained virtually free of mammals (except for bats and a few others) and invasive alien species. Because so few mammals and other alien species have colonized the islands of the Neozealandia province over millions of years, the flora and fauna of most of the islands, including those of New Zealand itself, have remained much as they were when the original Gondwana supercontinent existed.
A couple of tuatara species survive in small numbers on small islets adjacent to New Zealand. Also, New Zealand has vestiges of ancient temperate rain forests with plant species, such as giant club mosses, tree ferns and Nothofagus trees, dating from the time when the Zealandia subcontinent split off from Gondwana. New Zealand grasslands are dominated by vast spreadings of tussock grass fed upon by the native ground parrots. Most of New Zealand's few mammals are like those frequenting Antarctic shores.
References
External links
Neozealandia World Heritage Site: Tongariro National Park
Neozealandia World Heritage Site: Te Wahipounamu
Insulantarctica World Heritage Site: New Zealand Subantarctic Islands
Fundamentals of Biogeography and Ecosystems
Biogeography
Environment of New Zealand | Neozealandia | [
"Biology"
] | 404 | [
"Biogeography"
] |
13,392,068 | https://en.wikipedia.org/wiki/Tennenbaum%27s%20theorem | Tennenbaum's theorem, named for Stanley Tennenbaum who presented the theorem in 1959, is a result in mathematical logic that states that no countable nonstandard model of first-order Peano arithmetic (PA) can be recursive (Kaye 1991:153ff).
Recursive structures for PA
A structure M in the language of PA is recursive if there are recursive functions ⊕ and ⊗ from N × N to N, a recursive two-place relation <M on N, and distinguished constants n0 and n1 such that
(N, ⊕, ⊗, <M, n0, n1) ≅ M,
where ≅ indicates isomorphism and N is the set of (standard) natural numbers. Because the isomorphism must be a bijection, every recursive model is countable. There are many nonisomorphic countable nonstandard models of PA.
Statement of the theorem
Tennenbaum's theorem states that no countable nonstandard model of PA is recursive. Moreover, neither the addition nor the multiplication of such a model can be recursive.
Proof sketch
This sketch follows the argument presented by Kaye (1991). The first step in the proof is to show that, if M is any countable nonstandard model of PA, then the standard system of M (defined below) contains at least one nonrecursive set S. The second step is to show that, if either the addition or multiplication operation on M were recursive, then this set S would be recursive, which is a contradiction.
Through the methods used to code ordered tuples, each element x of M can be viewed as a code for a set Sx of elements of M. In particular, if we let pi be the ith prime in M, then i ∈ Sx if and only if M ⊨ pi | x. Each set Sx will be bounded in M, but if x is nonstandard then the set Sx may contain infinitely many standard natural numbers. The standard system of the model is the collection {Sx ∩ N : x ∈ M}. It can be shown that the standard system of any nonstandard model of PA contains a nonrecursive set, either by appealing to the incompleteness theorem or by directly considering a pair of recursively inseparable r.e. sets (Kaye 1991:154). These are disjoint r.e. sets A and B so that there is no recursive set C with A ⊆ C and B ∩ C = ∅.
For the latter construction, begin with a pair of recursively inseparable r.e. sets A and B. For each natural number x there is a y such that, for all i < x, if i ∈ A then pi | y and if i ∈ B then pi ∤ y. By the overspill property, this means that there is some nonstandard x in M for which there is a (necessarily nonstandard) y in M so that, for every i ∈ N with i < x, we have: if i ∈ A then M ⊨ pi | y, and if i ∈ B then M ⊨ pi ∤ y.
Let S = Sy ∩ N be the corresponding set in the standard system of M. Because A and B are r.e., one can show that A ⊆ S and B ∩ S = ∅. Hence S is a separating set for A and B, and by the choice of A and B this means S is nonrecursive.
Now, to prove Tennenbaum's theorem, begin with a nonstandard countable model M and an element a in M so that S = Sa ∩ N is nonrecursive. The proof method shows that, because of the way the standard system is defined, it is possible to compute the characteristic function of the set S using the addition function ⊕ of M as an oracle. In particular, if n0 is the element of M corresponding to 0, and n1 is the element of M corresponding to 1, then for each i ∈ N we can compute ni = n1 ⊕ n1 ⊕ ⋯ ⊕ n1 (i times). To decide if a number n is in S, first compute p, the nth prime in N. Then, search for an element y of M so that
a = y ⊕ y ⊕ ⋯ ⊕ y (p times) ⊕ ni
for some i < p. This search will halt because the Euclidean algorithm can be applied to any model of PA. Finally, we have n ∈ S if and only if the i found in the search was 0. Because S is not recursive, this means that the addition operation on M is nonrecursive.
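The two ingredients of the proof — coding a set by prime divisibility, and recovering membership using only addition and equality — can be illustrated in the ordinary standard model. This is a toy sketch of ours, not the nonstandard construction itself; all function names are invented.

```python
def nth_prime(n):
    """Return the nth prime (1-indexed), by trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

def encode(S):
    """Code a finite set of positive integers as the product of the matching
    primes, mirroring Sx = {i : pi divides x}."""
    a = 1
    for i in S:
        a *= nth_prime(i)
    return a

def member(a, n):
    """Decide n ∈ S from the code a using only addition and comparison,
    mirroring the proof: find y and i with a = y*p + i, i < p, by repeated
    addition; then n ∈ S iff the remainder i is 0."""
    p = nth_prime(n)
    total = 0
    while total + p <= a:   # build up y*p by repeated addition
        total += p
    i = 0
    while total + i < a:    # recover the remainder i, again by addition
        i += 1
    return i == 0

a = encode({1, 3, 4})                 # primes 2, 5, 7 -> a = 70
print([k for k in range(1, 6) if member(a, k)])  # [1, 3, 4]
```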
A similar argument shows that it is possible to compute the characteristic function of S using the multiplication of M as an oracle, so the multiplication operation on M is also nonrecursive (Kaye 1991:154).
Turing degrees of models of PA
Jockusch and Soare have shown that there exists a model of PA with low degree.
References
Model theory
Theorems in the foundations of mathematics | Tennenbaum's theorem | [
"Mathematics"
] | 880 | [
"Mathematical theorems",
"Foundations of mathematics",
"Mathematical logic",
"Model theory",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
13,393,419 | https://en.wikipedia.org/wiki/Transversal%20%28instrument%20making%29 | Transversals are a geometric construction on a scientific instrument to allow a graduation to be read to a finer degree of accuracy. Their use creates what is sometimes called a diagonal scale, an engineering measuring instrument which is composed of a set of parallel straight lines which are obliquely crossed by another set of straight lines. Diagonal scales are used to measure small fractions of the unit of measurement.
The method is based on the intercept theorem (also known as Thales's theorem). Transversals have been replaced in modern times by vernier scales.
History
Transversals were used at a time when finely graduated instruments were difficult to make. They were found on instruments starting in the early 14th century, but the inventor is unknown. In 1342 Levi Ben Gerson introduced an instrument called Jacob's staff (apparently invented the previous century by Jacob Ben Makir) and described the method of the transversal scale applied to the mentioned instrument.
Thomas Digges mistakenly attributed the discovery of the transversal scale to the navigator and explorer Richard Chancellor (cited by some authors as watchmaker and with other names, among them: Richard Chansler or Richard Kantzler). Its use on astronomical instruments only began in the late 16th century. Tycho Brahe used them and did much to popularize the technique. The technique began to die out once verniers became common in the late 18th century – over a century after Pierre Vernier introduced the technique.
In the interim between transversals and the vernier scale, the nonius system, developed by Pedro Nunes, was used. However, it was never in common use. Tycho also used nonius methods, but he appears to be the only prominent astronomer to do so.
Etymology
Diagonal scale is derived from the Latin word Diagonalis. The Latin word was originally coined from the Greek word diagōnios where dia means "through" and gonios denotes "corners".
Principle of a diagonal scale
The diagonal scale works on the principle of similar triangles: a short length is divided into the required number of equal parts, the corresponding sides of the triangles being proportional.
Linear transversals
Linear transversals were used on linear graduations. A grid of lines was constructed immediately adjacent to the linear graduations. The lines extending above the graduations formed part of the grid. The number of lines perpendicular to the extended graduation lines in the grid was dependent on the degree of fineness the instrument maker wished to provide.
A grid of five lines would permit determination of the measure to one-fifth of a graduation's division. A ten-line grid would permit tenths to be measured. The distance between the lines is not critical as long as it is precisely uniform; greater distances make for greater accuracy.
As seen in the illustration on the right, once the grid was scribed, diagonals (transverse lines) were scribed from the uppermost corner of a column in the grid to the opposite lowest corner. This line intersects the cross lines in the grid in equal intervals. By using an indicator such as a cursor or alidade, or by measuring using a pair of dividers with points on the same horizontal grid line, the closest point where the transversal crosses the grid is determined. That indicates the fraction of the graduation for the measure.
In the illustration, the reading is indicated by the vertical red line. This could be the edge of an alidade or a similar device. Since the cursor crosses the transversal closest to the fourth grid line from the top, the reading (assuming the leftmost long graduation line is 0.0) is 0.54.
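The arithmetic behind a linear transversal reading is a direct application of the intercept theorem: the diagonal crosses grid row k of n at a horizontal offset of k/n of one division. A minimal sketch (function name ours), reproducing the 0.54 reading from the illustration:

```python
def transversal_reading(base, division, grid_row, grid_rows):
    """Reading at a linear transversal: the diagonal crosses grid row k of n
    at a horizontal offset of k/n of one graduation division."""
    return base + division * grid_row / grid_rows

# Example from the text: leftmost long graduation 0.0, cursor just past the
# 0.5 graduation, crossing nearest the 4th of 10 grid lines -> 0.54.
r = transversal_reading(0.5, 0.1, 4, 10)
print(round(r, 2))  # 0.54
```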
Application
The diagonal scale is used in engineering to read lengths with higher accuracy, as it represents a unit in three different orders of magnitude, such as metres, centimetres and millimetres. Diagonal scales are an important part of engineering drawings.
Circular transversals
Circular transversals perform the same function as the linear ones but for circular arcs. In this case, the construction of the grid is significantly more complicated. A rectangular grid will not work. A grid of radial lines and circumferential arcs must be created. In addition, a linear transverse line will not divide the radial grid into equal segments. Circular arc segments must be constructed as transversals to provide the correct proportions.
Tycho Brahe
Tycho Brahe created a grid of transversal lines made with secants between two groups of arcs that form two graduated limbs. The secants are drawn by joining a division of one limb with the next division of the other limb, and so on (see the figure showing a magnified 2 degrees of Tycho Brahe's quadrant of 2 m radius).
He drew, for each degree, six straight transversals in alternate directions forming a "V", and each transversal contained 9 points dividing it into 10 parts, which multiplied by 6 gives 60 minutes. Abd al-Mun'im al 'Âmilî (16th century), by contrast, drew them all in the same direction (although his instrument has less precision).
Other authors
The method of "straight transversals" applied to the measurement of angles on circular or semicircular limbs in astronomical and geographic instruments was treated by several authors. Studying the accuracy of the system, some of them indicated the convenience of employing "circular transversals" instead of "straight transversals".
See also
Micrometer
Vernier scale
References
Bibliography
Daumas, Maurice, Scientific Instruments of the Seventeenth and Eighteenth Centuries and Their Makers, Portman Books, London 1989
External links
Thin Strip Jig with Transversal Scale
Measuring instruments
Historical scientific instruments | Transversal (instrument making) | [
"Technology",
"Engineering"
] | 1,145 | [
"Measuring instruments"
] |
13,393,765 | https://en.wikipedia.org/wiki/ARINC%20826 | ARINC 826 is a protocol for avionic data loading over the Controller Area Network (CAN) as internationally standardized in ISO 11898-1. It allows Loadable Software Aircraft Parts to be loaded in a verifiable and secure manner to avionics Line Replaceable Units (LRUs) and Line Replaceable Modules (LRMs) using CAN.
Based on a subset of ARINC 615A features (the avionic data loading protocol for data loading over Ethernet), ARINC 826 provides basic features for avionics data loading.
References
Avionics | ARINC 826 | [
"Technology"
] | 122 | [
"Avionics",
"Aircraft instruments"
] |
13,394,972 | https://en.wikipedia.org/wiki/Richard%20Mollier | Richard Mollier (30 November 1863, Trieste – 13 March 1935, Dresden) was a German professor of Applied Physics and Mechanics in Göttingen and Dresden, a pioneer of experimental research in thermodynamics, particularly for water, steam, and moist air.
Mollier diagrams (enthalpy-entropy charts) are routinely used by engineers in the design work associated with power plants (fossil or nuclear), compressors, steam turbines, refrigeration systems, and air conditioning equipment to visualize the working cycles of thermodynamic systems.
The Mollier diagram of enthalpy of moist air versus its water vapor content (h–x diagram) is equivalent to the Psychrometrics Chart commonly used in the US and Britain.
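The quantity plotted on the vertical axis of the h–x diagram can be sketched with a standard engineering approximation (a convention from psychrometrics, not taken from Mollier's own tables): h ≈ 1.006·t + x·(2501 + 1.86·t) kJ per kg of dry air.

```python
def moist_air_enthalpy(t_c, x):
    """Specific enthalpy of moist air in kJ per kg of dry air (common
    engineering approximation, assumed here, not Mollier's exact tables).
    t_c: dry-bulb temperature in deg C; x: humidity ratio, kg water/kg dry air."""
    return 1.006 * t_c + x * (2501.0 + 1.86 * t_c)

h = moist_air_enthalpy(20.0, 0.010)   # 20 deg C air carrying 10 g/kg moisture
print(f"h = {h:.1f} kJ/kg dry air")   # about 45.5
```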
Education and career
After attending Gymnasium (grammar school) in Trieste, he commenced studies in mathematics and physics at the University of Graz (Austria), continuing at the Technical University of Munich. He presented his first publications as an outside lecturer for theoretical mechanics. After a short stint in Göttingen, he succeeded Gustav Zeuner in 1897 as Professor of Mechanical Engineering at the Technische Hochschule Dresden. His 1904 publication New Graphs for Technical Thermodynamics greatly simplified calculations involving thermodynamic processes. His New Tables and Diagrams for Water Vapor, first published in 1906, appeared in six further editions through 1932, as he updated it to reflect new developments.
Honors
At the 1923 Thermodynamics Conference held in Los Angeles, it was decided to name, in his honor, as a "Mollier diagram" any thermodynamic diagram using the enthalpy h as one of its axes. Examples: the h–s graph for steam or the h–x graph for moist air.
Publications
Die Entropie der Wärme (The Entropy of Heat) 1895
Dampftafeln und Diagramme des Kohlendioxid (Vapor Tables and Diagrams for Carbon Dioxide) 1896
Neue Diagramme zur Technischen Wärmelehre (New Graphs for Technical Thermodynamics) 1904
Neue Tabellen und Diagramme für Wasserdampf (New Tables and Diagrams for Water Vapor) Berlin 1906
See also
Psychrometrics
External links
mollierdiagram.com digital version of Mollier Diagram
Publications by and about Richard Mollier
Photo
German mechanical engineers
19th-century German physicists
Technical University of Munich alumni
1863 births
1935 deaths
20th-century German physicists
Thermodynamicists
Emigrants from Austria-Hungary to Germany | Richard Mollier | [
"Physics",
"Chemistry"
] | 522 | [
"Thermodynamics",
"Thermodynamicists"
] |
13,395,049 | https://en.wikipedia.org/wiki/Tebuconazole | Tebuconazole is a triazole fungicide used agriculturally to treat plant pathogenic fungi.
Environmental hazards
Though the U.S. Food and Drug Administration considers this fungicide to be safe for humans, it may still pose a risk. It appears on the United States Environmental Protection Agency Office of Pesticide Programs carcinogen list with a rating of C (possible carcinogen). Its acute toxicity is moderate. According to the World Health Organization toxicity classification, it is listed as III, meaning slightly hazardous.
Due to the potential for endocrine-disrupting effects, tebuconazole was assessed by the Swedish Chemicals Agency as being potentially removed from the market by EU regulation 1107/2009.
References
External links
United States Estimated Annual Agricultural Pesticide Use Pesticide Use Maps - Tebuconazole
Fungicides
Lanosterol 14α-demethylase inhibitors
Triazoles
4-Chlorophenyl compounds | Tebuconazole | [
"Biology"
] | 203 | [
"Fungicides",
"Biocides"
] |
13,397,634 | https://en.wikipedia.org/wiki/Iclazepam | Iclazepam (Clazepam) is a drug which is a benzodiazepine derivative. It has sedative and anxiolytic effects similar to those produced by other benzodiazepine derivatives, and is around the same potency as chlordiazepoxide.
Iclazepam is a derivative of nordazepam substituted with a cyclopropylmethoxyethyl group on the N1 nitrogen. Once in the body, iclazepam is quickly metabolised to nordazepam and its N-(2-hydroxyethyl) derivative, which are thought to be mainly responsible for its effects.
See also
List of benzodiazepines
References
Benzodiazepines
Chloroarenes
Ethers
Lactams | Iclazepam | [
"Chemistry"
] | 167 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,397,643 | https://en.wikipedia.org/wiki/Gorgonin | Gorgonin is a flexible scleroprotein which provides structural strength to gorgonian corals, a subset of the order Alcyonacea. Gorgonian corals have supporting skeletal axes made of gorgonin and/or calcite. Gorgonin makes up the joints of bamboo corals in the deep sea, and forms the central internal skeleton of sea fans. It frequently contains appreciable quantities of bromine, iodine, and tyrosine.
Gorgonin is diagenetically stable and is deposited in discrete annual growth rings in Primnoa resedaeformis, and possibly other species.
History
The study of the chemistry of gorgonin, as a substance rather than a protein, was started by Balard in 1825, who reported on the occurrence of "iodogorgic acid". Several sources cite Valenciennes as having given the protein the name of "gorgonin" in an 1855 monograph. However, the monograph cited appears to contradict this, solely naming a newly-discovered substance in Gorgonians "cornéine" after its resemblance to substances extracted from mammalian hooves and nails. According to one 1939 paper, Valenciennes' discovery was followed by investigations by Krukenberg, Mendel, Morner, and others, which suggested the protein was a keratin, similar to those obtained from the ectoderm of "higher animals".
Scientific use
Research has shown that measurements of the gorgonin and calcite within species of gorgonian corals can be useful in paleoclimatology and paleoceanography. Studies of the growth, composition, and structure of the skeleton of certain species of gorgonians, (e.g., Primnoa resedaeformis, and Plexaurella dichotoma) can be highly correlated with seasonal and climatic variation.
References
Proteins | Gorgonin | [
"Chemistry"
] | 393 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
13,398,531 | https://en.wikipedia.org/wiki/Metaclazepam | Metaclazepam (marketed under the brand name Talis) is a drug which is a benzodiazepine derivative. It is a relatively selective anxiolytic with less sedative or muscle relaxant properties than other benzodiazepines such as diazepam or bromazepam. It has an active metabolite N-desmethylmetaclazepam, which is the main metabolite of metaclazepam. There is no significant difference in metabolism between younger and older individuals.
Metaclazepam is slightly more effective as an anxiolytic than bromazepam, or diazepam, with a 15 mg dose of metaclazepam equivalent to 4 mg of bromazepam. Metaclazepam can interact with alcohol producing additive sedative-hypnotic effects. Fatigue is a common side effect from metaclazepam at high doses. Small amounts of metaclazepam as well as its metabolites enter into human breast milk.
See also
Benzodiazepine
References
Benzodiazepines
2-Chlorophenyl compounds
Ethers
Bromoarenes | Metaclazepam | [
"Chemistry"
] | 245 | [
"Organic compounds",
"Functional groups",
"Ethers"
] |
13,398,581 | https://en.wikipedia.org/wiki/Indic%20computing | Indic computing means "computing in Indic", i.e., in Indian scripts and languages. It involves developing software in Indic scripts and languages, input methods, localization of computer applications, web development, database management, spell checkers, speech-to-text and text-to-speech applications, and OCR in Indian languages.
Unicode standard version 15.0 specifies codes for 9 Indic scripts in Chapter 12 titled "South and Central Asia-I, Official Scripts of India". The 9 scripts are Bengali, Devanagari, Gujarati, Gurmukhi, Kannada, Malayalam, Oriya, Tamil and Telugu.
Many Indic computing projects are under way, involving government-sector companies, volunteer groups and individuals.
Government sector
The Indian Union Government made it mandatory for mobile phone handsets manufactured, stored, sold or distributed in India to support displaying and typing text in all 22 scheduled languages. This move has driven the use of Indian languages by millions of users.
TDIL
The Department of Electronics and Information Technology, India initiated the TDIL (Technology Development for Indian Languages) with the objective of developing Information Processing Tools and Techniques to facilitate human-machine interaction without language barrier; creating and accessing multilingual knowledge resources; and integrating them to develop innovative user products and services.
In 2005, it started distributing language software tools developed by government, academic and private companies in the form of CDs for non-commercial use.
Some outcomes of the TDIL programme are deployed on the Indian Language Technology Proliferation & Deployment Centre, which disseminates the linguistic resources, tools and applications developed under TDIL funding. The programme expanded significantly under the leadership of Dr. Swaran Lata, who also established its international footprint; she has since retired.
C-DAC
C-DAC is an Indian government software company involved in developing language-related software. It is best known for developing the InScript keyboard, the standard keyboard for Indian languages. It has also developed many Indic language solutions, including word processors, typing tools, text-to-speech software and OCR for Indian languages.
BharateeyaOO.org
The work developed at C-DAC, Bangalore (earlier known as NCST, Bangalore) became BharateeyaOO; OpenOffice 2.1 had support for over 10 Indian languages.
BOSS
BOSS linux was developed by the Centre for Development of Advanced Computing (CDAC) to promote use of open-source software in India.
NGO and Volunteer groups
Indlinux
Indlinux organisation helped organise the individual volunteers working on different indic language versions of Linux and its applications.
Sarovar
Sarovar.org is India's first portal to host projects under Free/Open source licenses. It is located in Trivandrum, India and hosted at Asianet data center. Sarovar.org is customised, installed and maintained by Linuxense as part of their community services and sponsored by River Valley Technologies. Sarovar.org is built on Debian Etch and GForge and runs off METTLE.
Pinaak
Pinaak is a non-government charitable society devoted to Indic language computing. It works for software localization, developing language software, localizing open source software, enriching online encyclopedias etc. In addition to this Pinaak works for educating people about computing, ethical use of Internet and use of Indian languages on Internet.
Ankur Group
Ankur Group is working toward supporting the Bengali language on the Linux operating system, including a localized Bengali GUI, live CD, English-to-Bengali translator, Bengali OCR and a Bengali dictionary.
BhashaIndia
SMC
SMC is a free software group working to bridge the language divide in Kerala on the technology front; it is today the biggest language computing community in India.
Input methods
Full size keyboards
With the advent of Unicode, inputting Indic text on computers has become much easier. A number of methods exist for this purpose, the main ones being:
InScript
InScript is the standard keyboard for Indian languages, developed by C-DAC and standardized by the Government of India. It now comes built into all major operating systems, including Microsoft Windows (2000, XP, Vista, 7), Linux and Macintosh.
Phonetic transliteration
This is a typing method in which, for instance, the user types text in an Indian language using Roman characters and it is phonetically converted to equivalent text in Indian script in real time. This type of conversion is done by phonetic text editors, word processors and software plugins. Building up on the idea, one can use phonetic IME tools that allow Indic text to be input in any application.
Some examples of phonetic transliterators are Xlit, Google Indic Transliteration, BarahaIME, Indic IME, Rupantar, SMC's Indic Keyboard and Microsoft Indic Language Input Tool. SMC's Indic Keyboard has support for as many as 23 languages whereas Google Indic Keyboard only supports 11 Indian languages.
They can be broadly classified as:
Fixed transliteration scheme based tools – They work using a fixed transliteration scheme to convert text. Some examples are Indic IME, Rupantar and BarahaIME.
Intelligent/Learning based transliteration tools – They compare the word with a dictionary and then convert it to the equivalent words in the target language. Some of the popular ones are Google Indic Transliteration, Xlit, Microsoft Indic Language Input Tool and QuillPad.
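A fixed-scheme transliterator of the first kind can be sketched in a few lines. The mini-scheme below is invented for illustration — it is not the mapping used by Indic IME, Rupantar or BarahaIME — but it shows the greedy longest-match substitution such tools perform (real tools also handle vowel signs and conjuncts).

```python
# Toy fixed-scheme Latin -> Devanagari transliterator (illustrative only;
# this mini-mapping is invented, not any real tool's scheme).
SCHEME = {"bhaa": "भा", "ra": "र", "ta": "त", "t": "त", "ka": "क", "naa": "ना"}

def transliterate(text):
    """Greedy longest-match substitution over the fixed scheme."""
    out, pos = [], 0
    while pos < len(text):
        for length in (4, 3, 2, 1):          # try longer keys first
            chunk = text[pos:pos + length]
            if len(chunk) == length and chunk in SCHEME:
                out.append(SCHEME[chunk])
                pos += length
                break
        else:
            out.append(text[pos])            # pass unmapped characters through
            pos += 1
    return "".join(out)

print(transliterate("bhaarat"))  # भारत
```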
Remington (typewriter)
This layout was developed when computers had not been invented or deployed with Indic languages, and typewriters were the only means to type text in Indic scripts. Since typewriters were mechanical and could not include a script processor engine, each character had to be placed on the keyboard separately, which resulted in a very complex and difficult to learn keyboard layout.
With the advent of Unicode, the Remington layout was added to various typing tools for sake of backward compatibility, so that old typists did not have to learn a new keyboard layout. Nowadays this layout is only used by old typists who are used to this layout due to several years of usage. One tool to include Remington layout is Indic IME. A font that is based on the Remington keyboard layout is Kruti Dev. Another online tool that very closely supports the old Remington keyboard layout using Kruti Dev is the Remington Typing tool.
Braille
IBus Sharada Braille, which supports seven Indian languages was developed by SMC.
Mobile phones with Numeric keyboards
Basic mobile phone models have 12 keys, like the plain old telephone keypad. Each key is mapped to 3 or 4 English letters to facilitate data entry in English. For inputting Indian languages on this kind of keypad there are two approaches: the multi-tap method, and methods using visual help from the screen, like the Panini Keypad. The primary usage is SMS. The 140-character message size available for English/Roman text accommodates only about 70 characters for complex-script languages like Hindi, so proprietary Unicode compression is sometimes used to increase the effective size of a single message. A research study of the available methods and recommendations for a proposed standard was released by the Broadband Wireless Consortium of India (BWCI).
Transliteration/Phonetic methods
English is used to type in Indian languages.
QuillPad
IndiSMS
Native methods
In native methods, the letters of the language are displayed on the screen corresponding to the numeral keys based on the probabilities of those letters for that language. Additional letters can be accessed by using a special key. When a word is partially typed, options are presented from which the user can make a selection.
Smart phones with Qwerty keyboards
Most smart phones have about 35 keys, catering primarily to English. Numerals and some symbols are accessed with a special key called Alt. Indic input methods have yet to evolve for these types of phones, as Unicode rendering support is not widely available.
For Smart Phones with Soft/Virtual keyboards
InScript is being adapted for smart phone usage. For Android phones that can render Indic text, apps such as the Swalekh multilingual keypad and MultiLing Keyboard are available. Gboard offers support for several Indian languages.
Localization
Localization means translating software, operating systems, websites and other applications into Indian languages. Various volunteer groups are working in this direction.
Mandrake Tamil Version
A notable example is the Tamil version of Mandrake Linux (defunct since 2011), released by Tamil speakers in Toronto, Canada. All features can be accessed in Tamil, eliminating the prerequisite of English knowledge for using the computer for those who know Tamil.
IndLinux
IndLinux is a volunteer group aiming to translate the Linux operating system into Indian languages. By the efforts of this group, Linux has been localized almost completely in Hindi and other Indian languages.
Nipun
Nipun is an online translation system aimed at translating various applications into Hindi. It is part of the Akshargram Network.
Localising Websites
GoDaddy has localised its website in Hindi, Marathi and Tamil and also noted that 40% of the call volume for IVR is in Indian Languages.
Indic blogging
Indic blogging refers to blogging in Indic languages. Various efforts have been made to promote blogging in Indian languages.
Social Networks
Some social networks have been started in Indian languages.
Programming
Indic programming languages
BangaBhasha - Programming in Bangla
Programming in Hindi
Ezhil, a programming language in Tamil
Frameworks
Gherkin, a popular Domain-specific language has support for Gujarati, Hindi, Kannada, Punjabi, Tamil, Telugu and Urdu
Libraries
Natural language processing in Indian languages is on the rise; several libraries, such as iNLTK and StanfordNLP, are available.
Translation
Google offers improved translation feature for Hindi, Bengali, Marathi, Tamil, Telugu, Gujarati, Punjabi, Malayalam and Kannada, with offline support as well. Microsoft also offers translation for some of these languages.
Software
Indic Language Stack
In a symposium jointly organized by FICCI and TDIL, Mr. Ajay Prakash Sawhney, Secretary, Ministry of Electronics and IT, Government of India, said that an Indian Language Stack can help overcome barriers of communication.
Spell Checkers
Transliteration tools
Transliteration tools allow users to read a text in a different script. At present, Aksharamukha is the tool that supports the most Indian scripts; text in any of these scripts can be converted to any other, and vice versa. Google also offers Indic transliteration, and both Google and Microsoft allow transliteration from Latin letters to Indic scripts.
Speech-to-Text
Voice Recognition
Apple Inc. added support for major Indian languages in Siri. Amazon's Alexa has support for Hindi and recognises major Indian languages partially. Google Assistant also has support for major Indian languages.
Internationalized Domain Names
Operating Systems
Indus OS
Virtual Assistants
AI-based virtual assistants such as Google Assistant provide support for various Indian languages.
Usage and Growth
According to GoDaddy, the Hindi, Marathi and Tamil languages accounted for 61% of India's internet traffic. Less than 1% of online content is in Indian languages. Newly created top apps support multiple Indian languages and/or promote Indian-language content. 61% of Indian WhatsApp users primarily use their native languages to communicate on it. A recent study revealed that Internet adoption is highest among speakers of local languages such as Tamil, Hindi, Kannada, Bengali, Marathi, Telugu, Gujarati and Malayalam; it estimates that Marathi, Bengali, Tamil and Telugu speakers will form 30% of the total local-language user base in the country. Currently, Tamil has the highest Internet adoption level at 42%, followed by Hindi at 39% and Kannada at 37%. Intex also reported that 87% of its regional-language usage came from Hindi, Bengali, Tamil, Gujarati and Marathi speakers. Lava mobiles reported that Tamil and Malayalam are the most popular on their phones, more than even Hindi.
See also
Indic Unicode
Hindi Blogosphere
Indian Blogosphere
Clip font
References
Indic | Indic computing | [
"Technology"
] | 2,493 | [
"Natural language and computing"
] |
13,398,615 | https://en.wikipedia.org/wiki/Super-prime | Super-prime numbers, also known as higher-order primes or prime-indexed primes (PIPs), are the subsequence of prime numbers that occupy prime-numbered positions within the sequence of all prime numbers. In other words, if prime numbers are matched with ordinal numbers, starting with prime number 2 matched with ordinal number 1, then the primes matched with prime ordinal numbers are the super-primes.
The subsequence begins
3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127, 157, 179, 191, 211, 241, 277, 283, 331, 353, 367, 401, 431, 461, 509, 547, 563, 587, 599, 617, 709, 739, 773, 797, 859, 877, 919, 967, 991, ... .
That is, if p(n) denotes the nth prime number, the numbers in this sequence are those of the form p(p(n)).
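As a sanity check, the definition p(p(n)) can be implemented directly. The sketch below (plain Python with a simple sieve; written for illustration, not taken from any cited source) regenerates the start of the subsequence above.

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit, in order."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = b"\x00" * len(range(p * p, limit + 1, p))
    return [i for i, flag in enumerate(sieve) if flag]

def super_primes(limit):
    """Primes p(n) (with 1-based index n, so p(1) = 2) whose index n is itself prime."""
    primes = primes_up_to(limit)
    prime_set = set(primes)
    return [p for n, p in enumerate(primes, start=1) if n in prime_set]

print(super_primes(130))
# [3, 5, 11, 17, 31, 41, 59, 67, 83, 109, 127]
```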
Dressler and Parker used a computer-aided proof (based on calculations involving the subset sum problem) to show that every integer greater than 96 may be represented as a sum of distinct super-prime numbers. Their proof relies on a result resembling Bertrand's postulate, stating that (after the larger gap between super-primes 5 and 11) each super-prime number is less than twice its predecessor in the sequence.
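This representability claim is easy to spot-check numerically. The sketch below is a finite check over a small range, not the published proof; it builds the super-primes by trial division and uses a standard subset-sum reachability set.

```python
def is_prime(n):
    """Trial division; adequate for the small values used here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Super-primes below 1000: primes whose 1-based index in the prime sequence is prime.
supers, idx, n = [], 0, 1
while n < 1000:
    n += 1
    if is_prime(n):
        idx += 1
        if is_prime(idx):
            supers.append(n)

def representable(target, values):
    """Subset-sum: is `target` a sum of distinct members of `values`?"""
    reachable = {0}
    for v in values:
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable

# Spot-check the theorem on a finite range; 96 itself is not representable.
assert all(representable(k, supers) for k in range(97, 500))
assert not representable(96, supers)
```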
It has been shown that the number of super-primes up to x grows more slowly than the number of primes up to x; this can be used to show that the set of all super-primes is small, in the sense that the sum of their reciprocals converges.
One can also define "higher-order" primeness in much the same way and obtain analogous sequences of primes.
A variation on this theme is the sequence of prime numbers with palindromic prime indices, beginning with
3, 5, 11, 17, 31, 547, 739, 877, 1087, 1153, 2081, 2381, ... .
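Under the same indexing convention, this variant can be regenerated by keeping the primes whose 1-based index is both prime and palindromic (a minimal sketch written for illustration):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

# Collect the first ten primes whose index is a palindromic prime.
seq, idx, n = [], 0, 1
while len(seq) < 10:
    n += 1
    if is_prime(n):
        idx += 1
        if is_prime(idx) and is_palindrome(idx):
            seq.append(n)

print(seq)
# [3, 5, 11, 17, 31, 547, 739, 877, 1087, 1153]
```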
References
External links
A Russian programming contest problem related to the work of Dressler and Parker
Classes of prime numbers | Super-prime | [
"Mathematics"
] | 449 | [
"Number theory stubs",
"Number theory"
] |
13,399,188 | https://en.wikipedia.org/wiki/Communal%20roosting | Communal roosting is an animal behavior where a group of individuals, typically of the same species, congregate in an area for a few hours based on an external signal and will return to the same site with the reappearance of the signal. Environmental signals are often responsible for this grouping, including nightfall, high tide, or rainfall. The distinction between communal roosting and cooperative breeding is the absence of chicks in communal roosts. While communal roosting is generally observed in birds, the behavior has also been seen in bats, primates, and insects. The size of these roosts can measure in the thousands to millions of individuals, especially among avian species.
There are many benefits associated with communal roosting including: increased foraging ability, decreased thermoregulatory demands, decreased predation, and increased conspecific interactions. While there are many proposed evolutionary concepts for how communal roosting evolved, no specific hypothesis is currently supported by the scientific community as a whole.
Evolution
One of the adaptive explanations for communal roosting is the hypothesis that individuals benefit from the exchange of information at communal roosts. This idea, known as the information center hypothesis (ICH), was proposed by Peter Ward and Amotz Zahavi in 1973. It states that bird assemblages such as communal roosts act as information hubs for distributing knowledge about food source location. When food patch knowledge is unevenly distributed among flock members, the "clueless" members can follow and join knowledgeable members to find good feeding locations. In Ward and Zahavi's words on how communal roosts came about: "...communal roosts, breeding colonies and certain other bird assemblages have been evolved primarily for the efficient exploitation of unevenly-distributed food sources by serving as 'information-centres.'"
The two strategies hypothesis
The two strategies hypothesis was put forth by Patrick Weatherhead in 1983 as an alternative to the then-popular information center hypothesis. It proposes that different individuals join and participate in communal roosts for different reasons, based primarily on their social status. Unlike under the ICH, not all individuals join a roost to increase their foraging capability. While roosts initially evolved through information sharing among older, more experienced foragers, their evolution was aided by a further benefit to those foragers: as better foragers they acquired high rank within the roost. As dominant individuals, they are able to claim the safest roosting positions, typically those highest in the tree or closest to the center of the roost, where the less dominant and unsuccessful foragers act as a physical predation buffer for them. This is similar to the selfish herd theory, which states that individuals within herds will use conspecifics as physical barriers against predation. Younger and less dominant individuals still join the roost because they gain some safety from predation through the dilution effect, as well as the opportunity to learn from the more experienced foragers already in the roost.
Support for the two strategies hypothesis has been shown in studies of roosting rooks (Corvus frugilegus). A 1977 study of roosting rooks by Ian Swingland showed that an inherent hierarchy exists within rook communal roosts. In this hierarchy, the most dominant individuals routinely occupy the roosts highest in the tree, and while they pay a cost (increased energy use to keep warm) they are safer from terrestrial predators. Despite this enforced hierarchy, lower-ranking rooks remained with the roost, indicating that they still received some benefit from their participation. When weather conditions worsened, the more dominant rooks forced the younger and less dominant out of their roosts. Swingland proposed that the risk of predation at lower roosts was outweighed by the gains in reduced thermal demands. Similar support for the two strategies hypothesis has been found in red-winged blackbird roosts, in which the more dominant males regularly inhabit roosts in thicker brush, where they are better hidden from predators than the less dominant individuals forced to roost at the edge of the brush.
The TSH makes several assumptions that must be met in order for the theory to work. The first major assumption is that within communal roosts there are certain roosts that possess safer or more beneficial qualities than other roosts. The second assumption is that the more dominant individuals will be capable of securing these roosts, and finally dominance rank must be a reliable indicator of foraging ability.
The recruitment center hypothesis (RCH)
Proposed by Heinz Richner and Philipp Heeb in 1996, the recruitment center hypothesis (RCH) explains the evolution of communal roosting as a result of group foraging. The RCH also explains behaviors seen at communal roosts such as the passing of information, aerial displays, and the presence or absence of calls by leaders. The hypothesis assumes:
Patchy feeding area: Food is not evenly distributed across an area but grouped into patches
Short-lasting: Patches are not present for an extended period of time
Relatively abundant: There are many patches with relatively equal amounts of food present in each
These factors decrease relative food competition since control over a food source by an individual is not correlated to the duration or richness of said source. The passing of information acts to create a foraging group. Group foraging decreases predation and increases relative feeding time at the cost of sharing a food source. The decrease in predation is due to the dilution factor and an early warning system created by having multiple animals alert. Increases in relative feeding are explained by decreasing time spent watching for predators and social learning. Recruiting new members to food patches benefits successful foragers by increasing relative numbers. With the addition of new members to a group the benefits of group foraging increase until the group size is larger than the food source is able to support. Less successful foragers benefit by gaining knowledge of where food sources are located. Aerial displays are used to recruit individuals to participate in group foraging. However, not all birds display since not all birds are members in a group or are part of a group that is seeking participants. In the presence of patchy resources, Richner and Heeb propose the simplest manner would be to form a communal roost and recruit participants there. In other words, recruitment to foraging groups explains the presence of these communal roosts.
Support for the RCH has been shown in ravens (Corvus corax). Reviewing a previous study by John Marzluff, Bernd Heinrich, and Colleen Marzluff, Etienne Danchin and Heinz Richner argued that the collected data support the RCH rather than the information center hypothesis favored by Marzluff et al. Both knowledgeable and naïve ("clueless") birds were shown to make up the roosts and leave them at the same time, with the naïve birds being led to the food sources. Aerial demonstrations peaked around the time new food sources were discovered. These communities were made up of non-breeders foraging in patchily distributed food environments, matching the assumptions made by Richner and Heeb. In 2014, however, Sarangi et al. showed that the recruitment centre hypothesis did not hold in a study population of Common Mynas (Acridotheres tristis), and hence that Common Myna roosts are not recruitment centres.
To date there is neither scientific evidence excluding the RCH nor overwhelming evidence in its favor. The RCH also overlooks the possibility that information passed within the communal roost itself may strengthen and solidify the community.
Potential benefits
Birds in a communal roost can reduce the impact of wind and cold weather by sharing body heat through huddling, which reduces the overall energy demand of thermoregulation. A study by Guy Beauchamp explained that black-billed magpies (Pica hudsonia) often formed the largest roosts during the winter. The magpies tend to react very slowly at low body temperatures, leaving them vulnerable to predators. Communal roosting in this case would improve their reactivity by sharing body heat, allowing them to detect and respond to predators much more quickly.
A large roost with many members can detect predators more easily, allowing individuals to respond and alert others to threats more quickly. Individual risk is also lowered by the dilution effect, which states that an individual in a large group has a low probability of being preyed upon. Similar to the selfish-herd theory, communal roosts demonstrate a hierarchy of sorts in which older members and better foragers nest in the interior of the group, decreasing their exposure to predators. Younger birds and less able foragers located on the outskirts still gain some safety from predation through the dilution effect.
According to the ICH, successful foragers share knowledge of favorable foraging sites with unsuccessful foragers at a communal roost, making it energetically advantageous for individuals to communally roost and forage more easily. Additionally with a greater number of individuals at a roost, the searching range of a roost will increase and improve the probability of finding favorable foraging sites.
There are also potentially improved mating opportunities, as demonstrated by red-billed choughs (Pyrrhocorax pyrrhocorax), which have a portion of a communal roost dedicated to individuals that lack mates and territories.
Potential costs
It is costly for territorial species to travel to and from roosts, and in leaving their territories they open themselves up to takeovers. Communal roosts may also draw the attention of potential predators, as the roost becomes audibly and visibly more conspicuous with increasing membership. There is also a decrease in the local food supply, as a greater number of members results in competition for food. Finally, a large number of roost members increases exposure to droppings, which deteriorate plumage and reduce the ability of feathers to shed water, leaving birds vulnerable to dying from exposure.
Examples by species
Birds
Communal roosting has been observed in numerous avian species. As previously mentioned, rooks (Corvus frugilegus) are known to form large nocturnal roosts, which can contain anywhere from a few hundred to over a thousand individuals. These roosts disband at daybreak, when the birds return to foraging. Studies have shown that this roosting behavior is mediated by light intensity: rooks return to the roost once the ambient light, which falls with sunset, has sufficiently dimmed.
Acorn woodpeckers (Melanerpes formicivorus) are known to form communal roosts during the winter months. In these roosts two to three individuals will share a cavity during the winter. Within these tree cavities woodpeckers share their body heat with each other and therefore decrease the thermoregulatory demands on the individuals within the roost. Small scale communal roosting during the winter months has also been observed in Green Woodhoopoes (Phoeniculus purpureus). Winter communal roosts in these species typically contain around five individuals.
Tree swallows (Tachycineta bicolor) located in southeastern Louisiana are known to form nocturnal communal roosts and have been shown to exhibit high roost fidelity, with individuals often returning to the same roost they had occupied on the previous night. Research has shown that swallows form communal roosts due to the combined factors of conspecific attraction, where individual swallows are likely to aggregate around other swallows of the same species, and roost fidelity. Tree swallows will form roosts numbering in hundreds or thousands of individuals.
Eurasian crag martins (Ptyonoprogne rupestris) also form large nocturnal communal roosts during the winter months. Up to 12,000 individuals have been found roosting communally at the Gorham's Cave Complex in Gibraltar. As with the tree swallows, research has shown that Eurasian crag martins also exhibit a high degree of fidelity to the roost, with individuals returning to the same caves within and between years.
Red-billed choughs (Pyrrhocorax pyrrhocorax) roost in what has been classified as either a main roost or a sub roost. Main roosts are constantly in use, whereas the sub roosts are used irregularly by individuals lacking both a mate and territory. These sub roosts are believed to help improve the ability of non-breeding choughs to find a mate and increase their territory ranges.
Interspecies roosts have been observed between different bird species. In San Blas, Mexico, the great egret (Ardea alba), the little blue heron (Egretta caerulea), the tricolored heron (Egretta tricolor), and the snowy egret (Egretta thula) are known to form large communal roosts. The snowy egret determines the general location of the roost because the other three species rely on its ability to find food sources. These roosts often have a hierarchical system in which the more dominant species (here, the snowy egret) typically occupies the more desirable higher perches. Interspecies roosts have also been observed among other avian species.
Insects
Communal roosting has also been well documented among insects, particularly butterflies. The passion-vine butterfly (Heliconius erato) is known to form nocturnal roosts, typically comprising four individuals. It is believed that these roosts deter potential predators due to the fact that predators attack roosts less often than they do individuals.
Communal roosting behavior has also been observed in the neotropical zebra longwing butterfly (Heliconius charitonius) in the La Cinchona region of Costa Rica. A study of this roost showed that individuals vary in their roost fidelity, and that they tend to form smaller sub roosts. The same study observed that in this region communal roosting can be mediated by heavy rainfall.
Communal roosting has also been observed in south Peruvian tiger beetles of the subfamily Cicindelidae. These species of tiger beetle have been observed to form communal roosts comprising anywhere from two to nine individuals at night and disbanding during the day. It is hypothesized that these beetles roost high in the treetops in order to avoid ground-based predators.
Mammals
While there are few observations of communal roosting mammals, the trait has been seen in several species of bats. The little brown bat (Myotis lucifugus) is known to participate in communal roosts of up to thirty-seven individuals during cold nights in order to decrease thermoregulatory demands, with the roost disbanding at daybreak.
Several other species of bats, including the hoary bat (Lasiurus cinereus) and the big brown bat (Eptesicus fuscus) have also been observed to roost communally in maternal colonies in order to reduce the thermoregulatory demands on both the lactating mothers and juveniles.
See also
Communal breeding
Cooperative breeding
Ecology
Ecosystem
Evolutionary models of food sharing
Habitat conservation
Habitat fragmentation
Habitat
Heliconius charithonia
Mating system
Reproduction
References
Ecology
Ethology | Communal roosting | [
"Biology"
] | 3,221 | [
"Behavioural sciences",
"Ethology",
"Behavior",
"Ecology"
] |
13,399,751 | https://en.wikipedia.org/wiki/Binoviewer | A binoviewer is an optical device designed to enable binocular viewing through a single objective.
In contrast to binoculars, a binoviewer does not provide a truly stereoscopic view: both images are produced by the same objective and are identical except for aberrations induced by the binoviewer itself. The eyes and brain nevertheless process the view binocularly, so the result is partially stereoscopic and partially monocular in character.
A binoviewer consists of a beam splitter which splits the image provided by the objective into two identical (but fainter) copies, and a system of prisms or mirrors that relay the images to a pair of identical eyepieces.
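Since the beam splitter sends roughly half of the objective's light to each eyepiece, each copy is fainter by about 0.75 astronomical magnitudes. This is a back-of-envelope figure that ignores additional transmission losses in the prisms:

```python
import math

# Halving the light corresponds to a dimming of 2.5 * log10(2) magnitudes.
loss_per_eye = 2.5 * math.log10(2)
print(f"{loss_per_eye:.2f} mag")  # 0.75 mag
```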
Binoviewers are a standard component of laboratory microscopes and are also used with optical telescopes, particularly in amateur astronomy.
Trinocular splitters are also used, where a camera is to be attached as well.
References
Optical devices
Microscope components | Binoviewer | [
"Materials_science",
"Engineering"
] | 182 | [
"Glass engineering and science",
"Optical devices"
] |
10,897,878 | https://en.wikipedia.org/wiki/Mesoscopic%20physics | Mesoscopic physics is a subdiscipline of condensed matter physics that deals with materials of an intermediate size. These materials range in size between the nanoscale for a quantity of atoms (such as a molecule) and of materials measuring micrometres. The lower limit can also be defined as being the size of individual atoms. At the microscopic scale are bulk materials. Both mesoscopic and macroscopic objects contain many atoms. Whereas average properties derived from constituent materials describe macroscopic objects, as they usually obey the laws of classical mechanics, a mesoscopic object, by contrast, is affected by thermal fluctuations around the average, and its electronic behavior may require modeling at the level of quantum mechanics.
A macroscopic electronic device, when scaled down to a meso-size, starts revealing quantum mechanical properties. For example, at the macroscopic level the conductance of a wire increases continuously with its diameter. However, at the mesoscopic level, the wire's conductance is quantized: the increases occur in discrete, or individual, whole steps. During research, mesoscopic devices are constructed, measured and observed experimentally and theoretically in order to advance understanding of the physics of insulators, semiconductors, metals, and superconductors. The applied science of mesoscopic physics deals with the potential of building nanodevices.
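For ideal ballistic conductors, the step size of these discrete conductance increases is the conductance quantum G0 = 2e²/h. A quick computation from the exact SI values of the constants gives its magnitude (a standard result, included here for illustration):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, coulombs (exact SI value)
PLANCK = 6.62607015e-34     # Planck constant, joule-seconds (exact SI value)

G0 = 2 * E_CHARGE ** 2 / PLANCK  # conductance quantum, siemens
print(f"G0 = {G0:.4e} S")        # G0 = 7.7481e-05 S
```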
Mesoscopic physics also addresses fundamental practical problems which occur when a macroscopic object is miniaturized, as with the miniaturization of transistors in semiconductor electronics. The mechanical, chemical, and electronic properties of materials change as their size approaches the nanoscale, where the percentage of atoms at the surface of the material becomes significant. For bulk materials larger than one micrometre, the percentage of atoms at the surface is insignificant in relation to the number of atoms in the entire material. The subdiscipline has dealt primarily with artificial structures of metal or semiconducting material which have been fabricated by the techniques employed for producing microelectronic circuits.
There is no rigid definition for mesoscopic physics, but the systems studied are normally in the range of 100 nm (the size of a typical virus) to 1000 nm (the size of a typical bacterium); 100 nanometers is the approximate upper limit for a nanoparticle. Thus, mesoscopic physics has a close connection to the fields of nanofabrication and nanotechnology. Devices used in nanotechnology are examples of mesoscopic systems. Three categories of new electronic phenomena in such systems are interference effects, quantum confinement effects and charging effects.
Quantum confinement effects
Quantum confinement effects describe electrons in terms of energy levels, potential wells, valence bands, conduction bands, and electron energy band gaps.
Electrons in bulk dielectric materials (larger than 10 nm) can be described by energy bands or electron energy levels. Electrons exist at different energy levels or bands. In bulk materials these energy levels are described as continuous because the difference in energy is negligible. As electrons stabilize at various energy levels, most vibrate in valence bands below a forbidden energy level, named the band gap. This region is an energy range in which no electron states exist. A smaller amount have energy levels above the forbidden gap, and this is the conduction band.
The quantum confinement effect can be observed once the diameter of the particle is of the same magnitude as the wavelength of the electron's wave function. When materials are this small, their electronic and optical properties deviate substantially from those of bulk materials.
As the material is miniaturized towards nano-scale the confining dimension naturally decreases. The characteristics are no longer averaged by bulk, and hence continuous, but are at the level of quanta and thus discrete. In other words, the energy spectrum becomes discrete, measured as quanta, rather than continuous as in bulk materials. As a result, the bandgap asserts itself: there is a small and finite separation between energy levels. This situation of discrete energy levels is called quantum confinement.
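The size dependence of this discreteness can be illustrated with the textbook one-dimensional infinite square well, where E_n = n²h²/(8mL²). The level spacing is negligible against thermal energy at micrometre sizes but becomes comparable at nanometre sizes. This is an idealized illustration, not a model of any specific device:

```python
PLANCK = 6.62607015e-34        # Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
K_BOLTZMANN = 1.380649e-23     # Boltzmann constant, J/K

def level_spacing(width, n=1):
    """E_{n+1} - E_n for an electron in a 1-D infinite well of the given width (metres)."""
    energy = lambda k: k ** 2 * PLANCK ** 2 / (8 * M_ELECTRON * width ** 2)
    return energy(n + 1) - energy(n)

kT = K_BOLTZMANN * 300  # thermal energy near room temperature
for width in (1e-6, 10e-9):  # 1 micrometre vs 10 nanometres
    ratio = level_spacing(width) / kT
    print(f"width {width * 1e9:6.0f} nm: spacing/kT = {ratio:.2g}")
```

For a 1 µm wire the ratio is tiny (the spectrum looks continuous), while for a 10 nm well it is of order one, so thermal smearing no longer hides the discrete levels.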
In addition, quantum confinement effects consist of isolated islands of electrons that may be formed at the patterned interface between two different semiconducting materials. The electrons typically are confined to disk-shaped regions termed quantum dots. The confinement of the electrons in these systems changes their interaction with electromagnetic radiation significantly, as noted above.
Because the electron energy levels of quantum dots are discrete rather than continuous, the addition or subtraction of just a few atoms to the quantum dot has the effect of altering the boundaries of the bandgap. Changing the geometry of the surface of the quantum dot also changes the bandgap energy, owing again to the small size of the dot, and the effects of quantum confinement.
Interference effects
In the mesoscopic regime, scattering from defects – such as impurities – induces interference effects which modulate the flow of electrons. The experimental signature of mesoscopic interference effects is the appearance of reproducible fluctuations in physical quantities. For example, the conductance of a given specimen oscillates in an apparently random manner as a function of fluctuations in experimental parameters. However, the same pattern may be retraced if the experimental parameters are cycled back to their original values; in fact, the patterns observed are reproducible over a period of days. These are known as universal conductance fluctuations.
Time-resolved mesoscopic dynamics
Time-resolved experiments in mesoscopic dynamics: the observation and study, at nanoscales, of condensed phase dynamics such as crack formation in solids, phase separation, and rapid fluctuations in the liquid state or in biologically relevant environments; and the observation and study, at nanoscales, of the ultrafast dynamics of non-crystalline materials.
References
External links
Condensed matter physics
Quantum mechanics | Mesoscopic physics | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,175 | [
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Mesoscopic physics",
"Matter"
] |
10,898,092 | https://en.wikipedia.org/wiki/NeutrAvidin | Neutralite Avidin protein is a deglycosylated version of chicken avidin, with a mass of approximately 60,000 daltons. As a result of carbohydrate removal, lectin binding is reduced to undetectable levels, yet biotin binding affinity is retained because the carbohydrate is not necessary for this activity. Avidin has a high pI but NeutrAvidin has a near-neutral pI (pH 6.3), minimizing non-specific interactions with the negatively-charged cell surface or with DNA/RNA. Neutravidin still has lysine residues that remain available for derivatization or conjugation.
Like avidin itself, NeutrAvidin is a tetramer with a strong affinity for biotin (Kd = 10⁻¹⁵ M). In biochemical applications, streptavidin, which also binds very tightly to biotin, may be used interchangeably with NeutrAvidin.
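To put Kd = 10⁻¹⁵ M in perspective, a simple 1:1 equilibrium binding model shows that binding sites are essentially saturated even at picomolar free biotin. This is an illustrative sketch that ignores the tetramer's four sites and any cooperativity:

```python
def fraction_bound(free_ligand, kd=1e-15):
    """Fraction of sites occupied at equilibrium: [L] / ([L] + Kd), concentrations in M."""
    return free_ligand / (free_ligand + kd)

for conc in (1e-9, 1e-12, 1e-15):  # 1 nM, 1 pM, 1 fM free biotin
    print(f"{conc:.0e} M free biotin -> {fraction_bound(conc):.6f} bound")
```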
Avidin immobilized onto solid supports is also used as purification media to capture biotin-labelled protein or nucleic acid molecules. For example, cell surface proteins can be specifically labelled with membrane-impermeable biotin reagent, then specifically captured using a NeutrAvidin support.
References
Bayer, Ed: "The avidin-biotin system", Dept. of Biological Chemistry, Weizmann Institute of Science, Israel
Proteins | NeutrAvidin | [
"Chemistry"
] | 301 | [
"Biomolecules by chemical classification",
"Protein stubs",
"Biochemistry stubs",
"Molecular biology",
"Proteins"
] |
10,898,180 | https://en.wikipedia.org/wiki/Northern%20Highlands | The Northern Highlands are a mountainous biogeographical region of northern Madagascar. The region includes the Tsaratanana Massif (with the highest mountain of Madagascar, Maromokotro) and smaller nearby massifs such as Marojejy, Anjanaharibe-Sud, and Manongarivo. The Mandritsara Window separates the Northern from the Central Highlands and apparently acts as a barrier to dispersal between the two highlands, leading to species pairs such as Voalavo gymnocaudus (Northern Highlands) and Voalavo antsahabensis (Central Highlands). None of the montane endemics of Tsaratanana are shared with the major massifs of the Central Highlands.
References
Literature cited
Goodman, S.M., Rakotondravony, D., Randriamanantsoa, H.N. and Rakotomalala-Razanahoera, M. 2005. A new species of rodent from the montane forest of central eastern Madagascar (Muridae: Nesomyinae: Voalavo). Proceedings of the Biological Society of Washington 118(4):863–873.
Goodman, S.M., Raxworthy, C.J., Maminirina, C.P. and Olson, L.E. 2006. A new species of shrew tenrec (Microgale jobihely) from northern Madagascar. Journal of Zoology 270:384–398.
Raxworthy, C.J. and Nussbaum, R.A. 1996. Montane amphibian and reptile communities in Madagascar (subscription required). Conservation Biology 10(3):750–756.
Natural regions of Africa
Biogeography
Geography of Madagascar
Highlands | Northern Highlands | [
"Biology"
] | 357 | [
"Biogeography"
] |
10,898,630 | https://en.wikipedia.org/wiki/Comamonas%20testosteroni | Comamonas testosteroni is a Gram-negative environmental bacterium capable of utilizing testosterone as a carbon source, and degrading other sterols such as ergosterol and estrogens. Strain I2gfp has been used in bioaugmentation trials, in attempts to treat the industrial byproduct 3-chloroaniline. It was first classified as a human pathogen in 1987 according to the National Library of Medicine. A number of strains of Comamonas, including C. testosteroni, have been shown to consume terephthalic acid, one of the components of PET plastic, as a sole carbon source.
Virulence
Though these organisms have low virulence, they can occasionally cause human diseases. They can be found in intravenous catheters, the respiratory tract, abdomen, urinary tract, and the central nervous system. Symptoms of infection may variously include vomiting, watery diarrhea, and meningitis.
References
External links
Comamonadaceae
Bacteria described in 1956
Plastivores | Comamonas testosteroni | [
"Biology"
] | 218 | [
"Organisms by adaptation",
"Plastivores"
] |
10,898,873 | https://en.wikipedia.org/wiki/Brevibacterium | Brevibacterium is a genus of bacteria of the order Micrococcales. They are Gram-positive soil organisms.
Species
Brevibacterium comprises the following species:
B. album Tang et al. 2008
B. ammoniilyticum Kim et al. 2013
B. anseongense Jung et al. 2019
B. antiquum Gavrish et al. 2005
B. atlanticum Pei et al. 2022
B. aurantiacum Gavrish et al. 2005
"B. aureum" Seghal Kiran et al. 2010
B. avium Pascual and Collins 1999
B. casei Collins et al. 1983
B. celere Ivanova et al. 2004
B. daeguense Cui et al. 2013
B. epidermidis Collins et al. 1983
B. hankyongi Choi et al. 2018
"B. ihuae" Valles et al. 2018
B. iodinum (ex Davis 1939) Collins et al. 1981
B. jeotgali Choi et al. 2013
"B. ketoglutamicum" Stackebrandt and Woese 1981
B. limosum Pei et al. 2022
B. linens (Wolff 1910) Breed 1953 (Approved Lists 1980)
B. luteolum corrig. Wauters et al. 2003
B. marinum Lee 2008
B. mcbrellneri McBride et al. 1994
"B. metallicus" Roman-Ponce et al. 2015
"B. methylicum" Nesvera et al. 1991
B. oceani Bhadra et al. 2008
B. otitidis Pascual et al. 1996
B. paucivorans Wauters et al. 2001
B. permense Gavrish et al. 2005
B. picturae Heyrman et al. 2004
"B. pigmentatum" Pei et al. 2021
B. pityocampae Kati et al. 2010
B. profundi Pei et al. 2020
B. ravenspurgense Mages et al. 2009
"B. renqingii" Yan et al. 2021
B. rongguiense Deng et al. 2020
B. salitolerans Guan et al. 2010
B. samyangense Lee 2006
B. sandarakinum Kämpfer et al. 2010
B. sanguinis Wauters et al. 2004
B. sediminis Chen et al. 2016
B. senegalense Kokcha et al. 2013
B. siliguriense Kumar et al. 2013
B. yomogidense Tonouchi et al. 2013
Further reading
Mimura, Haruo (September 2014). "Growth Enhancement of the Halotolerant Brevibacterium sp. JCM 6894 by Methionine Externally Added to a Chemically Defined Medium". Biocontrol Science 19 (3): 151–155.
References
Micrococcales
Soil biology
Bacteria genera | Brevibacterium | [
"Biology"
] | 619 | [
"Soil biology"
] |
10,898,911 | https://en.wikipedia.org/wiki/Numerology%20%28Ismailism%29 | Numerology is an element of Isma'ili belief that states that numbers have religious meanings. The number seven plays a general role in the theology of the Ismā'īliyya, including mystical speculations that there are seven heavens, seven continents, seven orifices in the skull, seven days in a week, seven prophets, and so forth.
Position of the Imam
Old Ismaili doctrine holds that divine revelation had been given in six periods (daur) entrusted to six prophets, also called Natiq (Speaker), who were commissioned to preach a religious law to their respective communities.
For instance, Nasir Khusraw argues that the world of religion was created in six cycles, corresponding to the six days of the week. The seventh day, corresponding to the Sabbath, is the cycle in which the world comes out of darkness and ignorance and “into the light of her Lord” (Quran 39:69), and the people who “laboured in fulfilment of (the Prophets’) command” are rewarded.
While the Natiq was concerned with the rites and outward shape of religion and life, the inner meaning was entrusted to a Wasi (Representative), who would know the secret meaning of all rites and rules and would reveal them to a small circle of initiates.
The Natiq and Wasi are in turn succeeded by a line of seven Imams, who would guard what they received. The seventh and last Imam in any period would then be the Natiq of the next period. The last Imam of the sixth period, however, would not bring about a new religion or law but would abrogate the law and introduce din Adama al-awwal ("the original religion of Adam"), as practised by Adam and the Angels in paradise before the fall. This would be without cult or law but would consist in all creatures praising the creator and recognizing his unity. This final stage was called Qiyamah.
References
Numerology
Ismaili theology | Numerology (Ismailism) | [
"Mathematics"
] | 404 | [
"Numerology",
"Mathematical objects",
"Numbers"
] |
10,898,964 | https://en.wikipedia.org/wiki/Brevibacterium%20iodinum | Brevibacterium iodinum is a Gram-positive soil bacterium. It can often be found among the normal cutaneous flora of healthy people, particularly in humid environments, and is only very rarely involved in opportunistic infections. It is also suspected to be a cause of foot odor.
References
External links
Type strain of Brevibacterium iodinum at BacDive - the Bacterial Diversity Metadatabase
Micrococcales
Soil biology
Hygiene
Bacteria described in 1981 | Brevibacterium iodinum | [
"Biology"
] | 98 | [
"Soil biology"
] |
10,899,037 | https://en.wikipedia.org/wiki/Limit%20of%20positive%20stability | In sailing, the limit of positive stability (LPS) or angle of vanishing stability (AVS) is the angle from the vertical at which a boat will no longer stay upright but will capsize, becoming inverted, or turtled.
For example, if a boat with an LPS of 120 degrees rolls past this point, i.e. its mast is already 30 degrees below the surface of the water, it will continue to roll until it is completely upside down in the water. Except for dinghy sailboats and multihulls, most larger sailboats (monohull keelboats) carry lead or other heavy materials in their keel at the bottom of the hull to keep them from capsizing or turtling.
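To make the arithmetic of the example concrete, the relationship between heel angle, LPS, and the mast's angle below the waterline can be sketched as follows; the function names and the simple threshold model are illustrative, not from any sailing software:

```python
# Illustrative sketch of the LPS arithmetic described above.

def mast_angle_below_water(heel_deg: float) -> float:
    """Once heel exceeds 90 degrees, the mast dips below the waterline
    by (heel - 90) degrees."""
    return max(0.0, heel_deg - 90.0)

def stays_upright(heel_deg: float, lps_deg: float) -> bool:
    """A boat heeled short of its limit of positive stability rights itself;
    past it, the boat continues rolling and turtles."""
    return heel_deg < lps_deg

# The example in the text: LPS of 120 degrees, heeled to 120 degrees.
print(mast_angle_below_water(120.0))   # 30.0 degrees below the surface
print(stays_upright(115.0, 120.0))     # True  - still recovers
print(stays_upright(125.0, 120.0))     # False - capsizes and turtles
```

A higher LPS therefore means the boat can be knocked further over and still recover, which is why the measure is used as a proxy for seaworthiness.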
The LPS was a part of the Offshore Racing Rules and is used to measure how stable or seaworthy a sailboat is. The modern offshore racing rules published by the International Sailing Federation may also use the measurement.
See also
Angle of loll
Capsizing
Kayak roll
Metacentric height
Naval architecture
Initial stability – concerning boats
Secondary stability – concerning boats
Ship stability
Turtling
Weight distribution
Notes
Naval architecture | Limit of positive stability | [
"Engineering"
] | 228 | [
"Naval architecture",
"Marine engineering"
] |
10,899,099 | https://en.wikipedia.org/wiki/Makoto%20Kobayashi | is a Japanese physicist known for his work on CP-violation who was awarded one-fourth of the 2008 Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."
Early life and education
Makoto Kobayashi was born in Nagoya, Japan in 1944. When he was two years old, Kobayashi's father Hisashi died. The Kobayashi family home was destroyed by the Bombing of Nagoya, so the family stayed at the house of his mother's family (surnamed Kaifu). One of Makoto's cousins, Toshiki Kaifu, the 51st Prime Minister of Japan, was living in the same place. Another cousin was the astronomer Norio Kaifu. Many years later, Toshiki Kaifu recalled Kobayashi: "when he was a child, he was a quiet and lovely boy, always reading some difficult books in my room. I think this is the beginning of his sudden change into a genius."
After graduating from the School of Science of Nagoya University in 1967, he obtained a DSc degree from the Graduate School of Science of Nagoya University in 1972. During his college years, he received guidance from Shoichi Sakata and others.
Career
After completing his doctoral research at Nagoya University in 1972, Kobayashi worked as a research associate on particle physics at Kyoto University. Together with his colleague Toshihide Maskawa, he worked on explaining CP-violation within the Standard Model of particle physics. Kobayashi and Maskawa's theory required that there be at least three generations of quarks, a prediction that was confirmed experimentally four years later by the discovery of the bottom quark.
Kobayashi and Maskawa's article, "CP Violation in the Renormalizable Theory of Weak Interaction", published in 1973, is the fourth most cited high energy physics paper of all time as of 2010. The Cabibbo–Kobayashi–Maskawa matrix, which defines the mixing parameters between quarks, was the result of this work. Kobayashi and Maskawa were jointly awarded half of the 2008 Nobel Prize in Physics for this work, with the other half going to Yoichiro Nambu.
In recognition of the three Nobel laureates' contributions, bronze statues of Shin'ichirō Tomonaga, Leo Esaki, and Makoto Kobayashi were set up in the Central Park of Azuma 2 in Tsukuba City in 2015.
Professional record
April 1972 – Research Associate of the Faculty of Science, Kyoto University
July 1979 – Associate Professor of the National Laboratory of High Energy Physics (KEK)
April 1989 – Professor of the National Laboratory of High Energy Physics (KEK), Head of Physics Division II
April 1997 – Professor of the Institute of Particle and Nuclear Science, KEK, Head of Physics Division II
April 2003 – Director, Institute of Particle and Nuclear Studies, KEK
April 2004 – Trustee (Director, Institute of Particle and Nuclear Studies), KEK (Inter-University Research Institute Corporation)
June 2006 – Professor Emeritus of KEK
2008 – Distinguished Invited University Professor of Nagoya University
2009
Special Honored Professor of KEK
Trustee and Director of Academic System Institute, Japan Society for the Promotion of Science
University Professor of Nagoya University
2010
Chairperson of the Advisory Committee of the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI) at Nagoya University
Member of the Japan Academy
2016 – Superadvisor of Yokohama Science Frontier High School
2018
April – Director of the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI) at Nagoya University
2019 – Second Honorary Director of the Nagoya City Science Museum
2020
April – Director Emeritus of KMI at Nagoya University
Recognition
1979 – Nishina Memorial Prize
1985 – Sakurai Prize
1994 – Chunichi Culture Award
1995 – Asahi Prize
2001 – Person of Cultural Merit
2007 – High Energy and Particle Physics Prize by European Physical Society
2008 – Nobel Prize in Physics
In October 2008, Kobayashi was honored with Japan's Order of Culture; and an awards ceremony for the Order of Culture was held at the Tokyo Imperial Palace.
2010 – Member of Japan Academy
Personal life
Kobayashi was born and educated in Nagoya, Japan. He married Sachiko Enomoto in 1975; they had one son, Junichiro. After his first wife died, Kobayashi married Emiko Nakayama in 1990; they had a daughter, Yuka.
See also
Progress of Theoretical Physics
List of Nobel laureates affiliated with Kyoto University
List of Japanese Nobel laureates
References
External links
Progress of Theoretical Physics
Makoto Kobayashi, Professor emeritus of KEK
Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya University
Japanese physicists
Nobel laureates in Physics
Living people
1944 births
People from Nagoya
Japanese Nobel laureates
Recipients of the Order of Culture
Japanese theoretical physicists
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Particle physicists
Nagoya University alumni | Makoto Kobayashi | [
"Physics"
] | 989 | [
"Particle physicists",
"Particle physics"
] |
10,899,132 | https://en.wikipedia.org/wiki/List%20of%20largest%20container%20ships | This is a list of container ships with a capacity larger than 20,000 twenty-foot equivalent units (TEU).
Container ships have been built in increasingly larger sizes to take advantage of economies of scale and reduce expense as part of intermodal freight transport. Container ships are also subject to certain limitations in size. Primarily, these are the availability of sufficiently large main engines and the availability of a sufficient number of ports and terminals prepared and equipped to handle ultra-large container ships. Furthermore, some of the world's main waterways such as the Suez Canal and Singapore Strait restrict the maximum dimensions of a ship that can pass through them.
In 2016, Prokopowicz and Berg-Andreassen defined a container ship with a capacity of 10,000 to 20,000 TEU as a Very Large Container Ship (VLCS), and one with a capacity greater than 20,000 TEU as an Ultra Large Container Ship (ULCS).
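The 2016 classification can be expressed as a simple threshold function. The boundaries follow the definition above, while the function name and the label for ships below 10,000 TEU are assumptions for this sketch:

```python
# Encodes the Prokopowicz and Berg-Andreassen (2016) size classes by TEU capacity.
# The "below VLCS" label for ships under 10,000 TEU is an assumption, not a term
# from their paper.

def size_class(teu: int) -> str:
    if teu > 20_000:
        return "ULCS"        # Ultra Large Container Ship
    elif teu >= 10_000:
        return "VLCS"        # Very Large Container Ship
    return "below VLCS"

print(size_class(24_346))  # ULCS - MSC Irina-class capacity
print(size_class(21_710))  # ULCS - Ever Ace's record load
print(size_class(14_000))  # VLCS
```

Every ship in the list below, having a capacity above 20,000 TEU, falls into the ULCS class under this definition.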
As of August 2021, the record for the most containers loaded onto a single ship was held by the Ever Ace, which carried a total of 21,710 TEU of containers from Yantian to Europe.
As of January 2024, the record for the largest container ship is held by MSC's Irina class, with a capacity of 24,346 TEU.
Completed ships
Ships on order
Container loading records
See also
List of largest container shipping companies
List of largest cruise ships
List of largest ships by gross tonnage
List of longest ships
References
Container ships
Container ships
Container ships
Container ships | List of largest container ships | [
"Physics",
"Mathematics"
] | 308 | [
"Quantity",
"Largest things",
"Physical quantities",
"Size"
] |
10,899,167 | https://en.wikipedia.org/wiki/Toshihide%20Maskawa | was a Japanese theoretical physicist known for his work on CP-violation who was awarded one quarter of the 2008 Nobel Prize in Physics "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature."
Early life and education
Maskawa was born in Nagoya, Japan. After World War II ended, the Maskawa family ran a sugar wholesale business. A native of Aichi Prefecture, Toshihide Maskawa graduated from Nagoya University in 1962 and received a Ph.D. degree in particle physics from the same university in 1967. His doctoral advisor was the physicist Shoichi Sakata.
From an early age, Maskawa enjoyed trivia and studied mathematics, chemistry, and linguistics, reading widely. In high school, he loved novels, especially detective and mystery stories and the novels of Ryūnosuke Akutagawa.
Career
At Kyoto University in the early 1970s, he collaborated with Makoto Kobayashi on explaining broken symmetry (the CP violation) within the Standard Model of particle physics. Maskawa and Kobayashi's theory required that there be at least three generations of quarks, a prediction that was confirmed experimentally four years later by the discovery of the bottom quark.
Maskawa and Kobayashi's 1973 article, "CP Violation in the Renormalizable Theory of Weak Interaction", is the fourth most cited high energy physics paper of all time as of 2010. The Cabibbo–Kobayashi–Maskawa matrix, which defines the mixing parameters between quarks, was the result of this work. Kobayashi and Maskawa were jointly awarded half of the 2008 Nobel Prize in Physics for this work, with the other half going to Yoichiro Nambu.
Maskawa was director of the Yukawa Institute for Theoretical Physics from 1997 to 2003. He was special professor and director general of Kobayashi-Maskawa Institute for the Origin of Particles and the Universe at Nagoya University, director of Maskawa Institute for Science and Culture at Kyoto Sangyo University and professor emeritus at Kyoto University.
Nobel lecture
On 8 December 2008, after telling the audience "Sorry, I cannot speak English", Maskawa delivered his Nobel lecture, “What Did CP Violation Tell Us?”, in Japanese at Stockholm University. The audience followed the subtitles on the screen behind him.
Personal life
Maskawa married Akiko Takahashi in 1967. The couple have two children, Kazuki and Tokifuji.
Death
On 23 July 2021, the same day as the opening ceremony of the Tokyo Summer Olympic Games, Maskawa died of oral cancer at his home in Kyoto at the age of 81; his death was unrelated to the COVID-19 pandemic. He was cremated in October 2021 after a private funeral.
Professional record
July 1967 – Research Associate of the Faculty of Science, Nagoya University
May 1970 – Research Associate of the Faculty of Science, Kyoto University
April 1976 – Associate Professor of the Institute for Nuclear Study, University of Tokyo
April 1980 – Professor of the Research Institute for Fundamental Physics (present Yukawa Institute for Theoretical Physics), Kyoto University
November 1990 – Professor of the Faculty of Science, Kyoto University
1995 – Councilor, Kyoto University
1997
January – Professor of Yukawa Institute for Theoretical Physics, Kyoto University
April – Director of Yukawa Institute for Theoretical Physics, Kyoto University
2003
April – Professor Emeritus of Kyoto University
April – Professor of Kyoto Sangyo University (till May 2009)
October 2004 – Director of the Research Institute, Kyoto Sangyo University
October 2007 – Distinguished Invited University Professor of Nagoya University
2009
February – Trustee of Kyoto Sangyo University
March – University Professor of Nagoya University
June – Head of Maskawa Juku and Professor, Kyoto Sangyo University (till March 2019)
2010
April – Director of the Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI) at Nagoya University
December – Member of the Japan Academy
2018
April – Director Emeritus of KMI at Nagoya University
April 2019 – Professor Emeritus of Kyoto Sangyo University
Recognition
1979 – Nishina Memorial Prize
1985 – Sakurai Prize
1985 – Japan Academy Prize
1995 – Chunichi Culture Award
1995 – Asahi Prize
2007 – High Energy and Particle Physics Prize by European Physical Society
2008 – Nobel Prize in Physics
2008 – Order of Culture
2010 – Member of Japan Academy
Political proposition
In 2013, Maskawa and chemistry Nobel laureate Hideki Shirakawa issued a statement against the Japanese State Secrecy Law. The following are Maskawa's main political positions:
Support for Article 9 of the Japanese Constitution
Criticizing Japanese politician visits to the Yasukuni Shrine
Support for selective couple surname system
See also
Progress of Theoretical Physics
List of Japanese Nobel laureates
List of Nobel laureates affiliated with Kyoto University
References
External links
Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI), Nagoya University
1940 births
2021 deaths
Scientists from Nagoya
Japanese Nobel laureates
Japanese physicists
Nobel laureates in Physics
Recipients of the Order of Culture
Japanese theoretical physicists
J. J. Sakurai Prize for Theoretical Particle Physics recipients
Particle physicists
Nagoya University alumni
Academic staff of Nagoya University
Academic staff of the University of Tokyo | Toshihide Maskawa | [
"Physics"
] | 1,022 | [
"Particle physicists",
"Particle physics"
] |
10,899,890 | https://en.wikipedia.org/wiki/The%20Silent%20Star | Milcząca Gwiazda (), literal English translation The Silent Star, is a 1960 East German/Polish color science fiction film based on the 1951 science fiction novel The Astronauts by Polish science fiction writer Stanisław Lem. It was directed by Kurt Maetzig, and stars Günther Simon, Julius Ongewe and Yoko Tani. The film was first released by Progress Film in East Germany, running 93 min. Variously dubbed and cut versions were also released in English under other titles: First Spaceship on Venus, Planet of the Dead, and Spaceship Venus Does Not Reply.
After finding an ancient, long-buried flight recorder that originally came from a spaceship, apparently from Venus, a human spaceship is dispatched to the Morning Star. The crew discovers a long-dead Venusian civilization that had constructed a device intended to destroy all life on Earth prior to invasion. Before they could execute their plan, they perished in a global nuclear war.
Plot
In 1985, engineers involved in an industrial project to irrigate the Gobi Desert accidentally unearth a mysterious and apparently artificial "spool". When found to be made of a material unknown on Earth, the spool is circumstantially linked to the Tunguska explosion of 1908. The spool is seized on as evidence that the explosion, originally blamed on a meteor, was actually caused by an alien spaceship.
Professor Harringway deduces the craft must have come from Venus. The spool itself is determined to be a flight recorder and is partially decoded by an international team of scientists led by Professor Sikarna and Dr. Tschen Yü. When radio greetings sent to Venus go unanswered, Harringway announces that a journey to Venus is the only alternative. The recently completed Soviet spaceship Kosmoskrator, intended to voyage to Mars, is now redirected to Venus, a 30-to-31-day journey. During the voyage, Sikarna works diligently to translate the alien message using the spaceship's computer.
When their spaceship nears Venus, radio interference from the planet cuts the crew off from Earth. By then, Sikarna's efforts lead to a stunning discovery: the spool describes a Venusian plan to irradiate the Earth's surface, with the extermination of mankind being the prelude to their invasion. Rather than containing a "cosmic document", as had been expected, the spool bears a cold-blooded message of destruction. With this new information, the crew decides to transmit the discovery to Earth, believing it would be of service to mankind. Harringway, however, convinces the crew to press on towards Venus rather than return to Earth with revelations that could panic mankind, leading to unknown consequences.
With the ship's robot, Omega, German astronaut Brinkmann pilots a one-man landing craft through the Venusian atmosphere. On the surface, he comes upon an industrial complex and finds small information storage devices that look like insects. Brinkmann's landing craft is destroyed in an explosion when it accidentally lands on high-tension power lines. The rest of the crew lands Kosmoskrator to investigate the explosion. The crew splits up, some staying near Kosmoskrator to study the storage devices. The others follow the power line to try and find the Venusians, but they find no life forms. Instead, they discover a large golf ball-like structure that Arsenjew suggests may be a giant transformer or a force-field generator. Following the power lines in the other direction, they find the remains of a deserted and blasted city centered around a huge crater. There are clear signs of a catastrophic explosion so intense that the shadowy forms of the humanoid Venusians are permanently burned onto the walls of the surviving structures.
The Venusians are gone, but their machines remain functioning, including the radiation-bombardment machine intended for use against the Earth. One of the scientists accidentally triggers the weapon, leading to a frantic effort by the team to disarm it. Tschen Yü lowers Talua, the ship's communication officer, into the Venusian command center. When Tschen Yü's spacesuit is punctured, Brinkmann ventures out to save him. Before he can reach Yü, Talua succeeds in reversing the weapon. Unfortunately, this also reverses Venus' gravitational field, flinging Kosmoskrator out into space. Brinkmann is also repelled off-planet, beyond the reach of the spaceship to save him, while Talua and Tschen Yü remain marooned on the devastated Venus. The surviving crew members must return to Earth, where they warn humanity about the dangers of atomic weapons.
Cast
Günther Simon as Raimund Brinkmann (Robert Brinkman in the US release), the Kosmokrator's German pilot
Julius Ongewe as Talua, the African communications officer
Yoko Tani as Dr. Sumiko Ogimura, the Japanese medical officer
Oldřich Lukeš as Professor Hawling, a US nuclear physicist (Orloff in the US release)
Ignacy Machowski as Professor Sołtyk (Durand, a French engineer, in the US release), the Polish chief engineer
Mikhail Postnikov as Professor Arsenjew, Soviet astrophysicist and commander of the mission (Harringway in the US release)
Kurt Rackelmann as Professor Sikarna, an Indian mathematician
Tang Hua-Ta as Dr. Tschen Yü (Chen Yu in the US release), a Chinese linguist.
Lucyna Winnicka as Joan Moran, television reporter
Eduard von Winterstein as a nuclear physicist
Ruth Maria Kubitschek as Professor Arsenjew's wife
Julius Ongewe was a medical student in Leipzig from Nigeria or Kenya. He was the first black actor to portray a character travelling in space.
Despite the diverse cast, the film's gender and racial attitudes are not much different from those in American science fiction films of that era, with Ogimura spending most of her time dispensing liquid food to the crew, while Talua fills a "service-oriented" crew position.
Production
The story is based on the 1951 science fiction novel The Astronauts by Stanisław Lem. Lem was approached by Kurt Maezig from DEFA with an idea to make a film adaptation of Lem's novel, possibly because Lem was widely known in Poland and abroad at the time. The Astronauts was likely chosen due to the recent advancements in rocket technology and the popularity of space travel in science fiction. The story also expressed many socialist ideals, appropriate for the state-owned studio.
The DEFA director Herbert Volkmann, responsible for finance, as well as other officials of the GDR, were strict with the project: they had ideological concerns about the script, and new writers were brought in to work on it. Eventually, twelve different versions of the script were created.
In the film's original East German and Polish release, the Earth spaceship sent to Venus is named Kosmokrator.
The film was shot mostly in East Germany. The outdoors scenes were shot in the area of Zakopane, Poland and the airfield of Berlin-Johannisthal and special effects in Babelsberg Studio and in a studio in Wrocław, Poland. The spaceship mock-up at the airfield became the subject of a hoax in the newspaper Der Kurier: the front page presented the spaceship as a failed attempt at spaceflight in the Soviet occupation zone.
The film was noted for early extensive usage of "electronic sounds" on its soundtrack. Electronic music and noises illustrated the work of the computer that deciphers the alien message, the message itself, and the eerie landscape of Venus devastated by the nuclear catastrophe. Markowski, who produced the musical score, was assisted by sound engineer Krzysztof Szlifirski from the Experimental Studio of Polish Radio, with some sound effects added at the laboratory of the Military Academy of Technology in Warsaw and with post-production at DEFA.
Ernst Kunstmann was in charge of special effects.
Release
It was the first science fiction film released by Poland and East Germany.
When first released to European cinemas, the film sold about 4.3 million tickets, making it one of the 30 most successful DEFA films.
Critical response
In a retrospective on Soviet science fiction film, British director Alex Cox compared The Silent Star to the Japanese film The Mysterians, but called the former "more complex and morally ambiguous". Cox also remarked that "Silent Star's images of melted cities and crystallised forests, overhung by swirling clouds of gas, are masterpieces of production design. The scene in which three cosmonauts are menaced halfway up a miniature Tower of Babel by an encroaching sea of sludge may not entirely convince, but it is still a heck of a thing to see".
Stanislaw Lem, whose novel the film was based upon, was extremely critical of the adaptation and even wanted his name removed from the credits in protest against the extra politicization of the storyline when compared to his original. (Lem: "It practically delivered speeches about the struggle for peace. Trashy screenplay was painted; tar was bubbling, which would not scare even a child".)
Awards
1964: Festival of Utopian Films, Triest (Utopisches Filmfestival Triest): "Golden Spaceship Award" ("Das goldene Raumschiff")
Other releases
United States
In 1962 the shortened 79-minute dubbed release from Crown International Pictures substituted the title First Spaceship on Venus for the English-speaking market. The film was released theatrically in the U.S. as a double feature with the re-edited version of the 1958 Japanese Kaiju film Varan the Unbelievable. All references to the atomic bombing of Hiroshima were edited out. The American character Hawling became a Russian named Orloff. The Russian character Arsenjew became the American Herringway, while the Polish character Soltyk became the Frenchman Durand. The spacecraft used for the journey was referred to and spelled as Cosmostrator.
Two differently cut and dubbed versions of the film were also shown on the American market at the time, Spaceship Venus Does Not Reply and Planet of the Dead.
The original, uncut version of the film was finally re-released in the U.S. in 2004 under its original title The Silent Star by the DEFA Film Library of the University of Massachusetts Amherst.
In other media
In 1990, First Spaceship on Venus was featured in the second national season of Mystery Science Theater 3000 and was released on DVD in 2008 by Shout! Factory, as part of their "MST3K 20th Anniversary Edition" collection.
In 2007, the film was shown on the horror hosted television series Cinema Insomnia. Apprehensive Films later released the Cinema Insomnia episode on DVD.
References
Bibliography
Ciesla, Burghard: "Droht der Menschheit Vernichtung? Der schweigende Stern – First Spaceship on Venus: Ein Vergleich". (Apropos Film. Bertz, Berlin 2002: 121–136.)
Kruschel, Karsten: "Leim für die Venus. Der Science-Fiction-Film in der DDR." (Das Science Fiction Jahr 2007, ed. Sascha Mamczak and Wolfgang Jeschke. Heyne Verlag, 2007: 803–888.)
Warren, Bill. Keep Watching The Skies, Vol II: 1958–1962. Jefferson, North Carolina: McFarland & Company, 1986.
External links
Said Mystery Science Theater 3000 Episode on ShoutFactoryTV
1960 films
1960s science fiction films
German science fiction films
Polish science fiction films
East German films
1960s German-language films
Films based on works by Stanisław Lem
Space adventure films
Films about astronauts
Films about alien invasions
Venus in film
Films set in the 1980s
Films set in New York City
Films set in Russia
Films set in the future
Films shot in Germany
Films shot in Poland
Crown International Pictures films
Films based on Polish novels
East Germany–Poland relations
Films set in 1985
Films about nuclear war and weapons
Films set in Mongolia
1960s German films
Tunguska event | The Silent Star | [
"Physics"
] | 2,475 | [
"Unsolved problems in physics",
"Tunguska event"
] |
10,900,810 | https://en.wikipedia.org/wiki/European%20Journal%20of%20Mass%20Spectrometry | The European Journal of Mass Spectrometry is a peer-reviewed scientific journal covering all areas of mass spectrometry. It is published by SAGE Publishing and the editor-in-chief is Jürgen Grotemeyer (University of Kiel).
See also
Mass Spectrometry Reviews
Journal of Mass Spectrometry
Journal of the American Society for Mass Spectrometry
Rapid Communications in Mass Spectrometry
External links
Mass spectrometry journals
SAGE Publishing academic journals | European Journal of Mass Spectrometry | [
"Physics",
"Chemistry"
] | 98 | [
"Spectrum (physical sciences)",
"Biochemistry journal stubs",
"Biochemistry stubs",
"Mass spectrometry",
"Mass spectrometry journals"
] |
10,901,301 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Mass%20Spectrometry | The International Journal of Mass Spectrometry is a monthly peer-reviewed scientific journal covering all aspects of mass spectrometry, including instrumentation and applications in biology, chemistry, geology, and physics. It was established in 1968 as the International Journal of Mass Spectrometry and Ion Physics and was renamed International Journal of Mass Spectrometry and Ion Processes in 1983, before obtaining its current title in 1998. It is published by Elsevier and the editors-in-chief are Julia Laskin (Purdue University) and Zheng Ouyang (Tsinghua University).
Abstracting and indexing
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical & Earth Sciences
EBSCO databases
Embase
Food Science and Technology Abstracts
FRANCIS
Inspec
PASCAL
Science Citation Index Expanded
Scopus
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.986.
References
External links
Mass spectrometry journals
Elsevier academic journals
Monthly journals
Academic journals established in 1968
English-language journals | International Journal of Mass Spectrometry | [
"Physics"
] | 210 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometry journals"
] |
10,901,904 | https://en.wikipedia.org/wiki/Thermophotonics | Thermophotonics (often abbreviated as TPX) is a concept for generating usable power from heat which shares some features of thermophotovoltaic (TPV) power generation. Thermophotonics was first publicly proposed by solar photovoltaic researcher Martin Green in 2000. However, no TPX device is known to have been demonstrated to date, apparently because of the stringent requirement on the emitter efficiency.
A TPX system consists of a light-emitting diode (LED) (though other types of emitters are conceivable), a photovoltaic (PV) cell, an optical coupling between the two, and an electronic control circuit. The LED is heated to a temperature higher than the PV temperature by an external heat source. If no power is applied to the LED, the system functions much like a very inefficient TPV system, but if a forward bias is applied at some fraction of the bandgap potential, an increased number of electron-hole pairs (EHPs) will be thermally excited to the bandgap energy. These EHPs can then recombine radiatively so that the LED emits light at a rate higher than the thermal radiation rate ("superthermal" emission). This light is then delivered to the cooler PV cell over the optical coupling and converted to electricity.
The control circuit presents a load to the PV cell (presumably at the maximum power point) and converts this voltage to a voltage level that can be used to sustain the bias of the emitter. Provided that the conversion efficiencies of electricity to light and light to electricity are sufficiently high, the power harnessed from the PV cell can exceed the power going into the bias circuit, and this small fraction of excess power (originating from the heat difference) can be utilized. It is thus in some sense a photonic heat engine.
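The energy balance described above can be illustrated with a toy calculation. All numbers are hypothetical operating values invented for this sketch, not measurements from any real device (as noted earlier, no TPX device has yet been demonstrated):

```python
# Toy energy balance for a thermophotonic (TPX) system, in milliwatts.
# The figures are assumed for illustration only.

def tpx_net_power_mw(pv_output_mw: int, led_bias_mw: int) -> int:
    """Usable power is what the PV cell delivers minus what the control
    circuit feeds back to keep the hot LED forward-biased."""
    return pv_output_mw - led_bias_mw

# Hypothetical operating point: 1000 mW drawn from the PV cell, 950 mW
# returned to bias the LED, leaving 50 mW of excess power extracted from
# the temperature difference - the "photonic heat engine" in operation.
print(tpx_net_power_mw(1000, 950))  # 50
```

If the round-trip conversion efficiency is too low, the difference goes negative and the loop consumes rather than generates power, which is why the emitter efficiency requirement is so stringent.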
Possible applications of thermophotonic generators include solar thermal electricity generation and utilization of waste heat. TPX systems may have the potential to generate power with useful levels of output at temperatures where only thermoelectric systems are now practical, but with higher efficiency.
A patent application for a thermophotonic generator using a vacuum gap with thickness on the order of a micrometer or less was published by the US Patent Office in 2009 and assigned to MTPV Corporation of Austin, Texas, USA. This proposed variant of the technology allows better thermal insulation because of the gap between the hot emitter and cold receiver, while maintaining relatively good optical coupling between them due to the gap's being small relative to the optical wavelength.
References
Thermodynamics
Photovoltaics | Thermophotonics | [
"Physics",
"Chemistry",
"Mathematics"
] | 555 | [
"Thermodynamics",
"Dynamical systems"
] |
10,902,749 | https://en.wikipedia.org/wiki/Proteomics%20%28journal%29 | Proteomics is a peer-reviewed scientific journal covering topics including whole proteome analysis of organisms, protein expression profiling, disease, pharmaceutical, agricultural and biotechnological applications, and analysis of cellular systems, organelles and protein complexes.
It is published by Wiley VCH and the current editor-in-chief is Lucie Kalvodova.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.984, ranking it 23rd out of 78 journals in the category "Biochemical Research Methods".
References
Academic journals established in 2006
English-language journals
Bimonthly journals
Proteomics journals
Wiley (publisher) academic journals | Proteomics (journal) | [
"Chemistry"
] | 136 | [
"Biochemistry stubs",
"Biochemistry journal stubs"
] |
10,902,751 | https://en.wikipedia.org/wiki/Mesoionic%20compounds | In chemistry, mesoionic compounds are dipolar heterocyclic compounds in which both the negative and the positive charges are delocalized. A completely uncharged structure cannot be written, and mesoionic compounds cannot be represented satisfactorily by any one mesomeric structure. Mesoionic compounds are a subclass of betaines. Examples are sydnones and sydnone imines (e.g. the stimulant mesocarb), münchnones, and mesoionic carbenes.
The formal positive charge is associated with the ring atoms and the formal negative charge is associated either with ring atoms or an exocyclic nitrogen or other atom. These compounds are stable zwitterionic compounds and belong to nonbenzenoid aromatics.
See also
Mesomeric betaine
References
Further reading
Heterocyclic compounds
Zwitterions | Mesoionic compounds | [
"Physics",
"Chemistry"
] | 197 | [
"Matter",
"Organic compounds",
"Heterocyclic compounds",
"Zwitterions",
"Ions"
] |
10,902,777 | https://en.wikipedia.org/wiki/Mitochondrial%20carrier | Mitochondrial carriers are proteins from solute carrier family 25 which transfer molecules across the membranes of the mitochondria. Mitochondrial carriers are also classified in the Transporter Classification Database. The Mitochondrial Carrier (MC) Superfamily has been expanded to include both the original Mitochondrial Carrier (MC) family (TC# 2.A.29) and the Mitochondrial Inner/Outer Membrane Fusion (MMF) family (TC# 1.N.6).
Phylogeny
Members of the MC family (SLC25) (TC# 2.A.29) are found exclusively in eukaryotic organelles although they are nuclearly encoded. Most are found in mitochondria, but some are found in peroxisomes of animals, in hydrogenosomes of anaerobic fungi, and in amyloplasts of plants.
SLC25 is the largest solute transporter family in humans: 53 members have been identified in the human genome, 58 in A. thaliana and 35 in S. cerevisiae. The functions of approximately 30% of the human SLC25 proteins are unknown, but most of the yeast homologues have been functionally identified; see TCDB for functional assignments.
Function
Many MC proteins preferentially catalyze the exchange of one solute for another (antiport). A variety of these substrate carrier proteins, which are involved in energy transfer, have been found in the inner membranes of mitochondria and other eukaryotic organelles such as the peroxisome and facilitate the transport of inorganic ions, nucleotides, amino acids, keto acids and cofactors across the membrane. Such proteins include:
ADP/ATP carrier protein (ADP-ATP translocase; i.e., TC# 2.A.29.1.2)
2-oxoglutarate/malate carrier protein (SLC25A11; TC# 2.A.29.2.11)
phosphate carrier protein (SLC25A3; TC# 2.A.29.4.2)
Tricarboxylate transport protein, mitochondrial (SLC25A1, or citrate transport protein; TC# 2.A.29.7.2)
Graves disease carrier protein (SLC25A16; TC# 2.A.29.12.1)
Yeast mitochondrial proteins MRS3 (TC# 2.A.29.5.1) and MRS4 (TC# 2.A.29.5.2)
Yeast mitochondrial FAD carrier protein (TC# 2.A.29.10.1)
As well as many others.
Functional aspects of these proteins, including metabolite transport, have been reviewed by Dr. Ferdinando Palmieri and Dr. Ciro Leonardo Pierri (2010). Diseases caused by defects of mitochondrial carriers are reviewed by Palmieri et al. (2008) and by Gutiérrez-Aguilar and Baines 2013. Mutations of mitochondrial carrier genes involved in mitochondrial functions other than oxidative phosphorylation are responsible for carnitine/acylcarnitine carrier deficiency, HHH syndrome, aspartate/glutamate isoform 2 deficiency, Amish microcephaly, and neonatal myoclonic epilepsy. These disorders are characterized by specific metabolic dysfunctions, depending on the physiological role of the affected carrier in intermediary metabolism. Defects of mitochondrial carriers that supply mitochondria with the substrates of oxidative phosphorylation, inorganic phosphate and ADP, are responsible for diseases characterized by defective energy production. Residues involved in substrate binding in the middle of the transporter and gating have been identified and analyzed.
Structure
Permeases of the MC family (the human SLC25 family) possess six transmembrane α-helices. The proteins are of fairly uniform size of about 300 residues. They arose by tandem intragenic triplication in which a genetic element encoding two spanners gave rise to one encoding six spanners. This event may have occurred less than 2 billion years ago when mitochondria first developed their specialized endosymbiotic functions within eukaryotic cells. Members of the MC family are functional and structural monomers although early reports indicated that they are dimers.
Most MC proteins contain a primary structure exhibiting three repeats, each of about 100 amino acid residues in length, and both the N and C termini face the intermembrane space. All carriers contain a common sequence, referred to as the MCF motif, in each repeated region, with some variation in one or two signature sequences.
Amongst the members of the mitochondrial carrier family that have been identified, it is the ADP/ATP carrier (AAC; TC# 2.A.29.1.1) that is responsible for importing ADP into the mitochondria and exporting ATP out of the mitochondria and into the cytosol following synthesis. The AAC is an integral membrane protein that is synthesised lacking a cleavable presequence, but instead contains internal targeting information. It consists of a basket-shaped structure with six transmembrane helices that are tilted with respect to the membrane, 3 of them "kinked" due to the presence of prolyl residues.
Residues that are important for the transport mechanism are likely to be symmetrical, whereas residues involved in substrate binding will be asymmetrical reflecting the asymmetry of the substrates. By scoring the symmetry of residues in the sequence repeats, Robinson et al. (2008) identified the substrate-binding sites and salt bridge networks that are important for transport. The symmetry analyses provides an assessment of the role of residues and provides clues to the chemical identities of substrates of uncharacterized transporters.
There are structures of the mitochondrial ADP/ATP carrier in two different states. One is the cytoplasmic state, inhibited by carboxyatractyloside, in which the substrate binding site is accessible to the intermembrane space (which is confluent with the cytosol); examples are the bovine mitochondrial ADP/ATP carrier and the yeast ADP/ATP carriers Aac2p and Aac3p. The other is the matrix state, inhibited by bongkrekic acid, in which the substrate binding site is accessible to the mitochondrial matrix, as in the fungal mitochondrial ADP/ATP carrier. In addition, there are structures of the calcium regulatory domains of the mitochondrial ATP-Mg/Pi carrier in the calcium-bound state and of the mitochondrial aspartate/glutamate carriers in different regulatory states.
Substrates
Mitochondrial carriers transport amino acids, keto acids, nucleotides, inorganic ions and co-factors through the mitochondrial inner membrane. The transporters consist of six transmembrane alpha-helices with threefold pseudo-symmetry.
The transported substrates of MC family members may bind to the bottom of the cavity, and translocation results in a transient transition from a 'pit' to a 'channel' conformation. An inhibitor of AAC, carboxyatractyloside, probably binds where ADP binds, in the pit on the outer surface, thus blocking the transport cycle. Another inhibitor, bongkrekic acid, is believed to stabilize a second conformation, with the pit facing the matrix. In this conformation, the inhibitor may bind to the ATP-binding site. Functional and structural roles for residues in the TMSs have been proposed. The mitochondrial carrier signature, Px[D/E]xx[K/R], of carriers is probably involved both in the biogenesis and in the transport activity of these proteins. A homologue has been identified in the mimivirus genome and shown to be a transporter for dATP and dTTP.
Examples of transported compounds include:
citrate
ornithine
phosphate
adenine nucleotide
dicarboxylate
oxoglutarate
glutamate
Examples
Human proteins containing this domain include:
HDMCP, MCART1, MCART2, MCART6, MTCH1, MTCH2
UCP1, UCP2, UCP3
SLC25A1, SLC25A3, SLC25A4, SLC25A5, SLC25A6, SLC25A10, SLC25A11, SLC25A12, SLC25A13, SLC25A14, SLC25A16, SLC25A17, SLC25A18, SLC25A19, SLC25A21, SLC25A22, SLC25A23, SLC25A24, SLC25A25, SLC25A26, SLC25A27, SLC25A28, SLC25A29, SLC25A30, SLC25A31, SLC25A32, SLC25A33, SLC25A34, SLC25A35, SLC25A36, SLC25A37, SLC25A38, SLC25A39, SLC25A40, SLC25A41, SLC25A42, SLC25A43, SLC25A44, SLC25A45, SLC25A46, SLC25A48
Yeast Ugo1 is an example of the MMF family, but this protein has no human ortholog.
References
External links
Getting a good rate of exchange – the mitochondrial ADP-ATP carrier Article at PDBe
Transporter Classification Database - Mitochondrial Carrier Superfamily
Protein families
Solute carrier family | Mitochondrial carrier | [
"Biology"
] | 2,016 | [
"Protein families",
"Protein classification"
] |
10,902,811 | https://en.wikipedia.org/wiki/Threaded%20insert | A threaded insert, also known as a threaded bushing, is a fastener element that is inserted into an object to add a threaded hole. They may be used to repair a stripped threaded hole, provide a durable threaded hole in a soft material, place a thread on a material too thin to accept it, mold or cast threads into a work piece thereby eliminating a machining operation, or simplify changeover from unified to metric threads or vice versa.
Types
Thread inserts come in many varieties, depending on the application. Threaded inserts for plastics are used in plastic materials and applied with thermal insertion or ultrasonic welding machines.
Manufacturers of ready-to-assemble furniture often ship the parts with threaded inserts and other kinds of knock-down fasteners pre-installed.
People who use sheet metal or sandwich panel or honeycomb sandwich-structured composite often install threaded inserts to spread shear, tension, and torque loads over a larger area of the material.
Captive nut
Captive nuts come in two basic styles. One type, the cage nut or clip-on nut, is a conventional nut held captive by a sheet metal carrier that clips onto the part to be connected. These are generally used to attach screws to sheet metal parts too thin to be threaded, and they can generally be attached, removed and reused with simple hand tools.
The second type of captive nut is a threaded insert. These are either pressed into holes in the material to be joined or moulded in. In either case, part of the insert is generally knurled to get a good grip on the material supporting the insert. One variant, the swage nut, has a knurled portion that swages the sides of a soft metal hole to more tightly grip the nut. Press fit and swaged captive nuts are used in panels that are too thin to be threaded or in soft materials that are too weak to be threaded. They are installed by pressing them in with an arbor press.
Threaded inserts are commonly used in plastic casings, housing, and parts to create a metal thread (typically: brass or stainless steel) to allow for screws to be used in the assembly of many consumer electronics and consumer products. These may be cast in place in injection molded parts or they may be added by thermal insertion. In the latter, the insert is heated and then pressed into a hollow in the plastic part. The heat causes local melting in the plastic. Ultrasonic Insertion is the process used to apply vibration and pressure to install the threaded insert into a molded hollow boss (hole) of a plastic part. The ultrasonic vibrations melt the thermoplastic material where the metal insert is in contact, and pressure is applied to press it into position. The material typically reforms around the knurled body of the threaded insert to ensure a good retention.
Externally-threaded inserts
An externally threaded insert has threads on the outside and inside. The insert can be threaded into a pre-tapped hole, or a self-tapping insert creates its own threads in a drilled or molded hole. It is then anchored by various means, such as a nylon locking element. Inserts that are anchored via Loctite are more commonly known by the trademarked name E-Z Lok. A thin-walled solid-bushing insert by the trademarked name TIME-SERT is locked in by rolling the bottom few internal thread into the base material with a special install driver which will permanently lock the insert in place. Key-locking inserts, more commonly known by the trademarked name Keenserts, use keys that are hammered into grooves through the threads, permanently locking the insert. Inserts that are self-tapping and lock via friction are more commonly known by the trademarked names Tap-lok or Speedserts.
Helical insert
A helical insert (also called a screw thread insert (STI), although most users call them all by one of the prominent brand names: KATO®, Heli-Coil® or Recoil®) is an insert made of stainless steel or phosphor bronze wire, with a diamond cross section, coiled to form inner and outer threads. The coil of wire screws into a threaded hole, where it forms a smaller-diameter internal thread for a screw or stud. These inserts provide a convenient means of repairing stripped internal threads. They are commonly sold in kits with matched taps and insert tools.
In soft materials, they are used to provide stronger threads than can be obtained by direct tapping of the base materials, e.g. aluminium, zinc die castings, wood, magnesium, plastic.
An example application is engine repair after unintentionally destroying the threads in a socket for a spark plug by over-torquing or cross-threading.
Mold-in inserts
A mold-in insert has a specially shaped outer surface to anchor the insert in plastic. For injection-molded plastic, the insert is placed in a mold before it is filled with plastic, making an integral part. An insert can also be heated and pressed into pre-made thermoplastic material.
For softer, more pliable plastics, hexagonal or square inserts with deep and wide grooves allow the softer plastics to hold the inserts sufficiently. The process allows large product manufacture i.e. fuel tanks, boats etc., so the torque inserts may be of large thread sizes.
Press-fit inserts
A press-fit insert is internally threaded and has a knurled outer surface. It is pressed into a plain hole with an arbor press.
Potted inserts
A potted insert is set in epoxy to fix it, such as in a honeycomb sandwich panel, often used in commercial aircraft, and is said to be potted in.
Strength factors of threaded inserts
Pull-out resistance & torque-out resistance are the two main strength factors of threaded inserts.
Pull-out resistance: the force required to begin to pull the insert out of the parent material
Torque-out: the amount of torque required to begin to turn the fastener
Knurling
Knurling is the grooved texture on the outside of the insert. Types of knurling and their benefits are as follows:
Straight knurls: Greatest torque resistance
Diagonal or helical knurls: Balanced torque and pull-out resistance
Hexagonal or diamond knurls: Balanced torque and pull-out resistance, most common
Installation methods
For industrial purposes, the following installation methods are the standards:
Thermal insertion
Injection molding
Manual pressing
See also
Insert nut
Nut
Rivet nut
Screw
Screw thread
References
Notes
Bibliography
Sullivan, Gary & Crawford, Lance. "The Heat Stake Advantage". Plastic Decorating Magazine, January/February 2003, Assembly section, pp. 11–12. Topeka, KS: Peterson Publications, Inc.
Hardware (mechanical)
Mechanical fasteners | Threaded insert | [
"Physics",
"Technology",
"Engineering"
] | 1,402 | [
"Machines",
"Mechanical fasteners",
"Physical systems",
"Construction",
"Mechanical engineering",
"Hardware (mechanical)"
] |
10,902,825 | https://en.wikipedia.org/wiki/Ecogenetics | Ecogenetics is a branch of genetics that studies genetic traits related to the response to environmental substances. As a contraction of "ecological genetics", the term also denotes the study of the relationship between a natural population and its genetic structure.
Ecogenetics principally deals with the effects of preexisting genetically-determined variability on the response to environmental agents. The word "environmental" is defined broadly to include physical, chemical, biological, atmospheric, and climatic agents. Ecogenetics is therefore an all-embracing term, and concepts such as pharmacogenetics are seen as subcomponents of it. This work grew logically from the book Pollutants and High Risk Groups (1978), which presented an overview of the various host factors (age, heredity, diet, preexisting diseases, and lifestyle) that affect environmentally-induced disease.
The primary intention of ecogenetics is to provide an objective and critical evaluation of the scientific literature pertaining to genetic factors and differential susceptibility to environmental agents, with particular emphasis on those agents typically considered pollutants. It is important to realize, though, that one's genetic makeup is but one of an array of host factors contributing to the overall adaptive capacity of the individual. In many instances, such factors can interact in ways that enhance or offset the effect of each other.
Red blood cell conditions
There is a broad group of genetic diseases that either produce hemolytic anemias or predispose affected individuals to developing them. These diseases include abnormal haemoglobins, inability to manufacture one or the other of the peptide globin chains of haemoglobin, and deficiencies of enzymes of the Embden–Meyerhof pathway.
Liver metabolism
Individuals lacking the ability to detoxify and excrete PCBs (polychlorinated biphenyls) may have a high risk of total liver failure under certain ecological conditions.
Cardiovascular diseases
The pathologic lesion of atherosclerosis is a plaque-like substance that thickens the innermost and middle of the three layers of the artery wall. The thickening of the intimal and medial layers results from the accumulation of the proliferating smooth muscle cells that are encompassed by interstitial substances such as collagen, elastin, glycosaminoglycans, and fibrin.
Respiratory diseases
There are three genetically-based respiratory diseases that can correspond directly with ecological factors and induce disease. These include lung cancer and diseases of the upper and lower respiratory tract associated with serum IgA deficiency.
See also
Endocrine disruptor
Paraoxon
Paraoxonase
Pharmacogenetics
Xenobiotic
Xenoestrogen
References
van Zyl, Jay. Built to Thrive: Using Innovation to Make Your Mark in a Connected World. Chapter 5: Ecogenetics. San Francisco. 2011
Calabrese, Edward J. Ecogenetics: Genetic Variation in Susceptibility to Environmental Agents. Environmental Science and Technology. New York. 1984.
Branches of genetics
Risk factors
Environmental toxicology | Ecogenetics | [
"Biology",
"Environmental_science"
] | 638 | [
"Toxicology",
"Environmental toxicology",
"Branches of genetics"
] |
10,903,849 | https://en.wikipedia.org/wiki/Growth%20factor-like%20domain | A growth factor-like domain (GFLD) is a protein domain structurally related to epidermal growth factor, which has a high binding affinity for the epidermal growth factor receptor. As structural domains within larger proteins, GFLD regions commonly bind calcium ions. A subtype present in the N-terminal region of the amyloid precursor protein is a member of the heparin-binding class of GFLDs and may itself have growth factor function, particularly in promoting neuronal development.
References
Protein domains | Growth factor-like domain | [
"Chemistry",
"Biology"
] | 107 | [
"Biochemistry stubs",
"Protein stubs",
"Protein domains",
"Protein classification"
] |
10,903,919 | https://en.wikipedia.org/wiki/Anal%20hygiene | Anal hygiene refers to practices (anal cleansing) that are performed on the anus to maintain personal hygiene, usually immediately or shortly after defecation. Anal cleansing may also occur while showering or bathing. Post-defecation cleansing is rarely discussed academically, partly due to the social taboo surrounding it. The scientific objective of post-defecation cleansing is to prevent exposure to pathogens.
The process of post-defecation cleansing involves washing the anus and inner part of the buttocks with water. Water-based cleansing typically involves either the use of running water from a handheld vessel and a hand for washing or the use of pressurized water through a jet device, such as a bidet. In either method, subsequent hand sanitization is essential to achieve the ultimate objectives of post-defecation cleansing.
History
Materials
The ancient Greeks were known to use fragments of ceramic, known as pessoi (πεσσοί), to perform anal cleansing.
The ancient Romans used a tersorium, consisting of a sponge on a wooden stick. The stick would be soaked in a water channel in front of a toilet, and then stuck through the hole built into the front of the toilet for anal cleaning. The tersorium was shared by people using public latrines. To clean the sponge, they washed it in a bucket with water and salt or vinegar. However, this became a breeding ground for bacteria, causing the spread of disease in the latrine.
In ancient Japan, wooden skewers known as chuugi ("shit sticks") were used for post-defecation cleaning.
The use of toilet paper first started in ancient China around the 2nd century BC. According to Charlier (2012), French physician François Rabelais had argued about the ineffectiveness of toilet paper in the 16th century. The first commercially available toilet paper was invented by American entrepreneur Joseph Gayetty in 1857, with the dawning of the Second Industrial Revolution.
Facilities
Post-defecation facilities evolved with human civilization, and thus so did post-defecation cleansing. According to Fernando, there is archeological evidence of toilet use in medieval Sri Lanka, ranging from the 6th-century Abhayagiri Complex in Anuradhapura; the 10th-century Pamsukulika Monastery in Ritigala, and the Baddhasimapasada and the Alahana Pirivena hospital complex in Polonnaruwa; to the 12th-century hospital toilet in Mihintale. These toilets were found to have a complete system of plumbing and sewage with multi-stage treatment plants. According to Buddhism, toilet etiquette (Wachchakutti Wattakkandaka in the Pali language) was enumerated by Buddha himself in the Tripitaka, the earliest collection of Buddhist teachings.
Common methods
Washing
In countries that have predominantly Catholic, Eastern Orthodox, Hindu, Buddhist, or Islamic cultural traditions, water is usually used for anal cleansing. It is also practiced in some Protestant cultures, such as that of Finland. The cleaning process is typically done through either a pressurized device (e.g., a bidet or a bidet shower) or a non-pressurized vessel (e.g., a lota or an aftabeh) alongside a person's hand; many cultures assert that only the left hand is to be used for this task. Washing is sometimes followed by drying the cleaned areas with a cloth towel.
Wiping
In some parts of the developing world and in other areas where water may not always be usable, such as during camping trips, materials such as vegetable matter (leaves), mudballs, snow, corncobs, and stones are sometimes used for anal cleansing. Having hygienic means for anal cleansing available at the toilet or site of defecation is important for overall public health. The absence of proper materials in households can, under some circumstances, be correlated to the number of diarrhea episodes per household. The history of anal hygiene, from the Greco-Roman world to ancient China and ancient Japan, involves the widespread use of sponges and sticks as well as water and paper.
The inclusion of anal cleansing facilities is often overlooked in the design of public or shared toilets in developing countries. In most cases, materials for anal cleansing are not made available within those facilities. Ensuring safe disposal of anal cleansing materials is often overlooked, which can lead to unhygienic debris inside or surrounding public toilets that contributes to the spread of diseases.
Cultural preferences
Water
Water with soap cleansing is a reliable and hygienic way of removing fecal remnants.
Muslim societies
The use of water in Muslim countries is due in part to Islamic toilet etiquette which encourages washing after all instances of defecation. There are flexible provisions for when water is scarce: stones or papers can be used for cleansing after defecation instead.
In Turkey, all Western-style toilets have a small nozzle on the centre rear of the toilet rim aiming at the anus. This nozzle is called taharet musluğu and it is controlled by a small tap placed within hand's reach near the toilet. It is used to wash the anus after wiping and drying with toilet paper. Squat toilets in Turkey do not have this kind of nozzle (a small bucket of water from a hand's reach tap or a bidet shower is used instead).
Another alternative resembles a miniature shower and is known as a "health faucet", bidet shower, or "bum gun". It is commonly found to the right of the toilet where it is easy to reach. These are commonly used in the Muslim world. In the Indian subcontinent, a lota vessel is often used to cleanse with water, though the shower or nozzle is common among new toilets.
Christian societies
The use of water in many Christian countries is due in part to the biblical toilet etiquette which encourages washing after all instances of defecation. The bidet is common in predominantly Catholic countries where water is considered essential for anal cleansing.
Some people in Europe and the Americas use bidets for anal cleansing with water. Bidets are common bathroom fixtures in many Western and Southern European countries and many South American countries, while bidet showers are more common in Finland and Greece. The availability of bidets varies widely within this group of countries. Furthermore, even where bidets exist, they may have other uses than for anal washing. In Italy, the installation of bidets in every household and hotel became mandatory by law on July 5, 1975.
East Asia
The first "paperless" toilet seat was invented in Japan in 1980. A spray toilet seat, commonly known by Toto's trademark Washlet, is typically a combination of seat warmer, bidet and drier, controlled by an electronic panel or remote control next to the toilet seat. A nozzle placed at rear of the toilet bowl aims a water jet to the anus and serves the purpose of cleaning. Many models have a separate "bidet" function aimed towards the front for vaginal cleansing. The spray toilet seat is common only in Western-style toilets, and is not incorporated in traditional style squat toilets. Some modern Japanese bidet toilets, especially in hotels and public areas, are labeled with pictograms to avoid language problems, and most newer models have a sensor that will refuse to activate the bidet unless someone is sitting on the toilet.
Southeast Asia
In Southeast Asian countries such as Indonesia, the Philippines, Thailand, Brunei, Malaysia, and East Timor, house bathrooms usually have a medium-size wide plastic dipper or large cup, which is also used in bathing. In Thailand, the "bum gun" is ubiquitous. Some health faucets are metal sets attached to the bowl of the water closet, with the opening pointed at the anus. Toilets in public establishments mainly provide toilet paper for free or dispensed, though the dipper (often a cut up plastic bottle or small jug) is occasionally encountered in some establishments. Owing to its ethnic diversity, restrooms in Malaysia often feature a combination of anal cleansing methods: most public restrooms in cities offer toilet paper as well as a built-in bidet or, in its absence, a small hand-held bidet shower (health faucet) connected to the plumbing.
In Vietnam, people often use a bidet shower. It is usually available both at general households and public places.
Toilet paper
Western world and Sub-Saharan Africa
In some cultures—such as many Western countries—cleaning after defecation is generally done with toilet paper only, until the person can bathe or shower. Toilet paper is considered a very important household commodity in Western culture, as illustrated by the panic buying of toilet paper in many Western countries during the COVID-19 pandemic.
In some parts of the world, especially before toilet paper was available or affordable, the use of newspaper, telephone directory pages, or other paper products was common. In North America, the widely distributed Sears Roebuck catalog was also a popular choice until it began to be printed on glossy paper (at which point some people wrote to the company to complain). With flush toilets, using newspaper as toilet paper is likely to cause blockages.
This practice continues today in parts of Africa; while rolls of toilet paper are readily available, they can be fairly expensive, prompting poorer members of the community to use newspapers.
People suffering from hemorrhoids may find it more difficult to keep the anal area clean using only toilet paper and may prefer washing with water as well.
Although wiping from front to back minimizes the risk of contaminating the urethra, the directionality of wiping varies based on sex, personal preference, and culture.
Some people wipe their anal region standing while others wipe theirs sitting.
Other methods and materials
Wet wipes and gel wipes
When cleaning babies' buttocks during diaper changes wet wipes are often used, in combination with water if available. As wet wipes are produced from plastic textiles made of polyester or polypropylene, they are notoriously bad for sewage systems as they do not decompose, although the wet wipe industry maintains they are biodegradable but not "flushable".
A product of the 21st century, special foams, sprays and gels can be combined with dry toilet paper as alternatives to wet wipes. A moisturizing gel can be applied to toilet paper for personal hygiene or to reduce skin irritation from diarrhea. This product is called gel wipe.
Pre-wipes
Pre-wipes are products intended to assist in cleaning the skin of the anal area. A pre-wipe carries an anti-adherent formulation and is wiped across the anal region before defecation, depositing a film of the formulation there. This film reduces the amount of fecal material retained in the anal region after defecation and thus reduces the amount of cleanup required, resulting in cleaner, healthier skin.
Natural materials
Stones, leaves, corn cobs and similar natural materials may also be used for anal cleansing.
References
Defecation
Hygiene
Hygiene | Anal hygiene | [
"Biology"
] | 2,300 | [
"Excretion",
"Defecation"
] |
10,904,266 | https://en.wikipedia.org/wiki/Coinduction | In computer science, coinduction is a technique for defining and proving properties of systems of concurrent interacting objects.
Coinduction is the mathematical dual to structural induction. Coinductively defined data types are known as codata and are typically infinite data structures, such as streams.
As a definition or specification, coinduction describes how an object may be "observed", "broken down" or "destructed" into simpler objects. As a proof technique, it may be used to show that an equation is satisfied by all possible implementations of such a specification.
To generate and manipulate codata, one typically uses corecursive functions, in conjunction with lazy evaluation. Informally, rather than defining a function by pattern-matching on each of the inductive constructors, one defines each of the "destructors" or "observers" over the function result.
In programming, co-logic programming (co-LP for brevity) "is a natural generalization of logic programming and coinductive logic programming, which in turn generalizes other extensions of logic programming, such as infinite trees, lazy predicates, and concurrent communicating predicates. Co-LP has applications to rational trees, verifying infinitary properties, lazy evaluation, concurrent logic programming, model checking, bisimilarity proofs, etc." Experimental implementations of co-LP are available from the University of Texas at Dallas and in the languages Logtalk and SWI-Prolog.
Description
A concise statement of both the principle of induction and the principle of coinduction is given in the literature. While this article is not primarily concerned with induction, it is useful to consider their somewhat generalized forms at once. In order to state the principles, a few preliminaries are required.
Preliminaries
Let U be a set and let F : ℘(U) → ℘(U) be a monotone function on the powerset of U, that is:
X ⊆ Y ⟹ F(X) ⊆ F(Y)
Unless otherwise stated, F will be assumed to be monotone.
X is F-closed if F(X) ⊆ X
X is F-consistent if X ⊆ F(X)
X is a fixed point if X = F(X)
These terms can be intuitively understood in the following way. Suppose that X is a set of assertions, and F(X) is the operation that yields the consequences of X. Then X is F-closed when you cannot conclude any more than you have already asserted, while X is F-consistent when all of your assertions are supported by other assertions (i.e. there are no "non-F-logical assumptions").
The Knaster–Tarski theorem tells us that the least fixed point of F (denoted μF) is given by the intersection of all F-closed sets, while the greatest fixed point (denoted νF) is given by the union of all F-consistent sets. We can now state the principles of induction and coinduction.
Definition
Principle of induction: If X is F-closed, then μF ⊆ X
Principle of coinduction: If X is F-consistent, then X ⊆ νF
Discussion
The principles, as stated, are somewhat opaque, but can be usefully thought of in the following way. Suppose you wish to prove a property of μF. By the principle of induction, it suffices to exhibit an F-closed set X for which the property holds. Dually, suppose you wish to show that x ∈ νF. Then it suffices to exhibit an F-consistent set that x is known to be a member of.
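On a finite universe both fixed points can be computed by straightforward iteration (upward from the empty set for the least fixed point, downward from the full universe for the greatest), which makes the assertion/consequence intuition concrete. The following Python sketch is illustrative only; the universe, the function F, and the helper names are invented for this example:

```python
def lfp(F, universe):
    """Least fixed point: iterate F upward from the empty set."""
    X = set()
    while True:
        nxt = F(X) & universe
        if nxt == X:
            return X
        X = nxt

def gfp(F, universe):
    """Greatest fixed point: iterate F downward from the full universe."""
    X = set(universe)
    while True:
        nxt = F(X) & universe
        if nxt == X:
            return X
        X = nxt

# A set of assertions: "a" is an axiom; "b" is supported only by itself.
U = {"a", "b"}
def F(X):
    out = {"a"}          # "a" follows from nothing
    if "b" in X:
        out.add("b")     # "b" follows only from "b"
    return out

print(lfp(F, U))  # {'a'}       -- only what is actually derivable (induction)
print(gfp(F, U))  # {'a', 'b'}  -- "b" is self-consistent, so coinduction admits it
```

Termination of both loops relies on the universe being finite and F being monotone; the gap between the two results is exactly the set of "self-supporting" assertions that are F-consistent but not derivable.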
Examples
Defining a set of datatypes
Consider the following grammar of datatypes:
T ::= ⊥ | ⊤ | T × T
That is, the set of types includes the "bottom type" ⊥, the "top type" ⊤, and (non-homogeneous) lists built with ×. These types can be identified with strings over the alphabet Σ = {⊥, ⊤, ×}. Let Σ≤ω denote the set of all (possibly infinite) strings over Σ. Consider the function F : ℘(Σ≤ω) → ℘(Σ≤ω):
F(X) = {⊥, ⊤} ∪ {x × y : x, y ∈ X}
In this context, x × y means "the concatenation of string x, the symbol ×, and string y." We should now define our set of datatypes as a fixpoint of F, but it matters whether we take the least or greatest fixpoint.
Suppose we take μF as our set of datatypes. Using the principle of induction, we can prove the following claim:
All datatypes in μF are finite
To arrive at this conclusion, consider the set of all finite strings over Σ. Clearly F cannot produce an infinite string from finite ones, so it turns out this set is F-closed and the conclusion follows.
Now suppose that we take νF as our set of datatypes. We would like to use the principle of coinduction to prove the following claim:
The type Ω := ⊤ × ⊤ × ⊤ × ⋯ is in νF
Here Ω denotes the infinite string built from ⊤ and × alone. To use the principle of coinduction, consider the set:
{Ω}
This set turns out to be F-consistent, and therefore Ω ∈ νF. This depends on the suspicious statement that Ω = ⊤ × Ω.
The formal justification of this is technical and depends on interpreting strings as sequences, i.e. functions from ℕ. Intuitively, the argument is similar to the argument that 0.999… = 1 (see Repeating decimal).
Coinductive datatypes in programming languages
Consider the following definition of a stream:
data Stream a = S a (Stream a)
-- Stream "destructors"
head (S a astream) = a
tail (S a astream) = astream
This would seem to be a definition that is not well-founded, but it is nonetheless useful in programming and can be reasoned about. In any case, a stream is an infinite list of elements from which you may observe the first element, or place an element in front of it to get another stream.
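For readers working outside a lazy language, the same codata can be sketched with explicit thunks. The Python below is an illustrative invention (the Stream class and the ones/take helpers are not part of any standard API); note that the function is defined entirely by its observers, head and tail:

```python
class Stream:
    """An infinite stream: a head plus a thunk that produces the tail on demand."""
    def __init__(self, head, tail_thunk):
        self.head = head
        self._tail_thunk = tail_thunk

    @property
    def tail(self):
        # Laziness: the tail is only computed when observed.
        return self._tail_thunk()

def ones():
    # Corecursive definition: the stream whose tail is itself.
    return Stream(1, ones)

def take(n, s):
    """Observe the first n elements of a stream."""
    out = []
    for _ in range(n):
        out.append(s.head)
        s = s.tail
    return out

print(take(5, ones()))  # [1, 1, 1, 1, 1]
```

A production version would memoize the thunk so each tail is computed once, but the sketch already shows why the "non-well-founded" definition is safe: only finitely many observations are ever forced.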
Relationship with F-coalgebras
Consider the endofunctor F in the category of sets:
F(x) = A × x
The final F-coalgebra νF has the following morphism associated with it:
out : νF → F(νF)
This induces another coalgebra (F(νF), F(out)) with associated morphism F(out). Because νF is final, there is a unique morphism
out' : F(νF) → νF
such that
out ∘ out' = F(out') ∘ F(out) = F(out' ∘ out)
The composition out' ∘ out induces another F-coalgebra homomorphism νF → νF. Since νF is final, this homomorphism is unique and therefore out' ∘ out = id. Altogether we have:
out' ∘ out = id and out ∘ out' = F(out' ∘ out) = F(id) = id
This witnesses the isomorphism νF ≃ F(νF), which in categorical terms indicates that νF is a fixpoint of F and justifies the notation.
Stream as a final coalgebra
We will show that Stream A is the final coalgebra of the functor F(x) = A × x. Consider the following implementations:
out astream = (head astream, tail astream)
out' (a, astream) = S a astream
These are easily seen to be mutually inverse, witnessing the isomorphism. See the reference for more details.
Relationship with mathematical induction
We will demonstrate how the principle of induction subsumes mathematical induction.
Let P be some property of natural numbers. We will take the following definition of mathematical induction:
(0 ∈ P) ∧ (n ∈ P ⟹ n + 1 ∈ P) ⟹ ℕ ⊆ P
Now consider the function F : ℘(ℕ) → ℘(ℕ):
F(X) = {0} ∪ {x + 1 : x ∈ X}
It should not be difficult to see that μF = ℕ. Therefore, by the principle of induction, if we wish to prove some property P of ℕ, it suffices to show that P is F-closed. In detail, we require:
F(P) ⊆ P
That is,
({0} ∪ {x + 1 : x ∈ P}) ⊆ P
This is precisely mathematical induction as stated.
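As a quick sanity check, the fixed-point characterization of the naturals can be verified on a bounded fragment by Kleene iteration. This Python sketch assumes the generating function F(X) = {0} ∪ {x + 1 : x ∈ X}; the bound N and the helper name are invented for illustration:

```python
def lfp(F, universe):
    """Least fixed point of a monotone F over a finite universe, by Kleene iteration."""
    X = set()
    while (nxt := F(X) & universe) != X:
        X = nxt
    return X

N = 10
universe = set(range(N))
F = lambda X: {0} | {x + 1 for x in X}

# Iteration builds {} -> {0} -> {0,1} -> ... and stabilizes at {0,...,N-1},
# mirroring the claim that the least fixed point of F is ℕ.
print(sorted(lfp(F, universe)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```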
See also
F-coalgebra
Corecursion
Bisimulation
Anamorphism
Total functional programming
References
Further reading
Textbooks
Davide Sangiorgi (2012). Introduction to Bisimulation and Coinduction. Cambridge University Press.
Davide Sangiorgi and Jan Rutten (2011). Advanced Topics in Bisimulation and Coinduction. Cambridge University Press.
Introductory texts
Andrew D. Gordon (1994). — mathematically oriented description
Bart Jacobs and Jan Rutten (1997). A Tutorial on (Co)Algebras and (Co)Induction (alternate link) — describes induction and coinduction simultaneously
Eduardo Giménez and Pierre Castéran (2007). "A Tutorial on [Co-]Inductive Types in Coq"
Coinduction — short introduction
History
Davide Sangiorgi. "On the Origins of Bisimulation and Coinduction", ACM Transactions on Programming Languages and Systems, Vol. 31, No. 4, May 2009.
Miscellaneous
Co-Logic Programming: Extending Logic Programming with Coinduction — describes the co-logic programming paradigm
Theoretical computer science
Logic programming
Functional programming
Category theory
Mathematical induction | Coinduction | [
"Mathematics"
] | 1,597 | [
"Functions and mappings",
"Mathematical structures",
"Proof theory",
"Theoretical computer science",
"Applied mathematics",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Mathematical induction"
] |
10,904,287 | https://en.wikipedia.org/wiki/Termcap | Termcap ("terminal capability") is a legacy software library and database used on Unix-like computers that enables programs to use display computer terminals in a device-independent manner, which greatly simplifies the process of writing portable text mode applications. It was superseded by the terminfo database used by ncurses, tput, and other programs.
Bill Joy wrote the first termcap library in 1978 for the Berkeley Unix operating system; it has since been ported to most Unix and Unix-like environments, even OS-9. Joy's design was reportedly influenced by the design of the terminal data store in the earlier Incompatible Timesharing System.
A termcap database can describe the capabilities of hundreds of different display terminals. This allows programs to have character-based display output, independent of the type of terminal. On-screen text editors such as vi and Emacs are examples of programs that may use termcap. Other programs are listed in the Termcap category.
Examples of what the database describes:
how many columns wide the display is
what string to send to move the cursor to an arbitrary position (including how to encode the row and column numbers)
how to scroll the screen up one or several lines
how much padding is needed for such a scrolling operation.
Data model
Termcap databases consist of one or more descriptions of terminals.
Indices
Each description must contain the canonical name of the terminal. It may also contain one or more aliases for the name of the terminal. The canonical name or aliases are the keys by which the library searches the termcap database.
Data values
The description contains one or more capabilities, which have conventional names. The capabilities are typed: boolean, numeric and string. The termcap library has no predetermined type for each capability name. It determines the types of each capability by the syntax:
string capabilities have an "=" between the capability name and its value,
numeric capabilities have a "#" between the capability name and its value, and
boolean capabilities have no associated value (they are always true if specified).
Applications which use termcap expect specific types for the commonly used capabilities, and obtain the values of capabilities from the termcap database using library calls that return successfully only when the database contents match the assumed type.
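The type-by-syntax rule above can be sketched as a small parser. The following Python is a simplified illustration, not a real termcap implementation; the example entry is invented, and escapes, continuation lines, and tc= includes are all omitted:

```python
def parse_termcap_entry(entry):
    """Classify capabilities by syntax alone: '=' string, '#' numeric, bare boolean."""
    fields = [f for f in entry.strip().split(":") if f]
    names = fields[0].split("|")          # canonical name plus aliases
    caps = {}
    for f in fields[1:]:
        if "=" in f:
            name, _, value = f.partition("=")
            caps[name] = value            # string capability
        elif "#" in f:
            name, _, value = f.partition("#")
            caps[name] = int(value)       # numeric capability
        else:
            caps[f] = True                # boolean capability
    return names, caps

# A made-up, simplified entry for illustration only:
entry = "vt100|dec vt100:co#80:li#24:am:cl=50\\E[H\\E[J"
names, caps = parse_termcap_entry(entry)
print(names)       # ['vt100', 'dec vt100']
print(caps["co"])  # 80
print(caps["am"])  # True
```

The parser never consults a table of capability names; as the text notes, the type of each capability is determined purely by the separator that follows its two-character name.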
Hierarchy
Termcap descriptions can be constructed by including the contents of one description in another, suppressing capabilities from the included description or overriding or adding capabilities. No matter what storage model is used, the termcap library constructs the terminal description from the requested description, including, suppressing or overriding at the time of the request.
Storage model
Termcap data is stored as text, making it simple to modify. The text can be retrieved by the termcap library from files or environment variables.
Environment variables
The TERM environment variable contains the terminal type name.
The TERMCAP environment variable may contain a termcap database. It is most often used to store a single termcap description, set by a terminal emulator to provide the terminal's characteristics to the shell and dependent programs.
The TERMPATH environment variable is supported by newer termcap implementations and defines a search path for termcap files.
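The lookup order implied by these variables can be sketched as follows. This Python fragment is a simplified illustration of one plausible policy, not any particular library's behavior; in particular, the convention that a TERMCAP value beginning with "/" names a file rather than holding a literal entry is assumed here:

```python
import os

def find_termcap_source():
    """Sketch of a termcap lookup order (simplified; real implementations differ):
    1. $TERMCAP, if it holds a literal entry matching $TERM;
    2. otherwise, files named by $TERMPATH (falling back to /etc/termcap).
    """
    term = os.environ.get("TERM", "dumb")
    tc = os.environ.get("TERMCAP", "")
    # A non-empty TERMCAP that is not a file path is treated as an inline entry;
    # its first field holds the name and aliases, separated by '|'.
    if tc and not tc.startswith("/") and term in tc.split(":", 1)[0].split("|"):
        return "TERMCAP environment variable"
    paths = os.environ.get("TERMPATH", "/etc/termcap").split(":")
    return f"first matching file in {paths}"
```

This mirrors the common setup the text describes: a terminal emulator exports its own entry via TERMCAP so that child programs can skip the database search entirely.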
Flat file
The original (and most common) implementation of the termcap library retrieves data from a flat text file. Searching a large termcap file, e.g., 500 kB, can be slow. To aid performance, a utility such as reorder is used to put the most frequently used entries near the beginning of the file.
Hashed database
4.4BSD based implementations of termcap store the terminal description in a hashed database (e.g., something like Berkeley DB version 1.85). These store two types of records: aliases which point to the canonical entry, and the canonical entry itself. The text of the termcap entry is stored literally.
Limitations and extensions
The original termcap implementation was designed to use little memory:
the first name is two characters, to fit in 16 bits
capability names are two characters
descriptions are limited to 1023 characters.
only one termcap entry with its definitions can be included, and must be at the end.
Newer implementations of the termcap interface generally do not require the two-character name at the beginning of the entry.
Capability names are still two characters in all implementations.
The tgetent function used to read the terminal description uses a buffer whose size must be large enough for the data, and is assumed to be 1024 characters. Newer implementations of the termcap interface may relax this constraint by allowing a null pointer in place of the fixed buffer, or by hiding the data which would not fit, e.g., via the ZZ capability in NetBSD termcap. The terminfo library interface also emulates the termcap interface, and does not actually use the fixed-size buffer.
The terminfo library's emulation of termcap allows multiple other entries to be included without restricting the position. A few other newer implementations of the termcap library may also provide this ability, though it is not well documented.
Obsolete features
A special capability, the "hz" capability, was defined specifically to support the Hazeltine 1500 terminal, which had the unfortunate characteristic of using the ASCII tilde character ('~') as a control sequence introducer. In order to support that terminal, not only did code that used the database have to know about using the tilde to introduce certain control sequences, but it also had to know to substitute another printable character for any tildes in the displayed text, since a tilde in the text would be interpreted by the terminal as the start of a control sequence, resulting in missing text and screen garbling. Additionally, attribute markers (such as start and end of underlining) themselves took up space on the screen. Comments in the database source code often referred to this as "Hazeltine braindamage". Since the Hazeltine 1500 was a widely used terminal in the late 1970s, it was important for applications to be able to deal with its limitations.
See also
ANSI escape sequences, attempts to unify the many sequences
Computer terminal
Curses (programming library)
Terminfo
Terminal emulator
References
External links
Current termcap data
Termcap/Terminfo Resources Page at Eric S. Raymond's website
Computer data
Databases
Text mode
1978 software | Termcap | [
"Technology"
] | 1,290 | [
"Computer data",
"Data"
] |
10,904,347 | https://en.wikipedia.org/wiki/Power%20Engineering%20%28magazine%29 | Power Engineering is a monthly magazine dedicated to professionals in the field of power engineering and power generation. Articles are focused on new developments in power plant design, construction and operation in North America.
Power Engineering was published by PennWell Corporation, the largest U.S. publisher of electric power industry books, directories, maps and conferences. In 2018, PennWell was acquired by Clarion Events, a British company owned by The Blackstone Group.
Power Engineering International, also published by PennWell, covers Europe, Asia-Pacific, the Middle East and the rest of the world.
References
External links
1896 establishments in the United States
Monthly magazines published in the United States
Energy magazines
Engineering magazines
Magazines established in 1896
Power engineering
Science and technology magazines published in the United States
Magazines published in Oklahoma
Mass media in Tulsa, Oklahoma | Power Engineering (magazine) | [
"Engineering"
] | 162 | [
"Power engineering",
"Electrical engineering",
"Energy engineering"
] |
10,904,482 | https://en.wikipedia.org/wiki/THC-O-acetate | THC-O-acetate (THC acetate ester, O-acetyl-THC, THC-O, AcO-THC) is the acetate ester of THC. The term THC-O-acetate and its variations are commonly used for two types of the substance, dependent on which cannabinoid it is synthesized from. The difference between Δ8-THC and Δ9-THC is bond placement on the cyclohexene ring.
Physical data, chemistry, and properties
THC acetate ester (THC-O or THCOA) can be synthesized from THC, or from THCA.
The acetylation of THC does not change the properties of the compound to the same extent as with other acetate esters, as the parent compound (THC) is already highly lipophilic, but potency is nonetheless increased to some extent. While the acetate ester of Δ9-THC is the best studied, the acetate esters of other isomers, especially Δ8-THC but also Δ10-THC are also known, as are other esters such as THC-O-propionate, THC-O-phosphate, THC hemisuccinate, THC hemiglutarate, THC morpholinylbutyrate, THC piperidinylpropionate, THC naphthoyl ester (THC-NE), and THC-VHS, as well as the hydrogenated derivative HHC-O-acetate and the ring-expanded Abeo-HHC acetate.
Pharmacology
It is a metabolic pro-drug, with its subjective effects being felt around 30 minutes after ingestion.
Psychedelic claims
In a 2023 study, anecdotal claims surrounding THC-O-acetate's supposed ability to initiate psychedelic experiences were shown to not be significant. Answers using the Mystical Experience Questionnaire (MEQ) were under the threshold of a true experience, and those who had used classical psychedelics such as LSD or psilocybin consistently scored lower on the MEQ. When asked directly, 79% of the participants said it was either "not at all" or "a little" like a psychedelic experience.
History
THC acetate ester was investigated as a possible non-lethal incapacitating agent as part of the Edgewood Arsenal experiments at some point between 1949 and 1975. It was noted to have about twice the capacity to produce ataxia (lack of voluntary coordination of muscle movements) as did THC when administered to dogs.
Author D. Gold provided synthesis instructions for this compound (calling it "THC acetate") in his 1974 book Cannabis Alchemy: Art of Modern Hashmaking, in which it is described as follows;
The U.S. DEA first encountered THC-O-acetate as an apparent controlled substance analogue of THC in 1978. It was made in an analogous manner to how aspirin (acetylsalicylic acid) is made from willow bark (salicylic acid). The incident was described by Donald A. Cooper of the DEA thus:
A similar case was reported in June 1995 in the United Kingdom. The description of that case appears to indicate that the convicted manufacturer was using D. Gold's book Cannabis Alchemy as a guide. THC acetate was also reported to have been found by New Zealand police in 1995, again made by acetylation of purified cannabis extracts with acetic anhydride.
Following legal changes in the United States since around 2018, especially the legalisation of cannabis in an increasing number of states and the passage of the 2018 Farm Bill which eased restrictions on the cultivation of industrial hemp, THC-O-acetate has become increasingly available as a recreational drug in the United States.
Toxicity
In 2022, researchers at Portland State University used an e-nail to vaporize CBD-acetate, CBN-acetate, and THC-O-acetate (referred to simply as "THC acetate") to screen for the presence of ketene formation when vaporizing. They reported that just like Vitamin E acetate, all three of these cannabinoid acetates produced ketene gas when heated. For this reason, the vaping of THC-O-acetate could put its users at risk.
Legal status
United Kingdom
THC-O-acetate is a Class B drug in the United Kingdom.
United States
THC-O-acetate is not directly listed under the Controlled Substances Act, but it is designated as a Schedule I controlled substance in the United States. This designation is based upon a private letter ruling by DEA which communicated that THC-O-Acetate met the statutory definition of tetrahydrocannabinol. DEA reached this conclusion primarily on the basis that THC-O-Acetate is not known to occur in nature. Consequently, THC-O-Acetate cannot be derived from either Hemp or Marijuana without chemical conversion:
The legal status of hemp derived cannabinoids, including THC-O-acetate, in the United States continues to evolve as of 2021.
See also
Acetylation
Abeo-HHC acetate
Cannabidiol diacetate
Cod-THC
Nabitan
O-1057
O-2694
HHCP-O-acetate
THCP-O-acetate
References
Cannabinoids
Acetate esters
Benzochromenes
Prodrugs
Heterocyclic compounds with 3 rings | THC-O-acetate | [
"Chemistry"
] | 1,159 | [
"Chemicals in medicine",
"Prodrugs"
] |
10,904,960 | https://en.wikipedia.org/wiki/Beta-ketoacyl-ACP%20synthase | In molecular biology, Beta-ketoacyl-ACP synthase , is an enzyme involved in fatty acid synthesis. It typically uses malonyl-CoA as a carbon source to elongate ACP-bound acyl species, resulting in the formation of ACP-bound β-ketoacyl species such as acetoacetyl-ACP.
Beta-ketoacyl-ACP synthase is a highly conserved enzyme that is found in almost all life on earth as a domain in fatty acid synthase (FAS). FAS exists in two types, aptly named type I and II. In animals, fungi, and lower eukaryotes, Beta-ketoacyl-ACP synthases make up one of the catalytic domains of larger multifunctional proteins (Type I), whereas in most prokaryotes as well as in plastids and mitochondria, Beta-ketoacyl-ACP synthases are separate protein chains that usually form dimers (Type II).
Beta-ketoacyl-ACP synthase III, perhaps the most well known of this family of enzymes, catalyzes a Claisen condensation between acetyl CoA and malonyl ACP. The image below reveals how CoA fits in the active site as a substrate of synthase III.
Beta-ketoacyl-ACP synthases I and II only catalyze acyl-ACP reactions with malonyl ACP. Synthases I and II are capable of producing long-chain acyl-ACPs. Both are efficient up to acyl-ACPs with a 14 carbon chain, at which point synthase II is the more efficient choice for further carbon additions. Type I FAS catalyzes all the reactions necessary to create palmitic acid, which is a necessary function in animals for metabolic processes, one of which includes the formation of sphingosines.
Beta-ketoacyl-ACP synthase is found as a component of a number of enzymatic systems, including fatty acid synthetase (FAS); the multi-functional 6-methylsalicylic acid synthase (MSAS) from Penicillium patulum, which is involved in the biosynthesis of a polyketide antibiotic; polyketide antibiotic synthase enzyme systems; Emericella nidulans multifunctional protein Wa, which is involved in the biosynthesis of conidial green pigment; Rhizobium nodulation protein nodE, which probably acts as a beta-ketoacyl synthase in the synthesis of the nodulation Nod factor fatty acyl chain; and yeast mitochondrial protein CEM1.
Structure
Beta-ketoacyl synthase contains two protein domains. The active site is located between the N- and C-terminal domains. The N-terminal domain contains most of the structures involved in dimer formation and also the active-site cysteine. Residues from both domains contribute to substrate binding and catalysis.
In animals, beta-ketoacyl-ACP synthase is a domain on type I FAS, which is a large enzyme complex that has multiple domains to catalyze multiple different reactions. Analogously, beta-ketoacyl-ACP synthase in plants is found in type II FAS; note that synthases in plants have been documented to have a range of substrate specificities. The presence of similar ketoacyl synthases in all living organisms points to a common ancestor. Further examination of beta-ketoacyl-ACP synthases I and II of E. coli revealed that both are homodimeric, but synthase II is slightly larger. However, even though they are both involved in fatty acid metabolism, they also have highly divergent primary structure. In synthase II, each subunit consists of a five-stranded beta pleated sheet surrounded by multiple alpha helices, shown in the image on the left. The active sites are relatively close, only about 25 angstroms apart, and consist of a mostly hydrophobic pocket. Certain experiments have also suggested the presence of "fatty acid transport tunnels" within the beta-ketoacyl-ACP synthase domain that lead to one of many "fatty acid cavities", which essentially acts as the active site.
Mechanism
Beta-ketoacyl-synthase’s mechanism is a topic of debate among chemists. Many agree that Cys171 of the active site attacks acetyl ACP's carbonyl, and, like most enzymes, stabilizes the intermediate with other residues in the active site. ACP is subsequently eliminated, and it deprotonates His311 in the process. A thioester is then regenerated with the cysteine in the active site. Decarboxylation of a malonyl CoA that is also in the active site initially creates an enolate, which is stabilized by His311 and His345. The enolate tautomerizes to a carbanion that attacks the thioester of the acetyl-enzyme complex. Some sources speculate that an activated water molecule also resides in the active site as a means of hydrating the released CO2 or of attacking C3 of malonyl CoA. Another proposed mechanism considers the creation of a tetrahedral transition state. The driving force of the reaction comes from the decarboxylation of malonyl ACP; the energy captured in that bond technically comes from ATP, which is what is initially used to carboxylate acetyl CoA to malonyl CoA.
Biological function
The main function of beta-ketoacyl-ACP synthase is to produce fatty acids of various lengths for use by the organism. These uses include energy storage and creation of cell membranes. Fatty acids can also be used to synthesize prostaglandins, phospholipids, and vitamins, among many other things. Further, palmitic acid, which is created by the beta-ketoacyl-synthases on type I FAS, is used in a number of biological capacities. It is a precursor of both stearic and palmitoleic acids. Palmitoleic can subsequently be used to create a number of other fatty acids. Palmitic acid is also used to synthesize sphingosines, which play a role in cell membranes.
Clinical significance
The different types of beta-ketoacyl-ACP synthases in type II FAS are called FabB, FabF, and FabH synthases. FabH catalyzes the quintessential ketoacyl synthase reaction with malonyl ACP and acetyl CoA. FabB and FabF catalyze other related reactions. Given that their function is necessary for proper biological function surrounding lipoprotein, phospholipid, and lipopolysaccharide synthesis, they have become a target in antibacterial drug development. In order to adapt to their environment, bacteria alter the phospholipid composition of their membranes. Inhibiting this pathway may thus be a leverage point in disrupting bacterial proliferation. By studying Yersinia pestis, which causes bubonic, pneumonic, and septicaemic plagues, researchers have shown that FabB, FabF, and FabH can theoretically all be inhibited by the same drug due to similarities in their binding sites. However, such a drug has not yet been developed. Cerulenin, a molecule that appears to inhibit by mimicking the "condensation transition state" can only inhibit B or F, but not H. Another molecule, thiolactomycin, which mimics malonyl ACP in the active site, can only inhibit FabB. Lastly, platensimycin also has possible antibiotic use due to its inhibition of FabF.
These types of drugs are highly relevant. For example, Y. pestis was the main agent in the Justinian Plague, Black Death, and the modern plague. Even within the last five years, China, Peru, and Madagascar all experienced an outbreak of infection by Y. pestis. If it is not treated within 24 hours, it normally results in death. Furthermore, there is worry that it can now be used as a possible biological warfare weapon.
Unfortunately, many drugs that target prokaryotic beta-ketoacyl-synthases carry many side effects. Given the similarities between prokaryotic ketoacyl synthases and mitochondrial ones, these types of drugs tend to unintentionally also act upon mitochondrial synthases, leading to many biological consequences for humans.
Industrial applications
Recent efforts in bioengineering include engineering of FAS proteins, which includes beta-ketoacyl-ACP synthase domains, in order to favor the synthesis of branched carbon chains as a renewable energy source. Branched carbon chains contain more energy and can be used in colder temperatures because of their lower freezing point. Using E. coli as the organism of choice, engineers have replaced the endogenous FabH domain on FAS, which favors unbranched chains, with FabH versions that favor branching due to their high substrate specificity for branched acyl-ACPs.
See also
Beta-ketoacyl-ACP synthase I
Beta-ketoacyl-ACP synthase II
Beta-ketoacyl-ACP synthase III
3-oxoacyl-(acyl-carrier-protein) reductase
References
External links
Further reading
Protein domains | Beta-ketoacyl-ACP synthase | [
"Biology"
] | 2,007 | [
"Protein domains",
"Protein classification"
] |
10,905,011 | https://en.wikipedia.org/wiki/Lenabasum | Lenabasum (also known as ajulemic acid, 1',1'-dimethylheptyl-delta-8-tetrahydrocannabinol-11-oic acid, DMH-D8-THC-11-OIC, AB-III-56, HU-239, IP-751, CPL 7075, CT-3, JBT-101, Anabasum, and Resunab) is a synthetic cannabinoid that shows anti-fibrotic and anti-inflammatory effects in pre-clinical studies without causing a subjective "high". Although its design was inspired by a metabolite of delta-9-THC known as delta-9-THC-11-oic acid, lenabasum is an analog of the delta-8-THC metabolite delta-8-THC-11-oic acid. It is being developed for the treatment of inflammatory and fibrotic conditions such as systemic sclerosis, dermatomyositis and cystic fibrosis. It does not share the anti-emetic effects of some other cannabinoids, but may be useful for treating chronic inflammatory conditions where inflammation fails to resolve. Side effects include dry mouth, tiredness, and dizziness. The mechanism of action is through activation of the CB2 receptor leading to production of specialized proresolving eicosanoids such as lipoxin A4 and prostaglandin J2. Studies in animals at doses up to 40 mg/kg show minimal psychoactivity of lenabasum, compared to that produced by tetrahydrocannabinol. Lenabasum is being developed by Corbus Pharmaceuticals (formerly JB Therapeutics) for the treatment of orphan chronic life-threatening inflammatory diseases. Development since been discontinued.
References
Cannabinoids
Benzochromenes
Carboxylic acids
Hydroxyarenes
HU cannabinoids
Abandoned drugs | Lenabasum | [
"Chemistry"
] | 408 | [
"Functional groups",
"Carboxylic acids",
"Drug safety",
"Abandoned drugs"
] |
10,905,663 | https://en.wikipedia.org/wiki/Asia%20and%20South%20Pacific%20Design%20Automation%20Conference | The Asia and South Pacific Design Automation Conference, or ASP-DAC is the international conference on VLSI design automation in Asia and South Pacific regions, the most active region of design, CAD and fabrication of silicon chips in the world. The ASP-DAC is a high-quality and premium conference on electronic design automation (EDA) like other sister conferences such as Design Automation Conference (DAC), International Conference on Computer Aided Design (ICCAD), Design, Automation & Test in Europe (DATE). Founded in 1995, the conference aims to provide a platform for researchers and designers to exchange ideas and understand the latest technologies in the areas of LSI design and design automation.
See also
Design Automation Conference
International Conference on Computer-Aided Design
Design Automation and Test in Europe
References
External links
Main web page for the ASP-DAC conference
IEEE conferences
Electronic design automation conferences | Asia and South Pacific Design Automation Conference | [
"Technology"
] | 181 | [
"Computing stubs",
"Computer conference stubs"
] |
10,905,770 | https://en.wikipedia.org/wiki/Farnesyl-diphosphate%20farnesyltransferase | Squalene synthase (SQS) or farnesyl-diphosphate:farnesyl-diphosphate farnesyl transferase is an enzyme localized to the membrane of the endoplasmic reticulum. SQS participates in the isoprenoid biosynthetic pathway, catalyzing a two-step reaction in which two identical molecules of farnesyl pyrophosphate (FPP) are converted into squalene, with the consumption of NADPH. Catalysis by SQS is the first committed step in sterol synthesis, since the squalene produced is converted exclusively into various sterols, such as cholesterol, via a complex, multi-step pathway. SQS belongs to squalene/phytoene synthase family of proteins.
Diversity
Squalene synthase has been characterized in animals, plants, and yeast. In terms of structure and mechanics, squalene synthase closely resembles phytoene synthase (PHS), another prenyltransferase. PHS serves a similar role to SQS in plants and bacteria, catalyzing the synthesis of phytoene, a precursor of carotenoid compounds.
Structure
Squalene synthase (SQS) is localized exclusively to the membrane of the endoplasmic reticulum (ER). SQS is anchored to the membrane by a short C-terminal membrane-spanning domain. The N-terminal catalytic domain of the enzyme protrudes into the cytosol, where the soluble substrates are bound. Mammalian forms of SQS are approximately 47 kDa and consist of ~416 amino acids. The crystal structure of human SQS was determined in 2000 and revealed that the protein is composed entirely of α-helices. The enzyme is folded into a single domain, characterized by a large central channel. The active sites for both half-reactions catalyzed by SQS are located within this channel. One end of the channel is open to the cytosol, whereas the other end forms a hydrophobic pocket. SQS contains two conserved aspartate-rich sequences, which are believed to participate directly in the catalytic mechanism. These aspartate-rich motifs are one of several conserved structural features in class I isoprenoid biosynthetic enzymes, although these enzymes do not share sequence homology.
Mechanism
Squalene synthase (SQS) catalyzes the reductive dimerization of farnesyl pyrophosphate (FPP), in which two identical molecules of FPP are converted into one molecule of squalene. The reaction occurs in two steps, proceeding through the intermediate presqualene pyrophosphate (PSPP). FPP is a soluble allylic compound containing 15 carbon atoms (C15), whereas squalene is an insoluble, C30 isoprenoid. This reaction is a head-to-head terpene synthesis: the two FPP molecules are joined at the C1 position, forming a 1-1' linkage, in contrast to the head-to-tail 1'-4 linkages that are much more common in isoprenoid biosynthesis. The reaction mechanism of SQS requires a divalent cation, often Mg2+, to facilitate binding of the pyrophosphate groups on FPP.
FPP condensation
In the first half-reaction, two identical molecules of farnesyl pyrophosphate (FPP) are bound to squalene synthase (SQS) in a sequential manner. The FPP molecules bind to distinct regions of the enzyme, and with different binding affinities. Starting at the top of the catalytic cycle below, the reaction begins with the ionization of FPP to generate an allylic carbocation. A tyrosine residue (Tyr-171) plays a critical role in this step by serving as a proton donor to facilitate abstraction of pyrophosphate. Moreover, the resulting phenolate anion can stabilize the resulting carbocation through cation-π interactions, which would be particularly strong due to the highly electron-rich nature of the phenolate anion. The allylic cation generated is then attacked by the olefin of a second molecule of FPP, affording a tertiary carbocation. The phenolate anion generated previously then serves as a base to abstract a proton from this adduct to form a cyclopropane product, presqualene pyrophosphate (PSPP). The PSPP created remains associated with SQS for the second reaction. The importance of a tyrosine residue in this process was demonstrated by mutagenesis studies with rat SQS (rSQS), and by the fact that Tyr-171 is conserved in all known SQSs (and PHSs). In rSQS, Tyr-171 was converted to aromatic residues Phe and Trp, as well as hydroxyl-containing residue Ser. None of these mutants were able to convert FPP to PSPP or squalene, demonstrating that aromatic rings or alcohols alone are insufficient for converting FPP to PSPP.
PSPP rearrangement and reduction
In the second half-reaction of SQS, presqualene pyrophosphate (PSPP) moves to a second reaction site within SQS. Keeping PSPP in the central channel of SQS is thought to protect the reactive intermediate from reacting with water. From PSPP, squalene is formed by a series of carbocation rearrangements. The process begins with ionization of pyrophosphate, giving a cyclopropylcarbinyl cation. The cation rearranges by a 1,2-migration of a cyclopropane C–C bond to the carbocation, forming the bond shown in blue to give a cyclobutyl carbocation. Subsequently, a second 1,2-migration occurs to form another cyclopropylcarbinyl cation, with the cation resting on a tertiary carbon. This resulting carbocation is then ring-opened by a hydride delivered by NADPH, giving squalene, which is then released by SQS into the membrane of the endoplasmic reticulum.
While cyclopropylcarbinyl-cyclopropylcarbinyl rearrangements can proceed through discrete cyclobutyl cation intermediates, the supposed cyclobutyl cation could not be trapped in model studies. Thus, the cyclobutyl cation may actually be a transition state between the two cyclopropylcarbinyl cations, rather than a discrete intermediate. The stereochemistry of the intermediates and the olefin geometry in the final product is dictated by the suprafacial nature of the 1,2-shifts and stereoelectronic requirements. While other mechanisms have been proposed, the mechanism shown above is supported by isolation of rillingol, which is the alcohol formed from trapping the second cyclopropylcarbinyl cation with water.
Regulation
FPP is an important metabolic intermediate in the mevalonate pathway that represents a major branch point in terpenoid pathways. FPP is used to form several important classes of compounds in addition to sterols (via squalene), including ubiquinone and dolichols. SQS catalyzes the first committed step in sterol biosynthesis from FPP, and is therefore important for controlling the flux towards sterol vs. non-sterol products. The activity of SQS is intimately related to the activity of HMG-CoA reductase, which catalyzes the rate-limiting step of the mevalonate pathway. High levels of LDL-derived cholesterol inhibit HMG-CoA reductase activity significantly, since mevalonate is no longer needed for sterol production. However, residual HMG-CoA reductase activity is observed even with very high LDL levels, such that FPP can be made for forming non-sterol products essential for cell growth. To prevent this residual FPP from being used for sterol synthesis when sterols are abundant, SQS activity declines significantly when LDL levels are high. This suppression of SQS activity is better thought of as a flux control mechanism than as a way to regulate cholesterol levels, because HMG-CoA reductase is the more significant control point for regulating cholesterol synthesis (its activity is 98% inhibited when LDL levels are high).
Regulation by sterols
SQS regulation occurs primarily at the level of SQS gene transcription. The sterol regulatory element binding protein (SREBP) class of transcription factors is central to regulating genes involved in cholesterol homeostasis, and is important for controlling levels of SQS transcription. When sterol levels are low, an inactive form of SREBP is cleaved to form the active transcription factor, which moves to the nucleus to induce transcription of the SQS gene. Of the three known SREBP transcription factors, only SREBP-1a and SREBP-2 activate SQS gene transcription in transgenic mouse livers. In cultured HepG2 cells, SREBP-1a appears more important than SREBP-2 in controlling activation of the SQS promoter. However, SQS promoters have been shown to respond differently to SREBP-1a and SREBP-2 in different experimental systems.
Aside from SREBPs, accessory transcription factors are needed for maximal activation of the SQS promoter. Promoter studies using luciferase reporter gene assays revealed that the Sp1, NF-Y, and CREB transcription factors are also important for SQS promoter activation. NF-Y and/or CREB are required for SREBP-1a to fully activate the SQS promoter, whereas Sp1 is needed for SREBP-2 to do so.
Biological function
Squalene synthase (SQS) is an enzyme participating in the isoprenoid biosynthetic pathway. SQS catalyzes the branching point between sterol and nonsterol biosynthesis, and commits farnesyl pyrophosphate (FPP) exclusively to the production of sterols. An important sterol produced by this pathway is cholesterol, which is used in cell membranes and for the synthesis of hormones. SQS competes with several other enzymes for use of FPP, since it is a precursor for a variety of terpenoids. Decreases in SQS activity limit the flux of FPP to the sterol pathway and increase the production of nonsterol products. Important nonsterol products include ubiquinone, dolichols, heme A, and farnesylated proteins.
Development of squalene synthase knockout mice has demonstrated that loss of squalene synthase is lethal, and that the enzyme is essential for development of the central nervous system.
Disease relevance
Squalene synthase is a target for the regulation of cholesterol levels. Increased expression of SQS has been shown to elevate cholesterol levels in mice. Therefore, inhibitors of SQS are of great interest in the treatment of hypercholesterolemia and prevention of coronary heart disease (CHD). It has also been suggested that variants in this enzyme may be part of a genetic association with hypercholesterolemia.
Squalene synthase inhibitors
Squalene synthase inhibitors have been shown to decrease cholesterol synthesis, as well as to decrease plasma triglyceride levels. SQS inhibitors may provide an alternative to HMG-CoA reductase inhibitors (statins), which have problematic side effects for some patients. Squalene synthase inhibitors that have been investigated for use in the prevention of cardiovascular disease include lapaquistat (TAK-475), zaragozic acid, and RPR 107393. Despite reaching phase II clinical trials, lapaquistat was discontinued by 2008.
Squalene synthase homolog inhibition in Staphylococcus aureus is currently being investigated as a virulence factor-based antibacterial therapy.
References
External links
Biosynthesis
EC 2.5.1 | Farnesyl-diphosphate farnesyltransferase | [
"Chemistry"
] | 2,565 | [
"Biosynthesis",
"Metabolism",
"Chemical synthesis"
] |
10,905,801 | https://en.wikipedia.org/wiki/Design%20Automation%20and%20Test%20in%20Europe | Design, Automation & Test in Europe, or DATE is a yearly conference on the topic of electronic design automation. It is typically held in March or April of each year, alternating between France and Germany. It is sponsored by the SIGDA of the Association for Computing Machinery, the Electronic System Design Alliance, the European Design and Automation Association (EDAA), and the IEEE Council on Electronic Design Automation (CEDA).
Technical co-sponsors include ACM SIGBED, the IEEE Solid-State Circuits Society (SSCS), IFIP, and the Institution of Engineering and Technology (IET).
DATE is a combination of a technical conference and a small trade show. It was formed in 1998 as a merger of EDAC, ETC, Euro-ASIC, and Euro-DAC.
See also
electronic design automation
EDA Software Category
Design Automation Conference
International Conference on Computer-Aided Design
Asia and South Pacific Design Automation Conference
Symposia on VLSI Technology and Circuits
References
External links
Web page for the DATE conference
dblp: Design, Automation, and Test in Europe
IEEE conferences
Association for Computing Machinery conferences
Electronic design automation conferences
Information technology organizations based in Europe
International conferences in Germany
International conferences in France | Design Automation and Test in Europe | [
"Technology"
] | 246 | [
"Computing stubs",
"Computer conference stubs"
] |
10,905,953 | https://en.wikipedia.org/wiki/Lanosterol%20synthase | Lanosterol synthase () is an oxidosqualene cyclase (OSC) enzyme that converts (S)-2,3-oxidosqualene to a protosterol cation and finally to lanosterol. Lanosterol is a key four-ringed intermediate in cholesterol biosynthesis. In humans, lanosterol synthase is encoded by the LSS gene.
In eukaryotes, lanosterol synthase is an integral monotopic protein associated with the cytosolic side of the endoplasmic reticulum. Some evidence suggests that the enzyme is a soluble, non-membrane bound protein in the few prokaryotes that produce it.
Due to the enzyme's role in cholesterol biosynthesis, there is interest in lanosterol synthase inhibitors as potential cholesterol-reducing drugs, to complement existing statins.
Mechanism
Though some data on the mechanism has been obtained by the use of suicide inhibitors, mutagenesis studies, and homology modeling, it is still not fully understood how the enzyme catalyzes the formation of lanosterol.
Initial epoxide protonation and ring opening
Before the acquisition of the protein's X-ray crystal structure, site-directed mutagenesis was used to determine residues key to the enzyme's catalytic activity. It was determined that an aspartic acid residue (D455) and two histidine residues (H146 and H234) were essential to enzyme function. Corey et al. hypothesized that the aspartic acid acts by protonating the substrate's epoxide ring, thus increasing its susceptibility to intramolecular attack by the nearest double bond, with H146 possibly intensifying the proton donor ability of the aspartic acid through hydrogen bonding. After acquisition of the X-ray crystal structure of the enzyme, the role of D455 as a proton donor to the substrate's epoxide was confirmed, though it was found that D455 is more likely stabilized by hydrogen bonding from two cysteine residues (C456 and C533) than from the earlier suggested histidine.
Ring formation cascade
Epoxide protonation activates the substrate, setting off a cascade of ring-forming reactions. Four rings in total (A through D) are formed, producing the cholesterol backbone. Though the idea of a concerted formation of all four rings had been entertained in the past, kinetic studies with (S)-2,3-oxidosqualene analogs showed that product formation is achieved through discrete carbocation intermediates (see Figure 1). Isolation of monocyclic and bicyclic products from lanosterol synthase mutants has further weakened the hypothesis of a concerted mechanism. Evidence suggests, though, that epoxide ring opening and A-ring formation are concerted.
Structure
Lanosterol synthase is a two-domain monomeric protein composed of two connected (α/α) barrel domains and three smaller β-structures. The enzyme active site is in the center of the protein, closed off by a constricted channel. Passage of the (S)-2,3-epoxysqualene substrate through the channel requires a change in protein conformation. In eukaryotes, a hydrophobic surface (6% of the total enzyme surface area) is the ER membrane-binding region (see Figure 2).
The enzyme contains five fingerprint regions containing Gln-Trp motifs, which are also present in the highly analogous bacterial enzyme squalene-hopene cyclase. Residues of these fingerprint regions contain stacked sidechains which are thought to contribute to enzyme stability during the highly exergonic cyclization reactions catalyzed by the enzyme.
Function
Catalysis of lanosterol formation
Lanosterol synthase catalyzes the conversion of (S)-2,3-epoxysqualene to lanosterol, a key four-ringed intermediate in cholesterol biosynthesis. Thus, it in turn provides the precursor to estrogens, androgens, progestogens, glucocorticoids, mineralocorticoids, and neurosteroids. In eukaryotes the enzyme is bound to the cytosolic side of the endoplasmic reticulum membrane. While cholesterol synthesis is mostly associated with eukaryotes, a few prokaryotes have been found to express lanosterol synthase; it has been found as a soluble protein in Methylococcus capsulatus.
Catalysis of epoxylanosterol formation
Lanosterol synthase also catalyzes the cyclization of 2,3;22,23-diepoxysqualene to 24(S),25-epoxylanosterol, which is later converted to 24(S),25-epoxycholesterol. Since the enzyme affinity for this second substrate is greater than for the monoepoxy (S)-2,3-epoxysqualene, under partial inhibition conversion of 2,3;22,23-diepoxysqualene to 24(S),25-epoxylanosterol is favored over lanosterol synthesis. This has relevance for disease prevention and treatment.
Clinical significance
Enzyme inhibitors as cholesterol-lowering drugs
Interest has grown in lanosterol synthase inhibitors as drugs to lower blood cholesterol and treat atherosclerosis. The widely popular statin drugs currently used to lower LDL (low-density lipoprotein) cholesterol function by inhibiting HMG-CoA reductase activity. Because this enzyme catalyzes the formation of precursors far upstream of (S)-2,3-epoxysqualene and cholesterol, statins may negatively influence amounts of intermediates required for other biosynthetic pathways (e.g. synthesis of isoprenoids, coenzyme Q). Thus, lanosterol synthase, which is more closely tied to cholesterol biosynthesis than HMG-CoA reductase, is an attractive drug target.
Lanosterol synthase inhibitors are thought to lower LDL and VLDL cholesterol by a dual control mechanism. Studies in which lanosterol synthase is partially inhibited have shown both a direct decrease in lanosterol formation and a decrease in HMG-CoA reductase activity. The oxysterol 24(S),25-epoxylanosterol, which is preferentially formed over lanosterol during partial lanosterol synthase inhibition, is believed to be responsible for this inhibition of HMG-CoA reductase activity.
Evolution
It is believed that oxidosqualene cyclases (OSCs, the class to which lanosterol synthase belongs) evolved from bacterial squalene-hopene cyclase (SHC), which is involved with the formation of hopanoids. Phylogenetic trees constructed from the amino acid sequences of OSCs in diverse organisms suggest a single common ancestor, and that the synthesis pathway evolved only once. The discovery of steranes including cholestane in 2.7-billion-year-old shales from Pilbara Craton, Australia, suggests that eukaryotes with OSCs and complex steroid machinery were present early in earth's history.
References
Further reading
External links
Steroid hormone biosynthesis
EC 5.4.99 | Lanosterol synthase | [
"Chemistry",
"Biology"
] | 1,578 | [
"Steroid hormone biosynthesis",
"Biosynthesis"
] |
10,906,098 | https://en.wikipedia.org/wiki/Exothermic%20welding | Exothermic welding, also known as exothermic bonding, thermite welding (TW), and thermit welding, is a welding process that employs molten metal to permanently join the conductors. The process employs an exothermic reaction of a thermite composition to heat the metal, and requires no external source of heat or current. The chemical reaction that produces the heat is an aluminothermic reaction between aluminium powder and a metal oxide.
Overview
In exothermic welding, aluminium dust reduces the oxide of another metal, most commonly iron oxide, because aluminium is highly reactive. Iron(III) oxide is commonly used:
Fe2O3 + 2 Al → 2 Fe + Al2O3
The products are aluminium oxide, free elemental iron, and a large amount of heat. The reactants are commonly powdered and mixed with a binder to keep the material solid and prevent separation.
Commonly the reacting composition is five parts iron oxide red (rust) powder and three parts aluminium powder by weight, ignited at high temperatures. A strongly exothermic (heat-generating) reaction occurs that via reduction and oxidation produces a white hot mass of molten iron and a slag of refractory aluminium oxide. The molten iron is the actual welding material; the aluminium oxide is much less dense than the liquid iron and so floats to the top of the reaction, so the set-up for welding must take into account that the actual molten metal is at the bottom of the crucible and covered by floating slag.
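The 5:3 field recipe can be compared against the reaction stoichiometry. A minimal sketch using standard atomic masses (the exact mass ratio is arithmetic on molar masses; practical welding mixes deviate from the stoichiometric proportion):

```python
# Stoichiometric mass ratio for the thermite reaction:
#   Fe2O3 + 2 Al -> Al2O3 + 2 Fe
FE, O, AL = 55.845, 15.999, 26.982  # standard atomic masses, g/mol

m_fe2o3 = 2 * FE + 3 * O   # ~159.69 g/mol of iron(III) oxide
m_al = 2 * AL              # ~53.96 g/mol (two Al per formula unit)

ratio = m_fe2o3 / m_al     # iron oxide : aluminium, by weight
print(f"stoichiometric ratio ~ {ratio:.2f} : 1")  # ~2.96 : 1, i.e. roughly 3:1
```

A 5:3 mix (about 1.7:1) therefore carries more aluminium than the bare stoichiometry requires; commercial charges are formulated for reliable ignition and alloying rather than exact stoichiometry.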
Other metal oxides can be used, such as chromium oxide, to generate the given metal in its elemental form. Copper thermite, using copper oxide, is used for creating electric joints:
Thermite welding is widely used to weld railway rails. One of the first railroads to evaluate the use of thermite welding was the Delaware and Hudson Railroad in the United States in 1935. The weld quality of chemically pure thermite is low due to the low heat penetration into the joining metals and the very low carbon and alloy content in the nearly pure molten iron. To obtain sound railroad welds, the ends of the rails being thermite welded are preheated with a torch to an orange heat, to ensure the molten steel is not chilled during the pour.
Because the thermite reaction yields relatively pure iron, not the much stronger steel, some small pellets or rods of high-carbon alloying metal are included in the thermite mix; these alloying materials melt from the heat of the thermite reaction and mix into the weld metal. The composition of the alloying beads varies according to the rail alloy being welded.
The reaction reaches very high temperatures, depending on the metal oxide used. The reactants are usually supplied in the form of powders, with the reaction triggered using a spark from a flint lighter. The activation energy for this reaction is very high, however, and initiation requires either the use of a "booster" material such as powdered magnesium metal or a very hot flame source. The aluminium oxide slag that it produces is discarded.
When welding copper conductors, the process employs a semi-permanent graphite crucible mould, in which the molten copper, produced by the reaction, flows through the mould and over and around the conductors to be welded, forming an electrically conductive weld between them. When the copper cools, the mould is either broken off or left in place. Alternatively, hand-held graphite crucibles can be used. The advantages of these crucibles include portability, lower cost (because they can be reused), and flexibility, especially in field applications.
Properties
An exothermic weld has higher mechanical strength than other forms of weld, and excellent corrosion resistance. It is also highly stable when subject to repeated short-circuit pulses, and does not suffer from increased electrical resistance over the lifetime of the installation. However, the process is costly relative to other welding processes, requires a supply of replaceable moulds, suffers from a lack of repeatability, and can be impeded by wet conditions or bad weather (when performed outdoors).
Applications
Exothermic welding is usually used for welding copper conductors but is suitable for welding a wide range of metals, including stainless steel, cast iron, common steel, brass, bronze, and Monel. It is especially useful for joining dissimilar metals. The process is marketed under a variety of names such as AIWeld, American Rail Weld, AmiableWeld, Ardo Weld, ERICO Cadweld, FurseWeld, Harger Ultrashot, Quikweld, StaticWeld, Techweld, Tectoweld, TerraWeld, Thermoweld and Ultraweld.
Because of the good electrical conductivity and high stability in the face of short-circuit pulses, exothermic welds are one of the options specified by §250.7 of the United States National Electrical Code for grounding conductors and bonding jumpers. It is the preferred method of bonding, and indeed it is the only acceptable means of bonding copper to galvanized cable. The NEC does not require such exothermically welded connections to be listed or labelled, but some engineering specifications require that completed exothermic welds be examined using X-ray equipment.
Rail welding
History
Modern thermite rail welding was first developed by Hans Goldschmidt in the mid-1890s as another application of the thermite reaction, which he was initially exploring as a means of producing high-purity chromium and manganese. The first rail line was welded using the process in Essen, Germany in 1899, and thermite welded rails gained popularity as they had the advantage of greater reliability with the additional wear placed on rails by new electric and high speed rail systems. Some of the earliest adopters of the process were the cities of Dresden, Leeds, and Singapore. In 1904 Goldschmidt established his eponymous Goldschmidt Thermit Company (known by that name today) in New York City to bring the practice to railways in North America.
In 1904, George E. Pellissier, an engineering student at Worcester Polytechnic Institute who had been following Goldschmidt's work, reached out to the new company as well as the Holyoke Street Railway in Massachusetts. Pellissier oversaw the first installation of track in the United States using this process on August 8, 1904, and went on to improve upon it further for both the railway and Goldschmidt's company as an engineer and superintendent, including early developments in continuous welded rail processes that allowed the entirety of each rail to be joined rather than the foot and web alone. Although not all rail welds are completed using the thermite process, it still remains a standard operating procedure throughout the world.
Process
Typically, the ends of the rails are cleaned, aligned flat and true, and spaced apart . This gap between rail ends for welding is to ensure consistent results in the pouring of the molten steel into the weld mold. In the event of a welding failure, the rail ends can be cropped to a gap, removing the melted and damaged rail ends, and a new weld attempted with a special mould and larger thermite charge. A two or three piece hardened sand mould is clamped around the rail ends, and a torch of suitable heat capacity is used to preheat the ends of the rail and the interior of the mould.
The proper amount of thermite with alloying metal is placed in a refractory crucible, and when the rails have reached a sufficient temperature, the thermite is ignited and allowed to react to completion (allowing time for any alloying metal to fully melt and mix, yielding the desired molten steel or alloy). The reaction crucible is then tapped at the bottom. Modern crucibles have a self-tapping thimble in the pouring nozzle. The molten steel flows into the mould, fusing with the rail ends and forming the weld.
The slag, being lighter than the steel, flows last from the crucible and overflows the mould into a steel catch basin, to be disposed of after cooling. The entire setup is allowed to cool. The mould is removed and the weld is cleaned by hot chiselling and grinding to produce a smooth joint. Typical time from start of the work until a train can run over the rail is approximately 45 minutes to more than an hour, depending on the rail size and ambient temperature. In any case, the rail steel must be cooled to less than before it can sustain the weight of rail locomotives.
When a thermite process is used for track circuits – the bonding of wires to the rails with a copper alloy, a graphite mould is used. The graphite mould is reusable many times, because the copper alloy is not as hot as the steel alloys used in rail welding. In signal bonding, the volume of molten copper is quite small, approximately and the mould is lightly clamped to the side of the rail, also holding a signal wire in place. In rail welding, the weld charge can weigh up to .
The hardened sand mould is heavy and bulky, must be securely clamped in a very specific position, and is then subjected to intense heat for several minutes before firing the charge. When rail is welded into long strings, the longitudinal expansion and contraction of the steel must be taken into account. British practice sometimes uses a sliding joint of some sort at the end of long runs of continuously welded rail, to allow some movement. Alternatively, by using heavy concrete sleepers and an extra amount of ballast at the sleeper ends, the track, which is prestressed according to the ambient temperature at the time of its installation, develops compressive stress in hot ambient temperatures or tensile stress in cold ambient temperatures; its strong attachment to the heavy sleepers prevents sun kink (buckling) or other deformation.
Current practice is to use welded rails throughout on high speed lines, and expansion joints are kept to a minimum, often only to protect junctions and crossings from excessive stress. American practice appears to be very similar, a straightforward physical restraint of the rail. The rail is prestressed, or considered "stress neutral" at some particular ambient temperature. This "neutral" temperature will vary according to local climate conditions, taking into account lowest winter and warmest summer temperatures.
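The stress that builds in rail restrained away from its neutral temperature follows directly from constrained thermal expansion, σ = E·α·ΔT. A rough illustration (the modulus and expansion coefficient below are typical textbook values for steel, not figures from this article):

```python
# Thermal stress in fully restrained continuously welded rail:
#   sigma = E * alpha * delta_T
E = 200e9        # Young's modulus of rail steel, Pa (typical value)
alpha = 1.2e-5   # linear thermal expansion coefficient of steel, 1/K (typical value)

delta_T = 30.0   # K away from the stress-neutral temperature
sigma = E * alpha * delta_T
print(f"restrained-rail stress: {sigma / 1e6:.0f} MPa")  # 72 MPa
```

The stress is compressive when the rail is hotter than its neutral temperature and tensile when colder, which is why the neutral temperature is chosen between the local seasonal extremes.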
The rail is physically secured to the ties or sleepers with rail anchors, or anti-creepers. If the track ballast is good and clean and the ties are in good condition, and the track geometry is good, then the welded rail will withstand ambient temperature swings normal to the region.
Remote welding
Remote exothermic welding is a type of exothermic welding process for joining two electrical conductors from a distance. The process reduces the inherent risks associated with exothermic welding and is used in installations that require a welding operator to permanently join conductors a safe distance from the superheated copper alloy.
The process incorporates either an igniter for use with standard graphite molds or a consumable sealed drop-in weld metal cartridge, semi-permanent graphite crucible mold, and an ignition source that tethers to the cartridge with a cable that provides the safe remote ignition.
See also
Rail lengths
References
External links
Exothermic Welding Powder - Learn how Exothermic Welding is done, AmiableWeld
History of Cleveland
Welding | Exothermic welding | [
"Engineering"
] | 2,341 | [
"Welding",
"Mechanical engineering"
] |
10,906,237 | https://en.wikipedia.org/wiki/Precipitation%20types | In meteorology, the different types of precipitation often include the character, formation, or phase of the precipitation which is falling to ground level. There are three distinct ways that precipitation can occur. Convective precipitation is generally more intense, and of shorter duration, than stratiform precipitation. Orographic precipitation occurs when moist air is forced upwards over rising terrain and condenses on the slope, such as a mountain.
Precipitation can fall in liquid or solid phases, as a mixture of both, or transition between them at the freezing level. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact with a surface within a subfreezing air mass gains the preceding adjective "freezing", becoming freezing rain or freezing drizzle. Slush is a mixture of both liquid and solid precipitation. Frozen forms of precipitation include snow, ice crystals, ice pellets (sleet), hail, and graupel. Their respective intensities are classified either by rate of precipitation or by visibility restriction.
Phases
Precipitation falls in many forms, or phases. They can be subdivided into:
Liquid precipitation:
Drizzle (DZ)
Rain (RA)
Cloudburst (CB)
Freezing/Mixed precipitation:
Freezing drizzle (FZDZ)
Freezing rain (FZRA)
Rain and snow mixed / Slush (RASN)
Drizzle and snow mixed / Slush (DZSN)
Frozen precipitation:
Snow (SN)
Snow grains (SG)
Ice crystals (IC)
Ice pellets / Sleet (PL)
Snow pellets / Graupel (GS)
Hail (GR)
Megacryometeors (MC)
The parenthesized letters are the shortened METAR codes for each phenomenon.
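In actual METAR reports these abbreviations are combined with intensity prefixes. A minimal decoder for the precipitation codes listed above (transcribed from this list; real METAR parsing also handles descriptors such as SH or TS, which are not covered here):

```python
# METAR precipitation phenomenon codes, as listed above.
PRECIP_CODES = {
    "DZ": "drizzle",
    "RA": "rain",
    "FZDZ": "freezing drizzle",
    "FZRA": "freezing rain",
    "RASN": "rain and snow mixed",
    "DZSN": "drizzle and snow mixed",
    "SN": "snow",
    "SG": "snow grains",
    "IC": "ice crystals",
    "PL": "ice pellets (sleet)",
    "GS": "snow pellets (graupel)",
    "GR": "hail",
}

def decode(code: str) -> str:
    """Strip a leading intensity sign ('-' light, '+' heavy), then look up the phenomenon."""
    intensity = {"-": "light ", "+": "heavy "}.get(code[:1], "")
    body = code.lstrip("+-")
    return intensity + PRECIP_CODES.get(body, "unknown phenomenon")

print(decode("-FZRA"))  # light freezing rain
print(decode("SN"))     # snow
```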
Mechanisms
Precipitation occurs when local air becomes saturated with water vapor (often supplied by evapotranspiration) and can no longer hold all of its water vapor in gaseous form, so the excess condenses and creates clouds. This occurs when less dense moist air cools, usually when an air mass rises through the atmosphere to higher and cooler altitudes. However, an air mass can also cool without a change in altitude (e.g. through radiative cooling, or ground contact with cold terrain).
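The saturation point drops sharply as air cools, which is why rising (and therefore cooling) air condenses its vapor. A sketch using the widely used Magnus approximation for saturation vapor pressure over water (the coefficients are standard Magnus constants, not values from this article):

```python
import math

def saturation_vapor_pressure(t_celsius: float) -> float:
    """Magnus approximation: saturation vapor pressure over liquid water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Cooling air lowers how much vapor it can hold before condensation begins:
for t in (30, 20, 10, 0):
    print(f"{t:3d} C -> {saturation_vapor_pressure(t):5.1f} hPa")
```

Relative humidity is the ratio of the actual vapor pressure to this saturation value, so cooling an air mass raises its relative humidity toward 100% without any water being added.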
Convective precipitation occurs when air rises vertically through the (temporarily) self-sustaining mechanism of convection. Stratiform precipitation occurs when large air masses rise diagonally as larger-scale winds and atmospheric dynamics force them to move over each other. Orographic precipitation is similar, except the upwards motion is forced when a moving air mass encounters the rising slope of a landform such as a mountain ridge or slope.
Convectional
Convection occurs when the Earth's surface, especially within a conditionally unstable or moist atmosphere, becomes heated more than its surroundings, in turn leading to significant evapotranspiration. Convective rain and light precipitation are the result of large convective clouds, for example cumulonimbus or cumulus congestus clouds. In its initial stages, this precipitation generally falls as showers covering a small area and with rapidly changing intensity. Convective precipitation falls over a given area for a relatively short time, as convective clouds have limited vertical and horizontal extent and cannot hold large amounts of water. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform and convective precipitation often both occur within the same complex of convection-generated cumulonimbus.
Graupel and hail indicate convection when either or both are present at the surface. They indicate that some form of precipitation is forming at the freezing level, the varying point in the atmosphere at which the temperature is 0 °C. In mid-latitude regions, convective precipitation is often associated with cold fronts, where it is often found behind the front, occasionally initiating a squall line.
Cyclonic
Frontal precipitation is the result of frontal systems surrounding extratropical cyclones or lows, which form when warm and tropical air meets cooler, subpolar air. Frontal precipitation typically falls out from nimbostratus clouds.
When masses of air with different densities (moisture and temperature characteristics) meet, the less dense warmer air overrides the more dense colder air. The warmer air is forced to rise and, if conditions are right, creates an effect of saturation and condensation, causing precipitation. In turn, precipitation can enhance the temperature and dewpoint contrast along a frontal boundary, creating more precipitation while the front lasts. Passing weather fronts often result in sudden changes in environmental temperature, and in turn the humidity and pressure in the air at ground level as different air masses switch the local weather.
Warm fronts occur where advancing warm air pushes out a previously extant cold air mass. The warm air overrides the cooler air and moves upward. Warm fronts are followed by extended periods of light rain and drizzle because, after the warm air rises above the cooler air (which remains on the ground), it gradually cools as it expands while being lifted, which forms clouds and leads to precipitation.
Cold fronts occur when an advancing mass of cooler air dislodges and plows through a mass of warm air. This type of transition is sharper and faster than warm fronts, since cold air is more dense than warm air and sinks through in gravity's favor. Precipitation duration is often shorter and generally more intense than that which occurs ahead of warm fronts.
A wide variety of weather can be found along an occluded front, which usually forms around mature cyclonic activity; its passage is usually associated with a drying of the air mass.
Orographic
Orographic or relief rainfall is caused when masses of air are forced up the side of elevated land formations, such as large mountains or plateaus (often referred to as an upslope effect). The lift of the air up the side of the mountain results in adiabatic cooling with altitude, and ultimately condensation and precipitation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward (downwind) side, as wind carries moist air masses and orographic precipitation. Moisture is precipitated and removed by orographic lift, leaving drier air (see Foehn) on the descending (generally warming), leeward side where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale (Waialeale), on the island of Kauai, is notable for its extreme rainfall. It currently has the highest average annual rainfall on Earth, with approximately per year. Storm systems affect the region with heavy rains during winter, between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koolau) and leeward (Kona) regions based upon location relative to the higher surrounding mountains. Windward sides face the east-to-northeast trade winds and receive much more clouds and rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover. On the island of Oahu, high amounts of clouds and often rain can usually be observed around the windward mountain peaks, while the southern parts of the island (including most of Honolulu and Waikiki) receive dramatically less rainfall throughout the year.
In South America, the Andes mountain range blocks Pacific Ocean winds and moisture that arrives on the continent, resulting in a desert-like climate just downwind across western Argentina. The Sierra Nevada range creates the same drying effect in North America, causing the Great Basin Desert, Mojave Desert, and Sonoran Desert.
Intensity
Precipitation is measured using a rain gauge, and more recently remote sensing techniques such as a weather radar. When classified according to the rate of precipitation, rain can be divided into categories. Light rain describes rainfall which falls at a rate of between a trace and per hour. Moderate rain describes rainfall with a precipitation rate of between and per hour. Heavy rain describes rainfall with a precipitation rate above per hour, and violent rain has a rate more than per hour.
Snowfall intensity is classified in terms of visibility instead. When the visibility is over , snow is determined to be light. Moderate snow describes snowfall with visibility restrictions between and . Heavy snowfall describes conditions when visibility is restricted below .
Gallery
See also
Weather
Precipitation
Flood
Cyclone
Low pressure area
References
External links
UK Met Office: Why does it rain?
Precipitation
Hydrology | Precipitation types | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,766 | [
"Hydrology",
"Environmental engineering"
] |
10,906,395 | https://en.wikipedia.org/wiki/Neutron%20supermirror | A neutron supermirror is a highly polished, layered material used to reflect neutron beams. Supermirrors are a special case of multi-layer neutron reflectors with varying layer thicknesses.
The first neutron supermirror concept was proposed by Ferenc Mezei, inspired by earlier work with X-rays.
Supermirrors are produced by depositing alternating layers of strongly contrasting substances, such as nickel and titanium, on a smooth substrate. A single layer of high refractive index material (e.g. nickel) exhibits total external reflection at small grazing angles up to a critical angle θc. For nickel with natural isotopic abundances, θc in degrees is approximately 0.1λ, where λ is the neutron wavelength in ångströms.
A mirror with a larger effective critical angle can be made by exploiting the diffraction (with non-zero losses) that occurs from stacked multilayers. The critical angle of total reflection, in degrees, becomes approximately 0.1mλ, where m is the "m-value" relative to natural nickel. m values in the range of 1–3 are common; for specific high-divergence applications (e.g. focussing optics near the source, choppers, or experimental areas), m = 6 is readily available.
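Assuming the commonly quoted rule of thumb that natural nickel's critical angle is about 0.1 degrees per ångström of neutron wavelength, the m-value scaling can be sketched as:

```python
def critical_angle_deg(wavelength_angstrom, m=1.0, ni_coeff=0.1):
    """Approximate critical angle of total reflection, in degrees.

    ni_coeff ~ 0.1 deg/angstrom is the commonly quoted rule of thumb for
    natural nickel; an m-value supermirror extends the angle m-fold.
    """
    return ni_coeff * m * wavelength_angstrom

# A 4-angstrom cold neutron on an m = 3 supermirror:
theta = critical_angle_deg(4.0, m=3)   # about 1.2 degrees
```

For a 4 Å cold neutron, an m = 3 supermirror thus reflects up to roughly three times the grazing angle that bare natural nickel would.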
Nickel has a positive coherent scattering length and titanium a negative one, and in both elements the absorption cross section is small, which makes Ni-Ti the most efficient technology for reflecting neutrons. The number of Ni-Ti layers needed increases rapidly as m^b, with b in the range 2–4, which affects the cost. This has a strong bearing on the economic strategy of neutron instrument design.
References
Optical materials
Hungarian inventions | Neutron supermirror | [
"Physics"
] | 338 | [
"Optical materials",
"Materials",
"Particle physics",
"Particle physics stubs",
"Matter"
] |
10,907,834 | https://en.wikipedia.org/wiki/Mesocarb | Mesocarb, sold under the brand name Sidnocarb or Sydnocarb and known by the developmental code name MLR-1017, is a psychostimulant medication which has been used in the treatment of psychiatric disorders and for a number of other indications in the Soviet Union and Russia. It is currently under development for the treatment of Parkinson's disease and sleep disorders. It is taken by mouth.
The drug is a selective dopamine reuptake inhibitor (DRI). It is an unusual and unique DRI, acting as a negative allosteric modulator and non-competitive inhibitor of the dopamine transporter (DAT). Chemically, mesocarb contains amphetamine within its structure but has been modified and extended at the amine with a sydnone imine-containing moiety.
Mesocarb was first described by 1971. It was used as a pharmaceutical drug until 2008. In 2021, its nature as a DAT allosteric modulator was reported. As of February 2023, mesocarb was in phase 1 clinical trials for Parkinson's disease. The active enantiomer, armesocarb, is also being developed.
Medical uses
Mesocarb was originally developed in the Soviet Union in the 1970s for a variety of indications including asthenia, apathy, adynamia, and some clinical aspects of depression and schizophrenia. Mesocarb was used for counteracting the sedative effects of benzodiazepines, increasing workload capacity and cardiovascular function, treatment of attention deficit hyperactivity disorder (ADHD) in children, as a nootropic, and as a drug to enhance resistance to extremely cold temperatures. It has also been reported to have antidepressant and anticonvulsant properties.
Available forms
Mesocarb was sold in Russia as 5 mg oral tablets under the brand name Sydnocarb.
Pharmacology
Pharmacodynamics
Mesocarb has been found to act as a selective dopamine reuptake inhibitor (DRI) by blocking the actions of the dopamine transporter (DAT), and lacks the dopamine release characteristic of stimulants such as dextroamphetamine. It was the most selective DAT inhibitor amongst an array of other DAT inhibitors to which it was compared and, in 2017, was reported as the most selective DAT inhibitor described to date.
The affinities (Ki) of mesocarb at the human monoamine transporters in vitro have been reported to be 8.3 nM for the dopamine transporter (DAT), 1,500 nM for the norepinephrine transporter (NET) (181-fold lower than for the DAT), and >10,000 nM for the serotonin transporter (SERT) (>1,205-fold lower than for the DAT). The inhibitory potencies (IC50) of mesocarb at the human monoamine transporters in vitro have been reported to be 0.49 ± 0.14 μM at the DAT, 34.9 ± 14.08 μM at the NET (71-fold lower than for the DAT), and 494.9 ± 17.00 μM at the SERT (1,010-fold lower than for the DAT).
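As a quick arithmetic check, the fold-selectivity figures quoted above follow directly from the ratios of these values:

```python
# Fold-selectivity of mesocarb for the DAT over the NET and SERT, from the
# binding affinities (Ki, in nM) and inhibitory potencies (IC50, in uM)
# quoted above. The SERT Ki is a lower bound (>10,000 nM), so its fold
# value is a lower bound as well.
ki = {"DAT": 8.3, "NET": 1500.0, "SERT": 10_000.0}
ic50 = {"DAT": 0.49, "NET": 34.9, "SERT": 494.9}

ki_folds = {t: round(ki[t] / ki["DAT"]) for t in ("NET", "SERT")}
ic50_folds = {t: round(ic50[t] / ic50["DAT"]) for t in ("NET", "SERT")}

# ki_folds   -> {"NET": 181, "SERT": 1205}   (181-fold and >1,205-fold)
# ic50_folds -> {"NET": 71,  "SERT": 1010}   (71-fold and 1,010-fold)
```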
In 2021, it was discovered that mesocarb is not a conventional DRI but acts as a DAT allosteric modulator or non-competitive inhibitor. In accordance with its nature as an atypical DAT blocker, the drug has atypical effects relative to conventional DRIs. As an example, it shows greater antiparkinsonian activity relative to other DRIs in animals.
Similarly to other DRIs, mesocarb has been found to possess wakefulness-promoting effects.
Pharmacokinetics
Hydroxylated metabolites can be detected in urine for up to 10 days after consumption.
Mesocarb had erroneously been referred to as a prodrug of amphetamine. However, this was based on older literature that relied on gas chromatography as an analytical method. Subsequently, with the advent of mass spectrometry, it has been shown that the presence of amphetamine in prior studies was an artifact of the gas chromatography method. More recent studies using mass spectrometry show that negligible levels of amphetamine are released from mesocarb metabolism.
Chemistry
Mesocarb, also known as 3-(β-phenylisopropyl)-N-phenylcarbamoylsydnonimine, is a substituted phenethylamine and amphetamine and a mesoionic sydnone imine. The amphetamine backbone is present, but the amine nitrogen carries a complicated imine-containing side chain.
Whereas mesocarb (MLR-1017) is a racemic mixture, the enantiopure levorotatory or (R)-enantiomer is known as armesocarb (MLR-1019). Armesocarb is described as the active enantiomer of mesocarb, whereas the (S)- or D-enantiomer is said to be virtually inactive.
It is structurally related to feprosidnine (Sydnophen; 3-(α-methylphenylethyl)sydnone imine).
Synthesis
Feprosidnine (Sydnophen) is converted from the hydrochloride salt (1) into the freebase amine (2). This is then treated with phenylisocyanate (3).
History
Mesocarb was first described in the scientific literature by 1971. It is said to have been used as a pharmaceutical drug from 1971 until 2008. It was said to have been discontinued by its manufacturer in 2008 for business reasons unrelated to the drug itself.
Society and culture
Names
Mesocarb is the generic name of the drug and its INN. It is also known by the synonym fensidnimine as well as by the brand names Sydnocarb and Synocarb. The drug is additionally known by its developmental code name MLR-1017 (for Parkinson's disease).
Status
Mesocarb is almost unknown in the western world and is neither used in medicine nor studied scientifically to any great extent outside of Russia and other countries in the former Soviet Union. It has however been added to the list of drugs under international control and is a scheduled substance in most countries, despite its multiple therapeutic applications and reported lack of significant abuse potential.
Research
Parkinson's disease
Mesocarb has been under development for the treatment of Parkinson's disease since 2016. As of February 2023, it was in phase 1 clinical trials for this indication. However, no recent development has been reported. Mesocarb's active enantiomer armesocarb is also under development.
See also
List of Russian drugs
References
Antidepressants
Antiparkinsonian agents
Dopamine reuptake inhibitors
Drugs in the Soviet Union
Experimental drugs
Imines
Oxadiazoles
Russian drugs
Stimulants
Substituted amphetamines
Ureas
Wakefulness-promoting agents
Withdrawn drugs | Mesocarb | [
"Chemistry"
] | 1,517 | [
"Organic compounds",
"Drug safety",
"Withdrawn drugs",
"Ureas"
] |
10,908,732 | https://en.wikipedia.org/wiki/Younger%20Memnon | The Younger Memnon is an Ancient Egyptian statue, one of two colossal granite statues from the Ramesseum mortuary temple in Thebes, Upper Egypt. It depicts the Nineteenth Dynasty Pharaoh Ramesses II wearing the Nemes head-dress with a cobra diadem on top. The damaged statue has since been separated from its upper torso and head. These sections can now be found in the British Museum. The remainder of the statue remains in Egypt. It is one of a pair that originally flanked the Ramesseum's doorway. The head of the other statue is still found at the temple.
Description
The Younger Memnon is high × wide (across the shoulders). It weighs 7.25 tons and was cut from a single block of two-coloured granite. The statue departs slightly from normal conventions in that the eyes look down slightly more than usual, so as to exploit the different colours (broadly speaking, the head is in one colour and the body in another).
Acquisition
Belzoni
Napoleon's men tried but failed to dig out and remove the statue to France during his 1798 expedition to Egypt, during which he acquired but then lost the Rosetta Stone. It was during this attempt that the hole on the right of the torso (just above Ramesses's right nipple) is said to have been made.
Following an idea mentioned to him by his friend Johann Ludwig Burckhardt of digging out the statue and bringing it to Britain, the British Consul General Henry Salt hired the adventurer Giovanni Belzoni in Cairo in 1815 for this purpose. Using his hydraulics and engineering skills, Belzoni had it pulled on wooden rollers by ropes to the bank of the Nile opposite Luxor by hundreds of workmen. However, no boat was yet available to take it down to Alexandria, and so Belzoni carried out an expedition to Nubia, returning by October. With French collectors also in the area possibly looking to acquire the statue, he then sent workmen to Esna to obtain a suitable boat and in the meantime carried out further excavations in Thebes. He finally loaded the products of these digs, plus the Memnon, onto this boat and got it to Cairo by 15 December 1816. There he received and obeyed orders from Salt to unload all but the Memnon, which was then sent on to Alexandria and London without him.
Anticipated by Shelley's poem "Ozymandias", the head arrived in 1818 on in Deptford. In London it acquired its name "The Younger Memnon", after the "Memnonianum", the name in classical times for the Ramesseum. The two statues at the entrance of the mortuary temple of Amenhotep III were associated with Memnon in classical times and are still known as the Colossi of Memnon; the British Museum sculpture and its pair seem to have either been mistaken for them or suffered a similar misnaming.
British Museum
It was later acquired from Salt in 1821 by the British Museum and was at first displayed in the old Townley Galleries (now demolished) for several years, then installed (using heavy ropes and lifting equipment and with help from the Royal Engineers) in 1834 in the new Egyptian Sculpture Gallery (now Room 4, where it now resides). The soldiers were commanded by a Waterloo veteran, Major Charles Cornwallis Dansey, lame from a wound sustained there, who therefore sat whilst commanding them. On its arrival there, it could be said to be the first piece of Egyptian sculpture to be recognized as a work of art rather than a curiosity low down in the chain of art (with ancient Greek art at the pinnacle of this chain). It is museum number EA 19.
In February 2010, the statue was featured as object 20 in A History of the World in 100 Objects, a BBC Radio 4 programme by British Museum director Neil MacGregor.
References
Sources
British Museum Catalogue entry
3D model of the Younger Memnon via photogrammetric survey
Encyclopaedic.net – extracts from Belzoni's account
Publications
G. Belzoni, Narrative of the operations and recent discoveries within the pyramids, temples, tombs, and excavations in Egypt and Nubia I (London, John Murray, 1822), pp. 61–80
S. Quirke and A.J. Spencer, The British Museum book of ancient Egypt (London, The British Museum Press, 1992), pp. 126–7
Albert M. Lythgoe, 'Statues of the Goddess Sekhmet', The Metropolitan Museum of Art Bulletin Vol. 14, No. 10, Part 2 (Oct., 1919), pp. 1+3-23
Stephanie Moser, Wondrous Curiosities: Ancient Egypt at the British Museum (University of Chicago Press, 2006),
Sculptures of ancient Egypt
Ancient Egyptian sculptures in the British Museum
13th-century BC works
Colossal statues
Sculptures in the United Kingdom
Ramesses II | Younger Memnon | [
"Physics",
"Mathematics"
] | 1,002 | [
"Quantity",
"Colossal statues",
"Physical quantities",
"Size"
] |
10,910,742 | https://en.wikipedia.org/wiki/Mark%20Napier%20%28artist%29 | Mark Napier is an early adopter of the web and a pioneer of digital and Internet art (net.art) in the United States, known for creating interactive online artwork that challenges traditional definitions of art. He uses code as an expressive form, and the Internet as his exhibition space and laboratory.
Napier developed his first web-based applications for financial data in 1996. He is the author of his own website, potatoland.org, his online studio where many of his net artworks can be found, such as Shredder 1.0, net.flag, Riot, etc.
Personal life
Mark Napier was born in 1961 in Springfield, New Jersey. He lives and works in New York City. Currently, he is a consultant for a new personal finance company.
Education
Mark Napier graduated in 1984 with a bachelor's degree in Fine Arts from Syracuse University.
Life and work
Trained as a painter, Napier worked as a self-taught programmer in New York's financial markets until 1995, when a friend introduced him to the web. With Levi Asher, Napier collaborated on his first website ("Chicken Wire Mother") and began several experiments with hypertext in which he explored juxtaposing meanings and pop culture symbols. In The Distorted Barbie site, Napier created a family of Photoshopped Barbie also-rans that riffed on the "sacred cash-cow" status of the capitalist icon. Mattel was not amused and threatened Napier with a cease-and-desist letter, which prompted a wholesale copying of the site by enraged fans.
In 1997, shortly after the Distorted Barbie episode, Napier opened potatoland.org, an online studio for interactive work where he explored software as an art medium with such pieces as Digital Landfill and Shredder 1.0 (1998). Both pieces were included in the seminal "net_condition" show at ZKM in Karlsruhe and attracted critical attention: Shredder was shown at Ars Electronica and Digital Landfill was written up in the Village Voice. Over the next five years Napier explored the networked software environment, creating work that challenged the definition of the art object. The salient features of these pieces: 1) the artwork can be altered by the viewer/visitor, 2) it responds to actions from the viewer/visitor, and 3) it typically relies on viewer/visitor actions to enact the work. The work can change, possibly unpredictably, over time, and often appropriates other network property to use as raw material, e.g., websites, flags, images. The art is "massively public": it is accessible to and can be altered by anybody with access to the network.
These pieces exist in part as performances, in part as places that a viewer visits, in part as compositions, like music, that unfold differently when played under different circumstances. The overriding experience is that the art object is disembodied, existing in many places at once, with many authors contributing to the piece, with many appearances, over time, with no clear end point. The artwork is in the algorithm, the process, which manifests itself in an unending series of appearances on the screen.
During this time Napier produced Riot, an alternative browser shown in the 2002 Whitney Biennial, Feed, commissioned by SFMOMA and shown in the "010101" show at SFMOMA (2001), and net.flag, commissioned by the Guggenheim Museum. In 2002 net.flag and John Simon's Unfolding Object became the first network-based artworks to be acquired by a major museum.
These pieces turn the structure of the software/network environment inside out, hacking the inner workings of virtual space, and often collide physical metaphors with the insolidity of the net environment, e.g. shredding (Shredder), decaying (Digital Landfill), breaking down neighborhoods (Riot), creating a flag (net.flag). By hacking the HTTP protocol he turns the web into an abstract expressionist painting or a meditative color field. Matt Mirapaul, writing in the New York Times, described Feed as "a digital action painting, albeit with actual action." Napier has said he is influenced by Jackson Pollock: he admires how Pollock used the material, the way "he explored paint in its most raw form, without disguising it." In Shredder he wanted to use the web as raw material, so that the code (HTML, text, images, and colors) would become a visual aesthetic in its own right. Cy Twombly has influenced him as well, for the "chaotic, accidental, seemingly unplanned quality of his work."
This repurposing of the matter of the web continues in Black and White (2003), a transitional piece in which Napier reads the text of the Old Testament, New Testament and Koran, as a stream of zeroes and ones, then treats the stream of binary data as two forces that drive a black and white line on the screen. The lines are propelled by the 0 and 1 values from the data, and are mutually attracted to one another, creating a swirling, orbiting dance as the black and white points seek equilibrium. The Black and White algorithm translates writing from a form that is meaningful to human beings into a form that is equally precise, but that can only be understood as a gestalt: a moment of insight that points to experiences that cannot be transcribed into text.
In the period following 2003, Napier explored a more private side of software, making meditative pieces and drawing on the history of painting for inspiration. In three solo shows at bitforms gallery in Manhattan, Napier leaves the browser and moves towards a more tactile interactivity, showing work that is graphically rich and minimally interactive. Still addressing the expression of power in the global network, Napier turns to the Empire State Building as a symbol of nationalism, military and economic might. By transliterating the monument into software, Napier creates a contradiction: a soft, malleable, bouncing skyscraper. Flexible where the original is rigid, small where the original is huge, at once delicate and unbreakable, Napier's skyscraper collides the worlds of steel with the world of software, and reveals the anxiety of transitional time.
These pieces, with names like KingKong, Cyclops Birth and Smoke, deal with the expression of power in the Information Age. The seeming permanence of steel, the formative material of the Industrial Revolution, appears almost quaint as we navigate an environment that is increasingly made of electricity, magnetism and light.
As they comment on the condition of human media in transition, these pieces also upset the conventions of visual art, long dominated by permanent unique objects. By creating virtual "objects" Napier's work exists in a space that is visible, yet forever just out of reach. These objects teeter on the edge of solidity and tempt the viewer to freeze them, hold them, to return them to the familiar and comfortably solid world.
In 2013 Napier created an Android app (Kaarme Scholarship Search) that allowed individuals to search for both colleges and scholarships. The app gives high-school students a LinkedIn-like site where they can network with colleges and counselors and find the resources they need to get into college. This project was the company's first step into mobile apps, a critical technology for the high-school demographic.
A recipient of grants from Creative Capital, NYFA, and the Greenwall Foundation, Napier has also been commissioned to create artwork for SFMOMA, the Whitney Museum, and the Guggenheim. Napier's work has also been exhibited at the Centre Pompidou, PS1, the Walker Art Center, Ars Electronica, The Kitchen, Kunstlerhaus Vienna, Transmediale, Bard College, the Princeton Art Museum, the ASCII Digital Festival, bitforms gallery in Seoul, and la Villette in Paris, among many others.
Notable projects
The Distorted Barbie (1996)
Digital Landfill (1998)
Shredder 1.0 (1998)
Riot (1999)
©Bots (2000)
net.flag (2002)
Black and White (2003)
Kaarme Scholarship Search (2013)
Awards and honors
2007 New York Foundation for the Arts Fellowship in Computer Arts
2002 Creative Capital grant
2001 Nominated for a Webby Award in the Arts category
2001 New York Foundation for the Arts, fellowship in Computer Arts
2001 Greenwall Foundation grant for “Point-to-Point”
2000 Fraunhofer Society prize for “Point-to-Point”
1999 The Shredder awarded honorable mention by Ars Electronica 99.
1998 Digital Landfill receives first prize in ASCII Digital 99 festival
References
Mark Napier's official website
Interview with Mark Napier by Tilman Baumgaertel
Interview with Mark Napier by Jon Ippolito, January 2002
Interview with Mark Napier by Andreas Broegger
010101: Art in Technological Times (catalog), pp. 112–113
Tilman Baumgartel, net.art 2.0, Kunst Nurnberg, pp. 182–191
Christiane Paul, Digital Art, Thames & Hudson Ltd
Ebon Fisher, Wigglism Leonardo Journal 40, No. I, p. 40
New Media Art by Mark Tribe and Reena Jana, Taschen p. 70
From Steel to Software by Lauren Cornell,
Lieser, Wolf. Digital Art. Langenscheidt: h.f. ullmann. 2009 pp. 46–49
Interview of Mark Napier by Kristine Feeks,Spring 2001
Mark Napier's official website biography
External links
Napier's website, featuring some of his artwork
Napier's earlier website, featuring his controversial Barbie pieces
A Harvard page discussing the legal standpoints of the Barbie controversy
Thomas Dreher: Tomatoland (Napier) (in German)
Thomas Dreher: History of Computer Art, chap. VI.3.3 Browser Art with a wider explanation of Mark Napier´s "The Shredder" (1998).
American digital artists
1961 births
Living people
Artists from Newark, New Jersey
Artists from New York (state)
Net.artists
Syracuse University alumni | Mark Napier (artist) | [
"Technology"
] | 2,056 | [
"Multimedia",
"Net.artists"
] |
7,214,278 | https://en.wikipedia.org/wiki/Decision%20field%20theory | Decision field theory (DFT) is a dynamic-cognitive approach to human decision making. It is a cognitive model that describes how people actually make decisions rather than a rational or normative theory that prescribes what people should or ought to do. It is also a dynamic model of decision-making rather than a static model, because it describes how a person's preferences evolve across time until a decision is reached rather than assuming a fixed state of preference. The preference evolution process is mathematically represented as a stochastic process called a diffusion process. It is used to predict how humans make decisions under uncertainty, how decisions change under time pressure, and how choice context changes preferences. This model can be used to predict not only the choices that are made but also decision or response times.
The paper "Decision Field Theory" was published by Jerome R. Busemeyer and James T. Townsend in 1993. The DFT has been shown to account for many puzzling findings regarding human choice behavior including violations of stochastic dominance, violations of strong stochastic transitivity, violations of independence between alternatives, serial-position effects on preference, speed accuracy tradeoff effects, inverse relation between probability and decision time, changes in decisions under time pressure, as well as preference reversals between choices and prices. The DFT also offers a bridge to neuroscience. Recently, the authors of decision field theory also have begun exploring a new theoretical direction called Quantum Cognition.
Introduction
The name decision field theory was chosen to reflect the fact that the inspiration for this theory comes from an earlier approach – avoidance conflict model contained in Kurt Lewin's general psychological theory, which he called field theory. DFT is a member of a general class of sequential sampling models that are commonly used in a variety of fields in cognition.
The basic ideas underlying the decision process for sequential sampling models is illustrated in Figure 1 below. Suppose the decision maker is initially presented with a choice between three risky prospects, A, B, C, at time t = 0. The horizontal axis on the figure represents deliberation time (in seconds), and the vertical axis represents preference strength. Each trajectory in the figure represents the preference state for one of the risky prospects at each moment in time.
Intuitively, at each moment in time, the decision maker thinks about various payoffs of each prospect, which produces an affective reaction, or valence, to each prospect. These valences are integrated across time to produce the preference state at each moment. In this example, during the early stages of processing (between 200 and 300 ms), attention is focused on advantages favoring prospect C, but later (after 600 ms) attention is shifted toward advantages favoring prospect A. The stopping rule for this process is controlled by a threshold (which is set equal to 1.0 in this example): the first prospect to reach the top threshold is accepted, which in this case is prospect A after about two seconds. Choice probability is determined by the first option to win the race and cross the upper threshold, and decision time is equal to the deliberation time required by one of the prospects to reach this threshold.
The threshold is an important parameter for controlling speed–accuracy tradeoffs. If the threshold is set to a lower value (about .30) in Figure 1, then prospect C would be chosen instead of prospect A (and chosen earlier). Thus decisions can reverse under time pressure. A high threshold requires a strong preference state to be reached, which allows more information about the prospects to be sampled, prolonging the deliberation process and increasing accuracy. A low threshold allows a weak preference state to determine the decision, which cuts off the sampling of information about the prospects, shortening the deliberation process and decreasing accuracy. Under high time pressure, decision makers must use a low threshold; under low time pressure, a higher threshold can be used to increase accuracy. Very careful and deliberative decision makers tend to use a high threshold, whereas impulsive and careless decision makers use a low threshold.
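The race-to-threshold process just described can be sketched with a minimal sequential sampling simulation. The drift and noise values below are hypothetical, chosen only for illustration; they are not parameters from the original papers.

```python
import random

def race_simulation(drifts, threshold=1.0, dt=0.01, noise=0.3, seed=1):
    """Accumulate a noisy preference state for each prospect until one
    crosses the threshold; return (index of winning prospect, time).

    drifts: hypothetical mean momentary valence for each prospect.
    """
    random.seed(seed)
    n = len(drifts)
    P = [0.0] * n                    # preference states start at zero
    t = 0.0
    while True:
        t += dt
        for i in range(n):
            # valence = mean drift plus moment-to-moment attention noise
            P[i] += drifts[i] * dt + random.gauss(0.0, noise) * dt ** 0.5
        best = max(range(n), key=lambda i: P[i])
        if P[best] >= threshold:
            return best, t

# Prospects A, B, C with hypothetical mean valences.
winner, rt = race_simulation([0.5, 0.1, 0.3], threshold=1.0)
```

Raising the threshold lengthens deliberation and makes the choice track the mean valences more reliably; lowering it produces faster, noisier decisions, as described above.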
To provide a somewhat more formal description of the theory, assume that the decision maker has a choice among three actions, and also suppose for simplicity that there are only four possible final outcomes. Thus each action is defined by a probability distribution across these four outcomes. The affective values produced by each payoff are represented by the values mj. At any moment in time, the decision maker anticipates the payoff of each action, which produces a momentary evaluation, Ui(t), for action i. This momentary evaluation is an attention-weighted average of the affective evaluations of the payoffs: Ui(t) = Σ Wij(t)mj. The attention weight at time t, Wij(t), for payoff j offered by action i, is assumed to fluctuate according to a stationary stochastic process. This reflects the idea that attention shifts from moment to moment, causing changes in the anticipated payoff of each action across time. The momentary evaluation of each action is compared with those of the other actions to form a valence for each action at each moment, vi(t) = Ui(t) – U.(t), where U.(t) equals the average of the momentary evaluations across all actions. The valence represents the momentary advantage or disadvantage of each action; the total valence balances out to zero, so all the options cannot become attractive simultaneously. Finally, the valences are the inputs to a dynamic system that integrates them over time to generate the output preference states. The output preference state for action i at time t is symbolized as Pi(t). The dynamic system is described by the following linear stochastic difference equation for a small time step h in the deliberation process: Pi(t+h) = Σ sijPj(t) + vi(t+h). The positive self-feedback coefficient, sii = s > 0, controls the memory for past input valences for a preference state. Values of sii < 1 suggest decay in the memory or impact of previous valences over time, whereas values of sii > 1 suggest growth in impact over time (primacy effects).
The negative lateral feedback coefficients, sij = sji < 0 for i not equal to j, produce competition among actions so that the strong inhibit the weak. In other words, as preference for one action grows stronger, then this moderates the preference for other actions. The magnitudes of the lateral inhibitory coefficients are assumed to be an increasing function of the similarity between choice options. These lateral inhibitory coefficients are important for explaining context effects on preference described later. Formally, this is a Markov process; matrix formulas have been mathematically derived for computing the choice probabilities and distribution of choice response times.
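The update equation Pi(t+h) = Σ sijPj(t) + vi(t+h) can be sketched in a few lines of Python. The payoff values, attention-sampling scheme, and feedback coefficients below are hypothetical illustrations, not the parameters used in the original papers.

```python
import random

random.seed(0)

# Hypothetical setup: three actions, four possible payoffs with
# affective values m_j (values chosen for illustration only).
m = [1.0, 0.5, -0.2, -1.0]

def momentary_valences():
    """One moment of the process: draw fluctuating attention weights
    W_ij(t), form evaluations U_i(t) = sum_j W_ij(t) m_j, and return
    the mean-centered valences v_i(t), which always sum to zero."""
    U = []
    for _ in range(3):                         # one evaluation per action
        w = [random.random() for _ in m]
        total = sum(w)
        w = [x / total for x in w]             # normalize attention weights
        U.append(sum(wj * mj for wj, mj in zip(w, m)))
    mean_U = sum(U) / len(U)
    return [u - mean_U for u in U]

# Feedback matrix S: positive self-feedback on the diagonal,
# negative lateral inhibition off the diagonal.
s_self, s_lat = 0.95, -0.02
S = [[s_self if i == j else s_lat for j in range(3)] for i in range(3)]

P = [0.0, 0.0, 0.0]                            # initial preference states
for _ in range(500):                           # P(t+h) = S P(t) + v(t+h)
    v = momentary_valences()
    P = [sum(S[i][j] * P[j] for j in range(3)) + v[i] for i in range(3)]
```

Because the valences are mean-centered and the self-feedback is below 1, the preference states fluctuate but remain bounded; stronger lateral inhibition sharpens the competition among similar options.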
The decision field theory can also be seen as a dynamic and stochastic random walk theory of decision making, presented as a model positioned between lower-level neural activation patterns and more complex notions of decision making found in psychology and economics.
Explaining context effects
The DFT is capable of explaining context effects that many decision making theories are unable to explain.
Many classic probabilistic models of choice satisfy two rational choice principles. The first, independence of irrelevant alternatives, states that if the probability of choosing option X is greater than that of option Y when only X and Y are available, then X should remain more likely to be chosen over Y even when a new option Z is added to the choice set. In other words, adding an option should not change the preference relation between the original pair of options. The second, regularity, states that the probability of choosing option X from a set containing only X and Y should be greater than or equal to the probability of choosing X from a larger set containing X, Y, and a new option Z. In other words, adding an option should only decrease the probability of choosing one of the original options. However, consumer researchers studying human choice behavior have found context effects that systematically violate both of these principles.
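As a concrete illustration, the two principles can be checked against a table of choice probabilities. The numbers below are invented for illustration and mimic a similarity-style reversal of the kind discussed next.

```python
# Hypothetical choice probabilities, invented for illustration.
binary = {"X": 0.6, "Y": 0.4}                  # choice set {X, Y}
triadic = {"X": 0.30, "Y": 0.40, "S": 0.30}    # after adding a similar option S

# Independence of irrelevant alternatives is violated when adding an
# option reverses the preference relation between the original pair.
iia_violated = binary["X"] > binary["Y"] and triadic["X"] < triadic["Y"]

# Regularity is violated when adding an option *increases* the
# probability of choosing one of the original options.
regularity_violated = triadic["X"] > binary["X"]
```

With these numbers the preference between X and Y reverses (an IIA violation) while regularity still holds; the attraction effect discussed below is the case where the regularity check also fails.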
The first context effect is the similarity effect. This effect occurs with the introduction of a third option S that is similar to X but not dominated by X. For example, suppose X is a BMW, Y is a Ford Focus, and S is an Audi. The Audi is similar to the BMW because both are not very economical but are high quality and sporty. The Ford Focus differs from the BMW and the Audi because it is more economical but lower quality. Suppose in a binary choice, X is chosen more frequently than Y. Next suppose a new choice set is formed by adding an option S that is similar to X. If X is similar to S, and both are very different from Y, then people tend to view X and S as one group and Y as another option. Thus the probability of choosing Y remains about the same whether or not S is offered, but the probability of choosing X drops by approximately half with the introduction of S. This causes the probability of choosing X to fall below that of Y when S is added to the choice set, violating the independence of irrelevant alternatives property: in a binary choice, X is chosen more frequently than Y, but when S is added, Y is chosen more frequently than X.
The second context effect is the compromise effect. This effect occurs when an option C is added that is a compromise between X and Y. For example, when choosing between C = Honda and X = BMW, the latter is less economical but higher quality. However, if another option Y = Ford Focus is added to the choice set, then C = Honda becomes a compromise between X = BMW and Y = Ford Focus. Suppose in a binary choice, X (BMW) is chosen more often than C (Honda). But when option Y (Ford Focus) is added to the choice set, option C (Honda) becomes the compromise between X (BMW) and Y (Ford Focus), and C is then chosen more frequently than X. This is another violation of the independence of irrelevant alternatives property: X is chosen more often than C in a binary choice, but when option Y is added to the choice set, C is chosen more often than X.
The third effect is called the attraction effect. This effect occurs when the third option D is very similar to X but defective compared to X. For example, D may be a new sporty car developed by a new manufacturer that is similar to option X = BMW but costs more than the BMW. Therefore, there is little or no reason to choose D over X, and in this situation D is rarely chosen over X. However, adding D to a choice set boosts the probability of choosing X: the probability of choosing X from a set containing X, Y, and D is larger than the probability of choosing X from a set containing only X and Y. The defective option D makes X shine, and this attraction effect violates the principle of regularity, which says that adding another option cannot increase the popularity of an option over the original subset.
DFT accounts for all three effects using the same principles and the same parameters across all three findings. According to DFT, the attention-switching mechanism is crucial for producing the similarity effect, whereas the lateral inhibitory connections are critical for explaining the compromise and attraction effects. If the attention-switching process is eliminated, the similarity effect disappears, and if the lateral connections are all set to zero, the attraction and compromise effects disappear. This property of the theory entails an interesting prediction about the effects of time pressure on preferences. The contrast effects produced by lateral inhibition require time to build up, which implies that the attraction and compromise effects should become larger under prolonged deliberation. Alternatively, if context effects are produced by switching from a weighted average rule under binary choice to a quick heuristic strategy for the triadic choice, then these effects should become larger under time pressure. Empirical tests show that prolonging the decision process increases the effects and that time pressure decreases them.
Neuroscience
The Decision Field Theory has demonstrated an ability to account for a wide range of findings from behavioral decision making for which the purely algebraic and deterministic models often used in economics and psychology cannot account. Recent studies that record neural activations in non-human primates during perceptual decision making tasks have revealed that neural firing rates closely mimic the accumulation of preference theorized by behaviorally-derived diffusion models of decision making.
The decision processes of sensory-motor decisions are beginning to be fairly well understood both at the behavioral and neural levels. Typical findings indicate that neural activation regarding stimulus movement information is accumulated across time up to a threshold, and a behavioral response is made as soon as the activation in the recorded area exceeds the threshold. A conclusion that one can draw is that the neural areas responsible for planning or carrying out certain actions are also responsible for deciding the action to carry out, a decidedly embodied notion.
Mathematically, the spike activation pattern, as well as the choice and response time distributions, can be well described by what are known as diffusion models—especially in two-alternative forced choice tasks. Diffusion models, such as the decision field theory, can be viewed as stochastic recurrent neural network models, except that the dynamics are approximated by linear systems. The linear approximation is important for maintaining a mathematically tractable analysis of systems perturbed by noisy inputs. In addition to these neuroscience applications, diffusion models (or their discrete time, random walk, analogues) have been used by cognitive scientists to model performance in a variety of tasks ranging from sensory detection, and perceptual discrimination, to memory recognition, and categorization. Thus, diffusion models provide the potential to form a theoretical bridge between neural models of sensory-motor tasks and behavioral models of complex-cognitive tasks.
Models of computation
Decision theory
Cognitive science
Cognitive modeling
Mathematical psychology