https://en.wikipedia.org/wiki/DV%20%28video%20format%29
DV (from Digital Video) is a family of codecs and tape formats used for storing digital video, launched in 1995 by a consortium of video camera manufacturers led by Sony and Panasonic. It includes the recording or cassette formats DV, MiniDV, HDV, DVCAM, DVCPro, DVCPro50, DVCProHD, Digital8, and Digital-S. DV has been used primarily for video recording with camcorders in the amateur and professional sectors.
DV was designed to be a standard for home video using digital data instead of analog. Compared to the analog Video8/Hi8, VHS-C and VHS formats, DV features a higher video resolution (on par with professional-grade Digital Betacam) and also records audio digitally at 16-bit like CD. The most popular tape format using a DV codec was MiniDV; these cassettes used tape just 6.35 mm (¼ inch) wide, making them ideal for video cameras and rendering older analog formats obsolete. In the late 1990s and early 2000s, DV was strongly associated with the transition from analog to digital desktop video production, and also with several enduring "prosumer" camera designs such as the Sony VX-1000.
In 2003, DV was joined by a successor format called HDV, which used the same tapes but with an updated video codec for high-definition video; HDV cameras could typically switch between DV and HDV recording modes. In the 2010s, DV rapidly grew obsolete as cameras using memory cards and solid-state drives became the norm, recording at higher bitrates and resolutions that were impractical for mechanical tape formats. Additionally, as manufacturers switched from interlaced to superior progressive recording methods, they broke the interoperability that had previously been maintained across multiple generations of DV and HDV equipment.
Development
DV was developed by the HD Digital VCR Association; by April 1994, 55 companies worldwide had taken part in developing the standards and specifications of the format.
The original DV specification, known as Blue Book, was standardized within the IEC 61834 family of standards. These standards define common features such as physical videocassettes, recording modulation method, magnetization, and basic system data in part 1. Part 2 describes the specifics of video systems supporting 525-60 for NTSC and 625-50 for PAL. The IEC standards are available as publications sold by IEC and ANSI.
DV compression
DV uses lossy compression of video while audio is stored uncompressed. An intraframe video compression scheme is used to compress video on a frame-by-frame basis with the discrete cosine transform (DCT).
Closely following the ITU-R Rec. 601 standard, DV video employs interlaced scanning with the luminance sampling frequency of 13.5 MHz. This results in 480 scanlines per complete frame for the 60 Hz system, and 576 scanlines per complete frame for the 50 Hz system. In both systems the active area contains 720 pixels per scanline, with 704 pixels used for content and 16 pixels on the sides left for digital blanking. The same frame size is used for 4:3 and 16:9 frame aspect ratios, resulting in different pixel aspect ratios for fullscreen and widescreen video.
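As a worked example of the aspect-ratio arithmetic above, the pixel aspect ratio can be derived from the display aspect ratio and the 704-pixel active width. A minimal sketch in Python (the exact "clean aperture" values vary slightly between standards, so treat these as approximations):

```python
from fractions import Fraction

def pixel_aspect_ratio(display_aspect, active_width, lines):
    # PAR = display aspect ratio divided by the storage aspect
    # ratio of the active picture area.
    return display_aspect * Fraction(lines, active_width)

# 60 Hz system: 704 active pixels, 480 lines
ntsc_43 = pixel_aspect_ratio(Fraction(4, 3), 704, 480)    # 10/11 (narrow pixels)
ntsc_169 = pixel_aspect_ratio(Fraction(16, 9), 704, 480)  # 40/33
# 50 Hz system: 704 active pixels, 576 lines
pal_43 = pixel_aspect_ratio(Fraction(4, 3), 704, 576)     # 12/11 (wide pixels)
pal_169 = pixel_aspect_ratio(Fraction(16, 9), 704, 576)   # 16/11
```

This reproduces the familiar 10/11 (60 Hz) and 12/11 (50 Hz) pixel aspect ratios for 4:3 material, and shows why the same 720-pixel frame yields different pixel shapes for fullscreen and widescreen video.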
Prior to the DCT compression stage, chroma subsampling is applied to the source video in order to reduce the amount of data to be compressed. Baseline DV uses 4:1:1 subsampling in its 60 Hz variant and 4:2:0 subsampling in the 50 Hz variant. Low chroma resolution of DV (compared to higher-end digital video formats) is a reason this format is sometimes avoided in chroma keying applications, though advances in chroma keying techniques and software have made producing quality keys from DV material possible.
Audio can be stored in either of two forms: 16-bit Linear PCM stereo at 48 kHz sampling rate (768 kbit/s per channel, 1.5 Mbit/s stereo), or four nonlinear 12-bit PCM channels at 32 kHz sampling rate (384 kbit/s per channel, 1.5 Mbit/s for four channels). In addition, the DV specification also supports 16-bit audio at 44.1 kHz (706 kbit/s per channel, 1.4 Mbit/s stereo), the same sampling rate used for CD audio. In practice, the 48 kHz stereo mode is used almost exclusively.
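The audio bit-rate figures above follow directly from bit depth times sample rate; a quick check in Python:

```python
def pcm_bitrate(bits_per_sample, sample_rate_hz):
    # Bit rate of one uncompressed PCM channel, in bit/s.
    return bits_per_sample * sample_rate_hz

print(pcm_bitrate(16, 48_000))      # 768000 -> 768 kbit/s per channel
print(pcm_bitrate(12, 32_000))      # 384000 -> 384 kbit/s per channel
print(pcm_bitrate(16, 44_100))      # 705600 -> ~706 kbit/s per channel
print(2 * pcm_bitrate(16, 48_000))  # 1536000 -> ~1.5 Mbit/s for stereo
print(4 * pcm_bitrate(12, 32_000))  # 1536000 -> ~1.5 Mbit/s for four channels
```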
Digital Interface Format
The audio, video, and metadata are packaged into 80-byte Digital Interface Format (DIF) blocks which are multiplexed into a 150-block sequence. DIF blocks are the basic units of DV streams and can be stored as computer files in raw form or wrapped in such file formats as Audio Video Interleave (AVI), QuickTime (QT) and Material Exchange Format (MXF). One video frame is formed from either 10 or 12 such sequences, depending on scanning rate, which results in a data rate of about 25 Mbit/s for video, and an additional 1.5 Mbit/s for audio. When written to tape, each sequence corresponds to one complete track.
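The quoted data rates can be checked against the DIF geometry. A sketch in Python; the 135-of-150 video-block split used below is the standard DIF sequence layout (1 header, 2 subcode, 3 VAUX, 9 audio, 135 video blocks), stated here as an assumption rather than taken from the text above:

```python
DIF_BLOCK_BYTES = 80
BLOCKS_PER_SEQUENCE = 150
VIDEO_BLOCKS_PER_SEQUENCE = 135  # assumed standard DIF layout

def total_bitrate(sequences_per_frame, frame_rate):
    # Raw DV stream rate in bit/s (video + audio + subcode + headers).
    return DIF_BLOCK_BYTES * BLOCKS_PER_SEQUENCE * sequences_per_frame * frame_rate * 8

# 50 Hz system: 12 sequences per frame at 25 frame/s
print(total_bitrate(12, 25))                   # 28800000 -> 28.8 Mbit/s total
# 60 Hz system: 10 sequences per frame at 30000/1001 frame/s
print(round(total_bitrate(10, 30000 / 1001)))  # ~28.77 Mbit/s total
# Video payload is 135/150 of the total, i.e. "about 25 Mbit/s"
print(total_bitrate(12, 25) * VIDEO_BLOCKS_PER_SEQUENCE // BLOCKS_PER_SEQUENCE)  # 25920000
```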
Baseline DV employs unlocked audio. This means that the sound may be up to ±⅓ frame out of sync with the video. However, this is the maximum drift of the audio/video synchronization; it is not compounded throughout the recording.
Variants
Sony and Panasonic created their proprietary versions of DV aimed toward professional and broadcast users, which use the same compression scheme but improve on robustness, linear editing capabilities, color rendition and raster size.
All DV variants except DVCPRO Progressive are recorded to tape as an interlaced video stream. Film-like frame rates are possible by using pulldown. DVCPRO HD supports a native progressive format when recorded to P2 memory cards.
DVCPRO
DVCPRO, also known as DVCPRO25 and D-7, is a variation of DV developed by Panasonic and introduced in 1995, originally intended for use in electronic news gathering (ENG) equipment.
Unlike baseline DV, DVCPRO uses locked audio, meaning the audio sample clock runs in sync with the video sample clock. Audio is available in 16-bit/48Β kHz precision.
When recorded to tape, DVCPRO uses a wider track pitch (18 μm vs. 10 μm for baseline DV), which reduces the chance of dropout errors during recording. Two extra longitudinal tracks provide support for audio cue and for timecode control. Tape is transported 80% faster compared to baseline DV, resulting in shorter recording time. Long Play mode is not available.
DVCPRO50
DVCPRO50 was introduced by Panasonic in 1997 and is often described as two DV codecs working in parallel.
DVCPRO50 doubles the coded video data rate to 50 Mbit/s, cutting the total record time of any given storage medium in half. Chroma resolution is improved by using 4:2:2 chroma subsampling.
Following the introduction of the AJ-SDX900 camcorder in 2003, DVCPRO50 was used in many productions where high-definition video was not required. For example, the BBC used DVCPRO50 to record high-budget TV series such as Space Race (2005) and Ancient Rome: The Rise and Fall of an Empire (2006).
A similar format, D-9 (or Digital-S), offered by JVC, uses videocassettes with the same form-factor as VHS.
Comparable high quality standard definition digital tape formats include Sony's Digital Betacam, introduced in 1993, and MPEG IMX, introduced in 2000.
DVCPRO Progressive
DVCPRO Progressive was introduced by Panasonic alongside DVCPRO50. It offered 480 or 576 lines of progressive scan recording with 4:2:0 chroma subsampling and four 16-bit 48 kHz PCM audio channels. Like HDV-SD, it was meant as an intermediate format during the transition time from standard definition to high definition video.
The format offered six modes for recording and playback: 16:9 progressive (50 Mbit/s), 4:3 progressive (50 Mbit/s), 16:9 interlaced (50 Mbit/s), 4:3 interlaced (50 Mbit/s), 16:9 interlaced (25 Mbit/s), 4:3 interlaced (25 Mbit/s).
The format was superseded by DVCPRO HD.
DVCPRO HD
DVCPRO HD, also known as DVCPRO100 and D-12, is a high-definition video format that can be thought of as four DV codecs that work in parallel. Video data rate depends on frame rate and can be as low as 40 Mbit/s for 24 frame/s mode and as high as 100 Mbit/s for 50/60 frame/s modes. Like DVCPRO50, DVCPRO HD employs 4:2:2 color sampling. It was introduced in 2000.
DVCPRO HD uses a smaller raster size than broadcast high-definition television: 960×720 pixels for 720p, 1280×1080 for 1080/59.94i, and 1440×1080 for 1080/50i. Similar horizontal downsampling (using rectangular pixels) is used in many other magnetic-tape-based HD formats such as HDCAM. To maintain compatibility with HD-SDI, DVCPRO100 equipment upsamples video during playback.
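The horizontal scaling implied by these raster sizes is easy to make concrete; a small sketch, where the broadcast widths 1280 and 1920 are the standard HDTV rasters (assumed, not stated above):

```python
# (stored width, broadcast width) for each DVCPRO HD mode
RASTERS = {
    "720p":    (960, 1280),
    "1080i60": (1280, 1920),
    "1080i50": (1440, 1920),
}

def upsample_factor(mode):
    # Horizontal scale factor applied on playback to restore the full raster.
    stored, displayed = RASTERS[mode]
    return displayed / stored

for mode in RASTERS:
    print(mode, upsample_factor(mode))  # 720p and 1080i50 scale by 4/3, 1080i60 by 1.5
```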
Variable framerates (from 4 to 60 frame/s) are available on Varicam camcorders. DVCPRO HD equipment offers backward compatibility with older DV/DVCPRO formats.
When recorded to tape in standard-play mode, DVCPRO HD uses the same 18 μm track pitch as other DVCPRO flavors. A long play variant, DVCPRO HD-LP, doubles the recording density by using 9 μm track pitch.
DVCPRO HD is codified as SMPTE 370M; the DVCPRO HD tape format is SMPTE 371M, and the MXF Op-Atom format used for DVCPRO HD on P2 cards is SMPTE 390M.
While technically DVCPRO HD is a direct descendant of DV, it is used almost exclusively by professionals. Tape-based DVCPRO HD cameras exist only in shoulder-mount variants.
A similar format, Digital-S (D-9 HD), was offered by JVC and used videocassettes with the same form-factor as VHS.
The main competitor to DVCPRO HD was HDCAM, offered by Sony, which used a similar compression scheme but at a higher bitrate.
DVCAM
In 1996, Sony responded with its own professional version of DV called DVCAM.
Like DVCPRO, DVCAM uses locked audio, which prevents audio synchronization drift that may happen on DV if several generations of copies are made.
When recorded to tape, DVCAM uses a 15 μm track pitch, which is 50% wider than that of baseline DV. Accordingly, tape is transported 50% faster, which reduces recording time by one third compared to regular DV. Because of the wider track and track pitch, DVCAM can perform a frame-accurate insert edit, while regular DV may vary by a few frames on each edit compared to the preview.
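The pitch/speed/time trade-off described for DVCPRO and DVCAM reduces to simple proportionality: tape speed scales with track pitch, and recording time on the same tape scales inversely with speed. A sketch (the 60-minute baseline is an illustrative figure, not a specific cassette):

```python
def recording_time(baseline_minutes, speed_factor):
    # Same physical tape, transport running speed_factor x baseline speed.
    return baseline_minutes / speed_factor

base = 60  # hypothetical 60-minute baseline DV tape
print(recording_time(base, 1.5))  # 40.0 -> DVCAM (15 um pitch): one third less
print(recording_time(base, 1.8))  # ~33.3 -> DVCPRO (18 um pitch): 80% faster transport
```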
Digital8
Digital8 is a combination of the tape transport originally designed for analog Video8 and Hi8 formats with the DV codec. Digital8 equipment records in DV format only, but usually can play back Video8 and Hi8 tapes as well.
Comparison of DV implementations
Recording media
Magnetic tape
The sections below describe the physical DV cassette formats:
DV was originally designed for recording onto magnetic tape. Tape is enclosed in videocassettes of four different sizes: small, medium, large, and extra-large. All DV cassettes use 6.35 mm (¼ inch) wide tape. DV on magnetic tape uses helical scan, which wraps the tape around a tilted, rotating head drum with video heads mounted to it. As the drum rotates, the heads read the tape diagonally. DV, DVCAM and DVCPRO use a 21.7 mm diameter head drum rotating at 9,000 rpm. The diagonal video tracks read by the heads are 10 microns wide in DV tapes.
Technically, any DV cassette can record any variant of DV video. Nevertheless, manufacturers often label cassettes as DV, DVCAM, DVCPRO, DVCPRO50 or DVCPRO HD and indicate recording time according to that label. Cassettes labeled as DV indicate the recording time of baseline DV; a second number can indicate the recording time of Long Play DV. Cassettes labeled as DVCPRO have a yellow tape door and indicate recording time when DVCPRO25 is used; with DVCPRO50 the recording time is halved, and with DVCPRO HD it is quartered. Cassettes labeled as DVCPRO50 have a blue tape door and indicate recording time when DVCPRO50 is used. Cassettes labeled as DVCPRO HD have a red tape door and indicate recording time when the DVCPRO HD-LP format is used; a second number may be given for DVCPRO HD recording, which will be half as long.
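The labeling rules above amount to scaling the labeled time by the ratio of data rates: doubling the rate halves the time on the same tape. A sketch (the format names and the 66-minute example are illustrative; DVCPRO HD-LP doubles recording density, so it behaves like DVCPRO50 here, an assumption consistent with the cassette durations quoted in this article):

```python
# Relative video data rates within the DVCPRO family.
RATE_FACTOR = {"DVCPRO25": 1, "DVCPRO50": 2, "DVCPRO HD": 4}

def actual_minutes(labeled_minutes, labeled_format, used_format):
    # Recording time scales inversely with data rate on the same tape.
    return labeled_minutes * RATE_FACTOR[labeled_format] / RATE_FACTOR[used_format]

# A cassette labeled "66 min DVCPRO":
print(actual_minutes(66, "DVCPRO25", "DVCPRO50"))   # 33.0 minutes
print(actual_minutes(66, "DVCPRO25", "DVCPRO HD"))  # 16.5 minutes
```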
Panasonic stipulated the use of a particular magnetic-tape formulation, metal particle (MP), as an inherent part of its DVCPRO family of formats. Regular DV tape uses a metal evaporated (ME) formulation (which, as the name suggests, uses physical vapor deposition to deposit metal onto the tape), pioneered for use in Hi8 camcorders. Early Hi8 ME tapes were plagued with excessive dropouts, which forced many shooters to switch to more expensive MP tapes. Although the dropout rate was greatly reduced as the technology improved, Panasonic still deemed the ME formulation not robust enough for professional use. Tape-based professional Panasonic DVCPRO camcorders and decks only record onto DVCPRO-branded cassettes, effectively preventing the use of ME tape.
Small size (MiniDV)
Small cassettes (66 × 48 × 12.2 mm), also known as S-size or MiniDV cassettes, were intended for amateur use but have become accepted in professional productions as well. MiniDV cassettes are used for recording baseline DV, DVCAM, and HDV. These cassettes hold roughly 14 to 20.8 GB of data, for 63 minutes of standard-play or about 90 minutes of long-play DV or HDV video. When recording in DVCAM, these cassettes hold up to 41 minutes of video. There are also 83-minute versions, but these use thinner tape than the 63-minute ones, and Panasonic advised against playing them in DVCPRO decks.
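The quoted capacity range can be sanity-checked from the total DV stream rate (about 28.8 Mbit/s including audio and subcode data); a rough sketch, with the roughly 1.5× long-play factor taken as an assumption:

```python
TOTAL_RATE_BITS_PER_S = 28_800_000  # approx. total DV stream rate, video + audio + subcode

def tape_gigabytes(minutes):
    # Data held by a recording of the given duration, in GB (10^9 bytes).
    return TOTAL_RATE_BITS_PER_S / 8 * minutes * 60 / 1e9

print(round(tape_gigabytes(63), 1))    # 13.6 -> close to the quoted ~14 GB (SP)
print(round(tape_gigabytes(94.5), 1))  # 20.4 -> close to the quoted ~20.8 GB (LP)
```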
Medium size
Medium or M-size cassettes (97.5 × 64.5 × 14.6 mm), which are about the size of eight-millimeter cassettes, are used in professional Panasonic equipment and are often called DVCPRO tapes. Panasonic video recorders that accept medium cassettes can play back from and record to them in different flavors of the DVCPRO format; they will also play small cassettes containing DV or DVCAM recordings via an adapter. These cassettes come in lengths up to 66 minutes for DVCPRO, 33 minutes for DVCPRO50 and DVCPRO HD-LP, and 16.5 minutes for the original DVCPRO HD.
Large size
Large or L-size cassettes (125.1 × 78 × 14.6 mm) are close in size to small MII cassettes; they are accepted by most standalone DV tape recorders and are used in many shoulder-mount camcorders. The L-size cassette can be used in both Sony and Panasonic equipment; nevertheless, they are often called DVCAM tapes. Older Sony decks would not play large cassettes with DVCPRO recordings, but newer models can play these and M-size DVCPRO cassettes. These cassettes come in lengths up to 276 minutes of DV or HDV video (or 184 minutes for DVCAM). Unlike the VHS and Digital8 formats, which use thinner tape for their longest-length variants, the 276-minute DV cassette employs the same tape as its shorter-length variants. On the DVCPRO side, these cassettes have nearly double the tape capacity of their M-size counterparts, with durations up to 126 minutes for DVCPRO, 63 minutes for DVCPRO50 and DVCPRO HD-LP, and 31.5 minutes for the original DVCPRO HD. A thin-tape 184/92/46-minute version was also released.
Extra-large size
Extra-large or XL-size cassettes (172 × 102 × 14.6 mm) are close in size to VHS cassettes, were designed for use in Panasonic equipment, and are sometimes called DVCPRO XL. These cassettes are not widespread; only a few models of Panasonic tape recorders can accept them. Each XL-size cassette holds nearly double the amount of tape of a full-length L-size cassette, with a capacity of 252 minutes of DVCPRO video or 126 minutes of DVCPRO50 or DVCPRO HD-LP video.
File-based media
With the proliferation of tapeless camcorders, DV video can be recorded on optical discs, solid-state flash memory cards and hard disk drives, and used as computer files. In particular:
Sony XDCAM family of cameras can record DV onto either Professional Disc or SxS memory cards.
Panasonic DVCPRO HD and AVC-Intra camcorders can record DV (as well as DVCPRO) onto P2 cards.
Some Panasonic AVCHD camcorders (AG-HMC80, AG-AC130, AG-AC160) record DV video onto Secure Digital memory cards.
The JVC GY-HM750 can be set to a standard-definition mode, in which case it records legacy-format SD video (.AVI or .MOV) onto SDHC cards. The camera does not natively support SxS memory cards and has no slots for them; it is essentially an XDCAM EX high-definition unit, and an optional add-on SxS recorder (or "adapter", as JVC calls it) was offered to achieve better compatibility in Sony-based facilities.
Most DV and HDV camcorders can feed live DV stream over IEEE 1394 interface to an external file-based recorder.
Video is stored either as native DIF bitstream or wrapped into an audio/video container such as AVI, QuickTime or MXF.
DV-DIF is the raw form of DV video. The files usually have extensions *.dv or *.dif.
DV-AVI is Microsoft's implementation of DV video file, which is wrapped into an AVI container. Two variants of wrapping are available: with Type 1 the multiplexed audio and video is saved into the video section of a single AVI file, with Type 2 video and audio are saved as separate streams in an AVI file (one video stream and one to four audio streams). This container is used primarily on Windows-based computers, though Sony offers two tapeless recorders, the HDD-based HVR-DR60 and the CompactFlash-based HVR-MRC1K, for use with DV/HDV camcorders that can record in DV-AVI format either making a file-based copy of the tape or bypassing tape recording altogether. Panasonic AVCHD camcorders use Type 2 DV-AVI for recording DV video onto Secure Digital memory card.
QuickTime-DV is DV video wrapped into QuickTime container. This container is used primarily on Apple computers.
MXF-DV wraps DV video into MXF container, which is presently used on P2-based camcorders (Panasonic) and on XDCAM/XDCAM EX camcorders (Sony).
Connectivity
Nearly all DV camcorders and decks have IEEE 1394 (FireWire, i.LINK) ports for digital video transfer. This is usually a two-way port, so that DV video data can be output to a computer (DV-out), or input from either a computer or another camcorder (DV-in). The DV-in capability makes it possible to copy edited DV video from a computer back onto tape, or make a lossless copy between two mutually connected DV camcorders. However, models made for sale in the European Union usually had the DV-in capability disabled in the firmware by the manufacturer because the camcorder would be classified by the EU as a video recorder and would therefore attract higher duty; a model which only had DV-out could be sold at a lower price in the EU.
When video is captured onto a computer it is stored in a container file, which can be either raw DV stream, AVI, WMV or QuickTime. Whichever container is used, the video itself is not re-encoded and represents a complete digital copy of what has been recorded onto tape. If needed, the video can be recorded back to tape to create a full and lossless copy of the original footage.
Some camcorders also feature a USB 2.0 port for computer connection. This port is usually used for transferring still images rather than video. Camcorders that offer video transfer over USB usually do not deliver full DV quality; typically it is 320×240 video, except for the Sony DCR-PC1000 and some Panasonic camcorders, which can transfer a full-quality DV stream via USB using the UVC protocol. Full-quality DV can also be captured via USB or Thunderbolt using separate hardware that receives DV data from the camcorder over a FireWire cable and forwards it, without any transcoding, to the computer via a USB cable or a FireWire-to-Thunderbolt adapter. This can be particularly useful for capturing on modern laptop computers, which usually lack a FireWire port or expansion slot but always have USB or Thunderbolt ports.
High end cameras and VTRs may have additional professional outputs such as SDI, SDTI or analog component video. All DV variants have a time code, but some older or consumer computer applications fail to take advantage of it.
Usage
The high quality of DV images, especially when compared to Video8 and Hi8 which were vulnerable to an unacceptable number of video dropouts and "hits", prompted the acceptance by mainstream broadcasters of material shot on DV. The low costs of DV equipment and their ease of use put such cameras in the hands of a new breed of videojournalists.
DVCPRO HD was the preferred high definition standard of BBC Factual.
Films
Notable films that were shot on the DV format include:
The Cruise (Bennett Miller, 1998)
The Gleaners and I (Agnès Varda, 2000)
Chuck and Buck (Miguel Arteta, 2000)
The Gleaners and I: Two Years Later (Agnès Varda, 2002)
28 Days Later (Danny Boyle, 2002)
Inland Empire (David Lynch, 2006)
Iraq in Fragments (James Longley, 2006)
My First Kiss and the People Involved (Luigi Campi & Giacomo Belletti, 2016)
Application software support
Most DV players, editors and encoders support only the basic DV format, not its professional versions; an exception is that most (though not all) consumer Sony MiniDV equipment will play mini-DVCAM tapes. DV audio/video data can be stored as a raw DV data stream file (data is written to a file as it is received over FireWire; file extensions are .dv and .dif), or the DV data can be packed into container files (e.g., Microsoft AVI, Apple MOV). Both file types preserve the DV meta-information, such as subcode timecode and start/stop dates and times, which can be muxed to QuickTime SMPTE-standard timecode.
Most Windows video software supports DV only in AVI containers, as it relies on Microsoft's avifile.dll, which can only read AVI files. Mac OS X video software supports both AVI and MOV containers.
Tape formulation compatibility
There was considerable controversy, based largely on hearsay, over whether using tapes from different manufacturers could lead to dropouts. The idea arose around the introduction of MiniDV tapes in the mid-to-late 1990s, when the only two manufacturers of MiniDV tape (Sony, which produced tapes solely under the Sony brand, and Panasonic, which produced tapes under its own brand and also supplied TDK, Canon, and others) used two different lubrication types for their tapes.
Research undertaken by Sony found no hard evidence for this claim; the only documented issue was that using ME tapes in equipment designed for MP tapes can cause tape damage and hence dropouts. Sony has done a significant amount of internal testing to simulate head clogs as a result of mixing tape lubricants, and has been unable to recreate the problem. Sony recommends using a cleaning cassette once every 50 hours of recording or playback. For those who are still skeptical, Sony recommends cleaning the video heads with a cleaning cassette before trying another brand of tape.
In 1999, Steve Epstein, technical editor of Broadcast Engineering magazine, received the following response from a Sony representative regarding tape stock compatibility:
Sony developed DVCAM based on the DV consumer format. The DV format was designed for use with metal evaporated tape, which offers approximately 5 dB better carrier-to-noise figures than metal particle tape. Customers have requested VTRs that can play additional DV-based 6 mm formats such as the consumer DV LP and DVCPRO. Sony will be offering new VTRs that can play back both of these additional formats without headclog and tape path issues.
It was realized early on that the VTR transport needed to be optimized to play various tape formulations and thicknesses. In addition, there is no need to dub DV LP or DVCPRO footage to another format for use as source material. This new VTR is the DSR 2000 DVCAM Studio recorder, and it is expected to be available later this year.
Robert Ott, Vice President for storage products and marketing, Sony Electronics, Park Ridge, New Jersey
See also
Common Intermediate Format (CIF)
Source Input Format (SIF)
Video CD
References
Television technology
Television terminology
Video storage
Videocassette formats
https://en.wikipedia.org/wiki/Sulfamation
In organic chemistry, sulfamation is the installation of either of two related functional groups, sulfamic acid (R2NSO3H) and sulfamate (R2NSO3−). Typical methods entail reaction of primary amines with sources of sulfur trioxide such as pyridine-sulfur trioxide:
RNH2 + SO3 β RNHSO3H
Sulfamation can also be effected by treating the amine with the sulfate ester of catechol (C6H4O2SO2).
References
Sulfur oxoacids
Sulfamates
https://en.wikipedia.org/wiki/Dead%20by%20Daylight
Dead by Daylight is an online asymmetric multiplayer survival horror video game developed and published by Canadian studio Behaviour Interactive. It is a one-versus-four game in which one player takes on the role of a Killer and the other four play as Survivors; the Killer must hunt and impale each Survivor on sacrificial hooks to appease a malevolent force known as the Entity, while the Survivors have to avoid being caught and power up the exit gates by working together to fix five generators.
The game was released for Windows in 2016; PlayStation 4 and Xbox One in 2017; Nintendo Switch in 2019; Android, iOS, PlayStation 5, Google Stadia, and Xbox Series X/S in 2020; and Steam Deck in 2023. Swedish studio Starbreeze Studios published the game on behalf of Behaviour from 2016 until 2018, when Behaviour bought the publishing rights. Italian company 505 Games publishes the Nintendo Switch version, while Austrian company Deep Silver publishes physical copies for the PlayStation 5 and Xbox Series X/S versions. Cross-play was added to the game in 2020 to allow play with people on other platforms, while cross-progression followed in 2024 to allow players with accounts on different platforms to share everything they had unlocked across each account. The game ran on Unreal Engine 4 from 2016 to 2024, when it upgraded to Unreal Engine 5.
Dead by Daylight received mixed reviews upon release, but was a commercial success; it has since attracted more than 60 million players and improved its ratings. In 2023, it was announced that production companies Blumhouse Productions and Atomic Monster had begun developing a film adaptation.
Gameplay
Dead by Daylight is an asymmetrical horror game where one player is the Killer and the other four are Survivors. Matches are referred to as trials. The Survivors' objective is to escape the trial by repairing five of seven generators scattered throughout it to power the two exit gates. The Killer must impale all Survivors on hooks before they escape, which will cause them to be sacrificed to a malevolent force known as the Entity. If only one Survivor remains, an escape hatch opens at a random location on the map; if the Killer closes the hatch or an exit gate is opened, the two-minute "Endgame Collapse" begins, with the timer being extended if there are any incapacitated or hooked Survivors. When the timer ends, any remaining Survivors are immediately sacrificed to the Entity.
When hunting Survivors, the Killer must capture them by either striking them with their weapon twice and picking them up off the ground while they are in the dying state (one strike injures Survivors and a second strike puts them into the dying state), or by grabbing them in one move by catching them unexpectedly while they interact with objects such as generators. Although Survivors can attempt to pull themselves free of the hook the first time they are impaled, there is only a 4% chance of success and failed attempts lead to a faster death. They can also be saved by other Survivors. If they are hooked a second time, they enter a phase in which they must resist the Entity as it attempts to take them out of the game by performing skill checks until they are either killed or rescued. If they are hooked a third time, they will be sacrificed to the Entity.
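For the 4% self-unhook figure, the cumulative chance over several tries follows the usual complement rule; a sketch that treats attempts as independent (an assumption that ignores the faster hook timer caused by failed attempts):

```python
def escape_chance(attempts, p=0.04):
    # Probability of at least one success in `attempts` independent tries.
    return 1 - (1 - p) ** attempts

print(round(escape_chance(1), 4))  # 0.04
print(round(escape_chance(3), 4))  # 0.1153 -> still unlikely after three tries
```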
The Survivors' movement options consist of sprinting, walking, crouch-walking, or crawling if they are in the dying state. They must elude the Killer by losing their line of sight in a chase or by successfully hiding from them. Most Killers can move at a pace that is moderately faster than that of a sprinting Survivor but are slower in other movements, such as vaulting obstacles, which can buy Survivors time. Survivors can throw down wooden pallets placed at key locations on the map, and if the Killer is close enough to the pallet when the Survivor throws it down, they will be stunned for a brief moment. The Survivor can then vault over the downed pallet, but most Killers cannot, and must either spend time to break the pallet or go around it. The Killer also has an aura-reading ability, constantly revealing the location of generators, hooks, and sometimes Survivors. Every Killer has their own unique power that can be altered using add-ons obtained through gameplay. A significant amount of the gameplay revolves around chases, with Survivors using their environment and wits to outmanoeuvre the Killer and escape them or at least buy time for their allies to complete objectives.
Survivors can unlock chests to find useful items or bring items at the start of the trial, such as maps that highlight the locations of objectives, keys that can be used for certain aura-reading abilities or re-opening a closed escape hatch, toolboxes for quicker generator repairs, med-kits for quicker healing, and flashlights that can be used to blind the Killer temporarily; the latter, if the Killer is carrying a Survivor, will let that Survivor escape. All Killers have an innate "terror radius" that surrounds them, although the size of the terror radius varies depending on the Killer, the state of the Killer's power, and what add-ons or perks they may have. Survivors inside the radius will hear a heartbeat, which increases in intensity with proximity to the Killer. They can also see a red light called the "red stain" emanating from the Killer's head onto the ground, which reveals the direction they are facing and can help the Survivors determine when the Killer is about to come around a corner. Some Killers have the ability to suppress their terror radius and red stain under certain conditions, enabling them to surprise unaware Survivors.
Objectives
Survivor interactions with many objects in the game can trigger random skill checks. The player can either perform a good skill check which has no additional effect, a great skill check which speeds up the progression of the action the player is taking, or a failed skill check which usually notifies the Killer of the Survivor's location and may also cause a loss of progress. Great skill checks require more precision, and may not always be possible depending on the type of interaction or the perks being used by either the Survivor or the Killer. Killers have the ability to damage generators, which will slightly regress their repair progress and continue regressing progress slowly until a Survivor resumes repairing it for at least a few seconds, and may trigger certain perks.
If the Killer catches a Survivor, the Killer can pick them up and carry them to one of many sacrificial hooks scattered throughout the map. While being carried, the Survivor can attempt to wiggle out of the Killer's grasp by successfully hitting consecutive skill checks before they reach the hook. When a Survivor is hooked, they gain a hook stage up to a maximum of three stages; when they are hooked for the first time, they can wait to be rescued or attempt to unhook themselves. If they are left too long or hooked again, they will progress to the second stage, during which they have to succeed at skill checks before another Survivor rescues them. If a Survivor is hooked a third time or left long enough to progress to the third stage, they are sacrificed and removed from the match.
Once five generators are repaired, the exit gates are powered, allowing Survivors to open the gates and escape from the match. The game only ends when all Survivors have either escaped or died; thus, while some Survivors may escape and finish early, those still inside must keep playing. Players who have escaped or died have the ability to cycle through the remaining players' points of view as a spectator or return to the main menu.
If only one Survivor remains, repairing generators is typically not practical. When the other three Survivors have escaped or died, an escape hatch will appear somewhere on the map. If the Survivor finds this hatch first, they may escape through it, but if the Killer finds it first, they can close it. If the Killer closes the hatch, the exit gates become powered as though the generator objective had been completed, giving the remaining Survivor a way to escape the trial.
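Taken together, the last two paragraphs define simple end-game conditions, which can be summarized in a couple of illustrative predicates (hypothetical names; a sketch of the rules as described, not the game's code):

```python
def exit_gates_powered(generators_repaired: int,
                       hatch_closed_by_killer: bool) -> bool:
    """Gates are powered when five generators are repaired, or when
    the Killer closes the hatch with one Survivor remaining."""
    return generators_repaired >= 5 or hatch_closed_by_killer

def hatch_available(survivors_remaining: int) -> bool:
    """The escape hatch appears once only one Survivor remains,
    the other three having escaped or died."""
    return survivors_remaining == 1
```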
Perks
Survivors and Killers each have the option to use up to four perks in their loadout, which gives their characters special abilities. Perks differ between Survivors and Killers; a Survivor perk may not be used by Killers and vice versa. Examples of Survivor perks include a burst of speed when running from the Killer, the ability to self-heal without a med-kit, briefly sabotaging sacrificial hooks without a toolbox, and many more. Examples of Killer perks include seeing Survivors through walls, hindering the effects of struggling Survivors while taking them to a hook, preventing generators from being worked on, regressing generators, speeding up the recovery from a missed basic attack, and many more.
There are a number of "general perks" that start unlocked for any character to learn. Additionally, each specific character starts off with a set of three perks that are unique to them. Perks, add-ons, and items can be unlocked through the Bloodweb, a skill tree where each character can spend in-game currency. Advancing to level 50 in a character's Bloodweb allows the player to "prestige" that character. Doing so resets the character back to level 1 and, if a higher tier exists, unlocks the next tier (level) of that character's unique perks for every other owned character of the same role.
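The prestige rule above amounts to a reset-and-unlock step, sketched below. This is an illustrative model only; the dictionary keys and the assumption of three perk tiers are mine, not the game's API:

```python
def prestige(character: dict) -> dict:
    """Prestige a character at Bloodweb level 50: reset to level 1 and,
    if a higher tier exists, unlock the next tier of their unique perks
    for other owned characters of the same role (modeled as a counter)."""
    if character["level"] < 50:
        raise ValueError("must reach Bloodweb level 50 to prestige")
    character["level"] = 1
    if character["shared_perk_tier"] < 3:   # assuming three perk tiers
        character["shared_perk_tier"] += 1
    return character
```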
Setting
Premise
A group of four Survivors must elude one Killer obsessed with sacrificing them on hooks to a malevolent being called the Entity. The Survivors' perspectives are third-person, while most Killers' perspectives are first-person. The Survivors can only fight back by stunning the Killer or using items such as flashlights to blind them. Survivors can also vault over obstacles much faster than the Killer, providing a means of escape. Survivors use these obstacles and tools to help them elude the Killer for as long as they can. In order to escape, Survivors must repair five of the seven generators scattered across the map to power up the exit gates. They must then open the exit gates and escape through them; the final Survivor may also find an unmarked escape hatch to jump into, though the hatch can be shut if the Killer finds it first. Repairing generators can be made quicker with items such as toolboxes, by repairing a generator alongside other Survivors, or by equipping certain perks to help the process along.
Plot
The Entity, an eldritch horror that exists between dimensions, is attracted to actions of great violence and malice. The majority of Killers, most of whom are serial murderers or victims of terrible tragedy, are pulled out of reality by the Entity and convinced or forced to do its bidding. In order to maintain its existence, the Entity requires sacrifices and demands that the Killers hunt and kill the Survivors so it can feed off their emotions and steal a piece of their souls upon death. In order to continue this hunt, the Entity blocks off the gateways of death and puts the Survivors into a dreamlike state that leads them back to the Entity's purgatory-esque world to be hunted again.
The Survivors are pulled into the Entity's constructed world when they wander into the woods or explore abandoned buildings, disappearing from the real world without a trace. More recent Survivors have entered through other methods such as being taken right before they die or sending themselves there using witchcraft. They end up at a campfire in the woods, where they rest between trials until a Killer pursues them again. Each trial takes place in a series of realms constructed by the Entity of areas related to each Killer's history. Escaping from the grounds always takes the Survivors back to the campfire, and offerings can be created to be burnt at it and appeal to the Entity. Since the Entity feeds off of Survivors' hopes of escaping, it helps them just as much as it helps Killers, acting as an impartial observer of the hunt and stepping in only to claim those successfully sacrificed to it.
Each original Survivor and Killer also has their own individual backstory which explains their personality and their unique abilities, along with how they end up in the realm of the Entity. These backstories are expanded in "Tomes" where completing in-game challenges unlocks more information. These Tomes are released every few months alongside a battle pass known as "The Rift".
Downloadable content
Alongside original characters, the game licenses characters and settings from several franchises. As of July 2024, the game has featured 39 individual DLC releases in total. Most DLC releases include both a new Survivor and a new Killer, except for ten in which only one character was introduced and four in which three characters were introduced. 25 of the DLCs released have also introduced new maps that are accessible to all players. On average, DLCs are released about three months apart. Prior to the release of every DLC since the Clown, a Public Test Build (PTB) becomes available to PC players, which allows the developers of the game to test and receive community feedback on major upcoming changes. Once the PTB has been out for roughly two weeks, it is disabled for further maintenance on the upcoming DLC, such as bug fixing and adding missing features. The estimated time span between the opening of the PTB and a new DLC's release is approximately 2–3 weeks, and the DLC typically releases a few days after the shutdown of the PTB.
19 of the currently released DLCs have featured licensed Killers and Survivors from popular horror franchises and other video games. The DLCs can be purchased normally through Steam; non-licensed characters can also be acquired through a shop within the game using an in-game currency called iridescent shards. This alternative way of obtaining the DLCs was introduced in the 2.0 update. Each DLC has its own trailer and a so-called "spotlight" that showcases the Killer and Survivor as well as a new in-game map. A DLC does not need to be purchased in order for its map to be played on. Three DLCs (The Last Breath, Left Behind, and A Lullaby for the Dark) have been distributed for free.
In August 2021, Behaviour announced that the Stranger Things DLC (including individual characters from the DLC and their cosmetics) would no longer be available for purchase after November 17 and that the Hawkins National Laboratory map would be removed from rotation. All characters from the DLC, as well as their cosmetics, remained usable by players who purchased them before the removal date. No official reason was given for the chapter's removal; observers hypothesized that Netflix, the owner of the Stranger Things license, decided not to renew the license for the game with Behaviour due to its focus on its own Netflix Games brand. On November 6, 2023, Behaviour announced that all Stranger Things content would be returning to the game that day.
In October 2021, Behaviour collaborated with NFT company Boss Protocol to release the Pinhead character model as an NFT. This led to criticism from players, with Steam reviews of the DLC falling to a "Mostly Negative" rating. However, it was later revealed that Behaviour had agreed to give unchecked access to the Pinhead character model to anyone who held the rights to Pinhead, and that Boss Protocol had subsequently created the NFT without Behaviour's involvement; two months later, Boss Protocol lost the rights to Pinhead, which reverted to English author Clive Barker, who created the character.
In addition to DLCs, other collaborations have resulted in purely cosmetic items being added to existing characters. Collaborations with Crypt TV, Attack on Titan, and Junji Ito added cosmetic items to original characters in order to make them resemble characters from those franchises. Cosmetic items have also been added from video game franchises such as For Honor, Meet Your Maker, PUBG: Battlegrounds, Rainbow Six Siege, and Naughty Bear. Collaborations with real people have resulted in cosmetic items inspired by heavy metal bands Iron Maiden and Slipknot, as well as actor Nicolas Cage being a playable Survivor.
List
This symbol denotes DLCs that unveiled licensed characters.
Notes
a. The Stranger Things chapter was removed from the game on November 17, 2021, and was reintroduced on November 6, 2023. During this time, the characters introduced were still available to play for those who had purchased them prior to the removal, but the Underground Complex map was not accessible to any player until the chapter's reintroduction.
b. The Raccoon City Police Station map was introduced in the Resident Evil chapter. With the release of the Resident Evil: Project W chapter, it was permanently replaced with two separate maps which split the original map into its East and West halves.
c. The Shelter Woods map was introduced with the release of the game in 2016. However, with the release of the Tools of Torment chapter, the map was redesigned with a new landmark, The Hunting Camp, being added to the map. This landmark is modeled after The Skull Merchant's background lore and is featured in the chapter's marketing materials.
d. Although it was initially announced that the Doomed Course would have an accompanying new map, ultimately no map was released with the chapter. Instead, the Lake Ormond Mine (Ormond) map was released two weeks after the chapter as part of the annual winter event.
Reception
Dead by Daylight received "mixed or average reviews", according to review aggregator Metacritic. GameSpot awarded it a score of 9 out of 10, saying, "At launch, Dead by Daylight suffered because of its reliance on peer-to-peer hosting and absent social features, but over time it rectified these issues. And while a brief and premature tussle with skill-based matchmaking turned the new player experience into a bit of a horror show (a problem which is now [was] fixed), thanks to its community of players Dead by Daylight is without peer in the asymmetrical competitive multiplayer arena, and has grown into one of the most robust horror experiences around." Luke Winkie of PC Gamer awarded it a score of 88 out of 100, saying, "In the five years since Behaviour Interactive released Dead by Daylight on Steam, the game has developed razor-sharp mechanical intrigue, an ultra-complex web of versatile builds and strategies, and a diverse suite of characters, each equipped with relative strengths and weaknesses."
Sales
During its first week, Dead by Daylight sold more than 270,000 copies. The game sold more than 1 million copies within 2 months. On November 16, 2017, more than 3 million copies were sold. As of May 2019, the game sold more than 5 million copies. In August 2020, the game reached more than 25 million players across all platforms. This number reached 36 million by May 2021, 50 million by May 2022, and 60 million by November 2023.
Popularity and impact
Dead by Daylight has become increasingly popular in Japan due to Behaviour catering to Japanese players with the release of Japanese-themed chapters like "The Shattered Bloodline" and "The Cursed Legacy", as well as expanding the lore of its in-game universe, which the Japanese gaming community tends to favor in video games. In return, the country has become one of the biggest markets for the game, becoming so popular that a Dead by Daylight-themed Entity CafΓ© opened on the fourth floor of the Tokyo Skytree in August 2021.
Original Dead by Daylight characters and items have appeared in other video games. Dead by Daylight initially launched with an optional "Deluxe Edition", sold at an additional price, which granted cosmetics for Payday 2 as well as a discount for Payday 2 owners who pre-ordered the game. The Ubisoft fighting game For Honor featured a crossover event called "Survivors of the Fog" from October 21 to November 11, 2021, which featured Dead by Daylight-inspired cosmetic items and a limited-time game mode that featured the Trapper as an AI-controlled enemy. From October 21 to November 7, 2022, PUBG: Battlegrounds featured a crossover event with Dead by Daylight in which the traditionally battle-royale game introduced a new mode where players would repair generators and escape from the killer, alongside cosmetics that emulate certain survivors and killers from the game. The mobile port of Battlegrounds, New State Mobile, also received a similar event. By playing at least one match of the special PUBG game mode, players were given a unique code to unlock cosmetics in Dead by Daylight. Other video games in which Dead by Daylight characters have appeared include Deathgarden, Meet Your Maker, Move or Die, and Rainbow Six Siege.
Mobile release
On June 19, 2019, Behaviour Interactive announced plans to release Dead by Daylight on iOS and Android for free in an attempt to make the game more accessible to players around the world. A separate development team was formed, fully dedicated to optimizing the game for the mobile experience. Dead by Daylight Mobile was initially slated for launch in 2019; however, the developers had to push the release to 2020, citing their need for more time to fix bugs and optimize. On February 27, 2019, Behaviour announced that the mobile versions would be published by Chinese video game publisher NetEase Games in Southeast Asia, Japan, and Korea. It was released in EMEA, the Americas, and South Asia on April 16, 2020. In December 2022, Behaviour Interactive announced that an improved global version was being co-developed by Behaviour and NetEase Games. The old global version was removed from mobile storefronts on December 16, 2022, and the improved global version released on March 15, 2023, after being delayed from a planned February 2023 release. This new version included enhanced graphics and would come to feature exclusive game modes, features, and skins only available in the mobile version of the game.
Within 48 hours of release, Dead by Daylight Mobile surpassed 1 million downloads. By October 2020, the game surpassed 10 million downloads, and by May 2022 it reached 25 million. The game was banned by the government of India in March 2023. In December 2024, NetEase announced that Dead by Daylight Mobile would permanently shut down by March 2025.
Spin-offs and other media
Video games
Hooked on You: A Dead by Daylight Dating Sim
Hooked on You: A Dead by Daylight Dating Sim is a visual novel dating simulator game released on August 3, 2022. The parodic game is developed by Psyop and allows the player to date four of Dead by Daylight's original Killers (the Trapper, the Huntress, the Wraith, and the Spirit), guided by original Survivors such as Dwight and Claudette.
What the Fog
What the Fog is a co-op roguelike game released on May 14, 2024, after being revealed the same day. Following original survivors Dwight, Claudette, and Feng Min, players must defeat enemies in order to power up generators and escape. The game was released for free to those with an account on Behavior.com, and was otherwise purchasable through Steam.
The Casting of Frank Stone
The Casting of Frank Stone is a single-player interactive drama horror game released on September 3, 2024. The game is developed by Supermassive Games, known for developing similar interactive horror games including Until Dawn, The Quarry, and The Dark Pictures Anthology. Featuring a branching narrative, players follow the stories of different characters set across multiple time periods, all centering around the killer Frank Stone and the paranormal Entity.
Project T
An untitled spin-off game was announced on May 19, 2023, described as a four-player co-op PvE experience "centered around greed and the lust for power." Midwinter Entertainment, a Seattle-based studio that Behaviour Interactive acquired in May 2022, was developing the game. On May 14, 2024, the game's working title was revealed as Project T, along with development footage revealing the game as a third-person shooter. Limited playtests were held during the summer of that year. On September 17, 2024, it was announced that Project T had been canceled following "unsatisfactory" results from the prior playtests. The same day, Behaviour Interactive announced that Midwinter Entertainment was closing.
Dead by Daylight Pinball
Zen Studios released a Dead by Daylight pinball table for Pinball M on November 30, 2023. Players can choose to play as either killer or survivor, with four survivors to choose from.
Board game
A board game adaptation, Dead by Daylight: The Board Game, was created by Level 99 Games and distributed by Asmodee in the United States. Featuring original characters up through the All Kill chapter from the original video game, one player takes on the role of the killer and two to four players take on the role of survivors. Players use perks, items, props, and powers to traverse the map in order to either repair generators and escape or hunt down and kill the survivors. Following a successful fundraising campaign on Kickstarter, the game was released on April 7, 2023. An expansion for the board game was announced on May 14, 2024, which adds further playable characters and mechanics.
Comic series
On October 29, 2022, Behaviour Interactive tweeted an announcement for a comic series featuring the four members of The Legion, which details more of their backstory. It is published by Titan Comics, written by Nadia Shammas, illustrated by Dillon Snook, and colored by Emilio Lecce. The first issue was released on June 14, 2023, and the fourth and final issue on March 6, 2024. Each issue follows a member of The Legion.
Film adaptation
In March 2023, it was reported by Variety that Jason Blum and James Wan were producing a film adaptation through their Blumhouse Productions and Atomic Monster banners, respectively, alongside Behaviour Interactive. In a January 2024 interview, executive producer Ryan Turek commented that the film would be informed by lessons learned from the studio's recent Five Nights at Freddy's film adaptation and that the studio would prioritize making "a video game adaptation for the fans". Jason Blum commented in October 2024 that the script was actively being developed, but that "it could be five years, it could be twelve months" until a script was completed.
See also
List of horror games
Identity V
Notes
References
External links
2016 video games
Asymmetrical multiplayer video games
2010s horror video games
Horror crossover video games
Indie games
Multiplayer video games
PlayStation 4 games
PlayStation 5 games
Survival horror video games
Nintendo Switch games
Unreal Engine 5 games
Video games about cannibalism
Video games developed in Canada
Video games using procedural generation
Windows games
Xbox Cloud Gaming games
Xbox One games
Xbox Series X and Series S games
Stadia games
Behaviour Interactive games
505 Games games
Video games with cross-platform play
Starbreeze Studios games
Krampus in popular culture
The hydrogen assisted magnesiothermic reduction ("HAMR") process is a thermochemical process to obtain titanium metal from titanium oxides.
A technical challenge in the production of titanium metal is the formation of oxide impurities. The Kroll process, which is widely used commercially, addresses this challenge by converting titanium ore (an oxide) into titanium tetrachloride (TiCl4). This intermediate is readily purified and is then reduced to titanium metal with magnesium. This technology is capital-, energy-, and carbon-intensive. One advantage of the Kroll process, and several like it, is that it starts with titanium ores (e.g., ilmenite), not a purified dioxide.
The HAMR technology also entails a two-step process, starting with TiO2 under an atmosphere of hydrogen gas. The product, TiH2, can be further processed to titanium metal through standard methods. Direct reduction of titanium oxides to titanium metal using magnesium alone does not occur; the novelty of the HAMR process is the inclusion of hydrogen.
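For comparison, the two routes can be summarized with idealized overall equations. The balancing below is my own and is only consistent with the qualitative description above; the source gives no stoichiometry:

```latex
% Kroll process: carbochlorination, then magnesiothermic reduction
\mathrm{TiO_2 + 2\,C + 2\,Cl_2 \longrightarrow TiCl_4 + 2\,CO}
\qquad
\mathrm{TiCl_4 + 2\,Mg \longrightarrow Ti + 2\,MgCl_2}

% HAMR: magnesiothermic reduction under hydrogen, yielding the hydride,
% which is then dehydrogenated to the metal
\mathrm{TiO_2 + 2\,Mg + H_2 \longrightarrow TiH_2 + 2\,MgO}
\qquad
\mathrm{TiH_2 \longrightarrow Ti + H_2}
```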
References
Titanium processes
Chemical processes
Hydrogen economy
TLQP-62 (amino acids 556–617) is a VGF-derived C-terminal peptide that was first discovered by Trani et al. TLQP-62 is derived from the VGF precursor protein via proteolytic cleavage by the prohormone convertase PC1/3 at the RPR555 site. TLQP-62 is named after its first four N-terminal amino acids and its peptide length.
Function
Although the receptor(s) for TLQP-62 have not been identified so far, extensive studies have demonstrated that it acts on the central nervous system, the peripheral nervous system, and endocrine tissue to exert its biological functions.
Synaptic plasticity
Acute TLQP-62 treatment rapidly increases synaptic activity in hippocampal neurons and potentiates the CA1 field excitatory postsynaptic potential (fEPSP) in hippocampal slices, thus facilitating hippocampal synaptic transmission. TLQP-62 also increases dendritic branching and length in cultured hippocampal neurons.
Neurogenesis
TLQP-62 treatment enhances hippocampal neurogenesis both in vitro and in vivo by promoting the proliferation in neuronal progenitor cells.
Antidepressant efficacy
Intrahippocampal TLQP-62 infusion produces both rapid and sustained antidepressant-like effects in the forced swim test. TLQP-62's processed peptide AQEE-30, when given via intracerebroventricular route, also elicits antidepressant-like effects.
Memory and learning
Acute intrahippocampal TLQP-62 infusion enhances memory formation via BDNF/TrkB signaling.
Pain
Acute intrathecal administration of TLQP-62 induces hypersensitivity to mechanical and cold stimuli that recapitulates neuropathic pain, potentially by regulating the excitability of dorsal horn neurons.
Insulin secretion
TLQP-62 treatment increases insulin secretion in cultured insulinoma cells by increasing intracellular calcium mobilization.
References
Peptides
Yamato 000593 (or Y000593) is the second-largest meteorite from Mars found on Earth. Studies suggest the Martian meteorite was formed about 1.3 billion years ago from a lava flow on Mars. An impact occurred on Mars about 11 million years ago and ejected the meteorite from the Martian surface into space. The meteorite landed on Earth in Antarctica about 50,000 years ago, and it has been found to contain evidence of past water alteration.
At a microscopic level, spheres are found in the meteorite rich in carbon compared to surrounding areas lacking such spheres. The carbon-rich spheres and the observed micro-tunnels may have been formed by biotic activity, according to NASA scientists.
Discovery and naming
The 41st Japanese Antarctic Research Expedition (JARE) found the meteorite in late December 2000 on the Yamato Glacier in the Queen Fabiola Mountains, Antarctica.
Description
It is an unbrecciated cumulate igneous rock consisting predominantly of elongated augite crystals, a solid solution in the pyroxene group. Japanese scientists from the National Institute of Polar Research reported in 2003 that the meteorite contains iddingsite, which forms from the weathering of basalt in the presence of liquid water. In addition, NASA researchers reported in February 2014 that they also found carbon-rich spheres encased in multiple layers of iddingsite, as well as microtubular features emanating from iddingsite veins displaying curved, undulating shapes consistent with bio-alteration textures that have been observed in terrestrial basaltic glass. However, the scientific consensus is that "morphology alone cannot be used unambiguously as a tool for primitive life detection." Interpretation of morphology is notoriously subjective, and its use alone has led to numerous errors of interpretation. According to the NASA team, the presence of carbon and the lack of corresponding cations is consistent with the occurrence of organic matter embedded in iddingsite. The NASA researchers indicated that mass spectrometry may provide deeper insight into the nature of the carbon, and could distinguish between abiotic and biologic carbon incorporation and alteration.
Classification
The Martian meteorite is an igneous rock classified as an achondrite type of the nakhlite group.
Images
See also
Allan Hills 84001
Astrobiology
Glossary of meteoritics
Life on Mars
List of Martian meteorites
List of meteorites on Mars
Nakhla meteorite
Panspermia
References
External links
Yamato meteorite (PDF) The Astromaterials Acquisition and Curation Office, NASA.
Astrobiology
Martian meteorites
Meteorites found in Antarctica
Natural history of Antarctica
An Independent Hardware Vendor (IHV) is a company that designs, manufactures, or sells hardware or peripherals compatible with operating systems. Examples of independent hardware vendors include Hewlett-Packard and Dell.
See also
Independent software vendor
Software company
References
Computer industry
The United Nations agreement on biodiversity beyond national jurisdiction or BBNJ Agreement, also referred to by some stakeholders as the High Seas Treaty or Global Ocean Treaty, is a legally binding instrument for the conservation and sustainable use of marine biological diversity of areas beyond national jurisdiction. There is some controversy over the popularized name of the agreement. It is an agreement under the United Nations Convention on the Law of the Sea (UNCLOS). The text was finalised during an intergovernmental conference at the UN on 4 March 2023 and adopted on 19 June 2023. Both states and regional economic integration organizations can become parties to the treaty.
In 2017, the United Nations General Assembly (UNGA) had voted to convene an intergovernmental conference (IGC) to consider establishing an international legally binding instrument (ILBI) on the conservation and sustainable use of biodiversity beyond national jurisdiction (BBNJ). This was considered necessary because UNCLOS did not provide a framework for areas beyond national jurisdiction. There was a particular concern for marine biodiversity and the impact of overfishing on global fish stocks and ecosystem stability.
The treaty addresses four themes: (1) marine genetic resources (MGRs) and their digital sequence information, including the fair and equitable sharing of benefits; (2) area-based management tools (ABMTs), including marine protected areas (MPAs); (3) environmental impact assessments (EIAs); and (4) capacity building and transfer of marine technology (CB&TMT). The area-based management tools and environmental impact assessments relate mainly to conservation and sustainable use of marine biodiversity, while the marine genetic resources and capacity building and transfer of marine technology include issues of economic justice and equity.
Greenpeace called it "the biggest conservation victory ever". The main achievement is the new possibility to create marine protected areas in international waters. By doing so the agreement now makes it possible to protect 30% of the oceans by 2030 (part of the 30 by 30 target). Though the agreement does not directly address climate change, it also serves as a step towards protecting the ecosystems that store carbon in sediments.
The treaty has 75 articles and its main purpose is "to take stewardship of the world's oceans for present and future generations, care for and protect the marine environment and ensure its responsible use, maintain the integrity of undersea ecosystems and conserve marine biological diversity's inherent value". The treaty recognizes traditional knowledge. It has articles regarding the "polluter-pays" principle, and the different impacts of human activities, including in areas beyond the national jurisdiction of the countries undertaking those activities. The agreement was adopted by the 193 United Nations Member States.
Before the treaty can enter into force, it needs to be ratified by at least 60 UN member states. This process is likely to take some time. The former treaty, UNCLOS, was adopted in 1982 and entered into force in 1994. UNCLOS has 170 parties. The European Union pledged financial support for the process of ratification and implementation of the treaty.
Context
The world's oceans are facing a severe decline in biodiversity and degradation of ecosystems due to threats related to climate change and the expansion of human activities, such as shipping, overfishing, plastic pollution and deep-sea mining. Consequently, there is a pressing need for a more cohesive ocean governance framework, since the existing framework is too fragmented and incomplete to effectively secure the conservation and sustainable use of marine biodiversity in areas beyond national jurisdiction. The High Seas treaty aims to address the regulatory gaps by promoting coherence and coordination with and among existing institutions, frameworks, and bodies.
The areas beyond national jurisdiction comprise the 'high seas' (water column) and the 'area' (seabeds), making up about two-thirds of the ocean. The areas are currently regulated by different regional and sectoral agreements, such as regional fisheries management organisations (RFMOs). However, they can only implement measures within their own respective mandates, and cooperation is lacking. Additionally, only a few areas are covered, leaving the majority effectively unregulated. The remaining one-third of the ocean falls under national jurisdiction and is situated within the exclusive economic zones (EEZs). The exclusive economic zones extend 200 nautical miles (about 370 km) from the territorial sea baseline. The zones are established under UNCLOS, giving coastal states jurisdiction over the living and non-living resources within the water and the seabeds.
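The kilometre figure above follows directly from the definition of the international nautical mile (1 nmi = 1.852 km exactly); a minimal check:

```python
NMI_IN_KM = 1.852  # international nautical mile, exact by definition

def nmi_to_km(nmi: float) -> float:
    """Convert nautical miles to kilometres."""
    return nmi * NMI_IN_KM

# An EEZ extends 200 nautical miles from the baseline,
# i.e. 370.4 km, hence "about 370 km" as stated above.
```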
History
A new agreement under UNCLOS for areas beyond national jurisdiction has been discussed at the United Nations for almost 20 years. The United Nations began preparatory meetings in 2004 to lay the foundation for an Implementing Agreement to UNCLOS addressing governance and regulatory gaps.
On 24 December 2017, the United Nations General Assembly adopted Resolution 72/249 to convene an intergovernmental conference and undertake formal negotiations for a new international legally binding instrument under UNCLOS for the conservation and sustainable use of marine biological diversity in areas beyond national jurisdiction. Between 2018 and 2023, diplomats gathered at the UN Headquarters in New York City for a series of negotiating sessions.
The intergovernmental conference (IGC) convened a total of six sessions in 2018, 2019, 2022 and 2023 to negotiate the text for the BBNJ legal instrument:
During the first session in September 2018, the concept of 'Beyond National Jurisdiction' seemed to have a greater influence on positions taken than the direct concerns regarding 'Biodiversity' itself.
In the second session March/April 2019, it became clear that the principle stating that the new BBNJ agreement "should not undermine" existing institutions could be a hindrance, impeding progress towards achieving an effective instrument.
The third session in August 2019 revolved around the dichotomy between 'the freedom of the seas' and 'the common heritage of mankind' principles.
The fourth session was originally scheduled for 2020, but it had to be postponed until March 2022 because of the COVID-19 pandemic. During the session, a lack of political will was observed, as states continued to object to substantive, key issues for a new treaty. Progress was made on the four main elements: marine genetic resources (MGRs) including the sharing of benefits, area-based management tools (ABMTs) including marine protected areas (MPAs), environmental impact assessments (EIAs), and capacity building and the transfer of marine technology (CB&TT).
The fifth round of talks in August 2022 failed to produce an agreement, due in part to significant disagreements over how to share benefits derived from marine genetic resources and digital sequence information. It was therefore agreed to suspend the session and resume it at a later date.
Agreement on a text was reached on 4 March 2023, after the sixth round of talks at the UN in New York and almost two decades of work. With the words "the ship has reached the shore", Rena Lee, the president of the intergovernmental conference, announced the final agreement. The treaty opened for signature in New York City on 20 September 2023, a day after a summit on the Sustainable Development Goals, and will remain open for signature for two years from that date.
In January 2024, Ambassador Ilana Seid deposited Palau's ratification of the treaty, making Palau the first of the sixty ratifying states required for the new treaty to enter into force.
The content of the treaty
Marine genetic resources (MGRs), including the fair and equitable sharing of benefits
Marine genetic resources (MGRs), including the fair and equitable sharing of benefits is the first element mentioned in the treaty. Among other things, marine genetic resources can enable production of biochemicals that can be used in cosmetics, pharmaceuticals and food supplements. The economic value of the resources is for now unclear, but the potential for profits has created an increased interest in the resources exploration and exploitation among stakeholders.
During the UN negotiations it has been a contentious point whether or not the provisions on marine genetic resources should apply to 'fish' and 'fishing activities'. If not, it would be likely to impact the ability of the High Seas treaty to address its objective, since fish are a major component of marine biodiversity and play an essential role in the functioning of marine ecosystems, according to some experts. However, the final treaty text states that the provisions about marine genetic resources do not apply to 'fish' and 'fishing' in areas beyond national jurisdiction.
The part about fair and equitable sharing of benefits has also been a point of dispute in the negotiations. In the end it was agreed upon to regulate non-monetary as well as monetary benefits. Furthermore, an access and benefit-sharing committee will be established with the purpose of providing guidelines for the benefit-sharing, and ensuring that this is done in a transparent, fair, and equitable way.
Area-based management tools (ABMTs), including marine protected areas (MPAs)
Area-based management tools (ABMTs), including marine protected areas (MPAs), are recognized as key tools for conserving and restoring biodiversity. They can be used to protect, preserve and maintain certain areas beyond national jurisdiction. Marine protected areas offer a degree of long-term conservation, and are already established in some areas. However, the level of protection varies considerably, and existing protected areas cover only a small proportion of the areas beyond national jurisdiction. Area-based management tools can be used for short-term and emergency measures and to address a specific sector.
The process to establish a tool or a protected area is as follows. First, a party to the High Seas treaty has to submit a proposal for an area-based management tool or a marine protected area. The proposal has to be based on the best available science and information. It will be made publicly available and transmitted to the Scientific and Technical Body for review. Thereafter, relevant stakeholders have to be consulted. The proposal has to be adopted by consensus - or, if this is not possible, by a three-quarters majority of the representatives present and voting. The decision will enter into force 120 days after the voting and will be binding for all parties to the treaty. However, a party that objects to the decision within those 120 days may opt out.
After the treaty text was finalised, it was reported that the treaty, through marine protected areas, will protect 30 per cent of the oceans by 2030 - a target adopted at the UN Biodiversity Conference (COP15) in December 2022 - however this is not the case, according to experts. The treaty can help to implement the 30 by 30 biodiversity target in the oceans, but it will require a lot of action by states.
Environmental impact assessments (EIAs)
Environmental impact assessments have the potential to predict, reduce and prevent human activities affecting marine biodiversity and ecosystems. While the institutional and legal framework for environmental impact assessments is well established in areas within national jurisdiction, it is less developed in areas beyond. Under the treaty, participating parties are obliged to conduct environmental impact assessments when a planned activity may have an effect on the marine environment, or when there is insufficient knowledge about its potential effects. In such cases, the party possessing jurisdiction or control over the activity is required to conduct the assessment.
The treaty also includes provisions for Strategic Environmental Assessments (SEAs), which are assessments that are more holistic and focused on long-term environmental protection compared to the more specific focus of environmental impact assessments. Parties under the treaty have to consider conducting a strategic environmental assessment for plans and programmes related to their activities in areas beyond national jurisdiction, but are not obliged to conduct one.
Capacity building and the transfer of marine technology (CB&TMT)
Capacity building and the transfer of marine technology concerns the equitable access to research conducted in international waters and enabling cooperation and participation in the activities outlined in the agreement. Different types of capacity building and transfer of technology are mentioned in the agreement, such as the sharing of information and research results; the development and sharing of manuals, guidelines and standards; collaboration and cooperation in marine science; and the development and strengthening of institutional capacity and national regulations or mechanisms.
Technology plays an important role in the implementation, making capacity building and technology transfer essential for the enforcement of the treaty. A key focus is to support developing and geographically disadvantaged states in implementing the agreement.
Furthermore, a capacity-building and transfer of marine technology committee will be established, in order to monitor and review the undertaken initiatives, under the authority of the Conference of the Parties.
Institutional Setup
The treaty introduces a new institutional framework in part VI about 'Institutional Arrangements', including the Conference of the Parties, the Scientific and Technical Body, the secretariat and the clearing-house mechanism.
The Conference of the Parties (COP) will have its first meeting one year after the treaty enters into force, at the latest. The rules of procedure and the financial rules will be adopted at the first meeting. The Conference of the Parties will review and evaluate the implementation of the High Seas treaty. The Conference has to take decisions and adopt recommendations by consensus - or, if it is not possible to reach consensus after all efforts have been exhausted, by a two-thirds majority of the parties present and voting. The Conference will also have to promote transparency in the implementation of the agreement and the related activities. Five years after the treaty enters into force, the Conference of the Parties has to review the treaty.
The Scientific and Technical Body will be composed of members nominated by the parties and elected by the Conference of the Parties, serving as experts and in the best interest of the agreement. The need for multidisciplinary expertise has to be taken into account in the nomination and election of members. The Scientific and Technical Body will among other things provide scientific and technical advice to the Conference of the Parties, monitor and review area-based management tools and comment on environmental impact assessments.
The secretariat is responsible for providing administrative and logistical support to the Conference of the Parties and its subsidiary bodies. This includes tasks, such as arranging and servicing the meetings, as well as circulating information relating to the implementation of the treaty in a timely manner.
The clearing-house mechanism will work as an open-access platform, facilitating the access, provision, and dissemination of information. It will promote transparency and facilitate international cooperation and collaboration. The mechanism will be managed by the secretariat.
In addition, the treaty establishes an 'access and benefit-sharing committee', a 'capacity-building and transfer of marine technology committee', a 'finance committee on financial resources' and an 'implementation and compliance committee'. However, these are not mentioned in the section about institutional arrangements.
Financial support
The European Union pledged financial support for the process of ratification and implementation of the treaty.
See also
United Nations Convention on the Law of the Sea
2022 United Nations Biodiversity Conference
Kunming-Montreal Global Biodiversity Framework
High seas fisheries management
Nagoya Protocol to the Convention on Biological Diversity
Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge (GRATK)
References
External links
UN delegates reach historic agreement on protecting marine biodiversity in international waters (UN News, 5 March 2023)
Agreement under the United Nations Convention on the Law of the Sea on the conservation and sustainable use of marine biological diversity of areas beyond national jurisdiction (19 June 2023)
2023 in international relations
Anti-biopiracy treaties
Biopiracy
Law of the sea treaties
Marine conservation
Treaties of Belize
Treaties of Chile
Treaties of Cuba
Treaties of Mauritius
Treaties of the Federated States of Micronesia
Treaties of Monaco
Treaties of Palau
Treaties of Seychelles
United Nations treaties
NGC 2899
NGC 2899 is a planetary nebula in the southern constellation of Vela. It was discovered by English astronomer John Herschel on February 27, 1835. This nebula can be viewed with a moderate-sized amateur telescope, but requires a larger telescope to resolve details. NGC 2899 is located at a distance of from the Sun and from the Galactic Center.
This nebula has an overall kidney shape that is elongated along an axis from WNW to ESE. The overall topology is bipolar with a significant equatorial structure. This shape is believed to result from a binary star system. The mean expansion rate is , with high velocity structures expanding at . The core mass of the central star is estimated as .
The nebula lies within a large cavity in the surrounding medium. This opening has quadrupolar shape with a physical dimension of . The elongation lies along a position angle of , which is aligned with the minor axis of the planetary nebula. This opening was most likely crafted by a fast stellar wind coming from the central star during its asymptotic giant branch stage, prior to the formation of a planetary nebula. The shape and filamentary structures suggest the interaction of a binary star system.
References
External links and images
Planetary nebulae
2899
Vela (constellation)
Discoveries by John Herschel
XAD (software)
The XAD system is an open-source client-based unarchiving system for the Amiga. A master library, xadmaster.library, provides an interface between the clients and the user application, while individual clients handle the specific archive formats. Three different client types are possible, handling file archives, disk archives, and disk image files (filesystems). Clients can be written by anyone. The master library itself includes some of these clients internally to make the work somewhat easier for the package maintainer and the user installing it.
The XAD subsystem was officially included in AmigaOS 3.9 along with a simple ReAction GUI-based tool for unarchiving supported file archives. It has also been part of MorphOS since version 2.0. The Mac OS X frontend is called The Unarchiver and is written in Objective-C.
References
External links
Developer website
Mac implementation
Hollywood plugin see Hollywood (programming language)
"Avalanche" ReAction GUI
Free data compression software
AmigaOS
Amiga software
MorphOS
MorphOS software
Amiga APIs
File archivers
K shortest path routing
The k shortest path routing problem is a generalization of the shortest path routing problem in a given network. It asks not only for a shortest path but also for the next k − 1 shortest paths (which may be longer than the shortest path). A variation of the problem is the loopless k shortest paths.
Finding k shortest paths is possible by extending Dijkstra's algorithm or the Bellman-Ford algorithm.
History
Since 1957, many papers have been published on the k shortest path routing problem. Most of the fundamental works were done between the 1960s and 2001. Since then, most of the research has been on the problem's applications and its variants. In 2010, Michael Günther et al. published a book on symbolic calculation of k-shortest paths and related measures with the stochastic process algebra tool CASPA.
Algorithm
Dijkstra's algorithm can be generalized to find the k shortest paths.
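One common way to realize this generalization (a sketch with illustrative names and graph, not taken from any specific paper) is to allow each node to be extracted from the priority queue up to k times; the returned walks may then contain loops:

```python
import heapq

def k_shortest_walks(graph, source, target, k):
    """Dijkstra generalized so each node may be expanded up to k times;
    returns the k shortest source->target walks (loops allowed)."""
    found = []                            # completed source->target walks
    expansions = {node: 0 for node in graph}
    heap = [(0, [source])]                # (cost so far, walk so far)
    while heap and len(found) < k:
        cost, walk = heapq.heappop(heap)
        node = walk[-1]
        expansions[node] += 1
        if node == target:
            found.append((cost, walk))
            continue
        if expansions[node] <= k:         # expand each node at most k times
            for nbr, weight in graph[node]:
                heapq.heappush(heap, (cost + weight, walk + [nbr]))
    return found

# Illustrative graph: {node: [(neighbor, non-negative weight), ...]}
G = {'A': [('B', 1), ('C', 2)],
     'B': [('C', 1), ('D', 3)],
     'C': [('D', 1)],
     'D': []}
walks = k_shortest_walks(G, 'A', 'D', 3)
print([cost for cost, _ in walks])  # [3, 3, 4]
```

Bounding the expansions per node by k keeps the search finite on graphs with cycles; for large graphs, the dedicated algorithms discussed below are far more efficient.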
Variations
There are two main variations of the k shortest path routing problem. In one variation, paths are allowed to visit the same node more than once, thus creating loops. In another variation, paths are required to be simple and loopless. The loopy version is solvable using Eppstein's algorithm and the loopless variation is solvable by Yen's algorithm.
Loopy variant
In this variant, the problem is simplified by not requiring paths to be loopless. A solution was given by B. L. Fox in 1975, in which the k shortest paths are determined in O(m + kn log n) asymptotic time complexity (using big O notation). In 1998, David Eppstein reported an approach that maintains an asymptotic complexity of O(m + n log n + k) by computing an implicit representation of the paths, each of which can be output in O(n) extra time. In 2015, Akiba et al. devised an indexing method as a significantly faster alternative to Eppstein's algorithm, in which a data structure called an index is constructed from a graph and then top-k distances between arbitrary pairs of vertices can be rapidly obtained.
Loopless variant
In the loopless variant, the paths are forbidden to contain loops, which adds an additional level of complexity. It can be solved using Yen's algorithm to find the lengths of all shortest paths from a fixed node to all other nodes in an n-node non-negative-distance network, a technique requiring only 2n² additions and n² comparisons, fewer than other available shortest path algorithms need. The running time complexity is pseudo-polynomial, being O(kn(m + n log n)) (where m and n represent the number of edges and vertices, respectively). In 2007, John Hershberger and Subhash Suri proposed a replacement paths algorithm, a more efficient implementation of Lawler's and Yen's algorithm with O(n) improvement in time for a large number of graphs, but not all of them (therefore not changing the asymptotic bound of Yen's algorithm).
Some examples and description
Example 1
The following example makes use of Yen's model to find k shortest paths between communicating end nodes. That is, it finds a shortest path, second shortest path, etc. up to the Kth shortest path. More details can be found here.
The code provided in this example attempts to solve the k shortest path routing problem for a 15-node network containing a combination of unidirectional and bidirectional links:
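The original code listing and its 15-node network are not reproduced here. As a substitute, the following is a minimal sketch of Yen's approach on a small assumed graph (the graph, function names and dictionary format are illustrative, not from the original example), using Dijkstra as the shortest-path subroutine:

```python
import heapq

def dijkstra(graph, source, target, banned_edges=frozenset(), banned_nodes=frozenset()):
    """Dijkstra on {node: [(neighbor, weight), ...]}, honoring removals."""
    heap = [(0, [source])]
    visited = set()
    while heap:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return cost, path
        for nbr, w in graph[node]:
            if nbr in banned_nodes or (node, nbr) in banned_edges:
                continue
            heapq.heappush(heap, (cost + w, path + [nbr]))
    return None

def path_cost(graph, path):
    return sum(w for a, b in zip(path, path[1:])
               for nbr, w in graph[a] if nbr == b)

def yen_k_shortest(graph, source, target, k):
    first = dijkstra(graph, source, target)
    if first is None:
        return []
    A = [first]          # accepted loopless shortest paths, in order
    B = []               # heap of candidate deviations
    while len(A) < k:
        prev = A[-1][1]
        for i in range(len(prev) - 1):
            root = prev[:i + 1]
            # Ban edges used by already-accepted paths sharing this root,
            # and ban the root's interior nodes to keep spur paths loopless
            banned_edges = {(p[i], p[i + 1]) for _, p in A
                            if len(p) > i + 1 and p[:i + 1] == root}
            banned_nodes = set(root[:-1])
            spur = dijkstra(graph, root[-1], target, banned_edges, banned_nodes)
            if spur is not None:
                cand = (path_cost(graph, root) + spur[0], root[:-1] + spur[1])
                if cand not in B and all(cand[1] != p for _, p in A):
                    heapq.heappush(B, cand)
        if not B:
            break
        A.append(heapq.heappop(B))
    return A

# Small assumed graph in place of the original 15-node network
G = {'A': [('B', 1), ('C', 2)],
     'B': [('C', 1), ('D', 3)],
     'C': [('D', 1)],
     'D': []}
paths = yen_k_shortest(G, 'A', 'D', 3)
# paths: [(3, ['A','B','C','D']), (3, ['A','C','D']), (4, ['A','B','D'])]
```

Each iteration deviates from the previously accepted path at every possible spur node, which is what keeps the resulting paths loopless.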
Example 2
Another example is the use of k shortest paths algorithm to track multiple objects. The technique implements a multiple object tracker based on the k shortest paths routing algorithm. A set of probabilistic occupancy maps is used as input. An object detector provides the input.
The complete details can be found at "Computer Vision Laboratory β CVLAB".
Example 3
Another use of k shortest paths algorithms is to design a transit network that enhances passengers' experience in public transportation systems. Such an example of a transit network can be constructed by putting traveling time under consideration. In addition to traveling time, other conditions may be taken depending upon economical and geographical limitations. Despite variations in parameters, the k shortest path algorithms find optimal solutions that satisfy almost all user needs. Such applications of k shortest path algorithms are becoming common; recently, Xu, He, Song, and Chaudhry (2012) studied the k shortest path problems in transit network systems.
Applications
The k shortest path routing is a good alternative for:
Geographic path planning
Network routing, especially in optical mesh network where there are additional constraints that cannot be solved by using ordinary shortest path algorithms.
Hypothesis generation in computational linguistics
Sequence alignment and metabolic pathway finding in bioinformatics
Multiple object tracking as described above
Road Networks: road junctions are the nodes (vertices) and each edge (link) of the graph is associated with a road segment between two junctions.
Related problems
The breadth-first search algorithm is used when the search is only limited to two operations.
The FloydβWarshall algorithm solves all pairs shortest paths.
Johnson's algorithm solves all pairs' shortest paths, and may be faster than FloydβWarshall on sparse graphs.
Perturbation theory finds (at worst) the locally shortest path.
Cherkassky et al. provide more algorithms and associated evaluations.
See also
Constrained shortest path routing
Notes
External links
Implementation of Yen's algorithm
Implementation of Yen's and fastest k shortest simple paths algorithms
http://www.technical-recipes.com/2012/the-k-shortest-paths-algorithm-in-c/#more-2432
Multiple objects tracking technique using K-shortest path algorithm: http://cvlab.epfl.ch/software/ksp/
Computer Vision Laboratory: http://cvlab.epfl.ch/software/ksp/
Network theory
Polynomial-time problems
Graph algorithms
Computational problems in graph theory
Qalb (programming language)
قلب (transliterated Qalb, Qlb and Alb) is a functional programming language allowing a programmer to write programs completely in Arabic. Its name means "heart" in Arabic and is a recursive acronym for "Qlb: a programming language". It was developed in 2012 by Ramsey Nasser, a computer scientist at the Eyebeam Art + Technology Center in New York City, as both an artistic endeavor and as a response to the Anglophone bias in the vast majority of programming languages, which express their fundamental concepts using English words.
The syntax is like that of Lisp or Scheme, consisting of parenthesized lists. Keywords are in Arabic (specifically, Lebanese Arabic) and program text is laid out right-to-left, like all Arabic text. The language provides a minimal set of primitives for defining functions, conditionals, looping, list manipulation, and basic arithmetic expressions. It is Turing-complete, and the Fibonacci sequence and Conway's Game of Life have been implemented.
Because program text is written in Arabic and the connecting strokes between characters in the Arabic script can be extended to any length, it is possible to align the source code in artistic patterns, in the tradition of Arabic calligraphy.
A JavaScript-based interpreter is currently self-hosted, and the project can be forked on GitHub.
Hello world
(قول "مرحبا يا عالم")
(قول "Hello, world!")
References
Further reading
External links
Browser-based interpreter
Artist's statement
Functional languages
Non-English-based programming languages
Lisp programming language family
Zeller's congruence
Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date.
Formula
For the Gregorian calendar, Zeller's congruence is
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + ⌊J/4⌋ - 2J) mod 7
for the Julian calendar it is
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + 5 - J) mod 7
where
h is the day of the week (0 = Saturday, 1 = Sunday, 2 = Monday, ..., 6 = Friday)
q is the day of the month
m is the month (3 = March, 4 = April, 5 = May, ..., 14 = February)
K the year of the century (year mod 100).
J is the zero-based century (actually ⌊year/100⌋). For example, the zero-based centuries for 1995 and 2000 are 19 and 20 respectively (not to be confused with the common ordinal century enumeration which indicates 20th for both cases).
⌊...⌋ is the floor function or integer part
mod is the modulo operation or remainder after division
Note: In this algorithm January and February are counted as months 13 and 14 of the previous year. E.g. if it is 2 February 2010 (02/02/2010 in DD/MM/YYYY), the algorithm counts the date as the second day of the fourteenth month of 2009 (02/14/2009 in DD/MM/YYYY format)
For an ISO week date Day-of-Week d (1 = Monday to 7 = Sunday), use d = ((h + 5) mod 7) + 1.
Analysis
These formulas are based on the observation that the day of the week progresses in a predictable manner based upon each subpart of that date. Each term within the formula is used to calculate the offset needed to obtain the correct day of the week.
For the Gregorian calendar, the various parts of this formula can therefore be understood as follows:
The term q represents the progression of the day of the week based on the day of the month, since each successive day results in an additional offset of 1 in the day of the week.
The term K represents the progression of the day of the week based on the year. Assuming that each year is 365 days long, the same date on each succeeding year will be offset by a value of 365 mod 7 = 1.
Since there are 366 days in each leap year, this needs to be accounted for by adding another day to the day of the week offset value. This is accomplished by adding ⌊K/4⌋ to the offset. This term is calculated as an integer result. Any remainder is discarded.
Using similar logic, the progression of the day of the week for each century may be calculated by observing that there are 36,524 days in a normal century and 36,525 days in each century divisible by 400. Since 36,524 mod 7 = 5 and 36,525 mod 7 = 6, the term ⌊J/4⌋ - 2J accounts for this.
The term ⌊13(m + 1)/5⌋ adjusts for the variation in the days of the month. Starting from January, the days in a month are {31, 28/29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}. February's 28 or 29 days is a problem, so the formula rolls January and February around to the end so February's short count will not cause a problem. The formula is interested in days of the week, so the numbers in the sequence can be taken modulo 7. Then the number of days in a month modulo 7 (still starting with January) would be {3, 0/1, 3, 2, 3, 2, 3, 3, 2, 3, 2, 3}. Starting in March, the sequence basically alternates 3, 2, 3, 2, 3, but every five months there are two 31-day months in a row (July-August and December-January). The fraction 13/5 = 2.6 and the floor function have that effect; the denominator of 5 sets a period of 5 months.
The overall function mod 7 normalizes the result to reside in the range of 0 to 6, which yields the index of the correct day of the week for the date being analyzed.
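The behavior of the month term can be checked numerically; the following snippet (ours, for illustration) verifies that consecutive values of ⌊13(m + 1)/5⌋ differ by exactly the month lengths mod 7:

```python
# Offsets contributed by the month term floor(13(m + 1)/5) for
# m = 3 (March) through m = 14 (February of the following year)
offsets = [13 * (m + 1) // 5 for m in range(3, 15)]

# Consecutive differences should equal the month lengths mod 7,
# starting with March (31 -> 3), April (30 -> 2), and so on
diffs = [b - a for a, b in zip(offsets, offsets[1:])]
assert diffs == [3, 2, 3, 2, 3, 3, 2, 3, 2, 3, 3]
print(diffs)
```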
The reason that the formula differs between calendars is that the Julian calendar does not have a separate rule for leap centuries and is offset from the Gregorian calendar by a fixed number of days each century.
Since the Gregorian calendar was adopted at different times in different regions of the world, the location of an event is significant in determining the correct day of the week for a date that occurred during this transition period. This is only required through 1929, as this was the last year that the Julian calendar was still in use by any country on earth, and thus is not required for 1930 or later.
The formulae can be used proleptically, but "Year 0" is in fact year 1 BC (see astronomical year numbering). The Julian calendar is in fact proleptic right up to 1 March AD 4 owing to mismanagement in Rome (but not Egypt) in the period since the calendar was put into effect on 1 January 45 BC (which was not a leap year). In addition, the modulo operator might truncate integers to the wrong direction (ceiling instead of floor). To accommodate this, one can add a sufficient multiple of 400 Gregorian or 700 Julian years.
Examples
For 1 January 2000, the date would be treated as the 13th month of 1999, so the values would be: q = 1, m = 13, K = 99, J = 19.
So the formula evaluates as h = (1 + 36 + 99 + 24 + 4 - 38) mod 7 = 126 mod 7 = 0, which is a Saturday.
(The 36 comes from ⌊13(m + 1)/5⌋ = ⌊182/5⌋ = ⌊36.4⌋, truncated to an integer.)
However, for 1 March 2000, the date is treated as the 3rd month of 2000, so the values become q = 1, m = 3, K = 0, J = 20,
so the formula evaluates as h = (1 + 10 + 0 + 0 + 5 - 40) mod 7 = (-24) mod 7 = 4, which is a Wednesday.
Implementations in software
Basic modification
The formulas rely on the mathematician's definition of modulo division, which means that -2 mod 7 is equal to positive 5. Unfortunately, in the truncating way most computer languages implement the remainder function, -2 mod 7 returns a result of -2. So, to implement Zeller's congruence on a computer, the formulas should be altered slightly to ensure a positive numerator. The simplest way to do this is to replace - 2J with + 5J and - J with + 6J.
For the Gregorian calendar, Zeller's congruence becomes
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + ⌊J/4⌋ + 5J) mod 7
For the Julian calendar, Zeller's congruence becomes
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + 5 + 6J) mod 7
One can readily see that, in a given year, the last day of February and 1 March are good test dates.
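As an illustration, here is a direct Python transcription of the adjusted formulas (the function names are ours; Python's % operator already follows the mathematician's convention, but the + 5J and + 6J forms keep the numerator positive so the same expressions also port to C-style truncating remainders):

```python
def zeller_gregorian(q, m, K, J):
    # h: 0 = Saturday, 1 = Sunday, 2 = Monday, ..., 6 = Friday
    return (q + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

def zeller_julian(q, m, K, J):
    return (q + (13 * (m + 1)) // 5 + K + K // 4 + 5 + 6 * J) % 7

def day_of_week(year, month, day):
    """Gregorian day of week; January and February count as
    months 13 and 14 of the previous year."""
    if month < 3:
        month += 12
        year -= 1
    return zeller_gregorian(day, month, year % 100, year // 100)

def iso_day_of_week(year, month, day):
    # ISO 8601 convention: 1 = Monday ... 7 = Sunday
    return ((day_of_week(year, month, day) + 5) % 7) + 1

# The suggested test dates: last day of February and 1 March
assert day_of_week(2000, 2, 29) == 3     # Tuesday
assert day_of_week(2000, 3, 1) == 4      # Wednesday
assert iso_day_of_week(2000, 1, 1) == 6  # 1 January 2000 was a Saturday
```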
As an aside note, if we have a three-digit number abc, where a, b, and c are the digits (each nonpositive if abc is nonpositive), we have abc ≡ 9a + 3b + c (mod 7). Repeat the formula down to a single digit. If the result is 7, 8, or 9, then subtract 7. If, instead, the result is negative, then add 7. If the result is still negative, then add 7 one more time. Utilizing this approach, we can avoid the worries of language-specific differences in mod 7 evaluations. This also may enhance a mental math technique.
Common simplification
Zeller used decimal arithmetic, and found it convenient to use J and K in representing the year. But when using a computer, it is simpler to handle the modified year Y and month m, which are Y = year - 1 and m = month + 12 during January and February:
For the Gregorian calendar, Zeller's congruence becomes
h = (q + ⌊13(m + 1)/5⌋ + Y + ⌊Y/4⌋ - ⌊Y/100⌋ + ⌊Y/400⌋) mod 7
In this case there is no possibility of underflow, because the single negative term ⌊Y/100⌋ is always smaller than the positive term Y.
For the Julian calendar, Zeller's congruence becomes
h = (q + ⌊13(m + 1)/5⌋ + Y + ⌊Y/4⌋ + 5) mod 7
The algorithm above is mentioned for the Gregorian case in , Appendix B, albeit in an abridged form that returns 0 for Sunday.
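A sketch of this simplified form in Python (function names assumed; 4 October 1582 Julian, the day before the Gregorian switch, was a Thursday and serves as a test date):

```python
def simplified_gregorian(q, month, year):
    # Modified year Y and month m: m = month + 12 and Y = year - 1
    # during January and February
    m, Y = (month + 12, year - 1) if month < 3 else (month, year)
    return (q + (13 * (m + 1)) // 5 + Y + Y // 4 - Y // 100 + Y // 400) % 7

def simplified_julian(q, month, year):
    m, Y = (month + 12, year - 1) if month < 3 else (month, year)
    return (q + (13 * (m + 1)) // 5 + Y + Y // 4 + 5) % 7

# 0 = Saturday, 1 = Sunday, ..., 6 = Friday
assert simplified_gregorian(1, 1, 2000) == 0  # 1 January 2000: Saturday
assert simplified_julian(4, 10, 1582) == 5    # 4 October 1582 (Julian): Thursday
```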
Other variations
At least three other algorithms share the overall structure of Zeller's congruence in its "common simplification" type, also using an and the "modified year" construct.
Michael Keith published a piece of very short C code in 1990 for Gregorian dates. The month-length component () is replaced by .
J R Stockton provides a Sunday-is-0 version with , calling it a variation of Zeller.
Claus TΓΈndering describes as a Sunday-is-0 replacement.
Both expressions can be shown to progress in a way that is off by one compared to the original month-length component over the required range of m, resulting in a starting value of 0 for Sunday.
See also
Determination of the day of the week
Doomsday rule
ISO week date
Julian day
References
Bibliography
Each of these four similar imaged papers deals firstly with the day of the week and secondly with the date of Easter Sunday, for the Julian and Gregorian calendars. The pages link to translations into English.
External links
The Calendrical Works of Rektor Chr. Zeller: The Day-of-Week and Easter Formulae by J R Stockton, near London, UK. The site includes images and translations of the above four papers, and of Zeller's reference card "Das Ganze der Kalender-Rechnung".
Gregorian calendar
Julian calendar
Calendar algorithms
Modular arithmetic
Eponymous algorithms of mathematics
Tripod (foundation)
The tripod or jacket is a type of foundation for offshore wind turbines. The tripod is generally more expensive than other types of foundation. However, for large turbines and greater water depths, the cost disadvantage may be offset when durability is also taken into account.
History
Start of the offshore wind industry
The exploration of offshore wind energy started during the 1990s with the introduction of monopile foundations for wind turbines in the range from 1 to 3 MW in water depths of about 10 to 20 m.
Germany faced water depths of up to 40 m when it joined this new field of renewable energy. At the same time, the 5 MW turbine class appeared. One representative of this new turbine generation was the Multibrid M5000 with a rotor diameter of 116 m (later 135 m) under the labels Areva and Adwen.
The first prototype of this machine was erected onshore in Bremerhaven in 2004. Already at this stage, Bremerhaven had supported the development through BIS Bremerhavener Gesellschaft für Investitionsförderung und Stadtentwicklung mbH.
Development of the tripod foundation
Since the turn of the century, there has been a search for a feasible foundation for the upcoming large turbines and greater water depths, in light of the available geotechnical assessment methods, fabrication processes, pile driving equipment, and logistics and installation equipment.
One result was the Tripod foundation. The first design was drawn by OWT (Offshore Wind Technology) in Leer, Germany, in 2005. The Tripod was designed integrally with the tower from the beginning. The three-legged structure reaches from the sea bed up to typically 20 m above sea level, keeping the bolted flange on top safely clear of the crests of the waves. This section can be outfitted onshore with all functionalities needed in terms of boat landing, cable guiding and, last but not least, corrosion protection systems. The central column is designed as an open system, allowing an unrestricted water exchange in each tide cycle. This circumstance is beneficial when the corrosion protection system has to be designed for the inner surfaces.
The Tripod is fixed with mid-sized pin piles at the sea bed. The piles might be pre-piled or post-piled. A suction bucket foundation was designed as well. The first tower section, called S3, is foreseen to be mounted offshore on top of the Tripod with a bolted flange connection. This section contains the outer service platform and the entry door, and is independently accessible for electrical equipment and cold commissioning procedures. Additionally, it simply provides height, which can be saved on the Tripod side; the height of a Tripod already amounts to about 60 m for 40 m water depth.
In 2006 a Tripod onshore demonstrator was designed by OWT for Multibrid GmbH and was manufactured and erected in Bremerhaven, Germany, by WeserWind GmbH Offshore Construction Georgsmarienhütte. This was the beginning of a long-lasting collaboration between the turbine developer and manufacturer Multibrid, the foundation designer OWT and the fabricator WeserWind. While the design met the demands of an offshore turbine foundation to a sufficient extent, fabrication was challenging with regard to the size and shape of the structure. At that time WeserWind was supported in fabrication and assembly by its sister company IAG Industrieanlagenbau Georgsmarienhütte GmbH, likewise a member of the Georgsmarienhütte group. The first operation of the turbine was accompanied by the research project IMO-Wind, in which the first steps in condition monitoring were undertaken, including the determination of stress curves in a so-called "hot spot" survey, to enable comparison with calculation models.
Large scale deployments
In 2008 Tripods were built as the substructure for six Multibrid M5000 offshore wind turbines in the Alpha Ventus project. Alpha Ventus was planned as a first test field for the exploration of offshore wind energy in German waters. The project organisation was Deutsche Offshore-Testfeld und Infrastruktur GmbH & Co. KG (DOTI), founded in 2006 by EWE AG (47.5%), E.ON Climate & Renewables Central Europe GmbH and Vattenfall Europe Windkraft GmbH (each 26.25%), assisted by Stiftung Offshore Windenergie. The German Federal Ministry of Environment (BMU) supported a number of research projects, summarized in the RAVE initiative (Research at Alpha Ventus). A broad base of experience and knowledge was gained for the construction, commissioning and operation of future offshore wind farms. The Tripods were fabricated by Aker Kvaerner in Verdal, Norway. In line with the yard's fabrication experience from large oil and gas jackets, the Tripods were assembled horizontally, then upended and sailed upright from Norway to the offshore terminal in Eemshaven. They were transported to their locations one by one by the Boskalis crane vessel Taklift 4.
The year 2010 marked the next milestone in rolling out the M5000 turbine on the Tripod foundation. Two projects, Borkum West II and Global Tech I, decided to erect their farms using this technology platform, and each initially ordered 40 Tripods at nearly the same time. Anticipating this demand, WeserWind had developed a serial-production approach for Tripods in the preceding years, together with Dr. Möller GmbH / IMS Nord, Bremerhaven. Its key elements are an upright assembly concept, an assembly line with up to nine work stations, transport of the growing structures along the line on heavy-load rail carriers, and an integrated load-out operation onto a tailor-made pontoon. Based on this concept, Georgsmarienhütte approved the investment to build this assembly shop with two parallel lines at Lunedeich, Bremerhaven. The building was operational at the beginning of 2011, and in June the first Borkum West II Tripod was completed.
In December 2011 the pontoon was christened and, after substantial upgrading, the offshore terminal ABC-Peninsula was commissioned by BLG Logistics Solutions GmbH & Co. KG. In total, 100 Tripods were built at this site between 2011 and 2013. The cycle time for the whole plant came down to five calendar days per structure, and the load-out cycle to four hours. SIAG Emden and the consortium of Iemants N.V. with Eiffage Construction Métallique S.A.S. in Vlissingen also produced a total of 20 Tripods in upright position during that time. Offshore transportation technology had developed significantly since Alpha Ventus: the offshore construction jack-up "Innovation" by HGO InfraSea Solutions GmbH & Co. KG was commissioned in 2012 and did her first job for Global Tech I, carrying three Tripods and pile sets per sailing, while the crane ships "Stanislaw Yudin" and "Oleg Strassnow" of SHL Seaway Heavy Lifting operated for Borkum West II.
Specific technical characteristics
Suitability and use conditions
The distinguishing feature of the Tripod is that it combines an above-water structure like that of a Monopile (small exposed surface, robust performance in risk scenarios, and an easy transition to the tower) with the supporting effect and performance of a lattice structure. By design, hot spots are kept out of the aggressive splash-zone environment, allowing a free-corrosion fatigue assessment.
In wind energy, tuning the dynamics of the structure, characterized by the frequencies at which it mainly swings, is especially important because of the excitation by the turbine rotor. The Tripod's behaviour lies between that of the Monopile, which tends to be softer, and the Jacket, which is more rigid.
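This tuning is commonly phrased as keeping the support structure's first natural frequency in the gap between the rotor's rotational frequency band (1P) and the blade-passing band (3P for a three-bladed rotor). The sketch below illustrates that check; the 7–12 rpm operating range, the 5% margin and the candidate natural frequencies are assumptions chosen purely for illustration, not data for the M5000.

```python
# Hedged sketch of a "soft-stiff" frequency check for an offshore support
# structure. Operating range, margin and frequencies are assumed values.

def excitation_bands(rpm_min, rpm_max, blades=3):
    """Return the 1P and blade-passing (nP) excitation bands in Hz."""
    one_p = (rpm_min / 60.0, rpm_max / 60.0)
    n_p = (blades * rpm_min / 60.0, blades * rpm_max / 60.0)
    return one_p, n_p

def is_soft_stiff(f_natural, rpm_min, rpm_max, blades=3, margin=0.05):
    """True if f_natural sits between the 1P and nP bands with some margin."""
    (_, one_p_hi), (n_p_lo, _) = excitation_bands(rpm_min, rpm_max, blades)
    return one_p_hi * (1 + margin) < f_natural < n_p_lo * (1 - margin)

one_p, three_p = excitation_bands(7.0, 12.0)
print(one_p, three_p)                  # 1P up to 0.2 Hz, 3P from 0.35 Hz
print(is_soft_stiff(0.28, 7.0, 12.0))  # True: inside the gap between 1P and 3P
```

A softer structure (e.g. a first natural frequency below the 1P band) would instead be a "soft-soft" design, which is why the relative stiffness of Monopile, Tripod and Jacket matters.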
The application area in terms of water depth was initially predicted to range from at least 25 m up to 50 m. In recent years, however, the impressively advancing Monopile technology has pushed its field of application to around 40 m, and the Tripod has disappeared from the scene. Besides the higher fabrication effort for Tripods, transport and installation efforts may become more comparable as the structures grow. The Tripod's particular suitability for corrosion protection systems will remain a significant difference from the Monopile. The performance of the structures over their lifetime, and due-diligence assessments of the assets in later life-cycle stages, may give reason to revisit these arguments.
Like other lattice structures such as Jackets, the Tripod is fixed in the sea bed with piles. Its three legs provide sufficient stability in the unpiled or ungrouted state, which translates into a reliable weather window for installation. The design parameters for the piles can be chosen independently of the Tripod itself and can reflect the geotechnical needs explicitly. There is no need for scour protection.
The connection to the pile is usually achieved using a grouted connection. This is a technique where special concrete is poured in the joint gap between pile and pile sleeve. Due to the resulting composite effect the loads are transferred from the sleeve to the pile, and thus into the ground. A submerged grouting process requires high competence in design, planning and execution of the processes. The stable moderate temperature under water supports the temperature sensitive grout curing process.
Structural backgrounds
The supporting action is based on diverting the bending moment of the tower into the piles, which are then essentially loaded only in tension or compression. This requires a combination of upper and lower legs to build up the leverage. Alternatively, a suction bucket can be used instead of a pile. By comparison, the Monopile transfers its loads by bearing laterally into the ground.
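The push-pull load path can be illustrated with rigid-body statics: an overturning moment at the base resolves into axial pile forces in proportion to each leg's lever arm. The leg radius, leg angles and load magnitudes below are hypothetical values for illustration only.

```python
import math

# Rigid-body statics sketch: splitting a vertical load V and an overturning
# moment M into axial pile forces for a three-legged foundation. All input
# values are hypothetical illustrations, not data from a real project.

def pile_forces(vertical_load, moment, leg_radius, angles_deg=(90, 210, 330)):
    """Axial force per pile (positive = compression); the moment acts about
    the y-axis, i.e. wind along +x, with legs at the given plan angles."""
    xs = [leg_radius * math.cos(math.radians(a)) for a in angles_deg]
    sum_x2 = sum(x * x for x in xs)
    return [vertical_load / 3.0 + moment * x / sum_x2 for x in xs]

F = pile_forces(vertical_load=7.0e6, moment=120.0e6, leg_radius=15.0)  # N, N*m, m
print([round(f / 1e6, 2) for f in F])  # one leg near V/3, one pulled, one pushed
```

The result shows the behaviour described above: the upwind pile goes into tension ("pulled") while the downwind pile carries compression, and the forces still sum to the applied vertical load and moment.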
Tubular nodes, where tubes intersect, are the characteristic design element of lattice structures. It is preferred that the incoming tubes, the stubs, keep a certain diameter ratio (about 0.8) to the continuous tube, the chord, to achieve efficient load-bearing behaviour. This effect determines the final dimensional ratios.
The plate thicknesses within offshore foundations are well adapted to the local load situations. A balanced material utilisation can be achieved by design, because the dimensions of an offshore foundation are large compared to those of hot-rolled plates. Tripods and Monopiles are shell structures: their wall thickness is small relative to the diameter, so they have to be checked against shell buckling. The tower, central tube and legs are assembled from cylindrical or conical sections, called cans, each 2 to 4 m long. Wall thicknesses in the central column are in the range of 40 to 60 mm, with a few cans in high-stress areas up to 90 mm; the wall thicknesses of the conical legs range from 20 to 30 mm.
The design lifetime is a central requirement. Offshore wave loads had already been accounted for in the classic oil and gas industry, but the operation of a wind turbine generator additionally causes high dynamic operating loads. This was strikingly demonstrated by the Growian project, a two-bladed 3 MW onshore turbine that failed in 1983 for this reason.
Calculation methods
FEM methods are mainly used for the assessments. Only these more extensive tools can represent the stress curves in detail and provide the accuracy required for the design. Calculation times have been considerably reduced by scripted modelling and increasing computing speeds, which raised iteration rates and thus improved optimization results.
Summary and outlook
The Tripod foundation for offshore wind turbines represents a remarkable contribution to the beginning of the industrial utilisation of offshore wind energy in German waters. It was conceived by a small, creative circle of German offshore wind pioneers and grew, as further partners joined, into a large multidisciplinary team that realised the vision. That 126 turbines founded on Tripods are operational today is the result of a long-standing, reliable collaboration among many stakeholders.
A desktop study performed in 2014 assessed the feasibility of the foundation concept for the next turbine generation of 8 MW with rotor diameters beyond 160 m. It demonstrated that the weight increase needed to carry the even higher loads would be limited, thereby validating the existing fabrication and installation processes from the earlier projects.
Today the knowledge in offshore engineering accumulated over the Tripod decade is an intangible asset that can be brought into new projects using Monopile, Jacket, or indeed Tripod concepts, applying the current state of the art to lower the cost of energy.
References
Deep foundations
Structural steel
Structural engineering
Offshore wind farms | Tripod (foundation) | Engineering | 2,573 |
27,046,132 | https://en.wikipedia.org/wiki/Rustock%20botnet | The Rustock botnet was a botnet that operated from around 2006 until March 2011.
It consisted of computers running Microsoft Windows and was capable of sending up to 25,000 spam messages per hour from an infected PC. At the height of its activities, it sent an average of 192 spam messages per compromised machine per minute. Reported estimates of its size vary greatly across sources, with claims that the botnet may have comprised anywhere between 150,000 and 2,400,000 machines. The botnet grew and was maintained mostly through self-propagation: it sent large volumes of malicious e-mails intended to infect the machines of users who opened them with a trojan, which would incorporate the machine into the botnet.
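These figures can be cross-checked with simple arithmetic. The numbers below come from the estimates above; the daily totals are a naive upper bound that ignores how many infected machines were actually online at any moment.

```python
# Naive arithmetic cross-check of the figures quoted above (illustration
# only; real output depends on how many machines were online at once).

per_minute = 192                  # average spam messages per machine per minute
per_hour = per_minute * 60        # 11,520/hour, below the "up to 25,000" peak

low, high = 150_000, 2_400_000    # reported range of botnet size (machines)
daily_low = low * per_minute * 60 * 24     # naive lower-bound estimate per day
daily_high = high * per_minute * 60 * 24   # naive upper-bound estimate per day
print(per_hour, daily_low, daily_high)
```

Even the lower bound works out to tens of billions of messages per day, which makes the measurable drop in global spam after takedown actions plausible.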
The botnet took a hit after the 2008 takedown of McColo, an ISP responsible for hosting most of the botnet's command-and-control servers. McColo briefly regained Internet connectivity for several hours, during which up to 15 Mbit/s of traffic was observed, likely indicating a transfer of command and control to Russia. While these actions temporarily reduced global spam levels by around 75%, the effect did not last long: spam levels increased by 60% between January and June 2009, 40% of which was attributed to the Rustock botnet.
On March 16, 2011, the botnet was taken down through what was initially reported as a coordinated effort by Internet service providers and software vendors. It was revealed the next day that the take-down, called Operation b107, was the action of Microsoft, U.S. federal law enforcement agents, FireEye, and the University of Washington.
To capture the individuals involved with the Rustock botnet, Microsoft on July 18, 2011, offered "a monetary reward in the amount of US$250,000 for new information that results in the identification, arrest and criminal conviction of such individual(s)."
Operations
Botnets are composed of infected computers used by unwitting Internet users. To hide its presence from the user and anti-virus software, the Rustock botnet employed rootkit technology. Once a computer was infected, it would seek contact with command-and-control servers at a number of IP addresses and any of 2,500 domains and backup domains, which could direct the zombies in the botnet to perform various tasks such as sending spam or executing distributed denial-of-service (DDoS) attacks. Ninety-six servers were in operation at the time of the takedown. When sending spam, the botnet used TLS encryption in around 35 percent of cases as an extra layer of protection to hide its presence. Whether detected or not, this created additional overhead for the mail servers handling the spam; some experts pointed out that this extra load could negatively impact the Internet's mail infrastructure, as most e-mails sent are spam.
See also
Botnet
Helpful worm
McColo
Operation: Bot Roast
Srizbi botnet
Zombie (computer science)
Alureon
Conficker
Gameover ZeuS
Storm botnet
Bagle (computer worm)
ZeroAccess botnet
Regin (malware)
Cyberwarfare by Russia
Zeus (malware)
References
Internet security
Distributed computing projects
Spamming
Botnets | Rustock botnet | Engineering | 665 |
7,513,883 | https://en.wikipedia.org/wiki/Inis%20Beag | Inis Beag (Irish, 'Little Island') is the pseudonym of an Irish island whose community of the 1960s was described by American cultural anthropologist John Cowan Messenger. Messenger lived on the island and studied the community in 1959 and 1960. He subsequently wrote several academic works about his experience, including Inis Beag: Isle of Ireland and Sex and Repression in an Irish Folk Community.
Location
Messenger describes Inis Beag as a remote island off the coast of Connemara, Ireland, near the Aran Islands in the 1960s. It contains a small, isolated, Irish-speaking, Catholic population. Messenger states that during the period of his study between 1958 and 1966, Inis Beag supported a population of around 350, mostly living by subsistence farming and fishing. The name "Inis Beag" is a pseudonym and was used by Messenger to protect the privacy of the island's people. Subsequent texts have stated that the island's true identity is Inisheer.
Island life
Messenger characterises the Gaelic revival movement as nativism. He states that members of the revival and those involved in the Irish independence movement held up the island and the surrounding areas as examples of true Irish identity. Messenger argues that many of the written works by these members were "romanticized," focusing on cultural forms that outsiders found attractive. These included "the traditional garb of the folk, their skill in rowing the famed canoe, called curach, the manner in which they manufacture soils and grow in them a variety of crops, and their Gaelic speech."
Messenger argues that these customs were not as pure as other outsiders claimed. He found that 11 of the 111 adult males and 9 of the 85 adult females had given up the traditional local clothing for imported styles from the mainland. This behaviour was especially prevalent among the younger women, with no adherents between the ages of 18 and 29. He also found that use of the local curach had declined in recent decades, from 30 to 50 three-man crews fishing nearly all year in the early 1900s to nine crews working from the island in 1960. Messenger found that essentially all of the islanders older than eight spoke English proficiently, mixed English regularly into their speech, and even confessed to their priests in English. He attributed the rise in English to a practical view of language; many young people emigrate and would be disadvantaged by speaking only Irish.
Sexuality
Messenger reported that Inis Beag had no formal sex education, and sexual intercourse was treated by both sexes and the local curate as a "duty" which must be "endured." Messenger proposed that the institutionalisation of repression of sexual conduct was due to early replacement of physical affection with verbal affection by the time a child can walk. He stated that "any forms of direct or indirect sexual expressionβmasturbation, mutual exploration of bodies, use of either standard or slang words relating to sex, and open urination and defecation" were "punished severely by word and deed." He found that children were separated by gender in almost all activities. He also reported that islanders tended to bathe only the hands, face, and feet and developed an "obsessive fear" of nudity early in life. In some households, "dogs [were] whipped for licking their genitals and soon [learned] to indulge in this behavior outside when unobserved."
He argues that repression of sexuality also manifested in intercourse. Elders of the island boasted that there was no premarital sex, although some young men did admit to it in rumor. Messenger states that when couples did have sex with each other the husband always initiated and the wife was commonly passive. Messenger found that couples left their underclothes only partially removed and used only the male superior position, and when the man orgasmed, he fell asleep almost immediately.
Messenger argues that people of the island behaved like this due to informal and formal social control and extreme ignorance. He states that menstruation and menopause were regarded with profound misgivings. He states that women asked Messenger's wife about the female cycle more than any other question about sex phenomena. He states that young women were often traumatized by menarche, and that in 1960 at least three older women had confined themselves entirely to bed to avoid a potential "madness" induced by menopause. Women sent their children out of the room when Messenger's wife would inquire about their pregnancies.
Messenger viewed the men as grossly ignorant about sex. He found that female orgasm was unknown to the men, not experienced by the women, or shunned and hidden. Messenger reported that one middle-aged bachelor who considered himself "wise in the ways of the outside world . . . described the violent bodily reactions of a girl to his fondling" and when Messenger explained, he "admitted not knowing that women also could achieve climax." Men of the island thought sexual intercourse would weaken them, and would abstain the night before an exhausting task. Despite all this, Messenger could not report a single family that was childless due to ignorance. He states that this was a phenomenon in some other regions of Ireland. When Messenger inquired how newly married couples learned how to copulate, he was told that "after marriage, nature takes its course."
Reception
In Marriage in Ireland, a collection of essays edited by Art Cosgrove covering the history of marriage practices and norms in Ireland from the 8th century to the 1980s, Trinity College Dublin historian David Fitzpatrick is critical of this work. He describes Messenger's account in Inis Beag (1969), along with two other American anthropological works on Irish society from that time, as "highly coloured". In the context of these works he states that Irish post-famine sexual mores were common across European peasant communities, arguing that if Ireland was sick (as implied by these works), so was rural Europe.
References
Bibliography
A more recent edition, with an ISBN, is: Messenger, John C. Inis Beag: Isle of Ireland. Long Grove, IL: Waveland Press, 1983. OCLC 10578752
John C. Messenger, "Sex and Repression in an Irish Folk Community", in Donald S. Marshall and Robert C. Suggs, eds., Human Sexual Behavior: Variations in the Ethnographic Spectrum, 1971. Basic Books, New York.
John C. Messenger, Inis Beag Revisited: The Anthropologist as Observant Participator. Salem, Wisconsin: Sheffield, 1989.
John Messenger, Peasants, Proverbs, and Projection. Central Issues in Anthropology, April 1991, Vol. 9, No. 1: pp. 99–105.
External links
UCSB SexInfo: Sex in a Conservative Society: Ines Beag, Ireland
https://web.archive.org/web/20070428175248/http://www2.hu-berlin.de/sexology/IES/ireland.html
Sexuality Information and Education Council of the United States
John Messenger bio
Sexology
Fictional islands
Cultural anthropology | Inis Beag | Biology | 1,431 |
7,074,545 | https://en.wikipedia.org/wiki/Dawn%20simulation | Dawn simulation is a technique that involves timing a light, often called a wake-up light, sunrise alarm clock, or natural light alarm clock, in the bedroom to come on gradually, over a period of 30 minutes to 2 hours, before awakening to simulate dawn.
History
The concept of dawn simulation was first patented in 1890 as "mechanical sunrise". Modern electronic units were patented in 1973, and variations and improvements have been patented every few years since. Clinical trials were conducted by David Avery, MD, in the 1980s at Columbia University, following a long line of basic laboratory research showing that animals' circadian rhythms are exquisitely sensitive to the dim, gradually rising dawn signal at the end of the night. The first modern commercial product was created by Outside In of Cambridge, UK (now known as Lumie) in 1993. https://www.lumie.com/30-years-of-light-therapy/5/
Clinical use
There are two types of dawn that have been used effectively in a clinical setting: a naturalistic dawn mimicking a springtime sunrise (but used in mid-winter when it is still dark outside), and a sigmoidal-shaped dawn (30 minutes to 2 hours). When used successfully, patients are able to sleep through the dawn and wake up easily at the simulated sunrise, after which the day's treatment is over. The theory behind dawn simulation is based on the fact that early morning light signals are much more effective at advancing the biological clock than are light signals given at other times of day (see Phase response curve).
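A sigmoidal dawn of the kind described can be sketched in a few lines. The 30-minute duration, 250 lx target and steepness constant below are assumed illustrative values within the ranges mentioned in this article, not a clinical protocol.

```python
import math

# Sketch of a sigmoid-shaped dawn ramp: illuminance rises smoothly from
# darkness to a target level over the chosen duration. Parameter values
# are illustrative assumptions.

def dawn_illuminance(t_min, duration_min=30.0, target_lux=250.0, steepness=10.0):
    """Illuminance in lux at t_min minutes into the simulated dawn."""
    x = t_min / duration_min                  # normalized time in [0, 1]
    return target_lux / (1.0 + math.exp(-steepness * (x - 0.5)))

ramp = [dawn_illuminance(t) for t in range(0, 31, 5)]
print([round(v, 1) for v in ramp])   # rises smoothly from near 0 to ~250 lx
```

In a real device the same curve would drive a dimmable lamp on a timer ending at the desired wake-up time; a mirrored curve could serve as the "dusk" signal mentioned later.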
Comparison with bright light therapy
Dawn simulation generally uses light sources that range in illuminance from 100 to 300 lux, while bright light boxes are usually in the 10,000-lux range. Approximately 19% of patients discontinue post-awakening bright light therapy due to inconvenience. Because the entire treatment is complete before awakening, dawn simulation may be a more convenient alternative to post-awakening bright light therapy. In terms of efficacy, some studies have shown dawn simulation to be more effective than standard bright light therapy while others have shown no difference or shown that bright light therapy is superior. Some patients with seasonal affective disorder use both dawn simulation and bright light therapy to provide maximum effect at the start of the day.
Other uses
In an elaboration of the method, patients have also been presented with a dim dusk signal at bedtime, with indications that it eases sleep onset. In addition, the technique has been used clinically with patients who suffer from delayed sleep phase syndrome, helping them to awaken earlier in gradual steps, as the simulated dawn is moved earlier.
Non-clinical sleep and wake-up uses
A dawn simulator can be used as an alarm clock. Light enters through the eyelids triggering the body to begin its wake-up cycle, including the release of cortisol, so that by the time the light is at full brightness, sleepers wake up on their own, without the need for an alarm. Most commercial alarm clocks include a "dusk" mode as well for bedtime.
References
Further reading
Circadian rhythm | Dawn simulation | Biology | 631 |
325,714 | https://en.wikipedia.org/wiki/Hopf%20algebra | In mathematics, a Hopf algebra, named after Heinz Hopf, is a structure that is simultaneously an (unital associative) algebra and a (counital coassociative) coalgebra, with these structures' compatibility making it a bialgebra, and that moreover is equipped with an antihomomorphism satisfying a certain property. The representation theory of a Hopf algebra is particularly nice, since the existence of compatible comultiplication, counit, and antipode allows for the construction of tensor products of representations, trivial representations, and dual representations.
Hopf algebras occur naturally in algebraic topology, where they originated and are related to the H-space concept, in group scheme theory, in group theory (via the concept of a group ring), and in numerous other places, making them probably the most familiar type of bialgebra. Hopf algebras are also studied in their own right, with much work on specific classes of examples on the one hand and classification problems on the other. They have diverse applications ranging from condensed matter physics and quantum field theory to string theory and LHC phenomenology.
Formal definition
Formally, a Hopf algebra is an (associative and coassociative) bialgebra H over a field K together with a K-linear map S: H → H (called the antipode) such that the following condition, usually drawn as a commuting diagram, holds:

∇ ∘ (S ⊗ id) ∘ Δ = η ∘ ε = ∇ ∘ (id ⊗ S) ∘ Δ

Here Δ is the comultiplication of the bialgebra, ∇ its multiplication, η its unit and ε its counit. In sumless Sweedler notation, this property can also be expressed as

S(c(1)) c(2) = ε(c) 1 = c(1) S(c(2))  for all c in H.
As for algebras, one can replace the underlying field K with a commutative ring R in the above definition.
The definition of Hopf algebra is self-dual (as reflected in the symmetry of the above diagram), so if one can define a dual of H (which is always possible if H is finite-dimensional), then it is automatically a Hopf algebra.
Structure constants
Fixing a basis {e_k} for the underlying vector space, one may define the algebra in terms of structure constants for multiplication:

e_i e_j = Σ_k μ_ij^k e_k

for co-multiplication:

Δ(e_i) = Σ_{j,k} ν_i^{jk} e_j ⊗ e_k

and the antipode:

S(e_i) = Σ_j τ_i^j e_j

Associativity then requires that

Σ_k μ_ij^k μ_kl^m = Σ_k μ_jl^k μ_ik^m

while co-associativity requires that

Σ_p ν_i^{pc} ν_p^{ab} = Σ_p ν_i^{ap} ν_p^{bc}

The connecting axiom (compatibility of Δ with the multiplication) requires that

Σ_p μ_ij^p ν_p^{ab} = Σ_{c,d,e,f} ν_i^{cd} ν_j^{ef} μ_ce^a μ_df^b
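As a concrete check of such identities, the group algebra K[Z/2] with basis e_0 = 1, e_1 = g has structure constants μ_ij^k = [k = i+j mod 2], ν_i^{jk} = [j = k = i] and τ_i^j = [j = -i mod 2]. The sketch below verifies associativity, coassociativity and the antipode axiom numerically for this small example; the notation is ad hoc, chosen for illustration.

```python
from itertools import product

# Illustrative check: structure constants of the group algebra K[Z/2],
# basis e_0 = 1, e_1 = g, verified against the Hopf algebra identities.

n = 2
# mu[i][j][k]: coefficient of e_k in e_i e_j  (group multiplication)
mu = [[[1 if k == (i + j) % n else 0 for k in range(n)]
       for j in range(n)] for i in range(n)]
# nu[i][j][k]: coefficient of e_j (x) e_k in Delta(e_i) = e_i (x) e_i
nu = [[[1 if j == i and k == i else 0 for k in range(n)]
       for j in range(n)] for i in range(n)]
# tau[i][j]: coefficient of e_j in S(e_i) = e_{-i}
tau = [[1 if j == (-i) % n else 0 for j in range(n)] for i in range(n)]
eps = [1] * n                                    # counit: eps(e_i) = 1

assoc_ok = all(
    sum(mu[i][j][k] * mu[k][l][m] for k in range(n))
    == sum(mu[j][l][k] * mu[i][k][m] for k in range(n))
    for i, j, l, m in product(range(n), repeat=4))

coassoc_ok = all(
    sum(nu[i][p][c] * nu[p][a][b] for p in range(n))
    == sum(nu[i][a][p] * nu[p][b][c] for p in range(n))
    for i, a, b, c in product(range(n), repeat=4))

def antipode_ok(i):
    """Check sum S(e_(1)) e_(2) = eps(e_i) * 1 on the basis element e_i."""
    out = [0] * n
    for a, b in product(range(n), repeat=2):     # legs of Delta(e_i)
        if nu[i][a][b]:
            for c in range(n):                   # S(e_a) = sum_c tau[a][c] e_c
                for k in range(n):
                    out[k] += nu[i][a][b] * tau[a][c] * mu[c][b][k]
    return out == [eps[i] if k == 0 else 0 for k in range(n)]

print(assoc_ok, coassoc_ok, all(antipode_ok(i) for i in range(n)))  # True True True
```

The same arrays, with n replaced by the group order, work for any finite cyclic group, since group algebras with Δ(g) = g ⊗ g and S(g) = g⁻¹ are the prototypical cocommutative Hopf algebras.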
Properties of the antipode
The antipode S is sometimes required to have a K-linear inverse, which is automatic in the finite-dimensional case, or if H is commutative or cocommutative (or more generally quasitriangular).
In general, S is an antihomomorphism, so S² is a homomorphism, which is therefore an automorphism if S is invertible (as may be required).

If S² = idH, then the Hopf algebra is said to be involutive (and the underlying algebra with involution is a *-algebra). If H is finite-dimensional semisimple over a field of characteristic zero, commutative, or cocommutative, then it is involutive.
If a bialgebra B admits an antipode S, then S is unique ("a bialgebra admits at most 1 Hopf algebra structure"). Thus, the antipode does not pose any extra structure which we can choose: Being a Hopf algebra is a property of a bialgebra.
The antipode is an analog of the inversion map on a group that sends g to g⁻¹.
Hopf subalgebras
A subalgebra A of a Hopf algebra H is a Hopf subalgebra if it is a subcoalgebra of H and the antipode S maps A into A. In other words, a Hopf subalgebra A is a Hopf algebra in its own right when the multiplication, comultiplication, counit and antipode of H are restricted to A (and additionally the identity 1 of H is required to be in A). The NicholsβZoeller freeness theorem of Warren Nichols and Bettina Zoeller (1989) established that the natural A-module H is free of finite rank if H is finite-dimensional: a generalization of Lagrange's theorem for subgroups. As a corollary of this and integral theory, a Hopf subalgebra of a semisimple finite-dimensional Hopf algebra is automatically semisimple.
A Hopf subalgebra A is said to be right normal in a Hopf algebra H if it satisfies the condition of stability, adr(h)(A) ⊆ A for all h in H, where the right adjoint mapping adr is defined by adr(h)(a) = S(h(1))ah(2) for all a in A, h in H. Similarly, a Hopf subalgebra A is left normal in H if it is stable under the left adjoint mapping defined by adl(h)(a) = h(1)aS(h(2)). The two conditions of normality are equivalent if the antipode S is bijective, in which case A is said to be a normal Hopf subalgebra.
A normal Hopf subalgebra A in H satisfies the condition (of equality of subsets of H): HA+ = A+H, where A+ denotes the kernel of the counit on A. This normality condition implies that HA+ is a Hopf ideal of H (i.e. an algebra ideal in the kernel of the counit, a coalgebra coideal and stable under the antipode). As a consequence one has a quotient Hopf algebra H/HA+ and an epimorphism H → H/A+H, a theory analogous to that of normal subgroups and quotient groups in group theory.
Hopf orders
A Hopf order O over an integral domain R with field of fractions K is an order in a Hopf algebra H over K which is closed under the algebra and coalgebra operations: in particular, the comultiplication Δ maps O to O ⊗ O.
Group-like elements
A group-like element is a nonzero element x such that Δ(x) = x ⊗ x. The group-like elements form a group with inverse given by the antipode. A primitive element x satisfies Δ(x) = x ⊗ 1 + 1 ⊗ x.
Examples
Note that functions on a finite group can be identified with the group ring, though these are more naturally thought of as dual β the group ring consists of finite sums of elements, and thus pairs with functions on the group by evaluating the function on the summed elements.
Cohomology of Lie groups
The cohomology algebra H•(G; k) (over a field k) of a Lie group G is a Hopf algebra: the multiplication is provided by the cup product, and the comultiplication

H•(G) → H•(G × G) ≅ H•(G) ⊗ H•(G)

by the group multiplication G × G → G. This observation was actually a source of the notion of Hopf algebra. Using this structure, Hopf proved a structure theorem for the cohomology algebra of Lie groups.
Theorem (Hopf). Let A be a finite-dimensional, graded commutative, graded cocommutative Hopf algebra over a field of characteristic 0. Then A (as an algebra) is a free exterior algebra with generators of odd degree.
Quantum groups and non-commutative geometry
Most examples above are either commutative (i.e. the multiplication is commutative) or co-commutative (i.e. Δ = T ∘ Δ where the twist map T: H ⊗ H → H ⊗ H is defined by T(x ⊗ y) = y ⊗ x). Other interesting Hopf algebras are certain "deformations" or "quantizations" of those from example 3 which are neither commutative nor co-commutative. These Hopf algebras are often called quantum groups, a term that is so far only loosely defined. They are important in noncommutative geometry, the idea being the following: a standard algebraic group is well described by its standard Hopf algebra of regular functions; we can then think of the deformed version of this Hopf algebra as describing a certain "non-standard" or "quantized" algebraic group (which is not an algebraic group at all). While there does not seem to be a direct way to define or manipulate these non-standard objects, one can still work with their Hopf algebras, and indeed one identifies them with their Hopf algebras. Hence the name "quantum group".
Representation theory
Let A be a Hopf algebra, and let M and N be A-modules. Then, M ⊗ N is also an A-module, with

a · (m ⊗ n) = (a(1) · m) ⊗ (a(2) · n)

for m ∈ M, n ∈ N and Δ(a) = (a(1), a(2)). Furthermore, we can define the trivial representation as the base field K with

a · m = ε(a) m

for m ∈ K. Finally, the dual representation of A can be defined: if M is an A-module and M* is its dual space, then

(a · f)(m) = f(S(a) · m)

where f ∈ M* and m ∈ M.
The relationship between Δ, ε, and S ensures that certain natural homomorphisms of vector spaces are indeed homomorphisms of A-modules. For instance, the natural isomorphisms of vector spaces M ≅ M ⊗ K and M ≅ K ⊗ M are also isomorphisms of A-modules. Also, the map of vector spaces M* ⊗ M → K with f ⊗ m ↦ f(m) is a homomorphism of A-modules. However, the map M ⊗ M* → K is not necessarily a homomorphism of A-modules.
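These constructions can be made concrete for the group algebra of Z/2. In the sketch below, the group-like element g acts on M = K² by swapping coordinates (so Δ(g) = g ⊗ g, ε(g) = 1 and S(g) = g⁻¹ = g), and we check numerically that the evaluation map M* ⊗ M → K commutes with the action of g. The specific vectors are arbitrary illustrations.

```python
# Concrete sketch for A = K[Z/2]: g acts on M = K^2 by swapping coordinates.
# Delta(g) = g (x) g, eps(g) = 1, S(g) = g^-1 = g. We verify that evaluating
# a dual vector on a module vector is equivariant under the action of g.

def apply(mat, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in mat]

P = [[0, 1], [1, 0]]          # action of g on M (a swap, its own inverse)
S_of_g = P                    # S(g) = g^-1 acts by the inverse matrix

def g_dot_f(f, m):
    """Evaluate (g . f) at m, using (g . f)(m) = f(S(g) . m)."""
    return sum(fi * vi for fi, vi in zip(f, apply(S_of_g, m)))

f = [2.0, -3.0]               # arbitrary functional on M
m = [1.0, 5.0]                # arbitrary vector in M

lhs = g_dot_f(f, apply(P, m))               # act on both tensor legs, then evaluate
rhs = sum(fi * vi for fi, vi in zip(f, m))  # eps(g) * f(m) = f(m)
print(lhs == rhs)   # True: evaluation is a homomorphism of A-modules
```

The same check run on the coevaluation direction M ⊗ M* → K would fail for a Hopf algebra with S² ≠ id, matching the caveat in the last sentence above.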
Related concepts
Graded Hopf algebras are often used in algebraic topology: they are the natural algebraic structure on the direct sum of all homology or cohomology groups of an H-space.
Locally compact quantum groups generalize Hopf algebras and carry a topology. The algebra of all continuous functions on a Lie group is a locally compact quantum group.
Quasi-Hopf algebras are generalizations of Hopf algebras, where coassociativity only holds up to a twist. They have been used in the study of the KnizhnikβZamolodchikov equations.
Multiplier Hopf algebras, introduced by Alfons Van Daele in 1994, are generalizations of Hopf algebras in which the comultiplication maps an algebra (with or without unit) into the multiplier algebra of the tensor product of the algebra with itself.
Hopf group-(co)algebras, introduced by V. G. Turaev in 2000, are also generalizations of Hopf algebras.
Weak Hopf algebras
Weak Hopf algebras, or quantum groupoids, are generalizations of Hopf algebras. Like Hopf algebras, weak Hopf algebras form a self-dual class of algebras; i.e., if H is a (weak) Hopf algebra, so is H*, the dual space of linear forms on H (with respect to the algebra-coalgebra structure obtained from the natural pairing with H and its coalgebra-algebra structure). A weak Hopf algebra H is usually taken to be a
finite-dimensional algebra and coalgebra with coproduct Δ: H → H ⊗ H and counit ε: H → k satisfying all the axioms of a Hopf algebra except possibly Δ(1) ≠ 1 ⊗ 1 or ε(ab) ≠ ε(a)ε(b) for some a, b in H. Instead one requires the following:
for all a, b, and c in H.
H has a weakened antipode S: H β H satisfying the axioms:
for all a in H (the right-hand side is the interesting projection usually denoted by ΠR(a) or εs(a) with image a separable subalgebra denoted by HR or Hs);
for all a in H (another interesting projection usually denoted by ΠL(a) or εt(a) with image a separable algebra HL or Ht, anti-isomorphic to HR via S);
for all a in H.
Note that if Δ(1) = 1 ⊗ 1, these conditions reduce to the two usual conditions on the antipode of a Hopf algebra.
The axioms are partly chosen so that the category of H-modules is a rigid monoidal category. The unit H-module is the separable algebra HL mentioned above.
For example, a finite groupoid algebra is a weak Hopf algebra. In particular, the groupoid algebra on [n] with one pair of invertible arrows eij and eji between i and j in [n] is isomorphic to the algebra H of n × n matrices. The weak Hopf algebra structure on this particular H is given by the coproduct Δ(eij) = eij ⊗ eij, counit ε(eij) = 1 and antipode S(eij) = eji. The separable subalgebras HL and HR coincide and are non-central commutative algebras in this particular case (the subalgebra of diagonal matrices).
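The matrix-unit example can be verified numerically. The sketch below (plain NumPy; Δ is extended to products of matrix units via the Kronecker product) confirms that Δ(eij) = eij ⊗ eij is multiplicative, while ε(eij) = 1 and Δ(1) violate exactly the two axioms a weak Hopf algebra is allowed to relax:

```python
import numpy as np

n = 3

def e(i, j):
    """Matrix unit e_ij in M_n."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def delta(i, j):
    """Coproduct on a matrix unit: Delta(e_ij) = e_ij (x) e_ij."""
    return np.kron(e(i, j), e(i, j))

# Delta is an algebra map on matrix units: e_ij e_kl = [j == k] e_il, so
# Delta(e_ij e_kl) = [j == k] e_il (x) e_il must equal Delta(e_ij) Delta(e_kl).
# (The scalar [j == k] is 0 or 1, so applying Delta to the product via kron
# of the product with itself is legitimate here.)
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                prod = e(i, j) @ e(k, l)
                assert np.allclose(np.kron(prod, prod), delta(i, j) @ delta(k, l))

def eps(m):
    """Counit extended linearly: eps(sum c_ij e_ij) = sum c_ij."""
    return m.sum()

# The counit is NOT multiplicative: eps(e_01 e_01) = eps(0) = 0,
# but eps(e_01) * eps(e_01) = 1.
assert eps(e(0, 1) @ e(0, 1)) == 0.0
assert eps(e(0, 1)) * eps(e(0, 1)) == 1.0

# Likewise Delta(1) != 1 (x) 1: Delta(1) = sum_i e_ii (x) e_ii is a proper
# projection, not the identity of M_n (x) M_n.
delta_one = sum(delta(i, i) for i in range(n))
assert not np.allclose(delta_one, np.eye(n * n))
```

The failure of ε(ab) = ε(a)ε(b) and of Δ(1) = 1 ⊗ 1 is precisely what distinguishes this weak Hopf algebra from an ordinary one.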
Early theoretical contributions to weak Hopf algebras are to be found in the references.
Hopf algebroids
See Hopf algebroid
Analogy with groups
Groups can be axiomatized by the same diagrams (equivalently, operations) as a Hopf algebra, where G is taken to be a set instead of a module. In this case:
the field K is replaced by the 1-point set
there is a natural counit (map to 1 point)
there is a natural comultiplication (the diagonal map)
the unit is the identity element of the group
the multiplication is the multiplication in the group
the antipode is the inverse
In this philosophy, a group can be thought of as a Hopf algebra over the "field with one element".
Hopf algebras in braided monoidal categories
The definition of Hopf algebra is naturally extended to arbitrary braided monoidal categories. A Hopf algebra in such a category (C, ⊗, I) is a sextuple (H, μ, η, Δ, ε, S), where H is an object in C, and
μ: H ⊗ H → H (multiplication),
η: I → H (unit),
Δ: H → H ⊗ H (comultiplication),
ε: H → I (counit),
S: H → H (antipode)
are morphisms in C such that
1) the triple (H, μ, η) is a monoid in the monoidal category (C, ⊗, I), i.e. the following diagrams are commutative:
2) the triple (H, Δ, ε) is a comonoid in the monoidal category (C, ⊗, I), i.e. the following diagrams are commutative:
3) the structures of monoid and comonoid on H are compatible: the multiplication μ and the unit η are morphisms of comonoids, and (this is equivalent in this situation) at the same time the comultiplication Δ and the counit ε are morphisms of monoids; this means that the following diagrams must be commutative:
where λ is the left unit morphism in C, and Θ is the natural transformation of functors which is unique in the class of natural transformations of functors composed from the structural transformations (associativity, left and right units, transposition, and their inverses) in the category C.
The quintuple (H, μ, η, Δ, ε) with the properties 1), 2), 3) is called a bialgebra in the category C;
4) the diagram of antipode is commutative:
The typical examples are the following.
Groups. In the monoidal category of sets (with the cartesian product as the tensor product, and an arbitrary singleton, say {•}, as the unit object) a triple (X, μ, η) is a monoid in the categorical sense if and only if it is a monoid in the usual algebraic sense, i.e. if the operations μ and η behave like usual multiplication and unit in X (but possibly without the invertibility of elements x ∈ X). At the same time, a triple (X, Δ, ε) is a comonoid in the categorical sense iff Δ is the diagonal operation Δ(x) = (x, x) (and the operation ε is defined uniquely as well: ε(x) = •). And any such structure of comonoid is compatible with any structure of monoid in the sense that the diagrams in section 3 of the definition always commute. As a corollary, each monoid in the category of sets can naturally be considered as a bialgebra, and vice versa. The existence of the antipode S for such a bialgebra means exactly that every element x ∈ X has an inverse element x⁻¹ with respect to the multiplication μ. Thus, in the category of sets Hopf algebras are exactly groups in the usual algebraic sense.
Classical Hopf algebras. In the special case when C is the category of vector spaces over a given field K, the Hopf algebras in C are exactly the classical Hopf algebras described above.
Functional algebras on groups. The standard functional algebras C(G), E(G), O(G), P(G) (of continuous, smooth, holomorphic, and regular functions, respectively) on groups are Hopf algebras in the category (Ste, ⊛) of stereotype spaces.
Group algebras. The stereotype group algebras C⋆(G), E⋆(G), O⋆(G), P⋆(G) (of measures, distributions, analytic functionals and currents, respectively) on groups are Hopf algebras in the category (Ste, ⊛) of stereotype spaces. These Hopf algebras are used in the duality theories for non-commutative groups.
See also
Quasitriangular Hopf algebra
Algebra/set analogy
Representation theory of Hopf algebras
Ribbon Hopf algebra
Superalgebra
Supergroup
Anyonic Lie algebra
Sweedler's Hopf algebra
Hopf algebra of permutations
MilnorβMoore theorem
Notes and references
References
Heinz Hopf, Über die Topologie der Gruppen-Mannigfaltigkeiten und ihrer Verallgemeinerungen, Annals of Mathematics 42 (1941), 22–52. Reprinted in Selecta Heinz Hopf, pp. 119–151, Springer, Berlin (1964).
Monoidal categories
Representation theory | Hopf algebra | Mathematics | 3,758 |
63,628,505 | https://en.wikipedia.org/wiki/Insensible%20perspiration | Insensible perspiration, also known as transepidermal water loss, is the passive vapour diffusion of water through the epidermis. Insensible perspiration takes place at an almost constant rate and reflects evaporative loss from the epithelial cells of the skin. Unlike sweating, the lost fluid is pure without additional solutes. For this reason, it can also be referred to as "insensible water loss".
The amount of water lost in this way is deemed to be approximately per day. Some sources broaden the definition of insensible perspiration to include not only the water lost through the skin, but also the water lost through the epithelium of the respiratory tract, which is also approximately per day.
Insensible perspiration is the main source of heat loss from the body, with the figure placed at around 480 kcal per day, which is approximately 25% of basal heat production. Insensible perspiration is not under regulatory control.
History
Known in Latin as perspiratio insensibilis, the concept was already known to Galen in ancient Greece and was studied by the Venetian Santorio Santorio, who experimented on himself and observed that a significant part of the weight of what he ate and drank was neither excreted in his faeces or urine nor added to his body weight. He was able to measure the loss with a weighing chair that he designed.
References
Excretion | Insensible perspiration | Biology | 300 |
7,949,129 | https://en.wikipedia.org/wiki/ICT%20Development%20Index | The ICT Development Index (IDI) is an index published by the United Nations International Telecommunication Union based on internationally agreed information and communication technologies (ICT) indicators. This makes it a valuable tool for benchmarking the most important indicators for measuring the information society. The IDI is a standard tool that governments, operators, development agencies, researchers and others can use to measure the digital divide and compare ICT performance within and across countries.
Designed to analyze the level of development of the information and communication technology (ICT) sector, the ICT Development Index (IDI) is a composite indicator that was published by ITU between 2009 and 2017. It was discontinued in 2018 owing to issues of data availability and quality. In October 2022, ITU's Plenipotentiary Conference 2022 in Bucharest adopted a revised text of Resolution 131, which defines, inter alia, the main features of the process for developing and adopting a new IDI methodology and of the IDI itself. In November 2023, the revised IDI methodology was approved by the Member States and is valid for four years. In December 2023, the 2023 edition of the IDI based on the new methodology was released. The 2024 edition of the IDI was released in June 2024.
List of countries by ICT Development Index (IDI)
The following table shows the most recent values of the ICT Development Index, based on data published by International Telecommunication Union in 2024. Sorting is alphabetical by country code, according to ISO 3166-1 alpha-3.
References
Economic country classifications
IT infrastructure | ICT Development Index | Technology | 321 |
1,644,367 | https://en.wikipedia.org/wiki/Refinement%20%28computing%29 | Refinement is a generic term of computer science that encompasses various approaches for producing correct computer programs and simplifying existing programs to enable their formal verification.
Program refinement
In formal methods, program refinement is the verifiable transformation of an abstract (high-level) formal specification into a concrete (low-level) executable program. Stepwise refinement allows this process to be done in stages. Logically, refinement normally involves implication, but there can be additional complications.
The progressive just-in-time preparation of the product backlog (requirements list) in agile software development approaches, such as Scrum, is also commonly described as refinement.
Data refinement
Data refinement is used to convert an abstract data model (in terms of sets for example) into implementable data structures (such as arrays). Operation refinement converts a specification of an operation on a system into an implementable program (e.g., a procedure). The postcondition can be strengthened and/or the precondition weakened in this process. This reduces any nondeterminism in the specification, typically to a completely deterministic implementation.
For example, x′ ∈ {1,2,3} (where x′ is the value of the variable x after an operation) could be refined to x′ ∈ {1,2}, then x′ ∈ {1}, and implemented as x := 1. Implementations of x := 2 and x := 3 would be equally acceptable in this case, using a different route for the refinement. However, we must be careful not to refine to x′ ∈ {} (equivalent to false) since this is unimplementable; it is impossible to select a member from the empty set.
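The example can be phrased as a mechanical check. In this sketch (the helper name refines is invented for illustration), an operation is modelled by its set of allowed outcomes, and a refinement step is valid if it shrinks nondeterminism without refining to the empty set:

```python
def refines(concrete: set, abstract: set) -> bool:
    """A valid refinement step: every outcome the concrete operation can
    produce is allowed by the abstract specification, and at least one
    outcome exists (the empty set, i.e. false, would be unimplementable)."""
    return bool(concrete) and concrete <= abstract

spec  = {1, 2, 3}   # x' in {1, 2, 3}
step  = {1, 2}      # x' in {1, 2}
impl  = {1}         # deterministic: x := 1
other = {3}         # x := 3, an equally acceptable route

assert refines(step, spec) and refines(impl, step)  # stepwise refinement
assert refines(other, spec)                         # a different route
assert not refines(set(), spec)                     # cannot refine to x' in {}
```

Each step reduces nondeterminism until a deterministic implementation remains, mirroring the stepwise refinement described above.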
The term reification is also sometimes used (coined by Cliff Jones). Retrenchment is an alternative technique when formal refinement is not possible. The opposite of refinement is abstraction.
Refinement calculus
Refinement calculus is a formal system (inspired from Hoare logic) that promotes program refinement. The FermaT Transformation System is an industrial-strength implementation of refinement. The B-Method is also a formal method that extends refinement calculus with a component language: it has been used in industrial developments.
Refinement types
In type theory, a refinement type is a type endowed with a predicate which is assumed to hold for any element of the refined type. Refinement types can express preconditions when used as function arguments or postconditions when used as return types: for instance, the type of a function which accepts natural numbers and returns natural numbers greater than 5 may be written as f: ℕ → {n: ℕ | n > 5}. Refinement types are thus related to behavioral subtyping.
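Languages without refinement types can approximate the predicate with runtime checks. The decorator below is a hypothetical illustration (the name refined and its arguments are invented), enforcing a precondition on the argument and the refinement predicate on the result:

```python
def refined(pre, post):
    """Check a refinement-style precondition on the argument and a
    postcondition predicate on the result at runtime."""
    def wrap(f):
        def g(x):
            assert pre(x), "argument violates the input refinement"
            y = f(x)
            assert post(y), "result violates the output refinement"
            return y
        return g
    return wrap

# f : (x : Nat) -> {y : Nat | y > 5}
@refined(pre=lambda x: isinstance(x, int) and x >= 0,
         post=lambda y: isinstance(y, int) and y > 5)
def add_six(x):
    return x + 6

assert add_six(0) == 6
```

A true refinement type system would discharge these predicates statically; the runtime check only catches violations when they occur.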
See also
Reification (computer science)
References
Formal methods terminology
Computer programming | Refinement (computing) | Mathematics,Technology,Engineering | 592 |
24,271,204 | https://en.wikipedia.org/wiki/C27H31O15 | {{DISPLAYTITLE:C27H31O15}}
The molecular formula C27H31O15+ (molar mass: 595.53 g/mol, exact mass: 595.1663 u) may refer to:
Antirrhinin
Pelargonin
Molecular formulas | C27H31O15 | Physics,Chemistry | 63 |
1,088,143 | https://en.wikipedia.org/wiki/Game%20balance | Game balance is a branch of game design with the intention of improving gameplay and user experience by balancing difficulty and fairness. Game balance consists of adjusting rewards, challenges, and/or elements of a game to create the intended player experience.
Overview and development
Game balance is generally understood as introducing a level of fairness for the players. This includes adjusting difficulty, win-loss conditions, game states, economy balancing, and so on to work in tandem with each other. The concept of game balance depends on the game genre. Most game designers agree that game balancing serves towards providing an engaging player experience, especially through a meta.
Game balance is commonly discussed among game designers, some of whom include Ernest Adams, Jeannie Novak, Ian Schreiber, David Sirlin, and Jesse Schell. The topic is also featured in many YouTube channels specializing in game design topics, including Extra Credits, GMTK and Adam Millard.
Terms specific to game balance
PvP, PvE and Co-Op Games
Player versus player (PvP) describes games that feature a competition between players. PvE is an acronym for player versus environment, where players instead compete with the environment and non-player characters (NPCs).
Co-op is short for "cooperative" and refers to PvE and PvP games in which players can work together.
Game elements
Game elements are things that appear within a video game that contribute to the gameplay experience. In most game design frameworks, game elements are categorized into groups to help describe their roles in the games. A game element refers to anything ranging from a player's special ability to the relations between different game mechanics in a game.
Game mechanics
Game mechanics are constructs that let the player interact with the game world. They define the goal, how players can achieve them and how they cannot, and what happens when they try. These would include challenges, competitive or cooperative gameplay, win-loss conditions and states, feedback loops, and how they relate to one another. Like game balance, the terminology behind game mechanics can vary depending on the designer or the resource's author.
Buffs and nerfs
Buffs are changes to a game which increase the utility of game elements, items, environments, mechanics and so on, while nerfs are changes that decrease the utility of said game elements and alike. Buffs and nerfs are common methods for adjusting the challenge for the player. Both can be achieved indirectly by changing other elements and mechanics or introducing new ones. Both terms can also be used as verbs for the act of making such a change. The first established use of the term "nerf" was in Ultima Online, as a reference to the Nerf brand of toys due to their soft toy bullets. However, there is no concrete evidence to show where the term "buff" came from. It has been perceived that the term came from bodybuilding culture, where it is a slang term which refers to an individual's large musculature as a result of strength-based exercises.
The most popular use of these terms is found in most MMORPGs, where game designers use buffs and nerfs to maintain game balance shortly after introducing a new feature that may cause significant changes to the game's mechanics. This is sometimes due to a method of using or acquiring the object that was not considered by the developers. The frequency and scale of nerfing vary widely from game to game, but almost all MMOs have engaged in nerfing at some point.
Nerfs in various online games, such as Anarchy Online, have spurred in-world protests. Since many items in virtual worlds are sold or traded among players, a nerf may have an outsized impact on the virtual economy. As players respond, the nerf may cause prices to fluctuate before settling down in a different equilibrium. This impact on the economy, along with the original impact of the nerf, can cause large player resentment for even a small change. In particular, in the case of items or abilities which have been nerfed, players can become upset over the perceived wasted efforts in obtaining the now nerfed features. For games where avatars and items represent significant economic value, this may bring up legal issues over the lost value.
Overpowered and underpowered
The terms "overpowered" (OP) and "underpowered" (UP) are applied to game elements and mechanics that are too strong or too weak, indicating a lack of game balance. More precisely, if a game element is too strong even with the lowest possible cost, it is overpowered. If it is too weak even with the highest possible cost, it is underpowered. On the other hand, a game element might simply be too expensive, or not expensive enough, for the benefit it provides.
Colloquially, overpowered is often used when describing a specific class in an RPG, a specific faction in strategic games, or a specific tactic, ability, weapon or unit in various games. For something to be deemed overpowered, it is either the best choice in a disproportionate number of situations (marginalizing other choices) and/or excessively hard to counter by the opponent compared to the effort required to use it.
Underpowered is likewise often used when describing a specific class in an RPG, a specific faction in strategic games, or a specific tactic, ability, weapon or unit that is far weaker than average, making it one of the worst options to pick in most situations. Such options are often marginalized by other choices because they are inherently weaker than similar options or much more easily countered by opponents.
Gimp
A gimp is a character, character class or character ability that is underpowered in the context of the game (e.g. a close-range warrior class equipping a full healing-boosting armor set despite having no healing abilities). Gimped characters lack effectiveness compared to other characters at a similar level of experience. A player may gimp a character by assigning skills and abilities that are inappropriate for the character class, or by developing the character inefficiently. However, this is not always the case, as some characters are purposely "gimped" by the game's developers in order to provide an incentive for raising their level, or to give the player an early head-start. An example of this is the Final Fantasy Mystic Knight class, which starts out weak but is able to become the most powerful class if brought to a very high level. Gimps may also be accidental on the part of the developer, and may require a software patch to balance.
Sometimes, especially in MMORPGs, gimp is used as a synonym for nerf to describe a rule modification that weakens the affected target. Unlike the connotatively neutral term nerf, gimp in this usage often implies that the rule change unfairly disadvantages the target.
Revamp
A revamp (or rework) is a significant change to a game that is designed to improve (or balance out) the game's overall quality. This can include changes to the game's mechanics, art style, storyline, or any other aspect of the game. Revamps are often done in response to player feedback or to address problems that have been identified with the game. They can also be done simply to refresh the game and keep it feeling fresh for players.
Revamps can happen at any time during a game's development or after its release. The difference between a revamp and a remaster is that a remaster is simply an updated version of the game with better graphics and maybe some new content, while a revamp is a completely new game built on the foundation of the original.
Revamps may be optional and may happen if something is not properly nerfed.
Essential concepts of balancing
Chance
While the optimal ratio between skill and chance depends on the target group, the outcome should be influenced more by skill than by chance. Chance and skill can be viewed as partial opposites: chance allows a weaker player to beat a stronger one. Generally, it is advised to favor many small random elements with little individual influence over a few with large effects, making results that differ greatly from the average less likely. The player should also receive a certain degree of information about, and control over, random elements.
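The advice to prefer many small random elements is, at bottom, variance reduction. The following Monte Carlo sketch (the dice and thresholds are illustrative, not from the text) compares rolling 2d6 with a single uniform roll over 2 to 12; both have the same mean of 7, but totals of 11 or more are far rarer with two dice:

```python
import random

random.seed(42)
TRIALS = 100_000

# Many small elements: sum of two six-sided dice (mean 7, bell-shaped).
two_d6 = sum(1 for _ in range(TRIALS)
             if random.randint(1, 6) + random.randint(1, 6) >= 11)

# One big element: a single uniform roll over 2..12 (mean 7, flat).
one_roll = sum(1 for _ in range(TRIALS)
               if random.randint(2, 12) >= 11)

p_two, p_one = two_d6 / TRIALS, one_roll / TRIALS
# Analytically: P(2d6 >= 11) = 3/36 ~ 0.083, P(uniform >= 11) = 2/11 ~ 0.182.
assert p_two < p_one
```

Extreme results become rarer while the average stays the same, which is exactly why outcomes that differ highly from the average become less likely.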
Difficulty
Difficulty is especially important for PvE games, but has at least some significance for PvP games regarding the usability of game elements. The perception of difficulty depends on mechanics and numbers, but also on the player's abilities and expectations. The ideal difficulty therefore depends on the individual player and should put the player in a state of flow. Consequently, for development it can be useful or even necessary to focus on a certain target group. Difficulty should increase throughout the game, since players get better and usually unlock more power. Achieving all of those goals is problematic since, among other things, skill cannot be measured objectively and testers also get continuously better. In any case, difficulty should be adjustable for or by the player in some way.
Dynamic and static balance
Game balance can be divided into a dynamic and a static component. Static balance is mostly concerned with a game's rules and elements: everything that is set before a game or match starts, such as player health and remaining ammunition. Dynamic balance, conversely, describes the balance between players, environment and computer opponents, and how it changes throughout the game, for example through objects moving in the game environment.
Economies
Within a game, everything that has an owner or is provided to a player can be called a resource. This includes commodities, units and tokens, but also information or time, for example. Such resource systems are similar to real economies, especially in regard to trading resources. There are some distinctions for video games, though: open economies receive additional resources from outside, while closed ones do not. Additionally, economies might provide indefinite resources, or all players may have to share a set amount instead. Especially for online games, it is therefore important to design economies to make them "fun" and sustainable.
Fairness
A game is fair if all players have roughly the same chance of winning at the start independent of which offered options they choose. This makes fairness especially important for PvP games. Fairness also means, even for PvE games, that the player never feels like the opponents were unbeatable.
Early appearances of the term "fairness"
Chris Crawford wrote in 1982 of the importance of a game's "illusion of winnability"; Pac-Man is popular because it "appears winnable to most players, yet is never quite winnable". When defeated "the player must perceive", Computer Gaming World wrote in 1984, "that failure was the player's fault (not the game's) but can be corrected by playing better the next time". The illusion of winnability, Crawford said, "is very difficult to maintain. Some games maintain it for the expert but never achieve it for the beginner; these games intimidate all but the most determined players", citing Tempest as an example.
A fair game is winnable but, InfoWorld stated in 1981, can be "complicated or random or appear unfair". Fairness does not necessarily mean that a game is balanced. This is particularly true of action games: Jaime Griesemer, design lead at Bungie, states that "every fight in Halo is unfair". This potential for unfairness creates uncertainty, leading to the tension and excitement that action games seek to deliver. In these cases balancing is instead the management of unfair scenarios, with the ultimate goal of ensuring that all of the strategies which the game intends to support are viable. The extent to which those strategies are equal to one another defines the character of the game in question.
Simulation games can be balanced unfairly in order to be true to life. A wargame may cast the player into the role of a general who was defeated by an overwhelming force, and it is common for the abilities of teams in sports games to mirror those of the real-world teams they represent regardless of the implications for players who pick them.
Player perception can also affect the appearance of fairness. Sid Meier stated that he omitted multiplayer alliances in Civilization because he found that the computer was almost as good as humans in exploiting them, which caused players to think that the computer was cheating.
Meaningful decisions
Meaningful decisions are decisions whose alternatives are neither without effect nor such that one alternative is clearly the best. Choosing between the numbers of a die, for example, would be meaningless if 6 always gave the greatest benefit. That case is a dominant strategy, the most damaging type of meaningless decision, since it leaves no reason to choose any alternative. Meaningful decisions are consequently a central part of the interactive medium of games. Meaningless decisions, also called trivial decisions, do not add anything desirable to a game and might actually harm it by unnecessarily making it more complex. Additionally, a higher number of meaningful decisions can also make a game more complex. Offered decisions should always be meaningful, though. However, for balancing, decisions irrelevant to power might still influence the player's experience, e.g. a decision between cosmetic alternatives like skins.
Strategies
Strategies are specific combinations of actions to achieve a certain goal. Classic examples for this are a rush or focusing on economy in a real-time strategy game. Not only elementary decisions within a strategy, e.g. between game elements, also the decision between strategies should remain meaningful.
Dominant strategies
A dominant strategy is a strategy that is always the most likely to lead to success, making it objectively the best strategy. This renders all related decisions meaningless. Even if a strategy does not always win but is clearly the best, it can be called (almost) dominant. Dominant strategies damage games and should be strongly avoided where possible. However, there is no objective threshold at which a slightly better strategy becomes dominant.
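Dominance can be made precise with a payoff table. In the sketch below (strategy names and payoff numbers are invented for illustration), a strategy is dominant if it does at least as well as every alternative against every possible opponent choice:

```python
# payoff[mine][theirs] = my payoff when I play `mine` and the opponent
# plays `theirs` (illustrative numbers).
payoff = {
    "rush":    {"rush": 2, "economy": 3},
    "economy": {"rush": 1, "economy": 2},
}

def is_dominant(mine: str) -> bool:
    """True if `mine` is at least as good as every alternative
    against every opponent choice."""
    return all(
        payoff[mine][theirs] >= payoff[other][theirs]
        for other in payoff if other != mine
        for theirs in payoff[mine]
    )

assert is_dominant("rush")        # never worse than the alternative: dominant
assert not is_dominant("economy")
```

With such a table, the choice between strategies stops being meaningful, which is exactly why designers adjust payoffs until no single row dominates.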
Metagame
Metagame describes a game around the actual game, including discussions (such as in forums), interactions between players (e.g. at local tournaments), but also the influence of extrinsic factors like finances. The "meta", as it is also called, can act as a self-balancing force, since counters to popular strategies become widely known and lead players to change their play behavior accordingly. This self-balancing force should not prevent developers from intervening in extreme cases of imbalance, though.
Positive and negative feedback
Positive and negative feedback, also called positive and negative feedback loops, essentially describe game mechanics that reward or punish play (usually good or bad play, respectively) with power or the loss of it. Within a positive loop, success grants more power and therefore accelerates further progress, while a negative loop decreases power or attaches additional costs to it. Feedback loops should be implemented carefully so that they target only the intended player; otherwise they might determine the outcome too early, or achieve nothing but delaying the end of the game.
Many games become more challenging if the player is successful. For instance, real-time strategy games often feature "upkeep", a resource tax that scales with the number of units under a player's control. Team games which challenge players to invade their opponents' territory (football, capture the flag) have a negative feedback loop by default: the further a player pushes, the more opponents they are likely to face.
Many games also feature positive feedback loops, where success (for example capturing an enemy territory) leads to greater resources or capabilities, and hence greater scope for further successes (for example further conquests or economic investments). The overall dynamic balance of the game will depend on the comparative strength of positive and negative feedback processes; decreasing the power of positive feedback processes therefore has the same effect as introducing negative feedback processes. Positive feedback processes may be limited by making capabilities a concave function of a measure of raw success. For example:
In RPGs (role-playing games) using a level structure, the level attained is usually a concave transformation of experience points – as the character becomes more proficient, they can defeat more powerful adversaries and hence earn more experience points in a given period of playtime, but conversely more experience points are required to 'level up'. In this case, the player's level, and perhaps also power, does not improve exponentially but approximately linearly in playing time.
In many military strategy games, the conquest of new territory only gives a marginal increase in power – for example, the 'home province' may be exceptionally productive, whereas new territories open to acquisition might have comparatively slight resources, or may be prone to revolts or public-order penalties which reduce their ability to provide significant net resources once resources are allocated to adequately suppressing revolts. In this case, a player with initially impressive successes may become 'overextended', attempting to hold many regions which provide only marginal increases in resources.
In many games there is little or no advantage in acquiring a large hoard of some particular item. For example, having a large and varied cache of equipment or weapons is an advantage, but only weakly so over a somewhat smaller hoard with a similar degree of diversity – only one weapon can be used at a time, and having another in the inventory with very similar capabilities offers only marginal gain. In more general terms, capabilities may depend on some bottleneck where there is no or only weak positive feedback.
Strongly net negative feedback loops can lead to frequent ties. Conversely, if there is on net a strong positive feedback loop, early successes can multiply very rapidly, leading to the player eventually attaining a commanding position from which losing is almost impossible. See also dynamic game difficulty balancing.
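The RPG levelling example above can be made concrete. In this toy model (all numbers invented), XP income grows linearly with level, a positive feedback loop, while the XP cost of the next level grows at the same rate, the concave transformation, so the hours needed per level stay constant and the level grows roughly linearly in playtime:

```python
def simulate(hours: int) -> list[int]:
    """Level reached after each hour of play in the toy model."""
    level, xp = 1, 0.0
    history = []
    for _ in range(hours):
        xp += 10 * level          # stronger characters defeat stronger foes
        while xp >= 100 * level:  # ...but each level-up costs more XP
            xp -= 100 * level
            level += 1
        history.append(level)
    return history

levels = simulate(100)
assert levels[-1] == 11                                   # ~1 level per 10 hours
assert levels[49] - levels[9] == levels[89] - levels[49]  # linear growth
```

Scaling the income faster than the cost would instead make the positive loop dominate, producing the runaway growth the surrounding text warns about.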
Power and costs
Power is everything that provides an advantage, while costs are essentially everything that is a disadvantage. Power and costs can therefore be viewed as positive and negative values on the same scale, which allows calculations involving both at the same time. Sometimes it is only a matter of perspective whether something is an advantage or a disadvantage: is it a benefit to have bonus damage against dragons, or a drawback not to receive it against other targets? A crucial part of game balancing consists in relating power and costs to each other and finding a suitable relation in the first place, e.g. a power curve. In addition, costs might not be explicitly quantified: spending gold on something from a finite amount limits future purchases, and certain investments might have prerequisites before they even become available. Sometimes a game does not show such disadvantages at all. All of this can be referred to as shadow costs.
Rewards
Every player desires rewards, e.g. new game content or a simple compliment. Rewards should get bigger as the playtime increases. They give a player the feeling of doing something right and can enhance progress. A little bit of uncertainty about rewards makes them more desirable for many players.
Solvability
Colloquially speaking, solving a game refers to winning it or reaching its end. Ian Schreiber calls a game solvable if, for every situation, there is a recognizable best action. Generally, it is undesirable if a game can easily be solved, since this makes decisions meaningless, and games become boring faster.
There are multiple tiers of solvability: a game might be trivial to solve, but it might also be solvable only in theory, with a great deal of computing effort. Even games with random elements are solvable, since a best action can be found using expected values. Besides high complexity, hidden information and the influence of other human players are what make it impossible for a human to completely solve a game.
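The expected-value idea can be illustrated with a short sketch; the actions, probabilities and damage values below are hypothetical:

```python
# "Solving" a random choice by expected value: even with randomness,
# one action is recognizably best on average.

actions = {
    "reliable_hit": [(1.0, 10)],             # list of (probability, damage)
    "risky_hit":    [(0.5, 25), (0.5, 0)],   # coin flip for big damage
    "wild_swing":   [(0.2, 40), (0.8, 0)],
}

def expected_value(outcomes):
    """Weighted average payoff of one action."""
    return sum(p * value for p, value in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best, expected_value(actions[best]))  # risky_hit 12.5
```

Here the risky hit dominates on average (12.5 versus 10 and 8), so a player computing expected values has "solved" this sub-decision, which is exactly what makes such a choice meaningless.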
Symmetry and asymmetry
Symmetric games offer all players identical starting conditions and are therefore automatically fair in the sense stated above. While they are easier to balance, they still must be balanced, e.g. regarding their game elements. Most modern games are asymmetric, though, and the degree of asymmetry can vary greatly; fairness becomes even more important for those.
Giving each player identical resources is the simplest game balancing technique. Most competitive games feature some level of symmetry; some (such as Pong) are completely symmetric, but those in which players alternate turns (such as chess) can never achieve total symmetry as one player will always have a first-move advantage or disadvantage.
Symmetry is unappealing in games because both sides can and will use any effective strategy simultaneously, or success depends on a very small advantage such as one pawn in chess. An alternative is to offer symmetry with restrictions. Players in Wizard's Quest and Catan have the same number of territories, but choose them in alternating order; the differing combination of territories causes asymmetry.
Symmetry can be undone by human psychology; the advantage of players wearing red over players wearing blue is a well-documented example of this.
Systems and subsystems
In general, games can be viewed as systems of numbers and relations that typically consist of multiple subsystems. All numbers within a game only have a meaning in their given context. Subsystems can be dealt with separately and they might even have different balancing goals, but they also influence each other more or less. It is therefore crucial to consider how changes can affect the balance as a whole.
Transitivity and intransitivity
(In-)transitivity is a term for logical relations. In games, this usually refers to relations between game elements, e.g. between elements A, B and C. Under transitivity, if A beats B and B beats C, then A also beats C; A is thus the best of the three elements. A transitive relation is especially useful for rewards, letting the player receive increasingly useful game elements.
In case of intransitivity, A beating B and B beating C does not automatically mean that A beats C. On the contrary, C might even beat A, as in rock-paper-scissors. Intransitive relations can emerge from the properties of game elements instead of simply being defined as outcomes. This helps to create variety and prevent dominant strategies.
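An intransitive counter relation can be represented and checked in a few lines; the elements here are simply the classic rock-paper-scissors example:

```python
# A counter cycle: each element beats exactly one other element.
beats = {
    "rock": "scissors",
    "scissors": "paper",
    "paper": "rock",
}

elements = set(beats)

# A dominant element would beat every other element; in a 3-cycle none does.
dominant = [e for e in elements if {beats[e]} == elements - {e}]
print(dominant)  # []

# The relation is intransitive: rock beats scissors and scissors beats paper,
# yet rock does not beat paper -- paper beats rock, closing the cycle.
assert beats["rock"] == "scissors" and beats["scissors"] == "paper"
assert beats["paper"] == "rock"
```

The same check scales to larger rosters: as long as every element appears on the losing side of some relation, no single choice dominates.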
Balancing process
Balancing always involves changing quantifiable values and the relations between them, directly or indirectly. It is an iterative process, partially dependent on the genre, that takes place during development and also afterwards (e.g. through rule changes, add-ons or software updates). However, it cannot be completely solved by algorithms, since aesthetics are also important and a perfect balance might actually achieve the opposite of fun. Ideally, simple rules deliver complex results; this is also referred to as "emergence".
First, a balanced basis should be created, so that most later work consists merely in changing numbers and introducing new content becomes much easier. This makes it important for a designer to be able to adjust numbers easily, and they should always know how changes affect the overall system. Sight of the bigger picture should never be lost in creating a positive experience for the player.
Extremely powerful game elements and dominant strategies endanger that goal and should therefore be identified and corrected. Game elements that have a highly situational use but a fixed cost comparable to less situational elements are particularly difficult to balance. Another priority is providing multiple viable options. Generally, players react better to buffing something than to nerfing it; either can, however, be achieved indirectly by changing another part of the system, since most content, if not all of it, is connected and related.
Goals of balancing
The highest goal of balancing is always preserving or increasing fun or engagement. This, however, can depend highly on the individual game and its audience, and might even consist in great imbalance or turn into the opposite of fun: especially in games with in-game purchases or in-game advertising, the developer or publisher has an interest in monetizing the game even if this is detrimental to the fun. Such games may frequently interrupt the experience with advertisements or provide low odds (e.g. in loot boxes) to intentionally frustrate the player while keeping engagement high, encouraging spending money to skip the frustrating parts. Otherwise, the player may face huge disadvantages (imbalances), even against other paying players.
In general, though, there is a consensus that huge imbalances are bad for a game; even if the game is still fun to play, a better balance would make it more fun. Opinions vary on exactly what should be balanced, how well-balanced a game should ideally be, and even whether perfect balance is achievable or desirable. In some cases, it is even argued that a slight imbalance is beneficial.
A crucial goal of balancing a game is preventing any of its component systems from being ineffective or otherwise undesirable compared to their peers. An unbalanced system represents wasted development resources at the very least, and at worst can undermine the game's entire ruleset by making important roles or tasks impossible to perform.
One balancing approach is to set strategies as the goal, so all offered strategies have roughly equal chances of success. Strategies can only be affected by changing underlying game elements, but the balance between game elements is not the focus here. Strategies should offer a deep gaming experience.
The balance can depend on player skill. Therefore, one level of skill should be chosen as the target of development efforts; this might be professional or casual players, for example. At skill levels outside that prime audience, more imbalance can be accepted.
Preserving strategies and game elements from becoming irrelevant is also emphasized: every given option should have at least some use and should be viable. To achieve this, strategies and game elements should be compared within all contexts they compete in, e.g. combat or resource investments. Extremely powerful ("broken") strategies and elements are viewed as especially damaging, since they devalue all their competitors.
Beyond all of that, there is an argument for some imbalance within a game, since it constantly encourages players to find new solutions, e.g. by interacting in the metagame. This especially applies to frequently updated games. At the opposite end, (nearly) perfectly balanced games would result in the mere execution of proven strategies, with only top players able to create new successful ones. Also, giving all game elements exactly the same amount of power would make all decisions meaningless, since everything would be equally powerful anyway.
Another approach emphasizes that balance between game elements, strategies and actions is not the most important factor; what matters is providing counters against any situation that may arise. Players can then always find a counter and never face unsolvable problems.
Lastly, there is the idea of including the players themselves in balancing, with regard to their skills and other prerequisites. Matchmaking and handicaps can help achieve this, and may also decrease the influence of imbalance, since players are more equally matched. In addition, the players' perception of balance should be considered: player behavior can affect the success rates of strategies and game elements, so all changes should be communicated accordingly.
Characteristics of a well-balanced game
Although not all goals of balancing are settled, many characteristics of well-balanced games are generally agreed upon: decisions should be meaningful; the player should still have a chance to win in most situations; and no stalemates should arise in which nobody can win or lose. A leading player or computer-controlled opponent should never gain an irretrievable advantage before the game is nearly won. Early mistakes and chance should not make a game unwinnable. The game should also provide the player with enough information and control to avoid such errors, so that the player always feels responsible for his or her actions.
Measuring the state of balance is another matter though, since it requires interpretation of data. Sheer win rates of strategies or game elements do not have a great significance without considering other factors like player skill and pick rates. Making correct conclusions is therefore crucial to find causes for imbalance.
Methods and tools
The following paragraphs present a collection of tools and methods used to balance a game or to measure its state. The main goal is not mathematical perfection but fun, engagement or a mix of both, and human evaluation is still the only known measure of success, especially for fun. Balancing is also an intricate process that typically needs many iterations.
Aesthetics and narration
The visual impression of a game should not contradict its balancing. On the contrary, real-world models, e.g. historical facts, can serve as inspiration for mechanics, counters, orthogonal unit differences or intransitive relations.
Balancing strategies
One approach is to move the balancing goal to strategies instead of game elements. Strategies typically include multiple elements and decisions. This makes sure that all game elements have at least some use and that decisions stay meaningful; also, seemingly fine game elements might become too powerful only in certain combinations. One difficulty, though, is that strategies can only be influenced by changing the game elements and mechanics they include.
Ban
Banning certain game elements or strategies is a way to remove dominant strategies from otherwise well-balanced games, especially in the competitive sector. This should be avoided when possible, however.
Central resource
A chosen value (this may be an attribute of a game element, a cost, or an additionally calculated value like power) can be nominated as a benchmark for all other values. Every change to one of them then requires another to change as well; this may affect the central resource itself or any other value, so that the element still fits the same budget.
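A minimal sketch of the idea, assuming a hypothetical unit with three weighted attributes and a fixed budget as the central resource; the weights and numbers are invented:

```python
# Central-resource budgeting: every attribute point is priced, and the
# total price of a unit must stay on a fixed budget.

WEIGHTS = {"damage": 2.0, "speed": 1.5, "armor": 1.0}  # assumed price per point
BUDGET = 20.0

def cost(unit):
    """Total budget consumed by a unit's attributes."""
    return sum(WEIGHTS[attr] * value for attr, value in unit.items())

knight = {"damage": 5.0, "speed": 2.0, "armor": 7.0}
print(cost(knight))  # 20.0 -> fits the budget exactly

# Raising one attribute forces another down to keep the budget constant.
knight["damage"] += 1.0   # +2.0 cost
knight["armor"] -= 2.0    # -2.0 cost
assert cost(knight) == BUDGET
```

The invariant (`cost(unit) == BUDGET`) is the benchmark described above: any buff must be paid for elsewhere.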
Counter
There should be a counter to every action, game element or strategy that beats it in direct competition. This not only makes dominant strategies less likely to develop, it also allows players to find new solutions to current challenges. Ideally, a counter relation emerges from the properties of game elements rather than being simply defined. Also, decisions made at the beginning of a game that cannot later be revised should not determine the outcome right away.
Difficulty level
Video games often allow players to influence their balance by offering a choice of "difficulty levels". These affect how challenging the game is to play, and usually run on a general scale of "easy", "medium", and "hard". Sometimes, the difficulty is set once for the entirety of a game, while in other games it can be changed freely at any point. Modern games, e.g. Horizon Zero Dawn, may also feature a difficulty setting called "Story" for players who want to focus on the narrative rather than interactive parts like combat. There are also other terms. The Last of Us, for example, offers two settings above "hard", called "survivor" and "grounded".
In addition to altering the game's rules, difficulty levels can be used to alter what content is presented to the player. This usually takes the form of adding or removing challenging locations or events, but some games also change their narrative to reward players who play them on higher difficulty levels or end early as punishment for playing on easy. Difficulty selection is not always presented bluntly, particularly in competitive games where all players are affected equally, and the standard "easy/hard" terminology no longer applies. Sometimes veiled language is used (Mario Kart offers "CC select"), while at other times there may be an array of granular settings instead of an overarching difficulty option. An alternative approach to difficulty levels is catering to players of all abilities at the same time, a technique that has been called "subjective difficulty". This requires a game to provide multiple solutions or routes, each offering challenges appropriate to players of different skill levels (Super Mario Galaxy, Sonic Generations).
Feedback
While tester feedback is important when developing and updating a game, certain things should be kept in mind: skill and the ability to explain do not necessarily correlate, and since there are typically far more players than developers, the player base as a whole will probe the game's balance faster than the development team can. Additionally, new testers should be brought in from time to time, since practice effects emerge with existing ones.
Gamemaster
A game can be balanced dynamically by a gamemaster who observes players and adjusts the game in response to their actions, emotional state, etc., or even proactively changes the direction of the game to create certain experiences.
Although gamemasters have historically been human, some video games now feature artificial intelligence (AI) systems that perform a similar role by monitoring player ability and inferring emotional state from input. Such systems are often described as having dynamic difficulty. One notable example is Left 4 Dead and its sequel Left 4 Dead 2, cooperative games in which the players fight through hordes of zombie-like creatures, including unique creatures with special abilities. Both games use an AI Director which not only generates random events but tries to create tension and fear by spawning in creatures according to specific rule sets based on how players are progressing, specifically penalizing players with more difficult challenges for not working together. Research into biofeedback peripherals promises to improve the accuracy of such systems.
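A generic director-style loop might look like the following sketch. This is an illustration of the idea only, not a description of how Valve's AI Director is actually implemented; the thresholds and step sizes are invented:

```python
# Dynamic difficulty: raise pressure on thriving players, ease off on
# struggling ones, keeping an "intensity" value in [0, 1].

def adjust_intensity(intensity, player_health, recent_deaths):
    """One tick of a hypothetical director loop."""
    if recent_deaths > 0 or player_health < 30:
        intensity = max(0.0, intensity - 0.2)   # back off, grant relief
    elif player_health > 80:
        intensity = min(1.0, intensity + 0.1)   # ramp up the tension
    return intensity

level = 0.5
level = adjust_intensity(level, player_health=90, recent_deaths=0)  # thriving
level = adjust_intensity(level, player_health=25, recent_deaths=1)  # struggling
print(round(level, 2))  # 0.4
```

A real director would feed such an intensity value into spawn rates, item drops or event pacing rather than exposing it directly.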
Game theory
Game theory focuses on theoretical modeling of competing players and their decision making, and is therefore of only limited use in game design. However, it does offer knowledge and tools, such as a net payoff matrix, that can help to measure power and understand player reasoning.
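As an example of such a tool, a small payoff matrix for one player together with a dominance check; the strategies and payoff numbers are invented for illustration:

```python
# Payoffs for the row player: payoff[(my_move, opponent_move)].
payoff = {
    ("aggressive", "aggressive"): 1,
    ("aggressive", "defensive"): 3,
    ("defensive", "aggressive"): 0,
    ("defensive", "defensive"): 2,
}

def dominates(a, b, opponent_moves):
    """True if strategy a pays at least as much as b against every reply."""
    return all(payoff[(a, m)] >= payoff[(b, m)] for m in opponent_moves)

moves = ["aggressive", "defensive"]
print(dominates("aggressive", "defensive", moves))  # True
```

A strategy that dominates all alternatives in such a matrix is exactly the kind of dominant strategy a balance pass should detect and remove.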
Handicaps
Handicaps are disadvantages that are sometimes deliberately self-inflicted. They may create a competitive situation between players of different skill levels, but they might also go too far and render skill irrelevant.
Intuition
Games can be complex systems. Since development resources are limited, relying on intuition can sometimes be useful or even necessary. The designer should always keep in mind how changes affect other parts of the game and guesses should always rely on evidence or proof.
Matchmaking and ranking
An approach that avoids some balancing problems altogether is ranking players by skill. Ideally, the ranking system predicts the outcome almost perfectly, and every player (in a PvP game) has roughly the same win rate, even accounting for factors outside the game, like the gaming device. In any case, good matchmaking benefits a game greatly: newbies are not matched against experienced players who would leave them no chance of winning, and the challenge posed by opponents rises together with each player's skill.
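As an illustration, the Elo rating formula widely used for such rankings predicts a win probability from the rating difference; the K-factor and the ratings below are typical but arbitrary choices:

```python
# Elo-style rating math: prediction and update.

def expected_score(rating_a, rating_b):
    """Predicted win probability of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Evenly matched players: the system predicts a coin flip.
print(round(expected_score(1500, 1500), 2))  # 0.5

# A newbie against a veteran: a tiny predicted upset chance --
# exactly the pairing a matchmaker should avoid creating.
print(round(expected_score(1000, 1800), 3))  # 0.01

def update(rating, expected, actual, k=32):
    """Move a rating toward the observed result."""
    return rating + k * (actual - expected)

print(round(update(1500, 0.5, 1.0)))  # winner of an even match rises to 1516
```

Matching players whose predicted scores are close to 0.5 is what gives everyone a roughly equal win rate.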
Observation
Some problems become clear through sheer observation of the game and of player behavior. This includes mathematical superiority of game elements or strategies, but also extremely high or low usage of them. In any case, statistics do not necessarily represent causation, and there are typically multiple contributing factors.
Orthogonal unit differences
Orthogonal unit differences are differences in properties of game elements that cannot be compared by inherent numbers. Ideally, every game element has at least one unique trait. This also helps create intransitivity and counters.
Pacing
Player versus environment games are usually balanced to tread the fine line of regularly challenging players' abilities without ever producing insurmountable or unfair obstacles. This turns balancing into the management of dramatic structure, generally referred to by game designers as "pacing". Pacing is also a consideration in competitive games, but the autonomy of players makes it harder to control.
Power curve
A power curve (also: cost curve) is a relation that reflects the ratio between power and costs. It is especially useful when dealing with multiple game elements that provide varying benefits depending on different values of the same cost, e.g. when using a central resource. While a power curve always shows an order, it does not necessarily represent exact relations, depending on the level of measurement.
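A sketch of checking hypothetical units against an assumed linear power curve, power(cost) = 4 × cost; all unit stats are made up:

```python
# Flag elements that deviate from the expected power-for-cost relation.

def expected_power(cost):
    """Assumed power curve: 4 power per unit of cost."""
    return 4 * cost

units = {"scout": (1, 4), "soldier": (3, 12), "giant": (5, 26)}  # (cost, power)

for name, (cost, power) in units.items():
    delta = power - expected_power(cost)
    status = ("over the curve" if delta > 0
              else "under the curve" if delta < 0
              else "on the curve")
    print(f"{name}: {status} (delta {delta})")
# scout and soldier sit on the curve; giant is 6 power over it,
# making it a candidate for a nerf or a cost increase.
```

Real power curves need not be linear; the point is only that every element's power/cost pair is compared against one shared reference.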
Randomization
Randomization of starting conditions is a technique common in board games, card games, and also experimental research, which fights back against the human tendency to optimize patterns in one's favor.
The downside of randomization is that it takes control away from the player, potentially leading to frustration. Methods of overcoming this include giving the player a selection of random results within which they can optimize (Scrabble, Magic: The Gathering) and making each game session short enough to encourage multiple attempts in one play session (Klondike, Strange Adventures in Infinite Space).
Statistical analysis
Statistics can help collect empirical data on player behavior, success rates, etc., to identify unbalanced areas and make corrections. Ideally, a game gathers this data automatically. Statistics can only support a designer's abilities and intuition, and are therefore only one part of making design decisions, together with, for example, tester or user feedback. Statistics and their interpretation should also consider factors like skill and pick rates.
Tier list
A tier list orders game elements according to their power in multiple categories. The ranking can be based on feedback, empirical data or subjective impressions. While the number and names of tiers vary, a list typically runs from "god tier" through multiple tiers in between down to "garbage tier". When balancing, all elements within the god tier should be nerfed first, since overly powerful elements make many other elements worse, if not useless. After that, all elements within the garbage tier should be buffed until they are no longer useless. Finally, the power differences between the remaining tiers can be adjusted until a satisfying state is reached. A tier list is especially useful when working with game elements that have exactly the same cost, e.g. characters in a fighting game.
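The described order of operations can be sketched as an iterative pass over hypothetical tiers; the element names, win rates and step size are invented:

```python
# Tier-list balancing: nerf the top tier first, then buff the bottom,
# nudging extreme win rates toward a target.

tiers = {
    "god":     {"dragon": 0.68},
    "mid":     {"knight": 0.51, "archer": 0.49},
    "garbage": {"peasant": 0.31},
}

def rebalance(tiers, step=0.02):
    for name in tiers["god"]:        # first pass: nerf the god tier
        tiers["god"][name] -= step
    for name in tiers["garbage"]:    # second pass: buff the garbage tier
        tiers["garbage"][name] += step
    return tiers

# Iterate; in practice each step would be a design change followed by
# fresh data, not a direct edit of the win rate itself.
for _ in range(5):
    rebalance(tiers)

print(round(tiers["god"]["dragon"], 2), round(tiers["garbage"]["peasant"], 2))
# 0.58 0.41
```

The mid tier is deliberately untouched until the extremes are handled, matching the order given above.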
See also
Triangular number#Applications
References
Game design
Video game terminology
Fort de Bron

The Fort de Bron is a fortification built between 1875 and 1877, located in the commune of Bron. It is part of the second belt of fortifications around Lyon, which also includes Fort de Vancia, Fort de Feyzin and Fort du Mont Verdun.
History
Its history is linked to the Franco-Prussian War of 1870: under the Treaty of Frankfurt, which ended that war, France lost Alsace and Lorraine, reducing its borders. To ensure the defence of Lyon, France built a strong cordon of forts encircling the city to the east, including the forts of Bron, Vancia, Feyzin and Mont Verdun. These forts were equipped with significant amounts of artillery, with all the hardware, staff and powder storage that this entailed.
When a defensive reorganization occurred in France in 1874, the commune of Bron was therefore included in the crown of detached forts, to protect the stronghold of Lyon.
From 1875 to 1885, the following were built successively around the town:
Fort de Bron, placed on the heights above the Rhone valley, covering as far as Saint-Priest;
Batteries at Lessivas and Parilly;
A walled enclosure with four bastions.
The Fort de Bron is the only one remaining.
Fort de Bron was completed with two annexed batteries at Lessivas and Parilly. But advances in artillery quickly made these forts, and therefore that of Bron, ineffective, inadequate and unable to defend Lyon. During World War I, which did not see fighting in this region, the fort was used only as a barracks and equipment warehouse. During the Second World War the Germans used it as a prison. The French army used it until 1962 as an annex of the air base; it was decommissioned in 1963.
Characteristics
Role
The establishment of this fort allowed the City of Lyon to protect itself from enemy attacks from the east; dominating the surrounding plain, the fort covered Décines, Chassieu and Saint-Priest.
Location
The fort is located within the firing range of the cannon of its day from Lyon (i.e. 7 to 8 km), at 212 metres above sea level on a hill at Bron.
Composition
This polygonal structure is surrounded by a moat six to eight metres deep and twelve to fourteen metres wide, defended by caponiers. The buildings (some underground) of the 1,500 m² site could accommodate 841 men in wartime. The bridge that allows access to the rear entrance of the fort is unusual: it retracts sideways, sliding on steel rollers.
Land and zones
A judgement of expropriation dated 10 June 1874 released 24 hectares for the construction of the fort. As at other forts, the military land was bounded by stone posts, each capped with an engraving showing the direction of the next post.
Armament
The fort, whose cannons could reach targets located 6 km away (with an extended range of up to 8 km from 1880, with the new de Bange guns), was equipped with:
17 guns on the cavalier,
13 guns on the lower enclosure,
10 light guns to defend the moats,
5 mortars,
a total of 45 pieces of artillery.
Garrison and housing
841 people were housed in wartime:
1 commander of the fort,
17 officers,
39 NCOs,
784 soldiers.
Ten horses were also present on the site.
Officers and NCOs were housed upstairs, on the second floor of the barracks. The rest of the men occupied the first floor, at a rate of 56 soldiers to a room.
The fort was also equipped with two kitchens, a bakery, a well, a cistern, latrines, a forge and shops.
The bakery kept 69,400 kg of flour in reserve.
A pump drew drinking water from the water table at a depth of 37 metres, at a rate of 50 m³ per day, and also fed a cistern containing 13 m³, intended to provide water for three days in the event of pump failure.
Lighting was by kerosene lamps, candles and skylights.
Disciplinary premises were also placed in the centre of the fort, including a guard room and four cells.
Construction
The order was given on 8 May 1874 by General Cissey of the Ministry of War, to begin construction of the Fort of Bron. The stone came from Trept and from the Monts d'Or, a small mountain range to the northwest of Lyon. Construction began in 1875 and continued until 1877 for a total cost of 3,014,578 francs:
760,000 francs in 1875,
1,230,000 francs in 1876,
745,000 francs in 1877,
19,000 francs in 1878, and
260,578 francs for acquisition costs.
Today
The greater Lyon council bought the fort in 1975 to build two water tanks, occupying 50% of the fort's built area, with 300 metres of the southern ditch serving as a spillway for safety. On 23 September 1976, at the Extra-Municipal Planning Commission (CEMU), the COURLY proposed turning the ditches into a public landfill and leaving the rubble there, but the project was ultimately abandoned. The army still retained 6 hectares of woodland (including some of the ditches and the Diamond gap) in order to build an extension of the Army medical school. After several attempts to negotiate with the army to keep the whole fort intact failed, an agreement was signed between the mayor of Bron, André Sousi, and the Prime Minister, Raymond Barre, for the purchase of the land (totalling 9,878 m²) by the municipality at the price of 10 francs per m², i.e. 98,780 francs.
The purchase allowed the creation in 1983 of a fitness trail around the fort.
The Fort de Bron hosts a theatrical event every two years, the Biennale du Fort de Bron: for two months, a theatre company takes possession of the premises. The Odyssey of Homer drew nearly 17,000 spectators in 2009 and 15,000 in 2011.
The Fort de Bron is managed by an association created on 25 March 1982, which organises free tours on the first Sunday of each month. The association also participates in Heritage Days and organizes a craft exhibition in early October. A museum has also been built there.
The museum of the Société Lyonnaise for aviation history and aerospace documentation occupies three rooms of the barracks on the second floor.
The fort has also been used as a filming location for video clips, movie scenes and interviews. The TV movie The Gate of Heaven by Denys Granier-Deferre, broadcast in 1993; the TV movie Under Guard by Luc Beraud, broadcast in 2002; and the short films Masquerade by Nicolas Brossette, broadcast in 2007, and The Décarquilleurs by Jean-Paul Lebesson were partly shot at the fort.
See also
Ceintures de Lyon
References
Bibliography
External links
History of Bron
Association du fort de Bron
Fort Bron on fortiffsere.fr
Fort Source on the website PSS-Archi. Accessed January 6, 2014.
Fort Bron on the website Fortifications et Mémoire
Séré de Rivières system
Fortifications of Lyon
Fortification lines
Spectrum management

Spectrum management is the process of regulating the use of radio frequencies to promote efficient use and gain a net social benefit. The term radio spectrum typically refers to the full frequency range from 1 Hz to 3000 GHz (3 THz) that may be used for wireless communication. Increasing demand for services such as mobile telephones and many others has required changes in the philosophy of spectrum management. Demand for wireless broadband has soared due to technological innovation, such as 3G and 4G mobile services, and the rapid expansion of wireless internet services.
Since the 1930s, spectrum was assigned through administrative licensing. Limited by technology, signal interference was once considered as a major problem of spectrum use. Therefore, exclusive licensing was established to protect licensees' signals. This former practice of discrete bands licensed to groups of similar services is giving way, in many countries, to a "spectrum auction" model that is intended to speed technological innovation and improve the efficiency of spectrum use. During the experimental process of spectrum assignment, other approaches have also been carried out, namely, lotteries, unlicensed access, and privatization of spectrum.
Most recently, America has been moving toward a shared spectrum policy, whereas Europe has been pursuing an authorized shared access (ASA) licensing model. President Obama made shared spectrum the policy of the United States on 14 June 2013, following recommendations from the President's Council of Advisors on Science and Technology (PCAST), which advocated the sharing of (uncleared) federal radio spectrum when unused at a given place and time, provided it does not pose undue risks. In line with this guidance, as of December 2014 the FCC was extending the limited success of television-band spectrum sharing (TV white space) into other bands, notably the 3550–3700 MHz US Navy radar band, via a three-tier licensing model (incumbent, priority, and general access).
Governments and spectrum management
Most countries consider RF spectrum as an exclusive property of the state. The RF spectrum is a national resource, much like water, land, gas and minerals. Unlike these, however, RF is reusable. The purpose of spectrum management is to mitigate radio spectrum pollution, and maximize the benefit of usable radio spectrum.
The first sentence of the International Telecommunication Union (ITU) constitution fully recognises "the sovereign right of each State to regulate its telecommunication". Effective spectrum management requires regulation at national, regional, and global levels.
Goals of spectrum management include: rationalizing and optimizing the use of the RF spectrum; avoiding and resolving interference; designing short- and long-range frequency allocations; advancing the introduction of new wireless technologies; and coordinating wireless communications with neighbours and other administrations. Radio spectrum items that need to be nationally regulated include:
frequency allocation for various radio services,
assignment of licenses and RF to transmitting stations,
type approval of equipment (for countries outside the European Union),
fee collection,
notifying the ITU for the Master International Frequency Register (MIFR),
coordination with neighbouring countries (as radio waves do not respect borders), and
external relations toward regional commissions (such as CEPT in Europe and CITEL in the Americas) and toward the ITU.
Spectrum use
Spectrum management is a growing problem due to the growing number of spectrum uses. Uses include over-the-air broadcasting (which started in 1920); government and research uses (including defense, public safety such as maritime, air and police services, resource management, transport, and radio astronomy); commercial services to the public (including voice, data and home networking); and industrial, scientific and medical services (including telemedicine and remote control).
In the 1980s, the only concern was about radio and television broadcasting; but today mobile phones and wireless computer networks are more and more important as fewer than 15% of US households rely on over-the-air broadcasting to receive their TV signals.
The US spectrum is managed by the Federal Communications Commission (FCC) for non-governmental applications and by the National Telecommunications and Information Administration (NTIA) for governmental applications. For shared applications, both entities must agree.
The spectrum is divided into different frequency bands, each having a specific application. For instance, the frequency band that covers 300Β kHz to 535Β kHz is reserved for aeronautical and maritime communications and the spectrum from 520Β kHz to 1700Β kHz for AM radio. This process is called "allocation".
The next step is to assign frequencies to specific users or classes of users. Each frequency band has a specific assignment that depends on the nature of the application and the number of users. Indeed, some applications require a wider band than others (AM radio uses blocks of 10 kHz where FM radio uses blocks of 200 kHz). In addition, "guard bands" are needed to keep the interference between applications to a minimum.
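As a sketch, the AM figures above imply a simple channelization; the code below derives non-overlapping 10 kHz assignments within the 520–1700 kHz allocation. This is a simplification for illustration, not the actual AM channel plan of any regulator:

```python
# Derive non-overlapping channel centre frequencies inside an allocated band.

AM_BAND = (520, 1700)   # kHz, from the allocation described above
AM_BLOCK = 10           # kHz per station assignment

def assign_channels(band, block):
    """Yield centre frequencies spaced one block apart, entirely in-band."""
    low, high = band
    freq = low + block // 2
    while freq + block // 2 <= high:
        yield freq
        freq += block

channels = list(assign_channels(AM_BAND, AM_BLOCK))
print(channels[0], channels[-1], len(channels))  # 525 1695 118
```

The same arithmetic, with a wider `block`, models why FM's 200 kHz channels fit far fewer assignments into a band of equal width, and a guard band can be modelled by simply enlarging `block` beyond the signal's own width.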
Status quo: the command and control approach
The Command and Control management approach is the one currently employed by most regulators around the globe. This approach advocates that the regulators be the centralized authorities for spectrum allocation and usage decisions. In the US example, the regulator (FCC) determines the use cases for specified spectrum portions, as well as the parties who will have access to them.
The Federal Communications Commission (FCC) also regulates the physical layer technologies to be employed.
The allocation decisions are often static in temporal and spatial dimensions, meaning that they are valid for extended periods of time (usually decades) and for large geographical regions (country-wide). The usage is often set to be exclusive; each band is dedicated to a single provider, thus maintaining interference-free communication. The command and control management model dates back to the initial days of wireless communications, when the technologies employed required interference-free media for achieving acceptable quality. Thus, it is often argued that the exclusive nature of the command and control approach is an artifact of outdated technologies.
The apparent advantages of this model are that services related to the public interest can be sustained. In terms of profitability, public-interest programs, for example over-the-air television, may not be as attractive as commercial ones from the provider's perspective, but they are nevertheless beneficial to society. Therefore, these services are often implicitly enforced by the regulator through license agreements. Another advantage is the standardization that results from such a centralized approach. Such standardization is critical in networked industries, of which the telecommunication industry is a textbook example. One scholar has published a paper showing how the development of new technologies promises to bring considerably more spectrum to the public, but would require that society embrace a new paradigm of spectrum use.
GAO Report on Spectrum Management (2004)
Alternative spectrum governance regimes and the spectrum debate
With the digital transition, spectrum management entered a new age. Full conversion to digital TV by 17 February 2009 (Digital Transition and Public Safety Act of 2005) allows broadcasters to use spectrum more efficiently and save space for the possibility of sharing spectrum.
Spectrum sharing is the subject of heated discussion. Exponential growth of commercial wireless calls for additional spectrum to accommodate more traffic. As a regulator, the FCC responded to these needs by making more spectrum available. A secondary market has been allowed to emerge and licensees are encouraged to lease use of the spectrum to third parties temporarily. Making licenses transferable is an important attempt by the FCC to create incentives for broadcasters to share unused spectrum. Another proposed solution to the spectrum scarcity problem is to enable communications systems to occupy spectrum that was previously allocated for radar use and to cooperatively share spectrum. This approach has received increased attention recently with several research programs, including DARPA projects, investigating several methods of cooperative radar-communications spectrum sharing. More alternatives are underway such as spectrum sharing in cellular networks.
Spectrum scarcity has emerged as a primary problem encountered when trying to launch new wireless services. The effects of this scarcity are most noticeable in spectrum auctions where the operators often need to invest billions of dollars to secure access to specified bands in the available spectrum.
In spite of this scarcity, recent spectrum utilization measurements have shown that the available spectrum opportunities are severely underutilized, i.e. left unused. This artificial "access limitation"-based scarcity is often considered to result from the static and rigid nature of the command and control governance regime. Interested parties have started to consider possible improvements in the governance regime by relaxing the constraints on spectrum access. Two prevailing models are the "spectrum commons" and the "spectrum property rights" approaches.
Spectrum commons theory
Under US law, the spectrum is not considered to be the property of the private sector nor of the government except insofar as the term "government" is used to be synonymous with "the people".
The term "the commons" originally referred to the practice by which the public at large had limited rights to use common land; each person had an interest in their own usage rights, but the commons themselves were not property, nor were the rights "property", since they could not be traded. The term "tragedy of the commons" was popularized by Garrett Hardin in a 1968 article in Science. The tragedy of the commons illustrates the idea that destructive use of public reservations ("the commons") by private interests can result when the best strategy for individuals conflicts with the "common good". In such a scenario, even though the contribution of each "bad actor" may be minute, when the results of these actions are combined the resource can be degraded to the point of uselessness. This concern led to the regulation of the spectrum.
Spectrum property rights model
The spectrum property rights model advocates that spectrum resources should be treated like land, i.e. private ownership of spectrum portions should be permitted. The allocation of these portions should be implemented by means of market forces, and spectrum owners should be able to trade their portions in secondary markets. Additionally, spectrum owners would be able to use their bands in any way they want, through any technology they prefer (service and technology neutrality). Although the spectrum property rights model advocates exclusive allocation of transmission rights, it is not the same as a licensed regime. The main difference is the service and technology neutrality advocated in the spectrum property rights approach, as opposed to the strict requirements on services and communications technologies inherent in licensed governance regimes.
The basic idea of spectrum property rights was first proposed in 1951 by Leo Herzel, then a law student preparing a critique of US FCC policies in spectrum management. Ronald Coase, a Nobel Prize-winning economist, championed in 1959 the idea of auctioning off spectrum rights as a superior alternative to the status quo. Coase argued that, though initial distributions may affect matters, property rights in a frequency will lead to the most efficient usage thereof. When he first presented his vision to the FCC, he was asked whether he was making a joke.
The supporters of the spectrum property rights model argue that such a management scheme would potentially promote innovation and more efficient use of spectrum resources, as the spectrum owners would potentially want to economize on their resources.
The spectrum property rights model is often critiqued for potentially leading to artificial scarcity and the hold-up problem. The hold-up problem refers to the difficulty of aggregating spectrum resources (as would be required for high-bandwidth applications), since individual spectrum owners could demand very high compensation in return for their contribution. Because spectrum is a scarce, finite good, there is also a perverse incentive not to use it at all: participants and existing owners in the spectrum market can preemptively buy spectrum and warehouse it to prevent existing or newcomer competitors from utilizing it. The owner's official plan for this warehoused spectrum would be to save it for an unknown future use, and therefore not utilize it at all for the foreseeable future.
In a partial or incomplete "spectrum as property" regulatory regime, incumbent and grandfathered owners who obtained spectrum under the old cause-and-merit policy can reap windfalls by selling spectrum they obtained at no cost under the earlier regime. When a regulatory regime changes to the property model, the original merit-and-cause guidelines for incumbent and grandfathered users are often removed. No regulatory review mechanism exists to check whether the merit guidelines are still being followed and, if not, to revoke the spectrum license from the incumbent owner and either reissue the spectrum to a new user under the old merit guidelines or sell it in line with the new "spectrum as property" policy.
U.S. regulatory agencies
The Communications Act of 1934 grants authority for spectrum management to the President for all federal use (47 USC 305). The National Telecommunications and Information Administration (NTIA) manages the spectrum for the Federal Government. Its rules are found in the NTIA "Manual of Regulations and Procedures for Federal Radio Frequency Management".
The Federal Communications Commission (FCC) manages and regulates all domestic non-federal spectrum use (47 USC 301).
Background:
Radio Act of 1927
Communications Act of 1934
Administrative Procedures Act of 1947
Communications Satellite Act of 1962
National Telecommunications and Information Administration
Negotiated Rulemaking Act of 1990
Cable TV Consumer Protection & Competition Act of 1992
Omnibus Budget Reconciliation Act of 1993
Telecommunications Act of 1996
International spectrum management
The International Telecommunication Union (ITU) is the part of the United Nations (UN) that manages the use of both the RF spectrum and space satellites among nation states. The Plenipotentiary Conference is the top policy-making body of the ITU, meeting every four years to set the Union's general policies. The ITU is divided into three Sectors. The Radiocommunication Sector (ITU-R) determines the technical characteristics and operational procedures for wireless services and plays a vital role in the management of the radio-frequency spectrum; ITU-R Study Group 1 is its spectrum management study group. The Telecommunication Standardization Sector (ITU-T) develops internationally agreed technical and operating standards. The Telecommunication Development Sector (ITU-D) fosters the expansion of telecommunications infrastructure in the developing nations that make up two-thirds of the ITU's 191 Member States. The ITU Radio Regulations constitute a binding international treaty governing the use of the radio spectrum by some 40 different services.
Frequency administration
In telecommunication, frequency assignment authority is the power granted for the administration, designation or delegation to an agency or administrator via treaty or law, to specify frequencies, frequency channels or frequency bands, in the electromagnetic spectrum for use in radiocommunication services, radio stations or ISM applications.
Frequency administration is, according to Article 1.2 of the International Telecommunication Union's (ITU) Radio Regulations (RR), defined as "Any governmental department or service responsible for discharging the obligations undertaken in the Constitution of the International Telecommunication Union, in the Convention of the International Telecommunication Union and in the Administrative Regulations (CS 1002)." Definitions identical to those contained in the Annex to the Constitution or the Annex to the Convention of the International Telecommunication Union (Geneva, 1992) are marked "(CS)" or "(CV)" respectively.
International frequency assignment authority is vested in the Radiocommunication Bureau of the International Telecommunication Union (ITU).
Europe
In Europe each country has regulatory input into the progress of European and international policy, standards, and legislation governing these sectors through their respective frequency administration.
European frequency administrations may receive military advice from the appropriate National Radio Frequency Agency (NRFA). Pertaining to NATO-Europe, this expertise resides in the Spectrum Consultation Command and Control & Infrastructure Branch (SC3IB). However, the decision-making body for military access to the radio frequency spectrum is the NATO Civ/Mil Spectrum Capability Panel 3 (CaP3), acting on behalf of the NATO Consultation, Command and Control Board (C3B), with participation of competent, authorised and mandated representatives of national frequency administrations.
Civil frequency management for Europe is driven by a number of organisations. These include the:
European Union (EU)
Independent Regulator's Group (IRG)
European Conference of Postal and Telecommunications Administrations (CEPT)
European Radiocommunications Office (ERO)
In July 2002, the European Commission also established the European Regulators Group for Electronic Communications Networks and Services; creating, for the first time, a formal structure for interaction and coordination between the European Commission and regulators in all EU Member States to ensure consistent application of European legislation.
United States
In the United States, primary frequency assignment authority is exercised by the National Telecommunications and Information Administration (NTIA) for the Federal Government and by the Federal Communications Commission (FCC) for non-Federal Government organizations.
See also
Frequency assignment
Military spectrum management
Broadcast license
LTE in unlicensed spectrum
Challenges and Strategies for Effective Spectrum Management
References
Radio spectrum
Radio resource management
In computational complexity theory, a generalized game is a game or puzzle that has been generalized so that it can be played on a board or grid of any size. For example, generalized chess is the game of chess played on an n × n board, with pieces on each side. Generalized Sudoku includes Sudokus constructed on an n² × n² grid.
Complexity theory studies the asymptotic difficulty of problems, so generalizations of games are needed, as games on a fixed size of board are finite problems.
For many generalized games which last for a number of moves polynomial in the size of the board, the problem of determining if there is a win for the first player in a given position is PSPACE-complete. Generalized hex and reversi are PSPACE-complete.
For many generalized games which may last for a number of moves exponential in the size of the board, the problem of determining if there is a win for the first player in a given position is EXPTIME-complete. Generalized chess, go (with Japanese ko rules), Quixo, and checkers are EXPTIME-complete.
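The contrast between finite fixed-size instances and asymptotic hardness can be made concrete with exhaustive search. The sketch below solves a deliberately tiny game (a 1-3 subtraction game, not chess; the game and function names are our own illustration) by brute force:

```python
from functools import lru_cache

# Brute-force answer to "does the player to move have a winning
# strategy?" for a subtraction game: remove 1, 2 or 3 tokens per turn,
# and whoever takes the last token wins. Any *fixed* instance is a
# finite problem settled by exhaustive search; complexity theory needs
# a size parameter to talk about how the search cost grows.

@lru_cache(maxsize=None)
def first_player_wins(tokens: int) -> bool:
    # A position is winning iff some legal move leaves the opponent
    # in a losing position.
    return any(not first_player_wins(tokens - k)
               for k in (1, 2, 3) if k <= tokens)
```

Here positions with a token count divisible by 4 are losses for the player to move. The same exhaustive idea applies in principle to generalized chess or Go, but the game tree grows exponentially with board size, which is what the EXPTIME-completeness results formalize.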
See also
Game complexity
Combinatorial game theory
References
Computational complexity theory
Combinatorial game theory
The glycine receptor (abbreviated as GlyR or GLR) is the receptor of the amino acid neurotransmitter glycine. GlyR is an ionotropic receptor that produces its effects through chloride currents. It is one of the most widely distributed inhibitory receptors in the central nervous system and has important roles in a variety of physiological processes, especially in mediating inhibitory neurotransmission in the spinal cord and brainstem.
The receptor can be activated by a range of simple amino acids including glycine, β-alanine and taurine, and can be selectively blocked by the high-affinity competitive antagonist strychnine. Caffeine is a competitive antagonist of GlyR. Cannabinoids enhance its function.
The protein gephyrin has been shown to be necessary for GlyR clustering at inhibitory synapses. GlyR is known to colocalize with the GABAA receptor on some hippocampal neurons. Nevertheless, some exceptions occur in the central nervous system, where the GlyR α1 subunit and gephyrin, its anchoring protein, are not found in dorsal root ganglion neurons despite the presence of GABAA receptors.
History
Glycine and its receptor were first suggested to play a role in inhibition of cells in 1965. Two years later, experiments showed that glycine had a hyperpolarizing effect on spinal motor neurons due to increased chloride conductance through the receptor. Then, in 1971, glycine was found to be localized in the spinal cord using autoradiography. All of these discoveries resulted in the conclusion that glycine is a primary inhibitory neurotransmitter of the spinal cord that works via its receptor.
Arrangement of subunits
Strychnine-sensitive GlyRs are members of a family of ligand-gated ion channels. Receptors of this family are arranged as five subunits surrounding a central pore, with each subunit composed of four α-helical transmembrane segments. There are presently four known isoforms of the ligand-binding α-subunit (α1-4) of GlyR (GLRA1, GLRA2, GLRA3, GLRA4) and a single β-subunit (GLRB). The adult form of the GlyR is the heteromeric α1β receptor, which is believed to have a stoichiometry (proportion) of three α1 subunits and two β subunits or four α1 subunits and one β subunit. The embryonic form, on the other hand, is made up of five α2 subunits. The α-subunits are also able to form functional homopentamers in heterologous expression systems in African clawed frog oocytes or mammalian cell lines, which are useful for studies of channel pharmacokinetics and pharmacodynamics. The β subunit is unable to form functional channels without α subunits but determines the synaptic localization of GlyRs and the pharmacological profile of glycinergic currents.
Function
Adults
In mature adults, glycine is an inhibitory neurotransmitter found in the spinal cord and regions of the brain. As it binds to a glycine receptor, a conformational change is induced and the channel formed by the receptor opens. As the channel opens, chloride ions flow into the cell, resulting in hyperpolarization. In addition to this hyperpolarization, which decreases the likelihood of action potential propagation, glycine is also responsible for decreasing the release of both inhibitory and excitatory neurotransmitters as it binds to its receptor. This is called the "shunting" effect and can be explained by Ohm's Law: as the receptor is activated, the membrane conductance increases and the membrane resistance decreases. As resistance decreases, so does the voltage produced by a given current. A decreased postsynaptic voltage results in a decreased release of neurotransmitters.
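The shunting argument can be put in numbers. The sketch below uses purely illustrative values (not physiological measurements) to show how a fourfold conductance increase shrinks the voltage deflection produced by a fixed synaptic current:

```python
# Numerical illustration of shunting inhibition (all values invented
# for illustration). By Ohm's law the voltage deflection from a fixed
# synaptic current I is V = I * R with R = 1/g, so raising the membrane
# conductance g shrinks the deflection.

def deflection_mV(current_nA: float, conductance_nS: float) -> float:
    # nA / nS = V, so multiply by 1000 to express the result in mV.
    return current_nA / conductance_nS * 1000

rest = deflection_mV(0.1, 10)     # resting conductance: 10 mV swing
shunted = deflection_mV(0.1, 40)  # GlyR channels open, g x4: 2.5 mV
```

The same 0.1 nA of synaptic current moves the membrane only a quarter as far once the chloride conductance is open, which is why shunting suppresses both excitatory and inhibitory signaling.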
Embryos
In developing embryos, glycine has the opposite effect from that in adults: it is an excitatory neurotransmitter. This is because chloride has a more positive equilibrium potential in early stages of life, due to the high expression of NKCC1, which moves one sodium, one potassium and two chloride ions into the cell, resulting in a higher intracellular chloride concentration. When glycine binds to its receptor, the result is an efflux of chloride instead of an influx as in mature adults. The efflux of chloride causes the membrane potential to become more positive, or depolarized. As the cells mature, the K+-Cl− cotransporter 2 (KCC2) is expressed, which moves potassium and chloride out of the cell, decreasing the intracellular chloride concentration. This allows the receptor to switch to the inhibitory mechanism described above for adults.
Glycine receptors in diseases
Disruption of GlyR surface expression or reduced ability of expressed GlyRs to conduct chloride ions results in the rare neurological disorder, hyperekplexia. The disorder is characterized by an exaggerated response to unexpected stimuli which is followed by a temporary but complete muscular rigidity often resulting in an unprotected fall. Chronic injuries as a result of the falls are symptomatic of the disorder. A mutation in GLRA1 is responsible for some cases of stiff person syndrome.
Ligands
Agonists
Ξ²-Alanine
D-Alanine
Gelsemine
Glycine
Hypotaurine
Ivermectin
L-Alanine
L-Proline
L-Serine
Milacemide
Quisqualamine
Sarcosine
Taurine
THC
L-Theanine
Positive Allosteric Modulators
Ethanol
Toluene
Antagonists
Bicuculline
Brucine
Caffeine
Levorphanol
Picrotoxin
Strychnine
Tutin
Quercetin
References
External links
Ionotropic receptors
Cell signaling
Denitrifying bacteria are a diverse group of bacteria that encompass many different phyla. This group of bacteria, together with denitrifying fungi and archaea, is capable of performing denitrification as part of the nitrogen cycle. Denitrification is performed by a variety of denitrifying bacteria that are widely distributed in soils and sediments and that use oxidized nitrogen compounds such as nitrate and nitrite in the absence of oxygen as a terminal electron acceptor. They metabolize nitrogenous compounds using various enzymes, including nitrate reductase (NAR), nitrite reductase (NIR), nitric oxide reductase (NOR) and nitrous oxide reductase (NOS), turning nitrogen oxides back to nitrogen gas (N2) or nitrous oxide (N2O).
Diversity of denitrifying bacteria
There is a great diversity in biological traits. Denitrifying bacteria have been identified in over 50 genera with over 125 different species and are estimated to represent 10-15% of bacteria population in water, soil and sediment.
Denitrifiers include, for example, several species of Pseudomonas, Alcaligenes, Bacillus and others.
The majority of denitrifying bacteria are facultatively aerobic heterotrophs that switch from aerobic respiration to denitrification when oxygen, their available terminal electron acceptor (TEA), runs out, forcing the organism to use nitrate as a TEA instead. Because the diversity of denitrifying bacteria is so large, the group can thrive in a wide range of habitats, including extreme environments such as those that are highly saline or high in temperature. Aerobic denitrifiers can conduct an aerobic respiratory process in which nitrate is converted gradually to N2 (NO3− → NO2− → NO → N2O → N2), using nitrate reductase (Nar or Nap), nitrite reductase (Nir), nitric oxide reductase (Nor), and nitrous oxide reductase (Nos). Phylogenetic analysis has revealed that aerobic denitrifiers mainly belong to the α-, β- and γ-Proteobacteria.
Denitrification mechanism
Denitrifying bacteria use denitrification to generate ATP.
The most common denitrification process is outlined below, with the nitrogen oxides being converted back to gaseous nitrogen:
2 NO3− + 10 e− + 12 H+ → N2 + 6 H2O
The result is one molecule of nitrogen gas and six molecules of water. Denitrifying bacteria are part of the nitrogen cycle, returning nitrogen to the atmosphere. The reaction above is the overall half reaction of the process of denitrification, and it can be further divided into different half reactions, each requiring a specific enzyme. The transformation from nitrate to nitrite is performed by nitrate reductase (Nar):
NO3− + 2 H+ + 2 e− → NO2− + H2O
Nitrite reductase (Nir) then converts nitrite into nitric oxide
2 NO2− + 4 H+ + 2 e− → 2 NO + 2 H2O
Nitric oxide reductase (Nor) then converts nitric oxide into nitrous oxide
2 NO + 2 H+ + 2 e− → N2O + H2O
Nitrous oxide reductase (Nos) terminates the reaction by converting nitrous oxide into dinitrogen
N2O + 2 H+ + 2 e− → N2 + H2O
It is important to note that any of the products produced at any step can be exchanged with the soil environment.
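As a consistency check, the four enzymatic half reactions above (each normalized to two nitrogen atoms) should sum to the overall reaction; the snippet below tallies the electrons and protons consumed and the water produced at each step:

```python
# Consistency check on the denitrification half-reactions: normalized
# to two nitrogen atoms, the per-step (electrons, protons, waters)
# should sum to the overall reaction
#   2 NO3- + 10 e- + 12 H+ -> N2 + 6 H2O

steps = {
    "Nar: 2 NO3- -> 2 NO2-": (4, 4, 2),  # 2 x (2 e-, 2 H+, 1 H2O)
    "Nir: 2 NO2- -> 2 NO":   (2, 4, 2),
    "Nor: 2 NO  -> N2O":     (2, 2, 1),
    "Nos: N2O   -> N2":      (2, 2, 1),
}

electrons = sum(e for e, _, _ in steps.values())
protons = sum(h for _, h, _ in steps.values())
waters = sum(w for _, _, w in steps.values())
```

The totals come out to 10 electrons, 12 protons, and 6 waters, matching the overall equation.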
Oxidation of methane and denitrification
Anaerobic oxidation of methane coupled to denitrification
Anaerobic denitrification coupled to methane oxidation was first observed in 2008, with the isolation of a methane-oxidizing bacterial strain found to oxidize methane independently. This process uses the excess electrons from methane oxidation to reduce nitrates, effectively removing both fixed nitrogen and methane from aquatic systems in habitats ranging from sediment to peat bogs to stratified water columns.
The process of anaerobic denitrification may contribute significantly to the global methane and nitrogen cycles, especially in light of the recent influx of both due to anthropogenic changes. Anthropogenic methane is a significant driver of climate change, as it is multiple times more potent a greenhouse gas than carbon dioxide. Removing methane is widely considered beneficial to the environment, although the extent of the role that denitrification plays in the global methane flux is not well understood. Anaerobic denitrification as a mechanism has been shown to be capable of removing the excess nitrate caused by fertilizer runoff, even in hypoxic conditions.
Additionally, microorganisms which employ this type of metabolism may be employed in bioremediation, as shown by a 2006 study of hydrocarbon contamination in the Antarctic, as well as a 2016 study that successfully increased rates of denitrification by altering the environment housing the bacteria. Denitrifying bacteria are considered high-quality bioremediators because of their adaptability to a variety of environments and because they leave behind none of the toxic or undesirable by-products that other metabolisms do.
Role of denitrifying bacteria as a methane sink
Denitrifying bacteria have been found to play a significant role in the oxidation of methane (CH4), in which methane is converted to CO2, water, and energy, in deep freshwater bodies. This is important because methane is the second most significant anthropogenic greenhouse gas, with a global warming potential 25 times that of carbon dioxide, and freshwaters are a major contributor to global methane emissions.
A study conducted on Europe's Lake Constance found that anaerobic methane oxidation coupled to denitrification, also referred to as nitrate/nitrite-dependent anaerobic methane oxidation (n-damo), is a dominant sink of methane in deep lakes. For a long time, it was thought that the mitigation of methane emissions was due solely to aerobic methanotrophic bacteria. However, methane oxidation also takes place in anoxic, or oxygen-depleted, zones of freshwater bodies. In the case of Lake Constance, this is carried out by M. oxyfera-like bacteria, i.e. bacteria similar to Candidatus Methylomirabilis oxyfera, a species that acts as a denitrifying methanotroph.
The results from the study on Lake Constance found that nitrate was depleted in the water at the same depth as methane, which suggests that methane oxidation was coupled to denitrification. It could be inferred that it was M. oxyfera-like bacteria carrying out the methane oxidation because their abundance peaked at the same depth where the methane and nitrate profiles met. This n-damo process is significant because it aids in decreasing methane emissions from deep freshwater bodies and it aids in turning nitrates into nitrogen gas, reducing excess nitrates.
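For concreteness, the nitrite-dependent stoichiometry commonly attributed to M. oxyfera-like bacteria is 3 CH4 + 8 NO2− + 8 H+ → 3 CO2 + 4 N2 + 10 H2O. That equation is an assumption added here, not quoted from the text above, but its atom and charge balance can be checked mechanically:

```python
# Atom/charge balance check for the commonly cited n-damo stoichiometry
#   3 CH4 + 8 NO2- + 8 H+ -> 3 CO2 + 4 N2 + 10 H2O
# (the equation itself is an added assumption, not from the article).

species = {  # formula -> (C, H, N, O, charge) per molecule/ion
    "CH4":  (1, 4, 0, 0, 0),
    "NO2-": (0, 0, 1, 2, -1),
    "H+":   (0, 1, 0, 0, 1),
    "CO2":  (1, 0, 0, 2, 0),
    "N2":   (0, 0, 2, 0, 0),
    "H2O":  (0, 2, 0, 1, 0),
}
left = [("CH4", 3), ("NO2-", 8), ("H+", 8)]
right = [("CO2", 3), ("N2", 4), ("H2O", 10)]

def totals(side):
    # Sum each element count and the net charge over one side.
    return tuple(sum(n * species[s][i] for s, n in side) for i in range(5))
```

Both sides total 3 C, 20 H, 8 N, 16 O and zero net charge, so the equation is balanced.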
Denitrifying bacteria and the environment
Denitrification effects on limiting plant productivity and producing by-products
The process of denitrification can lower the fertility of soil as nitrogen, a growth-limiting factor, is removed from the soil and lost to the atmosphere. This nitrogen can eventually be regained, via introduced nutrients, as part of the nitrogen cycle. Some nitrogen may also be fixed by species of nitrifying bacteria and the cyanobacteria. Another important environmental issue concerning denitrification is that the process tends to produce large amounts of by-products, such as nitric oxide (NO) and nitrous oxide (N2O). NO is an ozone-depleting species, and N2O is a potent greenhouse gas which can contribute to global warming.
Denitrifying bacteria use in wastewater treatment
Denitrifying bacteria are an essential component in treating wastewater. Wastewater often contains large amounts of nitrogen (in the form of ammonium or nitrate), which can damage human health and ecological processes if left untreated. Many physical, chemical, and biological methods have been used to remove the nitrogenous compounds and purify polluted waters. The process and methods vary, but generally involve converting ammonium to nitrate via the nitrification process with ammonium-oxidizing bacteria (AOB, NH4+ → NO2−) and nitrite-oxidizing bacteria (NOB, NO2− → NO3−), and finally to nitrogen gas via denitrification. One example is ammonia-oxidizing bacteria, whose metabolism, in combination with other nitrogen-cycling metabolic activities such as nitrite oxidation and denitrification, removes nitrogen from wastewater in activated sludge. Since denitrifying bacteria are heterotrophic, an organic carbon source is supplied to the bacteria in an anoxic basin. With no available oxygen, denitrifying bacteria use nitrate as the terminal electron acceptor to oxidize the carbon. This produces nitrogen gas from nitrate, which then bubbles up out of the wastewater.
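As a rough mass balance for the final denitrification step (the figures and function name below are hypothetical, for illustration only):

```python
# Back-of-the-envelope nitrogen mass balance for a denitrifying basin:
# how much N2 gas leaves per kilogram of nitrate removed, using the
# overall stoichiometry 2 NO3- -> 1 N2.

M_NO3 = 14.0 + 3 * 16.0   # g/mol of nitrate (as NO3-)
M_N2 = 2 * 14.0           # g/mol of dinitrogen

def n2_from_nitrate(kg_no3: float) -> float:
    mol_no3 = kg_no3 * 1000 / M_NO3   # moles of nitrate removed
    mol_n2 = mol_no3 / 2              # 2 NO3- yield 1 N2
    return mol_n2 * M_N2 / 1000       # kg of N2 released

n2 = n2_from_nitrate(1.0)  # about 0.23 kg N2 per kg nitrate
```

In other words, a bit under a quarter of the nitrate mass leaves the basin as nitrogen gas; the rest of the mass (the oxygen) ends up in water and oxidized carbon.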
See also
Nitrifying bacteria
Nitrogen Cycle
References
Bacteria
Nitrogen cycle
Soil biology
Fishkeeping
Aquariums
In the realm of information technology (IT), to sunset a server, service, software feature, etc. is to plan to intentionally remove or discontinue it. In most cases, the term also connotes that this discontinuation is announced to users in advance, generally with an expected timeline. After sunsetting is announced, usually very few changes are made to the hardware or software in question, as such work would be counterproductive, when its termination is soon to follow. In some cases, however, individual features of an application, server, or service may be phased out at different times, leading up to the eventual full shutdown.
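In application code, a sunset is often announced programmatically before removal. The snippet below is a hypothetical Python sketch (all names and the date are invented): the feature keeps working while warning callers of the planned removal, so they can migrate before shutdown:

```python
import warnings

# Hypothetical sunset announcement: legacy_export still works, but
# every call emits a DeprecationWarning naming the removal date and
# the replacement, delegating to the new implementation meanwhile.

def legacy_export(data):
    warnings.warn(
        "legacy_export is sunset and will be removed after 2025-06-30; "
        "use export_v2 instead",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller, not here
    )
    return export_v2(data)

def export_v2(data):
    return list(data)
```

Keeping the old entry point as a thin, warning wrapper over the new one mirrors the "few changes after announcement" practice described above: the sunset code is frozen except for the delegation.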
References
Servers (computing)
Software development process
The Type 704 is a counter-battery radar designed to accurately locate hostile artillery, rocket and ground-to-ground missile launchers immediately after the enemy fires, and to support friendly artillery by providing guidance for counter-fire. Built by NORINCO, it was first displayed publicly at the 1988 ASIADEX defence show.
Development
The Type 704 radar shares the same roots as its larger cousin, the SLC-2 Radar: four AN/TPQ-37 Firefinder radars had been sold to China, and these became the foundation of SLC-2 radar development. Aside from political reasons, the US$10 million-plus unit price of the TPQ-37 (including after-sale logistic support) was simply too costly for the Chinese. A decision was made to develop a domestic equivalent after mastering the technologies of the TPQ-37. After the initial tests of the TPQ-37 at the Tangshan Range near Nanjing in 1988, and in Xuanhua District in October of the same year, several shortcomings of the TPQ-37 were discovered, and further intensive tests were conducted and completed in 1994.
The requirement for the Chinese domestic equivalent was subsequently modified to address the issues revealed in trials. Due to the limitations of Chinese industrial capability at the time, the decision was made to develop the domestic equivalent in several steps. The first step was to develop a smaller radar, which resulted in the Chinese equivalent of the AN/TPQ-36 Firefinder radar, the Type 704 series; based on the experience gained from this program, a more capable, larger version in the same class as the AN/TPQ-37 Firefinder radar would then be developed, eventually resulting in the SLC-2 series.
Type 704 radar
Type 704 is the first of the Type 704 series of counter-battery radars. Developmental work on the Type 704 began in parallel with the introduction of the AN/TPQ-37 radar into Chinese service, and the experience reportedly gained from Chinese reverse engineering of the TPQ-37 influenced the Type 704 radar.
One problem revealed in the tests was that the reliability of the TPQ-37 was much lower than claimed. When the TPQ-37 was deployed in environments with high humidity and heavy rainfall (southern China), high salinity (coastal regions), high altitude (southwestern China), or large daily temperature swings (northwestern China), malfunctions occurred more frequently. The Type 704 radar was designed specifically to improve reliability under these harsh environmental conditions.
Type 704A radar
The Type 704 was followed by its successor, the Type 704A, a fully solid-state, fully digitized version that further improved reliability and simplified logistics, thus reducing operational cost.
Another limitation of the TPQ-37 revealed in tests was that it was less effective against projectiles with flat trajectories: it was much more effective against howitzer and mortar rounds than against rounds from the 130 mm towed field gun M1954 (M-46) and its Chinese derivative, the Type 59-1. The Type 704A radar was designed to overcome this shortcoming by improving capability against rounds with flat trajectories.
BL904 radar
A further improved variant based on the Type 704A, designated BL904, has also been introduced. This latest version of the Type 704 radar family reportedly uses a more advanced lens arrangement for its planar passive phased-array antenna, instead of the simpler horn arrangement used in earlier versions. Unconfirmed Chinese claims state that the BL904 also incorporates technology from the former-Soviet Zoopark-1 counter-battery radar, two of which were purchased by China from Ukraine, but such claims have yet to be verified by official sources or by sources outside China.
Specifications
S-band
Range (against 81-mm mortar round sized target): >
CS/RB1 radar
At the 9th Zhuhai Airshow held in November 2012, a new lightweight counter-battery radar designated CS/RB1 made its public debut. Like the Type 704 and BL904 radars, the CS/RB1 is designed primarily for detecting incoming projectiles down to the size of a mortar round, though larger objects can be tracked as well. It is designed as a lightweight version of the Type 704/BL904 that can be carried by individual soldiers when the system is broken down into portions. The CS/RB1 is a passive phased-array radar operating in the L-band; it is fully solid state and highly digitized, uses a conformal cylindrical array, and can be airdropped.
References
1. Fire Control Radar Technology, Dec 1999 issue, Xi'an Electronics Research Institute (also known as Institute No. 206 of China Arms Industry Group Corporation), Xi'an, December 1999, Domestic Chinese SN: CN 61-1214/TJ.
2. Fire Control Radar Technology, Feb 1995 issue, Xi'an Electronics Research Institute (also known as Institute No. 206 of China Arms Industry Group Corporation), Xi'an, February 1995, Domestic Chinese SN: CN 61-1214/TJ.
3. Ordnance Knowledge, Jul 2007 issue, Ordnance Knowledge Magazine Publishing House, Beijing, July 2007, Domestic Chinese SN: CN 11-1470/TJ.
Weapon locating radar
Military radars of the People's Republic of China
Military equipment introduced in the 1980s | Type 704 Radar | Technology | 1,122 |
16,976,582 | https://en.wikipedia.org/wiki/Edmond%20Coignet | Edmond Coignet (4 July 1856 – 1915) was a French engineer and entrepreneur. He was instrumental in developing the theory of reinforced concrete.
Life and achievements
Coignet was the son of the industrialist François Coignet (1814–1888) and was educated at the École Centrale des Arts et Manufactures (École Centrale Paris). He invented agglomerated concrete, strengthening the cement with metal inserts. He permanently reoriented the family business toward construction. In 1892 he applied his innovative construction methods to the aqueduct of Achères in Paris. Coignet was the first to use reinforced concrete piles and, with the architect Jacques Hermant, built some of the first Parisian buildings in this material.
References
L'art de l'ingénieur, sous la dir. de Antoine Picon, éd. du Moniteur, 600 p. ()
1856 births
1915 deaths
Concrete pioneers
French civil engineers
École Centrale Paris alumni
Structural engineers | Edmond Coignet | Engineering | 194 |
334,420 | https://en.wikipedia.org/wiki/Telecommunications%20device%20for%20the%20deaf | A telecommunications device for the deaf (TDD) is a teleprinter, an electronic device for text communication over a telephone line, that is designed for use by persons with hearing or speech difficulties. Other names for the device include teletypewriter (TTY), textphone (common in Europe), and minicom (United Kingdom).
The typical TDD is a device about the size of a typewriter or laptop computer with a QWERTY keyboard and a small screen that uses an LED, LCD, or VFD display to show typed text electronically. In addition, TDDs commonly have a small spool of paper on which text is also printed; older versions of the device had only a printer and no screen. The text is transmitted live, via a telephone line, to a compatible device, i.e. one that uses a similar communication protocol.
Special telephone services have been developed to carry the TDD functionality even further. In certain countries, there are systems in place so that a deaf person can communicate with a hearing person on an ordinary voice phone using a human relay operator. There are also "carry-over" services, enabling people who can hear but cannot speak ("hearing carry-over," a.k.a. "HCO"), or people who cannot hear but are able to speak ("voice carry-over," a.k.a. "VCO") to use the telephone.
The term TDD is sometimes discouraged because people who are deaf increasingly use mainstream devices and technologies to carry out most of their communication. The devices described here were developed for use on the partially analog Public Switched Telephone Network (PSTN) and do not work well on newer internet protocol (IP) networks. Thus, as society increasingly moves toward IP-based telecommunication, the telecommunication devices used by people who are deaf will no longer be TDDs. In the US and Canada, the devices are referred to as TTYs.
Teletype Corporation, of Skokie, Illinois, made page printers for text, notably for news wire services and telegrams, but these used standards different from those for deaf communication, and although in quite widespread use, were technically incompatible. Furthermore, these were sometimes referred to by the "TTY" initialism, short for "Teletype". When computers had keyboard input mechanisms and page printer output, before CRT terminals came into use, Teletypes were the most widely used devices. They were called "console typewriters". (Telex used similar equipment, but was a separate international communication network.)
History
APCOM acoustic coupler or MODEM device
The TDD concept was developed by James C. Marsters (1924–2009), a dentist and private airplane pilot who became deaf as an infant because of scarlet fever, and Robert Weitbrecht, a deaf physicist. In 1964, Marsters, Weitbrecht and Andrew Saks, an electrical engineer and grandson of the founder of the Saks Fifth Avenue department store chain, founded APCOM (Applied Communications Corp.), located in the San Francisco Bay area, to develop the acoustic coupler, or modem; their first product was named the PhoneType. APCOM collected old teleprinter machines (TTYs) from the Department of Defense and junkyards. Acoustic couplers were cabled to TTYs enabling the AT&T standard Model 500 telephone to couple, or fit, into the rubber cups on the coupler, thus allowing the device to transmit and receive a unique sequence of tones generated by the different corresponding TTY keys. The entire configuration of teleprinter machine, acoustic coupler, and telephone set became known as the TTY. Weitbrecht invented the acoustic coupler modem in 1964. The actual mechanism for TTY communications was accomplished electro-mechanically through frequency-shift keying (FSK), allowing only half-duplex communication, where only one person at a time can transmit.
Paul Taylor TTY device
During the late 1960s, Paul Taylor combined Western Union Teletype machines with modems to create teletypewriters, known as TTYs. He distributed these early, non-portable devices to the homes of many in the deaf community in St. Louis, Missouri. He worked with others to establish a local telephone wake-up service. In the early 1970s, these small successes in St. Louis evolved into the nation's first local telephone relay system for the deaf.
Micon Industries MCM device
In 1973, the Manual Communications Module (MCM), which was the world's first electronic portable TTY allowing two-way telecommunications, premiered at the California Association of the Deaf convention in Sacramento, California. The battery-powered MCM was invented and designed by a deaf news anchor and interpreter, Kit Patrick Corson, in conjunction with Michael Cannon and physicist Art Ogawa. It was manufactured by Michael Cannon's company, Micon Industries, and initially marketed by Kit Corson's company, Silent Communications. In order to be compatible with the existing TTY network, the MCM was designed around the five-bit Baudot code established by the older TTY machines instead of the ASCII code used by computers. The MCM was an instant success with the deaf community despite the drawback of a $599 cost. Within six months there were more MCMs in use by the deaf and hard of hearing than TTY machines. After a year Micon took over the marketing of the MCM and subsequently concluded a deal with Pacific Bell (who coined the term "TDD") to purchase MCMs and rent them to deaf telephone subscribers for $30 per month.
After Micon formed an alliance with APCOM, Michael Cannon (Micon), Paul Conover (Micon), and Andrea Saks (APCOM) successfully petitioned the California Public Utilities Commission (CPUC), resulting in a tariff that paid for TTY devices to be distributed free of cost to deaf persons. Micon produced over 1,000 MCMs per month, resulting in approximately 50,000 MCMs being disseminated into the deaf community.
Before he left Micon in 1980, Michael Cannon developed several computer compatible variations of the MCM and a portable, battery operated printing TTY, but they were never as popular as the original MCM. Newer model TTYs could communicate with selectable codes that allow communications at a higher bit rate on those models similarly equipped. However, the lack of true computer interface functionality spelled the demise of the original TTY and its clones. During the mid-1970s, other so-called portable telephone devices were being cloned by other companies, and this was the time period when the term "TDD" began being used largely by those outside the deaf community.
Text messaging and the Def-Tone System (DTS)
This relay system became known commonly as the Def-Tone System (DTS) because the tones representing letters of the alphabet were eventually carried in tones outside the range of human hearing. Today, this is commonly called multi-tap, because a number is pressed 1, 2 or 3 times to get the corresponding letter. In 1994, Joseph Alan Poirier, a college student-worker, recommended using the system to send texts to forklifts to improve delivery of parts to the assembly line at GM Powertrain in Toledo, Ohio, and to send texts to pagers. In discussions with the pager supplier for Outback Steakhouse, he recommended moving the pagers to alphanumeric displays incorporating the same system, and having relays put in the forklifts to ping alert messages to the pagers used in that system. He called it text messaging, coining the phrase. It is theorized that when the Toyota forklift company was allegedly hired by GM for this work, one of its subcontractors, Kyocera, used the work done for Toyota to create text messaging for cell phones.
Marsters Award
In 2009, AT&T received the James C. Marsters Promotion Award from TDI (formerly Telecommunications for the Deaf, Inc.) for its efforts to increase accessibility to communication for people with disabilities. The award holds some irony; it was AT&T that, in the 1960s, resisted efforts to implement TTY technology, claiming it would damage its communication equipment. In 1968, the Federal Communications Commission struck down AT&T's policy and forced it to offer TTY access to its network.
Protocols
There are many different standards for TDDs and textphones.
Original 5-bit Baudot code
The original standard used by TTYs is a variant of the Baudot code. The maximum speed of this protocol is 10 characters per second. This is a half-duplex protocol, which means that only one person at a time may transmit characters. If both try to transmit at the same time, the characters will be garbled on the other end.
This protocol is commonly used in the United States.
This is a variant of the Baudot code, implemented as 5 bits per character transmitted asynchronously using frequency-shift keying at either 45.5 or 50 baud, with 1 start bit, 5 data bits, and 1.5 stop bits. Details of the protocol implementation are available in TIA-825-A and also in ITU-T Recommendation V.18 Annex A, "5-bit operational mode".
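As a rough illustration (not an implementation of TIA-825-A), the framing described above can be sketched in Python. The letters table below is only a small excerpt of the ITA2/US-TTY mapping, and the function names are invented for this example:

```python
# Illustrative sketch of TTY-style Baudot character framing.
# LETTERS is a small, partial excerpt of the 5-bit ITA2 letters table.
LETTERS = {
    'E': 0b00001, 'A': 0b00011, 'S': 0b00101,
    'T': 0b10000, ' ': 0b00100,
}

def frame(code):
    """One character frame: 1 start bit (0), 5 data bits LSB-first,
    then a stop element lasting 1.5 bit periods (1).
    Each entry is (bit_value, duration_in_bit_periods)."""
    data = [((code >> i) & 1, 1.0) for i in range(5)]
    return [(0, 1.0)] + data + [(1, 1.5)]

def encode(text, baud=45.5):
    """Return a list of (bit_value, duration_seconds) suitable for
    driving FSK tone generation. Half-duplex: one sender at a time."""
    bit_time = 1.0 / baud
    return [(bit, periods * bit_time)
            for ch in text.upper()
            for bit, periods in frame(LETTERS[ch])]

# Each character occupies 7.5 bit periods (1 + 5 + 1.5) regardless of
# the baud rate chosen.
signal = encode("SEA")
```

At 45.5 baud each bit period is about 22 ms, so one 7.5-period character takes roughly 165 ms, which is why the protocol is slow by modern standards.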
Turbo Code
The UltraTec company implements another protocol known as Enhanced TTY, which it calls "Turbo Code," in its products. Turbo Code has some advantages over Baudot protocols, such as a higher data rate, full ASCII compliance, and full-duplex capability. However, Turbo Code is proprietary, and UltraTec gives its specifications only to parties who are willing to license it, although some information concerning it is disclosed in .
Other legacy protocols
Other protocols used for text telephony are European Deaf Telephone (EDT) and dual-tone multi-frequency signaling (DTMF).
The ITU-T V-series recommendations include the following early modem standards approved by the ITU in 1988:
ITU-T V.21 specifies 300 bits per second duplex mode.
ITU-T V.23 specifies audio frequency-shift keying modulation to encode and transfer data at 600/1200 bits per second.
V.18
In 1994, the ITU approved the V.18 standard, which comprises two major parts, a dual standard. It is both an umbrella protocol that allows recognition and interoperability of some of the most commonly used textphone protocols, as well as offering a native V.18 mode, which is an ASCII full- or half-duplex modulation method.
Computers can, with appropriate software and a modem, emulate a V.18 TTY. Some voice modems, coupled with appropriate software, can now be converted into TTY modems by using a software-based decoder for TTY tones. The same can be done with such software using a computer's sound card coupled to the telephone line.
In the UK, a virtual V.18 network, called TextDirect, exists as part of the Public Switched Telephone Network (PSTN), thereby offering interoperability between textphones using different protocols. The platform also offers additional functionality like call progress and status information in text and automatic invocation of a relay service for speech-to-text calls.
Cell phones
Many digital cell phones are compatible with TTY devices.
Many people want to replace TTY with real-time text over IP (RTT), which can be used on a digital cell phone or tablet without a separate TTY device.
New technologies
As TDDs are increasingly considered legacy devices, with the emergence of modern technologies such as email, texting, and instant messaging, text from TDDs is increasingly sent over Text-over-IP gateways or other real-time text protocols. However, these newer methods require IP connections and will not work over regular analog phone lines unless a data connection is used (i.e. dial-up Internet, or the modem method of multiplexing text and voice used by Captioned Telephone hardware handsets). Because some people have no access to any kind of data connection, which is not even available in some parts of many countries, TTYs remain the only method for analog landline text phone calls, although TTYs include any device with a suitable modem and software.
Other devices for the deaf or hard of hearing
In addition to TDD, there are a number of pieces of equipment that can be coupled to telephones to improve their utility. For those with hearing difficulties the telephone ring and conversation sound level can be amplified or pitch adjusted; ambient noise can also be filtered. The amplifier can be a simple addition or through an inductive coupler to interact with suitable hearing aids. The ring can also be supplemented with extension bells or a visual call indicator.
Etiquette
There are some etiquette rules that users of TTYs must be aware of. Because of the inability to detect when a person has finished speaking, and the fact that two people typing at once will scramble the text on both ends, the term "Go Ahead" (GA) is used to denote the end of a turn and an indication for the other person to begin typing.
Sample conversation
Caller A: HELLO JOHN, WHAT TIME WILL YOU BE COMING AROUND TODAY Q GA
Caller B: HI FRED, I WILL BE AROUND NOON GA
Caller A: OK, NO PROBLEM, DON'T FORGET TO BRING THE BOOKS AND THE WORK SO FAR GA
Caller B: WILL DO SK
Caller A: BYE BYE SKSK
SK is used to allow the users to say their farewells, while SKSK indicates an immediate call hang-up.
Sample conversation 2
Caller A HI, THIS IS JOHN, CAN I ASK WHO IS CALLING? GA
Caller B HI JOHN, ITS ME FRED, I AM WONDERING WHERE YOU ARE, ITS GETTING LATE TO GO OUT TO THE PUB GA
Caller A HI FRED, SORRY I DONT THINK I CAN GO GA
Caller B WHY CANT YOU GO? GA
Caller A MY WIFE IS NOT FEELING WELL AND I HAVE NO BABYSITTER FOR MY KIDS! GA
Caller B AWWWW DARN. I WANTED YOU THERE. OH WELL WHAT CAN YOU DO ? GA
Caller A I KNOW.. I GOTTA GO. THE KIDS NEED ME. SEE YOU AROUND! BYE FOR NOW SK
Caller B OK NO WORRIES SEE YOU SOON! BYE BYE SK GA
Caller A SKSK (THE PARTY HAS HUNG UP)
Sample text relay call
Caller A TXD DIALING.. TXD RING... TXD OPERATOR CONNECTED.. EXPLAINING TEXT RELAY SERVICE. PLEASE WAIT.... HI THIS IS JOHN GA
Caller B HI JOHN ITS ME FRED. I AM WONDERING WHAT YOU ARE DOING TONIGHT? GA
Caller A HI FRED. I AM THINKING OF HAVING A POKER NIGHT AT MINE, WHAT DO YOU THINK? GA
Caller B GOOD IDEA, I'LL CALL A FEW MATES TO COME ROUND AND HAVE A GOOD GAME GA
Caller A OK SEE YOU AT 7PM. BYE BYE SK GA
Caller B OK SEE YOU AT 7PM BYE BYE SKSKSKSK GA
Caller A THANK YOU FOR USING TEXT RELAY SERVICE. GOODBYE
Note: TTYs use only capital letters, except on devices with computer screens.
Note: In the UK, the text relay service used to be called Typetalk (RNID) but has since been merged into the phone network using the dialling prefix 18001 (TTY) or 18002 (voice relay). The emergency line is 18000 (TTY).
TRS relay
One of the most common uses for a TTY is to place calls to a Telecommunications Relay Service (TRS), which makes it possible for the deaf to successfully make phone calls to users of regular phone systems.
Voice recognition systems are in limited use, due to problems with the technology. A newer development, the captioned telephone, uses voice recognition to assist the human operators. Newer text-based communication methods, such as short message service (SMS), Internet Relay Chat (IRC), and instant messaging, have also been adopted by the deaf as an alternative or adjunct to the TTY.
See also
List of video telecommunication services and product brands
Telecommunications relay service
Video relay service (VRS), using videotelephony
Notes
References
Assistive technology
Deafness
Telecommunications
American inventions | Telecommunications device for the deaf | Technology | 3,339 |
2,910,735 | https://en.wikipedia.org/wiki/Machine%20drawn%20cylinder%20sheet%20glass | Machine drawn cylinder sheet was the first mechanical method for "drawing" window glass. Cylinders of glass 40 feet (12 m) high are drawn vertically from a circular tank. The glass is then annealed and cut into 7 to 10 foot (2 to 3 m) cylinders. These are cut lengthways, reheated, and flattened.
This process was invented in the US in 1903. This type of glass was manufactured in the early 20th century (it was manufactured in the United Kingdom by Pilkingtons from 1910 to 1933).
Other historical methods for making window glass included broad sheet, blown plate, crown glass, polished plate and cylinder blown sheet. These methods of manufacture lasted at least until the end of the 19th century. The early 20th century marks the move away from hand-blown to machine manufactured glass such as rolled plate, flat drawn sheet, single and twin ground polished plate and float glass.
Sources
Glass production | Machine drawn cylinder sheet glass | Materials_science,Engineering | 188 |
1,274,254 | https://en.wikipedia.org/wiki/Alpha%20Comae%20Berenices | Alpha Comae Berenices (α Comae Berenices, abbreviated Alpha Com, α Com) is a binary star in the constellation of Coma Berenices (Berenice's Hair), approximately 58 light-years away. It consists of two main-sequence stars, each a little hotter and more luminous than the Sun.
Alpha Comae Berenices is said to represent the crown worn by Queen Berenice. The two components are designated Alpha Comae Berenices A (officially named Diadem, the traditional name for the system) and B.
Nomenclature
α Comae Berenices (Latinised to Alpha Comae Berenices) is the system's Bayer designation. The designations of the two components as Alpha Comae Berenices A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU).
The system bore the traditional names Diadem and Al Dafirah, the latter derived from the Arabic الضفيرة aḍ-ḍafīrah "the braid". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Diadem for the component Alpha Comae Berenices A on 1 February 2017 and it is now so included in the List of IAU-approved Star Names.
In Chinese, (), meaning Left Wall of Supreme Palace Enclosure, refers to an asterism consisting of Alpha Comae Berenices, Eta Virginis, Gamma Virginis, Delta Virginis and Epsilon Virginis. Consequently, the Chinese name for Alpha Comae Berenices itself is (, .), representing (), meaning The First Eastern General. 東上將 (Dōngshàngjiàng) was westernized into Shang Tseang, but that name was designated for "v Comae Berenices" by R.H. Allen, and its meaning is "a Higher General".
Properties
Although Alpha Comae Berenices bears the title "alpha", at magnitude 4.32 it is actually fainter than Beta Comae Berenices.
It is a binary star, with almost equal components of magnitudes 5.05 and 5.08 orbiting each other with a period of 25.87 years. The system, estimated to be 58 light-years distant, appears so nearly "edge-on" from Earth that the two stars appear to move back and forth in a straight line with a maximum separation of only 0.7 arcsec. Eclipses are predicted to occur between the two components; however, they have not been successfully observed, due to miscalculations of the predicted eclipse times.
The mean separation between them is approximately 10 AU, about the distance between the Sun and Saturn.
The binary star has a visual companion, CCDM J13100+1732C, of apparent magnitude 10.2, located 89 arcseconds away along a position angle of 345Β°.
Alpha Comae Berenices forms an isosceles triangle with the globular star clusters Messier 53 and NGC 5053. The apparent size of this triangle is a little more than one degree. Alpha Comae Berenices lies westward (preceding) of both globular star clusters.
References
External links
Comae Berenices, Alpha
Binary stars
Coma Berenices
F-type main-sequence stars
Diadem
Comae Berenices, 42
114378
064241
Triple stars
BD+18 2697
0501
4968 | Alpha Comae Berenices | Astronomy | 755 |
69,704,037 | https://en.wikipedia.org/wiki/Lubabegron | Lubabegron (trade name Experior) is a veterinary drug used to reduce ammonia emissions from animals and their waste. Ammonia emissions are a concern in agricultural production because of detrimental effects on the environment, human health, and animal health.
Lubabegron was approved by the U.S. Food and Drug Administration in 2018 for use in feedlot cattle. It is the first drug approved for reducing ammonia emissions. It is also approved for use in Canada.
Lubabegron is a beta-adrenergic receptor agonist/antagonist. The antagonist activity of lubabegron at β1 and β2 receptors prevents the stimulation of the β-ARs found in the heart (β1) and trachea/bronchi (β2) of humans and, in doing so, avoids the potential negative side effects associated with β1 and β2 receptor activation. The β1-AR and β2-AR antagonist behavior of lubabegron could decrease lipolysis in adipose tissue, whereas the β3-AR agonist activity could increase skeletal muscle hypertrophy, possibly due to differences in the second-messenger systems and enzyme expression in skeletal muscle compared with adipose tissue.
References
Veterinary drugs
Beta-adrenergic agonists
Nitriles
Pyridines
Thiophenes | Lubabegron | Chemistry | 275 |
2,115,079 | https://en.wikipedia.org/wiki/Shuttle%20Inc. | Shuttle Inc. () (TAIEX:2405) is a Taiwan-based manufacturer of motherboards, barebone computers, complete PC systems and monitors. Throughout the last 10 years, Shuttle has been one of the world's top 10 motherboard manufacturers, and gained fame in 2001 with the introduction of the Shuttle SV24, one of the world's first commercially successful small form factor computers. Shuttle XPC small form factor computers tend to be popular among PC enthusiasts and hobbyists, although in 2004 Shuttle started a campaign to become a brand name recognized by mainstream PC consumers.
Shuttle XPC desktop systems are based on same PC platform as the XPC barebone (case+motherboard+power supply) Shuttle manufactures. More recently, the differentiation between Shuttle barebones and Shuttle systems has become greater, with the launch of system exclusive models such as the M-series and X-series.
History
1983 – Shuttle was initially incorporated in Taiwan by David and Simon Yu under the name Holco (浩鑫), and commenced trading of computer motherboards.
1984 – Holco begins manufacturing motherboards at its factory in Taoyuan County (now Taoyuan City), Taiwan.
1988 – Holco establishes its first overseas branch office, in Fremont, California.
1990 – Holco subsidiary Shuttle Computer Handel is established in Elmshorn, Germany, to serve the European market.
1994 – Introduces the Shuttle RiscPC 4475, a desktop based on the DEC Alpha 64-bit microprocessor and Microsoft Windows NT for Alpha.
1995 – Shuttle reaches #5 among motherboard manufacturers worldwide in terms of volume.
1997 – Holco officially changes its name to Shuttle Inc.
2000 – Goes public on the TAIEX stock market under symbol 2405.
2001 – Introduces the Shuttle SV24, a compact all-aluminum computer using desktop components.
2002 – The SV24 evolves into the XPC line of small form factor barebones computers, including models for Intel's Pentium 4 and AMD's Athlon.
2003 – 8 different XPCs introduced, including models featuring chipsets from Nvidia, Intel, SiS, and VIA.
2004 – XPC shipments pass 1 million; Shuttle introduces the XP17 LCD and fully assembled PC systems. Branch offices established in Japan and China.
2005 – PC World awards Shuttle the "World Class" and "Best Buy" awards.
2005 – IDC research ranks Shuttle brand loyalty higher than Dell, Sony and Apple.
2005 – Shuttle introduces the world's first small-form-factor Nvidia SLI system.
2005 – Introduction of Shuttle's first exclusive set-top-box living-room PC, the M1000.
2005 – Shuttle debuts the world's fastest ultra-small-form-factor PC, the X100.
2006 – Shuttle is selected as one of Intel's premier partners when Intel Viiv launches at the International Consumer Electronics Show (CES) in Las Vegas.
2006 – Shuttle launches a series of new chassis, the X series and T series, offering a variety of PC shapes.
2006 – PC World selects the Shuttle as the 15th great landmark in PC history.
2007 – Shuttle introduces a new lineup of extreme gaming PCs, the SDXi system, featuring the Intel Core 2 Extreme processor and a water cooling system.
2007 – Shuttle Computers introduces the first SFF workstation line with Intel Xeon processors.
2007 – Shuttle introduces the XPC G5 3300m system, featuring the world's first and only optical drive supporting dual HD 1080 formats.
2007 – Shuttle introduces the XPC Glamor, Prima and D'VO series.
2007 – BCN awards Shuttle the "Top Prize" in the "Barebone PC" category, with a market share of 44.2 percent; this is the most prestigious Japanese award for IT companies.
2008 – Shuttle implements "green" features into its PC lineup.
2008 – Shuttle debuts its first surveillance concept product.
2009 – Shuttle launches the X50, its first all-in-one PC.
2009 – Shuttle develops its first home automation product.
2009 – Shuttle develops its first IPC product.
2010 – Shuttle establishes an OEM business unit to launch mobile solutions.
2010 – Shuttle debuts the world's first successful notebook ecosystem, the Shuttle Notebook Ecosystem.
2010 – Introduction of the first online notebook ordering system, eSPA.
2010 – Shuttle launches the fanless 1-liter PC series, the XS35.
2011 – Introduction of BTR, its "Build-To-Request" solution for the PC industry.
2011 – Intel names Shuttle a "Platinum" technology provider, the highest recognition.
2011 – NVIDIA names Shuttle a "Premier" partner in North America, the highest recognition.
Products
From 1987 to 2004, Shuttle manufactured AT, Baby AT, ATX, and Micro ATX motherboards. Among Shuttle's most popular motherboards were the HOT-603 Socket 7 motherboard based on the AMD640 chipset, and the AK31 Socket A motherboard based on the VIA KT266 and KT266A chipsets.
Currently, Shuttle's primary product is the XPC. The Shuttle XPC's design goal is to provide the power and features of a typical desktop PC in a fraction of the space. The XPC consists of a custom small-footprint motherboard, a rectangular chassis typically consisting of aluminum, a "Shuttle ICE" heatpipe-augmented heatsink, and a compact power supply. Popular XPCs include the SS51G, the SN41G2, and the SN25P. Shuttle XPC barebones can be found worldwide from PC distributors, retailers, and e-commerce stores. In 2004, the Shuttle XPC was the official PC of the World Cyber Games.
In 2004, Shuttle began manufacturing fully assembled PC systems. As of 2007, Shuttle XPC systems are available in the United States only at Sam's Club as well as Shuttle's US website. Shuttle systems are also available in Europe, Taiwan, China and Japan.
The Shuttle XP17 is a portable 17" LCD introduced in 2004 and, surprisingly, still sold today at a premium. The XP17 is targeted at LAN gaming and other activities requiring a portable, high-performance monitor. The XP17 won the Red Dot Award for industrial design in June 2005.
Models
Current Models
T-series
X-series
M-series
P2-series
P3-series
G5-series
G2-series
H7 series
J1 series
J2 series
J3 series
J4 series
Laptop standardization proposal
At the 2010 Consumer Electronics Show Shuttle unveiled a proposal called Shuttle PCB Assembly (SPA) to standardize motherboard size and layouts for laptop computers. Computer Shopper magazine said this was one of the top ten announcements for innovation made at 2010 CES.
Awards
In 2009, CNet praised one of Shuttle's new machines for allowing full sized graphics cards while still maintaining a small form factor.
See also
List of companies of Taiwan
References
External links
Interview with Ken Huang, chief XPC architect.
Interview with Jack Wang, USA president of Shuttle.
Companies established in 1983
Computer hardware companies
Computer systems companies
Motherboard companies
Companies based in Taipei
Taiwanese brands
Electronics companies of Taiwan | Shuttle Inc. | Technology | 1,441 |
49,234 | https://en.wikipedia.org/wiki/Chromatic%20scale | The chromatic scale (or twelve-tone scale) is a set of twelve pitches (more completely, pitch classes) used in tonal music, with notes separated by the interval of a semitone. Chromatic instruments, such as the piano, are made to produce the chromatic scale, while other instruments capable of continuously variable pitch, such as the trombone and violin, can also produce microtones, or notes between those available on a piano.
Most music uses subsets of the chromatic scale such as diatonic scales. While the chromatic scale is fundamental in western music theory, it is seldom directly used in its entirety in musical compositions or improvisation.
Definition
The chromatic scale is a musical scale with twelve pitches, each a semitone, also known as a half-step, above or below its adjacent pitches. As a result, in 12-tone equal temperament (the most common tuning in Western music), the chromatic scale covers all 12 of the available pitches. Thus, there is only one chromatic scale. The ratio of the frequency of one note in the scale to that of the preceding note is given by 2^(1/12), the twelfth root of two (approximately 1.05946).
In equal temperament, all the semitones have the same size (100 cents), and there are twelve semitones in an octave (1200 cents). As a result, the notes of an equal-tempered chromatic scale are equally-spaced.
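The equal spacing described above is easy to verify numerically. The sketch below generates one octave of the 12-TET chromatic scale from a starting pitch; the A4 = 440 Hz reference is an assumed convention, not something specified in this article:

```python
# One octave of the 12-tone equal-tempered (12-TET) chromatic scale.
# Each semitone multiplies the frequency by 2**(1/12), so 12 steps
# double the frequency (one octave).
SEMITONE_RATIO = 2 ** (1 / 12)

def chromatic_scale(base_hz=440.0, steps=13):
    """Frequencies of `steps` consecutive semitones starting at base_hz."""
    return [base_hz * SEMITONE_RATIO ** n for n in range(steps)]

scale = chromatic_scale()
print(scale[0], scale[12])  # ~440.0 and ~880.0: twelve semitones make an octave
```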
The ascending and descending chromatic scale is shown below.
Notation
The chromatic scale has no set enharmonic spelling that is always used. Its spelling is, however, often dependent upon major or minor key signatures and whether the scale is ascending or descending. In general, the chromatic scale is usually notated with sharp signs when ascending and flat signs when descending. It is also notated so that no scale degree is used more than twice in succession (for instance, G♭ – G♮ – G♯).
Similarly, some notes of the chromatic scale have enharmonic equivalents in solfège. The rising scale is Do, Di, Re, Ri, Mi, Fa, Fi, Sol, Si, La, Li, Ti, and the descending is Ti, Te/Ta, La, Le/Lo, Sol, Se, Fa, Mi, Me/Ma, Re, Ra, Do. However, once 0 is assigned to a note, the chromatic scale may, due to octave equivalence, be indicated unambiguously by the numbers 0–11 (mod 12). Thus two perfect fifths are 0-7-2. Tone rows, orderings used in the twelve-tone technique, are often considered this way due to the increased ease of comparing inverse intervals and forms (inversional equivalence).
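Integer notation turns interval arithmetic into addition modulo 12; the 0-7-2 example in the text is just two stacked fifths. A minimal sketch:

```python
# Pitch classes are integers 0-11; octave equivalence makes all
# interval arithmetic work modulo 12.
def transpose(pitch_class, interval):
    """Transpose a pitch class by an interval in semitones, mod 12."""
    return (pitch_class + interval) % 12

# Two successive perfect fifths (7 semitones each) starting from 0:
first = transpose(0, 7)       # 7
second = transpose(first, 7)  # (7 + 7) % 12 = 2, giving the 0-7-2 of the text
print(first, second)  # 7 2
```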
Pitch-rational tunings
Pythagorean
The most common conception of the chromatic scale before the 13th century was the Pythagorean chromatic scale (). Due to a different tuning technique, the twelve semitones in this scale have two slightly different sizes. Thus, the scale is not perfectly symmetric. Many other tuning systems, developed in the ensuing centuries, share a similar asymmetry.
In Pythagorean tuning (i.e. 3-limit just intonation) the chromatic scale is tuned as follows, in perfect fifths from G♭ to A♯ centered on D (in bold) (G♭–D♭–A♭–E♭–B♭–F–C–G–D–A–E–B–F♯–C♯–G♯–D♯–A♯), with sharps higher than their enharmonic flats (cents rounded to one decimal):
{| class="wikitable" style="text-align: center"
|-
!width=4%|
!width=4%| C
!width=4%| D♭
!width=4%| C♯
!width=4%| D
!width=4%| E♭
!width=4%| D♯
!width=4%| E
!width=4%| F
!width=4%| G♭
!width=4%| F♯
!width=4%| G
!width=4%| A♭
!width=4%| G♯
!width=4%| A
!width=4%| B♭
!width=4%| A♯
!width=4%| B
!width=4%| C
|-
!Pitch ratio
| 1 || 256/243 || 2187/2048 || 9/8 || 32/27 || 19683/16384 || 81/64 || 4/3 || 1024/729 || 729/512 || 3/2 || 128/81 || 6561/4096 || 27/16 || 16/9 || 59049/32768 || 243/128 || 2
|-
!Cents
| 0 || 90.2 || 113.7 || 203.9 || 294.1 || 317.6 || 407.8 || 498 || 588.3 || 611.7 || 702 || 792.2 || 815.6 || 905.9 || 996.1 || 1019.6 || 1109.8 || 1200
|}
where 256/243 is a diatonic semitone (Pythagorean limma) and 2187/2048 is a chromatic semitone (Pythagorean apotome).
The chromatic scale in Pythagorean tuning can be tempered to the 17-EDO tuning (P5 = 10 steps = 705.88 cents).
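All the cents figures in the table come from the standard conversion cents = 1200·log2(ratio). As a spot check, the two Pythagorean semitone sizes can be recomputed from their classical ratios:

```python
import math
from fractions import Fraction

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(float(ratio))

limma = Fraction(256, 243)      # Pythagorean diatonic semitone
apotome = Fraction(2187, 2048)  # Pythagorean chromatic semitone

print(round(cents(limma), 1), round(cents(apotome), 1))  # 90.2 113.7
# The two semitones stack to a Pythagorean whole tone of 9/8:
assert limma * apotome == Fraction(9, 8)
```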
Just intonation
In 5-limit just intonation the chromatic scale, Ptolemy's intense chromatic scale, is as follows, with flats higher than their enharmonic sharps, and new notes between E–F and B–C (cents rounded to one decimal):
{| class="wikitable" style="text-align: center"
|-
!
! C !! C♯ !! D♭ !! D !! D♯ !! E♭ !! E !! E♯/F♭ !! F !! F♯ !! G♭ !! G !! G♯ !! A♭ !! A !! A♯ !! B♭ !! B !! B♯/C♭ !! C
|-
!Pitch ratio
| 1 || 25/24 || 16/15 || 9/8 || 75/64 || 6/5 || 5/4 || 32/25 || 4/3 || 25/18 || 36/25 || 3/2 || 25/16 || 8/5 || 5/3 || 125/72 || 9/5 || 15/8 || 48/25 || 2
|-
!Cents
| 0 || 70.7 || 111.7 || 203.9 || 274.6 || 315.6 || 386.3 || 427.4 || 498 || 568.7 || 631.3 || 702 || 772.6 || 813.7 || 884.4 || 955 || 1017.6 || 1088.3 || 1129.3 || 1200
|}
Fractions such as 9/8 and 10/9, which differ by 81/80 (the syntonic comma), and many other such pairs are interchangeable, as the comma is tempered out.
Just intonation tuning can be approximated by 19-EDO tuning (P5 = 11 steps = 694.74 cents).
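Both of the equal-temperament approximations quoted in this section (17-EDO for the Pythagorean scale, 19-EDO for this just scale) follow from the fact that each step of n-EDO is 1200/n cents:

```python
def edo_cents(divisions, steps):
    """Size in cents of `steps` steps of an equal division of the octave."""
    return 1200.0 * steps / divisions

# The fifths quoted in this section:
print(round(edo_cents(17, 10), 2))  # 705.88 (17-EDO fifth, sharp of just 3/2)
print(round(edo_cents(19, 11), 2))  # 694.74 (19-EDO fifth, flat of just 3/2)
```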
Non-Western cultures
The ancient Chinese chromatic scale is called Shí-èr-lǜ. However, "it should not be imagined that this gamut ever functioned as a scale, and it is erroneous to refer to the 'Chinese chromatic scale', as some Western writers have done. The series of twelve notes known as the twelve lü were simply a series of fundamental notes from which scales could be constructed." However, "from the standpoint of tonal music [the chromatic scale] is not an independent scale, but derives from the diatonic scale," making the Western chromatic scale a gamut of fundamental notes from which scales could be constructed as well.
See also
Atonality
Chromaticism
Twelve-tone technique
20th century music#Classical
"All Through the Night" (Cole Porter song)
Notes
Sources
Further reading
Hewitt, Michael. 27 January 2013. Musical Scales of the World. The Note Tree.
External links
The Chromatic Scale arranged for guitar in several fingerings. (Formatted for easy printing)
The 12 golden notes of music
Chromatic Scale β Analysis
Chromaticism
Musical scales
Post-tonal music theory
Musical symmetry
Hemitonic scales
Tritonic scales
{{DISPLAYTITLE:C11H17NO2}}
The molecular formula C11H17NO2 (molar mass: 195.258 g/mol) may refer to:
2C-D
DESOXY
Deterenol
Dimethoxyamphetamine
4-Hydroxy-3-methoxymethamphetamine
Metaterol
3-Methoxy-4-ethoxyphenethylamine
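The molar mass quoted in the lead can be reproduced by summing atomic weights over the formula. The sketch below assumes the 2005 IUPAC standard atomic weights; a different weights table shifts the last decimal slightly:

```python
# Molar mass of C11H17NO2 from standard atomic weights (g/mol).
# Atomic weights below are the 2005 IUPAC values (an assumption).
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}
FORMULA = {"C": 11, "H": 17, "N": 1, "O": 2}

molar_mass = sum(ATOMIC_WEIGHT[element] * count
                 for element, count in FORMULA.items())
print(round(molar_mass, 3))  # 195.258
```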
Pink flowers

Pink flowers are used as a symbol of love and awareness. For decades, pink flowers have been used to decorate weddings as a symbol of love. They can also be used as a display of love at funerals, as demonstrated at the funeral for Anna Nicole Smith.
More recently, pink flowers have come to symbolize breast cancer awareness.
They may also be used as an expression of thanks, or just enjoyed for their aesthetic beauty.
Species
Species of pink flowers include:
Allium (flowering onion)
Astilbe
Azalea
Begonias
Butterfly bush
Camellia
Carambola tree (starfruit)
Carnation
Cherry
Clematis
Coneflower (Echinacea)
Cypripedium acaule (lady's slipper orchids)
Dahlia
Dianthus family (carnation, pink, and sweet william, and especially garden pink, whence the colour got its name)
Flowering plum tree
Hibiscus
Hyacinth
Hydrangea growing in alkaline (basic) soil
Oriental lily
Papaver orientale (Oriental poppy)
Peony / paeony
Petunia
Rhododendron and Azalea
Roses
Sabatia angularis (rosepink or bitterbloom)
Tulips
Vinca
Alumroot
Aster
Forget-me-not (Myosotis)
Orchid
References
Flora
Gastrin family

The gastrin family (also known as the gastrin/cholecystokinin family) of proteins is defined by the peptide hormones gastrin and cholecystokinin. Gastrin and cholecystokinin (CCK) are structurally and functionally related peptide hormones that serve as regulators of various digestive processes and feeding behaviors. Additional structurally related members of this family include the amphibian caerulein skin peptide, the cockroach leukosulphakinin I and II (LSK) peptides, the Drosophila melanogaster putative CCK-homologs Drosulphakinins I and II, a chicken gastrin/cholecystokinin-like peptide, and cionin, a neuropeptide from the protochordate Ciona intestinalis.
Gastrin and CCK are important hormonal regulators that are known to induce gastric secretion, stimulate pancreatic secretion, increase blood circulation and water secretion in the stomach and intestine, and stimulate smooth muscle contraction. Originally found in the gut, these hormones have since been shown to be present in various parts of the nervous system.
Like many other active peptides they are synthesized as larger protein precursors that are then enzymatically converted into their mature forms. They exist in several molecular forms due to tissue-specific post-translational processing.
The biological activity of gastrin and CCK is associated with the last five C-terminal residues. One or two positions downstream, there is a conserved sulphated tyrosine residue.
Human proteins from this family
CCK; GAST;
References
Protein domains
Hormones
Timeline of calculus and mathematical analysis

A timeline of calculus and mathematical analysis.
500BC to 1600
5th century BC - Zeno of Elea proposes his paradoxes of motion,
5th century BC - Antiphon attempts to square the circle,
5th century BC - Democritus finds the volume of cone is 1/3 of volume of cylinder,
4th century BC - Eudoxus of Cnidus develops the method of exhaustion,
3rd century BC - Archimedes displays geometric series in The Quadrature of the Parabola. Archimedes also discovers a method which is similar to differential calculus.
3rd century BC - Archimedes develops a concept of the indivisiblesβa precursor to infinitesimalsβallowing him to solve several problems using methods now termed as integral calculus. Archimedes also derives several formulae for determining the area and volume of various solids including sphere, cone, paraboloid and hyperboloid.
Before 50 BC - Babylonian cuneiform tablets show use of the trapezoid rule to calculate the position of Jupiter.
3rd century - Liu Hui rediscovers the method of exhaustion in order to find the area of a circle.
4th century - The Pappus's centroid theorem,
5th century - Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere.
600 - Liu Zhuo is the first person to use second-order interpolation for computing the positions of the sun and the moon.
665 - Brahmagupta discovers a second order Newton-Stirling interpolation for ,
862 - The Banu Musa brothers write the "Book on the Measurement of Plane and Spherical Figures",
9th century - ThΔbit ibn Qurra discusses the quadrature of the parabola and the volume of different types of conic sections.
12th century - BhΔskara II discovers a rule equivalent to Rolle's theorem for ,
14th century - Nicole Oresme proves of the divergence of the harmonic series,
14th century - Madhava discovers the power series expansions for sin x, cos x, and arctan x. These infinite series are now well known in the Western world as special cases of the Taylor series.
14th century - Parameshvara discovers a third order Taylor interpolation for ,
1445 - Nicholas of Cusa attempts to square the circle,
1501 - Nilakantha Somayaji writes the Tantrasamgraha, which contains the Madhava's discoveries,
1548 - Francesco Maurolico attempted to calculate the barycenter of various bodies (pyramid, paraboloid, etc.),
1550 - Jyeshtadeva writes the YuktibhΔαΉ£Δ, a commentary to Nilakantha's Tantrasamgraha,
1560 - Sankara Variar writes the Kriyakramakari,
1565 - Federico Commandino publishes De centro Gravitati,
1588 - Commandino's translation of Pappus' Collectio gets published,
1593 - François Viète discovers the first infinite product in the history of mathematics,
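The Babylonian entry above refers to the trapezoid rule, which remains the elementary method of numerical integration; a minimal modern sketch:

```python
def trapezoid(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0))  # ~0.333333
```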
17th century
1606 - Luca Valerio applies methods of Archimedes to find volumes and centres of gravity of solid bodies,
1609 - Johannes Kepler computes the integral ,
1611 - Thomas Harriot discovers an interpolation formula similar to Newton's interpolation formula,
1615 - Johannes Kepler publishes Nova stereometria doliorum,
1620 - GrΓ©goire de Saint-Vincent discovers that the area under a hyperbola represented a logarithm,
1624 - Henry Briggs publishes Arithmetica Logarithmica,
1629 - Pierre de Fermat discovers his method of maxima and minima, precursor of the derivative concept,
1634 - Gilles de Roberval shows that the area under a cycloid is three times the area of its generating circle,
1635 - Bonaventura Cavalieri publishes Geometria Indivisibilibus,
1637 - RenΓ© Descartes publishes La GΓ©omΓ©trie,
1638 - Galileo Galilei publishes Two New Sciences,
1644 - Evangelista Torricelli publishes Opera geometrica,
1644 - Fermat's methods of maxima and minima published by Pierre HΓ©rigone,
1647 - Cavalieri computes the integral ,
1647 - GrΓ©goire de Saint-Vincent publishes Opus Geometricum,
1650 - Pietro Mengoli proves of the divergence of the harmonic series,
1654 - Johannes Hudde discovers the power series expansion for ,
1656 - John Wallis publishes Arithmetica Infinitorum,
1658 - Christopher Wren shows that the length of a cycloid is four times the diameter of its generating circle,
1659 - Second edition of Van Schooten's Latin translation of Descartes' Geometry with appendices by Hudde and Heuraet,
1665 - Isaac Newton discovers the generalized binomial theorem and develops his version of infinitesimal calculus,
1667 - James Gregory publishes Vera circuli et hyperbolae quadratura,
1668 - Nicholas Mercator publishes Logarithmotechnia,
1668 - James Gregory computes the integral of the secant function,
1670 - Isaac Newton rediscovers the power series expansion for and (originally discovered by Madhava),
1670 - Isaac Barrow publishes Lectiones Geometricae,
1671 - James Gregory rediscovers the power series expansion for and (originally discovered by Madhava),
1672 - RenΓ©-FranΓ§ois de Sluse publishes A Method of Drawing Tangents to All Geometrical Curves,
1673 - Gottfried Leibniz also develops his version of infinitesimal calculus,
1675 - Isaac Newton invents a Newton's method for the computation of roots of a function,
1675 - Leibniz uses the modern notation for an integral for the first time,
1677 - Leibniz discovers the rules for differentiating products, quotients, and the function of a function.
1683 - Jacob Bernoulli discovers the number e,
1684 - Leibniz publishes his first paper on calculus,
1686 - The first appearance in print of the notation for integrals,
1687 - Isaac Newton publishes Philosophiæ Naturalis Principia Mathematica,
1691 - The first proof of Rolle's theorem is given by Michel Rolle,
1691 - Leibniz discovers the technique of separation of variables for ordinary differential equations,
1694 - Johann Bernoulli discovers the L'HΓ΄pital's rule,
1696 - Guillaume de L'HΓ΄pital publishes Analyse des Infiniment Petits, the first calculus textbook,
1696 - Jakob Bernoulli and Johann Bernoulli solve the brachistochrone problem, the first result in the calculus of variations.
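Some entries in this section translate directly into code. Newton's root-finding method (the 1675 entry) in its modern form, applied here to computing a square root:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f via Newton's iteration x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Root of x^2 - 2 = 0, i.e. the square root of 2:
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ~1.4142135623730951
```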
18th century
1711 - Isaac Newton publishes De analysi per aequationes numero terminorum infinitas,
1712 - Brook Taylor develops Taylor series,
1722 - Roger Cotes computes the derivative of sine function in his Harmonia Mensurarum,
1730 - James Stirling publishes The Differential Method,
1734 - George Berkeley publishes The Analyst,
1734 - Leonhard Euler introduces the integrating factor technique for solving first-order ordinary differential equations,
1735 - Leonhard Euler solves the Basel problem, relating an infinite series to π,
1736 - Newton's Method of Fluxions posthumously published,
1737 - Thomas Simpson publishes Treatise of Fluxions,
1739 - Leonhard Euler solves the general homogeneous linear ordinary differential equation with constant coefficients,
1742 - Modern definition of logarithm by William Gardiner,
1742 - Colin Maclaurin publishes Treatise on Fluxions,
1748 - Euler publishes Introductio in analysin infinitorum,
1748 - Maria Gaetana Agnesi discusses analysis in Instituzioni Analitiche ad Uso della Gioventu Italiana,
1762 - Joseph Louis Lagrange discovers the divergence theorem,
1797 - Lagrange publishes ThΓ©orie des fonctions analytiques,
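The 1735 Basel problem entry is also easy to check numerically: partial sums of the reciprocal squares creep up toward π²/6, with error roughly 1/N:

```python
import math

# Partial sum of Euler's Basel series 1/1^2 + 1/2^2 + ... + 1/N^2.
N = 100_000
partial = sum(1.0 / (k * k) for k in range(1, N + 1))
print(partial, math.pi ** 2 / 6)  # the two differ by about 1/N = 1e-5
```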
19th century
1807 - Joseph Fourier announces his discoveries about the trigonometric decomposition of functions,
1811 - Carl Friedrich Gauss discusses the meaning of integrals with complex limits and briefly examines the dependence of such integrals on the chosen path of integration,
1815 - SimΓ©on Denis Poisson carries out integrations along paths in the complex plane,
1817 - Bernard Bolzano presents the intermediate value theorem β a continuous function which is negative at one point and positive at another point must be zero for at least one point in between,
1822 - Augustin-Louis Cauchy presents the Cauchy integral theorem for integration around the boundary of a rectangle in the complex plane,
1825 - Augustin-Louis Cauchy presents the Cauchy integral theorem for general integration pathsβhe assumes the function being integrated has a continuous derivative, and he introduces the theory of residues in complex analysis,
1825 - André-Marie Ampère discovers Stokes' theorem,
1828 - George Green introduces Green's theorem,
1831 - Mikhail Vasilievich Ostrogradsky rediscovers and gives the first proof of the divergence theorem earlier described by Lagrange, Gauss and Green,
1841 - Karl Weierstrass discovers but does not publish the Laurent expansion theorem,
1843 - Pierre-Alphonse Laurent discovers and presents the Laurent expansion theorem,
1850 - Victor Alexandre Puiseux distinguishes between poles and branch points and introduces the concept of essential singular points,
1850 - George Gabriel Stokes rediscovers and proves Stokes' theorem,
1861 - Karl Weierstrass starts to use the language of epsilons and deltas,
1873 - Georg Frobenius presents his method for finding series solutions to linear differential equations with regular singular points,
20th century
1908 - Josip Plemelj solves the Riemann problem about the existence of a differential equation with a given monodromy group and uses the Sokhotski–Plemelj formulae,
1966 - Abraham Robinson presents non-standard analysis.
1985 - Louis de Branges de Bourcia proves the Bieberbach conjecture,
See also
References
Calculus and mathematical analysis
Personal environmental impact accounting

Personal environmental impact accounting (PEIA) is a computer software-based methodology developed in 1992 by Don Lotter for quantifying an individual's impact on the environment via analysis of answers to an extensive quantity-based questionnaire that the individual fills out regarding their lifestyle. The questions are arranged in six areas: home energy and water, transportation, consumerism, waste, advocacy, and demographics.
Conception
Lotter, at the time a graduate student in ecology at the University of California, Davis, developed the PEIA methodology while teaching a course on the History of Western Consciousness in the UC Davis Experimental College. He realized that, while individuals in contemporary Western society generally have an enormous environmental impact, most were unaware of it, and no method existed for its quantification or assessment.
Development
The first software version of the PEIA methodology was the DOS-based EnviroAccount software, written in QuickBasic and completed in 1992. The program asked users 115 questions, then provided a score to indicate the user's personal environmental impact.
Lotter later created EarthAware, released in 1996, which built on EnviroAccount and ran on Windows 3.1. EarthAware provided internet links for users to learn more about their environmental impact. After the test, users could print out their test results and areas for improvement. They would also receive a label ranging from "Eco-Titan" for the most environmentally friendly to "Eco-Tyrannosaurus Rex" for those "bound for extinction" doing the most harm to the planet.
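The behaviour described (quantity-based answers summed into a score, then mapped to a label) can be sketched schematically. Everything below except the six area names and the two extreme label names is invented for illustration; no question weights or thresholds from the real software are known here:

```python
# Toy sketch of a questionnaire-based personal environmental impact score.
# Area names come from the article; weights and thresholds are hypothetical.
AREAS = ["home energy and water", "transportation", "consumerism",
         "waste", "advocacy", "demographics"]

def impact_score(answers):
    """Sum quantity-based answers (higher = more impact) over all areas."""
    return sum(sum(values) for values in answers.values())

def label(score):
    """Map a score to a label; only the two extremes are real EarthAware names."""
    if score < 100:
        return "Eco-Titan"
    if score < 300:
        return "average"  # hypothetical middle band
    return "Eco-Tyrannosaurus Rex"

answers = {area: [10, 20] for area in AREAS}  # six areas, two toy answers each
score = impact_score(answers)
print(score, label(score))  # 180 average
```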
Lotter also authored a book on the topic, EarthScore: Your Personal Environmental Audit and Guide.
See also
PEIA is similar in concept to the ecological footprint.
Notes and references
Environmental science software
Accounting
Types of accounting
International Facility for Food Irradiation Technology

The International Facility for Food Irradiation Technology (IFFIT) was a research and training centre at the Institute of Atomic Research in Agriculture in Wageningen, Netherlands, sponsored by the Food and Agriculture Organization (FAO) of the United Nations, the International Atomic Energy Agency (IAEA) and the Dutch Ministry of Agriculture and Fisheries.
Aims
The organisation's aim was to address food loss and food safety in developing countries by speeding up the practical introduction of the food irradiation process. They achieved this by training initiatives, research and feasibility studies.
It was founded in 1978 and was operational until 1990, and during those twelve years over four hundred key personnel from over fifty countries were trained in aspects of food irradiation, making a significant contribution to the development and use of the radiation process. The Facility also co-ordinated research into the technology, economics and implementation of food irradiation, assisted in the assessment of the feasibility of using radiation to preserve foodstuffs, and evaluated trial shipments of irradiated material.
Facilities
The Facility had a pilot plant with a cobalt-60 source whose activity was , which was stored underwater. Drums or boxes containing products were placed on rotating tables or conveyor belts, and irradiation took place by raising the source out of the pool.
Details
During IFFIT's first five years of operation, 109 scientists from 40 countries attended six training courses, five of them being general training courses on food irradiation and the sixth being a specialised course on public health aspects. IFFIT also evaluated shipments of irradiated mangoes, spices, avocado, shrimp, onions and garlic, and produced 46 reports. The publications are available on WorldCat.
One trainee noted that Professor D. A. A. Mossel (1918β2004) assisted with the training courses with what he described as "remarkably suggestive lectures and his phenomenal foreign language abilities". From 1988 onwards, Ari Brynjolfsson was director of IFFIT.
References
External links
List of publications produced by the International Facility for Food Irradiation Technology
Food preservation
International organizations based in Europe
Radiation
Wageningen
History of agriculture in the Netherlands
Small nucleolar RNA R16

In molecular biology, Small nucleolar RNA R16 is a non-coding RNA (ncRNA) molecule identified in plants which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA.
snoRNA R16 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs.
snoR16 was identified in Arabidopsis thaliana. snoRNA R16 is related to another Arabidopsis snoRNA called R40.
References
External links
Small nuclear RNA
Archaeosortase

An archaeosortase is a protein that occurs in the cell membranes of some archaea. Archaeosortases recognize and remove carboxyl-terminal protein sorting signals about 25 amino acids long from secreted proteins. A genome that encodes one archaeosortase may encode over fifty target proteins. The best characterized archaeosortase target is the Haloferax volcanii S-layer glycoprotein, an extensively modified protein with O-linked glycosylations, N-linked glycosylations, and a large prenyl-derived lipid modification toward the C-terminus. Knockout of the archaeosortase A (artA) gene, or permutation of the motif Pro-Gly-Phe (PGF) to Pro-Phe-Gly in the S-layer glycoprotein, blocks attachment of the lipid moiety as well as blocking removal of the PGF-CTERM protein-sorting domain. Thus archaeosortase appears to be a transpeptidase, like sortase, rather than a simple protease.
Archaeosortases are related to exosortases, their uncharacterized counterparts in Gram-negative bacteria. The names of both families of proteins reflect roles analogous to sortases in Gram-positive bacteria, with which they share no sequence homology. The sequences of archaeosortases and exosortases consist mostly of hydrophobic transmembrane helices, which sortases lack. Archaeosortases fall into a number of distinct subtypes, each responsible for recognizing sorting signals with a different signature motif. Archaeosortase A (ArtA) recognizes the PGF-CTERM signal, ArtB recognizes VPXXXP-CTERM, ArtC recognizes PEF-CTERM, and so on; one archaeal genome may encode two different archaeosortase systems.
Invariant residues shared by all archaeosortases and exosortases include a Cys and an Arg. Replacement of either destroys catalytic activity, suggesting convergent evolution of the active site with the sortases.
In the archaeal model species Haloferax volcanii, archaeosortase A belongs to a fairly large collection of identified membrane-associated proteases, but apparently also to the smaller set of intramembrane cleaving proteases, along with the rhomboid protease RhoII, and in contrast to bacterial sortases.
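The recognition described above (a signature motif inside a roughly 25-residue C-terminal signal) can be mimicked with a toy sequence scan. The 25-residue window and the motif placement below are loose illustrations of the PGF-CTERM description, not a validated predictor:

```python
import re

SIGNAL_LEN = 25                 # approximate length of the sorting signal
PGF_MOTIF = re.compile(r"PGF")  # signature tripeptide of the ArtA signal

def has_pgf_cterm(sequence, signal_len=SIGNAL_LEN):
    """True if the PGF motif occurs within the C-terminal signal_len residues."""
    return bool(PGF_MOTIF.search(sequence[-signal_len:]))

# A made-up substrate whose final 25 residues begin with PGF:
toy = "M" + "A" * 100 + "PGF" + "LIVGAVLIAV" + "KRRK" + "AAAAAAAA"
print(has_pgf_cterm(toy))             # True
print(has_pgf_cterm("M" + "A" * 50))  # False
```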
References
Membrane proteins
Proteases
Warwick Collins

Warwick Collins (14 December 1948 – 10 February 2013) was a British novelist, screenwriter, yacht designer, and evolutionary theorist.
Collins was born in Johannesburg to English-speaking parents. His father, Robin Collins, was a novelist who wrote under the nom-de-plume Robin Cranford. Robin Collins's novels were written from a liberal perspective and one of them, My City Fears Tomorrow, was banned by the South African apartheid regime. When Warwick Collins was eleven, his family moved to England, and Collins entered The King's School, Canterbury. He continued his education at the University of Sussex, where he read Biology. He lived for many years in the Hampshire town of Lymington where he set two of his novels.
His early poetry was featured in Encounter between 1968 and 1971.
A Silent Gene Theory of Evolution
Collins studied biology at The University of Sussex, where his tutor was the leading theoretical biologist John Maynard Smith. In 1975 Collins voiced to Maynard Smith the view that natural selection could not drive evolution because it always acted to reduce variation in favour of an optimum type for any environment, whereas the central story of evolution was that of increasing variation and complexity. Collins quoted Charles Darwin in The Origin of Species ("... unless profitable variations do occur, natural selection can do nothing."), and argued that if variation must always occur before natural selection can act, then variation, and not natural selection, drives evolution. He asked Maynard Smith whether he could search for a "strong" theory of variation. Maynard Smith warned Collins that he could not support his efforts to pursue a rival theory to the theory that natural selection drives evolution. Collins replied that he thought the object of science was to question and examine everything, including hallowed theories such as the theory of natural selection. Maynard Smith asserted that, on the contrary, the strength of science was its capacity to agree on certain principles, and act collectively to pursue agreed aims. This difference of view with his tutor made Collins give up his scientific career and pursue other interests instead.
Other careers
After leaving university, Collins became a yacht designer and invented and patented the tandem keel, which was conceived to create high performance at low draft, but which also remains one of the radical keels in the America's Cup. He continued his interest in yacht design with an innovation in hull design called the Universal Hull. This fused together two classic hull types (the long, thin, easily driven hull and the beamy commodious hull) in a form which yielded the chief virtues of both types of hull. The two hulls are joined above the waterline by a ledge which also acts as a spray ledge. The resulting shape is easily driven because of the long, thin underwater shape but enjoys the accommodation space (above the waterline) of a beamy hull.
In the 1990s Collins turned to fiction, publishing three sailing novels and then a series of more wide-ranging novels, including two (The Rationalist and The Marriage of Souls) which are set in 18th century Lymington. He published ten novels in all.
Collins's political views were liberal and libertarian, but (in 1979) he was asked by Keith Joseph to join a Conservative party think tank chaired by John Hoskyns (who became Chief Political Adviser to Margaret Thatcher) to work on issues such as privatisation. Collins, though left of centre politically, always believed, in common with "classical liberals" such as Gladstone, that the free market is a superior means of distributing wealth than the state.
Collins's political views manifested themselves in his novel Gents (1996) which has recently been republished by The Friday Project, and was reviewed as an all-time classic in the Times (8 September 2007). Gents, which describes the lives of three West Indian immigrants who run a public urinal in London, is considered to be a leading fiction on tolerance. Collins claimed it was stimulated in part by his memories of apartheid when he lived as a child in South Africa.
Collins's other fictions include the somewhat luridly entitled Fuckwoman, a spoof on the superhero genre which details the adventures of a feminist vigilante who hunts down men who commit crimes against women. Set in Los Angeles, it also satirises the movie industry, contrasting Hollywood's emphasis on the image over reality. It has been published in French, German and Italian translations and recently in English as F-Woman.
His last novel was The Sonnets, a fictional account of William Shakespeare's life from 1592 to 1594, when the London theatres were closed by threat of plague, during which time many scholars believe that the main body of Shakespeare's sonnets were written.
Warwick Collins maintained an occasional blog at "www.publicpoems.com".
Publications
Fiction
Challenge (1990) (novel about the America's Cup, set in 2000)
New World (1991) (sequel to Challenge)
Death of an Angel (1992) (sequel, set in 2003)
The Rationalist (1993) (set in 18th century England)
Computer One (1993) (science fiction)
Gents (1997, republished in 2007 by The Friday Project)
The Marriage of Souls (1999) (Sequel to the Rationalist)
Fuckwoman (published in French and German in 2002)
The Sonnets (Warwick Collins) (2008)
Non-fiction
A Silent Gene Theory of Evolution (2009)
References
Udo Taubitz: Rezension von Gents, Falter No. 3/2001 (17 January 2001), p. 66 (in German)
A short biography (in French)
External links
Publishing the short novels of Warwick Collins
A Silent Gene Theory Of Evolution
Public Poems β Warwick's blog
Warwick Collins: Lock Up Your Laptops, Prospect (December 1997).
1948 births
2013 deaths
20th-century British novelists
21st-century British novelists
20th-century British male writers
21st-century British male writers
British male novelists
British male screenwriters
Non-Darwinian evolution
Writers from Johannesburg
Candelilla wax

Candelilla wax is a wax derived from the leaves of the small candelilla shrub native to northern Mexico and the southwestern United States, Euphorbia antisyphilitica, from the family Euphorbiaceae. It is yellowish-brown, hard, brittle, aromatic, and opaque to translucent.
Composition and production
With a melting point of , candelilla wax consists mainly of hydrocarbons (about 50%, chains with 29–33 carbons), esters of higher molecular weight (20–29%), free acids (7–9%), and resins (12–14%, mainly triterpenoid esters). The high hydrocarbon content distinguishes this wax from carnauba wax. It is insoluble in water, but soluble in many organic solvents such as acetone, chloroform, benzene, and turpentine.
The wax is obtained by boiling the leaves and stems with dilute sulfuric acid, and the resulting "cerote" is skimmed from the surface and further processed. In this way, about 900 tons are produced annually.
Uses
It is mostly used mixed with other waxes to harden them without raising their melting point. As a food additive, candelilla wax has the E number E 902 and is used as a glazing agent. It also finds use in the cosmetic industry, as a component of lip balms and lotion bars. One of its major uses is as a binder for chewing gums.
Candelilla wax can be used as a substitute for carnauba wax and beeswax. It is also used for making varnish.
References
External links
Candelilla wax data sheet - from the UN Food and Agriculture Organization
Candelilla Institute
Wax, Men, and Money: Candelilla Wax Camps along the Rio Grande
Visual arts materials
Food additives
Painting materials
Waxes
E-number additives | Candelilla wax | Physics | 396 |
18,233,949 | https://en.wikipedia.org/wiki/List%20of%20tree%20genera | The major tree genera are listed below by taxonomic family.
Flowering plants (Magnoliophyta; angiosperms)
For classification of flowering plants, see APG II system.
Eudicots (together with magnoliids they are called broadleaf or hardwood trees)
About 210 eudicot families include trees.
Adoxaceae (Moschatel family)
Sambucus, Elderberry
Viburnum, Viburnum
Altingiaceae (Sweetgum family)
Liquidambar, Sweetgum
Anacardiaceae (Cashew family)
Anacardium, Cashew etc.
Mangifera, Mango
Pistacia, Pistachio etc.
Rhus, Sumac
Toxicodendron, Lacquer tree etc.
Apocynaceae (Dogbane family)
Pachypodium
Aquifoliaceae (Holly family)
Ilex, Holly
Araliaceae (Ivy family)
Harmsiopanax
Kalopanax septemlobus, Kalopanax
Schefflera, Schefflera
Betulaceae (Birch family)
Alnus, Alder
Betula, Birch
Carpinus, Hornbeam
Corylus, Hazel
Bignoniaceae (Trumpet Creeper family)
Catalpa, Catalpa
Jacaranda
Tabebuia
Cactaceae (Cactus family)
Carnegiea gigantea, Saguaro
Cannabaceae (Cannabis family)
Celtis, Hackberry
Cornaceae (Dogwood family)
Cornus, Dogwood
Family Dipterocarpaceae
Dipterocarpus, Garjan
Shorea, Sal etc.
Ebenaceae (Persimmon family)
Diospyros, Persimmon
Ericaceae (Heath family)
Arbutus, Arbutus
Eucommiaceae (Eucommia family)
Eucommia ulmoides, Eucommia
Fabaceae (Pea family)
Acacia, Acacia
Bauhinia Orchid tree etc.
Caesalpinia, Brazilwood etc.
Gleditsia, Honey locust etc.
Laburnum, Laburnum
Robinia, Black locust etc.
Fagaceae (Beech family)
Castanea, Chestnut
Fagus, Beech
Lithocarpus, Tanoak etc.
Quercus, Oak
Fouquieriaceae (Boojum family)
Fouquieria, Boojum etc.
Hamamelidaceae (Witch-hazel family)
Parrotia persica, Persian Ironwood
Juglandaceae (Walnut family)
Carya, Hickory
Juglans, Walnut
Pterocarya, Wingnut
Lecythidaceae (Paradise nut family)
Bertholletia excelsa, Brazil Nut
Lythraceae (Loosestrife family)
Lagerstroemia, Crape-myrtle
Malvaceae (Mallow family; including Tiliaceae, Sterculiaceae and Bombacaceae)
Adansonia, Baobab
Bombax, Silk-cotton tree
Brachychiton, Bottletrees
Ceiba, Kapok etc.
Durio, Durian
Ochroma pyramidale, Balsa
Theobroma, Cacao etc.
Tilia, Linden (Basswood, Lime)
Meliaceae (Mahogany family)
Azadirachta, Neem etc.
Melia, Bead tree etc.
Swietenia, Mahogany
Moraceae (Mulberry family)
Ficus, Fig
Morus, Mulberry
Myrtaceae (Myrtle family)
Eucalyptus, Eucalypt
Eugenia, Stopper etc.
Myrtus, Myrtle
Psidium, Guava
Nothofagaceae (Southern Beech family)
Nothofagus, Southern beech
Nyssaceae (Tupelo family; sometimes included in Cornaceae)
Davidia involucrata, Dove tree
Nyssa, Tupelo
Oleaceae (Olive family)
Fraxinus, Ash
Olea, Olive etc.
Paulowniaceae (Paulownia family)
Paulownia, Foxglove Tree
Platanaceae (Plane family)
Platanus, Plane
Rhizophoraceae (Mangrove family)
Rhizophora, Red mangrove etc.
Rosaceae (Rose family)
Crataegus, Hawthorn
Malus, Apple
Prunus, Almond, Peach, Apricot, Plums, Cherries etc.
Pyrus, Pear
Sorbus, Rowans, Whitebeams etc.
Rubiaceae (Bedstraw family)
Coffea, Coffee
Rutaceae (Rue family)
Citrus, Citrus
Phellodendron, Cork-tree
Tetradium, Euodia
Salicaceae (Willow family)
Populus, Poplars and Aspens
Salix, Willow
Sapindaceae (including Aceraceae, Hippocastanaceae) (Soapberry family)
Acer, Maple
Aesculus, Buckeye, Horse-chestnut
Koelreuteria, Golden rain tree
Litchi sinensis, Lychee
Ungnadia speciosa, Mexican Buckeye
Sapotaceae (Sapodilla family)
Argania spinosa, Argan
Palaquium, Gutta-percha
Sideroxylon, Tambalacoque ("dodo tree") etc.
Family Simaroubaceae
Ailanthus, Tree of heaven
Theaceae (Camellia family)
Gordonia, Gordonia
Stewartia, Stewartia
Thymelaeaceae (Thymelaea family)
Gonystylus, Ramin
Ulmaceae (Elm family)
Ulmus, Elm
Zelkova, Zelkova
Monocotyledons (Liliopsida)
About 10 Monocotyledon families include trees.
Asparagaceae (Asparagus family)
Cordyline, Cabbage tree etc.
Dracaena, Dragon tree
Yucca, Joshua tree etc.
Arecaceae (Palmae) (Palm family)
Areca, Areca
Cocos nucifera, Coconut
Phoenix, Date Palm etc.
Trachycarpus, Chusan Palm etc.
Poaceae (grass family)
Bamboos, Poaceae subfamily Bambusoideae, around 92 genera
Note that banana 'trees' are not actually trees; they are not woody nor is the stalk perennial.
Magnoliids (together with eudicots they are called broadleaf or hardwood trees)
17 magnoliid families include trees.
Annonaceae (Custard apple family)
Annona, Cherimoya, Custard apple, Soursop etc.
Asimina, American Pawpaw
Lauraceae (Laurel family)
Cinnamomum, Cinnamon etc.
Laurus, Bay Laurel etc.
Persea, Avocado etc.
Sassafras, Sassafras
Magnoliaceae (Magnolia family)
Liriodendron, Tulip tree
Magnolia, Magnolia
Myristicaceae (Nutmeg family)
Myristica, Nutmeg
Conifers (Pinophyta; softwood trees)
7 families, all of them include trees.
Araucariaceae (Araucaria family)
Agathis, Kauri
Araucaria, Araucaria
Wollemia nobilis, Wollemia
Cupressaceae (Cypress family)
Chamaecyparis
Cryptomeria japonica, Sugi
Cupressus, Cypress
Fitzroya cupressoides, Alerce or Patagonian cypress
Juniperus, Juniper
Metasequoia glyptostroboides, Dawn Redwood
Sequoia sempervirens, Coast Redwood
Sequoiadendron giganteum, Giant Sequoia
Taxodium, Bald Cypress
Thuja, Western Redcedar etc.
Pinaceae (Pine family)
Abies, Fir
Cedrus, Cedar
Larix, Larch
Picea, Spruce
Pinus, Pine
Pseudotsuga, Douglas-fir
Tsuga, Hemlock
Podocarpaceae (Yellowwood family)
Afrocarpus, African Yellowwood etc.
Dacrycarpus, Kahikatea etc.
Dacrydium, Rimu etc.
Podocarpus, Totara etc.
Prumnopitys, Miro etc.
Family Sciadopityaceae
Sciadopitys verticillata, Kusamaki
Taxaceae (Yew family)
Taxus, Yew
Ginkgos (Ginkgophyta)
Only one species.
Ginkgoaceae (Ginkgo family)
Ginkgo biloba, Ginkgo
Cycads (Cycadophyta)
2 families include trees.
Cycadaceae (Cycad family)
Cycas, Ngathu cycad etc.
Zamiaceae (Zamia family)
Lepidozamia, Wunu cycad etc.
Ferns (Pteridophyta)
Cyatheaceae
Cyathea
Dicksoniaceae
Dicksonia
Fossil trees
Wattieza, the earliest known tree
See also
List of trees and shrubs by taxonomic family
List of Clusiaceae genera
References
Genera
Trees
Trees, genera
. | List of tree genera | Biology | 1,796 |
77,037,673 | https://en.wikipedia.org/wiki/Energy%20signature | In mechanical engineering, energy signatures (also called change-point regression models) relate energy demand of buildings to climatic variables, typically ambient temperature. Also other climatic variables such as heating or cooling degree days are used. In most cases, heating or cooling building energy demand is analysed through energy signatures, but also hot water or electricity demand is considered.
Energy signatures make a simplified assumption of a linear relationship between a building's energy demand and temperature. This assumption allows for balancing accuracy with computation time, as the estimation of energy demand through energy signatures is considerably faster than using building performance simulation software. A crucial advantage of applying energy signatures is that no detailed information on the geometrical, construction, and operational characteristics of buildings needs to be available.
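The linear, temperature-dependent relationship described above can be illustrated with a minimal change-point fit. The model form E = base + slope·max(0, Tb − T), the synthetic data, the grid search over the balance-point temperature Tb, and the function name are illustrative assumptions, not a method prescribed by the text:

```python
import numpy as np

def fit_energy_signature(temps, demand, tb_grid):
    """Fit E = base + slope * max(0, Tb - T) by grid search over the
    balance-point temperature Tb, with least squares for base and slope."""
    best = None
    for tb in tb_grid:
        hdd = np.maximum(0.0, tb - temps)              # heating degree term
        X = np.column_stack([np.ones_like(hdd), hdd])
        coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
        sse = float(np.sum((X @ coef - demand) ** 2))
        if best is None or sse < best[0]:
            best = (sse, tb, coef)
    _, tb, (base, slope) = best
    return tb, base, slope

# synthetic daily data: balance point 15 C, base load 2 kWh, slope 0.8 kWh/K
rng = np.random.default_rng(0)
T = rng.uniform(-5.0, 25.0, 200)
E = 2.0 + 0.8 * np.maximum(0.0, 15.0 - T) + rng.normal(0.0, 0.05, 200)

tb, base, slope = fit_energy_signature(T, E, np.arange(5.0, 20.5, 0.5))
```

Because the estimation reduces to a handful of small least-squares problems, it is far cheaper than full building performance simulation, which is the trade-off the text describes.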
References
Energy
Building
Heating, ventilation, and air conditioning
Temperature | Energy signature | Physics,Chemistry,Engineering | 161 |
18,475,356 | https://en.wikipedia.org/wiki/Zorubicin | Zorubicin (INN) is a benzoylhydrazone derivative of the anthracycline antineoplastic antibiotic daunorubicin. Zorubicin intercalates into DNA; it as well interacts with topoisomerase II and inhibits DNA polymerases and therefore is used to treat cancer.
References
Anthracyclines
Topoisomerase inhibitors
Hydrazones
Hydroxyarenes
Phenol ethers | Zorubicin | Chemistry | 98 |
59,497,796 | https://en.wikipedia.org/wiki/Zero%20point%20%28photometry%29 | In astronomy, the zero point in a photometric system is defined as the magnitude of an object that produces 1 count per second on the detector. The zero point is used to calibrate a system to the standard magnitude system, as the flux detected from stars will vary from detector to detector. Traditionally, Vega is used as the calibration star for the zero point magnitude in specific pass bands (U, B, and V), although often, an average of multiple stars is used for higher accuracy. It is not often practical to find Vega in the sky to calibrate the detector, so for general purposes, any star may be used in the sky that has a known apparent magnitude.
General formula
The equation for the magnitude of an object in a given band is

m = −2.5 log10 ( ∫ f(λ) S(λ) dλ ) + C

where m is the magnitude of the object, f(λ) is the flux at a specific wavelength, and S(λ) is the sensitivity function of a given instrument. Under ideal conditions, the sensitivity is 1 inside a pass band and 0 outside a pass band. The constant C is determined from the zero point magnitude using the above equation, by setting the magnitude equal to 0.
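In counts-based terms, the definition above (the zero point is the magnitude of a source producing 1 count per second on the detector) gives a simple calibration recipe. This sketch assumes a hypothetical standard star of known magnitude; the function names and numbers are illustrative:

```python
import math

def zero_point(ref_mag, ref_counts_per_s):
    """ZP = magnitude of a source producing 1 count/s on the detector,
    derived from a reference star of known magnitude and measured count rate."""
    return ref_mag + 2.5 * math.log10(ref_counts_per_s)

def magnitude(counts_per_s, zp):
    """Apparent magnitude of a source from its count rate and the zero point."""
    return zp - 2.5 * math.log10(counts_per_s)

# calibrate on a (hypothetical) standard star: magnitude 12.0, 5000 counts/s
zp = zero_point(12.0, 5000.0)

# a target producing 10x fewer counts is 2.5 magnitudes fainter
m = magnitude(500.0, zp)
```

By construction, a source producing exactly 1 count/s comes out at the zero-point magnitude itself, matching the definition in the lead.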
Vega as calibration
Under most circumstances, Vega is used as the zero point, but in reality, an elaborate "bootstrap" system is used to calibrate a detector. The calibration typically takes place through extensive observational photometry as well as the use of theoretical atmospheric models.
Bolometric magnitude zero point
While the zero point is defined to be that of Vega for passband filters, there is no defined zero point for bolometric magnitude, and traditionally, the calibrating star has been the Sun. However, the IAU has recently defined the absolute bolometric magnitude and apparent bolometric magnitude zero points to be 3.0128×10²⁸ W and 2.51802×10⁻⁸ W/m², respectively.
See also
Luminosity
Bolometric correction
Absolute magnitude
References
Photometric systems
Observational astronomy | Zero point (photometry) | Astronomy | 403 |
60,826,815 | https://en.wikipedia.org/wiki/Silanes | In organosilicon chemistry, silanes are a diverse class of charge-neutral organic compounds with the general formula . The R substituents can be any combination of organic or inorganic groups. Most silanes contain Si-C bonds, and are discussed under organosilicon compounds. Some contain Si-H bonds and are discussed under hydrosilanes.
Examples
Silane , the parent.
Binary silicon-hydrogen compounds (which are sometimes called silanes also) includes silane itself but also compounds with Si-Si bonds including disilane and longer chains.
Silanes with one, two, three, or four Si-H bonds are called hydrosilanes. Silane is again the parent member. Examples: triethylsilane () and triethoxysilane ().
Polysilanes are organosilicon compounds with the formula . They feature Si-Si bonds. Attracting more interest are the organic derivatives such as polydimethylsilane . Dodecamethylcyclohexasilane is an oligomer of such materials. Formally speaking, polysilanes also include compounds of the type , but these are less studied.
Carbosilanes are polymeric silanes with alternating Si-C bonds.
Chlorosilanes have Si-Cl bonds. The dominant examples come from the Direct process, i.e., (CH3)4-xSiClx. Another important member is trichlorosilane ().
Organosilanes are a class of charge-neutral organosilicon compounds. Example: tetramethylsilane ()
By tradition, compounds with Si-O-Si bonds are usually not referred to as silanes. Instead, they are called siloxanes. One example is hexamethyldisiloxane, .
Applications
See compound-specific applications. Commonly:
Polysilicone production
PEX crosslinking agent
See also
Silane quats
References
Silanes
Trimethylsilyl compounds
Carbosilanes | Silanes | Chemistry | 421 |
25,161,773 | https://en.wikipedia.org/wiki/HD%2044219 | HD 44219 is a solar-type star with an exoplanetary companion in the equatorial constellation of Monoceros. It has an apparent visual magnitude of 7.69, making it an 8th magnitude star that is too faint to be readily visible to the naked eye. The system is located at a distance of 173Β light-years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of β12Β km/s.
Characteristics
This is an ordinary G-type main-sequence star with a stellar classification of G3V. L. Casagrande and associates in 2011 estimated the age of the star as 5.4 billion years, while A. Bonfanti and colleagues listed a much greater age of nearly 10 billion years in 2015. It has a near solar metallicity and is spinning with a projected rotational velocity of 1.5 km/s. The star has about the same mass as the Sun but is 37% larger in radius. It is radiating 1.83 times the luminosity of the Sun from its photosphere at an effective temperature of 5,749 K.
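As a side calculation from the figures quoted above (apparent magnitude 7.69, distance 173 light-years), the distance modulus gives an absolute magnitude near +4, consistent with a G-type dwarf slightly more luminous than the Sun. The derived value is an illustration, not a figure from the source:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: M = m - 5*log10(d / 10 pc), ignoring extinction."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)

LY_PER_PC = 3.26156                 # light-years per parsec
d_pc = 173.0 / LY_PER_PC            # ~53 pc
M = absolute_magnitude(7.69, d_pc)  # ~ +4.1
```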
Planetary system
In 2009, a Jovian planet was found in a highly eccentric orbit around the star by the HARPS planet search program. There is some evidence of an additional, longer-period companion.
See also
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Monoceros
Durchmusterung objects
044219
030114 | HD 44219 | Astronomy | 306 |
2,630,160 | https://en.wikipedia.org/wiki/Visual%20angle | Visual angle is the angle a viewed object subtends at the eye, usually stated in degrees of arc.
It also is called the object's angular size.
The diagram on the right shows an observer's eye looking at a frontal extent (the vertical arrow) that has a linear size , located in the distance from point .
For present purposes, point can represent the eye's nodal points at about the center of the lens, and also represent the center of the eye's entrance pupil that is only a few millimeters in front of the lens.
The three lines from object endpoint heading toward the eye indicate the bundle of light rays that pass through the cornea, pupil and lens to form an optical image of endpoint on the retina at point .
The central line of the bundle represents the chief ray.
The same holds for object point and its retinal image at .
The visual angle is the angle between the chief rays of and .
Measuring and computing
The visual angle can be measured directly using a theodolite placed at point .
Or, it can be calculated (in radians) using the exact formula V = 2·arctan(S/(2D)), where S is the object's frontal linear size, D is its distance from the eye, and V is the visual angle.
However, for visual angles smaller than about 10 degrees, the simpler small-angle formula V ≈ S/D provides very close approximations.
The retinal image and visual angle
As the above sketch shows, a real image of the object is formed on the retina between points and . (See visual system). For small angles, the size of this retinal image is approximately R = n·V, where V is the visual angle in radians and n is the distance from the nodal points to the retina, about 17 mm.
Examples
If one looks at a one-centimeter object at a distance of one meter and a two-centimeter object at a distance of two meters, both subtend the same visual angle of about 0.01 rad or 0.57°. Thus they have the same retinal image size, about 0.17 mm.
That is just a bit larger than the retinal image size for the moon, which is about , because, with moon's mean diameter , and earth to moon mean distance averaging (), .
Also, for some easy observations, if one holds one's index finger at arm's length, the width of the index fingernail subtends approximately one degree, and the width of the thumb at the first joint subtends approximately two degrees.
Therefore, if one is interested in the performance of the eye or the first processing steps in the visual cortex, it does not make sense to refer to the absolute size of a viewed object (its linear size ). What matters is the visual angle which determines the size of the retinal image.
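The measuring-and-computing rules above can be sketched numerically. The function names are illustrative, and the 17 mm nodal distance is the approximate value quoted in the text; the two objects of the one-centimeter/one-meter example come out with identical visual angles and retinal image sizes:

```python
import math

def visual_angle_rad(size, distance):
    """Exact visual angle subtended by a frontal extent of linear size S
    at distance D: V = 2*arctan(S / 2D)."""
    return 2.0 * math.atan(size / (2.0 * distance))

def retinal_image_mm(angle_rad, nodal_distance_mm=17.0):
    """Small-angle retinal image size R = n * V (n ~ 17 mm)."""
    return nodal_distance_mm * angle_rad

v1 = visual_angle_rad(0.01, 1.0)   # 1 cm object at 1 m
v2 = visual_angle_rad(0.02, 2.0)   # 2 cm object at 2 m: same angle
r = retinal_image_mm(v1)           # ~0.17 mm
```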
Terminological confusions
In astronomy the term apparent size refers to the physical angle or angular diameter.
But in psychophysics and experimental psychology the adjective "apparent" refers to a person's subjective experience.
So, "apparent size" has referred to how large an object looks, also often called its "perceived size".
Additional confusion has occurred because there are two qualitatively different "size" experiences for a viewed object. One is the perceived visual angle (or apparent visual angle) which is the subjective correlate of , also called the object's perceived or apparent angular size.
The perceived visual angle is best defined as the difference between the perceived directions of the object's endpoints from oneself.
The other "size" experience is the object's perceived linear size (or apparent linear size) which is the subjective correlate of , the object's physical width or height or diameter.
Widespread use of the ambiguous terms "apparent size" and "perceived size" without specifying the units of measure has caused confusion.
Representation of visual angle in visual cortex
The brain's primary visual cortex (area V1 or Brodmann area 17) contains a spatially isomorphic representation of the retina (see retinotopy). Loosely speaking, it is a distorted "map" of the retina. Accordingly, the size of a given retinal image determines the extent of the neural activity pattern eventually generated in area V1 by the associated retinal activity pattern. Murray, Boyaci, & Kersten (2006) recently used Functional magnetic resonance imaging (fMRI) to show that an increase in a viewed target's visual angle, which increases the size of the retinal image, also increases the extent of the corresponding neural activity pattern in area V1.
The observers in the experiment carried out by Murray and colleagues viewed a flat picture with two discs that subtended the same visual angle and formed retinal images of the same size, but the perceived angular size of one was about 17% larger than that of the other, due to differences in the background patterns for the disks. It was shown that the areas of activity in V1 related to the disks were of unequal size, despite the fact that the retinal images were the same size. This size difference in area V1 correlated with the 17% illusory difference between the perceived visual angles. This finding has implications for spatial illusions such as the visual angle illusion.
See also
Visual acuity
Visual angle illusion
Notes
References
Baird, J. C. (1970). Psychophysical analysis of visual space. Oxford, London: Pergamon Press.
Joynson, R. B. (1949). The problem of size and distance. Quarterly Journal of Experimental Psychology, 1, 119β135.
McCready, D. (1965). Size-distance perception and accommodation-convergence micropsia: A critique. Vision Research. 5, 189β206.
McCready, D. (1985). On size, distance and visual angle perception. Perception & Psychophysics, 37, 323β334.
Murray, S.O., Boyaci, H, & Kersten, D. (2006) The representation of perceived angular size in human primary visual cortex. Nature Neuroscience, 9, 429β434 (1 March 2006).
McCready, D. The Moon Illusion Explained.
External links
The University at Buffalo's Interactive Visual Acuity Chart for the display of letters or symbols for a specified Snellen line on your computer monitor at exactly the right size (note: you must follow the instructions for calibration).
Vision
Angle | Visual angle | Physics | 1,271 |
54,459,094 | https://en.wikipedia.org/wiki/Mexrenoate%20potassium | Mexrenoate potassium (developmental code name SC-26714) is a synthetic steroidal antimineralocorticoid which was never marketed.
See also
Mexrenoic acid
Mexrenone
References
Abandoned drugs
Antimineralocorticoids
Carboxylic acids
Enones
Potassium compounds
Pregnanes
Spirolactones
Tertiary alcohols | Mexrenoate potassium | Chemistry | 76 |
26,914,385 | https://en.wikipedia.org/wiki/C7H6O6 | The molecular formula C7H6O6 (molar mass: 186.11 g/mol, exact mass: 186.01643790 u) may refer to:
3-Carboxy-cis,cis-muconic acid
3-Fumarylpyruvic acid
3-Maleylpyruvic acid
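The molar and exact masses quoted above can be reproduced from standard atomic weights and principal-isotope masses. This is an illustrative sketch, with values rounded as in common reference tables:

```python
# standard atomic weights and principal-isotope exact masses (rounded)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}
EXACT_MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915}

def mass(formula_counts, table):
    """Sum per-element mass contributions for a formula given as counts."""
    return sum(table[el] * n for el, n in formula_counts.items())

c7h6o6 = {"C": 7, "H": 6, "O": 6}
molar = mass(c7h6o6, ATOMIC_WEIGHT)   # ~186.12 g/mol
exact = mass(c7h6o6, EXACT_MASS)      # ~186.0164 u
```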
Molecular formulas | C7H6O6 | Physics,Chemistry | 86 |
1,681,010 | https://en.wikipedia.org/wiki/Cycle%20graph%20%28algebra%29 | In group theory, a subfield of abstract algebra, a cycle graph of a group is an undirected graph that illustrates the various cycles of that group, given a set of generators for the group. Cycle graphs are particularly useful in visualizing the structure of small finite groups.
A cycle is the set of powers of a given group element a, where an, the n-th power of an element a, is defined as the product of a multiplied by itself n times. The element a is said to generate the cycle. In a finite group, some non-zero power of a must be the group identity, which we denote either as e or 1; the lowest such power is the order of the element a, the number of distinct elements in the cycle that it generates. In a cycle graph, the cycle is represented as a polygon, with its vertices representing the group elements and its edges indicating how they are linked together to form the cycle.
Definition
Each group element is represented by a node in the cycle graph, and enough cycles are represented as polygons in the graph so that every node lies on at least one cycle. All of those polygons pass through the node representing the identity, and some other nodes may also lie on more than one cycle.
Suppose that a group element a generates a cycle of order 6 (has order 6), so that the nodes a, a2, a3, a4, a5, and a6 = e are the vertices of a hexagon in the cycle graph. The element a2 then has order 3; but making the nodes a2, a4, and e be the vertices of a triangle in the graph would add no new information. So, only the primitive cycles need be considered, those that are not subsets of another cycle. Also, the node a5, which also has order 6, generates the same cycle as does a itself; so we have at least two choices for which element to use in generating a cycle --- often more.
To build a cycle graph for a group, we start with a node for each group element. For each primitive cycle, we then choose some element a that generates that cycle, and we connect the node for e to the one for a, a to a2, ..., akβ1 to ak, etc., until returning to e. The result is a cycle graph for the group.
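The construction procedure above can be sketched for small groups given as a set of elements with a binary operation: enumerate each element's cycle, deduplicate, and keep only primitive cycles (those not strictly contained in another). The helper names are illustrative assumptions:

```python
from itertools import product

def cycles(elements, op, identity):
    """All distinct cycles {a, a^2, ..., e}, keyed by their element set."""
    out = {}
    for a in elements:
        cyc, x = [a], a
        while x != identity:
            x = op(x, a)
            cyc.append(x)
        out[frozenset(cyc)] = cyc
    return list(out.values())

def primitive_cycles(cycs):
    """Keep only cycles not strictly contained in a larger cycle."""
    sets = [frozenset(c) for c in cycs]
    return [c for c, s in zip(cycs, sets) if not any(s < t for t in sets)]

# Z6 under addition mod 6: a single primitive 6-cycle (generated by 1 or 5)
prim_z6 = primitive_cycles(cycles(range(6), lambda x, y: (x + y) % 6, 0))

# Klein four-group Z2 x Z2: three primitive 2-cycles through the identity
v4 = list(product(range(2), repeat=2))
op_v4 = lambda x, y: ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
prim_v4 = primitive_cycles(cycles(v4, op_v4, (0, 0)))
```

Deduplicating by element set mirrors the observation above that a cycle has several generators (for the 6-cycle, 1 and 5) that trace out the same polygon.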
When a group element a has order 2 (so that multiplication by a is an involution), the rule above would connect e to a by two edges, one going out and the other coming back. Except when the intent is to emphasize the two edges of such a cycle, it is typically drawn as a single line between the two elements.
Note that this correspondence between groups and graphs is not one-to-one in either direction: Two different groups can have the same cycle graph, and two different graphs can be cycle graphs for a single group. We give examples of each in the non-uniqueness section.
Example and properties
As an example of a group cycle graph, consider the dihedral group Dih4. The multiplication table for this group is shown on the left, and the cycle graph is shown on the right, with e specifying the identity element.
Notice the cycle {e, a, a2, a3} in the multiplication table, with a4 = e. The inverse a−1 = a3 is also a generator of this cycle: (a3)2 = a2, (a3)3 = a, and (a3)4 = e. Similarly, any cycle in any group has at least two generators, and may be traversed in either direction. More generally, the number of generators of a cycle with n elements is given by the Euler φ function of n, and any of these generators may be written as the first node in the cycle (next to the identity e); or more commonly the nodes are left unmarked. Two distinct cycles cannot intersect in a generator.
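The generator count can be checked directly: the number of generators of an n-element cycle equals Euler's totient φ(n), computed here by brute force as a minimal sketch:

```python
from math import gcd

def phi(n):
    """Euler's totient: how many of 1..n are coprime to n, i.e. how many
    elements generate an n-element cycle."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

generators_of_4_cycle = phi(4)   # the order-4 cycle in Dih4: a and a3
```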
Cycles that contain a non-prime number of elements have cyclic subgroups that are not shown in the graph. For the group Dih4 above, we could draw a line between a2 and e since (a2)2 = a4 = e, but since a2 is part of a larger cycle, this is not an edge of the cycle graph.
There can be ambiguity when two cycles share a non-identity element. For example, the 8-element quaternion group has cycle graph shown at right. Each of the elements in the middle row when multiplied by itself gives −1 (where 1 is the identity element). In this case we may use different colors to keep track of the cycles, although symmetry considerations will work as well.
As noted earlier, the two edges of a 2-element cycle are typically represented as a single line.
The inverse of an element is the node symmetric to it in its cycle, with respect to the reflection which fixes the identity.
Non-uniqueness
The cycle graph of a group is not uniquely determined up to graph isomorphism; nor does it uniquely determine the group up to group isomorphism. That is, the graph obtained depends on the set of generators chosen, and two different groups (with chosen sets of generators) can generate the same cycle graph.
A single group can have different cycle graphs
For some groups, choosing different elements to generate the various primitive cycles of that group can lead to different cycle graphs. There is an example of this for the abelian group , which has order 20. We denote an element of that group as a triple of numbers , where and each of and is either 0 or 1. The triple is the identity element. In the drawings below, is shown above and .
This group has three primitive cycles, each of order 10. In the first cycle graph, we choose, as the generators of those three cycles, the nodes , , and . In the second, we generate the third of those cycles --- the blue one --- by starting instead with .
The two resulting graphs are not isomorphic because they have diameters 5 and 4 respectively.
Different groups can have the same cycle graph
Two different (non-isomorphic) groups can have cycle graphs that are isomorphic, where the latter isomorphism ignores the labels on the nodes of the graphs. It follows that the structure of a group is not uniquely determined by its cycle graph.
There is an example of this already for groups of order 16, the two groups being and . The abelian group is the direct product of the cyclic groups of orders 8 and 2. The non-abelian group is that semidirect product of and in which the non-identity element of maps to the multiply-by-5 automorphism of .
In drawing the cycle graphs of those two groups, we take to be generated by elements s and t with
where that latter relation makes abelian. And we take to be generated by elements and with
Here are cycle graphs for those two groups, where we choose to generate the green cycle on the left and to generate that cycle on the right:
In the right-hand graph, the green cycle, after moving from 1 to , moves next to because
History
Cycle graphs were investigated by the number theorist Daniel Shanks in the early 1950s as a tool to study multiplicative groups of residue classes. Shanks first published the idea in the 1962 first edition of his book Solved and Unsolved Problems in Number Theory. In the book, Shanks investigates which groups have isomorphic cycle graphs and when a cycle graph is planar. In the 1978 second edition, Shanks reflects on his research on class groups and the development of the baby-step giant-step method:
Cycle graphs are used as a pedagogical tool in Nathan Carter's 2009 introductory textbook Visual Group Theory.
Graph characteristics of particular group families
Certain group types give typical graphs:
Cyclic groups Zn, of order n, form a single cycle graphed simply as an n-sided polygon with the elements at the vertices:
When n is a prime number, groups of the form (Zn)m will have n-element cycles sharing the identity element:
Dihedral groups Dihn, order 2n consists of an n-element cycle and n 2-element cycles:
Dicyclic groups, Dicn = Q4n, order 4n:
Other direct products:
Symmetric groups β The symmetric group Sn contains, for any group of order n, a subgroup isomorphic to that group. Thus the cycle graph of every group of order n will be found in the cycle graph of Sn.
See example: Subgroups of S4
Extended example: Subgroups of the full octahedral group
The full octahedral group is the direct product of the symmetric group S4 and the cyclic group Z2.
Its order is 48, and it has subgroups of every order that divides 48.
In the examples below nodes that are related to each other are placed next to each other,
so these are not the simplest possible cycle graphs for these groups (like those on the right).
Like all graphs a cycle graph can be represented in different ways to emphasize different properties. The two representations of the cycle graph of S4 are an example of that.
See also
List of small groups
Cayley graph
References
Skiena, S. (1990). Cycles, Stars, and Wheels. Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 144-147).
Pemmaraju, S., & Skiena, S. (2003). Cycles, Stars, and Wheels. Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica (pp. 248-249). Cambridge University Press.
External links
Abstract algebra
Group theory
Application-specific graphs | Cycle graph (algebra) | Mathematics | 1,965 |
53,567,922 | https://en.wikipedia.org/wiki/Industrial%20enzymes | Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals, chemical production, biofuels, food and beverage, and consumer products. Due to advancements in recent years, biocatalysis through isolated enzymes is considered more economical than use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions, and exceptional chiral and positional specificity, things that traditional chemical processes lack. Isolated enzymes are typically used in hydrolytic and isomerization reactions. Whole cells are typically used when a reaction requires a co-factor. Although co-factors may be generated in vitro, it is typically more cost-effective to use metabolically active cells.
Enzymes as a unit of operation
Immobilization
Despite their excellent catalytic capabilities, enzymes and their properties must be improved prior to industrial implementation in many cases. Some aspects of enzymes that must be improved prior to implementation are stability, activity, inhibition by reaction products, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. Immobilization of enzymes greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. Ideal immobilization processes should not use highly toxic reagents in the immobilization technique to ensure stability of the enzymes. After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
Adsorption
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waals forces, ionic interactions, and hydrogen bonding. These forces are weak, and as a result, do not affect the structure of the enzyme. A wide variety of enzyme carriers may be used. Selection of a carrier is dependent upon the surface area, particle size, pore structure, and type of functional group.
Covalent binding
Many binding chemistries may be used to adhere an enzyme to a surface, with varying degrees of success. The most successful covalent binding techniques include binding via glutaraldehyde to amino groups and via N-hydroxysuccinimide esters. These immobilization reactions occur at ambient temperatures under mild conditions, which limits their potential to modify the structure and function of the enzyme.
Affinity
Immobilization using affinity relies on the specificity of an enzyme to couple an affinity ligand to an enzyme to form a covalently bound enzyme-ligand complex. The complex is introduced into a support matrix for which the ligand has high binding affinity, and the enzyme is immobilized through ligand-support interactions.
Entrapment
Immobilization using entrapment relies on trapping enzymes within gels or fibers, using non-covalent interactions. Characteristics that define a successful entrapping material include high surface area, uniform pore distribution, tunable pore size, and high adsorption capacity.
Recovery
Enzymes typically constitute a significant operational cost for industrial processes, and in many cases, must be recovered and reused to ensure economic feasibility of a process. Although some biocatalytic processes operate using organic solvents, the majority of processes occur in aqueous environments, improving the ease of separation. Most biocatalytic processes occur in batch, differentiating them from conventional chemical processes. As a result, typical bioprocesses employ a separation technique after bioconversion. In this case, product accumulation may cause inhibition of enzyme activity. Ongoing research is performed to develop in situ separation techniques, where product is removed from the batch during the conversion process. Enzyme separation may be accomplished through solid-liquid extraction techniques such as centrifugation or filtration, and the product-containing solution is fed downstream for product separation.
Enzymes as a desired product
To industrialize an enzyme, the following upstream and downstream enzyme production processes are considered:
Upstream
Upstream processes are those that contribute to the generation of the enzyme.
Selection of a suitable enzyme
An enzyme must be selected based upon the desired reaction. The selected enzyme defines the required operational properties, such as pH, temperature, activity, and substrate affinity.
Identification and selection of a suitable source for the selected enzyme
The choice of a source of enzymes is an important step in the production of enzymes. It is common to examine the role of enzymes in nature and how they relate to the desired industrial process. Enzymes are most commonly sourced through bacteria, fungi, and yeast. Once the source of the enzyme is selected, genetic modifications may be performed to increase the expression of the gene responsible for producing the enzyme.
Process development
Process development is typically performed after genetic modification of the source organism, and involves the modification of the culture medium and growth conditions. In many cases, process development aims to reduce mRNA hydrolysis and proteolysis.
Large scale production
Scaling up of enzyme production requires optimization of the fermentation process. Most enzymes are produced under aerobic conditions, and as a result, require constant oxygen input, impacting fermenter design. Due to variations in the distribution of dissolved oxygen, as well as temperature, pH, and nutrients, the transport phenomena associated with these parameters must be considered. The highest possible productivity of the fermenter is achieved at maximum transport capacity of the fermenter.
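As a worked illustration (not from the source): the aeration capacity referred to above is commonly summarized by the volumetric oxygen transfer rate, OTR = kLa · (C* − C). The sketch below uses hypothetical parameter values; the function and variable names are assumptions for illustration only.

```python
# Illustrative sketch: oxygen transfer in an aerobic fermenter.
# OTR = kLa * (C_star - C), where kLa is the volumetric mass-transfer
# coefficient (1/h), C_star the oxygen saturation concentration and
# C the current dissolved-oxygen level (both in mmol/L).
def oxygen_transfer_rate(kla_per_h, c_sat_mmol_l, c_mmol_l):
    """Volumetric oxygen transfer rate in mmol O2 / (L*h)."""
    return kla_per_h * (c_sat_mmol_l - c_mmol_l)

# Aeration can sustain the culture only while OTR stays at or above the
# cells' oxygen uptake rate; hypothetical values for illustration:
otr = oxygen_transfer_rate(kla_per_h=120.0, c_sat_mmol_l=0.21, c_mmol_l=0.05)
```

In this simplified picture, the "maximum transport capacity" mentioned above corresponds to the largest achievable kLa · C* for a given fermenter design.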
Downstream
Downstream processes are those that contribute to separation or purification of enzymes.
Removal of insoluble materials and recovery of enzymes from the source
The procedures for enzyme recovery depend on the source organism, and whether enzymes are intracellular or extracellular. Typically, intracellular enzymes require cell lysis and separation of complex biochemical mixtures. Extracellular enzymes are released into the culture medium, and are much simpler to separate. Enzymes must maintain their native conformation to ensure their catalytic capability. Since enzymes are very sensitive to pH, temperature, and ionic strength of the medium, mild isolation conditions must be used.
Concentration and primary purification of enzymes
Depending on the intended use of the enzyme, different levels of purity are required. For example, enzymes used for diagnostic purposes must be separated to a higher purity than bulk industrial enzymes, to prevent contaminating catalytic activity that would produce erroneous results. Enzymes used for therapeutic purposes typically require the most rigorous separation. Most commonly, a combination of chromatography steps is employed for separation.
The purified enzymes are either sold in pure form to other industries or added to consumer goods.
See also
Industrial ecology
Industrial fermentation
Industrial microbiology
References
The Australia Bioinformatics Resource (EMBL-ABR), formerly the Bioinformatics Resource Australia – EMBL (BRAEMBL), was a significant initiative under Australia's associate membership of EMBL.
Since 2019, all activities carried out under EMBL-ABR have rolled over into the Bioplatforms Australia (NCRIS-funded) Australian BioCommons, under new funding agreements and led by Associate Professor Andrew Lonie.
EMBL-ABR aimed to:
Increase Australia's capacity to collect, integrate, analyse, exploit, share and archive the large heterogeneous data sets now part of modern life sciences research
Contribute to the development of and provide training in data, tools and platforms to enable Australia's life science researchers to undertake research in the age of big data
Showcase Australian research and datasets at an international level
Enable engagement in international programs that create, deploy and develop best practice approaches to data management, software tools and methods, computational platforms and bioinformatics services
EMBL-ABR was supported by Bioplatforms Australia and the University of Melbourne. EMBL-ABR Hub was hosted at the Victorian Life Sciences Computation Initiative (VLSCI) at the University of Melbourne.
In July 2016, EMBL-ABR announced an agreement to collaborate with GOBLET to develop training programs for bioinformatics.
References
Bioinformatics organizations
European Molecular Biology Organization
Davallia fejeensis is a species of epiphytic fern in the family Davalliaceae, commonly referred to as rabbit's foot fern. It is best known for its furry, brown and yellow rhizomes, which resemble rabbit's feet.
It is native to the Fiji Islands in Oceania. The plants tolerate temperatures of approximately 60–75 °F (15–24 °C) and cannot survive below 55 °F (13 °C). Their fronds can grow up to 2 feet (61 centimeters) in height.
See also
Phlebodium aureum, sometimes also referred to as "hare-foot fern."
References
Davalliaceae
Ferns of Oceania
Flora of Fiji
Garden plants of Oceania
House plants
Ferns
Plants
Maximilian Maria Kolbe (born Raymund Kolbe; 8 January 1894 – 14 August 1941) was a Polish Catholic priest and Conventual Franciscan friar who volunteered to die in place of a man named Franciszek Gajowniczek in the German death camp of Auschwitz, located in German-occupied Poland during World War II. He had been active in promoting the veneration of the Immaculate Virgin Mary, founding and supervising the monastery of Niepokalanów near Warsaw, operating an amateur-radio station (SP3RN), and founding or running several other organizations and publications.
On 10 October 1982, Pope John Paul II canonized Kolbe and declared him a martyr of charity. The Catholic Church venerates him as the patron saint of amateur radio operators, drug addicts, political prisoners, families, journalists, and prisoners. John Paul II declared him "the patron of our difficult century". His feast day is 14 August, the day of his martyrdom.
Due to Kolbe's efforts to promote consecration and entrustment to Mary, he is known as an "apostle of consecration to Mary".
Early life
Raymund Kolbe was born on 8 January 1894 in Zduńska Wola, in the Kingdom of Poland, which was then part of the Russian Empire. He was the second son of weaver Julius Kolbe and midwife Maria Dąbrowska. His father was an ethnic German, and his mother was Polish. He had four brothers, two of whom died of tuberculosis. Shortly after his birth, his family moved to Pabianice.
Kolbe's life was strongly influenced in 1903, when he was 9, by a vision of the Virgin Mary. He later described this incident:
That night I asked the Mother of God what was to become of me. Then she came to me holding two crowns, one white, the other red. She asked me if I was willing to accept either of these crowns. The white one meant that I should persevere in purity and the red that I should become a martyr. I said that I would accept them both.
Franciscan friar
In 1907, Kolbe and his elder brother Francis joined the Conventual Franciscans. They enrolled at the Conventual Franciscan minor seminary in Lwów later that year. In 1910, Kolbe was allowed to enter the novitiate, where he chose the religious name Maximilian. He professed his first vows in 1911, and final vows in 1914, adopting the additional name of Maria (Mary).
World War I
Kolbe was sent to Rome in 1912, where he attended the Pontifical Gregorian University, earning a doctorate in philosophy in 1915. From 1915 he continued his studies at the Pontifical University of St. Bonaventure, where he earned a doctorate in theology in 1919 or 1922 (sources vary). He promoted consecration and entrustment to Mary.
In the midst of these studies, World War I broke out. Maximilian's father, Julius Kolbe, joined Józef Piłsudski's Polish Legions fighting against the Russians for an independent Poland, still subjugated and still divided among Prussia, Russia, and Austria. Julius Kolbe was caught and hanged as a traitor by the Russians at the relatively young age of 43, a traumatic event for young Maximilian.
During his time as a student, he witnessed vehement demonstrations against Popes Pius X and Benedict XV in Rome during an anniversary celebration by the Freemasons. According to Kolbe:
They placed the black standard of the "Giordano Brunisti" under the windows of the Vatican. On this standard the archangel, Michael, was depicted lying under the feet of the triumphant Lucifer. At the same time, countless pamphlets were distributed to the people in which the Holy Father (i.e., the Pope) was attacked shamefully.
Soon afterward, on 16 October 1917, Kolbe organized the Militia Immaculatae (Army of the Immaculate One), to work for conversion of sinners and enemies of the Catholic Church, specifically the Freemasons, through the intercession of the Virgin Mary. So serious was Kolbe about this goal that he added to the Miraculous Medal prayer:
O Mary, conceived without sin, pray for us who have recourse to thee. And for all those who do not have recourse to thee; especially the Freemasons and all those recommended to thee.
Kolbe wanted the entire Franciscan Order consecrated to the Immaculate by an additional vow. The idea was well received, but faced the hurdles of approval by the hierarchy of the order and the lawyers, so it was never formally adopted during his life and was no longer pursued after his death.
Priesthood
In 1918, Kolbe was ordained a priest. In July 1919, he returned to Poland, which was newly independent. He was active in promoting the veneration of the Immaculate Virgin Mary. He was strongly opposed to leftist β in particular, communist β movements.
From 1919 to 1922, he taught at the Kraków Seminary. Around that time, as well as earlier in Rome, he suffered from tuberculosis, which forced him to take a lengthy leave of absence from his teaching duties. Before antibiotics, tuberculosis was often fatal, with rest and good nutrition the only treatment.
In January 1922, Kolbe founded the monthly periodical Rycerz Niepokalanej (Knight of the Immaculata), a devotional publication based on the French Le Messager du Coeur de Jesus (Messenger of the Heart of Jesus). From 1922 to 1926, he operated a religious publishing press in Grodno. As his activities grew in scope, in 1927 he founded a new Conventual Franciscan monastery at Niepokalanów near Warsaw. It became a major religious publishing centre. A junior seminary was opened there two years later.
Missionary work in Asia
Between 1930 and 1936, Kolbe undertook a series of missions to East Asia. He arrived first in Shanghai, China, but failed to gather a following there. Next he moved to Japan, where by 1931 he had founded a Franciscan monastery, Mugenzai no Sono, on the outskirts of Nagasaki.
Kolbe had started publishing a Japanese edition of the Knight of the Immaculata. The monastery he founded remains prominent in the Roman Catholic Church in Japan. Kolbe had the monastery built on a mountainside. According to Shinto beliefs, this was not the side best suited to be in harmony with nature. However, when the United States dropped the atomic bomb on Nagasaki, the Franciscan monastery survived, unlike the Immaculate Conception Cathedral, the latter having been on the side of the mountain that took the main force of the blast.
In mid-1932, Kolbe left Japan for Malabar, India, where he founded another monastery, which has since closed.
Return to Poland
Meanwhile, in his absence the monastery at Niepokalanów began to publish a daily newspaper, Mały Dziennik (the Small Diary), in alliance with the political group National Radical Camp (Obóz Narodowo-Radykalny). This publication reached a circulation of 137,000, and nearly double that, 225,000, on weekends. Kolbe returned to Poland in 1933 for a general chapter of the order in Kraków. Kolbe returned to Japan and remained there until called back to attend the Provincial Chapter in Poland in 1936. There he was appointed guardian of Niepokalanów, thus precluding his return to Japan. Two years later, in 1938, he started a radio station at Niepokalanów, Radio Niepokalanów. He held an amateur radio licence, with the call sign SP3RN.
World War II
After the outbreak of World War II, Kolbe was one of the few friars who remained in the monastery, where he organized a temporary hospital. After the town was captured by the Germans, they arrested him on 19 September 1939; he was later released on 8 December. He refused to sign the Deutsche Volksliste, which would have given him rights similar to those of German citizens in exchange for recognizing his ethnic German ancestry. Upon his release he continued work at his friary, where he and other friars provided shelter to refugees from Greater Poland, including 2,000 Jews whom he hid from German persecution in the Niepokalanów friary. Kolbe received permission to continue publishing religious works, though significantly reduced in scope. The monastery continued to act as a publishing house, issuing a number of publications considered anti-Nazi.
Arrest and imprisonment
On 17 February 1941, the monastery was shut down by the German authorities. That day Kolbe and four others were arrested by the Gestapo and imprisoned in the Pawiak prison. On 28 May, he was transferred to Auschwitz as prisoner 16670.
Continuing to act as a priest, Kolbe was subjected to violent harassment, including beatings and lashings. Once, he was smuggled to a prison hospital by friendly inmates.
Martyrdom at Auschwitz
At the end of July 1941, a prisoner escaped from the camp, prompting the deputy camp commander, SS-Hauptsturmführer Karl Fritzsch, to pick ten men to be starved to death in an underground bunker to deter further escape attempts. When one of the selected men, Franciszek Gajowniczek (also a Polish Catholic), cried out, "My wife! My children!" Kolbe volunteered to take his place.
According to an eyewitness, who was an assistant janitor at that time, in his prison cell Kolbe led the prisoners in prayer. Each time the guards checked on him, he was standing or kneeling in the middle of the cell and looking calmly at those who entered. After they had been starved and deprived of water for two weeks, only Kolbe and three others remained alive.
The guards wanted the bunker emptied, so they gave the four remaining prisoners lethal injections of carbolic acid. Kolbe is said to have raised his left arm and calmly waited for the deadly injection. He died on 14 August 1941. He was cremated on 15 August, the feast day of the Assumption of Mary.
Canonization
The cause for Kolbe's beatification was opened at a local level on 3 June 1952. On 12 May 1955 Kolbe was recognized by the Holy See as a Servant of God. Kolbe was declared venerable by Pope Paul VI on 30 January 1969, beatified as a Confessor of the Faith by the same Pope in 1971, and canonized as a saint by Pope John Paul II on 10 October 1982. Upon canonization, the Pope declared Maximilian Kolbe as a confessor and a martyr of charity. The miracles that were used to confirm his beatification were the July 1948 cure of intestinal tuberculosis in Angela Testoni and in August 1950, the cure of calcification of the arteries/sclerosis of Francis Ranier; both attributed to Kolbe's intercession by their prayers to him.
Franciszek Gajowniczek, the man Kolbe saved at Auschwitz, survived the Holocaust and was present as a guest at both the beatification and the canonization ceremonies.
After his canonisation, a feast day for Maximilian Kolbe was added to the General Roman Calendar. He is one of ten 20th-century martyrs who are depicted in statues above the Great West Door of Anglican Westminster Abbey, London.
Maximilian Kolbe is remembered in the Church of England with a commemoration on 14 August.
Controversies
Kolbe's recognition as a Christian martyr generated some controversy within the Catholic Church. While his self-sacrifice at Auschwitz was considered saintly and heroic, he was not killed out of odium fidei (hatred of the faith), but as the result of his act of Christian charity toward another man. Pope Paul VI recognized this distinction at Kolbe's beatification, naming him a Confessor and giving him the unofficial title "martyr of charity". Pope John Paul II, however, overruled the commission he had established (which agreed with the earlier assessment of heroic charity). John Paul II wanted to make the point that the Nazis' systematic hatred of whole categories of humanity was inherently also a hatred of religious (Christian) faith; he said that Kolbe's death equated to earlier examples of religious martyrdom.
Accusations of antisemitism
Kolbe's alleged antisemitism was a source of controversy in the 1980s in the aftermath of his canonization. In 1926, in the first issue of the monthly Knight of the Immaculate, Kolbe said he considered Freemasons "as an organized clique of fanatical Jews, who want to destroy the church." In a 1924 column, he cited the Protocols of the Elders of Zion as an "important proof" that "the founders of Zionism intended, in fact, the subjugation of the entire world", but that "not even all Jews know this". In a calendar that the publishing house of his organization, the Militia of the Immaculate, published in an edition of a million in 1939, Kolbe wrote, "Atheistic Communism seems to rage ever more wildly. Its origin can easily be located in that criminal mafia that calls itself Freemasonry, and the hand that is guiding all that toward a clear goal is international Zionism. Which should not be taken to mean that even among Jews one cannot find good people." In his periodicals he had published articles about topics such as a Zionist plot for world domination. Slovenian philosopher Slavoj Žižek criticized Kolbe's activities as "writing and organizing mass propaganda for the Catholic Church, with a clear anti-Semitic and anti-Masonic edge." In contrast, a writer for online EWTN claimed that the "Jewish question played a very minor role in Kolbe's thought and work" and that "only thirty-one out of over 14,000 of his letters reference the Jewish people or Judaism, and most express a missionary zeal and concern for their spiritual welfare".
During World War II, Kolbe's monastery at Niepokalanów sheltered Jewish refugees. According to the testimony of a local, "When Jews came to me asking for a piece of bread, I asked Father Maximilian if I could give it to them in good conscience, and he answered me, 'Yes, it is necessary to do this because all men are our brothers.'"
Relics
First-class relics of Kolbe exist, in the form of hairs from his head and beard, preserved without his knowledge by two friars at NiepokalanΓ³w who served as barbers in his friary between 1930 and 1941. Since his beatification in 1971, more than 1,000 such relics have been distributed around the world for public veneration. Second-class relics, such as his personal effects, clothing and liturgical vestments, are preserved in his monastery cell and in a chapel at NiepokalanΓ³w, where they may be venerated by visitors.
Influence
Kolbe influenced his own Order of Conventual Franciscan friars, as the Militia Immaculatae movement has continued. In recent years new religious and secular institutes have been founded, inspired by this spiritual way. Among these are the Missionaries of the Immaculate Mary – Fr. Kolbe, the Franciscan Friars of Mary Immaculate, and a parallel congregation of religious sisters and others. The Franciscan Friars of Mary Immaculate are taught basic Polish so they can sing the traditional hymns sung by Kolbe, in his native tongue.
According to the friars:
Our patron, St. Maximilian Kolbe, inspires us with his unique Mariology and apostolic mission, which is to bring all souls to the Sacred Heart of Christ through the Immaculate Heart of Mary, Christ's most pure, efficient, and holy instrument of evangelization – especially those most estranged from the Church.
Kolbe's views on Marian theology echo today through their influence on Vatican II. His image may be found in churches across Europe and throughout the world. Several churches in Poland are under his patronage, such as the Sanctuary of Saint Maxymilian in Zduńska Wola and the Church of Saint Maxymilian Kolbe in Szczecin. A museum, the Museum of St. Maximilian Kolbe "There was a Man", was opened in Niepokalanów in 1998.
In 1963, Rolf Hochhuth published The Deputy, a play influenced by Kolbe's life, and dedicated to him. In 2000, the National Conference of Catholic Bishops (US) designated Marytown in Libertyville, Illinois home to a community of Conventual Franciscan friars, as the National Shrine of St. Maximilian Kolbe.
In 1991, Krzysztof Zanussi released a Polish film about the life of Kolbe, with Edward Żentara as Kolbe. The Polish Senate declared 2011 to be the year of Maximilian Kolbe.
In 2023, the Mexican production company Dos Corazones Films released the animated feature film Max, which recounts part of the Franciscan's life.
Immaculata prayer
Kolbe composed the Immaculata prayer as a prayer of consecration to the Immaculata.
See also
Holocaust theology
Maximilian of Tebessa
Peter Fehlner
Sisters Minor of Mary Immaculate
Γlise Rivet
Notes
References
Further reading
Smith, Jeremiah J. (1951). Saint Maximilian Kolbe: Knight of the Immaculata. Rockford, IL: Tan. ISBN 978-0895556196
External links
Patron Saints Index: Saint Maximilian Kolbe
Kolbe's Gift, a play by David Gooderson about Kolbe and his self-sacrifice in Auschwitz, based on factual evidence and conversations with the late Józef Garliński
A Man Feared by the 21st Century: Saint Maximilian Kolbe from the Starvation Bunker in Auschwitz – a drama by Kazimierz Braun
Saint Maximilian Kolbe, a popular biography at Catholicism.org
NiepokalanΓ³w in English
Catholic Online, St. Maximilian Kolbe, Catholic Online.Inform-Inspire-Ignite.
St. Maximilian Kolbe Website
An "Insight" episode which mentions Maximilian Kolbe, who was portrayed by Werner Klemperer
Radio Kolbe, International Radio Group OM / SWL / BCL (Based in Italy)
1894 births
1941 deaths
People from Zduńska Wola
People from Kalisz Governorate
20th-century Polish Roman Catholic priests
20th-century Christian saints
Anglican saints
Anti-Masonry
Canonizations by Pope John Paul II
Catholic saints and blesseds of the Nazi era
Conventual Friars Minor
Polish Franciscans
Martyred Roman Catholic priests
People celebrated in the Lutheran liturgical calendar
Polish anti-communists
Polish civilians killed in World War II
Polish people of German descent
Polish people who died in Auschwitz concentration camp
Polish Roman Catholic saints
Pontifical Gregorian University alumni
Pontifical University of St. Bonaventure alumni
Roman Catholic activists
Amateur radio people
People executed by Nazi Germany by lethal injection
Franciscan saints
Polish magazine founders
Roman Catholic priests executed by Nazi Germany
People who have sacrificed their lives to save others
In mathematical graph theory, the Higman–Sims graph is a 22-regular undirected graph with 100 vertices and 1100 edges. It is the unique strongly regular graph srg(100,22,0,6): no adjacent pair of vertices shares a common neighbor, and each non-adjacent pair of vertices shares six common neighbors. It was rediscovered in 1968 by Donald G. Higman and Charles C. Sims (having been constructed earlier) as a way to define the Higman–Sims group, a subgroup of index two in the group of automorphisms of the Higman–Sims graph.
Construction
From M22 graph
Take the M22 graph, a strongly regular graph srg(77,16,0,4) and augment it with 22 new vertices corresponding to the points of S(3,6,22), each block being connected to its points, and one additional vertex C connected to the 22 points.
From Hoffman–Singleton graph
There are 100 independent sets of size 15 in the Hoffman–Singleton graph. Create a new graph with 100 corresponding vertices, and connect vertices whose corresponding independent sets have exactly 0 or 8 elements in common.
The resulting Higman–Sims graph can be partitioned into two copies of the Hoffman–Singleton graph in 352 ways.
From a cube
Take a cube with vertices labeled 000, 001, 010, ..., 111. Take all 70 possible 4-sets of vertices, and retain only the ones whose XOR evaluates to 000; there are 14 such 4-sets, corresponding to the 6 faces + 6 diagonal-rectangles + 2 parity tetrahedra. This is a 3-(8,4,1) block design on 8 points, with 14 blocks of block size 4, each point appearing in 7 blocks, each pair of points appearing 3 times, each triplet of points occurring exactly once. Permute the original 8 vertices any of 8! = 40320 ways, and discard duplicates. There are then 30 different ways to relabel the vertices (i.e., 30 different designs that are all isomorphic to each other by permutation of the points). This is because there are 1344 automorphisms, and 40320/1344 = 30.
Create a vertex for each of the 30 designs, and for each row of every design (there are 70 such rows in total, each row being a 4-set of the 8 vertices and appearing in 6 designs). Connect each design to its 14 rows. Connect disjoint designs to each other (each design is disjoint from 8 others). Connect rows to each other if they have exactly one element in common (each row has 4 × 4 = 16 such neighbors). The resulting graph is the Higman–Sims graph. Rows are connected to 16 other rows and to 6 designs, for degree 22. Designs are connected to 14 rows and 8 disjoint designs, also for degree 22. Thus all 100 vertices have degree 22.
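The block-design step of the cube construction is easy to check mechanically. This sketch (an illustration, not part of the article) enumerates the 4-sets of cube vertices whose XOR is 000 and verifies the 3-(8,4,1) design properties described above:

```python
from itertools import combinations

# Label the cube's vertices 0..7, i.e. the bit strings 000..111.
# Keep the 4-sets of vertices whose bitwise XOR is 0.
blocks = [s for s in combinations(range(8), 4)
          if s[0] ^ s[1] ^ s[2] ^ s[3] == 0]

assert len(blocks) == 14   # 6 faces + 6 diagonal rectangles + 2 parity tetrahedra
for triple in combinations(range(8), 3):
    # each triple of points lies in exactly one block: a 3-(8,4,1) design
    assert sum(set(triple) <= set(b) for b in blocks) == 1
```

Each 3-subset {a, b, c} extends uniquely to a zero-XOR 4-set by adding d = a ^ b ^ c, which is why there are C(8,3)/4 = 14 blocks.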
Algebraic properties
The automorphism group of the Higman–Sims graph is isomorphic to the semidirect product of the Higman–Sims group with the cyclic group of order 2. It has automorphisms that take any edge to any other edge, making the Higman–Sims graph an edge-transitive graph. The outer elements induce odd permutations on the graph. As mentioned above, there are 352 ways to partition the Higman–Sims graph into a pair of Hoffman–Singleton graphs; these partitions actually come in 2 orbits of size 176 each, and the outer elements of the Higman–Sims group swap these orbits.
The characteristic polynomial of the Higman–Sims graph is (x − 22)(x − 2)^77 (x + 8)^22. Therefore, the Higman–Sims graph is an integral graph: its spectrum consists entirely of integers. It is also the only graph with this characteristic polynomial, making it a graph determined by its spectrum.
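The eigenvalues and their multiplicities can be recomputed from the strongly regular parameters alone, using the standard formulas for srg(v, k, λ, μ); the sketch below (illustrative code, not from the article, with assumed function names) reproduces the factors of the characteristic polynomial for the Higman–Sims parameters:

```python
from math import isqrt

# Spectrum of a strongly regular graph srg(v, k, lam, mu), assuming the
# discriminant (lam - mu)^2 + 4(k - mu) is a perfect square (it is here).
def srg_spectrum(v, k, lam, mu):
    s = isqrt((lam - mu) ** 2 + 4 * (k - mu))
    r_pos, r_neg = (lam - mu + s) // 2, (lam - mu - s) // 2
    t = (2 * k + (v - 1) * (lam - mu)) // s
    m_pos, m_neg = (v - 1 - t) // 2, (v - 1 + t) // 2
    return [(k, 1), (r_pos, m_pos), (r_neg, m_neg)]

srg_spectrum(100, 22, 0, 6)   # -> [(22, 1), (2, 77), (-8, 22)]
```

The returned pairs are (eigenvalue, multiplicity), matching the exponents 1, 77 and 22 in the polynomial above.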
Inside the Leech lattice
The Higman–Sims graph naturally occurs inside the Leech lattice: if X, Y and Z are three points in the Leech lattice at suitable fixed mutual distances, then there are exactly 100 Leech lattice points T such that all the distances XT, YT and ZT are equal to 2, and if we connect two such points T and T′ when the distance between them takes a certain fixed value, the resulting graph is isomorphic to the Higman–Sims graph. Furthermore, the set of all automorphisms of the Leech lattice (that is, Euclidean congruences fixing it) which fix each of X, Y and Z is the Higman–Sims group (if we allow exchanging X and Y, the order 2 extension of all graph automorphisms is obtained). This shows that the Higman–Sims group occurs inside the Conway groups Co2 (with its order 2 extension) and Co3, and consequently also Co1.
References
Group theory
Individual graphs
Regular graphs
Strongly regular graphs
In number theory, a Leyland number is a number of the form x^y + y^x,
where x and y are integers greater than 1. They are named after the mathematician Paul Leyland. The first few Leyland numbers are
8, 17, 32, 54, 57, 100, 145, 177, 320, 368, 512, 593, 945, 1124.
The requirement that x and y both be greater than 1 is important, since without it every positive integer would be a Leyland number of the form x^1 + 1^x. Also, because of the commutative property of addition, the condition x ≥ y is usually added to avoid double-covering the set of Leyland numbers (so we have 1 < y ≤ x).
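With the convention 1 < y ≤ x, the sequence is easy to enumerate by brute force; this short sketch (illustrative, with assumed function names) reproduces the opening terms listed above:

```python
# Generate all Leyland numbers x^y + y^x (with 2 <= y <= x) up to a limit.
def leyland_numbers(limit):
    out = set()
    x = 2
    while x**2 + 2**x <= limit:   # smallest candidate for this x uses y = 2
        for y in range(2, x + 1):
            v = x**y + y**x
            if v <= limit:
                out.add(v)
        x += 1
    return sorted(out)

leyland_numbers(1200)
# -> [8, 17, 32, 54, 57, 100, 145, 177, 320, 368, 512, 593, 945, 1124]
```

The while-condition works because, for fixed x, the smallest value x^y + y^x with y ≥ 2 is obtained at y = 2.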
Leyland primes
A Leyland prime is a Leyland number that is prime. The first such primes are:
17, 593, 32993, 2097593, 8589935681, 59604644783353249, 523347633027360537213687137, 43143988327398957279342419750374600193, ...
corresponding to
3^2 + 2^3, 9^2 + 2^9, 15^2 + 2^15, 21^2 + 2^21, 33^2 + 2^33, 24^5 + 5^24, 56^3 + 3^56, 32^15 + 15^32.
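The smallest cases can be confirmed directly; the sketch below (trial division is fine at this size, though real searches use probabilistic tests or ECPP) checks the first four exponent pairs above:

```python
# Naive primality test by trial division; adequate for small Leyland primes.
def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# (x, y) pairs for the first four Leyland primes: 17, 593, 32993, 2097593.
for x, y in [(3, 2), (9, 2), (15, 2), (21, 2)]:
    assert is_prime(x**y + y**x)
```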
One can also fix the value of y and consider the sequence of x values that gives Leyland primes; for example, x^2 + 2^x is prime for x = 3, 9, 15, 21, 33, 2007, 2127, 3759, ...
By November 2012, the largest Leyland number that had been proven to be prime was 5122^6753 + 6753^5122 with digits. From January 2011 to April 2011, it was the largest prime whose primality was proved by elliptic curve primality proving. In December 2012, this was improved by proving the primality of the two numbers 3110^63 + 63^3110 (5596 digits) and 8656^2929 + 2929^8656 ( digits), the latter of which surpassed the previous record. In February 2023, 104824^5 + 5^104824 ( digits) was proven to be prime, and it was also the largest prime proven using ECPP, until three months later a larger (non-Leyland) prime was proven using ECPP. There are many larger known probable primes such as 314738^9 + 9^314738, but it is hard to prove primality of large Leyland numbers. Paul Leyland writes on his website: "More recently still, it was realized that numbers of this form are ideal test cases for general purpose primality proving programs. They have a simple algebraic description but no obvious cyclotomic properties which special purpose algorithms can exploit."
There is a project called XYYXF to factor composite Leyland numbers.
Leyland number of the second kind
A Leyland number of the second kind is a number of the form x^y - y^x,
where x and y are integers greater than 1. The first such numbers are:
0, 1, 7, 17, 28, 79, 118, 192, 399, 431, 513, 924, 1844, 1927, 2800, 3952, 6049, 7849, 8023, 13983, 16188, 18954, 32543, 58049, 61318, 61440, 65280, 130783, 162287, 175816, 255583, 261820, ...
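A brute-force enumeration analogous to the first-kind case reproduces the opening terms (an illustrative sketch, keeping the non-negative values of x^y - y^x, the form implied by the listed terms):

```python
# Second-kind Leyland numbers: non-negative values of x^y - y^x with x, y > 1.
vals = {x**y - y**x for x in range(2, 20) for y in range(2, 20)}
second_kind = sorted(v for v in vals if 0 <= v <= 2000)
print(second_kind)
# -> [0, 1, 7, 17, 28, 79, 118, 192, 399, 431, 513, 924, 1844, 1927]
```

Small values of x^y - y^x only arise from small bases, so the range up to 19 covers every term below 2000.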
A Leyland prime of the second kind is a Leyland number of the second kind that is also prime. The first few such primes are:
7, 17, 79, 431, 58049, 130783, 162287, 523927, 2486784401, 6102977801, 8375575711, 13055867207, 83695120256591, 375700268413577, 2251799813682647, ...
For the probable primes, see Henri Lifchitz & Renaud Lifchitz, PRP Top Records search.
References
External links
Eponymous numbers in mathematics
Integer sequences
Regina Lamendella is an American Professor of Microbiology. She is best known for the use of omics for applied studies of microbiology in natural waterways and the guts of animals, including humans.
Lamendella collaborates with and leads teams of scientists and healthcare professionals developing novel approaches to identify and screen for microorganisms in diverse environments, from waterways to human tissue. For example, her work suggests that eating walnuts may be good for human gut flora, resulting in improved heart health. Lamendella has also contributed to local testing for COVID-19 among rural Amish communities.
Education
Lamendella earned her B.A. in biology from Lafayette College. From the University of Cincinnati, she earned an M.S. in environmental science and an M.S. in molecular biology, and in 2009 she completed her PhD. From 2009 to 2012, she completed postdoctoral studies at Lawrence Livermore National Laboratory.
Employment
In 2012, Lamendella joined the faculty of Juniata College, where she is currently an Associate Professor and holds the George '75 and Cynthia '76 Valko Professorship in Biological Sciences.
Bibliography
Lamendella has more than 50 publications listed on Scopus that have been cited a total of more than 4,000 times, giving her an h-index of 23.
References
External links
American environmental scientists
Living people
University of Cincinnati alumni
Juniata College faculty
Year of birth missing (living people)
American microbiologists
American women microbiologists
Lafayette College alumni
Place of birth missing (living people)
20th-century American biologists
American women academics
21st-century American women
20th-century American women scientists
The term All American Five (abbreviated AA5) is a colloquial name for mass-produced, superheterodyne radio receivers that used five vacuum tubes in their design. These radio sets were designed to receive amplitude modulation (AM) broadcasts in the medium wave band, and were manufactured in the United States from the mid-1930s until the early 1960s. By eliminating a power transformer, the cost of the units was kept low; the same principle was later applied to television receivers. Variations in the design for lower cost, shortwave bands, better performance or special power supplies existed, although many sets used an identical set of vacuum tubes.
Philosophy
The radio was called the "All American Five" because the design typically used five vacuum tubes, and comprised the majority of radios manufactured for home use in the USA and Canada in the tube era.
They were manufactured in the millions by hundreds of manufacturers from the 1930s onward, and the last examples were made in Japan. The heaters of the tubes were connected in series, all requiring the same current but with different voltages across them. The standard line-up of tubes was designed so that the total rated voltage of the five tubes was 121 volts, slightly more than the electricity supply voltage of 110–117 V. An extra dropper resistor was therefore not required. Transformerless designs had a metal chassis connected to one side of the power line, which was a dangerous electric shock hazard and required a thoroughly insulated cabinet. Transformerless radios could be powered by either AC or DC (consequently called AC/DC receivers); DC supplies were still not uncommon. When operated on DC, they would only work if the plug was inserted with the correct polarity. Also, if run from a DC supply, the radio had reduced performance because the B+ voltage would only be 120 volts, compared with 160–170 volts when operated from AC.
The philosophy of the design was simple: it had to be as cheap to make as possible. The design was optimized to provide good performance for the price. At least one radio manufacturer, Arthur Atwater Kent, preferred to go out of business rather than attempt to compete with 'midget' or low-cost AA5 designs. (Douglas, Alan, Radio Manufacturers of the 1920s (Vol. 1), Vestal, New York: Vestal Press, Ltd. (1988); Schiffer, Michael, The Portable Radio In American Life, Tucson: Univ. of Ariz. Press (1991).)
Many design tricks were used to reduce production costs of the five-tube radio. The heaters of all the vacuum tubes had to be rated to use the same current, so they could be operated in series from line voltage. The rectifier and audio output tube required more heater power, so dropped a larger voltage than the other tubes. In many designs the rectifier tube had a tap on the heater to power a dial light. The plate current was routed through that portion of the rectifier heater, in order to make up for the current diverted to the dial lamp. If the dial lamp failed, that part of the rectifier heater would have a larger current which could burn out the tube in a few months. Early radios had a resistor network to minimize the problem but this was soon eliminated as the cost of replacing the tube was not the manufacturer's problem. As with Christmas tree lights, if one tube heater failed, none of the tube heaters would operate.
The radio used a half wave rectifier to produce a plate voltage of 160 to 170 volts directly from the AC power line; the rectifier, while not needed with a strictly DC supply, did not cause a problem.
The frequency mixer was of the pentagrid converter design to save the cost of a separate oscillator tube. The detector and first audio stage were provided by a dual diode/triode combination tube. When the detector/first audio tube contained a second diode, it could be used to provide automatic gain control (AGC), or AGC bias could be derived from the audio detector diode.
Potential hazards of the design
Many early examples of the 'All-American Five' posed a shock hazard to users. Lacking a mains transformer, the chassis of the AA5 radio was directly connected to one side of the mains electric supply. The hazard was made worse because the on/off switch was often in the wire of the mains supply which was connected to the chassis, meaning that the chassis could be "hot" when the set was either 'on' or 'off', depending on which way the plug was inserted in the power outlet. Many power plugs had two identical pins, and could be plugged in either way round. The metal chassis securing screws were sometimes accessible from the outside of the Bakelite or wood case, and there were many examples of owners receiving a shock by making contact with these screws while handling a set. Ventilation holes could be large enough to allow children to poke their fingers, or metal objects, through. The same type of hazard was present in European AC/DC sets, at twice the voltage.
The hazard was eliminated from later sets by the use of an internal ground bus connected to the chassis by an isolation network. Underwriters Laboratories required the adoption of the floating chassis as isolation from the mains (the exact circuit and component values were not specified, although the allowed leakage current was) to limit the shock to a "safe" current level. The chassis was maintained at RF ground (for shielding) by a bypass capacitor (typically 0.05 μF to 0.2 μF), usually with a resistor connected across it (typically 220 kΩ to 470 kΩ, although values as small as 22 kΩ were sometimes used, or the resistor was simply omitted). In older schematics, "M" was used to indicate "thousand" and not "megohm"; later, "K" for "kilo" ("thousand") and "Meg" for "mega" ("million") became the standard, with "M" deleted to avoid confusion. Today, the symbols are kΩ and MΩ. Over the years, these paper capacitors often became leaky and could allow sufficient current flow to give the user a shock.
Variations on the theme
Although four-, six-, and even a few rare eight-tube radios were produced, they were not common. The four-tube version with vacuum tube rectifier was of inferior performance, as they typically had no IF amplifier tube, although some four-tube designs with a selenium rectifier in place of the rectifier tube avoided this problem. The six-tube versions added either an RF amplifier tube, a push-pull audio power amplifier tube, or a beat frequency oscillator tube (to listen to Morse code or single-sideband modulation transmissions). However, these radios cost significantly more and sold in smaller quantities. The eight-tube versions cost even more, adding two or more of the features of the six-tube versions and sometimes an extra IF amplifier tube.
Specific implementations
The basic design of the 'All-American Five' had its origins in low-cost sets produced in the early days of radio.
Early attempts
Radio manufacturers departed from the traditional heater voltages of 2.5, 5 and 6.3 volts to get a five-tube combination that would operate as close as possible to the 110–120 VAC line voltage. For the 1935 model year, designers were able to get a 5-tube heater string to total up to 78 volts. This meant that a dropping resistor or line ballast tube was needed to drop the remaining 35–42 volts. If a ballast tube was used, the radio would be marketed as a "6-tube" radio even though one tube was just a voltage-dropping ballast. Other manufacturers used a "line cord resistor", a special AC cord made with resistance wire which replaced a power resistor in the radio chassis. These line cords tended to get warm to the touch after the radio was in use for a while.
During the 1935–36 model years, examples of 5-tube (pre-octal base or prong tubes) series strings using 300 mA heaters were:
Detector-Oscillator: 78
Intermediate Frequency (IF): 78
Second Detector and First Audio Amplifier: 77
Power Amplifier: 43
Rectifier: 25Z5
Later when newer tubes came out another variant was:
Pentagrid Converter: 6A7
Intermediate Frequency (IF): 78 or 6D6
Second Detector and First Audio Amplifier: 75
Power Amplifier: 43
Rectifier: 25Z5
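The size of the dropping element described above follows from Ohm's law. A small sketch of the arithmetic (illustrative; 117 V is a representative line voltage, not a figure from the text):

```python
# Dropper needed for a 78 V, 300 mA series heater string on a 117 V line.
line_voltage = 117.0    # representative US line voltage (assumed)
string_voltage = 78.0   # five-tube heater string total, per the text
heater_current = 0.300  # amperes, common to all heaters in series
drop = line_voltage - string_voltage
resistance = drop / heater_current
power = drop * heater_current
print(f"{resistance:.0f} ohms dissipating {power:.1f} W")
# -> 130 ohms dissipating 11.7 W
```

The dissipated power also shows why the "line cord resistor" versions ran warm to the touch.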
True 5-tube transformerless version
The very first set of metal tubes produced included 6-volt heater tubes that could be used to make a transformer-powered 6-tube radio. RCA released their first set of these metal octal tubes for this design in 1939, using 12.6-volt 150 mA heaters instead. The original design used the following tubes:
Converter: 12A8
IF amplifier: 12K7
Detector and first audio amplifier: 12Q7
Audio power output: 50L6
Rectifier: 35Z4
This series had the grids brought out as top caps on the signal tubes, and the 35Z4 did not have a provision for a dial light.
Single ended tube variant
AC/DC designs for 110–117 V usually used 150 mA heater current.
The tube array in the early days of single ended octal tubes was:
Converter: 12SA7
IF amplifier: 12SK7
Detector and first audio amplifier: 12SQ7
Audio power output: 50L6
Rectifier: 35Z5
These sets were first marketed in late 1939. Canadian sets would sometimes use a 35L6 in place of the 50L6, as parts of Canada used 110 volts as a design standard. Because areas near Niagara Falls had 25 Hz power, some Canadian sets had slightly larger filter capacitors.
The "Loctal" variant
The tube line up of the Loctal tubes was:
Converter: 14Q7
IF amplifier: 14A7
Detector and first audio amplifier: 14B6
Audio power output: 50A5
Rectifier: 35Y4 or 35Z3
Miniature tubes
After the Second World War the set was redesigned to use miniature 7-pin tubes and the line up became:
Converter: 12BE6
IF amplifier: 12BA6
Detector and first audio amplifier: 12AV6 or 12AT6
Audio power output: 50C5 or the less-common 50B5
Rectifier: 35W4
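Because the nominal heater voltage is encoded in the leading digits of a tube designation, the 121-volt string total mentioned under Philosophy can be checked mechanically (a quick illustrative sketch):

```python
import re

# Sum the nominal heater voltages encoded in the tube designations.
lineup = ["12BE6", "12BA6", "12AV6", "50C5", "35W4"]
total = sum(int(re.match(r"\d+", name).group()) for name in lineup)
print(total)  # -> 121, just above the 110-117 V line voltage
```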
The 50C5, introduced in 1948, is electrically identical to the 50B5 but has a revised pinout to address concerns that the high peak voltage between pins 4 (heater) and 5 (anode) would promote socket breakdown.
In the postwar period, some makers built sets with a mixture of miniature, octal, and loctal types.
"Power-Saver" version
Another low-power variation changed the tube heaters to run on 100 milliamperes rather than 150 milliamperes. These tubes took a little longer to warm up:
Converter: 18FX6
IF amplifier: 18FW6
Detector and first audio amplifier: 18FY6
Audio power output: 32ET5 or 34GD5
Rectifier: 36AM3
The voltage distribution has changed around the tube heaters but the total is still a little more than the 120 volt mains supply. This line-up is for an Admiral radio.
Farm radio
A "farm radio" modification (usually done at the point of sale) allowed an AA5 to run off 32 volts DC, commonly generated by farm windmills. With a relatively simple rewiring, the tube heaters could be put in series-parallel to run off 32 volts, with the three twelve-volt heaters in series and a 25L6, 35L6 or 43 in parallel; the tubes would still function with the heater voltage somewhat out of specification. If run from a 32-volt supply, the radio had substantially reduced performance because the B+ voltage would only be 32 volts, compared with 160–170 volts when operated from AC. With 32 volts on the plate, the radio tended to be insensitive. Sometimes only the tube heater power was derived from a windmill, and dry batteries were retained for the plate voltage supply. The advantage was that the heaters were a high and continuous load on the battery, whereas the plate voltage battery drain was smaller and intermittent. Often a wet-cell rechargeable battery was used for tube heaters, recharged by a local garage or by exchanging with a vehicle battery.
Many 32-volt farm radios were factory-built for the purpose. They usually had two type 48 power tetrodes that could operate with B+ voltages as low as 28 volts. The type 48 pairs were parallel-connected, or connected in push-pull. Some factory 32-volt radios used an electromechanical vibrator power supply to provide increased voltage. Vibrator power supplies could also be made to work from a 6-volt supply from a dedicated wind-charger or from a car battery borrowed from a farm vehicle.
Battery operated variants
A number of other versions of the set appeared, including some that did have a transformer, a version that operated in a motor vehicle off a 6-volt supply, using a vibrator to convert the 6V DC supply to AC which could feed a transformer with higher voltage output, and a version that operated from either dry batteries or the mains supply. The battery version commonly used tubes where the filament was heated by a single 1.5-volt dry cell and plate voltage was supplied by a (nominally) 90-volt battery.
One version, called a three-way portable because it could be operated any of three ways (batteries, the AC line, or the DC line), typically had the following tube array:
Converter: 1R5 (or 1L6 if the set was shortwave, such as the Zenith Trans-Oceanic)
IF amplifier: 1U4
Detector and first audio amplifier: 1U5
Audio power output: 3V4
Rectifier: 35W4, 117Z3, or a selenium rectifier
This version used a 7.5 V A battery and a 90 V B battery. Note that the A battery did not need to heat the rectifier tube because, when operating from the batteries, the rectifier was not needed.
When operating on batteries, this version had almost instant warmup because the tubes used their filaments as cathodes.
This setup was common on Motorola portable radios commonly resembling metal "lunch boxes".
Variations
Since the AA5 was a minimalist design, there was plenty of room for enhanced versions, resulting in an "AA6":
A few sets added an extra 12SK7 as an RF or IF amplifier. This would require using a 35L6 to maintain the heater voltage.
Or, another audio amplifier tube could be added for increased audio output. To keep the total heater voltage at around 120 V, the two output tubes would have to be 25- to 35-volt types, such as the 35L6 or 25L6.
There were even a few "AA4" designs, usually midget sets, only usable in strong-signal metropolitan areas, because most had no IF amplifier (although some replaced the rectifier tube with a selenium rectifier).
Series string order
According to various editions of the RCA Receiving Tube Manual, the heater string of an AC/DC radio should be arranged in a particular order to minimize hum. Assuming that all functions are performed by separate tubes, the heaters in the string should be arranged as follows:
Input stage
Ballast tube or resistor
Rectifier
Audio power output amplifier
RF and IF amplifiers
Converter
First AF amplifier
Detector
Ground/B-minus line
Not all manufacturers followed this recommendation.
Effect on television design
Many black-and-white and color television receivers were built using All American Five principles, including a hot chassis and series-wired heaters. The designs were found primarily in portable or inexpensive sets ranging from the 1950s to even as late as the GE Portacolor series which was finally discontinued in the 1980s. Early sets tended to use selenium rectifiers in place of a tube; later sets used silicon diodes. Some of these sets were hybrid, using transistors for small signal applications and vacuum tubes in place of then-expensive power transistors. Some also included a rectifier diode in series with the tube filaments; when the set was off, the rectifier kept the filaments partially heated, a technique given a variety of names such as "Instant On".
Servicing precautions
Since the chassis of the set may be connected directly to the live side of the power line, service shops used an isolation transformer to protect technicians from a shock hazard. Some restorers will rewire the hot chassis set to put the chassis at neutral at all times. Some designs only require polarizing the plug, while others require rewiring the power supply to remove the switch from chassis ground. Power outlets must be wired properly for this modification to be protective.
See also
Utility Radio
VolksempfΓ€nger
References
External links
The All American Five
Arcane Radio Trivia AA5 Article
A Review of Developments in Broadcast Receivers of 1933, Radio Engineering (magazine), August 1933, pages 6, 7, 20
Types of radios
Radio electronics
History of radio technology
The clitoral index, defined as the product of the sagittal and transverse dimensions of the glans clitoridis, is sometimes used as a measure of virilization in women. In one study, the mean (and also median) clitoral index of a group of 200 normal women was measured as roughly 18.5 mm².
See also
Clitoromegaly
References
Anthropometry
Foxes are small-to-medium-sized omnivorous mammals belonging to several genera of the family Canidae. They have a flattened skull; upright, triangular ears; a pointed, slightly upturned snout; and a long, bushy tail ("brush").
Twelve species belong to the monophyletic "true fox" group of genus Vulpes. Another 25 current or extinct species are sometimes called foxes β they are part of the paraphyletic group of the South American foxes or an outlying group, which consists of the bat-eared fox, gray fox, and island fox.
Foxes live on every continent except Antarctica. The most common and widespread species of fox is the red fox (Vulpes vulpes) with about 47 recognized subspecies. The global distribution of foxes, together with their widespread reputation for cunning, has contributed to their prominence in popular culture and folklore in many societies around the world. The hunting of foxes with packs of hounds, long an established pursuit in Europe, especially in the British Isles, was exported by European settlers to various parts of the New World.
Etymology
The word fox comes from Old English and derives from Proto-Germanic *fuhsaz. This in turn derives from Proto-Indo-European *puḱ- "thick-haired, tail." Male foxes are known as dogs, tods, or reynards; females as vixens; and young as cubs, pups, or kits, though the last term is not to be confused with the kit fox, a distinct species. "Vixen" is one of very few modern English words that retain the Middle English southern dialectal "v" pronunciation instead of "f"; i.e., northern English "fox" versus southern English "vox". A group of foxes is referred to as a skulk, leash, or earth.
Phylogenetic relationships
Within the Canidae, the results of DNA analysis show several phylogenetic divisions:
The fox-like canids, which include the kit fox (Vulpes velox), red fox (Vulpes vulpes), Cape fox (Vulpes chama), Arctic fox (Vulpes lagopus), and fennec fox (Vulpes zerda).
The wolf-like canids (genera Canis, Cuon and Lycaon), including the dog (Canis lupus familiaris), gray wolf (Canis lupus), red wolf (Canis rufus), eastern wolf (Canis lycaon), coyote (Canis latrans), golden jackal (Canis aureus), Ethiopian wolf (Canis simensis), black-backed jackal (Canis mesomelas), side-striped jackal (Canis adustus), dhole (Cuon alpinus), and African wild dog (Lycaon pictus).
The South American canids, including the bush dog (Speothos venaticus), hoary fox (Lycalopex vetulus), crab-eating fox (Cerdocyon thous) and maned wolf (Chrysocyon brachyurus).
Various monotypic taxa, including the bat-eared fox (Otocyon megalotis), gray fox (Urocyon cinereoargenteus), and raccoon dog (Nyctereutes procyonoides).
Biology
General morphology
Foxes are generally smaller than some other members of the family Canidae, such as wolves and jackals, while they may be larger than others within the family, such as raccoon dogs. The largest species is the red fox and the smallest is the fennec fox.
Fox features typically include a triangular face, pointed ears, an elongated rostrum, and a bushy tail. They are digitigrade (meaning they walk on their toes). Unlike most members of the family Canidae, foxes have partially retractable claws. Fox vibrissae, or whiskers, are black. The whiskers on the muzzle, known as mystacial vibrissae, are longer than the whiskers elsewhere on the head. Whiskers (carpal vibrissae) are also present on the forelimbs, pointing downward and backward. Other physical characteristics vary according to habitat and adaptive significance.
Pelage
Fox species differ in fur color, length, and density. Coat colors range from pearly white to black-and-white to black flecked with white or grey on the underside. Fennec foxes (and other species of fox adapted to life in the desert, such as kit foxes), for example, have large ears and short fur to aid in keeping the body cool. Arctic foxes, on the other hand, have tiny ears and short limbs as well as thick, insulating fur, which aid in keeping the body warm. Red foxes, by contrast, have a typical auburn pelt, the tail normally ending with a white marking.
A fox's coat color and texture may vary due to the change in seasons; fox pelts are richer and denser in the colder months and lighter in the warmer months. To get rid of the dense winter coat, foxes moult once a year around April; the process begins from the feet, up the legs, and then along the back. Coat color may also change as the individual ages.
Dentition
A fox's dentition, like all other canids, is I 3/3, C 1/1, PM 4/4, M 3/2 = 42. (Bat-eared foxes have six extra molars, totalling in 48 teeth.) Foxes have pronounced carnassial pairs, which is characteristic of a carnivore. These pairs consist of the upper premolar and the lower first molar, and work together to shear tough material like flesh. Foxes' canines are pronounced, also characteristic of a carnivore, and are excellent in gripping prey.
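The 42-tooth total follows directly from the dental formula, which gives per-side tooth counts for the upper and lower jaws. A small sketch of the arithmetic (the helper function is hypothetical, for illustration only):

```python
# Tooth count from a dental formula: (upper, lower) pairs per side, x2 sides.
def tooth_count(formula):
    upper = sum(u for u, _ in formula.values())
    lower = sum(l for _, l in formula.values())
    return 2 * (upper + lower)

fox = {"I": (3, 3), "C": (1, 1), "PM": (4, 4), "M": (3, 2)}
print(tooth_count(fox))  # -> 42
```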
Behaviour
In the wild, the typical lifespan of a fox is one to three years, although individuals may live up to ten years. Unlike many canids, foxes are not always pack animals. Typically, they live in small family groups, but some (such as Arctic foxes) are known to be solitary.
Foxes are omnivores. Their diet is made up primarily of invertebrates such as insects and small vertebrates such as reptiles and birds. They may also eat eggs and vegetation. Many species are generalist predators, but some (such as the crab-eating fox) have more specialized diets. Most species of fox consume around 1 kg of food every day. Foxes cache excess food, burying it for later consumption, usually under leaves, snow, or soil. While hunting, foxes tend to use a particular pouncing technique, such that they crouch down to camouflage themselves in the terrain and then use their hind legs to leap up with great force and land on top of their chosen prey. Using their pronounced canine teeth, they can then grip the prey's neck and shake it until it is dead or can be readily disemboweled.
The gray fox is one of only two canine species known to regularly climb trees; the other is the raccoon dog.
Sexual characteristics
The male fox's scrotum is held up close to the body with the testes inside even after they descend. Like other canines, the male fox has a baculum, or penile bone. The testes of red foxes are smaller than those of Arctic foxes. Sperm formation in red foxes begins in August–September, with the testicles attaining their greatest weight in December–February.
Vixens are in heat for one to six days, making their reproductive cycle twelve months long. As with other canines, the ova are shed during estrus without the need for the stimulation of copulating. Once the egg is fertilized, the vixen enters a period of gestation that can last from 52 to 53 days. Foxes tend to have an average litter size of four to five with an 80 percent success rate in becoming pregnant. Litter sizes can vary greatly according to species and environment; the Arctic fox, for example, can have up to eleven kits.
The vixen usually has six or eight mammae. Each teat has 8 to 20 lactiferous ducts, which connect the mammary gland to the nipple, allowing for milk to be carried to the nipple.
Vocalization
The fox's vocal repertoire is vast, and includes:
Whine: Made shortly after birth. Occurs at a high rate when kits are hungry and when their body temperatures are low. Whining stimulates the mother to care for her young; it also has been known to stimulate the male fox into caring for his mate and kits.
Yelp: Made about 19 days later. The kits' whining turns into infantile barks, yelps, which occur heavily during play.
Explosive call: At the age of about one month, the kits can emit an explosive call which is intended to be threatening to intruders or other cubs; a high-pitched howl.
Combative call: In adults, the explosive call becomes an open-mouthed combative call during any conflict; a sharper bark.
Growl: An adult fox's indication to their kits to feed or head to the adult's location.
Bark: Adult foxes warn against intruders and in defense by barking.
In the case of domesticated foxes, the whining seems to remain in adult individuals as a sign of excitement and submission in the presence of their owners.
Classification
Canids commonly known as foxes include the following genera and species:
Conservation
Several fox species are endangered in their native environments. Pressures placed on foxes include habitat loss and being hunted for pelts, other trade, or control. Due in part to their opportunistic hunting style and industriousness, foxes are commonly resented as nuisance animals. Contrastingly, foxes, while often considered pests themselves, have been successfully employed to control pests on fruit farms while leaving the fruit intact.
Urocyon littoralis
The island fox, though considered a near-threatened species throughout the world, is becoming increasingly endangered in its endemic environment of the California Channel Islands. A population on an island is smaller than those on the mainland because of limited resources like space, food and shelter. Island populations are therefore highly susceptible to external threats ranging from introduced predatory species and humans to extreme weather.
On the California Channel Islands, it was found that the population of the island fox was so low due to an outbreak of canine distemper virus from 1999 to 2000 as well as predation by non-native golden eagles. Since 1993, the eagles have caused the population to decline by as much as 95%. Because of the low number of foxes, the population went through an Allee effect (an effect in which, at low enough densities, an individual's fitness decreases). Conservationists had to take healthy breeding pairs out of the wild population to breed them in captivity until they had enough foxes to release back into the wild. Nonnative grazers were also removed so that native plants would be able to grow back to their natural height, thereby providing adequate cover and protection for the foxes against golden eagles.
Pseudalopex fulvipes
Darwin's fox was considered critically endangered because of its small known population of 250 mature individuals as well as its restricted distribution. However, the IUCN has since downgraded its conservation status from critically endangered (in the 2004 and 2008 assessments) to endangered (in the 2016 assessment), following findings of a wider distribution than previously reported. On the Chilean mainland, the population is limited to Nahuelbuta National Park and the surrounding Valdivian rainforest. Similarly, on Chiloé Island, the population is limited to the forests that extend from the southernmost to the northwesternmost part of the island. Though Nahuelbuta National Park is protected, 90% of the species lives on Chiloé Island.
A major issue the species faces is its dwindling, limited habitat due to the cutting and burning of the unprotected forests. Because of deforestation, the Darwin's fox's habitat is shrinking, while the open space preferred by its competitor, the chilla fox, is increasing; the Darwin's fox is subsequently being outcompeted. Another problem the species faces is its inability to fight off diseases transmitted by the increasing number of pet dogs. To conserve these animals, researchers suggest that the forests linking Nahuelbuta National Park to the coast of Chile, and in turn to Chiloé Island and its forests, need to be protected. They also suggest that other forests around Chile be examined to determine whether Darwin's foxes have previously existed there or could live there in the future, should the need to reintroduce the species to those areas arise. Finally, the researchers advise the creation of a captive breeding program in Chile because of the limited number of mature individuals in the wild.
Relationships with humans
Foxes are often considered pests or nuisance creatures for their opportunistic attacks on poultry and other small livestock. Fox attacks on humans are not common.
Many foxes adapt well to human environments, with several species classified as "resident urban carnivores" for their ability to sustain populations entirely within urban boundaries. Foxes in urban areas can live longer and can have smaller litter sizes than foxes in non-urban areas. Urban foxes are ubiquitous in Europe, where they show altered behaviors compared to non-urban foxes, including increased population density, smaller territory, and pack foraging. Foxes have been introduced in numerous locations, with varying effects on indigenous flora and fauna.
In some countries, foxes are major predators of rabbits and hens. Population oscillations of these two species were the first nonlinear oscillation studied and led to the derivation of the Lotka–Volterra equation.
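The predator–prey oscillation mentioned above can be sketched numerically. The code below integrates the classic Lotka–Volterra equations with a fourth-order Runge–Kutta step; all parameter values and starting populations are illustrative, not fitted to any real rabbit or fox population.

```python
# Classic Lotka-Volterra predator-prey model (illustrative parameters only):
#   dx/dt = a*x - b*x*y   (prey, "rabbits")
#   dy/dt = c*x*y - g*y   (predators, "foxes")

def lotka_volterra(x0, y0, a=1.0, b=0.1, c=0.02, g=0.4, dt=0.005, steps=10000):
    """Integrate with a simple RK4 scheme; returns (prey, predator) trajectories."""
    def deriv(x, y):
        return a * x - b * x * y, c * x * y - g * y

    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        k1x, k1y = deriv(x, y)
        k2x, k2y = deriv(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = deriv(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = deriv(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        xs.append(x)
        ys.append(y)
    return xs, ys

# Neither population settles to a fixed point; both cycle indefinitely.
rabbits, foxes = lotka_volterra(x0=30.0, y0=4.0)
```

Plotting `rabbits` against `foxes` traces a closed orbit around the equilibrium point (g/c, a/b), which is the hallmark of the model's sustained oscillation.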
As food
Fox meat is edible, though it is not a common part of any country's cuisine.
Hunting
Fox hunting originated in the United Kingdom in the 16th century. Hunting with dogs is now banned in the United Kingdom, though hunting without dogs is still permitted. Red foxes were introduced into Australia in the early 19th century for sport, and have since become widespread through much of the country. They have caused population decline among many native species and prey on livestock, especially new lambs. Fox hunting is practiced as recreation in several other countries including Canada, France, Ireland, Italy, Russia, United States and Australia.
Domestication
There are many records of domesticated red foxes and others, but rarely of sustained domestication. A recent and notable exception is the Russian silver fox, which resulted in visible and behavioral changes, and is a case study of an animal population being modeled according to human domestication needs. The current group of domesticated silver foxes is the result of nearly fifty years of experiments in the Soviet Union and Russia to domesticate the silver morph of the red fox de novo. This selective breeding resulted in physical and behavioral traits frequently seen in domestic cats, dogs, and other animals, such as pigmentation changes, floppy ears, and curly tails. Notably, the new foxes became tamer, allowing themselves to be petted, whimpering to get attention, and sniffing and licking their caretakers.
Urban settings
Foxes are among the comparatively few mammals that have adapted, to a degree, to living in urban (mostly suburban) human environments. Their omnivorous diet allows them to survive on discarded food waste, and their skittish and often nocturnal nature means they are often able to avoid detection despite their relatively large size.
Urban foxes have been identified as threats to cats and small dogs, and for this reason there is often pressure to exclude them from these environments.
The San Joaquin kit fox is a highly endangered species that has, ironically, become adapted to urban living in the San Joaquin Valley and Salinas Valley of central California. Its diet includes mice, ground squirrels, rabbits, hares, bird eggs, and insects, and it has claimed habitats in open areas, golf courses, drainage basins, and school grounds.
Though rare, bites by foxes have been reported; in 2018, a woman in Clapham, London was bitten on the arm by a fox after she had left the door to her flat open.
In popular culture
The fox appears in many cultures, usually in folklore, with slight variations in its depiction. In European, Persian, East Asian, and Native American folklore, foxes are symbols of cunning and trickery, a reputation derived especially from their reputed ability to evade hunters. This is usually represented as a character possessing these traits, which are applied to a wide variety of characters, making them either a nuisance to the story, a misunderstood hero, or a devious villain.
In East Asian folklore, foxes are depicted as familiar spirits possessing magic powers. As in the folklore of other regions, foxes are portrayed as mischievous, usually tricking other people, with the ability to disguise themselves as an attractive female human. Others depict them as mystical, sacred creatures who can bring wonder or ruin. Nine-tailed foxes appear in Chinese folklore, literature, and mythology, in which, depending on the tale, they can be a good or a bad omen. The motif was eventually introduced from Chinese culture into Japanese and Korean cultures.
The constellation Vulpecula represents a fox.
Notes
References
External links
BBC Wales Nature: Fox videos
Mammal common names
Paraphyletic groups | Fox | Biology | 3,602 |
64,377,524 | https://en.wikipedia.org/wiki/Jeanneney%20Rabearivony | Jeanneney Rabearivony is a Malagasy ecologist and herpetologist.
Life and research
Rabearivony grew up in rural Madagascar, and spent much of his childhood in the forest. This familiarity made him keenly aware of the forest's disappearance and set him on the path to a career in environmental conservation.
Rabearivony received his MSc in Conservation Biology from the Durrell Institute of Conservation and Ecology (DICE), and a Diplôme d'Études Approfondies (DEA) in Ecology and Environmental Studies from the University of Antananarivo in 1999. Thereafter he conducted his PhD on chameleon ecology and conservation at the University of Antananarivo, finishing his studies in 2013.
Rabearivony joined the WWF in July 2009 as manager of the Holistic Forest Conservation Project (PHCF) in Andapa, after serving as manager of humid zones for the Peregrine Fund. In this role, he works closely with local people to understand their perspectives and resource needs, helping to devise management plans that suit their requirements. He considers it imperative that the authorities managing Madagascar's forests and waters spend time in the field, in order to understand their role and the needs of the rural peoples of Madagascar, and has advocated coupling local resource management with socio-economic engagement in order to improve the effectiveness of biodiversity protection.
Currently, Rabearivony is Dean of the Faculty of Sciences of the University of Antsiranana.
References
Herpetologists
21st-century scientists
Malagasy scientists
Year of birth missing (living people)
Living people
Ecologists | Jeanneney Rabearivony | Environmental_science | 332 |
2,526,988 | https://en.wikipedia.org/wiki/Isotopes%20of%20ruthenium | Naturally occurring ruthenium (44Ru) is composed of seven stable isotopes (of which two may in the future be found radioactive). Additionally, 27 radioactive isotopes have been discovered. Of these radioisotopes, the most stable are 106Ru, with a half-life of 373.59 days; 103Ru, with a half-life of 39.26 days and 97Ru, with a half-life of 2.9 days.
Twenty-four other radioisotopes have been characterized with atomic weights ranging from 86.95 u (87Ru) to 119.95 u (120Ru). Most of these have half-lives that are less than five minutes, except 94Ru (half-life: 51.8 minutes), 95Ru (half-life: 1.643 hours), and 105Ru (half-life: 4.44 hours).
The primary decay mode before the most abundant isotope, 102Ru, is electron capture, and the primary mode after is beta emission. The primary decay product before 102Ru is technetium, and the primary product after is rhodium.
Because of the very high volatility of ruthenium tetroxide (RuO4), radioactive ruthenium isotopes, with their relatively short half-lives, are considered the second most hazardous gaseous isotopes after iodine-131 in the event of release by a nuclear accident. The two most important ruthenium isotopes in a nuclear accident are those with the longest half-lives: 103Ru (39.26 days) and 106Ru (373.59 days).
List of isotopes
|-id=Ruthenium-85
| 85Ru
| style="text-align:right" | 44
| style="text-align:right" | 41
| 84.96712(54)#
| 1#Β ms[>400Β ns]
|
|
| 3/2β#
|
|
|-id=Ruthenium-86
| 86Ru
| style="text-align:right" | 44
| style="text-align:right" | 42
| 85.95731(43)#
| 50#Β ms[>400Β ns]
|
|
| 0+
|
|
|-id=Ruthenium-87
| 87Ru
| style="text-align:right" | 44
| style="text-align:right" | 43
| 86.95091(43)#
| 50#Β ms[>1.5Β ΞΌs]
|
|
| 1/2β#
|
|
|-id=Ruthenium-88
| rowspan=2|88Ru
| rowspan=2 style="text-align:right" | 44
| rowspan=2 style="text-align:right" | 44
| rowspan=2|87.94166(32)#
| rowspan=2|1.5(3)Β s
| Ξ²+ (>96.4%)
| 88Tc
| rowspan=2|0+
| rowspan=2|
| rowspan=2|
|-
| Ξ²+, p (<3.6%)
| 87Mo
|-id=Ruthenium-89
| rowspan=2|89Ru
| rowspan=2 style="text-align:right" | 44
| rowspan=2 style="text-align:right" | 45
| rowspan=2|88.937338(26)
| rowspan=2|1.32(3)Β s
| Ξ²+ (96.7%)
| 89Tc
| rowspan=2|(9/2+)
| rowspan=2|
| rowspan=2|
|-
| Ξ²+, p (3.1%)
| 88Mo
|-id=Ruthenium-90
| 90Ru
| style="text-align:right" | 44
| style="text-align:right" | 46
| 89.9303444(40)
| 11.7(9)Β s
| Ξ²+
| 90Tc
| 0+
|
|
|-id=Ruthenium-91
| 91Ru
| style="text-align:right" | 44
| style="text-align:right" | 47
| 90.9267415(24)
| 8.0(4)Β s
| Ξ²+
| 91Tc
| (9/2+)
|
|
|-id=Ruthenium-91m
| rowspan=2 style="text-indent:1em" | 91mRu
| rowspan=2 colspan="3" style="text-indent:2em" | β340(500)Β keV
| rowspan=2|7.6(8)Β s
| Ξ²+ (>99.9%)
| 91Tc
| rowspan=2|(1/2β)
| rowspan=2|
| rowspan=2|
|-
| Ξ²+, p (?%)
| 90Mo
|-id=Ruthenium-92
| 92Ru
| style="text-align:right" | 44
| style="text-align:right" | 48
| 91.9202344(29)
| 3.65(5)Β min
| Ξ²+
| 92Tc
| 0+
|
|
|-id=Ruthenium-92m
| style="text-indent:1em" | 92mRu
| colspan="3" style="text-indent:2em" | 2833.9(18)Β keV
| 100(8)Β ns
| IT
| 92Ru
| (8+)
|
|
|-id=Ruthenium-93
| 93Ru
| style="text-align:right" | 44
| style="text-align:right" | 49
| 92.9171044(22)
| 59.7(6)Β s
| Ξ²+
| 93Tc
| (9/2)+
|
|
|-id=Ruthenium-93m1
| rowspan=3 style="text-indent:1em" | 93m1Ru
| rowspan=3 colspan="3" style="text-indent:2em" | 734.40(10)Β keV
| rowspan=3|10.8(3)Β s
| Ξ²+ (78.0%)
| 93Tc
| rowspan=3|(1/2)β
| rowspan=3|
| rowspan=3|
|-
| IT (22.0%)
| 93Ru
|-
| Ξ²+, p (0.027%)
| 92Mo
|-id=Ruthenium-93m2
| style="text-indent:1em" | 93m2Ru
| colspan="3" style="text-indent:2em" | 2082.5(9)Β keV
| 2.30(7)Β ΞΌs
| IT
| 93Ru
| (21/2)+
|
|
|-id=Ruthenium-94
| 94Ru
| style="text-align:right" | 44
| style="text-align:right" | 50
| 93.9113429(34)
| 51.8(6)Β min
| Ξ²+
| 94Tc
| 0+
|
|
|-id=Ruthenium-94m
| style="text-indent:1em" | 94mRu
| colspan="3" style="text-indent:2em" | 2644.1(4)Β keV
| 67.5(28)Β ΞΌs
| IT
| 94Ru
| 8+
|
|
|-id=Ruthenium-95
| 95Ru
| style="text-align:right" | 44
| style="text-align:right" | 51
| 94.910404(10)
| 1.607(4)Β h
| Ξ²+
| 95Tc
| 5/2+
|
|
|-id=Ruthenium-96
| 96Ru
| style="text-align:right" | 44
| style="text-align:right" | 52
| 95.90758891(18)
| colspan=3 align=center|Observationally Stable
| 0+
| 0.0554(14)
|
|-id=Ruthenium-97
| 97Ru
| style="text-align:right" | 44
| style="text-align:right" | 53
| 96.9075458(30)
| 2.8370(14)Β d
| Ξ²+
| 97Tc
| 5/2+
|
|
|-id=Ruthenium-98
| 98Ru
| style="text-align:right" | 44
| style="text-align:right" | 54
| 97.9052867(69)
| colspan=3 align=center|Stable
| 0+
| 0.0187(3)
|
|-id=Ruthenium-99
| 99Ru
| style="text-align:right" | 44
| style="text-align:right" | 55
| 98.90593028(37)
| colspan=3 align=center|Stable
| 5/2+
| 0.1276(14)
|
|-id=Ruthenium-100
| 100Ru
| style="text-align:right" | 44
| style="text-align:right" | 56
| 99.90421046(37)
| colspan=3 align=center|Stable
| 0+
| 0.1260(7)
|
|-id=Ruthenium-101
| 101Ru
| style="text-align:right" | 44
| style="text-align:right" | 57
| 100.90557309(44)
| colspan=3 align=center|Stable
| 5/2+
| 0.1706(2)
|
|-id=Ruthenium-101m
| style="text-indent:1em" | 101mRu
| colspan="3" style="text-indent:2em" | 527.56(10)Β keV
| 17.5(4)Β ΞΌs
| IT
| 101Ru
| 11/2β
|
|
|-id=Ruthenium-102
| 102Ru
| style="text-align:right" | 44
| style="text-align:right" | 58
| 101.90434031(45)
| colspan=3 align=center|Stable
| 0+
| 0.3155(14)
|
|-id=Ruthenium-103
| 103Ru
| style="text-align:right" | 44
| style="text-align:right" | 59
| 102.90631485(47)
| 39.245(8)Β d
| Ξ²β
| 103Rh
| 3/2+
|
|
|-id=Ruthenium-103m
| style="text-indent:1em" | 103mRu
| colspan="3" style="text-indent:2em" | 238.2(7)Β keV
| 1.69(7)Β ms
| IT
| 103Ru
| 11/2β
|
|
|-id=Ruthenium-104
| 104Ru
| style="text-align:right" | 44
| style="text-align:right" | 60
| 103.9054253(27)
| colspan=3 align=center|Observationally Stable
| 0+
| 0.1862(27)
|
|-id=Ruthenium-105
| 105Ru
| style="text-align:right" | 44
| style="text-align:right" | 61
| 104.9077455(27)
| 4.439(11)Β h
| Ξ²β
| 105Rh
| 3/2+
|
|
|-id=Ruthenium-105m
| style="text-indent:1em" | 105mRu
| colspan="3" style="text-indent:2em" | 20.606(14)Β keV
| 340(15)Β ns
| IT
| 105Ru
| 5/2+
|
|
|-id=Ruthenium-106
| 106Ru
| style="text-align:right" | 44
| style="text-align:right" | 62
| 105.9073282(58)
| 371.8(18)Β d
| Ξ²β
| 106Rh
| 0+
|
|
|-id=Ruthenium-107
| 107Ru
| style="text-align:right" | 44
| style="text-align:right" | 63
| 106.9099698(93)
| 3.75(5)Β min
| Ξ²β
| 107Rh
| (5/2)+
|
|
|-id=Ruthenium-108
| 108Ru
| style="text-align:right" | 44
| style="text-align:right" | 64
| 107.9101858(93)
| 4.55(5)Β min
| Ξ²β
| 108Rh
| 0+
|
|
|-id=Ruthenium-109
| 109Ru
| style="text-align:right" | 44
| style="text-align:right" | 65
| 108.9133237(96)
| 34.4(2)Β s
| Ξ²β
| 109Rh
| (5/2+)
|
|
|-id=Ruthenium-109m
| style="text-indent:1em" | 109mRu
| colspan="3" style="text-indent:2em" | 96.14(15)Β keV
| 680(30)Β ns
| IT
| 109Ru
| (5/2β)
|
|
|-id=Ruthenium-110
| 110Ru
| style="text-align:right" | 44
| style="text-align:right" | 66
| 109.9140385(96)
| 12.04(17)Β s
| Ξ²β
| 110Rh
| 0+
|
|
|-id=Ruthenium-111
| 111Ru
| style="text-align:right" | 44
| style="text-align:right" | 67
| 110.917568(10)
| 2.12(7)Β s
| Ξ²β
| 111Rh
| 5/2+
|
|
|-id=Ruthenium-112
| 112Ru
| style="text-align:right" | 44
| style="text-align:right" | 68
| 111.918807(10)
| 1.75(7)Β s
| Ξ²β
| 112Rh
| 0+
|
|
|-id=Ruthenium-113
| 113Ru
| style="text-align:right" | 44
| style="text-align:right" | 69
| 112.922847(41)
| 0.80(5)Β s
| Ξ²β
| 113Rh
| (1/2+)
|
|
|-id=Ruthenium-113m
| rowspan=2 style="text-indent:1em" | 113mRu
| rowspan=2 colspan="3" style="text-indent:2em" | 131(33)Β keV
| rowspan=2|510(30)Β ms
| Ξ²β (?%)
| 113Rh
| rowspan=2|(7/2β)
| rowspan=2|
| rowspan=2|
|-
| IT (?%)
| 113Ru
|-id=Ruthenium-114
| 114Ru
| style="text-align:right" | 44
| style="text-align:right" | 70
| 113.9246144(38)
| 0.54(3)Β s
| Ξ²β
| 114Rh
| 0+
|
|
|-id=Ruthenium-115
| 115Ru
| style="text-align:right" | 44
| style="text-align:right" | 71
| 114.929033(27)
| 318(19)Β ms
| Ξ²β
| 115Rh
| (1/2+)
|
|
|-id=Ruthenium-115m
| rowspan=2 style="text-indent:1em" | 115mRu
| rowspan=2 colspan="3" style="text-indent:2em" | 82(6)Β keV
| rowspan=2|76(6)Β ms
| Ξ²β (?%)
| 115Rh
| rowspan=2|(7/2β)
| rowspan=2|
| rowspan=2|
|-
| IT (?%)
| 115Ru
|-id=Ruthenium-116
| 116Ru
| style="text-align:right" | 44
| style="text-align:right" | 72
| 115.9312192(40)
| 204(6)Β ms
| Ξ²β
| 116Rh
| 0+
|
|
|-id=Ruthenium-117
| 117Ru
| style="text-align:right" | 44
| style="text-align:right" | 73
| 116.93614(47)
| 151(3)Β ms
| Ξ²β
| 117Rh
| 3/2+#
|
|
|-id=Ruthenium-117m
| style="text-indent:1em" | 117mRu
| colspan="3" style="text-indent:2em" | 185.0(4)Β keV
| 2.49(6)Β ΞΌs
| IT
| 117Ru
| 7/2β#
|
|
|-id=Ruthenium-118
| 118Ru
| style="text-align:right" | 44
| style="text-align:right" | 74
| 117.93881(22)#
| 99(3)Β ms
| Ξ²β
| 118Rh
| 0+
|
|
|-id=Ruthenium-119
| 119Ru
| style="text-align:right" | 44
| style="text-align:right" | 75
| 118.94409(32)#
| 69.5(20)Β ms
| Ξ²β
| 119Rh
| 3/2+#
|
|
|-id=Ruthenium-119m
| style="text-indent:1em" | 119mRu
| colspan="3" style="text-indent:2em" | 227.1(7)Β keV
| 384(22)Β ns
| IT
| 119Ru
|
|
|
|-id=Ruthenium-120
| 120Ru
| style="text-align:right" | 44
| style="text-align:right" | 76
| 119.94662(43)#
| 45(2)Β ms
| Ξ²β
| 120Rh
| 0+
|
|
|-id=Ruthenium-121
| 121Ru
| style="text-align:right" | 44
| style="text-align:right" | 77
| 120.95210(43)#
| 29(2)Β ms
| Ξ²β
| 121Rh
| 3/2+#
|
|
|-id=Ruthenium-122
| 122Ru
| style="text-align:right" | 44
| style="text-align:right" | 78
| 121.95515(54)#
| 25(1)Β ms
| Ξ²β
| 122Rh
| 0+
|
|
|-id=Ruthenium-123
| 123Ru
| style="text-align:right" | 44
| style="text-align:right" | 79
| 122.96076(54)#
| 19(2)Β ms
| Ξ²β
| 123Rh
| 3/2+#
|
|
|-id=Ruthenium-124
| 124Ru
| style="text-align:right" | 44
| style="text-align:right" | 80
| 123.96394(64)#
| 15(3)Β ms
| Ξ²β
| 124Rh
| 0+
|
|
|-id=Ruthenium-125
| 125Ru
| style="text-align:right" | 44
| style="text-align:right" | 81
| 124.96954(32)#
| 12#Β ms[>550Β ns]
|
|
| 3/2+#
|
|
In September 2017, an estimated 100 to 300 TBq (0.3 to 1 g) of 106Ru was released in Russia, probably in the Ural region. After ruling out release from a reentering satellite, it was concluded that the source was either a nuclear fuel cycle facility or radioactive source production. In France, levels of up to 0.036 mBq/m3 of air were measured. It is estimated that within a few tens of kilometres of the release site, levels may have exceeded the limits for non-dairy foodstuffs.
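The half-lives above translate directly into released activity through the exponential decay law. The sketch below is a back-of-the-envelope check, assuming a sample of pure 106Ru and the half-life quoted in this article; the printed specific activity is an estimate derived from that assumption, not a sourced value.

```python
import math

T_HALF_RU106_DAYS = 373.59      # 106Ru half-life, as quoted in this article
AVOGADRO = 6.02214076e23        # atoms per mole
MOLAR_MASS_RU106 = 105.907      # g/mol (approximate atomic mass of 106Ru)

def fraction_remaining(t_days, t_half=T_HALF_RU106_DAYS):
    """Fraction of the original 106Ru atoms left after t_days."""
    return 0.5 ** (t_days / t_half)

def specific_activity_bq_per_g(t_half_days=T_HALF_RU106_DAYS):
    """Activity A = lambda * N for one gram of pure 106Ru, in becquerels."""
    decay_const = math.log(2) / (t_half_days * 86400.0)   # per second
    atoms_per_gram = AVOGADRO / MOLAR_MASS_RU106
    return decay_const * atoms_per_gram

print(fraction_remaining(T_HALF_RU106_DAYS))   # 0.5 after one half-life
print(specific_activity_bq_per_g())            # on the order of 1e14 Bq per gram
```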
References
Isotope masses from:
Isotopic compositions and standard atomic masses from:
Half-life, spin, and isomer data selected from the following sources.
Ruthenium
Ruthenium | Isotopes of ruthenium | Chemistry | 4,523 |
55,687,902 | https://en.wikipedia.org/wiki/OnePlus%205T | The OnePlus 5T is an Android-based smartphone produced, released and marketed by OnePlus. It was unveiled on 16 November 2017 via a live streamed press event which aired on YouTube. It went on sale on 21 November 2017. It is an incremental update to its predecessor, the OnePlus 5, which was unveiled only five months prior. Some notable changes that are featured, includes, a larger display and thinner bezels found on the device with the repositioning of the fingerprint scanner from the front to the rear panel. On 17 May 2018 the OnePlus 5T was succeeded by the OnePlus 6.
Specifications
Hardware
The OnePlus 5T features a redesigned 6.01" Full Optic 2160×1080 AMOLED display, which the company calls the "Sunlight display" and claims provides a crisp and bright picture even in sunny environments, such as outdoors. The display uses an 18:9 (2:1) aspect ratio instead of the 16:9 aspect ratio found in most smartphones, and supports the DCI-P3 wide color gamut standard. It has a pixel density of 401 ppi and smaller bezels than previous OnePlus smartphones such as the OnePlus 5 and 3T. The 5T's body is made entirely from anodized aluminum; the display is protected by 2.5D Gorilla Glass 5, and the fingerprint sensor, which was moved to the rear of the phone, has a ceramic coating. The phone also features an "Alert Slider" on the left side of the device, which allows users to silence notifications, a feature also available on its predecessors the 5, 3T, and 3. The 5T is powered by the Qualcomm Snapdragon 835 and comes with either 6 or 8 GB of LPDDR4X RAM, depending on the storage configuration. Like its predecessor, the phone is available in either a 64 GB or 128 GB storage configuration.
Unlike past OnePlus devices, the phone has no physical hardware buttons on the front, using virtual navigation keys instead. It features a non-removable 3,300 mAh battery capable of fast charging via OnePlus's proprietary Dash Charge through its USB-C port. The phone retains the 3.5 mm headphone jack but lacks stereo speakers, instead opting for a mono speaker located on the bottom of the device. On the rear, the device combines a 16 MP main lens (Sony Exmor IMX398) and a 20 MP secondary lens (Sony Exmor IMX376K), both with an aperture of f/1.7; they are capable of shooting 4K video and "Portrait mode" shots, like many other flagship devices. The front (selfie) camera is also a 16 MP sensor, with an aperture of f/2.0. The 5T features a "Face Unlock" facial recognition feature that can unlock the device but cannot authenticate purchases. Because it uses 2D scanning, it is faster than some competitors' systems, but it does not work in certain lighting conditions and is less secure.
The OnePlus 5T has become available in multiple limited edition variants over time, such as a Star Wars edition promoting Star Wars: The Last Jedi, which came with a special case and a new color scheme; a "Sandstone White" variant which was released on 5 January 2018 and featured a white color scheme and a red "Alert Slider", which sold out in under 2 hours; and a "Lava Red" variant which was released on 11 January 2018 initially in India, and became available in Europe and North America on 6 February.
Software
The OnePlus 5T ships with Android 7.1.1 "Nougat" and uses the OxygenOS user interface, OnePlus's proprietary custom skin built on top of Android, adding various features not found in the stock Android operating system, such as night mode and reading mode, which both change the color temperature of the device's screen, and the ability to change the Bluetooth audio codec. The night mode reduces the screen's color temperature to reduce blue light levels, while the reading mode applies a monochrome-like effect to the screen. Unlike the OnePlus 5, the 5T no longer overrides performance scaling in benchmarks to max out CPU and GPU clocks in specific applications, making benchmarks run on the 5T more representative.
OnePlus promised an update to Android 8 "Oreo" (OxygenOS 5.0) in early 2018. The first beta for this version was released on 29 December 2017, but was pulled a couple days later due to issues with regards to instability that were present. OnePlus fixed the issues and reuploaded the beta version on 3 January 2018.
The first stable Android Oreo-based version of OxygenOS (OxygenOS 5.0.2) for the OnePlus 5T was released on 31 January 2018, introducing features such as Picture-in-Picture, Autofill, and Notification Dots, as well as a faster boot time. The update also brought a redesigned Quick Settings menu, a security patch for CVE-2017-13218, and updates to a few system applications. It does not include the Treble feature for device-independent system updates. On 11 March 2018, OnePlus released the Android 8.1 "Oreo" update in beta for the OnePlus 5 and 5T, bringing the changes made in Android 8.1, such as the Neural Networks and Shared Memory APIs and the dimming of the navigation bar when it is not in use, along with other improvements such as refinements to the Autofill framework. The update also raised the security patch level to 1 February 2018. In Q2 2020, the OnePlus 5 and 5T received their stable Android 10 updates with OxygenOS.
Network compatibility
The OnePlus 5T includes only a single variant for cellular networks worldwide.
Reception
Critical reception
The OnePlus 5T received generally positive reviews. The Verge praised the phone's elegant design and optimized software but noted that the camera was still mediocre. Engadget praised the phone for its excellent performance given its price bracket.
Sales
See also
Comparison of ARMv8-A cores, ARMv8 family
List of Qualcomm Snapdragon systems on chips
References
External links
OnePlus mobile phones
Ubuntu Touch devices
Mobile phones introduced in 2017
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones | OnePlus 5T | Technology | 1,408 |
3,990,606 | https://en.wikipedia.org/wiki/Robert%20Madge%20%28businessman%29 | Robert Hylton Madge (born 2 April 1952) is a British entrepreneur and technologist.
Career
In the 1980s, he founded and was chairman of Madge Networks, a pioneer of high speed networking technology.
He was formerly President of IDTrack, a European association for the identification and traceability of goods based on technologies such as RFID. He was also the founder of Olzet, a provider of services associated with the implementation of RFID solutions in the food industry.
He was President of the European Association for Secure Identification.
References
1952 births
Living people
British technology company founders
People in information technology
Radio-frequency identification | Robert Madge (businessman) | Technology,Engineering | 128 |
11,074,998 | https://en.wikipedia.org/wiki/Mooring%20%28oceanography%29 | A mooring in oceanography is a collection of devices connected to a wire and anchored on the sea floor. It is the Eulerian way of measuring ocean currents, since a mooring is stationary at a fixed location. In contrast to that, the Lagrangian way measures the motion of an oceanographic drifter, the Lagrangian drifter.
Construction principle
The mooring is held up in the water column with various forms of buoyancy, such as glass balls and syntactic foam floats. The attached instrumentation is wide-ranging but often includes CTDs (conductivity, temperature, and depth sensors), current meters (e.g. acoustic Doppler current profilers or older rotor current meters), and biological sensors measuring various parameters. Long-term moorings can be deployed for durations of two years or more, powered by alkaline or lithium battery packs.
Components
Top buoy
Surface buoys
Moorings often include surface buoys that transmit real time data back to shore. The traditional approach is to use the Argos System. Alternatively, one may use the commercial Iridium satellites which allow higher data rates.
Submerged buoys
In deeper waters, in areas covered by sea ice, within or near shipping lanes, or in areas prone to theft or vandalism, moorings are often submerged with no surface markers. Submerged moorings typically use an acoustic release or a timed release that connects the mooring to an anchor weight on the sea floor. The weight is released by sending a coded acoustic command signal and stays on the sea floor. Deep-water anchors are typically made from steel and may be as heavy as 100 kg. A common deep-water anchor consists of a stack of 2–4 railroad wheels. In shallow waters, anchors may consist of a concrete block or a small portable anchor.
The buoyancy of the floats, i.e. of the top buoy plus additional packs of glass bulbs of foam, is sufficient to carry the instruments back to the surface. In order to avoid entangled ropes, it has been practical to place additional floats directly above each instrument.
Instrument housing
Prawlers
Prawlers (profiling crawlers) are sensor bodies that climb and descend the mooring cable to take observations at multiple depths. The energy to move is "free": the device ratchets upward using wave energy, then returns downward under gravity.
Depth correction
Similar to a kite in the wind, the mooring line will follow a so-called (half-)catenary.
The influence of currents (and of wind, if the top buoy is above the sea surface) can be modeled, and the shape of the mooring line can be determined by software. If the currents are strong (above 0.1 m/s) and the mooring line is long (more than 1 km), the instrument position may vary by up to 50 m.
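A first-order feel for this blow-down effect can be had by treating the whole mooring as a rigid line tilted by current drag; dedicated mooring-design software instead solves the catenary segment by segment. Every quantity below (line length, net buoyancy, drag area, drag coefficient) is purely illustrative.

```python
import math

def drag_force_n(speed_m_s, area_m2=0.5, cd=1.0, rho=1025.0):
    """Quadratic drag on floats and instruments: D = 0.5 * rho * Cd * A * U^2."""
    return 0.5 * rho * cd * area_m2 * speed_m_s ** 2

def static_knockdown(line_length_m, net_buoyancy_n, drag_n):
    """Rigid-line approximation: drag tilts the mooring by theta = atan(D / B).
    Returns (horizontal excursion, vertical knockdown) of the top float in metres."""
    theta = math.atan2(drag_n, net_buoyancy_n)
    horizontal = line_length_m * math.sin(theta)
    knockdown = line_length_m * (1.0 - math.cos(theta))
    return horizontal, knockdown

# Illustrative 2 km mooring with 3000 N of net buoyancy:
for u in (0.0, 0.1, 0.3):
    x, dz = static_knockdown(2000.0, 3000.0, drag_force_n(u))
    print(f"current {u} m/s -> excursion {x:.1f} m, knockdown {dz:.2f} m")
```

The tilt grows with the square of the current speed, which is why moorings in energetic regions need generous buoyancy or pressure sensors to correct instrument depths.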
See also
Benthic lander, a mooring which does not have any mooring line
References
Oceanography
Physical oceanography
Oceanographic instrumentation
Ocean currents
Biological oceanography
851,153 | https://en.wikipedia.org/wiki/Noah%20%28gaur%29 | Noah was the name of the first cloned gaur. He was cloned and gestated in the womb of a cow named Bessie. Gaurs are listed as a vulnerable species on the IUCN Red List. Noah was delivered on January 8, 2001, but he died within just 48 hours of dysentery on January 10, 2001. Noah's condition was monitored by Dr. Jonathan Hill and his teammates in Iowa. The process used to clone Noah was nuclear transfer.
References
BBC News (2000). "Endangered species cloned". Retrieved May 31, 2008.
CNN.com (2001). "First cloned endangered species dies 2 days after birth". Retrieved May 31, 2008.
2001 animal births
2001 animal deaths
Cloned animals
Individual bovines | Noah (gaur) | Biology | 161 |
24,993,728 | https://en.wikipedia.org/wiki/Cocamidopropyl%20hydroxysultaine | Cocamidopropyl hydroxysultaine (CAHS) is a synthetic amphoteric surfactant from the hydroxysultaine group. It is found in personal care products (soaps, shampoos, lotions etc.). It has uses as a foam booster, viscosity builder, and an antistatic agent.
See also
Cocamidopropyl betaine
References
External links
Household Products Database: Chemical Information
Zwitterionic surfactants
Antiseptics
Cosmetics chemicals
Antistatic agents
Quaternary ammonium compounds | Cocamidopropyl hydroxysultaine | Chemistry | 117 |
679,991 | https://en.wikipedia.org/wiki/Synthetic%20crude | Synthetic crude is the output from a bitumen/extra heavy oil upgrader facility used in connection with oil sand production. It may also refer to shale oil, an output from an oil shale pyrolysis. The properties of the synthetic crude depend on the processes used in the upgrading. Typically, it is low in sulfur and has an API gravity of around 30. It is also known as "upgraded crude".
Synthetic crude is an intermediate product produced when an extra-heavy or unconventional oil source is upgraded into a transportable form. Synthetic crude is then shipped to oil refineries where it is refined into finished products. Synthetic crude may also be mixed, as a diluent, with heavy oil to create synbit. Synbit is more viscous than synthetic crude, but can also be a less expensive alternative for transporting heavy oil to a conventional refinery.
Syncrude Canada, Suncor Energy Inc., and Canadian Natural Resources Limited are the three largest worldwide producers of synthetic crude. The NewGrade Energy Upgrader became operational in 1988 and was the first upgrader in Canada; it is now part of the CCRL Refinery Complex.
"Synthetic crude" may also refer to crude-like hydrocarbon mixes generated from other processes. Examples are manure-derived synthetic crude oil and greencrude.
See also
Albian Sands
Canadian Centre for Energy Information
History of the petroleum industry in Canada (oil sands and heavy oil)
Scotford Upgrader
Suncor
Syncrude
References
External links
Scotford Upgrader (Shell Canada website)
Scotford Complex (Shell Canada website)
Muskeg River Mine (Shell Canada website)
Scientists find bugs that eat waste and excrete petrol - Times Online
Synthetic fuels
Bituminous sands
Petroleum industry in Canada | Synthetic crude | Chemistry | 360 |
12,518,595 | https://en.wikipedia.org/wiki/Ray%E2%80%93Dutt%20twist | The RayβDutt twist is a mechanism proposed for the racemization of octahedral complexes containing three bidentate chelate rings. Such complexes typically adopt an octahedral molecular geometry in their ground states, in which case they possess helical chirality. The pathway entails formation of an intermediate of C2v point group symmetry. An alternative pathway that also does not break any metal-ligand bonds is called the Bailar twist. Both of these mechanism product complexes wherein the ligating atoms (X in the scheme) are arranged in an approximate trigonal prism.
This pathway is called the Ray–Dutt twist in honor of Priyadaranjan Ray (not Prafulla Chandra Ray) and N. K. Dutt, inorganic chemists at the Indian Association for the Cultivation of Science (IACS), who proposed this process.
See also
Pseudorotation
Bailar twist
Bartell mechanism
Berry mechanism
Fluxional molecule
Indian Association for the Cultivation of Science (IACS)
References
Molecular geometry
Stereochemistry
Coordination chemistry | RayβDutt twist | Physics,Chemistry | 212 |
78,119,260 | https://en.wikipedia.org/wiki/Hyperpositive%20nonlinear%20effect | A hyperpositive nonlinear effect is a very specific case of a nonlinear effect. A nonlinear effect in asymmetric catalysis is a phenomenon in which the enantiopurity of the catalyst (or chiral auxiliary) is not proportional to the enantiopurity of the product obtained. These phenomena were rationalized in the mid-1980s by Henri B. Kagan, who proposed simple mechanistic models, supported by mathematical models, to model experimental curves.
In 1994, H. B. Kagan and collaborators proposed more elaborate models that more closely matched the experimental results observed at the time. Using these models, the authors were able to make theoretical predictions about situations that had not yet been encountered experimentally. An example is a case "where the enantiomeric excess could take on much larger values for a partially resolved ligand than for an enantiomerically pure ligand". The authors proposed the term "hyperpositive nonlinear effect" to characterize this situation.
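For intuition, the simplest member of this model family, Kagan's ML2 expression ee_product = ee_max * ee_ligand * (1 + beta)/(1 + g*beta), can be tabulated numerically. The sketch below adds the simplifying assumption of a statistical ligand distribution, beta = (1 - ee^2)/(1 + ee^2); this variant reproduces ordinary positive and negative nonlinear effects, while the hyperpositive case requires the more elaborate models discussed here.

```python
def ml2_ee_product(ee_ligand, ee_max=1.0, g=0.1):
    """Kagan ML2 model with a statistical ligand distribution.

    beta is the ratio of heterochiral (meso) to homochiral catalyst species;
    g is the relative reactivity of the meso complex.
    g < 1 -> positive nonlinear effect, g = 1 -> strictly linear,
    g > 1 -> negative nonlinear effect."""
    beta = (1.0 - ee_ligand ** 2) / (1.0 + ee_ligand ** 2)
    return ee_max * ee_ligand * (1.0 + beta) / (1.0 + g * beta)

# With g = 0.1, a ligand of only 50% ee already gives a product ee
# well above the linear expectation of 0.5 * ee_max.
print(ml2_ee_product(0.5, g=0.1))
print(ml2_ee_product(0.5, g=1.0))  # linear reference
```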
This statement may seem somewhat implausible at first glance, but the possibility was observed experimentally 26 years later: the first experimental example of a hyperpositive nonlinear effect was described in 2020 by S. Bellemin-Laponnaz and colleagues, but the mechanism of this phenomenon turned out to be different from that originally proposed. This mechanism, which explains a hyperpositive nonlinear effect, has also been validated to explain cases of enantiodivergence.
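The classic nonlinear-effect behaviour that Kagan rationalized can be sketched numerically. The snippet below implements the widely cited ML2 model in its statistical-distribution limit; the function name, the parameter names (`ee_aux`, `ee_max`, `g`) and the chosen values are illustrative assumptions, not taken from the papers cited above, and the 2020 hyperpositive case follows a different mechanism than this model.

```python
# Sketch of Kagan's ML2 model for nonlinear effects in asymmetric catalysis,
# assuming a statistical distribution of ligands over the metal centres.
#   ee_aux : enantiomeric excess of the chiral ligand/auxiliary
#   ee_max : product ee obtained with enantiopure ligand
#   g      : relative reactivity of the heterochiral (meso) complex
# All names and values here are illustrative, not from the cited work.

def ee_product(ee_aux: float, ee_max: float = 1.0, g: float = 0.0) -> float:
    """Product ee predicted by the ML2 model with statistical ligand pairing."""
    # beta = (amount of heterochiral complex) / (amount of homochiral complexes)
    beta = (1 - ee_aux**2) / (1 + ee_aux**2)
    return ee_max * ee_aux * (1 + beta) / (1 + g * beta)

if __name__ == "__main__":
    for ee in (0.0, 0.25, 0.5, 0.75, 1.0):
        linear = ee                      # proportional (no nonlinear effect)
        nle = ee_product(ee, g=0.0)      # unreactive meso complex -> (+)-NLE
        print(f"ee_aux={ee:.2f}  linear={linear:.3f}  ML2={nle:.3f}")
```

With `g = 0` (unreactive meso complex) the curve bows above the diagonal, i.e. a positive nonlinear effect; `g = 1` recovers strict proportionality.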
References
Catalysis | Hyperpositive nonlinear effect | Chemistry | 304 |
3,100,105 | https://en.wikipedia.org/wiki/K%C3%B6the%20conjecture | In mathematics, the Köthe conjecture is an open problem in ring theory. It is formulated in various ways. Suppose that R is a ring. One way to state the conjecture is that if R has no nil ideal, other than {0}, then it has no nil one-sided ideal, other than {0}.
This question was posed in 1930 by Gottfried Köthe (1905–1989). The Köthe conjecture has been shown to be true for various classes of rings, such as polynomial identity rings and right Noetherian rings, but a general solution remains elusive.
Equivalent formulations
The conjecture has several different formulations:
(Köthe conjecture) In any ring, the sum of two nil left ideals is nil.
In any ring, the sum of two one-sided nil ideals is nil.
In any ring, every nil left or right ideal of the ring is contained in the upper nil radical of the ring.
For any ring R and for any nil ideal J of R, the matrix ideal Mn(J) is a nil ideal of Mn(R) for every n.
For any ring R and for any nil ideal J of R, the matrix ideal M2(J) is a nil ideal of M2(R).
For any ring R, the upper nilradical of Mn(R) is the set of matrices with entries from the upper nilradical of R for every positive integer n.
For any ring R and for any nil ideal J of R, the polynomials with indeterminate x and coefficients from J lie in the Jacobson radical of the polynomial ring R[x].
For any ring R, the Jacobson radical of R[x] consists of the polynomials with coefficients from the upper nilradical of R.
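Two of the formulations above can be written compactly; the notation Nil*(R) for the upper nilradical and J(−) for the Jacobson radical is standard, but this rendering is supplied here only for illustration.

```latex
% Köthe's conjecture, restating two of the equivalent forms above.
% Nil^*(R): upper nilradical of R;  J(R[x]): Jacobson radical of R[x].
\begin{itemize}
  \item If $R$ has no nonzero nil two-sided ideal, then $R$ has no
        nonzero nil one-sided ideal.
  \item Equivalently, for every ring $R$:
        $J\bigl(R[x]\bigr) = \operatorname{Nil}^{*}(R)\,[x]$.
\end{itemize}
```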
Related problems
A conjecture by Amitsur read: "If J is a nil ideal in R, then J[x] is a nil ideal of the polynomial ring R[x]." This conjecture, if true, would have proven the Köthe conjecture through the equivalent statements above; however, a counterexample was produced by Agata Smoktunowicz. While not a disproof of the Köthe conjecture, this fueled suspicions that the Köthe conjecture may be false.
Kegel proved that a ring which is the direct sum of two nilpotent subrings is itself nilpotent. The question arose whether or not "nilpotent" could be replaced with "locally nilpotent" or "nil". Partial progress was made when Kelarev produced an example of a ring which isn't nil, but is the direct sum of two locally nilpotent rings. This demonstrates that Kegel's question with "locally nilpotent" replacing "nilpotent" is answered in the negative.
The sum of a nilpotent subring and a nil subring is always nil.
References
External links
PlanetMath page
Survey paper (PDF)
Ring theory
Conjectures
Unsolved problems in mathematics | Köthe conjecture | Mathematics | 638 |
6,482,587 | https://en.wikipedia.org/wiki/Johannes%20Wislicenus | Johannes Wislicenus (24 June 1835 – 5 December 1902) was a German chemist, most famous for his work in early stereochemistry.
Biography
The son of the radical Protestant theologian Gustav Wislicenus, Johannes was born on 24 June 1835 in Kleineichstedt (now part of Querfurt, Saxony-Anhalt) in Prussian Saxony, and entered the University of Halle-Wittenberg in 1853. In October 1853 he emigrated to the United States with his family. For a brief time he acted as assistant to Harvard chemist Eben Horsford, and in 1855 was appointed lecturer at the Mechanics' Institute in New York. Returning to Europe in 1856, he continued to study chemistry with Wilhelm Heinrich Heintz at the University of Halle. In 1860, he began lecturing at the University of Zürich and at the Swiss Polytechnical Institute, and by 1868 he was Professor of Chemistry at the university. In 1870, he was chosen to succeed Georg Staedeler as Professor of General Chemistry at the Swiss Polytechnical Institute in Zürich, retaining also the position of full professor at the University of Zürich. In 1872, he succeeded Adolph Strecker in the chair of chemistry at the University of Würzburg, and in 1885, he succeeded Hermann Kolbe as Professor of Chemistry at the University of Leipzig, where he died on 5 December 1902.
Research
By the late 1860s, Wislicenus devoted his research to organic chemistry. His work on the isomeric lactic acids from 1868 to 1872 resulted in the discovery of two substances with different physical properties but with an identical chemical structure. He called this difference "geometrical isomerism". He would later promote J. H. van 't Hoff's theory of the tetrahedral carbon atom, believing that it, together with the supposition that there are "specially directed forces, the affinity-energies", which determine the relative position of atoms in the molecule, afforded a method by which the spatial arrangement of atoms in particular cases may be ascertained by experiment. While at Würzburg, Wislicenus developed the use of ethyl acetoacetate in organic synthesis. However, he was also active in inorganic chemistry, finding a reaction for the production of sodium azide. He was the first to prepare cyclopentane, in 1893.
Awards
In 1898 Wislicenus was awarded the Davy Medal by the Royal Society of London.
Notes
References
Attribution
Further reading
Berichte der deutschen chemischen Gesellschaft, 1905, volume 37, pp. 4861–4946
Proceedings of the Royal Society, A, 1907, volume 78, pages iii–xii
1835 births
1902 deaths
People from Querfurt
Chemists from the Kingdom of Prussia
Scientists from the Province of Saxony
Harvard University staff
Academic staff of ETH Zurich
Foreign members of the Royal Society
19th-century German chemists
Alldeutscher Verband members
University of Halle alumni
Academic staff of Leipzig University
Academic staff of the University of Würzburg
Stereochemists | Johannes Wislicenus | Chemistry | 609 |
63,319,956 | https://en.wikipedia.org/wiki/Korean%20Genome%20Project | Korean Genome Project (Korea1K) is the largest genome sequencing project in Korea, first launched in 2015 as part of the Genome Korea in Ulsan. As of 2021, the project has sequenced over 10,000 human genomes and is the first large-scale database for constructing a genetic map and diversity analysis of Koreans.
History
KGP originated from a 2006 national initiative to sequence the Korean reference genome and whole population genomes, led by KOBIC, KRIBB and NCSRD, KRISS, in Daejeon, Korea. From 2009, KGP was supported by the Genome Research Foundation and TheragenEtex to build the variome of Koreans as well as the Korean Reference Genome (KOREF). Starting from KOREF, a consensus variome reference providing information on millions of variants from 40 additional ethnically homogeneous genomes from the Korean Personal Genome Project was completed in 2017. An improved version of KOREF, constructed using long-read sequencing data produced by Oxford Nanopore PromethION and PacBio technologies, was then released, showcasing newer assembly technologies and techniques. In 2022 a new chromosome-level haploid assembly of KOREF was published, assembled using Oxford Nanopore Technologies PromethION, Pacific Biosciences HiFi-CCS, and Hi-C technology.
Since 2014, KGP has been supported by Ulsan National Institute of Science and Technology, Clinomics, and Ulsan City, Ulsan, Korea.
Science & development
Korea1K has used sequencing technologies such as MGI DNBSEQ-T7 and Illumina HiSeq2000, HiSeq2500, HiSeq4000, HiSeqX10, and NovaSeq6000. The variome data has served as a reference for studying the origin and composition of Korean ethnicity in comparison with ancient DNA sequences.
Korea1K released 1,094 Korean whole genome sequences on 27 May 2020, published in Science Advances.
In April 2024, Korea4K was published, making whole genome sequences of 4,157 Koreans publicly accessible alongside an imputation reference panel and 107 phenotypes derived from extensive health check-ups.
References
External links
KoreanGenome.org
Opengenome.net
www.srd.re.kr
1000genomes.kr
Genome projects | Korean Genome Project | Biology | 488 |
3,329,225 | https://en.wikipedia.org/wiki/Impossible%20differential%20cryptanalysis | In cryptography, impossible differential cryptanalysis is a form of differential cryptanalysis for block ciphers. While ordinary differential cryptanalysis tracks differences that propagate through the cipher with greater than expected probability, impossible differential cryptanalysis exploits differences that are impossible (having probability 0) at some intermediate state of the cipher algorithm.
Lars Knudsen appears to be the first to use a form of this attack, in the 1998 paper where he introduced his AES candidate, DEAL. The first presentation to attract the attention of the cryptographic community was later the same year at the rump session of CRYPTO '98, in which Eli Biham, Alex Biryukov, and Adi Shamir introduced the name "impossible differential" and used the technique to break 4.5 out of 8.5 rounds of IDEA and 31 out of 32 rounds of the NSA-designed cipher Skipjack. This development led cryptographer Bruce Schneier to speculate that the NSA had no previous knowledge of impossible differential cryptanalysis. The technique has since been applied to many other ciphers: Khufu and Khafre, E2, variants of Serpent, MARS, Twofish, Rijndael (AES), CRYPTON, Zodiac, Hierocrypt-3, TEA, XTEA, Mini-AES, ARIA, Camellia, and SHACAL-2.
Biham, Biryukov and Shamir also presented a relatively efficient specialized method for finding impossible differentials that they called a miss-in-the-middle attack. This consists of finding "two events with probability one, whose conditions cannot be met together."
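As a toy illustration of what a probability-0 difference looks like (this is a sketch, not any of the attacks cited above), one can tabulate difference propagation through a single 4-bit S-box and see that many input/output difference pairs can never occur. The S-box used here is PRESENT's, chosen only as a convenient published example.

```python
# Build the difference distribution table (DDT) of a 4-bit S-box and count
# the (input diff, output diff) pairs that occur with probability 0.
# S-box choice (PRESENT's) is arbitrary; any nonlinear S-box would do.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """DDT[dx][dy] = number of inputs x with S(x) ^ S(x ^ dx) == dy."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            dy = sbox[x] ^ sbox[x ^ dx]
            table[dx][dy] += 1
    return table

if __name__ == "__main__":
    t = ddt(SBOX)
    impossible = [(dx, dy) for dx in range(1, 16) for dy in range(1, 16)
                  if t[dx][dy] == 0]
    print(f"{len(impossible)} of 225 nonzero difference pairs are impossible")
```

In a full attack such impossible transitions are chained through many rounds (e.g. by the miss-in-the-middle construction) and used to discard key guesses that would make an "impossible" difference appear.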
References
Further reading
Cryptographic attacks | Impossible differential cryptanalysis | Technology | 341 |
106,165 | https://en.wikipedia.org/wiki/Globin | The globins are a superfamily of heme-containing globular proteins, involved in binding and/or transporting oxygen. These proteins all incorporate the globin fold, a series of eight alpha helical segments. Two prominent members include myoglobin and hemoglobin. Both of these proteins reversibly bind oxygen via a heme prosthetic group. They are widely distributed in many organisms.
Structure
Globin superfamily members share a common three-dimensional fold. This 'globin fold' typically consists of eight alpha helices, although some proteins have additional helix extensions at their termini. Since the globin fold contains only helices, it is classified as an all-alpha protein fold.
The globin fold is found in its namesake globin families as well as in phycocyanins. The globin fold was thus the first protein fold discovered (myoglobin was the first protein whose structure was solved).
Helix packaging
The eight helices of the globin fold core share significant nonlocal structure, unlike other structural motifs in which amino acids close to each other in primary sequence are also close in space. The helices pack together at an average angle of about 50 degrees, significantly steeper than other helical packings such as the helix bundle. The exact angle of helix packing depends on the sequence of the protein, because packing is mediated by the sterics and hydrophobic interactions of the amino acid side chains near the helix interfaces.
Evolution
Globins evolved from a common ancestor and can be divided into three lineages:
Family M (for myoglobin-like) or F (for FHb-like), which has a typical 3/3 fold.
Subfamily FHb, for flavohaemoglobins. Chimeric.
Subfamily SDgb, for single-domain globins (not to be confused with SSDgb).
Family S (for sensor-like), again with a 3/3 fold.
Subfamily GCS, for Globin-coupled sensors. Chimeric.
Subfamily PGb, for protoglobins. Single-domain.
Subfamily SSDgb, for sensor single-domain globins.
Family T (for truncated), with a 2/2 fold. All subfamilies can be chimeric, single-domain, or tandemly linked.
Subfamily TrHb1 (also T1 or N).
Subfamily TrHb2 (also T2 or O). Includes 2/2 phytoglobins.
Subfamily TrHb3 (also T3 or P).
The M/F family of globins is absent in archaea. Eukaryotes lack GCS, Pgb, and T3 subfamily globins.
Eight globins are known to occur in vertebrates: androglobin (Adgb), cytoglobin (Cygb), globin E (GbE, from bird eye), globin X (GbX, not found in mammals or birds), globin Y (GbY, from some mammals), hemoglobin (Hb), myoglobin (Mb) and neuroglobin (Ngb). All these types evolved from a single globin gene of the F/M family found in basal animals. This lineage has also given rise to an oxygen-carrying "hemoglobin" multiple times in other groups of animals. Several functionally different haemoglobins can coexist in the same species.
Sequence conservation
Although the fold of the globin superfamily is highly evolutionarily conserved, the sequences that form the fold can have as low as 16% sequence identity. While the sequence specificity of the fold is not stringent, the hydrophobic core of the protein must be maintained and hydrophobic patches on the generally hydrophilic solvent-exposed surface must be avoided in order for the structure to remain stable and soluble. The most famous mutation in the globin fold is a change from glutamate to valine in one chain of the hemoglobin molecule. This mutation creates a "hydrophobic patch" on the protein surface that promotes intermolecular aggregation, the molecular event that gives rise to sickle-cell disease.
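Percent sequence identity, the figure quoted above, is the fraction of matching positions in an alignment. This minimal sketch assumes an already-aligned pair of sequences ('-' marking gaps); the fragments compared are invented toy strings, not real globin sequences.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned columns where neither sequence has a gap."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    if not pairs:
        return 0.0
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

# Toy aligned fragments (invented, not real globin sequences)
a = "MVLSPADKTN"
b = "MVLSGEDKSN"
print(f"identity: {percent_identity(a, b):.0f}%")
```

Real comparisons would first align the sequences (e.g. with a dynamic-programming aligner) before counting identities.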
Subfamilies
Leghaemoglobin
Myoglobin
Erythrocruorin
Hemoglobin, beta
Hemoglobin, alpha
Myoglobin, trematode type
Globin, nematode
Globin, lamprey/hagfish type
Globin, annelid-type
Haemoglobin, extracellular
Examples
Human genes encoding globin proteins include:
CYGB
HBA1, HBA2, HBB, HBD, HBE1, HBG1, HBG2, HBM, HBQ1, HBZ, MB
The globins include:
Haemoglobin (Hb)
Myoglobin (Mb)
Neuroglobin: a myoglobin-like haemprotein expressed in vertebrate brain and retina, where it is involved in neuroprotection from damage due to hypoxia or ischemia. Neuroglobin belongs to a branch of the globin family that diverged early in evolution.
Cytoglobin: an oxygen sensor expressed in multiple tissues. Related to neuroglobin.
Erythrocruorin: highly cooperative extracellular respiratory proteins found in annelids and arthropods that are assembled from as many as 180 subunits into hexagonal bilayers.
Leghaemoglobin (legHb or symbiotic Hb): occurs in the root nodules of leguminous plants, where it facilitates the diffusion of oxygen to symbiotic bacteroids in order to promote nitrogen fixation.
Non-symbiotic haemoglobin (NsHb): occurs in non-leguminous plants, and can be over-expressed in stressed plants.
Flavohaemoglobins (FHb): chimeric, with an N-terminal globin domain and a C-terminal ferredoxin reductase-like NAD/FAD-binding domain. FHb provides protection against nitric oxide via its C-terminal domain, which transfers electrons to haem in the globin.
Globin E: a globin responsible for storing and delivering oxygen to the retina in birds.
Globin-coupled sensors: chimeric, with an N-terminal myoglobin-like domain and a C-terminal domain that resembles the cytoplasmic signalling domain of bacterial chemoreceptors. They bind oxygen, and act to initiate an aerotactic response or regulate gene expression.
Protoglobin: a single domain globin found in archaea that is related to the N-terminal domain of globin-coupled sensors.
Truncated 2/2 globin: lack the first helix, giving them a 2-over-2 instead of the canonical 3-over-3 alpha-helical sandwich fold. Can be divided into three main groups (I, II and III) based on structural features.
HbN (or GlbN): a truncated haemoglobin-like protein that binds oxygen cooperatively with a very high affinity and a slow dissociation rate, which may exclude it from oxygen transport. It appears to be involved in bacterial nitric oxide detoxification and in nitrosative stress.
Cyanoglobin (or GlbN): a truncated haemoprotein found in cyanobacteria that has high oxygen affinity, and which appears to serve as part of a terminal oxidase, rather than as a respiratory pigment.
HbO (or GlbO): a truncated haemoglobin-like protein with a lower oxygen affinity than HbN. HbO associates with the bacterial cell membrane, where it significantly increases oxygen uptake over membranes lacking this protein. HbO appears to interact with a terminal oxidase, and could participate in an oxygen/electron-transfer process that facilitates oxygen transfer during aerobic metabolism.
Glb3: a nuclear-encoded truncated haemoglobin from plants that appears more closely related to HbO than HbN. Glb3 from Arabidopsis thaliana (Mouse-ear cress) exhibits an unusual concentration-independent binding of oxygen and carbon dioxide.
The globin fold
The globin fold (cd01067) also includes some non-haem proteins. Some of them are the phycobiliproteins, the N-terminal domain of two-component regulatory system histidine kinase, RsbR, and RsbN.
See also
C-rich stability element
Globular protein
Hemoglobin
Heme
Myoglobin
Phytoglobin
References
Protein superfamilies
Protein folds | Globin | Biology | 1,879 |
41,562,201 | https://en.wikipedia.org/wiki/List%20of%20items%20smuggled%20into%20space | Multiple people have covertly snuck items onto space missions without the knowledge of their superiors. During the Gemini program, Deke Slayton issued a memo to all astronauts urging a halt to the practice: "… the attempt … to bootleg any item on board not approved by me will result in appropriate disciplinary action. In addition to jeopardizing your personal careers, it must be recognized that seemingly insignificant items can and have affected the prerogatives of follow-on crews." Despite this and other warnings, the practice continued. Here is a partial list of those items.
On March 23, 1965, Gemini 3 astronauts Gus Grissom and John Young brought a corned beef sandwich into orbit, which was widely publicized in the media. They were reprimanded by NASA officials.
On December 15, 1965, Walter Schirra discreetly brought a harmonica on board Gemini VI-A and played the song "Jingle Bells". The incident marked the first time that a musical instrument was ever played in space and the harmonica is now in the possession of the National Air and Space Museum.
Schirra also reported bringing Scotch and cigarettes onto Gemini VI-A without permission.
On January 31, 1971, Edgar Mitchell brought materials on Apollo 14 to conduct unauthorized experiments into extrasensory perception.
On August 2, 1971, Apollo 15 commander David Scott placed a small metal statue on the Moon, named Fallen Astronaut to commemorate the astronauts and cosmonauts who had died in the advancement of space exploration.
Soviet cosmonauts aboard Soyuz 29 in 1978 brought chocolates on board their flight, which scattered in orbit and required two hours to collect.
In 2008 Richard Garriott claimed to have smuggled a small amount of James Doohan's ashes onto the ISS inside a number of laminated postcards. NASA declined to comment on the story.
In 2019, American entrepreneur Nova Spivack sent tardigrades to the Moon on board an Arch Mission Foundation lander without informing the Israeli launch company SpaceIL that they were part of the payload.
References
Spaceflight
Smuggling | List of items smuggled into space | Astronomy | 420 |
53,688,628 | https://en.wikipedia.org/wiki/SN%202013ej | SN 2013ej is a Type II-P supernova in the nearby spiral galaxy Messier 74 (NGC 628). It was discovered by the Lick Observatory Supernova Search on July 25, 2013, with the 0.76 m Katzman Automatic Imaging Telescope, with pre-discovery images having been taken the day before.
Supernova 2013ej was noted for being as bright as 12th magnitude.
SN 2013ej has been compared to the supernovae SN 2004et and SN 2007od. Based on various observations, it has been theorized that its progenitor was a red supergiant star.
SN 2013ej is one of the brightest Type II supernovae detected to date in NGC 628.
References
External links
Light curves and spectra on the Open Supernova Catalog
20130725
Supernovae
Pisces (constellation) | SN 2013ej | Chemistry,Astronomy | 176 |
51,392,611 | https://en.wikipedia.org/wiki/Marco%20Antonio%20Cuevas | Marco Antonio Cuevas Cruz (November 18, 1933 – February 2, 2009) was the son of Angel Rafael Cuevas del Cid and Maria Soledad Cruz Sierra. He had three siblings: Marta Cuevas del León, Rafael Cuevas del Cid and José Rodolfo Cuevas Cruz. He studied at Colégio de Infantes and the Liceo Guatemala, where he graduated as valedictorian. In 1955 he represented Guatemala in the second Pan American Games, winning the gold medal in rowing along with his team.
Born in Antigua Guatemala, he studied civil engineering at the University of San Carlos of Guatemala (USAC). At that time, he worked at the General Directorate of Roads and the Institute of Development of Municipal Works (INFOM), where he met the engineer Raul Aguilar Batres, a great precursor of planning in Guatemala, who urged him to undertake graduate studies in urban planning. He attended the program led by Yale University, based in Lima, Peru, from 1960 to 1963. Upon his return to Guatemala in 1965, he continued to work in the INFOM as the first graduate in urban planning in the history of Guatemala. He then introduced urban planning methods that made the execution of public works more effective, and created the Center for Urban and Regional Studies (CEUR) at USAC, where he taught until 1978, when he was appointed as the first manager of the Guatemalan Railroads after its nationalization during the government of Julio César Méndez Montenegro.
Between 1970 and 1994, with a team of professionals, he developed several master plans for the cities of Rio de Janeiro, Brazil; Santa Cruz, Bolivia; San Pedro Sula, Honduras; Nairobi, Kenya; and Guatemala (1972–2000). In 1986, as part of the activities of the Centre for Political Studies (CEDEP), he participated in the drafting of the new Constitution of Guatemala and, in the same year, was in charge of the formation of the new electoral system in the first election of a civilian government during the internal armed conflict of Guatemala (1959–1996). In the 1960s he served as a member of the Board of the Faculty of Engineering of the USAC. From 1990 to 2007 he led the development of several low-cost housing projects, introducing innovative concepts for planning and financing. At the same time, he developed the first national-level territorial ordering plan for Guatemala. He died on February 2, 2009.
The National Library of Guatemala inaugurated in 2016 a special room that houses the private collection of books on urban planning that belonged to Marco Antonio Cuevas.
See also
Raul Aguilar Batres
Bibliography
People from Guatemala City
Civil engineers
1933 births
2009 deaths
Guatemalan engineers | Marco Antonio Cuevas | Engineering | 544 |
971,961 | https://en.wikipedia.org/wiki/Plant%20reproductive%20morphology | Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction.
Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators.
Use of sexual terminology
Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm).
In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (including cycads, conifers, flowering plants, etc.) the sporophyte is the dominant generation; the obvious visible plant, whether a small herb or a large tree, is the sporophyte, and the gametophyte is very small. In bryophytes and ferns, the gametophytes are independent, free-living plants, while in seed plants, each female megagametophyte, and the megaspore that gives rise to it, is hidden within the sporophyte and is entirely dependent on it for nutrition. Each male gametophyte typically consists of two to four cells enclosed within the protective wall of a pollen grain.
The sporophyte of a flowering plant is often described using sexual terms (e.g. "female" or "male"). For example, a sporophyte that produces spores that give rise only to male gametophytes may be described as "male", even though the sporophyte itself is asexual, producing only spores. Similarly, flowers produced by the sporophyte may be described as "unisexual" or "bisexual", meaning that they give rise to either one sex of gametophyte or both sexes of the gametophyte.
Flowering plants
Basic flower morphology
The flower is the characteristic structure concerned with sexual reproduction in flowering plants (angiosperms). Flowers vary enormously in their structure (morphology). A perfect flower, like that of Ranunculus glaberrimus shown in the figure, has a calyx of outer sepals and a corolla of inner petals and both male and female sex organs. The sepals and petals together form the perianth. Next inwards there are numerous stamens, which produce pollen grains, each containing a microscopic male gametophyte. Stamens may be called the "male" parts of a flower and collectively form the androecium. Finally in the middle there are carpels, which at maturity contain one or more ovules, and within each ovule is a tiny female gametophyte. Carpels may be called the "female" parts of a flower and collectively form the gynoecium.
Each carpel in Ranunculus species is an achene that produces one ovule, which when fertilized becomes a seed. If the carpel contains more than one seed, as in Eranthis hyemalis, it is called a follicle. Two or more carpels may be fused together to varying degrees and the entire structure, including the fused styles and stigmas may be called a pistil. The lower part of the pistil, where the ovules are produced, is called the ovary. It may be divided into chambers (locules) corresponding to the separate carpels.
Variations
A perfect flower has both stamens and carpels, and is described as "bisexual" or "hermaphroditic". A unisexual flower is one in which either the stamens or the carpels are missing, vestigial or otherwise non-functional. Each flower is either staminate (having only functional stamens and thus male), or carpellate or pistillate (having only functional carpels and thus female). If separate staminate and carpellate flowers are always found on the same plant, the species is described as monoecious. If separate staminate and carpellate flowers are always found on different plants, the species is described as dioecious. A 1995 study found that about 6% of angiosperm species are dioecious, and that 7% of genera contain some dioecious species.
Members of the birch family (Betulaceae) are examples of monoecious plants with unisexual flowers. A mature alder tree (Alnus species) produces long catkins containing only male flowers, each with four stamens and a minute perianth, and separate stalked groups of female flowers, each without a perianth. (See the illustration of Alnus serrulata.)
Most hollies (members of the genus Ilex) are dioecious. Each plant produces either functionally male flowers or functionally female flowers. In Ilex aquifolium (see the illustration), the common European holly, both kinds of flower have four sepals and four white petals; male flowers have four stamens, female flowers usually have four non-functional reduced stamens and a four-celled ovary. Since only female plants are able to set fruit and produce berries, this has consequences for gardeners. Amborella represents the first known group of flowering plants to separate from their common ancestor. It too is dioecious; at any one time, each plant produces either flowers with functional stamens but no carpels, or flowers with a few non-functional stamens and a number of fully functional carpels. However, Amborella plants may change their "sex" over time. In one study, five cuttings from a male plant produced only male flowers when they first flowered, but at their second flowering three switched to producing female flowers.
In extreme cases, almost all of the parts present in a complete flower may be missing, so long as at least one carpel or one stamen is present. This situation is reached in the female flowers of duckweeds (Lemna), which consist of a single carpel, and in the male flowers of spurges (Euphorbia) which consist of a single stamen.
A species such as Fraxinus excelsior, the common ash of Europe, demonstrates one possible kind of variation. Ash flowers are wind-pollinated and lack petals and sepals. Structurally, the flowers may be bisexual, consisting of two stamens and an ovary, or may be male (staminate), lacking a functional ovary, or female (carpellate), lacking functional stamens. Different forms may occur on the same tree, or on different trees. The Asteraceae (sunflower family), with close to 22,000 species worldwide, have highly modified inflorescences made up of flowers (florets) collected together into tightly packed heads. Heads may have florets of one sexual morphology – all bisexual, all carpellate or all staminate (when they are called homogamous), or may have mixtures of two or more sexual forms (heterogamous). Thus goatsbeards (Tragopogon species) have heads of bisexual florets, like other members of the tribe Cichorieae, whereas marigolds (Calendula species) generally have heads with the outer florets bisexual and the inner florets staminate (male).
Like Amborella, some plants undergo sex-switching. For example, Arisaema triphyllum (Jack-in-the-pulpit) expresses sexual differences at different stages of growth: smaller plants produce all or mostly male flowers; as plants grow larger over the years the male flowers are replaced by more female flowers on the same plant. Arisaema triphyllum thus covers a multitude of sexual conditions in its lifetime: nonsexual juvenile plants, young plants that are all male, larger plants with a mix of both male and female flowers, and large plants that have mostly female flowers. Other plant populations have plants that produce more male flowers early in the year and as plants bloom later in the growing season they produce more female flowers.
Terminology
The complexity of the morphology of flowers and its variation within populations has led to a rich terminology.
Androdioecious: having male flowers on some plants, bisexual ones on others.
Androecious: having only male flowers (the male of a dioecious population); producing pollen but no seed.
Androgynous: see bisexual.
Androgynomonoecious: having male, female, and bisexual flowers on the same plant, also called trimonoecious.
Andromonoecious: having both bisexual and male flowers on the same plant.
Bisexual: each flower of each individual has both male and female structures, i.e. it combines both sexes in one structure. Flowers of this kind are called perfect, having both stamens and carpels. Other terms used for this condition are androgynous, hermaphroditic, monoclinous and synoecious.
Dichogamous: having sexes developing at different times; producing pollen when the stigmas are not receptive, either protandrous or protogynous. This promotes outcrossing by limiting self-pollination. Some dichogamous plants have bisexual flowers, others have unisexual flowers.
Diclinous: see Unisexual.
Dioecious: having either only male or only female flowers. No individual plant of the population produces both pollen and ovules. (From the Greek for "two households".)
Gynodioecious: having hermaphrodite flowers and female flowers on separate plants.
Gynoecious: having only female flowers (the female of a dioecious population); producing seed but not pollen.
Gynomonoecious: having both bisexual and female flowers on the same plant.
Hermaphroditic: see bisexual.
Homogamous: male and female sexes reach maturity in synchrony; producing mature pollen when the stigma is receptive.
Imperfect: (of flowers) having some parts that are normally present not developed, e.g. lacking stamens. See also Unisexual.
Monoclinous: see bisexual.
Monoecious: In the commoner narrow sense of the term, it refers to plants with unisexual flowers which occur on the same individual. In the broad sense of the term, it also includes plants with bisexual flowers. Individuals bearing separate flowers of both sexes at the same time are called simultaneously or synchronously monoecious and individuals that bear flowers of one sex at one time are called consecutively monoecious. (From the Greek monos "single" + oikia "house".)
Perfect: (of flowers) see bisexual.
Polygamodioecious: mostly dioecious, but with either a few flowers of the opposite sex or a few bisexual flowers on the same plant.
Polygamomonoecious: see polygamous. Or, mostly monoecious, but also partly polygamous.
Polygamous: having male, female, and bisexual flowers on the same plant. Also called polygamomonoecious or trimonoecious. Or, with bisexual and at least one of male and female flowers on the same plant.
Protandrous: (of dichogamous plants) having male parts of flowers developed before female parts, e.g. having flowers that function first as male and then change to female or producing pollen before the stigmas of the same plant are receptive. (Protoandrous is also used.)
Protogynous: (of dichogamous plants) having female parts of flowers developed before male parts, e.g. having flowers that function first as female and then change to male or producing pollen after the stigmas of the same plant are receptive.
Subandroecious: having mostly male flowers, with a few female or bisexual flowers.
Subdioecious: having some individuals in otherwise dioecious populations with flowers that are not clearly male or female. The population produces normally male or female plants with unisexual flowers, but some plants may have bisexual flowers, some both male and female flowers, and others some combination thereof, such as female and bisexual flowers. The condition is thought to represent a transition between bisexuality and dioecy.
Subgynoecious: having mostly female flowers, with a few male or bisexual flowers.
Synoecious: see bisexual.
Trimonoecious: see polygamous and androgynomonoecious.
Trioecious: with male, female and bisexual flowers on different plants.
Unisexual: having either functionally male or functionally female flowers. This condition is also called diclinous, incomplete or imperfect.
Outcrossing
Outcrossing, cross-fertilization or allogamy, in which offspring are formed by the fusion of the gametes of two different plants, is the most common mode of reproduction among higher plants. About 55% of higher plant species reproduce in this way. An additional 7% are partially cross-fertilizing and partially self-fertilizing (autogamy). About 15% produce gametes but are principally self-fertilizing, without significant outcrossing. Only about 8% of higher plant species reproduce exclusively by non-sexual means. These include plants that reproduce vegetatively by runners or bulbils, or which produce seeds without embryo fertilization (apomixis). The selective advantage of outcrossing appears to be the masking of deleterious recessive mutations.
The primary mechanism used by flowering plants to ensure outcrossing involves a genetic mechanism known as self-incompatibility. Various aspects of floral morphology promote allogamy. In plants with bisexual flowers, the anthers and carpels may mature at different times, plants being protandrous (with the anthers maturing first) or protogynous (with the carpels mature first). Monoecious species, with unisexual flowers on the same plant, may produce male and female flowers at different times.
Dioecy, the condition of having unisexual flowers on different plants, necessarily results in outcrossing, and probably evolved for this purpose. However, "dioecy has proven difficult to explain simply as an outbreeding mechanism in plants that lack self-incompatibility". Resource-allocation constraints may be important in the evolution of dioecy, for example, with wind-pollination, separate male flowers arranged in a catkin that vibrates in the wind may provide better pollen dispersal. In climbing plants, rapid upward growth may be essential, and resource allocation to fruit production may be incompatible with rapid growth, thus giving an advantage to delayed production of female flowers. Dioecy has evolved separately in many different lineages, and monoecy in the plant lineage correlates with the evolution of dioecy, suggesting that dioecy can evolve more readily from plants that already produce separate male and female flowers.
See also
Apomixis
Vegetative reproduction
Botany
Evolution of sexual reproduction
Flower
Evolutionary history of plants: Flowers
Flower: Development
Meiosis
External links
Images of sexual systems in flowering plants at bioimages.vanderbilt.edu
Plant morphology
Cystangium balpineum, better known as the white sessile truffle, is a basidiomycete mushroom.
See also
Truffle
Russulales
Taxa named by Cheryl A. Grgurinovic
Fungus species
Safety behaviors (also known as safety-seeking behaviors) are coping behaviors used to reduce anxiety and fear when the user feels threatened. An example of a safety behavior in social anxiety is to think of excuses to escape a potentially uncomfortable situation. These safety behaviors, although useful for reducing anxiety in the short term, might become maladaptive over the long term by prolonging anxiety and fear of nonthreatening situations. This problem is commonly experienced in anxiety disorders. Treatments such as exposure and response prevention focus on eliminating safety behaviors due to the detrimental role safety behaviors have in mental disorders. There is a disputed claim that safety behaviors can be beneficial to use during the early stages of treatment.
History
The concept of safety behaviors was first related to a mental disorder in 1984 when the "safety perspective" hypothesis was proposed to explain how agoraphobia is maintained over time. The "safety perspective" hypothesis states that people with agoraphobia act in ways they believe will increase or maintain their level of safety. In 1991, the use of safety behaviors was observed in people with panic disorders. Later studies observed the use of safety behaviors in people with other disorders such as social phobia, obsessive-compulsive disorder, and posttraumatic stress disorder.
Theories about effects
Safety behaviors directly amplify fear and anxiety.
The use of safety behaviors promotes the monitoring of anxiety symptoms. For example, people with panic disorders tend to monitor themselves for symptoms of anxiety and respond to these symptoms with avoidant behaviors. This over-analysis of physical sensations results in detection of symptoms that may not lead to panic attacks but are perceived as panic-inducing symptoms.
People with social phobia withdraw themselves from social situations by quietly speaking, reducing body movement, and preventing eye contact with other people. These behaviors are meant to reduce the chances of receiving criticism from other people. Instead, safety behaviors result in more criticism because people with social phobia are seen as aloof and unwelcoming people.
Safety behaviors reduce anxiety in feared situations but retain anxiety in the long term.
When a person uses safety behaviors to reduce anxiety and fear in a threatening situation, the anxiety and fear may subside. The user will then believe that the safety behaviors caused the emotional decrease and continue to use safety behaviors in future situations. However, the decrease in anxiety and fear may be due to other factors such as time.
The decrease in anxiety and fear may also be due to the situation itself. Situations that seem severely threatening, such as giving a presentation, are not actually very harmful. By avoiding the situation through the use of safety behaviors, the user is unable to realize that the situation is harmless, allowing the cycle of anxiety and behavior to continue.
Classification
Safety behaviors can be grouped into two major categories: preventive and restorative safety behaviors.
Preventative
These behaviors are also known as "emotional avoidance behaviors". These behaviors are aimed to reduce fear or anxiety in future situations.
Examples include:
Completely avoiding situations in which the threat might occur
Relying on safety signals such as inviting companions to social events for support
Subtle avoidance behaviors such as avoiding physical contact
Compulsive behaviors such as checking doors before leaving
Preparations for potentially encountering these situations
Restorative
These behaviors are aimed to reduce fear or anxiety in a currently threatening situation.
Examples include:
Escaping the situation
Using safety signals such as looking at cell phones to reduce social anxiety
Subtle avoidance behaviors such as breathing techniques
Compulsive behaviors such as repeatedly washing hands
Seeking reassurance from loved ones or professionals to ensure that the fears are unwarranted
Distracting attention from the threat or focusing attention on reducing the threat
Neutralizing the threat by praying or counting
Suppressing anxiety-provoking thoughts
Associated conditions
Agoraphobia
People may increase their risk for agoraphobia when they use safety behaviors to avoid potentially dangerous environments even though the danger may not be as severe as perceived. A common safety behavior is when a person with agoraphobia attempts to entirely avoid a crowded place such as a mall or a public bus. If the affected person does end up in a crowded area, then the person may tense his or her legs to prevent collapsing in the area. The affected person may also attempt to escape these crowded situations. People with agoraphobia then attribute the lack of feared symptoms to the safety behaviors instead of to the lack of danger itself. This incorrect attribution may lead to persisting fears and symptoms.
Generalized anxiety disorder
People with generalized anxiety disorder (GAD) view the world as a highly threatening environment. These people continuously search for safety and use safety behaviors. A common safety behavior used by GAD sufferers is seeking reassurance from a loved one to reduce the excessive worry. The affected person may also attempt to avoid all possible risks of danger and protect others from that danger. However, these behaviors are unlikely to significantly reduce anxiety because the affected person often has multiple fears that are not clearly defined.
Insomnia
People with insomnia tend to excessively worry about getting enough sleep and the consequences of not getting enough sleep. These people use safety behaviors in an attempt to reduce their excessive anxiety. However, the use of safety behaviors serves to increase anxiety and reduce the chances that the affected person will disconfirm these anxiety-provoking thoughts. A common safety behavior used by affected people is attempting to control the anxiety-provoking thoughts by distracting themselves with other thoughts. The affected person may also cancel appointments and decide not to work because the person believes that he or she will not function properly. The affected person may take naps to compensate for the lack of sleep.
Obsessive-compulsive disorder
People with obsessive-compulsive disorder (OCD) use safety behaviors to reduce their anxiety when obsessions arise. Common safety behaviors include washing hands more times than needed and avoiding potential contaminants by not shaking hands. However, when people with OCD use safety behaviors to reduce the chance of contamination, their awareness of potential contamination increases. This heightened awareness then leads to an increased fear of being contaminated.
Checking rituals, such as checking several times to determine if all of the doors to a house are locked, are also common safety behaviors. People with OCD often believe that if they do not perform their checking rituals, others will be in danger. Consequently, people with OCD often perceive themselves as more responsible for the wellbeing of others than people without the disorder. Therefore, people with OCD use safety behaviors when they believe that other people will be in danger if these behaviors are not used. Continuous checking reduces the certainty and vividness of memories related to checking. Exposure and response prevention therapy is effective in treating OCD.
Posttraumatic stress disorder
People with posttraumatic stress disorder (PTSD) believe that their general safety has been compromised after a trauma has occurred. These people use safety behaviors to restore their general sense of safety and to prevent the trauma from happening again. A common safety behavior used by affected people is staying awake for long periods of time to make sure that potential intruders do not attempt to break into their homes. The person may also attempt to avoid potential reminders of the trauma such as moving away from the place where the trauma occurred. These behaviors may lead to persistent fears because the behaviors prevent the affected person from disconfirming the threatening beliefs.
Schizophrenia
People with schizophrenia may have persecutory delusions. These people use safety behaviors to prevent the potential threats that arise from their persecutory delusions. Common safety behaviors include avoiding locations where perceived persecutors can be found and physically escaping from the perceived persecutors. These behaviors may increase the amount of persecutory delusions the person experiences because the safety behaviors prevent the affected person from disconfirming the threatening beliefs.
Social anxiety
Generally, people use social behaviors to either seek approval or avoid disapproval from others. People without social anxiety tend to use behaviors that are designed to gain approval from others, while people with social anxiety prefer to use behaviors that help to avoid disapproval from others.
Safety behaviors seem to reduce the chances of obtaining criticism by drawing less attention to the affected person. Common safety behaviors include avoiding eye contact with other people, focusing on saying the proper words, and other self-controlling behaviors.
Exposure therapy alone is mildly effective in treating social anxiety. There are larger decreases in anxiety and fear when people are also told to stop themselves from using safety behaviors during therapy than when people are encouraged to use safety behaviors. These decreases are largest when people are told to stop using safety behaviors and disconfirm the thoughts that the threatening situation will most likely not happen even if the safety behaviors are stopped. This combination of techniques is used in exposure and response prevention therapy for social anxiety.
Assessment measures
Several assessments have been developed to measure the amount of safety behaviors used by people with specific psychological conditions. Two examples of assessments developed to measure safety behaviors performed by people with social anxiety are the Social Behavior Questionnaire and the Subtle Avoidance Frequency Examination. An assessment developed to measure safety behaviors performed by people with panic disorder is the Texas Safety Maneuver Scale.
Social Behavior Questionnaire
The Social Behavior Questionnaire (SBQ) is an assessment of safety behaviors in social anxiety that was developed in 1994. The frequency at which a behavior is performed is rated from "never" to "always." Examples of safety behaviors recorded in this assessment include "avoiding asking questions" and "controlling shaking." The SBQ has been shown to distinguish people with strong fears of being negatively evaluated by others from those with weak fears.
Subtle Avoidance Frequency Examination
The Subtle Avoidance Frequency Examination (SAFE) is an assessment of safety behaviors in social anxiety that was developed in 2009. The frequency at which each behavior is performed and the total number of safety behaviors utilized are rated from "never" to "always." Examples of safety behaviors recorded in this assessment include "speaking softly" and "avoiding eye contact." This measure has been shown to distinguish between people with clinical levels of social anxiety and those without. This assessment has also been shown to support other measures of social anxiety such as the Social Phobia Scale.
Texas Safety Maneuver Scale
The Texas Safety Maneuver Scale (TSMS) is an assessment of safety behaviors in panic disorder that was developed in 1998. The frequency at which each behavior is performed is measured on a five-point scale from "never" to "always." Examples of safety behaviors recorded in this assessment include "checking pulse" and "avoiding stressful encounters." This assessment has also been shown to correlate with agoraphobia measures such as the Fear Questionnaire.
Objections to treatment
Some researchers have claimed that safety behaviors can be helpful in therapy but only when the behaviors are used during the early stages of treatment. For example, exposure therapy will appear less threatening if patients are able to use safety behaviors during the treatment. Patients will also feel more in control in the threatening situations if they are able to use their safety behaviors to reduce anxiety. The studies testing this claim have shown mixed results.
See also
Denial
Dissociation
Escapism
Risk compensation, adjusting behavior depending on perceived level of safety (or risk)
Security blanket
Self-medication
Sensitization
Stress management
Anxiety disorders
Human behavior
Psychological adjustment
African trypanosomiasis is an insect-borne parasitic infection of humans and other animals.
Human African trypanosomiasis (HAT), also known as African sleeping sickness or simply sleeping sickness, is caused by the species Trypanosoma brucei. Humans are infected by two types, Trypanosoma brucei gambiense (TbG) and Trypanosoma brucei rhodesiense (TbR). TbG causes over 92% of reported cases. Both are usually transmitted by the bite of an infected tsetse fly and are most common in rural areas.
Initially, the first stage of the disease is characterized by fevers, headaches, itchiness, and joint pains, beginning one to three weeks after the bite. Weeks to months later, the second stage begins with confusion, poor coordination, numbness, and trouble sleeping. Diagnosis is by finding the parasite in a blood smear or in the fluid of a lymph node. A lumbar puncture is often needed to tell the difference between first- and second-stage disease. If the disease is not treated quickly, it can lead to death.
Prevention of severe disease involves screening the at-risk population with blood tests for TbG. Treatment is easier when the disease is detected early and before neurological symptoms occur. Treatment of the first stage has been with the medications pentamidine or suramin. Treatment of the second stage has involved eflornithine or a combination of nifurtimox and eflornithine for TbG. Fexinidazole is a more recent treatment that can be taken by mouth, for either stage of TbG. While melarsoprol works for both types, it is typically only used for TbR, due to serious side effects. Without treatment, sleeping sickness typically results in death.
The disease occurs regularly in some regions of sub-Saharan Africa with the population at risk being about 70 million in 36 countries. An estimated 11,000 people are currently infected with 2,800 new infections in 2015. In 2018 there were 977 new cases. In 2015 it caused around 3,500 deaths, down from 34,000 in 1990. More than 80% of these cases are in the Democratic Republic of the Congo. Three major outbreaks have occurred in recent history: one from 1896 to 1906 primarily in Uganda and the Congo Basin, and two in 1920 and 1970, in several African countries. It is classified as a neglected tropical disease. Other animals, such as cows, may carry the disease and become infected in which case it is known as Nagana or animal trypanosomiasis.
Signs and symptoms
African trypanosomiasis symptoms occur in two stages: the hemolymphatic stage and the neurological stage (the latter being characterised by parasitic invasion of the central nervous system). Neurological symptoms occur in addition to the initial features, and the two stages may be difficult to distinguish based on clinical features alone.
The disease has been reported to present with atypical symptoms in infected individuals who originate from non-endemic areas (e.g. travelers). The reasons for this are unclear and may be genetic. The low number of such cases may also have skewed findings. In such persons, the infection is said to present mainly as fever with gastrointestinal symptoms (e.g. diarrhoea and jaundice) with lymphadenopathy developing only rarely.
Trypanosomal ulcer
Systemic disease is sometimes presaged by a trypanosomal ulcer developing at the site of the infectious fly bite within 2 days of infection. The ulcer is most commonly observed in T. b. rhodesiense infection, and only rarely in T. b. gambiense (however, in T. b. gambiense infection, ulcers are more common in persons from non-endemic areas).
Hemolymphatic phase
The incubation period is 1–3 weeks for T. b. rhodesiense, and longer (but less precisely characterised) in T. b. gambiense infection. The first/initial stage, known as the hemolymphatic phase, is characterized by non-specific, generalised symptoms such as intermittent fever, severe headaches, joint pains, itching, weakness, malaise, fatigue, weight loss, lymphadenopathy, and hepatosplenomegaly.
Diagnosis may be delayed due to the vagueness of initial symptoms. The disease may also be mistaken for malaria (which may occur as a co-infection).
Intermittent fever
Fever is intermittent, with attacks lasting from a day to a week, separated by intervals of a few days to a month or longer. Episodes of fever become less frequent throughout the disease.
Lymphadenopathy
Invasion of the circulatory and lymphatic systems by the parasite is associated with severe swelling of lymph nodes, often to tremendous sizes. Posterior cervical lymph nodes are most commonly affected, however, axillary, inguinal, and epitrochlear lymph node involvement may also occur. Winterbottom's sign, the tell-tale swollen lymph nodes along the back of the neck, may appear. Winterbottom's sign is common in T. b. gambiense infection.
Other features
Those affected may additionally present with: skin rash, haemolytic anaemia, hepatomegaly and abnormal liver function, splenomegaly, endocrine disturbance, cardiac involvement (e.g. pericarditis, and congestive heart failure), and ophthalmic involvement.
Neurological phase
The second phase of the disease, the neurological phase (also called the meningoencephalic stage), begins when the parasite invades the central nervous system by passing through the blood–brain barrier. Progression to the neurological phase occurs after an estimated 21–60 days in case of T. b. rhodesiense infection, and 300–500 days in case of T. b. gambiense infection.
In actuality, the two phases overlap and are difficult to distinguish based on clinical features alone; determining the actual stage of the disease is achieved by examining the cerebrospinal fluid for the presence of the parasite.
Sleep disorders
Sleep-wake disturbances are a leading feature of the neurological stage and give the disease its common name of "sleeping sickness". Infected individuals experience a disorganized and fragmented sleep-wake cycle. Those affected experience sleep inversion resulting in daytime sleep and somnolence, and nighttime periods of wakefulness and insomnia. Additionally, those affected also experience episodes of sudden sleepiness.
Neurological/neurocognitive symptoms
Neurological symptoms include: tremor, general muscle weakness, hemiparesis, paralysis of a limb, abnormal muscle tone, gait disturbance, ataxia, speech disturbances, paraesthesia, hyperaesthesia, anaesthesia, visual disturbance, abnormal reflexes, seizures, and coma. Parkinson-like movements might arise due to non-specific movement disorders and speech disorders.
Psychiatric/behavioural symptoms
Individuals may exhibit psychiatric symptoms which may sometimes dominate the clinical diagnosis and may include aggressiveness, apathy, irritability, psychotic reactions and hallucinations, anxiety, emotional lability, confusion, mania, attention deficit, and delirium.
Advanced/late disease and outcomes
Without treatment, the disease is invariably fatal, with progressive mental deterioration leading to coma, systemic organ failure, and death. An untreated infection with T. b. rhodesiense will cause death within months whereas an untreated infection with T. b. gambiense will cause death after several years. Damage caused in the neurological phase is irreversible.
Cause
Trypanosoma brucei gambiense accounts for the majority of African trypanosomiasis cases, with humans as the main reservoir needed for the transmission, while Trypanosoma brucei rhodesiense is mainly zoonotic, with accidental human infections. The epidemiology of African trypanosomiasis is dependent on the interactions between the parasite (trypanosome), the vector (tsetse fly), and the host.
Trypanosoma brucei
There are two subspecies of the parasite that are responsible for starting the disease in humans. Trypanosoma brucei gambiense causes the diseases in west and central Africa, whereas Trypanosoma brucei rhodesiense has a limited geographical range and is responsible for causing the disease in east and southern Africa. In addition, a third subspecies of the parasite known as Trypanosoma brucei brucei is responsible for affecting animals but not humans.
Humans are the main reservoir for T. b. gambiense but this species can also be found in pigs and other animals. Wild game animals and cattle are the main reservoir of T. b. rhodesiense. These parasites primarily infect individuals in sub-Saharan Africa because that is where the vector (tsetse fly) is located. The two human forms of the disease also vary greatly in intensity. T. b. gambiense causes a chronic condition that can remain in a passive phase for months or years before symptoms emerge and the infection can last about three years before death occurs.
T. b. rhodesiense causes the acute form of the disease: it is more virulent and faster-developing than T. b. gambiense, symptoms emerge within weeks, and death can occur within months. Furthermore, trypanosomes are surrounded by a coat that is composed of variant surface glycoproteins (VSG). These proteins act to protect the parasite from any lytic factors that are present in human plasma. The host's immune system recognizes the glycoproteins present on the coat of the parasite, leading to the production of different antibodies (IgM and IgG).
These antibodies will then act to destroy the parasites that circulate in the blood. However, of the many parasites present in the plasma, a small number will undergo changes in their surface coats, resulting in the formation of new VSGs. The antibodies produced by the immune system will then no longer recognize the parasite, allowing it to proliferate until new antibodies are created to combat the novel VSGs. Eventually, the immune system can no longer fight off the parasite because of the constant changes in VSGs, and the infection becomes established.
Vector
The tsetse fly (genus Glossina) is a large, brown, biting fly that serves as both a host and vector for the trypanosome parasites. While taking blood from a mammalian host, an infected tsetse fly injects metacyclic trypomastigotes into skin tissue. From the bite, parasites first enter the lymphatic system and then pass into the bloodstream. Inside the mammalian host, they transform into bloodstream trypomastigotes and are carried to other sites throughout the body, reach other body fluids (e.g., lymph, spinal fluid), and continue to replicate by binary fission.
The entire life cycle of African trypanosomes is represented by extracellular stages. A tsetse fly becomes infected with bloodstream trypomastigotes when taking a blood meal on an infected mammalian host. In the fly's midgut, the parasites transform into procyclic trypomastigotes, multiply by binary fission, leave the midgut, and transform into epimastigotes. The epimastigotes reach the fly's salivary glands and continue multiplication by binary fission.
The entire life cycle of the fly takes about three weeks. In addition to the bite of the tsetse fly, the disease can be transmitted by:
Mother-to-child infection: the trypanosome can sometimes cross the placenta and infect the fetus.
Laboratories: accidental infections, for example, through the handling of blood of an infected person and organ transplantation, although this is uncommon.
Blood transfusion
Sexual contact
Horse-flies (Tabanidae) and stable flies (Muscidae) possibly play a role in the transmission of nagana (the animal form of sleeping sickness) and the human disease form.
Pathophysiology
Tryptophol is a chemical compound produced by the trypanosomal parasite in sleeping sickness which induces sleep in humans.
Diagnosis
The gold standard for diagnosis is the identification of trypanosomes in a sample by microscopic examination. Samples that can be used for diagnosis include ulcer fluid, lymph node aspirates, blood, bone marrow, and, during the neurological stage, cerebrospinal fluid. Detection of trypanosome-specific antibodies can be used for diagnosis, but the sensitivity and specificity of these methods are too variable to be used alone for clinical diagnosis. Further, seroconversion occurs after the onset of clinical symptoms during a T. b. rhodesiense infection, so is of limited diagnostic use.
Trypanosomes can be detected from samples using two different preparations. A wet preparation can be used to look for the motile trypanosomes. Alternatively, a fixed (dried) smear can be stained using Giemsa's or Field's technique and examined under a microscope. Often, the parasite is in relatively low abundance in the sample, so techniques to concentrate the parasites can be used before microscopic examination. For blood samples, these include centrifugation followed by an examination of the buffy coat; mini anion-exchange/centrifugation; and the quantitative buffy coat (QBC) technique. For other samples, such as spinal fluid, concentration techniques include centrifugation followed by an examination of the sediment.
Three serological tests are also available for the detection of the parasite: the micro-CATT (card agglutination test for trypanosomiasis), wb-CATT, and wb-LATEX. The first uses dried blood, while the other two use whole blood samples. A 2002 study found the wb-CATT to be the most efficient for diagnosis, while the wb-LATEX is a better exam for situations where greater sensitivity is required.
Prevention
Currently, there are few medically related prevention options for African trypanosomiasis (i.e., no vaccine exists to confer immunity). Although the risk of infection from a tsetse fly bite is minor (estimated at less than 0.1%), the use of insect repellents, wearing long-sleeved clothing, avoiding tsetse-dense areas, implementing bush clearance methods, and wild game culling are the best options available to residents of affected areas for avoiding infection.
Regular active and passive surveillance, involving detection and prompt treatment of new infections, together with tsetse fly control, is the backbone of the strategy used to control sleeping sickness. Systematic screening of at-risk communities is the best approach, because case-by-case screening is not practical in endemic regions. Systematic screening may take the form of mobile clinics or fixed screening centres where teams travel daily to areas with high infection rates. Such screening efforts are important because early symptoms are not evident or serious enough to prompt people with gambiense disease to seek medical attention, particularly in very remote areas. Also, diagnosis of the disease is difficult and health workers may not associate such general symptoms with trypanosomiasis. Systematic screening allows early-stage disease to be detected and treated before the disease progresses, and removes the potential human reservoir. A single case of sexual transmission of West African sleeping sickness has been reported.
In July 2000, a resolution was passed to form the Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC). The campaign works to eradicate the tsetse vector population, and subsequently the protozoan disease, by use of insecticide-impregnated targets, fly traps, insecticide-treated cattle, ultra-low-dose aerial/ground spraying (SAT) of tsetse resting sites, and the sterile insect technique (SIT). The use of SIT in Zanzibar proved effective in eliminating the entire population of tsetse flies but was expensive and is relatively impractical to use in many of the endemic countries afflicted with African trypanosomiasis.
A pilot program in Senegal has reduced the tsetse fly population by as much as 99% by introducing male flies that have been sterilized by exposure to gamma rays.
Treatment
Treatment depends on whether the disease is diagnosed in the first or the second stage. A requirement for treatment of the second stage is that the drug passes the blood-brain barrier.
First stage
The treatment for first-stage disease is fexinidazole by mouth or pentamidine by injection for T. b. gambiense. Suramin by injection is used for T. b. rhodesiense.
Second stage
Fexinidazole may be used for the second stage of TbG if the disease is not severe. Otherwise, a regimen combining nifurtimox and eflornithine (nifurtimox-eflornithine combination treatment, NECT), or eflornithine alone, appears to be more effective and results in fewer side effects. These treatments may replace melarsoprol when available. NECT has the benefit of requiring fewer injections of eflornithine.
Intravenous melarsoprol was previously the standard treatment for second-stage (neurological phase) disease and is effective for both types. Melarsoprol is the only treatment for second stage T. b. rhodesiense; however, it causes death in 5% of people who take it. Resistance to melarsoprol can occur.
Drug development projects
A major challenge has been to find drugs that readily pass the blood-brain barrier. The latest drug to come into clinical use is fexinidazole, but promising results have also been obtained with the benzoxaborole drug acoziborole (SCYX-7158). This drug is currently under evaluation as a single-dose oral treatment, a great advantage over currently used drugs. Another extensively studied research field in Trypanosoma brucei is targeting its nucleotide metabolism. These studies have led both to the development of adenosine analogues that look promising in animal studies, and to the finding that downregulation of the P2 adenosine transporter is a common way to acquire partial drug resistance to the melaminophenyl arsenical and diamidine drug families (containing melarsoprol and pentamidine, respectively). Drug uptake and degradation are two major issues to consider in avoiding the development of drug resistance. Nucleoside analogues need to be taken up by the P1 nucleoside transporter (instead of P2), and they also need to be resistant to cleavage within the parasite.
Prognosis
If untreated, T. b. gambiense almost always results in death, with only a few individuals shown in a long-term 15-year follow-up to have survived after refusing treatment. T. b. rhodesiense, being a more acute and severe form of the disease, is consistently fatal if not treated.
Disease progression greatly varies depending on disease form. For individuals who are infected by T. b. gambiense, which accounts for 92% of all of the reported cases, a person can be infected for months or even years without signs or symptoms until the advanced disease stage, where it is too late to be treated successfully. For individuals affected by T. b. rhodesiense, which accounts for 2% of all reported cases, symptoms appear within weeks or months of the infection. Disease progression is rapid and invades the central nervous system, causing death within a short amount of time.
Epidemiology
In 2010, it caused around 9,000 deaths, down from 34,000 in 1990. As of 2000, the disability-adjusted life-years (9 to 10 years) lost due to sleeping sickness were 2.0 million. From 2010 to 2014, an estimated 55 million people were at risk of gambiense African trypanosomiasis and over 6 million at risk of rhodesiense African trypanosomiasis. In 2014, the World Health Organization reported 3,797 cases of human African trypanosomiasis, when the predicted number of cases was 5,000. The total number of cases reported in 2014 is an 86% reduction from the total reported in 2000.
The disease has been recorded as occurring in 37 countries, all in sub-Saharan Africa. The Democratic Republic of the Congo is the most affected country in the world, accounting for 75% of the Trypanosoma brucei gambiense cases. In 2009, the population at risk was estimated at about 69 million, with one-third of this number at a 'very high' to 'moderate' risk and the remaining two-thirds at a 'low' to 'very low' risk. Since then, the number of people being affected by the disease has continued to decline, with fewer than 1000 cases per year reported from 2018 onwards. Against this backdrop, sleeping sickness elimination is considered a real possibility, with the World Health Organization targeting the elimination of transmission of the gambiense form by 2030.
History
The condition has been present in Africa for thousands of years. Because of a lack of travel between Indigenous people, sleeping sickness in humans had been limited to isolated pockets. This changed after Arab slave traders entered central Africa from the east, following the Congo River, bringing parasites along. Gambian sleeping sickness travelled up the Congo River, and then further east.
An Arab writer of the 14th century left the following description in the case of a sultan of the Mali Kingdom: "His end was to be overtaken by the sleeping sickness (illat an-nawm) which is a disease that frequently befalls the inhabitants of these countries, especially their chieftains. Sleep overtakes one of them in such a manner that it is hardly possible to awake him."
The British naval surgeon John Atkins described the disease on his return from West Africa in 1734:
French naval surgeon Marie-ThΓ©ophile Griffon du Bellay treated and described cases while stationed aboard the hospital ship Caravane in Gabon in the late 1860s.
In 1901, a devastating epidemic erupted in Uganda, killing more than 250,000 people, including about two-thirds of the population in the affected lakeshore areas. According to The Cambridge History of Africa, "It has been estimated that up to half the people died of sleeping-sickness and smallpox in the lands on either bank of the lower river Congo."
The causative agent and vector were identified in 1903 by David Bruce, and the subspecies of the protozoa were differentiated in 1910. Bruce had earlier shown that T. brucei was the cause of a similar disease in horses and cattle that was transmitted by the tsetse fly (Glossina morsitans).
The first effective treatment, atoxyl, an arsenic-based drug developed by Paul Ehrlich and Kiyoshi Shiga, was introduced in 1910, but blindness was a serious side effect.
Suramin was first synthesized by Oskar Dressel and Richard Kothe in 1916 for Bayer. It was introduced in 1920 to treat the first stage of the disease. By 1922, suramin was generally combined with tryparsamide (another pentavalent organoarsenic drug), the first drug to enter the nervous system and be useful in the treatment of the second stage of the gambiense form. Tryparsamide was announced in the Journal of Experimental Medicine in 1919 and tested in the Belgian Congo by Louise Pearce of the Rockefeller Institute in 1920. It was used during the grand epidemic in West and Central Africa on millions of people and was the mainstay of therapy until the 1960s. American medical missionary Arthur Lewis Piper was active in using tryparsamide to treat sleeping sickness in the Belgian Congo in 1925.
Pentamidine, a highly effective drug for the first stage of the disease, has been used since 1937. During the 1950s, it was widely used as a prophylactic agent in western Africa, leading to a sharp decline in infection rates. At the time, eradication of the disease was thought to be at hand.
The organoarsenical melarsoprol (Arsobal), developed in the 1940s, is effective for people with second-stage sleeping sickness. However, 3–10% of those injected have reactive encephalopathy (convulsions, progressive coma, or psychotic reactions), and 10–70% of such cases result in death; it can cause brain damage in those who survive the encephalopathy. However, due to its effectiveness, melarsoprol is still used today. Resistance to melarsoprol is increasing, and combination therapy with nifurtimox is currently under research.
Eflornithine (difluoromethylornithine or DFMO), the most modern treatment, was developed in the 1970s by Albert Sjoerdsma and underwent clinical trials in the 1980s. The drug was approved by the United States Food and Drug Administration in 1990. Aventis, the company responsible for its manufacture, halted production in 1999. In 2001, Aventis, in association with Médecins Sans Frontières and the World Health Organization, signed a long-term agreement to manufacture and donate the drug.
In addition to sleeping sickness, previous names have included negro lethargy, maladie du sommeil (Fr), Schlafkrankheit (Ger), African lethargy, and Congo trypanosomiasis.
Research
The genome of the parasite has been sequenced and several proteins have been identified as potential targets for drug treatment. Analysis of the genome also revealed the reason why generating a vaccine for this disease has been so difficult. T. brucei has over 800 genes that make proteins the parasite "mixes and matches" to evade immune system detection.
Using a genetically modified form of a bacterium that occurs naturally in the gut of the vectors is being studied as a method of controlling the disease.
Recent findings indicate that the parasite is unable to survive in the bloodstream without its flagellum. This insight gives researchers a new angle with which to attack the parasite.
Trypanosomiasis vaccines are undergoing research.
Additionally, the Drugs for Neglected Diseases Initiative has contributed to African sleeping sickness research by developing a compound called fexinidazole. This project was originally started in April 2007 and enrolled 749 people in the DRC and Central African Republic. The results showed efficacy and safety in both stages of the disease, both in adults and in children ≥ 6 years old and weighing ≥ 20 kg. The European Medicines Agency approved it for first and second stage disease outside of Europe in November 2018. The treatment was approved in the DRC in December 2018.
Funding
For current funding statistics, human African trypanosomiasis is grouped with kinetoplastid infections. Kinetoplastids refer to a group of flagellate protozoa. Kinetoplastid infections include African sleeping sickness, Chagas' disease, and leishmaniasis. Altogether, these three diseases accounted for 4.4 million disability-adjusted life years (DALYs) and an additional 70,075 recorded deaths yearly. For kinetoplastid infections, total global research and development funding was approximately $136.3 million in 2012. African sleeping sickness, Chagas' disease, and leishmaniasis each received approximately a third of the funding: about US$36.8 million, US$38.7 million, and US$31.7 million, respectively.
For sleeping sickness, funding was split into basic research, drug discovery, vaccines, and diagnostics. The greatest amount of funding was directed towards basic research of the disease; approximately US$21.6 million was directed towards that effort. As for therapeutic development, approximately $10.9 million was invested.
The top funders for kinetoplastid infection research and development are public sources. About 62% of the funding comes from high-income countries, while 9% comes from low- and middle-income countries. High-income countries' public funding is the largest contributor to the neglected disease research effort. However, in recent years, funding from high-income countries has been steadily decreasing: in 2007, high-income countries provided 67.5% of the total funding, whereas in 2012 their public funds provided only 60% of the total funding for kinetoplastid infections. This downward trend leaves a gap for other funders, such as philanthropic foundations and private pharmaceutical companies, to fill.
Much of the progress that has been made in African sleeping sickness and neglected disease research as a whole is a result of other, non-public funders. One of these major sources of funding has come from foundations, which have become increasingly committed to neglected disease drug discovery in the 21st century. In 2012, philanthropic sources provided 15.9% of the total funding. The Bill and Melinda Gates Foundation has been a leader in providing funding for neglected disease drug development, providing US$444.1 million towards neglected disease research in 2012; to date, it has donated over US$1.02 billion towards neglected disease discovery efforts.
For kinetoplastid infections specifically, the foundation donated an average of US$28.15 million annually between 2007 and 2011. It has labeled human African trypanosomiasis a high-opportunity target, meaning it is a disease that presents the greatest opportunity for control, elimination, and eradication through the development of new drugs, vaccines, public health programs, and diagnostics. The foundation is the second-highest funding source for neglected diseases, immediately behind the US National Institutes of Health. At a time when public funding is decreasing and government grants for scientific research are harder to obtain, the philanthropic world has stepped in to push the research forward.
Another important component of increased interest and funding has come from industry. In 2012, industry contributed 13.1% of the total kinetoplastid research and development effort, and has additionally played an important role by contributing to public-private partnerships (PPPs) as well as product-development partnerships (PDPs). A public-private partnership is an arrangement between one or more public entities and one or more private entities that exists to achieve a specific health outcome or to produce a health product. The partnership can exist in numerous ways; the partners may share and exchange funds, property, equipment, human resources, and intellectual property. These public-private partnerships and product-development partnerships have been established to address challenges in the pharmaceutical industry, especially related to neglected disease research. They can help increase the scale of the effort toward therapeutic development by drawing on knowledge, skills, and expertise from different sources, and are more effective than industry or public groups working independently.
Other animals and reservoir
Trypanosoma of both the rhodesiense and gambiense types can affect other animals such as cattle and wild animals. African trypanosomiasis has generally been considered an anthroponotic disease and thus its control program was mainly focused on stopping the transmission by treating human cases and eliminating the vector. However, animal reservoirs were reported to possibly play an important role in the endemic nature of African trypanosomiasis, and for its resurgence in the historic foci of West and Central Africa.
References
External links
Links to pictures of Sleeping Sickness (Hardin MD/ University of Iowa)
Health in Africa
Insect-borne diseases
Parasitic diseases
Protozoal diseases
Wikipedia medicine articles ready to translate
Wikipedia infectious disease articles ready to translate
Sleep disorders
Tropical diseases
Zoonoses
African trypanosomiasis
Penicillium terrenum is an anamorph species of fungus in the genus Penicillium. Georegion is New Zealand.
References
Further reading
terrenum
Fungi described in 1968
Fungus species
A cam is a rotating or sliding piece in a mechanical linkage used especially in transforming rotary motion into linear motion. It is often a part of a rotating wheel (e.g. an eccentric wheel) or shaft (e.g. a cylinder with an irregular shape) that strikes a lever at one or more points on its circular path. The cam can be a simple tooth, as is used to deliver pulses of power to a steam hammer, for example, or an eccentric disc or other shape that produces a smooth reciprocating (back and forth) motion in the follower, which is a lever making contact with the cam. A cam timer is similar, and these were widely used for electric machine control (an electromechanical timer in a washing machine being a common example) before the advent of inexpensive electronics, microcontrollers, integrated circuits, programmable logic controllers and digital control.
Camshaft
The cam can be seen as a device that converts rotational motion to reciprocating (or sometimes oscillating) motion. A common example is the camshaft of an automobile, which takes the rotary motion of the engine and converts it into the reciprocating motion necessary to operate the intake and exhaust valves of the cylinders.
Displacement diagram
Cams can be characterized by their displacement diagrams, which reflect the changing position a follower would make as the surface of the cam moves in contact with the follower. In the example shown, the cam rotates about an axis. These diagrams relate angular position, usually in degrees, to the radial displacement experienced at that position. Displacement diagrams are traditionally presented as graphs with non-negative values. A simple displacement diagram illustrates the follower motion at a constant velocity rise followed by a similar return with a dwell in between as depicted in figure 2. The rise is the motion of the follower away from the cam center, dwell is the motion where the follower is at rest, and return is the motion of the follower toward the cam center.
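The constant-velocity rise, dwell, and return described above translate directly into a piecewise-linear function of cam angle. A minimal Python sketch follows; the lift value and segment boundaries are illustrative assumptions, not values taken from the article:

```python
def displacement(theta_deg, lift=10.0, rise_end=120, dwell_end=180, return_end=300):
    """Follower displacement for a constant-velocity rise/dwell/return cam.

    theta_deg: cam angle in degrees; values outside 0-360 wrap around.
    Returns the radial displacement away from the cam center (non-negative),
    as plotted in a displacement diagram.
    """
    t = theta_deg % 360
    if t < rise_end:                      # rise: displacement grows linearly
        return lift * t / rise_end
    if t < dwell_end:                     # dwell: follower at rest at full lift
        return lift
    if t < return_end:                    # return: displacement falls linearly
        return lift * (return_end - t) / (return_end - dwell_end)
    return 0.0                            # second dwell, back at the base circle

# A few sample points of the displacement diagram:
print([round(displacement(a), 2) for a in (0, 60, 120, 150, 240, 330)])
# → [0.0, 5.0, 10.0, 10.0, 5.0, 0.0]
```

The same function can be sampled at any resolution to plot the diagram of figure 2.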
A common type is in the valve actuators of internal combustion engines. Here, the cam profile is commonly symmetric, and at the rotational speeds generally met with, very high acceleration forces develop. Ideally, a convex curve between the onset and maximum position of lift reduces acceleration, but this requires impractically large shaft diameters relative to lift. Thus, in practice, the points at which lift begins and ends mean that a tangent to the base circle appears on the profile, continuous with a tangent to the tip circle. In designing the cam, the lift L and the dwell angle are given. If the profile is treated as a large base circle and a small tip circle, joined by a common tangent giving lift L, the remaining dimensions can be calculated from the angle τ between one tangent and the axis of symmetry, where d is the distance between the centres of the circles (required), r1 is the radius of the base circle (given) and r2 that of the tip circle (required):

d = L / (1 − sin τ)

and

r2 = r1 − d sin τ
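The relationship can be re-derived from the tangent-circle geometry. The sketch below is a hedged reconstruction, with symbol assignments that are this example's assumptions: lift L, tangent angle tau to the axis of symmetry, base-circle radius r1, tip-circle radius r2, and centre distance d. It uses the common-external-tangent relation r1 − r2 = d·sin(tau) together with lift = d + r2 − r1:

```python
import math

def cam_tangent_dimensions(lift, tau_deg, base_radius):
    """Dimensions of a tangent cam: a base circle and a small tip circle
    joined by common tangents.

    Assumed geometry (this sketch's symbols, not necessarily the article's):
        r1 - r2 = d * sin(tau)   # common external tangent to both circles
        lift    = d + r2 - r1    # max minus min follower radius
    Solving the pair gives the centre distance d and tip radius r2.
    """
    s = math.sin(math.radians(tau_deg))
    d = lift / (1.0 - s)            # distance between circle centres
    r2 = base_radius - d * s        # tip-circle radius
    return d, r2

d, r2 = cam_tangent_dimensions(lift=10.0, tau_deg=30.0, base_radius=40.0)
print(round(d, 2), round(r2, 2))  # → 20.0 30.0  (check: d + r2 - r1 = 10 = lift)
```

The consistency check in the final comment confirms the two assumed relations are satisfied simultaneously.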
Disc or plate cam
The most commonly used cam is the cam plate (also known as a disc cam or radial cam), which is cut out of a piece of flat metal or plate. Here, the follower moves in a plane perpendicular to the axis of rotation of the camshaft. Several key terms are relevant in the construction of plate cams: base circle, prime circle (with radius equal to the sum of the follower radius and the base circle radius), pitch curve (the radial curve traced out by applying the radial displacements away from the prime circle across all angles), and the lobe separation angle (LSA, the angle between two adjacent intake and exhaust cam lobes).
The base circle is the smallest circle that can be drawn to the cam profile.
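The prime-circle and pitch-curve definitions above combine mechanically; a minimal sketch (function and parameter names are illustrative, and the displacement diagram is passed in as a function of cam angle):

```python
def pitch_curve_radius(theta_deg, base_radius, follower_radius, disp):
    """Radius of the pitch curve at cam angle theta_deg.

    prime-circle radius = base-circle radius + (roller) follower radius;
    the pitch curve applies the displacement-diagram value on top of it.
    disp: displacement diagram as a function of cam angle in degrees.
    """
    prime_radius = base_radius + follower_radius
    return prime_radius + disp(theta_deg)

# During a dwell (zero displacement) the pitch curve sits on the prime circle:
print(pitch_curve_radius(0, base_radius=30.0, follower_radius=5.0,
                         disp=lambda a: 0.0))  # → 35.0
```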
A once common, but now outdated, application of this type of cam was automatic machine tool programming cams. Each tool movement or operation was controlled directly by one or more cams. Instructions for producing programming cams and cam generation data for the most common makes of machine, were included in engineering references well into the modern CNC era.
This type of cam is used in many simple electromechanical appliances controllers, such as dishwashers and clothes washing machines, to actuate mechanical switches that control the various parts.
Cylindrical cam
A cylindrical cam, or barrel cam, is a cam in which the follower rides on the surface of a cylinder. In the most common type, the follower rides in a groove cut into the surface of a cylinder. These cams are principally used to convert rotational motion to linear motion perpendicular to the rotational axis of the cylinder. A cylinder may have several grooves cut into the surface and drive several followers. Cylindrical cams can provide motions that involve more than a single rotation of the cylinder and generally provide positive positioning, removing the need for a spring or other provision to keep the follower in contact with the control surface.
Applications include machine tool drives, such as reciprocating saws, and shift control barrels in sequential transmissions, such as on most modern motorcycles.
A special case of this cam is a constant lead, where the position of the follower is linear with rotation, as in a lead screw. The purpose and detail of implementation influence whether this application is called a cam or a screw thread, but in some cases, the nomenclature may be ambiguous.
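A constant-lead cam's follower position is linear in rotation, exactly as for a lead screw. A minimal sketch (the lead value is an illustrative assumption):

```python
def follower_position(theta_deg, lead=2.0):
    """Axial follower position along a constant-lead barrel cam.

    lead: axial travel per full revolution (the same role as a screw's lead).
    Multi-turn grooves are allowed, so theta_deg is deliberately not wrapped.
    """
    return lead * theta_deg / 360.0

print(follower_position(360))   # one revolution -> one lead: 2.0
print(follower_position(900))   # 2.5 revolutions -> 5.0
```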
Cylindrical cams may also be used to reference an output to two inputs, where one input is the rotation of the cylinder and the other is the position of the follower along the cam. The output is radial to the cylinder. These were once common for special functions in control systems, such as fire control mechanisms for guns on naval vessels and mechanical analog computers.
An example of a cylindrical cam with two inputs is provided by a duplicating lathe, an example of which is the Klotz axe handle lathe, which cuts an axe handle to a form controlled by a pattern acting as a cam for the lathe mechanism.
Face cam
A face cam produces motion by using a follower riding on the face of a disk. The most common type has the follower ride in a slot so that the captive follower produces radial motion with positive positioning without the need for a spring or other mechanism to keep the follower in contact with the control surface. A face cam of this type generally has only one slot for a follower on each face. In some applications, a single element, such as a gear, a barrel cam or other rotating element with a flat face, may do duty as a face cam in addition to other purposes.
Face cams may provide repetitive motion with a groove that forms a closed curve or may provide function generation with a stopped groove. Cams used for function generation may have grooves that require several revolutions to cover the complete function, and in this case, the function generally needs to be invertible so that the groove does not self intersect, and the function output value must differ enough at corresponding rotations that there is sufficient material separating the adjacent groove segments. A common form is the constant lead cam, where the displacement of the follower is linear with rotation, such as the scroll plate in a scroll chuck. Non-invertible functions, which require the groove to self-intersect, can be implemented using special follower designs.
A variant of the face cam provides motion parallel to the axis of cam rotation. A common example is the traditional sash window lock, where the cam is mounted to the top of the lower sash, and the follower is the hook on the upper sash. In this application, the cam is used to provide a mechanical advantage in forcing the window shut, and also provides a self-locking action, like some worm gears, due to friction.
Face cams may also be used to reference a single output to two inputs, typically where one input is the rotation of the cam and the other is the radial position of the follower. The output is parallel to the axis of the cam. These were once common in mechanical analog computation and special functions in control systems.
A face cam that implements three outputs for a single rotational input is the stereo phonograph, where a relatively constant-lead groove guides the stylus and tonearm unit, acting as either a rocker-type (tonearm) or linear (linear-tracking turntable) follower, and the stylus alone acts as the follower for two orthogonal outputs representing the audio signals. These motions are in a plane radial to the rotation of the record and at angles of 45 degrees to the plane of the disk (normal to the groove faces). The position of the tonearm was used by some turntables as a control input, such as to turn the unit off or to load the next disk in a stack, but was ignored in simple units.
Heart shaped cam
This type of cam, in the form of a symmetric heart, is used to return a shaft holding the cam to a set position by pressure from a roller. They were used on early models of Post Office Master clocks to synchronise the clock time with Greenwich Mean Time when the activating follower was pressed onto the cam automatically via a signal from an accurate time source.
Snail drop cam
This type of cam was used, for example, in mechanical timekeeping clocking-in clocks to drive the day-advance mechanism at precisely midnight. It consisted of a follower raised over 24 hours by the cam in a spiral path that terminated in a sharp cut-off, at which the follower would drop down and activate the day advance. Where timing accuracy is required, as in clocking-in clocks, these were typically ingeniously arranged with a roller cam follower that raised the drop weight for most of its journey, to near its full height; only for the last portion of its travel was the weight taken over and supported by a solid follower with a sharp edge. This ensured that the weight dropped at a precise moment, enabling accurate timing. It was achieved by the use of two snail cams mounted coaxially, with the roller initially resting on one cam and the final solid follower on the other but not in contact with its cam profile. Thus the roller cam initially carried the weight until, at the final portion of the run, the profile of the non-roller cam rose above the other, causing the solid follower to take the weight.
Linear cam
A linear cam is one in which the cam element moves in a straight line rather than rotates. The cam element is often a plate or block but may be any cross-section. The key feature is that the input is a linear motion rather than rotational. The cam profile may be cut into one or more edges of a plate or block, may be one or more slots or grooves in the face of an element, or may even be a surface profile for a cam with more than one input. The development of a linear cam is similar to, but not identical to, that of a rotating cam.
A common example of a linear cam is a key for a pin tumbler lock. The pins act as followers. This behavior is exemplified when the key is duplicated in a key duplication machine, where the original key acts as a control cam for cutting the new key.
History
Cam mechanisms appeared in China at around 600 BC in the form of a crossbow trigger-mechanism with a cam-shaped swing arm. However, the trigger mechanism did not rotate around its own axis and traditional Chinese technology generally made little use of continuously rotating cams. Nevertheless, later research showed that such cam mechanisms did in fact rotate around their own axes. Likewise, more recent research indicates that cams were used in water-driven trip hammers by the latter half of the Western Han Dynasty (206 BC – 8 AD), as recorded in the Huan Zi Xin Lun. Complex pestles were also mentioned in later records such as the Jin Zhu Gong Zan and the Tian Gong Kai Wu, amongst many other records of water-driven pestles. During the Tang dynasty, the wooden clock within the water-driven astronomical device, the spurs inside a water-driven armillary sphere, the automated alarm within a five-wheeled sand-driven clock, and artificial paper figurines within a revolving lantern all utilized cam mechanisms. The Chinese hodometer, which utilized a bell and gong mechanism, is also a cam, as described in the Song Shi. In the book Nongshu, the vertical wheel of a water-driven wind box is also a cam. Of these examples, the water-driven pestle and the water-driven wind box both have two cam mechanisms inside. Cams that rotated continuously and functioned as integral machine elements were built into Hellenistic water-driven automata from the 3rd century BC.
The cam and camshaft later appeared in mechanisms by Al-Jazari, who used them in his automata, described in 1206. The cam and camshaft appeared in European mechanisms from the 14th century. Waldo J Kelleigh of Electrical Apparatus Company patented the adjustable cam in the United States in 1956 for its use in mechanical engineering and weaponry.
See also
References
External links
Cam design pages Creates animated cams for specified follower motions.
Kinematic Models for Design Digital Library (KMODDL) – Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
Introduction to Mechanisms – Cams: Classification, nomenclature, motion, and design of cams; information for the course, Introduction to Mechanisms, at Carnegie Mellon University.
Polynomial cam function with excel VBA file to demonstrate its motion
Canada's Biodiversity Outcomes Framework was approved by Ministers responsible for Environment, Forests, Parks, Fisheries and Aquaculture, and Wildlife in October 2006. It has been developed further to the Canadian Biodiversity Strategy, an implementation measure required under Article 6 of the United Nations Convention on Biological Diversity.
Criticism of the Framework
The Framework has been developed from the Canadian Biodiversity Strategy, which has been criticized as having a tendency to focus on species and to assign less importance to other scales of biodiversity from the genetic to the ecosystem level.
See also
Criticisms of the biodiversity paradigm
References
External links
Convention on Biological Diversity
Biodiversity Convention Office
Environment Canada
Ecology organizations
Environment and Climate Change Canada
Convention on Biological Diversity
Ore's theorem is a result in graph theory proved in 1960 by Norwegian mathematician Øystein Ore. It gives a sufficient condition for a graph to be Hamiltonian, essentially stating that a graph with sufficiently many edges must contain a Hamilton cycle. Specifically, the theorem considers the sum of the degrees of pairs of non-adjacent vertices: if every such pair has a sum that at least equals the total number of vertices in the graph, then the graph is Hamiltonian.
Formal statement
Let G be a (finite and simple) graph with n ≥ 3 vertices. We denote by deg v the degree of a vertex v in G, i.e. the number of edges in G incident to v. Then, Ore's theorem states that if

deg v + deg w ≥ n for every pair of distinct non-adjacent vertices v and w of G, (∗)

then G is Hamiltonian.
Proof
It is equivalent to show that every non-Hamiltonian graph does not obey condition (∗). Accordingly, let G be a graph on n ≥ 3 vertices that is not Hamiltonian, and let H be formed from G by adding edges one at a time that do not create a Hamiltonian cycle, until no more edges can be added. Let v and w be any two non-adjacent vertices in H. Then adding edge vw to H would create at least one new Hamiltonian cycle, and the edges other than vw in such a cycle must form a Hamiltonian path v1v2...vn in H with v1 = v and vn = w. For each index i in the range 2 ≤ i ≤ n, consider the two possible edges in H from v to vi and from vi−1 to w. At most one of these two edges can be present in H, for otherwise the cycle v1v2...vi−1wvn−1...viv would be a Hamiltonian cycle. Thus, the total number of edges incident to either v or w is at most equal to the number of choices of the index i, which is n − 1. Therefore, H does not obey property (∗), which requires that this total number of edges (deg v + deg w) be greater than or equal to n. Since the vertex degrees in G are at most equal to the degrees in H, it follows that G also does not obey property (∗).
Algorithm
The following simple algorithm constructs a Hamiltonian cycle in a graph meeting Ore's condition.
Arrange the vertices arbitrarily into a cycle, ignoring adjacencies in the graph.
While the cycle contains two consecutive vertices vi and viΒ +Β 1 that are not adjacent in the graph, perform the following two steps:
Search for an index j such that the four vertices vi, viΒ +Β 1, vj, and vjΒ +Β 1 are all distinct and such that the graph contains edges from vi to vj and from vjΒ +Β 1 to viΒ +Β 1
Reverse the part of the cycle between viΒ +Β 1 and vj (inclusive).
Each step increases the number of consecutive pairs in the cycle that are adjacent in the graph, by one or two pairs (depending on whether vj and vjΒ +Β 1 are already adjacent), so the outer loop can only happen at most n times before the algorithm terminates, where n is the number of vertices in the given graph. By an argument similar to the one in the proof of the theorem, the desired index j must exist, or else the nonadjacent vertices vi and viΒ +Β 1 would have too small a total degree. Finding i and j, and reversing part of the cycle, can all be accomplished in time O(n). Therefore, the total time for the algorithm is O(n2), matching the number of edges in the input graph.
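The rotation-based procedure above can be sketched in Python. This is a minimal illustrative implementation written for this article (the adjacency-dict representation and function name are our own, not from any standard library):

```python
def ore_hamiltonian_cycle(adj):
    """Build a Hamiltonian cycle in a graph satisfying Ore's condition.

    adj maps each vertex to the set of its neighbours.  Returns the
    vertices in cyclic order.
    """
    cycle = list(adj)              # step 1: arbitrary cyclic ordering
    n = len(cycle)
    while True:
        # look for consecutive vertices (cyclically) that are not adjacent
        for i in range(n):
            if cycle[(i + 1) % n] not in adj[cycle[i]]:
                break
        else:
            return cycle           # all consecutive pairs adjacent: done
        # rotate so the non-adjacent pair sits at positions 0 and 1
        cycle = cycle[i:] + cycle[:i]
        v0, v1 = cycle[0], cycle[1]
        # find an index j with edges v0-v_j and v_{j+1}-v1; for graphs
        # meeting Ore's condition such an index is guaranteed to exist
        for j in range(2, n - 1):
            if cycle[j] in adj[v0] and cycle[j + 1] in adj[v1]:
                # reverse the part of the cycle between v1 and v_j inclusive
                cycle[1:j + 1] = cycle[j:0:-1]
                break
        else:
            raise ValueError("Ore's condition is not satisfied")
```

Rotating the list so that the non-adjacent pair sits at the front avoids the modular-index bookkeeping that a direct reversal between positions i + 1 and j would require.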
Related results
Ore's theorem is a generalization of Dirac's theorem that, when each vertex has degree at least n/2, the graph is Hamiltonian. For, if a graph meets Dirac's condition, then clearly each pair of vertices has degrees adding to at least n.
In turn Ore's theorem is generalized by the Bondy–Chvátal theorem. One may define a closure operation on a graph in which, whenever two nonadjacent vertices have degrees adding to at least n, one adds an edge connecting them; if a graph meets the conditions of Ore's theorem, its closure is a complete graph. The Bondy–Chvátal theorem states that a graph is Hamiltonian if and only if its closure is Hamiltonian; since the complete graph is Hamiltonian, Ore's theorem is an immediate consequence.
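The closure operation can be sketched directly from its definition. A short illustrative Python function (the representation and name are our own):

```python
def bondy_chvatal_closure(adj):
    """Repeatedly add an edge between any two non-adjacent vertices whose
    degrees sum to at least n, until no such pair remains."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    n = len(adj)
    changed = True
    while changed:
        changed = False
        for u in adj:
            for v in adj:
                if u != v and v not in adj[u] and len(adj[u]) + len(adj[v]) >= n:
                    adj[u].add(v)
                    adj[v].add(u)
                    changed = True
    return adj
```

For a graph meeting Ore's condition, every non-adjacent pair qualifies immediately, so the closure is the complete graph.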
Woodall found a version of Ore's theorem that applies to directed graphs. Suppose a digraph G has the property that, for every two vertices u and v, either there is an edge from u to v or the outdegree of u plus the indegree of v equals or exceeds the number of vertices in G. Then, according to Woodall's theorem, G contains a directed Hamiltonian cycle. Ore's theorem may be obtained from Woodall's by replacing every edge in a given undirected graph by a pair of directed edges. A closely related theorem states that an n-vertex strongly connected digraph with the property that, for every two nonadjacent vertices u and v, the total number of edges incident to u or v is at least 2n − 1 must be Hamiltonian.
Ore's theorem may also be strengthened to give a stronger conclusion than Hamiltonicity as a consequence of the degree condition in the theorem. Specifically, every graph satisfying the conditions of Ore's theorem is either a regular complete bipartite graph or is pancyclic .
References
.
.
.
.
.
Extremal graph theory
Theorems in graph theory
Articles containing proofs
Hamiltonian paths and cycles
In solid-state physics and solid-state chemistry, a band gap, also called a bandgap or energy gap, is an energy range in a solid where no electronic states exist. In graphs of the electronic band structure of solids, the band gap refers to the energy difference (often expressed in electronvolts) between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. It is the energy required to promote an electron from the valence band to the conduction band. The resulting conduction-band electron (and the electron hole in the valence band) are free to move within the crystal lattice and serve as charge carriers to conduct electric current. It is closely related to the HOMO/LUMO gap in chemistry. If the valence band is completely full and the conduction band is completely empty, then electrons cannot move within the solid because there are no available states. If the electrons are not free to move within the crystal lattice, then there is no generated current due to no net charge carrier mobility. However, if some electrons transfer from the valence band (mostly full) to the conduction band (mostly empty), then current can flow (see carrier generation and recombination). Therefore, the band gap is a major factor determining the electrical conductivity of a solid. Substances having large band gaps (also called "wide" band gaps) are generally insulators, those with small band gaps (also called "narrow" band gaps) are semiconductors, and conductors either have very small band gaps or none, because the valence and conduction bands overlap to form a continuous band.
It is possible to produce laser-induced insulator-metal transitions, which have already been experimentally observed in some condensed-matter systems, like thin films of , doped manganites, or vanadium sesquioxide. These are special cases of the more general metal-to-nonmetal transition phenomena, which have been intensively studied in recent decades. A one-dimensional analytic model of laser-induced distortion of band structure was presented for a spatially periodic (cosine) potential. This problem is periodic both in space and time and can be solved analytically using the Kramers–Henneberger co-moving frame. The solutions can be given with the help of the Mathieu functions.
In semiconductor physics
Every solid has its own characteristic energy-band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials.
Band structure and spectroscopy vary with the dimensionality of the system: one-, two-, and three-dimensional solids each behave differently.
In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions because there are no allowable electronic states for them to occupy. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for a valence band electron to be promoted to the conduction band, it requires a specific minimum amount of energy for the transition. This required energy is an intrinsic characteristic of the solid material. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light).
A semiconductor is a material with an intermediate-sized, non-zero band gap that behaves as an insulator at T=0K, but allows thermal excitation of electrons into its conduction band at temperatures that are below its melting point. In contrast, a material with a large band gap is an insulator. In conductors, the valence and conduction bands may overlap, so there is no longer a bandgap with forbidden regions of electronic states.
The conductivity of intrinsic semiconductors is strongly dependent on the band gap. The only available charge carriers for conduction are the electrons that have enough thermal energy to be excited across the band gap and the electron holes that are left behind when such an excitation occurs.
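The strength of this dependence can be illustrated with the Boltzmann factor exp(−Eg/2kT) that governs the intrinsic carrier concentration (prefactors omitted; the gap values in the comment are approximate textbook figures, used here only for illustration):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def excitation_factor(e_gap_ev, temp_k=300.0):
    """exp(-Eg / 2kT): relative ease of thermally exciting carriers
    across a gap of e_gap_ev electronvolts at temp_k kelvin."""
    return math.exp(-e_gap_ev / (2.0 * K_B * temp_k))

# ~1.1 eV (silicon, a semiconductor) vs ~5.5 eV (diamond, an insulator)
si, diamond = excitation_factor(1.1), excitation_factor(5.5)
```

At room temperature the factor for silicon exceeds that for diamond by tens of orders of magnitude, which is why a roughly fivefold difference in gap separates a usable semiconductor from a practical insulator.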
Band-gap engineering is the process of controlling or altering the band gap of a material by controlling the composition of certain semiconductor alloys, such as GaAlAs, InGaAs, and InAlAs. It is also possible to construct layered materials with alternating compositions by techniques like molecular-beam epitaxy. These methods are exploited in the design of heterojunction bipolar transistors (HBTs), laser diodes and solar cells.
The distinction between semiconductors and insulators is a matter of convention. One approach is to think of semiconductors as a type of insulator with a narrow band gap. Insulators with a larger band gap, usually greater than 4 eV, are not considered semiconductors and generally do not exhibit semiconductive behaviour under practical conditions. Electron mobility also plays a role in determining a material's informal classification.
The band-gap energy of semiconductors tends to decrease with increasing temperature. When temperature increases, the amplitude of atomic vibrations increases, leading to larger interatomic spacing. The interaction between the lattice phonons and the free electrons and holes will also affect the band gap to a smaller extent. The relationship between band gap energy and temperature can be described by Varshni's empirical expression (named after Y. P. Varshni),

Eg(T) = Eg(0) − αT² / (T + β),

where Eg(0), α and β are material constants.
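As a numerical sketch of Varshni's expression (the silicon constants below are commonly quoted approximate values, not authoritative figures):

```python
def varshni_gap(eg0_ev, alpha, beta, temp_k):
    """Varshni relation: Eg(T) = Eg(0) - alpha * T^2 / (T + beta)."""
    return eg0_ev - alpha * temp_k ** 2 / (temp_k + beta)

# Approximate constants for silicon:
# Eg(0) ~ 1.17 eV, alpha ~ 4.73e-4 eV/K, beta ~ 636 K
eg_si_300 = varshni_gap(1.17, 4.73e-4, 636.0, 300.0)  # about 1.12 eV
```

The quadratic-over-linear form reproduces the observed behaviour: nearly flat near absolute zero, approaching a linear decrease at high temperature.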
Furthermore, lattice vibrations increase with temperature, which increases the effect of electron scattering. Additionally, the number of charge carriers within a semiconductor will increase, as more carriers have the energy required to cross the band-gap threshold and so conductivity of semiconductors also increases with increasing temperature. The external pressure also influences the electronic structure of semiconductors and, therefore, their optical band gaps.
In a regular semiconductor crystal, the band gap is fixed owing to continuous energy states. In a quantum dot crystal, the band gap is size dependent and can be altered to produce a range of energies between the valence band and conduction band. It is also known as quantum confinement effect.
Band gaps can be either direct or indirect, depending on the electronic band structure of the material.
It was mentioned earlier that band structure and spectroscopy differ with dimensionality. Non-metallic solids that are one-dimensional have optical properties that depend on the electronic transitions between valence and conduction bands. The spectroscopic transition probability between the initial orbital ψi and the final orbital ψf depends on the integral ∫ ψf* û ε ψi, where ε is the electric vector and u is the dipole moment.
The band structure of two-dimensional solids arises from the overlap of atomic orbitals. The simplest two-dimensional crystal contains identical atoms arranged on a square lattice. In the one-dimensional case, energy splitting occurs at the Brillouin zone edge because of a weak periodic potential, which produces a gap between bands. This behavior does not necessarily occur in two dimensions because there are extra degrees of freedom of motion. Furthermore, a band gap can be produced with a strong periodic potential for two-dimensional and three-dimensional cases.
Direct and indirect band gap
Based on their band structure, materials are characterised with a direct band gap or indirect band gap. In the free-electron model, k is the momentum of a free electron and assumes unique values within the Brillouin zone that outlines the periodicity of the crystal lattice. If the momentum of the lowest energy state in the conduction band and the highest energy state of the valence band of a material have the same value, then the material has a direct bandgap. If they are not the same, then the material has an indirect band gap and the electronic transition must undergo momentum transfer to satisfy conservation. Such indirect "forbidden" transitions still occur, however at very low probabilities and weaker energy. For materials with a direct band gap, valence electrons can be directly excited into the conduction band by a photon whose energy is larger than the bandgap. In contrast, for materials with an indirect band gap, a photon and phonon must both be involved in a transition from the valence band top to the conduction band bottom, involving a momentum change. Therefore, direct bandgap materials tend to have stronger light emission and absorption properties and tend to be better suited for photovoltaics (PVs), light-emitting diodes (LEDs), and laser diodes; however, indirect bandgap materials are frequently used in PVs and LEDs when the materials have other favorable properties.
Light-emitting diodes and laser diodes
LEDs and laser diodes usually emit photons with energy close to and slightly larger than the band gap of the semiconductor material from which they are made. Therefore, as the band gap energy increases, the LED or laser color changes from infrared to red, through the rainbow to violet, then to UV.
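The emitted color follows directly from the photon-energy relation E = hc/λ, i.e. λ[nm] ≈ 1239.84 / Eg[eV]. A small sketch (the gap values in the comment are approximate, for illustration only):

```python
def emission_wavelength_nm(e_gap_ev):
    """Wavelength of a photon whose energy equals the band gap,
    using hc ~ 1239.84 eV*nm."""
    return 1239.84 / e_gap_ev

# e.g. GaAs (~1.42 eV) emits near 870 nm (infrared);
# GaN (~3.4 eV) emits near 365 nm (near-ultraviolet)
```

Because wavelength is inversely proportional to the gap, widening the gap walks the emission from infrared through the visible spectrum toward ultraviolet, as described above.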
Photovoltaic cells
The optical band gap (see below) determines what portion of the solar spectrum a photovoltaic cell absorbs. Strictly, a semiconductor will not absorb photons of energy less than the band gap, whereas for photons with energies exceeding the band gap, the excess energy is dissipated as heat. Neither case contributes to the efficiency of a solar cell. One way to circumvent this problem is based on the so-called photon management concept, in which case the solar spectrum is modified to match the absorption profile of the solar cell.
List of band gaps
Below are band gap values for some selected materials. For a comprehensive list of band gaps in semiconductors, see List of semiconductor materials.
Optical versus electronic bandgap
In materials with a large exciton binding energy, it is possible for a photon to have just barely enough energy to create an exciton (bound electronβhole pair), but not enough energy to separate the electron and hole (which are electrically attracted to each other). In this situation, there is a distinction between "optical band gap" and "electronic band gap" (or "transport gap"). The optical bandgap is the threshold for photons to be absorbed, while the transport gap is the threshold for creating an electronβhole pair that is not bound together. The optical bandgap is at lower energy than the transport gap.
In almost all inorganic semiconductors, such as silicon, gallium arsenide, etc., there is very little interaction between electrons and holes (very small exciton binding energy), and therefore the optical and electronic bandgap are essentially identical, and the distinction between them is ignored. However, in some systems, including organic semiconductors and single-walled carbon nanotubes, the distinction may be significant.
Band gaps for other quasi-particles
In photonics, band gaps or stop bands are ranges of photon frequencies where, if tunneling effects are neglected, no photons can be transmitted through a material. A material exhibiting this behaviour is known as a photonic crystal. The concept of hyperuniformity has broadened the range of photonic band gap materials, beyond photonic crystals. By applying the technique in supersymmetric quantum mechanics, a new class of optical disordered materials has been suggested, which support band gaps perfectly equivalent to those of crystals or quasicrystals.
Similar physics applies to phonons in a phononic crystal.
Materials
Aluminium gallium arsenide
Boron nitride
Indium gallium arsenide
Indium arsenide
Gallium arsenide
Gallium nitride
Germanium
Metallic hydrogen
List of electronics topics
Electronics
Bandgap voltage reference
Condensed matter physics
Direct and indirect bandgaps
Electrical conduction
Electron hole
Field-effect transistor
Light-emitting diode
Photodiode
Photoresistor
Photovoltaics
Solar cell
Solid state physics
Semiconductor
Semiconductor devices
Strongly correlated material
Valence band
See also
Wide-bandgap semiconductors
Band bending
Spectral density
Pseudogap
Tauc plot
MossβBurstein effect
Urbach energy
References
External links
Direct Band Gap Energy Calculator
Electron states
Electronic band structures
Quantum mechanics
Spectroscopy
Nuclear magnetic resonance
Weight-shift control as a means of aircraft flight control is widely used in hang gliders, powered hang gliders, and ultralight trikes. Control is usually by the pilot using their weight against a triangular control bar that is rigidly attached to the wing structure. The wing is mounted on a pivot above the trike carriage or hang glider harness allowing the weight-shift forces to produce changes in pitch and bank.
References
See also
Ultralight aircraft
Aircraft controls
Applications of control engineering
Aircraft categories
Quantum radar is a speculative remote-sensing technology based on quantum-mechanical effects, such as the uncertainty principle or quantum entanglement. Broadly speaking, a quantum radar can be seen as a device working in the microwave range, which exploits quantum features, from the point of view of the radiation source and/or the output detection, and is able to outperform a classical counterpart. One approach is based on the use of input quantum correlations (in particular, quantum entanglement) combined with a suitable interferometric quantum detection at the receiver (strongly related to the protocol of quantum illumination).
Paving the way for a technologically viable prototype of a quantum radar involves the resolution of a number of experimental challenges as discussed in some review articles, the latter of which pointed out "inaccurate reporting" in the media. Current experimental designs seem to be limited to very short ranges, of the order of one meter, suggesting that potential applications might instead be for near-distance surveillance or biomedical scanning.
Concept behind a microwave-range model
A microwave-range model of a quantum radar was proposed in 2015 by an international team and is based on the protocol of Gaussian quantum illumination. The basic concept is to create a stream of entangled visible-frequency photons and split it in half. One half, the "signal beam", goes through a conversion to microwave frequencies in a way that preserves the original quantum state. The microwave signal is then sent and received as in a normal radar system. When the reflected signal is received it is converted back into visible photons and compared with the other half of the original entangled beam, the "idler beam".
Although most of the original entanglement will be lost due to quantum decoherence as the microwaves travel to the target objects and back, enough quantum correlations will still remain between the reflected-signal and the idler beams. Using a suitable quantum detection scheme, the system can pick out just those photons that were originally sent by the radar, completely filtering out any other sources. If the system can be made to work in the field, it represents an enormous advance in detection capability.
One way to defeat conventional radar systems is to broadcast signals on the same frequencies used by the radar, making it impossible for the receiver to distinguish between their own broadcasts and the spoofing signal (or "jamming"). However, such systems cannot know, even in theory, what the original quantum state of the radar's internal signal was. Lacking such information, their broadcasts will not match the original signal and will be filtered out in the correlator. Environmental sources, like ground clutter and aurora, will similarly be filtered out.
History
One design was proposed in 2005 by defence contractor Lockheed Martin. The patent on this work was granted in 2013. The aim was to create a radar system providing a better resolution and higher detail than classical radar could provide. However no quantum advantage or better resolution was theoretically proven by this design.
In 2015, an international team of researchers, showed the first theoretical design of a quantum radar able to achieve a quantum advantage over a classical setup. In this model of quantum radar, one considers the remote sensing of a low-reflectivity target that is embedded within a bright microwave background, with detection performance well beyond the capability of a classical microwave radar. By using a suitable wavelength "electro-optomechanical converter", this scheme generates excellent quantum entanglement between a microwave signal beam, sent to probe the target region, and an optical idler beam, retained for detection. The microwave return collected from the target region is subsequently converted into an optical beam and then measured jointly with the idler beam. Such a technique extends the powerful protocol of quantum illumination to its more natural spectral domain, namely microwave wavelengths.
In 2019, a three-dimensional enhancement quantum radar protocol was proposed. It could be understood as a quantum metrology protocol for the localization of a non-cooperative point-like target in three-dimensional space. It employed quantum entanglement to achieve an uncertainty in localization that is quadratically smaller for each spatial direction than what could be achieved by using independent, unentangled photons.
Review articles that delve more into the history and designs of quantum radar, in addition to the ones mentioned in the introduction above, are available on arXiv.
A quantum radar is challenging to realize with current technology, even though a preliminary experimental prototype has been demonstrated.
Challenges and limitations
There are a number of non-trivial challenges behind the experimental implementation of a truly quantum radar prototype, even at short ranges. According to current quantum illumination designs, an important point is the management of the idler pulse, which, ideally, should be jointly detected together with the signal pulse returning from the potential target. However, this would require a quantum memory with a long coherence time, able to work at times comparable with the round-trip of the signal pulse. Other solutions may degrade the quantum correlations between signal and idler pulses to a point where the quantum advantage may disappear. This is a problem that also affects optical designs of quantum illumination. For instance, storing the idler pulse in a delay line using a standard optical fiber would degrade the system and limit the maximum range of a quantum illumination radar to about 11 km. This value has to be interpreted as a theoretical limit of this design, not to be confused with an achievable range. Other limitations include the fact that current quantum designs only consider a single polarization, azimuth, elevation, range, and Doppler bin at a time.
Media speculation about applications
There is media speculation that a quantum radar could operate at long ranges detecting stealth aircraft, filter out deliberate jamming attempts, and operate in areas of high background noise, e.g., due to ground clutter.
Related to the above, there is considerable media speculation of the use of quantum radar as a potential anti-stealth technology. Stealth aircraft are designed to reflect signals away from the radar, typically by using rounded surfaces and avoiding anything that might form a partial corner reflector. This so reduces the amount of signal returned to the radar's receiver that the target is (ideally) lost in the thermal background noise. Although stealth technologies will still be just as effective at reflecting the original signal away from the receiver of a quantum radar, it is the system's ability to separate out the remaining tiny signal, even when swamped by other sources, that allows it to pick out the return even from highly stealthy designs. At the moment these long-range applications are speculative and not supported by experimental data.
More recently, the generation of large numbers of entangled photons for radar detection has been studied by the University of Waterloo.
References
Quantum optics
Quantum information science
Radar
Kismet is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic. The program runs under Linux, FreeBSD, NetBSD, OpenBSD, and macOS. The client can also run on Microsoft Windows, although, aside from external drones (see below), only one wireless card is supported as a packet source.
Distributed under the GNU General Public License, Kismet is free software.
Features
Kismet differs from other wireless network detectors in working passively. Namely, without sending any loggable packets, it is able to detect the presence of both wireless access points and wireless clients, and to associate them with each other. It is also the most widely used and up to date open source wireless monitoring tool.
Kismet also includes basic wireless IDS features such as detecting active wireless sniffing programs including NetStumbler, as well as a number of wireless network attacks.
Kismet features the ability to log all sniffed packets and save them in a tcpdump/Wireshark or Airsnort compatible file format. Kismet can also capture "Per-Packet Information" headers.
Kismet also features the ability to detect default or "not configured" networks, probe requests, and determine what level of wireless encryption is used on a given access point.
In order to find as many networks as possible, Kismet supports channel hopping. This means that it constantly changes from channel to channel non-sequentially, in a user-defined sequence with a default value that leaves big holes between channels (for example, 1-6-11-2-7-12-3-8-13-4-9-14-5-10). The advantage with this method is that it will capture more packets because adjacent channels overlap.
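A hopping order of this kind can be generated by stepping through the channel list with a fixed stride that is coprime to the channel count; the default sequence quoted above corresponds to a stride of 5 over 14 channels. A small illustrative sketch (not Kismet's actual implementation):

```python
def hop_sequence(num_channels=14, stride=5, start=1):
    """Return a channel-hopping order that visits every channel once,
    leaving gaps between consecutive hops so that overlapping adjacent
    channels are still sampled.  stride must be coprime to num_channels
    for the walk to cover all channels."""
    order, ch = [], start
    for _ in range(num_channels):
        order.append(ch)
        ch = (ch - 1 + stride) % num_channels + 1
    return order
```

With the defaults this yields 1, 6, 11, 2, 7, 12, 3, 8, 13, 4, 9, 14, 5, 10 — exactly the non-sequential pattern described above.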
Kismet also supports logging of the geographical coordinates of the network if the input from a GPS receiver is additionally available.
Server / Drone / Client infrastructure
Kismet has three separate parts. A drone can be used to collect packets, and then pass them on to a server for interpretation. A server can either be used in conjunction with a drone, or on its own, interpreting packet data, and extrapolating wireless information, and organizing it. The client communicates with the server and displays the information the server collects.
Plugins
With the updating of Kismet to -ng, Kismet now supports a wide variety of scanning plugins including DECT, Bluetooth, and others.
Usage
Kismet is used in a number of commercial and open source projects. It is distributed with Kali Linux. It is used for wireless reconnaissance, and can be used with other packages for an inexpensive wireless intrusion detection system. It has been used in a number of peer reviewed studies such as "Detecting Rogue Access Points using Kismet".
See also
KisMAC (for Mac OS X)
BackTrack
Kali Linux
Metasploit Project
Nmap
BackBox
OpenVAS
Aircrack-ng
References
External links
Official Website
Introduction to Kismet (via Archive.org)
Java Kismet TCP/IP Client
Network analyzers
Wireless networking
A charge pump is a kind of DC-to-DC converter that uses capacitors for energetic charge storage to raise or lower voltage. Charge-pump circuits are capable of high efficiencies, sometimes as high as 90–95%, while being electrically simple circuits.
Description
Charge pumps use some form of switching device to control the connection of a supply voltage across a load through a capacitor in a two stage cycle. In the first stage a capacitor is connected across the supply, charging it to that same voltage. In the second stage the circuit is reconfigured so that the capacitor is in series with the supply and the load. This doubles the voltage across the load - the sum of the original supply and the capacitor voltages. The pulsing nature of the higher voltage switched output is often smoothed by the use of an output capacitor.
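The two-stage cycle can be illustrated with a first-order numerical model of an ideal voltage doubler. This is only a sketch under idealized assumptions (lossless switches, instantaneous charge sharing); all component values are arbitrary illustrative choices:

```python
import math

def simulate_doubler(vin=5.0, c_fly=1e-6, c_out=10e-6,
                     r_load=10e3, f_sw=50e3, cycles=2000):
    """Idealized two-phase voltage doubler.  Each cycle the flying
    capacitor charges to vin, is then stacked on top of the supply and
    shares its charge with the output capacitor, which a resistive
    load slowly discharges."""
    period = 1.0 / f_sw
    vout = 0.0
    for _ in range(cycles):
        v_fly = vin                                    # stage 1: charge
        # stage 2: series stack (vin + v_fly) shares charge with c_out
        vout = (c_fly * (vin + v_fly) + c_out * vout) / (c_fly + c_out)
        # load drains the output capacitor over the switching period
        vout *= math.exp(-period / (r_load * c_out))
    return vout
```

For these values the output converges to just under 2 × vin; the small shortfall reflects the effective output resistance of the pump, roughly 1/(f_sw × c_fly), which is why a high switching frequency lets small capacitors suffice.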
An external or secondary circuit drives the switching, typically at tens of kilohertz up to several megahertz. The high frequency minimizes the amount of capacitance required, as less charge needs to be stored and dumped in a shorter cycle.
Charge pumps can double voltages, triple voltages, halve voltages, invert voltages, fractionally multiply or scale voltages by ratios such as ×3/2 or ×2/3, and generate arbitrary voltages by quickly alternating between modes, depending on the controller and circuit topology.
They are commonly used in low-power electronics (such as mobile phones) to raise and lower voltages for different parts of the circuitry - minimizing power consumption by controlling supply voltages carefully.
Terminology for PLL
The term charge pump is also commonly used in phase-locked loop (PLL) circuits, even though, unlike the circuits discussed above, no pumping action is involved. A PLL charge pump is merely a bipolar switched current source, meaning that it can output positive and negative current pulses into the loop filter of the PLL. It cannot produce voltages higher or lower than its power and ground supply levels.
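The behavior reduces to one textbook relation (not stated in the text): the pump drives its fixed current for a fraction of each reference period proportional to the phase error, so the average current into the loop filter is I_cp times the phase error over 2π.

```python
import math

# Average current a PLL charge pump delivers into the loop filter:
# the pump sources (or sinks) its fixed current I_cp for a fraction
# phase_error / (2*pi) of each reference period.
def avg_pump_current(i_cp, phase_error_rad):
    return i_cp * phase_error_rad / (2 * math.pi)

# Example: 100 uA pump with the reference leading by pi/4 radians.
print(avg_pump_current(100e-6, math.pi / 4))  # 12.5 uA into the filter
```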
Applications
A common application for charge-pump circuits is in RS-232 level shifters, where they are used to derive positive and negative voltages (often +10 V and −10 V) from a single 5 V or 3 V power-supply rail.
Charge pumps can also be used as LCD or white-LED drivers, generating high bias voltages from a single low-voltage supply, such as a battery.
Charge pumps are extensively used in NMOS memories and microprocessors to generate a negative voltage "VBB" (about −3 V), which is connected to the substrate. This guarantees that all N+-to-substrate junctions are reverse-biased by 3 V or more, decreasing junction capacitance and increasing circuit speed.
A charge pump providing a negative voltage spike has been used in NES-compatible games not licensed by Nintendo in order to stun the Nintendo Entertainment System lockout chip.
As of 2007, charge pumps are integrated into nearly all EEPROM and flash-memory integrated circuits. These devices require a high-voltage pulse to "clean out" any existing data in a particular memory cell before it can be written with a new value. Early EEPROM and flash-memory devices required two power supplies: +5 V (for reading) and +12 V (for erasing). Today, commercially available flash memory and EEPROM require only one external power supply, generally 1.8 V or 3.3 V. A higher voltage, used to erase cells, is generated internally by an on-chip charge pump.
Charge pumps are used in H bridges in high-side drivers for gate-driving high-side n-channel power MOSFETs and IGBTs. When the centre of a half bridge goes low, the capacitor is charged through a diode, and this charge is later used to drive the gate of the high-side FET a few volts above the source voltage so as to switch it on. This strategy works well provided the bridge is regularly switched; it avoids the complexity of running a separate power supply and permits the more efficient n-channel devices to be used for both switches. This circuit (requiring the periodic switching of the high-side FET) may also be called a "bootstrap" circuit, and some would differentiate between that and a charge pump (which would not require that switching).
Charge pumps have also been used to generate the high-voltage vertical deflection signal for cathode-ray tube (CRT) monitors, for example with the TDA1670A integrated circuit. To achieve maximum deflection, a CRT coil needs around 50 V. Using a charge-pump voltage doubler fed from an existing 24 V supply eliminates the need for another supply voltage.
Higher-power fast-charge solutions for mobile devices rely on a charge pump instead of a buck converter to reduce the voltage, as the higher efficiency reduces heat generation. The Samsung Galaxy S23, which takes an input current of 3 A, can charge its internal battery packs at 6 A thanks to a 2:1 charge pump. Oppo's 240 W SUPERVOOC goes further and uses three charge pumps in parallel (98% claimed efficiency) to convert 24 V/10 A to 10 V/24 A, which is then fed to two parallel battery packs.
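The arithmetic behind the divide-by-two stage is simple power bookkeeping. The sketch below assumes an ideal (lossless) pump and treats the input voltage as a free parameter, since the text only gives the currents:

```python
# An ideal 2:1 charge pump halves the voltage and doubles the current,
# conserving power (real parts fall a few percent short of this).
def two_to_one(v_in, i_in):
    v_out = v_in / 2
    i_out = (v_in * i_in) / v_out   # power in = power out
    return v_out, i_out

# Galaxy S23 case from the text: 3 A in becomes 6 A out.
# The 10 V input figure is an assumption for illustration only.
print(two_to_one(10.0, 3.0))  # (5.0, 6.0)
```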
See also
CockcroftβWalton generator
Voltage multiplier
Switched capacitor
Charge transfer switch
Voltage doubler
References
Applying the equivalent resistor concept to calculating the power losses in the charge pumps
Charge pumps where the voltages across the capacitors follow the binary number system
External links
Charge Pump, inductorless, Voltage Regulators
On-chip High-Voltage Generator Design
Charge Pump DC/DC Converters. Applications, circuits and solutions using inductorless (charge pump) dc/dc converters.
DC/DC Conversion without Inductors. General description of charge pump operation; example applications using Maxim controllers.
Charge pump circuits overview. Tutorial by G. Palumbo and D. Pappalardo: https://picture.iczhiku.com/resource/eetop/wYkRpwFlHrpHWnNB.pdf
Electric power conversion
Voltage regulation
Code page

In computing, a code page is a character encoding: a specific association of a set of printable characters and control characters with unique numbers. Typically each number represents the binary value contained in a single byte. (In some contexts these terms are used more precisely.)
The term "code page" originated from IBM's EBCDIC-based mainframe systems, but Microsoft, SAP, and Oracle Corporation are among the vendors that use this term. The majority of vendors identify their own character sets by a name. In the case when there is a plethora of character sets (like in IBM), identifying character sets through a number is a convenient way to distinguish them. Originally, the code page numbers referred to the page numbers in the IBM standard character set manual, a condition which has not held for a long time. Vendors that use a code page system allocate their own code page number to a character encoding, even if it is better known by another name; for example, UTF-8 has been assigned page numbers 1208 at IBM, 65001 at Microsoft, and 4110 at SAP.
Hewlett-Packard uses a similar concept in its HP-UX operating system and in its Printer Command Language (PCL) protocol for printers (HP-made or not). The terminology, however, differs: what others call a character set, HP calls a symbol set, and what IBM or Microsoft call a code page, HP calls a symbol set code. HP developed a series of symbol sets, each with an associated symbol set code, to encode both its own character sets and those of other vendors.
The multitude of character sets leads many vendors to recommend Unicode.
The code page numbering system
IBM introduced the concept of systematically assigning a small but globally unique 16-bit number to each character encoding that a computer system or collection of computer systems might encounter. The IBM origin of the numbering scheme is reflected in the fact that the smallest (first) numbers are assigned to variations of IBM's EBCDIC encoding and slightly larger numbers refer to variations of IBM's extended ASCII encoding as used in its PC hardware.
With the release of PC DOS version 3.3 (and the near identical MS-DOS 3.3) IBM introduced the code page numbering system to regular PC users, as the code page numbers (and the phrase "code page") were used in new commands to allow the character encoding used by all parts of the OS to be set in a systematic way.
After IBM and Microsoft ceased to cooperate in the 1990s, the two companies have maintained the list of assigned code page numbers independently from each other, resulting in some conflicting assignments. At least one third-party vendor (Oracle) also has its own different list of numeric assignments. IBM's current assignments are listed in their CCSID repository, while Microsoft's assignments are documented within the MSDN. Additionally, a list of the names and approximate IANA (Internet Assigned Numbers Authority) abbreviations for the installed code pages on any given Windows machine can be found in the Registry on that machine (this information is used by Microsoft programs such as Internet Explorer).
Most well-known code pages, excluding those for the CJK languages and Vietnamese, fit all their code-points into eight bits and do not involve anything more than mapping each code-point to a single character; furthermore, techniques such as combining characters, complex scripts, etc., are not involved.
The text mode of standard (VGA-compatible) PC graphics hardware is built around using an 8-bit code page, though it is possible to use two at once with some color-depth sacrifice, and up to eight may be stored in the display adapter for easy switching. There was a selection of third-party code page fonts that could be loaded into such hardware. However, it is now commonplace for operating system vendors to provide their own character encoding and rendering systems that run in a graphics mode and bypass this hardware limitation entirely. Nevertheless, the system of referring to character encodings by a code page number remains applicable, as an efficient alternative to string identifiers such as those specified by the IETF and IANA for use in various protocols such as e-mail and web pages.
Relationship to ASCII
The majority of code pages in current use are supersets of ASCII, a 7-bit code representing 128 control codes and printable characters. In the distant past, 8-bit implementations of the ASCII code set the top bit to zero or used it as a parity bit in network data transmissions. When the top bit was made available for representing character data, a total of 256 characters and control codes could be represented. Most vendors (including IBM) used this extended range to encode characters used by various languages and graphical elements that allowed the imitation of primitive graphics on text-only output devices. No formal standard existed for these "extended ASCII character sets" and vendors referred to the variants as code pages, as IBM had always done for variants of EBCDIC encodings.
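As an illustration (Python's bundled codecs expose several of these code pages by number), the same top-bit-set byte decodes to different characters under different extended-ASCII code pages, while the 7-bit ASCII range is shared:

```python
# One byte, two meanings: 0x82 under two "extended ASCII" code pages.
b = b"\x82"
print(b.decode("cp437"))    # 'é'  (IBM PC code page 437)
print(b.decode("cp1252"))   # '‚'  (Windows-1252 single low-9 quote)

# The low 128 code points are plain ASCII in both:
assert b"Hello".decode("cp437") == b"Hello".decode("cp1252") == "Hello"
```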
Relationship to Unicode
Unicode is an effort to include all characters from all currently and historically used human languages in a single character enumeration (effectively one large code page), removing the need to distinguish between different code pages when handling digitally stored text. Unicode tries to retain backwards compatibility with many legacy code pages, copying some code pages 1:1 in the design process. An explicit design goal of Unicode was to allow round-trip conversion between all common legacy code pages, although this goal has not always been achieved.
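Round-trip convertibility can be checked mechanically. Code page 437 assigns a character to every byte value and Unicode preserves each assignment, so decode-then-encode is lossless; Windows-1252, by contrast, leaves a few byte values undefined. A quick check using Python's codec names:

```python
# Lossless round trip through code page 437: every one of the 256 byte
# values maps to a distinct Unicode character and back.
data = bytes(range(256))
assert data.decode("cp437").encode("cp437") == data

# Windows-1252 leaves some bytes (0x81, 0x8D, 0x8F, 0x90, 0x9D)
# unmapped, so decoding them fails outright.
try:
    b"\x81".decode("cp1252")
except UnicodeDecodeError as exc:
    print("0x81 has no character in cp1252:", exc.reason)
```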
Some vendors, namely IBM and Microsoft, have anachronistically assigned code page numbers to Unicode encodings. This convention allows code page numbers to be used as metadata to identify the correct decoding algorithm when encountering binary stored data.
IBM code pages
EBCDIC-based code pages
These code pages are used by IBM in its EBCDIC character sets for mainframe computers.
1 – USA WP, Original
2 – USA
3 – USA Accounting, Version A
4 – USA
5 – USA
6 – Latin America
7 – Germany F.R. / Austria
8 – Germany F.R.
9 – France, Belgium
10 – Canada (English)
11 – Canada (French)
12 – Italy
13 – Netherlands
14 – Spain
15 – Switzerland (French)
16 – Switzerland (French / German)
17 – Switzerland (German)
18 – Sweden / Finland
19 – Sweden / Finland WP, version 2
20 – Denmark/Norway
21 – Brazil
22 – Portugal
23 – United Kingdom
24 – United Kingdom
25 – Japan (Latin)
26 – Japan (Latin)
27 – Greece (Latin)
29 – Iceland
30 – Turkey
31 – South Africa
32 – Czechoslovakia (Czech / Slovak)
33 – Czechoslovakia
34 – Czechoslovakia
35 – Romania
36 – Romania
37 – USA/Canada - CECP (same with euro: 1140)
37-2 – The real 3279 APL codepage, as used by C/370. This is very close to 1047, except that the caret and not-sign are swapped. It is not officially recognized by IBM, even though SHARE has pointed out its existence.
38 – USA ASCII
39 – United Kingdom / Israel
40 – United Kingdom
251 – China
252 – Poland
254 – Hungary
256 – International #1 (superseded by 500)
257 – International #2
258 – International #3
259 – Symbols, Set 7
260 – Canadian French - 116
264 – Print Train & Text processing extended
273 – Germany F.R./Austria - CECP (same with euro: 1141)
274 – Old Belgium Code Page
275 – Brazil - CECP
276 – Canada (French) - 94
277 – Denmark, Norway - CECP (same with euro: 1142)
278 – Finland, Sweden - CECP (same with euro: 1143)
279 – French - 94
280 – Italy - CECP (same with euro: 1144)
281 – Japan (Latin) - CECP
282 – Portugal - CECP
283 – Spain - 190
284 – Spain/Latin America - CECP (same with euro: 1145)
285 – United Kingdom - CECP (same with euro: 1146)
286 – Austria / Germany F.R. Alternate
287 – Denmark / Norway Alternate
288 – Finland / Sweden Alternate
289 – Spain Alternate
290 – Japanese (Katakana) Extended
293 – APL
297 – France (same with euro: 1147)
298 – Japan (Katakana)
300 – Japan (Kanji) DBCS (For JIS X 0213)
310 – Graphic Escape APL/TN
320 – Hungary
321 – Yugoslavia
322 – Turkey
330 – International #4
340 – EBCDIC, OCR (same as 893, superseded by 892 and 893)
351 – GDDM default
352 – Printing and publishing option
353 – BCDIC-A
354 – BCDIC-B
355 – PTTC/BCD standard option
357 – PTTC/BCD H option
358 – PTTC/BCD Correspondence option
359 – PTTC/BCD Monocase option
360 – PTTC/BCD Duocase option
361 – EBCDIC Publishing International
363 – Symbols, set 8
382 – EBCDIC Publishing Austria, Germany F.R. Alternate
383 – EBCDIC Publishing Belgium
384 – EBCDIC Publishing Brazil
385 – EBCDIC Publishing Canada (French)
386 – EBCDIC Publishing Denmark, Norway
387 – EBCDIC Publishing Finland, Sweden
388 – EBCDIC Publishing France
389 – EBCDIC Publishing Italy
390 – EBCDIC Publishing Japan (Latin)
391 – EBCDIC Publishing Portugal
392 – EBCDIC Publishing Spain, Philippines
393 – EBCDIC Publishing Latin America (Spanish Speaking)
394 – EBCDIC Publishing China (Hong Kong), UK, Ireland
395 – EBCDIC Publishing Australia, New Zealand, USA, Canada (English)
396 – BookMaster Specials
410 – Cyrillic (revisions: 880, 1025, 1154)
420 – Arabic
421 – Maghreb/French
423 – Greek (superseded by 875)
424 – Hebrew (Bulletin Code)
425 – Arabic / Latin for OS/390 Open Edition
435 – Teletext Isomorphic
500 – International #5 (ECECP; supersedes 256) (same with euro: 1148)
803 – Hebrew Character Set A (Old Code)
829 – Host Math Symbols - Publishing
830 – Math Format
831 – Portugal (Alternate) (same as 37)
833 – Korean Extended (SBCS)
834 – Korean Hangul (KSC5601; DBCS with UDCs)
835 – Traditional Chinese DBCS
836 – Simplified Chinese Extended
837 – Simplified Chinese DBCS
838 – Thai with Low Marks & Accented Characters (same with euro: 1160)
839 – Thai DBCS
870 – Latin 2 (same with euro: 1153) (revision: 1110)
871 – Iceland (same with euro: 1149)
875 – Greek (supersedes 423)
880 – Cyrillic (revision of 410) (revisions: 1025, 1154)
881 – United States - 5080 Graphics System
882 – United Kingdom - 5080 Graphics System
883 – Sweden - 5080 Graphics System
884 – Germany - 5080 Graphics System
885 – France - 5080 Graphics System
886 – Italy - 5080 Graphics System
887 – Japan - 5080 Graphics System
888 – France AZERTY - 5080 Graphics System
889 – Thailand
890 – Yugoslavia
892 – EBCDIC, OCR A
893 – EBCDIC, OCR B
905 – Latin 3
918 – Urdu Bilingual
924 – Latin 9
930 – Japan MIX (290 + 300) (same with euro: 1390)
931 – Japan MIX (37 + 300)
933 – Korea MIX (833 + 834) (same with euro: 1364)
935 – Simplified Chinese MIX (836 + 837) (same with euro: 1388)
937 – Traditional Chinese MIX (37 + 835) (same with euro: 1371)
939 – Japan MIX (1027 + 300) (same with euro: 1399)
1001 – MICR
1002 – EBCDIC DCF Release 2 Compatibility
1003 – EBCDIC DCF, US Text subset
1005 – EBCDIC Isomorphic Text Communication
1007 – EBCDIC Arabic (XCOM2)
1024 – EBCDIC T.61
1025 – Cyrillic, Multilingual (same with euro: 1154) (Revision of 880)
1026 – EBCDIC Turkey (Latin 5) (same with euro: 1155) (supersedes 905 in that country)
1027 – Japanese (Latin) Extended (JIS X 0201 Extended)
1028 – EBCDIC Publishing Hebrew
1030 – Japanese (Katakana) Extended
1031 – Japanese (Latin) Extended
1032 – MICR, E13-B Combined
1033 – MICR, CMC-7 Combined
1037 – Korea - 5080/6090 Graphics System
1039 – GML Compatibility
1047 – Latin 1/Open Systems
1068 – DCF Compatibility
1069 – Latin 4
1070 – USA / Canada Version 0 (Code page 37 Version 0)
1071 – Germany F.R. / Austria (Code page 273 Version 0)
1072 – Belgium (Code page 274 Version 0)
1073 – Brazil (Code page 275 Version 0)
1074 – Denmark, Norway (Code page 277 Version 0)
1075 – Finland, Sweden (Code page 278 Version 0)
1076 – Italy (Code page 280 Version 0)
1077 – Japan (Latin) (Code page 281 Version 0)
1078 – Portugal (Code page 282 Version 0)
1079 – Spain / Latin America Version 0 (Code page 284 Version 0)
1080 – United Kingdom (Code page 285 Version 0)
1081 – France Version 0 (Code page 297 Version 0)
1082 – Israel (Hebrew)
1083 – Israel (Hebrew)
1084 – International#5 Version 0 (Code page 500 Version 0)
1085 – Iceland (Code page 871 Version 0)
1087 – Symbol Set
1091 – Modified Symbols, Set 7
1093 – IBM Logo
1097 – Farsi Bilingual
1110 – Latin 2 (Revision of 870)
1112 – Baltic Multilingual (same with euro: 1156)
1113 – Latin 6
1122 – Estonia (same with euro: 1157)
1123 – Cyrillic, Ukraine (same with euro: 1158)
1130 – Vietnamese (same with euro: 1164)
1132 – Lao EBCDIC
1136 – Hitachi Katakana
1137 – Devanagari EBCDIC
1140 – USA, Canada, etc. ECECP (same without euro: 37) (Traditional Chinese version: 1159)
1141 – Austria, Germany ECECP (same without euro: 273)
1142 – Denmark, Norway ECECP (same without euro: 277)
1143 – Finland, Sweden ECECP (same without euro: 278)
1144 – Italy ECECP (same without euro: 280)
1145 – Spain, Latin America (Spanish) ECECP (same without euro: 284)
1146 – UK ECECP (same without euro: 285)
1147 – France ECECP with euro (same without euro: 297)
1148 – International ECECP with euro (same without euro: 500)
1149 – Icelandic ECECP with euro (same without euro: 871)
1150 – Korean Extended with box characters
1151 – Simplified Chinese Extended with box characters
1152 – Traditional Chinese Extended with box characters
1153 – Latin 2 Multilingual with euro (same without euro: 870)
1154 – Cyrillic, Multilingual with euro (same without euro: 1025; an older version is 1166)
1155 – Turkey with euro (same without euro: 1026) (same with lira: 1175)
1156 – Baltic Multi with euro (same without euro: 1112)
1157 – Estonia with euro (same without euro: 1122)
1158 – Cyrillic, Ukraine with euro (same without euro: 1123)
1159 – T-Chinese EBCDIC (Traditional Chinese euro update of 1140)
1160 – Thai with Low Marks & Accented Characters with euro (same without euro: 838)
1164 – Vietnamese with euro (same without euro: 1130)
1165 – Latin 2/Open Systems
1166 – Cyrillic Kazakh
1175 – Turkey with euro and lira (same without lira: 1155)
1278 – EBCDIC Adobe (PostScript) Standard Encoding
1279 – Hitachi Japanese Katakana Host
1300 – Generic Bar Code/OCR-B
1301 – Zip + 4 POSTNET Bar Code
1302 – Facing Identification Marks
1303 – EBCDIC Bar Code
1364 – Korea MIX (833 + 834 + euro) (same without euro: 933)
1371 – Traditional Chinese MIX (1159 + 835) (same without euro: 937)
1376 – Traditional Chinese DBCS Host extension for HKSCS
1377 – Mixed Host HKSCS Growing (37 + 1376)
1378 – Traditional Chinese DBCS Host extension for HKSCS and Simplified Chinese (superset of 1376)
1379 – Mixed Host HKSCS and Simplified Chinese Growing (37 + 1378) (superset of 1377)
1388 – Simplified Chinese MIX (same without euro: 935) (836 + 837 + euro)
1390 – Japan MIX (same without euro: 930) (290 + 300 + euro)
1399 – Japan MIX (1027 + 300 + euro) (same without euro: 939)
DOS code pages
These code pages are used by IBM in its PC DOS operating system. These code pages were originally embedded directly in the text mode hardware of the graphic adapters used with the IBM PC and its clones, including the original MDA and CGA adapters whose character sets could only be changed by physically replacing a ROM chip that contained the font. The interface of those adapters (emulated by all later adapters such as VGA) was typically limited to single byte character sets with only 256 characters in each font/encoding (although VGA added partial support for slightly larger character sets).
301 – IBM-PC Japan (Kanji) DBCS
437 – Original IBM PC hardware code page
720 – Arabic (Transparent ASMO)
737 – Greek
775 – Latin-7
808 – Russian with euro (same without euro: 866)
848 – Ukrainian with euro (same without euro: 1125)
849 – Belarusian with euro (same without euro: 1131)
850 – Latin-1
851 – Greek
852 – Latin-2
853 – Latin-3
855 – Cyrillic (same with euro: 872)
856 – Hebrew
857 – Latin-5
858 – Latin-1 with euro symbol
859 – Latin-9
860 – Portuguese
861 – Icelandic
862 – Hebrew
863 – Canadian French
864 – Arabic
865 – Danish/Norwegian
866 – Belarusian, Russian, Ukrainian (same with euro: 808)
867 – Hebrew + euro (based on CP862) (conflictive ID: NEC Czech (Kamenický), which was created before this codepage)
868 – Urdu
869 – Greek
872 – Cyrillic with euro (same without euro: 855)
874 – Thai with Low Tone Marks & Ancient Chars (conflictive ID with Windows 874; version with euro: 1161; Windows version is IBM 1162)
876 – OCR A
877 – OCR B
878 – KOI8-R
891 – Korean PC SBCS
898 – IBM-PC WP Multilingual
899 – IBM-PC Symbol
903 – Simplified Chinese PC SBCS
904 – Traditional Chinese PC SBCS
906 – International Set #5 3812/3820
907 – ASCII APL (3812)
909 – IBM-PC APL2 Extended
910 – IBM-PC APL2
911 – IBM-PC Japan #1
926 – Korean PC DBCS
927 – Traditional Chinese PC DBCS
928 – Simplified Chinese PC DBCS
929 – Thai PC DBCS
932 – IBM-PC Japan MIX (DOS/V) (DBCS) (897 + 301) (conflictive ID with Windows 932; Windows version is IBM 943)
934 – IBM-PC Korea MIX (DOS/V) (DBCS) (891 + 926)
936 – IBM-PC Simplified Chinese MIX (gb2312) (DOS/V) (DBCS) (903 + 928) (conflictive ID with Windows 936; Windows version is IBM 1386)
938 – IBM-PC Traditional Chinese MIX (DOS/V, OS/2) (904 + 927)
942 – IBM-PC Japan MIX (Japanese SAA (OS/2)) (1041 + 301)
943 – IBM-PC Japan OPEN (897 + 941) (Windows CP 932)
944 – IBM-PC Korea MIX (Korean SAA (OS/2)) (1040 + 926)
946 – IBM-PC Simplified Chinese (Simplified Chinese SAA (OS/2)) (1042 + 928)
948 – IBM-PC Traditional Chinese (Traditional Chinese SAA (OS/2)) (1043 + 927)
949 – Korean (Extended Wansung (ks_c_5601-1987)) (1088 + 951) (conflictive ID with Windows 949 (Unified Hangul Code); Windows version is IBM 1363)
951 – Korean DBCS (IBM KS Code) (conflictive ID with Windows 951, a hack of Windows 950 with Unicode mappings for some PUA Unicode characters found in HKSCS, based on the file name)
1034 – Printer Application - Shipping Label, Set #2
1040 – Korean Extended
1041 – Japanese Extended (JIS X 0201 Extended)
1042 – Simplified Chinese Extended
1043 – Traditional Chinese Extended
1044 – Printer Application - Shipping Label, Set #1
1086 – IBM-PC Japan #1
1088 – Revised Korean (SBCS)
1092 – IBM-PC Modified Symbols
1098 – Farsi
1108 – DITROFF Base Compatibility
1109 – DITROFF Specials Compatibility
1115 – IBM-PC People's Republic of China
1116 – Estonian
1117 – Latvian
1118 – Lithuanian (IBM's implementation of Lika's code page 774)
1119 – Lithuanian and Russian (IBM's implementation of Lika's code page 772)
1125 – Cyrillic, Ukrainian (same with euro: 848) (IBM modification of RUSCII)
1127 – IBM-PC Arabic / French
1131 – IBM-PC Data, Cyrillic, Belarusian (same with euro: 849)
1139 – Japan Alphanumeric Katakana
1161 – Thai with Low Tone Marks & Ancient Chars with euro (same without euro: 874)
1167 – KOI8-RU
1168 – KOI8-U
1370 – Traditional Chinese MIX (Big5 encoding) (1114 + 947 + euro) (same without euro: 950)
1380 – IBM-PC Simplified Chinese GB PC-DATA (DBCS PC IBM GB 2312-80)
1381 – IBM-PC Simplified Chinese (1115 + 1380)
1393 – Japanese JIS X 0213 DBCS
1394 – IBM-PC Japan (JIS X 0213) (897 + 1393)
When dealing with older hardware, protocols and file formats, it is often necessary to support these code pages, but newer encoding systems, in particular Unicode, are encouraged for new designs.
DOS code pages are typically stored in .CPI files.
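Migrating text captured under one of these DOS code pages is a two-step recode: decode with the original OEM code page, then re-encode in a modern encoding. A sketch with an illustrative byte string (0xA4 is 'ñ' in code page 437):

```python
# Recode legacy DOS text (code page 437) to UTF-8.
legacy = b"Se\xa4or"              # 'Señor' as stored by a CP437 system
text = legacy.decode("cp437")     # -> 'Señor' (Unicode)
print(text)
print(text.encode("utf-8"))       # b'Se\xc3\xb1or'
```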
IBM AIX code pages
These code pages are used by IBM in its AIX operating system. They emulate several character sets, namely those designed for use in accordance with ISO standards, as on UNIX-like operating systems.
367 – 7-bit US-ASCII
371 – 7-bit US-ASCII APL
806 – ISCII
813 – ISO 8859-7
819 – ISO 8859-1
895 – 7-bit Japan Latin
896 – 7-bit Japan Katakana Extended
901 – ISO 8859-13 with euro (later extended) (same without euro: 921)
902 – ISO Estonian with euro (same without euro: 922)
912 – ISO 8859-2 (extended in 1999)
913 – ISO 8859-3
914 – ISO 8859-4
915 – ISO 8859-5 (extended after 1995)
916 – ISO 8859-8
919 – ISO 8859-10
920 – ISO 8859-9
921 – ISO 8859-13 (extended after 1995) (same with euro: 901)
922 – ISO Estonian (same with euro: 902)
923 – ISO 8859-15
952 – EUC Japanese for JIS X 0208
953 – EUC Japanese for JIS X 0212
954 – EUC Japanese (895 + 952 + 896 + 953)
955 – TCP Japanese, JIS X 0208-1978
956 – TCP Japanese (895 + 952 + 896 + 953)
957 – TCP Japanese (895 + 955 + 896 + 953)
958 – TCP Japanese (367 + 952 + 896 + 953)
959 – TCP Japanese (367 + 955 + 896 + 953)
960 – Traditional Chinese DBCS-EUC SICGCC Primary Set (1st plane)
961 – Traditional Chinese DBCS-EUC SICGCC Full Set + IBM Select + UDC
963 – Traditional Chinese TCP, CNS 11643 plane 2 only
964 – EUC Traditional Chinese (367 + 960 + 961)
965 – TCP Traditional Chinese (367 + 960 + 963)
970 – EUC Korean (367 + 971)
971 – EUC Korean DBCS (G1, KSC 5601 1989 (including 188 UDC))
1006 – ISO 8-bit Urdu
1008 – ISO 8-bit Arabic
1009 – 7-bit ISO IRV
1010 – 7-bit France
1011 – 7-bit Germany F.R.
1012 – 7-bit Italy
1013 – 7-bit United Kingdom
1014 – 7-bit Spain
1015 – 7-bit Portugal
1016 – 7-bit Norway
1017 – 7-bit Denmark
1018 – 7-bit Finland/Sweden
1019 – 7-bit Netherlands
1029 – Arabic Extended
1036 – CCITT T.61
1046 – Arabic Extended (Euro)
1089 – ISO 8859-6
1111 – Variant of ISO 8859-2
1124 – ISO Ukrainian, similar to ISO 8859-5
1129 – ISO Vietnamese (same with euro: 1163)
1133 – ISO Lao
1163 – ISO Vietnamese with euro (same without euro: 1129)
1350 – EUC Japanese (JISeucJP) (367 + 952 + 896 + 953)
1382 – EUC Simplified Chinese (DBCS PC GB 2312-80)
1383 – EUC Simplified Chinese (367 + 1382)
Code page 819 is identical to Latin-1, ISO/IEC 8859-1, and with slightly-modified commands, permits MS-DOS machines to use that encoding. It was used with IBM AS/400 minicomputers.
IBM OS/2 code pages
These code pages are used by IBM in its OS/2 operating system.
1004 – Latin-1 Extended, Desk Top Publishing/Windows
Windows emulation code pages
These code pages are used by IBM when emulating the Microsoft Windows character sets. Most of these code pages have the same number as the corresponding Microsoft code pages, although they are not exactly identical. A few, though, originated at IBM and were not devised by Microsoft.
897 – IBM-PC SBCS Japanese (JIS X 0201-1976)
941 – IBM-PC Japanese DBCS for Open environment
947 – IBM-PC DBCS for (Big5 encoding)
950 – Traditional Chinese MIX (Big5 encoding) (1114 + 947) (same with euro: 1370)
1114 – IBM-PC SBCS (Simplified Chinese; GBK; Traditional Chinese; Big5 encoding)
1126 – IBM-PC Korean SBCS
1162 – Windows Thai (Extension of 874; but still called that in Windows)
1169 – Windows Cyrillic Asian
1174 – Windows Kazakh
1250 – Windows Central Europe
1251 – Windows Cyrillic
1252 – Windows Western
1253 – Windows Greek
1254 – Windows Turkish
1255 – Windows Hebrew
1256 – Windows Arabic
1257 – Windows Baltic
1258 – Windows Vietnamese
1360 – Korean JOHAB DBCS
1361 – Korean (JOHAB)
1362 – Korean Hangul DBCS
1363 – Windows Korean (1126 + 1362) (Windows CP 949)
1372 – IBM-PC MS T Chinese Big5 encoding (Special for DB2)
1373 – Windows Traditional Chinese (extension of 950)
1374 – IBM-PC DB Big5 encoding extension for HKSCS
1375 – Mixed Big5 encoding extension for HKSCS (intended to match 950)
1385 – IBM-PC Simplified Chinese DBCS (Growing CS for GB18030, also used for GBK PC-DATA.)
1386 – IBM-PC Simplified Chinese GBK (1114 + 1385) (Windows CP 936)
1391 – Simplified Chinese 4 Byte (Growing CS for GB18030, also used for GBK PC-DATA.)
1392 – IBM-PC Simplified Chinese MIX (1252 + 1385 + 1391)
Macintosh emulation code pages
These code pages are used by IBM when emulating the Apple Macintosh character sets.
1275 – Apple Roman
1280 – Apple Greek
1281 – Apple Turkish
1282 – Apple Central European
1283 – Apple Cyrillic
1284 – Apple Croatian
1285 – Apple Romanian
1286 – Apple Icelandic
Adobe emulation code pages
These code pages are used by IBM when emulating the Adobe character sets.
1038 – Adobe Symbol Encoding
1276 – Adobe (PostScript) Standard Encoding
1277 – Adobe (PostScript) Latin 1
HP emulation code pages
These code pages are used by IBM when emulating the HP character sets.
1050 – HP Roman Extension
1051 – HP Roman-8
1052 – HP Gothic Legal
1053 – HP Gothic-1 (almost the same as ISO 8859-1)
1054 – HP ASCII
1055 – HP PC-Line
1056 – HP Line Draw
1057 – HP PC-8 (almost the same as code page 437)
1058 – HP PC-8DN (not the same as code page 865)
1351 – Japanese DBCS HP character set
5039 – Japanese MIX (1041 + 1351)
DEC emulation code pages
These code pages are used by IBM when emulating the DEC character sets.
1020 – 7-bit Canadian (French) NRC Set
1021 – 7-bit Switzerland NRC Set
1023 – 7-bit Spanish NRC Set
1090 – Special Characters and Line Drawing Set
1100 – DEC Multinational
1101 – 7-bit British NRC Set
1102 – 7-bit Dutch NRC Set
1103 – 7-bit Finnish NRC Set
1104 – 7-bit French NRC Set
1105 – 7-bit Norwegian/Danish NRC Set
1106 – 7-bit Swedish NRC Set
1107 – 7-bit Norwegian/Danish NRC Alternate
1287 – DEC Greek
1288 – DEC Turkish
IBM Unicode code pages
1200 – UTF-16BE Unicode (big-endian) with IBM Private Use Area (PUA)
1201 – UTF-16BE Unicode (big-endian)
1202 – UTF-16LE Unicode (little-endian) with IBM PUA
1203 – UTF-16LE Unicode (little-endian)
1208 – UTF-8 Unicode with IBM PUA
1209 – UTF-8 Unicode
1400 – ISO 10646 UCS-BMP (Based on Unicode 6.0)
1401 – ISO 10646 UCS-SMP (Based on Unicode 6.0)
1402 – ISO 10646 UCS-SIP (Based on Unicode 6.0)
1414 – ISO 10646 UCS-SSP (Based on Unicode 4.0)
1445 – IBM AFP PUA No. 1
1446 – ISO 10646 UCS-PUP15 (Based on Unicode 4.0)
1447 – ISO 10646 UCS-PUP16 (Based on Unicode 4.0)
1448 – UCS-BMP (Generic UDC)
1449 – IBM default PUA
Microsoft code pages
Windows code pages
These code pages are used by Microsoft in its own Windows operating system. Microsoft defined a number of code pages known as the ANSI code pages (because the first one, 1252, was based on an apocryphal ANSI draft of what became ISO 8859-1). Code page 1252 is built on ISO 8859-1 but uses the range 0x80-0x9F for extra printable characters rather than for the C1 control codes from ISO 6429 referenced by ISO 8859-1. Some of the others are based in part on other parts of ISO 8859 but are often rearranged to make them closer to 1252.
42 – Windows Symbol
874 – Windows Thai
1250 – Windows Central Europe
1251 – Windows Cyrillic
1252 – Windows Western
1253 – Windows Greek
1254 – Windows Turkish
1255 – Windows Hebrew
1256 – Windows Arabic
1257 – Windows Baltic
1258 – Windows Vietnamese
Microsoft recommends new applications use UTF-8 or UCS-2/UTF-16 instead of these code pages.
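The remapping of 0x80-0x9F in the ANSI code pages described above is easy to observe: under ISO 8859-1 (Python's "latin-1") those bytes pass through as invisible C1 controls, while code page 1252 turns them into printable characters:

```python
# 0x80 and 0x99 in Windows-1252 versus ISO 8859-1.
b = b"\x80\x99"
print(b.decode("cp1252"))                 # euro sign + trademark sign
print(b.decode("latin-1").isprintable())  # False: C1 control characters
```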
DBCS code pages
These code pages represent DBCS character encodings for various CJK languages. In Microsoft operating systems, these are used as both the "OEM" and "Windows" code page for the applicable locale.
932 – Supports Japanese Shift-JIS
936 – Supports Simplified Chinese GB2312 or GBK
949 – Supports Korean Unified Hangul Code
950 – Supports Traditional Chinese Big5
951 – Supports Traditional Chinese Big5 with HKSCS
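In a DBCS code page the unit of storage is still the byte, but CJK characters occupy two bytes while ASCII stays single-byte, so character count and byte count diverge. An illustration with code page 932 (Shift-JIS), which Python exposes as "cp932":

```python
# Mixed single- and double-byte text under code page 932 (Shift-JIS).
s = "A\u3042"                # 'A' plus hiragana 'a'
encoded = s.encode("cp932")
print(encoded)               # b'A\x82\xa0': 1 ASCII byte + a 2-byte sequence
print(len(s), len(encoded))  # 2 characters, 3 bytes
```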
MS-DOS code pages
These code pages are used by Microsoft in its MS-DOS operating system. Microsoft refers to these as the OEM code pages because they were defined by the original equipment manufacturers who licensed MS-DOS for distribution with their hardware, not by Microsoft or a standards organization. Most of these code pages have the same number as the equivalent IBM code pages, although some are not exactly identical.
708 – Arabic (ASMO 708)
720 – Arabic (Transparent ASMO)
737 – Greek
850 – Latin-1
851 – Greek
852 – Latin-2
855 – Cyrillic
857 – Latin-5
858 – Latin-1 with euro symbol
859 – Latin-9
860 – Portuguese
861 – Icelandic
862 – Hebrew
863 – Canadian French
864 – Arabic
865 – Danish/Norwegian
866 – Belarusian, Russian, Ukrainian
869 – Greek
Macintosh emulation code pages
These code pages are used by Microsoft when emulating the Apple Macintosh character sets.
10000 – Apple Macintosh Roman
10001 – Apple Japanese
10002 – Apple Traditional Chinese (Big5)
10003 – Apple Korean
10004 – Apple Arabic
10005 – Apple Hebrew
10006 – Apple Greek
10007 – Apple Macintosh Cyrillic
10008 – Apple Simplified Chinese (GB 2312)
10010 – Apple Romanian
10017 – Apple Ukrainian
10021 – Apple Thai
10029 – Apple Macintosh Central Europe
10079 – Apple Icelandic
10081 – Apple Turkish
10082 – Apple Croatian
Various other Microsoft code pages
The following code page numbers are specific to Microsoft Windows; IBM may use different numbers for these code pages. They emulate several other character sets, notably those designed for use according to ISO standards, such as on UNIX-like operating systems.
20000 – Traditional Chinese CNS
20001 – Traditional Chinese TCA
20002 – Traditional Chinese ETEN
20003 – Traditional Chinese IBM5500
20004 – Traditional Chinese TeleText
20005 – Traditional Chinese Wang
20105 – 7-bit IA5 IRV (CP 1009)
20106 – 7-bit IA5 German (DIN 66003)
20107 – 7-bit IA5 Swedish (SEN 850200 C)
20108 – 7-bit IA5 Norwegian (NS 4551-2)
20127 – 7-bit US-ASCII
20261 – CCITT T.61
20269 – ISO 6937
20273
20277
20278
20284
20285
20290 – Japanese language in EBCDIC
20297
20420
20423
20424
20833
20838
20866 – KOI8-R
20871
20880 – EBCDIC Cyrillic (880)
20905
20924
20932 – EUC-JP
20936
20949
21025 – EBCDIC Cyrillic (1025)
21027
21866 – KOI8-U
28591 – ISO-8859-1
28592 – ISO-8859-2
28593 – ISO-8859-3
28594 – ISO-8859-4
28595 – ISO-8859-5
28596 – ISO-8859-6
28597 – ISO-8859-7
28598 – ISO-8859-8
28599 – ISO-8859-9
28600 – ISO-8859-10
28601 – ISO-8859-11
28602 – not used (reserved for ISO-8859-12)
28603 – ISO-8859-13
28604 – ISO-8859-14
28605 – ISO-8859-15
28606 – ISO-8859-16
38596 – ISO-8859-6
38598 – ISO-8859-8
Microsoft Unicode code pages
1200 – UTF-16LE Unicode (little-endian)
1201 – UTF-16BE Unicode (big-endian)
12000 – UTF-32LE Unicode (little-endian)
12001 – UTF-32BE Unicode (big-endian)
65000 – UTF-7 Unicode
65001 – UTF-8 Unicode
65520 – Empty Unicode Plane
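The byte-level correspondence between code pages 1200/1201 and UTF-16LE/BE, and between 65001 and UTF-8, can be checked directly in Python:

```python
# The same two characters under the three Unicode "code pages".
s = "A\u20ac"                         # 'A' plus the euro sign (U+20AC)
assert s.encode("utf-16-le") == b"A\x00\xac\x20"  # cp 1200: little-endian
assert s.encode("utf-16-be") == b"\x00A\x20\xac"  # cp 1201: big-endian
assert s.encode("utf-8") == b"A\xe2\x82\xac"      # cp 65001: UTF-8
```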
HP Symbol Sets
HP developed a series of Symbol Sets (each with its associated Symbol Set Code) to encode either its own character sets or other vendors' character sets. They are normally 7-bit character sets which, when moved to the higher part and associated with the ASCII character set, make up 8-bit character sets.
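The "moved to the higher part" construction can be sketched in Python: the 7-bit set's code points are offset by 0x80 and appended to ASCII. The extension contents below are a made-up stand-in, not the real HP Roman Extension:

```python
# Compose an 8-bit table from ASCII (0x00-0x7F) plus a 7-bit symbol set
# shifted into the high half (0x80-0xFF). The extension mapping here is
# hypothetical, purely to illustrate the mechanism.
ascii_part = [chr(i) for i in range(128)]
extension = ["\ufffd"] * 128     # placeholder 7-bit set
extension[0x20] = "\u00e0"       # pretend 0x20 in the 7-bit set is 'a-grave'
table = ascii_part + extension   # extension code n lands at 0x80 + n
assert table[0x41] == "A"        # ASCII half unchanged
assert table[0xA0] == "\u00e0"   # 0x20 shifted to 0xA0
```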
HP's own Symbol Sets
Symbol Set 0E – HP Roman Extension – 7-bit character set with accented letters (coded by IBM as code page 1050)
Symbol Set 0G – HP 7-bit German
Symbol Set 0L – HP 7-bit PC Line (coded by IBM as code page 1055)
Symbol Set 0M – HP Math-7
Symbol Set 0T – HP Thai-8
Symbol Set 1S – HP 7-bit Spanish
Symbol Set 1U – HP 7-bit Gothic Legal (coded by IBM as code page 1052)
Symbol Set 4Q – HP Line Draw (coded by IBM as code page 1056)
Symbol Set 4U – HP Roman-9 – Roman-8 + €
Symbol Set 7J – HP Desktop
Symbol Set 7S – HP 7-bit European Spanish
Symbol Set 8E – HP East-8
Symbol Set 8G – HP Greek-8 (based on IR 088; not on ELOT 927)
Symbol Set 8H – HP Hebrew-8
Symbol Set 8I – MS LineDraw (ASCII + HP PC Line)
Symbol Set 8K – HP Kana-8 (ASCII + Japanese Katakana)
Symbol Set 8L – HP LineDraw (ASCII + HP Line Draw)
Symbol Set 8M – HP Math-8 (ASCII + HP Math-8)
Symbol Set 8R – HP Cyrillic-8
Symbol Set 8S – HP 7-bit Latin American Spanish
Symbol Set 8T – HP Turkish-8
Symbol Set 8U – HP Roman-8 (ASCII + HP Roman Extension; coded by IBM as code page 1051)
Symbol Set 8V – HP Arabic-8
Symbol Set 9K – HP Korean-8
Symbol Set 9T – PC 8T (also known as Code Page 437-T; this is not code page 857)
Symbol Set 9V – Latin / Arabic for Windows (this is not code page 1256)
Symbol Set 11U – PC 8D/N (also known as Code Page 437-N; coded by IBM as code page 1058; this is not code page 865)
Symbol Set 14G – PC-8 Greek Alternate (also known as Code Page 437-G; almost the same as code page 737)
Symbol Set 18K –
Symbol Set 18T –
Symbol Set 19C –
Symbol Set 19K –
Symbol Sets from other vendors
Symbol Set 0D – ISO 60: 7-bit Norwegian
Symbol Set 0F – ISO 25: 7-bit French
Symbol Set 0H – HP 7-bit Hebrew – Practically the same as Israeli Standard SI 960
Symbol Set 0I – ISO 15: 7-bit Italian
Symbol Set 0K – ISO 14: 7-bit Japanese Katakana
Symbol Set 0N – ISO 8859-1 Latin 1 (Initially called "Gothic-1"; coded by IBM as code page 1053)
Symbol Set 0R – ISO 8859-5 Latin/Cyrillic (1986 version – IR 111)
Symbol Set 0S – ISO 11: 7-bit Swedish
Symbol Set 0U – ISO 6: 7-bit U.S.
Symbol Set 0V – Arabic
Symbol Set 1D – ISO 61: 7-bit Norwegian
Symbol Set 1E – ISO 4: 7-bit U. K.
Symbol Set 1F – ISO 69: 7-bit French
Symbol Set 1G – ISO 21: 7-bit German
Symbol Set 1K – ISO 13: 7-bit Japanese Latin
Symbol Set 1T – Windows Thai (Practically the same as 874)
Symbol Set 2K – ISO 57: 7-bit Simplified Chinese Latin
Symbol Set 2N – ISO 8859-2 Latin 2
Symbol Set 2S – ISO 17: 7-bit Spanish
Symbol Set 2U – ISO 2: 7-bit International Reference Version
Symbol Set 3N – ISO 8859-3 Latin 3
Symbol Set 3R – PC-866 Russia (Practically the same as code page 866)
Symbol Set 3S – ISO 10: 7-bit Swedish
Symbol Set 4N – ISO 8859-4 Latin 4
Symbol Set 4S – ISO 16: 7-bit Portuguese
Symbol Set 5M – PS Math Symbol (Practically the same as Adobe Symbols)
Symbol Set 5N – ISO 8859-9 Latin 5
Symbol Set 5S – ISO 84: 7-bit Portuguese
Symbol Set 5T – Windows 3.1 Latin-5 (Practically the same as code page 1254)
Symbol Set 6J – Microsoft Publishing
Symbol Set 6M – Ventura Math
Symbol Set 6N – ISO 8859-10 Latin 6
Symbol Set 6S – ISO 85: 7-bit Spanish
Symbol Set 7H – ISO 8859-8 Latin/Hebrew
Symbol Set 9E – Windows 3.1 Latin 2 (Practically the same as code page 1250)
Symbol Set 9G – Windows 98 Greek (Practically the same as code page 1253)
Symbol Set 9J – PC 1004
Symbol Set 9L – Ventura ITC Zapf Dingbats
Symbol Set 9N – ISO 8859-15 Latin 9
Symbol Set 9R – Windows 98 Cyrillic (Practically the same as code page 1251)
Symbol Set 9U – Windows 3.0
Symbol Set 10G – PC-851 Latin/Greek (Practically the same as code page 851)
Symbol Set 10J – PS Text (Practically the same as Adobe Standard)
Symbol Set 10L – PS ITC Zapf Dingbats (Practically the same as Adobe Dingbats)
Symbol Set 10N – ISO 8859-5 Latin/Cyrillic (1988 version – IR 144)
Symbol Set 10R – PC-855 Cyrillic (Practically the same as code page 855)
Symbol Set 10T – Teletex
Symbol Set 10U – PC-8 (Practically the same as code page 437; coded by IBM as code page 1057)
Symbol Set 10V – CP-864 (Practically the same as code page 864)
Symbol Set 11G – CP-869 (Practically the same as code page 869)
Symbol Set 11J – PS ISO Latin-1 (Practically the same as Adobe Latin-1)
Symbol Set 11N – ISO 8859-6 Latin/Arabic
Symbol Set 12G – PC Latin/Greek (Practically the same as code page 737)
Symbol Set 12J – MC Text (Practically the same as Macintosh Roman)
Symbol Set 12N – ISO 8859-7 Latin/Greek
Symbol Set 12R – PC Gost (Practically the same as PC GOST Main)
Symbol Set 12U – PC-850 Latin 1 (Practically the same as code page 850)
Symbol Set 13J – Ventura International
Symbol Set 13R – PC Bulgarian (Practically the same as MIK)
Symbol Set 13U – PC-858 Latin 1 + € (Practically the same as code page 858)
Symbol Set 14J – Ventura U. S.
Symbol Set 14L – Windows Dingbats
Symbol Set 14P – ABICOMP International (Practically the same as ABICOMP)
Symbol Set 14R – PC Ukrainian (Practically the same as RUSCII)
Symbol Set 15H – PC-862 Israel (Practically the same as code page 862)
Symbol Set 16U – PC-857 Latin 5 (Practically the same as code page 857)
Symbol Set 17U – PC-852 Latin 2 (Practically the same as code page 852)
Symbol Set 18N – UTF-8
Symbol Set 18U – PC-853 Latin 3 (Practically the same as code page 853)
Symbol Set 19L – Windows 98 Baltic (Practically the same as code page 1257)
Symbol Set 19M – Windows Symbol
Symbol Set 19U – Windows 3.1 Latin 1 (Practically the same as code page 1252)
Symbol Set 20U – PC-860 Portugal (Practically the same as code page 860)
Symbol Set 21U – PC-861 Iceland (Practically the same as code page 861)
Symbol Set 23U – PC-863 Canada-French (Practically the same as code page 863)
Symbol Set 24Q – PC-Polish Mazowia (Practically the same as Mazovia encoding)
Symbol Set 25U – PC-865 Denmark/Norway (Practically the same as code page 865)
Symbol Set 26U – PC-775 Latin 7 (Practically the same as code page 775)
Symbol Set 27Q – PC-8 PC Nova (Practically the same as PC Nova)
Symbol Set 27U – PC Latvian Russian (also known as 866-Latvian)
Symbol Set 28U – PC Lithuanian/Russian (Practically the same as code page 774)
Symbol Set 29U – PC-772 Lithuanian/Russian (Practically the same as code page 772)
Code pages from other vendors
These code pages are independent assignments by third-party vendors. Since the original IBM PC code page (number 437) was not really designed for international use, several partially compatible country- or region-specific variants emerged.
These code page number assignments are official neither to IBM nor to Microsoft, and almost none of them is registered as a usable character set with IANA. The numbers assigned to these code pages are arbitrary and may clash with registered numbers in use by IBM or Microsoft. Some of them may predate code page switching being added in DOS 3.3.
100 – DOS Hebrew hardware fontpage (Not from IBM; HDOS)
111 – DOS Greek (Not from IBM; AST Premium Exec DOS 5.0)
112 – DOS Turkish (Not from IBM; AST Premium Exec DOS 5.0)
113 – DOS Yugoslavian (Not from IBM; AST Premium Exec DOS 5.0)
151 – DOS Nafitha Arabic (Not from IBM; ADOS)
152 – DOS Nafitha Arabic (Not from IBM; ADOS)
161 – DOS Arabic (Not from IBM; ADOS)
162 – DOS Arabic with vowel diacritics (Not from IBM; ADOS)
163 – DOS Arabic and French (Not from IBM; ADOS)
164 – DOS Arabic and French with vowel diacritics (Not from IBM; ADOS)
165 – DOS Arabic (864 Extended) (Not from IBM; ADOS)
166 – IBM Arabic PC (ADOS)
190 – DEC DOS German (appears to be identical to code page 437)
210 – DEC DOS Greek (NEC Jetmate printers)
220 – DEC DOS Spanish (Not from IBM)
489 – Czechoslovakian [OCR software 1993]
620 – DOS Polish (Mazovia) (Not from IBM)
667 – DOS Polish (Mazovia) (Not from IBM)
668 – DOS Polish (Not from IBM)
706 – MS-DOS Server Arabic Sakhr (Not from IBM; Sakhr Software from MSX Computers)
707 – MS-DOS Arabic Sakhr (Not from IBM; Sakhr Software from MSX Computers)
709 – MS-DOS Arabic (ASMO 449+/BCON V4)
710 – MS-DOS Arabic (Transparent Arabic)
711 – MS-DOS Arabic Nafitha Enhanced (Not from IBM)
714 – MS-DOS Arabic Sakr (Not from IBM)
715 – MS-DOS Arabic APTEC (Not from IBM)
721 – MS-DOS Arabic Nafitha International (Not from IBM)
768 – Arabic Al-Arabi (Not from IBM)
770 – DOS Estonian, Latvian, Lithuanian (From Lithuanian Lika Software; Lithuanian RST 1095-89 National Standard)
771 – DOS Lithuanian/Cyrillic – KBL (From Lithuanian Lika Software)
772 – DOS Lithuanian/Cyrillic (From Lithuanian Lika Software; Lithuanian LST 1284:1993 National Standard; adopted by IBM as code page 1119)
773 – DOS Latin-7 – KBL (From Lithuanian Lika Software)
774 – DOS Lithuanian (From Lithuanian Lika Software; Lithuanian LST 1283:1993 National Standard; adopted by IBM as code page 1118)
775 – DOS Latin-7 Baltic Rim (From Lithuanian Lika Software; Lithuanian LST 1590-1 National Standard; adopted by IBM and Microsoft as code page 775)
776 – DOS Lithuanian (extended CP770) (From Lithuanian Lika Software)
777 – DOS Accented Lithuanian (old) (extended CP773) – KBL (From Lithuanian Lika Software)
778 – DOS Accented Lithuanian (extended CP775) (From Lithuanian Lika Software)
790 – DOS Polish (Mazovia) with curly quotation marks
854 – Spanish
881 – Latin 1 (Not from IBM; AST Premium Exec DOS 5.0) (conflicting ID with IBM EBCDIC 881)
882 – Latin 2 (ISO 8859-2) (Not from IBM; same as code page 912; AST Premium Exec DOS 5.0) (conflicting ID with IBM EBCDIC 882)
883 – Latin 3 (Not from IBM; AST Premium Exec DOS 5.0) (conflicting ID with IBM EBCDIC 883)
884 – Latin 4 (Not from IBM; AST Premium Exec DOS 5.0) (conflicting ID with IBM EBCDIC 884)
885 – Latin 5 (Not from IBM; AST Premium Exec DOS 5.0) (conflicting ID with IBM EBCDIC 885)
895 – Czech (Kamenický) (Not from IBM; conflicting ID with IBM CP895 – 7-bit EUC Japanese Roman)
896 – DOS Polish (Mazovia) (Not from IBM; conflicting ID with IBM CP896 – 7-bit EUC Japanese Katakana)
900 – DOS Russian (Russian MS-DOS 5.0 LCD.CPI)
928 – Greek (on Star printers); same as Greek National Standard ELOT 928 (Not from IBM; conflicting ID with IBM CP928 – Simplified Chinese PC DBCS)
966 – Saudi Arabian (Not from IBM)
972 – Hebrew (VT100) (Not from IBM)
991 – DOS Polish (Mazovia) (Not from IBM)
999 – DOS Serbo-Croatian I (Not from IBM); also known as PC Nova and CroSCII; lower part is JUS I.B1.002, upper part is code page 437; supports Slovenian and Serbo-Croatian (Latin script)
1001 – Arabic (on Star printers) (Not from IBM; conflicting ID with IBM CP1001 – MICR)
1261 – Windows Korean IBM-1261 LMBCS-17, similar to 1363
1270 – Windows Sámi
1300 – ANSI [PTS-DOS 6.70, not 6.51] (Not from IBM; conflicting ID with IBM EBCDIC 1300 – Generic Bar Code/OCR-B)
2001 – Lithuanian KBL (on Star printers); same as code page 771
3001 – Estonian 1 (on Star printers); same as code page 1116
3002 – Estonian 2 (on Star printers); same as code page 922
3011 – Latvian 1 (on Star printers); same as code page 437-Latvian
3012 – Latvian-2 (on Star printers); same as code page 866-Latvian (Latvian RST 1040-90 National Standard)
3021 – Bulgarian (on Star printers); same as MIK
3031 – Hebrew (on Star printers); same as code page 862
3041 – Maltese (on Star printers); same as ISO 646 Maltese
3840 – IBM-Russian (on Star printers); nearly the same as CP 866
3841 – Gost-Russian (on Star printers); GOST 13052 plus characters for Central Asian languages
3843 – Polish (on Star printers); same as Mazovia
3844 – CS2 (on Star printers); same as Kamenický
3845 – Hungarian (on Star printers); same as CWI
3846 – Turkish (on Star printers); same as PC-8 Turkish plus an old Turkish Lira sign at code point A8
3847 – Brazil-ABNT (on Star printers); same as the Brazilian National Standard NBR-9614:1986
3848 – Brazil-ABICOMP (on Star printers); same as ABICOMP
3850 – Standard KU (on Star printers); variation of the Kasetsart University encoding for Thai
3860 – Rajvitee KU (on Star printers); variation of the Kasetsart University encoding for Thai
3861 – Microwiz KU (on Star printers); variation of the Kasetsart University encoding for Thai
3863 – STD988 TIS (on Star printers); variation of the TIS 620 encoding for Thai
3864 – Popular TIS (on Star printers); variation of the TIS 620 encoding for Thai
3865 – Newsic TIS (on Star printers); variation of the TIS 620 encoding for Thai
28799 – FOCAL (on Star printers); same as FOCAL character set
28800 – HP RPL (on Star printers); same as RPL
(number missing) – CWI-2 (for DOS) supports Hungarian
(number missing) – MIK (for DOS) supports Bulgarian
(number missing) – DOS Serbo-Croatian II; supports Slovenian and Serbo-Croatian (Latin script)
(number missing) – Russian Alternative code page (for DOS); this is the origin of IBM CP 866
List of code page assignments
List of known code page assignments (incomplete):
Criticism
Many older character encodings (unlike Unicode) suffer from several problems. Some vendors insufficiently document the meaning of all code point values in their code pages, which decreases the reliability of handling textual data consistently through various computer systems. Some vendors add proprietary extensions to established code pages, to add or change certain code point values: for example, byte 0x5C in Shift JIS can represent either a back slash or a yen sign depending on the platform. Finally, in order to support several languages in a program that does not use Unicode, the code page used for each string/document needs to be stored.
Applications may also mislabel text in Windows-1252 as ISO-8859-1. The only difference between these code pages is that the code point values in the range 0x80–0x9F, used by ISO-8859-1 for control characters, are instead used for additional printable characters in Windows-1252, notably the curly quotation marks, the euro sign and the trademark symbol, among others. Browsers on non-Windows platforms would tend to show empty boxes or question marks for these characters, making the text hard to read. Most browsers fixed this by ignoring the declared character set and interpreting the text as Windows-1252 so that it looked acceptable. In HTML5, treating ISO-8859-1 as Windows-1252 is even codified as a W3C standard. Although browsers were typically programmed to deal with this behaviour, that was not always true of other software. Consequently, when receiving a file transfer from a Windows system, non-Windows platforms would either ignore these characters or treat them as standard control characters and attempt to take the specified control action.
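The mislabeling scenario can be reproduced in Python: text written under Windows-1252 but read as ISO-8859-1 turns the typographic characters into C1 control codes, and reinterpreting the labeled bytes as Windows-1252 (the browser workaround) recovers the original:

```python
# Windows-1252 text misread as ISO 8859-1 yields C1 control characters.
original = "\u201csmart quotes\u201d and \u20ac"
raw = original.encode("cp1252")
misread = raw.decode("latin-1")
assert misread == "\x93smart quotes\x94 and \x80"
# The browser workaround: reinterpret the same bytes as Windows-1252.
assert misread.encode("latin-1").decode("cp1252") == original
```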
Due to Unicode's extensive documentation, vast repertoire of characters and stability policy of characters, the problems listed above are rarely a concern for Unicode. UTF-8 (which can encode over one million codepoints) has replaced the code-page method in terms of popularity on the Internet.
Private code pages
When, early in the history of personal computers, users did not find their character encoding requirements met, private or local code pages were created using terminate-and-stay-resident utilities or by re-programming BIOS EPROMs. In some cases, unofficial code page numbers were invented (e.g. CP895).
When more diverse character set support became available, most of those code pages fell into disuse, with some exceptions such as the Kamenický (KEYBCS2) encoding for the Czech and Slovak alphabets. Another is the Iran System encoding, created by the Iran System corporation for Persian-language support. It was widely used in Iran in DOS-based programs; after the introduction of Microsoft code page 1256 it became obsolete, although some Windows and DOS programs using the encoding remain in use and some Windows fonts with this encoding exist.
In order to overcome such problems, the IBM Character Data Representation Architecture level 2 specifically reserves ranges of code page IDs for user-definable and private-use assignments. Whenever such code page IDs are used, the user must not assume that the same functionality and appearance can be reproduced in another system configuration or on another device or system unless the user takes care of this specifically.
The code page range 57344–61439 (0xE000–0xEFFF) is officially reserved for user-definable code pages (or actually CCSIDs in the context of IBM CDRA), whereas the range 65280–65533 (0xFF00–0xFFFD) is reserved for any user-definable "private use" assignments.
For example, a non-registered custom variant of code page 437 (0x1B5) or 28591 (0x6FAF) could become 57781 (0xE1B5) or 61359 (0xEFAF), respectively, in order to avoid potential conflicts with other assignments and maintain the sometimes existing internal numerical logic in the assignments of the original code pages. An unregistered private code page not based on an existing code page, a device-specific code page like a printer font, which just needs a logical handle to become addressable for the system, a frequently changing download font, or a code page number with a symbolic meaning in the local environment could have an assignment in the private range like 65280 (0xFF00).
The code page IDs 0, 65534 (0xFFFE) and 65535 (0xFFFF) are reserved for internal use by operating systems such as DOS and must not be assigned to any specific code pages.
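A user-definable code page of this kind can be sketched with Python's charmap machinery: ASCII in the low half and a made-up private assignment in the high half. The table below is purely hypothetical, not a registered code page:

```python
import codecs

# Decoding table for a hypothetical private-use code page: ASCII at
# 0x00-0x7F, one invented assignment at 0x80, the rest undefined
# (U+FFFE marks an undefined position for the charmap codec).
table = [chr(i) for i in range(128)] + ["\ufffe"] * 128
table[0x80] = "\u20ac"  # invented: map byte 0x80 to the euro sign
decoding_table = "".join(table)

decoded, consumed = codecs.charmap_decode(b"A\x80", "strict", decoding_table)
assert (decoded, consumed) == ("A\u20ac", 2)
```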
See also
Windows code page
Character encoding
CCSID – IBM's official "code page" definitions and assignments
Charset detection
Unicode
References
External links
IBM CDRA glossary
IBM/ICU Charset Information
Microsoft Code Page Identifiers (Microsoft's list contains only code pages actively used by normal apps on Windows. See also Torsten Mohrin's list for the full list of supported code pages)
Character Sets And Code Pages At The Push Of A Button
Microsoft Chcp command: Display and set the console active code page
"Code Tables" Character Encoding Wikibook