https://en.wikipedia.org/wiki/Berthold%20Leibinger%20Innovationspreis |
The Berthold Leibinger Innovationspreis is an award for applied laser technology, given for innovations in the application or generation of laser light. It is open to participants worldwide. It is awarded biennially by the German non-profit foundation Berthold Leibinger Stiftung. Three prizes worth 100,000 euros are awarded. The prize winners are selected from eight finalists who present their work in person in a jury session. The jury is composed of international experts from different fields.
Recipients
2000 · 2002 · 2004 · 2006 · 2008 · 2010 · 2012 · 2014 · 2016 · 2018
2000
First Prize: Josef Schneider, MAN Roland Druckmaschinen AG, „Laser and digitally changed printing systems“
Second Prize: Martin Grabherr, ULM photonics GmbH, „VCSEL – vertical-cavity surface-emitting high-power laser diode“
Third Prize: Lu Yong Feng, National University of Singapore, „Laser micro processing in industry“
2002
First Prize: Work Group Disk Laser, Universität Stuttgart, „Disk laser“
Second Prize: Tibor Juhasz and Ronald Kurtz, IntraLase Inc., „Femtosecond laser scalpel for corneal surgery“
Third Prize: Stefan Hell, Marcus Dyba and Alexander Egner, Max Planck Institute for Biophysical Chemistry, „Optical nanoscopy with ultrashort pulse laser and stimulated emission“
2004
First Prize: Ursula Keller, ETH Zurich, „SESAM – Semiconductor Saturable Absorber Mirror for ultrafast lasers“
Second Prize: Andreas Tünnermann, Stefan Nolte and Holger Zellmer, Friedrich-Schiller-University, Jena / Fraunhofer Institute for Applied Optics and Precision Engineering, „High-power fiber lasers and their applications“
Third Prize: Axel Rolle, Specialized Hospital Coswig, Saxony, „Lung parenchymal laser surgery“
2006
First Prize: Karin Schütze and Raimund Schütze, P.A.L.M. Microlaser Technologies GmbH, a Company of the Carl Zeiss MicroImaging GmbH, „Laser micro beam and laser catapult for single cell capture“
Second Prize: Ian A. Walmsley, University of Oxford, „Metho |
https://en.wikipedia.org/wiki/Berthold%20Leibinger%20Zukunftspreis | The Berthold Leibinger Zukunftspreis (future prize) is an international award for "excellent research on the application or generation of laser light". Since 2006, it has been awarded biennially by the German non-profit foundation Berthold Leibinger Stiftung as part of its Laser Prizes, with an amount of 50,000 euros.
Recipients
Two Zukunftspreis laureates have also received the Nobel Prize in Physics: Gérard Mourou in 2018, and Anne L'Huillier in 2023.
See also
Berthold Leibinger Innovationspreis (affiliated innovation prize)
Berthold Leibinger (founder of issuing foundation)
List of physics awards |
https://en.wikipedia.org/wiki/George%20S.%20Myers | George Sprague Myers (February 2, 1905 – November 4, 1985) was an American ichthyologist who spent most of his career at Stanford University. He served as the editor of Stanford Ichthyological Bulletin as well as president of the American Society of Ichthyologists and Herpetologists. Myers was also head of the Division of Fishes at the United States National Museum, and held a position as an ichthyologist for the United States Fish and Wildlife Service. He was also an advisor in fisheries and ichthyology to the Brazilian Government.
He was a prolific writer of papers and books and is well known to aquarists as the man who first described numerous popular aquarium species such as the flame tetra (Hyphessobrycon flammeus), the black-winged hatchetfish (Carnegiella marthae), the ram cichlid (Microgeophagus ramirezi) and, most notably, the neon tetra. He also erected the genera Aphyosemion and Fundulopanchax, which include dozens of widely kept killifish species. He is perhaps best known to aquarists for his collaborations with William T. Innes who wrote the classic book Exotic Aquarium Fishes. Myers served as the scientific consultant for this seminal work in the aquarium literature and, after Innes retired, served as the editor for later editions. When Myers described the neon tetra in 1936, he named it Hyphessobrycon innesi in honor of Innes. The species was later moved to the genus Paracheirodon and is now known as Paracheirodon innesi.
He was an ichthyologist with the 1938 Allan Hancock Pacific Expedition. He participated as a biologist in the U.S. Navy's 1947 Bikini Scientific Resurvey.
Myers worked closely with fellow ichthyologist and Stanford Natural History Museum curator, Margaret Hamilton Storey.
Taxa named in his honor
In the scientific field of herpetology his major interest was amphibians.
A genus of Philippine snake, Myersophis, was named in his honor by Edward Harrison Taylor in 1963.
A genus of South Pacific lizards, Geomyersia, was named in hi |
https://en.wikipedia.org/wiki/Advanced%20Television%20Systems%20Committee | The Advanced Television Systems Committee (ATSC) is an international nonprofit organization developing technical standards for digital terrestrial television and data broadcasting. ATSC's 120-plus member organizations represent the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite and semiconductor industries.
ATSC was initially formed in 1983 to develop a first-generation digital television standard that could replace existing analog transmission systems. The new digital system became known as "ATSC 1.0." ATSC 1.0 is in use in the United States, Canada, Mexico, South Korea, Honduras, and the Dominican Republic.
ATSC then developed a next-generation digital television standard known as "ATSC 3.0." ATSC 3.0 was commercially deployed in South Korea in May 2017 and was approved for voluntary use in the United States in November 2017.
See also
ATSC standards
ATSC tuner |
https://en.wikipedia.org/wiki/Venture%20Arctic | Venture Arctic is an ecosystem simulation video game from Pocketwatch Games. Following its predecessor Wildlife Tycoon: Venture Africa, this Arctic sequel combines educational value and entertainment. The game allows players to build and manage ecosystems of Arctic animals by interacting with the environment using "tools of nature", such as the sun, snow, wind, and sickness.
There are five different environments in the game, from an oil-rig off the coast of Svalbard, Norway, to a new pipeline disturbing the caribou herds in the Alaskan level. While the game maintains impartiality on environmental issues, players are left to discover for themselves the impact of global warming and deforestation throughout the seasons in their sim-ecosystems and the 22 animal species which comprise them.
The visual art was inspired by traditional Inuit sculpture. The music is a reinterpretation of Vivaldi's Four Seasons Concerto using Inuit-inspired instruments and instruments brought to the region by explorers. Venture Arctic was designed, produced, and programmed by Andy Schatz, founder of Pocketwatch Games and host of the 2007 and 2008 Independent Games Festival Awards ceremonies.
External links
Road to the Independent Games Festival
Video game sequels
Windows games
MacOS games
2007 video games
Indie games
Biological simulation video games
Climate change in fiction
Environment of the Arctic
Culture of the Arctic
Torque (game engine) games
Video games developed in the United States
Video games set in the Arctic
Pocketwatch Games |
https://en.wikipedia.org/wiki/Image%20quality | Image quality can refer to the level of accuracy with which different imaging systems capture, process, store, compress, transmit and display the signals that form an image. Another definition refers to image quality as "the weighted combination of all of the visually significant attributes of an image". The difference between the two definitions is that the former focuses on the characteristics of signal processing in different imaging systems, while the latter focuses on the perceptual assessments that make an image pleasant for human viewers.
Image quality should not be confused with image fidelity. Image fidelity refers to the ability of a process to render a given copy in a perceptually similar way to the original (without distortion or information loss), e.g., through a digitization or conversion process from analog media to a digital image.
The process of determining the level of accuracy is called Image Quality Assessment (IQA). Image quality assessment is part of the quality of experience measures. Image quality can be assessed using two methods: subjective and objective. Subjective methods are based on the perceptual assessment of a human viewer about the attributes of an image or set of images, while objective methods are based on computational models that can predict perceptual image quality. Objective and subjective methods do not necessarily agree with each other: a human viewer might perceive stark differences in quality in a set of images where a computer algorithm might not.
Subjective methods are costly, require a large number of people, and are impossible to automate in real-time. Therefore, the goal of image quality assessment research is to design algorithms for objective assessment that are also consistent with subjective assessments. The development of such algorithms has a lot of potential applications. They can be used to monitor image quality in control quality systems, to benchmark image processing systems and algorithms and to optimize |
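Objective assessment can be illustrated with a minimal full-reference metric. The sketch below uses peak signal-to-noise ratio (PSNR) purely as an example of a computational quality score — modern IQA research favors perceptual models, and the flat-pixel-sequence interface here is an assumption for brevity:

```python
import math

def psnr(ref, test, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length
    sequences of pixel values: a simple full-reference objective
    quality score (higher means closer to the reference)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

For example, a test image differing from its 8-bit reference by one grey level everywhere has MSE 1 and hence a PSNR of roughly 48 dB.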
https://en.wikipedia.org/wiki/Low-barrier%20hydrogen%20bond | A low-barrier hydrogen bond (LBHB) is a special type of hydrogen bond. LBHBs can occur when the pKa values of the two heteroatoms are closely matched, which allows the hydrogen to be shared more equally between them. This hydrogen-sharing causes the formation of especially short, strong hydrogen bonds.
Description
Standard hydrogen bonds are longer (e.g., 2.8 Å for an O···O hydrogen bond), and the hydrogen ion clearly belongs to one of the heteroatoms. When the pKa values of the heteroatoms are closely matched, an LBHB becomes possible at a shorter distance (~2.55 Å). When the distance decreases further (< 2.29 Å), the bond is characterized as a single-well or short-strong hydrogen bond.
Proteins
Low barrier hydrogen bonds occur in the water-excluding environments of proteins. Multiple residues act together in a charge-relay system to control the pKa values of the residues involved. LBHBs also occur on the surfaces of proteins, but are unstable due to their proximity to bulk water, and the conflicting requirements of strong salt-bridges in protein-protein interfaces.
Enzyme catalysis
Low-barrier hydrogen bonds have been proposed to be relevant to enzyme catalysis in two types of circumstance. Firstly, a low-barrier hydrogen bond in a charge relay network within an active site could activate a catalytic residue (e.g. between acid and base within a catalytic triad). Secondly, the formation of a LBHB could form during catalysis to stabilise a transition state (e.g. with substrate transition state in an oxyanion hole). Both of these mechanisms are contentious, with theoretical and experimental evidence split on whether they occur. Since the 2000s, the general consensus has been that LBHBs are not used by enzymes to aid catalysis. However, in 2012, a low-barrier hydrogen bond has been proposed to be involved in phosphate-arsenate discrimination for a phosphate transport protein. This finding might indicate the possibility of low-barrier hydrogen bonds playing a catalytic role in ion size sele |
https://en.wikipedia.org/wiki/Closed%20testing%20procedure | In statistics, the closed testing procedure is a general method for performing more than one hypothesis test simultaneously.
The closed testing principle
Suppose there are k hypotheses H1,..., Hk to be tested and the overall type I error rate is α. The closed testing principle allows the rejection of any one of these elementary hypotheses, say Hi, if all possible intersection hypotheses involving Hi can be rejected using valid local level-α tests; the adjusted p-value of Hi is the largest local p-value among those intersection hypotheses. The procedure controls the family-wise error rate for all k hypotheses at level α in the strong sense.
Example
Suppose there are three hypotheses H1,H2, and H3 to be tested and the overall type I error rate is 0.05. Then H1 can be rejected at level α if H1 ∩ H2 ∩ H3, H1 ∩ H2, H1 ∩ H3 and H1 can all be rejected using valid tests with level α.
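The principle can be sketched directly in code (an illustration, not part of the original article): each intersection hypothesis is tested here with a Bonferroni local test, i.e., the intersection of |S| hypotheses is rejected when the smallest p-value in S is at most α/|S|.

```python
from itertools import combinations

def closed_testing(pvals, alpha=0.05):
    """Closed testing procedure: reject elementary hypothesis H_i iff
    every intersection hypothesis containing H_i is rejected by a
    valid local level-alpha test (here: a Bonferroni test, which
    rejects an intersection of |S| hypotheses if min p <= alpha/|S|).
    Returns the indices of the rejected elementary hypotheses."""
    k = len(pvals)
    rejected = []
    for i in range(k):
        intersections_ok = all(
            min(pvals[j] for j in s) <= alpha / len(s)
            for r in range(1, k + 1)
            for s in combinations(range(k), r)
            if i in s
        )
        if intersections_ok:
            rejected.append(i)
    return rejected
```

With p-values (0.01, 0.04, 0.30) and α = 0.05, only the first hypothesis is rejected — the same decisions as the Holm–Bonferroni method, which arises exactly from this choice of local test.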
Special cases
The Holm–Bonferroni method is a special case of a closed test procedure for which each intersection null hypothesis is tested using the simple Bonferroni test. As such, it controls the family-wise error rate for all the k hypotheses at level α in the strong sense.
Multiple test procedures developed using the graphical approach for constructing and illustrating multiple test procedures are a subclass of closed testing procedures.
See also
Multiple comparisons
Holm–Bonferroni method
Bonferroni correction |
https://en.wikipedia.org/wiki/Timway | Timway () is a web portal and directory primarily serving Hong Kong. The Timway Hong Kong Search Engine is designed for searching web sites in Hong Kong. It supports web search query in English and Chinese, and indexes web pages in both languages.
The Timway Hong Kong Search Engine was introduced in 1997 by Tim Yu. It was the first directory in Traditional Chinese and sorted results by popularity and freshness instead of alphabetical order.
Sina.com.hk used Timway in addition to Google for finding Hong Kong webpages. Yahoo also cooperated with Timway for the search engine marketing business. According to web traffic analysis company Alexa, Timway was one of the top ten most popular websites in Hong Kong by 2009.
Timway now also sells web hosting and other services.
History
The search engine was first created by its founder Tim Yu in July 1997. An engineer and a book lover, Yu believed that a Hong Kong-oriented search engine would be more effective at processing massive amounts of data (in both English and Chinese) and would let users find exactly the information they needed. Seeing the potential in this area, Yu started a searchable directory in Hong Kong. This initiative predated Yahoo! HK, marking a milestone in the development of the Hong Kong search portal business.
Milestones:
Founded in July 1997 by Tim Yu, using a former domain name of hksrch.com.
Registered as a Hong Kong registered company in February 1998.
Changed the domain name to timway.com in 1999.
An education portal was established in 2000, supplying a wide range of education courses. Cooperation with tens of education providers made it an "education supermarket" and made finding education courses more convenient.
In 2005, Timway appointed Professor Michael Chau as the senior technology advisor.
By 2009, Timway was one of the top ten websites in Hong Kong according to web traffic analysis company Alexa, ranked by popularity.
Timway ran a website called Timway Quiz, which made us |
https://en.wikipedia.org/wiki/SCSI%20Multimedia%20Commands | SCSI Multimedia Commands (MMC) defines a SCSI/ATAPI based command set for accessing and controlling devices of type 05h. Such devices read or write optical media: CD, DVD, BD. The T10 subcommittee is responsible for developing MMC, as well as other SCSI command set standards. MMC was approved by ANSI in December 1997.
See also
Mount Rainier (MRW)
Layer Jump Recording (LJR)
Optical disc recording modes
Small Form Factor committee (SFF) |
https://en.wikipedia.org/wiki/Nopcsaspondylus | Nopcsaspondylus (meaning "Nopcsa's vertebra", in reference to the original describer) is a genus of rebbachisaurid sauropod dinosaur (a type of large, long-necked quadrupedal herbivorous dinosaur) from the Cenomanian-age (Upper Cretaceous) Candeleros Formation of Neuquén, Argentina. It is based on a now-lost back vertebra described by Nopcsa in 1902 but not named at the time. The specimen had a small vertebral body and large hollows, now known to be typical of rebbachisaurids. |
https://en.wikipedia.org/wiki/Red%20Sea%20cliff%20swallow | The Red Sea cliff swallow (Petrochelidon perdita), also known as the Red Sea swallow, is a species of bird in the family Hirundinidae.
Distribution and habitat
It is possibly endemic to Sudan. It is known only from a single specimen, found in May 1984 at the Sanganeb lighthouse, north-east of Port Sudan, Sudan. This enigmatic swallow may still exist, though the lack of recent records is puzzling. Unidentified swallows have been sighted in Lake Langano (c. 20 birds) and in Awash National Park (3–8 birds) in the East African Rift in Ethiopia. Its scientific name means the lost swallow and it has been suggested that it might breed in the hills surrounding the Red Sea in Sudan or Ethiopia.
The Lake Langano birds had blue-black upper parts with a rump varying from off-white to pale pink to rufous whilst the Awash swallows are described as having brownish throats and brownish-white underparts. The variations are not conclusive for attribution to the original specimen but cliff swallows are variable. It is alternatively placed in the genus Hirundo. |
https://en.wikipedia.org/wiki/Wineberry%20latent%20virus | Wineberry latent virus (WLV) is a plant pathogenic virus of the family Alphaflexiviridae.
External links
ICTVdB - The Universal Virus Database: Wineberry latent virus
Family Groups - The Baltimore Method
Potexviruses
Viral plant pathogens and diseases |
https://en.wikipedia.org/wiki/Panicum%20mosaic%20satellite%20virus | Panicum mosaic satellite virus (SPMV) is a plant satellite virus in genus Papanivirus, which is a member of realm Riboviria without assigned family or order. It only infects grasses which are infected by Panicum mosaic virus. One study found that 72% of Stenotaphrum secundatum (St Augustine grass) infected with panicum mosaic virus was also infected with SPMV. In addition to SPMV, many plants infected with panicum mosaic virus are also infected with satellite RNAs. |
https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28conformal%20mappings%29 | In mathematics, Liouville's theorem, proved by Joseph Liouville in 1850, is a rigidity theorem about conformal mappings in Euclidean space. It states that any smooth conformal mapping on a domain of Rn, where n > 2, can be expressed as a composition of translations, similarities, orthogonal transformations and inversions: they are Möbius transformations (in n dimensions). This theorem severely limits the variety of possible conformal mappings in R3 and higher-dimensional spaces. By contrast, conformal mappings in R2 can be much more complicated – for example, all simply connected planar domains are conformally equivalent, by the Riemann mapping theorem.
Generalizations of the theorem hold for transformations that are only weakly differentiable. The focus of such a study is the non-linear Cauchy–Riemann system that is a necessary and sufficient condition for a smooth mapping f to be conformal:

Df^T Df = |det Df|^(2/n) I,

where Df is the Jacobian derivative, T is the matrix transpose, and I is the identity matrix. A weak solution of this system is defined to be an element f of the Sobolev space W^(1,n)_loc(Ω, R^n) with non-negative Jacobian determinant almost everywhere, such that the Cauchy–Riemann system holds at almost every point of Ω. Liouville's theorem is then that every weak solution (in this sense) is a Möbius transformation, meaning that it has the form

f(x) = b + α A (I − ε (x − a)(x − a)^T / |x − a|^2) (x − a) / |x − a|^ε,

where a, b are vectors in R^n, α is a scalar, A is a rotation matrix, ε is 0 or 2, and the matrix in parentheses is I or a Householder matrix (so, orthogonal). Equivalently stated, any quasiconformal map of a domain in Euclidean space that is also conformal is a Möbius transformation. This equivalent statement justifies using the Sobolev space W^(1,n), since membership in W^(1,n)_loc then follows from the geometrical condition of conformality and the ACL characterization of Sobolev space. The result is not optimal however: in even dimensions n, the theorem also holds for solutions that are only assumed to be in the space W^(1,n/2)_loc, and this result is sharp in the sense that there are weak soluti |
https://en.wikipedia.org/wiki/Vivaldi%20coordinates | Vivaldi Coordinate System is a decentralized Network Coordinate System, that allows for distributed systems such as peer-to-peer networks to estimate round-trip time (RTT) between arbitrary nodes in a network.
Through this scheme, network topology awareness can be used to tune the network behavior to more efficiently distribute data. For example, in a peer-to-peer network, more responsive identification and delivery of content can be achieved. In the Azureus application, Vivaldi is used to improve the performance of the distributed hash table that facilitates query matches.
Design
The algorithm behind Vivaldi is an optimization algorithm that finds the most stable configuration of points in a Euclidean space such that distances between the points are as close as possible to real-world measured distances. In effect, the algorithm attempts to embed the multi-dimensional space of latency measurements between computers into a low-dimensional Euclidean space. A good analogy is a spring-and-mass system in 3D space where each node is a mass and each connection between nodes is a spring. The rest lengths of the springs are the measured RTTs between nodes, and when the system is simulated, the coordinates of the nodes correspond to the resulting 3D positions of the masses in the lowest-energy state of the system. This design is taken from previous work in the field; the contribution Vivaldi makes is to run this algorithm in parallel across all the nodes in the network.
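A single relaxation step of the spring analogy can be sketched as follows — a simplified, constant-timestep version for illustration; the published algorithm additionally adapts the timestep using per-node error estimates and adds a "height" component to the coordinates:

```python
import math
import random

def vivaldi_step(xi, xj, rtt, delta=0.25):
    """Move node i's coordinate xi one step along the force of the
    spring to node j: push away from j if the measured RTT exceeds
    the predicted (Euclidean) distance, pull toward j otherwise."""
    dist = math.dist(xi, xj)
    if dist == 0:
        # Coincident nodes: push in a random unit direction.
        d = [random.gauss(0, 1) for _ in xi]
        n = math.hypot(*d)
        unit = [c / n for c in d]
    else:
        unit = [(a - b) / dist for a, b in zip(xi, xj)]
    error = rtt - dist  # spring displacement (measured minus predicted)
    return [a + delta * error * u for a, u in zip(xi, unit)]
```

For example, a node at the origin with a neighbour at (3, 4) and a measured RTT of 10 moves away from the neighbour, since the predicted distance (5) underestimates the measurement.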
Advantages
Vivaldi can theoretically scale indefinitely.
The Vivaldi algorithm is relatively simple to implement.
Drawbacks
Vivaldi's coordinates are points in a Euclidean space, which requires the predicted distances to obey the triangle inequality as well as Euclidean symmetry. However, there are many triangle inequality violations (TIVs) and symmetry violations on the Internet, mostly because of inefficient routing or distance distortion because connections on the inter |
https://en.wikipedia.org/wiki/PCell | PCell stands for parameterized cell, a concept used widely in the automated design of analog integrated circuits. A PCell represents a part or a component of the circuit whose structure is dependent on one or more parameters. Hence, it is a cell which is automatically generated by electronic design automation (EDA) software based on the values of these parameters. For example, one can create a transistor PCell and then use different instances of the same with different user defined lengths and widths. Vendors of EDA software sometimes use different names for the concept of parameterized cells, e.g. T-Cell and Magic Cell.
Application
In electronic circuit designs, cells are basic units of functionality. A given cell may be placed or instantiated many times. A P-Cell is more flexible than a non-parameterized cell because different instances may have different parameter values and, therefore, different structures. For example, rather than having many different cell definitions to represent the variously sized transistors in a given design, a single PCell may take a transistor's dimensions (width and length) as parameters. Different instances of a single PCell can then represent transistors of different sizes, but otherwise similar characteristics.
The structures within an integrated circuit and the rules (design rules) governing their physical dimensions are often complex, thereby making the structures tedious to draw by hand. By using PCells a circuit designer can easily generate a large number of various structures that only differ in a few parameters, thus increasing design productivity and consistency.
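The idea can be sketched in a few lines of Python — a toy generator with made-up layer names and a single hypothetical extension rule `ext`, not any vendor's actual PCell API: the cell's geometry is recomputed from its parameters each time it is instantiated.

```python
def transistor_pcell(width, length, ext=0.1):
    """Toy transistor PCell: returns layout rectangles as
    (layer, x0, y0, x1, y1) tuples derived from the gate width and
    length, plus a fixed poly/diffusion extension rule `ext`
    (all names and rules here are illustrative assumptions)."""
    poly = ("poly", 0.0, -ext, length, width + ext)   # gate, extending past diffusion
    diff = ("diff", -ext, 0.0, length + ext, width)   # active area under the gate
    return [poly, diff]

# Two instances of the same PCell with different parameter values
# yield different structures:
small = transistor_pcell(width=0.5, length=0.18)
large = transistor_pcell(width=4.0, length=0.35)
```

This is the productivity win described above: one cell definition covers every transistor size, instead of one hand-drawn cell per size.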
Most often, PCell implies a physical PCell, i.e., a physical representation of an electronic component describing its physical structure inside an integrated circuit (IC). Although most PCells are physical PCells, device symbols in circuit schematics may also be implemented as PCells.
Underlying characteristics of all PCells are a dependence on (input) parame |
https://en.wikipedia.org/wiki/Blom%27s%20scheme | Blom's scheme is a symmetric threshold key exchange protocol in cryptography. The scheme was proposed by the Swedish cryptographer Rolf Blom in a series of articles in the early 1980s.
A trusted party gives each participant a secret key and a public identifier, which enables any two participants to independently create a shared key for communicating. However, if an attacker can compromise the keys of at least k users, they can break the scheme and reconstruct every shared key. Blom's scheme is a form of threshold secret sharing.
Blom's scheme is currently used by the HDCP (Version 1.x only) copy protection scheme to generate shared keys for high-definition content sources and receivers, such as HD DVD players and high-definition televisions.
The protocol
The key exchange protocol involves a trusted party (Trent) and a group of users. Let Alice and Bob be two users of the group.
Protocol setup
Trent chooses a random and secret symmetric k × k matrix D over the finite field GF(p), where p is a prime number. D is required whenever a new user is to be added to the key sharing group.
Inserting a new participant
New users Alice and Bob want to join the key exchanging group. Trent chooses public identifiers for each of them, i.e., k-element vectors over GF(p): I_Alice and I_Bob.
Trent then computes their private keys:
g_Alice = D I_Alice
g_Bob = D I_Bob
Each will use their private key to compute shared keys with other participants of the group.
Computing a shared key between Alice and Bob
Now Alice and Bob wish to communicate with one another. Alice has Bob's identifier I_Bob and her private key g_Alice.
She computes the shared key k_AB = g_Alice^T I_Bob, where ^T denotes matrix transpose. Bob does the same, using his private key g_Bob and her identifier I_Alice, giving the same result because D is symmetric:
k_AB = (D I_Alice)^T I_Bob = I_Alice^T D I_Bob = (D I_Bob)^T I_Alice = k_BA
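A toy end-to-end run can make the symmetry argument concrete. The parameters below (k = 3, p = 101) are illustrative only — real deployments use far larger values — and the random identifiers are assumptions for the demo:

```python
import random

def blom_demo(k=3, p=101, seed=7):
    """Toy Blom key exchange over GF(p): returns the shared key as
    computed independently by Alice and by Bob."""
    rng = random.Random(seed)
    # Trent's secret symmetric k x k matrix D over GF(p)
    D = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i, k):
            D[i][j] = D[j][i] = rng.randrange(p)
    # Public identifiers: k-element vectors over GF(p)
    ida = [rng.randrange(p) for _ in range(k)]
    idb = [rng.randrange(p) for _ in range(k)]

    def matvec(M, v):  # private key g = D * identifier (mod p)
        return [sum(M[i][j] * v[j] for j in range(k)) % p for i in range(k)]

    def dot(u, v):     # shared key = g^T * identifier (mod p)
        return sum(a * b for a, b in zip(u, v)) % p

    g_alice, g_bob = matvec(D, ida), matvec(D, idb)
    return dot(g_alice, idb), dot(g_bob, ida)
```

Because D is symmetric, g_Alice^T I_Bob = I_Alice^T D I_Bob = g_Bob^T I_Alice, so the two independently computed values always agree.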
Attack resistance
In order to ensure that at least k keys must be compromised before an attacker can compute every shared key, identifiers must be k-linearly independent: all sets of k randomly |
https://en.wikipedia.org/wiki/Targeted%20temperature%20management | Targeted temperature management (TTM), previously known as therapeutic hypothermia or protective hypothermia, is an active treatment that tries to achieve and maintain a specific body temperature in a person for a specific duration of time in an effort to improve health outcomes during recovery after a period of stopped blood flow to the brain. This is done in an attempt to reduce the risk of tissue injury following lack of blood flow. Periods of poor blood flow may be due to cardiac arrest or the blockage of an artery by a clot, as in the case of a stroke.
Targeted temperature management improves survival and brain function following resuscitation from cardiac arrest. Evidence supports its use following certain types of cardiac arrest in which an individual does not regain consciousness. The target temperature is often between 32-34°C. Targeted temperature management following traumatic brain injury is of unclear benefit. While associated with some complications, these are generally mild.
Targeted temperature management is thought to prevent brain injury by several methods, including decreasing the brain's oxygen demand, reducing the production of neurotransmitters like glutamate, as well as reducing free radicals that might damage the brain. Body temperature may be lowered by many means, including cooling blankets, cooling helmets, cooling catheters, ice packs and ice water lavage.
Medical uses
Targeted temperature management may be used in the following conditions:
Cardiac arrest
The 2013 ILCOR and 2010 American Heart Association guidelines support the use of cooling following resuscitation from cardiac arrest. These recommendations were largely based on two trials from 2002 which showed improved survival and brain function when cooled to after cardiac arrest.
However, more recent research suggests that there is no benefit to cooling to when compared with less aggressive cooling only to a near-normal temperature of ; it appears cooling is effective because it |
https://en.wikipedia.org/wiki/PICkit | PICkit is a family of programmers for PIC microcontrollers made by Microchip Technology. They are used to program and debug microcontrollers, as well as program EEPROM. Some models also feature logic analyzers and serial communications (UART) tools.
Versions
PICkit 1
The PICkit 1 — introduced on March 31, 2003, for US$36 — was a rudimentary USB programmer for PIC microcontrollers, produced by Microchip Technology, the manufacturer of the PIC series of microcontrollers. It was integrated into a demonstrator board, featuring eight LEDs, a switch, and a potentiometer. Its default program, explained in the documentation, rotates the LEDs in series. The light display's direction and speed of rotation can be changed with the button and potentiometer on the PICkit board.
PICkit 2
The PICkit 2 — introduced in May 2005 — replaced the PICkit 1. The most notable difference between the two is that the PICkit 2 has a separate programmer/debugger unit which plugs into the board carrying the chip to be programmed, whereas the PICkit 1 is a single unit. This makes it possible to use the programmer with a custom circuit board via an in-circuit serial programming (ICSP) header. This feature is not intended for so-called "production" programming, however.
The PICkit 2 uses an internal PIC18F2550 with FullSpeed USB. The latest PICkit 2 firmware allows the user to program and debug most of the 8 and 16-bit PICmicro and dsPIC members of the Microchip product line.
The PICkit 2 is open to the public, including its hardware schematic, firmware source code (in C), and application programs (in C#). End users and third parties can easily modify both the hardware and software for enhanced features, e.g., a Linux version of the PICkit 2 application software, DOS-style CMD support, etc.
The PICkit 2 has a programmer-to-go (PTG) feature, which can download the hex file and programming instructions into on-board memory (128 KB I²C EEPROM or 256 KB I²C EEPROM), so that no PC is |
https://en.wikipedia.org/wiki/Memory%20ProteXion | For computer memory, Memory ProteXion, found in IBM xSeries servers, is a form of "redundant bit steering". This technology uses redundant bits in a data packet to recover from a DIMM failure.
Memory ProteXion differs from normal ECC error correction in that it uses only 6 bits for ECC, leaving 2 bits spare. These 2 spare bits can be used to re-route data away from failed memory, much like a hot spare in a RAID array: the ECC is used to reconstruct the data, and the spare bits to store it.
Memory ProteXion, also known as "redundant bit steering", uses redundant bits in a data packet to provide backup in the event of a DIMM failure. A single failure does not cause a predictive failure analysis (PFA) to be issued on the DIMM, but two or more failures will issue a PFA to inform the system administrator that a replacement is needed.
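The steering idea can be sketched abstractly — a toy model of one failed bit position and one spare, not IBM's actual DIMM layout or bit widths: once a position is known to be bad, writes divert that bit into the spare, and reads take the spare instead of the unreliable cell.

```python
def write_word(data_bits, failed_pos):
    """Store a word, steering the bit at the known-failed position
    into a spare bit; whatever the dead cell holds is ignored."""
    stored = list(data_bits)
    spare = stored[failed_pos]
    stored[failed_pos] = 0  # placeholder: the dead cell is unreliable
    return stored, spare

def read_word(stored, spare, failed_pos):
    """Reassemble the word, substituting the spare bit for the
    failed position."""
    word = list(stored)
    word[failed_pos] = spare
    return word
```

Even if the failed cell later returns the wrong value, the read is unaffected, because the real bit lives in the spare — which is why a single steered failure needs no immediate service action.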
See also
Chipkill
External links
Memory ProteXion
Computer memory
Error detection and correction |
https://en.wikipedia.org/wiki/Acquired%20taste | An acquired taste is an appreciation for something unlikely to be enjoyed by a person who has not had substantial exposure to it. It is the opposite of innate taste, which is the appreciation for things that are enjoyable by most persons without prior exposure to them.
Characteristics
In case of food and drink, the difficulty of enjoying the product may be due to a strong odor (such as certain types of cheese, durian, hákarl, black salt, nattō, asafoetida, surströmming, or stinky tofu), taste (as in alcoholic beverages, coffee, Vegemite or Marmite, bitter teas, liquorice/salty liquorice, South Asian pickles, malt bread, unsweetened chocolate, garnatálg, rakfisk, soused herring, haggis), mouthfeel (such as sashimi and sushi featuring uncooked seafood), appearance, or association (such as eating insects or organ meat).
Acquisition
General
The process of acquiring a taste can involve developmental maturation, genetics (of both taste sensitivity and personality), family example, and biochemical reward properties of foods. Infants are born preferring sweet foods and rejecting sour and bitter tastes, and they develop a preference for salt at approximately 4 months. However, vegetables tend to be a favourite as infants start to learn to feed themselves. Neophobia (fear of novelty) tends to vary with age in predictable, but not linear, ways. Babies just beginning to eat solid foods generally accept a wide variety of foods, toddlers and young children are relatively neophobic towards food, and older children, adults, and the elderly are often adventurous eaters with wide-ranging tastes.
The general personality trait of novelty-seeking does not necessarily correlate highly with willingness to try new foods. Level of food adventurousness may explain much of the variability of food preferences observed in "supertasters". Supertasters are highly sensitive to bitter, spicy, and pungent flavours, and some avoid them and like to eat only mild, plain foods, but many supertaste |
https://en.wikipedia.org/wiki/Ticket%20Granting%20Ticket | In some computer security systems, a Ticket Granting Ticket or Ticket to Get Tickets (TGT) is a small, encrypted identification file with a limited validity period. After authentication, this file is granted to a user by the key distribution center (KDC) subsystem of authentication services such as Kerberos, and is used to protect subsequent data traffic. The TGT file contains the session key, its expiration date, and the user's IP address; the IP address binding helps protect the user from man-in-the-middle attacks.
The TGT is used to obtain a service ticket from the Ticket Granting Service (TGS). The user is granted access to network services only after this service ticket is provided.
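The two-step exchange can be sketched with a toy model in which HMAC-signed JSON blobs stand in for encrypted tickets. All names, keys, and fields below are hypothetical; real Kerberos tickets are encrypted, not merely signed, and carry many more fields:

```python
import hashlib
import hmac
import json
import time

KDC_KEY = b"kdc-long-term-secret"   # hypothetical secret shared by KDC and TGS

def issue_tgt(user, ip, lifetime=3600):
    """Authentication service: grant a TGT after the user authenticates."""
    body = {"user": user, "ip": ip, "session_key": "sess-key-123",
            "expires": time.time() + lifetime}
    blob = json.dumps(body, sort_keys=True).encode()
    mac = hmac.new(KDC_KEY, blob, hashlib.sha256).hexdigest()
    return {"blob": blob.decode(), "mac": mac}

def issue_service_ticket(tgt, service):
    """Ticket Granting Service: validate the TGT, then issue a service ticket."""
    blob = tgt["blob"].encode()
    expected = hmac.new(KDC_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tgt["mac"]):
        raise ValueError("forged TGT")
    body = json.loads(blob)
    if body["expires"] < time.time():
        raise ValueError("expired TGT")
    return {"service": service, "user": body["user"]}

tgt = issue_tgt("alice", "10.0.0.5")
ticket = issue_service_ticket(tgt, "fileserver")
print(ticket)   # {'service': 'fileserver', 'user': 'alice'}
```

The point of the sketch is the indirection: the client authenticates once to obtain the TGT, then presents only the TGT, never its long-term credentials, to obtain per-service tickets.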
Key management
Computer access control protocols
Authentication protocols
Key transport protocols
Computer network security |
https://en.wikipedia.org/wiki/Association%20of%20Los%20Alamos%20Scientists | The Association of Los Alamos Scientists (ALAS) was founded on August 30, 1945, by a group of scientists who had worked on the development of the atomic bomb at the Los Alamos Laboratory, a division of the Manhattan Project.
Purpose
The purpose of the organization was "to promote the attainment and use of scientific and technological advances in the best interests of humanity", according to the manifesto, available in the archives of the University of Chicago.
The scientists believed that they, "by virtue of their special knowledge, have, in certain spheres, special political and social responsibilities beyond their obligations as individual citizens". The association sought to carry out these responsibilities by keeping its members informed, "and by providing a forum through which their views can be publicly and authoritatively expressed".
The ALAS concentrated its activities principally in promoting international control of nuclear power and directing it to peaceful uses. Its members also attempted to promote responsible uses of science, and the freedom and integrity of scientists and scientific research.
The group sponsored public education on the nature and control of atomic energy through lectures, films, and exhibits, and the distribution of literature. It also attempted to influence public policy by means of informed statements to the press and correspondence with high government officials and congressmen. |
https://en.wikipedia.org/wiki/ACIGA | The Australian Consortium for Interferometric Gravitational Astronomy (ACIGA) is a collaboration of Australian research institutions involved in the international gravitational wave research community.
The institutions associated with ACIGA are:
The Australian National University
University of Western Australia
University of Adelaide
Monash University
University of Melbourne
CSIRO optical technology group
Charles Sturt University
See also
AIGO |
https://en.wikipedia.org/wiki/Weisz%E2%80%93Prater%20criterion | The Weisz–Prater criterion is a method used to estimate the influence of pore diffusion on reaction rates in heterogeneous catalytic reactions. If the criterion is satisfied, pore diffusion limitations are negligible. The criterion is
N_WP = ℛ R_p² / (C_s D_eff),
where ℛ is the reaction rate per volume of catalyst, R_p is the catalyst particle radius, C_s is the reactant concentration at the particle surface, and D_eff is the effective diffusivity. Diffusion is usually in the Knudsen regime when the average pore radius is less than 100 nm.
For a given effectiveness factor, η, and reaction order, n, N_WP can be related to the dimensionless concentration drop β across the particle; for small values of β this relation can be approximated using the binomial theorem. Assuming an effectiveness factor close to unity with a low reaction order gives a value of N_WP of about 0.1. Therefore, for many conditions, if N_WP ≤ 0.1 then pore diffusion limitations can be excluded. |
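As a rough sketch of how the criterion is applied in practice (the numbers below are made up for illustration, not measured data):

```python
def weisz_prater(rate_obs, radius, conc_surface, diff_eff):
    """Return N_WP = rate_obs * radius**2 / (conc_surface * diff_eff).

    rate_obs:     observed reaction rate per catalyst volume [mol/(m^3 s)]
    radius:       catalyst particle radius [m]
    conc_surface: reactant concentration at the particle surface [mol/m^3]
    diff_eff:     effective diffusivity [m^2/s]
    """
    return rate_obs * radius ** 2 / (conc_surface * diff_eff)

# Hypothetical data for a 0.5 mm particle:
n_wp = weisz_prater(rate_obs=0.02, radius=5e-4, conc_surface=10.0, diff_eff=1e-8)
print(n_wp)   # about 0.05: below 0.1, so pore diffusion can be neglected
```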
https://en.wikipedia.org/wiki/Stochastic%20partial%20differential%20equation | Stochastic partial differential equations (SPDEs) generalize partial differential equations via random force terms and coefficients, in the same way ordinary stochastic differential equations generalize ordinary differential equations.
They have relevance to quantum field theory, statistical mechanics, and spatial modeling.
Examples
One of the most studied SPDEs is the stochastic heat equation, which may formally be written as
∂_t u = Δu + ξ,
where Δ is the Laplacian and ξ denotes space-time white noise. Other examples also include stochastic versions of famous linear equations, such as the wave equation and the Schrödinger equation.
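As a sketch of how the stochastic heat equation is discretized in practice, a naive explicit Euler–Maruyama scheme on [0, 1] with Dirichlet boundary conditions might look like this; the grid sizes and the per-cell noise scaling are illustrative choices, not a prescribed method:

```python
import random

random.seed(0)
nx, nt = 50, 2000
dx, dt = 1.0 / nx, 1e-4        # dt <= dx**2 / 2 keeps the explicit scheme stable
u = [0.0] * (nx + 1)           # u = 0 at both ends (Dirichlet)

for _ in range(nt):
    new = u[:]
    for i in range(1, nx):
        lap = (u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        # one cell of discretized space-time white noise: variance dt / dx
        noise = random.gauss(0.0, (dt / dx) ** 0.5)
        new[i] = u[i] + dt * lap + noise
    u = new

print(max(abs(v) for v in u))  # the rough solution stays bounded
```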
Discussion
One difficulty is their lack of regularity. In one dimensional space, solutions to the stochastic heat equation are only almost 1/2-Hölder continuous in space and 1/4-Hölder continuous in time. For dimensions two and higher, solutions are not even function-valued, but can be made sense of as random distributions.
For linear equations, one can usually find a mild solution via semigroup techniques.
However, problems start to appear when considering non-linear equations. For example
∂_t u = Δu + P(u) + ξ,
where P is a polynomial. In this case it is not even clear how one should make sense of the equation. Such an equation will also not have a function-valued solution in dimension larger than one, and hence no pointwise meaning. It is well known that the space of distributions has no product structure. This is the core problem of such a theory. This leads to the need for some form of renormalization.
An early attempt to circumvent such problems for some specific equations was the so called da Prato–Debussche trick which involved studying such non-linear equations as perturbations of linear ones. However, this can only be used in very restrictive settings, as it depends on both the non-linear factor and on the regularity of the driving noise term. In recent years, the field has drastically expanded, and now there exists a large machinery to guarantee local existe |
https://en.wikipedia.org/wiki/Filler%20%28materials%29 | Filler materials are particles added to resin or binders (plastics, composites, concrete) that can improve specific properties, make the product cheaper, or both. The two largest segments for filler material use are elastomers and plastics. Worldwide, more than 53 million tons of fillers (with a total sum of approximately US$18 billion) are used every year in application areas such as paper, plastics, rubber, paints, coatings, adhesives, and sealants. As such, fillers, produced by more than 700 companies, rank among the world's major raw materials and are contained in a variety of goods for daily consumer needs. The top filler materials used are ground calcium carbonate (GCC), precipitated calcium carbonate (PCC), kaolin, talc, and carbon black. Filler materials can affect the tensile strength, toughness, heat resistance, color, clarity, etc. A good example of this is the addition of talc to polypropylene. Most of the filler materials used in plastics are mineral or glass based filler materials. Particulates and fibers are the main subgroups of filler materials. Particulates are small particles of filler that are mixed in the matrix where size and aspect ratio are important. Fibers are small circular strands that can be very long and have very high aspect ratios.
Types
Calcium carbonate (CaCO3)
Referred to as "chalk" in the plastics industry, calcium carbonate is derived from limestone and marble. It is used in many applications, including PVCs and unsaturated polyesters. As much as 90% CaCO3 can be used to make a composite. These additions can improve molding productivity by decreasing the cooling rate. They can also increase the operating temperatures of materials and provide insulation for electrical wiring.
CaCO3 is used in filler masterbatch as a base, making up a large percentage of the composition. Calcium carbonate powder accounting for 97% of the composition gives white/opaque products more whiteness, so manufacturers can reduce the usage of white ma
https://en.wikipedia.org/wiki/Electronics%20For%20You | Electronics For You magazine is India's first monthly publication for electronics engineers. It was first conceptualised at IIT Madras in 1969 by Ramesh Chopra, and was published by EFY Enterprises Pvt Ltd, headed by S. P. Chopra and Veena Khanna.
The publisher of this magazine currently manages multiple magazines, annual events, and around 30 book titles. The company also provides hands-on training courses, and manufactures and markets Do-It-Yourself electronics projects and hobby kits. It has partnered with Mouser Electronics for their entire IoT series in India. The magazine has partnered with ELCINA to conduct events that recognise and award innovative technology companies. Additionally, the magazine sponsors the Electronics For You Prize, an award given to a student at IIT Madras each year.
Electronics For You magazine has a history of being collected and saved by engineers and technologists across India. |
https://en.wikipedia.org/wiki/Cross-linked%20enzyme%20aggregate | In biochemistry, a cross-linked enzyme aggregate is an immobilized enzyme prepared via cross-linking of the physical enzyme aggregates with a difunctional cross-linker. They can be used as stereoselective industrial biocatalysts.
Background
Enzymes are proteins that catalyze (i.e. accelerate) chemical reactions. They are natural catalysts and are ubiquitous, in plants, animals and microorganisms where they catalyze processes that are vital to living organisms. They are intimately involved in numerous biotechnological processes, such as cheese making, beer brewing and winemaking, that date back to the dawn of civilization. Recent advances in biotechnology, particularly in genetic and protein engineering, have provided the basis for the efficient development of enzymes with improved properties for established applications and novel, tailor-made enzymes for completely new applications where enzymes were not previously used.
Today, enzymes are widely applied in many different industries and the number of applications continues to increase. Examples include food (baking, dairy products, starch conversion) and beverage (beer, wine, fruit and vegetable juices) processing, animal feed, textiles, pulp and paper, detergents, biosensors, cosmetics, health care and nutrition, waste water treatment, pharmaceutical and chemical manufacture and, more recently, biofuels such as biodiesel. The main driver for the widespread application of enzymes is their small environmental footprint.
Many traditional chemical conversions used in various industries suffer from inherent drawbacks from both an economic and environmental viewpoint. Non-specific reactions can afford low product yields, copious amounts of waste and impure products. The need for elevated temperatures and pressures leads to high energy consumption and high capital investment costs. Disposal of unwanted by-products may be difficult and/or expensive and hazardous solvents may be required. In stark contrast, |
https://en.wikipedia.org/wiki/Symobi | Symobi (System for mobile applications) is a proprietary, modern real-time operating system for mobile and embedded devices. It is developed by the German company Miray Software, since 2002 partly in cooperation with the research group of Prof. Dr. Uwe Baumgarten at the Technical University of Munich. The graphical operating system is designed for the area of embedded and mobile systems, and is also used on PCs, both by end users and in industry.
Design
The basis of Symobi is the message-oriented operating system µnOS, which is on its part based on the real-time microkernel Sphere. µnOS offers communication through message passing between all processes (from basic operating system service processes to application processes) using the integrated process manager. On the lowest level, the responsibility of the Sphere microkernel is to implement and enforce security mechanisms and resource management in real-time. Symobi itself additionally offers a complete graphical operating system environment with system services, a consistent graphical user interface, as well as standard programs and drivers.
Classification
Symobi combines features from different fields of application in one operating system. As a modern operating system it offers separated, isolated processes, light-weight threads, and dynamic libraries, like Windows, Linux, and Unix, for example. Through its low resource requirements and its support of mobile devices, it resembles mobile embedded operating systems such as Windows CE, SymbianOS or Palm OS. With conventional real-time operating systems like QNX or VxWorks it shares the real-time ability and the support of different processor architectures.
History
The development of Sphere, µnOS and Symobi is based on the ideas and work of Konrad Foikis and Michael Haunreiter (founders of the company Miray Software), initiated during their schooldays, even before they started studying computer science. The basic concept was to combine useful |
https://en.wikipedia.org/wiki/Streptolydigin | Streptolydigin (Stl) is an antibiotic that works by inhibiting nucleic acid chain elongation by binding to RNA polymerase, thus inhibiting RNA synthesis inside a cell. Streptolydigin inhibits bacterial RNA polymerase, but not eukaryotic RNA polymerase. It has antibacterial activity against a number of Gram-positive bacteria. |
https://en.wikipedia.org/wiki/Van%20Wijngaarden%20transformation | In mathematics and numerical analysis, the van Wijngaarden transformation is a variant on the Euler transform used to accelerate the convergence of an alternating series.
One algorithm to compute the Euler transform runs as follows: compute a row of partial sums, s_{0,k} = a_0 + a_1 + … + a_k, and form rows of averages between neighbors, s_{j+1,k} = (s_{j,k} + s_{j,k+1})/2. The first column s_{j,0} then contains the partial sums of the Euler transform.
Adriaan van Wijngaarden's contribution was to point out that it is better not to carry this procedure through to the very end, but to stop two-thirds of the way. If s_{0,0}, …, s_{0,12} are available, then s_{8,4} is almost always a better approximation to the sum than s_{12,0}. In many cases the diagonal terms do not converge in one cycle, so the process of averaging is to be repeated with the diagonal terms by bringing them into a row. (For example, this will be needed in a geometric series with ratio .) This process of successively averaging the averages of partial sums can be replaced by using the formula to calculate the diagonal term.
For a simple but concrete example, recall the Leibniz formula for π: π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯. The algorithm described above produces the following table:
These correspond to the following algorithmic outputs: |
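A sketch of the averaging scheme applied to the Leibniz series, using exact rational arithmetic; the indexing s_{j,k} follows the description above:

```python
import math
from fractions import Fraction

# Row 0: partial sums s_{0,k} of the Leibniz series 1 - 1/3 + 1/5 - ...
terms = [Fraction((-1) ** k, 2 * k + 1) for k in range(13)]
rows = [[sum(terms[:k + 1]) for k in range(13)]]

# Each following row averages neighbors: s_{j+1,k} = (s_{j,k} + s_{j,k+1}) / 2
while len(rows[-1]) > 1:
    prev = rows[-1]
    rows.append([(a + b) / 2 for a, b in zip(prev, prev[1:])])

full = 4 * float(rows[12][0])        # averaging carried through to the very end
two_thirds = 4 * float(rows[8][4])   # van Wijngaarden's stopping point
print(abs(full - math.pi), abs(two_thirds - math.pi))
```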
https://en.wikipedia.org/wiki/Apex%20%28diacritic%29 | In written Latin, the apex (plural "apices") is a mark with roughly the shape of an acute accent which was sometimes placed over vowels to indicate that they are long.
The shape and length of the apex can vary, sometimes within a single inscription. While virtually all apices consist of a line sloping up to the right, the line can be more or less curved, and varies in length from less than half the height of a letter to more than the height of a letter. Sometimes, it is adorned at the top with a distinct hook, protruding to the left. Rather than being centered over the vowel it modifies, the apex is often considerably displaced to the right.
Essentially the same diacritic, conventionally called in English the acute accent, is used today for the same purpose of denoting long vowels in a number of languages with Latin orthography, such as Irish (called in it the síneadh fada or simply fada, "long"), Hungarian (hosszú ékezet, from the words for "long" and "wedge"), Czech (called in it čárka, "small line") and Slovak (dĺžeň, from the word for "long"), as well as for the historically long vowels of Icelandic. In the 17th century, with a specialized shape distinct from that of the acute accent, a curved diacritic by the name of "apex" was adopted to mark final nasalization in the early Vietnamese alphabet, which already had an acute accent diacritic that was used to mark one of the tones.
Details
Although hardly known by most modern Latinists, the use of the sign was actually quite widespread during classical and postclassical times. The reason why it so often passes unnoticed lies probably in its smallish size and usually thinner nature in comparison with the lines that compose the letter on which it stands. Yet the more careful observer will soon start to notice apices in the exhibits of any museum, not only in many of the more formal epigraphic inscriptions, but also in handwritten palaeographic documents. However, otherwise punctilious transcriptions of the material customarily overlook this diacr |
https://en.wikipedia.org/wiki/Vba32%20AntiVirus | VBA32 (Virus Block Ada 32) is antivirus software from the vendor VirusBlokAda for personal computers running Microsoft Windows. It detects and neutralizes computer viruses, computer worms, Trojan horses and other malware (backdoors, adware, spyware, etc.) in real time and on demand.
VBA32 is used as one of the antivirus engines at VirusTotal.
VirusBlokAda
VirusBlokAda is an antivirus software vendor established in 1997 in Belarus. In 2010 it discovered Stuxnet, the first malware that attacks supervisory control and data acquisition (SCADA) systems.
The program
In 2009 Judit Papp assessed that its VBA32 Antivirus product could detect 26 percent of unknown malware, compared to 67 percent detected by Avira's Antivir Premium and 8 percent detected by MicroWorld's eScan Anti-Virus.
See also
Antivirus software
Comparison of antivirus software
Comparison of computer viruses |
https://en.wikipedia.org/wiki/Service%20Description%20Table | Service Description Table (SDT) is a metadata table used in Digital Video Broadcasting systems to describe the television, radio or other services contained in MPEG transport streams provided by the system. The purpose and format of the table is defined in ETSI EN 300 468: Specification for Service Information (SI) in DVB systems.
An MPEG transport stream consists of a sequence of packets. SDTs are contained in packets identified by the packet ID (PID) 0x0011. Such packets may alternatively contain a Bouquet Association Table (BAT) or a Stuffing Table (ST). The type of information carried in the packet is identified using a table ID. The table ID 0x42 identifies the SDT providing information about services contained in the same transport stream as the SDT itself. The table ID 0x46 identifies SDTs providing information about services contained in other transport streams in the same network or system.
The SDT provides the following information about each service:
the transport stream id.
the service id.
whether or not programme schedules are provided in the transport stream.
whether or not there is information about the current and next programmes.
the running status of the service (e.g. starting soon, paused, running or off-air).
whether or not the service is scrambled.
Further optional information may be provided about each service, such as the name of the service, the name of the broadcaster responsible for the service, the service availability, or a service authority URL. |
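As an illustrative sketch, the fixed part of an SDT section header can be unpacked as below. The byte string is hand-made, not taken from a real stream; the field widths follow the ETSI EN 300 468 section layout:

```python
import struct

def parse_sdt_header(section: bytes):
    """Unpack the first fields of an SDT section (ETSI EN 300 468 layout)."""
    table_id = section[0]
    # 4 bits of flags/reserved, then a 12-bit section length
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    transport_stream_id = struct.unpack(">H", section[3:5])[0]
    return {
        "table_id": table_id,
        "actual_ts": table_id == 0x42,   # 0x46 means "other transport stream"
        "section_length": section_length,
        "transport_stream_id": transport_stream_id,
    }

example = bytes([0x42, 0xF0, 0x25, 0x00, 0x01])   # hand-made header bytes
print(parse_sdt_header(example))
```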
https://en.wikipedia.org/wiki/TITAN2D | TITAN2D is a geoflow simulation software application, intended for geological researchers. It is distributed as free software.
Overview
TITAN2D is a free software application developed by the Geophysical Mass Flow Group at the State University of New York (SUNY) at Buffalo.
TITAN2D was developed for the purpose of simulating granular flows (primarily geological mass flows such as debris avalanches and landslides) over digital elevation models (DEMs) of natural terrain. The code is designed to help scientists and civil protection authorities assess the risk of, and mitigate, hazards due to dry debris flows and avalanches. TITAN2D combines numerical simulations of a flow with digital elevation data of natural terrain supported through a Geographical Information System (GIS) interface such as GRASS.
TITAN2D is capable of multiprocessor runs. A Message Passing Interface (MPI) Application Programming Interface (API) allows for parallel computing on multiple processors, which effectively increases computational power, decreases computing time, and allows for the use of large data sets. Adaptive gridding allows for the concentration of computing power on regions of special interest. Mesh refinement captures the complex flow features that occur at the leading edge of a flow, as well as locations where rapid changes in topography induce large mass and momentum fluxes. Mesh unrefinement is applied where solution values are relatively constant or small to further improve computational efficiency.
TITAN2D requires an initial volume and shape estimate for the starting material, a basal friction angle, and an internal friction angle for the simulated granular flow. The direct outputs of the program are dynamic representations of a flow's depth and momentum. Secondary or derived outputs include flow velocity, and such field-observable quantities as run-up height, deposit thickness, and inundation area.
Mathematical Model
The TITAN2D program is based upon a depth-averaged |
https://en.wikipedia.org/wiki/Eclipse%20Metro | Metro is a high-performance, extensible, easy-to-use web service stack. Although historically an open-source part of the GlassFish application server, it can also be used in a stand-alone configuration. Components of Metro include: JAXB RI, JAX-WS RI, SAAJ RI, StAX (SJSXP implementation) and WSIT. Originally available under the CDDL and GPLv2 with classpath exception, it is now available under
History
Originally, the Glassfish project developed two semi-independent projects:
JAX-WS RI, the Reference implementation of the JAX-WS specification
WSIT, a Java implementation of some of the WS-* specifications and enhanced support for interoperability with the .NET Framework. It is based on JAX-WS RI as its "Web Service layer".
In June 2007, it was decided to bundle these two components as a single component named Metro.
Features
Metro compares well with other web service frameworks in terms of functionality. Codehaus started a comparison of Apache Axis 1.x, Axis 2.x, Celtix, Glue, JBossWS, XFire 1.2 and JAX-WS RI + WSIT (the bundle was not yet named Metro at that time). This was later updated by the ASF to replace Celtix with CXF and to include OracleAS 10g.
Metro includes JAXB RI, JAX-WS RI, SAAJ RI, SJSXP, and WSIT, along with libraries that those components depend on, such as xmlstreambuffer, mimepull, etc.
Its features include:
Basic Profile 1.1 Compliant
Easily Create Services from POJOs
RPC-Encoding
Spring Support
REST Support
SOAP 1.1/1.2
Streaming XML (StAX based)
WSDL 1.1 -> Code (Client/Server)
Server and Client-side Asynchrony
Supported WS-* Standards
Supported Transport protocols include:
HTTP
JMS
SMTP/POP3
TCP
In-VM
Metro augments the JAX-WS environment with advanced features such as trusted, end-to-end security; optimized transport (MTOM, Fast Infoset), reliable messaging, and transactional behavior for SOAP web services.
Market share
Metro is bundled with Java SE 6, allowing Java SE 6 applications to consume Web Services.
Met |
https://en.wikipedia.org/wiki/Web.com | Web.com is an American dot-com company that provides a website builder, along with website hosting, domain name registration, web development, and various digital marketing services. It serves as a partner for very small to small-sized businesses and entrepreneurs, assisting them in establishing and expanding their online presence.
The company was founded in 1999 by Darin Brannan in Jacksonville, Florida, as Website Pros Inc. In early 2008 it took its current name after acquiring the Atlanta-based company Web.com, which was founded in 1981 by Waldemar Fernández and was formerly known as Interland, Inc.
In 2021, Web.com merged with Endurance Web Presence to form a new company, Newfold Digital.
Corporate overview
Web.com is based in Jacksonville, Florida and incorporated in Delaware, and provides domain name registration and web development services, among others. The company caters to very small and small businesses and offers a variety of subscription services designed for entrepreneurs, including design, hosting, management, e-commerce, lead generation, mobile commerce, online advertising, search engine optimization, and social media solutions.
Web.com reportedly had 3.3 million subscribers in January 2016. The company has offices in more than 20 U.S. states, and in Argentina (Buenos Aires), Canada (Barrie, Ontario, and Nova Scotia), and the United Kingdom (including Cardiff, Wales). Web.com was traded as "WEB" on NASDAQ.
David Brown served as Web.com's chief executive officer (CEO) until early 2019. Okumus Fund Management was the company's top shareholder, with 18.64 percent as of March 2017. In 2015, Okumus and Web.com agreed to appoint two independent directors to its board.
In May 2017, the company had 3,500 employees and a market capitalization of $1.1 billion. Web.com has a $1.21 billion valuation, as of June 2017.
History
Website Pros Inc.
David Brown established Website Pros Inc.'s predecessor, the technology services company Atlantic Tel |
https://en.wikipedia.org/wiki/Generalized%20Procrustes%20analysis | Generalized Procrustes analysis (GPA) is a method of statistical analysis that can be used to compare the shapes of objects, or the results of surveys, interviews, or panels. It was developed for analysing the results of free-choice profiling, a survey technique which allows respondents (such as sensory panelists) to describe a range of products in their own words or language. GPA is one way to make sense of free-choice profiling data; other ways can be multiple factor analysis (MFA), or the STATIS method. The method was first published by J. C. Gower in 1975.
Generalized Procrustes analysis estimates the scaling factor applied to respondent scale usage, generating a weighting factor that is used to compensate for individual differences in scale usage. Unlike measures such as principal component analysis, GPA uses individual-level data, and a measure of variance is utilized in the analysis.
The Procrustes distance provides a metric to minimize in order to superimpose a pair of shape instances annotated by landmark points. GPA applies the Procrustes analysis method to superimpose a population of shapes instead of only two shape instances.
The algorithm outline is the following:
1. arbitrarily choose a reference shape (typically by selecting it among the available instances)
2. superimpose all instances to the current reference shape
3. compute the mean shape of the current set of superimposed shapes
4. if the Procrustes distance between the mean shape and the reference is above a certain threshold, set the reference to the mean shape and return to step 2
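The outline above can be sketched in pure Python for 2-D landmark shapes. The closed-form optimal rotation angle and the omission of the scaling step are simplifying assumptions for illustration:

```python
import math

def centre(shape):
    cx = sum(x for x, _ in shape) / len(shape)
    cy = sum(y for _, y in shape) / len(shape)
    return [(x - cx, y - cy) for x, y in shape]

def align(shape, ref):
    """Rotate a centred shape onto a centred reference (closed form in 2-D)."""
    num = sum(x * v - y * u for (x, y), (u, v) in zip(shape, ref))
    den = sum(x * u + y * v for (x, y), (u, v) in zip(shape, ref))
    t = math.atan2(num, den)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in shape]

def mean_shape(shapes):
    n = len(shapes)
    return [(sum(s[i][0] for s in shapes) / n, sum(s[i][1] for s in shapes) / n)
            for i in range(len(shapes[0]))]

def dist(a, b):
    return sum((x - u) ** 2 + (y - v) ** 2
               for (x, y), (u, v) in zip(a, b)) ** 0.5

def gpa(shapes, tol=1e-8):
    shapes = [centre(s) for s in shapes]
    ref = shapes[0]                                  # 1. arbitrary reference
    while True:
        shapes = [align(s, ref) for s in shapes]     # 2. superimpose
        mean = mean_shape(shapes)                    # 3. mean shape
        if dist(mean, ref) < tol:                    # 4. converged?
            return shapes, mean
        ref = mean

# Demo: a square and the same square rotated by 30 degrees superimpose exactly
t = math.radians(30)
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
rotated = [(math.cos(t) * x - math.sin(t) * y,
            math.sin(t) * x + math.cos(t) * y) for x, y in square]
aligned, mean = gpa([square, rotated])
print(dist(aligned[0], aligned[1]) < 1e-6)   # True
```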
See also
Procrustes analysis
Orthogonal Procrustes problem |
https://en.wikipedia.org/wiki/Crypts%20of%20Henle | Crypts of Henle are microscopic pockets found in scattered sections of the conjunctiva around the eyeball. They are responsible for secreting mucin, a proteinous substance that makes up the inner layer of tears. It coats the cornea to provide a hydrophilic layer that allows for even distribution of the tear film. The layer of mucin allows tears to glide evenly across the eye's surface. The crypts of Henle are named after German anatomist Friedrich Gustav Jakob Henle (1809-1885).
The glands of Manz, another anatomical structure, perform a similar function. They are located in the eyeball's conjunctiva, arranged in a ring around the cornea, near the scleral junction. They also are responsible for secreting mucin into tears. |
https://en.wikipedia.org/wiki/Quantum%20walk | Quantum walks are quantum analogues of classical random walks. In contrast to the classical random walk, where the walker occupies definite states and the randomness arises due to stochastic transitions between states, in quantum walks randomness arises through: (1) quantum superposition of states, (2) non-random, reversible unitary evolution and (3) collapse of the wave function due to state measurements.
As with classical random walks, quantum walks admit formulations in both discrete time and continuous time.
Motivation
Quantum walks are motivated by the widespread use of classical random walks in the design of randomized algorithms, and are part of several quantum algorithms. For some oracular problems, quantum walks provide an exponential speedup over any classical algorithm. Quantum walks also give polynomial speedups over classical algorithms for many practical problems, such as the element distinctness problem, the triangle finding problem, and evaluating NAND trees. The well-known Grover search algorithm can also be viewed as a quantum walk algorithm.
Relation to classical random walks
Quantum walks exhibit very different features from classical random walks. In particular, they do not converge to limiting distributions and due to the power of quantum interference they may spread significantly faster or slower than their classical equivalents.
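The faster spreading can be seen in a small simulation of the discrete-time Hadamard walk on the line; the initial coin state and step count below are illustrative choices:

```python
import math

steps = 50
h = 1 / math.sqrt(2)
# amplitude for each (position, coin) pair; symmetric initial coin state
amp = {(0, 0): h, (0, 1): 1j * h}

for _ in range(steps):
    new = {}
    for (x, c), a in amp.items():
        # Hadamard coin: |0> -> (|0> + |1>)/sqrt2, |1> -> (|0> - |1>)/sqrt2
        for c2, w in ((0, h), (1, h if c == 0 else -h)):
            x2 = x - 1 if c2 == 0 else x + 1   # coin 0 steps left, coin 1 right
            new[(x2, c2)] = new.get((x2, c2), 0) + w * a
    amp = new

prob = {}
for (x, c), a in amp.items():
    prob[x] = prob.get(x, 0.0) + abs(a) ** 2

sigma = math.sqrt(sum(p * x * x for x, p in prob.items()))
print(round(sum(prob.values()), 9), round(sigma, 2))
# probability is conserved; sigma grows linearly in the number of steps,
# versus sqrt(steps) for the classical random walk
```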
Continuous time
Continuous-time quantum walks arise when one replaces the continuum spatial domain in the Schrödinger equation with a discrete set. That is, instead of having a quantum particle propagate in a continuum, one restricts the set of possible position states to the vertex set of some graph which can be either finite or countably infinite. Under particular conditions, continuous-time quantum walks can provide a model for universal quantum computation.
Relation to non-relativistic Schrödinger dynamics
Consider the dynamics of a non-relativistic, spin-less free quantum particle with mass propagating |
https://en.wikipedia.org/wiki/Rollover%20%28fire%29 | Rollover (also known as flameover) is a stage of a structure fire when fire gases in a room or other enclosed area ignite. Since heated fire gases, the product of pyrolysis, rise to the ceiling, this is where a rollover phenomenon is most often witnessed. Visually, this may be seen as flames "rolling" across the ceiling, radiating outward from the seat of the fire to the extent of gas spread.
Rollover is not the same as flashover, although it may precede it, and the terms may be confused. In the case of rollover, only gases present in the room, not the room contents, ignite.
External links
Term of the week: Flameover
Pictures
http://www.ci.frisco.tx.us/departments/fire/PublishingImages/Training%20Center%20-%20Living%20Room%20Flashover.jpg
https://web.archive.org/web/20111008023759/http://www.firecontrolservices.co.uk/large_imgs/structural3.htm
http://www.nyc.gov/html/fdny/images/home_specific/feature_photos/2009/379x250_031909a2.jpg |
https://en.wikipedia.org/wiki/Adaptability | Adaptability (from Latin adaptō: "fit to, adjust") is a feature of a system or of a process. This word has been put to use as a specialised term in different disciplines and in business operations. Word definitions of adaptability as a specialised term differ little from dictionary definitions. According to Andresen and Gronau, adaptability in the field of organizational management can in general be seen as an ability to change something or oneself to fit to occurring changes. In ecology, adaptability has been described as the ability to cope with unexpected disturbances in the environment.
With respect to business and manufacturing systems and processes, adaptability has come to be seen increasingly as an important factor for their efficiency and economic success. In contrast, adaptability and efficiency are held to be in opposition to each other in biological and ecological systems, requiring a trade-off, since both are important factors in the success of such systems. To determine the adaptability of a process or a system, it should be validated concerning some criteria.
Terminology
In the life sciences the term adaptability is used variously. At one end of the spectrum, the ordinary meaning of the word suffices for understanding. At the other end, there is the term as introduced by Conrad, referring to a particular information entropy measure of the biota of an ecosystem, or of any subsystem of the biota, such as a population of a single species, a single individual, cell, protein or gene.
In the technical research field this feature has been considered only since the late 1990s. H. P. Wiendahl first introduced adaptability as a necessary feature of a manufacturing system in 1999. The need to consider adaptability arose in the context of factory planning, where it is an objective to develop modular, adaptable systems. It has now become an important consideration for manufacturing and system engineers.
Adaptability of a system
Adaptability is to be understood here as the a |
https://en.wikipedia.org/wiki/Data%20wrangling | Data wrangling, sometimes referred to as data munging, is the process of transforming and mapping data from one "raw" data form into another format with the intent of making it more appropriate and valuable for a variety of downstream purposes such as analytics. The goal of data wrangling is to assure quality and useful data. Data analysts typically spend the majority of their time in the process of data wrangling compared to the actual analysis of the data.
The process of data wrangling may include further munging, data visualization, data aggregation, training a statistical model, as well as many other potential uses. Data wrangling typically follows a set of general steps which begin with extracting the data in a raw form from the data source, "munging" the raw data (e.g. sorting) or parsing the data into predefined data structures, and finally depositing the resulting content into a data sink for storage and future use. It is closely aligned with the ETL process.
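The general steps named above (extract raw data, "munge"/parse it into structures, deposit it in a sink) can be sketched in a few lines of Python; the JSON source string and in-memory sink are illustrative stand-ins for a real data source and store:

```python
import io
import json

# Extract: raw data pulled from a (hypothetical) source
raw = '[{"name": "b", "value": 2}, {"name": "a", "value": 1}]'

# Munge: parse into predefined structures, then sort
records = json.loads(raw)
records.sort(key=lambda r: r["name"])

# Deposit: write the result into a data sink for storage and future use
sink = io.StringIO()
json.dump(records, sink)
```

A production pipeline would add validation and error handling at each step, but the extract-munge-deposit shape is the same.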
Background
The non-technical term "wrangler" is often said to derive from work done by the United States Library of Congress's National Digital Information Infrastructure and Preservation Program (NDIIPP) and their program partner, the Emory University Libraries-based MetaArchive Partnership. The term "mung" has roots in munging as described in the Jargon File. The term "data wrangler" was also suggested as the best analogy to describe someone working with data.
One of the first mentions of data wrangling in a scientific context was by Donald Cline during the NASA/NOAA Cold Lands Processes Experiment. Cline stated the data wranglers "coordinate the acquisition of the entire collection of the experiment data." Cline also specifies duties typically handled by a storage administrator for working with large amounts of data. This can occur in areas like major research projects and the making of films with a large amount of complex computer-generated imagery. In research, this involves both data transfer fr |
https://en.wikipedia.org/wiki/Point%20reflection | In geometry, a point reflection (also called a point inversion or central inversion) is a transformation of affine space in which every point is reflected across a specific fixed point. When dealing with crystal structures and in the physical sciences the terms inversion symmetry, inversion center or centrosymmetric are more commonly used.
A point reflection is an involution: applying it twice is the identity transformation. It is equivalent to a homothetic transformation with scale factor −1. The point of inversion is also called the homothetic center.
An object that is invariant under a point reflection is said to possess point symmetry; if it is invariant under point reflection through its center, it is said to possess central symmetry or to be centrally symmetric. A point group including a point reflection among its symmetries is called centrosymmetric.
In Euclidean space, a point reflection is an isometry (it preserves distance). In the Euclidean plane, a point reflection is the same as a half-turn rotation (180° or π radians); a point reflection through the object's centroid is the same as a half-turn spin.
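In coordinates, reflection through a center c is the map x ↦ 2c − x. A short NumPy check of the two properties just stated (involution, and homothety with factor −1 when the center is the origin); the sample points are illustrative:

```python
import numpy as np

def point_reflect(p, c):
    """Reflect point p through center c: p -> 2c - p."""
    return 2 * np.asarray(c) - np.asarray(p)

p = np.array([3.0, -1.0])
c = np.array([1.0, 2.0])

q = point_reflect(p, c)
assert np.allclose(point_reflect(q, c), p)        # involution: twice = identity

# With the center at the origin, it is the homothety x -> -x (scale factor -1)
assert np.allclose(point_reflect(p, [0.0, 0.0]), -p)
```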
Terminology
The term reflection is loose, and considered by some an abuse of language, with inversion preferred; however, point reflection is widely used. Such maps are involutions, meaning that they have order 2 – they are their own inverse: applying them twice yields the identity map – which is also true of other maps called reflections. More narrowly, a reflection refers to a reflection in a hyperplane (an (n − 1)-dimensional affine subspace – a point on the line, a line in the plane, a plane in 3-space), with the hyperplane being fixed, but more broadly reflection is applied to any involution of Euclidean space, and the fixed set (an affine space of dimension k, for some k < n) is called the mirror. In dimension 1 these coincide, as a point is a hyperplane in the line.
In terms of linear algebra, assuming the origin is fixed, involutions are exactly the diagonalizable maps w |
https://en.wikipedia.org/wiki/Pompadour%20cotinga | The pompadour cotinga (Xipholena punicea) is a species of bird in the family Cotingidae. This species lives in the Amazonian rainforest and has a range that extends across the Amazon Basin and includes Brazil, Peru, Colombia, Venezuela, and the Guianas. The pompadour cotinga is primarily a frugivore but has been known to consume insects on occasion. This species of cotinga is distinct in that the males have a burgundy head and body, bright white wings, and yellow eyes. Like other members of the Cotingidae, this species is sexually dimorphic and the females have a pale grey head and body. Although there are not many documented observations of the nesting behavior of these birds, the males are known to perform elaborate mating displays for the females who then raise the young alone.
Due to its arboreal habitat and generally remote distribution, behavior observations are rare. Pompadour cotinga coloration has been studied extensively, but little is known about the natural history of its behavior and existence in its habitat. Despite threats to Amazonian habitat in recent years, the pompadour cotinga remains a species of least concern.
Taxonomy
The first documentation of the pompadour cotinga was in the 1764 auction catalogue of Dutch natural history collector Adriaan Vroeg, which listed many bird and mammal specimens that were to be sold in glass cases. The catalogue included an appendix, Adumbratiunculae, by the Dutch naturalist Peter Simon Pallas. This was written in Latin and used the protonym Turdus puniceus to describe the species. Birds of the family Cotingidae tend to share certain characteristics such as hooked beaks, strong sexual dimorphism, and mating displays performed by the males. The most recent phylogeny, created in 2014, examined the family using genetic analyses of both nuclear and mitochondrial genes, and compared the results to the synapomorphies among birds of certain clades. The breeding behavior and sexual dimorphism of certain species |
https://en.wikipedia.org/wiki/Greenskeeper | A greenskeeper is a person responsible for the upkeep of a golf course. Their duties include all horticultural practices, as well as the setting of flag-sticks and marking of hazards. Other responsibilities typically include raking bunkers, watering plants, repairing divots, trimming tee boxes, and mowing the course. Greenskeepers often work under the direction of a golf course superintendent.
Work and duties
Setting flag-sticks
Greenskeepers set the flag-sticks and tee markers, distinguishing their role from that of other groundskeepers and horticulturists. Tee markers distinguish the line from which players tee off or strike the golf ball. Flag-sticks mark the location of the hole for which the players are aiming.
The distance between the tee marker and flag-stick affects difficulty and gameplay. Almost every golf course is measured and rated according to distance, often measured in yardage. It is the greenskeeper's responsibility to keep the cumulative yardage for daily play close to the rating for the course; guidelines are not exact and the movement of flag-sticks is largely left to the greenskeeper's discretion.
In order to place a flag-stick, greenskeepers use a cup-cutting tool. Soil is saved to repair the previous hole. Then, a cup-setter tool is used to place the top of the cup about 1 inch (25 mm) below the green's surface. Due to wear and tear, flag-sticks are often moved daily during the summer season. New flag-stick locations are at least 12 to 20 feet (3.7 to 6.1 meters) from the previous location.
Monitoring green speed
Greenskeepers measure the speed of golf greens with a stimpmeter, a device that measures how fast a green allows the golf ball to travel. The stimpmeter is not used to compare one facility with another; many factors, including design, undulation, and grass type, affect green speed. A greenskeeper can increase the speed of the green by mowing the grass shorter, mowing more than once in multiple directions, using a lightweight roll |
https://en.wikipedia.org/wiki/Trans-endocytosis | Trans-endocytosis is the biological process where material created in one cell undergoes endocytosis (enters) into another cell. If the material is large enough, this can be observed using an electron microscope. Trans-endocytosis from neurons to glia has been observed using time-lapse microscopy.
Trans-endocytosis also applies to molecules. For example, this process is involved when a part of the protein Notch is cleaved off and undergoes endocytosis into its neighboring cell. Without Notch trans-endocytosis, there would be too many neurons in a developing embryo. Trans-endocytosis is also involved in cell movement when the protein ephrin is bound by its receptor from a neighboring cell. |
https://en.wikipedia.org/wiki/Flags%20and%20emblems%20of%20the%20regions%20of%20Ethiopia | Ethiopia is currently divided into twelve regions and two chartered cities. Each region or chartered city has its own flag and emblem.
Regional flags
Regional emblems
Former regions
See also
Flag of Ethiopia
Emblem of Ethiopia
Regions of Ethiopia
Regional flag |
https://en.wikipedia.org/wiki/Zooloretto | Zooloretto is a board game designed by Michael Schacht, published in 2007 by Abacus Spiele and in English by Rio Grande Games. The premise of the game is that each player is the owner of a zoo, and must collect animals in order to attract visitors to their zoo (thus scoring points to win the game). Having full, or nearly full, animal enclosures scores more points. However, if a player has too many animals such that they must be stored in their "barn", this causes them to lose points. Vending stalls also offer a means for players to score points with enclosures that are not full.
The method that players use to collect animals is based on the mechanics of the card game Coloretto (also designed by Michael Schacht).
Expansions and spin-offs
Three large expansions have been published, XXL, Exotic, and Boss. Numerous small expansions have been published, many of which are available for download (at no cost) at the publisher's website. These include extra animal enclosures, a petting zoo, restaurant, souvenir shop, and pavilions, each of which offers different opportunities for players to score points or money. An additional large expansion, Aquaretto, can be played as a stand-alone game or in combination with Zooloretto.
Digital versions
An iPhone and iPod Touch version was developed by Spinbottle games and published by Chillingo in May 2009.
Zooloretto for PC (including digitally via Steam) was developed by White Bear Studios and published in 2011. Nintendo DS and Wii versions were in development but were canceled. The game has a single player campaign mode and a multiplayer mode. The game can be played (just like the board game) with five local players, with no online option.
Animals
Flamingo
Camel
Leopard
Elephant
Panda
Chimpanzee
Zebra
Kangaroo
Lion
Rabbit
NOTE: The lion is on the King of the Beasts edition of the game. Both the Lion and Rabbit are available as a Promo from their webstore.
Awards
Winner of the Spiel des Jahres 2007
Winner of the Golden |
https://en.wikipedia.org/wiki/Herpolhode | A herpolhode is the curve traced out by the endpoint of the angular velocity vector ω of a rigid rotor, a rotating rigid body. The endpoint of the angular velocity moves in a plane in absolute space, called the invariable plane, that is orthogonal to the angular momentum vector L. The fact that the herpolhode is a curve in the invariable plane appears as part of Poinsot's construction.
The trajectory of the angular velocity around the angular momentum in the invariable plane is a circle in the case of a symmetric top, but in the general case wiggles inside an annulus, while still being concave towards the angular momentum.
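The invariable-plane property can be checked numerically: for torque-free motion governed by Euler's equations, both the kinetic energy 2T = ω·L and |L| are conserved, so the component of ω along L (the distance of ω's tip from the plane through the origin orthogonal to L) stays constant. A Python sketch with illustrative moments of inertia and initial conditions:

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])   # principal moments of inertia (illustrative)

def euler_rhs(w):
    """Torque-free Euler equations in the body frame."""
    return np.array([(I[1] - I[2]) * w[1] * w[2] / I[0],
                     (I[2] - I[0]) * w[2] * w[0] / I[1],
                     (I[0] - I[1]) * w[0] * w[1] / I[2]])

w = np.array([0.3, 1.0, 0.2])
dt = 1e-3

# distance of omega's tip from the origin along L-hat: (omega . L) / |L| = 2T / |L|
d0 = np.dot(w, I * w) / np.linalg.norm(I * w)

for _ in range(5000):           # RK4 integration of the rigid-body motion
    k1 = euler_rhs(w)
    k2 = euler_rhs(w + dt / 2 * k1)
    k3 = euler_rhs(w + dt / 2 * k2)
    k4 = euler_rhs(w + dt * k3)
    w = w + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

d1 = np.dot(w, I * w) / np.linalg.norm(I * w)
assert abs(d1 - d0) < 1e-6      # the tip of omega stays in the invariable plane
```

Tracing the in-plane coordinates of ω over time would draw the herpolhode itself; the assertion only verifies that the curve is planar.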
See also
Poinsot's construction
Polhode |
https://en.wikipedia.org/wiki/Risk%20measure | In financial mathematics, a risk measure is used to determine the amount of an asset or set of assets (traditionally currency) to be kept in reserve. The purpose of this reserve is to make the risks taken by financial institutions, such as banks and insurance companies, acceptable to the regulator. In recent years attention has turned towards convex and coherent risk measurement.
Mathematically
A risk measure is defined as a mapping from a set of random variables to the real numbers. This set of random variables represents portfolio returns. The common notation for a risk measure associated with a random variable X is ρ(X). A risk measure should have certain properties:
Normalized
Translative
Monotone
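As a concrete check of these properties, historical value at risk is a normalized, translative, monotone risk measure on sample data. A Python sketch; the sample portfolio, confidence level, and helper name are illustrative:

```python
import numpy as np

def value_at_risk(returns, alpha=0.95):
    """Historical VaR: the negative of the (1 - alpha)-quantile of returns."""
    return -np.quantile(returns, 1 - alpha)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 10_000)   # simulated portfolio returns
a = 5.0

# Normalized: the zero portfolio carries zero risk
assert value_at_risk(np.zeros(100)) == 0.0

# Translative: adding a sure amount a of cash reduces risk by a
assert np.isclose(value_at_risk(X + a), value_at_risk(X) - a)

# Monotone: a pointwise-dominated portfolio is at least as risky
assert value_at_risk(X - 1.0) >= value_at_risk(X)
```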
Set-valued
In a situation with ℝ^d-valued portfolios such that risk can be measured in m ≤ d of the assets, then a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs.
Mathematically
A set-valued risk measure is a function R: L^p_d → F_M, where L^p_d is a d-dimensional Lp space, K is a constant solvency cone, and M is the set of portfolios of the reference assets. R must have the following properties:
Normalized
Translative in M
Monotone
Examples
Value at risk
Expected shortfall
Superposed risk measures
Entropic value at risk
Drawdown
Tail conditional expectation
Entropic risk measure
Superhedging price
Expectile
Variance
Variance (or standard deviation) is not a risk measure in the above sense. This can be seen since it has neither the translation property nor monotonicity. That is, Var(X + a) = Var(X) for all constants a, and a simple counterexample for monotonicity can be found. The standard deviation is a deviation risk measure. To avoid any confusion, note that deviation risk measures, such as variance and standard deviation, are sometimes called risk measures in different fields.
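Both failures are easy to exhibit numerically; the sample portfolios below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(0.0, 2.0, 10_000)
a = 10.0

# No translation property: adding riskless cash leaves the variance unchanged,
# whereas a risk measure should decrease by a.
assert np.isclose(np.var(X + a), np.var(X))

# Monotonicity fails: Y >= X pointwise (a strictly better portfolio),
# yet Y has the larger variance.
Y = X + np.abs(X)
assert np.all(Y >= X)
assert np.var(Y) > np.var(X)
```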
Relation to acceptance set
There is a one-to-one correspondence between an acceptance set and a corresponding risk measure. |
https://en.wikipedia.org/wiki/Stan%20%28dinosaur%29 | "Stan", also known by its inventory number BHI 3033, is a Tyrannosaurus rex fossil found in the Hell Creek Formation in South Dakota, just outside of Buffalo in 1987, and excavated in 1992. It is the fifth most complete T. rex fossil discovered to date, at more than 70% bulk. In October 2020, the fossil was sold for $31.8 million at auction, making it the most expensive dinosaur specimen and fossil ever sold. In March 2022, Abu Dhabi's Department of Culture and Tourism stated that it had acquired Stan and was planning to display the fossil at a new museum of natural history scheduled to open in 2025.
Discovery
Stan Sacrison, an amateur paleontologist, was responsible for the initial discovery of Stan's bone fragments, and as a result is the namesake for the T. rex. He was out looking at plant life in South Dakota when he spotted Stan's pelvis visible in the side of a cliff. At the time, Sacrison was doing freelance work for the Black Hills Institute of Geological Research. Originally, it was thought that the fossil was that of a Triceratops.
The excavation itself required the skills and resources of the Black Hills Institute; it officially began on 11 July 1992, led by Peter Larson (the lead paleontologist on the excavations of many other T. rex specimens like Sue and Trix as well as the institute's president). The institute's team removed the rock above Stan's skeleton with a Bobcat and finer removal was done manually with picks and brushes until the fossils could be plotted and diagrammed with the help of a grid placed over the dig site. The bones were then wrapped in burlap and plaster and brought to the Black Hills Institute.
Description
The most notable aspect of Stan is his nearly complete and perfectly preserved skull. It is widely regarded as the best T. rex skull ever discovered. Although the bones were separated from each other before excavation, they were in pristine condition and ideal for study by researchers. According to Pete Larson of the |
https://en.wikipedia.org/wiki/Citicoline | Citicoline (INN), also known as cytidine diphosphate-choline (CDP-Choline) or cytidine 5'-diphosphocholine is an intermediate in the generation of phosphatidylcholine from choline, a common biochemical process in cell membranes. Citicoline is naturally occurring in the cells of human and animal tissue, in particular the organs.
Use as a dietary supplement
Citicoline is available as a supplement in over 70 countries under a variety of brand names: CereBleu, Cebroton, Ceraxon, Cidilin, Citifar, Cognizin, Difosfocin, Hipercol, NeurAxon, Nicholin, Sinkron, Somazina, Synapsine, Startonyl, Trausan, Xerenoos, etc. When taken as a supplement, citicoline is hydrolyzed into choline and cytidine in the intestine. Once these cross the blood–brain barrier, they are re-formed into citicoline by the rate-limiting enzyme in phosphatidylcholine synthesis, CTP-phosphocholine cytidylyltransferase.
Research
Memory and cognition
Studies suggest, but have not confirmed, potential benefits of citicoline for cognitive impairments.
Ischemic stroke
Some preliminary research suggested that citicoline may reduce the rates of death and disability following an ischemic stroke.
However, the largest citicoline clinical trial to date (a randomised, placebo-controlled, sequential trial of 2,298 patients with moderate-to-severe acute ischaemic stroke in Europe), found no benefit of administering citicoline on survival or recovery from stroke. A meta-analysis of seven trials reported no statistically significant benefit for long-term survival or recovery.
Vision
The effect of citicoline on visual function has been studied in patients with glaucoma, with possible positive effect for protecting vision.
Mechanism of action
Neuroprotective effects
Citicoline may have neuroprotective effects due to its preservation of cardiolipin and sphingomyelin, preservation of arachidonic acid content of phosphatidylcholine and phosphatidylethanolamine, partial restoration of phosphatidylcholine levels, and stimula |
https://en.wikipedia.org/wiki/Network%20motif | Network motifs are recurrent and statistically significant subgraphs or patterns of a larger graph. All networks, including biological networks, social networks, technological networks (e.g., computer networks and electrical circuits) and more, can be represented as graphs, which include a wide variety of subgraphs.
Network motifs are sub-graphs that repeat themselves in a specific network or even among various networks. Each of these sub-graphs, defined by a particular pattern of interactions between vertices, may reflect a framework in which particular functions are achieved efficiently. Indeed, motifs are of notable importance largely because they may reflect functional properties. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging.
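For instance, the feed-forward loop (edges x→y, y→z, x→z) is a classic three-node motif in biological networks; counting its appearances in a small directed graph can be sketched in plain Python. The edge set here is illustrative:

```python
from itertools import permutations

# A small directed graph, given as a set of (source, target) edges
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("b", "d")}
nodes = {v for e in edges for v in e}

def count_ffl(edges, nodes):
    """Count feed-forward loops: ordered triples with x->y, y->z and x->z."""
    return sum(1 for x, y, z in permutations(nodes, 3)
               if (x, y) in edges and (y, z) in edges and (x, z) in edges)

print(count_ffl(edges, nodes))   # → 2  (triples a,b,c and b,c,d)
```

Motif detection proper would compare this count against the mean count over an ensemble of randomized graphs with the same degrees, which is where the computational cost lies.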
Definitions
Let G′ and G be two graphs. Graph G′ is a sub-graph of graph G (written as G′ ⊆ G) if V′ ⊆ V and E′ ⊆ E ∩ (V′ × V′). If G′ ⊆ G and G′ contains all of the edges ⟨u, v⟩ ∈ E with u, v ∈ V′, then G′ is an induced sub-graph of G. We call G′ and G isomorphic (written as G′ ≅ G), if there exists a bijection (one-to-one correspondence) f: V′ → V with ⟨u, v⟩ ∈ E′ ⇔ ⟨f(u), f(v)⟩ ∈ E for all u, v ∈ V′. The mapping f is called an isomorphism between G′ and G.
When G″ ⊂ G and there exists an isomorphism between the sub-graph G″ and a graph G′, this mapping represents an appearance of G′ in G. The number of appearances of graph G′ in G is called the frequency of G′ in G. A graph is called recurrent (or frequent) in G when its frequency is above a predefined threshold or cut-off value. We use the terms pattern and frequent sub-graph in this review interchangeably. There is an ensemble Ω(G) of random graphs corresponding to the null-model associated to G. We should choose N random graphs uniformly from Ω(G) and calculate the frequency for a particular frequent sub-graph G′ in G. If the frequency of G′ in G is higher than its arithmetic mean frequency in the N random graphs R_i, where 1 ≤ i ≤ N, we call |
https://en.wikipedia.org/wiki/Jodrell%20Bank%20Centre%20for%20Astrophysics | The Jodrell Bank Centre for Astrophysics at the University of Manchester, is among the largest astrophysics groups in the UK. It includes the Jodrell Bank Observatory, the MERLIN/VLBI National Facility, and the Jodrell Bank Visitor Centre. The centre was formed after the merger of the Victoria University of Manchester and UMIST which brought two astronomy groups together. The Jodrell Bank site also hosts the headquarters of the SKA Observatory (SKAO) - the International Governmental Organisation (IGO) tasked with the delivery and operation of the Square Kilometre Array, created on the signing of the Rome Convention in 2019. The SKA will be the largest telescope in the world - construction is expected to start at the end of this decade.
The JBCA is part of the School of Physics and Astronomy. The current director is Professor Michael Garrett.
Research
The research at the Centre focuses on:
Astrochemistry
Astrophysical masers
The Cosmic Microwave Background
Galaxy formation and evolution
Gravitational lenses
Theoretical astrophysics and cosmology
Planetary nebulae
Pulsars
Stellar physics (including star formation and solar plasmas)
Development of telescope receivers
Jodrell Bank Observatory
The Jodrell Bank Observatory, located near Goostrey and Holmes Chapel in Cheshire, has played an important role in the research of meteors, quasars, pulsars, masers and gravitational lenses, and was heavily involved with the tracking of space probes at the start of the Space Age.
The main telescope at the observatory is the Lovell Telescope, which is the third largest steerable radio telescope in the world. There are three other active telescopes located at the observatory; the Mark II, as well as 42 ft and 7m-diameter radio telescopes. Jodrell Bank Observatory is also the base of the Multi-Element Radio Linked Interferometer Network (MERLIN), a National Facility run by the University of Manchester on behalf of the Science and Technology Facilities Council. |
https://en.wikipedia.org/wiki/Butyryl%20phosphate | Butyryl phosphate is an intermediate in the fermentation of butyric acid. The glutamate oxidation of butyryl phosphate may provide the main source of energy for Clostridium tetanomorphum.
See also
Butyric acid |
https://en.wikipedia.org/wiki/Finite%20element%20machine | The Finite Element Machine (FEM) was a late 1970s-early 1980s NASA project to build and evaluate the performance of a parallel computer for structural analysis. The FEM was completed and successfully tested at the NASA Langley Research Center in Hampton, Virginia. The motivation for FEM arose from the merger of two concepts: the finite element method of structural analysis and the introduction of relatively low-cost microprocessors.
In the finite element method, the behavior (stresses, strains and displacements resulting from load conditions) of large-scale structures is approximated by a FE model consisting of structural elements (members) connected at structural node points. Calculations on traditional computers are performed at each node point and results communicated to adjacent node points until the behavior of the entire structure is computed. On the Finite Element Machine, microprocessors located at each node point perform these nodal computations in parallel. If there are more node points (N) than microprocessors (P), then each microprocessor performs N/P computations. The Finite Element Machine contained 32 processor boards each with a Texas Instruments TMS9900 processor, 32 Input/Output (IO) boards and a TMS99/4 controller. The FEM was conceived, designed and fabricated at NASA Langley Research Center. The TI 9900 processor chip was selected by the NASA team as it was the first 16-bit processor available on the market which until then was limited to less powerful 8-bit processors. The FEM concept was first successfully tested to solve beam bending equations on a Langley FEM prototype (4 IMSAI 8080s). This led to full-scale FEM fabrication & testing by the FEM hardware-software-applications team led by Dr. Olaf Storaasli formerly of NASA Langley Research Center and Oak Ridge National Laboratory (currently at USEC).
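The N/P workload split mentioned above amounts to a simple partition of node indices across processors; a round-robin sketch in Python (the node and processor counts are illustrative, and block partitioning would serve equally well):

```python
def partition_nodes(n_nodes, n_procs):
    """Round-robin assignment: node i is computed by processor i mod P."""
    return {p: list(range(p, n_nodes, n_procs)) for p in range(n_procs)}

assign = partition_nodes(10, 4)

# Every node is assigned exactly once ...
assert sorted(i for nodes in assign.values() for i in nodes) == list(range(10))
# ... and each processor performs roughly N/P nodal computations.
assert all(len(nodes) in (2, 3) for nodes in assign.values())
```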
The first significant Finite Element Machine results are documented in: The Finite Element Machine: An experiment in parallel processing (NASA |
https://en.wikipedia.org/wiki/Discounted%20maximum%20loss | Discounted maximum loss, also known as worst-case risk measure, is the present value of the worst-case scenario for a financial portfolio.
In investment, in order to protect the value of an investment, one must consider all possible alternatives to the initial investment. How one does this comes down to personal preference; however, the worst possible alternative is generally considered to be the benchmark against which all other options are measured. The present value of this worst possible outcome is the discounted maximum loss.
Definition
Given a finite state space S, let X be a portfolio with profit X_s for s ∈ S. If X_(1), X_(2), …, X_(|S|) are the order statistics of the profits, the discounted maximum loss is simply −δX_(1), where δ is the discount factor.
Given a general probability space (Ω, F, P), let X be a portfolio with discounted return δX(ω) for state ω ∈ Ω. Then the discounted maximum loss can be written as −ess inf δX, where ess inf denotes the essential infimum.
Properties
The discounted maximum loss is the expected shortfall at level α = 0. It is therefore a coherent risk measure.
The worst-case risk measure is the most conservative (normalized) risk measure in the sense that, for any risk measure ρ and any portfolio X, ρ(X) ≤ ρ_max(X), where ρ_max denotes the worst-case risk measure.
Example
As an example, assume that a portfolio is currently worth 100, and the discount factor is 0.8 (corresponding to an interest rate of 25%):
In this case the maximum loss is from 100 down to 20, i.e. 80, so the discounted maximum loss is simply 0.8 × 80 = 64. |
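The finite-state computation can be written directly; the list of scenario values and the helper name below are illustrative:

```python
def discounted_max_loss(scenario_values, initial_value, discount):
    """Worst-case loss relative to the initial value, discounted to present value."""
    worst = min(scenario_values)               # worst-case future portfolio value
    return discount * (initial_value - worst)

# Portfolio worth 100 today; the worst scenario leaves it worth 20; delta = 0.8
print(discounted_max_loss([100, 80, 20, 150], 100, 0.8))   # → 64.0
```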
https://en.wikipedia.org/wiki/Reactor%20pattern | The reactor software design pattern is an event handling strategy that can respond to many potential service requests concurrently. The pattern's key component is an event loop, running in a single thread or process, which demultiplexes incoming requests and dispatches them to the correct request handler.
By relying on event-based mechanisms rather than blocking I/O or multi-threading, a reactor can handle many concurrent I/O bound requests with minimal delay.
A reactor also allows for easily modifying or expanding specific request handler routines, though the pattern does have some drawbacks and limitations.
With its balance of simplicity and scalability, the reactor has become a central architectural element in several server applications and software frameworks for networking. Derivations such as the multireactor and proactor also exist for special cases where even greater throughput, performance, or request complexity are necessary.
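The core of the pattern – a single-threaded event loop that demultiplexes ready event sources and dispatches each to its registered handler – can be sketched with Python's standard `selectors` module. The socketpair and the upper-casing echo handler are illustrative; a real server would register listening and client sockets and loop indefinitely:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def handle(conn, mask):
    # Request handler: read what is available and echo it back upper-cased
    data = conn.recv(1024)
    if data:
        conn.sendall(data.upper())

client, server = socket.socketpair()
sel.register(server, selectors.EVENT_READ, handle)   # handler stored as key.data

client.sendall(b"ping")

# One iteration of the event loop: demultiplex ready sources, dispatch handlers
for key, mask in sel.select(timeout=1):
    key.data(key.fileobj, mask)

echoed = client.recv(1024)
print(echoed)   # → b'PING'
```

Because `select()` only reports sources that are already readable, the loop never blocks on a slow request, which is the property the pattern exploits.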
Overview
Practical considerations for the client–server model in large networks, such as the C10k problem for web servers, were the original motivation for the reactor pattern.
A naive approach to handle service requests from many potential endpoints, such as network sockets or file descriptors, is to listen for new requests from within an event loop, then immediately read the earliest request. Once the entire request has been read, it can be processed and forwarded on by directly calling the appropriate handler. An entirely "iterative" server like this, which handles one request from start-to-finish per iteration of the event loop, is logically valid. However, it will fall behind once it receives multiple requests in quick succession. The iterative approach cannot scale because reading the request blocks the server's only thread until the full request is received, and I/O operations are typically much slower than other computations.
One strategy to overcome this limitation is multi-threading: by immediately splitting off each n |
https://en.wikipedia.org/wiki/Rendezvous%20protocol | A rendezvous protocol is a computer network protocol that enables resources or P2P network peers to find each other. A rendezvous protocol uses a handshaking model, unlike an eager protocol which directly copies the data. In a rendezvous protocol the data is sent when the destination says it is ready, but in an eager protocol the data is sent assuming the destination can store the data.
Examples of rendezvous protocols include JXTA, SIP, Freenet Project, I2P, and such protocols generally involve hole punching.
Because of firewall network address translation (NAT) issues, rendezvous protocols generally require that there be at least one unblocked and un-NATed server that lets the peers locate each other and initiate concurrent packets at each other. |
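The handshaking model can be sketched abstractly with Python's threading primitives: the sender transmits only after the destination has signalled readiness, in contrast to an eager send that would enqueue the data immediately. The names and single-message exchange are illustrative:

```python
import queue
import threading

ready = threading.Event()
channel = queue.Queue()
received = []

def receiver():
    ready.set()                      # rendezvous handshake: "I am ready"
    received.append(channel.get())   # receive once the sender transmits

t = threading.Thread(target=receiver)
t.start()

ready.wait()             # sender blocks until the destination says it is ready
channel.put("payload")   # only now is the data sent (eager would put() first)
t.join()

print(received)   # → ['payload']
```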
https://en.wikipedia.org/wiki/Navassa%20curly-tailed%20lizard | The Navassa curly-tailed lizard or Navassa curlytail lizard (Leiocephalus eremitus) is an extinct lizard species from the family of curly-tailed lizards (Leiocephalidae). It is known only from the holotype, a female specimen from which it was described in 1868. A possible second specimen which was collected by Rollo Beck in 1917 was instead identified as a Tiburon curly-tailed lizard (Leiocephalus melanochlorus) by herpetologist Richard Thomas in 1966.
Geographic range
Leiocephalus eremitus was endemic to Navassa Island.
Description
The size of the holotype is given as snout–vent length (SVL). The head and ventral scales are smooth. The dorsal scales are larger than the scales on the flanks and the ventral scales. The dorsum is dark gray with nine dark transverse bars. The tail is pale with transverse bars on the basal half and uniformly dark gray to black on the posterior half. Throat, breast, belly and the extremities are brown with pale-tipped scales.
Behavior and habitat
Navassa has xeric forest vegetation, but nothing specific is known about biology of this species. The reason for its extinction is also unknown, but predation by cats is a possible reason. |
https://en.wikipedia.org/wiki/Fish%20fillet | A fish fillet, from the French word filet, meaning a thread or strip, is the flesh of a fish which has been cut or sliced away from the bone by cutting lengthwise along one side of the fish parallel to the backbone. In preparation for filleting, any scales on the fish should be removed. The contents of the stomach also need careful detaching from the fillet. Because fish fillets do not contain the larger bones running along the vertebrae, they are often said to be "boneless". However, some species, such as the common carp, have smaller intramuscular bones called pins within the fillet. The skin present on one side may or may not be stripped from the fillet. Butterfly fillets can be produced by cutting the fillets on each side in such a way that they are held together by the flesh and skin of the belly.
Fish fillets can be contrasted with fish steaks (also known as fish cutlets), which are cut perpendicular to the spine and include the larger bones.
Filleting
Fish fillets comprise the flesh of the fish, that is, the skeletal muscles and fat as opposed to the bones and organs. Fillets are usually obtained by slicing the fish parallel to the spine, rather than perpendicular to the spine as is the case with steaks. The remaining bones with the attached flesh are called the "frame", which is often used to make fish stock. As opposed to whole fish or fish steaks, fillets do not contain the fish's backbone; they yield less flesh, but are easier to eat.
Special cut fillets are taken from solid large blocks; these include a "natural" cut fillet, wedge, rhombus or tail shape. Fillets may be skinless or have skin on; pinbones may or may not be removed. A fletch is a large boneless fillet of halibut, swordfish or tuna.
There are several ways to cut a fish fillet:
Cutlet: obtained by slicing from behind the head of the fish, round the belly and tapering towards the tail. The fish is then turned and the process repeated on the other side to produce a double fillet
Single: more |
https://en.wikipedia.org/wiki/Set-valued%20function | A set-valued function (or correspondence) is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Set-valued functions are also known as multivalued functions in some references, but herein, and in many other references in mathematical analysis, a multivalued function is a set-valued function with a further continuity property, namely that the choice of an element in the set defines a corresponding element in each set for close to , and thus locally defines an ordinary function.
Examples
The argmax of a function is, in general, multivalued. For example, .
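As an illustrative sketch (the helper name `argmax_set` is an assumption, not standard notation), the set-valued nature of argmax over a finite domain can be shown in Python:

```python
# Illustrative sketch: the argmax of a function over a finite domain is,
# in general, a set of points rather than a single point.
def argmax_set(f, domain):
    """Return the set of all points in `domain` at which f attains its maximum."""
    best = max(f(x) for x in domain)
    return {x for x in domain if f(x) == best}

# x -> x**2 on {-1, 0, 1} is maximized at two points, so "the" argmax
# is the set {-1, 1} rather than a single value:
print(argmax_set(lambda x: x * x, [-1, 0, 1]))  # {-1, 1} (as a set)
```

A selection that picks one element of this set for each input would recover an ordinary (point-valued) function, which is the continuity idea mentioned above.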
Set-valued analysis
Set-valued analysis is the study of sets in the spirit of mathematical analysis and general topology.
Instead of considering collections of only points, set-valued analysis considers collections of sets. If a collection of sets is endowed with a topology, or inherits an appropriate topology from an underlying topological space, then the convergence of sets can be studied.
Much of set-valued analysis arose through the study of mathematical economics and optimal control, partly as a generalization of convex analysis; the term "variational analysis" is used by authors such as R. Tyrrell Rockafellar and Roger J-B Wets, Jonathan Borwein and Adrian Lewis, and Boris Mordukhovich. In optimization theory, the convergence of approximating subdifferentials to a subdifferential is important in understanding necessary or sufficient conditions for any minimizing point.
There exist set-valued extensions of the following concepts from point-valued analysis: continuity, differentiation, integration, implicit function theorem, contraction mappings, measure theory, fixed-point theorems, optimization, and topological degree theory. In particular, equations are generalized to inclusions, while differential equations are |
https://en.wikipedia.org/wiki/Saint%20Croix%20racer | The Saint Croix racer (Borikenophis sanctaecrucis) is a possibly extinct species of snake in the family Colubridae that is endemic to the island of Saint Croix in the United States Virgin Islands.
Etymology
The specific name, sanctaecrucis, refers to the island of Saint Croix, on which the holotype was collected.
Description
B. sanctaecrucis may attain a snout-to-vent length (SVL) of . It has smooth dorsal scales, which are arranged in 17 rows at midbody. The holotype has a total length of , which includes a tail long.
B. sanctaecrucis is oviparous.
Habitat
The preferred natural habitat of B. sanctaecrucis is xeric forest.
Conservation
B. sanctaecrucis is feared extinct, as it has not been recorded in the more than 100 years since the holotype was collected. St. Croix is a densely populated island, and the species is a fairly large snake. If it is extinct, the most probable causes were predation by introduced mongooses and deforestation of its habitat. However, recent rediscoveries of other Caribbean reptiles that were also thought extinct bring hope that a small population (probably fewer than 50 individuals) of B. sanctaecrucis survives somewhere on St. Croix. |
https://en.wikipedia.org/wiki/Parictis | Parictis is an extinct arctoid belonging to the family Subparictidae.
Taxonomy & evolution
It was originally described by Scott in 1893 as a new genus and species of mustelid, Parietis princeous, based on a single specimen, a mandible fragment with two anterior molars. An alternative name and spelling, ?Parictis princeps, was proposed in 1894; and in 1904 both the genus and species names were declared to be in error and the name Parictis primaevus was assigned.
Parictis bathygenus was described in 1947, but it was considered a different genus by 1958, and a synonym of Cynelos caroniavorus by 1976.
Another species was described in 1954 as Campylocynodon personi, and was reassigned to the genus Parictis in 1967. And Parictis major was described during a review of the genus in 1972.
The genus as a whole was placed within various families, including Canidae by Hall in 1931 and Ursidae by Hunt in 1998. It is placed within the family Subparictidae as of 2023.
Description
It was a very small and graceful arctoid with a skull only 7 cm long. Parictis first appeared in North America in the Late Eocene (around 38 million years ago), but it did not arrive in Eurasia until the Miocene. Some suggest that Parictis may have emigrated from Asia into North America during the major sea level low about 37 mya, because of the continued evolution of the Amphicynodontinae into the Hemicyoninae in Asia. Although no Parictis fossils have been found in East Asia, Parictis does appear in Eurasia and Africa, but not until the Miocene. |
https://en.wikipedia.org/wiki/Fujian%20pond%20turtle | The Fujian pond turtle ("Mauremys" × iversoni) is an intergeneric hybrid turtle in the family Geoemydidae (formerly Bataguridae), possibly also occurring naturally, that is produced in larger numbers by Chinese turtle farms as a "copy" of the golden coin turtle (Cuora trifasciata). It appears to occur in China and Vietnam. Before its actual origin became known, it was listed as Data Deficient in the IUCN Red List.
The parents of this hybrid are the Asian yellow pond turtle and the golden coin turtle, with the male apparently usually of the latter species. While it is not unusual for perfectly valid geoemydid species to arise from hybridization, recognition as a species would require that the hybrids be fertile and constitute a phenotypically distinct and self-sustaining lineage. This does not appear to be the case for this "species": only single specimens have been found rather than an entire population of these turtles, and captive breeding has rarely been successful, as most males proved to be infertile (while females are fully fertile).
The Fujian pond turtle's scientific name was given in dedication to American herpetologist John B. Iverson.
"Clemmys guangxiensis" is a composite taxon described from specimens of Mauremys mutica and the natural hybrid "Mauremys" × iversoni.
See also
"Mauremys" × pritchardi
"Ocadia" × glyphistoma
Ocadia philippeni
Cuora serrata |
https://en.wikipedia.org/wiki/Mauremys%20pritchardi | Mauremys pritchardi is an interspecific hybrid turtle in the family Geoemydidae. M. pritchardi, described as being from Myanmar (where, apparently, neither of the parental species occurs), has been found in the wild in China and Japan, and is produced to some extent in Chinese turtle farms. It was listed as Data Deficient in the IUCN Red List before its actual origin became known.
The parents of this hybrid are the Chinese pond turtle (Mauremys reevesii) and the Asian yellow pond turtle (Mauremys mutica). While it is not unusual for perfectly valid geoemydid species to arise from hybridization, recognition as a species would require that the hybrids be fertile and constitute a phenotypically distinct and self-sustaining lineage. This does not yet appear to be fully the case for this "species", although a population of these turtles has recently been found in Japan (Kosukawa et al. 2006). The hybrid offspring are perfectly fertile (unlike those of Mauremys iversoni, for example, another intergeneric hybrid) and have already been bred in captivity, with all juveniles closely resembling their parents (and not the parental species). Genetic studies verify its hybrid origin, but scientists are unsure when it arose: according to Wink et al. 2001, it might well be a very ancient hybrid, while Parham et al. 2001 suppose that it is of rather recent origin.
Etymology
The specific name, pritchardi, is in honour of British herpetologist Peter Pritchard.
See also
Fujian pond turtle
Ocadia glyphistoma
Ocadia philippeni
Cuora serrata |
https://en.wikipedia.org/wiki/Mauremys%20glyphistoma | "Mauremys" glyphistoma is a hybrid turtle in the family Geoemydidae (formerly Bataguridae). Originally described as a new species supposedly endemic to Guangxi, China, it was classified as Data Deficient in the IUCN Red List.
It is known only from a few specimens, including the type series, all from the pet trade and supposedly from Guangxi or Vietnam. Whether found in the wild or bred for the pet trade, it was later determined to be the offspring of a male Chinese stripe-necked turtle and a female Vietnamese pond turtle (Spinks et al. 2004; Stuart & Parham 2006). If it is a wild-born hybrid, the specimen must have originated in central Vietnam, the only area where Mauremys annamensis is known to exist and where its range overlaps with that of Mauremys sinensis.
See also
Mauremys iversoni the Fujian pond turtle
Mauremys pritchardi
Mauremys philippeni |
https://en.wikipedia.org/wiki/Philippen%27s%20striped%20turtle | Philippen's striped turtle, "Mauremys" philippeni, has recently been shown to be an intergeneric hybrid (Stuart & Parham, 2006) between a male Mauremys sinensis and a female Cuora trifasciata.
The "species" is known only from a handful of specimens (mainly the type series), said to originate from Hainan, but all acquired from a pet trader in Hong Kong.
Etymology
The specific name, philippeni, is in honor of German herpetologist Hans-Dieter Philippen (born 1957). |
https://en.wikipedia.org/wiki/Chinese%20false-eyed%20turtle | The Chinese false-eyed turtle (Cuora trifasciata × Sacalia quadriocellata) is a hybrid turtle in the family Geoemydidae. It is a hybrid between a male golden coin turtle (Cuora trifasciata) and a female four-eyed turtle (Sacalia quadriocellata). While formerly considered a wild species believed to be originally from Hainan, it is now known only from type specimens acquired through the pet trade. |
https://en.wikipedia.org/wiki/Bailar%20twist | The Bailar twist is a mechanism proposed for the racemization of octahedral complexes containing three bidentate chelate rings. Such complexes typically adopt an octahedral molecular geometry, in which case they possess helical chirality. One pathway by which these compounds can racemize is via the formation of a trigonal prismatic intermediate with D3h point group symmetry. This pathway is named in honor of John C. Bailar, Jr., an inorganic chemist who investigated this process. An alternative pathway is called the Ray–Dutt twist.
See also
Pseudorotation
Bartell mechanism
Berry mechanism
Ray–Dutt twist
Fluxional molecule |
https://en.wikipedia.org/wiki/Bunyakovsky%20conjecture | The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial in one variable with integer coefficients to give infinitely many prime values in the sequence. It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for to have the desired prime-producing property:
the leading coefficient is positive,
the polynomial is irreducible over the rationals (and integers), and
the values have no common factor. (In particular, the coefficients of should be relatively prime.)
Bunyakovsky's conjecture is that these conditions are sufficient: if satisfies (1)–(3), then is prime for infinitely many positive integers .
A seemingly weaker yet equivalent statement to Bunyakovsky's conjecture is that for every integer polynomial that satisfies (1)–(3), is prime for at least one positive integer : but then, since the translated polynomial still satisfies (1)–(3), in view of the weaker statement is prime for at least one positive integer , so that is indeed prime for infinitely many positive integers . Bunyakovsky's conjecture is a special case of Schinzel's hypothesis H, one of the most famous open problems in number theory.
Discussion of three conditions
The first condition is necessary because if the leading coefficient is negative then for all large , and thus is not a (positive) prime number for large positive integers . (This merely satisfies the sign convention that primes are positive.)
The second condition is necessary because if where the polynomials and have integer coefficients, then we have for all integers ; but and take the values 0 and only finitely many times, so is composite for all large .
The second condition also fails for the polynomials reducible over the rationals.
For example, the integer-valued polynomial doesn't satisfy the condition (2) since , so at least one of the latter two factors must be a divisor of in order to have prime, which |
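The three conditions above can be probed numerically. The sketch below (the helper names `f` and `is_prime` are ad-hoc assumptions, not from the article) checks condition (3) and lists the first prime values for the classic candidate f(n) = n² + 1, which satisfies (1)–(3) but is still not proven to give infinitely many primes:

```python
from math import gcd, isqrt

def f(n):
    return n * n + 1  # positive leading coefficient, irreducible over Q

def is_prime(m):
    # trial division, adequate for the small values probed here
    return m >= 2 and all(m % d for d in range(2, isqrt(m) + 1))

# Condition (3): the values f(1), f(2), ... must share no common factor.
common = 0
for n in range(1, 10):
    common = gcd(common, f(n))
print(common)  # 1, so no fixed prime divides every value

# The conjecture predicts infinitely many prime values; list the first few n:
prime_n = [n for n in range(1, 50) if is_prime(f(n))]
print(prime_n[:5])  # [1, 2, 4, 6, 10] -> f(n) = 2, 5, 17, 37, 101
```

By contrast, f(n) = n² + n + 2 is always even, so its values share the common factor 2 and condition (3) fails even though the polynomial is irreducible.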
https://en.wikipedia.org/wiki/DEC%20BATCH-11/DOS-11 | BATCH-11/DOS-11, also known simply as DOS-11, is a discontinued operating system by Digital Equipment Corporation (DEC) of Maynard, Massachusetts. The first version of DOS-11 (V08-02) was released in 1970 and was the first operating system to run on the Digital PDP-11 minicomputer. DOS-11 was not known to be easy to use even in its day and became much less used in 1973 with the release of the RT-11 operating system.
Features
DOS-11 included:
DOS-Monitor
Edit-11 (text editor)
FORTRAN IV (programming language)
Libr-11 (librarian)
Link-11 (linker)
ODT-11R (debugging program)
PAL-11R (assembler)
PIP (file utility package)
DOS-11 came with XXDP, a diagnostics and monitor program for the PDP-11. Like other Digital operating systems, DOS-11 also had a FORTRAN-IV (ANSI-66) compiler. FORTRAN-IV was not supported on PDP-11 systems with less than 12K of memory. DOS-11 systems running in 8K and 12K configurations ran a limited version of the MACRO-11 assembler (PAL-11R in overlaid form).
The DOS-11 operating system kernel was one file called MONLIB.LCL. The LCL extension stood for Linked Core Image Library (or LICIL). An LICIL could be stored on any type of media that the DOS-11 operating system was distributed on (disk, DECtape, punched tape or magnetic tape). When the LICIL file was installed ("hooked") onto a disk drive as a contiguous file, the monitor library name was changed to MONLIBCIL, which could then be booted. The CIL extension was the acronym for Core Image Library. Core was the term for the core memory systems common to the PDP-11. A Core Image Library could be created with the CILUS (Core Image Library Update and Save) program. A MONLIBCIL typically contained the resident monitor (RMON), the keyboard command routine, device drivers, EMT routines, the clock routines and the transient monitor.
Legacy
DOS-11 was used to compile and install early versions of the RSTS-11 and RSTS/E operating systems; however, it is an ancestor to the RSX-11 family o |
https://en.wikipedia.org/wiki/Expected%20value%20of%20sample%20information | In decision theory, the expected value of sample information (EVSI) is the expected increase in utility that a decision-maker could obtain from gaining access to a sample of additional observations before making a decision. The additional information obtained from the sample may allow them to make a more informed, and thus better, decision, resulting in an increase in expected utility. EVSI attempts to estimate what this improvement would be before seeing actual sample data; hence, EVSI is a form of what is known as preposterior analysis. The use of EVSI in decision theory was popularized by Robert Schlaifer and Howard Raiffa in the 1960s.
Formulation
Let
It is common (but not essential) in EVSI scenarios for , and , which is to say that each observation is an unbiased sensor reading of the underlying state , with each sensor reading being independent and identically distributed.
The utility from the optimal decision based only on the prior, without making any further observations, is given by
If the decision-maker could gain access to a single sample, , the optimal posterior utility would be
where is obtained from Bayes' rule:
Since they don't know what sample would actually be obtained if one were obtained, they must average over all possible samples to obtain the expected utility given a sample:
The expected value of sample information is then defined as
Computation
It is seldom feasible to carry out the integration over the space of possible observations in E[U|SI] analytically, so the computation of EVSI usually requires a Monte Carlo simulation. The method involves randomly simulating a sample, , then using it to compute the posterior and maximizing utility based on . This whole process is then repeated many times, for to obtain a Monte Carlo sample of optimal utilities. These are averaged to obtain the expected utility given a hypothetical sample.
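As a hedged illustration of this Monte Carlo procedure, the toy problem below estimates EVSI for a two-action decision with one noisy binary observation. All of the numbers, names (`prior`, `sens`, `spec`, `U`, `best_utility`), and the sensor model are invented for illustration, not taken from the article:

```python
import random

prior = 0.3                    # P(state is "good")
sens, spec = 0.8, 0.9          # P(y=1 | good), P(y=0 | bad)
U = {("approve", True): 100, ("approve", False): -50,
     ("reject", True): 0, ("reject", False): 0}

def best_utility(p):
    """Max expected utility over actions when P(good) = p."""
    return max(p * U[(d, True)] + (1 - p) * U[(d, False)]
               for d in ("approve", "reject"))

eu_prior = best_utility(prior)  # optimal utility with no observation

# Monte Carlo over hypothetical samples: draw a state, then an observation,
# update by Bayes' rule, and record the posterior-optimal utility.
random.seed(0)
total, N = 0.0, 100_000
for _ in range(N):
    good = random.random() < prior
    y = random.random() < (sens if good else 1 - spec)
    p_y = prior * sens + (1 - prior) * (1 - spec) if y else \
          prior * (1 - sens) + (1 - prior) * spec
    post = prior * (sens if y else 1 - sens) / p_y
    total += best_utility(post)

evsi = total / N - eu_prior
print(f"EVSI ~ {evsi:.1f}")  # close to the analytic value 20.5 for these numbers
```

Because this problem has only two possible observations, E[U|SI] could be computed exactly by enumeration; the sampling loop stands in for the integration that, as noted above, is seldom feasible analytically in realistic problems.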
Example
A regulatory agency is to decide whether to approve a new treatment. Before ma |
https://en.wikipedia.org/wiki/Iron%20in%20biology | Iron is an important biological element. It is used in the ubiquitous iron–sulfur proteins and, in vertebrates, in hemoglobin, which is essential for blood oxygen transport.
Overview
Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzyme responsible for biological nitrogen fixation. Iron-containing proteins participate in the transport, storage and use of oxygen. Iron proteins are involved in electron transfer. The ubiquity of iron in life has led to the iron–sulfur world hypothesis, which holds that iron was a central component of the environment of early life.
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content.
Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III).
Biochemistry
Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores.
After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable comple |
https://en.wikipedia.org/wiki/Handover%20keying | In wireless technology, handover keying (Hokey) refers to maintaining a secure connection seamlessly while migrating from one wireless network to another.
External links
IETF Working Group
Interview with Russ Housley, chair of the Internet Engineering Task Force
Wireless networking |
https://en.wikipedia.org/wiki/Pinhole%20camera%20model | The pinhole camera model describes the mathematical relationship between the coordinates of a point in three-dimensional space and its projection onto the image plane of an ideal pinhole camera, where the camera aperture is described as a point and no lenses are used to focus light. The model does not include, for example, geometric distortions or blurring of unfocused objects caused by lenses and finite sized apertures. It also does not take into account that most practical cameras have only discrete image coordinates. This means that the pinhole camera model can only be used as a first order approximation of the mapping from a 3D scene to a 2D image. Its validity depends on the quality of the camera and, in general, decreases from the center of the image to the edges as lens distortion effects increase.
Some of the effects that the pinhole camera model does not take into account can be compensated, for example by applying suitable coordinate transformations on the image coordinates; other effects are sufficiently small to be neglected if a high quality camera is used. This means that the pinhole camera model often can be used as a reasonable description of how a camera depicts a 3D scene, for example in computer vision and computer graphics.
Geometry
The geometry related to the mapping of a pinhole camera is illustrated in the figure. The figure contains the following basic objects:
A 3D orthogonal coordinate system with its origin at O. This is also where the camera aperture is located. The three axes of the coordinate system are referred to as X1, X2, X3. Axis X3 is pointing in the viewing direction of the camera and is referred to as the optical axis, principal axis, or principal ray. The plane which is spanned by axes X1 and X2 is the front side of the camera, or principal plane.
An image plane, where the 3D world is projected through the aperture of the camera. The image plane is parallel to axes X1 and X2 and is located at distance from the |
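A minimal sketch of the ideal pinhole mapping described above (the function name `project` and the virtual-image-plane convention, which drops the sign flip of the physical image, are assumptions): a camera-frame point (x1, x2, x3) maps to image coordinates (y1, y2) = (f/x3)(x1, x2), where f is the distance from the aperture O to the image plane along X3.

```python
def project(point, f=1.0):
    """Project a 3D camera-frame point onto the image plane of an ideal pinhole camera."""
    x1, x2, x3 = point
    if x3 <= 0:
        # points behind (or at) the aperture have no image in this model
        raise ValueError("point must lie in front of the camera (x3 > 0)")
    return (f * x1 / x3, f * x2 / x3)

# Doubling the depth halves the projected size (perspective foreshortening):
print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
print(project((2.0, 1.0, 8.0)))  # (0.25, 0.125)
```

Real cameras add lens distortion and discrete pixel coordinates on top of this mapping, which is why the model is only a first-order approximation, as noted above.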
https://en.wikipedia.org/wiki/Ray%E2%80%93Dutt%20twist | The Ray–Dutt twist is a mechanism proposed for the racemization of octahedral complexes containing three bidentate chelate rings. Such complexes typically adopt an octahedral molecular geometry in their ground states, in which case they possess helical chirality. The pathway entails formation of an intermediate of C2v point group symmetry. An alternative pathway that also does not break any metal-ligand bonds is called the Bailar twist. Both of these mechanisms proceed through intermediates wherein the ligating atoms (X in the scheme) are arranged in an approximate trigonal prism.
This pathway is called the Ray–Dutt twist in honor of Priyadaranjan Ray (not Prafulla Chandra Ray) and N. K. Dutt, inorganic chemists at the Indian Association for the Cultivation of Science (IACS), who proposed this process.
See also
Pseudorotation
Bailar twist
Bartell mechanism
Berry mechanism
Fluxional molecule
Indian Association for the Cultivation of Science (IACS) |
https://en.wikipedia.org/wiki/Anti-transglutaminase%20antibodies | Anti-transglutaminase antibodies (ATA) are autoantibodies against the transglutaminase protein. Antibodies serve an important role in the immune system by detecting cells and substances that the rest of the immune system then eliminates. These cells and substances can be foreign (for example, viruses) and also can be produced by the body (for example, cancer cells). Antibodies against the body's own products are called autoantibodies. Autoantibodies can sometimes errantly be directed against healthy portions of the organism, causing autoimmune diseases.
ATA can be classified according to 2 different schemes: transglutaminase isoform and immunoglobulin reactivity subclass (IgA, IgG) toward transglutaminases.
Transglutaminase isoform reactivity
Anti-tissue transglutaminase
Antibodies to tissue transglutaminase (abbreviated as anti-tTG or anti-TG2) are found in patients with several conditions, including celiac disease, juvenile diabetes, inflammatory bowel disease, and various forms of arthritis.
In celiac disease, ATA are involved in the destruction of the villous extracellular matrix and target the destruction of intestinal villous epithelial cells by killer cells. Deposits of anti-tTG in the intestinal epithelium predict celiac disease.
Anti-endomysial reactivity
The endomysium is a layer of connective tissue that ensheaths a muscle fiber. The endomysium contains a form of transglutaminase called "tissue transglutaminase" or "tTG" for short, and antibodies that bind to this form of transglutaminase are called endomysial autoantibodies (EmA).
The antiendomysial antibody test is a histological assay for patient serum binding to primate esophageal tissue. EmA are present in celiac disease. They do not cause any direct symptoms to muscles, but detection of EmA is useful in the diagnosis of the disease.
Anti-epidermal transglutaminase
Antibodies to epidermal transglutaminase (eTG, also keratinocyte transglutaminase) are the autoantibodies believed to caus |
https://en.wikipedia.org/wiki/Flora%20Londinensis | Flora Londinensis is a folio-sized book describing the flora found in the London region in the late 18th century. The Flora was published by William Curtis in six large volumes. The descriptions of the plants were accompanied by hand-coloured copperplate engravings by botanical artists such as James Sowerby, Sydenham Edwards and William Kilburn.
The full title is Flora Londinensis: or, plates and descriptions of such plants as grow wild in the environs of London: with their places of growth, and times of flowering, their several names according to Linnæus and other authors: with a particular description of each plant in Latin and English. To which are added, their several uses in medicine, agriculture, rural œconomy and other arts.
The first volume was produced in 1777 and the final one, containing a title and an index, was published in 1798. A binary name is given for each species in the survey; common and other names are also provided. Previous works on the flora of Britain had been intended for scientists, apothecaries, and herbalists, while Flora Londinensis was written for the general reader. The appealing plates also provided botanical details which could assist in the identification of a species.
Curtis was praefectus horti (Director, Society of Apothecaries) at the Chelsea Physic Garden and a botanist with a broad knowledge of exotic species. However, Flora Londinensis covered the territory most familiar to him: the flowering species within a 10-mile radius of London. He commissioned several painters to produce hand-coloured copper engravings to accompany the pages. Curtis wrote the descriptions and managed the publishing and sales of the volumes, producing six fascicles of twelve issues, each containing six plates. The final survey eventually came to include many species found in southern England and a few others.
The Subscriber's List in Volume I records 321 names who between them subscribed for 331 complete copies. Plates were also sold individually, either |
https://en.wikipedia.org/wiki/Phycoerythrocyanin | Phycoerythrocyanin is a kind of phycobiliprotein, a magenta chromoprotein involved in the photosynthesis of some cyanobacteria. This chromoprotein consists of alpha- and beta-subunits, generally aggregated as a hexamer. Alpha-phycoerythrocyanin contains one phycoviolobilin, a violet bilin, covalently attached at Cys-84, and beta-phycoerythrocyanin contains two phycocyanobilins, blue bilins, covalently attached at Cys-84 and Cys-155, respectively. Phycoerythrocyanin is similar to phycocyanin, an important component of the light-harvesting complex (phycobilisome) of cyanobacteria and red algae.
While only phycocyanobilin is covalently bound to phycocyanin, giving an absorption maximum around 620 nm, phycoerythrocyanin contains both phycoviolobilin and phycocyanobilin, giving an absorption maximum around 575 nm. As both phycoerythrocyanin and phycocyanin have phycocyanobilin acting as the terminal acceptor of energy transfer, they fluoresce around 635 nm; this emission is absorbed by allophycocyanins, which have maximal absorption around 650 nm and maximal fluorescence around 670 nm. Finally, the light energy absorbed by phycoerythrocyanin is transferred to the photosynthetic reaction center. |
https://en.wikipedia.org/wiki/Board%20mix | A board mix is a recording created by running lines directly off a mixing console while the sound is mixed in real time. The alternative to a board mix is to use a virtual mixing console, an increasingly popular approach. |
https://en.wikipedia.org/wiki/Manicule | The manicule, , is a typographic mark with the appearance of a hand with its index finger extending in a pointing gesture. Originally used for handwritten marginal notes, it later came to be used in printed works to draw the reader's attention to important text. Though once widespread, it is rarely used today, except as an occasional archaic novelty.
Terminology
For most of its history, the mark has been inconsistently referred to by a variety of names. William H. Sherman, in the first dedicated study of the mark, uses the term manicule (from the Latin root manicula, meaning "little hand"), but also identifies 15 further names which have been used:
hand
pointing hand
hand director
pointer
digit
fist
mutton fist
bishop's fist
index
indicator
indicule
maniple
pilcrow
printer's fist
The last three Sherman labels as erroneous, with indicule and maniple being mishearings or conflations, and pilcrow properly referring to the paragraph mark, .
History
Handwritten manicules
The symbol originates in the scribal tradition of the medieval and Renaissance periods, appearing in the margins of manuscripts to mark corrections or notes. The earliest book known to include manicules is the 1086 Domesday Book, where they are used for marginal annotations alongside other marks such as daggers. The age of the annotations is not known, and they may date to later than the 11th century.
Manicules are first known to appear in the 12th century in handwritten manuscripts in Spain, and became common in the 14th and 15th centuries in Italy. Some were playful and very elaborate, with shading and artful cuffs, while others were as simple as "two squiggly strokes suggesting the barest sketch of a pointing hand" and thus quick to draw.
After the popularization of the printing press starting in the 1450s, the manicule continued in handwritten form as a means to annotate printed documents, eventually falling out of popularity by the nineteenth century.
In |
https://en.wikipedia.org/wiki/IBM%20DISOSS | IBM Distributed Office Support System, or DISOSS, is a centralized document distribution and filing application for IBM mainframe computers running the MVS and VSE operating systems. DISOSS runs under both the CICS and IMS/DS transaction processing systems, and later versions use the SNADS architecture of peer-to-peer communication for distributed services.
Heterogeneous office systems connect through DISOSS to the OfficeVision/MVS series. The IBM systems are "OV/MVS, OV/VM, OV/400, PS/CICS, PS/TSO, PS/PC, PROFS, and other Mail Systems Supporting SNADS and DIA. Only a single copy of DISOSS needs to be installed somewhere in the network to accomplish the connection." A number of other vendors such as Digital Equipment Corporation, Hewlett-Packard, and Data General provided links to DISOSS.
Functions
DISOSS provides document library function with search and retrieval controlled by security based on user ID, along with document translation based on Document Interchange Architecture (DIA) and Document Content Architecture (DCA). The different systems that use DISOSS for document exchange and distribution vary in their implementation of DCA and thus the end results of some combinations are only final form (FFT) documents rather than revisable form text (RFT).
It supports document exchange between various IBM and non-IBM office devices including the IBM Displaywriter System, the IBM 5520, the IBM 8100/DOSF, the IBM Scanmaster, and personal computers and word processors. It offers format transformation and printing services, provides a rich application programming interface (API), and interfaces with other office products such as IBM OfficeVision.
History
DISOSS was announced in 1980, and "was designated a strategic IBM product in 1982." It was a key part of IBM Systems Application Architecture (SAA), but suffered from a reputation as "difficult to understand" and "a resource hog." DISOSS continues to be actively marketed and support |
https://en.wikipedia.org/wiki/Cosmic%20time | Cosmic time, or cosmological time, is the time coordinate commonly used in the Big Bang models of physical cosmology. Such a time coordinate may be defined for a homogeneous, expanding universe so that the universe has the same density everywhere at each moment in time (the fact that this is possible means that the universe is, by definition, homogeneous). The clocks measuring cosmic time should move along the Hubble flow.
Cosmic time is a measure of time by a physical clock with zero peculiar velocity in the absence of matter over-/under-densities (to prevent time dilation due to relativistic effects or confusions caused by expansion of the universe). Unlike other measures of time such as temperature, redshift, particle horizon, or Hubble horizon, the cosmic time (similar and complementary to the comoving coordinates) is blind to the expansion of the universe.
There are two main ways for establishing a reference point for the cosmic time. The most trivial way is to take the present time as the cosmic reference point (sometimes referred to as the lookback time).
Alternatively, the Big Bang may be taken as the reference point, defining cosmic time as the age of the universe, also known as the time since the Big Bang. Current physical cosmology estimates the present age as 13.8 billion years. This zero point doesn't necessarily have to correspond to a physical event (such as the cosmological singularity); rather, it refers to the point at which the scale factor would vanish for a standard cosmological model such as ΛCDM. For instance, in the case of inflation, i.e. a non-standard cosmology, the hypothetical moment of the big bang is still determined using the benchmark cosmological models, and may coincide with the end of the inflationary epoch. For technical purposes, concepts such as the average temperature of the universe (in units of eV) or the particle horizon are used when the early universe is the objective of a study since understanding the interaction among particles is more relevant than |
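As an illustration, the age defined this way can be computed for a flat ΛCDM model by numerically integrating da / (a·H(a)) from a = 0 to the present a = 1. The parameter values below (H0 ≈ 67.7 km/s/Mpc, Ωm ≈ 0.31, ΩΛ ≈ 0.69) are approximate Planck-era figures chosen only for this sketch:

```python
import math

def age_of_universe_gyr(h0=67.7, omega_m=0.31, omega_l=0.69, steps=100_000):
    """Cosmic time since the Big Bang for a flat LCDM model:
    t0 = integral from a=0 to a=1 of da / (a * H(a)),
    with H(a) = H0 * sqrt(omega_m * a**-3 + omega_l)."""
    h0_per_s = h0 * 1000.0 / 3.0857e22          # km/s/Mpc -> 1/s
    total = 0.0
    da = 1.0 / steps
    for i in range(steps):
        a = (i + 0.5) * da                      # midpoint rule avoids a = 0
        total += da / (a * math.sqrt(omega_m * a**-3 + omega_l))
    seconds = total / h0_per_s
    return seconds / (3.156e7 * 1e9)            # seconds -> Gyr
```

With these assumed parameters the integral lands close to the 13.8-billion-year figure quoted above.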
https://en.wikipedia.org/wiki/Little%20pocket%20mouse | The little pocket mouse (Perognathus longimembris) is a species of rodent in the family Heteromyidae. It is found in Baja California and Sonora in Mexico and in Arizona, California, Idaho, Nevada, Oregon and Utah in the United States. Its natural habitat is subtropical or tropical dry lowland grassland. It is a common species and faces no particular threats and the IUCN has listed it as being of "least concern".
Five mice of this species travelled to and orbited the Moon 75 times in an experiment on board the Apollo 17 command module in December 1972. Four of the mice survived the trip. Six other little pocket mice were sent into orbit with Skylab 3 in July 1973, though these animals died only 30 hours into the mission due to a power failure.
Behavior
This small mouse, with a long tail, inhabits arid and semiarid habitats with grasses, sagebrush and other scrubby vegetation. It is nocturnal and has a short period of activity for the first two hours after sunset, and then sporadic activity through the rest of the night. It sleeps in winter and is only active between April and November with numbers building up rapidly in the spring, peaking in June and July. It forages for seeds, plant material and small invertebrates which it carries back to its burrow in its cheek pouches.
Status
The little pocket mouse is common within most of its range although it is scarce in Baja California. The population appears to be steady and no particular threats have been identified for this species so the International Union for Conservation of Nature has assessed it as being of "least concern".
See also
Pacific pocket mouse (Perognathus longimembris pacificus) — an endangered subspecies from coastal Southern California. |
https://en.wikipedia.org/wiki/Comparison%20of%20Prolog%20implementations | The following Comparison of Prolog implementations provides a reference for the relative feature sets and performance of different implementations of the Prolog computer programming language.
Portability
There are Prolog implementations that are radically different, with different syntax and different semantics (e.g. Visual Prolog) and sub-communities have developed around different implementations.
Code that strictly conforms to the ISO-Prolog core language is portable across ISO-compliant implementations. However, the ISO standard for modules was never accepted by most Prolog implementors.
Factors that can adversely affect portability include: use of bounded vs. unbounded integer arithmetic, additional types such as string objects, advanced numeric types (rationals, complex), feature extensions such as Unicode, threads, and tabling, use of libraries unavailable in other implementations, and differences in library organisation:
Currently, the way predicates are spread over the libraries and system built-ins differs enormously. [...] Fortunately, there are only few cases where we find predicates with the same name but different semantics (e.g. delete/3)
Main features
Operating system and web-related features
Static analysis
Optimizations
Release
Benchmarks
Benchmarking issues: Odd Prolog benchmarking, Performance differences.
Benchmarking software: older, Dobry, Aquarius benchmark suite, (Bothe, 1990), (Demoen et al. 2001), benchmark descriptions
Benchmarking results: B-Prolog, SICStus, XSB, SICStus vs Yap vs
Benchmarking results: Survey of java prolog engines by Michael Zeising
Benchmarking results: OpenRuleBench yearly open-source benchmark of rule engines
Notes |
https://en.wikipedia.org/wiki/Tuotu | Tuotu (脫兔, "rabbit" in Chinese) is, like Xunlei's Thunder, a software application that provides peer-to-peer file sharing and download acceleration. It is gaining popularity in mainland China and Malaysia and supports the BitTorrent, ED2K, KAD, HTTP, FTP, MMS, and RTSP file-transfer protocols. |
https://en.wikipedia.org/wiki/Transferable%20utility | Transferable utility is a concept in cooperative game theory and in economics. Utility is transferable if one player can losslessly transfer part of its utility to another player. Such transfers are possible if the players have a common currency that is valued equally by all. Note that being able to transfer cash payoffs does not imply that utility is transferable: wealthy and poor players may derive a different utility from the same amount of money.
Transferable utility is assumed in many cooperative games, where the payoffs are not given for individual players, but only for coalitions. In this case the assumption implies that irrespective of the division of the coalitional payoff, members of the coalition enjoy the same total utility. |
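One standard solution concept for such TU coalition games is the Shapley value, which divides the coalitional payoff according to average marginal contributions. The sketch below uses a hypothetical 3-player game (values chosen for illustration) and relies on the TU assumption that any division of v(S) is feasible:

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value of a TU cooperative game: each player's average
    marginal contribution over all orders in which players could join.
    v maps frozensets of players to the coalition's transferable payoff."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    n_fact = 1
    for k in range(2, len(players) + 1):
        n_fact *= k
    return {p: phi[p] / n_fact for p in players}

# Hypothetical symmetric game: any pair earns 60, the grand coalition 120.
v = {frozenset(): 0, frozenset("a"): 0, frozenset("b"): 0, frozenset("c"): 0,
     frozenset("ab"): 60, frozenset("ac"): 60, frozenset("bc"): 60,
     frozenset("abc"): 120}
```

By efficiency, the three shares sum exactly to v of the grand coalition (120 here), and by symmetry each player receives 40.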
https://en.wikipedia.org/wiki/KeY | The KeY tool is used in formal verification of Java programs. It accepts specifications written in the Java Modeling Language (JML) attached to Java source files. These are transformed into theorems of dynamic logic and then compared against program semantics that are likewise defined in terms of dynamic logic. A notable strength of KeY is that it supports both interactive (i.e. by hand) and fully automated correctness proofs. Failed proof attempts can be used for more efficient debugging or verification-based testing. There have been several extensions to KeY that apply it to the verification of C programs or hybrid systems. KeY is jointly developed by Karlsruhe Institute of Technology, Germany; Technische Universität Darmstadt, Germany; and Chalmers University of Technology in Gothenburg, Sweden, and is licensed under the GPL.
Overview
The usual user input to KeY consists of a Java source file with annotations in JML. Both are translated to KeY's internal representation, dynamic logic. From the given specifications, several proof obligations arise which are to be discharged, i.e. a proof has to be found. To this end, the program is symbolically executed, with the resulting changes to program variables stored in so-called updates. Once the program has been processed completely, there remains a first-order logic proof obligation. At the heart of the KeY system lies a first-order theorem prover based on sequent calculus, which is used to close the proof. Inference rules are captured in so-called taclets, which are written in a simple dedicated language for describing changes to a sequent.
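The idea of accumulating "updates" during symbolic execution can be illustrated with a toy analogue (this is not KeY's actual machinery, just a sketch of the concept): each assignment is applied to a symbolic state that maps variables to expressions over the initial program state.

```python
import re

def symbolic_execute(assignments, state=None):
    """Toy analogue of KeY's updates: symbolically execute a list of
    (variable, expression) assignments, keeping each variable's value
    as an expression over the initial values of the program variables."""
    state = dict(state or {})
    for var, expr in assignments:
        # substitute current symbolic values into the right-hand side
        new_expr = re.sub(
            r"[A-Za-z_]\w*",
            lambda m: f"({state[m.group(0)]})" if m.group(0) in state else m.group(0),
            expr,
        )
        state[var] = new_expr
    return state
```

For example, executing `x := x + 1; y := x * 2` yields the final symbolic state `{"x": "x + 1", "y": "(x + 1) * 2"}`, from which a first-order proof obligation about the post-state could be phrased purely in terms of the initial `x`.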
Java Card DL
The theoretical foundation of KeY is a formal logic called Java Card DL. DL stands for Dynamic Logic. It is a version of a first-order dynamic logic tailored to Java Card programs. As such, it for example allows statements (formulas) like φ → [α]ψ, which intuitively says that the post-condition ψ must hold in all program states reachable by executing the Java Card program α in any stat |
https://en.wikipedia.org/wiki/Formation%20evaluation%20gamma%20ray | The formation evaluation gamma ray log is a record of the variation with depth of the natural radioactivity of earth materials in a wellbore. Measurements of the natural emission of gamma rays in oil and gas wells are useful because shales and sandstones typically have different gamma ray levels. Shales and clays are responsible for most natural radioactivity, so the gamma ray log is often a good indicator of such rocks. In addition, the log is also used for correlation between wells, for depth correlation between open and cased holes, and for depth correlation between logging runs.
Physics
Natural radioactivity is the spontaneous decay of the atoms of certain isotopes into other isotopes. If the resultant isotope is not stable, it undergoes further decay until a stable isotope forms. The decay process is usually accompanied by emissions of alpha, beta, and gamma radiation. Natural gamma ray radiation is one form of spontaneous radiation emitted by unstable nuclei. Gamma (γ) radiation may be considered either as an electromagnetic wave similar to visible light or X-rays, or as a particle, the photon. Gamma rays are electromagnetic radiation emitted from an atomic nucleus during radioactive decay, with wavelengths in the range of 10⁻⁹ to 10⁻¹¹ cm.
Natural radioactivity in rocks
Isotopes naturally found on earth are usually those that are stable or have a decay time larger than, or at least a significant fraction of, the age of the earth (about 5 × 10⁹ years). Isotopes with shorter half-lives mainly exist as decay products of longer-lived isotopes or, as with ¹⁴C, are produced by irradiation of the upper atmosphere.
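The survival of an isotope over geological time follows directly from the exponential decay law N(t)/N0 = 2^(−t/T½). The short sketch below applies it to the primordial potassium-40 discussed next (values are the approximate half-life and earth age used for illustration):

```python
def fraction_remaining(t_years, half_life_years):
    """Fraction of a radioisotope's original atoms remaining after
    time t, from the decay law N(t)/N0 = 2 ** (-t / T_half)."""
    return 2.0 ** (-t_years / half_life_years)

# Roughly 9% of primordial 40K (half-life ~1.3e9 yr) survives
# after ~4.5e9 yr, which is why it still contributes measurable
# natural gamma radiation today.
surviving_k40 = fraction_remaining(4.5e9, 1.3e9)
```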
Radioisotopes with a sufficiently long half-life, and whose decay produces an appreciable amount of gamma rays, are:
Potassium ⁴⁰K with half-life of 1.3 × 10⁹ years, which emits 0 α, 1 β, and 1 γ-ray
Thorium ²³²Th with half-life of 1.4 × 10¹⁰ years, which emits 7 α, 5 β, and numerous γ-rays with different energies
Uranium ²³⁸U with half-life of 4.4 × 10⁹ years, which |
https://en.wikipedia.org/wiki/Hermite%20constant | In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be.
The constant γn for integers n > 0 is defined as follows. For a lattice L in Euclidean space Rn with unit covolume, i.e. vol(Rn/L) = 1, let λ1(L) denote the least length of a nonzero element of L. Then √γn is the maximum of λ1(L) over all such lattices L.
The square root in the definition of the Hermite constant is a matter of historical convention.
Alternatively, the Hermite constant γn can be defined as the square of the maximal systole of a flat n-dimensional torus of unit volume.
Example
The Hermite constant is known in dimensions 1–8 and 24.
For n = 2, one has γ2 = 2/√3 ≈ 1.1547. This value is attained by the hexagonal lattice of the Eisenstein integers.
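This value can be checked numerically: scale the hexagonal lattice so its covolume (the determinant of its basis) is 1, then brute-force the shortest nonzero vector over small integer combinations. Its squared length should equal γ2 = 2/√3.

```python
import math

def min_nonzero_norm_sq(b1, b2, search=10):
    """Squared length of a shortest nonzero lattice vector (lambda_1^2)
    for the lattice spanned by b1, b2, by brute force over small
    integer coefficients (sufficient for a reduced 2D basis)."""
    best = float("inf")
    for m in range(-search, search + 1):
        for n in range(-search, search + 1):
            if m == 0 and n == 0:
                continue
            x = m * b1[0] + n * b2[0]
            y = m * b1[1] + n * b2[1]
            best = min(best, x * x + y * y)
    return best

# Hexagonal lattice scaled so that det(b1, b2) = 1 (unit covolume).
s = (2.0 / math.sqrt(3.0)) ** 0.5
b1 = (s, 0.0)
b2 = (s / 2.0, s * math.sqrt(3.0) / 2.0)
```

Here det = s² · √3/2 = 1, and the shortest vectors (the six basis-length vectors of the hexagonal lattice) have squared length s² = 2/√3, matching γ2.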
Estimates
It is known that γn ≤ (4/3)^((n−1)/2) (Hermite's inequality).
A stronger estimate due to Hans Frederick Blichfeldt is γn ≤ (2/π) Γ(2 + n/2)^(2/n),
where Γ is the gamma function.
See also
Loewner's torus inequality |
https://en.wikipedia.org/wiki/Weak%20value | In quantum mechanics (and computation), a weak value is a quantity related to a shift of a measuring device's pointer when usually there is pre- and postselection. It should not be confused with a weak measurement, which is often defined in conjunction. The weak value was first defined by Yakir Aharonov, David Albert, and Lev Vaidman, published in Physical Review Letters in 1988, and is related to the two-state vector formalism. There is also a way to obtain weak values without postselection.
Definition and Derivation
There are many excellent review articles on weak values (see e.g. ); here we briefly cover the basics.
Definition
We will denote the initial state of a system as |ψi⟩, while the final state of the system is denoted as |ψf⟩. We will refer to the initial and final states of the system as the pre- and post-selected quantum mechanical states. With respect to these states, the weak value of the observable A is defined as:
Aw = ⟨ψf|A|ψi⟩ / ⟨ψf|ψi⟩
Notice that if |ψf⟩ = |ψi⟩ then the weak value is equal to the usual expected value in the initial (equivalently, final) state. In general the weak value is a complex number. The weak value of the observable becomes large when the post-selected state |ψf⟩ approaches being orthogonal to the pre-selected state |ψi⟩, i.e. ⟨ψf|ψi⟩ → 0. If Aw is larger than the largest eigenvalue of A or smaller than the smallest eigenvalue of A, the weak value is said to be anomalous.
As an example consider a spin-1/2 particle. Take A to be the Pauli z operator σz with eigenvalues ±1. Using the initial state
and the final state
we can calculate the weak value to be
For the weak value is anomalous.
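An anomalous weak value of this kind can be computed directly from the definition Aw = ⟨ψf|A|ψi⟩ / ⟨ψf|ψi⟩. The pre- and post-selected states below are arbitrary illustrative choices (not taken from any particular experiment), picked to be nearly orthogonal so that the weak value of σz falls far outside its eigenvalue range [−1, 1]:

```python
import math

def weak_value(post, A, pre):
    """Weak value A_w = <post|A|pre> / <post|pre> for real 2x2 data."""
    A_pre = [A[0][0] * pre[0] + A[0][1] * pre[1],
             A[1][0] * pre[0] + A[1][1] * pre[1]]
    num = post[0] * A_pre[0] + post[1] * A_pre[1]
    den = post[0] * pre[0] + post[1] * pre[1]
    return num / den

sigma_z = [[1.0, 0.0], [0.0, -1.0]]
a, eps = 0.4, 0.05                               # illustrative angles
pre = [math.cos(a), math.sin(a)]                 # pre-selected state
post = [math.cos(a + math.pi / 2 - eps),         # nearly orthogonal
        math.sin(a + math.pi / 2 - eps)]
w = weak_value(post, sigma_z, pre)
```

Because ⟨ψf|ψi⟩ = sin(eps) is small, |w| greatly exceeds 1, the largest eigenvalue of σz, so this weak value is anomalous.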
Derivation
Here we follow the presentation given by Duck, Stevenson, and Sudarshan (with some notational updates from Kofman et al.), which makes explicit when the approximations used to derive the weak value are valid.
Consider a quantum system that you want to measure by coupling to it an ancillary (also quantum) measuring device. The observable to be measured on the system is A. The system and ancilla |
https://en.wikipedia.org/wiki/Louis-Jeantet%20Prize%20for%20Medicine | Established in 1986, the Louis-Jeantet Prizes are funded by the Fondation Louis-Jeantet and awarded each year to experienced researchers who have distinguished themselves in the field of biomedical research in one of the member states of the Council of Europe. They are not intended solely as the recognition of work that has been completed, but also to encourage the continuation of innovative research projects. The prizes are awarded to fully active researchers whose scientific efforts are focused on biomedical research. When the research being recognised is close to practical applications for combating illnesses affecting humankind, one of the Louis-Jeantet Prizes converts into a Jeantet-Collen Prize for Translational Medicine, supported by generous donations from the Désiré Collen Stichting.
The particular research domains in which prizes have been awarded are physiology, biophysics, structural biology, biochemistry, cellular and molecular biology, developmental biology and genetics; prize-winners have worked in immunology, virology, bacteriology, neurobiology, clinical epidemiology and structural biochemistry.
The Prize is endowed with 1.4 million Swiss francs. The sum available to each prize-winner amounts to 500'000 francs, of which 450'000 francs are to be used for financing ongoing research and 50'000 francs are given to the researcher personally.
Prize winners
List of winners:
See also
Latsis Foundation
List of medicine awards
Louis-Jeantet Foundation
Marcel Benoist Prize
Prizes named after people
Notes and references
External links
Official website
Medicine awards
Awards established in 1986
Swiss awards |
https://en.wikipedia.org/wiki/Gowdy%20solution | Gowdy universes or, alternatively, Gowdy solutions of Einstein's equations are simple model spacetimes in general relativity which represent an expanding universe filled with a regular pattern of gravitational waves.
External links
– a description of the different types of Gowdy universes suitable for a general audience
General relativity |
https://en.wikipedia.org/wiki/Lusitropy | Lusitropy, or lucitropy, is the rate of myocardial relaxation. An increase in the cytosolic calcium of cardiomyocytes via increased uptake leads to increased myocardial contractility (a positive inotropic effect), but the rate of myocardial relaxation, or lusitropy, decreases. This should not be confused, however, with catecholamine-induced calcium uptake into the sarcoplasmic reticulum, which increases lusitropy.
Positive
Increased catecholamine levels promote positive lusitropy, enabling the heart to relax more rapidly. This effect is mediated by the phosphorylation of phospholamban and troponin I via a cAMP-dependent pathway. Catecholamine-induced calcium uptake into the sarcoplasmic reticulum increases both inotropy and lusitropy. In other words, a quicker reduction in cytosolic calcium levels (because the calcium enters the sarcoplasmic reticulum) causes an increased rate of relaxation (positive lusitropy); it also makes a greater amount of calcium available for release back into the cytosol when the next action potential arrives, thereby increasing inotropy as well. By contrast, calcium uptake from the extracellular fluid into the cytosol without catecholamine stimulation simply results in a sustained rise in cytosolic calcium concentration. This serves only to increase inotropy and doesn't allow complete relaxation of the cardiac myocytes between contractions, decreasing lusitropy.
Negative
Relaxation of the heart is negatively impacted by the following factors:
Calcium overload – too much intracellular calcium
Reduced rate of calcium removal from the myocyte through its pumps:
a. Plasma membrane calcium ATPase (Ca ATPase) — this primary active transporter pumps calcium out of the myocyte between beats
b. Sodium-calcium (Na/Ca) exchanger — this secondary active transporter pumps calcium out of the cell between beats
Impaired Sarco-Endoplasmic Reticulum Calcium ATPase (SERCA) |
https://en.wikipedia.org/wiki/Contention-based%20protocol | A contention-based protocol (CBP) is a communications protocol for operating wireless telecommunication equipment that allows many users to use the same radio channel without pre-coordination. The "listen before talk" operating procedure in IEEE 802.11 is the best-known contention-based protocol.
Section 90.7 of Part 90 of the United States Federal Communications Commission rules define CBP as:
A protocol that allows multiple users to share the same spectrum by defining the events that must occur when two or more transmitters attempt to simultaneously access the same channel and establishing rules by which a transmitter provides reasonable opportunities for other transmitters to operate. Such a protocol may consist of procedures for initiating new transmissions, procedures for determining the state of the channel (available or unavailable), and procedures for managing retransmissions in the event of a busy channel.
This definition was added as part of the Rules for Wireless Broadband Services in the
3650-3700 MHz Band. |
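The ingredients named in the FCC definition — initiating transmissions, sensing the channel state, and managing retransmissions after a collision — can be sketched as a toy time-slotted simulation. This is a simplified illustration, not the actual IEEE 802.11 DCF algorithm; station names, slot model, and backoff window are all invented for the example:

```python
import random

def transmit_all(stations, max_exp=5, seed=7, max_slots=10_000):
    """Toy slotted contention: a station with zero backoff attempts the
    channel; a sole transmitter succeeds, while simultaneous attempts
    collide and each colliding station defers for a random backoff."""
    rng = random.Random(seed)
    backoff = {s: 0 for s in stations}
    done = set()
    for slot in range(max_slots):
        if len(done) == len(stations):
            return slot, done                    # all frames delivered
        ready = [s for s in stations if s not in done and backoff[s] == 0]
        for s in backoff:                        # count down deferrals
            if backoff[s] > 0:
                backoff[s] -= 1
        if len(ready) == 1:                      # sole transmitter succeeds
            done.add(ready[0])
        elif len(ready) > 1:                     # collision: random backoff
            for s in ready:
                backoff[s] = rng.randint(1, 2 ** max_exp)
    return max_slots, done
```

Running `transmit_all(["a", "b", "c"])` shows the key CBP property: after an initial collision, randomized deferral gives each transmitter a reasonable opportunity to access the channel, and every frame is eventually delivered.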
https://en.wikipedia.org/wiki/SK3 | SK3 (small conductance calcium-activated potassium channel 3) also known as KCa2.3 is a protein that in humans is encoded by the KCNN3 gene.
SK3 is a small-conductance calcium-activated potassium channel partly responsible for the calcium-dependent afterhyperpolarisation current (IAHP). It belongs to a family of channels known as small-conductance potassium channels, which consists of three members – SK1, SK2 and SK3 (encoded by the KCNN1, KCNN2 and KCNN3 genes respectively) – which share 60–70% sequence identity. These channels have acquired a number of alternative names; however, NC-IUPHAR has recently achieved consensus on the preferred names KCa2.1 (SK1), KCa2.2 (SK2) and KCa2.3 (SK3). Small-conductance channels are responsible for the medium and possibly the slow components of the IAHP.
Structure
KCa2.3 contains 6 transmembrane domains, a pore-forming region, and intracellular N- and C- termini and is readily blocked by apamin. The gene for KCa2.3, KCNN3, is located on chromosome 1q21.
Expression
KCa2.3 is found in the central nervous system (CNS), muscle, liver, pituitary, prostate, kidney, pancreas and vascular endothelium tissues. KCa2.3 is most abundant in regions of the brain, but has also been found to be expressed in significant levels in many other peripheral tissues, particularly those rich in smooth muscle, including the rectum, corpus cavernosum, colon, small intestine and myometrium.
The expression level of KCNN3 is dependent on hormonal regulation, particularly by the sex hormone estrogen. Estrogen not only enhances transcription of the KCNN3 gene, but also affects the activity of KCa2.3 channels on the cell membrane. In GABAergic preoptic area neurons, estrogen enhanced the ability of α1 adrenergic receptors to inhibit KCa2.3 activity, increasing cell excitability. Links between hormonal regulation of sex organ function and KCa2.3 expression have been established. The expression of KCa2.3 in the corpus cavernosum in patients undergoing estrogen treat |