| source | text |
|---|---|
https://en.wikipedia.org/wiki/Complication%20%28medicine%29 | A complication in medicine, or medical complication, is an unfavorable result of a disease, health condition, or treatment. Complications may adversely affect the prognosis, or outcome, of a disease. Complications generally involve a worsening in the severity of the disease or the development of new signs, symptoms, or pathological changes that may become widespread throughout the body and affect other organ systems. Thus, complications may lead to the development of new diseases resulting from previously existing diseases. Complications may also arise as a result of various treatments.
The development of complications depends on a number of factors, including the degree of vulnerability, susceptibility, age, health status, and immune system condition. Knowledge of the most common and severe complications of a disease, procedure, or treatment allows for prevention and preparation for treatment if they should occur.
Complications are not to be confused with sequelae, which are residual effects that occur after the acute (initial, most severe) phase of an illness or injury. Sequelae can appear early in the development of disease or weeks to months later and are a result of the initial injury or illness. For example, a scar resulting from a burn or dysphagia resulting from a stroke would be considered sequelae. In addition, complications should not be confused with comorbidities, which are diseases that occur concurrently but have no causative association. Complications are similar to adverse effects, but the latter term is typically used in pharmacological contexts or when the negative consequence is expected or common.
Common illnesses and complications
Iatrogenic complications
Medical errors can fall into various categories listed below:
Medication: Medication medical errors include wrong prescription, impaired delivery, or improper adherence. The process of prescribing medication is a complex process that relies on the accurate transfer of information throug |
https://en.wikipedia.org/wiki/Stop%20sign | A stop sign is a traffic sign designed to notify drivers that they must come to a complete stop and make sure the intersection is safely clear of vehicles and pedestrians before continuing past the sign. In many countries, the sign is a red octagon with the word STOP, in either English or the national language of that particular country, displayed in white or yellow. The Vienna Convention on Road Signs and Signals also allows an alternative version: a red circle with a red inverted triangle with either a white or yellow background, and a black or dark blue STOP. Some countries may also use other types, such as Japan's inverted red triangle stop sign. Particular regulations regarding appearance, installation, and compliance with the signs vary by jurisdiction.
Design and configuration
The 1968 Vienna Convention on Road Signs and Signals allows for two types of stop sign as well as several acceptable variants. Sign B2a is a red octagon with a white legend. The European Annex to the convention also allows the background to be "light yellow". Sign B2b is a red circle with a red inverted triangle with either a white or yellow background, and a black or dark blue legend. The Convention allows for the word "STOP" to be in either English or the national language of the particular country. The finalized version by the United Nations Economic and Social Council's Conference on Road Traffic in 1968 (and in force in 1978) proposed standard stop sign diameters of 600, 900 or 1200 mm (24", 36" or 48").
The United Kingdom and New Zealand stop signs are 750, 900 or 1200 mm (about 30, 36 or 48 inches), according to sign location and traffic speeds.
In the United States, stop signs are across opposite flats of the red octagon, with a -inch (2 cm) white border. The white uppercase legend is tall. Larger signs of with legend and border are used on multi-lane expressways. Regulatory provisions exist for extra-large signs with legend and -inch border for use where sign |
https://en.wikipedia.org/wiki/Certified%20wireless%20network%20expert | The Certified Wireless Network Expert (CWNE) is the highest level certification in the CWNP program started in 2001 by Planet3 Wireless. It certifies the ability to design, install, secure, optimize and troubleshoot IEEE 802.11 wireless networks.
Certification track
The CWNE credential is the final step in a four-level certification process. It validates the applicant's real-world application of the principles covered by the other CWNP certification exams, including wireless protocol analysis, security, advanced design, spectrum analysis, wired network administration, and troubleshooting.
CWNE Requirements
The requirements for earning the CWNE certification changed on October 1, 2010, when the CWNE exam (PW0-300) was retired. The new requirements for the CWNE certification are:
Valid and current CWSP, CWAP, CWISA, and CWDP certifications (requires CWNA).
Three (3) years of documented enterprise Wi-Fi implementation experience.
Three (3) professional endorsements.
Two (2) other current, valid professional networking certifications.
Documentation of three (3) enterprise Wi-Fi projects in which you participated or led, in the form of 500-word essays.
Recertification
Like most other CWNP certifications, the CWNE certification is valid for three (3) years. The certification may be renewed by reporting at least sixty (60) hours of approved Continuing Education (CE).
Passing the most current version of either the CWSP, CWAP, or CWDP exam, which was the only recertification requirement prior to the change, is now worth twenty (20) CE hours.
See also
Professional certification (Computer technology) |
https://en.wikipedia.org/wiki/Wojtek%20%28bear%29 | Wojtek (1942 – 2 December 1963; in English, sometimes phonetically spelled Voytek and pronounced as such) was a Syrian brown bear (Ursus arctos syriacus) bought, as a young cub, at a railway station in Hamadan, Iran, by Polish II Corps soldiers who had been evacuated from the Soviet Union. In order to provide for his rations and transportation, he was eventually enlisted officially as a soldier with the rank of private, and was subsequently promoted to corporal.
He accompanied the bulk of the II Corps to Italy, serving with the 22nd Artillery Supply Company. During the Battle of Monte Cassino, in Italy in 1944, Wojtek helped move crates of ammunition and became a celebrity with visiting Allied generals and statesmen. After the war he was mustered out of the Polish Army and lived out the rest of his life at the Edinburgh Zoo in Scotland.
History
In the spring of 1942 the newly formed Anders' Army left the Soviet Union for Iran, accompanied by thousands of Polish civilians who had been deported to the Soviet Union following the 1939 Soviet invasion of eastern Poland. At a railroad station in Hamadan, Iran, on 8 April 1942, Polish soldiers encountered a young Iranian boy who had found a bear cub whose mother had been shot by hunters. One of the civilian refugees in their midst, eighteen-year-old Irena (Inka) Bokiewicz, the great-niece of General Bolesław Wieniawa-Długoszowski, was very taken with the cub. She prompted Lieutenant Anatol Tarnowiecki to buy the young bear, which spent the next three months in a Polish refugee camp established near Tehran, principally under Irena's care. In August, the bear was donated to the 2nd Transport Company, which later became the 22nd Artillery Supply Company, and he was named Wojtek by the soldiers. The name Wojtek is the nickname, diminutive form, or hypocorism of "Wojciech" (Happy Warrior), an old Slavic name still common in Poland.
Wojtek initially had problems swallowing and was fed condensed milk from an old vodka bottl |
https://en.wikipedia.org/wiki/Cinnamyl%20acetate | Cinnamyl acetate (3-phenylprop-2-enyl acetate) is a chemical compound of the cinnamyl ester family, in which the variable R group is substituted by a methyl group. As a result of the non-aromatic carbon-carbon double bond, cinnamyl acetate can exist in a Z and an E configuration:
Cinnamyl acetate naturally occurs in fresh bark of cinnamon (Cinnamomum zeylanicum Blume and other Cinnamomum species), with concentrations of 2,800–51,000 ppm.
Cinnamyl acetate has a sweet floral-fruity fragrance and is used as a flavouring ester in, for example, bread and animal feed. Moreover, it is used in several cosmetics and toiletries, but also in non-cosmetic products such as detergents.
Legislation and control
Cinnamyl acetate, used in fragrances and as a flavour ingredient, has been discussed by several institutions. In 1965, the compound was designated 'Generally Recognized as Safe' as a flavor ingredient by the Flavor and Extract Manufacturers Association (FEMA). The association determined the average maximum use levels in several products that were considered to be safe:
The European Parliament registered cinnamyl acetate as both a flavouring substance and a cosmetic compound in 1996. The Joint (FAO/WHO) Expert Committee on Food Additives (JECFA) stated in 2000 that "the substance does not present a safety concern at current levels of intake when used as a flavouring agent". In 2009, the EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (CEF) concluded that cinnamyl acetate does not give rise to safety concerns when used as a flavour ingredient in food. Cinnamyl acetate is also permitted by the U.S. Food and Drug Administration for use as a flavouring agent in food if the minimum quantity needed for its effect is used.
Production and intake
Estimates of the average annual production and daily intake of cinnamyl acetate as flavouring agent are reported by the WHO. According to this report, the annual volume of production in Europe is 1498 kg, |
https://en.wikipedia.org/wiki/List%20of%20Intel%20graphics%20processing%20units | This article contains information about Intel's GPUs (see Intel Graphics Technology) and motherboard graphics chipsets in table form. In 1982, Intel licensed the NEC μPD7220 and announced it as the Intel 82720 Graphics Display Controller.
First generation
Intel's first generation GPUs:
Second generation
Intel marketed its second generation using the brand Extreme Graphics. These chips added support for texture combiners allowing support for OpenGL 1.3.
Third generation
Intel's first DirectX 9 GPUs with hardware Pixel Shader 2.0 support.
Gen4
The last generation of motherboard integrated graphics. Full hardware DirectX 10 support starting with GMA X3500.
Each EU has a 128-bit wide FPU that natively executes four 32-bit operations per clock cycle.
Gen5
Integrated graphics chip moved from motherboard into the processor.
Improved gaming performance
Can access CPU's cache
Each EU has a 128-bit wide FPU that natively executes eight 16-bit or four 32-bit operations per clock cycle.
Hierarchical-Z compression and fast Z clear
Gen6
Each EU has a 128-bit wide FPU that natively executes eight 16-bit or four 32-bit operations per clock cycle.
Double peak performance per clock cycle compared to previous generation due to fused multiply-add instruction.
The entire GPU shares a sampler and an ROP.
Gen7
Ratio of FP32 ALUs : EUs : Subslices (table not reproduced).
Each EU contains 2 × 128-bit FPUs and has double the peak performance per clock cycle compared to the previous generation. One FPU supports FP32 and FP64, and the other supports only FP32. Since FP64 instructions have a throughput of one result every 2 cycles, FP64 FLOPS are a quarter of FP32 FLOPS. Only one of the FPUs supports 32-bit integer instructions.
Each Subslice contains 6 or 8 (or 10 in Haswell GPUs) EUs and a sampler, and has 64 KB shared memory.
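As a sketch of what these figures imply for theoretical peak throughput: two 128-bit FPUs per EU give eight FP32 lanes, and a fused multiply-add counts as two floating-point operations per lane per clock. The 16-EU, 1.15 GHz example part below is an assumption chosen for illustration, not a value taken from this article.

```python
# Peak FP32 throughput implied by the Gen7 description above.
def gen7_peak_fp32_gflops(eus: int, clock_ghz: float) -> float:
    fpus_per_eu = 2
    lanes_per_fpu = 128 // 32   # four 32-bit lanes in a 128-bit FPU
    flops_per_lane = 2          # fused multiply-add = 2 ops per lane per clock
    return eus * fpus_per_eu * lanes_per_fpu * flops_per_lane * clock_ghz

# Hypothetical 16-EU part at a 1.15 GHz boost clock:
# 16 EUs x 16 FLOP/clock x 1.15 GHz ~ 294.4 GFLOPS peak FP32.
print(gen7_peak_fp32_gflops(16, 1.15))
```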
Gen8
Ratio of FP32 ALUs : EUs : Subslices (table not reproduced).
Each EU contains 2 x 128-bit FPUs. One supports 32-bit and 64-bit integer, FP16, FP32, FP64, and transcendental math functions, and the other support |
https://en.wikipedia.org/wiki/NHL%20Network%20%281975%20TV%20program%29 | The NHL Network was an American television syndication package that broadcast National Hockey League games from the through seasons. The NHL Network was distributed by the Hughes Television Network.
Conception
After being dropped by NBC after the season, the NHL had no national television contract in the United States. In response to this, the league put together a network of independent stations covering approximately 55% of the country.
Coverage summary
Games typically aired on Monday nights (beginning at 8 p.m. ET) or Saturday afternoons. The package was offered to local stations with no rights fee. Profits would be derived from the advertising, which was about evenly split between the network and the local station. The Monday night games were often billed as The NHL Game of the Week. Viewers in New York City, Buffalo, St. Louis, Pittsburgh, Detroit and Los Angeles got the Game of the Week on a different channel than their local team's games. Therefore, whenever a team had a “home” game, the NHL Network aired the home team's broadcast rather than their own.
Initially, the Monday night package was marketed to ABC affiliates, the idea being that ABC carried Monday-night NFL football in the fall and (starting in May ) Monday-night Major League Baseball in the spring and summer; as such, stations would want hockey to create a year-round Monday night sports block. But very few ABC stations picked up the package.
During the season, the NHL Network showed selected games from the NHL Super Series (the big one in that package was Red Army at Philadelphia, but the package did not include Red Army at Montreal on New Year's Eve 1975, which was seen only on CBC) as well as some playoff games. During the season, the NHL Network showed 12 regular season games on Monday nights plus the All-Star Game. By (the final season of the NHL Network's existence), there were 18 Monday night games and 12 Saturday afternoon games covered.
The 1979 Challenge Cup replaced the All-St |
https://en.wikipedia.org/wiki/Coastal%20zone%20color%20scanner | The coastal zone color scanner (CZCS) was a multi-channel scanning radiometer aboard the Nimbus 7 satellite, predominantly designed for water remote sensing. Nimbus 7 was launched 24 October 1978, and CZCS became operational on 2 November 1978. It was only designed to operate for one year (as a proof-of-concept), but in fact remained in service until 22 June 1986. Its operation on board the Nimbus 7 was limited to alternate days as it shared its power with the passive Scanning Multichannel Microwave Radiometer.
CZCS measured reflected solar energy in six channels, at a resolution of 800 meters. These measurements were used to map chlorophyll concentration in water, sediment distribution, salinity, and the temperature of coastal waters and ocean currents.
CZCS laid the foundations for subsequent satellite ocean color sensors, and formed a cornerstone for international efforts to understand the ocean's role in the carbon cycle.
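The chlorophyll mapping mentioned above was done with empirical band-ratio algorithms: high chlorophyll absorbs blue light, so a low blue/green radiance ratio maps to a high chlorophyll estimate. The sketch below shows the general form of such an algorithm; the coefficients `a0` and `a1` are placeholders for illustration, not the operational CZCS calibration.

```python
import math

def chlorophyll_from_ratio(blue_radiance: float, green_radiance: float,
                           a0: float = 0.3, a1: float = -2.9) -> float:
    """Estimate chlorophyll (mg/m^3) from a blue/green band ratio.

    Generic log-linear band-ratio form used in ocean color work;
    coefficients here are illustrative placeholders.
    """
    ratio = blue_radiance / green_radiance
    return 10 ** (a0 + a1 * math.log10(ratio))

# Clear, blue water (high blue/green ratio) yields a lower chlorophyll
# estimate than greener water:
print(chlorophyll_from_ratio(5.0, 1.0), chlorophyll_from_ratio(1.0, 1.0))
```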
Ocean color
The most significant product of the CZCS was its collection of so-called ocean color imagery. The "color" of the ocean in CZCS images comes from substances in the water, particularly phytoplankton (microscopic, free-floating photosynthetic organisms), as well as inorganic particulates.
Because ocean color data is related to the presence of phytoplankton and particulates, it can be used to calculate the concentrations of material in surface waters and the level of biological activity; as phytoplankton concentration increases, ocean color shifts from blue to green (note that most CZCS images are false colored, so that high levels of phytoplankton appear as red or orange). Satellite-based ocean color observations provide a global picture of life in the world's oceans, because phytoplankton is the basis for the vast majority of oceanic food chains. By recording images over a period of years, scientists also gained a better understanding of how the phytoplankton biomass changed over time; for instance, red tide blooms |
https://en.wikipedia.org/wiki/Community%20%28ecology%29 | In ecology, a community is a group or association of populations of two or more different species occupying the same geographical area at the same time, also known as a biocoenosis, biotic community, biological community, ecological community, or life assemblage. The term community has a variety of uses. In its simplest form it refers to groups of organisms in a specific place or time, for example, "the fish community of Lake Ontario before industrialization".
Community ecology or synecology is the study of the interactions between species in communities on many spatial and temporal scales, including the distribution, structure, abundance, demography, and interactions between coexisting populations. The primary focus of community ecology is on the interactions between populations as determined by specific genotypic and phenotypic characteristics. It is important to understand the origin, maintenance, and consequences of species diversity when evaluating community ecology.
Community ecology also takes into account abiotic factors that influence species distributions or interactions (e.g. annual temperature or soil pH). For example, the plant communities inhabiting deserts are very different from those found in tropical rainforests due to differences in annual precipitation. Humans can also affect community structure through habitat disturbance, such as the introduction of invasive species.
On a deeper level the meaning and value of the community concept in ecology is up for debate. Communities have traditionally been understood on a fine scale in terms of local processes constructing (or destructing) an assemblage of species, such as the way climate change is likely to affect the make-up of grass communities. Recently this local community focus has been criticized. Robert Ricklefs, a professor of biology at the University of Missouri and author of Disintegration of the Ecological Community, has argued that it is more useful to think of communities on a regional sc |
https://en.wikipedia.org/wiki/ARMulator | ARM Instruction Set Simulator, also known as ARMulator, is one of the software development tools provided by the development systems business unit of ARM Limited to all users of ARM-based chips. It owes its heritage to the early development of the instruction set by Sophie Wilson. Part of this heritage is still visible in the provision of a Tube BBC Micro model in ARMulator.
Features
ARMulator is written in C and provides more than just an instruction set simulator; it provides a virtual platform for system emulation. It comes ready to emulate an ARM processor and certain ARM coprocessors. If the processor is part of an embedded system, licensees may extend ARMulator to add their own implementations of the additional hardware to the ARMulator model. ARMulator provides a number of services to help with time-based behaviour and event scheduling, and ships with examples of memory-mapped and coprocessor expansions. This way, licensees can use ARMulator to emulate their entire embedded system. A key limitation of ARMulator is that it can only simulate a single ARM CPU at a time, although almost all ARM cores up to ARM11 are available.
Performance of ARMulator is good for the technology employed: it takes about 1000 host (PC) instructions per emulated ARM instruction, which meant emulated speeds of around 1 MHz were normal for PCs of the mid-to-late 1990s. Accuracy is good too, although it is classed as cycle-count accurate rather than cycle accurate, because the ARM pipeline is not fully modeled (although register interlocks are). Resolution is to an instruction; as a consequence, when single-stepping, the register interlocks are ignored and different cycle counts are returned than if the program had simply run. This was unavoidable.
Testing ARMulator was always a time-consuming challenge, with the full ARM architecture validation suites being employed. At over 1 million lines of C code, it was a fairly hefty product.
ARMulator allows runtime debugging using either armsd (ARM |
https://en.wikipedia.org/wiki/Scott%E2%80%93Potter%20set%20theory | An approach to the foundations of mathematics that is of relatively recent origin, Scott–Potter set theory is a collection of nested axiomatic set theories set out by the philosopher Michael Potter, building on earlier work by the mathematician Dana Scott and the philosopher George Boolos.
Potter (1990, 2004) clarified and simplified the approach of Scott (1974), and showed how the resulting axiomatic set theory can do what is expected of such theory, namely grounding the cardinal and ordinal numbers, Peano arithmetic and the other usual number systems, and the theory of relations.
ZU etc.
Preliminaries
This section and the next follow Part I of Potter (2004) closely. The background logic is first-order logic with identity. The ontology includes urelements as well as sets, which makes it clear that there can be sets of entities defined by first-order theories not based on sets. The urelements are not essential in that other mathematical structures can be defined as sets, and it is permissible for the set of urelements to be empty.
Some terminology peculiar to Potter's set theory:
ι is a definite description operator and binds a variable. (In Potter's notation the iota symbol is inverted.)
The predicate U holds for all urelements (non-collections).
ιxΦ(x) exists iff (∃!x)Φ(x). (Potter uses Φ and other upper-case Greek letters to represent formulas.)
{x : Φ(x)} is an abbreviation for ιy(not U(y) and (∀x)(x ∈ y ⇔ Φ(x))).
a is a collection if {x : x∈a} exists. (All sets are collections, but not all collections are sets.)
The accumulation of a, acc(a), is the set {x : x is an urelement or ∃b∈a (x∈b or x⊂b)}.
If ∀v∈V(v = acc(V∩v)) then V is a history.
A level is the accumulation of a history.
An initial level has no other levels as members.
A limit level is a level that is neither the initial level nor the level above any other level.
A set is a subcollection of some level.
The birthday of set a, denoted V(a), is the lowest level V such that a⊂V.
Axio |
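A worked sketch of the first few levels, under the assumption that the set of urelements is empty (in which case the finite levels coincide with the finite stages of the von Neumann cumulative hierarchy):

```latex
% With no urelements, acc(a) = { x : (\exists b \in a)(x \in b \lor x \subseteq b) }.
\begin{align*}
V_0 &= \operatorname{acc}(\varnothing) = \varnothing
    && \text{(the initial level)} \\
V_1 &= \operatorname{acc}(\{V_0\}) = \{\varnothing\}
    && (\varnothing \subseteq V_0) \\
V_2 &= \operatorname{acc}(\{V_0, V_1\}) = \{\varnothing, \{\varnothing\}\} \\
V_{n+1} &= V_n \cup \mathcal{P}(V_n)
    && \text{(in general, at finite stages)}
\end{align*}
```

Here {V_0} and {V_0, V_1} are easily checked to be histories in Potter's sense, since each member v satisfies v = acc(V ∩ v).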
https://en.wikipedia.org/wiki/Shadow%20square | The shadow square, also known as an altitude scale, was an instrument used to determine the linear height of an object, in conjunction with the alidade, for angular observations. An early example was described in an Arabic treatise likely dating to 9th or 10th-century Baghdad. Shadow squares are often found on the backs of astrolabes.
Uses
The main use of a shadow square is to measure the linear height of an object using its shadow. It does so by simulating the ratio between an object, generally a gnomon, and its shadow. If the sun's ray is between 0 degrees and 45 degrees, the umbra versa (vertical axis) is used; between 45 degrees and 90 degrees, the umbra recta (horizontal axis) is used; and when the sun's ray is at 45 degrees, its shadow falls exactly on the umbra media (y = x). It was used during the time of medieval astronomy to determine the height of, and to track the movement of, celestial bodies such as the sun when more advanced measurement methods were not available. These methods can still be used today to determine the altitude, with reference to the horizon, of any visible celestial body.
Gnomon
A gnomon is commonly used along with a shadow square. A gnomon is a stick placed vertically in a sunny place so that it casts a shadow that can be measured. By studying the shadow of the gnomon you can learn a lot of information about the motion of the sun. Gnomons were most likely independently discovered by many ancient civilizations, but it is known that they were used in the 5th century BC in Greece, most likely for the measurement of the winter and summer solstices. Herodotus says in his Histories, written around 450 B.C., that the Greeks learned the use of the gnomon from the Babylonians.
Examples
If your shadow is 4 feet long in your own feet, then what is the altitude of the sun? This problem can be solved through the use of the shadow box. The shadow box is divided in half, one half is calibrated by sixes the other by tens. Because it is a shadow cast by the |
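In modern terms, the shadow square computes an arctangent: the sun's altitude is the angle whose tangent is the gnomon's height divided by the shadow's length. A minimal sketch:

```python
import math

def sun_altitude_deg(gnomon_height: float, shadow_length: float) -> float:
    """Sun's altitude above the horizon, from a vertical gnomon's shadow."""
    return math.degrees(math.atan2(gnomon_height, shadow_length))

# A shadow equal to the gnomon's height falls on the umbra media:
print(sun_altitude_deg(1.0, 1.0))   # ~ 45 degrees

# A shadow longer than the gnomon means the sun is below 45 degrees:
print(sun_altitude_deg(1.0, 2.0))
```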
https://en.wikipedia.org/wiki/Image%20impedance | Image impedance is a concept used in electronic network design and analysis and most especially in filter design. The term image impedance applies to the impedance seen looking into a port of a network. Usually a two-port network is implied but the concept can be extended to networks with more than two ports. The definition of image impedance for a two-port network is the impedance, Zi 1, seen looking into port 1 when port 2 is terminated with the image impedance, Zi 2, for port 2. In general, the image impedances of ports 1 and 2 will not be equal unless the network is symmetrical (or anti-symmetrical) with respect to the ports.
Derivation
As an example, the derivation of the image impedances of a simple 'L' network is given below. The 'L' network consists of a series impedance, Z, and a shunt admittance, Y.
The difficulty here is that in order to find Z_i1 it is first necessary to terminate port 2 with Z_i2. However, Z_i2 is also an unknown at this stage. The problem is solved by terminating port 2 with an identical network: port 2 of the second network is connected to port 2 of the first network, and port 1 of the second network is terminated with Z_i1. The second network is terminating the first network in Z_i2, as required. Mathematically, this is equivalent to eliminating one variable from a set of simultaneous equations. The network can now be solved for Z_i1. Writing out the expression for input impedance (the two shunt admittances appear in parallel at the junction, giving 2Y) gives:
Z_i1 = Z + 1/(2Y + 1/(Z + Z_i1))
and solving for Z_i1,
Z_i1 = √(Z² + Z/Y)
Z_i2 is found by a similar process, but it is simpler to work in terms of the reciprocal, that is, the image admittance Y_i2 = 1/Z_i2:
Y_i2 = √(Y² + Y/Z)
Also, it can be seen from these expressions that the two image impedances are related to each other by:
Z_i1 · Z_i2 = Z/Y
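A numerical check of this derivation (component values chosen arbitrarily, for a purely resistive network): with series impedance Z and shunt admittance Y, the image impedances Z_i1 = √(Z² + Z/Y) and Z_i2 = 1/√(Y² + Y/Z) must satisfy the defining property that terminating port 2 in Z_i2 makes the port-1 input impedance equal Z_i1.

```python
import math

Z = 2.0   # series impedance (ohms); arbitrary illustrative value
Y = 0.5   # shunt admittance (siemens); arbitrary illustrative value

z_i1 = math.sqrt(Z * Z + Z / Y)   # image impedance of port 1
y_i2 = math.sqrt(Y * Y + Y / Z)   # image admittance of port 2
z_i2 = 1.0 / y_i2

# Port-1 input impedance with port 2 terminated in Z_i2:
# series Z, then shunt Y in parallel with the termination.
z_in = Z + 1.0 / (Y + 1.0 / z_i2)
assert abs(z_in - z_i1) < 1e-12        # defining image property holds

# The two image impedances multiply to Z/Y:
assert abs(z_i1 * z_i2 - Z / Y) < 1e-12
```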
Measurement
Directly measuring image impedance by adjusting terminations is inconveniently iterative and requires precision adjustable components to effect the termination. An alternative technique to determine the image impedance of port 1 is to measure the short-circuit impedance ZSC (that is, the input impe |
https://en.wikipedia.org/wiki/Gross%20%28unit%29 | In English and related languages, several terms involving the words "great" or "gross" relate to numbers involving a multiple of exponents of twelve (dozen):
A gross refers to a group of 144 items (a dozen dozen or a square dozen, 122).
A great gross refers to a group of 1,728 items (a dozen gross or a cubic dozen, 123).
A small gross or a great hundred refers to a group of 120 items (ten dozen, 10×12).
The term can be abbreviated gr. or gro., and dates from the early 15th century. It derives from the Old French grosse douzaine, meaning "large dozen". The continued use of these terms in measurement and counting represents the duodecimal number system. This has led groups such as the Dozenal Society of America to advocate for wider use of "gross" and related terms instead of the decimal system.
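The dozen-based units above, expressed as arithmetic:

```python
# Dozen-based counting units as powers (and multiples) of twelve.
DOZEN = 12
units = {
    "small gross (great hundred)": 10 * DOZEN,  # ten dozen = 120
    "gross": DOZEN ** 2,                        # a dozen dozen = 144
    "great gross": DOZEN ** 3,                  # a dozen gross = 1728
}
print(units)
```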
See also
Long hundred |
https://en.wikipedia.org/wiki/Aeroallergen | An aeroallergen (pronounced aer·o·al·ler·gen) is any airborne substance, such as pollen or spores, which triggers an allergic reaction.
Pollens
Aeroallergens include the pollens of specific seasonal plants. Allergy to such pollen is commonly known as "hay fever", because it is most prevalent during haying season, from late May to the end of June in the Northern Hemisphere; but it is possible to experience hay fever throughout the year.
The pollen which causes hay fever varies from person to person and from region to region; generally speaking, the tiny, hardly visible pollens of wind-pollinated plants are the predominant cause. Pollens of insect-pollinated plants are too large to remain airborne and pose no risk.
Examples of plant pollen commonly responsible for hay fever include:
Trees: such as birch (Betula), alder (Alnus), cedar (Cedrus), hazel (Corylus), hornbeam (Carpinus), horse chestnut (Aesculus), willow (Salix), poplar (Populus), plane (Platanus), linden/lime (Tilia) and olive (Olea). In northern latitudes birch is considered to be the most important allergenic tree pollen. An estimated 15–20% of those with hay fever are sensitive to birch pollen grains. Olive pollen is most predominant in Mediterranean regions.
Grasses (Family Poaceae): especially ryegrass (Lolium sp.) and timothy (Phleum pratense). An estimated 90% of those with hay fever are allergic to grass pollen.
Weeds: ragweed (Ambrosia), plantain (Plantago), nettles/parietaria (Urticaceae), mugwort (Artemisia), Fat hen (Chenopodium) and sorrel/dock (Rumex)
The time of year at which hay fever symptoms manifest themselves varies greatly depending on the types of pollen to which an allergic reaction is produced. The pollen count, in general, is highest from mid-spring to early summer. As most pollens are produced at fixed periods in the year, a person with long-term hay fever may also be able to anticipate when the symptoms are most likely to begin and end, although this may be complicated by an allergy to dust parti |
https://en.wikipedia.org/wiki/Doves%20as%20symbols | Doves, typically domestic pigeons white in plumage, are used in many settings as symbols of peace, freedom, or love. Doves appear in the symbolism of Judaism, Christianity, Islam and paganism, and of both military and pacifist groups.
Mythology
In ancient Mesopotamia, doves were prominent animal symbols of Inanna-Ishtar, the Goddess of Love, Sexuality, and War. Doves are shown on cultic objects associated with Inanna as early as the beginning of the third millennium BC. Lead dove figurines were discovered in the temple of Ishtar at Aššur, dating to the thirteenth century BC, and a painted fresco from Mari, Syria shows a giant dove emerging from a palm tree in the temple of Ishtar, indicating that the goddess herself was sometimes believed to take the form of a dove.
In the ancient Levant, doves were used as symbols for the Canaanite mother goddess Asherah.
The ancient Greek word for "dove" was peristerá, which may be derived from the Semitic phrase peraḥ Ištar, meaning "bird of Ishtar". In classical antiquity, doves were sacred to the Greek goddess Aphrodite, who absorbed this association with doves from Inanna-Ishtar. Aphrodite frequently appears with doves in ancient Greek pottery. The temple of Aphrodite Pandemos on the southwest slope of the Athenian Acropolis was decorated with relief sculptures of doves with knotted fillets in their beaks and votive offerings of small, white, marble doves were discovered in the temple of Aphrodite at Daphni. During Aphrodite's main festival, the Aphrodisia, her altars would be purified with the blood of a sacrificed dove. Aphrodite's associations with doves influenced the Roman goddesses Venus and Fortuna, causing them to become associated with doves as well.
In Japanese mythology, doves are Hachiman's familiar spirit. Hachiman is the syncretic divinity of archery and war incorporating elements from both Shinto and Buddhism.
Judaism
According to the biblical story (Genesis 8:11), a dove was released by Noah after |
https://en.wikipedia.org/wiki/Endothelium-derived%20hyperpolarizing%20factor | In blood vessels, endothelium-derived hyperpolarizing factor (EDHF) is proposed to be a substance and/or electrical signal that is generated or synthesized in, and released from, the endothelium; its action is to hyperpolarize vascular smooth muscle cells, causing these cells to relax, thus allowing the blood vessel to expand in diameter.
Introduction
The endothelium maintains vascular homeostasis through the release of active vasodilators. Although nitric oxide (NO) is recognized as the primary factor at the level of arteries, evidence for the role of another endothelium-derived vasodilator, known as endothelium-derived hyperpolarizing factor (EDHF), has accumulated in recent years. Experiments show that when NO and prostacyclin (both vasodilators) are inhibited, there is still another factor causing the vessels to dilate.
Despite ongoing debate about its intriguingly variable nature and mechanisms of action, the contribution of EDHF to endothelium-dependent relaxation is currently appreciated as an important feature of “healthy” endothelium. Since EDHF's contribution is greatest at the level of small arteries, changes in EDHF action are of critical importance for the regulation of organ blood flow, peripheral vascular resistance, and blood pressure, particularly when production of NO is compromised. Moreover, depending on the type of cardiovascular disorder, altered EDHF responses may contribute to, or compensate for, endothelial abnormalities associated with the pathogenesis of certain diseases.
It is widely accepted that EDHF plays an important role in vascular tone, especially in microvessels. Its effect varies depending on the size of the vessel.
Pathways of EDHF
There are two general pathways that explain EDH
Diffusible factors are endothelium-derived substances that are able to pass through internal elastic layer (IEL), reach underlying vascular smooth muscle cells at a concentration sufficient to activate ion channels, and initiate smooth muscle hyperpol |
https://en.wikipedia.org/wiki/Passwd | passwd is a command on Unix, Plan 9, Inferno, and most Unix-like operating systems used to change a user's password. The password entered by the user is run through a key derivation function to create a hashed version of the new password, which is saved. Only the hashed version is stored; the entered password is not saved for security reasons.
When the user logs on, the password entered by the user during the log on process is run through the same key derivation function and the resulting hashed version is compared with the saved version. If the hashes are identical, the entered password is considered to be correct, and the user is authenticated. In theory, it is possible for two different passwords to produce the same hash. However, cryptographic hash functions are designed in such a way that finding any password that produces the same hash is very difficult and practically infeasible, so if the produced hash matches the stored one, the user can be authenticated.
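The store-then-compare flow described above can be sketched as follows. PBKDF2 stands in here purely as an illustrative key derivation function; actual passwd implementations use crypt(3) hashing schemes (such as sha512crypt or yescrypt), and the function names below are hypothetical:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a hash from the password; only (salt, digest) is stored,
    never the password itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-run the same key derivation on the entered password and
    compare the result with the stored hash."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse")
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))    # False
```

Note the constant-time comparison (`hmac.compare_digest`), which avoids leaking information about the stored hash through timing.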
The passwd command may be used to change passwords for local accounts, and on most systems, can also be used to change passwords managed in a distributed authentication mechanism such as NIS, Kerberos, or LDAP.
Password file
The /etc/passwd file is a text-based database of information about users that may log into the system or other operating system user identities that own running processes.
In many operating systems this file is just one of many possible back-ends for the more general passwd name service.
The file's name originates from one of its initial functions as it contained the data used to verify passwords of user accounts. However, on modern Unix systems the security-sensitive password information is instead often stored in a different file using shadow passwords, or other database implementations.
The /etc/passwd file typically has file system permissions that allow it to be readable by all users of the system (world-readable), although it may only be modified by the superuser or by us |
https://en.wikipedia.org/wiki/Teardrop%20%28electronics%29 | A teardrop is a typically drop-shaped feature on a printed circuit board, found at the junction of vias or contact pads.
Purpose
The main purpose of teardrops is to enhance structural integrity in the presence of thermal or mechanical stress, for example due to vibration or flexing. Structural integrity may be compromised, e.g., by misalignment during drilling, so that too much copper is removed by the drill hole in the area where a trace connects to the pad or via. An added advantage is larger manufacturing tolerances, making manufacturing easier and cheaper.
While the typical shape of a teardrop is a straight-line taper, teardrops may also be concave; these types are also called filleted or straight. To produce a snowman-shaped teardrop, a secondary pad of smaller size is added at the junction, overlapping with the primary pad (hence the nickname).
Necking
For similar reasons, a technique called trace necking reduces (or necks down) the width of a trace that approaches a narrower pad of a surface-mounted device or a through-hole with a diameter that is less than the width of the trace, or when the trace passes through bottlenecks (for example, between the pads of a component). |
https://en.wikipedia.org/wiki/Feedback%20linearization | Feedback linearization is a common strategy employed in nonlinear control to control nonlinear systems. Feedback linearization techniques may be applied to nonlinear control systems of the form
where is the state, are the inputs. The approach involves transforming a nonlinear control system into an equivalent linear control system through a change of variables and a suitable control input. In particular, one seeks a change of coordinates and control input so that the dynamics of in the coordinates take the form of a linear, controllable control system,
An outer-loop control strategy for the resulting linear control system can then be applied to achieve the control objective.
Feedback linearization of SISO systems
Here, consider the case of feedback linearization of a single-input single-output (SISO) system. Similar results can be extended to multiple-input multiple-output (MIMO) systems. In this case, and . The objective is to find a coordinate transformation that transforms the system (1) into the so-called normal form which will reveal a feedback law of the form
that will render a linear input–output map from the new input to the output . To ensure that the transformed system is an equivalent representation of the original system, the transformation must be a diffeomorphism. That is, the transformation must not only be invertible (i.e., bijective), but both the transformation and its inverse must be smooth so that differentiability in the original coordinate system is preserved in the new coordinate system. In practice, the transformation can be only locally diffeomorphic and the linearization results only hold in this smaller region.
Several tools are required to solve this problem.
Lie derivative
The goal of feedback linearization is to produce a transformed system whose states are the output and its first derivatives. To understand the structure of this target system, we use the Lie derivative. Consider the time derivative of (2), which c |
https://en.wikipedia.org/wiki/Relative%20abundance%20distribution | In ecology, the relative abundance distribution (RAD) or species abundance distribution (SAD) describes the relationship between the number of species observed in a field study as a function of their observed abundance. The SAD is one of ecology's oldest and most universal laws – every community shows a hollow curve or hyperbolic shape on a histogram with many rare species and just a few common species. When plotted as a histogram of the number (or percent) of species on the y-axis vs. abundance on an arithmetic x-axis, the classic hyperbolic J-curve or hollow curve is produced, indicating a few very abundant species and many rare species. The SAD is a central prediction of the unified neutral theory of biodiversity.
Starting in the 1970s and running unabated to the present day, mechanistic models (models attempting to explain the causes of the hollow curve SAD) and alternative interpretations and extensions of prior theories have proliferated to an extraordinary degree. The graphs obtained in this manner are typically fitted to a Zipf–Mandelbrot law, the exponent of which serves as an index of biodiversity in the ecosystem under study.
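A minimal sketch of such a fit, assuming a pure power law and a simple log–log least-squares slope (real RAD fitting typically uses the fuller Zipf–Mandelbrot form, which adds an offset parameter; the data below are synthetic):

```python
import math

def zipf_exponent(abundances):
    """Fit the exponent s of a Zipf-like law, abundance ~ C * rank^(-s),
    by least squares on log(rank) vs. log(abundance)."""
    ranked = sorted(abundances, reverse=True)
    xs = [math.log(r) for r in range(1, len(ranked) + 1)]
    ys = [math.log(a) for a in ranked]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # the negated slope is the Zipf exponent

# Synthetic community following an exact power law with s = 1.2:
community = [100.0 * r ** -1.2 for r in range(1, 51)]
print(round(zipf_exponent(community), 6))  # 1.2
```

On real field data the log–log relation is only approximately linear, and the recovered exponent serves as the biodiversity index mentioned above.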
Notes and references
Ecological metrics |
https://en.wikipedia.org/wiki/Cousin%20prime | In number theory, cousin primes are prime numbers that differ by four. Compare this with twin primes, pairs of prime numbers that differ by two, and sexy primes, pairs of prime numbers that differ by six.
The cousin primes (sequences and in OEIS) below 1000 are:
(3, 7), (7, 11), (13, 17), (19, 23), (37, 41), (43, 47), (67, 71), (79, 83), (97, 101), (103, 107), (109, 113), (127, 131), (163, 167), (193, 197), (223, 227), (229, 233), (277, 281), (307, 311), (313, 317), (349, 353), (379, 383), (397, 401), (439, 443), (457, 461), (463,467), (487, 491), (499, 503), (613, 617), (643, 647), (673, 677), (739, 743), (757, 761), (769, 773), (823, 827), (853, 857), (859, 863), (877, 881), (883, 887), (907, 911), (937, 941), (967, 971)
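The list above can be reproduced with a short trial-division sketch (function names are illustrative):

```python
def is_prime(n: int) -> bool:
    """Trial division, sufficient for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Pairs (p, p + 4) with both members prime and below 1000:
cousin_pairs = [(p, p + 4) for p in range(2, 996)
                if is_prime(p) and is_prime(p + 4)]
print(len(cousin_pairs))                  # 41
print(cousin_pairs[0], cousin_pairs[-1])  # (3, 7) (967, 971)
```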
Properties
The only prime belonging to two pairs of cousin primes is 7. One of the numbers n, n + 4, n + 8 will always be divisible by 3, so n = 3 is the only case where all three are primes.
An example of a large proven cousin prime pair is for
which has 20008 digits. In fact, this is part of a prime triple since is also a twin prime (because is also a proven prime).
, the largest-known pair of cousin primes was found by S. Batalov and has 51,934 digits. The primes are:
It follows from the first Hardy–Littlewood conjecture that cousin primes have the same asymptotic density as twin primes. An analogue of Brun's constant for twin primes can be defined for cousin primes, called Brun's constant for cousin primes, with the initial term (3, 7) omitted, by the convergent sum:
Using cousin primes up to 2⁴², the value of this constant was estimated by Marek Wolf in 1996 as
This constant should not be confused with Brun's constant for prime quadruplets, which is also denoted .
The Skewes number for cousin primes is 5206837.
Notes |
https://en.wikipedia.org/wiki/Web%20Services%20Flow%20Language | Web Services Flow Language 1.0 (WSFL) was an XML programming language proposed by IBM in 2001 for describing Web service compositions. The language considered two types of composition: the first described business processes as a collection of Web services, and the second described interactions between partners. WSFL was proposed to be layered on top of the Web Services Description Language.
In 2003, IBM and Microsoft combined WSFL and XLANG into BPEL4WS and submitted it to OASIS for standardization. OASIS published BPEL4WS as WS-BPEL to fit the naming of other WS-* standards.
Web Services Endpoint Language (WSEL)
Web Services Endpoint Language (WSEL) was an XML format proposed for the description of non-operational characteristics of service endpoints, such as quality-of-service, cost, or security properties. The format was proposed as part of the report that published the Web Services Flow Language. It never gained wide acceptance.
Notes |
https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates%20shuffle | The Fisher–Yates shuffle is an algorithm for shuffling a finite sequence. The algorithm takes a list of all the elements of the sequence, and continually determines the next element in the shuffled sequence by randomly drawing an element from the list until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely. The modern version of the algorithm takes time proportional to the number of items being shuffled and shuffles them in place.
The Fisher–Yates shuffle is named after Ronald Fisher and Frank Yates, who first described it, and is also known as the Knuth shuffle after Donald Knuth. A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cyclic permutations of length n instead of random permutations.
Fisher and Yates' original method
The Fisher–Yates shuffle, in its original form, was described in 1938 by Ronald Fisher and Frank Yates in their book Statistical tables for biological, agricultural and medical research. Their description of the algorithm used pencil and paper; a table of random numbers provided the randomness. The basic method given for generating a random permutation of the numbers 1 through N goes as follows:
Write down the numbers from 1 through N.
Pick a random number k between one and the number of unstruck numbers remaining (inclusive).
Counting from the low end, strike out the kth number not yet struck out, and write it down at the end of a separate list.
Repeat from step 2 until all the numbers have been struck out.
The sequence of numbers written down in step 3 is now a random permutation of the original numbers.
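The pencil-and-paper procedure above translates directly into code; a minimal sketch, using a list in place of the written-down numbers and a seeded generator in place of the table of random numbers (the function name is illustrative):

```python
import random

def fisher_yates_original(n: int, rng: random.Random) -> list:
    """Fisher and Yates' 1938 pencil-and-paper method: repeatedly strike
    out a randomly chosen remaining number and append it to the output."""
    remaining = list(range(1, n + 1))          # step 1: write down 1..N
    shuffled = []
    while remaining:
        k = rng.randint(1, len(remaining))     # step 2: random k (inclusive)
        shuffled.append(remaining.pop(k - 1))  # step 3: strike out the k-th
    return shuffled                            # steps 4-5: repeat until done

print(fisher_yates_original(8, random.Random(42)))
```

Because each strike-out deletes from the middle of the list, this version runs in O(n²) time; the modern in-place variant mentioned above achieves O(n) by swapping instead of deleting.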
Provided that the random numbers picked in step 2 above are truly random and unbiased, so will be the resulting permutation. Fisher and Yates took care to describe how to obtain such random numbers in any desired range from the supplied tables in a manner which avoids any bias. They also suggested the possibility of using a |
https://en.wikipedia.org/wiki/DOPIPE | DOPIPE parallelism is a method to perform loop-level parallelism by pipelining the statements in a loop. Pipelined parallelism may exist at different levels of abstraction, such as loops, functions and algorithmic stages. The extent of parallelism depends upon the programmers' ability to make the best use of this concept, as well as on factors like identifying and separating the independent tasks and executing them in parallel.
Background
The main purpose of employing loop-level parallelism is to search for and split sequential tasks of a program and convert them into parallel tasks without any prior information about the algorithm. Parts of the program that recur and consume a significant amount of execution time are good candidates for loop-level parallelism. Some common applications of loop-level parallelism are found in mathematical analysis that uses multi-dimensional matrices iterated in nested loops.
There are different kinds of parallelization techniques, chosen on the basis of data storage overhead, degree of parallelization and data dependencies. Some of the known techniques are DOALL, DOACROSS and DOPIPE.
DOALL: This technique is used where we can parallelize each iteration of the loop without any interaction between the iterations. Hence, the overall run-time gets reduced from N * T (for a serial processor, where T is the execution time for each iteration) to only T (since all the N iterations are executed in parallel).
DOACROSS: This technique is used wherever there is a possibility for data dependencies. Hence, we parallelize tasks in such a manner that all the data independent tasks are executed in parallel, but the dependent ones are executed sequentially. There is a degree of synchronization used to sync the dependent tasks across parallel processors.
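The DOALL case can be sketched with a thread pool standing in for per-iteration processors (a simplification: real DOALL execution assigns iterations to parallel hardware, and the iteration body below is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i: int) -> int:
    # An iteration body with no dependence on other iterations,
    # so every iteration may execute concurrently (DOALL).
    return i * i

n = 8
with ThreadPoolExecutor(max_workers=n) as pool:
    results = list(pool.map(body, range(n)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because no iteration reads data produced by another, the pool may run all N iterations at once, giving the N*T to T reduction described above.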
Description
DOPIPE is a pipelined parallelization technique that is used in programs where each element produced during each iteration is consumed in the later iteration. The followin |
https://en.wikipedia.org/wiki/Link-Local%20Multicast%20Name%20Resolution | The Link-Local Multicast Name Resolution (LLMNR) is a protocol based on the Domain Name System (DNS) packet format that allows both IPv4 and IPv6 hosts to perform name resolution for hosts on the same local link. It is included in Windows Vista, Windows Server 2008, Windows 7, Windows 8 and Windows 10. It is also implemented by systemd-resolved on Linux. LLMNR is defined in RFC 4795 but was not adopted as an IETF standard.
As of April 2022, Microsoft has begun the process of phasing out both LLMNR and NetBIOS name resolution in favour of mDNS.
Protocol details
In responding to queries, responders listen on UDP port 5355 on the following link-scope multicast addresses:
IPv4 - 224.0.0.252, MAC address 01-00-5E-00-00-FC
IPv6 - FF02:0:0:0:0:0:1:3 (this notation can be abbreviated as FF02::1:3), MAC address 33-33-00-01-00-03
The responders also listen on TCP port 5355 on the unicast address that the host uses to respond to queries.
Packet header structure
ID - A 16-bit identifier assigned by the program that generates any kind of query.
QR - Query/Response.
OPCODE - A 4-bit field that specifies the kind of query in this message. This value is set by the originator of a query and copied into the response. This specification defines the behavior of standard queries and responses (opcode value of zero). Future specifications may define the use of other opcodes with LLMNR.
C - Conflict.
TC - TrunCation.
T - Tentative.
Z - Reserved for future use.
RCODE - Response code.
QDCOUNT - An unsigned 16-bit integer specifying the number of entries in the question section.
ANCOUNT - An unsigned 16-bit integer specifying the number of resource records in the answer section.
NSCOUNT - An unsigned 16-bit integer specifying the number of name server resource records in the authority records section.
ARCOUNT - An unsigned 16-bit integer specifying the number of resource records in the additional records section.
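The fields above form a 12-byte header with the same layout as a DNS header. A minimal encoder sketch follows; the bit positions reflect the QR/OPCODE/C/TC/T/Z/RCODE ordering listed above (per RFC 4795), and the function name and sample values are illustrative:

```python
import struct

def llmnr_header(ident, qr=0, opcode=0, c=0, tc=0, t=0, rcode=0,
                 qdcount=1, ancount=0, nscount=0, arcount=0) -> bytes:
    """Pack the 12-byte LLMNR header: ID, a 16-bit flags word
    (QR, OPCODE, C, TC, T, 4 reserved Z bits, RCODE), then four counts."""
    flags = ((qr << 15) | (opcode << 11) | (c << 10)
             | (tc << 9) | (t << 8) | rcode)
    return struct.pack("!6H", ident, flags,
                       qdcount, ancount, nscount, arcount)

hdr = llmnr_header(0x1234)  # a standard query header
print(len(hdr), hdr.hex())
```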
See also
Network Basic Input/Output System (NetBIOS |
https://en.wikipedia.org/wiki/Radio%20pack | A radio pack is mainly used by musicians such as guitarists and singers for live performances. It is a small radio transmitter that is placed either in the strap or in a pocket. The receiver is connected to an amp or PA system, and the user simply connects the transmitter to the instrument. By using a wireless system, musicians are free to move around the stage. This has meant that more elaborate stage shows are now possible, with musicians performing a long way from the amplifier or speakers.
As with any radio device, interference is possible, although modern systems are more stable. An example of a performer who has made use of a radio pack is AC/DC guitarist Angus Young, whose stage antics are legendary.
See also
Schaffer–Vega diversity system – a wireless guitar system
Sound production technology
Radio technology |
https://en.wikipedia.org/wiki/Localized%20Chern%20class | In algebraic geometry, a localized Chern class is a variant of a Chern class, that is defined for a chain complex of vector bundles as opposed to a single vector bundle. It was originally introduced in Fulton's intersection theory, as an algebraic counterpart of the similar construction in algebraic topology. The notion is used in particular in the Riemann–Roch-type theorem.
S. Bloch later generalized the notion in the context of arithmetic schemes (schemes over a Dedekind domain) for the purpose of giving Bloch's conductor formula, which computes the non-constancy of the Euler characteristic of a degenerating family of algebraic varieties (in the mixed characteristic case).
Definitions
Let Y be a pure-dimensional regular scheme of finite type over a field or discrete valuation ring and X a closed subscheme. Let denote a complex of vector bundles on Y
that is exact on . The localized Chern class of this complex is a class in the bivariant Chow group of defined as follows. Let denote the tautological bundle of the Grassmann bundle of rank sub-bundles of . Let . Then the i-th localized Chern class is defined by the formula:
where is the projection and is a cycle obtained from by the so-called graph construction.
Example: localized Euler class
Let be as in § Definitions. If S is smooth over a field, then the localized Chern class coincides with the class
where, roughly, is the section determined by the differential of f and (thus) is the class of the singular locus of f.
Consider an infinite-dimensional bundle E over an infinite-dimensional manifold M with a section s whose derivative is Fredholm. In practice this situation occurs whenever we have a system of PDEs that is elliptic when considered modulo some gauge group action. The zero set Z(s) is then the moduli space of solutions modulo gauge, and the index of the derivative is the virtual dimension. The localized Euler class of the pair (E,s) is a homology class with closed support on the zero set of th
https://en.wikipedia.org/wiki/Hypoglossal%20trigone | In the upper part of the medulla oblongata, the hypoglossal nucleus approaches the rhomboid fossa, where it lies close to the middle line, under an eminence named the hypoglossal trigone. It is a slight elevation in the floor of the inferior recess of the fourth ventricle, beneath which is the nucleus of origin of the twelfth cranial nerve. |
https://en.wikipedia.org/wiki/Dihydroprogesterone | Dihydroprogesterone may refer to:
5α-Dihydroprogesterone
5β-Dihydroprogesterone
20α-Dihydroprogesterone (20α-hydroxyprogesterone)
20β-Dihydroprogesterone (20β-hydroxyprogesterone)
3α-Dihydroprogesterone
3β-Dihydroprogesterone
17α,21-Dihydroprogesterone (11-deoxycortisol)
11β,21-Dihydroprogesterone (corticosterone)
See also
Progesterone
Pregnanedione
Pregnanolone
Pregnanediol
Pregnanetriol
Hydroxyprogesterone
Biochemistry
Pregnanes |
https://en.wikipedia.org/wiki/Lars%20G%C3%A5rding | Lars Gårding (7 March 1919 – 7 July 2014) was a Swedish mathematician. He made notable contributions to the study of partial differential equations and partial differential operators. He was a professor of mathematics at Lund University in Sweden 1952–1984. Together with Marcel Riesz, he was a thesis advisor for Lars Hörmander.
Biography
Gårding was born in Hedemora, Sweden, but grew up in Motala, where his father was an engineer at the plant. He began to study mathematics in Lund in 1937, initially intending to become an actuary.
His doctoral thesis, written under the supervision of Marcel Riesz in 1944, was on group representations, but in the following years he changed his research focus to the theory of partial differential equations. He held the professorship of mathematics at Lund University from 1952 until retirement in 1984.
His interests were not limited to mathematics, but extended to art, literature and music. He played the violin and the piano. He also published a book on bird songs and calls in 1987, a result of his interest in birdwatching.
Gårding was elected a member of the Royal Swedish Academy of Sciences in 1953.
Gårding died on 7 July 2014, aged 95.
Selected works
Books
1977. Encounter with Mathematics, 1st Edition.
2013. Encounter with Mathematics, softcover reprint of the 1st 1977 edition. Springer
Articles |
https://en.wikipedia.org/wiki/Minacovirus | Minacovirus is a subgenus of viruses in the genus Alphacoronavirus, consisting of a single species, Mink coronavirus 1. |
https://en.wikipedia.org/wiki/Heat%20equation | In mathematics and physics, the heat equation is a certain partial differential equation. Solutions of the heat equation are sometimes known as caloric functions. The theory of the heat equation was first developed by Joseph Fourier in 1822 for the purpose of modeling how a quantity such as heat diffuses through a given region.
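In its most common form, for a scalar function u(x, t) and a thermal diffusivity constant α > 0, the equation reads:

```latex
\frac{\partial u}{\partial t} = \alpha \, \nabla^2 u
```

Here ∇² is the Laplace operator in the spatial variables; setting α = 1 is common in the pure-mathematics literature.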
As the prototypical parabolic partial differential equation, the heat equation is among the most widely studied topics in pure mathematics, and its analysis is regarded as fundamental to the broader field of partial differential equations. The heat equation can also be considered on Riemannian manifolds, leading to many geometric applications. Following work of Subbaramiah Minakshisundaram and Åke Pleijel, the heat equation is closely related with spectral geometry. A seminal nonlinear variant of the heat equation was introduced to differential geometry by James Eells and Joseph Sampson in 1964, inspiring the introduction of the Ricci flow by Richard Hamilton in 1982 and culminating in the proof of the Poincaré conjecture by Grigori Perelman in 2003. Certain solutions of the heat equation known as heat kernels provide subtle information about the region on which they are defined, as exemplified through their application to the Atiyah–Singer index theorem.
The heat equation, along with variants thereof, is also important in many fields of science and applied mathematics. In probability theory, the heat equation is connected with the study of random walks and Brownian motion via the Fokker–Planck equation. The Black–Scholes equation of financial mathematics is a small variant of the heat equation, and the Schrödinger equation of quantum mechanics can be regarded as a heat equation in imaginary time. In image analysis, the heat equation is sometimes used to resolve pixelation and to identify edges. Following Robert Richtmyer and John von Neumann's introduction of "artificial viscosity" methods, solutions of heat equations have been useful in t |
https://en.wikipedia.org/wiki/Rubroboletus%20satanas | Rubroboletus satanas, commonly known as Satan's bolete or the Devil's bolete, is a basidiomycete fungus of the bolete family (Boletaceae) and one of its most infamous members. It was known as Boletus satanas before its transfer to the new genus Rubroboletus in 2014, based on molecular phylogenetic data. Found in broad-leaved and mixed woodland in the warmer regions of Europe, it is classified as a poisonous mushroom, known to cause gastrointestinal symptoms of diarrhea and violent vomiting. However, reports of poisoning are rare, due to its striking appearance and at times putrid smell, which discourage casual experimentation.
The squat, brightly coloured fruiting bodies are often massive and imposing, with a pale, dull-coloured velvety cap up to , extraordinarily , very rarely across, yellow to orange-red pores and a bulbous red-patterned stem. The flesh turns blue when cut or bruised, and overripe fruit bodies often emit an unpleasant smell reminiscent of carrion. It is arguably the largest bolete found in Europe.
Taxonomy and phylogeny
Originally known as Boletus satanas, the Satan's bolete was described by German mycologist Harald Othmar Lenz in 1831. Lenz was aware of several reports of adverse reactions from people who had consumed this fungus and apparently felt himself ill from its "emanations" while describing it, hence giving it its sinister epithet. The Greek word (satanas, meaning Satan) is derived from the Hebrew śāṭān (שטן). American mycologist Harry D. Thiers concluded that material from North America matches the species description; however, genetic testing has since confirmed that western North American collections represent Rubroboletus eastwoodiae, a different species.
Genetic analysis published in 2013 revealed that B. satanas and several other red-pored boletes are part of the "/dupainii" clade (named after B. dupainii), and are distantly nested from the core group of Boletus (including B. edulis and relatives) within the Boletineae. This
https://en.wikipedia.org/wiki/Probabilistic%20CTL | Probabilistic Computation Tree Logic (PCTL) is an extension of computation tree logic (CTL) that allows for probabilistic quantification of described properties. It was first defined in a paper by Hansson and Jonsson.
PCTL is a useful logic for stating soft-deadline properties, e.g. "after a request for a service, there is at least a 98% probability that the service will be carried out within 2 seconds". Akin to CTL's suitability for model checking, the PCTL extension is widely used as a property specification language for probabilistic model checkers.
PCTL syntax
A possible syntax of PCTL can be defined as follows:
Therein, is a comparison operator and is a probability threshold.
Formulas of PCTL are interpreted over discrete Markov chains. An interpretation structure
is a quadruple , where
is a finite set of states,
is an initial state,
is a transition probability function, , such that for all we have , and
is a labeling function, , assigning atomic propositions to states.
A path from a state is an infinite sequence of states
. The n-th state of the path is denoted as
and the prefix of of length is denoted as .
Probability measure
A probability measure on the set of paths with a common prefix of length is given by the product of transition probabilities along the prefix of the path:
For a prefix of length 0, the probability measure is equal to 1.
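The prefix-product measure described above can be sketched as follows; the state names and transition probabilities are hypothetical:

```python
def prefix_probability(T, path_prefix):
    """Probability measure of the set of paths sharing a finite prefix:
    the product of transition probabilities along that prefix.
    T maps (state, next_state) -> probability."""
    p = 1.0  # a prefix of length 0 has measure 1
    for s, s_next in zip(path_prefix, path_prefix[1:]):
        p *= T.get((s, s_next), 0.0)
    return p

# Toy two-state discrete Markov chain (illustrative numbers):
T = {("s0", "s0"): 0.5, ("s0", "s1"): 0.5,
     ("s1", "s0"): 0.1, ("s1", "s1"): 0.9}
print(prefix_probability(T, ["s0", "s1", "s1"]))  # 0.5 * 0.9 = 0.45
```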
Satisfaction relation
The satisfaction relation is inductively defined as follows:
if and only if ,
if and only if not ,
if and only if or ,
if and only if and ,
if and only if , and
if and only if .
See also
Computation tree logic
Temporal logic |
https://en.wikipedia.org/wiki/Aggregated%20indices%20randomization%20method | In applied mathematics and decision making, the aggregated indices randomization method (AIRM) is a modification of a well-known aggregated indices method, targeting complex objects subjected to multi-criteria estimation under uncertainty. AIRM was first developed by the Russian naval applied mathematician Aleksey Krylov around 1908.
The main advantage of AIRM over other variants of aggregated indices methods is its ability to cope with poor-quality input information. It can use non-numeric (ordinal), non-exact (interval) and non-complete expert information to solve multi-criteria decision analysis (MCDM) problems. An exact and transparent mathematical foundation can assure the precision and fidelity of AIRM results.
Background
Ordinary aggregated indices method allows comprehensive estimation of complex (multi-attribute) objects’ quality. Examples of such complex objects (decision alternatives, variants of a choice, etc.) may be found in diverse areas of business, industry, science, etc. (e.g., large-scale technical systems, long-time projects, alternatives of a crucial financial/managerial decision, consumer goods/services, and so on). There is a wide diversity of qualities under evaluation too: efficiency, performance, productivity, safety, reliability, utility, etc.
The essence of the aggregated indices method consists in an aggregation (convolution, synthesizing, etc.) of some single indices (criteria) q(1),...,q(m), each single index being an estimation of a fixed quality of multiattribute objects under investigation, into one aggregated index (criterion) Q=Q(q(1),...,q(m)).
In other words, in the aggregated indices method, single estimations of an object, each made from a single (specific) “point of view” (single criterion), are synthesized by the aggregative function Q=Q(q(1),...,q(m)) into one aggregated (general) estimation Q of the object, made from the general “point of view” (general criterion).
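The randomization idea behind AIRM can be illustrated with a minimal sketch: suppose the only (non-numeric, ordinal) information about the weights is their ordering w1 ≥ w2 ≥ ... ≥ wm. Admissible weight vectors are then sampled at random and the linear aggregate averaged over the samples. The sampling model, function names, and index values below are illustrative assumptions, not the exact AIRM procedure:

```python
import random

def random_ordered_weights(m, rng):
    """Sample a weight vector with w1 >= w2 >= ... >= wm summing to 1,
    by sorting the spacings of a uniform point on the simplex
    (one simple model of the admissible set)."""
    cuts = sorted(rng.random() for _ in range(m - 1))
    w = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    return sorted(w, reverse=True)

def airm_estimate(q, trials=10_000, seed=1):
    """Average the linear aggregate Q = sum(w_i * q_i) over randomized
    admissible weights consistent with the ordinal information."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        w = random_ordered_weights(len(q), rng)
        total += sum(wi * qi for wi, qi in zip(w, q))
    return total / trials

# Single indices q(1..3) for one object on a 0..1 scale (hypothetical):
print(round(airm_estimate([0.9, 0.6, 0.3]), 3))
```

The spread of the sampled aggregates (not shown) can likewise serve as a measure of the uncertainty inherited from the poor-quality weight information.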
Aggregated index Q value is determined |
https://en.wikipedia.org/wiki/Fernhurst%20Research%20Station | The Fernhurst Research Station was a crop protection chemical research institute in West Sussex, mainly run by ICI, for the fruit industry. The site is to the east of the A286, around a mile south of the village of Fernhurst and a mile north of the Haslemere to Petersfield Serpent Trail.
History
Plant Protection Limited moved to the site in 1945 and opened a research institute on the estate of Sir Felix Schuster (1854–1936). The research institute was to investigate pest and disease control in horticultural crops. As well as being an administrative site, the station comprised an orchard including 9 acres of plums and 26 acres of dessert apples at Hurstfold Farm.
In June 1951 an international conference, with scientists from 39 countries, took place at the site on food scarcity. On 10 May 1955, the site was visited by the Duke of Edinburgh. Another international conference took place at the site in June 1956.
In 1958 Plant Protection Limited became a wholly owned subsidiary of Imperial Chemical Industries: ICI Plant Protection Division, which had its international headquarters at the site until the 1990s; in 1986 a new international conference centre was opened on the site by Prime Minister Margaret Thatcher. ICI Public Health was formed in 1989 and based at the site.
Throughout its history, indoor and outdoor crops were grown for wholesale and for research, and the station developed advanced growing and application methods for crops, including the establishment of a film unit. In April 1990, the site won a Queen's Award for Technological Achievement for herbicides, fungicides and pesticides.
The site was taken over by Zeneca in 1994, and later Syngenta, becoming the headquarters of Syngenta Europe Ltd. Syngenta left the site in December 2001, and the site ceased to function as a research station or administrative centre apart from the principal building. The other office buildings were left unoccupied and were subsequently comprehensively vandalised. At its pea |
https://en.wikipedia.org/wiki/Q0%20%28mathematical%20logic%29 | Q0 is Peter Andrews' formulation of the simply-typed lambda calculus,
and provides a foundation for mathematics comparable to first-order logic plus set theory.
It is a form of higher-order logic and closely related to the logics of the
HOL theorem prover family.
The theorem proving systems TPS and ETPS
are based on Q0. In August 2009, TPS won the first-ever competition
among higher-order theorem proving systems.
Axioms of Q0
The system has just five axioms, which can be stated as:
(1) [g_oo T_o ∧ g_oo F_o] = ∀x_o [g_oo x_o]
(2α) [x_α = y_α] ⊃ [h_oα x_α = h_oα y_α]
(3αβ) [f_αβ = g_αβ] = ∀x_β [f_αβ x_β = g_αβ x_β]
(4) [λx_β B_α] A_β = S{A_β/x_β} B_α
(5) ℩_i(oi) [Q_oii y_i] = y_i
(Axioms 2, 3, and 4 are axiom schemas—families of similar axioms. Instances of Axiom 2 and
Axiom 3 vary only by the types of variables and constants, but instances of Axiom 4 can have
any expression replacing A and B.)
The subscripted "o" is the type symbol for boolean values, and subscripted
"i" is the type symbol for individual (non-boolean) values. Sequences of these
represent types of functions, and can include parentheses to distinguish different function
types. Subscripted Greek letters such as α and β are syntactic variables for type
symbols. Bold capital letters such as A, B, and C are syntactic variables for WFFs, and bold lower case letters such as x, y are syntactic variables for variables.
S{A/x} B indicates syntactic substitution of A for x at all free occurrences of x in B.
The only primitive constants are Q_oαα, denoting equality
of members of each type α, and ℩_i(oi), denoting a
description operator for individuals, the unique element of a set containing exactly one individual.
The symbols λ and brackets ("[" and "]") are syntax of the language.
All other symbols are abbreviations for terms containing these, including quantifiers ∀ and ∃.
In Axiom 4, A must be free for x in B,
meaning that the substitution does not cause any occurrences of
the free variables of A to become bound in the result of the substitution.
About the axioms
Axiom 1 expresses the idea that T and F are the only boolean values.
Axiom schemas 2α and 3αβ express fundamental properties of functions.
Axiom schema 4 |
https://en.wikipedia.org/wiki/SPIN%20%28operating%20system%29 | The SPIN operating system is a research project implemented in the computer programming language Modula-3, and is an open source project. It is designed with three goals: flexibility, safety, and performance. SPIN was developed at the University of Washington.
The kernel can be extended by dynamically loading modules that implement interfaces representing domains. These domains are defined by Modula-3 INTERFACEs. All kernel extensions are written in the safe subset of Modula-3, with metalanguage constructs and a type-safe casting system. The system also uses a special run-time compiler for extensions.
One set of kernel extensions provides an application programming interface (API) that emulates the Digital UNIX system call interface. This allows Unix applications to run on SPIN. |
https://en.wikipedia.org/wiki/128-bit%20computing | General home computing and gaming utility emerged at 8-bit (but not at 1-bit or 4-bit) word sizes, as 2^8 = 256 words become possible. Thus, early 8-bit CPUs (the Zilog Z80, 6502, and Intel 8088, introduced 1976-1981 by Commodore, Tandy Corporation, Apple and IBM) inaugurated the era of personal computing. Many 16-bit CPUs already existed in the mid-1970s. Over the next 30 years, the shift to 16-bit, 32-bit and 64-bit computing allowed, respectively, 2^16 = 65,536 unique words, 2^32 = 4,294,967,296 unique words and 2^64 = 18,446,744,073,709,551,616 unique words, each step offering a meaningful advantage until 64 bits was reached. Further advantages evaporate from 64-bit to 128-bit computing, as the number of possible values in a register increases from roughly 18 quintillion (2^64) to 340 undecillion (2^128) and so many unique values are never utilized. Thus, with a register that can store 2^128 values, no advantages over 64-bit computing accrue to either home computing or gaming. CPUs with a larger word size also require more circuitry, are physically larger, require more power and generate more heat. Thus, there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, although a number of processors do have specialized ways to operate on 128-bit chunks of data, and uses are given below.
Representation
A processor with 128-bit byte addressing could directly address up to 2^128 (over 3.40 × 10^38) bytes, which would greatly exceed the total data captured, created, or replicated on Earth as of 2018, which has been estimated to be around 33 zettabytes (over 2^74 bytes).
A 128-bit register can store 2^128 (over 3.40 × 10^38) different values. The range of integer values that can be stored in 128 bits depends on the integer representation used. With the two most common representations, the range is 0 through 340,282,366,920,938,463,463,374,607,431,768,211,455 (2^128 − 1) for representation as an (unsigned) binary number, and −170,141,183,460,469,231,731,6
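These ranges can be verified directly; as an illustrative sketch (not from the article), Python's arbitrary-precision integers compute them exactly:

```python
# Exact ranges representable in a 128-bit register (Python ints are
# arbitrary-precision, so no overflow occurs).
BITS = 128

unsigned_max = 2**BITS - 1            # unsigned binary representation
signed_min = -(2**(BITS - 1))         # two's complement lower bound
signed_max = 2**(BITS - 1) - 1        # two's complement upper bound

print(unsigned_max)  # 340282366920938463463374607431768211455
print(signed_min)    # -170141183460469231731687303715884105728
```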
https://en.wikipedia.org/wiki/Sink%20%28computing%29 | In computing, a sink, or data sink generally refers to the destination of data flow.
The word sink has multiple uses in computing. In software engineering, an event sink is a class or function that receives events from another object or function, while a sink can also refer to a node of a directed acyclic graph with no additional nodes leading out from it, among other uses.
In software engineering
An event sink is a class or function designed to receive incoming events from another object or function. This is commonly implemented in C++ as callbacks. Other object-oriented languages, such as Java and C#, have built-in support for sinks by allowing events to be fired to delegate functions.
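As an illustrative sketch (hypothetical class and function names, not the API of any particular framework), an event sink can be modeled as a callable registered with an event source, mirroring C# delegates or C++ callbacks:

```python
# Minimal event source/sink sketch: the sink is a callback registered
# with the source, which delivers fired events to every sink.

class EventSource:
    def __init__(self):
        self._sinks = []              # registered sink callbacks

    def connect(self, sink):
        self._sinks.append(sink)

    def fire(self, event):
        for sink in self._sinks:      # deliver the event to every sink
            sink(event)

received = []

def log_sink(event):
    """An event sink: receives events fired by another object."""
    received.append(event)

source = EventSource()
source.connect(log_sink)
source.fire("clicked")
print(received)  # ['clicked']
```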
Due to the lack of a formal definition, a sink is often confused with a gateway, a similar construct; the latter is usually either an end-point or allows bi-directional communication between dissimilar systems, as opposed to being just an event input point. This usage is often seen in C++ and hardware-related programming, so a developer's choice of nomenclature usually depends on whether the agent acting on the sink is a producer or a consumer of its content.
In graph theory
In a directed acyclic graph, a source node is a node (also known as a vertex) with no incoming connections from other nodes, while a sink node is a node without outgoing connections.
Directed acyclic graphs are used in instruction scheduling, neural networks and data compression.
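A minimal sketch (hypothetical example graph, not from the article) of finding the sources and sinks of a DAG given as an adjacency list:

```python
# Finding the sources and sinks of a DAG stored as an adjacency list.
edges = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d"],
    "d": [],
}

has_incoming = {v for targets in edges.values() for v in targets}
sources = [v for v in edges if v not in has_incoming]  # no incoming edges
sinks = [v for v in edges if not edges[v]]             # no outgoing edges
print(sources, sinks)  # ['a'] ['d']
```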
In stream processing
In several computer programs employing streams, such as GStreamer, PulseAudio, or PipeWire, a source is the starting point of a pipeline which produces a stream but does not consume any, while a sink is the end point which accepts a stream without producing any.
An example is an audio pipeline in the PulseAudio sound system. An input device such as a microphone is a type of audio source, while an output device like a speaker is the audio sink.
Other uses
The word sink has been used for both in |
https://en.wikipedia.org/wiki/Homoarchy | Homoarchy is "the relation of elements to one another when they are rigidly ranked one way only, and thus possess no (or not more than very limited) potential for being unranked or ranked in another or a number of different ways at least without cardinal reshaping of the whole socio-political order."
Homoarchy and Heterarchy
This notion is coupled with that of heterarchy, defined by Crumley as "the relation of elements to one another when they are unranked or when they possess the potential for being ranked in a number of different ways". Note that heterarchy is not the opposite of hierarchy altogether, but is rather the opposite of "homoarchy".
Homoarchy and Hierarchy
Homoarchy must not be identified with hierarchy (just as heterarchy must not be confused with egalitarianism in the proper meaning of the word). In any society, both "vertical" and "horizontal" social links may be observed. Moreover, it can be difficult to designate a society as "homoarchic" or "heterarchic" even at the most general level of analysis, as in the cases of the late-ancient Germans and early-medieval "Barbarian kingdoms", in which one can observe monarchy and a quite rigid social hierarchy combined with (at least at the beginning) democratic institutions and procedures (such as selection of the king) that were no less significant for the operation of the whole socio-political system. So it appears impossible to measure degrees of homoarchy and heterarchy in a society with mathematical exactness, for example as percentages. A purely quantitative approach is also inapplicable here: the presence of, say, five hierarchies in a society as an entity does not make it more heterarchic and less homoarchic in comparison with a society with four hierarchies if the former has a single dominant hierarchy and the latter does not. The pathway to evaluation of a society as heterarchic or homoarchic (in either absolute or relative categories) goes through an analysis of i
https://en.wikipedia.org/wiki/Essentially%20surjective%20functor | In mathematics, specifically in category theory, a functor
F : C → D is essentially surjective (or dense) if each object d of D is isomorphic to an object of the form F(c) for some object c of C.
Any functor that is part of an equivalence of categories is essentially surjective. As a partial converse, any full and faithful functor that is essentially surjective is part of an equivalence of categories.
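Writing F : C → D for the functor, the definition can be restated symbolically (a restatement added here for clarity):

```latex
% Essential surjectivity: every object of D is isomorphic
% to the image of some object of C.
\forall d \in \operatorname{Ob}(D)\ \exists c \in \operatorname{Ob}(C):\quad d \cong F(c)
```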
Notes |
https://en.wikipedia.org/wiki/Straight%20and%20Crooked%20Thinking | Straight and Crooked Thinking, first published in 1930 and revised in 1953, is a book by Robert H. Thouless which describes, assesses and critically analyses flaws in reasoning and argument. Thouless describes it as a practical manual, rather than a theoretical one.
Synopsis
Thirty-eight fallacies are discussed in the book. Among them are:
No. 3. proof by example, biased sample, cherry picking
No. 6. ignoratio elenchi: "red herring"
No. 9. false compromise/middle ground
No. 12. argument in a circle
No. 13. begging the question
No. 17. equivocation
No. 18. false dilemma: black and white thinking
No. 19. continuum fallacy (fallacy of the beard)
No. 21. ad nauseam: "argumentum ad nauseam" or "argument from repetition" or "argumentum ad infinitum"
No. 25. style over substance fallacy
No. 28. appeal to authority
No. 31. thought-terminating cliché
No. 36. special pleading
No. 37. appeal to consequences
No. 38. appeal to motive
See also
List of cognitive biases
List of common misconceptions
List of fallacies
List of memory biases
List of topics related to public relations and propaganda |
https://en.wikipedia.org/wiki/Chargaff%27s%20rules | Chargaff's rules state that in the DNA of any species and any organism, the amount of guanine should be equal to the amount of cytosine and the amount of adenine should be equal to the amount of thymine. Further, a 1:1 stoichiometric ratio of purine and pyrimidine bases (i.e., A+G = T+C) should exist. This pattern is found in both strands of the DNA. The rules were discovered by Austrian-born chemist Erwin Chargaff in the late 1940s.
Definitions
First parity rule
The first rule holds that a double-stranded DNA molecule globally has percentage base pair equality: A% = T% and G% = C%. The rigorous validation of the rule constitutes the basis of Watson–Crick pairs in the DNA double helix model.
Second parity rule
The second rule holds that both A% ≈ T% and G% ≈ C% are valid for each of the two DNA strands. This describes only a global feature of the base composition in a single DNA strand.
Research
The second parity rule was discovered in 1968. It states that, in single-stranded DNA, the number of adenine units is approximately equal to that of thymine (%A ≈ %T), and the number of cytosine units is approximately equal to that of guanine (%C ≈ %G).
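As an illustrative sketch (toy sequence, not real genomic data), both parity rules can be checked computationally; note that the second rule is only a statistical tendency in long natural sequences:

```python
# Toy check of Chargaff's parity rules.
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

strand1 = "ATGCGCATTA"
# The complementary strand, read 5'->3'.
strand2 = "".join(COMPLEMENT[b] for b in reversed(strand1))

# First parity rule: over the whole double-stranded molecule,
# #A == #T and #G == #C. This holds exactly, by complementarity.
duplex = Counter(strand1) + Counter(strand2)
assert duplex["A"] == duplex["T"] and duplex["G"] == duplex["C"]

# Second parity rule: within a single strand, %A ~ %T and %G ~ %C,
# but only approximately, and only for long sequences.
single = Counter(strand1)
print(single["A"], single["T"], single["G"], single["C"])  # 3 3 2 2
```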
The first empirical generalization of Chargaff's second parity rule, called the Symmetry Principle, was proposed by Vinayakumar V. Prabhu in 1993. This principle states that for any given oligonucleotide, its frequency is approximately equal to the frequency of its complementary reverse oligonucleotide. A theoretical generalization was mathematically derived by Michel E. B. Yamagishi and Roberto H. Herai in 2011.
In 2006, it was shown that this rule applies to four of the five types of double stranded genomes; specifically it applies to the eukaryotic chromosomes, the bacterial chromosomes, the double stranded DNA viral genomes, and the archaeal chromosomes. It does not apply to organellar genomes (mitochondria and plastids) smaller than ~20-30 kbp, nor does it apply to single stranded DNA (vira |
https://en.wikipedia.org/wiki/Integral%20equation | In mathematics, integral equations are equations in which an unknown function appears under an integral sign. In mathematical notation, integral equations may thus be expressed as being of the form f = T(u), where T is an integral operator acting on u. Hence, integral equations may be viewed as the analog to differential equations where instead of the equation involving derivatives, the equation contains integrals. A direct comparison can be seen between the mathematical form of the general integral equation above and the general form of a differential equation, which may be expressed as f = D(u), where D may be viewed as a differential operator of order i. Due to this close connection between differential and integral equations, one can often convert between the two. For example, one method of solving a boundary value problem is by converting the differential equation with its boundary conditions into an integral equation and solving the integral equation. In addition, because one can convert between the two, differential equations in physics such as Maxwell's equations often have an analog integral and differential form. See also, for example, Green's function and Fredholm theory.
Classification and overview
Various classification methods for integral equations exist. A few standard classifications include distinctions between linear and nonlinear; homogeneous and inhomogeneous; Fredholm and Volterra; first order, second order, and third order; and singular and regular integral equations. These distinctions usually rest on some fundamental property such as the consideration of the linearity of the equation or the homogeneity of the equation. These comments are made concrete through the following definitions and examples:
Linearity
: An integral equation is linear if the unknown function u(x) and its integrals appear linearly in the equation. Hence, an example of a linear equation would be u(x) = f(x) + λ ∫_a^b K(x,t) u(t) dt. As a note on naming convention: i) u(x) is called the unknown function, ii) f(x) is called a
https://en.wikipedia.org/wiki/Carleson%E2%80%93Jacobs%20theorem | In mathematics, the Carleson–Jacobs theorem, introduced by , describes the best approximation to a continuous function on the unit circle by a function in a Hardy space.
Notes |
https://en.wikipedia.org/wiki/Base%20change%20lifting | In mathematics, base change lifting is a method of constructing new automorphic forms from old ones, that corresponds in Langlands philosophy to the operation of restricting a representation of a Galois group to a subgroup.
The Doi–Naganuma lifting from 1967 was a precursor of the base change lifting. Base change lifting was introduced by Saito for Hilbert modular forms of cyclic totally real fields of prime degree, by comparing the trace of twisted Hecke operators on Hilbert modular forms with the trace of Hecke operators on ordinary modular forms. Shintani gave a representation theoretic interpretation of Saito's results and used this to generalize them. Langlands extended the base change lifting to more general automorphic forms and showed how to use the base change lifting for GL2 to prove the Artin conjecture for tetrahedral and some octahedral 2-dimensional representations of the Galois group.
Several authors subsequently gave expositions of the base change lifting for GL2 and its applications to the Artin conjecture.
Properties
If E/F is a finite cyclic Galois extension of global fields, then the base change lifting gives a map from automorphic forms for GLn(F) to automorphic forms for GLn(E) = ResE/FGLn(F). This base change lifting is a special case of Langlands functoriality, corresponding (roughly) to the diagonal embedding of the Langlands dual GLn(C) of GLn into the Langlands dual GLn(C)×...×GLn(C) of ResE/FGLn.
https://en.wikipedia.org/wiki/Rectoprostatic%20fascia | The rectoprostatic fascia (Denonvilliers' fascia) is a membranous partition at the lowest part of the rectovesical pouch. It separates the prostate and urinary bladder from the rectum. It consists of a single fibromuscular structure with several layers that are fused together and covering the seminal vesicles. It is also called Denonvilliers' fascia after French anatomist and surgeon Charles-Pierre Denonvilliers.
The structure corresponds to the rectovaginal fascia in the female. The rectoprostatic fascia also inhibits the posterior spread of prostatic adenocarcinoma; therefore invasion of the rectum is less common than is invasion of other contiguous structures. |
https://en.wikipedia.org/wiki/Relativistic%20wave%20equations | In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields.
The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wave functions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background).
In the Schrödinger picture, the wave function or field is the solution to the Schrödinger equation;
one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator.
More generally – the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group.
History
Early 1920s: Classical and quantum mechanics
The failure of classical mechanics applied to molecular, atomic, and nuclear systems, and smaller scales, induced the need for a new mechanics: quantum mechanics. The mathematical formulation was led by De Broglie, Bohr, Schrödinger, Pauli, Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ, the quantum of action, tends to zero. This is the correspondence
https://en.wikipedia.org/wiki/NAT%20Port%20Mapping%20Protocol | NAT Port Mapping Protocol (NAT-PMP) is a network protocol for establishing network address translation (NAT) settings and port forwarding configurations automatically without user effort. The protocol automatically determines the external IPv4 address of a NAT gateway, and provides means for an application to communicate the parameters for communication to peers. Apple introduced NAT-PMP in 2005 as part of the Bonjour specification, as an alternative to the more common ISO Standard Internet Gateway Device Protocol implemented in many NAT routers. The protocol was published as an informational Request for Comments (RFC) by the Internet Engineering Task Force (IETF) in RFC 6886.
NAT-PMP runs over the User Datagram Protocol (UDP) and uses port number 5351. It has no built-in authentication mechanisms because forwarding a port typically does not allow any activity that could not also be achieved using STUN methods. The benefit of NAT-PMP over STUN is that it does not require a STUN server and a NAT-PMP mapping has a known expiration time, allowing the application to avoid sending inefficient keep-alive packets.
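The packet layouts described in RFC 6886 are simple enough to construct by hand; the following sketch (illustrative only; packets are built but never sent) shows the two basic request types:

```python
# Building NAT-PMP request packets (layouts per RFC 6886).
import struct

NAT_PMP_PORT = 5351   # gateway UDP port used by the protocol
VERSION = 0

# External-address request: version byte plus opcode 0 (2 bytes total).
external_addr_request = struct.pack("!BB", VERSION, 0)

def udp_mapping_request(internal_port, external_port, lifetime_s):
    """Port-mapping request: opcode 1 (UDP), 16 reserved bits, internal
    port, suggested external port, and requested lifetime in seconds."""
    return struct.pack("!BBHHHI", VERSION, 1, 0,
                       internal_port, external_port, lifetime_s)

packet = udp_mapping_request(8080, 8080, 3600)
print(len(external_addr_request), len(packet))  # 2 12
```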
NAT-PMP is the predecessor to the Port Control Protocol (PCP).
See also
Port Control Protocol (PCP)
Internet Gateway Device Protocol (UPnP IGD)
Universal Plug and Play (UPnP)
NAT traversal
STUN
Zeroconf |
https://en.wikipedia.org/wiki/Niven%27s%20constant | In number theory, Niven's constant, named after Ivan Niven, is the largest exponent appearing in the prime factorization of any natural number n "on average". More precisely, if we define H(1) = 1 and H(n) = the largest exponent appearing in the unique prime factorization of a natural number n > 1, then Niven's constant is given by
lim_{N→∞} (1/N) Σ_{n=1}^{N} H(n) = 1 + Σ_{j=2}^{∞} (1 − 1/ζ(j)) ≈ 1.705,
where ζ is the Riemann zeta function.
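As an illustrative sketch (added here, not part of the article), the constant can be estimated empirically by averaging H(n) over an initial segment of the natural numbers:

```python
# Empirical estimate of Niven's constant: average of H(n) for n <= N,
# where H(n) is the largest exponent in the prime factorization of n.

def H(n):
    """Largest exponent in the prime factorization of n, with H(1) = 1."""
    if n == 1:
        return 1
    largest = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            largest = max(largest, e)
        d += 1
    if n > 1:               # a remaining prime factor has exponent 1
        largest = max(largest, 1)
    return largest

N = 100_000
average = sum(H(n) for n in range(1, N + 1)) / N
print(round(average, 3))   # approaches Niven's constant (about 1.705) as N grows
```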
In the same paper Niven also proved that
Σ_{n=1}^{N} h(n) = N + c·√N + o(√N),
where h(1) = 1, h(n) = the smallest exponent appearing in the unique prime factorization of each natural number n > 1, o is little-o notation, and the constant c is given by
c = ζ(3/2)/ζ(3),
and consequently that |
https://en.wikipedia.org/wiki/Sequence%20motif | In biology, a sequence motif is a nucleotide or amino-acid sequence pattern that is widespread and usually assumed to be related to biological function of the macromolecule. For example, an N-glycosylation site motif can be defined as Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro residue.
Overview
When a sequence motif appears in the exon of a gene, it may encode the "structural motif" of a protein; that is a stereotypical element of the overall structure of the protein. Nevertheless, motifs need not be associated with a distinctive secondary structure. "Noncoding" sequences are not translated into proteins, and nucleic acids with such motifs need not deviate from the typical shape (e.g. the "B-form" DNA double helix).
Outside of gene exons, there exist regulatory sequence motifs and motifs within the "junk", such as satellite DNA. Some of these are believed to affect the shape of nucleic acids (see for example RNA self-splicing), but this is only sometimes the case. For example, many DNA binding proteins that have affinity for specific DNA binding sites bind DNA in only its double-helical form. They are able to recognize motifs through contact with the double helix's major or minor groove.
Short coding motifs, which appear to lack secondary structure, include those that label proteins for delivery to particular parts of a cell, or mark them for phosphorylation.
Within a sequence or database of sequences, researchers search and find motifs using computer-based techniques of sequence analysis, such as BLAST. Such techniques belong to the discipline of bioinformatics. See also consensus sequence.
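For example, the N-glycosylation motif described above (Asn, then anything but Pro, then Ser or Thr, then anything but Pro) translates directly into a regular expression over one-letter amino-acid codes; this is an illustrative sketch, not from the article:

```python
# The N-glycosylation motif as a Python regular expression.
import re

motif = re.compile(r"N[^P][ST][^P]")  # Asn, not-Pro, Ser/Thr, not-Pro

assert motif.search("MKNVSA")      # NVSA is a valid site
assert not motif.search("MKNPSA")  # Pro after Asn blocks the site
assert not motif.search("MKNVSP")  # Pro after Ser/Thr blocks the site
print("motif checks passed")
```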
Motif Representation
Consider the N-glycosylation site motif mentioned above:
Asn, followed by anything but Pro, followed by either Ser or Thr, followed by anything but Pro
This pattern may be written as N{P}[ST]{P} where N = Asn, P = Pro, S = Ser, T = Thr; {X} means any amino acid except X; and [XY] means either X o |
https://en.wikipedia.org/wiki/Lexipedia | Lexipedia is an online visual semantic network with dictionary and thesaurus reference functionality built on Vantage Learning's Multilingual ConceptNet. Lexipedia presents words with their semantic relationships displayed in an animated visual word web. Lexipedia contains an expanded version of the English WordNet and supports six languages: English, Dutch, French, German, Italian, and Spanish.
https://en.wikipedia.org/wiki/Food%20and%20Beverage%20Workers%27%20Union | The Food and Beverage Workers' Union was a trade union representing workers in the food processing industry in South Africa.
The union was established in 1979 by Skakes Sikhakhane, after he had lost re-election as general secretary of the Sweet, Food and Allied Workers' Union. In 1980, it was a founding affiliate of the Council of Unions of South Africa, and by the following year, it had 6,000 members. By 1986, when it transferred to the new National Council of Trade Unions, it had grown to 16,124 members. In 1993, it merged with the National Union of Wine, Spirits and Allied Workers, to form the National Union of Food, Beverage, Wine, Spirit and Allied Workers. |
https://en.wikipedia.org/wiki/Defence%20Information%20Infrastructure | Defence Information Infrastructure (DII) is a secure military network owned by the United Kingdom's Ministry of Defence MOD. It is used by all branches of the armed forces, including the Royal Navy, British Army and Royal Air Force as well as MOD civil servants. It reaches to deployed bases and ships at sea, but not to aircraft in flight.
The partnership developing DII is called the Atlas Consortium and is made up of DXC Technology (formerly EDS), Fujitsu, Airbus Defence and Space (formerly EADS Defence & Security) and CGI (formerly Logica).
Starting in May 2016, MOD users of DII began to migrate to the New Style of IT within defence, known as MODNET, again supported by Atlas.
Overview
DII supports 2,000 MOD sites with some 150,000 terminals (desktops and laptops) and 300,000 user accounts. It is designed to offer a high level of resilience, flexibility, and security in the provision of connectivity from ‘business space to battlespace’ in MOD offices in the UK, bases overseas, at sea, and on the front line. It aims to rationalise and improve IT provision for the defence sector in the 21st century; involving a major culture change for MOD users and their ways of working through a structure of shared working areas with controlled security and access. It should provide a records management system and search facility together with a range of office services. It hosts several hundred COTS (commercial off-the-shelf) and bespoke MOD applications from a range of suppliers judged to meet the required security standards. The network handles alphanumeric data, graphics, and video. The system carries information from Restricted to above-Secret levels, but users are able to see only the data and applications for which they are authorised.
Incremental approach
In order to de-risk the programme Atlas and the MOD took an incremental approach to the development and implementation of DII, with a separate contract for each increment. The extended timeline allowed t |
https://en.wikipedia.org/wiki/TV%20accessory | A television accessory (TV accessory) is an accessory that is used in conjunction with a television (TV) or other compatible display devices and is intended to either improve the user experience or to offer new possibilities of using it.
History
It is difficult to say when the very first TV accessory was invented or when it hit the consumer market.
The first TV accessory with which owners could actively influence the content displayed on the screen in real time was the Magnavox Odyssey, the first commercial home video game console, released in September 1972 by Magnavox for a list price of $99.95.
One of the first TV accessories that could record TV programs available for consumers was the Clie Pega-VR100K by Sony, released on October 9, 2003, for a list price of $479.99.
As of 2017, TV accessories are a rapidly growing market which is expected to grow even more rapidly in the near future. Some of the most popular manufacturers of TV accessories include Sony, Magnavox, Apple, Nvidia, Amazon, Samsung, Google as well as many independent third-party suppliers.
Types
Soundbars
A soundbar (also called sound bar or media bar) is a type of loudspeaker that projects audio from a wide enclosure. Soundbars are one of the most popular TV accessories because they are affordable, very easy to install and a relatively large upgrade compared to other accessories, offering much better sound than most integrated TV loudspeakers.
Universal remotes
A universal remote is a remote control that can be programmed to operate various brands of one or more types of consumer electronics devices.
In 1985, Robin Rumbolt, William "Russ" McIntyre, and Larry Goodson with North American Philips Consumer Electronics (Magnavox, Sylvania, and Philco) developed the first universal remote control (U.S. Pat. #4774511), which Philips introduced on May 30, 1985 under the Magnavox brand name.
Streaming television
Streaming television is the digital distribution of telev |
https://en.wikipedia.org/wiki/Hering%E2%80%93Hillebrand%20deviation | The Hering–Hillebrand deviation describes the mismatch between the theoretical and empirical horopter. The horopter is the set of points that project to the same location in the two retinae (i.e. that have the same visual direction). Geometrically, the horopter is a circle passing through the nodal points of the two eyes and through the fixation point. This is known as the horizontal geometrical horopter, or the Vieth–Müller circle. It is the set of points that correspond geometrically to the intersection between visual lines at identical eccentricities. There is also a vertical horopter, a straight line in the sagittal plane passing through the intersection between the sagittal plane and the Vieth–Müller circle (typically the fixation point if the observer fixates straight ahead, but not necessarily).
An empirical horopter can be defined following different criteria. Following Hering, the term empirical horopter usually means the equal-visual-direction horopter, that is, the set of points that appear to have the same visual direction in both eyes. But the horopter can also be defined as the center of Panum's fusional area, the apparent fronto-parallel plane, or the locus of equal distance from fixation. All these empirical horopters in fact correspond, empirically, to the equal-visual-direction horopter.
The Hering–Hillebrand deviation describes the fact that the empirical horopter does not fall on the geometrical horopter. This was observed by Hering and Hillebrand at the same time, as well as by Helmholtz for the vertical horopter. At short fixation distances, the empirical horopter is a concave parabola flatter than a circle. At some given distance, called the abathic distance, the empirical horopter becomes a straight line, thus matching the apparent fronto-parallel plane. Finally, for fixation distances farther than the abathic distance, the empirical horopter is a convex parabola.
The origin of the Hering–Hillebrand deviation is still unclear. It was origi
https://en.wikipedia.org/wiki/B%C3%BCchi%20arithmetic | Büchi arithmetic of base k is the first-order theory of the natural numbers with addition and the function V_k(x), defined as the largest power of k dividing x, named in honor of the Swiss mathematician Julius Richard Büchi. The signature of Büchi arithmetic contains only the addition operation, V_k, and equality, omitting the multiplication operation entirely.
Unlike Peano arithmetic, Büchi arithmetic is a decidable theory. This means it is possible to effectively determine, for any sentence in the language of Büchi arithmetic, whether that sentence is provable from the axioms of Büchi arithmetic.
Büchi arithmetic and automata
A subset X ⊆ ℕ^n is definable in Büchi arithmetic of base k if and only if it is k-recognisable.
If n = 1, this means that the set of base-k representations of the integers in X is accepted by an automaton. Similarly, if n > 1, there exists an automaton that reads the first digits, then the second digits, and so on, of n integers in base k, and accepts the words if the n integers are in the relation X.
Properties of Büchi arithmetic
If k and l are multiplicatively dependent, then the Büchi arithmetics of base k and l have the same expressivity. Indeed, V_l can be defined in ⟨ℕ, +, V_k⟩, the first-order theory of ℕ with addition and V_k.
Otherwise, an arithmetic theory with both the V_k and V_l functions is equivalent to Peano arithmetic, which has both addition and multiplication, since multiplication is definable in ⟨ℕ, +, V_k, V_l⟩.
Further, by the Cobham–Semënov theorem, if a relation is definable in both k and l Büchi arithmetics, then it is definable in Presburger arithmetic. |
https://en.wikipedia.org/wiki/Volume%20of%20an%20n-ball | In geometry, a ball is a region in a space comprising all points within a fixed distance, called the radius, from a given point; that is, it is the region enclosed by a sphere or hypersphere. An n-ball is a ball in an n-dimensional Euclidean space. The volume of an n-ball is the Lebesgue measure of this ball, which generalizes to any dimension the usual volume of a ball in 3-dimensional space. The volume of an n-ball of radius R is V_n R^n, where V_n is the volume of the unit n-ball, the n-ball of radius 1.
The real number V_n can be expressed via a two-dimensional recurrence relation.
Closed-form expressions involve the gamma, factorial, or double factorial function.
The volume can also be expressed in terms of the area of the unit (n − 1)-sphere.
Formulas
The first volumes are as follows:
Closed form
The n-dimensional volume of a Euclidean ball of radius R in n-dimensional Euclidean space is:
V_n(R) = (π^(n/2) / Γ(n/2 + 1)) R^n
where Γ is Euler's gamma function. The gamma function is offset from but otherwise extends the factorial function to non-integer arguments. It satisfies Γ(n) = (n − 1)! if n is a positive integer and Γ(n + 1/2) = (n − 1/2) · (n − 3/2) ⋯ (1/2) · √π if n is a non-negative integer.
Two-dimensional recurrence relation
The volume can be computed without use of the gamma function. As is proved below using a vector-calculus double integral in polar coordinates, the volume of an n-ball of radius R can be expressed recursively in terms of the volume of an (n − 2)-ball, via the interleaved recurrence relation:
V_n(R) = (2πR² / n) V_{n−2}(R)
This allows computation of V_n(R) in approximately n/2 steps.
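The interleaved recurrence can be sketched in a few lines of Python (our own illustration, checked against the gamma-function closed form), using the base cases V_0(R) = 1 and V_1(R) = 2R:

```python
import math

def ball_volume_recursive(n, R=1.0):
    """Volume of the n-ball via the interleaved recurrence
    V_n(R) = (2*pi*R**2 / n) * V_{n-2}(R), with V_0 = 1 and V_1 = 2R."""
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0 * R
    return (2.0 * math.pi * R * R / n) * ball_volume_recursive(n - 2, R)

def ball_volume_closed_form(n, R=1.0):
    """Closed form V_n(R) = pi**(n/2) / Gamma(n/2 + 1) * R**n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n

# The two formulas agree; e.g. the 3-ball of radius 1 has volume 4*pi/3.
assert abs(ball_volume_recursive(3) - 4 * math.pi / 3) < 1e-12
```

Each recursive call drops the dimension by 2, so the depth (and step count) is about n/2, as stated above.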
Alternative forms
The volume can also be expressed in terms of the volume of an (n − 1)-ball using the one-dimensional recurrence relation:
V_n(R) = √π · (Γ(n/2 + 1/2) / Γ(n/2 + 1)) · R · V_{n−1}(R)
Inverting the above, the radius of an n-ball of volume V can be expressed recursively in terms of the radius of an (n − 1)- or (n − 2)-ball:
Using explicit formulas for particular values of the gamma function at the integers and half-integers gives formulas for the volume of a Euclidean ball in terms of factorials. For a non-negative integer k, these are:
V_{2k}(R) = (π^k / k!) R^{2k},    V_{2k+1}(R) = (2 (k!) (4π)^k / (2k + 1)!) R^{2k+1}
The volume can also be expressed in terms of double factorials. |
https://en.wikipedia.org/wiki/Collaborative%20diffusion | Collaborative Diffusion is a type of pathfinding algorithm which uses the concept of antiobjects, objects within a computer program that function opposite to what would be conventionally expected. Collaborative Diffusion is typically used in video games in which multiple agents must path towards a single target agent, such as the ghosts in Pac-Man pursuing the player. In this case, the background tiles serve as antiobjects, carrying out the necessary calculations for creating a path while the foreground objects simply react to the result, whereas conventionally the foreground objects themselves would be responsible for their own pathing.
Collaborative Diffusion is favored for its efficiency over other pathfinding algorithms, such as A*, when handling multiple agents. Also, this method allows elements of competition and teamwork to easily be incorporated between tracking agents. Notably, the time taken to calculate paths remains constant as the number of agents increases. |
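The mechanics can be sketched as follows (an illustrative Python sketch of our own; the function names, decay rate, and iteration count are assumptions, not from the source): each background tile repeatedly takes a damped average of its neighbours' "scent", the target tile holds a fixed maximum, and each pursuing agent simply steps to the neighbouring tile with the highest scent.

```python
# Illustrative sketch of collaborative diffusion on a small grid.
# The background tiles are the antiobjects: they do all the pathfinding work.

def diffuse(scent, target, iterations=50, decay=0.25):
    """Each tile repeatedly averages its neighbours' scent (damped by
    `decay`); the target tile is pinned at the maximum value 1.0."""
    rows, cols = len(scent), len(scent[0])
    for _ in range(iterations):
        nxt = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if (r, c) == target:
                    nxt[r][c] = 1.0
                    continue
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                vals = [scent[i][j] for i, j in nbrs
                        if 0 <= i < rows and 0 <= j < cols]
                nxt[r][c] = decay * sum(vals) / len(vals)
        scent = nxt
    return scent

def step_agent(scent, pos):
    """A pursuing agent just hill-climbs to the neighbour with most scent."""
    r, c = pos
    nbrs = [(i, j) for i, j in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if 0 <= i < len(scent) and 0 <= j < len(scent[0])]
    return max(nbrs, key=lambda p: scent[p[0]][p[1]])

grid = [[0.0] * 5 for _ in range(5)]
field = diffuse(grid, target=(0, 4))
# An agent placed at the opposite corner now moves toward the target.
```

Because the diffusion field is computed once per tick regardless of how many agents read it, the per-tick cost stays constant as agents are added, which is the efficiency property noted above.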
https://en.wikipedia.org/wiki/Tribimaximal%20mixing | Tribimaximal mixing is a specific postulated form for the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) lepton mixing matrix U. Tribimaximal mixing is defined by a particular choice of the matrix of moduli-squared of the elements of the PMNS matrix as follows:
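The matrix of moduli-squared referred to above, reconstructed here from the standard tribimaximal form, is:

```latex
|U|^2 =
\begin{pmatrix}
|U_{e1}|^2 & |U_{e2}|^2 & |U_{e3}|^2 \\
|U_{\mu 1}|^2 & |U_{\mu 2}|^2 & |U_{\mu 3}|^2 \\
|U_{\tau 1}|^2 & |U_{\tau 2}|^2 & |U_{\tau 3}|^2
\end{pmatrix}
=
\begin{pmatrix}
\tfrac{2}{3} & \tfrac{1}{3} & 0 \\
\tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{2} \\
\tfrac{1}{6} & \tfrac{1}{3} & \tfrac{1}{2}
\end{pmatrix}
```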
This mixing is historically interesting as it is quite close to reality when compared to other simple hypotheses where the squares of matrix elements take exact ratios, and also compared to the naive supposition that the matrix would be approximately diagonal like the CKM matrix. However, the precision of modern experiments means that such a simple form is excluded by experiment at a level of over 5σ, mainly because the tribimaximal scheme has a zero in the Ue3 element, but also (to a much lesser extent) because it predicts no violation of CP symmetry.
The tribimaximal mixing form was compatible with pre-2011 neutrino oscillation experiments and may be used as a zeroth-order approximation to more general forms for the PMNS matrix, including some that are consistent with the data. In the PDG convention for the PMNS matrix, tribimaximal mixing may be specified in terms of lepton mixing angles as follows:
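The tribimaximal values of the PDG mixing angles, reconstructed here from the standard form, are:

```latex
\sin^2\theta_{12} = \tfrac{1}{3}, \qquad
\sin^2\theta_{23} = \tfrac{1}{2}, \qquad
\theta_{13} = 0
```

i.e. θ12 ≈ 35.3° and θ23 = 45°.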
The above prediction has been falsified experimentally, because θ13 was found to be nontrivial, θ13 ≈ 8.5°.
A non-negligible value of θ13 had been foreseen in certain theoretical schemes that were put forward before tribimaximal mixing and that supported a large solar mixing before it was confirmed experimentally (these theoretical schemes do not have a special name, but for the reasons explained above, they could be called pre-tribimaximal or non-tribimaximal). This situation is not new: in the 1990s the solar mixing angle was likewise supposed to be small by most theorists, until KamLAND proved the contrary to be true.
Explanation of name
The name tribimaximal reflects the commonality of the tribimaximal mixing matrix with two previously proposed specific forms for the PMNS matrix, the trimaximal and |
https://en.wikipedia.org/wiki/Phase%20converter | A phase converter is a device that converts electric power provided as single phase to multiple phase or vice versa. The majority of phase converters are used to produce three-phase electric power from a single-phase source, thus allowing the operation of three-phase equipment at a site that only has single-phase electrical service. Phase converters are used where three-phase service is not available from the utility provider or is too costly to install. A utility provider will generally charge a higher fee for a three-phase service because of the extra equipment, including transformers, metering, and distribution wire required to complete a functional installation.
Types of Phase Converters
Three-phase induction motors may operate adequately on an unbalanced supply if not heavily loaded. This allows various imperfect techniques to be used. A single-phase motor can drive a three-phase generator, which will produce a high-quality three-phase source but at a high cost to the longevity of the system. While there are multiple phase conversion systems in place, the most common types are:
Rotary phase converters, constructed from a three-phase electric motor or generator "idler" and a simple on/off circuit. Rotary phase converters are known to drive up operating costs, due to a continued draw of power while idling that is not common in other phase converters. A rotary phase converter is a two-motor solution: one motor is not connected to a load and produces the three-phase power; the second motor, driving the load, runs on the power produced.
A digital phase converter uses a rectifier and inverter to create a third leg of power, which is added to the two legs of the single-phase source to create three-phase power. Unlike a phase-converting VFD, it cannot vary the frequency and motor speed, since it generates only one leg. Digital phase converters use a digital signal processor (DSP) to ensure the generated third leg matches the voltage and frequency of t |
https://en.wikipedia.org/wiki/Psophometric%20weighting | Psophometric weighting refers to any weighting curve used in the measurement of noise. In the field of audio engineering it has a more specific meaning, referring to noise weightings used especially in measuring noise on telecommunications circuits. Key standards are ITU-T O.41 and C-message weighting.
Use
A major use of noise weighting is in the measurement of residual noise in audio equipment, usually present as hiss or hum in quiet moments of programme material. The purpose of weighting here is to emphasise the parts of the audible spectrum that ears perceive most readily, and attenuate the parts that contribute less to perception of loudness, in order to get a measured figure that correlates well with subjective effect.
See also
Audio system measurements
Equal-loudness contour
Fletcher–Munson curves
Noise measurement
Headroom
Psophometric voltage
Rumble measurement
ITU-R 468 noise weighting
A-weighting
Weighting filter
Weighting
Weighting curve |
https://en.wikipedia.org/wiki/Bhatnagar%E2%80%93Gross%E2%80%93Krook%20operator | The Bhatnagar–Gross–Krook operator (abbreviated BGK operator) is a collision operator used in the Boltzmann equation and in the lattice Boltzmann method, a computational fluid dynamics technique. It is given by the following formula:
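The formula elided above is the standard single-relaxation-time form, reconstructed here in the usual lattice-Boltzmann notation:

```latex
\Omega_i = -\frac{1}{\tau}\left(f_i - f_i^{\mathrm{eq}}\right)
```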
where f_i^eq is a local equilibrium value for the population of particles in the direction of link i. The term τ is a relaxation time, and is related to the viscosity.
The operator is named after Prabhu L. Bhatnagar, Eugene P. Gross, and Max Krook, the three scientists who introduced it in a paper in Physical Review in 1954. |
https://en.wikipedia.org/wiki/Approach%20and%20departure%20angles | Approach angle is the maximum angle of a ramp onto which a vehicle can climb from a horizontal plane without interference. It is defined as the angle between the ground and the line drawn between the front tire and the lowest-hanging part of the vehicle at the front overhang. Departure angle is its counterpart at the rear of the vehicle – the maximum ramp angle from which the car can descend without damage. Approach and departure angles are also referred to as ramp angles.
Approach and departure angles are indicators of the off-road ability of a vehicle: they indicate how steep an obstacle, such as a rock or log, the vehicle can negotiate, according to its body shape alone.
See also
Breakover angle
Overhang (automotive)
Ride height |
https://en.wikipedia.org/wiki/Internal%20pressure | Internal pressure is a measure of how the internal energy of a system changes when it expands or contracts at constant temperature. It has the same dimensions as pressure, the SI unit of which is the pascal.
Internal pressure is usually given the symbol π_T. It is defined as a partial derivative of internal energy with respect to volume at constant temperature:
π_T = (∂U/∂V)_T
Thermodynamic equation of state
Internal pressure can be expressed in terms of temperature, pressure and their mutual dependence:
π_T = T (∂P/∂T)_V − P
This equation is one of the simplest thermodynamic equations. More precisely, it is a thermodynamic property relation, since it holds true for any system and connects the equation of state to one or more thermodynamic energy properties. Here we refer to it as a "thermodynamic equation of state."
Derivation of the thermodynamic equation of state
The fundamental thermodynamic equation states for the exact differential of the internal energy:
dU = T dS − P dV
Dividing this equation by dV at constant temperature gives:
(∂U/∂V)_T = T (∂S/∂V)_T − P
And using one of the Maxwell relations, (∂S/∂V)_T = (∂P/∂T)_V, this gives
π_T = T (∂P/∂T)_V − P
Perfect gas
In a perfect gas, there are no potential energy interactions between the particles, so any change in the internal energy of the gas is directly proportional to the change in the kinetic energy of its constituent species and therefore also to the change in temperature:
dU = C_V dT.
The internal pressure is evaluated at constant temperature, so dT = 0, which implies dU = 0 and finally
π_T = (∂U/∂V)_T = 0,
i.e. the internal energy of a perfect gas is independent of the volume it occupies. The above relation can be used as a definition of a perfect gas.
The relation can be proved without the need to invoke any molecular arguments. It follows directly from the thermodynamic equation of state if we use the ideal gas law PV = nRT. We have
π_T = T (∂P/∂T)_V − P = T (nR/V) − P = P − P = 0.
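This can be checked numerically with a short sketch (our own illustration; the state point and the van der Waals constants are arbitrary illustrative values): evaluate π_T = T·(∂P/∂T)_V − P with a finite-difference derivative for an ideal gas and for a van der Waals gas, where the analytic result is π_T = a/V².

```python
# Numerical check that pi_T = T*(dP/dT)_V - P vanishes for an ideal gas
# and equals a/V^2 for a van der Waals gas (illustrative constants).

def pressure_ideal(T, V, n=1.0, R=8.314):
    """Ideal gas law: P = n*R*T/V (SI units)."""
    return n * R * T / V

def pressure_vdw(T, V, a=0.138, b=3.2e-5, R=8.314):
    """Van der Waals equation for one mole: P = R*T/(V-b) - a/V^2."""
    return R * T / (V - b) - a / V**2

def internal_pressure(P_of_T, T, dT=1e-3):
    """pi_T = T*(dP/dT)_V - P, with a central-difference derivative."""
    dPdT = (P_of_T(T + dT) - P_of_T(T - dT)) / (2.0 * dT)
    return T * dPdT - P_of_T(T)

T, V = 300.0, 0.024  # an arbitrary state point (K, m^3)
assert abs(internal_pressure(lambda t: pressure_ideal(t, V), T)) < 1e-3
# For van der Waals, pi_T = a/V^2 (attractive forces dominate).
assert abs(internal_pressure(lambda t: pressure_vdw(t, V), T) - 0.138 / V**2) < 1.0
```

The ideal-gas result is exactly zero because P is linear in T at fixed V, so T·(∂P/∂T)_V reproduces P term by term.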
Real gases
Real gases have non-zero internal pressures because their internal energy changes as the gases expand isothermally: it can increase on expansion (π_T > 0, signifying presence of dominant attractive forces between the pa |
https://en.wikipedia.org/wiki/Michael%20Overton | Michael L. Overton is an American computer scientist and mathematician.
He is the Silver Professor of Computer Science and former Chair of the Computer Science department at Courant Institute of Mathematical Sciences, New York University. His research interests are in
numerical analysis, optimization, and scientific computing.
Education and career
Overton received his B.Sc. in Computer Science from the University of British Columbia in 1974 and received his Ph.D. in computer science from Stanford University in 1979, under the supervision of Gene Golub. He joined the Computer Science Department at Courant Institute of Mathematical Sciences, New York University soon after that, and served as its chair. He was the editor-in-chief of the SIAM Journal on Optimization from 1995 to 1999 and is an inaugural SIAM Fellow.
Works
Numerical Computing with IEEE Floating Point Arithmetic, SIAM, 2001. |
https://en.wikipedia.org/wiki/Respirator%20diplomacy%20of%20Taiwan | Respirator diplomacy of Taiwan refers to the exchange of masks between Taiwan and other countries, aimed at helping the global coronavirus response.
Background
After the outbreak of COVID-19, the global demand for face masks increased rapidly. By the end of January 2020, the Taiwanese Government pushed out a series of decisions on masks, including export restrictions, forced appropriation and investments, to allow Taiwan to become the second largest mask exporter globally. By March 2020, Taiwan successfully increased its production of face masks from 1.88 million to 10 million units per day, carried out rationing, and became one of the largest producers of face masks, second only to Mainland China. By the middle of May, face mask production had increased to 20 million units per day.
It also launched a hospital ship through the Pacific, providing ventilators and masks to nations unable to obtain medical support from other sources, like Palau.
In July 2021, Taiwan donated 150,000 masks to the Brazilian state of Goiás.
Diplomatic endowment
On 1 April 2020, President Tsai Ing-wen announced that Taiwan would donate 10 million masks to countries severely affected by the pandemic, including countries in the Americas and Europe, along with any other nation that has established full diplomatic relations with Taiwan.
According to the Taiwanese Foreign Ministry, as of July 2021, more than 54 million masks had been donated to over 80 countries since early 2020.
Reactions
Reactions from the European Union
The President of the European Commission, Ursula von der Leyen, sent a message on Twitter thanking Taiwan for its contributions.
See also
Medical diplomacy |
https://en.wikipedia.org/wiki/Nucleon%20magnetic%20moment | The nucleon magnetic moments are the intrinsic magnetic dipole moments of the proton and neutron, symbols μp and μn. The nucleus of an atom comprises protons and neutrons, both nucleons that behave as small magnets. Their magnetic strengths are measured by their magnetic moments. The nucleons interact with normal matter through either the nuclear force or their magnetic moments, with the charged proton also interacting by the Coulomb force.
The proton's magnetic moment, surprisingly large, was directly measured in 1933 by Otto Stern's team at the University of Hamburg. While the neutron was determined to have a magnetic moment by indirect methods in the mid-1930s, Luis Alvarez and Felix Bloch made the first accurate, direct measurement of the neutron's magnetic moment in 1940. The proton's magnetic moment is exploited to make measurements of molecules by proton nuclear magnetic resonance. The neutron's magnetic moment is exploited to probe the atomic structure of materials using scattering methods and to manipulate the properties of neutron beams in particle accelerators.
The existence of the neutron's magnetic moment and the large value for the proton magnetic moment indicate that nucleons are not elementary particles. For an elementary particle to have an intrinsic magnetic moment, it must have both spin and electric charge. The nucleons have spin ħ/2, but the neutron has no net charge. Their magnetic moments were puzzling and defied a valid explanation until the quark model for hadron particles was developed in the 1960s. The nucleons are composed of three quarks, and the magnetic moments of these elementary particles combine to give the nucleons their magnetic moments.
Description
The CODATA recommended value for the magnetic moment of the proton is μp = 2.79284734 μN. The best available measurement for the value of the magnetic moment of the neutron is μn = −1.91304273 μN. Here, μN is the nuclear magneton, a standard unit for the magnetic moments of nuclear components, and μB is the Bohr mag |
https://en.wikipedia.org/wiki/Straw%20wine | Straw wine, or raisin wine, is a wine made from grapes that have been dried off the vine to concentrate their juice. Under the classic method, after a careful hand harvest, selected bunches of ripe grapes are laid out on mats in full sun. (Originally the mats were made of straw, but these days the plastic nets used for the olive harvest are more likely to be employed.) The drying is usually done on well-exposed terraces near the wine press and takes around a week or longer. Small-scale productions were once laid out on flat roofs; if this still happens, it is extremely rare nowadays.
Under less labour-intensive versions of the technique, easily portable racks might be used instead of mats or nets, or the grapes are left lying on the ground beneath the vines, or even left hanging on the vine with the vine-arm cut or the stem twisted. Technically speaking the grapes must be cut off from the vine in order for the wine to be a 'straw wine'. If the grapes are just left to over-ripen before being harvested, even if this is to the point of raisining, this is a 'late harvest' wine.
The exact technique used varies according to local conditions, traditions and increasingly modern innovations. In some regions the grapes are laid first in the sun and later covered or they are covered at night to protect them against dew fall. In cooler, damper regions, the entire drying process takes place indoors in huts, attics or greenhouses with the bunches lying on racks or hanging up with good air circulation.
Straw wines are typically sweet to very sweet white wines, similar in density and sweetness to Sauternes but potentially sweeter. They are capable of long ageing. The low yields and labour-intensive production method means that they are quite expensive. Around Verona red grapes are dried, and are fermented in two different ways to make a strong dry red wine (Amarone) and a sweet red wine (Recioto della Valpolicella).
History
The technique dates |
https://en.wikipedia.org/wiki/Measurement%20of%20biodiversity | Conservation biologists have designed a variety of objective means to empirically measure biodiversity. Each measure of biodiversity relates to a particular use of the data. For practical conservationists, measurements should include . For others, a more economically defensible definition should allow the ensuring of continued possibilities for both adaptation and future use by humans, assuring environmental sustainability.
As a consequence, biologists argue that this measure is likely to be associated with the variety of genes. Since it cannot always be said which genes are more likely to prove beneficial, the best choice for conservation is to assure the persistence of as many genes as possible. For ecologists, this latter approach is sometimes considered too restrictive, as it prohibits ecological succession.
Taxonomic Diversity
Biodiversity is usually plotted as taxonomic richness of a geographic area, with some reference to a temporal scale. Whittaker described three common metrics used to measure species-level biodiversity, encompassing attention to species richness or species evenness:
Species richness - the simplest of the indices available.
Simpson index
Shannon-Wiener index
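The three metrics above can be sketched from a vector of per-species abundance counts (an illustrative Python sketch of our own; the formulas are the standard textbook ones):

```python
import math

def richness(counts):
    """Species richness: the number of species present."""
    return sum(1 for c in counts if c > 0)

def simpson_index(counts):
    """Simpson's index D = sum(p_i^2); often reported as 1 - D or 1/D."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts if c > 0)

def shannon_index(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

community = [50, 30, 15, 5]   # abundances of four species
assert richness(community) == 4
# A perfectly even community maximises H' at ln(richness).
even = [25, 25, 25, 25]
assert abs(shannon_index(even) - math.log(4)) < 1e-12
```

Richness ignores evenness entirely, whereas the Simpson and Shannon indices both weight species by their relative abundance, which is why the three metrics can rank the same communities differently.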
More recently, two new indices have been invented. The Mean Species Abundance Index (MSA) calculates the trend in population size of a cross section of the species. It does this in line with the CBD 2010 indicator for species abundance. The Biodiversity Intactness Index (BII) measures biodiversity change using abundance data on plants, fungi and animals worldwide. The BII shows how local terrestrial biodiversity responds to human pressures such as land use change and intensification.
Other Measures of Diversity
Alternatively, other types of diversity may be plotted against a temporal scale:
species diversity
ecological diversity
morphological diversity
genetic diversity
These different types of diversity may not be independent. There is, for example, a close link betw |
https://en.wikipedia.org/wiki/LVCMOS | Low voltage complementary metal oxide semiconductor (LVCMOS) is a low voltage class of CMOS technology digital integrated circuits.
Overview
To obtain better performance and lower costs, semiconductor manufacturers reduce the device geometries of integrated circuits. With each reduction the associated operating voltage must also be reduced in order to maintain the same basic operational characteristics of the transistors. As semiconductor technology has progressed, LVCMOS power supply voltage and interface standards for decreasing voltages have been defined by the Joint Electron Device Engineering Council (JEDEC) for digital logic levels lower than 5 volts. |
https://en.wikipedia.org/wiki/Soil%20classification | Soil classification deals with the systematic categorization of soils based on distinguishing characteristics as well as criteria that dictate choices in use.
Overview
Soil classification is a dynamic subject, from the structure of the system, to the definitions of classes, to the application in the field. Soil classification can be approached from the perspective of soil as a material and soil as a resource.
Inscriptions at the temple of Horus at Edfu outline a soil classification used by Tanen to determine what kind of temple to build at which site. Ancient Greek scholars produced a number of classifications based on several different qualities of the soil.
Engineering
Geotechnical engineers classify soils according to their engineering properties as they relate to use for foundation support or building material. Modern engineering classification systems are designed to allow an easy transition from field observations to basic predictions of soil engineering properties and behaviors.
The most common engineering classification system for soils in North America is the Unified Soil Classification System (USCS). The USCS has three major classification groups: (1) coarse-grained soils (e.g. sands and gravels); (2) fine-grained soils (e.g. silts and clays); and (3) highly organic soils (referred to as "peat"). The USCS further subdivides the three major soil classes for clarification. It distinguishes sands from gravels by grain size, classifying some as "well-graded" and the rest as "poorly-graded". Silts and clays are distinguished by the soils' Atterberg limits, and thus the soils are separated into "high-plasticity" and "low-plasticity" soils. Moderately organic soils are considered subdivisions of silts and clays and are distinguished from inorganic soils by changes in their plasticity properties (and Atterberg limits) on drying. The European soil classification system (ISO 14688) is very similar, differing primarily in coding and in adding an "intermediate-p |
https://en.wikipedia.org/wiki/Daniel%20McKinsey | Daniel Nicholas McKinsey is an American experimental physicist. McKinsey is a leader in the field of direct searches for dark matter interactions, and serves as Co-Spokesperson of the
Large Underground Xenon experiment and is an Executive Committee member of the LUX-ZEPLIN experiment. He serves as Director and Principal Investigator of the TESSERACT Project, and is also the Georgia Lee Distinguished Professor of Physics at the University of California, Berkeley.
Biography
Daniel N. McKinsey joined the University of California, Berkeley Physics Department faculty in July 2015. He received a B.S. in Physics with highest honors at the University of Michigan in 1995. His Ph.D. was awarded by Harvard University in 2002, with a thesis on the magnetic trapping, storage, and detection of ultracold neutrons in superfluid helium. His postdoctoral research was performed at Princeton University, and in 2003 he joined the Yale University physics department, where he was promoted to Full Professor in 2014. He was awarded a Packard Fellowship in Science and Engineering Fellowship and an Alfred P. Sloan Research Fellowship, and served on the 2013-2014 Particle Physics Project Prioritization Panel (P5).
Research interests
McKinsey's research centers on non-accelerator particle physics, particle astrophysics, and low temperature physics. In particular, his work is on the development, construction, and operation of new detectors using liquefied noble gases, which are useful in looking for physics beyond the Standard Model. Applications include the search for dark matter interactions with ordinary matter, searches for neutrinoless double beta decay, and the measurement of the low energy solar neutrino flux. He is especially interested in the physics of the response of liquefied noble gases to particle interactions, the calibration of these detectors so as to understand their response, and the overall development of new experimental techniques for reaching sensitivity to extremel |
https://en.wikipedia.org/wiki/Apamin | Apamin is an 18 amino acid globular peptide neurotoxin found in apitoxin (bee venom). Dry bee venom consists of 2–3% of apamin. Apamin selectively blocks SK channels, a type of Ca2+-activated K+ channel expressed in the central nervous system. Toxicity is caused by only a few amino acids, in particular cysteine1, lysine4, arginine13, arginine14 and histidine18. These amino acids are involved in the binding of apamin to the Ca2+-activated K+ channel. Due to its specificity for SK channels, apamin is used as a drug in biomedical research to study the electrical properties of SK channels and their role in the afterhyperpolarizations occurring immediately following an action potential.
Origin
The first symptoms of apitoxin (bee venom) now thought to be caused by apamin were described in 1936 by Hahn and Leditschke. Apamin was first isolated by Habermann in 1965 from Apis mellifera, the Western honey bee. Apamin was named after this bee. Bee venom contains many other compounds, like histamine, phospholipase A2, hyaluronidase, MCD peptide, and the main active component melittin. Apamin was separated from the other compounds by gel filtration and ion exchange chromatography.
Structure and active site
Apamin is a polypeptide possessing an amino acid sequence of H-Cys-Asn-Cys-Lys-Ala-Pro-Glu-Thr-Ala-Leu-Cys-Ala-Arg-Arg-Cys-Gln-Gln-His-NH2 (one-letter sequence CNCKAPETALCARRCQQH-NH2, with disulfide bonds between Cys1-Cys11 and Cys3-Cys15).
Apamin is very rigid because of the two disulfide bridges and seven hydrogen bonds. The three-dimensional structure of apamin has been studied with several spectroscopic techniques: ¹H NMR, circular dichroism, Raman spectroscopy, and FT-IR. The structure is presumed to consist of an alpha-helix and beta-turns, but the exact structure is still unknown.
By local alterations it is possible to find the amino acids that are involved in toxicity of apamin. It was found by Vincent et al. that guanidination of the ε-amino group of |
https://en.wikipedia.org/wiki/Roark%27s%20Formulas%20for%20Stress%20and%20Strain | Roark's Formulas for Stress and Strain is a mechanical engineering design book written by Richard G. Budynas and Ali M. Sadegh. It was first published in 1938 and the most current ninth edition was published in March 2020.
Subjects
The book covers various subjects, including bearing and shear stress, experimental stress analysis, stress concentrations, material behavior, and stress and strain measurement. It also features expanded tables and cases, improved notations and figures within the tables, consistent table and equation numbering, and verification of correction factors. The formulas are organized into tables in a hierarchical format: chapter, table, case, subcase, and each case and subcase is accompanied by diagrams.
It covers:
• The behavior of bodies under stress
• Analytical, numerical, and experimental methods
• Tension, compression, shear, and combined stress
• Beams and curved beams
• Torsion, flat plates, and columns
• Shells of revolution, pressure vessels, and pipes
• Bodies under direct pressure and shear stress
• Elastic stability
• Dynamic and temperature stresses
• Stress concentration
• Fatigue and fracture
• Stresses in fasteners and joints
• Composite materials and solid biomechanics
Topics
Here are the topics covered in the 7th Edition:
Chapter 1 – Introduction
Chapter 2 – Stress and Strain: Important Relationships
Chapter 3 – The Behavior of Bodies Under Stress
Chapter 4 – Principles and Analytical Methods
Chapter 5 – Numerical Methods
Chapter 6 – Experimental Methods
Chapter 7 – Tension, Compression, Shear, and Combined Stress
Chapter 8 – Beams; Flexure of Straight Bars
Chapter 9 – Bending of Curved Beams
Chapter 10 – Torsion
Chapter 11 – Flat Plates
Chapter 12 – Columns and Other Compression Members
Chapter 13 – Shells of Revolution; Pressure Vessels; Pipes
Chapter 14 – Bodies in Contact Undergoing Direct Bearing and Shear Stress
Chapter 15 – Elastic Stability
Chapter 16 – Dynamic and Temperature Stresses
Chapter 17 – Str |
https://en.wikipedia.org/wiki/Rheintaler%20Ribelmais | Rheintaler Ribelmais, Rheintaler Ribel or Türggenribel is a ground product that is made from a traditional type of maize grown in the Swiss Rhine Valley and Liechtenstein. Since summer 2000, Rheintaler Ribel AOP (formerly AOC) has been the only Swiss cereal product with a protected geographical indication. The name Ribelmais comes from the traditional dish, Ribel, from which it is made.
History
Maize plays an important role in the Rhine valley, which includes three regions from different countries: municipalities in the cantons of St. Gallen and Grisons (Switzerland), the Principality of Liechtenstein, and Vorarlberg (Austria). The Rhine valley, which runs from south to north here, is influenced by the Foehn wind and thus has a milder climate than the surrounding area, which is why maize could become established there. Maize is important in all three regions, both culturally and in their economic history.
Rheintaler Ribelmais has a large genetic diversity, since the best cobs were used in small-scale cultivation for new crops and, through centuries of artificial selection, varieties have been chosen that are optimally adapted to local conditions. In general, Ribelmais is characterised by the fact that it grows much better in spring under cool conditions than today's silage maize, which is why it has been used in recent years in Switzerland for producing new varieties.
History and tradition in Switzerland and in the Principality of Liechtenstein
The cultivation of maize in the Rhine valley is first reported in 1571 in Altstaetten. At that time, it was thought to have come from the Balkans, which is why the name Türggen or Türggenkorn (Turkish corn) came to be used for maize. After maize was established in the Rhine valley, it was eaten as a porridge breakfast dish, “Ribel”. The dish is still called that today, which is where the name Rheintaler Ribelmais comes from. The cultivation of Ribelmais in Liechtenstein was first mentioned in a note from 1713, which said that maize had been first cul |
https://en.wikipedia.org/wiki/Fergus%20W.%20Campbell | Fergus William Campbell (30 January 1924 – 3 May 1993) was a Scottish
vision scientist who conducted foundational research into the optics of the human eye, into the electrical activity of the brains of people experiencing various phenomena of vision, and into the sorts of images which, when shown to people, might reveal the processes of their visual systems. Campbell's research changed the course of vision science.
Life and education
Campbell was born on 30 January 1924 in Glasgow, Scotland. His father, William Campbell (1891–1968), was a pharmacist and doctor; his mother, Anne Fleming (1898–1984) was her future husband's counter assistant before taking care of their four children. Campbell was his parents' second child and their first boy. His parents provided an intellectually nurturing home full of books. Campbell received his primary and secondary education at Glasgow High School for Boys. As a child, Campbell was an avid reader and, encouraged by his teachers and father, had hobbies in chemistry, physics, optics, photography, electricity, and radio.
Campbell studied medicine at Glasgow University Medical School, graduating in 1946. After graduating, Campbell prepared to become an ophthalmologist, passing the Diploma of Ophthalmic Medicine and Surgery from the Royal College of Surgeons in 1948. But Campbell became more interested in research than in clinical practice, completing a PhD (on corneal wound healing) from Glasgow University in 1952, then an MD (on the depth of focus of the eye) in 1959.
Although Campbell's research work took him to Oxford University and then University of Cambridge, both in England, he remained true to his Scottish heritage and his humble origins. He was much beloved by colleagues and students, renowned for his kindliness, generosity, and stock of good stories.
While doing his medical studies at University of Glasgow, Campbell met his wife-to-be, Helen, who became a doctor. They married in 1947 and had four children, one of whom |
https://en.wikipedia.org/wiki/Beltrami%20equation | In mathematics, the Beltrami equation, named after Eugenio Beltrami, is the partial differential equation
∂w/∂z̄ = μ(z) ∂w/∂z,
for w a complex distribution of the complex variable z in some open set U, with derivatives that are locally L2, and where μ is a given complex function in L∞(U) of norm less than 1, called the Beltrami coefficient, and where ∂/∂z and ∂/∂z̄ are the Wirtinger derivatives. Classically this differential equation was used by Gauss to prove the existence locally of isothermal coordinates on a surface with analytic Riemannian metric. Various techniques have been developed for solving the equation. The most powerful, developed in the 1950s, provides global solutions of the equation on C and relies on the Lp theory of the Beurling transform, a singular integral operator defined on Lp(C) for all 1 < p < ∞. The same method applies equally well on the unit disk and upper half plane and plays a fundamental role in Teichmüller theory and the theory of quasiconformal mappings. Various uniformization theorems can be proved using the equation, including the measurable Riemann mapping theorem and the simultaneous uniformization theorem. The existence of conformal weldings can also be derived using the Beltrami equation. One of the simplest applications is to the Riemann mapping theorem for simply connected bounded open domains in the complex plane. When the domain has smooth boundary, elliptic regularity for the equation can be used to show that the uniformizing map from the unit disk to the domain extends to a C∞ function from the closed disk to the closure of the domain.
Metrics on planar domains
Consider a 2-dimensional Riemannian manifold, say with an (x, y) coordinate system on it. The curves of constant x on that surface typically don't intersect the curves of constant y orthogonally. A new coordinate system (u, v) is called isothermal when the curves of constant u do intersect the curves of constant v orthogonally and, in addition, the parameter spacing is the same — that is, fo |
https://en.wikipedia.org/wiki/Stevo%20Todor%C4%8Devi%C4%87 | Stevo Todorčević (; born February 9, 1955), is a Yugoslavian mathematician specializing in mathematical logic and set theory. He holds a Canada Research Chair in mathematics at the University of Toronto, and a director of research position at the Centre national de la recherche scientifique in Paris.
Early life and education
Todorčević was born in Ubovića Brdo. As a child he moved to Banatsko Novo Selo, and went to school in Pančevo. At Belgrade University, he studied pure mathematics, attending lectures by Đuro Kurepa. He began graduate studies in 1978, and wrote his doctoral thesis in 1979 with Kurepa as his advisor.
Research
Todorčević's work involves mathematical logic, set theory, and their applications to pure mathematics.
In Todorčević's 1978 master's thesis, he constructed a model of MA + ¬wKH in a way that allowed him to make the continuum any regular cardinal, and so derived a variety of topological consequences. Here MA is an abbreviation for Martin's axiom and wKH stands for the weak Kurepa Hypothesis.
In 1980, Todorčević and Abraham proved the existence of rigid Aronszajn trees and the consistency of MA + the negation of the continuum hypothesis + there exists a first countable S-space.
Awards and honours
Todorčević is the winner of
the first prize of the Balkan Mathematical Society for 1980 and 1982,
the 2012 CRM-Fields-PIMS prize in mathematical sciences, and
the Shoenfield prize of the Association for Symbolic Logic for "outstanding expository writing in the field of logic" in 2013, for his book Introduction to Ramsey Spaces.
He was selected by the Association for Symbolic Logic as their 2016 Gödel Lecturer.
He became a corresponding member of the Serbian Academy of Sciences and Arts as of 1991 and a full member of the Academy in 2009.
In 2016 Todorčević became a fellow of the Royal Society of Canada.
Todorčević has been described as "the greatest Serbian mathematician" since the time of Mihailo Petrović Alas.
Books
Todorčević is the author of |
https://en.wikipedia.org/wiki/FFTPACK | FFTPACK is a package of Fortran subroutines for the fast Fourier transform. It includes complex, real, sine, cosine, and quarter-wave transforms. It was developed by Paul Swarztrauber of the National Center for Atmospheric Research, and is included in the general-purpose mathematical library SLATEC.
Much of the package is also available in C and Java translations.
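As an illustration of the complex transform FFTPACK provides, the sketch below uses NumPy's FFT routines (NumPy's fft module was historically based on a C translation of FFTPACK, one of the translations mentioned above) on an arbitrary signal and checks the round trip through the inverse transform:

```python
import numpy as np

# Forward complex FFT of a small arbitrary signal, then the inverse transform.
x = np.array([1.0, 2.0, 3.0, 4.0])
X = np.fft.fft(x)          # complex DFT coefficients
x_back = np.fft.ifft(X)    # inverse transform recovers the input

print(np.allclose(x, x_back.real))  # True: the round trip is lossless
# The DC coefficient equals the sum of the samples.
print(X[0].real)  # 10.0
```

The same forward/inverse pairing (and the real, sine, cosine, and quarter-wave variants) is the core of the Fortran package itself.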
See also
FFTW
LAPACK |
https://en.wikipedia.org/wiki/Symbols%20of%20Saskatchewan | Saskatchewan is one of Canada's provinces, and has established several provincial symbols.
Symbols |
https://en.wikipedia.org/wiki/Australian%20Plant%20Name%20Index | The Australian Plant Name Index (APNI) is an online database of all published names of Australian vascular plants. It covers all names, whether current names, synonyms or invalid names. It includes bibliographic and typification details, information from the Australian Plant Census including distribution by state, links to other resources such as specimen collection maps and plant photographs, and the facility for notes and comments on other aspects.
History
Originally the brainchild of Nancy Tyson Burbidge, it began as a four-volume printed work consisting of 3,055 pages and containing over 60,000 plant names. Compiled by Arthur Chapman, it was part of the Australian Biological Resources Study (ABRS). In 1991 it was made available as an online database and handed over to the Australian National Botanic Gardens. Two years later, responsibility for its maintenance was given to the newly formed Centre for Plant Biodiversity Research.
Scope
Recognised by Australian herbaria as the authoritative source for Australian plant nomenclature, it is the core component of Australia's Virtual Herbarium, a collaborative project with A$10 million funding, aimed at providing integrated online access to the data and specimen collections of Australia's major herbaria.
Two query interfaces are offered:
Australian Plant Name Index (APNI), a full query interface that delivers full results, with no automatic interpretation, and
What's Its Name (WIN), a less powerful query interface that delivers concise results, augmented with automatic
See also
Atlas of Living Australia
Botanical nomenclature
Council of Heads of Australasian Herbaria
Index Kewensis
International Plant Names Index |
https://en.wikipedia.org/wiki/Oxymormyrus | Oxymormyrus is a small genus of elephantfish in the family Mormyridae. Its members reach about in length and are restricted to the Congo, Campo, Kouilou-Niari, Nyanga and Ogowe river basins in Middle Africa.
Taxonomy and species
The validity of this genus is disputed. Catalog of Fishes only includes one species, while a second (marked with a star* in the list) occasionally has been included. Both these are placed in Mormyrops by FishBase.
Oxymormyrus boulengeri (Pellegrin, 1900) (Alima River mormyrid)
Oxymormyrus zanclirostris* (Günther, 1867) (Pool elephantfish) |
https://en.wikipedia.org/wiki/Coherent%20control | Coherent control is a quantum mechanics-based method for controlling dynamic processes by light. The basic principle is to control quantum interference phenomena, typically by shaping the phase of laser pulses. The basic ideas have proliferated, finding vast application in spectroscopy, mass spectrometry, quantum information processing, laser cooling, ultracold physics and more.
Brief History
The initial idea was to control the outcome of chemical reactions. Two approaches were pursued:
in the time domain, a "pump-dump" scheme where the control is the time delay between pulses
in the frequency domain, interfering pathways controlled by one and three photons.
The two basic methods eventually merged with the introduction of optimal control theory.
Experimental realizations soon followed in the time domain and in the frequency domain. Two interlinked developments accelerated the field of coherent control: experimentally, it was the development of pulse shaping by a spatial light modulator and its employment in coherent control. The second development was the idea of automatic feedback control and its experimental realization.
Controllability
Coherent control aims to steer a quantum system from an initial state to a target state via an external field. For given initial and final (target) states, the coherent control is termed state-to-state control. A generalization is steering simultaneously an arbitrary set of initial pure states to an arbitrary set of final states i.e. controlling a unitary transformation. Such an application sets the foundation for a quantum gate operation.
Controllability of a closed quantum system has been addressed by Tarn and Clark. Their theorem, based in control theory, states that a finite-dimensional, closed quantum system is completely controllable, i.e. an arbitrary unitary transformation of the system can be realized by an appropriate application of the controls, if the control operators and the unperturbed Hamiltonian |
https://en.wikipedia.org/wiki/Internet%20Protocol%20Options | There are a number of optional parameters that may be present in an Internet Protocol version 4 datagram. They typically configure a number of behaviors such as for the method to be used during source routing, some control and probing facilities and a number of experimental features.
Loose source routing
Loose Source Routing is an IP option which can be used for address translation. LSR is also used to implement mobility in IP networks.
Loose source routing uses a source routing option in IP to record the set of routers a packet must visit. The destination of the packet is replaced with the next router the packet must visit. By setting the forwarding agent (FA) to one of the routers that the packet must visit, LSR is equivalent to tunneling. If the corresponding node stores the LSR options and reverses it, it is equivalent to the functionality in mobile IPv6.
The name loose source routing comes from the fact that only part of the path is set in advance.
Strict source routing
Strict source routing stands in contrast with loose source routing: with strict source routing, every step of the route along which the packet is sent is decided in advance.
Restrictions and considerations
The following two options are discouraged because they create security concerns: Loose Source and Record Route (LSRR) and Strict Source and Record Route (SSRR). Many routers block packets containing these options.
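Each IPv4 option begins with an option-type octet whose layout is fixed by RFC 791: a copied flag in the top bit, a 2-bit option class, and a 5-bit option number. LSRR is option number 3 with the copied flag set (type octet 131) and SSRR is option number 9 with the copied flag set (type octet 137). A minimal decoding sketch (the helper name is illustrative, not from any standard library):

```python
# Decode the option-type octet of an IPv4 option (RFC 791 layout):
# bit 7 = copied flag, bits 6-5 = option class, bits 4-0 = option number.
def decode_option_type(octet):
    return {
        "copied": bool(octet >> 7),
        "class": (octet >> 5) & 0b11,
        "number": octet & 0b11111,
    }

# LSRR (Loose Source and Record Route) has type octet 131 (0x83).
print(decode_option_type(131))  # {'copied': True, 'class': 0, 'number': 3}
# SSRR (Strict Source and Record Route) has type octet 137 (0x89).
print(decode_option_type(137))  # {'copied': True, 'class': 0, 'number': 9}
```

Filtering routers that block these options typically match on exactly these type octets in the IP header's options field.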
See also
Dynamic Source Routing
Source routing
Internet Protocol |
https://en.wikipedia.org/wiki/ZipBooks | ZipBooks is a free online accounting software company based in American Fork, Utah. The cloud-based software is an accounting and bookkeeping tool that helps business owners process credit cards, track finances, and send invoices, among other features.
History
ZipBooks was founded by Tim Chaves in June 2015, backed by venture capital firm Peak Ventures. The company secured an additional $2 million of funding in July 2016, and in 2017 it was awarded a $100,000 economic grant by the Utah Governor's Office of Economic Development Technology Commercialization and Innovation Program.
Products
ZipBooks' core modules are invoicing, transactions, bills, reporting, time tracking, contacts, and payroll. Accrual accounting was added in 2017.
The application is available on G Suite, iOS, Slack, and as a web application.
Reception
Computerworld compared ZipBooks favorably with other accounting software. PC Magazine praised its user experience, but stated it lacked "a lot of features that competing sites offer".
See also
Comparison of accounting software
Double-entry bookkeeping system
Software as a service
Time tracking software
Web application |
https://en.wikipedia.org/wiki/Common%20cold | The common cold or the cold is a viral infectious disease of the upper respiratory tract that primarily affects the respiratory mucosa of the nose, throat, sinuses, and larynx. Signs and symptoms may appear fewer than two days after exposure to the virus. These may include coughing, sore throat, runny nose, sneezing, headache, and fever. People usually recover in seven to ten days, but some symptoms may last up to three weeks. Occasionally, those with other health problems may develop pneumonia.
Well over 200 virus strains are implicated in causing the common cold, with rhinoviruses, coronaviruses, adenoviruses and enteroviruses being the most common. They spread through the air during close contact with infected people or indirectly through contact with objects in the environment, followed by transfer to the mouth or nose. Risk factors include going to child care facilities, not sleeping well, and psychological stress. The symptoms are mostly due to the body's immune response to the infection rather than to tissue destruction by the viruses themselves. The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose.
There is no vaccine for the common cold. The primary methods of prevention are hand washing; not touching the eyes, nose or mouth with unwashed hands; and staying away from sick people. Some evidence supports the use of face masks. There is also no cure, but the symptoms can be treated. Zinc may reduce the duration and severity of symptoms if started shortly after the onset of symptoms. Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen may help with pain. Antibiotics, however, should not be used, as all colds are caused by viruses, and there is no good evidence that cough medicines are effective.
The common cold is the most frequent infectious disease in humans. Under normal circumstances, the average adult gets two to three colds a year, while the average child may get six |
https://en.wikipedia.org/wiki/The%20Hillside%20Group | The Hillside Group is an educational nonprofit organization founded in August 1993 to help software developers analyze and document common development and design problems as software design patterns. The Hillside Group supports the patterns community through sponsorship of the Pattern Languages of Programs conferences.
History
In August 1993, Kent Beck and Grady Booch sponsored a mountain retreat in Colorado where a group converged on foundations for software patterns. Ward Cunningham, Ralph Johnson, Ken Auer, Hal Hildebrand, Grady Booch, Kent Beck, and Jim Coplien examined architect Christopher Alexander's work in pattern language and their own experiences as software developers to combine the concepts of objects and patterns and apply them to writing computer programs. The group agreed to build on Erich Gamma's study of object-oriented patterns, but to use patterns in a generative way in the sense that Alexander uses patterns for urban planning and architecture. They used the word generative to mean creational, to distinguish them from Gamma's patterns, which captured observations. The group was meeting on the side of a hill, which led them to name themselves the Hillside Group.
Since then, the Hillside Group has been incorporated as an educational non-profit organization. It sponsors and helps run Pattern Languages of Programs (PLoP) conferences such as PLoP, EuroPlop, ChiliPlop, GuruPLoP, Asian PLoP, Scrum PLoP, Viking PLoP and Sugarloaf PLoP. The Hillside Group has also worked on the Pattern Languages of Program Design series of books.
Activities
The Hillside Group sponsors the Pattern Languages of Programs conferences in various countries, including the U.S., Brazil, Norway, Germany, Australia, and Japan. The Hillside Group assisted in publishing the Pattern Languages of Program Design book series until 2006. Since 2006, The Hillside Group has published patterns and conference proceedings through the Association for Computing Machinery (ACM) Digital Library. |
https://en.wikipedia.org/wiki/Product%20integral | A product integral is any product-based counterpart of the usual sum-based integral of calculus. The first product integral (Type I below) was developed by the mathematician Vito Volterra in 1887 to solve systems of linear differential equations. Other examples of product integrals are the geometric integral (Type II below), the bigeometric integral (Type III below), and some other integrals of non-Newtonian calculus.
Product integrals have found use in areas from epidemiology (the Kaplan–Meier estimator) to stochastic population dynamics using multiplication integrals (multigrals), analysis and quantum mechanics. The geometric integral, together with the geometric derivative, is useful in image analysis and in the study of growth/decay phenomena (e.g., in economic growth, bacterial growth, and radioactive decay). The bigeometric integral, together with the bigeometric derivative, is useful in some applications of fractals, and in the theory of elasticity in economics.
This article adopts the "product" notation for product integration instead of the "integral" (usually modified by a superimposed "times" symbol or letter P) favoured by Volterra and others. An arbitrary classification of types is also adopted to impose some order in the field.
Basic definitions
The classical Riemann integral of a function f: [a, b] → R can be defined by the relation
∫_a^b f(x) dx = lim Σ_i f(x_i) Δx_i,
where the limit is taken over all partitions of the interval [a, b] whose norms approach zero.
Roughly speaking, product integrals are similar, but take the limit of a product instead of the limit of a sum. They can be thought of as "continuous" versions of "discrete" products.
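The "limit of a product" idea can be checked numerically. The sketch below (plain Python; the function and interval are chosen arbitrarily) approximates the geometric product integral, the limit of the finite products ∏ f(x_i)^Δx over ever finer partitions, and compares it with its closed form exp(∫ ln f(x) dx):

```python
import math

# Approximate the geometric (type II) product integral of f over [a, b]:
# the limit of prod f(x_i)**dx as the partition is refined.
def geometric_product_integral(f, a, b, n=100000):
    dx = (b - a) / n
    log_product = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx             # midpoint of the i-th subinterval
        log_product += dx * math.log(f(x))  # log of the factor f(x)**dx
    return math.exp(log_product)

# For f(x) = e^x on [0, 1]: exp(integral of ln f) = exp(1/2).
approx = geometric_product_integral(math.exp, 0.0, 1.0)
print(approx, math.exp(0.5))  # the two values agree to several decimals
```

Accumulating logarithms instead of multiplying the factors directly is equivalent but avoids floating-point underflow and overflow over many subintervals.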
The most popular product integrals are the following:
Type I: Volterra integral
The type I product integral corresponds to Volterra's original definition. The following relationship exists for scalar functions f: [a, b] → R:
∏_a^b (1 + f(x) dx) = exp(∫_a^b f(x) dx),
which is not a multiplicative operator. (So the concepts of product integral and multiplicative integral are not the same).
The Volterra product |
https://en.wikipedia.org/wiki/Anal%20hygiene | Anal hygiene or anal cleansing refers to hygienic practices that are performed on a person's anus, usually shortly after defecation. Post-defecation cleansing is rarely discussed academically, partly due to the social taboo. The scientific objective of post-defecation cleansing is to prevent exposure to pathogens while socially it becomes a cultural norm. The process of post-defecation cleansing involves either rinsing the anus and inner part of the buttocks with water or wiping the area with dry materials such as toilet paper. In water-based cleansing, either a hand is used for rubbing the area while rinsing it with the aid of running water or (in bidet systems) pressurized water is used. In either method subsequent hand sanitization is essential to achieve the ultimate objectives of post-defecation cleansing.
History
Ancient Greeks were known to use fragments of ceramic known as pessoi to perform anal cleansing.
Roman anal cleansing was done with a sponge on a stick called a tersorium. The stick would be soaked in a water channel in front of a toilet, and then stuck through the hole built into the front of the toilet for anal cleaning. The tersorium was shared by people using public latrines. To clean the sponge, they washed it in a bucket with water and salt or vinegar. This became a breeding ground for bacteria, causing the spread of disease in the latrine.
In ancient Japan, a wooden skewer known as chuugi ("shit sticks") was used for cleaning after defecation.
The use of toilet paper for post-defecation cleansing first started in China in the 2nd century BC. According to Charlier (2012) French novelist (and physician) François Rabelais had argued about the ineffectiveness of toilet paper in the 16th century. The first commercially available toilet paper was invented by Joseph Gayetty, a New York entrepreneur, in 1857 with the dawning of the second industrial revolution.
Cultural preferences
In predominantly Catholic countries, Eastern Orthodox, Hin |
https://en.wikipedia.org/wiki/OSHmail | OSHMail is an electronic newsletter from the European Agency for Safety and Health at Work (EU-OSHA).
It is a monthly summary of the latest news, events, and occupational safety and health information published at the EU-OSHA website. Most of the content is available in 25 languages. Currently more than 68,000 subscribers receive OSHmail each month; subscription is free.
External links
OSHA website
https://en.wikipedia.org/wiki/Phylogenetic%20profiling | Phylogenetic profiling is a bioinformatics technique in which the joint presence or joint absence of two traits across large numbers of species is used to infer a meaningful biological connection, such as involvement of two different proteins in the same biological pathway. Along with examination of conserved synteny, conserved operon structure, or "Rosetta Stone" domain fusions, comparing phylogenetic profiles is a designated "post-homology" technique, in that the computation essential to this method begins after it is determined which proteins are homologous to which. A number of these techniques were developed by David Eisenberg and colleagues; phylogenetic profile comparison was introduced in 1999 by Pellegrini, et al.
Method
Over 2000 species of bacteria, archaea, and eukaryotes are now represented by complete DNA genome sequences. Typically, each gene in a genome encodes a protein that can be assigned to a particular protein family on the basis of homology. For a given protein family, its presence or absence in each genome (in the original, binary, formulation) is represented by either 1 (present) or 0 (absent). Consequently, the phylogenetic distribution of the protein family can be represented by a long binary number with a digit for each genome; such binary representations are easily compared with each other to search for correlated phylogenetic distributions. The large number of complete genomes makes these profiles rich in information. The advantage of using only complete genomes is that the 0 values, representing the absence of a trait, tend to be reliable.
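The binary-profile comparison described above is straightforward to sketch. In the toy example below (hypothetical protein families and presence/absence values, not real genome data), each family's profile is a tuple of 0/1 digits, one per genome, and profiles are compared by Hamming distance; identical or near-identical profiles suggest a functional link:

```python
# Toy phylogenetic profiles: one 0/1 digit per genome (hypothetical data).
profiles = {
    "familyA": (1, 0, 1, 1, 0, 1),
    "familyB": (1, 0, 1, 1, 0, 1),  # identical profile to familyA
    "familyC": (0, 1, 0, 0, 1, 0),  # complementary profile
}

# Hamming distance: number of genomes where presence/absence disagrees.
def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

print(hamming(profiles["familyA"], profiles["familyB"]))  # 0 -> correlated
print(hamming(profiles["familyA"], profiles["familyC"]))  # 6 -> anti-correlated
```

A distance of 0 across many genomes is the signal the method looks for; real analyses scan all pairs of families over thousands of genomes and rank pairs by profile similarity.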
Theory
Closely related species should be expected to have very similar sets of genes. However, changes accumulate between more distantly related species by processes that include horizontal gene transfer and gene loss. Individual proteins have specific molecular functions, such as carrying out a single enzymatic reaction or serving as one subunit of a larger protein complex. A biological process su |
https://en.wikipedia.org/wiki/Fire%E2%80%93vegetation%20feedbacks%20and%20alternative%20stable%20states | The relationships between fire, vegetation, and climate create what is known as a fire regime. Within a fire regime, fire ecologists study the relationship between diverse ecosystems and fire; not only how fire affects vegetation, but also how vegetation affects the behavior of fire. The study of neighboring vegetation types that may be highly flammable and less flammable has provided insight into how these vegetation types can exist side by side, and are maintained by the presence or absence of fire events. Ecologists have studied these boundaries between different vegetation types, such as a closed canopy forest and a grassland, and hypothesized about how climate and soil fertility create these boundaries in vegetation types. Research in the field of pyrogeography shows how fire also plays an important role in the maintenance of dominant vegetation types, and how different vegetation types with distinct relationships to fire can exist side by side in the same climate conditions. These relationships can be described in conceptual models called fire–vegetation feedbacks, and alternative stable states.
Fire–vegetation feedbacks
Vegetation can be understood as highly flammable (pyrophilic) and less flammable (pyrophobic). A fire–vegetation feedback describes the relationship between fire and the dominant vegetation type. An example of a highly flammable vegetation type is a grassland. Frequent fire will maintain grassland as the dominant vegetation in a positive feedback loop. This happens because frequent fire will kill trees trying to establish in the area, yet the intervals between each fire will allow for new grasses to establish, grow into fuel, and burn again. Therefore, frequent fire on a grassland area will maintain grass as the dominant vegetation and not permit the encroachment of trees. In contrast, fire will occur less frequently and less severely in closed canopy forests because the fuels are more dense, shaded, and therefore more humid thereby not ign |
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles%20%280%E2%80%939%29 | The index of physics articles is split into multiple pages due to its size.
To navigate by individual letter use the table of contents below.
0–9
1/N expansion
100 Authors Against Einstein
120° parhelion
1958 Lituya Bay megatsunami
1964 PRL symmetry breaking papers
1s Slater-type function
22° halo
23rd International Solvay Conference in Physics
3-j symbol
331 model
3D ultrasound
46° halo
4GLS
4Pi Microscope
4Pi STED microscopy
4 pi laser
5 dimensional warped geometry theory
6-j symbol
9-j symbol
Indexes of physics articles |
https://en.wikipedia.org/wiki/Labyrinthine%20artery | The labyrinthine artery (auditory artery, internal auditory artery) is a branch of either the anterior inferior cerebellar artery or the basilar artery. It accompanies the vestibulocochlear nerve (CN VIII) through the internal acoustic meatus. It supplies blood to the internal ear.
Structure
The labyrinthine artery is a branch of either the anterior inferior cerebellar artery (AICA) or the basilar artery. It accompanies the vestibulocochlear nerve (CN VIII) through the internal acoustic meatus. It divides into a cochlear branch and a labyrinthine (or anterior vestibular) branch.
Function
The labyrinthine artery supplies blood to the inner ear. It also supplies the vestibulocochlear nerve (CN VIII) along its length.
Clinical significance
The labyrinthine artery may become occluded. This can cause loss of hearing and balance on the affected side.
History
The labyrinthine artery may also be known as the internal auditory artery or the auditory artery.
See also
Internal auditory veins |
https://en.wikipedia.org/wiki/Bacterial%20recombination | Bacterial recombination is a type of genetic recombination in bacteria characterized by DNA transfer from one organism called donor to another organism as recipient. This process occurs in three main ways:
Transformation, the uptake of exogenous DNA from the surrounding environment.
Transduction, the virus-mediated transfer of DNA between bacteria.
Conjugation, the transfer of DNA from one bacterium to another via cell-to-cell contact.
The final result of conjugation, transduction, and/or transformation is the production of genetic recombinants, individuals that carry not only the genes they inherited from their parent cells but also the genes introduced to their genomes by conjugation, transduction, and/or transformation.
Recombination in bacteria is ordinarily catalyzed by a RecA type of recombinase. These recombinases promote repair of DNA damages by homologous recombination.
The ability to undergo natural transformation is present in at least 67 bacterial species. Natural transformation is common among pathogenic bacterial species. In some cases, the DNA repair capability provided by recombination during transformation facilitates survival of the infecting bacterial pathogen. Bacterial transformation is carried out by numerous interacting bacterial gene products.
Evolution
Evolution in bacteria was previously viewed as a result of mutation or genetic drift. Today, genetic exchange, or gene transfer, is viewed as a major driving force in the evolution of prokaryotes. This driving force has been widely studied in organisms like E. coli. Bacteria reproduce asexually, with daughter cells being clones of the parent; in this clonal setting, random mutations arising during DNA replication can potentially help bacteria evolve. It was originally thought that only accumulated mutations helped bacteria evolve. In contrast, bacteria also import genes in a process called homologous recombination, first discovered by the observation of mosaic genes at loc |
https://en.wikipedia.org/wiki/Directional%20Cubic%20Convolution%20Interpolation | Directional Cubic Convolution Interpolation (DCCI) is an edge-directed image scaling algorithm created by Dengwen Zhou and Xiaoliu Shen.
By taking into account the edges in an image, this scaling algorithm reduces artifacts common to other image scaling algorithms. For example, staircase artifacts on diagonal lines and curves are eliminated.
The algorithm resizes an image to 2x its original dimensions, minus 1.
The algorithm
The algorithm works in three main steps:
Copy the original pixels to the output image, with gaps between the pixels.
Calculate the pixels for the diagonal gaps.
Calculate the pixels for the remaining horizontal and vertical gaps.
Calculating pixels in diagonal gaps
Evaluation of diagonal pixels is done on the original image data in a 4×4 region, with the new pixel that is being calculated in the center, in the gap between the original pixels. This can also be thought of as the 7×7 region in the enlarged image centered on the new pixel to calculate, and the original pixels have already been copied.
The algorithm decides one of three cases:
Edge in up-right direction — interpolates along down-right direction.
Edge in down-right direction — interpolates along up-right direction.
Smooth area — interpolates in both directions, then multiplies the values by weights.
Calculating diagonal edge strength
Let d1 be the sum of edges in the up-right direction, and d2 be the sum of edges in the down-right direction.
To calculate d1, take the sum of abs(P(X, Y) - P(X - 1, Y + 1)), in the region of X = 1 to 3, and Y = 0 to 2.
To calculate d2, take the sum of abs(P(X, Y) - P(X + 1, Y + 1)), in the region of X = 0 to 2, and Y = 0 to 2.
Interpolating pixels
If (1 + d1) / (1 + d2) > 1.15, then there is an edge in the up-right direction. If (1 + d2) / (1 + d1) > 1.15, then there is an edge in the down-right direction.
Otherwise, one is in a smooth area. To avoid division and floating-point operations, this can also be expressed as 100 * (1 + d1) |
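The edge-strength sums and the threshold test described above can be sketched as follows, assuming the 4×4 block of original pixels is indexed as P[Y][X] (the helper names are illustrative):

```python
# Diagonal edge strengths for DCCI over a 4x4 block P, indexed P[Y][X].
def diagonal_strengths(P):
    # d1: sum of |P(X, Y) - P(X-1, Y+1)| for X = 1..3, Y = 0..2 (up-right direction)
    d1 = sum(abs(P[y][x] - P[y + 1][x - 1]) for y in range(3) for x in range(1, 4))
    # d2: sum of |P(X, Y) - P(X+1, Y+1)| for X = 0..2, Y = 0..2 (down-right direction)
    d2 = sum(abs(P[y][x] - P[y + 1][x + 1]) for y in range(3) for x in range(3))
    return d1, d2

def classify(P, threshold=1.15):
    d1, d2 = diagonal_strengths(P)
    if (1 + d1) / (1 + d2) > threshold:
        return "edge up-right"    # interpolate along the down-right direction
    if (1 + d2) / (1 + d1) > threshold:
        return "edge down-right"  # interpolate along the up-right direction
    return "smooth"               # blend both directions with weights

flat = [[5] * 4 for _ in range(4)]
print(classify(flat))  # "smooth": no variation along either diagonal
```

A diagonal gradient such as P[Y][X] = X + Y is constant along the up-right diagonals, so d1 = 0 while d2 is large, and the block is classified as having a down-right edge.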
https://en.wikipedia.org/wiki/Carlitz%20exponential | In mathematics, the Carlitz exponential is a characteristic p analogue to the usual exponential function studied in real and complex analysis. It is used in the definition of the Carlitz module – an example of a Drinfeld module.
Definition
We work over the polynomial ring Fq[T] of one variable over a finite field Fq with q elements. The completion C∞ of an algebraic closure of the field Fq((T−1)) of formal Laurent series in T−1 will be useful. It is a complete and algebraically closed field.
First we need analogues to the factorials, which appear in the definition of the usual exponential function. For i > 0 we define
Di := (T^(q^i) − T^(q^(i−1))) · (T^(q^i) − T^(q^(i−2))) · ... · (T^(q^i) − T),
and D0 := 1. Note that the usual factorial is inappropriate here, since n! vanishes in Fq[T] unless n is smaller than the characteristic of Fq[T].
Using this we define the Carlitz exponential eC: C∞ → C∞ by the convergent sum
eC(z) := Σ over i ≥ 0 of z^(q^i)/Di = z + z^q/D1 + z^(q^2)/D2 + ...
Relation to the Carlitz module
The Carlitz exponential satisfies the functional equation
eC(Tz) = T·eC(z) + (eC(z))^q = (T + τ)(eC(z)),
where we may view τ as the q-th power map z ↦ z^q, or as an element of the ring C∞{τ} of noncommutative polynomials in τ. By the universal property of polynomial rings in one variable this extends to a ring homomorphism ψ: Fq[T] → C∞{τ}, defining a Drinfeld Fq[T]-module over C∞{τ}. It is called the Carlitz module. |