https://en.wikipedia.org/wiki/Bixin | Bixin is an apocarotenoid found in the seeds of the achiote tree (Bixa orellana) from which it derives its name. It is commonly extracted from the seeds to form annatto, a natural food coloring, containing about 5% pigments of which 70–80% are bixin.
Applications
Several thousand tons are harvested annually.
Chemical properties
Bixin is unstable. It isomerizes into trans-bixin (β-bixin), the double-bond isomer.
Bixin is soluble in fats and alcohols but insoluble in water. Upon exposure to alkali, the methyl ester is hydrolyzed to produce the dicarboxylic acid norbixin, a water-soluble derivative. |
https://en.wikipedia.org/wiki/Universal%20conductance%20fluctuations | Universal conductance fluctuations (UCF) in mesoscopic physics is a phenomenon encountered in electrical transport experiments on mesoscopic samples. The measured electrical conductance varies from sample to sample, mainly due to inhomogeneously distributed scattering sites. The fluctuations originate from coherence effects in the electronic wavefunctions, so the phase-coherence length must be larger than the momentum relaxation length (the mean free path, set by scattering events such as phonon scattering). UCF is most pronounced when electrical transport is in the weak localization regime: for weakly localized samples, the root-mean-square fluctuation in conductance is of the order of the fundamental conductance scale e²/h, regardless of the number of conduction channels.
Many factors influence the amplitude of UCF. At zero temperature and without decoherence, the UCF is influenced mainly by two factors: the symmetry and the shape of the sample. Recently, a third key factor, the anisotropy of the Fermi surface, has also been found to fundamentally influence the amplitude of UCF.
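As a quick numerical check of the universal scale involved, the conductance quantum scale e²/h can be evaluated from the SI-exact values of the elementary charge and the Planck constant (a minimal sketch; constants are hard-coded):

```python
E = 1.602176634e-19   # elementary charge in coulombs (exact in the 2019 SI)
H = 6.62607015e-34    # Planck constant in joule-seconds (exact in the 2019 SI)

# Universal amplitude scale of conductance fluctuations, in siemens
ucf_scale = E**2 / H
```

This comes out to roughly 3.9 × 10⁻⁵ S, i.e. about 39 microsiemens, independent of sample details.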
See also
Speckle patterns, the optical analogues of conductance fluctuation patterns. |
https://en.wikipedia.org/wiki/Paprika%20oleoresin | Paprika oleoresin (also known as paprika extract and oleoresin paprika) is an oil-soluble extract from the fruits of Capsicum annuum or Capsicum frutescens, and is primarily used as a colouring and/or flavouring in food products. It is composed of vegetable oil (often in the range of 97% to 98%), capsaicin, the main flavouring compound giving pungency in higher concentrations, and capsanthin and capsorubin, the main colouring compounds (among other carotenoids). It is much milder than capsicum oleoresin, often containing no capsaicin at all.
Extraction is performed by percolation with a variety of solvents, primarily hexane, which are removed prior to use. Vegetable oil is then added to ensure a uniform color saturation.
Uses
Foods colored with paprika oleoresin include cheese, orange juice, spice mixtures, sauces, sweets, ketchup, soups, fish fingers, chips, pastries, fries, dressings, seasonings, jellies, bacon, ham, ribs, and among other foods even cod fillets. In poultry feed, it is used to deepen the colour of egg yolks.
In the United States, paprika oleoresin is listed as a color additive “exempt from certification”. In Europe, paprika oleoresin (extract), and the compounds capsanthin and capsorubin are designated by E160c.
Names and CAS nos |
https://en.wikipedia.org/wiki/Onium | An onium (plural: onia) is a bound state of a particle and its antiparticle. These states are usually named by adding the suffix -onium to the name of one of the constituent particles (replacing an -on suffix when present). One exception involves the muon: because "muonium" already denotes the bound state of an antimuon and an electron, a muon–antimuon bound pair is instead called "true muonium" to avoid confusion with the older nomenclature.
Examples
Positronium is an onium which consists of an electron and a positron bound together as a long-lived metastable state. Positronium has been studied since the 1950s to understand bound states in quantum field theory. A recent development called non-relativistic quantum electrodynamics (NRQED) used this system as a proving ground.
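Because the reduced mass of the electron–positron pair is half the electron mass, positronium's Bohr energy levels are half those of hydrogen, giving a ground-state binding energy of about 6.8 eV. A minimal sketch of this scaling (the function name is illustrative, and the Bohr model is only a leading-order approximation):

```python
RYDBERG_EV = 13.605693  # hydrogen Rydberg energy in eV (infinite-nuclear-mass value)

def positronium_level(n):
    """Bohr energy of positronium level n.

    The reduced mass of an e+ e- pair is m_e/2, so every level is at
    half the corresponding hydrogen energy: E_n = -(Ry/2) / n**2.
    """
    return -RYDBERG_EV / 2.0 / n**2

ground = positronium_level(1)   # roughly -6.8 eV
```

The n-squared scaling reproduces the familiar hydrogen-like spectrum, just compressed by a factor of two.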
Pionium, a bound state of two oppositely charged pions, is interesting for exploring the strong interaction. This should also be true of protonium. The true analogs of positronium in the theory of strong interactions are the quarkonium states: they are mesons made of a heavy quark and antiquark (namely, charmonium and bottomonium). Exploration of these states through non-relativistic quantum chromodynamics (NRQCD) and lattice QCD is an increasingly important test of quantum chromodynamics.
Understanding bound states of hadrons such as pionium and protonium is also important in order to clarify notions related to exotic hadrons such as mesonic molecules and pentaquark states.
See also
Exotic atom
mesons
Footnotes |
https://en.wikipedia.org/wiki/Soil%20biodiversity | Soil biodiversity refers to the relationship of soil to biodiversity and to aspects of the soil that can be managed in relation to biodiversity. Soil biodiversity relates to some catchment management considerations.
Biodiversity
According to the Australian Department of the Environment and Water Resources, biodiversity is "the variety of life: the different plants, animals and micro-organisms, their genes and the ecosystems of which they are a part." Biodiversity and soil are strongly linked, because soil is the medium for a large variety of organisms, and interacts closely with the wider biosphere. Conversely, biological activity is a primary factor in the physical and chemical formation of soils.
Soil provides a vital habitat, primarily for microbes (including bacteria and fungi), but also for microfauna (such as protozoa and nematodes), mesofauna (such as microarthropods and enchytraeids), and macrofauna (such as earthworms, termites, and millipedes). The primary role of soil biota is to recycle organic matter that is derived from the "above-ground plant-based food web".
Soil is in close cooperation with the wider biosphere. The maintenance of fertile soil is "one of the most vital ecological services the living world performs", and the "mineral and organic contents of soil must be replenished constantly as plants consume soil elements and pass them up the food chain".
The correlation of soil and biodiversity can be observed spatially. For example, both natural and agricultural vegetation boundaries correspond closely to soil boundaries, even at continental and global scales.
A "subtle synchrony" is how Baskin (1997) describes the relationship that exists between the soil and the diversity of life, above and below the ground. It is not surprising that soil management has a direct effect on biodiversity. This includes practices that influence soil volume, structure, biological, and chemical characteristics, and whether soil exhibits adverse effects such as re |
https://en.wikipedia.org/wiki/Betalain | Betalains are a class of red and yellow tyrosine-derived pigments found in plants of the order Caryophyllales, where they replace anthocyanin pigments. Betalains also occur in some higher order fungi. They are most often noticeable in the petals of flowers, but may color the fruits, leaves, stems, and roots of plants that contain them. They include pigments such as those found in beets.
Description
The name "betalain" comes from the Latin name of the common beet (Beta vulgaris), from which betalains were first extracted. The deep red color of beets, bougainvillea, amaranth, and many cacti results from the presence of betalain pigments. The particular shades of red to purple are distinctive and unlike that of anthocyanin pigments found in most plants.
There are two categories of betalains:
Betacyanins include the reddish to violet betalain pigments; those present in plants include betanin, isobetanin, probetanin, and neobetanin.
Betaxanthins are the betalain pigments that appear yellow to orange; those present in plants include vulgaxanthin, miraxanthin, portulaxanthin, and indicaxanthin.
The physiological function of betalains in plants is uncertain, but there is some evidence that they may have fungicidal properties. Additionally, betalains have been found in fluorescent flowers, though their role in these plants is also uncertain.
Chemistry
Betalains (betacyanins) were first isolated, and their chemical structure determined, in 1960 at the University of Zurich by Dr. Tom Mabry. It was once thought that betalains were related to anthocyanins, the reddish pigments found in most plants. Both betalains and anthocyanins are water-soluble pigments found in the vacuoles of plant cells. However, betalains are structurally and chemically unlike anthocyanins, and the two have never been found together in the same plant. For example, betalains contain nitrogen whereas anthocyanins do not.
It is now known that betalains are aromatic indo |
https://en.wikipedia.org/wiki/Transponder%20%28satellite%20communications%29 | A communications satellite's transponder is the series of interconnected units that form a communications channel between the receiving and the transmitting antennas.
It is mainly used in satellite communication to transfer the received signals.
A transponder is typically composed of:
an input band-limiting device (an input band-pass filter),
an input low-noise amplifier (LNA), designed to amplify the signals received from the Earth station (normally very weak, because of the large distances involved),
a frequency translator (normally composed of an oscillator and a frequency mixer) used to convert the frequency of the received signal to the frequency required for the transmitted signal,
an output band-pass filter,
a power amplifier (this can be a traveling-wave tube or a solid-state amplifier).
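The frequency translator's role can be sketched numerically. In the common C-band convention the local oscillator runs at 2225 MHz, so a 6105 MHz uplink is retransmitted at 3880 MHz (a simplified sketch; the function name and the single fixed offset are illustrative, and real link budgets involve more than this subtraction):

```python
def downlink_frequency(uplink_mhz, lo_mhz=2225.0):
    """Frequency translation in a bent-pipe transponder.

    The mixer subtracts the local-oscillator frequency from the received
    uplink carrier, shifting it into the downlink band (C-band convention:
    ~6 GHz up, ~4 GHz down, 2225 MHz offset).
    """
    return uplink_mhz - lo_mhz

down = downlink_frequency(6105.0)   # 3880.0 MHz
```

Keeping the uplink and downlink in separate bands lets the satellite receive and transmit simultaneously without the power amplifier's output swamping the LNA's input.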
Most communication satellites are radio relay stations in orbit and carry dozens of transponders, each with a bandwidth of tens of megahertz. Most transponders operate on a "bent pipe" (i.e., u-bend) principle, sending back to Earth what goes into the conduit with only amplification and a shift from uplink to downlink frequency. However, some modern satellites use on-board processing, where the signal is demodulated, decoded, re-encoded and modulated aboard the satellite. This type, called a "regenerative" transponder, is more complex, but has many advantages: it improves the signal-to-noise ratio, since the signal is regenerated in the digital domain, and it permits selective processing of the data in the digital domain.
With data compression and multiplexing, several video (including digital video) and audio channels may travel through a single transponder on a single wideband carrier.
Original analog video only had one channel per transponder, with subcarriers for audio and automatic transmission-identification service ATIS. Non-multiplexed radio stations can also travel in single channel per carrier (SCPC) mode, with multiple carriers (analog or digital) per tr |
https://en.wikipedia.org/wiki/Chirikov%20criterion | The Chirikov criterion or Chirikov resonance-overlap criterion
was established by the Russian physicist Boris Chirikov.
Back in 1959, he published a seminal article,
where he introduced the very first physical criterion for the onset of chaotic motion in
deterministic Hamiltonian systems. He then applied such a criterion to explain
puzzling experimental results on plasma confinement in magnetic bottles
obtained by Rodionov at the Kurchatov Institute.
Description
According to this criterion, a deterministic trajectory begins to move between two nonlinear resonances in a chaotic and unpredictable manner in the parameter range S = Δω_r / Δ_d > 1. Here ε is the perturbation parameter, while S is the resonance-overlap parameter, given by the ratio of the unperturbed resonance width in frequency Δω_r (often computed in the pendulum approximation and proportional to the square root of the perturbation ε) to the frequency difference Δ_d between two unperturbed resonances. Since its introduction, the Chirikov criterion has become an important analytical tool for the determination of the chaos border.
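The criterion is most often illustrated with the Chirikov standard map, for which resonance overlap predicts a transition to global chaos near K ≈ 1 (the numerically established chaos border is K_c ≈ 0.9716). A minimal sketch of the map iteration (the function name is illustrative):

```python
import math

def standard_map(p, x, K, steps):
    """Iterate the Chirikov standard map.

    One step: p' = p + K*sin(x), x' = x + p', both taken mod 2*pi.
    For K below the chaos border, momentum is confined by invariant
    curves; above it, trajectories diffuse across phase space.
    """
    two_pi = 2.0 * math.pi
    for _ in range(steps):
        p = (p + K * math.sin(x)) % two_pi
        x = (x + p) % two_pi
    return p, x

p_final, x_final = standard_map(0.5, 1.0, 0.2, 1000)
```

Plotting many such trajectories for increasing K shows the last invariant curves breaking up as the resonances overlap, which is exactly the transition the Chirikov criterion estimates.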
See also
Chirikov criterion at Scholarpedia
Chirikov standard map and standard map
Boris Chirikov and Boris Chirikov at Scholarpedia |
https://en.wikipedia.org/wiki/Galileo%20GDS | Galileo is a computer reservations system (CRS) owned by Travelport. As of 2000, it had a 26.4% share of worldwide CRS airline bookings.
In addition to airline reservations, the Galileo CRS is also used to book train travel, cruises, car rental, and hotel rooms.
The Galileo system was moved from Denver, Colorado, to the Worldspan datacenter in Atlanta, Georgia, on September 28, 2008, following the 2007 merger of Travelport and Worldspan (although they now share the same datacenter, they continue to be run as separate systems).
Galileo is subject to the CAPPS II and its successor Secure Flight program for the selection of passengers with a risk profile.
Galileo is a member of the International Air Transport Association, of the OpenTravel Alliance and of SITA.
History
Galileo traces its roots back to 1971, when United Airlines created its first computerized central reservation system under the name Apollo. During the 1980s and early 1990s, a significant proportion of airline tickets were sold by travel agents, and flights by the airline owning the reservation system had preferential display on the computer screen. Due to the high market penetration of the Sabre and Apollo systems, owned by American Airlines and United Airlines respectively, Worldspan and Galileo were created by other airline groups in an attempt to gain market share in the computer reservation system market and, by extension, the commercial airline market. Galileo was formed in 1987 by nine European carriers: British Airways, KLM Royal Dutch Airlines, Alitalia, Swissair, Austrian Airlines, Olympic, Sabena, Air Portugal and Aer Lingus.
In response and to prevent possible government intervention, United Airlines spun off its Apollo reservation system, which was then controlled by Covia. Galileo International was born when Covia acquired Europe's Galileo and merged it with the Apollo system in 1992.
The Apollo reservation system was used by United Airlines until 3 March 2012, when it switched t |
https://en.wikipedia.org/wiki/Security%20descriptor | Security descriptors are data structures of security information for securable Windows objects, that is objects that can be identified by a unique name. Security descriptors can be associated with any named objects, including files, folders, shares, registry keys, processes, threads, named pipes, services, job objects and other resources.
Security descriptors contain discretionary access control lists (DACLs) that contain access control entries (ACEs) that grant or deny access to trustees such as users or groups. They also contain a system access control list (SACL) that controls auditing of object access. ACEs may be explicitly applied to an object or inherited from a parent object. The order of ACEs in an ACL is important, with access-denied ACEs appearing higher in the order than ACEs that grant access. Security descriptors also record the object's owner.
Mandatory Integrity Control is implemented through a new type of ACE on a security descriptor.
Files and folder permissions can be edited by various tools including Windows Explorer, WMI, command line tools like Cacls, XCacls, ICacls, SubInACL, the freeware Win32 console FILEACL, the free software utility SetACL, and other utilities. To edit a security descriptor, a user needs WRITE_DAC permissions to the object, a permission that is usually delegated by default to administrators and the object's owner.
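On Windows, security descriptors are commonly serialized in the Security Descriptor Definition Language (SDDL), e.g. `O:BAG:SYD:(A;;FR;;;WD)` for owner, group, and a DACL of parenthesized ACEs. The following is a deliberately simplified, illustrative parser for the DACL portion only; it assumes plain six-field ACEs and ignores DACL flags, conditional ACEs, and SACLs, so it is a sketch rather than a complete SDDL implementation:

```python
import re

def parse_sddl_aces(sddl):
    """Extract ACEs from the DACL portion of an SDDL string (simplified).

    SDDL ACEs have the shape (type;flags;rights;object_guid;inherit_guid;sid),
    e.g. (A;;FR;;;WD) = Allow, file-read rights, to Everyone (WD).
    """
    m = re.search(r"D:[^(]*((?:\([^)]*\))+)", sddl)
    if not m:
        return []
    aces = []
    for ace in re.findall(r"\(([^)]*)\)", m.group(1)):
        ace_type, flags, rights, obj, iobj, sid = ace.split(";")[:6]
        aces.append({"type": ace_type, "rights": rights, "trustee": sid})
    return aces

aces = parse_sddl_aces("O:BAG:SYD:(A;;FR;;;WD)(D;;FW;;;BG)")
```

Note that the deny ACE appears in the list in its stored order; as described above, Windows expects deny ACEs to precede allow ACEs within the DACL.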
Permissions in NTFS
The following table summarizes NTFS permissions and their roles (one per row). The table exposes the following information:
Permission code: Each access control entry (ACE) specifies its permission with a binary code. There are 14 codes (12 in older systems).
Meaning: Each permission code has a meaning, depending on whether it is applied to a file or a folder. For example, code 0x01 on file indicates the permission to read the file, while on a folder indicates the permission to list the content of the folder. Knowing the meaning alone, however, is useless. An ACE must also spec |
https://en.wikipedia.org/wiki/Bruce%20Lee%20%28video%20game%29 | Bruce Lee is a platform game written by Ron J. Fortier for the Atari 8-bit family and published in 1984 by Datasoft. The graphics are by Kelly Day and music by John A. Fitzpatrick. The player takes the role of Bruce Lee, while a second player controls either Yamo or alternates with player one for control of Bruce Lee.
Commodore 64 and Apple II versions were released the same year. The game was converted to the ZX Spectrum and Amstrad CPC and published by U.S. Gold. It was the first U.S. Gold release featuring a famous individual. An MSX version was published in 1985 by Comptiq.
Gameplay
The plot involves the eponymous martial artist advancing from chamber to chamber in a wizard's tower, seeking to claim infinite wealth and the secret of immortality. There are twenty chambers, each represented by a single screen with platforms and ladders. To progress, the player must collect a number of lanterns suspended from various points in the chamber.
Most chambers are guarded by two mobile enemies: the Ninja, who attacks with a "bokken stick", and the Green Yamo, a large unarmed warrior, visually styled as a sumo wrestler but attacking with punches and "crushing kicks". On platforms with sufficient graphics support, Yamo's skin is actually pictured as green, though in cover art he has a natural human skin tone.
A multiplayer mode allows a second player to control Yamo, or to allow two players to alternately control Bruce. If the player playing Yamo is inactive for a certain time, the computer takes over. The Ninja and Yamo are also vulnerable to the screen's dangers, but have infinite lives so they always return; whereas Yamo is consistently identified as a single person, one version of the manual implies that each reappearance of the ninja is a new individual, replacing the previous one.
Later chambers include more hazards such as mines and moving walls, as well as a "comb-like" surface that has an electric spark racing along it. Skilful walking, climbing, ducking and j |
https://en.wikipedia.org/wiki/Rhyniella | Rhyniella is a genus of fossil springtails (Collembola) from the Rhynie chert, which formed during the Pragian stage of the Early Devonian. One species has been described, Rhyniella praecursor. For some time it was believed to be the only hexapod from the Early Devonian.
History
Its remains were discovered in 1919. Reconstructed from the scattered bits and pieces of its exoskeleton, R. praecursor was described in 1926 and at first believed to be a larval insect. The same study also described the euthycarcinoid Heterocrania and supposed larval insect mouthparts, later redescribed as Rhyniognatha hirsti in 1928 and considered an insect or myriapod.
Description
Rhyniella grew to a length of about 1–2 mm and would have been a scavenger, feeding on rotting matter. |
https://en.wikipedia.org/wiki/EpiData | EpiData is a group of applications used in combination for creating documented data structures and analysis of quantitative data.
Overview
The EpiData Association, which created the software, was created in 1999 and is based in Denmark. EpiData was developed in Pascal and uses open standards such as HTML where possible.
EpiData is widely used by organizations and individuals to create and analyze large amounts of data. The World Health Organization (WHO) uses EpiData in its STEPS method of collecting epidemiological, medical, and public health data, for biostatistics, and for other quantitative-based projects.
Epicentre, the research wing of Médecins Sans Frontières, uses EpiData to manage data from its international research studies and field epidemiology studies, e.g. Piola P, Fogg C, et al., "Supervised versus unsupervised intake of six-dose artemether-lumefantrine for treatment of acute, uncomplicated Plasmodium falciparum malaria in Mbarara, Uganda: a randomised trial", Lancet. 2005 Apr 23–29;365(9469):1467–73.
EpiData has two parts:
EpiData Entry – used for simple or programmed data entry and data documentation; it handles simple forms or related systems.
EpiData Analysis – performs basic statistical analysis, graphing, and comprehensive data management, such as recoding data and labelling values and variables. This application can create control charts, such as Pareto charts or p-charts, among many other methods for visualizing and describing statistical data.
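As an illustration of the kind of control chart such a tool produces, a p-chart places 3-sigma limits around the average proportion defective: centre line p̄, limits p̄ ± 3·sqrt(p̄(1−p̄)/n). A minimal sketch assuming equal sample sizes (the function name and data are illustrative, not EpiData's API):

```python
import math

def p_chart_limits(defectives, sample_size):
    """Centre line and 3-sigma control limits for a p-chart.

    defectives: count of nonconforming units in each sample
    sample_size: common size n of every sample (equal sizes assumed)
    """
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(p_bar * (1.0 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3.0 * sigma)   # proportions cannot go below 0
    ucl = min(1.0, p_bar + 3.0 * sigma)   # or above 1
    return lcl, p_bar, ucl

lcl, centre, ucl = p_chart_limits([4, 6, 5, 7, 3], sample_size=100)
```

Samples whose observed proportion falls outside [lcl, ucl] are flagged as showing special-cause variation.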
The software is free; development is funded by governmental and non-governmental organizations like WHO.
See also
Clinical surveillance
Disease surveillance
Epidemiological methods
Control chart |
https://en.wikipedia.org/wiki/Travelport | Travelport Worldwide Ltd provides distribution, technology, and payment solutions for the travel and tourism industry. It is the smallest, by revenue, of the top three global distribution systems (GDS) after Amadeus IT Group and Sabre Corporation.
The company also provides IT services to airlines, such as shopping, ticketing, and departure control.
History
The company was formed by Cendant in 2001 following its acquisitions of Galileo GDS for $2.9 billion and CheapTickets for $425 million.
In 2004, the company acquired Orbitz for $1.25 billion and Flairview Travel for $88 million.
In 2005, the company acquired eBookers for $350 million and Gullivers Travel Associates for $1.1 billion.
In August 2006, Cendant sold Orbitz and Galileo to The Blackstone Group for $4.3 billion, forming Travelport.
In August 2007, Travelport acquired Worldspan for $1.4 billion.
In July 2007, the company completed the partial corporate spin-off of Orbitz via an initial public offering.
In May 2010, the company acquired Sprice.com.
In 2011, the company sold Gullivers Travel Associates to Kuoni Travel for $720 million.
On September 25, 2014, the company became a public company via an initial public offering on the New York Stock Exchange.
In 2015, Travelport acquired Mobile Travel Technologies for €55 million.
On March 10, 2023, the company acquired Deem from Enterprise Holdings for an undisclosed amount.
Awards
In 2017, Travelport was the first GDS to be awarded the International Air Transport Association NDC (New Distribution Capability) Level 3 certification as an aggregator of travel content. In 2018, it became the first GDS operator to manage the live booking of flights using the NDC standard.
Acquisition
On May 30, 2019, the company was acquired by affiliates of Siris Capital Group and Evergreen Coast Capital, an affiliate of Elliott Management Corporation, for $4.4 billion. |
https://en.wikipedia.org/wiki/International%20Journal%20of%20Systematic%20and%20Evolutionary%20Microbiology | The International Journal of Systematic and Evolutionary Microbiology is a peer-reviewed scientific journal covering research in the field of microbial systematics that was established in 1951. Its scope covers the taxonomy, nomenclature, identification, characterisation, culture preservation, phylogeny, evolution, and biodiversity of all microorganisms, including prokaryotes, yeasts and yeast-like organisms, protozoa and algae. The journal is currently published monthly by the Microbiology Society.
An official publication of the International Committee on Systematics of Prokaryotes (ICSP) and International Union of Microbiological Societies (Bacteriology and Applied Microbiology Division), the journal is the single official international forum for the publication of new species names for prokaryotes. In addition to research papers, the journal also publishes the minutes of meetings of the ICSP and its various subcommittees.
Background and history
From the first identification of a bacterial species in 1872, microbial species were named according to the binomial nomenclature, based on largely subjective descriptive characteristics. By the end of the 19th century, however, it was clear that this nomenclature and classification system required reform. Although several different comprehensive nomenclature systems were invented (most notably, that described in Bergey's Manual of Determinative Bacteriology, first published in 1923), none gained international recognition. In 1930, a single international body, now named the International Committee on Systematics of Prokaryotes (ICSP), was established to oversee all aspects of prokaryotic nomenclature. Work began in 1936 on drafting a Code of Bacteriological Nomenclature, the first version of which was approved in 1947.
In 1950, at the 5th International Congress for Microbiology, a journal was established to disseminate the committee's conclusions to the microbiological community. It first appeared the following year un |
https://en.wikipedia.org/wiki/List%20of%20Runge%E2%80%93Kutta%20methods | Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation y'(t) = f(t, y(t)) with initial value y(t_0) = y_0.
Explicit Runge–Kutta methods take the form y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i, with stages k_1 = f(t_n, y_n) and k_i = f(t_n + c_i h, y_n + h \sum_{j=1}^{i-1} a_{ij} k_j).
Stages for implicit methods of s stages take the more general form k_i = f(t_n + c_i h, y_n + h \sum_{j=1}^{s} a_{ij} k_j), with the solution to be found over all s stages simultaneously.
Each method listed on this page is defined by its Butcher tableau, which puts the coefficients c_i, a_{ij}, and b_i of the method in a table as follows:
For adaptive and implicit methods, the Butcher tableau is extended to give values of b*_i, and the estimated error is then e_{n+1} = h \sum_{i=1}^{s} (b_i - b*_i) k_i.
Explicit methods
The explicit methods are those where the coefficient matrix [a_ij] is strictly lower triangular.
Forward Euler
The Euler method is first order. The lack of stability and accuracy limits its popularity mainly to use as a simple introductory example of a numeric solution method.
Explicit midpoint method
The (explicit) midpoint method is a second-order method with two stages (see also the implicit midpoint method below):
Heun's method
Heun's method is a second-order method with two stages. It is also known as the explicit trapezoid rule, improved Euler's method, or modified Euler's method. (Note: The "eu" is pronounced the same way as in "Euler", so "Heun" rhymes with "coin"):
Ralston's method
Ralston's method is a second-order method with two stages and a minimum local error bound:
Generic second-order method
Kutta's third-order method
Generic third-order method
See Sanderse and Veldman (2019).
for α ≠ 0, 2/3, 1:
Heun's third-order method
Van der Houwen's/Wray third-order method
Ralston's third-order method
Ralston's third-order method is used in the embedded Bogacki–Shampine method.
Third-order Strong Stability Preserving Runge-Kutta (SSPRK3)
Classic fourth-order method
The "original" Runge–Kutta method.
3/8-rule fourth-order method
This method doesn't have as much notoriety as the "classic" method, but is just as classic because it was proposed in the same paper (Kutta, 1901).
Ralston's fourth-order method
This fourth order method has minimum truncation er |
https://en.wikipedia.org/wiki/Strong%20focusing | In accelerator physics strong focusing or alternating-gradient focusing is the principle that, using sets of multiple electromagnets, it is possible to make a particle beam simultaneously converge in both directions perpendicular to the direction of travel. By contrast, weak focusing is the principle that nearby circles, described by charged particles moving in a uniform magnetic field, only intersect once per revolution.
Earnshaw's theorem shows that simultaneous focusing in two directions transverse to the beam axis at once by a single magnet is impossible - a magnet which focuses in one direction will defocus in the perpendicular direction. However, iron "poles" of a cyclotron or two or more spaced quadrupole magnets (arranged in quadrature) can alternately focus horizontally and vertically, and the net overall effect of a combination of these can be adjusted to focus the beam in both directions.
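The stabilizing effect of alternating gradients can be checked directly with thin-lens 2×2 transfer matrices: a focus–drift–defocus–drift cell of equal lens strengths has trace 2 − L²/f², so motion stays bounded over many cells whenever |trace| ≤ 2, i.e. f ≥ L/2. A minimal sketch under the thin-lens approximation (names are illustrative):

```python
def matmul(A, B):
    """Product of two 2x2 matrices represented as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fodo_trace(f, L):
    """Trace of one focus-drift-defocus-drift cell (thin-lens approximation)."""
    F = [[1.0, 0.0], [-1.0 / f, 1.0]]  # focusing thin lens, focal length f
    D = [[1.0, 0.0], [1.0 / f, 1.0]]   # defocusing lens of the same strength
    O = [[1.0, L], [0.0, 1.0]]         # field-free drift of length L
    M = matmul(F, matmul(O, matmul(D, O)))
    return M[0][0] + M[1][1]

# Bounded (stable) motion over many cells requires |trace| <= 2.
stable = abs(fodo_trace(1.0, 1.0)) <= 2     # f >= L/2: stable
unstable = abs(fodo_trace(0.4, 1.0)) <= 2   # f < L/2: not stable
```

Swapping F and D gives the other transverse plane the same trace, which is the algebraic statement that the alternating arrangement focuses in both directions at once.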
Strong focusing was first conceived by Nicholas Christofilos in 1949 but not published (Christofilos opted instead to patent his idea). In 1952, the strong focusing principle was independently developed by Ernest Courant, M. Stanley Livingston, Hartland Snyder and J. Blewett at Brookhaven National Laboratory, who later acknowledged the priority of Christofilos' idea. The advantages of strong focusing were quickly realised and deployed on the Alternating Gradient Synchrotron.
Courant and Snyder found that the net effect of alternating the field gradient was that both the vertical and horizontal focusing of protons could be made strong at the same time, allowing tight control of proton paths in the machine. This increased beam intensity while reducing the overall construction cost of a more powerful accelerator. The theory revolutionised cyclotron design and permitted very high field strengths to be employed, while massively reducing the size of the magnets needed by minimising the size of the beam. Most particle accelerators today use the strong-focusing princi |
https://en.wikipedia.org/wiki/Animal%20coloration | Animal colouration is the general appearance of an animal resulting from the reflection or emission of light from its surfaces. Some animals are brightly coloured, while others are hard to see. In some species, such as the peafowl, the male has strong patterns, conspicuous colours and is iridescent, while the female is far less visible.
There are several separate reasons why animals have evolved colours. Camouflage enables an animal to remain hidden from view. Animals use colour to advertise services such as cleaning to animals of other species; to signal their sexual status to other members of the same species; and in mimicry, taking advantage of the warning coloration of another species. Some animals use flashes of colour to divert attacks by startling predators. Zebras may possibly use motion dazzle, confusing a predator's attack by moving a bold pattern rapidly. Some animals are coloured for physical protection, with pigments in the skin to protect against sunburn, while some frogs can lighten or darken their skin for temperature regulation. Finally, animals can be coloured incidentally. For example, blood is red because the haem pigment needed to carry oxygen is red. Animals coloured in these ways can have striking natural patterns.
Animals produce colour in both direct and indirect ways. Direct production occurs through the presence of visible coloured particles known as pigments. Indirect production occurs by virtue of cells known as chromatophores, which are pigment-containing cells. The distribution of the pigment particles within the chromatophores can change under hormonal or neuronal control. In fishes it has been demonstrated that chromatophores may respond directly to environmental stimuli such as visible light, UV radiation, temperature, pH, and chemicals. Colour change helps individuals become more or less visible and is important in agonistic displays and in camouflage. Som
https://en.wikipedia.org/wiki/Wireless%20Session%20Protocol | The Wireless Session Protocol (WSP) is a communication protocol used in wireless networks to establish and manage sessions between a client and a server. It is part of the Wireless Application Protocol (WAP) suite of protocols and is designed to provide reliable and efficient data transfer over wireless connections. WSP allows for the exchange of messages between the client and server, enabling the delivery of content such as web pages, emails, and other data.
Wireless Session Protocol (WSP) is an open standard for maintaining high-level wireless sessions. A session begins when the user connects to a URL and ends when the user leaves it. Session-wide properties are defined once at the beginning of the session, saving bandwidth compared with renegotiating them continuously. Session establishment does not require long connection algorithms.
WSP is based on HTTP 1.1 with a few enhancements. WSP provides the upper-level application layer of WAP with a consistent interface for two session services. The first is a connection-oriented service that operates above the transaction layer protocol WTP; the second is a connectionless service that operates above a secure or non-secure datagram transport service. WSP therefore exists for two reasons. First, the connection mode improves HTTP 1.1's performance over the wireless environment. Second, it provides a session layer so that the whole WAP environment resembles the OSI Reference Model. |
https://en.wikipedia.org/wiki/Internet%20linguistics | Internet linguistics is a domain of linguistics advocated by the English linguist David Crystal. It studies new language styles and forms that have arisen under the influence of the Internet and of other new media, such as Short Message Service (SMS) text messaging. Since the beginning of human–computer interaction (HCI) leading to computer-mediated communication (CMC) and Internet-mediated communication (IMC), experts, such as Gretchen McCulloch have acknowledged that linguistics has a contributing role in it, in terms of web interface and usability. Studying the emerging language on the Internet can help improve conceptual organization, translation and web usability. Such study aims to benefit both linguists and web users combined.
The study of internet linguistics can take place through four main perspectives: sociolinguistics, education, stylistics and applied linguistics. Further dimensions have developed as a result of further technological advances, which include the development of the Web as corpus and the spread and influence of the stylistic variations brought forth by the spread of the Internet, through the mass media and through literary works. In view of the increasing number of users connected to the Internet, the linguistic future of the Internet remains to be determined, as new computer-mediated technologies continue to emerge and people adapt their languages to suit these new media. The Internet continues to play a significant role both in encouraging people and in diverting attention away from the usage of languages.
Main perspectives
David Crystal has identified four main perspectives for further investigation: the sociolinguistic perspective, the educational perspective, the stylistic perspective and the applied perspective. The four perspectives are effectively interlinked and affect one another.
Sociolinguistic perspective
This perspective deals with how society views the impact of Internet development on languages. The advent of the Inte |
https://en.wikipedia.org/wiki/JavaHelp | JavaHelp is both an application and a format for online help files that can be displayed by the JavaHelp browser. It is written in Java and is mainly used in Java applications, but it can be used for any application; it does, however, require the overhead of the JRE to load.
The file format is XML.
The GUI uses a tri-pane layout, with a toolbar and menu at the top, navigation pane on the left, and content viewer on the right. The navigation pane includes searching, indexing, and a table of contents.
License
The JavaHelp application used to come with the GNU General Public License with Classpath exception.
However, since the source code from JavaHelp was put on GitHub for JavaEE, the source code has been re-licensed under the Common Development and Distribution License. However, the source code files are still (mistakenly) under the GPLv2 with Classpath exception license.
https://en.wikipedia.org/wiki/IvanAnywhere | IvanAnywhere is a simple, remote-controlled telepresence robot created by Sybase iAnywhere programmers to enable their co-worker, Ivan Bowman, to work remotely in an efficient way. The robot enables Bowman to be virtually present at conferences and presentations, and to discuss product development with other developers face-to-face. IvanAnywhere is powered by SAP's mobile database product, SQL Anywhere.
IvanAnywhere evolution
Ivan Bowman has been a software developer at Sybase/iAnywhere/SAP since 1993, and is now an Engineering Director at SAP Canada. In 2002 his wife received a job in Halifax, a considerable distance from his place of work in Waterloo, Ontario, Canada. His employers allowed him to work remotely, initially via email, instant messenger, and phone.
Using speakerphone during meetings was less than ideal because Ivan could not see his co-workers' visual communication clues, or what they wrote on the white board. The first solution was a stationary webcam with a speaker, which was kept in the corner of the office. The problem with this method was that the webcam was just that – stationary. Ivan could not see people if they were not standing near the webcam. More frustrating, perhaps, was that Ivan could hear distant conversations through the webcam's microphone, but was unable to contribute to the conversation if the impromptu meeting did not take place in his visual range.
Proof of concept
In November 2006, iAnywhere programmer Ian McHardy and Director of Engineering Glenn Paulley (Ivan's immediate manager) conceived the idea of IvanAnywhere after Glenn saw a television commercial for a remote-controlled toy blimp. In January 2007, after considering different possible designs and getting through a number of deadlines related to iAnywhere releases, Ian started working on a proof of concept: a tablet computer and webcam mounted on a radio-controlled toy truck.
In February 2007, even though the truck was challenging to drive and the webcam was only a few inch |
https://en.wikipedia.org/wiki/Cognitive%20description | Cognitive description is a term used in psychology to describe the cognitive workings of the human mind.
A cognitive description specifies what information is utilized during a cognitive action, how this information is processed and transformed, what data structures are used, and what behaviour is generated.
See also
Cognitive module |
https://en.wikipedia.org/wiki/Aminocoumarin | Aminocoumarin is a class of antibiotics that act by inhibiting DNA gyrase, an enzyme involved in cell division in bacteria. They are derived from Streptomyces species, whose best-known representative – Streptomyces coelicolor – was completely sequenced in 2002.
The aminocoumarin antibiotics include:
Novobiocin, Albamycin (Pharmacia and Upjohn)
Coumermycin
Clorobiocin
Structure
The core of aminocoumarin antibiotics is made up of a 3-amino-4,7-dihydroxycoumarin ring, which is linked, e.g., with a sugar at the 7-position and a benzoic acid derivative at the 3-position.
Clorobiocin is a natural antibiotic isolated from several Streptomyces strains and differs from novobiocin in that the methyl group at the 8 position in the coumarin ring of novobiocin is replaced by a chlorine atom, and the carbamoyl at the 3' position of the noviose sugar is substituted by a 5-methyl-2-pyrrolylcarbonyl group.
Mechanism of action
The aminocoumarin antibiotics are known inhibitors of DNA gyrase. Antibiotics of the aminocoumarin family exert their therapeutic activity by binding tightly to the B subunit of bacterial DNA gyrase, thereby inhibiting this essential enzyme. They compete with ATP
for binding to the B subunit of this enzyme and inhibit the ATP-dependent DNA supercoiling catalysed by gyrase. X-ray crystallography studies have confirmed binding at the ATP-binding site located on the gyrB subunit of DNA gyrase. Their affinity for gyrase is considerably higher than that of modern fluoroquinolones, which also target DNA gyrase but at the gyrA subunit.
Resistance
Resistance to this class of antibiotics usually results from genetic mutation in the gyrB subunit. Other mechanisms include de novo synthesis of a coumarin-resistant gyrase B subunit by the novobiocin producer S. sphaeroides.
Clinical use
The clinical use of this antibiotic class has been restricted due to the low water solubility, low activity against gram-negative bacteria, and toxicity in vivo of this class |
https://en.wikipedia.org/wiki/Cold%20sensitivity | Cold sensitivity or cold intolerance is unusual discomfort felt by some people when in a cool environment.
Cold sensitivity may be a symptom of hypothyroidism, anemia, fibromyalgia or vasoconstriction. Vitamin B12 deficiency usually accompanies cold intolerance as well. There are other conditions that may cause a cold intolerance, including low body weight, high body temperature and low blood pressure. There may also be differences in people in the expression of uncoupling proteins, thus affecting their amount of thermogenesis. Psychology may also play a factor in perceived temperature. |
https://en.wikipedia.org/wiki/Micellar%20cubic | A micellar cubic phase is a lyotropic liquid crystal phase formed when the concentration of micelles dispersed in a solvent (usually water) is sufficiently high that they are forced to pack into a structure having a long-ranged positional (translational) order. For example, spherical micelles may adopt the cubic packing of a body-centered cubic lattice. Normal topology micellar cubic phases, denoted by the symbol I1, are the first lyotropic liquid crystalline phases that are formed by type I amphiphiles. The amphiphiles' hydrocarbon tails are contained on the inside of the micelle and hence the polar-apolar interface of the aggregates has a positive mean curvature, by definition (it curves away from the polar phase). The first pure surfactant system found to exhibit three different type I (oil-in-water) micellar cubic phases was observed in the dodecaoxyethylene mono-n-dodecyl ether (C12EO12)/water system.
Inverse topology micellar cubic phases (such as the Fd3m phase) are observed for some type II amphiphiles at very high amphiphile concentrations. These aggregates, in which water is the minority phase, have a polar-apolar interface with a negative mean curvature. The structures of the normal topology micellar cubic phases that are formed by some types of amphiphiles (e.g. the oligoethyleneoxide monoalkyl ether series of non-ionic surfactants) are the subject of debate. Micellar cubic phases are isotropic phases but are distinguished from micellar solutions by their very high viscosity. When thin film samples of micellar cubic phases are viewed under a polarising microscope they appear dark and featureless. Small air bubbles trapped in these preparations tend to appear highly distorted and occasionally have faceted surfaces. A reversed micellar cubic phase has been observed, although it is much less common. It was observed that a reverse micellar cubic phase with Fd3m (Q227) symmetry formed in a ternary system of an amphiphilic diblock copolymer (EO17BO10, where EO represents
https://en.wikipedia.org/wiki/Hexagonal%20phase | A hexagonal phase of lyotropic liquid crystal is formed by some amphiphilic molecules when they are mixed with water or another polar solvent. In this phase, the amphiphile molecules are aggregated into cylindrical structures of indefinite length and these cylindrical aggregates are disposed on a hexagonal lattice, giving the phase long-range orientational order.
In normal topology hexagonal phases, which are formed by type I amphiphiles, the hydrocarbon chains are contained within the cylindrical aggregates such that the polar-apolar interface has a positive mean curvature. Inverse topology hexagonal phases have water within the cylindrical aggregates and the hydrocarbon chains fill the voids between the hexagonally packed cylinders. Normal topology hexagonal phases are denoted by HI while inverse topology hexagonal phases are denoted by HII. When viewed by polarization microscopy, thin films of both normal and inverse topology hexagonal phases exhibit birefringence, giving rise to characteristic optical textures. Typically, these textures are smoke-like, fan-like or mosaic in appearance. The phases are highly viscous and small air bubbles trapped within the preparation have highly distorted shapes. Size and shapes of lamellar, micellar and hexagonal phases of lipid bilayer phase behavior and mixed lipid polymorphism in aqueous dispersions can be easily identified and characterized by negative staining transmission electron microscopy too.
See also
Lamellar phase
Lipid polymorphism
Micelle |
https://en.wikipedia.org/wiki/%C4%B0smet%20G%C3%BCney | İsmet Vahid Güney (15 July 1923 – 23 June 2009) was a Turkish Cypriot artist, cartoonist, teacher and painter. He is best known as the designer of the modern flag of the Republic of Cyprus, the country's coat of arms and the original Cyprus lira in 1960. Güney's design was unique, as the Republic of Cyprus is the first country in the world to display a map on its flag.
Biography
Güney was born in 1923 in Limassol, Cyprus. He began painting while he was a student in high school. After graduating from the Teachers Training College, he started working as an arts teacher in 1948.
From 1948 to 1977 he attended Lefkoşa Erkek Lisesi teaching Art and History. In 1956, he met artist Ibrahim Çallı and worked with him until 1960.
In 1947, Güney became the first Turkish Cypriot painter to open a solo art exhibition. Güney had many solo exhibitions as well as participating in group exhibitions both in Cyprus and abroad. In 1967, a scholarship enabled him to study at Stranmillis College, Queen's University Belfast. In 1986 he had a grand retrospective exhibition in Nicosia. Towards the end of his life he worked on graphics and color separation.
İsmet Güney died of cancer on 23 June 2009, at the age of 85.
Creation of the Cyprus flag
Before the flag of Cyprus was introduced, the flags of Turkey and Greece were used. The current flag was created as the result of a design competition in 1960. Under the constitution, the flag should not include either red or blue colors (the colours of the flag of Turkey and the flag of Greece), nor portray a cross or a crescent. All participants deliberately avoided use of these four elements in an attempt to make the flag "neutral". Thus the Greek blue and Turkish red were avoided by Güney and the other design competitors.
The winning design was based on the proposal by İsmet Güney. The design was chosen by Makarios III, the President of the Republic of Cyprus, with the consent of Fazil Küçük, the then Vice President, in 1960.
The white fl |
https://en.wikipedia.org/wiki/Timoshenko%E2%80%93Ehrenfest%20beam%20theory | The Timoshenko–Ehrenfest beam theory was developed by Stephen Timoshenko and Paul Ehrenfest early in the 20th century. The model takes into account shear deformation and rotational bending effects, making it suitable for describing the behaviour of thick beams, sandwich composite beams, or beams subject to high-frequency excitation when the wavelength approaches the thickness of the beam. The resulting equation is of 4th order but, unlike Euler–Bernoulli beam theory, there is also a second-order partial derivative present. Physically, taking into account the added mechanisms of deformation effectively lowers the stiffness of the beam, while the result is a larger deflection under a static load and lower predicted eigenfrequencies for a given set of boundary conditions. The latter effect is more noticeable for higher frequencies as the wavelength becomes shorter (in principle comparable to the height of the beam or shorter), and thus the distance between opposing shear forces decreases.
The rotary inertia effect was introduced by Bresse and Rayleigh.
If the shear modulus of the beam material approaches infinity—and thus the beam becomes rigid in shear—and if rotational inertia effects are neglected, Timoshenko beam theory converges towards ordinary beam theory.
Quasistatic Timoshenko beam
In static Timoshenko beam theory without axial effects, the displacements of the beam are assumed to be given by
u_x(x, y, z) = −z φ(x),   u_y(x, y, z) = 0,   u_z(x, y, z) = w(x),
where (x, y, z) are the coordinates of a point in the beam, u_x, u_y, u_z are the components of the displacement vector in the three coordinate directions, φ is the angle of rotation of the normal to the mid-surface of the beam, and w is the displacement of the mid-surface in the z-direction.
The governing equations are the following coupled system of ordinary differential equations:
d²/dx² ( EI dφ/dx ) = q(x),
dw/dx = φ − (1/(κAG)) d/dx ( EI dφ/dx ).
The Timoshenko beam theory for the static case is equivalent to the Euler–Bernoulli theory when the last term above is neglected, an approximation that is valid when
EI / (κ A G L²) ≪ 1,
where
L is the length of the beam.
is the c |
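Under these assumptions, the shear term appears directly in the tip deflection of an end-loaded cantilever, w = PL³/(3EI) + PL/(κAG). A numerical sketch (all values are illustrative; a short, deep steel beam is chosen so the shear term is visible):

```python
# Tip deflection of an end-loaded cantilever: Euler-Bernoulli vs. Timoshenko.
# All numbers are illustrative, not taken from the article.
E = 200e9                 # Young's modulus, Pa (steel)
nu = 0.3                  # Poisson's ratio
G = E / (2 * (1 + nu))    # shear modulus, Pa
b, h, L = 0.1, 0.2, 0.6   # width, height, length, m (a stubby beam)
A = b * h                 # cross-sectional area, m^2
I = b * h**3 / 12         # second moment of area, m^4
kappa = 5 / 6             # shear correction factor for a rectangle
P = 10e3                  # tip load, N

w_eb = P * L**3 / (3 * E * I)          # Euler-Bernoulli tip deflection
w_t = w_eb + P * L / (kappa * A * G)   # Timoshenko adds a shear term

print(f"Euler-Bernoulli: {w_eb * 1e3:.4f} mm")
print(f"Timoshenko:      {w_t * 1e3:.4f} mm")
```

For a slender beam (large L relative to h) the shear term becomes negligible and the two theories agree, matching the validity condition above.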
https://en.wikipedia.org/wiki/Clay%20Mathematics%20Monographs | Clay Mathematics Monographs is a series of expositions in mathematics co-published by AMS and Clay Mathematics Institute. Each volume in the series offers an exposition of an active area of current research, provided by a group of mathematicians.
List of books
External links
Clay Mathematics Monographs list at ams.org
Series of mathematics books |
https://en.wikipedia.org/wiki/Contraction%20principle%20%28large%20deviations%20theory%29 | In mathematics — specifically, in large deviations theory — the contraction principle is a theorem that states how a large deviation principle on one space "pushes forward" (via the pushforward of a probability measure) to a large deviation principle on another space via a continuous function.
Statement
Let X and Y be Hausdorff topological spaces and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let T : X → Y be a continuous function, and let νε = T∗(με) be the push-forward measure of με by T, i.e., for each measurable set/event E ⊆ Y, νε(E) = με(T−1(E)). Let
J(y) = inf { I(x) : x ∈ X, T(x) = y }, with the convention that the infimum of I over the empty set ∅ is +∞. Then:
J : Y → [0, +∞] is a rate function on Y,
J is a good rate function on Y if I is a good rate function on X, and
(νε)ε>0 satisfies the large deviation principle on Y with rate function J. |
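As an illustrative check, take the (assumed) Gaussian rate function I(x1, x2) = (x1² + x2²)/2 on R² and the continuous linear map T(x1, x2) = x1 + x2; minimizing I over the fibre T⁻¹(y) gives J(y) = y²/4, which a brute-force search reproduces:

```python
import numpy as np

# Contraction principle sketch: I(x1, x2) = (x1^2 + x2^2)/2, T(x1, x2) = x1 + x2.
# The pushed-forward rate function is J(y) = inf { I(x) : T(x) = y } = y^2 / 4,
# attained at x1 = x2 = y/2. (Example chosen for illustration.)

def J_numeric(y, n=2001, span=10.0):
    # Parametrize the fibre T^{-1}(y) = {(t, y - t)} and minimize I along it.
    t = np.linspace(-span, span, n)
    return np.min(0.5 * (t**2 + (y - t)**2))

for y in (0.0, 1.0, -2.0, 3.5):
    print(y, J_numeric(y), y**2 / 4)
```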
https://en.wikipedia.org/wiki/Richardson%27s%20theorem | In mathematics, Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers, π, ln 2, and exponential and sine functions. It was proved in 1968 by the mathematician and computer scientist Daniel Richardson of the University of Bath.
Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number ln 2, the variable x, the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions.
For some classes of expressions (generated by other primitives than in Richardson's theorem) there exist algorithms that can determine whether an expression is zero.
Statement of the theorem
Richardson's theorem can be stated as follows:
Let E be a set of expressions that represent functions. Suppose that E includes these expressions:
x (representing the identity function)
ex (representing the exponential function)
sin x (representing the sin function)
all rational numbers, ln 2, and π (representing constant functions that ignore their input and produce the given number as output)
Suppose E is also closed under a few standard operations. Specifically, suppose that if A and B are in E, then all of the following are also in E:
A + B (representing the pointwise addition of the functions that A and B represent)
A − B (representing pointwise subtraction)
AB (representing pointwise multiplication)
A∘B (representing the composition of the functions represented by A and B)
Then the following decision problems are unsolvable:
Deciding whether an expression A in E represents a function that is nonnegative everywhere
If E includes also the expression |x| (representing the absolute value function), deciding whether an expression A in E represents a function that is zero everywhere
If E includes an expression B representing a function whose antiderivative has no representative in E, deciding whether an expression A in E re |
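Although zero-equivalence for this class of expressions is undecidable in general, heuristic simplification in a computer algebra system settles many concrete instances. A sketch using SymPy (library availability assumed; this is not part of the theorem itself):

```python
import sympy as sp

x = sp.symbols('x')

# Zero-equivalence is undecidable for Richardson's class in general,
# but heuristic simplification resolves many concrete instances.
expr = sp.sin(x)**2 + sp.cos(x)**2 - 1
print(sp.simplify(expr))   # the Pythagorean identity simplifies to 0

# An expression that is not identically zero keeps a symbolic residue.
expr2 = sp.sin(x) - x
print(sp.simplify(expr2))
```

The theorem guarantees that no single procedure of this kind can succeed on every expression in the class.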
https://en.wikipedia.org/wiki/Gary%20Taubes | Gary Taubes (born April 30, 1956) is an American journalist, writer, and low-carbohydrate / high-fat (LCHF) diet advocate. His central claim is that carbohydrates, especially sugar and high-fructose corn syrup, overstimulate the secretion of insulin, causing the body to store fat in fat cells and the liver, and that it is primarily a high level of dietary carbohydrate consumption that accounts for obesity and other metabolic syndrome conditions. He is the author of Nobel Dreams (1987); Bad Science: The Short Life and Weird Times of Cold Fusion (1993); Good Calories, Bad Calories (2007), titled The Diet Delusion (2008) in the UK and Australia; Why We Get Fat: And What to Do About It (2010); The Case Against Sugar (2016); and The Case for Keto: Rethinking Weight Control and the Science and Practice of Low-Carb/High-Fat Eating (2020). Taubes's work often goes against accepted scientific, governmental, and popular tenets such as that obesity is caused by eating too much and exercising too little and that excessive consumption of fat, especially saturated fat in animal products, leads to cardiovascular disease.
Biography
Born in Rochester, New York, Taubes studied applied physics at Harvard University (BS, 1977) and aerospace engineering at Stanford University (MS, 1978). After receiving a master's degree in journalism at Columbia University in 1981, Taubes joined Discover magazine as a staff reporter in 1982. Since then he has written numerous articles for Discover, Science and other magazines. Originally focusing on physics issues, his interests have more recently turned to medicine and nutrition.
His brother, Clifford Henry Taubes, is the William Petschek Professor of Mathematics at Harvard University.
Scientific controversies
Taubes' books have all dealt with scientific controversies.
Nobel Dreams takes a critical look at the politics and experimental techniques behind the Nobel Prize-winning work of physicist Carlo Rubbia.
In Bad Science: The Short Life and Weir |
https://en.wikipedia.org/wiki/Pile%20%28abstract%20data%20type%29 | In computer science, a pile is an abstract data type for storing data in a loosely ordered way. There are two different usages of the term; one refers to an ordered double-ended queue, the other to an improved heap.
Ordered double-ended queue
The first version combines the properties of the double-ended queue (deque) and a priority queue and may be described as an ordered deque.
An item may be added to the head of the list if the new item is valued less than or equal to the current head or to the tail of the list if the new item is greater than or equal to the current tail. Elements may be removed from both the head and the tail.
Piles of this kind are used in the "UnShuffle sort" sorting algorithm.
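A minimal sketch of the ordered-deque pile in Python (the class name and the handling of an item that fits neither end are illustrative; in UnShuffle sort such an item would start a new pile):

```python
from collections import deque

class Pile:
    """Ordered double-ended queue: an item may enter at the head only if it
    is <= the current head, and at the tail only if it is >= the current
    tail, so the contents always form a sorted run."""

    def __init__(self):
        self._d = deque()

    def push(self, item):
        if not self._d:
            self._d.append(item)
        elif item <= self._d[0]:
            self._d.appendleft(item)       # add at the head
        elif item >= self._d[-1]:
            self._d.append(item)           # add at the tail
        else:
            raise ValueError("item fits neither end; start a new pile")

    def pop_head(self):
        return self._d.popleft()

    def pop_tail(self):
        return self._d.pop()

p = Pile()
for v in (5, 3, 8, 1):
    p.push(v)          # builds the sorted run 1, 3, 5, 8
print(list(p._d))
```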
Improved heap
The second version is a subject of patents and improves the heap data structure.
The whole data pile based system can be generalized as shown: |
https://en.wikipedia.org/wiki/Bell%20Laboratories%20Layered%20Space-Time | Bell Laboratories Layer Space-Time (BLAST) is a transceiver architecture for offering spatial multiplexing over multiple-antenna wireless communication systems. Such systems have multiple antennas at both the transmitter and the receiver in an effort to exploit the many different paths between the two in a highly-scattering wireless environment. BLAST was developed by Gerard Foschini at Lucent Technologies' Bell Laboratories (now Nokia Bell Labs). By careful allocation of the data to be transmitted to the transmitting antennas, multiple data streams can be transmitted simultaneously within a single frequency band — the data capacity of the system then grows directly in line with the number of antennas (subject to certain assumptions). This represents a significant advance on current, single-antenna systems.
V-BLAST
V-BLAST (Vertical-Bell Laboratories Layered Space-Time) is a detection algorithm for the receiver of multi-antenna MIMO systems. It was first presented in 1996 at Bell Laboratories in New Jersey, United States, by Gerard J. Foschini. It proceeds by successively eliminating the interference caused by the individual transmitters.
Its principle is quite simple: first detect the most powerful signal, then regenerate that user's contribution to the received signal from the detected symbols. The regenerated signal is subtracted from the received signal and, with this new signal, the receiver proceeds to detect the second most powerful user, since the first has already been removed, and so forth. This yields a received vector containing progressively less interference.
The complete detection algorithm can be summarized recursively as follows:
Initialize:
Recursive:
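A minimal sketch of the initialize/detect/cancel loop, using zero-forcing (pseudoinverse) nulling on an illustrative noiseless 2×2 channel with BPSK symbols (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-forcing V-BLAST sketch (illustrative 2x2 system, BPSK symbols).
# Each step: null with the pseudoinverse, detect the stream with the least
# noise amplification, slice it, then subtract its contribution.
H = rng.standard_normal((2, 2))   # channel matrix, assumed known at receiver
x = np.array([1.0, -1.0])         # transmitted BPSK vector
y = H @ x                         # noiseless reception, for clarity

detected = np.zeros(2)
active = [0, 1]                   # stream indices not yet detected
while active:
    Hs = H[:, active]
    W = np.linalg.pinv(Hs)                            # nulling matrix
    k = int(np.argmin(np.sum(np.abs(W)**2, axis=1)))  # best post-nulling SNR
    s = np.sign(W[k] @ y)                             # slice to nearest symbol
    j = active[k]
    detected[j] = s
    y = y - H[:, j] * s                               # cancel its contribution
    active.pop(k)

print(detected)
```

With noise added, the ordering step (choosing the row of the pseudoinverse with the smallest norm) is what limits error propagation between cancellation stages.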
See also
Space–time code — a means for using multiple antennas to improve reliability rather than data-rate.
Telecommunication |
https://en.wikipedia.org/wiki/Adi%20Bulsara | Ardeshir "Adi" Ratan Bulsara (born 15 May 1951) is a scientist in the area of nonlinear dynamics. The 2007 International Conference on Applied Nonlinear Dynamics (ICAND), held in Kauai, Hawaii, was a festschrift held in honor of his 55th birthday.
Honours
In 2004, Bulsara was elected a Fellow of the American Physical Society (APS) for "developing the statistical mechanics of noisy nonlinear dynamical oscillators especially in the theory, application and technology of stochastic resonance detectors." The festschrift in honor of his 55th birthday was, for logistical reasons, held when he was 56.
Books by Bulsara
S. Baglio and A. Bulsara, (editors) Device Applications of Nonlinear Dynamics, Springer-Verlag, Berlin, 2006,
A. Bulsara, Noise and Chaos in the RF SQUID, Office of Naval Research, 1992, ASIN B0006R2M5I
A. Bulsara, Coupled Neural-Dendritic Processes: Cooperative Stochastic Effects and the Analysis of Spike Trains, Office of Naval Research, 1994, ASIN B0006R2O9C
J. B. Kadtke and A. Bulsara, (editors) Applied Nonlinear Dynamics and Stochastic Systems Near the Millen[n]ium, 411, San Diego, CA, American Institute of Physics, Woodbury, N.Y., 1997, |
https://en.wikipedia.org/wiki/WiMAX%20MIMO | WiMAX MIMO refers to the use of Multiple-input multiple-output communications (MIMO) technology on WiMAX, which is the technology brand name for the implementation of the standard IEEE 802.16.
Background
WiMAX
WiMAX is the technology brand name for the implementation of the standard IEEE 802.16, which specifies the air interface at the PHY (Physical layer) and at the MAC (Medium Access Control layer). Aside from specifying the support of various channel bandwidths and adaptive modulation and coding, it also specifies the support for MIMO antennas to provide good Non-line-of-sight (NLOS) characteristics.
See also: WiMAX Forum
MIMO
MIMO stands for Multiple Input and Multiple Output, and refers to the technology where there are multiple antennas at the base station and multiple antennas at the mobile device. Typical usage of multiple antenna technology includes cellular phones with two antennas, laptops with two antennas (e.g. built in the left and right side of the screen), as well as CPE devices with multiple sprouting antennas.
The predominant cellular network implementation is to have multiple antennas at the base station and a single antenna on the mobile device. This minimizes the cost of the mobile radio. As the costs for radio frequency (RF) components in mobile devices go down, second antennas in mobile device may become more common. Multiple mobile device antennas are currently used in Wi-Fi technology (e.g. IEEE 802.11n), where WiFi-enabled cellular phones, laptops and other devices often have two or more antennas.
MIMO Technology in WiMAX
WiMAX implementations that use MIMO technology have become important. The use of MIMO technology improves the reception and allows for a better reach and rate of transmission. The implementation of MIMO also gives WiMAX a significant increase in spectral efficiency.
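The spectral-efficiency gain can be illustrated with the standard MIMO capacity expression log2 det(I + (SNR/Nt) H Hᴴ), averaged over random Rayleigh channels. A Monte Carlo sketch (parameter values illustrative, not WiMAX-specific):

```python
import numpy as np

rng = np.random.default_rng(1)

# Ergodic capacity of an i.i.d. Rayleigh-fading MIMO channel (Monte Carlo).
# At fixed SNR, capacity grows roughly linearly with min(Nt, Nr).
def ergodic_capacity(nt, nr, snr=10.0, trials=2000):
    bits = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        bits += np.log2(np.linalg.det(M).real)
    return bits / trials     # bits/s/Hz

for n in (1, 2, 4):
    print(f"{n}x{n}: {ergodic_capacity(n, n):.2f} bits/s/Hz")
```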
MIMO auto-negotiation
The 802.16 defined MIMO configuration is negotiated dynamically between each individual base station and mobile station. |
https://en.wikipedia.org/wiki/Choristoma | Choristomas, a form of heterotopia, are masses of normal tissues found in abnormal locations. In contrast to a neoplasm or tumor, the growth of a choristoma is normally regulated.
It is different from a hamartoma. The two can be differentiated as follows: a hamartoma is disorganized overgrowth of tissues in their normal location (e.g., Peutz–Jeghers polyps), while a choristoma is normal tissue growth in an abnormal location (e.g., osseous choristoma, gastric tissue located in distal ileum in Meckel diverticulum). |
https://en.wikipedia.org/wiki/Russell%20bodies | Russell bodies are inclusion bodies usually found in atypical plasma cells that become known as Mott cells. Russell bodies are eosinophilic, homogeneous immunoglobulin (Ig)-containing inclusions usually found in cells undergoing excessive synthesis of Ig; the Russell body is characteristic of the distended endoplasmic reticulum. Russell bodies are large, globular, and of varying size; they become packed into the cell's cytoplasm, pushing the nucleus to the edge of the cell, and are found in the peripheral areas of tumors. Russell bodies are thought to originate as abnormal proteins that have not been secreted. The excess immunoglobulin builds up and forms intracytoplasmic globules, which is thought to be a result of insufficient protein transport within the cell. This causes the proteins to be neither degraded nor secreted, so they remain stored in dilated cisternae. In 1949, Pearse discovered that Russell bodies also contain mucoproteins that are secreted by plasma cells. Russell bodies are not tissue specific; during research they were induced in rat glioma cells. Russell bodies were found to have positive reactions to PAS, CD38 and CD138 stains. Plasma cells that contain Russell bodies and are stained with H&E stain are found to be autofluorescent, while those without Russell bodies are not. Russell bodies tend to be found in places with chronic inflammation.
This is one cell variation found in multiple myeloma.
Similar inclusion bodies that tend to overlie the nucleus or invaginate into it are known as Dutcher bodies.
They are named for William Russell (1852–1940), a Scottish physician. |
https://en.wikipedia.org/wiki/Anaphase%20lag | Anaphase lag is a consequence of an event during cell division where sister chromatids do not properly separate from each other because of improper spindle formation. The chromosome or chromatid does not properly migrate during anaphase and the daughter cells will lose some genetic information. It is one of many causes of aneuploidy. This event can occur during both meiosis and mitosis with unique repercussions. In either case, anaphase lag will cause one daughter cell to receive a complete set of chromosomes while the other lacks one paired set of chromosomes, creating a form of monosomy. Whether the cell survives depends on which sister chromatid was lost and the background genomic state of the cell. The passage of abnormal numbers of chromosomes will have unique consequences with regards to mosaicism and development as well as the progression and heterogeneity of cancers.
Mechanisms
There are two notable mechanisms that cause anaphase lag, each of which is characterized by merotelic attachments of kinetochores to the microtubules responsible for chromatid separation. Merotelic attachments occur when a single centromere kinetochore attaches to microtubules originating from both spindle poles of the dividing cell. The merotelic attachments can occur in two ways: centrosome spindle attachments from both poles on the same chromatid kinetochore, or the formation of a third centrosome whose microtubule spindles attach to a chromatid kinetochore. Because the chromatid is being pulled in two opposing directions or away from the correct centriole, it cannot migrate to the mass of segregated chromatids at either pole. If the migration is significantly delayed, the reformation of nuclei will begin to occur without a full complement of chromosomes. This nuclear envelope formation is also seen for the lone lagging sister chromatid, forming a micronucleus. The micronucleus has the capacity to persist in the daughter cell but with abnormal replication and maintenance machi
https://en.wikipedia.org/wiki/Generalized%20forces | In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces , acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate.
Virtual work
Generalized forces can be obtained from the computation of the virtual work, , of the applied forces.
The virtual work of the forces, , acting on the particles , is given by
where is the virtual displacement of the particle .
Generalized coordinates
Let the position vectors of each of the particles, , be a function of the generalized coordinates, . Then the virtual displacements are given by
where is the virtual displacement of the generalized coordinate .
The virtual work for the system of particles becomes
Collect the coefficients of so that
Generalized forces
The virtual work of a system of particles can be written in the form
where
are called the generalized forces associated with the generalized coordinates .
Velocity formulation
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the n particle system, let the velocity of each particle Pi be , then the virtual displacement can also be written in the form
This means that the generalized force, , can also be determined as
D'Alembert's principle
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, , of mass is
where is the acceleration of the particle.
If the configuration of the particle system depends on the generalized coordinates , then the generalized inertia force is given by
D'Alembert's form of the principle of virtual work yields |
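The inline formulas in this entry were lost in extraction. Under the usual notation (applied forces F_i on n particles with position vectors r_i, generalized coordinates q_j), the standard expressions being described are the following; this is a reconstruction in conventional form, not the article's verbatim equations:

```latex
% Virtual work of the applied forces under virtual displacements dr_i:
\delta W = \sum_{i=1}^{n} \mathbf{F}_i \cdot \delta\mathbf{r}_i ,
\qquad
\delta\mathbf{r}_i = \sum_{j=1}^{m} \frac{\partial\mathbf{r}_i}{\partial q_j}\,\delta q_j .
% Collecting the coefficients of dq_j defines the generalized forces Q_j,
% which can equivalently be computed from the velocities v_i:
\delta W = \sum_{j=1}^{m} Q_j\,\delta q_j ,
\qquad
Q_j = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_j}
    = \sum_{i=1}^{n} \mathbf{F}_i \cdot \frac{\partial\mathbf{v}_i}{\partial\dot{q}_j} .
% D'Alembert's principle: the generalized inertia forces balance the applied ones,
Q_j^{*} = -\sum_{i=1}^{n} m_i\,\mathbf{a}_i \cdot \frac{\partial\mathbf{r}_i}{\partial q_j} ,
\qquad
Q_j + Q_j^{*} = 0 .
```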
https://en.wikipedia.org/wiki/Server%20room | A server room is a room, usually air-conditioned, devoted to the continuous operation of computer servers. An entire building or station devoted to this purpose is a data center.
The computers in server rooms are usually headless systems that can be operated remotely via KVM switch or remote administration software, such as Secure Shell, VNC, and remote desktop.
Climate is one of the factors that affects the energy consumption and environmental impact of a server room. In areas where climate favours cooling and an abundance of renewable electricity, the environmental effects will be more moderate. Thus, countries with favourable conditions such as Canada, Finland, Sweden, and Switzerland are trying to attract companies to site server rooms there.
Design considerations
Building a server or computer room requires detailed attention to five main design considerations:
Location
Computer or server room location is the first consideration, even before considering the layout of the room's contents. Most designers agree that, where possible, the computer room should not be built where one of its walls is an exterior wall of the building. Exterior walls can often be quite damp and can contain water pipes that could burst and drench the equipment.
Avoiding exterior windows means avoiding a security risk and breakages. Avoiding both the top floors and basements means avoiding flooding and, in the case of roofs, leaks. Lastly, server rooms should be centrally located because of the horizontal cabling that extends from this room to devices in other rooms. If a centralized computer room is not feasible, server closets on each floor may be an option: computer, network and phone equipment are housed in closets, one per floor, stacked vertically above one another.
In addition to the hazards of exterior walls, designers need to evaluate any potential sources of interference in proximity to the computer room. Designing such a room mea |
https://en.wikipedia.org/wiki/Taxonomy%20of%20commonly%20fossilised%20invertebrates | Taxonomy of commonly fossilised invertebrates is a complex and evolving field that combines both traditional and modern paleozoological terminology. This article aims to provide a comprehensive overview of the various invertebrate taxa that are commonly found in the fossil record, from protists to arthropods. The taxonomy presented here is not intended to be exhaustive but focuses on invertebrates that are either popularly collected as fossils or are extinct. Special notations are used to highlight invertebrate groups that are important as fossils, very abundant in the fossil record, or have a large proportion of extinct species. These notations are explained below for clarity:
[ ! ]: Indicates clades that are important as fossils or very abundant in the fossil record.
[ – ]: Indicates clades that contain a large proportion of extinct species.
[ † ]: Indicates clades that are completely extinct.
The paleobiologic systematics that follow are not intended to be comprehensive, but rather encompass invertebrates that (a) are popularly collected as fossils and/or (b) are extinct. As a result, some groups of invertebrates are not listed.
If an invertebrate animal is mentioned below using its common (vernacular) name, it is an extant (living) taxon, but if it is cited by its scientific genus, then it is typically an extinct invertebrate known only from the fossil record.
Invertebrate clades that are important fossils (e.g. ostracods, frequently used as index fossils), and/or clades that are very abundant as fossils (e.g. crinoids, easily found in crinoidal limestone), are highlighted with a bracketed exclamation mark [ ! ].
Domain of Eukaryota/Eukarya
Eukaryotes; eukaryotes are cellular organisms bearing a central, organized nucleus with DNA.
most of the species which have been documented by biologists and paleontologists, extinct or extant, are eukaryotic.
includes: a wide variety of single-celled protists; all algae; most plankton; most molds; the green plants; all an |
https://en.wikipedia.org/wiki/Bending%20stiffness | The bending stiffness () is the resistance of a member against bending deformation. It is a function of the Young's modulus , the second moment of area of the beam cross-section about the axis of interest, length of the beam and beam boundary condition. Bending stiffness of a beam can analytically be derived from the equation of beam deflection when it is applied by a force.
where is the applied force and is the deflection. According to elementary beam theory, the relationship between the applied bending moment and the resulting curvature of the beam is:
where is the deflection of the beam and is the distance along the beam. Double integration of the above equation leads to computing the deflection of the beam, and in turn, the bending stiffness of the beam.
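As one concrete instance of the double-integration procedure: for a cantilever of length L fixed at one end and carrying a point load F at the free tip (one particular boundary condition; other supports give different coefficients), integrating EI w''(x) = M(x) twice yields a tip deflection delta = F L^3 / (3 E I), so the bending stiffness is k = F/delta = 3EI/L^3. The numbers below are purely illustrative:

```python
def cantilever_tip_stiffness(E, I, L):
    """Bending stiffness k = F/delta for an end-loaded cantilever.

    Double integration of E*I*w''(x) = M(x) = F*(L - x), with boundary
    conditions w(0) = w'(0) = 0 at the fixed end, gives the tip deflection
    delta = F*L**3 / (3*E*I), hence k = F/delta = 3*E*I / L**3.
    """
    return 3.0 * E * I / L**3

# Illustrative numbers: a 1 m steel cantilever (E ~ 200 GPa) with a
# 10 mm x 10 mm square cross-section, for which I = b*h**3 / 12.
E = 200e9              # Young's modulus, Pa
b = h = 0.01           # cross-section width and height, m
I = b * h**3 / 12.0    # second moment of area, m^4
k = cantilever_tip_stiffness(E, I, L=1.0)   # 500 N/m
deflection = 10.0 / k  # tip deflection under a 10 N load: 0.02 m
```

Stiffer supports (e.g. a beam clamped at both ends) replace the factor 3 with a larger coefficient, but the EI/L^3 scaling is the same.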
Bending stiffness in beams is also known as flexural rigidity.
See also
Applied mechanics
Beam theory
Bending
Stiffness
External links
Efunda's beam calculator
Continuum mechanics
Structural analysis |
https://en.wikipedia.org/wiki/Lyotropic%20liquid%20crystal | Lyotropic liquid crystals result when fat-loving and water-loving chemical compounds known as amphiphiles dissolve into a solution that behaves both like a liquid and a solid crystal. This liquid crystalline mesophase includes everyday mixtures like soap and water.
To break the word down, "lyo" and "tropic" mean, respectively, "dissolve" and "change." Historically, the term was used to describe the common behavior of materials composed of amphiphilic molecules upon the addition of a solvent. Such molecules comprise a water-loving hydrophilic head-group (which may be ionic or non-ionic) attached to a water-hating, hydrophobic group.
The micro-phase segregation of two incompatible components on a nanometer scale results in different types of solvent-induced, extended anisotropic arrangements, depending on the volume balance between the hydrophilic and hydrophobic parts. In turn, these generate the long-range order of the phases, with the solvent molecules filling the space around the compounds to provide fluidity to the system.
In contrast to thermotropic liquid crystals, lyotropic liquid crystals therefore have an additional degree of freedom: the concentration, which enables them to form a variety of different phases. As the concentration of amphiphilic molecules is increased, several different types of lyotropic liquid crystal structure occur in solution. Each of these types has a different extent of molecular ordering within the solvent matrix, from spherical micelles to larger cylinders, aligned cylinders, and even bilayered and multiwalled aggregates.
Types of lyotropic systems
Examples of amphiphilic compounds are the salts of fatty acids, phospholipids. Many simple amphiphiles are used as detergents. A mixture of soap and water is an everyday example of a lyotropic liquid crystal.
Biological structures such as fibrous proteins showing relatively long and well-defined hydrophobic and hydrophilic "blocks" of amino acids can also show ly |
https://en.wikipedia.org/wiki/Quantum%20bus | A quantum bus is a device which can be used to store or transfer information between independent qubits in a quantum computer, or combine two qubits into a superposition. It is the quantum analog of a classical bus.
There are several physical systems that can be used to realize a quantum bus, including trapped ions, photons, and superconducting qubits. Trapped ions, for example, can use the quantized motion of ions (phonons) as a quantum bus, while photons can act as a carrier of quantum information by utilizing the increased interaction strength provided by cavity quantum electrodynamics. Circuit quantum electrodynamics, which uses superconducting qubits coupled to a microwave cavity on a chip, is another example of a quantum bus that has been successfully demonstrated in experiments.
History
The concept was first demonstrated by researchers at Yale University and the National Institute of Standards and Technology (NIST) in 2007. Prior to this experimental demonstration, the quantum bus had been described by scientists at NIST as one of the possible cornerstone building blocks in quantum computing architectures.
Mathematical description
A quantum bus for superconducting qubits can be built with a resonance cavity. The Hamiltonian for a system with qubit A, qubit B, and the resonance cavity or quantum bus connecting the two is where is the single-qubit Hamiltonian, is the raising or lowering operator for creating or destroying excitations in the th qubit, and is controlled by the amplitude of the D.C. and radio frequency flux bias. |
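The Hamiltonian's symbols were stripped in extraction. A common concrete form for two superconducting qubits coupled through one cavity mode, written here in a Tavis–Cummings-type rotating-wave approximation as a plausible reconstruction rather than the article's exact equation, is:

```latex
H = \hbar\omega_r\, a^{\dagger}a
  + \sum_{j \in \{A,B\}} \left[ \tfrac{1}{2}\hbar\omega_j\,\sigma_j^{z}
  + \hbar g_j \left( a^{\dagger}\sigma_j^{-} + a\,\sigma_j^{+} \right) \right]
```

Here a†a counts cavity photons, σ_j± are the raising and lowering operators of qubit j, ω_j is the qubit transition frequency, and the coupling strengths g_j (and the qubit frequencies) are what the flux bias tunes.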
https://en.wikipedia.org/wiki/Tilted%20large%20deviation%20principle | In mathematics — specifically, in large deviations theory — the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by "tilting", i.e. integration against an exponential functional. It can be seen as an alternative formulation of Varadhan's lemma.
Statement of the theorem
Let X be a Polish space (i.e., a separable, completely metrizable topological space), and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let F : X → R be a continuous function that is bounded from above. For each Borel set S ⊆ X, let
and define a new family of probability measures (νε)ε>0 on X by
Then (νε)ε>0 satisfies the large deviation principle on X with rate function IF : X → [0, +∞] given by |
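The exponential integrals and the rate function that were stripped from the statement can be restored in their standard form (a reconstruction from the usual formulation of the result): for each Borel set S,

```latex
J_\varepsilon(S) = \int_S \exp\!\big(F(x)/\varepsilon\big)\,\mathrm{d}\mu_\varepsilon(x),
\qquad
\nu_\varepsilon(S) = \frac{J_\varepsilon(S)}{J_\varepsilon(X)},
\qquad
I^F(x) = \sup_{y \in X}\big[F(y) - I(y)\big] - \big(F(x) - I(x)\big).
```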
https://en.wikipedia.org/wiki/Physical%20design%20%28electronics%29 | In integrated circuit design, physical design is a step in the standard design cycle which follows after the circuit design. At this step, circuit representations of the components (devices and interconnects) of the design are converted into geometric representations of shapes which, when manufactured in the corresponding layers of materials, will ensure the required functioning of the components. This geometric representation is called integrated circuit layout. This step is usually split into several sub-steps, which include both design and verification and validation of the layout.
Modern day Integrated Circuit (IC) design is split up into Front-end Design using HDLs and Back-end Design or Physical Design. The inputs to physical design are (i) a netlist, (ii) library information on the basic devices in the design, and (iii) a technology file containing the manufacturing constraints. Physical design is usually concluded by Layout Post Processing, in which amendments and additions to the chip layout are performed. This is followed by the Fabrication or Manufacturing Process where designs are transferred onto silicon dies which are then packaged into ICs.
Each of the phases mentioned above has design flows associated with it. These design flows lay down the process and guidelines/framework for that phase. The physical design flow uses the technology libraries that are provided by the fabrication houses. These technology files provide information regarding the type of silicon wafer used, the standard cells used, the layout rules (like DRC in VLSI), etc.
Divisions
Typically, the IC physical design is categorized into full custom and semi-custom design.
Full-Custom: Designer has full flexibility on the layout design, no predefined cells are used.
Semi-Custom: Pre-designed library cells (preferably tested with DFM) are used, designer has flexibility in placement of the cells and routing.
One can use ASIC for Full Custom design and FPGA for Semi-Custom design |
https://en.wikipedia.org/wiki/Index%20selection | Index selection is a method of artificial selection in which several useful traits are selected simultaneously. First, each trait that is going to be selected is assigned a weight – the importance of the trait. I.e., if you were selecting for both height and the coat darkness in dogs, if height were the more important of the two one would assign that a higher weighting. For instance, height's weighting could be ten and coat darkness could be one. This weighting value is then multiplied by the observed value in each individual animal and then the score for each of the characteristics is summed for each individual. This result is the index score and can be used to compare the worth of each organism being selected. Therefore, only those with the highest index score are selected for breeding via artificial selection.
This method has advantages over other methods of artificial selection, such as tandem selection, in that traits can be selected simultaneously rather than sequentially. No useful trait is excluded from selection at any one time, so none will stagnate or reverse while the breeder concentrates on improving another property of the organism. Its major disadvantage is that the weightings assigned to each characteristic are inherently hard to calculate precisely, and so require some trial and error before they become optimal for the breeder.
The selection index theory is well described in Erling Strandberg and Birgitte Malmfors's notes under the headings Genetic Evaluation.
Calculation of a selection index based on actual data can be carried out using an applet made by Knud Christensen. The applet can be found here |
https://en.wikipedia.org/wiki/Microbial%20biodegradation | Microbial biodegradation is the use of bioremediation and biotransformation methods to harness the naturally occurring ability of microbial xenobiotic metabolism to degrade, transform or accumulate environmental pollutants, including hydrocarbons (e.g. oil), polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), heterocyclic compounds (such as pyridine or quinoline), pharmaceutical substances, radionuclides and metals.
Interest in the microbial biodegradation of pollutants has intensified in recent years, and recent major methodological breakthroughs have enabled detailed genomic, metagenomic, proteomic, bioinformatic and other high-throughput analyses of environmentally relevant microorganisms, providing new insights into biodegradative pathways and the ability of organisms to adapt to changing environmental conditions.
Biological processes play a major role in the removal of contaminants and take advantage of the catabolic versatility of microorganisms to degrade or convert such compounds. In environmental microbiology, genome-based global studies are increasing the understanding of metabolic and regulatory networks, as well as providing new information on the evolution of degradation pathways and molecular adaptation strategies to changing environmental conditions.
Aerobic biodegradation of pollutants
The increasing amount of bacterial genomic data provides new opportunities for understanding the genetic and molecular bases of the degradation of organic pollutants. Aromatic compounds are among the most persistent of these pollutants and lessons can be learned from the recent genomic studies of Burkholderia xenovorans LB400 and Rhodococcus sp. strain RHA1, two of the largest bacterial genomes completely sequenced to date. These studies have helped expand our understanding of bacterial catabolism, non-catabolic physiological adaptation to organic compounds, and the evolution of large bacterial genomes. First, the metabolic pathways from phylogeneti |
https://en.wikipedia.org/wiki/Pentadin | Pentadin, a sweet-tasting protein, was discovered and isolated in 1989, in the fruit of Oubli (Pentadiplandra brazzeana ), a climbing shrub growing in some tropical countries of Africa.
The fruit has long been consumed by apes and local people. The berries are intensely sweet; African locals call them "j'oublie" (French for "I forget") because their taste helps nursing infants forget their mothers' milk.
Pentadin and brazzein, the latter discovered in 1994, are the two sweet-tasting proteins found in this African fruit.
Pentadin's molecular weight is estimated to be 12 kDa. It is reported to be 500 times sweeter than sucrose on a weight basis, with its sweetness having a slow onset and decline similar to monellin and thaumatin. However, pentadin's sweetness profile is closer to monellin's than to thaumatin's.
Six sweet-tasting proteins, all isolated from plants in tropical forests, are known: pentadin, thaumatin, monellin, mabinlin, brazzein, and curculin. They show no structural or sequence homology to one another.
Uses
The six sweet-tasting proteins can be used as natural low-calorie sweeteners to replace certain sugars. They may also be beneficial for the insulin response of people with diabetes.
See also
Brazzein
Mabinlin
Monellin
Thaumatin |
https://en.wikipedia.org/wiki/Differentiation%20of%20integrals | In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space X with a measure μ and a metric d, one asks for what functions f : X → R does
for all (or at least μ-almost all) x ∈ X? (Here, as in the rest of the article, Br(x) denotes the open ball in X with d-radius r and centre x.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that f(x) is a "good representative" for the values of f near x.
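The displayed limit that was stripped from the question above is, in the standard notation of the surrounding text (with μ the measure and B_r(x) the open ball), the following reconstruction:

```latex
\lim_{r \to 0} \frac{1}{\mu\big(B_r(x)\big)} \int_{B_r(x)} f \,\mathrm{d}\mu = f(x)
```

Taking X = R^n, μ = λ^n, and d the Euclidean metric recovers the Lebesgue differentiation theorem discussed below.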
Theorems on the differentiation of integrals
Lebesgue measure
One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider n-dimensional Lebesgue measure λn on n-dimensional Euclidean space Rn. Then, for any locally integrable function f : Rn → R, one has
for λn-almost all points x ∈ Rn. It is important to note, however, that the measure zero set of "bad" points depends on the function f.
Borel measures on Rn
The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if μ is any locally finite Borel measure on Rn and f : Rn → R is locally integrable with respect to μ, then
for μ-almost all points x ∈ Rn.
Gaussian measures
The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (H, ⟨ , ⟩) equipped with a Gaussian measure γ. As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting:
There is a Gaussian measure γ on a |
https://en.wikipedia.org/wiki/Speed%20of%20gravity | In classical theories of gravitation, the changes in a gravitational field propagate. A change in the distribution of energy and momentum of matter results in subsequent alteration, at a distance, of the gravitational field which it produces. In the relativistic sense, the "speed of gravity" refers to the speed of a gravitational wave, which, as predicted by general relativity and confirmed by observation of the GW170817 neutron star merger, is the same speed as the speed of light (c).
Introduction
The speed of gravitational waves in the general theory of relativity is equal to the speed of light in a vacuum, . Within the theory of special relativity, the constant is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend on the motion of either an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and further the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence the carrier of the electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if it exists, requires an as-yet unavailable theory of quantum gravity).
Static fields
The speed of physical changes in a gravitational or electromagnetic field should not be confused with "changes" in the behavior of static fields that are due to pure observer-effects. These changes in direction of a static field are, because of relativistic considerations, the same for an observer when a distant charge is moving, as when an observer (instead) decides to move with respect to a distant charge. Thus, constant motion of an observer with regard to a static charge and its extended static field (either a gravitati |
https://en.wikipedia.org/wiki/The%20Mathematical%20Experience | The Mathematical Experience (1981) is a book by Philip J. Davis and Reuben Hersh that discusses the practice of modern mathematics from a historical and philosophical perspective. The book discusses the psychology of mathematicians, and gives examples of famous proofs and outstanding problems. It goes on to speculate about what a proof really means, in relationship to actual truth. Other topics include mathematics in education and some of the math that occurs in computer science.
The first paperback edition won a U.S. National Book Award in Science. It is cited by some mathematicians as influential in their decision to continue their studies in graduate school; and has been hailed as a classic of mathematical literature.
On the other hand, Martin Gardner disagreed with some of the authors' philosophical opinions.
A new edition, published in 1995, includes exercises and problems, making the book more suitable for classrooms. There is also The Companion Guide to The Mathematical Experience, Study Edition. Both were co-authored with Elena Marchisotto. Davis and Hersh wrote a follow-up book, Descartes' Dream: The World According to Mathematics (Harcourt, 1986), and each has written other books with related themes, such as Mathematics And Common Sense: A Case of Creative Tension by Davis and What is Mathematics, Really? by Hersh.
Notes |
https://en.wikipedia.org/wiki/Stein%E2%80%93Str%C3%B6mberg%20theorem | In mathematics, the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator. The result is foundational in the study of the problem of differentiation of integrals. The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg.
Statement of the theorem
Let λn denote n-dimensional Lebesgue measure on n-dimensional Euclidean space Rn and let M denote the Hardy–Littlewood maximal operator: for a function f : Rn → R, Mf : Rn → R is defined by
where Br(x) denotes the open ball of radius r with center x. Then, for each p > 1, there is a constant Cp > 0 such that, for all natural numbers n and functions f ∈ Lp(Rn; R),
In general, a maximal operator M is said to be of strong type (p, p) if
for all f ∈ Lp(Rn; R). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type (p, p) uniformly with respect to the dimension n. |
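The stripped displays can be restored in the standard notation of the statement (a reconstruction, using the definitions already given in the text):

```latex
Mf(x) = \sup_{r > 0} \frac{1}{\lambda^n\big(B_r(x)\big)} \int_{B_r(x)} |f(y)| \,\mathrm{d}\lambda^n(y),
\qquad
\big\| Mf \big\|_{L^p(\mathbb{R}^n)} \le C_p\, \big\| f \big\|_{L^p(\mathbb{R}^n)},
```

where the point of the theorem is that the constant C_p depends only on p and not on the dimension n.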
https://en.wikipedia.org/wiki/Spinodal%20decomposition | Spinodal decomposition is a mechanism by which a single thermodynamic phase spontaneously separates into two phases (without nucleation). Decomposition occurs when there is no thermodynamic barrier to phase separation. As a result, phase separation via decomposition does not require the nucleation events resulting from thermodynamic fluctuations, which normally trigger phase separation.
Spinodal decomposition is observed when mixtures of metals or polymers separate into two co-existing phases, each rich in one species and poor in the other. When the two phases emerge in approximately equal proportion (each occupying about the same volume or area), characteristic intertwined structures are formed that gradually coarsen. The dynamics of spinodal decomposition is commonly modeled using the Cahn–Hilliard equation.
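In one commonly quoted dimensionless form (a standard variant, supplied here since the article does not state the equation itself), the Cahn–Hilliard equation for an order parameter c, e.g. the local concentration difference of the two species, reads:

```latex
\frac{\partial c}{\partial t} = D\,\nabla^{2}\!\left( c^{3} - c - \gamma\,\nabla^{2} c \right)
```

Here D is a diffusion coefficient and the square root of γ sets the width of the transition regions between the two phases; the c^3 − c term drives the mixture toward the two minima c = ±1, while the gradient term penalizes sharp interfaces and governs the coarsening.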
Spinodal decomposition is fundamentally different from nucleation and growth. When there is a nucleation barrier to the formation of a second phase, time is taken by the system to overcome that barrier. As there is no barrier (by definition) to spinodal decomposition, some fluctuations (in the order parameter that characterizes the phase) start growing instantly. Furthermore, in spinodal decomposition, the two distinct phases start growing in any location uniformly throughout the volume, whereas a nucleated phase change begins at a discrete number of points.
Spinodal decomposition occurs when a homogeneous phase becomes thermodynamically unstable. An unstable phase lies at a maximum in free energy. In contrast, nucleation and growth occur when a homogeneous phase becomes metastable. That is, another biphasic system would be lower in free energy, but the homogeneous phase remains at a local minimum in free energy, and so is resistant to small fluctuations. J. Willard Gibbs described two criteria for a metastable phase: that it must remain stable against a small change over a large area, and against a large change over a small area.
History
In the early 1940s, Bradley reported the observa |
https://en.wikipedia.org/wiki/Graduation%20%28scale%29 | A graduation is a marking used to indicate points on a visual scale, which can be present on a container, a measuring device, or the axes of a line plot, usually one of many along a line or curve, each in the form of short line segments perpendicular to the line or curve. Often, some of these line segments are longer and marked with a numeral, such as every fifth or tenth graduation. The scale itself can be linear (the graduations are spaced at a constant distance apart) or nonlinear.
Linear graduation of a scale occurs mainly (but not exclusively) on straight measuring devices, such as a rule or measuring tape, using units such as inches or millimetres.
Graduations can also be spaced at varying intervals, such as on a logarithmic scale; on a measuring cup, for instance, the spacing can vary because of the container's non-cylindrical shape.
Graduations along a curve
Circular graduations of a scale occur on a circular arc or limb of an instrument. In some cases, non-circular curves are graduated in instruments. A typical circular arc graduation is the division into angular measurements, such as degrees, minutes and seconds. These types of graduated markings are traditionally seen on devices ranging from compasses and clock faces to alidades found on such instruments as telescopes, theodolites, inclinometers, astrolabes, armillary spheres, and celestial spheres.
There can also be non-uniform graduations such as logarithmic or other scales such as seen on circular slide rules and graduated cylinders.
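The two spacing schemes can be sketched in a few lines; the lengths and tick values below are arbitrary examples, not taken from any particular instrument:

```python
import math

def linear_ticks(lo, hi, n):
    """Positions of n+1 evenly spaced graduations from lo to hi."""
    step = (hi - lo) / n
    return [lo + i * step for i in range(n + 1)]

def log_ticks(values, length):
    """Positions along a scale of the given length for values in [1, 10],
    as on one decade of a slide rule: x = length * log10(v)."""
    return [length * math.log10(v) for v in values]

# A 100 mm rule graduated every 10 mm: constant spacing.
linear = linear_ticks(0, 100, 10)            # [0.0, 10.0, ..., 100.0]
# A 100 mm slide-rule decade: graduations crowd together toward 10.
logarithmic = log_ticks([1, 2, 5, 10], 100)  # [0.0, ~30.1, ~69.9, 100.0]
```

The uneven output of `log_ticks` is exactly the crowding seen on circular slide rules and on graduated cylinders whose cross-section changes with height.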
Manufacture of graduations
Graduations can be placed on an instrument by etching, scribing or engraving, painting, printing or other means. For durability and accuracy, etched or scribed marks are usually preferable to surface coatings such as paints and inks. Markings can be a combination of both physical marks such as a scribed line and a paint or other marking material. For example, it is common for black ink or paint to fill the grooves cut in a scribed ru |
https://en.wikipedia.org/wiki/Roasting%20jack | A roasting jack is a machine which rotates meat roasting on a spit. It can also be called a spit jack, a spit engine or a turnspit, although this name can also refer to a human turning the spit, or a turnspit dog. Cooking meat on a spit dates back at least to the 1st century BC, but at first spits were turned by human power. In Britain, starting in the Tudor period, dog-powered turnspits were used; the dog ran in a treadmill linked to the spit by belts and pulleys. Other forms of roasting jacks included the steam jack, driven by steam, the smoke jack, driven by hot gas rising from the fire, and the bottle jack or clock jack, driven by weights or springs.
Weight or clock jacks
A great majority of the jacks used prior to the 19th century were powered by a descending weight, often made of stone or iron, sometimes of lead. Although most commonly referred to as spit engines or jacks, these were also termed weight or clock jacks (clock jacks was the more common term in North America).
Earlier jacks of this type had a train of two arbors (spindles), later ones had a more efficient three arbor train. In the case of British examples, almost without exception, the governor or flywheel was set above the engine (as opposed to being located within the frame) and to one side; to the right for two train and to the left for three train engines.
European jacks are characterised by a flywheel set centrally and often within the frame; commonly the highest part of the frame has a bell-like arch that the shaft for the flywheel passes through.
Steam jack
A steam-powered roasting jack was first described by the Ottoman polymath and engineer Taqi al-Din in his Al-Turuq al-samiyya fi al-alat al-ruhaniyya (The Sublime Methods of Spiritual Machines), in 1551. A steam-driven jack was patented by the American clockmaker John Bailey II in 1792, and steam jacks were later commercially available in the United States.
Smoke jack
Leonardo da Vinci sketched a smoke-jack in the form of a tu |
https://en.wikipedia.org/wiki/List%20of%20psilocybin%20mushroom%20species | Psilocybin mushrooms are mushrooms which contain the hallucinogenic substances psilocybin, psilocin, baeocystin and norbaeocystin. The mushrooms are collected and grown as an entheogen and recreational drug, despite being illegal in many countries. Many psilocybin mushrooms are in the genus Psilocybe, but species across several other genera contain the drugs.
General
Conocybe
Galerina
Gymnopilus
Inocybe
Panaeolus
Pholiotina
Pluteus
Psilocybe
Conocybe
Conocybe siligineoides R. Heim
Conocybe velutipes (Velen.) Hauskn. & Svrcek
Galerina
Galerina steglichii Besl
Gymnopilus
Gymnopilus aeruginosus (Peck) Singer (photo)
Gymnopilus braendlei (Peck) Hesler
Gymnopilus cyanopalmicola Guzm.-Dáv
Gymnopilus dilepis (Berk. & Broome) Singer
Gymnopilus dunensis H. Bashir, Jabeen & Khalid
Gymnopilus intermedius (Singer) Singer
Gymnopilus lateritius (Pat.) Murrill
Gymnopilus luteofolius (Peck) Singer (photo)
Gymnopilus luteoviridis Thiers (photo)
Gymnopilus luteus (Peck) Hesler (photo)
Gymnopilus palmicola Murrill
Gymnopilus purpuratus (Cooke & Massee) Singer (photo)
Gymnopilus subpurpuratus Guzmán-Davalos & Guzmán
Gymnopilus subspectabilis Hesler
Gymnopilus validipes (Peck) Hesler
Gymnopilus viridans Murrill
Inocybe
Inocybe aeruginascens Babos
Inocybe caerulata Matheny, Bougher & G.M. Gates
Inocybe coelestium Kuyper
Inocybe corydalina
Inocybe corydalina var. corydalina Quél.
Inocybe corydalina var. erinaceomorpha (Stangl & J. Veselsky) Kuyper
Inocybe haemacta (Berk. & Cooke) Sacc.
Inocybe tricolor Kühner
Most species in this genus are poisonous.
Panaeolus
Panaeolus affinis (E. Horak) Ew. Gerhardt
Panaeolus africanus Ola'h
Panaeolus axfordii Y. Hu, S.C. Karunarathna, P.E. Mortimer & J.C. Xu
Panaeolus bisporus (Malencon and Bertault) Singer and Weeks
Panaeolus cambodginiensis (Ola'h et Heim) Singer & Weeks. (Merlin & Allen, 1993)
Panaeolus chlorocystis (Singer & R.A. Weeks) Ew. Gerhardt
Panaeolus cinctulus (Bolton) Britzelm.
Panaeolus cyanescens (Berk. & Broome) Sacc.
Pan |
https://en.wikipedia.org/wiki/Radio-frequency%20engineering | Radio-frequency (RF) engineering is a subset of electronic engineering involving the application of transmission line, waveguide, antenna and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz.
It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, WiFi, and two-way radios.
RF engineering is a highly specialized field that typically includes the following areas of expertise:
Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna.
Design of coupling and transmission line structures to transport RF energy without radiation.
Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices.
Verification and measurement of performance of radio frequency devices and systems.
To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design.
Radio electronics
Radio electronics is concerned with electronic circuits which receive or transmit radio signals.
Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit.
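As a rough numerical illustration of why trace reactance matters at microwave frequencies, the sketch below computes the inductive reactance X_L = 2·pi·f·L of a short signal trace; the 1 nH inductance is an illustrative assumption, not a measured value.

```python
import math

def inductive_reactance(frequency_hz, inductance_h):
    """X_L = 2*pi*f*L, the reactance of a (mostly inductive) signal trace."""
    return 2 * math.pi * frequency_hz * inductance_h

L_trace = 1e-9  # ~1 nH, a rough figure for a short PCB trace (assumption)

x_at_1mhz = inductive_reactance(1e6, L_trace)    # negligible: ~0.006 ohm
x_at_10ghz = inductive_reactance(10e9, L_trace)  # ~63 ohm, comparable to a
                                                 # 50-ohm system impedance
print(f"{x_at_1mhz:.4f} ohm vs {x_at_10ghz:.1f} ohm")
```

The same trace that is electrically invisible at 1 MHz presents an impedance larger than a typical 50-ohm system at 10 GHz, which is why layout becomes part of the circuit design.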
List of radio electronics topics:
RF oscillators: Phase-locked loop, voltage-controlled oscillator
Tr |
https://en.wikipedia.org/wiki/Bhaskara%27s%20lemma | Bhaskara's Lemma is an identity used as a lemma during the chakravala method. It states that: given that Nx^2 + k = y^2, then
N((mx + y)/k)^2 + (m^2 - N)/k = ((my + Nx)/k)^2
for integers m, x, y, N and non-zero integer k.
Proof
The proof follows from simple algebraic manipulations as follows: multiply both sides of Nx^2 + k = y^2 by m^2, add N^2x^2 + 2Nmxy + Ny^2 to both sides, factor, and divide by k^2.
So long as neither k nor m^2 - N is zero, the implication goes in both directions. (The lemma holds for real or complex numbers as well as integers.) |
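The lemma can be checked numerically. The sketch below, an illustrative assumption rather than part of the source, applies one step to the classic N = 61 chakravala starting point 61·1² + 3 = 8² and verifies that the new triple again satisfies the same equation.

```python
from fractions import Fraction

def bhaskara_step(N, x, y, k, m):
    """Apply Bhaskara's lemma: from N*x^2 + k = y^2 produce a new triple."""
    x2 = Fraction(m * x + y, k)
    y2 = Fraction(m * y + N * x, k)
    k2 = Fraction(m * m - N, k)
    return x2, y2, k2

# Start from 61*1^2 + 3 = 8^2 (the classic N = 61 example)
N, x, y, k = 61, 1, 8, 3
assert N * x * x + k == y * y

# The lemma guarantees the new triple again satisfies N*x^2 + k = y^2.
# Choosing m = 7 happens to keep all values integral here.
x2, y2, k2 = bhaskara_step(N, x, y, k, m=7)
assert N * x2 * x2 + k2 == y2 * y2
print(x2, y2, k2)  # 5 39 -4, i.e. 61*5^2 - 4 = 39^2
```

Iterating such steps with well-chosen m is exactly the chakravala method for solving Pell's equation.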
https://en.wikipedia.org/wiki/Horosphere | In hyperbolic geometry, a horosphere (or parasphere) is a specific hypersurface in hyperbolic n-space. It is the boundary of a horoball, the limit of a sequence of increasing balls sharing (on one side) a tangent hyperplane and its point of tangency. For n = 2 a horosphere is called a horocycle.
A horosphere can also be described as the limit of the hyperspheres that share a tangent hyperplane at a given point, as their radii go towards infinity. In Euclidean geometry, such a "hypersphere of infinite radius" would be a hyperplane, but in hyperbolic geometry it is a horosphere (a curved surface).
History
The concept has its roots in a notion expressed by F. L. Wachter in 1816 in a letter to his teacher Gauss. Noting that in Euclidean geometry the limit of a sphere as its radius tends to infinity is a plane, Wachter affirmed that even if the fifth postulate were false, there would nevertheless be a geometry on the surface identical with that of the ordinary plane. The terms horosphere and horocycle are due to Lobachevsky, who established various results showing that the geometry of horocycles and the horosphere in hyperbolic space were equivalent to those of lines and the plane in Euclidean space. The term "horoball" is due to William Thurston, who used it in his work on hyperbolic 3-manifolds. The terms horosphere and horoball are often used in 3-dimensional hyperbolic geometry.
Models
In the conformal ball model, a horosphere is represented by a sphere tangent to the horizon sphere. In the upper half-space model, a horosphere can appear either as a sphere tangent to the horizon plane, or as a plane parallel to the horizon plane. In the hyperboloid model, a horosphere is represented by a plane whose normal lies in the asymptotic cone.
Curvature
A horosphere has a critical amount of (isotropic) curvature: if the curvature were any greater, the surface would be able to close, yielding a sphere, and if the curvature were any less, the surface would be an (N − |
https://en.wikipedia.org/wiki/Arctic%20realm | The Arctic realm is one of the planet's twelve marine realms, as designated by the WWF and Nature Conservancy. It includes the coastal regions and continental shelves of the Arctic Ocean and adjacent seas, including the Arctic Archipelago, Hudson Bay, and the Labrador Sea of northern Canada, the seas surrounding Greenland, the northern and eastern coasts of Iceland, and the eastern Bering Sea.
The Arctic realm transitions to the Temperate Northern Atlantic realm in the Atlantic Basin, and the Temperate Northern Pacific realm in the Pacific Basin.
Ecoregions
The Arctic realm is further subdivided into 19 marine ecoregions:
North Greenland
North and East Iceland
East Greenland Shelf
West Greenland Shelf
Northern Grand Banks-Southern Labrador
Northern Labrador
Baffin Bay-Davis Strait
Hudson Complex
Lancaster Sound
High Arctic Archipelago
Beaufort-Amundsen-Viscount Melville-Queen Maud
Beaufort Sea-continental coast and shelf
Chukchi Sea
Eastern Bering Sea
East Siberian Sea
Laptev Sea
Kara Sea
North and East Barents Sea
White Sea |
https://en.wikipedia.org/wiki/Chakragati%20mouse | Chakragati mouse (ckr) is an insertional transgenic mouse mutant (Mus musculus) displaying hyperactive behaviour and circling. It is also deficient in prepulse inhibition, latent inhibition and has brain abnormalities such as lateral ventricular enlargement that are typical to endophenotypic models of schizophrenia, which make it useful in screening for antipsychotic drug candidates. The mouse is currently licensed by Chakra Biotech. |
https://en.wikipedia.org/wiki/Ponseti%20method | The Ponseti method is a manipulative technique that corrects congenital clubfoot without invasive surgery. It was developed by Ignacio V. Ponseti of the University of Iowa Hospitals and Clinics, US, in the 1950s, and was repopularized in 2000 by John Herzenberg in the US and Europe and in Africa by NHS surgeon Steve Mannion. It is a standard treatment for clubfoot.
Description
Ponseti treatment was introduced in the UK in the late 1990s and widely popularized around the country by NHS physiotherapist Steve Wildon.
The manipulative treatment of club foot deformity is based on the inherent properties of the connective tissue, cartilage, and bone, which respond to the proper mechanical stimuli created by the gradual reduction of the deformity. The ligaments, joint capsules, and tendons are stretched under gentle manipulations. A plaster cast is applied after each manipulation to retain the degree of correction and soften the ligaments. The displaced bones are thus gradually brought into the correct alignment with their joint surfaces progressively remodeled yet maintaining congruency. After two months of manipulation and casting the foot appears slightly over-corrected. After a few weeks in splints, however, the foot looks normal.
Proper foot manipulations require a thorough understanding of the anatomy and kinematics of the normal foot and of the deviations of the tarsal bones in the clubfoot. Poorly conducted manipulations will further complicate the clubfoot deformity. The non-operative treatment will succeed better if it is started a few days or weeks after birth and if the podiatrist understands the nature of the deformity and possesses manipulative skill and expertise in plaster-cast applications.
The Ponseti technique is painless, fast, cost-effective and successful in almost 100% of all congenital clubfoot cases. The Ponseti method is endorsed and supported by the World Health Organization, National Institutes of Health, American Academy of Orthopedic Surgeons, |
https://en.wikipedia.org/wiki/Relative%20volatility | Relative volatility is a measure comparing the vapor pressures of the components in a liquid mixture of chemicals. This quantity is widely used in designing large industrial distillation processes. In effect, it indicates the ease or difficulty of using distillation to separate the more volatile components from the less volatile components in a mixture. By convention, relative volatility is usually denoted as α.
Relative volatilities are used in the design of all types of distillation processes as well as other separation or absorption processes that involve the contacting of vapor and liquid phases in a series of equilibrium stages.
Relative volatilities are not used in separation or absorption processes that involve components reacting with each other (for example, the absorption of gaseous carbon dioxide in aqueous solutions of sodium hydroxide).
Definition
For a liquid mixture of two components (called a binary mixture) at a given temperature and pressure, the relative volatility is defined as α = (y_a/x_a)/(y_b/x_b) = K_a/K_b, where y is the vapor-phase mole fraction, x is the liquid-phase mole fraction, K = y/x is the vapor-liquid equilibrium ratio of a component, and a denotes the more volatile component.
When their liquid concentrations are equal, more volatile components have higher vapor pressures than less volatile components. Thus, a K value (= y/x) for a more volatile component is larger than a K value for a less volatile component. That means that α ≥ 1, since the larger K value of the more volatile component is in the numerator and the smaller K value of the less volatile component is in the denominator.
α is a unitless quantity. When the volatilities of both key components are equal, α = 1 and separation of the two by distillation would be impossible under the given conditions because the compositions of the liquid and the vapor phase are the same (azeotrope). As the value of α increases above 1, separation by distillation becomes progressively easier.
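For a nearly ideal mixture obeying Raoult's law, the relative volatility reduces to the ratio of pure-component saturation pressures. The sketch below estimates it for benzene/toluene; the Antoine constants are common textbook values (mmHg, deg C basis) used here as assumptions, not authoritative data.

```python
def antoine_p_sat(A, B, C, T):
    """Antoine equation: saturation pressure in mmHg at temperature T (deg C)."""
    return 10 ** (A - B / (C + T))

# Illustrative textbook Antoine constants (assumed, mmHg / deg C basis)
benzene = (6.90565, 1211.033, 220.790)
toluene = (6.95464, 1344.800, 219.482)

T = 90.0  # deg C
p1 = antoine_p_sat(*benzene, T)
p2 = antoine_p_sat(*toluene, T)

# For an ideal mixture, K_i = P_i_sat / P_total, so alpha = P1_sat / P2_sat.
alpha = p1 / p2
print(f"alpha(benzene/toluene) at {T} C ~ {alpha:.2f}")  # ~2.5
```

An alpha of roughly 2.5 indicates that benzene and toluene are comfortably separable by ordinary distillation.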
A liquid mixture containing two components is called a binary mixture. When a binary mixture is distilled, complete separation of the two components is rarely achieved. Typically, the overhead fraction from the distillation |
https://en.wikipedia.org/wiki/Tensilica | Tensilica Inc. was a company based in Silicon Valley in the semiconductor intellectual property core business. It is now a part of Cadence Design Systems.
Tensilica is known for its customizable Xtensa microprocessor core. Other products include: HiFi audio/voice DSPs (digital signal processors) with a software library of over 225 codecs from Cadence and over 100 software partners; Vision DSPs that handle complex algorithms in imaging, video, computer vision, and neural networks; and the ConnX family of baseband DSPs ranging from the dual-MAC ConnX D2 to the 64-MAC ConnX BBE64EP.
Tensilica was founded in 1997 by Chris Rowen (one of the founders of MIPS Technologies). It employed Earl Killian, who contributed to the MIPS architecture, as director of architecture. On March 11, 2013, Cadence Design Systems announced its intent to buy Tensilica for approximately $380 million in cash. Cadence completed the acquisition in April 2013, with a cash outlay at closing of approximately $326 million.
Cadence Tensilica products
Cadence Tensilica develops SIP blocks to be included on the chip (IC) designs of products of their licensees, such as system on a chip for embedded systems. Tensilica processors are delivered as synthesizable RTL for easy integration into chip designs.
Xtensa configurable cores
Xtensa processors range from small, low-power cache-less microcontrollers to high-performance 16-way SIMD processors, 3-issue VLIW DSP cores, or 1 TMAC/sec neural network processors. All Cadence standard DSPs are based on the Xtensa architecture. The Xtensa architecture offers a user-customizable instruction set through automated customization tools that can extend the Xtensa base instruction set with, for example, SIMD instructions and new register files.
Xtensa instruction set
The Xtensa instruction set is a 32-bit architecture with a compact 16- and 24-bit instruction set. The base instruction set has 82 RISC instructions and includes a 32-bit ALU, 16 general-purpose 32-bit registers |
https://en.wikipedia.org/wiki/Hatsune%20Miku | Hatsune Miku, officially code-named CV01, is a Vocaloid software voicebank developed by Crypton Future Media and its official anthropomorphic mascot character, a 16-year-old girl with long, turquoise twintails. Miku's personification has been marketed as a virtual idol, and has performed at live virtual concerts onstage as an animated projection (rear-cast projection on a specially coated glass screen).
Miku uses Yamaha Corporation's Vocaloid 2, Vocaloid 3, and Vocaloid 4 singing synthesizing technologies. She also uses Crypton Future Media's Piapro Studio, a standalone singing synthesizer editor. She was the second Vocaloid sold using the Vocaloid 2 engine and the first Japanese Vocaloid to use the Japanese version of the 2 engine. The voice is modeled from Japanese voice actress Saki Fujita.
The name of the character comes from merging the Japanese words for "first" (hatsu), "sound" (ne), and "future" (miku), thus meaning "the first sound of the future", which, along with her code name, refers to her position as the first of Crypton's "Character Vocal Series" (abbreviated "CV Series"), preceding Kagamine Rin/Len (code-named CV02) and Megurine Luka (code-named CV03). The number 01 can also be seen on her left shoulder in official artwork.
Development
Hatsune Miku was the first Vocaloid developed by Crypton Future Media after they handled the release of the Yamaha vocals Meiko and Kaito. Miku was intended to be the first of a series of Vocaloids called the "Character Vocal Series" (abbreviated "CV Series"), which included Kagamine Rin/Len and Megurine Luka. Each had a particular concept and vocal direction.
She was built using Yamaha's Vocaloid 2 technology, and later updated to newer engine versions. She was created by taking vocal samples from voice actress Saki Fujita at a controlled pitch and tone. Those samples all contain a single Japanese phonic that, when strung together, creates full lyrics and phrases. The pitch of the samples was to be altered by the synthesizer engine and constructed into a keyboard-s |
https://en.wikipedia.org/wiki/Quantum%20spin%20Hall%20effect | The quantum spin Hall state is a state of matter proposed to exist in special, two-dimensional semiconductors that have a quantized spin-Hall conductance and a vanishing charge-Hall conductance. The quantum spin Hall state of matter is the cousin of the integer quantum Hall state, but does not require the application of a large magnetic field. The quantum spin Hall state does not break charge conservation symmetry and spin-S_z conservation symmetry (in order to have well-defined Hall conductances).
Description
The first proposal for the existence of a quantum spin Hall state was developed by Charles Kane and Gene Mele who adapted an earlier model for graphene by F. Duncan M. Haldane which exhibits an integer quantum Hall effect. The Kane and Mele model is two copies of the Haldane model such that the spin up electron exhibits a chiral integer quantum Hall effect while the spin down electron exhibits an anti-chiral integer quantum Hall effect. A relativistic version of the quantum spin Hall effect was introduced in the 1990s for the numerical simulation of chiral gauge theories; the simplest example consisting of a parity and time reversal symmetric U(1) gauge theory with bulk fermions of opposite sign mass, a massless Dirac surface mode, and bulk currents that carry chirality but not charge (the spin Hall current analogue). Overall the Kane-Mele model has a charge-Hall conductance of exactly zero but a spin-Hall conductance of exactly 2 (in units of e/4π). Independently, a quantum spin Hall model was proposed by Andrei Bernevig and Shoucheng Zhang in an intricate strain architecture which engineers, due to spin-orbit coupling, a magnetic field pointing upwards for spin-up electrons and a magnetic field pointing downwards for spin-down electrons. The main ingredient is the existence of spin–orbit coupling, which can be understood as a momentum-dependent magnetic field coupling to the spin of the electron.
Real experimental systems, however, are far from the idealized |
https://en.wikipedia.org/wiki/Pac-Man%20%28character%29 | Pac-Man is a fictional character and the titular protagonist of the video game franchise of the same name. Created by Toru Iwatani, he first appeared in the arcade game Pac-Man (1980), and has since appeared in more than 30 licensed sequels and spin-offs for multiple platforms, spawning a mass of merchandise in his image, including two television series and a hit single by Buckner & Garcia. He is the official mascot of Bandai Namco Entertainment. Pac-Man's most common antagonists are the Ghost Gang (Blinky, Pinky, Inky and Clyde), who are determined to defeat him to accomplish their goals, which change throughout the series. Pac-Man also has a voracious appetite, being able to consume vast amounts of food in a short timespan, and can eat his enemies by consuming large "Power Pellets".
The idea of Pac-Man was taken from both the image of a pizza with a slice removed and from rounding out the Japanese symbol "kuchi", meaning "mouth". The character was made to be cute and colorful to appeal to younger players, particularly women. In Japan, he was titled "Puckman" for his hockey puck-like shape, which was changed in international releases to prevent defacement of the arcade cabinets by changing the P into an F.
Pac-Man has the highest brand awareness of any video game character in North America, becoming an icon in video games and pop culture. He is credited as the first video game mascot character and the first to receive merchandise. He also appears as a playable guest character in some other games, most notably in the Super Smash Bros. series (specifically in the fourth and fifth installments) and in the Ridge Racer series.
Character design
Pac-Man's origins are debated. According to the character's creator Toru Iwatani, the inspiration was pizza without a slice, which gave him a vision of "an animated pizza, racing through a maze and eating things with its absent-slice mouth". However, he said in a 1986 interview that the design of the character also came from |
https://en.wikipedia.org/wiki/Leu-enkephalin | Leu-enkephalin is an endogenous opioid peptide neurotransmitter with the amino acid sequence Tyr-Gly-Gly-Phe-Leu that is found naturally in the brains of many animals, including humans. It is one of the two forms of enkephalin; the other is met-enkephalin. The tyrosine residue at position 1 is thought to be analogous to the 3-hydroxyl group on morphine. Leu-enkephalin has agonistic actions at both the μ- and δ-opioid receptors, with significantly greater preference for the latter. It has little to no effect on the κ-opioid receptor.
See also
Met-enkephalin |
https://en.wikipedia.org/wiki/Big%20dynorphin | Big dynorphin is an endogenous opioid peptide of the dynorphin family that is composed of both dynorphin A and dynorphin B. Big dynorphin has the amino acid sequence: Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Ile-Arg-Pro-Lys-Leu-Lys-Trp-Asp-Asn-Gln-Lys-Arg-Tyr-Gly-Gly-Phe-Leu-Arg-Arg-Gln-Phe-Lys-Val-Val-Thr. It has nociceptive and anxiolytic-like properties, as well as effects on memory in mice.
Big dynorphin is a principal endogenous agonist at the human kappa-opioid receptor. |
https://en.wikipedia.org/wiki/Dense%20regular%20connective%20tissue | Dense regular connective tissue (DRCT) provides connection between different tissues in the human body. The collagen fibers in dense regular connective tissue are bundled in a parallel fashion. DRCT is divided into white fibrous connective tissue and yellow fibrous connective tissue, both of which occur in two forms: cord arrangement and sheath arrangement.
In cord arrangement, bundles of collagen and matrix are distributed in regular alternate patterns. In sheath arrangement, collagen bundles and matrix are distributed in irregular patterns, sometimes in the form of a network. It is similar to areolar tissue, but in DRCT elastic fibers are completely absent.
Structures formed
An example of their use is in tendons, which connect muscle to bone and derive their strength from the regular, longitudinal arrangement of bundles of collagen fibers.
Ligaments bind bone to bone and are similar in structure to tendons.
Aponeuroses are layers of flat, broad tendons that join muscles and the body parts the muscles act upon, whether it be bone or muscle.
Functions
Dense regular connective tissue has great tensile strength that resists pulling forces especially well in one direction.
DRCT has a very poor blood supply, which is why damaged tendons and ligaments are slow to heal. |
https://en.wikipedia.org/wiki/Category%20algebra | In category theory, a field of mathematics, a category algebra is an associative algebra, defined for any locally finite category and commutative ring with unity. Category algebras generalize the notions of group algebras and incidence algebras, just as categories generalize the notions of groups and partially ordered sets.
Definition
If the given category is finite (has finitely many objects and morphisms), then the following two definitions of the category algebra agree.
Group algebra-style definition
Given a group G and a commutative ring R, one can construct RG, known as the group algebra; it is an R-module equipped with a multiplication. A group is the same as a category with a single object in which all morphisms are isomorphisms (where the elements of the group correspond to the morphisms of the category), so the following construction generalizes the definition of the group algebra from groups to arbitrary categories.
Let C be a category and R be a commutative ring with unity. Define RC (or R[C]) to be the free R-module with the set of morphisms of C as its basis. In other words, RC consists of formal linear combinations (which are finite sums) of the form a_1 f_1 + a_2 f_2 + ... + a_n f_n, where the f_i are morphisms of C, and the a_i are elements of the ring R. Define a multiplication operation on RC as follows, using the composition operation in the category:
f · g = f ∘ g, where f · g = 0 if their composition is not defined. This defines a binary operation on RC, and moreover makes RC into an associative algebra over the ring R. This algebra is called the category algebra of C.
From a different perspective, elements of the free module RC could also be considered as functions from the morphisms of C to R which are finitely supported. Then the multiplication is described by a convolution: if a, b ∈ RC (thought of as functionals on the morphisms of C), then their product is defined as (a · b)(h) = Σ_{h = f ∘ g} a(f) b(g).
The latter sum is finite because the functions are finitely supported, and therefore a · b ∈ RC.
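The convolution product can be sketched concretely for a tiny category. The example below, an illustrative construction not taken from the source, uses the two-object poset 0 ≤ 1 viewed as a category with three morphisms, represents elements of ZC as dicts mapping morphisms to integer coefficients, and sends undefined composites to zero as in the definition.

```python
# Category: the poset {0 <= 1} with three morphisms; f : 0 -> 1.
morphisms = ["id0", "id1", "f"]
src = {"id0": 0, "id1": 1, "f": 0}
tgt = {"id0": 0, "id1": 1, "f": 1}

def compose(g, h):
    """Return g∘h (h first, then g), or None if tgt(h) != src(g)."""
    if tgt[h] != src[g]:
        return None
    if g.startswith("id"):
        return h
    if h.startswith("id"):
        return g
    return None  # no other composites exist in this small category

def multiply(a, b):
    """Convolution product on the free module Z<morphisms>."""
    out = {}
    for g, cg in a.items():
        for h, ch in b.items():
            gh = compose(g, h)
            if gh is not None:  # pairs with undefined composition contribute 0
                out[gh] = out.get(gh, 0) + cg * ch
    return out

# (id0 + f) * (id0 + id1) = id0 + f : only defined composites survive
print(multiply({"id0": 1, "f": 1}, {"id0": 1, "id1": 1}))
```

For this poset the resulting algebra is the incidence algebra of {0 ≤ 1}, i.e. upper-triangular 2×2 matrices over Z.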
Incidence algebra-style definition
The definition use |
https://en.wikipedia.org/wiki/Venous%20return | Venous return is the rate of blood flow back to the heart. It normally limits cardiac output.
Superposition of the cardiac function curve and venous return curve is used in one hemodynamic model.
Physiology
Venous return (VR) is the flow of blood back to the heart. Under steady-state conditions, venous return must equal cardiac output (Q) when averaged over time, because the cardiovascular system is essentially a closed loop. Otherwise, blood would accumulate in either the systemic or pulmonary circulations. Although cardiac output and venous return are interdependent, each can be independently regulated.
The circulatory system is made up of two circulations (pulmonary and systemic) situated in series between the right ventricle (RV) and left ventricle (LV). Balance is achieved, in large part, by the Frank–Starling mechanism. For example, if systemic venous return is suddenly increased (e.g., changing from upright to supine position), right ventricular preload increases leading to an increase in stroke volume and pulmonary blood flow. The left ventricle experiences an increase in pulmonary venous return, which in turn increases left ventricular preload and stroke volume by the Frank–Starling mechanism. In this way, an increase in venous return can lead to a matched increase in cardiac output.
Venous return curve
Hemodynamically, venous return (VR) to the heart from the venous vascular beds is determined by a pressure gradient (venous pressure minus right atrial pressure) and venous resistance (R_V). Therefore, increases in venous pressure or decreases in right atrial pressure or venous resistance will lead to an increase in venous return, except when changes are brought about by altered body posture. Although the above relationship is true for the hemodynamic factors that determine the flow of blood from the veins back to the heart, it is important not to lose sight of the fact that blood flow through the entire systemic circulation represents both the cardiac |
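The pressure-gradient relation described above can be sketched as a one-line calculation: VR = (P_v - P_ra) / R_v. The numbers below are illustrative resting values (mean systemic filling pressure about 7 mmHg, right atrial pressure near 0), not physiological reference data.

```python
def venous_return(p_venous, p_right_atrial, r_venous):
    """Venous return = pressure gradient divided by venous resistance."""
    return (p_venous - p_right_atrial) / r_venous

# Illustrative resting values (assumptions): P_v = 7 mmHg, P_ra = 0 mmHg,
# venous resistance = 1.4 mmHg*min/L.
vr = venous_return(7.0, 0.0, 1.4)
print(f"VR = {vr:.1f} L/min")  # 5.0 L/min, on the order of resting cardiac output
```

Raising right atrial pressure toward the venous pressure in this relation drives venous return toward zero, which is the basis of the venous return curve.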
https://en.wikipedia.org/wiki/Nachman%20Aronszajn | Nachman Aronszajn (26 July 1907 – 5 February 1980) was a Polish American mathematician. Aronszajn's main field of study was mathematical analysis, where he systematically developed the concept of reproducing kernel Hilbert space. He also contributed to mathematical logic.
Life
An Ashkenazi Jew, Aronszajn received his Ph.D. from the University of Warsaw, in 1930, in Poland. Stefan Mazurkiewicz was his thesis advisor. He also received a Ph.D. from Paris University, in 1935; this time Maurice Fréchet was his thesis advisor. He joined the Oklahoma State University faculty, but moved to the University of Kansas in 1951 with his colleague Ainsley Diamond after Diamond, a Quaker, was fired for refusing to sign a newly instituted loyalty oath. Aronszajn retired in 1977. He was a Summerfield Distinguished Scholar from 1964 to his death.
Work
He introduced, together with Prom Panitchpakdi, injective metric spaces under the name of "hyperconvex metric spaces". Together with Kennan T. Smith, Aronszajn offered proof of the Aronszajn–Smith theorem. Also, the existence of Aronszajn trees was proven by Aronszajn; Aronszajn lines, also named after him, are the lexicographic orderings of Aronszajn trees.
He also made a contribution to the theory of reproducing kernel Hilbert space. The Moore–Aronszajn theorem is named after him. |
https://en.wikipedia.org/wiki/Tropical%20Atlantic | The Tropical Atlantic realm is one of twelve marine realms that cover the world's coastal seas and continental shelves.
The Tropical Atlantic covers both sides of the Atlantic. In the western Atlantic, it extends from Bermuda, southern Florida, and the southern Gulf of Mexico through the Caribbean and along South America's Atlantic coast to Cape Frio in Brazil's Rio de Janeiro state. In the Eastern Atlantic, it extends along the African coast from Cape Blanco in Mauritania to the Tigres Peninsula on the coast of Angola. It also includes the seas around St. Helena and Ascension islands.
The Tropical Atlantic is bounded on the north and south by temperate ocean realms. The Temperate Northern Atlantic realm lies to the north on both the North American and African-European shores of the Atlantic. To the south, the ocean realms conform to the continental margins, not the ocean basins; the Temperate South America realm lies to the south along the South American coast, and the Temperate Southern Africa realm lies to the south along the African coast.
Marine provinces
The Tropical Atlantic realm is divided into six marine provinces, which are in turn divided into 25 marine ecoregions.
Tropical Northwestern Atlantic
Bermuda
Bahamian
Eastern Caribbean
Greater Antilles
Southern Caribbean
Southwestern Caribbean
Western Caribbean
Southern Gulf of Mexico
Floridian
North Brazil Shelf
Guianian
Amazonia
Tropical Southwestern Atlantic
Sao Pedro and Sao Paulo Islands
Fernando de Noronha and Atol das Rocas
Northeastern Brazil
Eastern Brazil
Trindade and Martin Vaz Islands
Saint Helena, Ascension and Tristan da Cunha Islands
Saint Helena, Ascension and Tristan da Cunha Islands
West African Transition
Cape Verde
Sahelian Upwelling
Gulf of Guinea
Gulf of Guinea West
Gulf of Guinea Upwelling
Gulf of Guinea Central
Gulf of Guinea Islands
Gulf of Guinea South
Angolan |
https://en.wikipedia.org/wiki/Sergei%20Alexander%20Schelkunoff | Sergei Alexander Schelkunoff (; January 27, 1897 – May 2, 1992), who published as S. A. Schelkunoff, was a distinguished mathematician, engineer and electromagnetism theorist who made noted contributions to antenna theory.
Biography
Schelkunoff was born in Samara, Russia in 1897, attended the University of Moscow before being drafted in 1917. He crossed Siberia into Manchuria and then Japan before settling in Seattle in 1921. There he received bachelor's and master's degrees in mathematics from the State College of Washington, now Washington State University, and in 1928 received his Ph.D. from Columbia University for his dissertation On Certain Properties of the Metrical and Generalized Metrical Groups in Linear Spaces of Dimension.
After receiving his degree, Schelkunoff joined Western Electric's research wing, which became Bell Laboratories. In 1933 he and Sally P. Mead began analysis of waveguide propagation discovered analytically by their colleague George C. Southworth. Their analysis uncovered the transverse modes. Schelkunoff appears to have been the first to notice the important practical consequences of the fact that attenuation in the TE01 mode decays inversely with the 3/2 power of the frequency. In 1935 he and his colleagues reported that coaxial cable, then new, could transmit television pictures or up to 200 telephone conversations.
During his 35-year career at Bell Labs, Schelkunoff's research included radar, electromagnetic wave propagation in the atmosphere and in microwave guides, short-wave radio, broad-band antennas, and grounding. He taught for five years at Columbia University, and later served as assistant director of mathematical research and assistant vice president for university relations. He retired from Columbia U. in 1965, and served as a consultant on magnetrons for the United States Naval Station at San Diego.
Schelkunoff received 15 patents, the IEEE Morris N. Liebmann Memorial Award from the Institute of Radio Engineers (1942), a |
https://en.wikipedia.org/wiki/Vincotto | Vincotto () is a dark, sweet, thick paste produced in rural areas of Italy. It is made by the slow cooking and reduction over many hours of non-fermented grape must until it has been reduced to about one-fifth of its original volume and the sugars present have caramelized. It can be made from a number of varieties of local red wine grapes, including Primitivo, Negroamaro and Malvasia Nera, and before the grapes are picked they are allowed to wither naturally on the vine for about thirty days. In Roman times it was known as sapa in Latin and epsima in Greek, the same names that are often used for it in Italy and Cyprus, respectively, today.
The paste is made in the Emilia Romagna, Veneto, Lombardy, Apulia, Basilicata, Sardinia and Marche regions of Italy.
Description
Although it may be used as a basis to make sweet vinegar, vincotto has a pleasant flavor and is not a type of vinegar. This additional product is called a Vinegar of Vincotto, Vincotto vinegar, or Vincotto balsamic and can be used in the same way as a good mellow Balsamic vinegar.
Vincotto appears to be related to defrutum and other forms of grape juice boiled down to varying strengths (carenum, sapa) that were produced in Ancient Rome. Defrutum was used to preserve, sweeten, and/or flavor many foods (including wine), by itself or with honey or garum. Defrutum was also consumed as a drink when diluted with water, or fermented into a heady Roman "wine". (Note: defrutum should not be confused with passum, a wine made from fermented raisins that originated in ancient Carthage and was popular in ancient Rome. Passum was therefore more similar to modern Vin Santo than to vincotto.)
Over many centuries, the vincotto produced in Basilicata and the Salento area of Apulia was further developed into several different varieties of higher quality and culinary sophistication; it is produced by slowly reducing together a blend of cooked grape must and a wine that has started to spoil and sour, attainin
https://en.wikipedia.org/wiki/Ideotype | In systematics, an ideotype is a specimen identified as belonging to a specific taxon by the author of that taxon, but collected from somewhere other than the type locality.
The concept of ideotype in plant breeding was introduced by Donald in 1968 to describe the idealized appearance of a plant variety. It literally means 'a form denoting an idea'. According to Donald, an ideotype is a biological model which is expected to perform or behave in a particular manner within a defined environment: "a crop ideotype is a plant model, which is expected to yield a greater quantity or quality of grain, oil or other useful product when developed as a cultivar." Donald and Hamblin (1976) proposed the concepts of isolation, competition and crop ideotypes. Market ideotype, climatic ideotype, edaphic ideotype, stress ideotype and disease/pest ideotypes are its other concepts. The term ideotype has the following synonyms: model plant type, ideal model plant type and ideal plant type.
The term is also used in cognitive science and cognitive psychology, where Ronaldo Vigo (2011, 2013, 2014) introduced it to refer to a type of concept metarepresentation that is a compound memory trace consisting of the structural information detected by humans in categorical stimuli.
Notes
Molecular biology
Botanical nomenclature |
https://en.wikipedia.org/wiki/Babu%C5%A1ka%E2%80%93Lax%E2%80%93Milgram%20theorem | In mathematics, the Babuška–Lax–Milgram theorem is a generalization of the famous Lax–Milgram theorem, which gives conditions under which a bilinear form can be "inverted" to show the existence and uniqueness of a weak solution to a given boundary value problem. The result is named after the mathematicians Ivo Babuška, Peter Lax and Arthur Milgram.
Background
In the modern, functional-analytic approach to the study of partial differential equations, one does not attempt to solve a given partial differential equation directly, but by using the structure of the vector space of possible solutions, e.g. a Sobolev space W^(k,p). Abstractly, consider two real normed spaces U and V with their continuous dual spaces U∗ and V∗ respectively. In many applications, U is the space of possible solutions; given some partial differential operator Λ : U → V∗ and a specified element f ∈ V∗, the objective is to find a u ∈ U such that Λu = f.
However, in the weak formulation, this equation is only required to hold when "tested" against all other possible elements of V. This "testing" is accomplished by means of a bilinear function B : U × V → R which encodes the differential operator Λ; a weak solution to the problem is to find a u ∈ U such that B(u, v) = ⟨f, v⟩ for all v ∈ V.
The achievement of Lax and Milgram in their 1954 result was to specify sufficient conditions for this weak formulation to have a unique solution that depends continuously upon the specified datum f ∈ V∗: it suffices that U = V is a Hilbert space, that B is continuous, and that B is strongly coercive, i.e.
B(u, u) ≥ c‖u‖²
for some constant c > 0 and all u ∈ U.
For example, in the solution of the Poisson equation on a bounded, open domain Ω ⊂ Rn,
−Δu(x) = f(x), x ∈ Ω,
the space U could be taken to be the Sobolev space H01(Ω) with dual H−1(Ω); the former is a subspace of the Lp space V = L2(Ω); the bilinear form B associated to −Δ is the L2(Ω) inner product of the derivatives:
B(u, v) = ∫Ω ∇u(x) · ∇v(x) dx.
Hence, the weak formulation of the Poisson equation, given f ∈ L2(Ω), is to find uf such that
∫Ω ∇uf(x) · ∇v(x) dx = ∫Ω f(x) v(x) dx for all v ∈ H01(Ω).
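To make the weak formulation concrete, here is a minimal pure-Python sketch (no FEM library): a piecewise-linear Galerkin discretization of the 1-D analogue −u″ = f on (0, 1) with u(0) = u(1) = 0, where B(u, v) = ∫ u′v′ dx reduces to the familiar tridiagonal stiffness system. The mesh size and test problem are illustrative choices, not taken from the article.

```python
def solve_poisson_1d(f, n=100):
    # Piecewise-linear Galerkin for -u'' = f on (0, 1), u(0) = u(1) = 0.
    # B(u, v) = integral of u'v' gives the stiffness matrix (1/h)*tridiag(-1, 2, -1);
    # the load <f, phi_i> is approximated by h * f(x_i) (exact for constant f).
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    sub = [-1.0 / h] * (n - 2)
    diag = [2.0 / h] * (n - 1)
    sup = [-1.0 / h] * (n - 2)
    rhs = [h * f(xs[i]) for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return xs, [0.0] + u + [0.0]

# f = 1: the exact solution is u(x) = x(1 - x)/2, so u(1/2) = 1/8.
xs, u = solve_poisson_1d(lambda x: 1.0, n=100)
```

For this test problem the linear-element Galerkin solution is nodally exact, so the computed values match x(1 − x)/2 at the mesh points.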
Statement of |
https://en.wikipedia.org/wiki/Rhizodermis | Rhizodermis is the root epidermis (also referred to as epiblem), the outermost primary cell layer of the root.
Specialized rhizodermal cells, trichoblasts, form long tubular structures (from 5 to 17 micrometers in diameter and from 80 micrometers to 1.5 millimeters in length) almost perpendicular to the main cell axis, the root hairs that absorb water and nutrients. Root hairs of the rhizodermis are always in close contact with soil particles and, because of their high surface-to-volume ratio, form an absorbing surface which is much larger than the transpiring surfaces of the plant.
In some species of the family Fabaceae, the rhizodermis participates in the recognition and uptake of nitrogen-fixing Rhizobia bacteria, the first stage of nodulation leading to the formation of root nodules. The rhizodermis plays an important role in nutrient uptake by plant roots.
In contrast with the epidermis, the rhizodermis contains no stomata and is not covered by a cuticle. Its unique feature is the presence of root hairs. A root hair is the outgrowth of a single rhizodermal cell. Root hairs occur in high frequency in the absorptive zone of the root. A root hair derives from a trichoblast as a result of an unequal division. It contains a large vacuole; its cytoplasm and nucleus are displaced to the apical region of the outgrowth. Although the cell does not divide, its DNA replicates, so the nucleus is polyploid. Root hairs live for only a few days, often dying off within 1–2 days due to mechanical damage.
https://en.wikipedia.org/wiki/Topps%20Meat%20Company | Topps Meat Company (Topps Meat Company LLC) was a privately owned family company founded in 1940 by Benjamin Sachs in Manhattan, New York. The company later relocated to Elizabeth, New Jersey. The company produced and distributed frozen ground beef patties and other meat products processed at its plant in Elizabeth and posted about $8.8 million a year in sales, according to information reported by Dun & Bradstreet. In 2003, the company was purchased by Strategic Investment and Holdings, an investment firm based in Buffalo, New York and by 2007 it was "one of the country’s largest manufacturers of frozen hamburgers." In 2007 the company ceased operations following Escherichia coli O157:H7 (E. coli) contamination of products and the ensuing recall.
Ownership
According to the New York Times:
"Topps opened in 1940 in Manhattan. The founder, Benjamin Sachs, later sold the company to his son, Steven Sachs, according to Ann Sachs, the founder’s former daughter-in-law. A few years before the company moved to New Jersey, Joseph D’Urso became vice president. After Mr. D’Urso died in 2003, the company was bought by Strategic Investment and Holdings."
Timeline
1940 - Founded
2003 - Purchased by Strategic Investment and Holdings
2005
USDA found that the plant had received meat tainted with E. coli
Topps settled a $1.7 million accidental arm amputation lawsuit
Topps was sued after a consumer became ill from eating a Topps hamburger.
2007
July 5 - first illness linked to recall
July 8 - second illness case
September 7 - USDA's first positive test results for E. coli contamination
September 25 - initial recall of 331,582 pounds of frozen hamburger patties
September 29 - with additional evidence of "inadequate sanitation and process controls" and 25 illnesses under investigation in eight states, the USDA expanded the recall to a total of 21.7 million pounds of Topps beef.
October 4 - class-action lawsuit filed
October 4 - USDA's "notice of intended enforcement"
October 5, 2:35 |
https://en.wikipedia.org/wiki/Wii%20system%20software | The Wii system software is a discontinued set of updatable firmware versions and a software frontend on the Wii home video game console. Updates, which could be downloaded over the Internet or read from a game disc, allowed Nintendo to add additional features and software, as well as to patch security vulnerabilities used by users to load homebrew software. When a new update became available, Nintendo sent a message to the Wii Message Board of Internet-connected systems notifying them of the available update.
Most game discs, including first-party and third-party games, include system software updates so that systems that are not connected to the Internet can still receive updates. The system menu will not start such games if their updates have not been installed, so this has the consequence of forcing users to install updates in order to play these games. Some games, such as online games like Super Smash Bros. Brawl and Mario Kart Wii, contain specific extra updates, such as the ability to receive Wii Message Board posts from game-specific addresses; therefore, these games always require that an update be installed before their first time running on a given console.
Technology
IOS
The Wii's firmware has many active branches known as IOSes, thought by the Wii homebrew developers to stand for "Input Output Systems" or "Internal Operating Systems". The currently active IOS, also simply referred to as just "IOS," runs on a separate ARM926EJ-S processor unofficially nicknamed Starlet, which resides within the Hollywood GPU. The patent for the Wii U shows a similar device which is simply named "Input/Output Processor". IOS controls I/O between the code running on the main Broadway processor and the various Wii hardware that does not also exist on the GameCube.
Except for bug fixes, new IOS versions do not replace existing IOS versions. Instead, Wii consoles have multiple IOS versions installed. All native Wii software (including games distributed on Nintendo optica |
https://en.wikipedia.org/wiki/Time-of-flight%20mass%20spectrometry | Time-of-flight mass spectrometry (TOFMS) is a method of mass spectrometry in which an ion's mass-to-charge ratio is determined by a time of flight measurement. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio (heavier ions of the same charge reach lower speeds, although ions with higher charge will also increase in velocity). The time that it subsequently takes for the ion to reach a detector at a known distance is measured. This time will depend on the velocity of the ion, and therefore is a measure of its mass-to-charge ratio. From this ratio and known experimental parameters, one can identify the ion.
Theory
The potential energy of a charged particle in an electric field is related to the charge of the particle and to the strength of the electric field:
Ep = qU, (1)
where Ep is potential energy, q is the charge of the particle, and U is the electric potential difference (also known as voltage).
When the charged particle is accelerated into the time-of-flight tube (TOF tube or flight tube) by the voltage U, its potential energy is converted to kinetic energy. The kinetic energy of any mass is:
Ek = (1/2)mv². (2)
In effect, the potential energy is converted to kinetic energy, meaning that equations (1) and (2) are equal:
qU = (1/2)mv². (3)
The velocity of the charged particle after acceleration will not change since it moves in a field-free time-of-flight tube. The velocity of the particle can be determined in a time-of-flight tube since the length of the path (d) of the flight of the ion is known and the time of the flight of the ion (t) can be measured using a transient digitizer or time to digital converter.
Thus,
v = d/t, (4)
and we substitute the value of v in (4) into (3):
qU = (1/2)m(d/t)². (5)
Rearranging (5) so that the flight time is expressed by everything else:
t² = (m/q)(d²/(2U)).
Taking the square root yields the time,
t = (d/√(2U))·√(m/q).
These factors for the time of flight have been grouped pur |
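The flight time t = d·√(m/(2qU)) is easy to evaluate numerically; a minimal sketch (the 20 kV acceleration voltage, 1 m tube length, and ion masses below are illustrative assumptions, not values from the article):

```python
import math

ELEMENTARY_CHARGE = 1.602176634e-19  # C (exact, 2019 SI definition)
DALTON = 1.66053906660e-27           # kg

def flight_time(mass_da, charge_units, voltage, length):
    """t = d * sqrt(m / (2 q U)): time for an ion accelerated through a
    potential difference U to traverse a field-free tube of length d."""
    m = mass_da * DALTON
    q = charge_units * ELEMENTARY_CHARGE
    v = math.sqrt(2.0 * q * voltage / m)  # from q U = (1/2) m v^2
    return length / v

# Singly charged ion of m/z = 100 Da, 20 kV acceleration, 1 m flight tube.
t100 = flight_time(100.0, 1, 20_000.0, 1.0)
```

With these illustrative values the flight time is on the order of a few microseconds, and quadrupling the mass doubles the time, as the square-root dependence predicts.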
https://en.wikipedia.org/wiki/Analytic%20semigroup | In mathematics, an analytic semigroup is particular kind of strongly continuous semigroup. Analytic semigroups are used in the solution of partial differential equations; compared to strongly continuous semigroups, analytic semigroups provide better regularity of solutions to initial value problems, better results concerning perturbations of the infinitesimal generator, and a relationship between the type of the semigroup and the spectrum of the infinitesimal generator.
Definition
Let Γ(t) = exp(At) be a strongly continuous one-parameter semigroup on a Banach space (X, ||·||) with infinitesimal generator A. Γ is said to be an analytic semigroup if
for some 0 < θ < π/2, the continuous linear operator exp(At) : X → X can be extended to t ∈ Δθ = { 0 } ∪ { t ∈ C : |arg(t)| < θ },
and the usual semigroup conditions hold for s, t ∈ Δθ : exp(A0) = id, exp(A(t + s)) = exp(At) exp(As), and, for each x ∈ X, exp(At)x is continuous in t;
and, for all t ∈ Δθ \ {0}, exp(At) is analytic in t in the sense of the uniform operator topology.
Characterization
The infinitesimal generators of analytic semigroups have the following characterization:
A closed, densely defined linear operator A on a Banach space X is the generator of an analytic semigroup if and only if there exists an ω ∈ R such that the half-plane Re(λ) > ω is contained in the resolvent set of A and, moreover, there is a constant C such that
‖R(λ, A)‖ ≤ C/|λ − ω|
for Re(λ) > ω, where R(λ, A) = (λ id − A)⁻¹ is the resolvent of the operator A. Such operators are called sectorial. If this is the case, then the resolvent set actually contains a sector of the form
{ λ ∈ C : |arg(λ − ω)| < π/2 + δ }
for some δ > 0, and an analogous resolvent estimate holds in this sector. Moreover, the semigroup is represented by
exp(At) = (1/2πi) ∫γ e^(λt) R(λ, A) dλ,
where γ is any curve from e^(−iθ)∞ to e^(+iθ)∞ such that γ lies entirely in the sector
{ λ ∈ C : |arg(λ)| ≤ θ },
with π/2 < θ < π/2 + δ.
https://en.wikipedia.org/wiki/European%20Congress%20of%20Mathematics | The European Congress of Mathematics (ECM) is the second largest international conference of the mathematics community, after the International Congresses of Mathematicians (ICM).
The ECM are held every four years and are timed precisely between the ICM. The ECM is held under the auspices of the European Mathematical Society (EMS), and was one of its earliest initiatives. It was founded by Max Karoubi and the first edition took place in Paris in 1992.
Its objectives are "to present various new aspects of pure and applied mathematics to a wide audience, to be a forum for discussion of the relationship between mathematics and society in Europe, and to enhance cooperation among mathematicians from all European countries."
Activities
The Congresses generally last a week and consist of plenary lectures, parallel (invited) lectures and several mini-symposia devoted to a particular subject, where participants can contribute with posters and short talks. Many editions featured also special lectures, e.g. by prize winners, and public sessions aimed at a general audience.
Other mathematics conferences and workshops organised in the same period become often satellite events of the ECM.
Prizes
Several prizes are awarded at the beginning of the Congress:
The EMS Prize (awarded since the first edition in 1992), to up to ten young mathematicians of European nationality or working in Europe
The Felix Klein Prize (awarded since 2000), to at most three young applied mathematicians
The Otto Neugebauer Prize (awarded since 2012) to a researcher in history of mathematics
List of congresses
1st edition – Paris (1992)
2nd edition – Budapest (1996)
3rd edition – Barcelona (2000)
4th edition – Stockholm (2004)
5th edition – Amsterdam (2008)
6th edition – Kraków (2012)
7th edition – Berlin (2016)
8th edition – Portorož (2021)
The 9th European Congress of Mathematics will be held in Seville in 2024. |
https://en.wikipedia.org/wiki/Medicon%20Valley%20Alliance | Medicon Valley Alliance (or MVA for short) is the Danish-Swedish cluster organisation representing human life sciences in the cross-border region of Medicon Valley. As a non-profit member organisation, Medicon Valley Alliance (MVA) carries out initiatives on behalf of the local life science community in order to create new research and business opportunities – initiatives which members would not be able to implement individually, and with the aim of strengthening the development of Medicon Valley.
The organisation
MVA's member base comprises biotech, medtech and pharma companies of all sizes, CRO's and CMO's, as well as public organizations, universities, science parks, investors, and various business service providers.
MVA is committed to facilitating economic growth, increased competitiveness and employment in Medicon Valley, and is furthermore committed to raising the international recognition of Medicon Valley with the aim of attracting labour, investments, and partners. MVA accomplishes this by enhancing local networks, improving local framework conditions, increasing the visibility of Medicon Valley and facilitating international relations with companies and research institutions around the world.
There are currently more than 300 MVA members, including numerous big and small private biotech companies and public-sector research institutions. Among the most prominent members are Novo Nordisk, the Technical University of Denmark, Lund University and the University of Copenhagen.
The current CEO of Medicon Valley Alliance is Anette Stenberg, following Petter Hartman and Stig Jørgensen.
Chairman of the board is CEO of Alligator Bioscience, Søren Bregenholt. Deputy chairman is Ulf G. Andersson, CEO of MEDEON.
Membership
MVA participants comprise academic departments, regions (hospital managers), states, research, pharmaceutical and medical firms, CROs, CMOs, technology parks, developers, market service providers and other Medicon Valley organizations. |
https://en.wikipedia.org/wiki/Planarity | Planarity is a 2005 puzzle computer game by John Tantalo, based on a concept by Mary Radcliffe at Western Michigan University.
The name comes from the concept of planar graphs in graph theory; these are graphs that can be embedded in the Euclidean plane so that no edges intersect. By Fáry's theorem, if a graph is planar, it can be drawn without crossings so that all of its edges are straight line segments. In the planarity game, the player is presented with a circular layout of a planar graph, with all the vertices placed on a single circle and with many crossings. The goal for the player is to eliminate all of the crossings and construct a straight-line embedding of the graph by moving the vertices one by one into better positions.
History and versions
The game was written in Flash by John Tantalo at Case Western Reserve University in 2005. Online popularity and the local notoriety he gained placed Tantalo as one of Cleveland's most interesting people for 2006. It in turn has inspired the creation of a GTK+ version by Xiph.org's Chris Montgomery, which possesses additional level generation algorithms and the ability to manipulate multiple nodes at once.
Puzzle generation algorithm
The definition of the planarity puzzle does not depend on how the planar graphs in the puzzle are generated, but the original implementation uses the following algorithm:
Generate a set of random lines in a plane such that no two lines are parallel and no three lines meet in a single point.
Calculate the intersections of every line pair.
Create a graph with a vertex for each intersection and an edge for each line segment connecting two intersections (the arrangement of the lines).
If a graph is generated from n lines, then the graph will have exactly n(n − 1)/2 vertices (each line has n − 1 vertices, and each vertex is shared with one other line) and n(n − 2) edges (each line contains n − 2 edges). The first level of Planarity is built with n = 4 lines, so it has 6 vertices and 8 edges. Each level after is generated
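The steps above can be sketched in pure Python; the arrangement graph falls out of sorting each line's crossings by x-coordinate. The random-slope ranges and seed are illustrative assumptions, not the parameters of the original Flash implementation.

```python
import random
from itertools import combinations

def generate_planarity_graph(n, seed=0):
    """Arrangement graph of n random lines y = a*x + b.
    Random real slopes make parallel lines and triple intersections
    vanishingly unlikely, as the algorithm requires."""
    rng = random.Random(seed)
    lines = [(rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)) for _ in range(n)]
    # Step 2: intersection point of every pair of lines, keyed by the pair.
    crossings = {}
    for (i, (a1, b1)), (j, (a2, b2)) in combinations(list(enumerate(lines)), 2):
        x = (b2 - b1) / (a1 - a2)
        crossings[(i, j)] = (x, a1 * x + b1)
    # Step 3: an edge joins consecutive crossings along each line.
    edges = set()
    for i in range(n):
        on_line = sorted((pt[0], key) for key, pt in crossings.items() if i in key)
        for (_, p), (_, q) in zip(on_line, on_line[1:]):
            edges.add((p, q))
    return set(crossings), edges

vertices, edges = generate_planarity_graph(4)
```

For n lines this yields n(n − 1)/2 vertices and n(n − 2) edges; with n = 4 that is 6 vertices and 8 edges.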
https://en.wikipedia.org/wiki/Pan-African%20Congress%20of%20Mathematicians | The Pan-African Congress of Mathematicians (PACOM) is an international congress of mathematics, held under the auspices of the African Mathematical Union.
List of congresses
2008 –
2004 – Tunis, Tunisia
2000 – Cape Town, South Africa
1995 – Ifrane, Morocco
1991 – Nairobi, Kenya
1986 – Jos, Nigeria
1976 – Rabat, Morocco
External links
7th PACOM 2008
Recurring events established in 1976
Mathematics conferences |
https://en.wikipedia.org/wiki/Cram%C3%A9r%27s%20decomposition%20theorem | Cramér’s decomposition theorem for a normal distribution is a result of probability theory. It is well known that, given independent normally distributed random variables ξ1, ξ2, their sum is normally distributed as well. It turns out that the converse is also true. The latter result, initially announced by Paul Lévy, has been proved by Harald Cramér. This became a starting point for a new subfield in probability theory, decomposition theory for random variables as sums of independent variables (also known as arithmetic of probabilistic distributions).
The precise statement of the theorem
Let a random variable ξ be normally distributed and admit a decomposition as a sum ξ=ξ1+ξ2 of two independent random variables. Then the summands ξ1 and ξ2 are normally distributed as well.
A proof of Cramér's decomposition theorem uses the theory of entire functions.
See also
Raikov's theorem: a similar result for the Poisson distribution.
https://en.wikipedia.org/wiki/Densely%20defined%20operator | In mathematics – specifically, in operator theory – a densely defined operator or partially defined operator is a type of partially defined function. In a topological sense, it is a linear operator that is defined "almost everywhere". Densely defined operators often arise in functional analysis as operations that one would like to apply to a larger class of objects than those for which they a priori "make sense".
Definition
A densely defined linear operator T from one topological vector space, X, to another one, Y, is a linear operator that is defined on a dense linear subspace dom(T) of X and takes values in Y, written T : dom(T) ⊆ X → Y. Sometimes this is abbreviated as T : X → Y when the context makes it clear that X might not be the set-theoretic domain of T.
Examples
Consider the space C([0, 1]) of all real-valued, continuous functions defined on the unit interval; let C¹([0, 1]) denote the subspace consisting of all continuously differentiable functions. Equip C([0, 1]) with the supremum norm ‖·‖∞; this makes C([0, 1]) into a real Banach space. The differentiation operator D given by (Du)(x) = u′(x) is a densely defined operator from C([0, 1]) to itself, defined on the dense subspace C¹([0, 1]). The operator D is an example of an unbounded linear operator, since the functions un(x) = sin(nπx) satisfy ‖un‖∞ = 1 while ‖Dun‖∞ = nπ → ∞.
This unboundedness causes problems if one wishes to somehow continuously extend the differentiation operator to the whole of C([0, 1]).
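The unboundedness can be illustrated numerically: the functions u_k(x) = sin(kπx) all have supremum norm 1, while their derivatives have supremum norm kπ, which grows without bound. A small pure-Python sketch (the grid resolution is an arbitrary choice):

```python
import math

def sup_norm(f, grid=10_000):
    """Approximate the supremum norm of f on [0, 1] by sampling a uniform grid."""
    return max(abs(f(i / grid)) for i in range(grid + 1))

def u(k):
    return lambda x: math.sin(k * math.pi * x)

def Du(k):
    # Derivative of u_k: the image of u_k under the differentiation operator.
    return lambda x: k * math.pi * math.cos(k * math.pi * x)

# ||u_k|| stays at 1 while ||D u_k|| = k*pi grows without bound,
# so D cannot be a bounded operator on (C([0,1]), sup norm).
norms = [(sup_norm(u(k)), sup_norm(Du(k))) for k in (1, 5, 25)]
```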
The Paley–Wiener integral, on the other hand, is an example of a continuous extension of a densely defined operator. In any abstract Wiener space with adjoint there is a natural continuous linear operator (in fact it is the inclusion, and is an isometry) from to under which goes to the equivalence class of in It can be shown that is dense in Since the above inclusion is continuous, there is a unique continuous linear extension of the inclusion to the whole of This extension is the Paley–Wiener map.
See also |
https://en.wikipedia.org/wiki/Optical%20square | The optical square uses a pentaprism to reflect and refract a beam or sighting through 90 degrees; it is used in pairs in surveying and as a single block in metrology.
In an optical square
The horizon glass is placed at an angle of 120° to the horizon sight.
The index glass is placed at an angle of 105° to the index sight.
The angle between the index glass and the horizon glass is 45°.
Metrology
Used with an autocollimator or angle dekkor and a mirror, it can be used for machine tool axis squareness checking and for measuring the squareness of surfaces. It has two mirrors at 45° to each other: one is half-silvered, called the horizon glass, and the other is fully silvered, called the index glass. It measures angles by reflection. Two prisms can be used as an optical square.
Optical square in surveying
In surveying it is used both as a hand held tool for sighting between two poles (often with a plumb bob hung from the handle) and also mounted on a Jacob's staff. |
https://en.wikipedia.org/wiki/Pseudoforest | In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest.
The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.) Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. The name "pseudoforest" comes from .
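The defining condition (every connected component contains at most one cycle, or equivalently no component has more edges than vertices) is easy to test with a union-find sketch; the function name and graph encoding below are illustrative choices.

```python
def is_pseudoforest(num_vertices, edges):
    """True iff every connected component of the multigraph has at most
    one cycle, i.e. no component has more edges than vertices.
    Loops (u == v) and repeated edges are allowed; each closes a cycle."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    # Count vertices and edges per component root.
    vertices_in = {}
    edges_in = {}
    for v in range(num_vertices):
        r = find(v)
        vertices_in[r] = vertices_in.get(r, 0) + 1
    for u, v in edges:
        r = find(u)
        edges_in[r] = edges_in.get(r, 0) + 1
    return all(edges_in.get(r, 0) <= vertices_in[r] for r in vertices_in)

# A triangle is a pseudotree; adding a chord creates a second cycle.
triangle_ok = is_pseudoforest(3, [(0, 1), (1, 2), (2, 0)])
theta_bad = is_pseudoforest(3, [(0, 1), (1, 2), (2, 0), (0, 2)])
```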
Definitions and structure
We define an undirected graph to be a set of vertices and edges such that each edge has two vertices (which may coincide) as endpoints. That is, we allow multiple edges (edges with the same pair of endpoints) and loops (edges whose two endpoints are the same vertex). A subgraph of a graph is the graph formed by any subsets of its vertices and edges such that each edge in the edge subset has both endpoints in the vertex subset.
A connected component of an undirected graph is the subgraph consisting of the vertices and edges that can be reached by following edges from a single given starting vertex. A graph is connected if every vertex or edge is reachable from every other vertex or edge. A cycle in an undirected graph is a connected subgraph in which each vertex is |
https://en.wikipedia.org/wiki/Bicircular%20matroid | In the mathematical subject of matroid theory, the bicircular matroid of a graph G is the matroid B(G) whose points are the edges of G and whose independent sets are the edge sets of pseudoforests of G, that is, the edge sets in which each connected component contains at most one cycle.
The bicircular matroid was introduced by and explored further by and others. It is a special case of the frame matroid of a biased graph.
Circuits
The circuits, or minimal dependent sets, of this matroid are the bicircular graphs (or bicycles, but that term has other meanings in graph theory); these are connected graphs whose circuit rank is exactly two.
There are three distinct types of bicircular graph:
The theta graph consists of three paths joining the same two vertices but not intersecting each other.
The figure eight graph (or tight handcuff) consists of two cycles having just one common vertex.
The loose handcuff (or barbell) consists of two disjoint cycles and a minimal connecting path.
All these definitions apply to multigraphs, i.e., they permit multiple edges (edges sharing the same endpoints) and loops (edges whose two endpoints are the same vertex).
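Each of the three types is a connected graph with circuit rank exactly two, which can be checked with the standard formula r = |E| − |V| + c, where c is the number of connected components. A small union-find sketch (the vertex numberings below are illustrative):

```python
def circuit_rank(num_vertices, edges):
    """Circuit rank (cyclomatic number) r = |E| - |V| + c for a
    multigraph given as a vertex count and a list of endpoint pairs."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)
    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

# Theta graph: two vertices joined by three internally disjoint paths.
theta = circuit_rank(5, [(0, 2), (2, 1), (0, 3), (3, 1), (0, 4), (4, 1)])
# Figure eight: two triangles sharing vertex 0.
fig8 = circuit_rank(5, [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])
# Loose handcuff: two disjoint triangles joined by the path 0-3.
barbell = circuit_rank(6, [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 5), (5, 3)])
```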
Flats
The closed sets (flats) of the bicircular matroid of a graph G can be described as the forests F of G such that in the induced subgraph of V(G) − V(F), every connected component has a cycle. Since the flats of a matroid form a geometric lattice when partially ordered by set inclusion, these forests of G also form a geometric lattice. In the partial ordering for this lattice, F ≤ F′ if
each component tree of F is either contained in or vertex-disjoint from every tree of F′, and
each vertex of F′ is a vertex of F.
For the most interesting example, let G° be G with a loop added to every vertex. Then the flats of B(G°) are all the forests of G, spanning or nonspanning. Thus, all forests of a graph G form a geometric lattice, the forest lattice of G.
As transversal matroids
Bicircular matroids can be characterized as the transversal matro |
https://en.wikipedia.org/wiki/Biobank | A biobank is a type of biorepository that stores biological samples (usually human) for use in research. Biobanks have become an important resource in medical research, supporting many types of contemporary research like genomics and personalized medicine.
Biobanks can give researchers access to data representing a large number of people. Samples in biobanks and the data derived from those samples can often be used by multiple researchers for cross purpose research studies. For example, many diseases are associated with single-nucleotide polymorphisms. Genome-wide association studies using data from tens or hundreds of thousands of individuals can identify these genetic associations as potential disease biomarkers. Many researchers struggled to acquire sufficient samples prior to the advent of biobanks.
Biobanks have provoked questions on privacy, research ethics, and medical ethics. Viewpoints on what constitutes appropriate biobank ethics diverge. However, a consensus has been reached that operating biobanks without establishing carefully considered governing principles and policies could be detrimental to communities that participate in biobank programs.
Background
The term "biobank" first appeared in the late 1990s and is a broad term that has evolved in recent years. One definition is "an organized collection of human biological material and associated information stored for one or more research purposes." Collections of plant, animal, microbe, and other nonhuman materials may also be described as biobanks but in some discussions the term is reserved for human specimens.
Biobanks usually incorporate cryogenic storage facilities for the samples. They may range in size from individual refrigerators to warehouses, and are maintained by institutions such as hospitals, universities, nonprofit organizations, and pharmaceutical companies.
Biobanks may be classified by purpose or design. Disease-oriented biobanks usually have a hospital affiliation through whic |
https://en.wikipedia.org/wiki/Itautec | Itautec is a Brazilian electronics company founded in 1979. It is part of Itaúsa, a Brazilian business group.
Itautec is an ATM, kiosk, and computer manufacturer in the Brazilian and South American markets, and also has a key role in project deployment and IT services.
It mainly focuses on making consumer electronics, banking, and retail automation. The company has a large base of ATMs globally and in Latin America. Headquartered in São Paulo and with a manufacturing plant in the city of Jundiaí (SP), Itautec has 5,709 direct employees – 5,285 in Brazil and 424 abroad.
Product lines
Presently the company's product lines include:
Personal Computers: Desktop, tablet, and laptop personal computers
Monitors: LCD, LED, OLED, and touchscreen monitors
Commercial and banking automation
Software: Point of sale, credit card processing, an in-house Linux distribution called Librix, terminal management, digital signatures, and banking correspondence, among others
Services and Integration: Technical support, infrastructure, security, phone support, servers, and networks
Components: Printed circuit boards, Memory boards, and integrated circuits
History
1980 – First online presence as GRI Gerenciador de Redes Itautec "Itautec Network Services Provider" and Banktec mainframes.
1981 – Itaú's central branch is opened, featuring an automation system developed by Itautec.
1982 – Bank of Brazil installs GRI and Banktec
1985 – PC/XT microcomputer launched
1986 – Itautec installs the first compact Automated teller machine
1989 – GRIP (Gerenciamento de Redes Itautec para PC "Itautec Network Management for PCs") is launched
1990 – Launch of the first Notebook computer, IS 386 Note
1994 – Itautec launches a second-generation ATM in Brazil
1995 – First version of Banktec Multicanal in Banco Itaú Argentina
2001 – First ATMs exported to the United States/Europe
2002 – Itautec acquires technology from NMD for DelaRue, and installs the first WEB system in Banco Itaú Buen A |
https://en.wikipedia.org/wiki/Functional%20psychology | Functional psychology or functionalism refers to a psychological school of thought that was a direct outgrowth of Darwinian thinking which focuses attention on the utility and purpose of behavior that has been modified over years of human existence. Edward L. Thorndike, best known for his experiments with trial-and-error learning, came to be known as the leader of the loosely defined movement. This movement arose in the U.S. in the late 19th century in direct contrast to Edward Titchener's structuralism, which focused on the contents of consciousness rather than the motives and ideals of human behavior. Functionalism denies the principle of introspection, which tends to investigate the inner workings of human thinking rather than understanding the biological processes of the human consciousness.
While functionalism eventually became its own formal school, it built on structuralism's concern for the anatomy of the mind and led to greater concern over the functions of the mind and later to the psychological approach of behaviorism.
History
Functionalism was a philosophy opposing the prevailing structuralism of psychology of the late 19th century. Edward Titchener, the main structuralist, gave psychology its first definition as a science of the study of mental experience, of consciousness, to be studied by trained introspection.
At the start of the twentieth century, there was a discrepancy between psychologists who were interested in analysing the structures of the mind and those who turned their attention to studying the function of mental processes. The result was a debate between structuralism and functionalism.
The main goal of structuralism was to study human consciousness within the confines of actual lived experience, but this constraint threatened to make studying the human mind impossible; functionalism stands in stark contrast to this. Structural psychology was concerned with mental contents, while functionalism is concerned with mental operations.
https://en.wikipedia.org/wiki/Xylose%20metabolism | D-Xylose is a five-carbon aldose (pentose, monosaccharide) that can be catabolized or metabolized into useful products by a variety of organisms.
There are at least four different pathways for the catabolism of D-xylose: an oxido-reductase pathway, present in eukaryotic microorganisms; an isomerase pathway, typically used by prokaryotes; and two oxidative pathways, called the Weimberg and Dahms pathways respectively, which are also found in prokaryotic microorganisms.
Pathways
The oxido-reductase pathway
This pathway is also called the "Xylose Reductase–Xylitol Dehydrogenase" or XR-XDH pathway. Xylose reductase (XR) and xylitol dehydrogenase (XDH) are the first two enzymes in this pathway. XR reduces D-xylose to xylitol using NADH or NADPH. Xylitol is then oxidized to D-xylulose by XDH, using the cofactor NAD+. In the last step, D-xylulose is phosphorylated by an ATP-utilising kinase (XK) to yield D-xylulose-5-phosphate, an intermediate of the pentose phosphate pathway.
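The three steps above can be summarized as an ordered list of (substrate, enzyme, cofactor, product) tuples, with each product feeding the next step. This is only an illustrative sketch; the expansion of XK as xylulokinase is an assumption, since the text gives only the abbreviation:

```python
# XR-XDH (oxido-reductase) pathway for D-xylose, step by step
XR_XDH_PATHWAY = [
    # (substrate, enzyme, cofactor, product)
    ("D-xylose",   "xylose reductase (XR)",       "NADH/NADPH", "xylitol"),
    ("xylitol",    "xylitol dehydrogenase (XDH)", "NAD+",       "D-xylulose"),
    ("D-xylulose", "xylulokinase (XK)",           "ATP",        "D-xylulose-5-phosphate"),
]

# Sanity check: each step's product is the next step's substrate
for (_, _, _, product), (substrate, _, _, _) in zip(XR_XDH_PATHWAY, XR_XDH_PATHWAY[1:]):
    assert product == substrate

# The final product enters the pentose phosphate pathway
print(XR_XDH_PATHWAY[-1][-1])  # D-xylulose-5-phosphate
```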
The isomerase pathway
In this pathway the enzyme xylose isomerase converts D-xylose directly into D-xylulose. D-xylulose is then phosphorylated to D-xylulose-5-phosphate as in the oxido-reductase pathway. At equilibrium, the isomerase reaction results in a mixture of 83% D-xylose and 17% D-xylulose because the conversion of xylose to xylulose is energetically unfavorable.
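The 83:17 equilibrium mixture lets one estimate how unfavorable the isomerization is: the standard free-energy change follows from the equilibrium constant via ΔG° = −RT ln K. A minimal sketch, assuming 298 K (the text does not state a temperature):

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # assumed temperature, K

# Equilibrium mixture: 83% D-xylose, 17% D-xylulose
K_eq = 0.17 / 0.83               # [D-xylulose]/[D-xylose] at equilibrium

# Standard free-energy change of the isomerization
dG = -R * T * math.log(K_eq)     # J/mol; positive means unfavorable
print(round(dG / 1000, 1))       # ≈ 3.9 kJ/mol
```

The positive sign confirms the text's statement that xylose-to-xylulose conversion is energetically unfavorable.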
Weimberg pathway
The Weimberg pathway is an oxidative pathway in which D-xylose is oxidized to D-xylono-lactone by a D-xylose dehydrogenase, followed by a lactonase that hydrolyzes the lactone to D-xylonic acid. A xylonate dehydratase then splits off a water molecule, yielding 2-keto-3-deoxy-D-xylonate. 2-Keto-3-deoxy-D-xylonate dehydratase forms α-ketoglutarate semialdehyde, which is subsequently oxidised by α-ketoglutarate semialdehyde dehydrogenase to yield 2-ketoglutarate, a key intermediate of the citric acid cycle.
Dahms pathway
The Dahms pathway starts as the Weimberg pathway but the 2-keto-3 deoxy-x |
https://en.wikipedia.org/wiki/Transformation%20efficiency | Transformation efficiency refers to the ability of a cell to take up and incorporate exogenous DNA, such as plasmids, during a process called transformation. The efficiency of transformation is typically measured as the number of transformants (cells that have taken up the exogenous DNA) per microgram of DNA added to the cells. A higher transformation efficiency means that more cells are able to take up the DNA, and a lower efficiency means that fewer cells are able to do so.
In molecular biology, transformation efficiency is a crucial parameter: it is used to evaluate the ability of different methods to introduce plasmid DNA into cells and to compare the efficiency of different plasmids, vectors, and host cells. This efficiency can be affected by a number of factors, including the method used for introducing the DNA, the type of cell and plasmid used, and the conditions under which the transformation is performed. Therefore, measuring and optimizing transformation efficiency is an important step in many molecular biology applications, including genetic engineering, gene therapy, and biotechnology.
Measurement
Measuring transformation efficiency quantifies how many cells were transformed per 1 µg of plasmid DNA, and thus indicates how well the transformation experiment worked. It should be determined under conditions of cell excess.
Transformation efficiency is typically measured as the number of transformed cells per total number of cells. It can be represented as a percentage or as colony forming units (CFUs) per microgram of DNA.
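The CFU-per-microgram calculation can be sketched as a small function. The function name, the dilution-correction parameter, and the example numbers are all hypothetical illustrations, not values from the text:

```python
def transformation_efficiency(colonies, dna_ug, plated_fraction=1.0):
    """Transformation efficiency in CFU per microgram of plasmid DNA.

    colonies        -- colony-forming units counted on the plate
    dna_ug          -- micrograms of plasmid DNA used in the transformation
    plated_fraction -- fraction of the recovery mix actually plated
    """
    return colonies / (dna_ug * plated_fraction)

# Example: 250 colonies from 0.01 ug of DNA, plating 1/10 of the recovery mix
print(transformation_efficiency(250, 0.01, plated_fraction=0.1))  # 250000.0 CFU/ug
```

Correcting for the plated fraction matters because only part of the transformation mix is usually spread on the plate; omitting it underestimates the efficiency.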
One of the most common ways to measure transformation efficiency is by performing a colony forming assay. Here is an example of how to calculate transformation efficiency using colony forming units (CFUs):
Plate a known number of cells on agar plates containing the appropriate antibiotics.
Incubate th |