| source | text |
|---|---|
https://en.wikipedia.org/wiki/Qualitative%20property | Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics.
Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement.
The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable.
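As a brief illustration of the last point (a minimal sketch; the sample data and names are invented, not from the article):

```python
# Minimal sketch: coding a qualitative pass/fail test result as a dummy variable.
# The sample results below are invented for illustration.
results = ["pass", "fail", "pass", "pass", "fail"]

# 1 if the qualitative property ("pass") is present, 0 if absent.
dummy = [1 if r == "pass" else 0 for r in results]

print(dummy)       # [1, 0, 1, 1, 0]
print(sum(dummy))  # the nominal category can now be tallied numerically
```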
In businesses
Some important qualitative properties that concern businesses are:
Human factors, or 'human work capital', is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, and general participation. Although none of these aspects is measurable in terms of quantitative criteria, a general overview of them could be summarized as a quantitative property.
Environmental issues are in some cases quantitatively measurable, but other properties are qualitative e.g.: environmentally friendly manufacturing, responsibility for the entire life of a product (from the raw-material till scrap), attitudes towards safety, efficiency, and minimum waste production.
Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues.
The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel they are treated. This indicates that qualitative properties are closely related to emotiona |
https://en.wikipedia.org/wiki/Trigonal%20bipyramidal%20molecular%20geometry | In chemistry, a trigonal bipyramid formation is a molecular geometry with one atom at the center and five more atoms at the corners of a triangular bipyramid. This is one geometry for which the bond angles surrounding the central atom are not identical (see also pentagonal bipyramid), because there is no geometrical arrangement with five terminal atoms in equivalent positions. Examples of this molecular geometry are phosphorus pentafluoride (PF5) and phosphorus pentachloride (PCl5) in the gas phase.
Axial (or apical) and equatorial positions
The five atoms bonded to the central atom are not all equivalent, and two different types of position are defined. For phosphorus pentachloride as an example, the phosphorus atom shares a plane with three chlorine atoms at 120° angles to each other in equatorial positions, and two more chlorine atoms above and below the plane (axial or apical positions).
According to the VSEPR theory of molecular geometry, an axial position is more crowded because an axial atom has three neighboring equatorial atoms (on the same central atom) at a 90° bond angle, whereas an equatorial atom has only two neighboring axial atoms at a 90° bond angle. For molecules with five identical ligands, the axial bond lengths tend to be longer because the ligand atom cannot approach the central atom as closely. As examples, in PF5 the axial P−F bond length is 158 pm and the equatorial is 152 pm, and in PCl5 the axial and equatorial are 214 and 202 pm respectively.
In the mixed halide PF3Cl2 the chlorines occupy two of the equatorial positions, indicating that fluorine has a greater apicophilicity or tendency to occupy an axial position. In general ligand apicophilicity increases with electronegativity and also with pi-electron withdrawing ability, as in the sequence Cl < F < CN. Both factors decrease electron density in the bonding region near the central atom so that crowding in the axial position is less important.
Related geometries with lone pairs
The VSEPR th |
https://en.wikipedia.org/wiki/Lean%20software%20development | Lean software development is a translation of lean manufacturing principles and practices to the software development domain. Adapted from the Toyota Production System, it is emerging with the support of a pro-lean subculture within the agile community. Lean offers a solid conceptual framework, values and principles, as well as good practices, derived from experience, that support agile organizations.
Origin
The expression "lean software development" originated in a book by the same name, written by Mary Poppendieck and Tom Poppendieck in 2003. The book restates traditional lean principles, as well as a set of 22 tools and compares the tools to corresponding agile practices. The Poppendiecks' involvement in the agile software development community, including talks at several Agile conferences has resulted in such concepts being more widely accepted within the agile community.
Lean principles
Lean development can be summarized by seven principles, very close in concept to lean manufacturing principles:
Eliminate waste
Amplify learning
Decide as late as possible
Deliver as fast as possible
Empower the team
Build integrity in
Optimize the whole
Eliminate waste
Lean philosophy regards everything not adding value to the customer as waste (muda). Such waste may include:
Partially done work
Extra features
Relearning
Task switching
Waiting
Handoffs
Defects
Management activities
In order to eliminate waste, one should be able to recognize it. If some activity could be bypassed or the result could be achieved without it, it is waste. Partially done coding eventually abandoned during the development process is waste. Extra features like paperwork and features not often used by customers are waste. Switching people between tasks is waste (because of time spent, and often lost, by people involved in context-switching). Waiting for other activities, teams, processes is waste. Relearning requirements to complete work is waste. Defects and lower quality are wast |
https://en.wikipedia.org/wiki/Monoscope | A monoscope was a special form of video camera tube which displayed a single still video image. The image was built into the tube, hence the name. The tube resembled a small cathode ray tube (CRT). Monoscopes were used beginning in the 1950s to generate TV test patterns and station logos. This type of test card generation system was technologically obsolete by the 1980s.
Design
The monoscope was similar in construction to a CRT, with an electron gun at one end and at the other, a metal target screen with an image formed on it. This was in the position where a CRT would have its phosphor-coated display screen. As the electron beam scanned the target, varying numbers of electrons were reflected from the different areas of the image. The reflected electrons were picked up by an internal electrode ring, producing a varying electrical signal which was amplified to become the video output of the tube.
This signal reproduced an accurate still image of the target, so the monoscope was used to produce still images such as test patterns and station logo cards. For example, the classic Indian Head test card as used by many television stations in North America, was often produced using a monoscope.
Usage
Monoscopes were available with a wide variety of standard patterns and messages, and could be ordered with a custom image such as a station logo. Monoscope "cameras" were widely used to produce test cards, station logos, special signals for test purposes and standard announcements like "Please stand by" and "normal service will be resumed....". They had many advantages over using a live camera pointed at a card; an expensive camera was not tied up, they were always ready, and were never misframed or out of focus. Indeed, monoscopes were often used to calibrate the live cameras, by comparing the monoscope image and the live camera image of the same test pattern.
Pointing an electronic camera at the same stationary monochrome caption for a long period of time could res |
https://en.wikipedia.org/wiki/Pathogen-associated%20molecular%20pattern | Pathogen-associated molecular patterns (PAMPs) are small molecular motifs conserved within a class of microbes, but not present in the host. They are recognized by toll-like receptors (TLRs) and other pattern recognition receptors (PRRs) in both plants and animals. This allows the innate immune system to recognize pathogens and thus, protect the host from infection.
Although the term "PAMP" is relatively new, the concept that molecules derived from microbes must be detected by receptors from multicellular organisms has been held for many decades, and references to an "endotoxin receptor" are found in much of the older literature. The recognition of PAMPs by the PRRs triggers activation of several signaling cascades in the host immune cells like the stimulation of interferons (IFNs) or other cytokines.
Common PAMPs
A vast array of different types of molecules can serve as PAMPs, including glycans and glycoconjugates. Flagellin is another PAMP, recognized via its constant domain, D1, by TLR5. Despite being a protein, its N- and C-terminal ends are highly conserved, owing to their necessity for the function of flagella. Nucleic acid variants normally associated with viruses also act as PAMPs: double-stranded RNA (dsRNA) is recognized by TLR3, and unmethylated CpG motifs are recognized by TLR9. The CpG motifs must be internalized in order to be recognized by TLR9. Viral glycoproteins, as seen in the viral envelope, as well as fungal PAMPs on the cell surface of fungi, are recognized by TLR2 and TLR4.
Gram-Negative Bacteria
Bacterial lipopolysaccharides (LPSs), also known as endotoxins, are found on the cell membranes of gram-negative bacteria and are considered to be the prototypical class of PAMPs. The lipid portion of LPS, lipid A, contains a diglucosamine backbone with multiple acyl chains. This is the conserved structural motif that is recognized by TLR4, particularly the TLR4-MD2 complex. Microbes have two main strategies by which they try to avoid the immune system, either by |
https://en.wikipedia.org/wiki/Clioquinol | Clioquinol (iodochlorhydroxyquin) is an antifungal drug and antiprotozoal drug. It is neurotoxic in large doses. It is a member of a family of drugs called hydroxyquinolines which inhibit certain enzymes related to DNA replication. The drugs have been found to have activity against both viral and protozoal infections.
Antiprotozoal use
A 1964 report described the use of clioquinol in both the treatment and prevention of shigella infection and Entamoeba histolytica infection in institutionalized individuals at Sonoma State Hospital in California. The report indicates 4000 individuals were treated over a 4-year period with few side effects.
Several recently reported journal articles describing its use as an antiprotozoal include:
A 2005 reference to its use in treating a Dutch family for Entamoeba histolytica infection.
A 2004 reference to its use in the Netherlands in the treatment of Dientamoeba fragilis infection.
A 1979 reference to the use in Zaire in the treatment of Entamoeba histolytica infection.
Subacute myelo-optic neuropathy
Clioquinol's use as an antiprotozoal drug has been restricted or discontinued in some countries due to an event in Japan where over 10,000 people developed subacute myelo-optic neuropathy (SMON) between 1957 and 1970. The drug was used widely in many countries before and after the SMON event without similar reports. As yet, no explanation exists as to why it produced this reaction, and some researchers have questioned whether clioquinol was the causative agent in the disease, noting that the drug had been used for 20 years prior to the epidemic without incident, and that the SMON cases began to reduce in number prior to the discontinuation of the drug. Theories suggested have included improper dosing, the permitted use of the drug for extended periods of time, and dosing which did not consider the smaller average stature of Japanese; however a dose dependent relationship between SMON development and clioquinol use was nev |
https://en.wikipedia.org/wiki/Feature-driven%20development | Feature-driven development (FDD) is an iterative and incremental software development process. It is a lightweight or Agile method for developing software. FDD blends a number of industry-recognized best practices into a cohesive whole. These practices are driven from a client-valued functionality (feature) perspective. Its main purpose is to deliver tangible, working software repeatedly in a timely manner in accordance with the Principles behind the Agile Manifesto.
History
FDD was initially devised by Jeff De Luca to meet the specific needs of a 15-month, 50-person software development project at a large Singapore bank in 1997. This resulted in a set of five processes that covered the development of an overall model and the listing, planning, design, and building of features. The first process is heavily influenced by Peter Coad's approach to object modelling. The second process incorporates Coad's ideas of using a feature list to manage functional requirements and development tasks. The other processes are a result of Jeff De Luca's experience. There have been several implementations of FDD since its successful use on the Singapore project.
The description of FDD was first introduced to the world in Chapter 6 of the book Java modelling in Color with UML by Peter Coad, Eric Lefebvre, and Jeff De Luca in 1999. Later, in Stephen Palmer and Mac Felsing's book A Practical Guide to Feature-Driven Development (published in 2002), a more general description of FDD was given decoupled from Java modelling.
Overview
FDD is a model-driven short-iteration process that consists of five basic activities. For accurate state reporting and keeping track of the software development project, milestones that mark the progress made on each feature are defined. This section gives a high level overview of the activities. In the figure on the right, the meta-process model for these activities is displayed. During the first two sequential activities, an overall model shape is estab |
https://en.wikipedia.org/wiki/Pattern%20recognition%20receptor | Pattern recognition receptors (PRRs) play a crucial role in the proper function of the innate immune system. PRRs are germline-encoded host sensors, which detect molecules typical for the pathogens. They are proteins expressed, mainly, by cells of the innate immune system, such as dendritic cells, macrophages, monocytes, neutrophils and epithelial cells, to identify two classes of molecules: pathogen-associated molecular patterns (PAMPs), which are associated with microbial pathogens, and damage-associated molecular patterns (DAMPs), which are associated with components of host's cells that are released during cell damage or death. They are also called primitive pattern recognition receptors because they evolved before other parts of the immune system, particularly before adaptive immunity. PRRs also mediate the initiation of antigen-specific adaptive immune response and release of inflammatory cytokines.
The microbe-specific molecules that are recognized by a given PRR are called pathogen-associated molecular patterns (PAMPs) and include bacterial carbohydrates (such as lipopolysaccharide or LPS, mannose), nucleic acids (such as bacterial or viral DNA or RNA), bacterial peptides (flagellin, microtubule elongation factors), peptidoglycans and lipoteichoic acids (from Gram-positive bacteria), N-formylmethionine, lipoproteins and fungal glucans and chitin. Endogenous stress signals are called damage-associated molecular patterns (DAMPs) and include uric acid and extracellular ATP, among many other compounds. There are several subgroups of PRRs. They are classified according to their ligand specificity, function, localization and/or evolutionary relationships.
PRR types and signaling
Based on their localization, PRRs may be divided into membrane-bound PRRs and cytoplasmic PRRs:
Membrane-bound PRRs include Toll like receptors (TLRs) and C-type lectin receptors (CLRs).
Cytoplasmic PRRs include NOD-like receptors (NLRs) and RIG-I-like receptors (RLRs).
PRRs were firs |
https://en.wikipedia.org/wiki/Chladni%27s%20law | Chladni's law, named after Ernst Chladni, relates the frequency of modes of vibration for flat circular surfaces with fixed center as a function of the numbers m of diametric (linear) nodes and n of radial (circular) nodes. It is stated as the equation
f = C(m + 2n)^p
where C and p are coefficients which depend on the properties of the plate.
For flat circular plates, p is roughly 2, but Chladni's law can also be used to describe the vibrations of cymbals, handbells, and church bells in which case p can vary from 1.4 to 2.4. In fact, p can even vary for a single object, depending on which family of modes is being examined. |
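As an illustration of the law's form (a minimal sketch; the values of C and p below are arbitrary placeholders rather than measurements of any real plate or bell):

```python
# Minimal sketch of Chladni's law f = C * (m + 2n)**p.
# C and p are placeholder values; real ones depend on the plate or bell.
C = 100.0  # Hz, scale coefficient (assumed)
p = 2.0    # exponent, roughly 2 for flat circular plates

def mode_frequency(m, n):
    """Frequency of the mode with m diametric and n radial nodes."""
    return C * (m + 2 * n) ** p

for m in range(1, 4):
    for n in range(0, 2):
        print(f"m={m}, n={n}: {mode_frequency(m, n):.0f} Hz")
```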
https://en.wikipedia.org/wiki/Betavoltaic%20device | A betavoltaic device (betavoltaic cell or betavoltaic battery) is a type of nuclear battery which generates electric current from beta particles (electrons) emitted from a radioactive source, using semiconductor junctions. A common source used is the hydrogen isotope tritium.
Unlike most nuclear power sources which use nuclear radiation to generate heat which then is used to generate electricity, betavoltaic devices use a non-thermal conversion process, converting the electron-hole pairs produced by the ionization trail of beta particles traversing a semiconductor.
Betavoltaic power sources (and the related technology of alphavoltaic power sources) are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications.
History
Betavoltaics were invented in the 1970s. Some pacemakers in the 1970s used betavoltaics based on promethium, but were phased out as cheaper lithium batteries were developed.
Early semiconducting materials weren't efficient at converting electrons from beta decay into usable current, so higher energy, more expensive—and potentially hazardous—isotopes were used. The more efficient semiconducting materials used can be paired with relatively benign isotopes such as tritium, which produce less radiation.
The Betacel was considered the first successfully commercialized betavoltaic battery.
Proposals
The primary use for betavoltaics is for remote and long-term use, such as spacecraft requiring electrical power for a decade or two. Recent progress has prompted some to suggest using betavoltaics to trickle-charge conventional batteries in consumer devices, such as cell phones and laptop computers. As early as 1973, betavoltaics were suggested for use in long-term medical devices such as pacemakers.
In 2018 a Russian design based on 2-micron thick nickel-63 slabs sandwiched between 10 micron diamond layers was introduced. It produc |
https://en.wikipedia.org/wiki/Sensory%20substitution | Sensory substitution is a change of the characteristics of one sensory modality into stimuli of another sensory modality.
A sensory substitution system consists of three parts: a sensor, a coupling system, and a stimulator. The sensor records stimuli and gives them to a coupling system which interprets these signals and transmits them to a stimulator. In case the sensor obtains signals of a kind not originally available to the bearer it is a case of sensory augmentation. Sensory substitution concerns human perception and the plasticity of the human brain; and therefore, allows us to study these aspects of neuroscience more through neuroimaging.
Sensory substitution systems may help people by restoring their ability to perceive certain defective sensory modality by using sensory information from a functioning sensory modality.
History
The idea of sensory substitution was introduced in the 1960s by Paul Bach-y-Rita as a means of using one sensory modality, mainly tactition, to gain environmental information to be used by another sensory modality, mainly vision. Thereafter, the entire field was discussed by Chaim-Meyer Scheff in "Experimental model for the study of changes in the organization of human sensory information processing through the design and testing of non-invasive prosthetic devices for sensory impaired people". The first sensory substitution system was developed by Bach-y-Rita et al. as a demonstration of brain plasticity in congenitally blind individuals. After this historic invention, sensory substitution has been the basis of many studies investigating perceptive and cognitive neuroscience. Sensory substitution is often employed to investigate predictions of the embodied cognition framework. Within this framework, the concept of sensorimotor contingencies in particular is investigated using sensory substitution. Furthermore, sensory substitution has contributed to the study of brain function, human cognition and rehabilitation.
Physiology
W |
https://en.wikipedia.org/wiki/Burst%20noise | Burst noise is a type of electronic noise that occurs in semiconductors and ultra-thin gate oxide films. It is also called random telegraph noise (RTN), popcorn noise, impulse noise, bi-stable noise, or random telegraph signal (RTS) noise.
It consists of sudden step-like transitions between two or more discrete voltage or current levels, as high as several hundred microvolts, at random and unpredictable times. Each shift in offset voltage or current often lasts from several milliseconds to seconds, and sounds like popcorn popping if hooked up to an audio speaker.
Burst noise was first observed in early point contact diodes, then re-discovered during the commercialization of one of the first semiconductor op-amps, the 709. No single source of burst noise is theorized to explain all occurrences; however, the most commonly invoked cause is the random trapping and release of charge carriers at thin film interfaces or at defect sites in bulk semiconductor crystal. In cases where these charges have a significant impact on transistor performance (such as under a MOS gate or in a bipolar base region), the output signal can be substantial. These defects can be caused by manufacturing processes, such as heavy ion implantation, or by unintentional side-effects such as surface contamination.
Individual op-amps can be screened for burst noise with peak detector circuits, to minimize the amount of noise in a specific application.
Burst noise is modeled mathematically by means of the telegraph process, a Markovian continuous-time stochastic process that jumps discontinuously between two distinct values.
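A minimal simulation sketch of such a two-level telegraph process (the transition rates, levels, and time step below are arbitrary assumptions, not values from the article):

```python
import random

# Minimal sketch: burst (random telegraph) noise as a two-state Markov process
# sampled on a fixed time grid. All numbers are illustrative assumptions.
rate_up, rate_down = 2.0, 5.0   # transitions per second out of the low/high state
low, high = 0.0, 200e-6         # offset levels in volts (hundreds of microvolts)
dt, n_steps = 1e-3, 5000        # 1 ms sampling, 5 s total

state = 0                       # start in the low state
samples = []
for _ in range(n_steps):
    p_switch = (rate_up if state == 0 else rate_down) * dt
    if random.random() < p_switch:
        state = 1 - state       # sudden step-like jump to the other level
    samples.append(high if state else low)

print(f"time spent in the high state: {sum(s == high for s in samples) * dt:.2f} s")
```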
See also
Atomic electron transition
Telegraph process |
https://en.wikipedia.org/wiki/Wave%20front%20set | In mathematical analysis, more precisely in microlocal analysis, the wave front (set) WF(f) characterizes the singularities of a generalized function f, not only in space, but also with respect to its Fourier transform at each point. The term "wave front" was coined by Lars Hörmander around 1970.
Introduction
In more familiar terms, WF(f) tells not only where the function f is singular (which is already described by its singular support), but also how or why it is singular, by being more exact about the direction in which the singularity occurs. This concept is mostly useful in dimension at least two, since in one dimension there are only two possible directions. The complementary notion of a function being non-singular in a direction is microlocal smoothness.
Intuitively, as an example, consider a function ƒ whose singular support is concentrated on a smooth curve in the plane at which the function has a jump discontinuity. In the direction tangent to the curve, the function remains smooth. By contrast, in the direction normal to the curve, the function has a singularity. To decide on whether the function is smooth in another direction v, one can try to smooth the function out by averaging in directions perpendicular to v. If the resulting function is smooth, then we regard ƒ to be smooth in the direction of v. Otherwise, v is in the wavefront set.
Formally, in Euclidean space, the wave front set of ƒ is defined as the complement of the set of all pairs (x0,v) such that there exists a test function φ with φ(x0) ≠ 0 and an open cone Γ containing v such that the estimate
|F[φƒ](ξ)| ≤ C_N (1 + |ξ|)^(−N) for ξ ∈ Γ
holds for all positive integers N. Here F denotes the Fourier transform. Observe that the wavefront set is conical in the sense that if (x,v) ∈ WF(ƒ), then (x,λv) ∈ WF(ƒ) for all λ > 0. In the example discussed in the previous paragraph, the wavefront set is the set-theoretic complement of the image of the tangent bundle of the curve inside the tangent bundle of the plane.
Because the def |
https://en.wikipedia.org/wiki/Antonio%20Ruberti | Antonio Ruberti (24 January 1927 – 4 September 2000) was an Italian politician and engineer. He was a member of the Italian Government and a European Commissioner as well as a professor of engineering at La Sapienza University.
Biography
Antonio Ruberti was born in Aversa in the province of Caserta, Campania.
He trained as an engineer and taught control engineering and systems theory as the first head of the Department of Science and Engineering of La Sapienza university in Rome, a university of which he was later Rector.
In 1987, he joined the Italian government as Minister for the Coordination of Scientific and Technological Research. He held this position for five years. In 1992 Ruberti was elected to the Chamber of Deputies among the ranks of the Italian Socialist Party, where he sat until 1993, when he was appointed by the Italian government to the European Commission chaired by Delors with the portfolio covering science, research, technological development and education. Ruberti was only a commissioner until 1995 but during this short mandate, he launched a series of new initiatives including the Socrates and Leonardo da Vinci programmes, the European Week of Scientific Culture, and the European Science and Technology Forum. After leaving the commission, Ruberti was once more elected to the Chamber of Deputies, where he chaired the Committee for European Union Policies.
He died in Rome in 2000. |
https://en.wikipedia.org/wiki/Brassica%20rapa | Brassica rapa is a plant species growing in various widely cultivated forms including the turnip (a root vegetable); Komatsuna, napa cabbage, bomdong, bok choy, and rapini.
Brassica rapa subsp. oleifera is an oilseed which has many common names, including rape, field mustard, bird's rape, and keblock. The term rapeseed oil is a general term for oil from Brassica species. Food grade oil made from the seed of low-erucic acid Canadian-developed strains is also called canola oil, while non-food oil is called colza oil. Canola oil is sourced from three species of Brassica plants: Brassica rapa and Brassica napus are commonly grown in Canada, while Brassica juncea (brown mustard) is a minor crop for oil production.
History
The origin of B. rapa, both geographically and any surviving wild relatives, has been difficult to identify because it has been developed by humans into many types of vegetables, is now found in most parts of the world, and has returned to the wild many times as a feral plant. A study of genetic sequences from over 400 domesticated and feral B. rapa individuals, along with environmental modelling, has provided more information about the complex history. These indicate that the ancestral B. rapa probably originated 4000 to 6000 years ago in the Hindu Kush area of Central Asia, and had three sets of chromosomes. This provided the genetic potential for a diversity of form, flavour and growth requirements. Domestication has produced modern vegetables and oil-seed crops, all with two sets of chromosomes.
Oilseed subspecies (oleifera) of Brassica rapa may have been domesticated several times from the Mediterranean to India, starting as early as 2000 BC. Edible turnips were possibly first cultivated in northern Europe, and were an important food in ancient Rome. The turnip then spread east to China, and reached Japan by 700 AD. There are descriptions of B. rapa vegetables in Indian and Chinese documents from around 1000 BC.
In the 18th century, the turnip |
https://en.wikipedia.org/wiki/Transformation%20%28function%29 | In mathematics, a transformation is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X.
Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations.
Partial transformations
While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X.
Algebraic structures
The set of all transformations on a given base set, together with function composition, forms a regular semigroup.
Combinatorics
For a finite set of cardinality n, there are n^n transformations and (n + 1)^n partial transformations.
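These counts can be checked by brute force for a small set (a minimal sketch; n = 3 is arbitrary):

```python
from itertools import product

# Minimal sketch: verify that a set of size n has n**n transformations and
# (n + 1)**n partial transformations (each element maps to one of the n
# elements or is left undefined, represented here by None).
n = 3
base = list(range(n))

transformations = list(product(base, repeat=n))    # all functions X -> X
partials = list(product(base + [None], repeat=n))  # None = undefined

print(len(transformations), n ** n)   # 27 27
print(len(partials), (n + 1) ** n)    # 64 64
```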
See also
Coordinate transformation
Data transformation (statistics)
Geometric transformation
Infinitesimal transformation
Linear transformation
Rigid transformation
Transformation geometry
Transformation semigroup
Transformation group
Transformation matrix |
https://en.wikipedia.org/wiki/BRENDA | BRENDA (The Comprehensive Enzyme Information System) is an information system representing one of the most comprehensive enzyme repositories. It is an electronic resource that comprises molecular and biochemical information on enzymes that have been classified by the IUBMB. Every classified enzyme is characterized with respect to its catalyzed biochemical reaction. Kinetic properties of the corresponding reactants (that is, substrates and products) are described in detail. BRENDA contains enzyme-specific data manually extracted from primary scientific literature and additional data derived from automatic information retrieval methods such as text mining. It provides a web-based user interface that allows a convenient and sophisticated access to the data.
History
BRENDA was founded in 1987 at the former German Research Centre for Biotechnology (now the Helmholtz Centre for Infection Research) in Braunschweig and was originally published as a series of books. Its name was originally an acronym for the BRaunschweig ENzyme DAtabase. From 1996 to 2007, BRENDA was located at the University of Cologne. There, BRENDA developed into a publicly accessible enzyme information system. In 2007, BRENDA returned to Braunschweig. Currently, BRENDA is maintained and further developed at the BRICS ( Braunschweig Integrated Centre of Systems Biology) at the TU Braunschweig.
In 2018, BRENDA was appointed an ELIXIR Core Data Resource, reflecting its fundamental importance to biological and biomedical research and to the long-term preservation of biological data.
Updates
A major update of the data in BRENDA is performed twice a year. Besides the upgrade of its content, improvements of the user interface are also incorporated into the BRENDA database.
Content and features
Database:
The database contains more than 40 data fields with enzyme-specific information on more than 8300 EC numbers that are classified according to the IUBMB.
The different data fields cover information on the enzyme's nomencla |
https://en.wikipedia.org/wiki/Pannus | Pannus is an abnormal layer of fibrovascular tissue or granulation tissue. Common sites for pannus formation include over the cornea, over a joint surface (as seen in rheumatoid arthritis), or on a prosthetic heart valve. Pannus may grow in a tumor-like fashion, as in joints where it may erode articular cartilage and bone.
In common usage, the term pannus is often used to refer to a panniculus (a hanging flap of tissue).
Pannus in rheumatoid arthritis
The term "pannus" is derived from the Latin for "tablecloth". Chronic inflammation and exuberant proliferation of the synovium leads to formation of pannus and destruction of cartilage, bone, tendons, ligaments, and blood vessels. Pannus tissue is composed of aggressive macrophage- and fibroblast-like mesenchymal cells, macrophage-like cells and other inflammatory cells that release collagenolytic enzymes.
In people suffering from rheumatoid arthritis, pannus tissue eventually forms in the joint affected by the disease, causing bony erosion and cartilage loss via release of IL-1, prostaglandins, and substance P by macrophages.
Pannus in ophthalmology
In ophthalmology, pannus refers to the growth of blood vessels into the peripheral cornea. In normal individuals, the cornea is avascular. Chronic local hypoxia (such as that occurring with overuse of contact lenses) or inflammation may lead to peripheral corneal vascularization, or pannus. Pannus may also develop in diseases of the corneal stem cells, such as aniridia. It is often resolved by peritomy. |
https://en.wikipedia.org/wiki/IP%20code | The IP code or ingress protection code indicates how well a device is protected against water and dust. It is defined by the International Electrotechnical Commission (IEC) under the international standard IEC 60529 which classifies and provides a guideline to the degree of protection provided by mechanical casings and electrical enclosures against intrusion, dust, accidental contact, and water. It is published in the European Union by the European Committee for Electrotechnical Standardization (CENELEC) as EN 60529.
The standard aims to provide users more detailed information than vague marketing terms such as waterproof. For example, a cellular phone rated at IP67 is "dust resistant" and can be "immersed in 1 meter of freshwater for up to 30 minutes". Similarly, an electrical socket rated IP22 is protected against insertion of fingers and will not become unsafe during a specified test in which it is exposed to vertically or nearly vertically dripping water. IP22 or IP2X are typical minimum requirements for the design of electrical accessories for indoor use.
The digits indicate conformity with the conditions summarized in the tables below. The digit 0 is used where no protection is provided. The digit is replaced with the letter X when insufficient data has been gathered to assign a protection level. The device can become less capable; however, it cannot become unsafe.
There are no hyphens in a standard IP code. IPX-8 (for example) is thus an invalid IP code.
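As a rough illustration of how such a code is read (a minimal sketch; the lookup tables below reproduce only a few example levels and are not the complete IEC 60529 tables):

```python
import re

# Minimal sketch: split an IP code into its solid- and liquid-ingress digits.
# Only a handful of levels are included here as examples.
SOLID = {"5": "dust protected", "6": "dust tight", "X": "not rated"}
LIQUID = {"7": "immersion up to 1 m", "8": "immersion beyond 1 m", "X": "not rated"}

def parse_ip(code):
    # A standard IP code has no hyphen, so "IPX-8" is rejected here.
    m = re.fullmatch(r"IP([0-6X])([0-9X])", code)
    if not m:
        raise ValueError(f"invalid IP code: {code}")
    solid, liquid = m.groups()
    return SOLID.get(solid, f"level {solid}"), LIQUID.get(liquid, f"level {liquid}")

print(parse_ip("IP67"))  # ('dust tight', 'immersion up to 1 m')
```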
Origin of the letters IP
In the original IEC 60529 standard from 1976, the letters IP are used without explanation and are referred to as "characteristic letters". In the subsequent editions of the standard, from 1989 and 1999 respectively, IP is explained to denote "international protection" on both the French and English pages. According to the Finnish national committee of the IEC, one possibility is that the abbreviation is a combination of the English word ingress and the French word pénétration |
https://en.wikipedia.org/wiki/Mantis%20Bug%20Tracker | Mantis Bug Tracker is a free and open source, web-based bug tracking system. The most common use of MantisBT is to track software defects. However, MantisBT is often configured by users to serve as a more generic issue tracking system and project management tool.
The name Mantis and the logo of the project refer to the insect family Mantidae, known for the tracking of and feeding on other insects, colloquially referred to as "bugs". The name of the project is typically abbreviated to either MantisBT or just Mantis.
History
Kenzaburo Ito started development of the Mantis Bug Tracking project in 2000. In 2002, Kenzaburo was joined by Jeroen Latour, Victor Boctor and Julian Fitzell to be the administrators and it became a team project.
Version 1.0.0 was released in February 2006.
Version 1.1.0 was released in December 2007.
In November 2008, after a long discussion, the project switched from using the Subversion revision control tool to Git, a distributed revision control tool.
In February 2010, version 1.2.0 was released.
In July 2012, the MantisBT organization on GitHub became the official repository for the Project's source code.
Features
Plug-ins
An event-driven plug-in system was introduced with the release of version 1.2.0. This plug-in system allows extension of MantisBT through both officially maintained and third party plug-ins. As of November 2013, there are over 50 plug-ins available on the MantisBT-plugins organization on GitHub.
Prior to version 1.2.0, a third party plug-in system created by Vincent Debout was available to users along with a variety of different plug-ins. This system was not officially supported by the MantisBT project and is incompatible with MantisBT 1.2.0 and later.
Notifications
MantisBT supports the sending of e-mail notifications upon changes being made to issues in the system. Users have the ability to specify the type of e-mails they receive and set filters to define the minimum severity of issues to receive notifications abo |
https://en.wikipedia.org/wiki/Flexible-fuel%20vehicle | A flexible-fuel vehicle (FFV) or dual-fuel vehicle (colloquially called a flex-fuel vehicle) is an alternative fuel vehicle with an internal combustion engine designed to run on more than one fuel, usually gasoline blended with either ethanol or methanol fuel, and both fuels are stored in the same common tank. Modern flex-fuel engines are capable of burning any proportion of the resulting blend in the combustion chamber as fuel injection and spark timing are adjusted automatically according to the actual blend detected by a fuel composition sensor. This device is known as an oxygen sensor and it reads the oxygen levels in the stream of exhaust gasses, its signal enriching or leaning the fuel mixture going into the engine. Flex-fuel vehicles are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks and the engine runs on one fuel at a time, for example, compressed natural gas (CNG), liquefied petroleum gas (LPG), or hydrogen.
The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with about 60 million automobiles, motorcycles and light duty trucks manufactured and sold worldwide by March 2018, and concentrated in four markets, Brazil (30.5 million light-duty vehicles and over 6 million motorcycles), the United States (21 million by the end of 2017), Canada (1.6 million by 2014), and Europe, led by Sweden (243,100). In addition to flex-fuel vehicles running with ethanol, in Europe and the US, mainly in California, there have been successful test programs with methanol flex-fuel vehicles, known as M85 flex-fuel vehicles. There have been also successful tests using P-series fuels with E85 flex fuel vehicles, but as of June 2008, this fuel is not yet available to the general public. These successful tests with P-series fuels were conducted on Ford Taurus and Dodge Caravan flexible-fuel vehicles.
Though technology exists to allow ethanol FFVs to run on any mixture of gasoline and ethanol, from pu |
https://en.wikipedia.org/wiki/Negri%20body | Negri bodies are eosinophilic, sharply outlined, pathognomonic inclusion bodies (2–10 μm in diameter) found in the cytoplasm of certain nerve cells containing the virus of rabies, especially in pyramidal cells within Ammon's horn of the hippocampus. They are also often found in the Purkinje cells of the cerebellar cortex from postmortem brain samples of rabies victims. They consist of ribonuclear proteins produced by the virus.
They are named for Adelchi Negri.
History and use as a Rabies Diagnosis
Adelchi Negri, an assistant pathologist working in the laboratory of Camillo Golgi, observed these inclusions in rabbits and dogs with rabies. These findings were presented in 1903 at a meeting of the Società Medico-Chirurgica of Pavia. The American pathologist Anna Wessels Williams made the same discovery, but because Negri published his results first, the bodies bear his name.
Negri was convinced the inclusions were a parasitic protozoon and the etiologic agent of rabies. Later that same year, however, Paul Remlinger and Rifat-Bey Frasheri in Constantinople and, separately, Alfonso di Vestea in Naples showed that the etiologic agent of rabies is a filterable virus. Negri continued until 1909 to try to prove that the intraneuronal inclusions named after him corresponded to steps in the developmental cycle of a protozoan.
In spite of his incorrect etiologic hypothesis, Negri's discovery represented a breakthrough in the rapid diagnosis of rabies, and the detection of Negri bodies, using a method developed by Anna Wessels Williams, remained the primary way to detect rabies for the next thirty years. |
https://en.wikipedia.org/wiki/Schr%C3%B6dinger%E2%80%93Newton%20equation | The Schrödinger–Newton equation, sometimes referred to as the Newton–Schrödinger or Schrödinger–Poisson equation, is a nonlinear modification of the Schrödinger equation with a Newtonian gravitational potential, where the gravitational potential emerges from the treatment of the wave function as a mass density, including a term that represents interaction of a particle with its own gravitational field. The inclusion of a self-interaction term represents a fundamental alteration of quantum mechanics. It can be written either as a single integro-differential equation or as a coupled system of a Schrödinger and a Poisson equation. In the latter case it is also referred to in the plural form.
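For reference, the single integro-differential form mentioned above is conventionally written as follows (a standard textbook form supplied for illustration, not quoted from this article):

```latex
i\hbar \frac{\partial \psi(\mathbf{r},t)}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r},t)
    - G m^{2}\left( \int \frac{|\psi(\mathbf{r}',t)|^{2}}{|\mathbf{r}-\mathbf{r}'|}\, \mathrm{d}^{3}r' \right)\psi(\mathbf{r},t)
```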
The Schrödinger–Newton equation was first considered by Ruffini and Bonazzola in connection with self-gravitating boson stars. In this context of classical general relativity it appears as the non-relativistic limit of either the Klein–Gordon equation or the Dirac equation in a curved space-time together with the Einstein field equations.
The equation also describes fuzzy dark matter and approximates classical cold dark matter described by the Vlasov–Poisson equation in the limit that the particle mass is large.
Later on it was proposed as a model to explain the quantum wave function collapse by Lajos Diósi and Roger Penrose, from whom the name "Schrödinger–Newton equation" originates. In this context, matter has quantum properties, while gravity remains classical even at the fundamental level. The Schrödinger–Newton equation was therefore also suggested as a way to test the necessity of quantum gravity.
In a third context, the Schrödinger–Newton equation appears as a Hartree approximation for the mutual gravitational interaction in a system of a large number of particles. In this context, a corresponding equation for the electromagnetic Coulomb interaction was suggested by Philippe Choquard at the 1976 Symposium on Coulomb Systems in Lausanne to describe one-component plasmas. |
https://en.wikipedia.org/wiki/Differential%20GPS | Differential Global Positioning Systems (DGPSs) supplement and enhance the positional data available from global navigation satellite systems (GNSSs). A DGPS for GPS can increase accuracy by about a thousandfold.
DGPSs consist of networks of fixed position, ground-based reference stations. Each reference station calculates the difference between its highly accurate known position and its less accurate satellite-derived position. The stations broadcast this data locally—typically using ground-based transmitters of shorter range. Non-fixed (mobile) receivers use it to correct their position by the same amount, thereby improving their accuracy.
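The differencing idea can be sketched in a few lines (a toy example with invented flat 2-D coordinates; a real DGPS applies corrections per satellite to pseudoranges, but the broadcast-and-add principle is the same):

```python
# Toy sketch of the differential correction: the reference station broadcasts
# (true - measured), and a nearby rover adds that correction to its own fix.
station_true     = (1000.00, 2000.00)   # surveyed reference-station position (m)
station_measured = (1003.10, 1997.60)   # its GPS-derived position at this instant

correction = (station_true[0] - station_measured[0],
              station_true[1] - station_measured[1])

rover_measured  = (1500.40, 2500.90)    # nearby mobile receiver, same satellites in view
rover_corrected = (rover_measured[0] + correction[0],
                   rover_measured[1] + correction[1])

print(correction)       # roughly (-3.1, 2.4)
print(rover_corrected)  # roughly (1497.3, 2503.3)
```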
The United States Coast Guard (USCG) previously ran DGPS in the United States on longwave radio frequencies near major waterways and harbors. It was discontinued in March 2022. The USCG's DGPS was known as NDGPS (Nationwide DGPS) and was jointly administered by the Coast Guard and the Army Corps of Engineers. It consisted of broadcast sites located throughout the inland and coastal portions of the United States including Alaska, Hawaii and Puerto Rico. The Canadian Coast Guard (CCG) also ran a separate DGPS system, but discontinued its use on December 15, 2022. Other countries have their own DGPS.
A similar system which transmits corrections from orbiting satellites instead of ground-based transmitters is called a Wide-Area DGPS (WADGPS) Satellite Based Augmentation System.
History
When GPS was first being put into service, the US military was concerned about the possibility of enemy forces using the globally available GPS signals to guide their own weapon systems. Originally, the government thought the "coarse acquisition" (C/A) signal would give only coarse accuracy, but with improved receiver designs, the actual accuracy turned out to be considerably better. Starting in March 1990, to avoid providing such unexpected accuracy, the C/A signal transmitted on the L1 frequency was deliberately degraded by offsetting its cl |
https://en.wikipedia.org/wiki/Common-mode%20rejection%20ratio | In electronics, the common mode rejection ratio (CMRR) of a differential amplifier (or other device) is a metric used to quantify the ability of the device to reject common-mode signals, i.e. those that appear simultaneously and in-phase on both inputs. An ideal differential amplifier would have infinite CMRR, however this is not achievable in practice. A high CMRR is required when a differential signal must be amplified in the presence of a possibly large common-mode input, such as strong electromagnetic interference (EMI). An example is audio transmission over balanced line in sound reinforcement or recording.
Theory
Ideally, a differential amplifier takes the voltages V+ and V− on its two inputs and produces an output voltage Vo = Ad(V+ − V−), where Ad is the differential gain. However, the output of a real differential amplifier is better described as:
Vo = Ad(V+ − V−) + (Acm/2)(V+ + V−)
where Acm is the "common-mode gain", which is typically much smaller than the differential gain.
The CMRR is defined as the ratio of the powers of the differential gain over the common-mode gain, measured in positive decibels (thus using the 20 log rule):
CMRR = 20 log10(|Ad/Acm|) dB
As differential gain should exceed common-mode gain, this will be a positive number, and the higher the better.
The CMRR is a very important specification, as it indicates how much of the common-mode signal will appear in your measurement. The value of the CMRR often depends on signal frequency as well, and must be specified as a function thereof.
It is often important in reducing noise on transmission lines. For example, when measuring the resistance of a thermocouple in a noisy environment, the noise from the environment appears as an offset on both input leads, making it a common-mode voltage signal. The CMRR of the measurement instrument determines the attenuation applied to the offset or noise.
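A quick numerical illustration of the decibel definition above (the gain values are arbitrary assumptions):

```python
import math

# Minimal sketch: CMRR in dB from assumed differential and common-mode gains.
A_d  = 1e5   # differential gain (assumed)
A_cm = 0.1   # common-mode gain (assumed)

cmrr_db = 20 * math.log10(abs(A_d / A_cm))
print(f"CMRR = {cmrr_db:.0f} dB")  # 120 dB
```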
Amplifier design
CMRR is an important feature of operational amplifiers, difference amplifiers and instrumentation amplifiers, and can be found in the datasheet. The CMRR often v |
https://en.wikipedia.org/wiki/Human%20reproductive%20system | The human reproductive system includes the male reproductive system which functions to produce and deposit sperm; and the female reproductive system which functions to produce egg cells, and to protect and nourish the fetus until birth. Humans have a high level of sexual differentiation. In addition to differences in nearly every reproductive organ, there are numerous differences in typical secondary sex characteristics.
Human reproduction usually involves internal fertilization by sexual intercourse. In this process, the male inserts his penis into the female's vagina and ejaculates semen, which contains sperm. A small proportion of the sperm pass through the cervix into the uterus, and then into the fallopian tubes for fertilization of the ovum. Only one sperm is required to fertilize the ovum. Upon successful fertilization, the fertilized ovum, or zygote, travels out of the fallopian tube and into the uterus, where it implants in the uterine wall. This marks the beginning of gestation, better known as pregnancy, which continues for around nine months as the fetus develops. When the fetus has developed to a certain point, pregnancy is concluded with childbirth, involving labor. During labor, the muscles of the uterus contract and the cervix dilates over the course of hours, and the baby passes out of the vagina. Human infants are completely dependent on their caregivers, and require high levels of parental care. Infants rely on their caregivers for comfort, cleanliness, and food. Food may be provided by breastfeeding or formula feeding.
Structure
Female
The human female reproductive system is a series of organs primarily located inside the body and around the pelvic region of a female that contribute towards the reproductive process. The human female reproductive system contains three main parts: the vulva, which leads to the vagina, the vaginal opening, to the uterus; the uterus, which holds the developing fetus; and the ovaries, which produce the female's o |
https://en.wikipedia.org/wiki/List%20of%20communication%20satellite%20companies | This is a list of all companies that currently operate at least one commercial communication satellite or have one on order.
Global Top 20
The World Teleport Association publishes lists of companies based on revenues from all customized communications sources and includes operators of teleports and satellite fleets. In order from largest to smallest, the Global Top 20 of 2021 were:
SES (Luxembourg)
Intelsat S.A. (Luxembourg)
EchoStar Satellite Services (USA)
Hughes Network Systems (USA)
Eutelsat (France)
Arqiva (UK)
Telesat (Canada)
Speedcast (USA)
Telespazio S.p.A.(Italy)
Encompass Digital Media (USA)
SingTel Satellite (Singapore)
Hispasat (Spain)
Globecast (France)
Liquid Intelligent Technologies (South Africa)
Russian Satellite Communications Company (Russia)
Telstra (Australia)
MEASAT Global (Malaysia)
Thaicom (Thailand)
du (UAE)
Gazprom Space Systems (Russia)
List |
https://en.wikipedia.org/wiki/Software%20rot | Software rot (bit rot, code rot, software erosion, software decay, or software entropy) is either a slow deterioration of software quality over time or its diminishing responsiveness that will eventually lead to software becoming faulty, unusable, or in need of upgrade. This is not a physical phenomenon; the software does not actually decay, but rather suffers from a lack of being responsive and updated with respect to the changing environment in which it resides.
The Jargon File, a compendium of hacker lore, defines "bit rot" as a jocular explanation for the degradation of a software program over time even if "nothing has changed"; the idea behind this is almost as if the bits that make up the program were subject to radioactive decay.
Causes
Several factors are responsible for software rot, including changes to the environment in which the software operates, degradation of compatibility between parts of the software itself, and the appearance of bugs in unused or rarely used code.
Environment change
When changes occur in the program's environment, particularly changes which the designer of the program did not anticipate, the software may no longer operate as originally intended. For example, many early computer game designers used the CPU clock speed as a timer in their games. However, newer CPU clocks were faster, so the gameplay speed increased accordingly, making the games less usable over time.
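A toy sketch of the kind of iteration-counted delay loop behind this failure mode (purely illustrative, not taken from any real game):

```python
import time

# Minimal sketch: a frame delay implemented by counting loop iterations.
# The count is tuned for one CPU; on a faster CPU the same count finishes
# sooner, so the game runs faster than intended.
ITERATIONS_PER_FRAME = 2_000_000  # assumed to take ~16 ms on the original machine

def wait_one_frame():
    for _ in range(ITERATIONS_PER_FRAME):
        pass  # busy-wait; elapsed wall time depends entirely on CPU speed

start = time.perf_counter()
wait_one_frame()
print(f"frame delay on this machine: {time.perf_counter() - start:.4f} s")
```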
Onceability
There are changes in the environment not related to the program's designer, but its users. Initially, a user could bring the system into working order, and have it working flawlessly for a certain amount of time. But, when the system stops working correctly, or the users want to access the configuration controls, they cannot repeat that initial step because of the different context and the unavailable information (password lost, missing instructions, or simply a hard-to-manage user interface that was first configured by trial and error). Information Arc |
https://en.wikipedia.org/wiki/Carl%20St%C3%B8rmer | Fredrik Carl Mülertz Størmer (3 September 1874 – 13 August 1957) was a Norwegian mathematician and astrophysicist. In mathematics, he is known for his work in number theory, including the calculation of π and Størmer's theorem on consecutive smooth numbers. In physics, he is known for studying the movement of charged particles in the magnetosphere and the formation of aurorae, and for his book on these subjects, From the Depths of Space to the Heart of the Atom. He worked for many years as a professor of mathematics at the University of Oslo in Norway. A crater on the far side of the Moon is named after him.
Personal life and career
Størmer was born on 3 September 1874 in Skien, the only child of a pharmacist Georg Ludvig Størmer (1842–1930) and Elisabeth Amalie Johanne Henriette Mülertz (1844–1916). His uncle was the entrepreneur and inventor Henrik Christian Fredrik Størmer.
Størmer studied mathematics at the Royal Frederick University in Kristiania, Norway (now the University of Oslo, in Oslo) from 1892 to 1897, earning the rank of candidatus realium in 1898. He then studied with Picard, Poincaré, Painlevé, Jordan, Darboux, and Goursat at the Sorbonne in Paris from 1898 to 1900. He returned to Kristiania in 1900 as a research fellow in mathematics, visited the University of Göttingen in 1902, and returned to Kristiania in 1903, where he was appointed as a professor of mathematics, a position he held for 43 years. After he received a permanent position in Kristiania, Størmer published his subsequent writings under a shortened version of his name, Carl Størmer. In 1918, he was elected as the first president of the newly formed Norwegian Mathematical Society. He participated regularly in Scandinavian mathematical congresses, and was president of the 1936 International Congress of Mathematicians in Oslo (from 1924 the new name of Kristiania). Størmer was also affiliated with the Institute of Theoretical Astrophysics at the University of Oslo, which was founded in 1 |
https://en.wikipedia.org/wiki/Wochenend%20und%20Sonnenschein | "Wochenend und Sonnenschein" ("Weekend and Sunshine") is a song with German lyrics that was copyrighted in 1930 by Charles Amberg (lyrics) and Milton Ager (music). The music is based on the famed American song "Happy Days Are Here Again" that was copyrighted in 1929 by Ager and Jack Yellen (English lyrics). The German lyrics are very different in spirit from the English ones:
Tief im Wald nur ich und du,
der Herrgott drückt ein Auge zu,
denn er schenkt uns ja zum Glücklichsein
Wochenend und Sonnenschein.
which translates roughly as:
Only you and me, deep in the woods
The lord above turns a blind eye,
For he grants us for being happy
Weekend and sunshine.
"Wochenend und Sonnenschein" was first performed by the popular German sextet, the Comedian Harmonists, who recorded the song on 22 August 1930 on 78 rpm gramophone record. The recording is available on CD. |
https://en.wikipedia.org/wiki/R/K%20selection%20theory | In ecology, r/K selection theory relates to the selection of combinations of traits in an organism that trade off between quantity and quality of offspring. The focus on either an increased quantity of offspring at the expense of individual parental investment of r-strategists, or on a reduced quantity of offspring with a corresponding increased parental investment of K-strategists, varies widely, seemingly to promote success in particular environments. The concepts of quantity or quality offspring are sometimes referred to as "cheap" or "expensive", a comment on the expendable nature of the offspring and parental commitment made. The stability of the environment can predict if many expendable offspring are made or if fewer offspring of higher quality would lead to higher reproductive success. An unstable environment would encourage the parent to make many offspring, because the likelihood of all (or the majority) of them surviving to adulthood is slim. In contrast, more stable environments allow parents to confidently invest in one offspring because they are more likely to survive to adulthood.
The terminology of r/K-selection was coined by the ecologists Robert MacArthur and E. O. Wilson in 1967 based on their work on island biogeography; although the concept of the evolution of life history strategies has a longer history (see e.g. plant strategies).
The theory was popular in the 1970s and 1980s, when it was used as a heuristic device, but lost importance in the early 1990s, when it was criticized by several empirical studies. A life-history paradigm has replaced the r/K selection paradigm, but continues to incorporate its important themes as a subset of life history theory. Some scientists now prefer to use the terms fast versus slow life history as a replacement for, respectively, r versus K reproductive strategy.
Overview
In r/K selection theory, selective pressures are hypothesised to drive evolution in one of two generalized directions: r- or K-selection |
https://en.wikipedia.org/wiki/Extended%20Data%20Services | Extended Data Services (now XDS, previously EDS), is an American standard classified under Electronic Industries Alliance standard CEA-608-E for the delivery of any ancillary data (metadata) to be sent with an analog television program, or any other NTSC video signal.
XDS is used by TV stations, TV networks, and TV program syndication distributors in the US for several purposes.
Here are some of the most common uses of XDS:
The "autoclock" system delivers time data via an XDS "Time-of-Day Packet" for automatically setting the clock of newer TVs & VCRs sold in the US. Most PBS stations provide this service.
Rudimentary program information which can be displayed on-screen, such as the name and remaining time of the program,
Station identification,
V-chip content ratings data.
XDS is also used by the American TV network ABC for their Network Alert System (NAS). NAS is a one-way communication system used by ABC to inform and alert their local affiliate stations across the US of information regarding ABC's network programming (such as program timings & changes, news special report information, etc.), using a special decoder manufactured for ABC by EEG Enterprises , a manufacturer of related equipment for the TV broadcast industry such as closed captioning and general-purpose XDS encoders. The CBS Television Network uses a similar method to transmit three separate internal messaging services to stations: one for programming departments, one for master control operations, and one for newsrooms.
Many standard definition receivers produced by Dish Network encode XDS data into their output signal. Data encoded includes time of day, program name, program description, program time remaining, channel identification, and content rating. This data is obtained from the satellite service's EPG and replaces any data which may have been present when the signal was uplinked.
XDS uses the same line in the vertical blanking interval as closed captioning (NTSC line 21), a |
https://en.wikipedia.org/wiki/Dispersive%20mass%20transfer | Dispersive mass transfer, in fluid dynamics, is the spreading of mass from highly concentrated areas to less concentrated areas. It is one form of mass transfer.
Dispersive mass flux, here denoted $J$, is analogous to diffusion, and it can also be described using Fick's first law: $J = -E \, \partial c / \partial x$,
where c is mass concentration of the species being dispersed, E is the dispersion coefficient, and x is the position in the direction of the concentration gradient. Dispersion can be differentiated from diffusion in that it is caused by non-ideal flow patterns (i.e. deviations from plug flow) and is a macroscopic phenomenon, whereas diffusion is caused by random molecular motions (i.e. Brownian motion) and is a microscopic phenomenon. Dispersion is often more significant than diffusion in convection-diffusion problems. The dispersion coefficient is frequently modeled as the product of the fluid velocity, U, and some characteristic length scale, α: $E = \alpha U$.
Transport phenomena |
https://en.wikipedia.org/wiki/Einstein%20solid | The Einstein solid is a model of a crystalline solid that contains a large number of independent three-dimensional quantum harmonic oscillators of the same frequency. The independence assumption is relaxed in the Debye model.
While the model provides qualitative agreement with experimental data, especially for the high-temperature limit, these oscillations are in fact phonons, or collective modes involving many atoms. Albert Einstein was aware that getting the frequency of the actual oscillations would be difficult, but he nevertheless proposed this theory because it was a particularly clear demonstration that quantum mechanics could solve the specific heat problem in classical mechanics.
Historical impact
The original theory proposed by Einstein in 1907 has great historical relevance. According to the empirical Dulong–Petit law, and as required by classical mechanics, the specific heat of solids should be independent of temperature. But experiments at low temperatures showed that the heat capacity changes, going to zero at absolute zero. As the temperature goes up, the specific heat goes up until it approaches the Dulong and Petit prediction at high temperature.
By employing Planck's quantization assumption, Einstein's theory accounted for the observed experimental trend for the first time. Together with the photoelectric effect, this became one of the most important pieces of evidence for the need of quantization. Einstein used the levels of the quantum mechanical oscillator many years before the advent of modern quantum mechanics.
Heat capacity
For a thermodynamic approach, the heat capacity can be derived using different statistical ensembles. All solutions are equivalent at the thermodynamic limit.
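For reference, the closed-form result that each of these ensemble routes arrives at is the standard Einstein heat capacity (stated here as a supplement, since the derivation in this excerpt is cut off; $\varepsilon$ denotes the energy quantum of each oscillator and $N$ the number of atoms):
$C_V = 3 N k_B \left( \frac{\varepsilon}{k_B T} \right)^2 \frac{e^{\varepsilon / k_B T}}{\left( e^{\varepsilon / k_B T} - 1 \right)^2}.$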
Microcanonical ensemble
The heat capacity of an object at constant volume V is defined through the internal energy U as $C_V = \left( \frac{\partial U}{\partial T} \right)_V$.
$T$, the temperature of the system, can be found from the entropy: $\frac{1}{T} = \frac{\partial S}{\partial U}$.
To find the entropy consider a solid made of a |
https://en.wikipedia.org/wiki/Finite-difference%20time-domain%20method | Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics (finding approximate solutions to the associated system of differential equations). Since it is a time-domain method, FDTD solutions can cover a wide frequency range with a single simulation run, and treat nonlinear material properties in a natural way.
The FDTD method belongs in the general class of grid-based differential numerical modeling methods (finite difference methods). The time-dependent Maxwell's equations (in partial differential form) are discretized using central-difference approximations to the space and time partial derivatives. The resulting finite-difference equations are solved in either software or hardware in a leapfrog manner: the electric field vector components in a volume of space are solved at a given instant in time; then the magnetic field vector components in the same spatial volume are solved at the next instant in time; and the process is repeated over and over again until the desired transient or steady-state electromagnetic field behavior is fully evolved.
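To make the leapfrog update concrete, here is a minimal one-dimensional sketch in Python; it uses normalized units and a Courant factor of 0.5, and is only an illustration of the alternating E/H update pattern, not Yee's full three-dimensional staggered-grid scheme.
import numpy as np

nz, nt = 200, 400                    # number of grid points and time steps
ez = np.zeros(nz)                    # electric field samples
hy = np.zeros(nz)                    # magnetic field samples, offset half a cell
courant = 0.5                        # normalized Courant factor
for n in range(nt):
    # the magnetic field is advanced from the spatial differences of the electric field
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    # then the electric field is advanced from the updated magnetic field
    ez[1:] += courant * (hy[1:] - hy[:-1])
    # soft Gaussian source injected at the centre of the grid
    ez[nz // 2] += np.exp(-((n - 30.0) / 10.0) ** 2)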
History
Finite difference schemes for time-dependent partial differential equations (PDEs) have been employed for many years in computational fluid dynamics problems, including the idea of using centered finite difference operators on staggered grids in space and time to achieve second-order accuracy.
The novelty of Kane Yee's FDTD scheme, presented in his seminal 1966 paper, was to apply centered finite difference operators on staggered grids in space and time for each electric and magnetic vector field component in Maxwell's curl equations.
The descriptor "Finite-difference time-domain" and its corresponding "FDTD" acronym were originated by Allen Taflove in 1980.
Since about 1990, FDTD techniques have emerged as primary means to computationally model many scienti |
https://en.wikipedia.org/wiki/Slewing | Slewing is the rotation of an object around an axis, usually the z axis. An example is a radar scanning 360 degrees by slewing around the z axis. This is also common terminology in astronomy. The process of rotating a telescope to observe a different region of the sky is referred to as slewing.
The term slewing is also found in motion control applications. Often the slew axis is combined with another axis to form a motion profile.
In crane terminology, slewing is the angular movement of a crane boom or crane jib in a horizontal plane.
The term is also used in the computer game Microsoft Flight Simulator, wherein the user can press a key to rotate and move the virtual aircraft freely along all three spatial axes.
In modern-day use of CNC programs, slewing is a vital part of the process.
Mechanics |
https://en.wikipedia.org/wiki/Artificial%20brain | An artificial brain (or artificial mind) is software and hardware with cognitive abilities similar to those of the animal or human brain.
Research investigating "artificial brains" and brain emulation plays three important roles in science:
An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience.
A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being.
A long-term project to create machines exhibiting behavior comparable to that of animals with complex central nervous systems, such as mammals and most particularly humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI.
An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, motor neurone and Parkinson's disease.
The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence".
The third objective is generally called artificial general intelligence by researchers. However, Ray Kurzweil prefers the term "strong AI". In his book The Singularity is Near, he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on groun |
https://en.wikipedia.org/wiki/Literal%20%28computer%20programming%29 | In computer science, a literal is a textual representation (notation) of a value as it is written in source code. Almost all programming languages have notations for atomic values such as integers, floating-point numbers, and strings, and usually for booleans and characters; some also have notations for elements of enumerated types and compound values such as arrays, records, and objects. An anonymous function is a literal for the function type.
In contrast to literals, variables or constants are symbols that can take on one of a class of fixed values, the constant being constrained not to change. Literals are often used to initialize variables; for example, in the following, 1 is an integer literal and the three letter string in "cat" is a string literal:
int a = 1;
string s = "cat";
In lexical analysis, literals of a given type are generally a token type, with a grammar rule, like "a string of digits" for an integer literal. Some literals are specific keywords, like true for the boolean literal "true".
In some object-oriented languages (like ECMAScript), objects can also be represented by literals. Methods of this object can be specified in the object literal using function literals. The brace notation below, which is also used for array literals, is typical for object literals:
{"cat", "dog"}
{name: "cat", length: 57}
Literals of objects
In ECMAScript (as well as its implementations JavaScript or ActionScript), an object with methods can be written using the object literal like this:
var newobj = {
var1: true,
var2: "very interesting",
method1: function () {
alert(this.var1)
},
method2: function () {
alert(this.var2)
}
};
newobj.method1();
newobj.method2();
These object literals are similar to anonymous classes in other languages like Java.
The JSON data interchange format is based on a subset of the JavaScript object literal syntax, with some additional restrictions (among them requiring all keys to be quoted, and disallowing functi |
https://en.wikipedia.org/wiki/Polarimetry | Polarimetry is the measurement and interpretation of the polarization of transverse waves, most notably electromagnetic waves, such as radio or light waves. Typically polarimetry is done on electromagnetic waves that have traveled through or have been reflected, refracted or diffracted by some material in order to characterize that object.
Plane polarized light: According to the wave theory of light, an ordinary ray of light is considered to be vibrating in all planes at right angles to the direction of its propagation. If this ordinary ray of light is passed through a Nicol prism, the emergent ray has its vibration in only one plane.
Applications
Polarimetry of thin films and surfaces is commonly known as ellipsometry.
Polarimetry is used in remote sensing applications, such as planetary science, astronomy, and weather radar.
Polarimetry can also be included in computational analysis of waves. For example, radars often consider wave polarization in post-processing to improve the characterization of the targets. In this case, polarimetry can be used to estimate the fine texture of a material, help resolve the orientation of small structures in the target, and, when circularly-polarized antennas are used, resolve the number of bounces of the received signal (the chirality of circularly polarized waves alternates with each reflection).
Imaging
In 2003, a visible-near IR (VNIR) Spectropolarimetric Imager with an acousto-optic tunable filter (AOTF) was reported. This hyperspectral and spectropolarimetric imager functioned in radiation regions spanning from ultraviolet (UV) to long-wave infrared (LWIR). In AOTFs a piezoelectric transducer converts a radio frequency (RF) signal into an ultrasonic wave. This wave then travels through a crystal attached to the transducer and upon entering an acoustic absorber is diffracted. The wavelength of the resulting light beams can be modified by altering the initial RF signal. VNIR and LWIR hyperspectral imaging consistently |
https://en.wikipedia.org/wiki/Nicol%20prism | A Nicol prism is a type of polarizer. It is an optical device made from calcite crystal used to convert ordinary light into plane polarized light. It is made in such a way that it eliminates one of the rays by total internal reflection, i.e. the ordinary ray is eliminated and only the extraordinary ray is transmitted through the prism.
It was the first type of polarizing prism, invented in 1828 by William Nicol (1770–1851) of Edinburgh.
Mechanism
The Nicol prism consists of a rhombohedral crystal of Iceland spar (a variety of calcite) that has been cut at an angle of 68° with respect to the crystal axis, cut again diagonally, and then rejoined, using a layer of transparent Canada balsam as a glue.
An unpolarized light ray enters through the side face of the crystal, and is split into two orthogonally polarized, differently directed rays by the birefringence property of calcite. The ordinary ray, or o-ray, experiences a refractive index of no = 1.658 in the calcite and undergoes a total internal reflection at the calcite–glue interface because its angle of incidence at the glue layer (refractive index n = 1.550) exceeds the critical angle for the interface. It passes out the top side of the upper half of the prism with some refraction. The extraordinary ray, or e-ray, experiences a lower refractive index (ne = 1.486) in the calcite crystal and is not totally reflected at the interface because it strikes the interface at a sub-critical angle. The e-ray merely undergoes a slight refraction, or bending, as it passes through the interface into the lower half of the prism. It finally leaves the prism as a ray of plane-polarized light, undergoing another refraction, as it exits the opposite side of the prism. The two exiting rays have polarizations orthogonal (at right angles) to each other, but the lower, or e-ray, is the more commonly used for further experimentation because it is again traveling in the original horizontal direction, assuming that the calcite prism |
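A quick check of the total internal reflection claim, using the refractive indices quoted above: the critical angle at the calcite–balsam interface for the o-ray is
$\theta_c = \arcsin\!\left(\frac{1.550}{1.658}\right) \approx 69.2^\circ,$
so an o-ray meeting the balsam layer beyond this angle is totally reflected, while for the e-ray ($n_e = 1.486 < 1.550$) the ray passes from a lower to a higher index and total internal reflection cannot occur at that interface.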
https://en.wikipedia.org/wiki/Composability | Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides components that can be selected and assembled in various combinations to satisfy specific user requirements. In information systems, the essential features that make a component composable are that it be:
self-contained (modular): it can be deployed independently – note that it may cooperate with other components, but dependent components are replaceable
stateless: it treats each request as an independent transaction, unrelated to any previous request. Stateless is just one technique; managed state and transactional systems can also be composable, but with greater difficulty.
It is widely believed that composable systems are more trustworthy than non-composable systems because it is easier to evaluate their individual parts.
Simulation theory
In simulation theory, current literature distinguishes between Composability of Models and Interoperability of Simulation. Modeling is understood as the purposeful abstraction of reality, resulting in the formal specification of a conceptualization and underlying assumptions and constraints. Modeling and simulation (M&S) is, in particular, interested in models that are used to support the implementation of an executable version on a computer. The execution of a model over time is understood as the simulation. While modeling targets the conceptualization, simulation challenges mainly focus on implementation, in other words, modeling resides on the abstraction level, whereas simulation resides on the implementation level. Following the ideas derived from the Levels of Conceptual Interoperability model (LCIM), Composability addresses the model challenges on higher levels, interoperability deals with simulation implementation issues, and integratability with network questions. Tolk proposes the following definitions: Interoperability allows exchanging information between the systems and using |
https://en.wikipedia.org/wiki/Smart%20cow%20problem | The smart cow problem is the concept that, when a group of individuals is faced with a technically difficult task, only one of their members has to solve it. When the problem has been solved once, an easily repeatable method may be developed, allowing the less technically proficient members of the group to accomplish the task.
The term smart cow problem is thought to be derived from the expression: "It only takes one smart cow to open the latch of the gate, and then all the other cows follow."
This concept has been applied to digital rights management (DRM), where, due to the rapid spread of information on the Internet, it only takes one individual's defeat of a DRM scheme to render the method obsolete.
See also
Jon Lech Johansen (aka "DVD Jon", among the first hackers to crack DVD encryption)
Script kiddie (an unskilled hacker who relies on tools created by others) |
https://en.wikipedia.org/wiki/Redshift%20quantization | Redshift quantization, also referred to as redshift periodicity, redshift discretization, preferred redshifts and redshift-magnitude bands, is the hypothesis that the redshifts of cosmologically distant objects (in particular galaxies and quasars) tend to cluster around multiples of some particular value.
In standard inflationary cosmological models, the redshift of cosmological bodies is ascribed to the expansion of the universe, with greater redshift indicating greater cosmic distance from the Earth (see Hubble's Law). This is referred to as cosmological redshift and is one of the main pieces of evidence for the Big Bang. Quantized redshifts of objects would indicate, under Hubble's Law, that astronomical objects are arranged in a quantized pattern around the Earth. Proponents of quantization therefore more commonly posit that the redshift is unrelated to cosmic expansion and is the outcome of some other physical mechanism, referred to as "intrinsic redshift" or "non-cosmological redshift".
In 1973, astronomer William G. Tifft was the first to report evidence of this pattern. Subsequent discourse focused upon whether redshift surveys of quasars (QSOs) have produced evidence of quantization in excess of what is expected due to selection effect or galactic clustering. The idea has been on the fringes of astronomy since the mid-1990s and is now discounted by the vast majority of astronomers, but a few scientists who espouse nonstandard cosmological models, including those who reject the Big Bang theory, have referred to evidence of redshift quantization as reason to reject conventional accounts of the origin and evolution of the universe.
Original investigation by William G. Tifft
György Paál (for QSOs, 1971) and William G. Tifft (for galaxies) were the first to investigate possible redshift quantization, referring to it as "redshift-magnitude banding correlation". In 1973, Tifft wrote:
"Using more than 200 redshifts in Coma, Perseus, and A2199, the presence of a distinct band-related periodicity |
https://en.wikipedia.org/wiki/Nixtamalization | Nixtamalization () is a process for the preparation of maize, or other grain, in which the grain is soaked and cooked in an alkaline solution, usually limewater (but sometimes aqueous alkali metal carbonates), washed, and then hulled. The term can also refer to the removal via an alkali process of the pericarp from other grains such as sorghum.
Nixtamalized corn has several benefits over unprocessed grain: It is more easily ground, its nutritional value is increased, flavor and aroma are improved, and mycotoxins are reduced by up to 97%–100% (for aflatoxins).
Lime and ash are highly alkaline: the alkalinity helps the dissolution of hemicellulose, the major glue-like component of the maize cell walls, and loosens the hulls from the kernels and softens the maize. Corn's hemicellulose-bound niacin is converted to free niacin (a form of vitamin B3), making it available for absorption into the body, thus helping to prevent pellagra.
Some of the corn oil is broken down into emulsifying agents (monoglycerides and diglycerides), while bonding of the maize proteins to each other is also facilitated. The divalent calcium in lime acts as a cross-linking agent for protein and polysaccharide acidic side chains.
While cornmeal made from untreated ground maize is unable by itself to form a dough on addition of water, the chemical changes in masa allow dough formation. These benefits make nixtamalization a crucial preliminary step for further processing of maize into food products, and the process is employed using both traditional and industrial methods, in the production of tortillas and tortilla chips (but not corn chips), tamales, hominy, and many other items.
Etymology
In the Aztec language Nahuatl, the word for the product of this procedure is nixtamalli or nextamalli, which in turn has yielded Mexican Spanish nixtamal. The Nahuatl word is a compound of nextli "lime ashes" and tamalli "unformed/cooked corn dough, tamal". The term nixtamalization can also be used to describe the removal of the pericar |
https://en.wikipedia.org/wiki/NatureServe%20conservation%20status | The NatureServe conservation status system, maintained and presented by NatureServe in cooperation with the Natural Heritage Network, was developed in the United States in the 1980s by The Nature Conservancy (TNC) as a means for ranking or categorizing the relative imperilment of species of plants, animals, or other organisms, as well as natural ecological communities, on the global, national or subnational levels. These designations are also referred to as NatureServe ranks, NatureServe statuses, or Natural Heritage ranks. While the Nature Conservancy is no longer substantially involved in the maintenance of these ranks, the name TNC ranks is still sometimes encountered for them.
NatureServe ranks indicate the imperilment of species or ecological communities as natural occurrences, ignoring individuals or populations in captivity or cultivation, and also ignoring non-native occurrences established through human intervention beyond the species' natural range (as for example with many invasive species).
NatureServe ranks have been designated primarily for species and ecological communities in the United States and Canada, but the methodology is global, and has been used in some areas of Latin America and the Caribbean. The NatureServe Explorer website presents a centralized set of global, national, and subnational NatureServe ranks developed by NatureServe or provided by cooperating U.S. Natural Heritage Programs and Canadian and other international Conservation Data Centers.
Introduction
Most NatureServe ranks show the conservation status of a plant or animal species or a natural ecological community using a one-to-five numerical scale (from most vulnerable to most secure), applied either globally (world-wide or range-wide) or to the entity's status within a particular nation or a specified subnational unit within a nation. Letter-based notations are used for various special cases to which the numerical scale does not apply, as explained below. Ranks at variou |
https://en.wikipedia.org/wiki/Level%20%28video%20games%29 | In video games, a level (also referred to as a map, stage, or round in some older games) is any space available to the player during the course of completion of an objective. Video game levels generally have progressively increasing difficulty to appeal to players with different skill levels. Each level may present new concepts and challenges to keep a player's interest high.
In games with linear progression, levels are areas of a larger world, such as Green Hill Zone. Games may also feature interconnected levels, representing locations. Although the challenge in a game is often to defeat some sort of character, levels are sometimes designed with a movement challenge, such as a jumping puzzle, a form of obstacle course. Players must judge the distance between platforms or ledges and safely jump between them to reach the next area. These puzzles can slow the momentum down for players of fast action games; the first Half-Life's penultimate chapter, "Interloper", featured multiple moving platforms high in the air with enemies firing at the player from all sides.
Level design
Level design or environment design, is a discipline of game development involving the making of video game levels—locales, stages or missions. This is commonly done using a level editor, a game development software designed for building levels; however, some games feature built-in level editing tools.
History
In the early days of video games (1970s–2000s), a single programmer would develop the maps and layouts for a game, and a discipline or profession dedicated solely to level design did not exist. Early games often featured a level system of ascending difficulty as opposed to progression of storyline. An example of the former approach is the arcade shoot 'em up game Space Invaders (1978), where each level looks the same, repeating endlessly until the player loses all their lives. An example of the latter approach is the arcade platform game Donkey Kong (1981), which uses multiple distinct lev |
https://en.wikipedia.org/wiki/Marine%20protected%20area | Marine protected areas (MPA) are protected areas of seas, oceans, estuaries or in the US, the Great Lakes. These marine areas can come in many forms ranging from wildlife refuges to research facilities. MPAs restrict human activity for a conservation purpose, typically to protect natural or cultural resources. Such marine resources are protected by local, state, territorial, native, regional, national, or international authorities and differ substantially among and between nations. This variation includes different limitations on development, fishing practices, fishing seasons and catch limits, moorings and bans on removing or disrupting marine life. In some situations (such as with the Phoenix Islands Protected Area), MPAs also provide revenue for countries, potentially equal to the income that they would have if they were to grant companies permissions to fish. The value of MPA to mobile species is unknown.
There are a number of global examples of large marine conservation areas. The Papahānaumokuākea Marine National Monument is situated in the central Pacific Ocean, around Hawaii, occupying an area of 1.5 million square kilometers. The area is rich in wildlife, including the green turtle and the Hawaiian monk seal, alongside 7,000 other species, and 14 million seabirds. In 2017 the Cook Islands passed the Marae Moana Act designating the whole of the country's marine exclusive economic zone, which has an area of 1.9 million square kilometers, as a zone with the purpose of protecting and conserving the "ecological, biodiversity and heritage values of the Cook Islands marine environment". Other large marine conservation areas include those around Antarctica, New Caledonia, Greenland, Alaska, Ascension Island, and Brazil.
As areas of protected marine biodiversity expand, there has been an increase in ocean science funding, essential for preserving marine resources. In 2020, only around 7.5 to 8% of the global ocean area falls under a conservation designation. This |
https://en.wikipedia.org/wiki/Implant%20%28medicine%29 | An implant is a medical device manufactured to replace a missing biological structure, support a damaged biological structure, or enhance an existing biological structure. For example, an implant may be a rod, used to strengthen weak bones. Medical implants are human-made devices, in contrast to a transplant, which is a transplanted biomedical tissue. The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone, or apatite depending on what is the most functional. In some cases implants contain electronics, e.g. artificial pacemaker and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills or drug-eluting stents.
Applications
Implants can roughly be categorized into groups by application:
Sensory and neurological
Sensory and neurological implants are used for disorders affecting the major senses and the brain, as well as other neurological disorders. They are predominately used in the treatment of conditions such as cataract, glaucoma, keratoconus, and other visual impairments; otosclerosis and other hearing loss issues, as well as middle ear diseases such as otitis media; and neurological diseases such as epilepsy, Parkinson's disease, and treatment-resistant depression. Examples include the intraocular lens, intrastromal corneal ring segment, cochlear implant, tympanostomy tube, and neurostimulator.
Cardiovascular
Cardiovascular medical devices are implanted in cases where the heart, its valves, and the rest of the circulatory system is in disorder. They are used to treat conditions such as heart failure, cardiac arrhythmia, ventricular tachycardia, valvular heart disease, angina pectoris, and atherosclerosis. Examples include the artificial heart, artificial heart valve, implantable cardioverter-defibrillator, artificial cardiac pacemaker, and coronary stent.
Orthopedic
Orthopaedic implants help alleviate issues with the bones and joints |
https://en.wikipedia.org/wiki/Object%20composition | In computer science, object composition and object aggregation are closely related ways to combine objects or data types into more complex ones. In conversation the distinction between composition and aggregation is often ignored. Common kinds of compositions are objects used in object-oriented programming, tagged unions, sets, sequences, and various graph structures. Object compositions relate to, but are not the same as, data structures.
Object composition refers to the logical or conceptual structure of the information, not the implementation or physical data structure used to represent it. For example, a sequence differs from a set because (among other things) the order of the composed items matters for the former but not the latter. Data structures such as arrays, linked lists, hash tables, and many others can be used to implement either of them. Perhaps confusingly, some of the same terms are used for both data structures and composites. For example, "binary tree" can refer to either: as a data structure it is a means of accessing a linear sequence of items, and the actual positions of items in the tree are irrelevant (the tree can be internally rearranged however one likes, without changing its meaning). However, as an object composition, the positions are relevant, and changing them would change the meaning (as for example in cladograms).
Programming technique
Object-oriented programming is based on objects to encapsulate data and behavior. It uses two main techniques for assembling and composing functionality into more complex ones, sub-typing and object composition. Object composition is about combining objects within compound objects, and at the same time, ensuring the encapsulation of each object by using their well-defined interface without visibility of their internals. In this regard, object composition differs from data structures, which do not enforce encapsulation.
Object composition may also be about a group of multiple related objects, s |
https://en.wikipedia.org/wiki/Function%20composition%20%28computer%20science%29 | In computer science, function composition is an act or mechanism to combine simple functions to build more complicated ones. Like the usual composition of functions in mathematics, the result of each function is passed as the argument of the next, and the result of the last one is the result of the whole.
Programmers frequently apply functions to results of other functions, and almost all programming languages allow it. In some cases, the composition of functions is interesting as a function in its own right, to be used later. Such a function can always be defined but languages with first-class functions make it easier.
The ability to easily compose functions encourages factoring (breaking apart) functions for maintainability and code reuse. More generally, big systems might be built by composing whole programs.
Narrowly speaking, function composition applies to functions that operate on a finite amount of data, each step sequentially processing it before handing it to the next. Functions that operate on potentially infinite data (a stream or other codata) are known as filters, and are instead connected in a pipeline, which is analogous to function composition and can execute concurrently.
Composing function calls
For example, suppose we have two functions f and g, as in y = g(x) and z = f(y). Composing them means we first compute y = g(x), and then use y to compute z = f(y). Here is the example in the C language:
float x, y, z;
// ...
y = g(x);
z = f(y);
The steps can be combined if we don't give a name to the intermediate result:
z = f(g(x));
Despite differences in length, these two implementations compute the same result. The second implementation requires only one line of code and is colloquially referred to as a "highly composed" form. Readability and hence maintainability is one advantage of highly composed forms, since they require fewer lines of code, minimizing a program's "surface area". DeMarco and Lister empirically verify an inverse relationship between surface area and maintainab |
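As a supplement to the C example above, a minimal sketch (in Python, for brevity) of composition treated as a function in its own right, which the article notes is easy in languages with first-class functions; the helper name compose is just an illustrative choice:
def compose(f, g):
    # Return a new function that applies g first, then f, mirroring z = f(g(x)).
    return lambda x: f(g(x))

# example usage
double_then_negate = compose(lambda y: -y, lambda x: 2 * x)
print(double_then_negate(3))   # prints -6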
https://en.wikipedia.org/wiki/Marc%20Kirschner | Marc Wallace Kirschner (born February 28, 1945) is an American cell biologist and biochemist and the founding chair of the Department of Systems Biology at Harvard Medical School. He is known for major discoveries in cell and developmental biology related to the dynamics and function of the cytoskeleton, the regulation of the cell cycle, and the process of signaling in embryos, as well as the evolution of the vertebrate body plan. He is a leader in applying mathematical approaches to biology. He is the John Franklin Enders University Professor at Harvard University. In 2021 he was elected to the American Philosophical Society.
Education and early life
Kirschner was born in Chicago, Illinois, on February 28, 1945. He graduated from Northwestern University with a B.A. in chemistry in 1966. He received a Graduate Research Fellowship from the National Science Foundation in 1966 and earned a doctorate in biochemistry from the University of California, Berkeley in 1971.
Career and research
He held postdoctoral positions at UC Berkeley and at the University of Oxford in England. He became assistant professor at Princeton University in 1972. In 1978 he was made professor at the University of California, San Francisco. In 1993, he moved to Harvard Medical School, where he served as the chair of the new Department of Cell Biology for a decade. He became the founding chair of the HMS Department of Systems Biology in 2003. He was named the John Franklin Enders University Professor in 2009. In 2018, he was succeeded as Chair of the Department of Systems Biology by Galit Lahav.
Kirschner studies how cells divide, how they generate their shape, how they control their size, and how embryos develop. In his eclectic lab, developmental work on the frog coexists with biochemical work on mechanism of ubiquitination, cytoskeleton assembly or signal transduction.
At Princeton, his early work on microtubules established their unusual molecular assembly from tubulin proteins and ident |
https://en.wikipedia.org/wiki/Global%20Telecommunications%20System | The Global Telecommunication System (GTS) is a secured communication network enabling real-time exchange of meteorological data from weather stations, satellites and numerical weather prediction centres, providing critical meteorological forecasting, warnings, and alerts. It was established by the World Meteorological Organization in 1951 under the World Weather Watch programme for the free and open exchange of meteorological information.
The GTS consists of an integrated network of point-to-point circuits, and multi-point circuits which interconnect meteorological telecommunication centres. The circuits of the GTS are composed of a combination of terrestrial and satellite telecommunication links. They comprise point-to-point circuits, point-to-multi-point circuits for data distribution, multi-point-to-point circuits for data collection, as well as two-way multi-point circuits. Meteorological Telecommunication Centres are responsible for receiving data and relaying it selectively on GTS circuits. The GTS is organized on a three level basis:
The Main Telecommunication Network (MTN)
The Regional Meteorological Telecommunication Networks (RMTNs)
The National Meteorological Telecommunication Networks (NMTNs)
Satellite-based data collection and/or data distribution systems are integrated in the GTS as an essential element of the global, regional and national levels of the GTS. Data collection systems operated via geostationary or near-polar orbiting meteorological/environmental satellites, including the Argos System, are widely used for the collection of observational data from data collection platforms. Marine data are also collected through the International Maritime Mobile Service and Inmarsat satellites. |
https://en.wikipedia.org/wiki/Lorentz%20scalar | In a relativistic theory of physics, a Lorentz scalar is an expression, formed from items of the theory, which evaluates to a scalar, invariant under any Lorentz transformation. A Lorentz scalar may be generated from e.g., the scalar product of vectors, or from contracting tensors of the theory. While the components of vectors and tensors are in general altered under Lorentz transformations, Lorentz scalars remain unchanged.
A Lorentz scalar is not always immediately seen to be an invariant scalar in the mathematical sense, but the resulting scalar value is invariant under any basis transformation applied to the vector space, on which the considered theory is based. A simple Lorentz scalar in Minkowski spacetime is the spacetime distance ("length" of their difference) of two fixed events in spacetime. While the "position"-4-vectors of the events change between different inertial frames, their spacetime distance remains invariant under the corresponding Lorentz transformation. Other examples of Lorentz scalars are the "length" of 4-velocities (see below), or the Ricci curvature in a point in spacetime from General relativity, which is a contraction of the Riemann curvature tensor there.
Simple scalars in special relativity
The length of a position vector
In special relativity the location of a particle in 4-dimensional spacetime is given by $x^\mu = (ct, \mathbf{x})$,
where $\mathbf{x}$ is the position in 3-dimensional space of the particle, $\mathbf{v}$ is the velocity in 3-dimensional space and $c$ is the speed of light.
The "length" of the vector is a Lorentz scalar and is given by $x^\mu x_\mu = (ct)^2 - \mathbf{x} \cdot \mathbf{x} = (c\tau)^2$,
where $\tau$ is the proper time as measured by a clock in the rest frame of the particle and the Minkowski metric is given by $\eta_{\mu\nu} = \operatorname{diag}(1, -1, -1, -1)$.
This is a time-like metric.
Often the alternate signature of the Minkowski metric is used in which the signs of the ones are reversed: $\eta_{\mu\nu} = \operatorname{diag}(-1, 1, 1, 1)$.
This is a space-like metric.
In the Minkowski metric the space-like interval is defined as $s^2 = \mathbf{x} \cdot \mathbf{x} - (ct)^2$.
We use the space-like Minkowski metric in the rest of this article.
The length of a |
https://en.wikipedia.org/wiki/Pseudo-LRU | Pseudo-LRU or PLRU is a family of cache algorithms which improve on the performance of the Least Recently Used (LRU) algorithm by replacing values using approximate measures of age rather than maintaining the exact age of every value in the cache.
PLRU usually refers to two cache replacement algorithms: tree-PLRU and bit-PLRU.
Tree-PLRU
Tree-PLRU is an efficient algorithm to select an item that most likely has not been accessed very recently, given a set of items and a sequence of access events to the items.
This technique is used in the CPU cache of the Intel 486 and in many processors in the PowerPC family, such as Freescale's PowerPC G4 used by Apple Computer.
The algorithm works as follows: consider a binary search tree for the items in question. Each node of the tree has a one-bit flag denoting "go left to insert a pseudo-LRU element" or "go right to insert a pseudo-LRU element". To find a pseudo-LRU element, traverse the tree according to the values of the flags. To update the tree with an access to an item N, traverse the tree to find N and, during the traversal, set the node flags to denote the direction that is opposite to the direction taken.
This algorithm can be sub-optimal since it is an approximation. For example, with cache lines A, C, B, D (ordered left to right in the tree), if the access pattern was C, B, D, A, then on an eviction B would be chosen instead of C. This is because both A and C are in the same half and accessing A directs the algorithm to the other half, which does not contain cache line C.
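A minimal sketch of the traversal and update rules for a 4-way set, in Python; the flag convention used here (0 meaning "the pseudo-LRU victim is to the left") is one common choice, assumed for illustration rather than taken from any particular processor:
class TreePLRU4:
    # Three flag bits arranged as a binary tree over four ways:
    # bits[0] is the root, bits[1] covers ways {0, 1}, bits[2] covers ways {2, 3}.
    def __init__(self):
        self.bits = [0, 0, 0]

    def victim(self):
        # Follow the flags from the root down to a leaf (a way index 0..3).
        half = self.bits[0]            # 0 -> ways 0/1, 1 -> ways 2/3
        leaf = self.bits[1 + half]     # pick a way within that half
        return 2 * half + leaf

    def access(self, way):
        # Point every flag on the path away from the way that was just used.
        half, leaf = divmod(way, 2)
        self.bits[0] = 1 - half
        self.bits[1 + half] = 1 - leaf

For instance, after accesses to ways 0 and 2, victim() returns way 1, one of the two ways that have not been touched.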
Bit-PLRU
Bit-PLRU stores one status bit for each cache line. These bits are called MRU-bits. Every access to a line sets its MRU-bit to 1, indicating that the
line was recently used. Whenever the last remaining 0 bit of a set's status bits is
set to 1, all other bits are reset to 0. At cache misses, the leftmost line whose MRU-bit is 0 is replaced.
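A corresponding sketch of the bit-PLRU bookkeeping described above, again in Python, with one MRU bit per line of a set:
def access(mru_bits, way):
    # Mark the line as recently used; if that was the last remaining 0 bit,
    # reset all other bits so only the most recent access stays marked.
    mru_bits[way] = 1
    if all(mru_bits):
        for i in range(len(mru_bits)):
            mru_bits[i] = 0
        mru_bits[way] = 1

def victim(mru_bits):
    # On a miss, replace the leftmost line whose MRU bit is 0.
    return mru_bits.index(0)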
See also
Cache algorithms |
https://en.wikipedia.org/wiki/Alvircept%20sudotox | Alvircept sudotox is a form of recombinant CD4 fused to PE40, a fragment derived from Pseudomonas aeruginosa exotoxin A; it has a size of 59,187 daltons and is an anti-viral agent. |
https://en.wikipedia.org/wiki/Magic%20angle%20spinning | In solid-state NMR spectroscopy, magic-angle spinning (MAS) is a technique routinely used to produce better resolution NMR spectra. MAS NMR consists in spinning the sample (usually at a frequency of 1 to 130 kHz) at the magic angle θm (ca. 54.74°, where cos²θm = 1/3) with respect to the direction of the magnetic field.
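The quoted value follows directly from the condition in the parenthesis:
$\theta_\mathrm{m} = \arccos\!\left(\frac{1}{\sqrt{3}}\right) \approx 54.74^\circ,$
which is also the angle at which the orientation factor 3cos²θ − 1, common to the interactions discussed below, vanishes.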
Three main interactions in solid-state NMR (dipolar, chemical shift anisotropy, quadrupolar) often lead to very broad and featureless NMR lines. However, these three interactions in solids are orientation-dependent and can be averaged to some extent by MAS:
The nuclear dipolar interaction has a (3cos²θ − 1) dependence, where θ is the angle between the internuclear axis and the main magnetic field. As a result, the dipolar interaction vanishes at the magic angle θm and its contribution to the line broadening is removed. Even though all internuclear vectors cannot be set to the magic angle, rotating the sample around this axis produces the same effect, provided the spinning frequency is comparable to that of the interaction. In addition, a set of spinning sidebands appears on the spectra, which are sharp lines separated from the isotropic resonance frequency by a multiple of the spinning rate.
The chemical shift anisotropy (CSA) represents the orientation-dependence of the chemical shift. Powder patterns generated by the CSA interaction can be averaged by MAS, resulting in a single resonance centred at the isotropic chemical shift (the centre of mass of the powder pattern).
The quadrupolar interaction is only partially averaged by MAS leaving a residual secondary quadrupolar interaction.
In solution-state NMR, most of these interactions are averaged out because of the rapid time-averaged molecular motion that occurs due to the thermal energy (molecular tumbling).
The spinning of the sample is achieved via an impulse air turbine mechanism, where the sample tube is lifted with a frictionless compressed gas bearing and spun with a gas drive. Sample |
https://en.wikipedia.org/wiki/Language%20model | A language model is a probabilistic model of a natural language that can generate probabilities of a series of words, based on text corpora in one or multiple languages it was trained on. In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
Language models are useful for a variety of tasks, including speech recognition (helping prevent predictions of low-probability (e.g. nonsense) sequences), machine translation, natural language generation (generating more human-like text), optical character recognition, handwriting recognition, grammar induction, and information retrieval.
Large language models, currently their most advanced form, are a combination of larger datasets (frequently using scraped words from the public internet), feedforward neural networks, and transformers. They have superseded recurrent neural network-based models, which had previously superseded the pure statistical models, such as word n-gram language model.
Pure statistical models
Models based on word n-grams
Exponential
Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is $P(w \mid w_1, \ldots, w_{m-1}) = \frac{1}{Z(w_1, \ldots, w_{m-1})} \exp\big(a^{\mathsf T} f(w_1, \ldots, w_{m-1}, w)\big),$
where $Z$ is the partition function, $a$ is the parameter vector, and $f$ is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on $a$ or some form of regularization.
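A minimal numerical sketch of that equation in Python; the parameter values and feature vectors are made-up placeholders, intended only to show how the partition function normalizes the exponentiated scores:
import numpy as np

def maxent_probs(a, feature_vectors):
    # feature_vectors maps each candidate next word to its feature vector f(history, word)
    scores = {w: np.exp(a @ f) for w, f in feature_vectors.items()}
    z = sum(scores.values())              # the partition function
    return {w: s / z for w, s in scores.items()}

# toy usage with two candidate words and two features
a = np.array([0.5, -1.0])
feats = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0])}
print(maxent_probs(a, feats))             # probabilities that sum to 1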
The log-bilinear model is another example of an exponential language model.
Skip-gram model
Neural models
Recurrent neural network
Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help t |
https://en.wikipedia.org/wiki/WHUT-TV | WHUT-TV (channel 32) is the secondary PBS member television station in Washington, D.C. The station is owned by Howard University, a historically black college, and is sister to commercial urban contemporary radio station WHUR-FM (96.3). WHUT-TV's studios are located on the Howard University campus, and its transmitter is located in the Tenleytown neighborhood in the northwest quadrant of Washington.
WHUT airs a variety of standard PBS programming, as well as programs produced by Howard University, and international programs focusing on regions such as the Caribbean and Africa.
History
On June 25, 1974, Howard University was granted a construction permit to build a new television station on channel 32 in Washington, D.C. It was more than six years before the station signed on November 17, 1980. WHMM-TV (whose call letters stood for Howard University Mass Media) turned Howard, owner of the only radio station owned by an HBCU at the time, into the owner of the first Black-owned public television station. At the outset, the station suffered from some problems with its antenna and the need to train staff on the job. It also faced issues carving out an identity for itself and its mission, with standard PBS fare airing during much of the day; in 1983, its budget was one-third that of WETA-TV. However, within its first decade, it produced 1,000 Howard graduates trained in television production. The long-running Evening Exchange public affairs program, which debuted with the station, became a station staple; it was hosted by Kojo Nnamdi between 1985 and 2011.
Budget cuts at Howard in the late 1980s and 1990s prompted staff cuts in operations. Even as the station tried to significantly step up fundraising, its treatment as another academic department, requiring a different style of management, often hurt WHMM-TV. Staff levels were cut from 90 in 1988 to 65 five years later, when a blue-ribbon panel was convened by PBS to discuss the station's problems; that year, it had |
https://en.wikipedia.org/wiki/Libquantum | Libquantum is a C library for simulating quantum mechanics, originally focused on virtual quantum computers. It is licensed under the GNU GPL. It was part of the SPEC CPU2006 benchmark suite. The latest version is stated to be v1.1.1 (Jan 2013) on the mailing list, but on the website there is only v0.9.1 from 2007.
An author of libquantum, Hendrik Weimer, has published, with colleagues, a paper in Nature about using Rydberg atoms for universal quantum simulation, building on his own work. |
https://en.wikipedia.org/wiki/Annona%20squamosa | Annona squamosa is a small, well-branched tree or shrub from the family Annonaceae that bears edible fruits called sugar-apples. It tolerates a tropical lowland climate better than its relatives Annona reticulata and Annona cherimola (whose fruits often share the same name), helping make it the most widely cultivated of these species.
Annona squamosa is a small, semi-(or late) deciduous,
much-branched shrub or small tree tall
similar to soursop (Annona muricata).
Description
The fruit of A. squamosa (sugar-apple) has sweet whitish pulp, and is popular in tropical markets.
Stems and leaves
Branches with light brown bark and visible leaf scars; inner bark light yellow and slightly bitter; twigs become brown with light brown dots (lenticels – small, oval, rounded spots upon the stem or branch of a plant, from which the underlying tissues may protrude or roots may issue).
Thin, simple, alternate leaves occur singly, long and wide; rounded at the base and pointed at the tip (oblong-lanceolate). They are pale green on both surfaces and mostly hairless with slight hairs on the underside when young. The sides sometimes are slightly unequal and the leaf edges are without teeth, inconspicuously hairy when young.
The leaf stalks are long, green, and sparsely pubescent.
Flowers
Solitary or in short lateral clusters of 2–4 about long, greenish-yellow flowers on a hairy, slender long stalk. Three green outer petals, purplish at the base, oblong, long, and wide, three inner petals reduced to minute scales or absent. Very numerous stamens; crowded, white, less than long; ovary light green. Styles white, crowded on the raised axis. Each pistil forms a separate tubercle (small rounded wartlike protuberance), mostly long and wide which matures into the aggregate fruit.
Flowering occurs in spring-early summer and flowers are pollinated by nitidulid beetles. Its pollen is shed as permanent tetrads.
Fruits and reproduction
Fruits ripen 3 to 4 months after fl |
https://en.wikipedia.org/wiki/Electromagnetic%20tensor | In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was first used after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski. The tensor allows related physical laws to be written very concisely, and allows for the quantization of the electromagnetic field by Lagrangian formulation described below.
Definition
The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form: $F \equiv \mathrm{d}A$.
Therefore, F is a differential 2-form—that is, an antisymmetric rank-2 tensor field—on Minkowski space. In component form, $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$,
where $\partial$ is the four-gradient and $A$ is the four-potential.
SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space, (+, −, −, −), will be used throughout this article.
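For concreteness, the covariant components that this definition and sign convention give can be collected in a matrix; this is stated here as a consistency check (index and sign placements differ between references), not as the article's own display:
$F_{\mu\nu} = \begin{pmatrix} 0 & E_x/c & E_y/c & E_z/c \\ -E_x/c & 0 & -B_z & B_y \\ -E_y/c & B_z & 0 & -B_x \\ -E_z/c & -B_y & B_x & 0 \end{pmatrix}.$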
Relationship with the classical fields
The Faraday differential 2-form is given by
This is the exterior derivative of its 1-form antiderivative
,
where has ( is a scalar potential for the irrotational/conservative vector field ) and has ( is a vector potential for the solenoidal vector field ).
Note that
where is the exterior derivative, is the Hodge star, (where is the electric current density, and is the electric charge density) is the 4-current density 1-form, is the differential forms version of Maxwell's equations.
The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates:
where c is the speed of light, and
where $\varepsilon_{ijk}$ is the Levi-Civita tensor. This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the |
https://en.wikipedia.org/wiki/Ellsberg%20paradox | In decision theory, the Ellsberg paradox (or Ellsberg's paradox) is a paradox in which people's decisions are inconsistent with subjective expected utility theory. Daniel Ellsberg popularized the paradox in his 1961 paper, "Risk, Ambiguity, and the Savage Axioms". John Maynard Keynes published a version of the paradox in 1921. It is generally taken to be evidence of ambiguity aversion, in which a person tends to prefer choices with quantifiable risks over those with unknown, incalculable risks.
Ellsberg's findings indicate that choices with an underlying level of risk are favored in instances where the likelihood of risk is clear, rather than instances in which the likelihood of risk is unknown. A decision-maker will overwhelmingly favor a choice with a transparent likelihood of risk, even in instances where the unknown alternative will likely produce greater utility. When offered choices with varying risk, people prefer choices with calculable risk, even when they have less utility.
Experimental research
Ellsberg's experimental research involved two separate thought experiments: the 2-urn 2-color scenario and the 1-urn 3-color scenario.
Two-urns paradox
There are two urns each containing 100 balls. It is known that urn A contains 50 red and 50 black, but urn B contains an unknown mix of red and black balls.
The following bets are offered to a participant:
Bet 1A: get $1 if red is drawn from urn A, $0 otherwise
Bet 2A: get $1 if black is drawn from urn A, $0 otherwise
Bet 1B: get $1 if red is drawn from urn B, $0 otherwise
Bet 2B: get $1 if black is drawn from urn B, $0 otherwise
Typically, participants were seen to be indifferent between bet 1A and bet 2A (consistent with expected utility theory) but were seen to strictly prefer Bet 1A to Bet 1B and Bet 2A to 2B. This result is generally interpreted to be a consequence of ambiguity aversion (also known as uncertainty aversion); people intrinsically dislike situations where they cannot attach prob |
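The inconsistency can be made explicit with a short calculation. Suppose the decision-maker assigns a single subjective probability $p$ to drawing red from urn B. Preferring Bet 1A to Bet 1B then requires $\tfrac{1}{2} > p$, while preferring Bet 2A to Bet 2B requires $\tfrac{1}{2} > 1 - p$; adding the two inequalities gives $1 > p + (1 - p) = 1$, a contradiction, so no single probability assignment reconciles the typical pattern with expected utility theory.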
https://en.wikipedia.org/wiki/Shadow%20price | A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in order to account for the presence of distortionary market instruments (e.g. quotas, tariffs, taxes or subsidies). Shadow prices are the real economic prices given to goods and services after they have been appropriately adjusted by removing distortionary market instruments and incorporating the societal impact of the respective good or service. A shadow price is often calculated based on a group of assumptions and estimates because it lacks reliable data, so it is subjective and somewhat inaccurate.
The need for shadow prices arises as a result of “externalities” and the presence of distortionary market instruments. An externality is defined as a cost or benefit incurred by a third party as a result of the production or consumption of a good or service, where the external effect is not accounted for in the final cost-benefit analysis of that production. These inaccuracies and skewed results produce an imperfect market mechanism which inefficiently allocates resources.
Market distortions happen when the market does not behave as it would under perfect competition, due to interventions by governments, companies, and other economic agents. Specifically, the presence of a monopoly or monopsony (in which firms do not behave as they would under perfect competition), government intervention through taxes and subsidies, public goods, information asymmetry, and restrictions on labour markets all have distortionary effects on the market.
Shadow prices are often utilised in cost-benefit analyses by economic and financial analysts when evaluating the merits of public policy & government projects, when externalities or distortionary market instruments are present. The utilisation of shadow prices in these types of public policy decisions is extremely impo |
https://en.wikipedia.org/wiki/Vertex%20Pharmaceuticals | Vertex Pharmaceuticals is an American biopharmaceutical company based in Boston, Massachusetts. It was one of the first biotech firms to use an explicit strategy of rational drug design rather than combinatorial chemistry. It maintains headquarters in South Boston, Massachusetts, and three research facilities, in San Diego, California, and Milton Park, Oxfordshire, England.
History
Vertex was founded in 1989 by Joshua Boger and Kevin J. Kinsella to "transform the way serious diseases are treated."
The company's beginnings were profiled by Barry Werth in the 1994 book, The Billion-Dollar Molecule. His 2014 book, The Antidote: Inside the World of New Pharma, chronicled the company's subsequent development over the next two decades.
By 2004, its product pipeline focused on viral infections, inflammatory and autoimmune disorders, and cancer.
In 2009, the company had about 1,800 employees, including 1,200 in the Boston area. By 2019 there were about 2,500 employees.
Since late 2011, Vertex has ranked among the top 15 best-performing companies on the Standard & Poor's 500. Vertex shares increased 250 percent in the same period. In January 2014, Vertex completed its move from Cambridge, Massachusetts, to Boston, Massachusetts, and took residence in a new, $800 million complex. Located on the South Boston waterfront, it marked the first time in the company's history that all of the roughly 1,200 Vertex employees in the Greater Boston area worked together.
On 23 January 2019, Ian Smith, the COO and interim CFO of Vertex, was terminated from his position for undisclosed personal behavior that violated established company code of conduct rules. In June of the same year, Vertex announced it would acquire Exonics Therapeutics for up to $1 billion and collaborate with CRISPR Therapeutics, boosting its development of treatments for Duchenne muscular dystrophy and myotonic dystrophy type 1.
In September 2019 the company announced it would acquire Semma Therapeutics for $950 |
https://en.wikipedia.org/wiki/Vanessa%20cardui | Vanessa cardui is the most widespread of all butterfly species. It is commonly called the painted lady, or formerly in North America the cosmopolitan.
Description
Distribution
V. cardui is one of the most widespread of all butterflies, found on every continent except Antarctica and South America. In Australia, V. cardui has a limited range around Bunbury, Fremantle, and Rottnest Island. However, its close relative, the Australian painted lady (V. kershawi, sometimes considered a subspecies) ranges over half the continent. Other closely related species are the American painted lady (V. virginiensis) and the West Coast lady (V. annabella).
Migration
V. cardui occurs in any temperate zone, including mountains in the tropics. The species is resident only in warmer areas, but migrates in spring, and sometimes again in autumn. It migrates from North Africa and the Mediterranean to Britain and Europe in May and June, occasionally reaching Iceland, and from the Red Sea basin, via Israel and Cyprus, to Turkey in March and April. The occasional autumn migration made by V. cardui is likely for the inspection of resource changes; it consists of a round trip from Europe to Africa.
For decades, naturalists have debated whether the offspring of these immigrants ever make a southwards return migration. Research suggests that British painted ladies do undertake an autumn migration, making round trip from tropical Africa to the Arctic Circle in a series of steps by up to six successive generations. The Radar Entomology Unit at Rothamsted Research provided evidence that autumn migrations take place at high altitude, which explains why these migrations are seldom witnessed. In recent years, thanks to the activity of The Worldwide Painted Lady Migration citizen science project, led by the Barcelona-based Institute of Evolutionary Biology (Catalan: Institut de Biologia Evolutiva), the huge range of migration has begun to be revealed. For example, some butterflies migrated from Ice |
https://en.wikipedia.org/wiki/Eisai%20%28company%29 | Eisai Co., Ltd. is a Japanese pharmaceutical company headquartered in Tokyo, Japan. It has some 10,000 employees, among them about 1,500 in research. Eisai is listed on the Tokyo Stock Exchange and is a member of the Topix 100 and Nikkei 225 stock indices.
History
Nihon Eisai Co. Ltd. was established in 1941. In 1944, a merger with the Sakuragaoka Research Laboratory resulted in the creation of Eisai Co. Ltd. The American subsidiary of the company, Eisai Inc., was established in 1995.
On November 25, 1996, Eisai received approval from the United States Food and Drug Administration (USFDA) for Aricept (donepezil), a drug discovered in the company's labs and co-marketed with Pfizer.
Three years later in 1999, the company received USFDA approval for Aciphex (rabeprazole), a drug co-marketed with Johnson & Johnson.
In September 2006, the company acquired four oncology products from Ligand Pharmaceuticals.
In April 2007, Eisai acquired Exton, Pennsylvania-based Morphotek, a company developing therapeutic monoclonal antibodies for the treatment of cancer, rheumatoid arthritis, and infectious diseases.
In December 2007, Eisai acquired MGI Pharma, a company specializing in oncology, for US$3.9 billion. This event brought Dacogen (decitabine), Aloxi (palonosetron), Hexalen (altretamine) for ovarian cancer, and the Gliadel Wafer (carmustine) for brain tumors into the Eisai product portfolio.
In 2009, Eisai received the Corporate Award from the National Organization for Rare Disorders (NORD) for the development of Banzel (rufinamide).
In June 2023, the company suffered from a ransomware attack, causing a shutdown of some of its logistical systems.
Locations
Eisai Co., Ltd. is based in Tokyo, Japan, while its American subsidiary, Eisai Inc., is headquartered in Nutley, New Jersey. Eisai Inc. is led by Ivan Cheung as CEO. Eisai maintains medical research headquarters in Nutley as well as at locations in Japan, the United Kingdom, the Research Triangle in North Carolina, and Massachusetts wh |
https://en.wikipedia.org/wiki/Havriliak%E2%80%93Negami%20relaxation | The Havriliak–Negami relaxation is an empirical modification of the Debye relaxation model in electromagnetism. Unlike the Debye model, the Havriliak–Negami relaxation accounts for the asymmetry and broadness of the dielectric dispersion curve. The model was first used to describe the dielectric relaxation of some polymers, by adding two exponential parameters to the Debye equation: ε̂(ω) = ε_∞ + (ε_s − ε_∞) / (1 + (iωτ)^α)^β,
where ε_∞ is the permittivity at the high frequency limit, ε_s is the static, low frequency permittivity, and τ is the characteristic relaxation time of the medium. The exponents α and β describe the asymmetry and broadness of the corresponding spectra.
Depending on application, the Fourier transform of the stretched exponential function can be a viable alternative that has one parameter less.
For β = 1 the Havriliak–Negami equation reduces to the Cole–Cole equation, and for α = 1 to the Cole–Davidson equation.
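As an illustration, a minimal Python sketch of the model in the form given above; the parameter names are chosen here for readability, and setting beta = 1 recovers the Cole–Cole form while alpha = 1 recovers the Cole–Davidson form noted above:

```python
import numpy as np

def havriliak_negami(omega, eps_s, eps_inf, tau, alpha, beta):
    """Complex permittivity of the Havriliak-Negami model at angular frequency omega."""
    return eps_inf + (eps_s - eps_inf) / (1.0 + (1j * omega * tau) ** alpha) ** beta

# Example: a broadened, asymmetric relaxation vs. the Debye limit (alpha = beta = 1).
omega = np.logspace(3, 9, 7)                       # angular frequencies in rad/s
hn = havriliak_negami(omega, eps_s=10.0, eps_inf=2.0, tau=1e-6, alpha=0.8, beta=0.6)
debye = havriliak_negami(omega, eps_s=10.0, eps_inf=2.0, tau=1e-6, alpha=1.0, beta=1.0)
print(hn.real)     # storage part eps'
print(-hn.imag)    # loss part eps'' (positive with the eps' - i*eps'' sign convention)
print(-debye.imag)
```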
Mathematical properties
Real and imaginary parts
The storage part ε′(ω) and the loss part ε″(ω) of the permittivity (here: ε̂(ω) = ε′(ω) − iε″(ω)) can be calculated as
and
with
Loss peak
The maximum of the loss part lies at
Superposition of Lorentzians
The Havriliak–Negami relaxation can be expressed as a superposition of individual Debye relaxations
with the real valued distribution function
where
if the argument of the arctangent is positive, else
Noteworthy, becomes imaginary valued for
and complex valued for
Logarithmic moments
The first logarithmic moment of this distribution, the average logarithmic relaxation time is
where ψ is the digamma function and γ_E the Euler constant.
Inverse Fourier transform
The inverse Fourier transform of the Havriliak-Negami function (the corresponding time-domain relaxation function) can be numerically calculated. It can be shown that the series expansions involved are special cases of the Fox–Wright function. In particular, in the time-domain the corresponding of can be represented as
where is the Dirac delta function and
is a special instance of the Fox–W |
https://en.wikipedia.org/wiki/Olga%20Ladyzhenskaya | Olga Aleksandrovna Ladyzhenskaya (; 7 March 1922 – 12 January 2004) was a Russian mathematician who worked on partial differential equations, fluid dynamics, and the finite difference method for the Navier–Stokes equations. She received the Lomonosov Gold Medal in 2002. She is the author of more than two hundred scientific works, among which are six monographs.
Biography
Ladyzhenskaya was born and grew up in the small town of Kologriv, the daughter of a mathematics teacher who is credited with her early inspiration and love of mathematics. The artist Gennady Ladyzhensky was her grandfather's brother, also born in this town. In 1937 her father, Aleksandr Ivanovich Ladýzhenski, was arrested by the NKVD and executed as an "enemy of the people".
Ladyzhenskaya completed high school in 1939, unlike her older sisters who weren't permitted to do the same. She was not admitted to the Leningrad State University due to her father's status and attended a pedagogical institute. After the German invasion of June 1941, she taught school in Kologriv. She was eventually admitted to Moscow State University in 1943 and graduated in 1947.
She began teaching in the Physics department of the university in 1950 and defended her PhD there, in 1951, under Sergei Sobolev and Vladimir Smirnov. She received a second doctorate from the Moscow State University in 1953. In 1954, she joined the mathematical physics laboratory of the Steklov Institute and became its head in 1961.
Ladyzhenskaya had a love of arts and storytelling, counting writer Aleksandr Solzhenitsyn and poet Anna Akhmatova among her friends. Like Solzhenitsyn she was religious. She was once a member of the city council, and engaged in philanthropic activities, repeatedly risking her personal safety and career to aid people opposed to the Soviet regime. Ladyzhenskaya suffered from various eye problems in her later years and relied on special pencils to do her work.
Two days before a trip to Florida, she died in her sleep |
https://en.wikipedia.org/wiki/Heats%20of%20fusion%20of%20the%20elements%20%28data%20page%29 |
Heat of fusion
Notes
Values refer to the enthalpy change between the liquid phase and the most stable solid phase at the melting point (normal, 101.325 kPa). |
https://en.wikipedia.org/wiki/Heats%20of%20vaporization%20of%20the%20elements%20%28data%20page%29 |
Heat of vaporization
Notes
Values refer to the enthalpy change in the conversion of liquid to gas at the boiling point (normal, 101.325 kPa). |
https://en.wikipedia.org/wiki/Holarctic%20realm | The Holarctic realm is a biogeographic realm that comprises the majority of habitats found throughout the continents in the Northern Hemisphere. It corresponds to the floristic Boreal Kingdom. It includes both the Nearctic zoogeographical region (which covers most of North America), and Alfred Wallace's Palearctic zoogeographical region (which covers North Africa, and all of Eurasia except for Southeast Asia, the Indian subcontinent, the southern Arabian Peninsula).
These regions are further subdivided into a variety of ecoregions. Many ecosystems and the animal and plant communities that depend on them extend across a number of continents and cover large portions of the Holarctic realm. This continuity is the result of those regions’ shared glacial history.
Major ecosystems
Within the Holarctic realm, there are a variety of ecosystems. The type of ecosystem found in a given area depends on its latitude and the local geography. In the far north, a band of Arctic tundra encircles the shore of the Arctic Ocean. The ground beneath this land is permafrost (frozen year-round). In these difficult growing conditions, few plants can survive. South of the tundra, the boreal forest stretches across North America and Eurasia. This land is characterized by coniferous trees. Further south, the ecosystems become more diverse. Some areas are temperate grassland, while others are temperate forests dominated by deciduous trees. Many of the southernmost parts of the Holarctic are deserts, which are dominated by plants and animals adapted to the dry conditions.
Animal species with a Holarctic distribution
A variety of animal species are distributed across continents, throughout much of the Holarctic realm. These include the brown bear, grey wolf, red fox, wolverine, moose, caribou, golden eagle and common raven.
The brown bear (Ursus arctos) is found in mountainous and semi-open areas distributed throughout the Holarctic. It once occupied much larger areas, but has been driv |
https://en.wikipedia.org/wiki/Dental%20abrasion | Abrasion is the non-carious, mechanical wear of tooth from interaction with objects other than tooth-tooth contact. It most commonly affects the premolars and canines, usually along the cervical margins. Based on clinical surveys, studies have shown that abrasion is the most common but not the sole aetiological factor for development of non-carious cervical lesions (NCCL) and is most frequently caused by incorrect toothbrushing technique.
Abrasion frequently presents at the cemento-enamel junction and can be caused by many contributing factors, all with the ability to affect the tooth surface in varying degrees.
The appearance may vary depending on the cause of abrasion; however, it most commonly presents as a V-shaped notch caused by excessive lateral pressure whilst tooth-brushing. The surface is shiny rather than carious, and sometimes the ridge is deep enough to see the pulp chamber within the tooth itself.
Non-carious cervical loss due to abrasion may lead to consequences and symptoms such as increased tooth sensitivity to hot and cold, increased plaque trapping which will result in caries and periodontal disease, and difficulty of dental appliances such as retainers or dentures engaging the tooth. It may also be aesthetically unpleasant to some people.
For successful treatment of abrasion, the cause first needs to be identified and ceased (e.g. overzealous brushing). Once this has occurred, subsequent treatment may involve the changes in oral hygiene, application of fluoride to reduce sensitivity, or the placement of a restoration to help prevent further loss of tooth structure and aid plaque control.
Cause
Abrasion may arise from the interaction of teeth with other objects such as toothbrushes, toothpicks, floss, and ill-fitting dental appliances like retainers and dentures. Apart from that, people with habits such as nail biting, chewing tobacco, and lip or tongue piercing, or with occupations such as joinery, are subject to higher risks of abrasion.
The ae |
https://en.wikipedia.org/wiki/Code-talker%20paradox | A code-talker paradox is a situation in which a language prevents communication. As an issue in linguistics, the paradox raises questions about the fundamental nature of languages. As such, the paradox is a problem in philosophy of language.
The term code-talker paradox was coined in 2001 by Mark Baker to describe the Navajo code talking used during World War II. Code talkers are able to create a language mutually intelligible to each other but completely unintelligible to everyone who does not know the code. This causes a conflict of interests without actually causing any conflict at all. In the case of Navajo code-talkers, cryptanalysts were unable to decode messages in Navajo, even when using the most sophisticated methods available. At the same time, the code talkers were able to encrypt and decrypt messages quickly and easily by translating them into and from Navajo. Thus the code talker paradox refers to how human languages can be so similar and different at once: so similar that one can learn them both and gain the ability to translate from one to the other, yet so different that if someone knows one language but does not know another, it is not always possible to derive the meaning of a text by analyzing it or infer it from the other language.
See also
Drift (linguistics)
List of paradoxes
Plato's Problem
The Analysis of Verbal Behavior
Philip Johnston |
https://en.wikipedia.org/wiki/Wei-Liang%20Chow | Chow Wei-Liang (; October 1, 1911, Shanghai – August 10, 1995, Baltimore) was a Chinese mathematician and stamp collector born in Shanghai, known for his work in algebraic geometry.
Biography
Chow was a student in the US, graduating from the University of Chicago in 1931. In 1932 he attended the University of Göttingen, then transferred to the Leipzig University where he worked with van der Waerden. They produced a series of joint papers on intersection theory, introducing in particular the use of what are now generally called Chow coordinates (which were in some form familiar to Arthur Cayley).
He married Margot Victor in 1936, and took a position at the National Central University in Nanjing. His mathematical work was seriously affected by the wartime situation in China. He taught at the National Tung-Chi University in Shanghai in the academic year 1946–47, and then went to the Institute for Advanced Study in Princeton, where he returned to his research. From 1948 to 1977 he was a professor at Johns Hopkins University.
He was also a stamp collector, known for his book Shanghai Large Dragons, The First Issue of The Shanghai Local Post, published in 1996.
Research
According to Shiing-Shen Chern,
"Wei-Liang was an original and versatile mathematician, although his major field was algebraic geometry. He made several fundamental contributions to mathematics:
A fundamental issue in algebraic geometry is intersection theory. The Chow ring has many advantages and is widely used.
The Chow associated forms give a description of the moduli space of the algebraic varieties in projective space. It gives a beautiful solution of an important problem.
His theorem that a compact analytic variety in a projective space is algebraic is justly famous. The theorem shows the close analogy between algebraic geometry and algebraic number theory.
Generalizing a result of Caratheodory on thermodynamics, he formulated a theorem on accessibility of differential spaces. The theorem pla |
https://en.wikipedia.org/wiki/Off-the-record%20messaging | Off-the-Record Messaging (OTR) is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of AES symmetric-key algorithm with 128 bits key length, the Diffie–Hellman key exchange with 1536 bits group size, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides forward secrecy and malleable encryption.
The primary motivation behind the protocol was providing deniable authentication for the conversation participants while keeping conversations confidential, like a private conversation in real life, or off the record in journalism sourcing. This is in contrast with cryptography tools that produce output which can be later used as a verifiable record of the communication event and the identities of the participants. The initial introductory paper was named "Off-the-Record Communication, or, Why Not To Use PGP".
The OTR protocol was designed by cryptographers Ian Goldberg and Nikita Borisov and released on 26 October 2004. They provide a client library to facilitate support for instant messaging client developers who want to implement the protocol. A Pidgin and Kopete plugin exists that allows OTR to be used over any IM protocol supported by Pidgin or Kopete, offering an auto-detection feature that starts the OTR session with the buddies that have it enabled, without interfering with regular, unencrypted conversations. Version 4 of the protocol has been in development since 2017 by a team led by Sofía Celi, and reviewed by Nik Unger and Ian Goldberg. This version aims to provide online and offline deniability, to update the cryptographic primitives, and to support out-of-order delivery and asynchronous communication.
History
OTR was presented in 2004 by Nikita Borisov, Ian Avrum Goldberg, and Eric A. Brewer as an improvement over the OpenPGP and the S/MIME system at the "Workshop on Privacy in the Electronic Society" (WPES). The first version 0.8.0 of the reference implementation w |
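For orientation only, a hedged Python sketch of the primitive combination named above (1536-bit Diffie–Hellman, SHA-1, AES-128); it uses the third-party cryptography package and is not an implementation of the OTR protocol itself, whose actual key derivation, MACs, message formats, and key rotation are defined in the OTR specification:

```python
# Illustrative sketch only: combines the primitives OTR is built from,
# not the OTR message protocol.
import hashlib
import os

from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Real OTR pins a fixed 1536-bit MODP group (RFC 3526); generating fresh
# parameters here keeps the example self-contained but is slow.
parameters = dh.generate_parameters(generator=2, key_size=1536)
alice_priv = parameters.generate_private_key()
bob_priv = parameters.generate_private_key()

shared_alice = alice_priv.exchange(bob_priv.public_key())
shared_bob = bob_priv.exchange(alice_priv.public_key())
assert shared_alice == shared_bob

# Derive a 128-bit AES key by hashing the shared secret with SHA-1.
aes_key = hashlib.sha1(shared_alice).digest()[:16]

nonce = os.urandom(16)
encryptor = Cipher(algorithms.AES(aes_key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"off the record") + encryptor.finalize()
print(ciphertext.hex())
```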
https://en.wikipedia.org/wiki/%40stake | ATstake, Inc. was a computer security professional services company in Cambridge, Massachusetts, United States. It was founded in 1999 by Battery Ventures (Tom Crotty, Sunil Dhaliwal, and Scott Tobin) and Ted Julian. Its initial core team of technologists included Dan Geer (Chief Technical Officer) and the east coast security team from Cambridge Technology Partners (including Dave Goldsmith).
History
In January 2000, @stake acquired L0pht Heavy Industries (who were known for their many hacker employees), bringing on Mudge as its Vice President of Research and Development. Its domain name was atstake.com. In July 2000, @stake acquired Cerberus Information Security Limited of London, England, from David and Mark Litchfield and Robert Stein-Rostaing, to be their launchpad into Europe, the Middle East and Africa. @stake was subsequently acquired by Symantec in 2004.
In addition to Dan Geer and Mudge, @stake employed many famous security experts including Dildog, Window Snyder, Dave Aitel, Katie Moussouris, David Litchfield, Mark Kriegsman, Mike Schiffman, the grugq, Chris Wysopal, Alex Stamos, Cris Thomas, and Joe Grand.
In September 2000, an @stake recruiter contacted Mark Abene to recruit him for a security consultant position. The recruiter was apparently unaware of his past felony conviction since @stake had a policy of not hiring convicted hackers. Mark was informed by a company representative that @stake could not hire him, saying: "We ran a background check." This caused some debate regarding the role of convicted hackers working in the security business.
@stake was primarily a consulting company, but also offered information security training through the @stake academy, and created a number of software security tools:
LC 3, LC 4 and LC 5 were versions of a password auditing and recovery tool also known as L0phtCrack
WebProxy was a security testing tool for Web applications
SmartRisk Analyzer was an application security analysis tool
The @stake Sleu |
https://en.wikipedia.org/wiki/Dielectric%20heating | Dielectric heating, also known as electronic heating, radio frequency heating, and high-frequency heating, is the process in which a radio frequency (RF) alternating electric field, or radio wave or microwave electromagnetic radiation, heats a dielectric material. At higher frequencies, this heating is caused by molecular dipole rotation within the dielectric.
Mechanism
Molecular rotation occurs in materials containing polar molecules having an electrical dipole moment, with the consequence that they will align themselves in an electromagnetic field. If the field is oscillating, as it is in an electromagnetic wave or in a rapidly oscillating electric field, these molecules rotate continuously by aligning with it. This is called dipole rotation, or dipolar polarisation. As the field alternates, the molecules reverse direction. Rotating molecules push, pull, and collide with other molecules (through electrical forces), distributing the energy to adjacent molecules and atoms in the material. The process of energy transfer from the source to the sample is a form of radiative heating.
Temperature is related to the average kinetic energy (energy of motion) of the atoms or molecules in a material, so agitating the molecules in this way increases the temperature of the material. Thus, dipole rotation is a mechanism by which energy in the form of electromagnetic radiation can raise the temperature of an object. There are also many other mechanisms by which this conversion occurs.
Dipole rotation is the mechanism normally referred to as dielectric heating, and is most widely observable in the microwave oven where it operates most effectively on liquid water, and also, but much less so, on fats and sugars. This is because fats and sugar molecules are far less polar than water molecules, and thus less affected by the forces generated by the alternating electromagnetic fields. Outside of cooking, the effect can be used generally to heat solids, liquids, or gases, provided th |
https://en.wikipedia.org/wiki/Mean%20free%20time | Molecules in a fluid constantly collide with each other. The mean free time for a molecule in a fluid is the average time between collisions. The mean free path of the molecule is the product of the average speed and the mean free time. These concepts are used in the kinetic theory of gases to compute transport coefficients such as the viscosity.
In a gas the mean free path may be much larger than the average distance between molecules. In a liquid these two lengths may be very similar.
Scattering is a random process. It is often modeled as a Poisson process, in which the probability of a collision in a small time interval dt is dt/τ. For a Poisson process like this, the average time since the last collision, the average time until the next collision and the average time between collisions are all equal to τ. |
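A quick way to check the last claim is a short simulation (an illustrative sketch, not part of the article): collision gaps are drawn from a Poisson process with an assumed mean free time tau, and the empirical mean gap is compared with tau.

```python
import random

tau = 2.0            # assumed mean free time (arbitrary units)
n_collisions = 100_000

# For a Poisson process the gaps between collisions are exponentially
# distributed with mean tau; random.expovariate takes the rate 1/tau.
gaps = [random.expovariate(1.0 / tau) for _ in range(n_collisions)]
print(sum(gaps) / len(gaps))   # ~2.0, the mean time between collisions
```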
https://en.wikipedia.org/wiki/Backup%20validation | Backup validation is the process whereby owners of computer data may examine how their data was backed up in order to understand what their risk of data loss might be. It also speaks to optimization of such processes, charging for them as well as estimating future requirements, sometimes called capacity planning.
History
Over the past several decades (leading up to 2005), organizations (banks, governments, schools, manufacturers and others) have increased their reliance more on "Open Systems" and less on "Closed Systems". For example, 25 years ago, a large bank might have most if not all of its critical data housed in an IBM mainframe computer (a "Closed System"), but today, that same bank might store a substantially greater portion of its critical data in spreadsheets, databases, or even word processing documents (i.e., "Open Systems"). The problem with Open Systems is, primarily, their unpredictable nature. The very nature of an Open System is that it is exposed to potentially thousands if not millions of variables ranging from network overloads to computer virus attacks to simple software incompatibility. Any one, or indeed several in combination, of these factors may result in either lost data and/or compromised data backup attempts. These types of problems do not generally occur on Closed Systems, or at least, in unpredictable ways. In the "old days", backups were a nicely contained affair. Today, because of the ubiquity of, and dependence upon, Open Systems, an entire industry has developed around data protection. Three key elements of such data protection are Validation, Optimization and Chargeback.
Validation
Validation is the process of finding out whether a backup attempt succeeded or not, or whether the data is backed up enough to consider it "protected". This process usually involves the examination of log files, the "smoking gun" often left behind after a backup attempt takes place, as well as media databases, data traffic and even magnetic tapes. |
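As a toy illustration of the log-examination step, a hedged Python sketch follows; the log format, job names, and status keywords are purely hypothetical, since real backup products each use their own log conventions.

```python
import re

# Hypothetical log format; real backup software uses its own conventions.
SAMPLE_LOG = """\
2024-01-05 02:00:13 JOB=payroll   STATUS=SUCCESS BYTES=10485760
2024-01-05 02:14:02 JOB=mailstore STATUS=FAILED  ERROR=tape_offline
2024-01-05 02:31:40 JOB=home_dirs STATUS=SUCCESS BYTES=73400320
"""

LINE = re.compile(r"JOB=(?P<job>\S+)\s+STATUS=(?P<status>\S+)")

def failed_jobs(log_text):
    """Return the names of backup jobs whose status line is not SUCCESS."""
    return [m.group("job") for m in LINE.finditer(log_text)
            if m.group("status") != "SUCCESS"]

print(failed_jobs(SAMPLE_LOG))   # ['mailstore']
```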
https://en.wikipedia.org/wiki/Circular%20shift | In combinatorial mathematics, a circular shift is the operation of rearranging the entries in a tuple, either by moving the final entry to the first position, while shifting all other entries to the next position, or by performing the inverse operation. A circular shift is a special kind of cyclic permutation, which in turn is a special kind of permutation. Formally, a circular shift is a permutation σ of the n entries in the tuple such that either
σ(i) ≡ (i + 1) modulo n, for all entries i = 1, ..., n
or
σ(i) ≡ (i − 1) modulo n, for all entries i = 1, ..., n.
The tuples obtained by repeatedly applying circular shifts to a given tuple are also called the circular shifts of the tuple.
For example, repeatedly applying circular shifts to the four-tuple (a, b, c, d) successively gives
(d, a, b, c),
(c, d, a, b),
(b, c, d, a),
(a, b, c, d) (the original four-tuple),
and then the sequence repeats; this four-tuple therefore has four distinct circular shifts. However, not all n-tuples have n distinct circular shifts. For instance, the 4-tuple (a, b, a, b) only has 2 distinct circular shifts. The number of distinct circular shifts of an n-tuple is n/k, where k is a divisor of n indicating the maximal number of repeats over all subpatterns.
In computer programming, a bitwise rotation, also known as a circular shift, is a bitwise operation that shifts all bits of its operand. Unlike an arithmetic shift, a circular shift does not preserve a number's sign bit or distinguish a floating-point number's exponent from its significand. Unlike a logical shift, the vacant bit positions are not filled in with zeros but are filled in with the bits that are shifted out of the sequence.
Implementing circular shifts
Circular shifts are used often in cryptography in order to permute bit sequences. Unfortunately, many programming languages, including C, do not have operators or standard functions for circular shifting, even though virtually all processors have bitwise operation instructions for it (e.g. Intel x86 has ROL a |
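As a small illustration (written in Python rather than C, for brevity), here is a circular shift of a tuple matching the four-tuple example above, together with a 32-bit rotate-left built from ordinary shift and OR operations, which is how the missing operator is typically emulated:

```python
def circular_shift(t, k=1):
    """Return tuple t circularly shifted k places (last elements move to the front)."""
    k %= len(t)
    return t[-k:] + t[:-k]

def rotl32(x, n):
    """Rotate a 32-bit unsigned value x left by n bits."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

print(circular_shift(("a", "b", "c", "d")))   # ('d', 'a', 'b', 'c')
print(hex(rotl32(0x80000001, 1)))             # 0x3 -- the high bit wraps around to bit 0
```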
https://en.wikipedia.org/wiki/Deoxyribozyme | Deoxyribozymes, also called DNA enzymes, DNAzymes, or catalytic DNA, are DNA oligonucleotides that are capable of performing a specific chemical reaction, often but not always catalytic. This is similar to the action of other biological enzymes, such as proteins or ribozymes (enzymes composed of RNA).
However, in contrast to the abundance of protein enzymes in biological systems and the discovery of biological ribozymes in the 1980s,
there is only little evidence for naturally occurring deoxyribozymes.
Deoxyribozymes should not be confused with DNA aptamers which are oligonucleotides that selectively bind a target ligand, but do not catalyze a subsequent chemical reaction.
With the exception of ribozymes, nucleic acid molecules within cells primarily serve as storage of genetic information due to their ability to form complementary base pairs, which allows for high-fidelity copying and transfer of genetic information. In contrast, nucleic acid molecules are more limited in their catalytic ability, in comparison to protein enzymes, to just three types of interactions: hydrogen bonding, pi stacking, and metal-ion coordination. This is due to the limited number of functional groups of the nucleic acid monomers: while proteins are built from up to twenty different amino acids with various functional groups, nucleic acids are built from just four chemically similar nucleobases. In addition, DNA lacks the 2'-hydroxyl group found in RNA which limits the catalytic competency of deoxyribozymes even in comparison to ribozymes.
In addition to the inherent inferiority of DNA catalytic activity, the apparent lack of naturally occurring deoxyribozymes may also be due to the primarily double-stranded conformation of DNA in biological systems which would limit its physical flexibility and ability to form tertiary structures, and so would drastically limit the ability of double-stranded DNA to act as a catalyst; though there are a few known instances of biological single-stranded |
https://en.wikipedia.org/wiki/Nuclear%20weapons%20debate | The nuclear weapons debate refers to the controversies surrounding the threat, use and stockpiling of nuclear weapons. Even before the first nuclear weapons had been developed, scientists involved with the Manhattan Project were divided over the use of the weapon. The only time nuclear weapons have been used in warfare was during the final stages of World War II when USAAF B-29 Superfortress bombers dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki in early August 1945. The role of the bombings in Japan's surrender and the U.S.'s ethical justification for them have been the subject of scholarly and popular debate for decades.
Nuclear disarmament refers both to the act of reducing or eliminating nuclear weapons and to the end state of a nuclear-free world. Proponents of disarmament typically condemn a priori the threat or use of nuclear weapons as immoral and argue that only total disarmament can eliminate the possibility of nuclear war. Critics of nuclear disarmament say that it would undermine deterrence and make conventional wars more likely, more destructive, or both. The debate becomes considerably complex when considering various scenarios, for example total versus partial or unilateral versus multilateral disarmament.
Nuclear proliferation is a related concern, which most commonly refers to the spread of nuclear weapons to additional countries and increases the risks of nuclear war arising from regional conflicts. The diffusion of nuclear technologies -- especially the nuclear fuel cycle technologies for producing weapons-usable nuclear materials such as highly enriched uranium and plutonium -- contributes to the risk of nuclear proliferation. These forms of proliferation are sometimes referred to as horizontal proliferation to distinguish them from vertical proliferation, the expansion of nuclear stockpiles of established nuclear powers.
History
Manhattan Project
Because the Manhattan Project was considered to be "top secret," there was no |
https://en.wikipedia.org/wiki/Local%20Area%20Transport | Local Area Transport (LAT) is a non-routable (data link layer) networking technology developed by Digital Equipment Corporation to provide connection between the DECserver terminal servers and Digital's VAX and Alpha and MIPS host computers via Ethernet, giving communication between those hosts and serial devices such as video terminals and printers. The protocol itself was designed in such a manner as to maximize packet efficiency over Ethernet by bundling multiple characters from multiple ports into a single packet for Ethernet transport.
One LAT strength was efficiently handling time-sensitive data transmission. Over time, other host implementations of the LAT protocol appeared allowing communications to a wide range of Unix and other non-Digital operating systems using the LAT protocol.
History
In 1984, the first implementation of the LAT protocol connected a terminal server to a VMS VAX-Cluster in Spit Brook Road, Nashua, NH. By "virtualizing" the terminal port at the host end, a very large number of plug-and-play VT100-class terminals could connect to each host computer system. Additionally, a single physical terminal could connect via multiple sessions to multiple hosts simultaneously. Future generations of terminal servers included both LAT and TELNET protocols, one of the earliest protocols created to run on a burgeoning TCP/IP based Internet. Additionally, the ability to create reverse direction pathways from users to non-traditional RS232 devices (i.e. UNIX Host TTYS1 operator ports) created an entirely new market for Terminal Servers, now known as console servers, from the mid-to-late 1990s through to the present day.
LAT and VMS drove the initial surge of adoption of thick Ethernet by the computer industry. By 1986, terminal server networks accounted for 10% of Digital's $10 billion revenue. These early Ethernet LANs scaled using Ethernet bridges (another DEC invention) as well as DECnet routers. Subsequently, Cisco routers, which implemented T |
https://en.wikipedia.org/wiki/Height%20above%20ground%20level | In aviation, atmospheric sciences and broadcasting, a height above ground level (AGL or HAGL) is a height measured with respect to the underlying ground surface. This is as opposed to height above mean sea level (AMSL or HAMSL), height above ellipsoid (HAE, as reported by a GPS receiver), or height above average terrain (AAT or HAAT, in broadcast engineering). In other words, these expressions (AGL, AMSL, HAE, AAT) indicate where the "zero level" or "reference altitude" – the vertical datum – is located.
Aviation
A pilot flying an aircraft under instrument flight rules (typically under poor visibility conditions) must rely on the aircraft's altimeter to decide when to deploy the undercarriage and prepare for landing. Therefore, the pilot needs reliable information on the height of the plane with respect to the landing area (usually an airport). The altimeter, which is usually a barometer calibrated in units of distance instead of atmospheric pressure, can therefore be set in such a way as to indicate the height of the aircraft above ground. This is done by communicating with the control tower of the airport (to get the current surface pressure) and setting the altimeter so as to read zero on the ground of that airport. Confusion between AGL and AMSL, or improper calibration of the altimeter, may result in controlled flight into terrain, a crash of a fully functioning aircraft under pilot control.
While the use of a barometric altimeter setting that provides a zero reading on the ground of the airport is a reference available to pilots, in commercial aviation it is a country-specific procedure that is not often used (it is used, e.g., in Russia, and a few other countries). Most countries (Far East, North and South America, all of Europe, Africa, Australia) use the airport's AMSL (above mean sea level) elevation as a reference. During approaches to landing, there are several other references that are used, including AFE (above field elevation) which is height refer |
https://en.wikipedia.org/wiki/List%20of%20materials%20analysis%20methods | This is a list of analysis methods used in materials science. Analysis methods are listed by their acronym, if one exists.
Symbols
μSR – see muon spin spectroscopy
χ – see magnetic susceptibility
A
AAS – Atomic absorption spectroscopy
AED – Auger electron diffraction
AES – Auger electron spectroscopy
AFM – Atomic force microscopy
AFS – Atomic fluorescence spectroscopy
Analytical ultracentrifugation
APFIM – Atom probe field ion microscopy
APS – Appearance potential spectroscopy
ARPES – Angle resolved photoemission spectroscopy
ARUPS – Angle resolved ultraviolet photoemission spectroscopy
ATR – Attenuated total reflectance
B
BET – BET surface area measurement (BET from Brunauer, Emmett, Teller)
BiFC – Bimolecular fluorescence complementation
BKD – Backscatter Kikuchi diffraction, see EBSD
BRET – Bioluminescence resonance energy transfer
BSED – Back scattered electron diffraction, see EBSD
C
CAICISS – Coaxial impact collision ion scattering spectroscopy
CARS – Coherent anti-Stokes Raman spectroscopy
CBED – Convergent beam electron diffraction
CCM – Charge collection microscopy
CDI – Coherent diffraction imaging
CE – Capillary electrophoresis
CET – Cryo-electron tomography
CL – Cathodoluminescence
CLSM – Confocal laser scanning microscopy
COSY – Correlation spectroscopy
Cryo-EM – Cryo-electron microscopy
Cryo-SEM – Cryo-scanning electron microscopy
CV – Cyclic voltammetry
D
DE(T)A – Dielectric thermal analysis
dHvA – De Haas–van Alphen effect
DIC – Differential interference contrast microscopy
Dielectric spectroscopy
DLS – Dynamic light scattering
DLTS – Deep-level transient spectroscopy
DMA – Dynamic mechanical analysis
DPI – Dual polarisation interferometry
DRS – Diffuse reflection spectroscopy
DSC – Differential scanning calorimetry
DTA – Differential thermal analysis
DVS – Dynamic vapour sorption
E
EBIC – Electron beam induced current (see IBIC: ion beam induced charge)
EBS – Elastic (non-Rutherford) backscatterin |
https://en.wikipedia.org/wiki/Recoil%20temperature | In condensed matter physics, the recoil temperature is a fundamental lower limit of temperature attainable by some laser cooling schemes, and corresponds to the kinetic energy imparted to an atom initially at rest by the spontaneous emission of a photon. The recoil temperature is T_recoil = ħ²k² / (m k_B),
where
k is the magnitude of the wavevector of the light,
m is the mass of the atom,
k_B is the Boltzmann constant,
ħ is the reduced Planck constant,
ħk is the photon's momentum.
In general, the recoil temperature is below the Doppler cooling limit for atoms and molecules, so sub-Doppler cooling techniques such as Sisyphus cooling are necessary to reach it. For example, the recoil temperature for the D2 lines of alkali atoms is typically on the order of 1 μK, in contrast with a Doppler cooling limit on the order of 100 μK.
Cooling beyond the recoil limit is possible using specific schemes such as Raman cooling. Sub-recoil temperatures can also occur in the Lamb Dicke regime, where an atom is so strongly confined that its motion (and thus temperature) is effectively unchanged by recoil photons. |
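For a concrete number, a small Python sketch evaluating T_recoil = ħ²k²/(m·k_B) with scipy.constants; the 780 nm wavelength and atomic mass used here correspond to the rubidium-87 D2 line and are assumptions chosen only for the sake of the example.

```python
from scipy.constants import hbar, k as k_B, pi, atomic_mass

wavelength = 780e-9            # m, Rb-87 D2 line (assumed for this example)
mass = 86.909 * atomic_mass    # kg, approximate Rb-87 atomic mass
k = 2 * pi / wavelength        # magnitude of the light's wavevector

T_recoil = (hbar * k) ** 2 / (mass * k_B)
print(f"{T_recoil * 1e9:.0f} nK")   # a few hundred nanokelvin, consistent with the ~1 uK scale quoted above
```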
https://en.wikipedia.org/wiki/Biological%20Dynamics%20of%20Forest%20Fragments%20Project | The Biological Dynamics of Forest Fragments Project (BDFFP; or Projeto Dinâmica Biológica de Fragmentos Florestais, PDBFF, in Portuguese) is a large-scale ecological experiment looking at the effects of habitat fragmentation on tropical rainforest. The experiment which was established in 1979 is located near Manaus in the Brazilian Amazon rainforest. The project is jointly managed by the Amazon Biodiversity Center and the Brazilian Institute for Research in the Amazon (INPA).
The project was initiated in 1979 by Thomas Lovejoy to investigate the SLOSS debate. Initially named the Minimum Critical Size of Ecosystems Project, the project created forest fragments of sizes 1 ha, 10 ha, and 100 ha. Data were collected prior to the creation of the fragments and studies of the effects of fragmentation now exceed 25 years.
As of April 2020, 785 scholarly journal articles and more than 150 graduate dissertations and theses had emerged from the project.
History
The Biological Dynamics of Forest Fragments Project (BDFFP), was born out of the SLOSS (single large or several small reserves of equal area) debate in the mid 1970s about the application of the theory of island biogeography to conservation planning. The debate was triggered when Dan Simberloff and Larry Abele questioned the use of the theory of island biogeography to the design of nature reserves. The theory, developed by Robert MacArthur and E. O. Wilson predicted that the species richness of an island increases as the area of a reserve increases and distance to mainland colonizing sources decreases. It also determined that the shape of a reserve is very important to the species diversity. Reserves with a large edge to area ratio tend to be affected more by edge effects than reserves with a small edge to area ratio. The distance between reserves and the habitat surrounding the reserves (the matrix) can affect species richness and diversity as well. The concept was applied to planning nature preserves, which are effectively i |
https://en.wikipedia.org/wiki/Scenario%20%28computing%29 | In computing, a scenario is a narrative of foreseeable interactions of user roles (known in the Unified Modeling Language as 'actors') and the technical system, which usually includes computer hardware and software.
A scenario has a goal, which is usually functional. A scenario describes one way that a system is used, or is envisaged to be used, in the context of an activity in a defined time-frame. The time-frame for a scenario could be (for example) a single transaction; a business operation; a day or other period; or the whole operational life of a system. Similarly the scope of a scenario could be (for example) a single system or a piece of equipment; an equipped team or a department; or an entire organization.
Scenarios are frequently used as part of the system development process. They are typically produced by usability or marketing specialists, often working in concert with end users and developers. Scenarios are written in plain language, with minimal technical details, so that stakeholders (designers, usability specialists, programmers, engineers, managers, marketing specialists, etc.) can have a common ground to focus their discussions.
Increasingly, scenarios are used directly to define the wanted behaviour of software: replacing or supplementing traditional functional requirements. Scenarios are often defined in use cases, which document alternative and overlapping ways of reaching a goal.
Types of scenario in system development
Many types of scenario are in use in system development. Alexander and Maiden list the following types:
Story: "a narrated description of a causally connected sequence of events, or of actions taken". Brief User stories are written in the Agile style of software development.
Situation, Alternative World: "a projected future situation or snapshot". This meaning is common in planning, but less usual in software development.
Simulation: use of models to explore and animate 'Stories' or 'Situations', to |
https://en.wikipedia.org/wiki/Continuum%20limit | In mathematical physics and mathematics, the continuum limit or scaling limit of a lattice model refers to its behaviour in the limit as the lattice spacing goes to zero. It is often useful to use lattice models to approximate real-world processes, such as Brownian motion. Indeed, according to Donsker's theorem, the discrete random walk would, in the scaling limit, approach the true Brownian motion.
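As a quick illustration of this Donsker-type scaling (a sketch, not from the article): a simple random walk of n steps, rescaled in space by 1/sqrt(n), has an endpoint whose variance approaches that of standard Brownian motion at time 1.

```python
import random

def scaled_walk_endpoint(n):
    """Endpoint of an n-step +/-1 random walk, rescaled by 1/sqrt(n)."""
    return sum(random.choice((-1, 1)) for _ in range(n)) / n ** 0.5

n, trials = 1000, 5_000
endpoints = [scaled_walk_endpoint(n) for _ in range(trials)]
variance = sum(x * x for x in endpoints) / trials
print(variance)   # close to 1, the variance of standard Brownian motion at time 1
```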
Terminology
The term continuum limit mostly finds use in the physical sciences, often in reference to models of aspects of quantum physics, while the term scaling limit is more common in mathematical use.
Application in quantum field theory
A lattice model that approximates a continuum quantum field theory in the limit as the lattice spacing goes to zero may correspond to finding a second order phase transition of the model. This is the scaling limit of the model.
See also
Universality classes |
https://en.wikipedia.org/wiki/Lorenzo%20de%20Zavala | Manuel Lorenzo Justiniano de Zavala y Sánchez (October 3, 1788 – November 15, 1836), known simply as Lorenzo de Zavala, was a Mexican and later Tejano physician, politician, diplomat and author. Born in Yucatán under Spanish rule, he was closely involved in drafting the constitution for the First Federal Republic of Mexico in 1824 after Mexico won independence from Spain. Years later, he also helped in drafting a constitution for Mexico's rebellious enemy at the time, the Republic of Texas, to secure independence from Mexico in 1836. Zavala was said to have had a keen intellect and was fluent in multiple languages.
Zavala was one of the most prominent liberals in the era of the First Republic. Since his youth, Zavala was an indefatigable believer in the principle of democratic representative government. As a young man he founded several newspapers and wrote extensively, espousing democratic reforms — writings which led to his imprisonment by the Spanish crown. While imprisoned, he learned English and studied medicine; after his release, he practiced medicine for two years before entering politics.
Over his career, he served in many different capacities including the Spanish Cortes (legislature) in Madrid representing Yucatán, and in Mexico's Senate. He became Mexico's Minister of Finance and served as Ambassador to France and Governor of the State of Mexico. In 1829, a conservative coup brought Anastasio Bustamante to power, and Zavala was forced into exile, moving to the United States for two years. He wrote a book about U.S. political culture during this time and also traveled extensively in Europe. With his diplomatic experience and linguistic skills, Zavala was well received by foreign governments.
In 1832, a liberal coup brought Valentin Gomez Farias to power. Zavala returned to Mexico and was appointed as Minister to France. While serving in Paris, Zavala witnessed the overthrow of Gomez Farias and the subsequent fall of the First Mexican Republic. Santa Anna w |
https://en.wikipedia.org/wiki/Coccygeus%20muscle | The coccygeus muscle or ischiococcygeus is a muscle of the pelvic floor, located posterior to levator ani and anterior to the sacrospinous ligament.
Structure
The coccygeus muscle is posterior to levator ani and anterior to the sacrospinous ligament in the pelvic floor. It is a triangular plane of muscular and tendinous fibers. It arises by its apex from the spine of the ischium and sacrospinous ligament. It is inserted by its base into the margin of the coccyx and into the side of the lowest piece of the sacrum.
In combination with the levator ani, it forms the pelvic diaphragm.
The pudendal nerve runs between the coccygeus muscle and the piriformis muscle, superficial to the coccygeus muscle.
Nerve supply
The coccygeus muscle is innervated by the pudendal nerve, which runs between it and the piriformis muscle.
Function
The coccygeus muscle assists the levator ani and piriformis muscle in closing in the back part of the outlet of the pelvis. This helps to support the vagina in women, and the other pelvic organs.
See also
Extensor coccygis
Coccyx
Coccydynia (coccyx pain, tailbone pain)
Pubococcygeus muscle |
https://en.wikipedia.org/wiki/Propylthiouracil | Propylthiouracil (PTU) is a medication used to treat hyperthyroidism. This includes hyperthyroidism due to Graves' disease and toxic multinodular goiter. In a thyrotoxic crisis it is generally more effective than methimazole. Otherwise it is typically only used when methimazole, surgery, and radioactive iodine are not possible. It is taken by mouth.
Common side effects include itchiness, hair loss, parotid swelling, vomiting, muscle pains, numbness, and headache. Other severe side effects include liver problems and low blood cell counts. Use during pregnancy may harm the baby. Propylthiouracil is in the antithyroid family of medications. It works by decreasing the amount of thyroid hormone produced by the thyroid gland and blocking the conversion of thyroxine (T4) to triiodothyronine (T3).
Propylthiouracil came into medical use in the 1940s. It is on the World Health Organization's List of Essential Medicines.
Side effects
Propylthiouracil is generally well tolerated, with side effects occurring in one of every 100 patients. The most common side effects are related to the skin and include rash, itching, hives, abnormal hair loss, and skin pigmentation. Other common side effects are swelling, nausea, vomiting, heartburn, loss of taste, joint or muscle aches, numbness and headache, allergic reactions, and hair whitening.
Its notable side effects include a risk of agranulocytosis and aplastic anemia. On 3 June 2009, the FDA published an alert "notifying healthcare professionals of the risk of serious liver injury, including liver failure and death, with the use of propylthiouracil." As a result, propylthiouracil is no longer recommended in non-pregnant adults and in children as the front line antithyroid medication.
One possible side effect is agranulocytosis, a decrease of white blood cells in the blood. Symptoms and signs of agranulocytosis include infectious lesions of the throat, the gastrointestinal tract, and skin with an overall feeling of illness and fever |
https://en.wikipedia.org/wiki/Low%20Pin%20Count | The Low Pin Count (LPC) bus is a computer bus used on IBM-compatible personal computers to connect low-bandwidth devices to the CPU, such as the BIOS ROM (BIOS ROM was moved to the Serial Peripheral Interface (SPI) bus in 2006), "legacy" I/O devices (integrated into Super I/O, Embedded Controller, CPLD, and/or IPMI chip), and Trusted Platform Module (TPM). "Legacy" I/O devices usually include serial and parallel ports, PS/2 keyboard, PS/2 mouse, and floppy disk controller.
Most PC motherboards with an LPC bus have either a Platform Controller Hub (PCH) or a southbridge chip, which acts as the host and controls the LPC bus. All other devices connected to the physical wires of the LPC bus are peripherals.
Overview
The LPC bus was introduced by Intel in 1998 as a software-compatible substitute for the Industry Standard Architecture (ISA) bus. It resembles ISA to software, although physically it is quite different. The ISA bus has a 16-bit data bus and a 24-bit address bus that can be used for both 16-bit I/O port addresses and 24-bit memory addresses; both run at speeds up to 8.33 MHz. The LPC bus uses a heavily multiplexed four-bit-wide bus operating at four times the clock speed (33.3 MHz) to transfer addresses and data with similar performance.
LPC's main advantage is that the basic bus requires only seven signals, greatly reducing the number of pins required on peripheral chips. An integrated circuit using LPC will need 30 to 72 fewer pins than its ISA equivalent. It is also easier to route on modern motherboards, which are often quite crowded. The clock rate was chosen to match that of PCI in order to further ease integration. Also, LPC is intended to be a motherboard-only bus. There is no standardized connector in common use, though Intel defines one for use for debug modules, and few LPC peripheral daughterboards are available, including Trusted Platform Modules (TPMs) with a TPM daughterboard whose pinout is proprietary to the motherboard vendor as wel |
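The "similar performance" claim above follows from simple arithmetic, shown here as a tiny illustrative Python check (peak raw transfer rates only, ignoring protocol overhead on both buses):

```python
# Peak raw data rate: bus width (bits) times clock (MHz) -> Mbit/s.
isa_mbit_per_s = 16 * 8.33   # 16-bit ISA data bus at 8.33 MHz
lpc_mbit_per_s = 4 * 33.3    # 4-bit LPC bus at 33.3 MHz (4x the clock, 1/4 the width)
print(isa_mbit_per_s, lpc_mbit_per_s)   # ~133.3 vs ~133.2 Mbit/s
```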
https://en.wikipedia.org/wiki/Arbitrated%20loop | The arbitrated loop, also known as FC-AL, is a Fibre Channel topology in which devices are connected in a one-way loop fashion in a ring topology. Historically it was a lower-cost alternative to a fabric topology. It allowed connection of many servers and computer storage devices without using then very costly Fibre Channel switches. The cost of the switches dropped considerably, so by 2007, FC-AL had become rare in server-to-storage communication. It is however still common within storage systems.
It is a serial architecture that can be used as the transport layer in a SCSI network, with up to 127 devices. The loop may connect into a fibre channel fabric via one of its ports.
The bandwidth on the loop is shared among all ports.
Only two ports may communicate at a time on the loop. One port wins arbitration and may open one other port in either half or full duplex mode.
A loop with two ports is valid and has the same physical topology as point-to-point, but still acts as a loop protocol-wise.
Fibre Channel ports capable of arbitrated loop communication are NL_port (node loop port) and FL_port (fabric loop port), collectively referred to as the L_ports. The ports may attach to each other via a hub, with cables running from the hub to the ports. The physical connectors on the hub are not ports in terms of the protocol. A hub does not contain ports.
An arbitrated loop with no fabric port (with only NL_ports) is a private loop.
An arbitrated loop connected to a fabric (through an FL_port) is a public loop.
An NL_Port must provide fabric logon (FLOGI) and name registration facilities to initiate communication with other nodes through the fabric (to be an initiator).
Arbitrated loop can be physically cabled in a ring fashion or using a hub. The physical ring ceases to work if one of the devices in the chain fails. The hub, on the other hand, while maintaining a logical ring, allows a star topology at the cable level. Each receive port on the hub is simply p |
https://en.wikipedia.org/wiki/Switched%20fabric | Switched fabric or switching fabric is a network topology in which network nodes interconnect via one or more network switches (particularly crossbar switches). Because a switched fabric network spreads network traffic across multiple physical links, it yields higher total throughput than broadcast networks, such as the early 10BASE5 version of Ethernet and most wireless networks such as Wi-Fi.
The generation of high-speed serial data interconnects that appeared in 2001–2004, providing point-to-point connectivity between processors and peripheral devices, is sometimes referred to as fabrics; however, these interconnects lack features such as a message-passing protocol. For example, HyperTransport, the computer processor interconnect technology, continues to maintain a processor bus focus even after adopting a higher speed physical layer. Similarly, PCI Express is just a serial version of PCI; it adheres to PCI's host/peripheral load/store direct memory access (DMA)-based architecture on top of a serial physical and link layer.
Fibre Channel
In the Fibre Channel Switched Fabric (FC-SW-6) topology, devices are connected to each other through one or more Fibre Channel switches. While this topology has the best scalability of the three FC topologies (the other two are Arbitrated Loop and point-to-point), it is the only one requiring switches, which are costly hardware devices.
Visibility among devices (called nodes) in a fabric is typically controlled with Fibre Channel zoning.
Multiple switches in a fabric usually form a mesh network, with devices being on the "edges" ("leaves") of the mesh. Most Fibre Channel network designs employ two separate fabrics for redundancy. The two fabrics share the edge nodes (devices), but are otherwise unconnected. One of the advantages of such a setup is the capability of failover: if one link breaks or an entire fabric goes out of order, datagrams can be sent via the second fabric.
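A minimal sketch of that failover idea (hypothetical names and data structures, not an actual SAN management API): each edge device has a path into both fabrics and simply uses whichever fabric is currently healthy.

    # Dual-fabric redundancy: try fabric A first, fall back to fabric B if it
    # is down; both fabrics reach the same edge devices but are not connected
    # to each other.
    def send(frame, fabrics):
        for fabric in fabrics:
            if fabric["healthy"]:
                return f"{frame!r} sent via {fabric['name']}"
        raise RuntimeError("both fabrics are down")

    fabrics = [
        {"name": "fabric_A", "healthy": False},  # e.g. a failed switch or link
        {"name": "fabric_B", "healthy": True},
    ]
    print(send("SCSI WRITE", fabrics))  # -> 'SCSI WRITE' sent via fabric_B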
The fabric topology allows the connection of up to the |
https://en.wikipedia.org/wiki/Calcifuge | A calcifuge is a plant that does not tolerate alkaline (basic) soil. The word is derived from the Latin 'to flee from chalk'. These plants are also described as ericaceous, as the prototypical calcifuge is the genus Erica (heaths). It is not the presence of carbonate or hydroxide ions per se that these plants cannot tolerate, but the fact that under alkaline conditions, iron becomes less soluble. Consequently, calcifuges grown on alkaline soils often develop the symptoms of iron deficiency, i.e. interveinal chlorosis of new growth. There are many horticultural plants which are calcifuges, most of which require an 'ericaceous' compost with a low pH, composed principally of Sphagnum moss peat. Alternatively sulphur chips may be used to lower soil pH.
A plant that thrives in lime-rich soils is known as a calcicole.
Examples
Order Ericales
Ericaceae
Andromeda polifolia
Calluna (common heather)
Cassiope lycopodioides
Daboecia
Enkianthus campanulatus
Erica (but not E. carnea or E. erigena)
Gaultheria mucronata
Kalmia latifolia (calico bush)
Pieris
Rhododendron (many species of rhododendron and azalea)
Vaccinium corymbosum (northern highbush blueberry)
Vaccinium myrtillus (bilberry)
Sarraceniaceae (carnivorous)
Pitcher plants of the genera Sarracenia, Darlingtonia, and Heliamphora
Styracaceae
Styrax wilsonii
Theaceae
Camellia sinensis (Tea plant)
Order Caryophyllales
Droseraceae (carnivorous)
Drosera (sundew species; but some species are calcitolerant or calciphilous)
Dionaea muscipula (Venus flytrap)
Nepenthaceae (carnivorous)
Nepenthes (pitcher plants; but some species are calcitolerant or even calciphilous)
Order Lamiales
Lentibulariaceae (carnivorous)
Utricularia sect. Calpidisca and some other subgenera (non-epiphytic terrestrial bladderworts; there are some species that prefer neutral pH or are calciphilous)
Other orders
Asteraceae
Arnica montana
Columelliaceae
Desfontainia spinosa
Cornaceae
Cornus florida (dogwood)
Elaeocarpaceae
Crinodendron hoo |
https://en.wikipedia.org/wiki/Calcicole | A calcicole, calciphyte or calciphile is a plant that thrives in lime rich soil. The word is derived from the Latin 'to dwell on chalk'. Under acidic conditions, aluminium becomes more soluble and phosphate less. As a consequence, calcicoles grown on acidic soils often develop the symptoms of aluminium toxicity, i.e. necrosis, and phosphate deficiency, i.e. anthocyanosis (reddening of the leaves) and stunting.
A plant that thrives in acid soils is known as a calcifuge.
A plant thriving on sand (which may be acidic or calcic) is termed psammophilic or arenaceous (see also arenite).
Examples of calcicole plants
Ash trees (Fraxinus spp.)
Honeysuckle (Lonicera)
Buddleja
Lilac (Syringa)
Beet
Clematis
Sanguisorba minor
Some European orchids
Some succulent plants of the genera Sansevieria and Titanopsis, and cacti of the genus Thelocactus.
Calcicolous grasses |
https://en.wikipedia.org/wiki/Constituent%20quark | A constituent quark is a current quark with a notional "covering" induced by the renormalization group.
In the low-energy limit of QCD, a description by means of perturbation theory is not possible: there is no asymptotic freedom here, and collective interactions between valence quarks and sea quarks become much more significant. Part of the effects of the virtual quarks and virtual gluons in the "sea" can be assigned to a given quark well enough that the term "constituent quark" can serve as an effective description of the low-energy system.
Constituent quarks thus behave like "dressed" current quarks, i.e. current quarks surrounded by a cloud of virtual quarks and gluons. This cloud ultimately accounts for the large constituent-quark masses.
Definition
Constituent quarks are valence quarks whose effective masses absorb the gluon and sea-quark correlations that would otherwise be needed to describe hadrons.
The effective quark mass is called constituent quark mass. Hadrons consist of "glued" constituent quarks.
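As a rough numerical illustration (approximate textbook values, not taken from this article), the sum of constituent-quark masses nearly reproduces the hadron mass, whereas the sum of the underlying current-quark masses does not. For the proton,

    m_p \approx 2\,m_u^{\mathrm{cons}} + m_d^{\mathrm{cons}} \approx 3 \times (300\text{--}350)\ \mathrm{MeV}/c^2,
    \qquad\text{while}\qquad
    2\,m_u^{\mathrm{cur}} + m_d^{\mathrm{cur}} \approx 10\ \mathrm{MeV}/c^2,

compared with the measured proton mass of about 938 MeV/c².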
Binding energy
The quantum chromodynamic binding energy of a valence quark in a hadron is the amount of energy required to make the hadron spontaneously emit a meson containing the valence quark. This is the same as the constituent-quark mass.
Note that the following values are model-dependent. |
https://en.wikipedia.org/wiki/QCD%20matter | Quark matter or QCD matter (quantum chromodynamic) refers to any of a number of hypothetical phases of matter whose degrees of freedom include quarks and gluons, of which the prominent example is quark-gluon plasma. Several series of conferences in 2019, 2020, and 2021 were devoted to this topic.
Quarks are liberated into quark matter at extremely high temperatures and/or densities; some of these phases are still only theoretical, as they require conditions so extreme that they cannot be produced in any laboratory, especially not under equilibrium conditions. Under these extreme conditions, the familiar structure of matter, where the basic constituents are nuclei (consisting of nucleons which are bound states of quarks) and electrons, is disrupted. In quark matter it is more appropriate to treat the quarks themselves as the basic degrees of freedom.
In the standard model of particle physics, the strong force is described by the theory of QCD. At ordinary temperatures or densities this force just confines the quarks into composite particles (hadrons) of size around 10⁻¹⁵ m = 1 femtometer = 1 fm (corresponding to the QCD energy scale Λ_QCD ≈ 200 MeV) and its effects are not noticeable at longer distances.
However, when the temperature reaches the QCD energy scale (T of order 10¹² kelvins) or the density rises to the point where the average inter-quark separation is less than 1 fm (quark chemical potential μ around 400 MeV), the hadrons are melted into their constituent quarks, and the strong interaction becomes the dominant feature of the physics. Such phases are called quark matter or QCD matter.
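The quoted temperature follows from converting the QCD energy scale with the Boltzmann constant (a standard unit conversion, using k_B ≈ 8.617 × 10⁻⁵ eV/K):

    T \sim \frac{\Lambda_{\mathrm{QCD}}}{k_B} \approx \frac{200\ \mathrm{MeV}}{8.617\times 10^{-5}\ \mathrm{eV/K}} \approx 2.3\times 10^{12}\ \mathrm{K}.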
The strength of the color force makes the properties of quark matter unlike gas or plasma, instead leading to a state of matter more reminiscent of a liquid. At high densities, quark matter is a Fermi liquid, but is predicted to exhibit color superconductivity at high densities and temperatures below 10¹² K.
Occurrence
Natural occurrence
According to the Big Bang theory, in |
https://en.wikipedia.org/wiki/Gas%20leak | A gas leak refers to a leak of natural gas or another gaseous product from a pipeline or other containment into any area where the gas should not be present. Gas leaks can be hazardous to health as well as the environment. Even a small leak into a building or other confined space may gradually build up an explosive or lethal concentration of gas. Natural gas leaks and the escape of refrigerant gas into the atmosphere are especially harmful, because of their global warming potential and ozone depletion potential.
Leaks of gases associated with industrial operations and equipment are also generally known as fugitive emissions. Natural gas leaks from fossil fuel extraction and use are known as fugitive gas emissions. Such unintended leaks should not be confused with similar intentional types of gas release, such as:
gas venting emissions which are controlled releases, and often practised as a part of routine operations, or
"emergency pressure releases" which are intended to prevent equipment damage and safeguard life.
Gas leaks should also not be confused with "gas seepage" from the earth or oceans, whether natural or due to human activity.
Fire and explosion safety
Pure natural gas is colorless and odorless, and is composed primarily of methane. Unpleasant scents in the form of traces of mercaptans are usually added, to assist in identifying leaks. This odor may be perceived as rotting eggs, or a faintly unpleasant skunk smell. Persons detecting the odor must evacuate the area and abstain from using open flames or operating electrical equipment, to reduce the risk of fire and explosion.
As a result of the Pipeline Safety Improvement Act of 2002 passed in the United States, federal safety standards require companies providing natural gas to conduct safety inspections for gas leaks in homes and other buildings receiving natural gas. The gas company is required to inspect gas meters and inside gas piping from the point of entry into the building to the outlet side o |