https://en.wikipedia.org/wiki/System%20analysis | System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems.
Characterization of systems
A system is characterized by how it responds to input signals. In general, a system has one or more input signals and one or more output signals. Therefore, one natural characterization of systems is by how many inputs and outputs they have:
SISO (Single Input, Single Output)
SIMO (Single Input, Multiple Outputs)
MISO (Multiple Inputs, Single Output)
MIMO (Multiple Inputs, Multiple Outputs)
It is often useful (or necessary) to break up a system into smaller pieces for analysis. Therefore, we can regard a SIMO system as multiple SISO systems (one for each output), and similarly for a MIMO system. By far, the greatest amount of work in system analysis has been with SISO systems, although many parts inside SISO systems have multiple inputs (such as adders).
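For linear discrete-time systems, the decomposition described above can be sketched directly: each output of a MIMO system is the superposition of SISO responses, one per input. A minimal illustration (the impulse responses below are invented for the example, not taken from the text):

```python
def convolve(h, x):
    """Response of a linear SISO system with impulse response h to input x."""
    y = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            y[i + j] += hi * xj
    return y

def mimo_response(H, inputs):
    """Treat a linear MIMO system as a grid of SISO systems:
    output m is the sum of SISO responses H[m][n] applied to input n."""
    outputs = []
    for row in H:  # one row of impulse responses per output
        parts = [convolve(h, x) for h, x in zip(row, inputs)]
        length = max(len(p) for p in parts)
        outputs.append([sum(p[k] if k < len(p) else 0.0 for p in parts)
                        for k in range(length)])
    return outputs

# A 2-input, 1-output (MISO) system: two SISO paths summed at one output
H = [[[1.0, 0.5], [0.25]]]
y = mimo_response(H, [[1.0, 0.0], [0.0, 1.0]])
# y == [[1.0, 0.75, 0.0]]
```

The adder mentioned above is exactly the summation step inside `mimo_response`.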
Signals can be continuous or discrete in time, as well as continuous or discrete in the values they take at any given time:
Signals that are continuous in time and continuous in value are known as analog signals.
Signals that are discrete in time and discrete in value are known as digital signals.
Signals that are discrete in time and continuous in value are called discrete-time signals. Switched capacitor systems, for instance, are often used in integrated circuits. The methods developed for analyzing discrete time signals and systems are usually applied to digital and analog signals and systems.
Signals that are continuous in time and discrete in value are sometimes seen in the timing analysis of logic circuits or PWM amplifiers, but have little to no use in system analysis.
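The four time/value categories above can be captured in a small lookup; a trivial sketch (the function name and the wording of the labels are our own):

```python
def signal_class(continuous_time: bool, continuous_value: bool) -> str:
    """Classify a signal by whether time and value are continuous or discrete."""
    return {
        (True, True): "analog",
        (False, False): "digital",
        (False, True): "discrete-time",
        (True, False): "continuous-time, discrete-valued (e.g. PWM)",
    }[(continuous_time, continuous_value)]
```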
With this categ |
https://en.wikipedia.org/wiki/Link%20Layer%20Discovery%20Protocol | The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices for advertising their identity, capabilities, and neighbors on a local area network based on IEEE 802 technology, principally wired Ethernet. The protocol is formally referred to by the IEEE as Station and Media Access Control Connectivity Discovery specified in IEEE 802.1AB with additional support in IEEE 802.3 section 6 clause 79.
LLDP performs functions similar to several proprietary protocols, such as Cisco Discovery Protocol, Foundry Discovery Protocol, Nortel Discovery Protocol and Link Layer Topology Discovery.
Information gathered
Information gathered with LLDP can be stored in the device management information base (MIB) and queried with the Simple Network Management Protocol (SNMP) as specified in RFC 2922. The topology of an LLDP-enabled network can be discovered by crawling the hosts and querying this database. Information that may be retrieved includes:
System name and description
Port name and description
VLAN name
IP management address
System capabilities (switching, routing, etc.)
MAC/PHY information
MDI power
Link aggregation
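The attributes listed above travel in LLDP frames as type-length-value (TLV) fields; IEEE 802.1AB packs a 7-bit TLV type and a 9-bit length into a two-byte header. A minimal parsing sketch (the function names are our own; only the header layout, the System Name type code 5, and the End-of-LLDPDU type 0 are taken from the standard):

```python
def parse_tlv_header(data: bytes) -> tuple:
    """Split the 2-byte LLDP TLV header into its 7-bit type and 9-bit length."""
    tlv_type = data[0] >> 1                       # top 7 bits
    tlv_len = ((data[0] & 0x01) << 8) | data[1]   # bottom 9 bits
    return tlv_type, tlv_len

def iter_tlvs(frame: bytes):
    """Yield (type, value) pairs until the mandatory End-of-LLDPDU TLV (type 0)."""
    offset = 0
    while offset + 2 <= len(frame):
        t, n = parse_tlv_header(frame[offset:offset + 2])
        yield t, frame[offset + 2: offset + 2 + n]
        if t == 0:  # End of LLDPDU
            break
        offset += 2 + n

# System Name TLV (type 5) carrying "sw1", followed by End-of-LLDPDU
frame = bytes([0x0A, 0x03]) + b"sw1" + bytes([0x00, 0x00])
tlvs = list(iter_tlvs(frame))
# tlvs == [(5, b"sw1"), (0, b"")]
```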
Applications
The Link Layer Discovery Protocol may be used as a component in network management and network monitoring applications.
One such example is its use in data center bridging requirements. The Data Center Bridging Capability Exchange Protocol (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of data center bridging features between neighbors to ensure consistent configuration across the network.
LLDP is used to advertise power over Ethernet capabilities and requirements and negotiate power delivery.
Media endpoint discovery extension
Media Endpoint Discovery is an enhancement of LLDP, known as LLDP-MED, that provides the following facilities:
Auto-discovery of LAN policies (such as VLAN, Layer 2 Priority and Differentiated services (Diffserv) settings) enabling plug and play networking.
Device |
https://en.wikipedia.org/wiki/Ataxia%20telangiectasia%20and%20Rad3%20related | Serine/threonine-protein kinase ATR, also known as ataxia telangiectasia and Rad3-related protein (ATR) or FRAP-related protein 1 (FRP1), is an enzyme that, in humans, is encoded by the ATR gene. It is a large kinase of about 301.66 kDa. ATR belongs to the phosphatidylinositol 3-kinase-related kinase protein family. ATR is activated in response to single strand breaks, and works with ATM to ensure genome integrity.
Function
ATR is a serine/threonine-specific protein kinase that is involved in sensing DNA damage and activating the DNA damage checkpoint, leading to cell cycle arrest in eukaryotes. ATR is activated in response to persistent single-stranded DNA, which is a common intermediate formed during DNA damage detection and repair. Single-stranded DNA occurs at stalled replication forks and as an intermediate in DNA repair pathways such as nucleotide excision repair and homologous recombination repair. ATR is activated during more persistent issues with DNA damage; within cells, most DNA damage is repaired quickly and faithfully through other mechanisms. ATR works with a partner protein called ATRIP to recognize single-stranded DNA coated with RPA. RPA binds specifically to ATRIP, which then recruits ATR through an ATR activating domain (AAD) on its surface. This association of ATR with RPA is how ATR specifically binds to and works on single-stranded DNA; this was demonstrated through experiments with cells that had mutated nucleotide excision pathways. In these cells, ATR was unable to activate after UV damage, showing the need for single-stranded DNA for ATR activity. The acidic alpha-helix of ATRIP binds to a basic cleft in the large RPA subunit to create a site for effective ATR binding. Many other proteins that are needed for ATR activation are also recruited to the site of ssDNA. While RPA recruits ATRIP, the RAD9-RAD1-HUS1 (9-1-1) complex is loaded onto the DNA adjacent to the ssDNA; though ATRIP and the 9-1-1 complex are recruited independently to th |
https://en.wikipedia.org/wiki/Cleavage%20furrow | In cell biology, the cleavage furrow is the indentation of the cell's surface that begins the progression of cleavage, by which animal and some algal cells undergo cytokinesis, the final splitting of the membrane, in the process of cell division. The same proteins responsible for muscle contraction, actin and myosin, begin the process of forming the cleavage furrow, creating an actomyosin ring. Other cytoskeletal proteins and actin binding proteins are involved in the procedure.
Mechanism
Plant cells do not perform cytokinesis through this exact method, but the two procedures are not totally different. Animal cells form an actin-myosin contractile ring within the equatorial region of the cell membrane that constricts to form the cleavage furrow. In plant cells, Golgi vesicle secretions form a cell plate or septum on the equatorial plane of the cell wall by the action of microtubules of the phragmoplast. The cleavage furrow in animal cells and the phragmoplast in plant cells are complex structures made up of microtubules and microfilaments that aid in the final separation of the cells into two identical daughter cells.
Cell cycle
The cell cycle begins with interphase when the DNA replicates, the cell grows and prepares to enter mitosis. Mitosis includes four phases: prophase, metaphase, anaphase, and telophase. Prophase is the initial phase when spindle fibers appear that function to move the chromosomes toward opposite poles. This spindle apparatus consists of microtubules, microfilaments and a complex network of various proteins. During metaphase, the chromosomes line up using the spindle apparatus in the middle of the cell along the equatorial plate. The chromosomes move to opposite poles during anaphase and remain attached to the spindle fibers by their centromeres. Animal cell cleavage furrow formation is caused by a ring of actin microfilaments called the contractile ring, which forms during early anaphase. Myosin is present in the region of the contracti |
https://en.wikipedia.org/wiki/Principle%20of%20indifference | The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities. The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or 'degrees of belief') equally among all the possible outcomes under consideration.
In Bayesian probability, this is the simplest non-informative prior. The principle of indifference is meaningless under the frequency interpretation of probability, in which probabilities are relative frequencies rather than degrees of belief in uncertain propositions, conditional upon state information.
Examples
The textbook examples for the application of the principle of indifference are coins, dice, and cards.
In a macroscopic system, at least, it must be assumed that the physical laws that govern the system are not known well enough to predict the outcome. As observed some centuries ago by John Arbuthnot (in the preface of Of the Laws of Chance, 1692),
It is impossible for a Die, with such determin'd force and direction, not to fall on such determin'd side, only I don't know the force and direction which makes it fall on such determin'd side, and therefore I call it Chance, which is nothing but the want of art....
Given enough time and resources, there is no fundamental reason to suppose that suitably precise measurements could not be made, which would enable the prediction of the outcome of coins, dice, and cards with high accuracy: Persi Diaconis's work with coin-flipping machines is a practical example of this.
Coins
A symmetric coin has two sides, arbitrarily labeled heads (many coins have the head of a person portrayed on one side) and tails. Assuming that the coin must land on one side or the other, the outcomes of a coin toss are mutually exclusive, exhaustive, and interchangeable. According to the principle of indifference, we assign each of the possible outcomes a probability of 1/2.
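The same assignment extends to any set of n mutually exclusive, exhaustive, interchangeable outcomes, each receiving probability 1/n. A small sketch using exact fractions (the function name is our own):

```python
from fractions import Fraction

def indifference_prior(outcomes):
    """Assign equal epistemic probability to each outcome,
    per the principle of indifference."""
    n = len(outcomes)
    return {o: Fraction(1, n) for o in outcomes}

coin = indifference_prior(["heads", "tails"])  # 1/2 each
die = indifference_prior(range(1, 7))          # 1/6 per face
```

By construction the credences sum to exactly 1, so the assignment is a valid probability distribution.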
It is implicit in this analysis t |
https://en.wikipedia.org/wiki/Embryotroph | Embryotroph is the embryonic nourishment in placental animals.
Formation of syncytiotrophoblast
On approximately the seventh day of development, the trophoblast (cells that make up the outer part of the blastocyst) divides to form two separate layers: an inner cytotrophoblast layer, and an outer syncytiotrophoblast layer. Using enzymes, the syncytiotrophoblast penetrates the tissues of the mother, then it attaches to these tissues by burrowing with long projections, breaking maternal blood vessels. The chemical reason why this process occurs is currently unknown.
Uterine milk
Uterine milk is part of the embryotroph. It is a white secretion containing proteins and amino acids that nourishes the embryo during development. The uterine milk is the actual nutritional liquid that feeds the embryo, while the embryotroph is the uterine milk plus the syncytiotrophoblast.
Malformations and embryotrophic nutrition
Studies have shown that when embryotrophic nutrition is interrupted for some reason or another, malformations in embryos tend to occur. This is expected, because when important proteins and amino acids are withheld, the embryo will surely be at a disadvantage. The yolk sac is the part of the embryo most likely to be malformed, leading to other malformations later on. |
https://en.wikipedia.org/wiki/Wellcome%20Centre%20for%20Human%20Genetics | The Wellcome Centre for Human Genetics is a human genetics research centre of the Nuffield Department of Medicine in the Medical Sciences Division, University of Oxford, funded by the Wellcome Trust among others.
Facilities & resources
The centre is located at the Henry Wellcome Building of Genomic Medicine, which cost £20 million and was officially opened in June 2000 with Anthony Monaco as the director.
Within the WHG a number of 'cores' provide services to the researchers:
Oxford Genomics Centre
The Oxford Genomics Centre provides high-throughput sequencing services using Illumina HiSeq 4000, HiSeq 2500, NextSeq 500 and MiSeq instruments. They also offer Oxford Nanopore MinION and PromethION sequencing. There are also array platforms for genotyping, gene expression and methylation, including Illumina Infinium, Affymetrix and Fluidigm.
Research Computing Core
The Research Computing Core provides access to computer resources including 4120 cores and 4.2 PB of storage.
Transgenics
The Transgenics Core provides access to genetically modified mice and cell lines.
Cellular Imaging
The Cellular Imaging Core provides microscopy facilities, including fluorescence microscopy techniques such as Fluorescence Correlation Spectroscopy (FCS), Fluorescence Lifetime Correlation Spectroscopy (FLCS), Fluorescence Lifetime Imaging Microscopy (FLIM), Total Internal Reflection Fluorescence Microscopy (TIRF), Photoactivated Localisation Microscopy (PALM), Spectral Imaging (SI) and Single Particle Tracking (SPT).
Research
Statistical and population genetics
The WHG has been involved in many international statistical genetics advances including the Wellcome Trust Case Control Consortia (WTCCC, WTCCC2), the 1000 Genomes Project and the International HapMap Project. |
https://en.wikipedia.org/wiki/Filamentation | Filamentation is the anomalous growth of certain bacteria, such as Escherichia coli, in which cells continue to elongate but do not divide (no septa formation). The cells that result from elongation without division have multiple chromosomal copies.
In the absence of antibiotics or other stressors, filamentation occurs at a low frequency in bacterial populations (4–8% short filaments and 0–5% long filaments in 1- to 8-hour cultures). The increased cell length can protect bacteria from protozoan predation and neutrophil phagocytosis by making ingestion of cells more difficult. Filamentation is also thought to protect bacteria from antibiotics, and is associated with other aspects of bacterial virulence such as biofilm formation.
The number and length of filaments within a bacterial population increases when the bacteria are exposed to different physical, chemical and biological agents (e.g. UV light, DNA synthesis-inhibiting antibiotics, bacteriophages). This is termed conditional filamentation. Some of the key genes involved in filamentation in E. coli include sulA, minCD and damX.
Filament formation
Antibiotic-induced filamentation
Some peptidoglycan synthesis inhibitors (e.g. cefuroxime, ceftazidime) induce filamentation by inhibiting the penicillin binding proteins (PBPs) responsible for crosslinking peptidoglycan at the septal wall (e.g. PBP3 in E. coli and P. aeruginosa). Because the PBPs responsible for lateral wall synthesis are relatively unaffected by cefuroxime and ceftazidime, cell elongation proceeds without any cell division and filamentation is observed.
DNA synthesis-inhibiting and DNA damaging antibiotics (e.g. metronidazole, mitomycin C, the fluoroquinolones, novobiocin) induce filamentation via the SOS response. The SOS response inhibits septum formation until the DNA can be repaired, this delay stopping the transmission of damaged DNA to progeny. Bacteria inhibit septation by synthesizing protein SulA, an FtsZ inhibitor that halts Z-ri |
https://en.wikipedia.org/wiki/E%C3%B6tv%C3%B6s%20rule | The Eötvös rule, named after the Hungarian physicist Loránd (Roland) Eötvös (1848–1919), enables the prediction of the surface tension of an arbitrary liquid pure substance at all temperatures. The density, molar mass and the critical temperature of the liquid have to be known. At the critical point the surface tension is zero.
The first assumption of the Eötvös rule is:
1. The surface tension is a linear function of the temperature.
This assumption is approximately fulfilled for most known liquids. When plotting the surface tension versus the temperature a fairly straight line can be seen which has a surface tension of zero at the critical temperature.
The Eötvös rule also gives a relation of the surface tension behaviour of different liquids in respect to each other:
2. The temperature dependence of the surface tension can be plotted for all liquids in a way that the data collapses to a single master curve. To do so either the molar mass, the density, or the molar volume of the corresponding liquid has to be known.
More accurate versions are found on the main page for surface tension.
The Eötvös rule
If V is the molar volume and Tc the critical temperature of a liquid, the surface tension γ is given by
γ V^(2/3) = k (Tc − T),
where k is a constant valid for all liquids. The Eötvös constant has a value of k = 2.1×10^−7 J/(K·mol^(2/3)).
More precise values can be gained when considering that the line normally passes the temperature axis 6 K before the critical point:
γ V^(2/3) = k (Tc − 6 K − T).
The molar volume V is given by the molar mass M and the density ρ:
V = M / ρ.
The term γ V^(2/3) is also referred to as the "molar surface tension" γmol:
γmol = γ V^(2/3).
A useful representation that prevents the use of the unit mol^(−2/3) is given by the Avogadro constant NA, rewriting the rule with the molecular volume V/NA:
γ (V / NA)^(2/3) = k′ (Tc − 6 K − T), where k′ = k / NA^(2/3).
As John Lennard-Jones and Corner showed in 1940 by means of the statistical mechanics the constant k′ is nearly equal to the Boltzmann constant.
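As a numeric illustration of the rule with the 6 K correction, the sketch below estimates the room-temperature surface tension of benzene; the property values (molar mass, density, critical temperature) are approximate reference figures supplied by us, not taken from the text:

```python
def eotvos_surface_tension(T, T_c, M, rho, k=2.1e-7):
    """Estimate surface tension (N/m) from the Eotvos rule
    gamma * V**(2/3) = k * (T_c - 6 - T), with molar volume V = M / rho."""
    V = M / rho  # molar volume, m^3/mol
    return k * (T_c - 6.0 - T) / V ** (2.0 / 3.0)

# Benzene near 25 C: M ~ 78.11 g/mol, rho ~ 876 kg/m^3, T_c ~ 562 K
gamma = eotvos_surface_tension(T=298.0, T_c=562.0, M=0.07811, rho=876.0)
# roughly 0.027 N/m, close to the measured ~0.028 N/m
```

Strongly associated liquids such as water deviate noticeably from this simple form, which is why separate empirical fits are used for them.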
Water
For water, the following equation is valid between 0 and 100 °C.
History
As a student, Eötvös started to research surface tension and |
https://en.wikipedia.org/wiki/Transpiration%20cooling | Transpiration cooling is a thermodynamic process in which cooling is achieved by moving a liquid or a gas through the wall of a structure to absorb a portion of the heat energy from the structure, while actively reducing the convective and radiative heat flux coming into the structure from the surrounding space.
One approach to transpiration cooling is to move liquid through small pores in the outer wall of a body leading to evaporation of the liquid to a gas via the physical mechanism of evaporative cooling. Other approaches are possible.
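A first-order energy balance shows why the evaporative approach is effective: the coolant absorbs sensible heat up to its boiling point plus the latent heat of vaporization. The sketch below uses illustrative water properties and an assumed wall heat flux (our numbers, not from the text), and neglects the additional blocking of incoming convective flux that transpiration provides:

```python
def coolant_mass_flux(q_wall, T_in, T_boil, c_p, h_fg):
    """Coolant mass flux (kg/s per m^2 of wall) needed to absorb a wall
    heat flux q_wall (W/m^2): sensible heating plus full evaporation."""
    return q_wall / (c_p * (T_boil - T_in) + h_fg)

# Water against an assumed 1 MW/m^2 heat flux:
# c_p ~ 4186 J/(kg K), latent heat h_fg ~ 2.26 MJ/kg
m_dot = coolant_mass_flux(q_wall=1.0e6, T_in=298.0, T_boil=373.0,
                          c_p=4186.0, h_fg=2.26e6)
# roughly 0.39 kg/s per square metre of wall
```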
Applications
Transpiration cooling is used in the aerospace industry, in jet and rocket engines. In 2018, researchers at the University of Oxford were experimentally testing transpiration cooling as a Thermal Protection System for Hypersonic Vehicles such as rockets or spaceplanes.
Transpiration cooling is one of a variety of cooling techniques that may be used to reduce regenerative cooling loads in rocket engines and subsequently reduce propellant requirements. Other techniques exist, such as film cooling, ablative cooling, radiative cooling, heat sink cooling and dump cooling.
Transpiration cooling is being considered for use in space vehicles reentering the Earth's atmosphere at hypersonic velocities where a transpirationally cooled outer skin could serve as a part of the thermal protection system of the reentering spacecraft.
SpaceX publicly mentioned such a system in 2019 for use on their Starship reusable second stage and orbital spacecraft to mitigate the harsh conditions of reentry.
The design concept envisioned a double stainless-steel skin, with active coolant flowing between the two layers, with some areas additionally containing multiple small pores that would allow for transpiration cooling.
After design and testing in terrestrial labs, SpaceX subsequently stated that although an alternative heat mitigation approach—using low-cost ceramic tiles on the windward side of Starship |
https://en.wikipedia.org/wiki/Scaffold/matrix%20attachment%20region | S/MARs (scaffold/matrix attachment regions), also called SARs (scaffold-attachment regions) or MARs (matrix-associated regions), are sequences in the DNA of eukaryotic chromosomes where the nuclear matrix attaches. As architectural DNA components that organize the genome of eukaryotes into functional units within the cell nucleus, S/MARs mediate the structural organization of the chromatin within the nucleus. These elements constitute anchor points of the DNA for the chromatin scaffold and serve to organize the chromatin into structural domains. Studies on individual genes led to the conclusion that the dynamic and complex organization of the chromatin mediated by S/MAR elements plays an important role in the regulation of gene expression.
Overview
It has been known for many years that a polymer meshwork, a so-called "nuclear matrix" or "nuclear scaffold", is an essential component of eukaryotic nuclei. This nuclear skeleton acts as a dynamic support for many specialized events concerning the readout and spread of genetic information (see below).
S/MARs map to non-random locations in the genome. They occur at the flanks of transcribed regions, in 5´-introns, and also at gene breakpoint cluster regions (BCRs). Being association points for common nuclear structural proteins S/MARs are required for authentic and efficient chromosomal replication and transcription, for recombination and chromosome condensation. S/MARs do not have an obvious consensus sequence. Although prototype elements consist of AT-rich regions several hundred base pairs in length, the overall base composition is definitely not the primary determinant of their activity. Instead, their function requires a pattern of "AT-patches" that confer the propensity for local strand unpairing under torsional strain.
Bioinformatics approaches support the idea that, by these properties, S/MARs not only separate a given transcriptional unit (chromatin domain) from its neighbors, but also provide platforms for |
https://en.wikipedia.org/wiki/BCS-FACS | BCS-FACS is the BCS Formal Aspects of Computing Science Specialist Group.
Overview
The FACS group, inaugurated on 16 March 1978, organizes meetings for its members and others on formal methods and related computer science topics. There is an associated journal, Formal Aspects of Computing, published by Springer, and a more informal FACS FACTS newsletter.
The group celebrated its 20th anniversary with a meeting at the Royal Society in London in 1998, with presentations by four eminent computer scientists, Mike Gordon, Tony Hoare, Robin Milner and Gordon Plotkin, all Fellows of the Royal Society.
From 2002 to 2008, and again since 2013, the Chair of BCS-FACS has been Jonathan Bowen. Jawed Siddiqi was Chair during 2008–2013. In December 2002, BCS-FACS organized a conference on the Formal Aspects of Security (FASec'02) at Royal Holloway, University of London. In 2004, FACS organized a major event at London South Bank University to celebrate its own 25th anniversary and also 25 Years of CSP (CSP25), attended by the originator of CSP, Sir Tony Hoare, and others in the field.
The group liaises with other related groups such as the Centre for Software Reliability, Formal Methods Europe, the London Mathematical Society Computer Committee, the Safety-Critical Systems Club, and the Z User Group. It has held joint meetings with other BCS specialist groups such as the Advanced Programming Group and BCSWomen.
FACS sponsors and supports meetings, such as the Refinement Workshop. It has often held a Christmas event each year, with a theme related to formal aspects of computing — for example, teaching formal methods and formal methods in industry. BCS-FACS supported the ABZ 2008 conference at the BCS London premises. In 2015, FACS hosted a two-day ProCoS Workshop on "Provably Correct Systems", with many former members of the ESPRIT ProCoS I and II projects and Working Group of the 1990s.
Evening seminars
In recent years, a series of evening seminars have been held, mainly at the |
https://en.wikipedia.org/wiki/Rigid%20belt%20actuator | A rigid belt actuator, also known as a push-pull belt actuator or zipper belt actuator, is a specialized mechanical linear actuator used in push-pull and lift applications. The actuator is a belt and pinion device that forms a telescoping beam or column member to transmit traction and thrust. Rigid belt actuators can move dynamic loads up to approximately 230 pounds over about 3 feet of travel.
Principle of operation
Rigid belt actuators can be thought of as rack and pinion devices that use a flexible rack. Rigid belt actuators use two reinforced plastic ribbed belts that engage with pinions mounted on drive shafts within a housing. The belts have evenly spaced load-bearing blocks on the non-ribbed face. As the pinions spin, the belts are rotated 90 degrees through the housing, which interlocks the blocks like a zipper into a rigid linear form. The resulting beam or column is effective at resisting tension and compression (buckling). Because the actuating member can fold on itself, it can be stored relatively compactly in a storage magazine, either in an overlapping or coiled arrangement. The actuator is driven by an electric motor.
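Because the belt is driven by a pinion, the extension kinematics follow ordinary rack-and-pinion relations, with the belt acting as the rack. A sketch with invented pinion dimensions and motor speed (illustrative values only):

```python
import math

def extension_speed(pinion_pitch_diameter_m, motor_rpm):
    """Linear speed of the rigid belt column (m/s):
    one pinion revolution advances the belt by one pitch circumference."""
    return math.pi * pinion_pitch_diameter_m * motor_rpm / 60.0

# Hypothetical 40 mm pitch-diameter pinion driven at 120 rpm
v = extension_speed(0.040, 120.0)  # about 0.25 m/s
```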
Development
A rigid belt actuator is effectively a non-metallic variation of the rigid chain actuator. But while the interlocking chain actuator has been around since the middle of the 20th century, rigid belt technology did not emerge until the new millennium. Joël Bourc'His received a patent for his "Linear Belt Actuator" in 2007.
See also
Linear actuator
Rigid chain actuator |
https://en.wikipedia.org/wiki/Triple%20parentheses | Triple parentheses or triple brackets, or an echo, often referred to in print as an (or the) (((echo))), are an antisemitic symbol that has been used to highlight the names of individuals thought to be Jews, and the names of organizations thought to be owned by Jews. This use of the symbol originated from the alt-right-affiliated, neo-Nazi blog The Right Stuff, whose editors said that the symbol refers to the historic actions of Jews which have caused their surnames to "echo throughout history". The triple parentheses have been adopted as an online stigma by antisemites, neo-Nazis, browsers of the "Politically Incorrect" board on 4chan, and white nationalists to identify individuals of Jewish background as targets for online harassment, such as Jewish political journalists critical of Donald Trump during his 2016 election campaign.
Use of the notation was brought to mainstream attention by an article posted by Mic in June 2016. The reports also led Google to remove a browser extension meant to automatically place the "echo" notation around Jewish names on web pages, and led the Anti-Defamation League to classify the notation as a form of hate speech. In the wake of these actions, some users, both Jews and non-Jews, have intentionally placed their own names within triple parentheses as an act of reappropriation or solidarity.
Prior to its use as an antisemitic label or identifier, ((( screen name ))) had been used in online communities such as AOL to indicate that a user was "cyberhugging" the user with the specified screen name.
Use
The use of the "echo" originated from a 2014 episode of The Daily Shoah, a podcast produced by the alt-right, antisemitic, white nationalist blog The Right Stuff. The podcast includes a segment known as the "Merchant Minute", where Jewish names are spoken with a cartoonish echo effect to single them out. The editors of The Right Stuff explained that the use of an echo, represented in text using triple parentheses, was an internal |
https://en.wikipedia.org/wiki/Square%20cupola | In geometry, the square cupola, sometimes called lesser dome, is one of the Johnson solids. It can be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagon.
Formulae
The following formulae for the circumradius, surface area, volume, and height can be used if all faces are regular, with edge length a:
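The excerpt omits the formulae themselves; for a square cupola with edge length a, the standard results (quoted here from well-known geometry, not from the text) are:

```latex
% Square cupola, edge length a:
R = \frac{a}{2}\sqrt{5 + 2\sqrt{2}} \approx 1.3989\,a \quad \text{(circumradius)}
A = \left(7 + 2\sqrt{2} + \sqrt{3}\right) a^2 \approx 11.5605\,a^2 \quad \text{(surface area)}
V = \left(1 + \tfrac{2\sqrt{2}}{3}\right) a^3 \approx 1.9428\,a^3 \quad \text{(volume)}
h = \frac{a}{\sqrt{2}} \approx 0.7071\,a \quad \text{(height)}
```

The surface area sums five squares, four equilateral triangles and one regular octagon; the height follows from a lateral edge of length a spanning the octagon and square circumradii.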
Related polyhedra and honeycombs
Other convex cupolae
Dual polyhedron
The dual of the square cupola has 8 triangular and 4 kite faces.
Crossed square cupola
The crossed square cupola is one of the nonconvex Johnson solid isomorphs, being topologically identical to the convex square cupola. It can be obtained as a slice of the nonconvex great rhombicuboctahedron or quasirhombicuboctahedron, analogously to how the square cupola may be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagram.
It may be seen as a cupola with a retrograde square base, so that the squares and triangles connect across the bases in the opposite way to the square cupola, hence intersecting each other.
Honeycombs
The square cupola is a component of several nonuniform space-filling lattices:
with tetrahedra;
with cubes and cuboctahedra; and
with tetrahedra, square pyramids and various combinations of cubes, elongated square pyramids and elongated square bipyramids. |
https://en.wikipedia.org/wiki/Alpha%20%26%20Omega%20%28book%29 | Alpha & Omega: The Search for the Beginning and End of the Universe is the second non-fiction book by Charles Seife, published by Viking, a division of Penguin Putnam, in 2003.
Background
It is a survey of historic and contemporary efforts at cosmology: to describe the universe, to trace it back to its origins (including the Big Bang theory), and to determine the universe's eventual end-state. The book's title refers to the Alpha and Omega appellation for Christ, as found in the Book of Revelation. A paperback reprint was published in 2004, also from Penguin.
Table of contents
Preface
"The First Cosmology: The Golden Age of the Gods"
"The First Cosmological Revolution: The Copernican Theory"
"The Second Cosmological Revolution: Hubble and the Big Bang"
"The Third Revolution Begins: The Universe Amok"
"The Music of the Spheres: The Cosmic Microwave Background"
"The Dark Universe: What's the Matter with Matter?"
"Darker Still: The Enigma of Exotic Dark Matter"
"The Big Bang in Our Backyard: The Birth of Baryons"
"The Good Nus: The Exotic Neutrino"
"Supersymmetry: Fearlessly Framing the Laws of Matter"
"Seeing the Invisible: MACHOs, WIMPs, and Illuminating the Darkest Regions of the Universe"
"The Deepest Mystery in Physics: Λ, the Vacuum, and Inflation", Λ being the symbol for the Cosmological constant
"Wrinkles in Spacetime: Gravitational Waves and the Early Universe"
"Beyond the Third Revolution: Voyage to the Ends of Time"
"Appendix A: Tired Light Retired"
"Appendix B: Where Does Matter Come From?"
"Appendix C: Nobel Prizes in Physics—Past and Future" Seife predicts which scientists are likely to win a Nobel Prize for their work in cosmology.
"Appendix D: Some Experiments to Watch"
Glossary, Select Bibliography, Acknowledgements, Index
Reception
The New York Times praised the book, describing it as "A primer on the history and state of cosmology that is easy to read and understand… Seife's book shines." The Los Angeles Times described it |
https://en.wikipedia.org/wiki/Flag-waving | Flag-waving is a fallacious argument or propaganda technique used to justify an action based on an undue connection to nationalism or patriotism, or on a claimed benefit for an idea, group or country. It is a variant of argumentum ad populum. This fallacy appeals to the emotions of the audience instead of to logic, aiming to manipulate the audience in order to win an argument. All ad populum fallacies are based on the presumption that the recipients already have certain beliefs, biases, and prejudices about the issue.
Because flag-waving rests on connecting to a symbol of patriotism or nationalism, it is a form of appeal to stirring symbols, which can be based on an undue connection not only to nationalism but also to religious or cultural symbols: for example, a politician appearing on TV with children, farmers, teachers, or the "common" man.
The act of flag-waving is a superficial display of support or loyalty to, for example, a nation or a political party. |
https://en.wikipedia.org/wiki/Pancreatic%20elastase | Pancreatic elastase is a form of elastase that is produced in the acinar cells of the pancreas, initially produced as an inactive zymogen and later activated in the duodenum by trypsin. Elastases form a subfamily of serine proteases, characterized by a distinctive structure consisting of two beta barrel domains converging at the active site, that hydrolyze amides and esters in many proteins in addition to elastin, a protein of the connective tissue that holds organs together. Pancreatic elastase 1 is a serine endopeptidase, a specific type of protease that has the amino acid serine at its active site. Although the recommended name is pancreatic elastase, it can also be referred to as elastase-1, pancreatopeptidase, PE, or serine elastase.
The first isozyme, pancreatic elastase 1, was initially thought to be expressed in the pancreas. However, it was later discovered that it was the only chymotrypsin-like elastase that was not expressed in the pancreas. In fact, pancreatic elastase is expressed in the basal layers of the epidermis (at the protein level). Hence pancreatic elastase 1 has been renamed elastase 1 (ELA1) or chymotrypsin-like elastase family, member 1 (CELA1). For a period of time, it was thought that ELA1/CELA1 was not transcribed into a protein. However, it was later discovered that it was expressed in skin keratinocytes.
Clinical literature that describes human elastase 1 activity in the pancreas or fecal material is actually referring to chymotrypsin-like elastase family, member 3B (CELA3B).
Structure
Pancreatic elastase is a compact globular protein with a hydrophobic core. This enzyme is formed by three subunits. Each subunit binds one calcium ion (cofactor). There are three important metal-binding sites in amino acids 77, 82, 87. The catalytic triad, located in the active site, is formed by three hydrogen-bonded amino acid residues (H71, D119, S214), and plays an essential role in the cleaving ability of all proteases. It is composed of a single peptide chai |
https://en.wikipedia.org/wiki/David%20Bressoud | David Marius Bressoud (born March 27, 1950, in Bethlehem, Pennsylvania) is an American mathematician who works in number theory, combinatorics, and special functions. As of 2019 he is DeWitt Wallace Professor of Mathematics at Macalester College, Director of the Conference Board of the Mathematical Sciences and a former President of the Mathematical Association of America.
Life and education
Bressoud was born March 27, 1950, in Bethlehem, Pennsylvania.
He became interested in mathematics in the seventh grade, where he had a teacher who encouraged him and gave him challenging problems. He attended Albert Wilansky's National Science Foundation summer program at Lehigh University between his junior and senior years in high school, where he also spent most of his time working on problems.
He graduated from Swarthmore College in 1971. When he started at Swarthmore he had not yet decided on a major, but after his first year he decided to get out of college as quickly as possible and had no interest in graduate school, and the quickest way out was to major in mathematics.
After graduating Bressoud became a Peace Corps volunteer in Antigua from 1971 to 1973, teaching math and science at Clare Hall School. While in Antigua he realized he missed mathematics, and kept working on it as a hobby. After the Peace Corps he went to graduate school at Temple University, and received his PhD in 1977 under Emil Grosswald.
Career
After receiving his PhD, Bressoud taught at Pennsylvania State University from 1977 to 1994, reaching the rank of full professor in 1986. During this period he held visiting positions at the Institute for Advanced Study (1979–1980), the University of Wisconsin (1980–81 and 1982), the University of Minnesota (1983 and 1998), and the University of Strasbourg (1984–85).
His focus at Penn State was mathematics research, but in the late 1980s he became more interested in teaching and writing textbooks, and he decided to make a move. He said in a 2008 intervie |
https://en.wikipedia.org/wiki/Cattle | Cattle (Bos taurus) are large, domesticated, bovid ungulates. They are prominent modern members of the subfamily Bovinae and the most widespread species of the genus Bos. Mature female cattle are referred to as cows and mature male cattle are referred to as bulls. Colloquially, young female cattle (heifers), young male cattle (bullocks), and castrated male cattle (steers) are also referred to as "cows".
Cattle are commonly raised as livestock for meat (beef or veal, see beef cattle), for milk (see dairy cattle), and for hides, which are used to make leather. They are used as riding animals and draft animals (oxen or bullocks, which pull carts, plows and other implements). Another product of cattle is their dung, which can be used to create manure or fuel. In some regions, such as parts of India, cattle hold significant religious importance. Cattle, mostly small breeds such as the Miniature Zebu, are also kept as pets.
Different types of cattle are common to different geographic areas. Taurine cattle are found primarily in Europe and temperate areas of Asia, the Americas, and Australia. Zebus (also called indicine cattle) are found primarily in India and tropical areas of Asia, America, and Australia. Sanga cattle are found primarily in sub-Saharan Africa. These types (which are sometimes classified as separate species or subspecies) are further divided into over 1,000 recognized breeds.
Around 10,500 years ago, taurine cattle were domesticated from as few as 80 wild aurochs progenitors in central Anatolia, the Levant and Western Iran. A separate domestication event occurred in the Indian subcontinent, which gave rise to zebu. According to the Food and Agriculture Organization (FAO), there are approximately 1.5 billion cattle in the world as of 2018. Cattle are the main source of greenhouse gas emissions from livestock, and are responsible for around 10% of global greenhouse gas emissions. In 2009, cattle became one of the first livestock animals to have a fully |
https://en.wikipedia.org/wiki/Arc%20fault | An arc fault is a high power discharge of electricity between two or more conductors. This discharge generates heat, which can break down the wire's insulation and trigger an electrical fire. Arc faults can range in current from a few amps up to thousands of amps, and are highly variable in strength and duration.
Some common causes of arc faults are loose wire connections, overheated wires, or wires pinched by furniture.
Location and detection
Two types of wiring protection are standard thermal breakers and arc fault circuit breakers. Thermal breakers require an overload condition long enough that a heating element in the breaker trips the breaker off. In contrast, arc fault circuit breakers use magnetic or other means to detect increases in current draw much more quickly. Without such protection, visually detecting arc faults in defective wiring is very difficult, as the arc fault occurs in a very small area. A problem with arc fault circuit breakers is that they are more likely to produce false positives due to normal circuit behaviors appearing to be arc faults. For instance, lightning strikes on the outside of an aircraft mimic arc faults in their voltage and current profiles. Research has largely eliminated such false positives, however, providing the ability to quickly identify and locate repairs that need to be done.
In simple wiring systems visual inspection can lead to finding the fault location, but in complex wiring systems, for instance aircraft wiring, devices such as a time-domain reflectometer are helpful, even on live wires.
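The distance to a fault follows directly from the reflection's round-trip delay measured by a time-domain reflectometer. A hypothetical sketch (the function name and the velocity factor, which is cable-dependent, are illustrative assumptions):

```python
def tdr_fault_distance(delay_s, velocity_factor=0.7, c=299_792_458.0):
    """Distance to an impedance discontinuity from a TDR round-trip delay.

    A pulse travels down the cable at velocity_factor * c, reflects at the
    fault, and returns; the one-way distance is half the round trip.
    """
    return velocity_factor * c * delay_s / 2.0

# A 100 ns round-trip delay on a typical cable places the fault ~10.5 m away.
print(tdr_fault_distance(100e-9))
```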
See also
Arc flash
Arc-fault circuit interrupter
Time-domain reflectometer |
https://en.wikipedia.org/wiki/Cyclotomic%20fast%20Fourier%20transform | The cyclotomic fast Fourier transform is a type of fast Fourier transform algorithm over finite fields. This algorithm first decomposes a DFT into several circular convolutions, and then derives the DFT results from the circular convolution results. When applied to a DFT over GF(2^m), this algorithm has a very low multiplicative complexity. In practice, since there usually exist efficient algorithms for circular convolutions with specific lengths, this algorithm is very efficient.
Background
The discrete Fourier transform over finite fields finds widespread application in the decoding of error-correcting codes such as BCH codes and Reed–Solomon codes. Generalized from the complex field, a discrete Fourier transform of a sequence f = (f_0, …, f_{N−1}) over a finite field GF(p^m) is defined as
F_j = Σ_{i=0}^{N−1} f_i α^{ij},  0 ≤ j ≤ N − 1,
where α is the N-th primitive root of 1 in GF(p^m). If we define the polynomial representation of f as
f(x) = Σ_{i=0}^{N−1} f_i x^i,
it is easy to see that F_j is simply f(α^j). That is, the discrete Fourier transform of a sequence converts it to a polynomial evaluation problem.
Written in matrix format, F = V f, where V is the N × N Vandermonde matrix with entries V_{j,i} = α^{ij}.
Direct evaluation of the DFT has an O(N²) complexity. Fast Fourier transforms are just efficient algorithms evaluating the above matrix-vector product.
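As a sketch of the definition and of the O(N²) direct evaluation it motivates, restricted to a prime field GF(p) for simplicity (the function name and the GF(5) example are illustrative assumptions):

```python
def finite_field_dft(f, alpha, p):
    """Direct O(N^2) DFT over the prime field GF(p).

    alpha must be a primitive N-th root of unity mod p, where N = len(f);
    each output F_j is the polynomial evaluation f(alpha^j) mod p.
    """
    n = len(f)
    return [sum(f[i] * pow(alpha, i * j, p) for i in range(n)) % p
            for j in range(n)]

# GF(5), N = 4: alpha = 2 has order 4 mod 5 (2^4 = 16 ≡ 1 mod 5).
print(finite_field_dft([1, 2, 3, 4], 2, 5))
```

Fast Fourier transforms, including the cyclotomic FFT described below, compute the same outputs with fewer field multiplications.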
Algorithm
First, we define a linearized polynomial over GF(p^m) as
L(x) = Σ_i l_i x^{p^i},  l_i ∈ GF(p^m).
L(x) is called linearized because L(x_1 + x_2) = L(x_1) + L(x_2), which comes from the fact that (x_1 + x_2)^p = x_1^p + x_2^p for elements x_1, x_2 ∈ GF(p^m).
Notice that p is invertible modulo N because N must divide the order p^m − 1 of the multiplicative group of the field GF(p^m). So, the elements {0, 1, …, N − 1} can be partitioned into l + 1 cyclotomic cosets modulo N:
{0, 1, …, N − 1} = {0} ∪ C_1 ∪ ⋯ ∪ C_l,  C_i = {k_i, p k_i, p² k_i, …, p^{m_i − 1} k_i},
where k_i p^{m_i} ≡ k_i (mod N). Therefore, the input to the Fourier transform can be rewritten as
f(x) = Σ_{i=0}^{l} L_i(x^{k_i}),  L_i(y) = Σ_{t=0}^{m_i − 1} f_{p^t k_i mod N} y^{p^t}.
In this way, the polynomial representation is decomposed into a sum of linearized polynomials, and hence F_j is given by
F_j = f(α^j) = Σ_{i=0}^{l} L_i(α^{j k_i}).
Expanding the element α^{j k_i} ∈ GF(p^{m_i}) with a proper basis {β_{i,0}, …, β_{i,m_i − 1}}, we have α^{j k_i} = Σ_{s=0}^{m_i − 1} a_{i,j,s} β_{i,s} where a_{i,j,s} ∈ GF(p), and by the property of the linearized polynomial L_i, we have
F_j = Σ_{i=0}^{l} Σ_{s=0}^{m_i − 1} a_{i,j,s} L_i(β_{i,s}).
This equation can be rewritten in matrix form as F = A L Π f, where A is an N × N matrix over GF(p) that contains the elements a_{i,j,s}, L is a block diagonal matrix, and Π is a permutation matrix |
https://en.wikipedia.org/wiki/Bear%20JJ1 | Bear JJ1 (2004 – 26 June 2006) was a brown bear whose travels and exploits in Austria and Germany in the first half of 2006 drew international attention. JJ1, also known as Bruno in the German press (some newspapers also gave the bear different names, such as Beppo or Petzi), is believed to have been the first brown bear on German soil in 170 years.
Origin
JJ1 was originally part of an EU-funded €1 million conservation project in Italy, but had walked across to Austria and into Germany. A spokesman said that there had been "co-ordination" between Italy, Austria and Slovenia to ensure the bear's welfare but apparently Germany had not been informed. The Life Ursus reintroduction project of the Italian province of Trento had introduced 10 Slovenian bears in the region, monitoring them. JJ1 was the first son of Jurka and Joze (thus the name JJ1); his younger brother JJ3 also showed an aggressive character, wandered into Switzerland in 2008, and was killed there. Because of this second problem the mother Jurka was put in captivity in Italy, despite protests by environmentalists; park authorities maintained that 50% of the incidents involving bears had been caused by Jurka or her descendants.
In April 2023, his sister JJ4 killed a 26-year-old jogger in the Trentino province of northern Italy. In the summer of 2020, she had already attacked and injured a man and his son on Monte Peller and was ordered to be killed, but animal rights activists prevented that.
Overview
Previously, the last sighting of a bear in what is now Germany was recorded in 1838 when hunters shot a bear in Bavaria. Initially heralded as a welcome visitor and a symbol of the success of endangered species reintroduction programs, JJ1's dietary preferences for sheep, chickens, and beehives led government officials to believe that he could become a threat to humans, and they ordered that he be shot or captured. Public objection to the order resulted in its revision, and the German government tried to |
https://en.wikipedia.org/wiki/ESET | ESET, s.r.o., is a Slovak software company specializing in cybersecurity. ESET's security products are made in Europe and provide security software in over 200 countries and territories worldwide, and its software is localized into more than 30 languages.
The company was founded in 1992 in Bratislava, Slovakia. However, its history dates back to 1987, when two of the company's founders, Miroslav Trnka and Peter Paško, developed their first antivirus program called NOD. This sparked an idea between friends to help protect PC users and soon grew into an antivirus software company. At present, ESET is recognized as Europe's biggest privately held cybersecurity company.
History
1987–1992
The product NOD was launched in Czechoslovakia when the country was part of the Soviet Union's sphere of influence. Under the communist regime, private entrepreneurship was banned. It was not until 1992 that Miroslav Trnka and Peter Paško, together with Rudolf Hrubý, established ESET as a privately owned limited liability company in the former Czechoslovakia. In parallel with NOD, the company also started developing Perspekt.
2003–2017
In 2013, ESET launched WeLiveSecurity, a blog site dedicated to a vast spectrum of security-related topics.
December 2017 marked the 30th anniversary of the company's first security product. To mark its accomplishments, the company released a short documentary describing the company's evolution from the perspective of founders Miroslav Trnka and Peter Paško. In the same year, the company partnered with Google to integrate its technology into Chrome Cleanup.
2018–present
In December 2018, ESET partnered with No More Ransom, a global initiative that provides victims of ransomware decryption keys, thus removing the pressure to pay attackers. The initiative is supported by Interpol and has been joined by various national police forces. ESET has developed technologies to address the threat of ransomware and has produced papers documenting its evoluti |
https://en.wikipedia.org/wiki/Interquartile%20mean | The interquartile mean (IQM) (or midmean) is a statistical measure of central tendency based on the truncated mean of the interquartile range. The IQM is very similar to the scoring method used in sports that are evaluated by a panel of judges: discard the lowest and the highest scores; calculate the mean value of the remaining scores.
Calculation
In calculation of the IQM, only the data between the first and third quartiles is used, and the lowest 25% and the highest 25% of the data are discarded.
x_IQM = (2/n) Σ_{i = n/4 + 1}^{3n/4} x_i, assuming the values have been ordered.
Examples
Dataset size divisible by four
The method is best explained with an example. Consider the following dataset:
5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6
First sort the list from lowest-to-highest:
1, 3, 4, 5, 6, 6, 7, 7, 8, 8, 9, 38
There are 12 observations (datapoints) in the dataset, thus we have 4 quartiles of 3 numbers each. Discarding the lowest 3 and the highest 3 values leaves:
5, 6, 6, 7, 7, 8
We now have 6 of the 12 observations remaining; next, we calculate the arithmetic mean of these numbers:
xIQM = (5 + 6 + 6 + 7 + 7 + 8) / 6 = 6.5
This is the interquartile mean.
For comparison, the arithmetic mean of the original dataset is
(5 + 8 + 4 + 38 + 8 + 6 + 9 + 7 + 7 + 3 + 1 + 6) / 12 = 8.5
due to the strong influence of the outlier, 38.
Dataset size not divisible by four
The above example consisted of 12 observations in the dataset, which made the determination of the quartiles very easy. Of course, not all datasets have a number of observations that is divisible by 4. We can adjust the method of calculating the IQM to accommodate this. So ideally we want to have the IQM equal to the mean for symmetric distributions, e.g.:
1, 2, 3, 4, 5, 6
has a mean value xmean = 3.5, and since it is a symmetric distribution, xIQM = 3.5 would be desired.
We can solve this by using a weighted average of the quartiles and the interquartile dataset:
Consider the following dataset of 9 observations:
1, 3, 5, 7, 9, 11, 13, 15, |
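The weighted-average rule can be sketched in a general form that handles any dataset size, giving fractional weight to the observations at the quartile boundaries and reducing to plain quartile-discarding when the size is divisible by four (a sketch, not a definitive implementation):

```python
def interquartile_mean(data):
    """Weighted interquartile mean for a dataset of any size.

    n/4 observations are discarded from each end; when n/4 is fractional,
    the first and last kept observations get the leftover partial weight.
    The effective number of observations averaged is always n/2.
    """
    xs = sorted(data)
    n = len(xs)
    cut = n / 4.0                # observations to remove from each end
    k = int(cut)                 # whole observations removed
    frac = 1.0 - (cut - k)       # weight of the boundary observations
    inner = xs[k + 1:n - k - 1]  # fully weighted middle observations
    total = frac * (xs[k] + xs[n - k - 1]) + sum(inner)
    return total / (n / 2.0)

print(interquartile_mean([5, 8, 4, 38, 8, 6, 9, 7, 7, 3, 1, 6]))  # → 6.5
```

For the symmetric dataset 1, 2, 3, 4, 5, 6 this returns 3.5, matching the mean as desired.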
https://en.wikipedia.org/wiki/IEEE%20Registration%20Authority | The IEEE Registration Authority is the administrative body that is responsible for registering and administering organizationally unique identifiers (OUI) and other types of identifiers which are used in the computer and electronics industries (Individual Address Blocks (IAB), Manufacturer IDs, Standard Group MAC Addresses, Unique Registration Numbers (URN), EtherType values, etc.)
The IEEE Registration Authority was formed in 1986 in response to a need for this service that was recognized by the P802 (LAN/MAN) standards group. The IEEE Registration Authority is currently recognized by ISO/IEC as the authorized registration authority to provide the service of globally assigning, administering, and registering OUIs.
Note: The term 'Registration' as used in this context is "the assignment of unambiguous names to objects in a way which makes the assignment available to interested parties". |
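As an illustration of what an OUI is in practice: the first three octets of a 48-bit MAC address form the OUI assigned by the IEEE Registration Authority. A minimal sketch (the helper name is hypothetical; real assignments are published by the IEEE RA):

```python
def oui_of(mac):
    """Extract the 24-bit OUI (first three octets) from a MAC address.

    Accepts colon- or hyphen-separated notation and normalizes to
    uppercase colon-separated form.
    """
    octets = mac.replace("-", ":").split(":")
    return ":".join(octets[:3]).upper()

print(oui_of("00:1a:2b:3c:4d:5e"))  # → 00:1A:2B
```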
https://en.wikipedia.org/wiki/Bound%20graph | In graph theory, a bound graph expresses which pairs of elements of some partially ordered set have an upper bound. Rigorously, any graph G is a bound graph if there exists a partial order ≤ on the vertices of G with the property that for any vertices u and v of G, uv is an edge of G if and only if u ≠ v and there is a vertex w such that u ≤ w and v ≤ w.
Bound graphs are sometimes referred to as upper bound graphs, but the analogously defined lower bound graphs comprise exactly the same class—any lower bound for ≤ is easily seen to be an upper bound for the dual partial order ≥. |
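The definition translates directly into a brute-force construction of the bound graph of a finite poset. A minimal sketch, using the divisibility order on a small set as a hypothetical example:

```python
from itertools import combinations

def bound_graph(elements, leq):
    """Edge set of the bound graph of a finite poset.

    leq(a, b) encodes the partial order a <= b; u-v is an edge iff
    u != v and some w in the poset satisfies u <= w and v <= w.
    """
    edges = set()
    for u, v in combinations(elements, 2):
        if any(leq(u, w) and leq(v, w) for w in elements):
            edges.add(frozenset((u, v)))
    return edges

# Divisibility order on {2, 3, 4, 6}: 2 and 3 share the upper bound 6,
# while 3 and 4 (and 4 and 6) have no common upper bound in this set.
print(bound_graph([2, 3, 4, 6], lambda a, b: b % a == 0))
```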
https://en.wikipedia.org/wiki/Hydrolethalus%20syndrome | Hydrolethalus syndrome (HLS) is a rare genetic disorder that causes improper fetal development, resulting in birth defects and, most commonly, stillbirth.
HLS is associated with HYLS1 mutations. The gene encoding HYLS1 is responsible for proper cilial development within the human body. Cilia are microscopic projections that allow sensory input and signalling output within cells, as well as cell motility. Dysfunction results in a range of abnormalities that are often the result of improper cell signalling. A variant form, HLS2, with additional mutations to the KIF7 gene, is less common. KIF7 also ensures correct cilia formation and function, specifically cilia stability and length.
Hydrolethalus syndrome (HLS) was first mistakenly identified in Finland, during a study on Meckel syndrome. Like HLS, Meckel syndrome presents with severe physiological abnormalities, namely disruptions to the central nervous system and the presence of extra fingers or toes (polydactyly). HLS can be distinguished from Meckel syndrome by analysing kidney function, which is dysfunctional in Meckel syndrome as a result of cyst formation.
Signs and symptoms
HLS presents as a range of lethal developmental abnormalities, which often result in either premature stillbirth or death shortly after birth. Rare cases of children born with HLS surviving for several months have been noted. A characteristic abnormality of HLS is an absence of brain tissue and midline structures, with the presence of excess brain fluid (hydrocephalus) as a result of abnormal development of the central nervous system. Other common defects include incomplete lung development, heart defects, a cleft lip or palate, polydactyly, and an abnormally small jaw. Stillbirth and an excess of amniotic fluid (polyhydramnios) are common during pregnancy with a HLS-affected foetus, with cases of up to 8 litres cited compared to the normal 1 litre. Less common symptoms such as abnormally small eyes and a broad nose are also possib |
https://en.wikipedia.org/wiki/List%20of%20physics%20concepts%20in%20primary%20and%20secondary%20education%20curricula | This is a list of topics that are included in high school physics curricula or textbooks.
Mathematical Background
SI Units
Scalar (physics)
Euclidean vector
Motion graphs and derivatives
Pythagorean theorem
Trigonometry
Motion and forces
Motion
Force
Linear motion
Displacement
Speed
Velocity
Acceleration
Center of mass
Mass
Momentum
Newton's laws of motion
Work (physics)
Free body diagram
Rotational motion
Angular momentum (Introduction)
Angular velocity
Centrifugal force
Centripetal force
Circular motion
Tangential velocity
Torque
Conservation of energy and momentum
Energy
Conservation of energy
Elastic collision
Inelastic collision
Inertia
Moment of inertia
Momentum
Kinetic energy
Potential energy
Rotational energy
Electricity and magnetism
Ampère's circuital law
Capacitor
Coulomb's law
Diode
Direct current
Electric charge
Electric current
Alternating current
Electric field
Electric potential energy
Electron
Faraday's law of induction
Ion
Inductor
Joule heating
Lenz's law
Magnetic field
Ohm's law
Resistor
Transistor
Transformer
Voltage
Heat
Entropy
First law of thermodynamics
Heat
Heat transfer
Second law of thermodynamics
Temperature
Thermal energy
Thermodynamic cycle
Volume (thermodynamics)
Work (thermodynamics)
Waves
Wave
Longitudinal wave
Transverse wave
Standing Waves
Wavelength
Frequency
Light
Light ray
Speed of light
Sound
Speed of sound
Radio waves
Harmonic oscillator
Hooke's law
Reflection
Refraction
Snell's law
Refractive index
Total internal reflection
Diffraction
Interference (wave propagation)
Polarization (waves)
Vibrating string
Doppler effect
Gravity
Gravitational potential
Newton's law of universal gravitation
Newtonian constant of gravitation
See also
Outline of physics
Physics education |
https://en.wikipedia.org/wiki/Infectious%20diseases%20%28medical%20specialty%29 | Infectious diseases or ID, also known as infectiology, is a medical specialty dealing with the diagnosis and treatment of infections. An infectious diseases specialist's practice consists of managing nosocomial (healthcare-acquired) infections or community-acquired infections. An ID specialist investigates the cause of a disease to determine whether it is caused by bacteria, viruses, parasites, or fungi. Once the pathogen is known, an ID specialist can then run various tests to determine the best antimicrobial drug to kill the pathogen and treat the disease. While infectious diseases have always existed, the infectious disease specialty did not emerge until the late 20th century, after scientists and physicians in the 19th century paved the way with research on the sources of infectious disease and the development of vaccines.
Scope
Infectious diseases specialists typically serve as consultants to other physicians in cases of complex infections, and often manage patients with HIV/AIDS and other forms of immunodeficiency. Although many common infections are treated by physicians without formal expertise in infectious diseases, specialists may be consulted for cases where an infection is difficult to diagnose or manage. They may also be asked to help determine the cause of a fever of unknown origin.
Specialists in infectious diseases can practice both in hospitals (inpatient) and clinics (outpatient). In hospitals, specialists in infectious diseases help ensure the timely diagnosis and treatment of acute infections by recommending the appropriate diagnostic tests to identify the source of the infection and by recommending appropriate management such as prescribing antibiotics to treat bacterial infections. For certain types of infections, involvement of specialists in infectious diseases may improve patient outcomes. In clinics, specialists in infectious diseases can provide long-term care to patients with chronic infections such as HIV/AIDS.
History
Inf |
https://en.wikipedia.org/wiki/Transitional%20ballistics | Transitional ballistics, also known as intermediate ballistics, is the study of a projectile's behavior from the time it leaves the muzzle until the pressure behind the projectile is equalized, so it lies between internal ballistics and external ballistics.
The transitional period
Transitional ballistics is a complex field that involves a number of variables that are not fully understood; therefore, it is not an exact science. When the bullet reaches the muzzle of the barrel, the escaping gases are still, in many cases, at hundreds of atmospheres of pressure. Once the bullet exits the barrel, breaking the seal, the gases are free to move past the bullet and expand in all directions. This expansion is what gives gunfire its explosive sound (in conjunction with the sonic boom of the projectile), and is often accompanied by a bright flash as the gases combine with the oxygen in the air and finish combusting.
The propellant gases continue to exert force on the bullet and firearm for a short while after the bullet leaves the barrel. One of the essential elements of accurizing a firearm is to make sure that this force does not disrupt the bullet from its path. The worst case is a muzzle or muzzle device such as a flash-hider that is cut at a non-square angle, so that one side of the bullet leaves the barrel early; this will cause the gas to escape in an asymmetric pattern, and will push the bullet away from that side, causing shots to form a "string", where the shots cluster along a line rather than forming a normal Gaussian pattern.
Most firearms have muzzle velocities in excess of the ambient speed of sound, and even in subsonic cartridges the escaping gases will exceed the speed of sound, forming a shock wave. This wave will quickly slow as the expanding gas cools, dropping the speed of sound within the expanding gas, but at close range this shockwave can be very damaging. The muzzle blast from a high powered cartridge can literally shred soft objects in its vicinit |
https://en.wikipedia.org/wiki/VESA%20Plug%20and%20Display | VESA Plug and Display (abbreviated as P&D) is a video connector that carries digital signals for monitors, such as flat panel displays and video projectors, ratified by the Video Electronics Standards Association (VESA) in 1997. Introduced around the same time as the competing Digital Visual Interface (DVI, 1999) and VESA's own Digital Flat Panel (DFP, 1999) connectors, it was marketed as a replacement for the VESA Enhanced Video Connector (EVC, 1994). Unlike DVI, it never achieved widespread implementation.
The P&D connector shares the 30-pin plus quad-coax layout of EVC, which carries digital video, analog video, and data over Universal Serial Bus (USB) and IEEE 1394 (FireWire). At a minimum, the P&D connector is required to carry digital video, in which case the connector is designated P&D-D; when both digital and analog video are included, the connector is designated P&D-A/D.
Design
The P&D receptacle and plug are required to bear a standardized symbol to designate the standards with which it is compatible. The upper left quadrant designates analog video support. The upper right quadrant designates digital video support. The lower quadrants designate IEEE 1394 and USB support.
All P&D connectors are required to carry single-link TMDS digital video signal (max 160 MHz), and support VESA Display Data Channel version 2 at a minimum. Maximum resolution is 1600×1280 with a 60 Hz refresh rate.
Analogue video signals, if supported, must be provided as three separate color channels (red / green / blue) along with one composite or two (horizontal & vertical) sync signals. The nominal impedance of each signal line is 75 Ω and each channel must be capable of carrying a bandwidth of at least 2.4 GHz. The type designation for the analogue video signals designates the voltage values of the signals only, including the Type 4 (VESA) analog DC protocol introduced with EVC:
The P&D connector supports optional charging power at 18–20 VDC and up to 1.5 A. In addition, a s |
https://en.wikipedia.org/wiki/Active%20Body%20Control | Active Body Control, or ABC, is the Mercedes-Benz brand name used to describe electronically controlled hydropneumatic suspension.
This suspension combines a high level of ride quality with control of the vehicle body motions, and therefore virtually eliminates body roll in many driving situations including cornering, accelerating, and braking.
Mercedes-Benz has been experimenting with these capabilities for automobile suspension since the air suspension of the 1963 600 and the hydropneumatic (fluid and air) suspension of the 1974 6.9.
ABC was long offered only on rear-wheel-drive models, as all-wheel-drive 4MATIC models were available only with the Airmatic semi-active air suspension; the 2019 Mercedes-Benz GLE 450 4MATIC was the first AWD model to offer ABC.
The production version was introduced at the 1999 Geneva Motor Show on the new Mercedes-Benz CL-Class C215.
Description
In the ABC system, a computer detects body movement from sensors located throughout the vehicle, and controls the action of the active suspension with the use of hydraulic servomechanisms. The hydraulic pressure to the servos is supplied by a high-pressure radial-piston hydraulic pump operating at 3,000 psi. Accumulators regulate the hydraulic pressure by means of an enclosed nitrogen bubble separated from the hydraulic fluid by a membrane.
A total of 13 sensors continually monitor body movement and vehicle level and supply the ABC controller with new data every ten milliseconds. Four level sensors, one at each wheel measure the ride level of the vehicle, three accelerometers measure the vertical body acceleration, one acceleration sensor measures the longitudinal and one sensor the transverse body acceleration. As the ABC controller receives and processes data, it operates four hydraulic servos, each mounted on an air and pressurized hydraulic fluid strut, beside each wheel.
Almost instantaneously, the servo regulated suspension generates counter forces to body lean, dive and squa |
https://en.wikipedia.org/wiki/Bell%20diagonal%20state | Bell diagonal states are a class of bipartite qubit states that are frequently used in quantum information and quantum computation theory.
Definition
The Bell diagonal state is defined as the probabilistic mixture of the four Bell states |B_1⟩, …, |B_4⟩.
In density operator form, a Bell diagonal state is defined as ρ_Bell = Σ_{i=1}^{4} p_i |B_i⟩⟨B_i|,
where {p_i} is a probability distribution. Since Σ_i p_i = 1, a Bell diagonal state is determined by three real parameters. The maximum probability of a Bell diagonal state is defined as p_max = max_i p_i.
Properties
1. A Bell-diagonal state is separable if all the probabilities are less than or equal to 1/2, i.e., p_max ≤ 1/2.
2. Many entanglement measures have simple formulas for entangled Bell-diagonal states:
Relative entropy of entanglement: E_R = 1 − h(p_max), where h is the binary entropy function.
Entanglement of formation: E_F = h(1/2 + √(p_max(1 − p_max))), where h is the binary entropy function.
Negativity: N = p_max − 1/2
Log-negativity: E_N = log₂(2 p_max)
3. Any 2-qubit state where the reduced density matrices are maximally mixed, ρ_A = ρ_B = I/2, is Bell-diagonal in some local basis. Viz., there exist local unitaries U_A, U_B such that (U_A ⊗ U_B) ρ (U_A ⊗ U_B)† is Bell-diagonal.
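Because the common entanglement measures of an entangled Bell-diagonal state depend only on p_max, they are easy to compute. A sketch assuming the standard closed forms (negativity p_max − 1/2, log-negativity log₂(2·p_max), relative entropy of entanglement 1 − h(p_max), entanglement of formation h(1/2 + √(p_max(1 − p_max))); conventions vary across the literature):

```python
from math import log2, sqrt

def h(p):
    """Binary entropy function (base-2 logarithm)."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def bell_diagonal_measures(probs):
    """Entanglement measures of a Bell-diagonal state from its four
    Bell-state probabilities; separable iff max(probs) <= 1/2."""
    p = max(probs)
    if p <= 0.5:
        return {"separable": True}
    return {
        "separable": False,
        "relative_entropy": 1 - h(p),
        "formation": h(0.5 + sqrt(p * (1 - p))),
        "negativity": p - 0.5,
        "log_negativity": log2(2 * p),
    }

print(bell_diagonal_measures([0.7, 0.1, 0.1, 0.1]))
```

For p_max = 1 (a pure Bell state) these give negativity 1/2 and log-negativity 1, the maximal two-qubit values in this convention.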
https://en.wikipedia.org/wiki/Antimicrobial%20nanotechnology | Antimicrobial nanotechnology is the study of using biofilms to disrupt a microbe's cell membrane, deliver an electric charge to the microbe, and cause immediate cellular death via a "mechanical kill" process, preventing the original microbe from mutating into a superbug. The biofilms are made up of long atomic chains that can breach the cell wall. These spikes are roughly the size of a human hair and are far too small to injure large cells in mammals. These atom chains have a significant positive charge that attracts bacteria that are negatively charged. A new class of antimicrobial has been created by applying nanotechnology to the challenge of superbugs and multiple drug resistance organisms.
Problem statement
According to a report published in the Archives of Internal Medicine on 22 February 2010, health care–associated infections affect 1.7 million hospitalizations per year.
The most prevalent nosocomial infections can live or stay on surfaces for months, posing a continuing transmission risk. On dry surfaces, most gram-positive bacteria, including Enterococcus spp. (including VRE), Staphylococcus aureus (including MRSA), and Streptococcus pyogenes, can persist for months.
VRE has been cultured from frequently touched objects and has been found to survive on surfaces for more than three days. Dried cotton fabrics have been shown to support Enterococci that are resistant to vancomycin for up to 18 hours and fungi for more than five days.
Nanotechnology antimicrobials are promising because they limit the spread of bacteria by lowering the number of infection agents at frequent contact points (doorknobs, rails, tables, etc.). These new treatments have been certified by the Environmental Protection Agency and are being considered for use in hospitals and other settings where community-acquired illnesses spread quickly, such as cruise ships and jails. Environmental measures and adequate antibiotic use are the first steps in preventing the emergence of superbugs. |
https://en.wikipedia.org/wiki/Svante%20Janson | Carl Svante Janson (born 21 May 1955) is a Swedish mathematician. A member of the Royal Swedish Academy of Sciences since 1994, Janson has been the chaired professor of mathematics at Uppsala University since 1987.
In mathematical analysis, Janson has publications in functional analysis (especially harmonic analysis) and probability theory. In mathematical statistics, Janson has made contributions to the theory of U-statistics. In combinatorics, Janson has publications in probabilistic combinatorics, particularly random graphs, and in the analysis of algorithms: in the study of random graphs, Janson introduced the use of U-statistics and the Hoeffding decomposition.
Janson has published four books and over 300 academic papers. He has an Erdős number of 1.
Biography
Svante Janson has already had a long career in mathematics because he started research at a very young age.
From prodigy to docent
A child prodigy in mathematics, Janson took high-school and even university classes while in primary school. He was admitted in 1968 to Gothenburg University at age 12. After his 1968 matriculation at Uppsala University at age 13, Janson obtained the following degrees in mathematics: a "candidate of philosophy" (roughly an "honours" B.S. with a thesis) at age 14 (in 1970) and a doctor of philosophy in 1977, awarded on his 22nd birthday. His doctoral dissertation was supervised by Lennart Carleson, who had himself received his doctoral degree at age 22.
After having earned his doctorate, Janson was a postdoc with the Mittag-Leffler Institute from 1978 to 1980. Thereafter he worked at Uppsala University. Janson's ongoing research earned him another PhD from Uppsala University in 1984 – this second doctoral degree being in mathematical statistics; the supervisor was Carl-Gustav Esseen.
In 1984, Janson was hired by Stockholm University as docent (roughly associate professor in the USA).
Professorships
In 1985 Janson returned t |
https://en.wikipedia.org/wiki/Skewb%20Diamond | The Skewb Diamond is an octahedron-shaped combination puzzle similar to the Rubik's Cube. It has 14 movable pieces which can be rearranged in a total of 138,240 possible combinations. This puzzle is the dual polyhedron of the Skewb. It was invented by Uwe Mèffert, a German puzzle inventor and designer.
Description
The Skewb Diamond has 6 octahedral corner pieces and 8 triangular face centers. All pieces can move relative to each other. It is a deep-cut puzzle; its planes of rotation bisect it.
It is very closely related to the Skewb, and shares the same piece count and mechanism. However, the triangular "corners" present on the Skewb have no visible orientation on the Skewb Diamond, and the square "centers" gain a visible orientation on the Skewb Diamond. In other words, the corners on the Skewb are equivalent to the centers on the Skewb diamond. Combining pieces from the two can either give you an unscrambleable cuboctahedron or a compound of cube and octahedron with visible orientation on all pieces.
Number of combinations
The purpose of the puzzle is to scramble its colors, and then restore it to its original solved state.
The puzzle has 6 corner pieces and 8 face centers. The positions of four of the face centers are completely determined by the positions of the other four face centers, and only even permutations of such positions are possible, so the number of arrangements of face centers is only 4!/2. Each face center has only a single orientation.
Only even permutations of the corner pieces are possible, so the number of possible arrangements of corner pieces is 6!/2. Each corner has two possible orientations (it is not possible to change their orientation by 90° without disassembling the puzzle), but the orientation of the last corner is determined by the other 5. Hence, the number of possible corner orientations is 2^5.
Hence, the number of possible combinations is:
(6!/2) × (4!/2) × 2^5 = 360 × 12 × 32 = 138,240
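The counting argument above can be checked directly (a quick sketch; the variable names are ours):

```python
from math import factorial

corner_positions = factorial(6) // 2  # even permutations of the 6 corners
corner_orientations = 2 ** 5          # 2 orientations each; the last corner is determined
center_positions = factorial(4) // 2  # 4 free face centers, even permutations only

total = corner_positions * corner_orientations * center_positions
print(total)  # 138240, matching the figure quoted for the puzzle
```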
See also
Skewb Ultimate
External links
Jaap's Skewb Diamond page
Combination puzzles |
https://en.wikipedia.org/wiki/Discharger | A discharger in electronics is a device or circuit that releases stored energy or electric charge from a battery, capacitor or other source.
Discharger types include:
metal probe with insulated handle & ground wire, and sometimes resistor (for capacitors)
resistor (for batteries)
parasitic discharge (for batteries arranged in parallel)
more complex electronic circuits (for batteries)
See also
Bleeder resistor
Electronic circuits |
https://en.wikipedia.org/wiki/ISSN | An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloging, interlibrary loans, and other practices in connection with serial literature.
The ISSN system was first drafted as an International Organization for Standardization (ISO) international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard.
When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media. The ISSN system refers to these types as print ISSN (p-ISSN) and electronic ISSN (e-ISSN). Consequently, as defined in ISO 3297:2007, every serial in the ISSN system is also assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
Code format
An ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. The last digit, which may be zero through nine or an X, is a check digit, so the ISSN is uniquely represented by its first seven digits. Formally, the general form of the ISSN (also named "ISSN structure" or "ISSN syntax") can be expressed as follows:
NNNN-NNNC
where N is in the set {0,1,2,...,9}, a decimal digit character, and C is in {0,1,2,...,9,X}; or by a Perl Compatible Regular Expressions (PCRE) regular expression such as ^\d{4}-\d{3}[\dxX]$.
For example, the ISSN of the journal Hearing Research is 0378-5955, where the final 5 is the check digit, that is, C=5. To calculate the check digit, the following algorithm may be used:
To confirm the check digit, calculate the sum of all eight digits of the ISSN multiplied by their position in the number, counting from the righ |
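The algorithm above (the sum of each digit times its position counted from the right must be a multiple of 11, with a check value of 10 written as X) can be sketched as follows; the function name and the regular expression are our own:

```python
import re

ISSN_PATTERN = re.compile(r"^\d{4}-\d{3}[\dxX]$")

def issn_check_digit(first7: str) -> str:
    """Compute the ISSN check digit from the first seven digits."""
    # Positions counted from the right of the full 8-digit ISSN: weights 8..2.
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    value = 11 - remainder          # completes the weighted sum to a multiple of 11
    return "X" if value == 10 else str(value)

issn = "0378-5955"                  # Hearing Research, as in the text
assert ISSN_PATTERN.match(issn)
print(issn_check_digit(issn.replace("-", "")[:7]))  # -> 5
```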
https://en.wikipedia.org/wiki/Solid%20partition | In mathematics, solid partitions are natural generalizations of partitions and plane partitions defined by Percy Alexander MacMahon. A solid partition of n is a three-dimensional array of non-negative integers n_{i,j,k} (with indices i, j, k ≥ 1) such that Σ_{i,j,k} n_{i,j,k} = n
and
n_{i,j,k} ≥ n_{i+1,j,k}, n_{i,j,k} ≥ n_{i,j+1,k}, n_{i,j,k} ≥ n_{i,j,k+1} for all i, j and k.
Let p3(n) denote the number of solid partitions of n. As the definition of solid partitions involves three-dimensional arrays of numbers, they are also called three-dimensional partitions in notation where plane partitions are two-dimensional partitions and partitions are one-dimensional partitions. Solid partitions and their higher-dimensional generalizations are discussed in the book by Andrews.
Ferrers diagrams for solid partitions
Another representation for solid partitions is in the form of Ferrers diagrams. The Ferrers diagram of a solid partition of n is a collection of n points or nodes with four non-negative integer coordinates, satisfying the condition:
Condition FD: If the node y = (y1, y2, y3, y4) belongs to the diagram, then so do all the nodes y' with 0 ≤ y'i ≤ yi for all i.
For instance, the Ferrers diagram
where each column is a node, represents a solid partition of . There is a natural action of the permutation group S4 on a Ferrers diagram – this corresponds to permuting the four coordinates of all nodes. This generalises the operation denoted by conjugation on usual partitions.
Equivalence of the two representations
Given a Ferrers diagram, one constructs the solid partition (as in the main definition) as follows.
Let n_{i,j,k} be the number of nodes in the Ferrers diagram with coordinates of the form (i, j, k, *), where * denotes an arbitrary value. The collection of the n_{i,j,k} forms a solid partition. One can verify that condition FD implies that the conditions for a solid partition are satisfied.
Given a set of that form a solid partition, one obtains the corresponding Ferrers diagram as follows.
Start with the Ferrers diagram with no nodes. For every non-zero n_{i,j,k}, add the nodes (i, j, k, l) for 0 ≤ l < n_{i,j,k} to the Ferrers diagram. By construction, it is easy to see that condition FD is satisfied.
For example, the Ferrers diagram with nodes given |
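The two constructions above can be sketched in code on a small made-up example (the array below is our own illustration, not the example elided in the text):

```python
from collections import Counter

def array_to_nodes(n):
    """Ferrers diagram of a solid partition given as a dict {(i,j,k): n_ijk}."""
    return {(i, j, k, l) for (i, j, k), v in n.items() for l in range(v)}

def nodes_to_array(nodes):
    """Recover the solid partition: n_ijk = number of nodes of the form (i,j,k,*)."""
    return dict(Counter((i, j, k) for (i, j, k, l) in nodes))

# A solid partition of 5 (entries non-increasing along each axis):
n = {(0, 0, 0): 2, (1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}

nodes = array_to_nodes(n)
assert len(nodes) == 5             # total node count = the partitioned integer
assert nodes_to_array(nodes) == n  # the two representations agree

# Condition FD: every coordinate-wise predecessor of a node is also a node.
for node in nodes:
    for axis in range(4):
        if node[axis] > 0:
            pred = list(node)
            pred[axis] -= 1
            assert tuple(pred) in nodes
```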
https://en.wikipedia.org/wiki/D-37C | The D-37C (D37C) is the computer component of the all-inertial NS-17 Missile Guidance Set (MGS) for accurately navigating to its target thousands of miles away. The NS-17 MGS was used in the Minuteman II (LGM-30F) ICBM. The MGS, originally designed and produced by the Autonetics Division of North American Aviation, could store multiple preprogrammed targets in its internal memory.
Unlike other methods of navigation, inertial guidance does not rely on observations of land positions or the stars, radio or radar signals, or any other information from outside the vehicle. Instead, the inertial navigator provides the guidance information using gyroscopes that indicate direction and accelerometers that measure changes in speed and direction. A computer then uses this information to calculate the vehicle's position and guide it on its course. Enemies could not "jam" the system with false or confusing information.
The Ogden Air Logistics Center at Hill AFB has been Program Manager for the Minuteman ICBM family since January 1959. The base has had complete logistics management responsibilities for Minuteman and the rest of the ICBM fleet since July 1965.
The D-37C computer consists of four main sections: the memory, the central processing unit (CPU), and the input and output units. These sections are enclosed in one case. The memory is a two-sided, fixed-head disk which rotates at 6000 rpm. It contains 7222 words of 27 bits. Each word contains 24 data bits and three spacer bits not available to the programmer. The memory is arranged in 56 channels of 128 words each, plus ten rapid access channels of one to sixteen words. The memory also includes the accumulators and instruction register.
The MM II missile was deployed with a D-37C disk computer. Autonetics also programmed functional simulators for flight program development and testing, and the code inserter verifier that was used at Wing headquarters to generate the codes to go into the airborne computer. It became ne |
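As a quick consistency check on the memory figures above (our own arithmetic; the number of rapid-access words is inferred from the stated totals):

```python
main_words = 56 * 128             # 56 channels of 128 words each
rapid_access = 7222 - main_words  # words left over for the ten rapid-access channels
word_bits = 24 + 3                # 24 data bits plus 3 spacer bits

print(main_words, rapid_access, word_bits)  # 7168 54 27
# "ten rapid access channels of one to sixteen words" bounds the leftover count:
assert 10 * 1 <= rapid_access <= 10 * 16
```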
https://en.wikipedia.org/wiki/APBB1 | Amyloid beta A4 precursor protein-binding family B member 1 is a protein that in humans is encoded by the APBB1 gene.
Function
The protein encoded by this gene is a member of the Fe65 protein family. It is an adaptor protein localized in the nucleus. It interacts with the Alzheimer's disease amyloid precursor protein (APP), the transcription factor CP2/LSF/LBP1 and the low-density lipoprotein receptor-related protein. APP functions as a cytosolic anchoring site that can prevent the gene product's nuclear translocation. The encoded protein may play an important role in the pathogenesis of Alzheimer's disease. It is thought to regulate transcription, and it has also been observed to block cell-cycle progression by downregulating thymidylate synthase expression. Multiple alternatively spliced transcript variants have been described for this gene, but the full-length sequences of some are not known.
Interactions
APBB1 has been shown to interact with APLP2, TFCP2, LRP1 and Amyloid precursor protein. |
https://en.wikipedia.org/wiki/Mohamed%20M.%20Atalla | Mohamed M. Atalla (; August 4, 1924 – December 30, 2009) was an Egyptian-American engineer, physicist, cryptographer, inventor and entrepreneur. He was a semiconductor pioneer who made important contributions to modern electronics. He is best known for the first working demonstration of the MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) in 1959 (along with his colleague Dawon Kahng), which along with Atalla's earlier surface passivation processes, had a significant impact on the development of the electronics industry. He is also known as the founder of the data security company Atalla Corporation (now Utimaco Atalla), founded in 1972. He received the Stuart Ballantine Medal (now the Benjamin Franklin Medal in physics) and was inducted into the National Inventors Hall of Fame for his important contributions to semiconductor technology as well as data security.
Born in Port Said, Egypt, he was educated at Cairo University in Egypt and then Purdue University in the United States, before joining Bell Labs in 1949 and later adopting the more anglicized "John" or "Martin" M. Atalla as professional names. He made several important contributions to semiconductor technology at Bell Labs, including his development of the surface passivation process and his demonstration of the MOSFET with Kahng in 1959.
His work on MOSFET was initially overlooked at Bell, which led to his resignation from Bell and joining Hewlett-Packard (HP), founding its Semiconductor Lab in 1962 and then HP Labs in 1966, before leaving to join Fairchild Semiconductor, founding its Microwave & Optoelectronics division in 1969. His work at HP and Fairchild included research on Schottky diode, gallium arsenide (GaAs), gallium arsenide phosphide (GaAsP), indium arsenide (InAs) and light-emitting diode (LED) technologies. He later left the semiconductor industry, and became an entrepreneur in cryptography and data security. In 1972, he founded Atalla Corporation, and filed a pa |
https://en.wikipedia.org/wiki/BIOS%20boot%20partition | The BIOS boot partition is a partition on a data storage device that GNU GRUB uses on legacy BIOS-based personal computers in order to boot an operating system, when the actual boot device contains a GUID Partition Table (GPT). Such a layout is sometimes referred to as BIOS/GPT boot.
A BIOS boot partition is needed on GPT-partitioned storage devices to hold the second stages of GRUB. On traditional MBR-partitioned devices, the disk sectors immediately following the first are usually unused, as the partitioning scheme does not designate them for any special purpose and partitioning tools avoid them for alignment purposes. On GPT-based devices, the sectors hold the actual partition table, necessitating the use of an extra partition. On MBR-partitioned disks, boot loaders are usually implemented so the portion of their code stored within the MBR, which cannot hold more than 512 bytes, operates as a first stage that serves primarily to load a more sophisticated second stage, which is, for example, capable of reading and loading an operating system kernel from a file system.
Overview
When used, the BIOS boot partition contains the second stage of the boot loader program, such as GRUB 2; the first stage is the code contained within the Master Boot Record (MBR). Use of this partition is not the only way BIOS-based boot can be performed while using GPT-partitioned hard drives; however, complex boot loaders such as GRUB 2 cannot fit entirely within the confines of the MBR's 398 to 446 bytes of space, thus they need an ancillary storage space. On MBR disks, such boot loaders typically use the sectors immediately following the MBR for this storage; that space is usually known as the "MBR gap". No equivalent unused space exists on GPT disks, and the BIOS boot partition is a way to officially allocate such space for use by the boot loader.
The globally unique identifier (GUID) for the BIOS boot partition in the GPT scheme is 21686148-6449-6E6F-744E-656564454649
(which, when written to a GPT in t |
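As an aside, the partition type GUID conventionally used here, 21686148-6449-6E6F-744E-656564454649, is an ASCII Easter egg: in the mixed-endian byte order in which GUIDs are stored on disk it spells "Hah!IdontNeedEFI". A quick check using Python's uuid module, whose bytes_le property reproduces that on-disk ordering:

```python
import uuid

bios_boot = uuid.UUID("21686148-6449-6e6f-744e-656564454649")
# bytes_le renders the first three fields little-endian, i.e. the on-disk GPT layout.
print(bios_boot.bytes_le.decode("ascii"))  # -> Hah!IdontNeedEFI
```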
https://en.wikipedia.org/wiki/Elementary%20cognitive%20task | An elementary cognitive task (ECT) is any of a range of basic tasks which require only a small number of mental processes and which have easily specified correct outcomes.
The term was proposed by John Bissell Carroll in 1980, who posited that all test performance could be analyzed and broken down to building blocks called ECTs. Test batteries such as Microtox were developed based on this theory and have shown utility in the evaluation of test subjects under the influence of carbon monoxide or alcohol.
See also
Mental chronometry
Inspection time |
https://en.wikipedia.org/wiki/Deligne%E2%80%93Mumford%20stack | In algebraic geometry, a Deligne–Mumford stack is a stack F such that the diagonal morphism F → F × F is representable, and there is a scheme U together with a surjective étale morphism U → F (an étale atlas).
Pierre Deligne and David Mumford introduced this notion in 1969 when they proved that moduli spaces of stable curves of fixed arithmetic genus are proper smooth Deligne–Mumford stacks.
If the "étale" is weakened to "smooth", then such a stack is called an algebraic stack (also called an Artin stack, after Michael Artin). An algebraic space is Deligne–Mumford.
A key fact about a Deligne–Mumford stack F is that any X in F(B), where B is a quasi-compact scheme, has only finitely many automorphisms.
A Deligne–Mumford stack admits a presentation by a groupoid; see groupoid scheme.
Examples
Affine Stacks
Deligne–Mumford stacks are typically constructed by taking the stack quotient of some variety where the stabilizers are finite groups. For example, consider the action of the cyclic group on given by
Then the stack quotient is an affine smooth Deligne–Mumford stack with a non-trivial stabilizer at the origin. If we wish to think about this as a category fibered in groupoids over then given a scheme the over category is given by
Note that we could be slightly more general if we consider the group action on .
Weighted Projective Line
Non-affine examples come up when taking the stack quotient for weighted projective space/varieties. For example, the space is constructed by the stack quotient where the -action is given by
Notice that since this quotient is not from a finite group we have to look for points with stabilizers and their respective stabilizer groups. Then if and only if or and or , respectively, showing that the only stabilizers are finite, hence the stack is Deligne–Mumford.
Stacky curve
Non-Example
One simple non-example of a Deligne–Mumford stack is since this has an infinite stabilizer. Stacks of this form are examples of Artin stacks. |
https://en.wikipedia.org/wiki/Photoperiodism | Photoperiodism is the physiological reaction of organisms to the length of night or a dark period. It occurs in plants and animals. Plant photoperiodism can also be defined as the developmental responses of plants to the relative lengths of light and dark periods. They are classified under three groups according to the photoperiods: short-day plants, long-day plants, and day-neutral plants.
In animals photoperiodism (sometimes called seasonality) is the suite of physiological changes that occur in response to changes in day length. This allows animals to respond to a temporally changing environment associated with changing seasons as the earth orbits the sun.
Plants
Many flowering plants (angiosperms) use a circadian rhythm together with photoreceptor protein, such as phytochrome or cryptochrome, to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants are more likely to flower under one condition.
Phytochrome comes in two forms: Pr and Pfr. Red light (which is present during the day) converts phytochrome to its active form (Pfr), which then stimulates various processes such as germination, flowering or branching. In comparison, plants receive more far-red light in the shade, and this converts phytochrome from Pfr to its inactive form, Pr, inhibiting germination. This system of Pfr to Pr conversion allows the plant to sense when it is night and when it is day. Pfr can also be converted back to Pr by a process known as dark reversion, where long periods of darkness trigger the conversion of Pfr. This is important with regard to plant flowering. Experiments by Halliday et al. showed that manipulations of the red to far-red ratio in Arabidopsis can alter flowering. They discovered that plants tend to flower later when exposed to more red light, proving that red light i |
https://en.wikipedia.org/wiki/Suslin%20homology | In mathematics, the Suslin homology is a homology theory attached to algebraic varieties. It was proposed by Suslin in 1987, and developed by Suslin and Voevodsky. It is sometimes called singular homology as it is analogous to the singular homology of topological spaces.
By definition, given an abelian group A and a scheme X of finite type over a field k, the theory is given by
where C is a free graded abelian group whose degree n part is generated by integral subschemes of Δ^n × X, where Δ^n is the standard n-simplex, that are finite and surjective over Δ^n. |
https://en.wikipedia.org/wiki/Romer-Simpson%20Medal | The Romer-Simpson Medal is the highest award issued by the Society of Vertebrate Paleontology for "sustained and outstanding scholarly excellence and service to the discipline of vertebrate paleontology". The award is named in honor of Alfred S. Romer and George G. Simpson.
Past awards
Source: Society for Vertebrate Paleontology
1987 Everett C. Olson
1988 Bobb Schaeffer
1989 Edwin H. Colbert
1990 Richard Estes
1991 no award
1992 Loris S. Russell
1993 Zhou Mingzhen
1994 John H. Ostrom
1995 Zofia Kielan-Jaworowska
1996 Percy Butler
1997 Colin Patterson
1998 Albert E. Wood
1999 Robert Warren Wilson
2000 John A. Wilson
2001 Malcolm McKenna
2002 Mary R. Dawson
2003 Rainer Zangerl
2004 Robert L. Carroll
2005 Donald E. Russell
2006 William A. Clemens
2007 Wann Langston, Jr.
2008 Jose Bonaparte
2009 Farish Jenkins
2010 Rinchen Barsbold
2011 Alfred W. Crompton
2012 Philip D. Gingerich
2013 Jack Horner
2014 Hans-Peter Schultze
2015 Jim Hopson
2016 Mee-mann Chang
2017 Philip J. Currie
2018 Kay Behrensmeyer
2019 Michael Archer
2020 Jenny Clack
2021 Blaire Van Valkenburgh
2022 David W. Krause
See also
List of biology awards
List of paleontology awards |
https://en.wikipedia.org/wiki/Seminorm | In mathematics, particularly in functional analysis, a seminorm is a vector space norm that need not be positive definite. Seminorms are intimately connected with convex sets: every seminorm is the Minkowski functional of some absorbing disk and, conversely, the Minkowski functional of any such set is a seminorm.
A topological vector space is locally convex if and only if its topology is induced by a family of seminorms.
Definition
Let X be a vector space over either the real numbers ℝ or the complex numbers ℂ.
A real-valued function p : X → ℝ is called a seminorm if it satisfies the following two conditions:
Subadditivity/Triangle inequality: p(x + y) ≤ p(x) + p(y) for all x, y ∈ X
Absolute homogeneity: p(s x) = |s| p(x) for all x ∈ X and all scalars s
These two conditions imply that p(0) = 0 and that every seminorm p also has the following property:
Nonnegativity: p(x) ≥ 0 for all x ∈ X
Some authors include non-negativity as part of the definition of "seminorm" (and also sometimes of "norm"), although this is not necessary since it follows from the other two properties.
By definition, a norm on X is a seminorm that also separates points, meaning that it has the following additional property:
Positive definite/Positive: whenever x ∈ X satisfies p(x) = 0, then x = 0
A seminormed space is a pair (X, p) consisting of a vector space X and a seminorm p on X. If the seminorm p is also a norm then the seminormed space is called a normed space.
Since absolute homogeneity implies positive homogeneity, every seminorm is a type of function called a sublinear function. A map p : X → ℝ is called a sublinear function if it is subadditive and positive homogeneous. Unlike a seminorm, a sublinear function is not necessarily nonnegative. Sublinear functions are often encountered in the context of the Hahn–Banach theorem.
A real-valued function is a seminorm if and only if it is a sublinear and balanced function.
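As a concrete illustration of the definitions above (our own example): p(x, y) = |x| on ℝ² is subadditive and absolutely homogeneous, hence a seminorm, but it is not a norm because it vanishes on the nonzero vector (0, 1). A numerical spot-check:

```python
import random

def p(v):
    """p(x, y) = |x|: a seminorm on R^2 that is not a norm."""
    return abs(v[0])

random.seed(0)
for _ in range(1000):
    u = (random.uniform(-10, 10), random.uniform(-10, 10))
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    s = random.uniform(-10, 10)
    # Subadditivity: p(u + v) <= p(u) + p(v)   (tiny slack for float rounding)
    assert p((u[0] + v[0], u[1] + v[1])) <= p(u) + p(v) + 1e-12
    # Absolute homogeneity: p(s u) == |s| p(u)
    assert abs(p((s * u[0], s * u[1])) - abs(s) * p(u)) < 1e-9
    # Nonnegativity follows from the two conditions:
    assert p(u) >= 0

assert p((0.0, 1.0)) == 0.0  # fails positive definiteness, so p is not a norm
```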
Examples
The trivial seminorm on X, which refers to the constant map 0 on X, induces the indiscrete topology on X.
Let be a measure on a space . For an arbitrary constant , let be the set of all functions for which
exists and is finite. It can be shown th |
https://en.wikipedia.org/wiki/FSU%20Young%20Scholars%20Program | FSU Young Scholars Program (YSP) is a six-week residential science and mathematics summer program for 40 high school students from Florida, USA, with significant potential for careers in the fields of science, technology, engineering and mathematics. The program was developed in 1983 and is currently administered by the Office of Science Teaching Activities in the College of Arts and Sciences at Florida State University (FSU).
Academic program
Each young scholar attends three courses in the fields of mathematics, science and computer programming. The courses are designed specifically for this program — they are neither high school nor college courses.
Research
Each student who attends YSP is assigned an independent research project (IRP) based on his or her interests. Students join the research teams of FSU professors, participating in scientific research for two days each week. The fields of study available include robotics, molecular biology, chemistry, geology, physics and zoology. At the conclusion of the program, students present their projects in an academic conference, documenting their findings and explaining their projects to both students and faculty.
Selection process
YSP admits students who have completed the eleventh grade in a Florida public or private high school. A few exceptionally qualified and mature tenth graders have been selected in past years, though this is quite rare.
All applicants must have completed pre-calculus and maintain at least a 3.0 unweighted GPA to be considered for acceptance. Additionally, students must have scored at the 90th percentile or better in science or mathematics on a nationally standardized exam, such as the SAT, PSAT, ACT or PLAN. Students are required to submit an application package, including high school transcripts and a letter of recommendation.
Selection is extremely competitive, as there are typically over 200 highly qualified applicants competing for only 40 positions. The majority of past participant |
https://en.wikipedia.org/wiki/Xanadu%20Quantum%20Technologies | Xanadu Quantum Technologies is a Canadian quantum computing hardware and software company headquartered in Toronto, Ontario. The company develops cloud accessible photonic quantum computers and develops open-source software for quantum machine learning and simulating quantum photonic devices.
History
Xanadu was founded in 2016 by Christian Weedbrook and was a participant in the Creative Destruction Lab's accelerator program. Since then, Xanadu has raised a total of US$245M in funding with venture capital financing from Bessemer Venture Partners, Capricorn Investment Group, Tiger Global Management, In-Q-Tel, Business Development Bank of Canada, OMERS Ventures, Georgian, Real Ventures, Golden Ventures and Radical Ventures and innovation grants from Sustainable Development Technology Canada and DARPA.
Technology
Xanadu's hardware efforts have been focused on developing programmable Gaussian boson sampling (GBS) devices. GBS is a generalization of boson sampling, which traditionally uses single photons as an input; GBS uses squeezed states of light. In 2020, Xanadu published a blueprint for building a fault-tolerant quantum computer using photonic technology.
In June 2022, Xanadu reported a boson sampling experiment surpassing the earlier ones by Google and the University of Science and Technology of China (USTC). Their setup used loops of optical fiber and multiplexing to replace the network of beam splitters with a single one, which also made it more easily reconfigurable. They detected a mean of 125 to 219 photons from 216 squeezed modes (squeezed light follows a photon number distribution, so each mode can contain more than one photon) and claimed a speedup 50 million times greater than in previous experiments. |
https://en.wikipedia.org/wiki/Microserver | A data center 64-bit microserver is a server-class computer based on a system on a chip (SoC). The goal is to integrate all of the server motherboard functions onto a single microchip, except DRAM, boot FLASH and power circuits. Thus, the main chip contains more than just compute cores, caches, memory interfaces and PCI controllers; it typically also contains SATA, networking, serial port and boot FLASH interfaces on the same chip. This eliminates support chips, and therefore area, power and cost, at the board level. Multiple microservers can be put together in a small package to construct a dense data center (for example, the DOME MicroDataCenter).
History
The term "microserver" first appeared in the late 1990s and was popularized by a Palo Alto incubator; PicoStar when incubating Cobalt Microservers. Microserver again appeared around 2010 and is commonly misunderstood to imply low performance. Microservers first appeared in the embedded market, where due to cost and space these types of SoCs appeared before they did in general purpose computing. Indeed, recent research indicates that emerging scale-out services and popular datacenter workloads (e.g., as in CloudSuite) require a certain degree of single-thread performance (with out-of-order execution cores) which may be lower than those in conventional desktop processors but much higher than those in the embedded systems.
A modern microserver typically offers medium-high performance at high packaging densities, allowing very small compute node form factors. This can result in high energy efficiency (operations per Watt), typically better than that of highest single-thread performance processors.
One of the early microservers is the 32-bit SheevaPlug. Plenty of consumer-grade 32-bit microservers are available, for instance the Banana Pi, as seen on Comparison of single-board computers. In early 2015, a 64-bit consumer-grade microserver was announced. In mid-2017, consumer-grade 64-bit microservers started app |
https://en.wikipedia.org/wiki/List%20of%20group-0%20ISBN%20publisher%20codes | A list of publisher codes for (978) International Standard Book Numbers with a group code of zero.
Assignation
The group-0 publisher codes are assigned as follows:
2-digit publisher codes
3-digit publisher codes
(Note: the status of codes not listed in this table is unclear; please help fill the gaps.)
4-digit publisher codes
(Note: many codes are not yet listed in this table; please help fill the gaps.)
5-digit publisher codes
(Note: many codes are not yet listed in this table; please help fill the gaps.)
6-digit publisher codes
(Note: many codes are not yet listed in this table; please help fill the gaps.)
7-digit publisher codes
(Note: many codes are not yet listed in this table; please help fill the gaps.)
See also
List of group-1 ISBN publisher codes
List of ISBN identifier groups |
https://en.wikipedia.org/wiki/Traffic%20wave | Traffic waves, also called stop waves, ghost jams, traffic snakes or traffic shocks, are traveling disturbances in the distribution of cars on a highway. Traffic waves travel backwards relative to the cars themselves. Relative to a fixed spot on the road the wave can move with, or against the traffic, or even be stationary (when the wave moves away from the traffic with exactly the same speed as the traffic). Traffic waves are a type of traffic jam. A deeper understanding of traffic waves is a goal of the physical study of traffic flow, in which traffic itself can often be seen using techniques similar to those used in fluid dynamics. It is related to the accordion effect.
Mitigation
It has been said that by knowing how traffic waves are created, drivers can sometimes reduce their effects by increasing vehicle headways and reducing the use of brakes, ultimately alleviating traffic congestion for everyone in the area.
In other models, however, increasing headway diminishes the capacity of the travel lanes, increasing congestion. That view is in turn disputed by analogy with herding sheep through gates: with human intervention, solitons can be diminished simply by slapping "stuck" sheep and holding back aggressive ones, and in funnelling sheep through gates it can be determined how much intervention is needed to curb bottlenecks. Similar principles can be applied to human traffic streams: if each individual had full knowledge of their final destination and complete route planning, traversal along a route would proceed with the knowledge that any abrupt departure from an itinerary causes delays for those about to traverse the same route.
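The backward travel of such waves can be illustrated with a tiny deterministic, Nagel–Schreckenberg-style simulation on a ring road (our own sketch, not a model from the article; all parameters are arbitrary):

```python
ROAD = 100    # cells on the ring road
N_CARS = 25
V_MAX = 5

pos = [4 * i for i in range(N_CARS)]  # evenly spaced cars
vel = [3] * N_CARS                    # steady speed equals the gap between cars

def step(pos, vel, brake=None):
    n = len(pos)
    new_vel = []
    for i in range(n):
        gap = (pos[(i + 1) % n] - pos[i] - 1) % ROAD
        v = min(vel[i] + 1, V_MAX, gap)  # accelerate, but never into the car ahead
        if i == brake:
            v = 0                        # a single driver brakes hard
        new_vel.append(v)
    new_pos = [(p + v) % ROAD for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = step(pos, vel, brake=10)      # one braking event...
for _ in range(50):                      # ...then everyone drives normally
    pos, vel = step(pos, vel)
    assert len(set(pos)) == N_CARS       # no two cars ever share a cell
    assert all(0 <= v <= V_MAX for v in vel)
print(min(range(N_CARS), key=lambda i: vel[i]))  # index of the slowest car
```

After the single braking event, the pocket of slow cars is handed backwards from driver to driver even though every car only ever moves forward.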
History
The earliest theoretical model of traffic shock waves was offered by Lighthill and Whitham in 1955. The following year Paul Richards independently published a similar model. Both papers were based on fluid dynamics and the model is known as the Lighthill-Whith |
https://en.wikipedia.org/wiki/GMER | GMER is a software tool, written by Polish researcher Przemysław Gmerek, for detecting and removing rootkits. It runs on Microsoft Windows and supports Windows NT, 2000, XP, Vista, 7, 8 and 10. Version 2.0.18327 added full support for Windows x64.
At the time of first release in 2004, it introduced innovative rootkit detection techniques and quickly gained popularity for its effectiveness. It was incorporated into a few antivirus tools including Avast! antivirus and SDFix.
For several months in 2006 and 2007, the tool's website was the target of heavy DDoS attacks attempting to block its downloads. |
https://en.wikipedia.org/wiki/Advanced%20Disc%20Filing%20System | The Advanced Disc Filing System (ADFS) is a computing file system unique to the Acorn computer range and RISC OS-based successors. Initially based on the rare Acorn Winchester Filing System, it was renamed the Advanced Disc Filing System when support for floppy discs was added, using a WD1770 floppy disc controller (and, on later 32-bit systems, a variant of a PC-style floppy controller).
Acorn's original Disc Filing System was limited to 31 files per disk surface, 7 characters per file name and a single character for directory names, a format inherited from the earlier Atom and System 3–5 Eurocard computers. To overcome some of these restrictions Acorn developed ADFS. The most dramatic change was the introduction of a hierarchical directory structure. The filename length increased from 7 to 10 letters and the number of files in a directory expanded to 47. It retained some superficial attributes from DFS; the directory separator continued to be a dot and $ now indicated the hierarchical root of the filesystem. ^ was used to refer to the parent directory, @ the current directory, and \ was the previously-visited directory.
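The path conventions just described can be sketched as a toy resolver. The function below is an illustrative assumption for exposition, not Acorn code, and it omits the '\' previously-visited-directory shortcut:

```python
def resolve(path, cwd):
    """Toy resolver for ADFS-style paths; cwd is a list of directory
    names below the root. Omits '\\' (previously-visited directory)."""
    parts = path.split('.')
    if parts[0] == '$':          # '$' is the hierarchical root
        stack, parts = [], parts[1:]
    else:
        stack = list(cwd)
    for p in parts:
        if p == '^':             # '^' refers to the parent directory
            if stack:
                stack.pop()
        elif p != '@':           # '@' names the current directory
            stack.append(p)
    return '$.' + '.'.join(stack) if stack else '$'

print(resolve('^.Docs.Letter', ['Work', 'Old']))  # $.Work.Docs.Letter
```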
The BBC Master Compact contained ADFS version 2.0, which provided the addition of format, verify and backup commands in ROM, but omitted support for hard discs.
8-bit usage
ADFS on 8-bit systems required a WD1770 or later 1772-series floppy controller, owing to the inability of the original Intel 8271 chip to cope with the double-density format ADFS required. ADFS could, however, be used to support hard discs without a 1770 controller present; in development, hard disc support was the primary goal, and extension to handle floppies came later. The 1770 floppy controller was directly incorporated into the design of the Master Series and B+ models, and was available as an upgrade board for the earlier Model B. ADFS could be added to Model B and B+ systems with an additional upgrade.
The Acorn Plus 3, Acorn's official disc expansion for the Acorn Elec |
https://en.wikipedia.org/wiki/Matground | Matgrounds are strong, seabed-hardening microbial surface layers preserved in the Proterozoic and lower Cambrian. Wrinkled matgrounds are informally named "elephant skin" because of their wrinkled appearance in the fossil record. Matgrounds persisted until early burrowing worms became ubiquitous enough to unharden them: burrowing animals broke down the hardy mats to penetrate the underlying sediment for protection and feeding. Once matgrounds disappeared, so did the exceptional preservation of lagerstätten such as the Burgess Shale or the Ediacara Hills. Trace fossils such as Treptichnus are evidence for soft-bodied burrowers, more anatomically complex than the Ediacaran biota, that caused the matgrounds' disappearance.
See also
Cambrian substrate revolution |
https://en.wikipedia.org/wiki/Microbiome%20in%20the%20Drosophila%20gut | The microbiota are the sum of all symbiotic microorganisms (mutualistic, commensal or pathogenic) living on or in an organism. The fruit fly Drosophila melanogaster is a model organism and one of the most investigated organisms worldwide. Its microbiota is less complex than that found in humans, but it still influences the fitness of the fly, affecting life-history characteristics such as lifespan (life expectancy), resistance against pathogens (immunity) and metabolic processes (digestion). Given the comprehensive toolkit available for research in Drosophila, analysis of its microbiome could enhance our understanding of similar processes in other types of host-microbiota interactions, including those involving humans. The microbiota plays key roles in intestinal immune and metabolic responses via its fermentation product acetate, a short-chain fatty acid.
Microbial composition
Drosophila melanogaster possesses a comparatively simple gut microbiota, consisting of only a few bacterial species, mainly from two bacterial taxonomic groups: Bacillota and Pseudomonadota. The most common species belong to the families Lactobacillaceae (abundance of approx. 30%, members of the Bacillota) and Acetobacteraceae (approx. 55%, members of the Pseudomonadota). Other less common bacterial species are from the families Leuconostocaceae, Enterococcaceae, and Enterobacteriaceae (all with abundances between 2–4%). The most common species include Lactobacillus plantarum, Lactobacillus brevis, Acetobacter pomorum and Enterococcus faecalis, while other species such as Acetobacter aceti, Acetobacter tropicalis and Acetobacter pasteurianus are also often found.
The particular species of the host fly has a central influence on the composition and quality of the gut microbiota, even if flies are raised under similar conditions. Nevertheless, the host's diet and nutritional environment also shape the exact composition of the microbiota. For ins |
https://en.wikipedia.org/wiki/Bottleneck%20%28engineering%29 | In engineering, a bottleneck is a phenomenon by which the performance or capacity of an entire system is severely limited by a single component. The component is sometimes called a bottleneck point. The term is metaphorically derived from the neck of a bottle, where the flow speed of the liquid is limited by its neck.
Formally, a bottleneck lies on a system's critical path and provides the lowest throughput. System designers try to avoid bottlenecks, and a great amount of effort is directed at locating and tuning them. A bottleneck may be, for example, a processor, a communication link, or data-processing software.
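The defining property, that the slowest component caps end-to-end throughput, can be shown in a few lines. The stage names and capacities below are made-up numbers for illustration:

```python
# Hypothetical pipeline stages with per-second capacities (made-up numbers).
# End-to-end throughput is capped by the slowest stage on the critical path.
stages = {"disk read": 500, "parse": 1200, "network send": 90}

bottleneck = min(stages, key=stages.get)  # stage with the lowest capacity
throughput = stages[bottleneck]           # the whole system runs at this rate
print(bottleneck, throughput)
```

Speeding up any stage other than the bottleneck ("parse", say) leaves the system's throughput unchanged, which is why locating the bottleneck comes before tuning.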
Bottlenecks in software
In computer programming, tracking down bottlenecks (sometimes known as "hot spots": sections of the code that execute most frequently, i.e. have the highest execution count) is called performance analysis. Reduction is usually achieved with the help of specialized tools known as performance analyzers or profilers. The objective is to make those particular sections of code perform as fast as possible to improve overall algorithmic efficiency.
Bottlenecks in max-min fairness
In a communication network, sometimes a max-min fairness of the network is desired, usually opposed to the basic first-come first-served policy. With max-min fairness, data flow between any two nodes is maximized, but only at the cost of more or equally expensive data flows. To put it another way, in case of network congestion any data flow is only impacted by smaller or equal flows.
In this context, a bottleneck link for a given data flow is a link that is fully utilized (saturated) and where, of all the flows sharing the link, the given data flow achieves the maximum data rate network-wide. Note that this definition is substantially different from the common meaning of a bottleneck. Also note that this definition does not forbid a single link from being a bottleneck for multiple flows.
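One standard way to compute a max-min fair allocation is progressive filling: raise every flow's rate at the same pace, and whenever a link saturates, freeze the flows crossing it, since that link is their bottleneck. The sketch below uses a hypothetical toy network, not a real protocol:

```python
def max_min_fair(capacities, flows):
    """Progressive filling: raise all unfrozen rates together; when a
    link saturates, the flows crossing it are bottlenecked there and
    their rates are frozen."""
    rate = {f: 0.0 for f in flows}
    cap = dict(capacities)
    frozen = set()
    while len(frozen) < len(flows):
        # unfrozen flows crossing each link
        active = {l: [f for f, ls in flows.items()
                      if l in ls and f not in frozen] for l in cap}
        # smallest equal increment that saturates some link
        inc, link = min((cap[l] / len(fs), l)
                        for l, fs in active.items() if fs)
        for f, ls in flows.items():
            if f not in frozen:
                rate[f] += inc
                for l in ls:
                    cap[l] -= inc
        frozen.update(active[link])
    return rate

# Toy network (hypothetical): link A is the bottleneck for f1 and f2.
rates = max_min_fair(
    capacities={"A": 1.0, "B": 10.0},
    flows={"f1": ["A"], "f2": ["A", "B"], "f3": ["B"]},
)
print(rates)  # f1 and f2 share A fairly; f3 takes B's remaining capacity
```

Here f1 and f2 each get 0.5 on the saturated link A, and f3 then fills the rest of B, matching the definition above: each flow is limited only by a link on which it achieves the maximum rate.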
A data rate allocation is max-min fair if and only if a data flo |
https://en.wikipedia.org/wiki/Hydroxyethyl%20starch | Hydroxyethyl starch (HES/HAES), sold under the brand name Voluven among others, is a nonionic starch derivative, used as a volume expander in intravenous therapy. The use of HES on critically ill patients is associated with an increased risk of death and kidney problems.
HES is a general term and can be sub-classified according to average molecular weight, molar substitution, concentration, C2/C6 ratio and maximum daily dose. In June 2013 the European Medicines Agency commenced the process of agreeing to reduced indications, which was completed in October 2013. The process of full withdrawal in the EU was expected to complete in 2018.
Medical uses
An intravenous solution of hydroxyethyl starch is used to prevent shock following severe blood loss caused by trauma, surgery, or other problems. However, it appears to carry a greater risk of a poor outcome than other intravenous solutions and may increase the risk of death.
Adverse effects
HES can cause anaphylactoid reactions: hypersensitivity, mild influenza-like symptoms, slow heart rate, fast heart rate, spasms of the airways, and non-cardiogenic pulmonary edema. It is also linked to a decrease in hematocrit and disturbances in blood clotting. One liter of 6% solution (Hespan) reduces factor VIII level by 50% and will prolong the aPTT and will also decrease vWF. A coagulation effect of hetastarch administration is direct movement into fibrin clots and a dilutional effect on serum. Hetastarch may lead to platelet dysfunction by causing a reduction in the availability of glycoprotein IIb-IIIa on platelets.
HES derivatives have been demonstrated to have increased rates of acute kidney failure and need for renal replacement therapy and to decrease long-term survival when used alone in cases of severe sepsis compared with Ringer lactate solution. The effects were tested on HES 130kDa/0.42 in people with severe sepsis; analysis showed increased rates of kidney failure and increased mortality when compared to LR. I |
https://en.wikipedia.org/wiki/Acclimatisation%20%28neurons%29 | Acclimatisation is the process by which the nervous system fails to respond to a stimulus as a result of repeated transmission across a synapse. Acclimatisation is believed to occur when the synaptic knob of the presynaptic neuron runs out of vesicles containing neurotransmitters, due to overuse over a short period of time.
A synapse that has undergone acclimatisation is said to be fatigued.
Acclimatisation is said to be responsible for 'getting used to' background noises and smells.
See also
Adaptive system
Neural adaptation |
https://en.wikipedia.org/wiki/Entoloma%20rhodopolium | Entoloma rhodopolium, commonly known as the wood pinkgill, is a poisonous mushroom found in Europe and Asia. It is one of the three fungi most commonly implicated in cases of mushroom poisoning in Japan, the other two being Omphalotus japonicus and Tricholoma ustale. E. rhodopolium is often mistaken for the edible mushroom E. sarcopum. Symptoms are predominantly gastrointestinal in nature, though muscarine, muscaridine, and choline have been isolated as toxic agents.
The taxonomy of this species is currently unclear, with several different forms identified in North America, and questions over whether the European and North American fungi are even the same species.
Entoloma is a genus of pink-spored fungi. An alternative scientific name is Rhodophyllus rhodopolius, from Quélet's broader genus containing a larger subsection of pink-spored fungi.
Entoloma nidorosum, previously considered a separate species, is now classified as a variety of this fungus.
See also
List of Entoloma species
Gallery |
https://en.wikipedia.org/wiki/PixelJunk%20Shooter | PixelJunk Shooter is a video game developed by Q-Games for the PlayStation 3. It is the fourth major title in the PixelJunk series. It was released on the worldwide PlayStation Store in December 2009, and for Steam on November 11, 2013. A remastered version of the game, PixelJunk Shooter Ultimate, was released for the PlayStation 4 and PlayStation Vita in June 2014, and for Microsoft Windows on October 21, 2015.
Gameplay
In PixelJunk Shooter, up to two players can control their own subterranean vehicles to rescue a number of surviving scientists trapped underground. Using their ships' missiles, players can defeat enemies and destroy weak rock to progress through the environment. In addition to rock and ice, players must manipulate three types of fluid (water, magma, and ferrofluid) in order to reach the survivors. Once each survivor is rescued or killed, players may progress to the next part of the stage. If too many survivors are killed, players are forced to quit or restart the stage. The game has fifteen stages divided evenly among three "episodes", each episode ending with a boss encounter.
Development
PixelJunk Shooter was formally announced during a pre-E3 press event on April 29, 2009. Originally referred to as PixelJunk 1–4, the game was the subject of a 13-day contest in which fans submitted title suggestions to Q-Games. The official title, PixelJunk Shooter, was announced on May 25, 2009. The simplistic name was received negatively by some fans; Q-Games president Dylan Cuthbert explained that the name was chosen not only for its simplicity, but also because shooting is the game's central mechanic ("Shooting jets of magma, shooting streams of water, shooting enemies, missiles, lasers, plasma spread weapons etc."). Several other titles were considered, including "PixelJunk Elements", the most popular submission. Ultimately, "Elements" was dismissed because "[it didn't] sound action-packed enough".
PixelJunk Shooter is the first title in the PixelJunk series |
https://en.wikipedia.org/wiki/List%20of%20object%E2%80%93relational%20mapping%20software | This is a list of well-known object–relational mapping software.
Java
Apache Cayenne, open-source for Java
Apache OpenJPA, open-source for Java
DataNucleus, open-source JDO and JPA implementation (formerly known as JPOX)
Ebean, open-source ORM framework
EclipseLink, Eclipse persistence platform
Enterprise JavaBeans (EJB)
Enterprise Objects Framework, Mac OS X/Java, part of Apple WebObjects
Hibernate, open-source ORM framework, widely used
Java Data Objects (JDO)
JOOQ Object Oriented Querying (jOOQ)
Kodo, commercial implementation of both Java Data Objects and Java Persistence API
TopLink by Oracle
iOS
Core Data by Apple for Mac OS X and iOS
.NET
Base One Foundation Component Library, free or commercial
Dapper, open source
Entity Framework, included in .NET Framework 3.5 SP1 and above
iBATIS, free open source, maintained by ASF but now inactive.
LINQ to SQL, included in .NET Framework 3.5
NHibernate, open source
nHydrate, open source
Quick Objects, free or commercial
Objective-C, Cocoa
Enterprise Objects, one of the first commercial OR mappers, available as part of WebObjects
Core Data, object graph management framework with several persistent stores, ships with Mac OS X and iOS
Perl
DBIx::Class
PHP
Laravel, framework that contains an ORM called "Eloquent", an ActiveRecord implementation.
Doctrine, open-source ORM for PHP 5.2.3, 5.3.x and 7.4.x; free software (MIT)
CakePHP, ORM and framework for PHP 5, open source (scalars, arrays, objects); based on database introspection, no class extending
CodeIgniter, framework that includes an ActiveRecord implementation
Yii, ORM and framework for PHP 5, released under the BSD license. Based on the ActiveRecord pattern
FuelPHP, ORM and framework for PHP 5.3, released under the MIT license. Based on the ActiveRecord pattern.
Laminas, framework that includes a table data gateway and row data gateway implementations
Propel, ORM and query-toolkit for PHP 5, inspired by Apache Torque, free software, MIT
Qcodo, ORM and framework |
https://en.wikipedia.org/wiki/Epithemia | Epithemia is a genus of diatoms belonging to the family Rhopalodiaceae.
The genus has cosmopolitan distribution.
They have recently been linked to nitrogen fixation and can be a possible indicator of eutrophication. This is because levels of Epithemia "containing cyanobacteria endosymbionts, decreased with increased ambient inorganic N concentrations" (Stancheva 2013). Concentrations of members of the genus Epithemia existing with cyanobacteria endosymbionts would mean that there is more fixed nitrogen in the ecosystem. It could act as an early indicator of nutrient overload.
Species
Species:
Epithemia alpestris
Epithemia anasthasiae |
https://en.wikipedia.org/wiki/Neoclassical%20transport | In plasma physics and magnetic confinement fusion, neoclassical transport or neoclassical diffusion is a theoretical description of collisional transport in toroidal plasmas, usually found in tokamaks or stellarators. It is a modification of classical diffusion, adding the effects of non-uniform magnetic fields due to the toroidal geometry, which give rise to new diffusion effects.
Description
Classical transport models a plasma in a magnetic field as a large number of particles traveling in helical paths around a line of force. In typical reactor designs, the lines are roughly parallel, so particles orbiting adjacent lines may collide and scatter. This results in a random walk process which eventually leads to the particles finding themselves outside the magnetic field.
Neoclassical transport adds the effects of the geometry of the fields. In particular, it considers the field inside the tokamak and similar toroidal arrangements, where the field is stronger on the inside curve than the outside simply due to the magnets being closer together in that area. To even out these forces, the field as a whole is twisted into a helix, so that the particles alternately move from the inside to the outside of the reactor.
In this case, as the particle transits from the outside to the inside, it sees an increasing magnetic force. If the particle energy is low, this increasing field may cause the particle to reverse directions, as in a magnetic mirror. The particle now travels in the reverse direction through the reactor, to the outside limit, and then back towards the inside where the same reflection process occurs. This leads to a population of particles bouncing back and forth between two points, tracing out a path that looks like a banana from above, the so-called banana orbits.
Since any particle in the long tail of the Maxwell–Boltzmann distribution is subject to this effect, there is always some natural population of such banana particles. Since these travel in the reve |
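The mirror-reflection condition behind these banana orbits can be sketched numerically. The scaling below is a standard textbook estimate, used here as an assumption: on a flux surface with inverse aspect ratio eps, the field varies by roughly a factor (1+eps)/(1-eps) between the inboard and outboard sides, and the trapped-particle fraction scales like sqrt(2*eps/(1+eps)).

```python
import math

# Mirror-trapping estimate (standard scaling, assumed here): a particle
# is reflected where B rises enough that its parallel velocity vanishes,
# since the magnetic moment v_perp**2 / B is conserved. The fraction of
# a Maxwellian population trapped in banana orbits then scales roughly as:
def trapped_fraction(eps):
    """eps is the inverse aspect ratio r/R of the flux surface."""
    return math.sqrt(2 * eps / (1 + eps))

# e.g. a surface with eps = 0.1 traps a substantial minority of particles
print(round(trapped_fraction(0.1), 3))
```

Even a modest field variation therefore traps a significant fraction of the low-parallel-velocity particles, which is why banana orbits matter for transport.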
https://en.wikipedia.org/wiki/Von%20Neumann%27s%20theorem | In mathematics, von Neumann's theorem is a result in the operator theory of linear operators on Hilbert spaces.
Statement of the theorem
Let G and H be Hilbert spaces, and let T : dom(T) ⊆ G → H be an unbounded operator from G into H. Suppose that T is a closed operator and that T is densely defined, that is, dom(T) is dense in G. Let T* : dom(T*) ⊆ H → G denote the adjoint of T. Then T*T is also densely defined, and it is self-adjoint. That is,
(T*T)* = T*T
and the operators on the right- and left-hand sides have the same dense domain in G.
https://en.wikipedia.org/wiki/Primary%20and%20secondary%20antibodies | Primary and secondary antibodies are two groups of antibodies that are classified based on whether they bind to antigens or proteins directly or target another (primary) antibody that, in turn, is bound to an antigen or protein.
Primary
A primary antibody can be very useful for the detection of biomarkers for diseases such as cancer, diabetes, and Parkinson's and Alzheimer's diseases, and primary antibodies are used for the study of absorption, distribution, metabolism, and excretion (ADME) and multi-drug resistance (MDR) of therapeutic agents.
Secondary
Secondary antibodies provide signal detection and amplification along with extending the utility of an antibody through conjugation to proteins. Secondary antibodies are especially efficient in immunolabeling. Secondary antibodies bind to primary antibodies, which are directly bound to the target antigen(s). In immunolabeling, the primary antibody's Fab domain binds to an antigen and exposes its Fc domain to secondary antibody. Then, the secondary antibody's Fab domain binds to the primary antibody's Fc domain. Since the Fc domain is constant within the same animal class, only one type of secondary antibody is required to bind to many types of primary antibodies. This reduces the cost by labeling only one type of secondary antibody, rather than labeling various types of primary antibodies. Secondary antibodies help increase sensitivity and signal amplification due to multiple secondary antibodies binding to a primary antibody.
Whole-immunoglobulin secondary antibodies are the most commonly used format, but these can be enzymatically processed to enable assay refinement. F(ab')2 fragments are generated by pepsin digestion, which removes most of the Fc fragment; this avoids recognition by Fc receptors on live cells and binding to Protein A or Protein G. Papain digestion generates Fab fragments, removing the entire Fc fragment including the hinge region and yielding two monovalent Fab moieties. They can be used to block endogenous imm
https://en.wikipedia.org/wiki/Butler%20matrix | A Butler matrix is a beamforming network used to feed a phased array of antenna elements. Its purpose is to control the direction of a beam, or beams, of radio transmission. It consists of an n×n matrix (n some power of two) with hybrid couplers and fixed-value phase shifters at the junctions. The device has n input ports (the beam ports) to which power is applied, and n output ports (the element ports) to which antenna elements are connected. The Butler matrix feeds power to the elements with a progressive phase difference between elements such that the beam of radio transmission is in the desired direction. The beam direction is controlled by switching power to the desired beam port. More than one beam, or even all of them, can be activated simultaneously.
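The progressive phase differences can be sketched numerically. The sketch assumes the common convention in which an n-port Butler matrix produces inter-element phase progressions of ±(2m+1)·180°/n, so the beams straddle broadside; the helper functions are illustrative, not a Butler matrix design tool.

```python
import numpy as np

# Assumed standard convention: an n-port Butler matrix drives the array
# with progressive inter-element phases of +/-(2m+1)*180/n degrees.
def butler_phases(n):
    m = np.arange(n // 2)
    pos = (2 * m + 1) * 180.0 / n
    return np.concatenate([-pos[::-1], pos])

# A progressive phase alpha steers a uniform linear array with element
# spacing d (in wavelengths): alpha = 2*pi*d*sin(theta)
def beam_angle_deg(alpha_deg, d=0.5):
    return np.degrees(np.arcsin(np.radians(alpha_deg) / (2 * np.pi * d)))

phases = butler_phases(4)
print(phases)                        # [-135. -45. 45. 135.]
print(beam_angle_deg(phases).round(1))
```

For a 4-port matrix feeding half-wavelength-spaced elements this gives four beams, two on each side of broadside, one per beam port, matching the switched-beam behaviour described above.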
The concept was first proposed by Butler and Lowe in 1961. It is a development of the work of Blass in 1960. Its advantage over other methods of angular beamforming is the simplicity of the hardware. It requires far fewer phase shifters than other methods and can be implemented in microstrip on a low-cost printed circuit board.
Antenna elements
The antenna elements fed by a Butler matrix are typically horn antennae at the microwave frequencies at which Butler matrices are usually used. Horns have limited bandwidth and more complex antennae may be used if more than an octave is required. The elements are commonly arranged in a linear array. A Butler matrix can also feed a circular array giving 360° coverage. A further application with a circular antenna array is to produce omnidirectional beams with orthogonal phase-modes so that multiple mobile stations can all simultaneously use the same frequency, each using a different phase-mode. A circular antenna array can be made to simultaneously produce an omnidirectional beam and multiple directional beams when fed through two Butler matrices back-to-back.
Butler matrices can be used with both transmitters and receivers. Since they are passive and reciproc |
https://en.wikipedia.org/wiki/Physical%20Society%20of%20Iran | The Physical Society of Iran (PSI) (انجمن فيزيک ايران) is Iran's professional and academic society of physicists. PSI is a non-profit organization aimed at establishing and strengthening scientific contacts between physicists and academic members of the country's institutes of higher education in the field of physics.
The society has over 10,000 members inside and outside Iran. In addition to its awards scheme and publications programme, the Physical Society of Iran holds annual conferences in several different fields, including optics and condensed matter physics. The society has proved instrumental in improving the state of education and research in physics throughout the country.
The society organizes annual meetings and it is an active member of TWAS. It has also close collaboration with the American Physical Society. In October 2003 APS and PSI jointly sponsored a school/workshop on string theory in Tehran.
The society's main journal is the Iranian Journal of Physics Research, which is published via the Isfahan University of Technology Press, and is recognized by the Ministry of Science of Iran. PSI was a sponsor of the 2007 International Physics Olympiad, which was hosted by Isfahan University of Technology.
History
The Physical Society of Iran was established in 1963 by Iran's elite physicists and engineers. Among the founders was Yusef Sobouti, currently chancellor of IASBS.
The first Annual Physics Conference of Iran was inaugurated in 1973 at Sepah Bank's arboretum, followed by Iran's second national conference on Physics the next year at Shahid Beheshti University. Activities of the society suffered a setback during the early years of the revolution, but picked up in 1983 and have been gathering momentum ever since.
Presidents
Yousef Sobouti (1988–91 and 1996–99)
Reza Mansouri
Hessamaddin Arfaei
Ezatolah Arzi
Hadi Akbarzadeh
Shahin Rouhani
Mohammad Reza Ejtehadi (current)
Awards
The following are awarded annually by PSI to selected recipie |
https://en.wikipedia.org/wiki/Sparrow%27s%20resolution%20limit | Sparrow's resolution limit is an estimate of the angular resolution limit of an optical instrument.
Rayleigh criterion
When a star is observed with a telescope, the light is diffracted or spread apart into an Airy disk. The resolution limit is defined as the minimum angular separation between two stars that can still be perceived as separate by an observer. The angular diameter of the Airy disk is determined by the aperture of the instrument.
Rayleigh's resolution limit is reached when the two stars are separated by the theoretical radius of the first dark interval around the Airy disk, which is larger than the disk's apparent radius, so that a distinct dark gap appears between the two disks. Most astronomers say they can still distinguish the two stars when they are closer than Rayleigh's resolution limit. Sparrow's resolution limit is reached when the combined light from two overlapping and equally bright Airy disks is constant along a line between the central peaks of the two Airy disks. However, at the Sparrow resolution limit the two Airy disks will appear to be just touching at their edges, which according to Sparrow is due to a brightness-contrast response of the eye. The same reasoning applies to the resolution of two wavelengths in a spectroscope, where lines of emission or absorption will have a diffraction-induced width analogous to the diameter of an Airy disk.
Sparrow's resolution limit is nearly equivalent to the theoretical diffraction limit of resolution, the wavelength of light divided by the aperture diameter, and about 20% smaller than the Rayleigh limit. For example, in a 200 mm (eight-inch) telescope, Rayleigh's resolution limit is 0.69 arc seconds, while Sparrow's resolution limit is 0.54 arc seconds.
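The quoted figures can be checked directly. The wavelength of 550 nm is an assumption (the text does not state it), but it reproduces the quoted values; the Sparrow coefficient of 0.947 is the commonly cited one:

```python
import math

# Angular resolution of a D = 200 mm aperture at an assumed 550 nm.
wavelength = 550e-9   # metres (assumption: mid-visible light)
D = 0.2               # metres
rad_to_arcsec = 180 / math.pi * 3600

rayleigh = 1.22 * wavelength / D * rad_to_arcsec   # first dark ring
sparrow = 0.947 * wavelength / D * rad_to_arcsec   # Sparrow criterion
print(round(rayleigh, 2), round(sparrow, 2))  # ~0.69 and ~0.54 arcsec
```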
Dawes' limit
Sparrow's resolution limit was derived in 1916 from photographic experiments with simulated spectroscopic lines and is most commonly applied in spectroscopy, microscopy and photography. The Dawes resolution limit is more oft |
https://en.wikipedia.org/wiki/Quanta%20Magazine | Quanta Magazine is an editorially independent online publication of the Simons Foundation covering developments in physics, mathematics, biology and computer science.
Undark Magazine described Quanta Magazine as "highly regarded for its masterful coverage of complex topics in science and math." The science news aggregator RealClearScience ranked Quanta Magazine first on its list of "The Top 10 Websites for Science in 2018." In 2020, the magazine received a National Magazine Award for General Excellence from the American Society of Magazine Editors for its "willingness to tackle some of the toughest and most difficult topics in science and math in a language that is accessible to the lay reader without condescension or oversimplification."
The articles in the magazine are freely available to read online. Scientific American, Wired, The Atlantic, and The Washington Post, as well as international science publications like Spektrum der Wissenschaft, have reprinted articles from the magazine.
History
Quanta Magazine was initially launched as Simons Science News in October 2012, but it was renamed to its current title in July 2013. It was founded by the former New York Times journalist Thomas Lin, who is the magazine's editor-in-chief. The two deputy editors are John Rennie and Michael Moyer, formerly of Scientific American, and the art director is Samuel Velasco.
In November 2018, MIT Press published two collections of articles from Quanta Magazine, Alice and Bob Meet the Wall of Fire and The Prime Number Conspiracy.
In May 2022 the magazine's staff, notably Natalie Wolchover, were awarded the Pulitzer Prize for Explanatory Reporting. |
https://en.wikipedia.org/wiki/Correlation%20gap | In stochastic programming, the correlation gap is the worst-case ratio between the cost when the random variables are correlated to the cost when the random variables are independent.
As an example, consider the following optimization problem. A teacher wants to know whether to come to class or not. There are n potential students. For each student, there is a probability of 1/n that the student will attend the class. If at least one student attends, then the teacher must come and his cost is 1. If no students attend, then the teacher can stay at home and his cost is 0. The goal of the teacher is to minimize his cost. This is a stochastic-programming problem, because the constraints are not known in advance – only their probabilities are known. Now, there are two cases regarding the correlation between the students:
Case #1: the students are uncorrelated: each student decides whether to come to class or not by tossing a coin with probability 1/n, independently of the others. The expected cost in this case is 1 - (1 - 1/n)^n.
Case #2: the students are correlated: one student is selected at random and comes to class, while the others stay at home. Note that the probability of each student to come is still 1/n. However, now the cost is 1.
The correlation gap is the cost in case #2 divided by the cost in case #1, which is 1/(1 - (1 - 1/n)^n).
It has been proven that the correlation gap is bounded in several cases. For example, when the cost function is a submodular set function (as in the above example), the correlation gap is at most e/(e-1) ≈ 1.58 (so the above example is a worst case).
An upper bound on the correlation gap implies an upper bound on the loss that results from ignoring the correlation. For example, suppose we have a stochastic programming problem with a submodular cost function. We know the marginal probabilities of the variables, but we do not know whether they are correlated or not. If we just ignore the correlation and solve the problem as if the variables are independent, the resulting solution is an e/(e-1)-approxim |
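The teacher example can be checked numerically. This is a minimal sketch; the limit quoted in the comments, e/(e-1), is the known worst-case bound for submodular cost functions:

```python
import math

# Teacher example: with independent attendance the expected cost is
# 1 - (1 - 1/n)**n (the probability at least one student shows up),
# while perfectly correlated attendance always costs 1.
def correlation_gap(n):
    independent = 1 - (1 - 1 / n) ** n
    correlated = 1.0
    return correlated / independent

for n in (2, 10, 1000):
    print(n, round(correlation_gap(n), 4))

# As n grows the gap approaches e/(e-1), the known bound for
# submodular cost functions:
print(math.e / (math.e - 1))  # ~1.582
```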
https://en.wikipedia.org/wiki/Estonian%20Biocentre | The Estonian Biocentre (EBC; ) is a genetics and genomics research institute located in Tartu, Estonia. It is a joint venture between the University of Tartu and the National Institute of Chemical Physics and Biophysics. The goal of the EBC is to promote research and technological development (RTD) in gene and cell technologies in Estonia. The EBC is regulated by a nine-member Scientific Council, comprising researchers from the EBC and external members, and is advised by an international Advisory Board, currently consisting of five members from different countries.
The EBC was established in 1986, and the current director is Prof. Richard Villems.
See also
Estonian Genome Project |
https://en.wikipedia.org/wiki/Prusa%20i3 | The Prusa i3 is a family of fused deposition modeling 3D printers, manufactured by Czech company Prusa Research under the trademarked name Original Prusa i3. Part of the RepRap project, Prusa i3 printers were called the most used 3D printer in the world in 2016. The first Prusa i3 was designed by Josef Průša in 2012, and was released as a commercial kit product in 2015. The latest model (Prusa MK4, on sale as of March 2023) is available in both kit and factory-assembled versions. The Prusa i3's comparatively low cost and ease of construction and modification made it popular in education and with hobbyists and professionals, with the Prusa i3 model MK2 printer receiving several awards in 2016.
The i3 series is released under an open source license, so many other companies and individuals have made variants of the printer.
Models
RepRap Mendel
First conceived in 2009, RepRap Mendel 3D printers were designed to be assembled from 3D printed parts and commonly available off-the-shelf components (referred to as "vitamins," as they cannot be produced by the printer itself). These parts include threaded rods, leadscrews, smooth rods and bearings, screws, nuts, stepper motors, control circuit boards, and a "hot end" to melt and place thermoplastic materials. A Cartesian mechanism permits placement of material anywhere in a cubic volume; this design has continued throughout development of the i3 series. The flat "print bed" (the surface on which parts are printed) is movable in one axis (Y), while two horizontal and two vertical rods permit tool motion in two axes, designated X and Z.
Prusa Mendel
Josef Průša, a core developer of the RepRap project who had previously developed a PCB heated "print bed", adapted and simplified the RepRap Mendel design, reducing the time to print 3D plastic parts from 20 to 10 hours, and including 3D printed bushings in place of regular bearings. First announced in September 2010, the printer was dubbed Prusa Mendel by Průša himself. According |
https://en.wikipedia.org/wiki/Delay%20%28audio%20effect%29 | Delay is an audio signal processing technique that records an input signal to a storage medium and then plays it back after a period of time. When the delayed playback is mixed with the live audio, it creates an echo-like effect, whereby the original audio is heard followed by the delayed audio. The delayed signal may be played back multiple times, or fed back into the recording, to create the sound of a repeating, decaying echo.
Delay effects range from a subtle echo effect to a pronounced blending of previous sounds with new sounds. Delay effects can be created using tape loops, an approach developed in the 1940s and 1950s and used by artists including Elvis Presley and Buddy Holly.
Analog effects units were introduced in the 1970s; digital effects pedals in 1984; and audio plug-in software in the 2000s.
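The echo-with-feedback behaviour described above can be sketched as a simple digital delay line. This is a minimal illustration, not any particular product's algorithm; the function and parameter names are mine:

```python
from collections import deque

def delay_effect(signal, delay_samples, feedback=0.5, mix=0.5):
    """Mix a signal with a delayed copy of itself; feeding the delayed
    output back into the buffer produces a decaying series of echoes."""
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for x in signal:
        delayed = buf[0]                       # oldest sample in the buffer
        buf.append(x + feedback * delayed)     # feed back for repeating echoes
        out.append((1 - mix) * x + mix * delayed)
    return out
```

With a feedback of 0.5, each repeat arrives at half the level of the previous one, giving the repeating, decaying echo described above.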
History
The first delay effects were achieved using tape loops improvised on reel-to-reel audio tape recording systems. By shortening or lengthening the loop of tape and adjusting the read-and-write heads, the nature of the delayed echo could be controlled. This technique was most common among early composers of musique concrète such as Pierre Schaeffer, and composers such as Karlheinz Stockhausen, who had sometimes devised elaborate systems involving long tapes and multiple recorders and playback systems, collectively processing the input of a live performer or ensemble.
American producer Sam Phillips created a slapback echo effect with two Ampex 350 tape recorders in 1954. The effect was used by artists including Elvis Presley (such as on his track "Blue Moon of Kentucky") and Buddy Holly, and became one of Phillips' signatures. Guitarist and instrument designer Les Paul was an early pioneer in delay devices. According to Sound on Sound, "The character and depth of sound that was produced from tape echo on these old records is extremely lush, warm and wide."
Tape echoes became commercially available in the 1950s. Tape echo machines contain loops of tape tha |
https://en.wikipedia.org/wiki/Characterization%20test | In computer programming, a characterization test (also known as Golden Master Testing) is a means to describe (characterize) the actual behavior of an existing piece of software, and therefore protect existing behavior of legacy code against unintended changes via automated testing. This term was coined by Michael Feathers.
Overview
The goal of characterization tests is to help developers verify that the modifications made to a reference version of a software system did not modify its behavior in unwanted or undesirable ways. They enable, and provide a safety net for, extending and refactoring code that does not have adequate unit tests.
In James Bach's and Michael Bolton's classification of test oracles, this kind of testing corresponds to the historical oracle. In contrast to the usual approach of assertion-based software testing, the outcome of the test is not determined by individual values or properties (that are checked with assertions), but by comparing a complex result of the tested software process as a whole with the result of the same process in a previous version of the software. In a sense, characterization testing inverts traditional testing: traditional tests check individual properties (whitelists them), whereas characterization testing checks all properties that are not removed (blacklisted).
When creating a characterization test, one must observe what outputs occur for a given set of inputs. Given an observation that the legacy code gives a certain output based on given inputs, then a test can be written that asserts that the output of the legacy code matches the observed result for the given inputs. For example, if one observes that f(3.14) == 42, then this could be created as a characterization test. Then, after modifications to the system, the test can determine if the modifications caused changes in the results when given the same inputs.
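The workflow just described (record the legacy output once, then assert that later runs still reproduce it) can be sketched with a small helper; the helper and file names are illustrative, not from any testing library:

```python
import json
import pathlib

def characterize(func, inputs, golden_path):
    """On the first run, record func's outputs as the golden master.
    On later runs, assert the observed behavior has not changed."""
    path = pathlib.Path(golden_path)
    actual = [func(x) for x in inputs]
    if not path.exists():                      # first run: capture golden master
        path.write_text(json.dumps(actual))
        return actual
    expected = json.loads(path.read_text())    # later runs: compare against it
    assert actual == expected, f"behavior changed: {actual!r} != {expected!r}"
    return actual
```

Note that the helper never judges whether the recorded output is *correct*; it only pins down whatever the legacy code actually did, which is exactly the point of a characterization test.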
Unfortunately, as with any testing, it is generally not possible to create a characterization test fo |
https://en.wikipedia.org/wiki/Resource%20Unit | A Resource Unit (RU) is a unit in OFDMA terminology used in 802.11ax WLAN to denote a group of 78.125 kHz bandwidth subcarriers (tones) used in both DownLink (DL) and UpLink (UL) transmissions. With OFDMA, different transmit powers may be applied to different RUs. There is a maximum of 9 RUs for a 20 MHz bandwidth, 18 for 40 MHz, and more for 80 or 160 MHz bandwidths. RUs enable an Access Point station to allow multiple WLAN stations to access it simultaneously and efficiently.
Description
In the older WLAN standard (802.11ac), only a single station is allowed to transmit (uplink transmission) at any one time, although multi-user downlink (DL-MU-MIMO) from AP to non-AP stations has been supported through MIMO beamforming. The more stations are active in the network, the longer each station must wait before it is allowed to transmit, so the overall wireless traffic becomes slower.
802.11ax WLAN is the first WLAN standard to use OFDMA to enable transmissions with multiple users simultaneously (called High Efficiency Multi-User [HE-MU] Access). In OFDMA, a symbol is constructed of subcarriers whose total number defines the Physical Layer PDU bandwidth. Each user is assigned a different subset of subcarriers to achieve simultaneous data transmission in an MU (Multi-User) environment. The more subcarriers are used, the longer the symbol duration is, which means that the overall rate of information remains the same. For example, a 20 MHz OFDMA bandwidth has a total of 256 subcarriers (tones), which are grouped into sub-channels (or Resource Units).
There are three subcarrier types used in OFDMA WLAN:
Data subcarrier; used for actual data transmission
Pilot subcarrier; used for phase information and parameter tracking
Unused subcarrier which is neither data nor pilot subcarrier. This includes DC, Guard band and null subcarriers.
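As a rough illustration of the numbers above, the occupied bandwidth of an RU is approximately its tone count times the 78.125 kHz tone spacing. This is a back-of-envelope sketch (the tone counts are the RU sizes listed in the text; exact channel layouts also reserve DC, guard, and null tones):

```python
SUBCARRIER_SPACING_KHZ = 78.125   # 802.11ax tone spacing, from the text above

# RU sizes in tones, as listed for 802.11ax OFDMA
RU_TONES = [26, 52, 106, 242, 484]

def ru_bandwidth_khz(tones):
    """Approximate occupied bandwidth of an RU: tones x tone spacing."""
    return tones * SUBCARRIER_SPACING_KHZ

for t in RU_TONES:
    print(f"{t:>4}-tone RU ~ {ru_bandwidth_khz(t):9.2f} kHz")
```

A 26-tone RU thus occupies about 2 MHz, which is consistent with fitting 9 such RUs (plus overhead tones) into a 20 MHz channel.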
There are a few RUs currently defined: 26-tone RU, 52-tone RU, 106-tone RU, 242-tone RU (primary channel), 484-tone RU, 99 |
https://en.wikipedia.org/wiki/CUT%20domain | In molecular biology, the CUT domain (also known as ONECUT) is a DNA-binding motif which can bind independently or in cooperation with the homeodomain, which is often found downstream of the CUT domain. Proteins display two modes of DNA binding, which hinge on the homeodomain and on the linker that separates it from the CUT domain, and two modes of transcriptional stimulation, which hinge on the homeodomain. |
https://en.wikipedia.org/wiki/440%20%28number%29 | 440 (four hundred [and] forty) is the natural number following 439 and preceding 441.
In mathematics
440 has the factorization 2³ × 5 × 11.
440 is:
Even
The sum of the first 17 prime numbers
A harshad number
An abundant number
A happy number |
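The listed properties of 440 are easy to verify programmatically; a quick sketch with naive, stdlib-only checks:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def is_harshad(n):
    # harshad: divisible by the sum of its digits
    return n % digit_sum(n) == 0

def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

def is_happy(n):
    # repeatedly replace n by the sum of the squares of its digits
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(d) ** 2 for d in str(n))
    return n == 1

def first_primes(k):
    primes, c = [], 2
    while len(primes) < k:
        if all(c % p for p in primes):
            primes.append(c)
        c += 1
    return primes

assert sum(first_primes(17)) == 440     # sum of the first 17 primes
assert is_harshad(440)                  # 440 / (4 + 4 + 0) = 55
assert proper_divisor_sum(440) > 440    # abundant
assert is_happy(440)                    # 32 -> 13 -> 10 -> 1
```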
https://en.wikipedia.org/wiki/Rate%20of%20convergence | In numerical analysis, the order of convergence and the rate of convergence of a convergent sequence are quantities that represent how quickly the sequence approaches its limit. A sequence (x_n) that converges to L is said to have order of convergence q and rate of convergence μ if lim_(n→∞) |x_(n+1) − L| / |x_n − L|^q = μ.
The rate of convergence is also called the asymptotic error constant.
Note that this terminology is not standardized, and some authors will use rate where this article uses order.
In practice, the rate and order of convergence provide useful insights when using iterative methods for calculating numerical approximations. If the order of convergence is higher, then typically fewer iterations are necessary to yield a useful approximation. Strictly speaking, however, the asymptotic behavior of a sequence does not give conclusive information about any finite part of the sequence.
Similar concepts are used for discretization methods. The solution of the discretized problem converges to the solution of the continuous problem as the grid size goes to zero, and the speed of convergence is one of the factors of the efficiency of the method. However, the terminology, in this case, is different from the terminology for iterative methods.
Series acceleration is a collection of techniques for improving the rate of convergence of a series. Such acceleration is commonly accomplished with sequence transformations.
Convergence speed for iterative methods
Convergence definitions
Suppose that the sequence (x_n) converges to the number L. The sequence is said to converge with order q to L, and with a rate of convergence of μ, if
lim_(n→∞) |x_(n+1) − L| / |x_n − L|^q = μ
for some positive constant μ if q > 1, and 0 < μ < 1 if q = 1. It is not necessary, however, that q be an integer. For example, the secant method, when converging to a regular, simple root, has an order of φ ≈ 1.618.
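The order can also be estimated numerically from consecutive errors, since q ≈ log(e_(n+1)/e_n) / log(e_n/e_(n−1)) for errors e_n = |x_n − L|. A sketch using Newton's method for √2, which should exhibit quadratic convergence (q ≈ 2); the function names are mine:

```python
import math

def newton_sqrt2(x0, steps):
    """Iterate Newton's method for f(x) = x^2 - 2, whose root is sqrt(2)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - (x * x - 2) / (2 * x))   # Newton step
    return xs

xs = newton_sqrt2(2.0, 5)
errs = [abs(x - math.sqrt(2)) for x in xs]
# empirical order of convergence from three consecutive errors
q = math.log(errs[3] / errs[2]) / math.log(errs[2] / errs[1])
```

On this run the estimate comes out close to 2, matching the known quadratic order of Newton's method near a simple root.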
Convergence with order
q = 1 (with 0 < μ < 1) is called linear convergence, and the sequence is said to converge Q-linearly to L.
q = 2 is called quadratic convergence.
q = 3 is called cubic convergen |
https://en.wikipedia.org/wiki/Nested%20radical | In algebra, a nested radical is a radical expression (one containing a square root sign, cube root sign, etc.) that contains (nests) another radical expression. Examples include
which arises in discussing the regular pentagon, and more complicated ones such as
Denesting
Some nested radicals can be rewritten in a form that is not nested. For example, √(3 + 2√2) = 1 + √2.
Another simple example,
Rewriting a nested radical in this way is called denesting. This is not always possible, and, even when possible, it is often difficult.
Two nested square roots
In the case of two nested square roots, the following theorem completely solves the problem of denesting.
If a and c are rational numbers and c is not the square of a rational number, there are two rational numbers x and y such that
√(a + √c) = √x + √y
if and only if a² − c is the square of a rational number d.
If the nested radical is real, x and y are the two numbers
(a + d)/2 and (a − d)/2, where d = √(a² − c) is a rational number.
In particular, if a and c are integers, then 2x and 2y are integers.
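Assuming the two-nested-square-roots criterion in the form √(a + √c) = √((a+d)/2) + √((a−d)/2) when d = √(a² − c) is rational, denesting can be sketched for integer inputs (the helper name is mine; a full implementation would also handle rational a and c):

```python
from fractions import Fraction
from math import isqrt

def denest(a, c):
    """Try to denest sqrt(a + sqrt(c)) for integers a, c > 0 into
    sqrt(x) + sqrt(y); returns (x, y) as Fractions, or None when
    a^2 - c is not a perfect square (no denesting over the rationals)."""
    d2 = a * a - c
    if d2 < 0 or isqrt(d2) ** 2 != d2:
        return None
    d = isqrt(d2)
    return Fraction(a + d, 2), Fraction(a - d, 2)
```

For example, denest(3, 8) yields (2, 1), i.e. √(3 + √8) = √2 + 1, while denest(1, 2) returns None because √(1 + √2) cannot be denested.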
This result includes denestings of the form
as may always be written and at least one of the terms must be positive (because the left-hand side of the equation is positive).
A more general denesting formula could have the form
However, Galois theory implies that either the left-hand side belongs to or it must be obtained by changing the sign of either or both. In the first case, this means that one can take and In the second case, and another coefficient must be zero. If one may rename as for getting Proceeding similarly if it results that one can suppose This shows that the apparently more general denesting can always be reduced to the above one.
Proof: By squaring, the equation
is equivalent with
and, in the case of a minus in the right-hand side,
(square roots are nonnegative by definition of the notation). As the inequality may always be satisfied by possibly exchanging and , solving the first equation in and is equivalent with solving
This equality implies that bel |
https://en.wikipedia.org/wiki/66%20block | A 66 block is a type of punch-down block used to connect sets of wires in a telephone system. They have been manufactured in four common configurations: A, B, E and M. The A and B styles have the clip rows on 0.25" centers, while E and M have the clip rows on 0.20" centers. The A blocks have 25 slotted holes on the left side for positioning the incoming building cable, with a 50-slot fanning strip on the right side for distribution cables; they have been obsolete for many years. The B and M styles have a 50-slot fanning strip on both sides. The B style is used mainly in distribution panels where several destinations (often 1A2 key telephones) need to connect to the same source. The M blocks are often used to connect a single instrument to such a distribution block. The E style has five columns of ten 2-clip rows and is used for transitioning from the 25-pair distribution cable to a 25-pair RJ21-style female ribbon connector.
66 blocks are designed to terminate 20 through 26 AWG insulated solid copper wire or 18 & 19 gauge skinned solid copper wire. The 66 series connecting block, introduced in the Bell System in 1962, was the first terminating device with insulation displacement connector technology. The term 66 block reflects its Western Electric model number.
The 25-pair standard non-split 66 block contains 50 rows; each row has two (E) or four (M) or six (A) & (B) columns of clips that are electrically bonded. The 25-pair split 50 66 block is the industry standard for easy termination of voice cabling, and is a standard network termination by telephone companies—generally on commercial properties. Each row contains four (M) or six (B) clips, but the left-side clips are electrically isolated from the right-side clips. Smaller versions also exist with fewer rows for smaller-scale use, such as residential.
66 E blocks are available pre-assembled with an RJ-21 female connector that accepts a quick connection to a 25-pair cable with a male end. These connections are typica |
https://en.wikipedia.org/wiki/Exertion | Exertion is the physical or perceived use of energy. Exertion traditionally connotes a strenuous or costly effort, resulting in generation of force, initiation of motion, or in the performance of work. It often relates to muscular activity and can be quantified, empirically and by measurable metabolic response.
Physical
In physics, exertion is the expenditure of energy against, or inductive of, inertia as described by Isaac Newton's third law of motion. In physics, force exerted over a distance equates to work done. Work can be either positive or negative depending on the direction of exertion relative to gravity. For example, a force exerted upwards, like lifting an object, does positive work on that object.
Exertion often results in force generated, a contributing dynamic of general motion. In mechanics it describes the use of force against a body in the direction of its motion (see vector).
Physiological
Exertion, physiologically, can be described by the initiation of exercise, or, intensive and exhaustive physical activity that causes cardiovascular stress or a sympathetic nervous response. This can be continuous or intermittent exertion.
Exertion requires, of the body, modified oxygen uptake, increased heart rate, and autonomic monitoring of blood lactate concentrations. Mediators of physical exertion include cardio-respiratory and musculoskeletal strength, as well as metabolic capability. This often correlates to an output of force followed by a refractory period of recovery. Exertion is limited by cumulative load and repetitive motions.
Muscular energy reserves, or stores for biomechanical exertion, stem from metabolic, immediate production of ATP and increased oxygen consumption. Muscular exertion generated depends on the muscle length and the velocity at which it is able to shorten, or contract.
Perceived exertion can be explained as subjective, perceived experience that mediates response to somatic sensations and mechanisms. A rating of pe |
https://en.wikipedia.org/wiki/Mailbox%20provider | A mailbox provider, mail service provider or, somewhat improperly, email service provider is a provider of email hosting. It implements email servers to send, receive, accept, and store email for other organizations or end users, on their behalf.
The term "mail service provider" was coined in the Internet Mail Architecture document .
Types
There are various kinds of email providers. There are paid and free ones, possibly sustained by advertising. Some allow anonymous users, whereby a single user can get multiple, apparently unrelated accounts. Some require full identification credentials; for example, a company may provide email accounts to full-time staff only. Often, companies, universities, organizations, groups, and individuals that manage their mail servers themselves adopt naming conventions that make it straightforward to identify who is the owner of a given email address. Besides control of the local names, insourcing may provide for data confidentiality, network traffic optimization, and fun.
Mailbox providers typically accomplish their task by implementing Simple Mail Transfer Protocol (SMTP) and possibly providing access to messages through Internet Message Access Protocol (IMAP), the Post Office Protocol, Webmail, or a proprietary protocol. Parts of the task can still be outsourced, for example virus and spam filtering of incoming mail, or authentication of outgoing mail.
ISP-based email
Many mailbox providers are also access providers. Since email is not their core product, their email services may lack some interesting features, such as IMAP, Transport Layer Security, or SMTP Authentication; in fact, an ISP can do without the latter, as it can recognize its clients by the IP addresses it assigns them.
Free mail providers
Launched in the 1990s, AOL Mail, Hotmail, Lycos, Mail.com and Yahoo! Mail were among the early providers of free email accounts, joined by Gmail in 2004. They attract users because they are free and can advertise their service on eve |
https://en.wikipedia.org/wiki/Active%20galactic%20nucleus | An active galactic nucleus (AGN) is a compact region at the center of a galaxy that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that the luminosity is not produced by stars. Such excess, non-stellar emissions have been observed in the radio, microwave, infrared, optical, ultra-violet, X-ray and gamma ray wavebands. A galaxy hosting an AGN is called an active galaxy. The non-stellar radiation from an AGN is theorized to result from the accretion of matter by a supermassive black hole at the center of its host galaxy.
Active galactic nuclei are the most luminous persistent sources of electromagnetic radiation in the universe and, as such, can be used as a means of discovering distant objects; their evolution as a function of cosmic time also puts constraints on models of the cosmos.
The observed characteristics of an AGN depend on several properties such as the mass of the central black hole, the rate of gas accretion onto the black hole, the orientation of the accretion disk, the degree of obscuration of the nucleus by dust, and the presence or absence of jets.
Numerous subclasses of AGN have been defined on the basis of their observed characteristics; the most powerful AGN are classified as quasars. A blazar is an AGN with a jet pointed toward the Earth, in which radiation from the jet is enhanced by relativistic beaming.
History
During the first half of the 20th century, photographic observations of nearby galaxies detected some characteristic signatures of AGN emission, although there was not yet a physical understanding of the nature of the AGN phenomenon. Some early observations included the first spectroscopic detection of emission lines from the nuclei of NGC 1068 and Messier 81 by Edward Fath (published in 1909), and the discovery of the jet in Messier 87 by Heber Curtis (published in 1918). Further spectroscopic studies by astronomers including Vesto Slipher, Milton Humason, and Nicholas Mayall n |
https://en.wikipedia.org/wiki/Heterogram%20%28linguistics%29 | Heterogram (classical compound: "different" + "written") is a term used mostly in the study of ancient texts for a special kind of a logogram consisting of the embedded written representation of a word in a foreign language, which does not have a spoken counterpart in the main (matrix) language of the text. In most cases, the matrix and embedded languages share the same script. While from the perspective of the embedded language the word may be written either phonetically (representing the sounds of the embedded language) or logographically, it is never a phonetic spelling from the point of view of the matrix language of the text, since there is no relationship between the symbols used and the underlying pronunciation of the word in the matrix language.
In English, the written abbreviations e.g., i.e., and viz. are sometimes read respectively as "for example", "that is", and "namely". When read this way, the abbreviations for the Latin phrases exempli gratia, id est, and videlicet are being used logographically to indicate English phrases which are rough translations. Similarly, the ampersand ⟨&⟩, originally a ligature for the Latin word et, in many European languages stands logographically for the local word for "and" regardless of pronunciation. This can be contrasted with the older way of abbreviating et cetera—&c.—where ⟨&⟩ is used to represent et as a full loanword, not a heterogram.
Heterograms are frequent in cuneiform scripts, such as the Akkadian cuneiform, which uses Sumerian heterograms, or the Anatolian cuneiform, which uses both Sumerian and Akkadian heterograms. In Middle Iranian scripts derived from the Aramaic scripts (such as the Pahlavi scripts), all logograms are heterograms coming from Aramaic. Sometimes such heterograms are referred to by terms identifying the source language such as "Sumerograms" or "Aramaeograms".
Another example is kanji in Japanese, literally "Sinograms" or "Han characters".
See also
Heterography and homography
Ideogr |
https://en.wikipedia.org/wiki/Business%20record | A business record is a document (hard copy or digital) that records an "act, condition, or event" related to business. Business records include meeting minutes, memoranda, employment contracts, and accounting source documents.
It must be retrievable at a later date so that the business dealings can be accurately reviewed as required. Since business is dependent upon confidence and trust, not only must the record be accurate and easily retrieved, but the processes surrounding its creation and retrieval must be perceived by customers and the business community to consistently deliver a full and accurate record with no gaps or additions.
Most business records have specified retention periods based on legal requirements and/or internal company policies. This is important because in many countries (including the United States), many documents may be required by law to be disclosed to government regulatory agencies or to the general public. Likewise, they may be discoverable if the business is sued. Under the business records exception in the Federal Rules of Evidence, certain types of business records, particularly those made and kept with regularity, may be considered admissible in court despite containing hearsay.
See also
Records management
Information governance
Regulation Fair Disclosure
Sarbanes-Oxley Act |
https://en.wikipedia.org/wiki/Flip-flop%20%28electronics%29 | In electronics, flip-flops and latches are circuits that have two stable states that can store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.
Flip-flops and latches are used as data storage elements to store a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data storage can be used for storage of state, and such a circuit is described as sequential logic in electronics. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
The term flip-flop has historically referred generically to both level-triggered (asynchronous, transparent, or opaque) and edge-triggered (synchronous, or clocked) circuits that store a single bit of data using gates. Modern authors reserve the term flip-flop exclusively for edge-triggered storage elements and latches for level-triggered ones. The terms "edge-triggered" and "level-triggered" may be used to avoid ambiguity.
When a level-triggered latch is enabled it becomes transparent, but an edge-triggered flip-flop's output only changes on a clock edge (either positive going or negative going).
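The behavioural difference can be sketched in software. This is a toy model, not a circuit simulation; the class and method names are mine:

```python
class DLatch:
    """Level-triggered: transparent (output follows D) while enable is high."""
    def __init__(self):
        self.q = 0

    def step(self, d, enable):
        if enable:                    # transparent while enabled
            self.q = d
        return self.q

class DFlipFlop:
    """Edge-triggered: samples D only on the rising clock edge."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def step(self, d, clk):
        if clk and not self._prev_clk:   # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q
```

While its control input stays high, the latch keeps following D, but the flip-flop holds whatever it sampled on the last rising edge until the next one arrives.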
Different types of flip-flops and latches are available as integrated circuits, usually with multiple elements per chip. For example, 74HC75 is a quadruple transparent latch in the 7400 series.
History
The first electronic latch was invented in 1918 by the British physicists William Eccles |
https://en.wikipedia.org/wiki/Josephinian%20Land%20Survey | The Josephinian Land Survey () was the first comprehensive land survey and mapping of the Habsburg Empire. The survey was ordered by Holy Roman Empress Maria Theresa after Austria's defeat in the Seven Years' War. It was conducted from 1763 to 1787, concluding in the reign of Holy Roman Emperor Joseph II. The maps are currently stored in the National Archives of Austria. |
https://en.wikipedia.org/wiki/Saprobiont | Saprobionts are organisms that digest their food externally and then absorb the products. This process is called saprotrophic nutrition. Fungi are examples of saprobiontic organisms, which are a type of decomposer.
Saprobiontic organisms feed off dead and/or decaying biological materials. Digestion is accomplished by the excretion of digestive enzymes, which break down cell tissues and allow saprobionts to extract the nutrients they need while leaving the indigestible waste. This process, called extracellular digestion, is very important to the nutrient cycle in ecosystems.
Saprobionts should not be confused with detritivores, another class of decomposers which digest internally.
These organisms can be good sources of extracellular enzymes for industrial processes such as the production of fruit juice. For instance, the fungus Aspergillus niger is used to produce pectinase, an enzyme which is used to break down pectin in juice concentrates, making the juice appear more translucent. |
https://en.wikipedia.org/wiki/Pilea%20cavernicola | Pilea cavernicola is a herbaceous plant about 0.5 meters tall, native to China. A sciophyte, it grows in very low light conditions in caves in Fengshan County, Guangxi, China. |
https://en.wikipedia.org/wiki/Cameron%E2%80%93Martin%20theorem | In mathematics, the Cameron–Martin theorem or Cameron–Martin formula (named after Robert Horton Cameron and W. T. Martin) is a theorem of measure theory that describes how abstract Wiener measure changes under translation by certain elements of the Cameron–Martin Hilbert space.
Motivation
The standard Gaussian measure γ^n on n-dimensional Euclidean space R^n is not translation-invariant. (In fact, there is a unique translation-invariant Radon measure up to scale by Haar's theorem: the n-dimensional Lebesgue measure, denoted here dx.) Instead, a measurable subset A has Gaussian measure
γ^n(A) = (2π)^(−n/2) ∫_A exp(−½⟨x, x⟩) dx.
Here ⟨x, x⟩ refers to the standard Euclidean dot product in R^n. The Gaussian measure of the translation of A by a vector h ∈ R^n is
γ^n(A − h) = (2π)^(−n/2) ∫_A exp(⟨x, h⟩ − ½⟨h, h⟩) exp(−½⟨x, x⟩) dx.
So under translation through h, the Gaussian measure scales by the distribution function appearing in the last display:
exp(⟨x, h⟩ − ½⟨h, h⟩).
The measure that associates to the set A the number γ^n(A − h) is the pushforward measure, denoted (T_h)_*(γ^n). Here T_h : R^n → R^n refers to the translation map T_h(x) = x + h. The above calculation shows that the Radon–Nikodym derivative of the pushforward measure with respect to the original Gaussian measure is given by
d(T_h)_*(γ^n) / dγ^n (x) = exp(⟨h, x⟩ − ½⟨h, h⟩).
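The finite-dimensional change-of-measure can be checked by Monte Carlo: in one dimension, the density of the Gaussian translated by h with respect to the standard one is exp(xh − h²/2), so E[f(X + h)] should equal E[f(X)·exp(Xh − h²/2)] for standard normal X. A sketch (the shift, sample size, and test function are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
h = 0.5                                  # translation (one-dimensional case)
N = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(N)]

# Radon-Nikodym derivative of the shifted Gaussian w.r.t. the standard one
w = [math.exp(x * h - 0.5 * h * h) for x in xs]

def f(z):
    return 1.0 if z > 1.0 else 0.0       # indicator of the half-line (1, inf)

lhs = sum(f(x + h) for x in xs) / N                    # E[f(X + h)] directly
rhs = sum(f(x) * wi for x, wi in zip(xs, w)) / N       # E[f(X) * w(X)]
```

Both estimates approximate P(Z > 1 − h) = P(Z > 0.5) ≈ 0.3085, confirming the density formula numerically.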
The abstract Wiener measure on a separable Banach space , where is an abstract Wiener space, is also a "Gaussian measure" in a suitable sense. How does it change under translation? It turns out that a similar formula to the one above holds if we consider only translations by elements of the dense subspace .
Statement of the theorem
Let (i, H, E) be an abstract Wiener space with abstract Wiener measure γ. For h ∈ H, define T_h : E → E by T_h(x) = x + h. Then (T_h)_*(γ) is equivalent to γ with Radon–Nikodym derivative
d(T_h)_*(γ) / dγ (x) = exp(⟨h, x⟩~ − ½ ‖h‖²_H),
where ⟨h, x⟩~ denotes the Paley–Wiener integral.
The Cameron–Martin formula is valid only for translations by elements of the dense subspace , called Cameron–Martin space, and not by arbitrary elements of . If the Cameron–Martin formula did hold for arbitrary translations, it would contradict the following result:
If is a separable Banach space and is a locally finite Borel measure on that is equivalent to its own push |
https://en.wikipedia.org/wiki/Newton%20for%20Beginners | Newton for Beginners, republished as Introducing Newton, is a 1993 graphic study guide to Isaac Newton and classical physics written and illustrated by William Rankin. The volume, according to the publisher's website, "explains the extraordinary ideas of a man who [...] single-handedly made enormous advances in mathematics, mechanics and optics," and, "was also a secret heretic, a mystic and an alchemist."
"William Rankin," Public Understanding of Science reviewer Patrick Fullick confirms, "sets out to illuminate the man whose work laid the foundations of the physics of the last 350 years, and to place him and his work in the context of the times in which he lived." New Scientist reviewer Roy Herbert adds that, "alongside theories of the Universe from ancient times, the book explains those originating since Isaac Newton, so placing him deftly in his scientific context."
Publication History
This volume was originally published in the UK by Icon Books in 1993 as Newton for Beginners, and subsequently republished with different covers in different editions.
Selected editions:
Related volumes in the series:
Reception
"This book shares the general characteristics of the Beginners series with a large number of line drawings and cartoons with associated text and many asides," states Patrick Fullick, writing in Public Understanding of Science, "for some readers the asides may seem idiosyncratic or even annoying." "Some may dislike the humour and bad puns that abound in this work," confirms Bill Palmer, writing in the Journal of the Science Teacher Association of the Northern Territory, "but I suspect that those starting the study of Newton's life and work will appreciate this attempt to facilitate reading."
"The book is well-grounded in recent historiography," and, "Rankin is clearly sympathetic towards his subject," states Fullick, "but inevitably Newton still comes over as one whose intellectual vanity was at times apt to overcome his self-control." Roy Herbert |
https://en.wikipedia.org/wiki/Proofing%20%28baking%20technique%29 | In cooking, proofing (also called proving) is a step in the preparation of yeast bread and other baked goods in which the dough is allowed to rest and rise a final time before baking. During this rest period, yeast ferments the dough and produces gases, thereby leavening the dough.
In contrast, proofing or blooming yeast (as opposed to proofing the dough) may refer to the process of first suspending yeast in warm water, a necessary hydration step when baking with active dry yeast. Proofing can also refer to the process of testing the viability of dry yeast by suspending it in warm water with carbohydrates (sugars). If the yeast is still alive, it will feed on the sugar and produce a visible layer of foam on the surface of the water mixture.
Fermentation rest periods are not always explicitly named, and can appear in recipes as "Allow dough to rise." When they are named, terms include "bulk fermentation", "first rise", "second rise", "final proof" and "shaped proof".
Dough processes
The process of making yeast-leavened bread involves a series of alternating work and rest periods. Work periods occur when the dough is manipulated by the baker; they include mixing, kneading, and folding, as well as division, shaping, and panning. Work periods are typically followed by rest periods, which occur when dough is allowed to sit undisturbed. Particular rest periods include, but are not limited to, autolyse, bulk fermentation and proofing. Proofing, also sometimes called final fermentation, is the specific term for allowing dough to rise after it has been shaped and before it is baked.
Some breads begin mixing with an autolyse. This refers to a period of rest after the initial mixing of flour and water, a rest period that occurs sequentially before the addition of yeast, salt and other ingredients. This rest period allows for better absorption of water and helps the gluten and starches to align. The autolyse is credited to Raymond Calvel, who recommende |
https://en.wikipedia.org/wiki/Metals%20in%20medicine | Metals in medicine are used in organic systems for diagnostic and treatment purposes. Inorganic elements are also essential for organic life as cofactors in enzymes called metalloproteins. When metals are scarce or present in excessive quantities, this equilibrium is thrown out of balance and must be restored via interventional and natural methods.
Toxic metals
Metals can be toxic in high quantities. Either ingestion or faulty metabolic pathways can lead to metal toxicity (metal poisoning). Sources of toxic metals include cadmium from tobacco, arsenic from agriculture and mercury from volcanoes and forest fires. Nature, in the form of trees and plants, is able to trap many toxins and can bring abnormally high levels back into equilibrium. Toxic metal poisoning is usually treated with some type of chelating agent. Heavy metal poisoning, such as from mercury, cadmium, or lead, is particularly pernicious.
Examples of specific types of toxic metals include:
Copper: copper toxicity usually presents as a side effect of low levels of the protein ceruloplasmin, which is normally involved in copper storage. This is referred to as Wilson's disease, an autosomal recessive genetic disorder in which a mutation causes malfunction of the ATPase that transports copper into bile and ultimately incorporates it into ceruloplasmin.
Plutonium: since the dawn of the nuclear age, plutonium poisoning has been a potential danger, especially among nuclear reactor employees; inhalation of Pu dust is particularly dangerous due to its intense alpha particle emission. There have been very few cases of plutonium poisoning.
Mercury: mercury is usually ingested from agricultural sources or other environmental sources. Mercury poisoning can lead to neurological disease and kidney failure if left untreated.
Iron: iron toxicity, iron poisoning, or iron overload is well known. Iron tests only weakly positive in the Ames test for cancer; however, since it is such a strong catal |
https://en.wikipedia.org/wiki/Sugeno%20integral | In mathematics, the Sugeno integral, named after M. Sugeno, is a type of integral with respect to a fuzzy measure.
Let $(X, \Omega)$ be a measurable space and let $h : X \to [0,1]$ be an $\Omega$-measurable function.
The Sugeno integral over the crisp set $A \subseteq X$ of the function $h$ with respect to the fuzzy measure $g$ is defined by:
$$\int_A h(x) \circ g = \sup_{\alpha \in [0,1]} \left[\alpha \wedge g(A \cap F_\alpha)\right]$$
where $F_\alpha = \{x \mid h(x) \geq \alpha\}$.
The Sugeno integral over the fuzzy set $\tilde{A}$ of the function $h$ with respect to the fuzzy measure $g$ is defined by:
$$\int_{\tilde{A}} h(x) \circ g = \int_X \left[h_{\tilde{A}}(x) \wedge h(x)\right] \circ g$$
where $h_{\tilde{A}}(x)$ is the membership function of the fuzzy set $\tilde{A}$.
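On a finite set, the supremum in the definition can be computed directly, since it is attained at one of the observed values of $h$. A minimal sketch in Python (the function name and the example "relative cardinality" measure are illustrative, not from the source):

```python
def sugeno_integral(values, measure):
    """Discrete Sugeno integral of h over X = {0, ..., n-1}.

    values:  list of h(x) in [0, 1], one entry per element of X
    measure: callable mapping a frozenset of indices E to g(E) in [0, 1]
    """
    n = len(values)
    result = 0.0
    # It suffices to try each observed value of h as the threshold alpha,
    # because the level set F_alpha is constant between observed values.
    for alpha in sorted(set(values)):
        f_alpha = frozenset(i for i in range(n) if values[i] >= alpha)
        result = max(result, min(alpha, measure(f_alpha)))
    return result

# Example with the normalized counting measure g(E) = |E| / |X|
h = [0.2, 0.9, 0.6, 0.4]
g = lambda e: len(e) / len(h)
print(sugeno_integral(h, g))  # → 0.5
```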
Usage and Relationships
The Sugeno integral is related to the h-index. |
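One way to see this relationship: the h-index of a list of citation counts is exactly a Sugeno-type integral of the counts with respect to the counting measure, i.e. $\max_a \min\bigl(a,\ |\{p : c(p) \geq a\}|\bigr)$. A short sketch (the function name is illustrative):

```python
def h_index(citations):
    """h-index computed as a Sugeno-type integral of the citation counts
    with respect to the counting measure: max over thresholds a of
    min(a, number of papers with at least a citations)."""
    return max(
        (min(a, sum(1 for c in citations if c >= a))
         for a in range(0, max(citations, default=0) + 1)),
        default=0,
    )

print(h_index([10, 8, 5, 4, 3]))  # → 4
```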
https://en.wikipedia.org/wiki/FICON | FICON (Fibre Connection) is the IBM proprietary name for the ANSI FC-SB-3 Single-Byte Command Code Sets-3 Mapping Protocol for Fibre Channel (FC). It is an FC layer-4 protocol used to map IBM's antecedent channel-to-control-unit cabling infrastructure and protocol (either ESCON or parallel Bus and Tag) onto standard FC services and infrastructure. The topology is a fabric utilizing FC switches or directors. Valid data rates are 1, 2, 4, 8 and 16 gigabits per second, at distances up to 100 km.
FICON was introduced in 1998 as part of the fifth generation of IBM System/390 mainframes. After 2011, FICON replaced ESCON in new IBM mainframe deployments because of FICON's technical superiority (especially its higher performance) and lower cost.
Protocol internals
Each FICON channel port is capable of multiple concurrent data exchanges (a maximum of 32) in full duplex mode. Information for active exchanges is transferred in Fibre Channel sequences mapped as FICON Information Units (IUs) which consist of one to four Fibre Channel frames, only the first of which carries 32 bytes of FICON (FC-SB-3) mapping protocol. Each FICON exchange may transfer one or many such IUs.
FICON channels use five classes of IUs to conduct information transfers between a channel and a control unit: Data, Command, Status, Control, and Link Control. Only a channel port may send Command or Command and Data IUs, while only a control unit port may send Status IUs.
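The sender restrictions above can be sketched as a small validity check. This is an illustrative model only; the type and function names are not taken from the FC-SB-3 specification:

```python
from enum import Enum

class Port(Enum):
    CHANNEL = "channel"
    CONTROL_UNIT = "control_unit"

class IUClass(Enum):
    DATA = "data"
    COMMAND = "command"
    STATUS = "status"
    CONTROL = "control"
    LINK_CONTROL = "link_control"

def may_send(port: Port, iu: IUClass) -> bool:
    """Direction rules stated in the text: only a channel port sends
    Command IUs; only a control-unit port sends Status IUs. The other
    classes are assumed here to flow in either direction."""
    if iu is IUClass.COMMAND:
        return port is Port.CHANNEL
    if iu is IUClass.STATUS:
        return port is Port.CONTROL_UNIT
    return True

print(may_send(Port.CHANNEL, IUClass.COMMAND))       # → True
print(may_send(Port.CONTROL_UNIT, IUClass.COMMAND))  # → False
```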
As with prior Z channel protocols, there is a concept of a channel to control unit "connection". In its most primitive form, a connection is associated with a single channel program. In practice, a single channel program may result in the establishment of several sequential connections. This normally occurs during periods where data transfers become dormant while waiting for some type of independent device activity to complete (such as the physical positioning of tape or a disk access arm). In such |
https://en.wikipedia.org/wiki/Coprostanol | 5β-Coprostanol (5β-cholestan-3β-ol) is a 27-carbon stanol formed from the biohydrogenation of cholesterol (cholest-5-en-3β-ol) in the gut of most higher animals and birds. This compound has frequently been used as a biomarker for the presence of human faecal matter in the environment.
Chemical properties
Solubility
5β-coprostanol has a low water solubility, and consequently a high octanol-water partition coefficient (log Kow = 8.82). This means that in most environmental systems, 5β-coprostanol will be associated with the solid phase.
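As a rough worked figure (assuming the quoted value), the partition coefficient itself is:

```latex
K_{ow} = \frac{C_{\text{octanol}}}{C_{\text{water}}} = 10^{8.82} \approx 6.6 \times 10^{8}
```

That is, at equilibrium the concentration in the octanol (lipid-like) phase exceeds that in water by nearly nine orders of magnitude, which is why the compound partitions so strongly onto particles and sediments.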
Degradation
In anaerobic sediments and soils, 5β-coprostanol is stable for many hundreds of years enabling it to be used as an indicator of past faecal discharges. As such, records of 5β-coprostanol from paleo-environmental archives have been used to further constrain the timing of human settlements in a region, as well as reconstruct relative changes in human populations and agricultural activities over several thousand years.
Chemical analysis
Since the molecule has a hydroxyl (-OH) group, it is frequently bound to other lipids including fatty acids; most analytical methods, therefore, utilise a strong alkali (KOH or NaOH) to saponify the ester linkages. Typical extraction solvents include 6% KOH in methanol. The free sterols and stanols (saturated sterols) are then separated from the polar lipids by partitioning into a less polar solvent such as hexane. Prior to analysis, the hydroxyl group is frequently derivatised with BSTFA (bis-trimethylsilyl trifluoroacetamide) to replace the hydrogen with the less exchangeable trimethylsilyl (TMS) group. Instrumental analysis is frequently conducted on a gas chromatograph (GC) with either a flame ionisation detector (FID) or a mass spectrometer (MS). The mass spectrum for the 5β-coprostanol TMS ether can be seen in the figure.
Isomers
As well as the faecally derived stanol, two other isomers can be identified in the environment: 5α-cholestanol
Formation and occurrence
Faecal sources
5β-copro |