source | text |
|---|---|
https://en.wikipedia.org/wiki/Dopant%20activation | Dopant activation is the process of obtaining the desired electronic contribution from impurity species in a semiconductor host. The term is often restricted to the application of thermal energy following the ion implantation of dopants. In the most common industrial example, rapid thermal processing is applied to silicon following the ion implantation of dopants such as phosphorus, arsenic and boron. Vacancies generated at elevated temperature (1200 °C) facilitate the movement of these species from interstitial to substitutional lattice sites while amorphization damage from the implantation process recrystallizes. The process is deliberately rapid: peak temperature is often maintained for less than one second to minimize unwanted chemical diffusion. |
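As a rough back-of-the-envelope illustration of why sub-second anneals limit diffusion, here is a sketch of the diffusion length √(Dt) under an Arrhenius diffusivity; the D0 and Ea values are assumed, textbook-style numbers for boron in silicon, not authoritative constants:

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_nm(d0_cm2_s, ea_ev, temp_c, time_s):
    """Diffusion length sqrt(D*t), with Arrhenius D = D0*exp(-Ea/kT), in nm."""
    temp_k = temp_c + 273.15
    d = d0_cm2_s * math.exp(-ea_ev / (K_BOLTZMANN_EV * temp_k))  # cm^2/s
    return math.sqrt(d * time_s) * 1e7  # cm -> nm

# Assumed values for boron in silicon: D0 ~ 0.76 cm^2/s, Ea ~ 3.46 eV.
# A one-second spike anneal vs. a one-hour furnace anneal at 1000 degrees C:
for t in (1.0, 3600.0):
    print(f"t = {t:6.0f} s -> diffusion length ~ {diffusion_length_nm(0.76, 3.46, 1000, t):.1f} nm")
```

Even at high temperature, one second of diffusion moves dopants only a nanometre or so, versus tens of nanometres for a long furnace anneal.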
https://en.wikipedia.org/wiki/Tapinella%20panuoides | Tapinella panuoides is a fungus species in the genus Tapinella.
Atromentin is a phenolic compound. The first enzymes in its biosynthesis have been characterised in T. panuoides.
Despite its pleasant taste, the species is poisonous. |
https://en.wikipedia.org/wiki/Chandy%E2%80%93Lamport%20algorithm | The Chandy–Lamport algorithm is a snapshot algorithm that is used in distributed systems for recording a consistent global state of an asynchronous system. It was developed by and named after Leslie Lamport and K. Mani Chandy.
History
According to Leslie Lamport's website, “The distributed snapshot algorithm described here came about when I visited Chandy, who was then at the University of Texas in Austin. He posed the problem to me over dinner, but we had both had too much wine to think about it right then. The next morning, in the shower, I came up with the solution. When I arrived at Chandy's office, he was waiting for me with the same solution.”
Definition
The assumptions of the algorithm are as follows:
There are no failures and all messages arrive intact and only once
The communication channels are unidirectional and FIFO ordered
There is a communication path between any two processes in the system
Any process may initiate the snapshot algorithm
The snapshot algorithm does not interfere with the normal execution of the processes
Each process in the system records its local state and the state of its incoming channels
The algorithm works using marker messages. Each process that wants to initiate a snapshot records its local state and sends a marker on each of its outgoing channels. All the other processes, upon receiving a marker, record their local state, the state of the channel from which the marker just came as empty, and send marker messages on all of their outgoing channels. If a process receives a marker after having recorded its local state, it records the state of the incoming channel from which the marker came as carrying all the messages received since it first recorded its local state.
Some of the assumptions of the algorithm can be satisfied in practice by using a reliable communication protocol such as TCP/IP. The algorithm can also be adapted so that multiple snapshots may occur simultaneously.
Algorithm
The Chandy–Lamport algorit |
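A minimal executable sketch of the marker rule described above, assuming FIFO channels modeled as deques; the two-process setup and message names are illustrative choices, not part of the algorithm's statement:

```python
from collections import deque

# Single-snapshot sketch over simulated FIFO channels, keyed by (src, dst).
MARKER = object()

procs = {"P": 10, "Q": 20}                       # process -> local state
chans = {("P", "Q"): deque(), ("Q", "P"): deque()}
recorded = {}                                    # process -> recorded local state
chan_log = {}                                    # channel -> its recorded message state
marker_seen = set()                              # channels whose marker has arrived

def record_and_send_markers(p):
    recorded[p] = procs[p]                       # record local state...
    for (src, _), ch in chans.items():
        if src == p:
            ch.append(MARKER)                    # ...and send markers on outgoing channels

def deliver(src, dst):
    msg = chans[(src, dst)].popleft()
    if msg is MARKER:
        if dst not in recorded:                  # first marker: record; channel state empty
            record_and_send_markers(dst)
        chan_log.setdefault((src, dst), [])      # close this channel's recording
        marker_seen.add((src, dst))
    elif dst in recorded and (src, dst) not in marker_seen:
        # Message crossed the snapshot cut: it belongs to the channel's state.
        chan_log.setdefault((src, dst), []).append(msg)

chans[("Q", "P")].append("m1")                   # an application message still in flight
record_and_send_markers("P")                     # P initiates the snapshot
deliver("P", "Q")                                # Q receives P's marker, records, replies
deliver("Q", "P")                                # P receives in-flight m1 first (FIFO)...
deliver("Q", "P")                                # ...then Q's marker closes the channel
print(recorded)                                  # {'P': 10, 'Q': 20}
print(chan_log)                                  # {('P', 'Q'): [], ('Q', 'P'): ['m1']}
```

Note how the in-flight message "m1" is captured as channel state rather than process state, which is exactly what makes the recorded global state consistent.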
https://en.wikipedia.org/wiki/Nonlinear%20expectation | In probability theory, a nonlinear expectation is a nonlinear generalization of the expectation. Nonlinear expectations are useful in utility theory as they more closely match human behavior than traditional expectations. The common use of nonlinear expectations is in assessing risks under uncertainty. Generally, nonlinear expectations are categorized into sub-linear and super-linear expectations depending on their additivity properties. Much of the study of nonlinear expectation is attributed to the work of mathematicians within the past two decades.
Definition
A functional $\mathbb{E}: \mathcal{H} \to \mathbb{R}$ (where $\mathcal{H}$ is a vector lattice on a probability space) is a nonlinear expectation if it satisfies:
Monotonicity: if $X, Y \in \mathcal{H}$ are such that $X \geq Y$, then $\mathbb{E}[X] \geq \mathbb{E}[Y]$
Preserving of constants: if $c \in \mathbb{R}$, then $\mathbb{E}[c] = c$
The collection consisting of the underlying set, the linear space $\mathcal{H}$ of functions on that set, and the nonlinear expectation $\mathbb{E}$ is called a nonlinear expectation space.
Often other properties are also desirable, for instance convexity, subadditivity, positive homogeneity, and translation invariance with respect to constants. For a nonlinear expectation to be further classified as a sublinear expectation, the following two conditions must also be met:
Subadditivity: for all $X, Y \in \mathcal{H}$, $\mathbb{E}[X + Y] \leq \mathbb{E}[X] + \mathbb{E}[Y]$
Positive homogeneity: for all $\lambda \geq 0$, $\mathbb{E}[\lambda X] = \lambda \mathbb{E}[X]$
For a nonlinear expectation to instead be classified as a superlinear expectation, the subadditivity condition above is instead replaced by the condition:
Superadditivity: for all $X, Y \in \mathcal{H}$, $\mathbb{E}[X + Y] \geq \mathbb{E}[X] + \mathbb{E}[Y]$
Examples
Choquet expectation: a subadditive or superadditive integral that is used in image processing and behavioral decision theory.
g-expectation via nonlinear BSDE's: frequently used to model financial drift uncertainty.
If $\rho$ is a risk measure, then $\mathbb{E}[X] := \rho(-X)$ defines a nonlinear expectation.
Markov Chains: for the prediction of events undergoing model uncertainties. |
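As a concrete illustration of the sublinear case, a small numerical sketch of an upper expectation $\mathbb{E}[X] = \sup_{P \in \mathcal{P}} E_P[X]$ over a finite family of measures; the sample space and measures are invented for illustration:

```python
import numpy as np

# Upper expectation over a finite family of measures on a 3-point space.
measures = np.array([[0.2, 0.3, 0.5],
                     [0.6, 0.2, 0.2],
                     [0.25, 0.25, 0.5]])     # each row is a probability measure P

def E(x):
    """Sublinear expectation: sup_P E_P[X] over the family above."""
    return float(np.max(measures @ np.asarray(x, dtype=float)))

X = np.array([1.0, -2.0, 4.0])
Y = np.array([0.5, 0.0, -1.0])

print(E(X + Y) <= E(X) + E(Y))               # subadditivity holds
print(np.isclose(E(2.5 * X), 2.5 * E(X)))    # positive homogeneity holds
print(np.isclose(E(np.full(3, 7.0)), 7.0))   # constants are preserved
```

Such sup-of-linear-expectations constructions are the prototypical sublinear expectations; the printed checks mirror the conditions listed above.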
https://en.wikipedia.org/wiki/Alliance%20for%20Affordable%20Internet | The Alliance for Affordable Internet (A4AI) is an initiative to make the Internet more affordable to people around the world. The World Wide Web Foundation serves as the Secretariat, and major members of the coalition include Google, the Omidyar Network, the Department for International Development, USAID, Facebook, Cisco, Intel, Microsoft, UN Women and many others from the public, private and civil society sectors.
History
Purpose
A4AI was created with the goal of obtaining global broadband internet access priced at less than 5% of average per capita income globally, the target set by the UN Broadband Commission. It cites lack of investment in infrastructure, weak market competition, and inefficient taxation, among other policy and regulatory obstacles, as major constraints to reducing prices.
It regards the internet as an essential source of information and services for all, and advocates an open, competitive broadband and telecommunications market regulated by an independent agency. Particular attention is paid to internet freedom and the rights of online expression.
It works closely with governments and local stakeholders in Africa, Asia, and Latin America on policy and regulatory reform through a combination of advocacy, research and knowledge-sharing activities.
Launch
The initiative was officially launched on October 7, 2013, at the "Commonwealth Telecommunications Organisation Forum" in Abuja, Nigeria. The launch was covered by many news sources.
Reception
A4AI was briefly discussed in relation to Internet.org, a Facebook-led initiative for Internet accessibility, by David Talbot in an article for Technology Review.
1 for 2 target
The A4AI has also coined the term "affordability threshold" in its "1 for 2 target".
It considers the affordability threshold to be at 1GB of mobile broadband data priced at 2% or less of average monthly income. The UN Broadband Commission has adopted the “1 for 2” target as the new standard for affordable inte |
https://en.wikipedia.org/wiki/Interstellar%20communication | Interstellar communication is the transmission of signals between planetary systems. Sending interstellar messages is potentially much easier than interstellar travel, being possible with technologies and equipment which are currently available. However, the distances from Earth to other potentially inhabited systems introduce prohibitive delays, assuming the limitations of the speed of light. Even an immediate reply to radio communications sent to stars tens of thousands of light-years away would take many human generations to arrive.
Radio
The SETI project has for the past several decades been conducting a search for signals being transmitted by extraterrestrial life located outside the Solar System, primarily in the radio frequencies of the electromagnetic spectrum. Special attention has been given to the Water Hole, the frequency of one of neutral hydrogen's absorption lines, due to the low background noise at this frequency and its symbolic association with the basis for what is likely to be the most common system of biochemistry (but see alternative biochemistry).
The regular radio pulses emitted by pulsars were briefly thought to be potential intelligent signals; the first pulsar to be discovered was originally designated "LGM-1", for "Little Green Men." They were quickly determined to be of natural origin, however.
Several attempts have been made to transmit signals to other stars as well. (See "Realized projects" at Active SETI.) One of the earliest and most famous was the 1974 radio message sent from the largest radio telescope in the world, the Arecibo Observatory in Puerto Rico. An extremely simple message was aimed at the globular star cluster M13 in the Milky Way Galaxy, at a distance of 30,000 light years from the Solar System. These efforts have been more symbolic than anything else, however. Moreover, any reply would take double the one-way travel time: decades for nearby stars, or 60,000 years in the case of M13.
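A trivial sketch of the round-trip arithmetic above (M13's distance is taken from the text; the nearby-star figure is an assumed example):

```python
# Earliest possible reply arrives after twice the one-way light travel time.
for target, distance_ly in [("nearby star", 10), ("M13", 30_000)]:
    print(f"{target}: round trip >= {2 * distance_ly:,} years")
```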
Other methods
It has also bee |
https://en.wikipedia.org/wiki/Vensim | Vensim is simulation software developed by Ventana Systems. It primarily supports continuous simulation (system dynamics), with some discrete event and agent-based modelling capabilities. It is available commercially and as a free "Personal Learning Edition".
Modeling environment
Vensim provides a graphical modeling interface with stock and flow and causal loop diagrams, on top of a text-based system of equations in a declarative programming language. It includes a patented method for interactive tracing of behavior through causal links in model structure, as well as a language extension for automating quality control experiments on models called Reality Check.
The modeling language supports arrays (subscripts) and permits mapping among dimensions and aggregation. Built-in allocation functions satisfy constraints that are sometimes not met by conventional approaches like logit. It supports discrete delays, queues and a variety of stochastic processes.
There are multiple paths for cross sectional and time-series data import and export, including text files, spreadsheets and ODBC. Models may be calibrated against data using optimization, Kalman Filtering or Markov chain Monte Carlo methods. Sensitivity analysis options provide a variety of ways to test and sample models, including Monte Carlo simulation with Latin Hypercube sampling.
Vensim model files can be packaged and published in a customizable read-only format that can be executed by a freely available Model Reader. This allows sharing of interactive models with users who do not own the program and/or who the model author does not wish to have access to the model's code base.
Applications
Vensim is general-purpose software, used in a wide variety of problem domains. Common or high-profile applications include:
Transportation and Energy
Business Strategy
Health
Security and Terrorism
Project Management
Marketing Science in Pharmaceuticals and Consumer Products
Logistics
Environment
See also
|
https://en.wikipedia.org/wiki/Ocean%20temperature | Several factors cause the ocean temperature to vary: depth, geographical location and season. Both the temperature and salinity of ocean water vary. Warm surface water is generally saltier than the cooler deep or polar waters. In polar regions, the upper layers of ocean water are cold and fresh. Deep ocean water is cold, salty water found deep below the surface of Earth's oceans. This water has a uniform temperature of around 0–3 °C. The ocean temperature also depends on the amount of solar radiation falling on its surface. In the tropics, with the Sun nearly overhead, the temperature of the surface layers can rise to over 30 °C. Near the poles the temperature in equilibrium with the sea ice is about −2 °C. There is a continuous circulation of water in the oceans. Thermohaline circulation (THC) is part of the large-scale ocean circulation. It is driven by global density gradients created by surface heat and freshwater fluxes. Warm surface currents cool as they move away from the tropics; the water becomes denser and sinks. Changes in temperature and density move the cold water back towards the equator as a deep sea current, until it eventually wells up again towards the surface.
Ocean temperature as a term applies to the temperature in the ocean at any depth. It can also apply specifically to ocean temperatures that are not near the surface, in which case it is synonymous with "deep ocean temperature".
It is clear that the oceans are warming as a result of climate change and this rate of warming is increasing. The upper ocean (above 700 m) is warming fastest, but the warming trend extends throughout the ocean. In 2022, the global ocean was the hottest ever recorded by humans.
Definition and types
Sea surface temperature
Deep ocean temperature
Experts refer to the temperature further below the surface as "ocean temperature" or "deep ocean temperature". Ocean temperatures more than 20 metres below the surface vary by region and time. They con |
https://en.wikipedia.org/wiki/S-Adenosyl-L-homocysteine | S-Adenosyl-L-homocysteine (SAH) is the biosynthetic precursor to homocysteine. SAH is formed by the demethylation of S-adenosyl-L-methionine. Adenosylhomocysteinase converts SAH into homocysteine and adenosine.
Biological role
DNA methyltransferases are inhibited by SAH. Two S-adenosyl-L-homocysteine cofactor products can bind the active site of DNA methyltransferase 3B and prevent the DNA duplex from binding to the active site, which inhibits DNA methylation. |
https://en.wikipedia.org/wiki/Forensic%20statistics | Forensic statistics is the application of probability models and statistical techniques to scientific evidence, such as DNA evidence, and the law. In contrast to "everyday" statistics, and so as not to introduce bias or draw undue conclusions, forensic statisticians report likelihoods as likelihood ratios (LR). This ratio of probabilities is then used by juries or judges to draw inferences or conclusions and decide legal matters. Jurors and judges rely on the strength of a DNA match, given by statistics, to make conclusions and determine guilt or innocence in legal matters.
In forensic science, the DNA evidence received for DNA profiling often contains a mixture of more than one person's DNA. DNA profiles are generated using a set procedure; however, the interpretation of a DNA profile becomes more complicated when the sample contains a mixture of DNA. Regardless of the number of contributors to the forensic sample, statistics and probabilities must be used to provide weight to the evidence and to describe what the results of the DNA evidence mean. In a single-source DNA profile, the statistic used is termed the random match probability (RMP). RMPs can also be used in certain situations to describe the results of the interpretation of a DNA mixture. Other statistical tools to describe DNA mixture profiles include likelihood ratios (LR) and the combined probability of inclusion (CPI), also known as random man not excluded (RMNE).
Computer programs have been implemented with forensic DNA statistics for assessing the biological relationships between two or more people. Forensic science uses several approaches for DNA statistics with computer programs, such as: match probability, exclusion probability, likelihood ratios, Bayesian approaches, and paternity and kinship testing.
Although the precise origin of this term remains unclear, it is apparent that the term was used in the 1980s and 1990s. Among the first forensic statistics conferences were two held in 1991 and 1993.
Random ma |
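A hedged sketch of the single-source RMP calculation under Hardy–Weinberg assumptions, together with the corresponding likelihood ratio; the loci and allele frequencies are invented for illustration:

```python
# Hypothetical two-locus profile; frequencies invented for illustration.
profile = {
    "locus_A": ("11", "12"),   # heterozygous genotype
    "locus_B": ("8", "8"),     # homozygous genotype
}
allele_freq = {
    ("locus_A", "11"): 0.10, ("locus_A", "12"): 0.25,
    ("locus_B", "8"): 0.15,
}

rmp = 1.0
for locus, (a1, a2) in profile.items():
    p, q = allele_freq[(locus, a1)], allele_freq[(locus, a2)]
    rmp *= 2 * p * q if a1 != a2 else p * p    # 2pq heterozygote, p^2 homozygote

# For a single-source profile vs. an unrelated random contributor, LR = 1 / RMP.
print(f"RMP = {rmp:.3e}, LR = {1 / rmp:,.0f}")  # RMP = 1.125e-03, LR = 889
```

Real casework uses many more loci, population-specific frequency databases, and corrections this sketch omits.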
https://en.wikipedia.org/wiki/Nevesia | Nevesia is a monotypic genus of lichenized fungus in the family Pannariaceae. It contains the species Nevesia sampaiana. The genus name honors Carlos das Neves Tavares, the Portuguese lichenologist who first identified the species in 1950. |
https://en.wikipedia.org/wiki/Gould%27s%20emerald | Gould's emerald (Riccordia elegans) is an extinct species of hummingbird in the family Trochilidae. It was described based on a single specimen taken in 1860; it is of unknown origin, but the northern Bahamas or especially Jamaica are likely sources.
In 2023 the International Ornithological Committee deleted it from its species list "until, and if, genetic analysis and/or stable isotope analysis sheds further light on its status."
Extinction
Except for the type specimen, there are no records, and it is presumed extinct. While there is no information about the exact cause of extinction, the likely reasons include the loss of habitat or required food plants, and predation by introduced mammals. The holotype is currently located in the Natural History Museum at Tring. |
https://en.wikipedia.org/wiki/ETransportation | eTransportation is a peer-reviewed open-access scientific journal covering all modes of transportation that use electricity (vehicles, ships and airplanes). The journal was established in 2019 and is published by Elsevier. The editor-in-chief is Minggao Ouyang (Tsinghua University). The journal emphasizes that work advancing the UN's sustainable development goals is welcome, specifically "Affordable and clean energy".
Abstracting and indexing
The journal is abstracted and indexed in Ei Compendex, Scopus, and the Science Citation Index Expanded. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.65. |
https://en.wikipedia.org/wiki/Lateral%20tarsal%20artery | The lateral tarsal artery (tarsal artery) arises from the dorsalis pedis as that vessel crosses the navicular bone. It passes in an arched direction lateralward, lying upon the tarsal bones and covered by the extensor hallucis brevis and extensor digitorum brevis; it supplies these muscles and the articulations of the tarsus, and receives the arcuate artery over the base of the fifth metatarsal. It may receive contributions from branches of the anterior lateral malleolar artery and the perforating branch of the peroneal artery directed towards the joint capsule, and from the lateral plantar artery through perforating arteries of the foot. |
https://en.wikipedia.org/wiki/BCPNN | A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.
The basic model is a feedforward neural network comprising neural units with continuous activation, having a bias representing prior, and being connected by Bayesian weights in the form of point-wise mutual information. The original network has been extended to a modular structure of minicolumns and hypercolumns, representing discrete coded features or attributes. The units can also be connected as a recurrent neural network (losing the strict interpretation of their activations as probabilities) but becoming a possible abstract model of biological neural networks and associative memory.
BCPNN has been used for machine learning classification and data mining, for example for the discovery of adverse drug reactions. The BCPNN learning rule has also been used to model biological synaptic plasticity and intrinsic excitability in large-scale spiking neural network (SNN) models of cortical associative memory and reward learning in the basal ganglia.
Network architecture
The BCPNN network architecture is modular in terms of hypercolumns and minicolumns. This modular structure is inspired by and generalized from the modular structure of the mammalian cortex. In abstract models, the minicolumns serve as the smallest units and they typically feature a membrane time constant and adaptation. In spiking models of cortex, |
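A rough numerical sketch of the weighting scheme described above, with biases as log prior probabilities and weights as point-wise mutual information; the toy data and smoothing constant are assumptions, and the within-hypercolumn normalization of a full model is omitted:

```python
import numpy as np

# Toy estimate of BCPNN-style parameters from binary activation data:
# bias_j = log p_j (prior term), w_ij = log(p_ij / (p_i * p_j))
# (point-wise mutual information). Data and smoothing are illustrative.
rng = np.random.default_rng(1)
X = (rng.random((500, 6)) < 0.3).astype(float)   # 500 patterns, 6 binary units

n, eps = len(X), 1e-3                            # smoothing avoids log(0)
p = (X.sum(axis=0) + eps) / (n + eps)            # unit activation probabilities
p_joint = (X.T @ X + eps) / (n + eps)            # pairwise co-activation probabilities

bias = np.log(p)                                 # prior support per unit
W = np.log(p_joint / np.outer(p, p))             # point-wise mutual information weights

def support(x):
    """Input summation: prior plus weighted evidence for pattern x (a full
    model would then normalize activations within each hypercolumn)."""
    return bias + x @ W

print(np.round(support(X[0]), 2))
```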
https://en.wikipedia.org/wiki/Graph%20property | In graph theory, a graph property or graph invariant is a property of graphs that depends only on the abstract structure, not on graph representations such as particular labellings or drawings of the graph.
Definitions
While graph drawing and graph representation are valid topics in graph theory, in order to focus only on the abstract structure of graphs, a graph property is defined to be a property preserved under all possible isomorphisms of a graph. In other words, it is a property of the graph itself, not of a specific drawing or representation of the graph.
Informally, the term "graph invariant" is used for properties expressed quantitatively, while "property" usually refers to descriptive characterizations of graphs. For example, the statement "graph does not have vertices of degree 1" is a "property" while "the number of vertices of degree 1 in a graph" is an "invariant".
More formally, a graph property is a class of graphs with the property that any two isomorphic graphs either both belong to the class, or both do not belong to it. Equivalently, a graph property may be formalized using the indicator function of the class, a function from graphs to Boolean values that is true for graphs in the class and false otherwise; again, any two isomorphic graphs must have the same function value as each other. A graph invariant or graph parameter may similarly be formalized as a function from graphs to a broader class of values, such as integers, real numbers, sequences of numbers, or polynomials, that again has the same value for any two isomorphic graphs.
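A small sketch of the indicator-function view: a property computed only from the abstract structure must agree on isomorphic graphs. The example property and relabelling are chosen for illustration:

```python
def has_no_degree_one_vertex(vertices, edges):
    """Indicator of the property 'the graph has no vertices of degree 1'."""
    degree = {v: 0 for v in vertices}
    for u, w in edges:
        degree[u] += 1
        degree[w] += 1
    return all(d != 1 for d in degree.values())

# A path on 3 vertices, and an isomorphic copy under the relabelling below.
g1 = ({0, 1, 2}, {(0, 1), (1, 2)})
relabel = {0: 2, 1: 0, 2: 1}
g2 = ({relabel[v] for v in g1[0]}, {(relabel[u], relabel[w]) for u, w in g1[1]})

# A graph property must take the same value on isomorphic graphs:
assert has_no_degree_one_vertex(*g1) == has_no_degree_one_vertex(*g2)
print(has_no_degree_one_vertex(*g1))   # False: the path's endpoints have degree 1
```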
Properties of properties
Many graph properties are well-behaved with respect to certain natural partial orders or preorders defined on graphs:
A graph property P is hereditary if every induced subgraph of a graph with property P also has property P. For instance, being a perfect graph or being a chordal graph are hereditary properties.
A graph property is monotone if every subgraph of a graph with property P al |
https://en.wikipedia.org/wiki/Modulus%20%28algebraic%20number%20theory%29 | In mathematics, in the field of algebraic number theory, a modulus (plural moduli) (or cycle, or extended ideal) is a formal product of places of a global field (i.e. an algebraic number field or a global function field). It is used to encode ramification data for abelian extensions of a global field.
Definition
Let K be a global field with ring of integers R. A modulus is a formal product
$$\mathfrak{m} = \prod_{\mathfrak{p}} \mathfrak{p}^{\nu(\mathfrak{p})}, \qquad \nu(\mathfrak{p}) \geq 0,$$ where $\mathfrak{p}$ runs over all places of K, finite or infinite, and the exponents $\nu(\mathfrak{p})$ are zero for all but finitely many $\mathfrak{p}$. If K is a number field, $\nu(\mathfrak{p}) = 0$ or $1$ for real places and $\nu(\mathfrak{p}) = 0$ for complex places. If K is a function field, $\nu(\mathfrak{p}) = 0$ for all infinite places.
In the function field case, a modulus is the same thing as an effective divisor, and in the number field case, a modulus can be considered as a special form of Arakelov divisor.
The notion of congruence can be extended to the setting of moduli. If a and b are elements of $K^\times$, the definition of $a \equiv^\ast b \pmod{\mathfrak{p}^\nu}$ depends on what type of prime $\mathfrak{p}$ is:
if it is finite, then
$$\mathrm{ord}_{\mathfrak{p}}\!\left(\frac{a}{b} - 1\right) \geq \nu,$$ where $\mathrm{ord}_{\mathfrak{p}}$ is the normalized valuation associated to $\mathfrak{p}$;
if it is a real place (of a number field) and $\nu = 1$, then
$$\frac{a}{b} > 0$$ under the real embedding associated to $\mathfrak{p}$;
if it is any other infinite place, there is no condition.
Then, given a modulus $\mathfrak{m}$, $a \equiv^\ast b \pmod{\mathfrak{m}}$ if $a \equiv^\ast b \pmod{\mathfrak{p}^{\nu(\mathfrak{p})}}$ for all $\mathfrak{p}$ such that $\nu(\mathfrak{p}) > 0$.
Ray class group
The ray modulo m is
$$K_{\mathfrak{m},1} = \left\{ a \in K^\times : a \equiv^\ast 1 \pmod{\mathfrak{m}} \right\}.$$
A modulus m can be split into two parts, mf and m∞, the product over the finite and infinite places, respectively. Let Im be one of the following:
if K is a number field, the subgroup of the group of fractional ideals generated by ideals coprime to mf;
if K is a function field of an algebraic curve over k, the group of divisors, rational over k, with support away from m.
In both cases, there is a group homomorphism i : Km,1 → Im obtained by sending a to the principal ideal (resp. divisor) (a).
The ray class group modulo m is the quotient Cm = Im / i(Km,1). A coset of i(Km,1) is called a ray class modulo m.
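For concreteness, the classical example over the rationals (a standard fact, stated here without proof):

```latex
% K = \mathbb{Q} with modulus \mathfrak{m} = (m)\infty for a positive integer m,
% where \infty denotes the real place. Then
% K_{\mathfrak{m},1} = \{ a \in \mathbb{Q}^\times : a > 0,\ a \equiv^\ast 1 \pmod{m} \},
% and the ray class group is
\[
  C_{\mathfrak{m}} \;=\; I^{\mathfrak{m}} / i\!\left(K_{\mathfrak{m},1}\right)
  \;\cong\; (\mathbb{Z}/m\mathbb{Z})^{\times},
\]
% recovering the multiplicative group modulo m, consistent with the
% Kronecker--Weber description of abelian extensions of \mathbb{Q}.
```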
Erich Hecke's |
https://en.wikipedia.org/wiki/Centre%20for%20Quantum%20Technologies | The Centre for Quantum Technologies (CQT) in Singapore is a Research Centre of Excellence hosted by the National University of Singapore. The Centre brings together physicists, computer scientists and engineers to do basic research on quantum physics and to build devices based on quantum phenomena. Experts in quantum technologies are applying their discoveries in computing, communications and sensing.
Mission statement
The mission of CQT is to conduct interdisciplinary theoretical and experimental research in quantum theory and its application to information processing. The discovery that quantum physics allows fundamentally new modes of information processing has required that classical theories of computation, information and cryptography be superseded by their quantum generalizations. These hold out the promise of faster computation and more secure communication than is possible classically. A key focus of CQT is the development of quantum technologies for the coherent control of individual photons and atoms, exploring both the theory and the practical possibilities of constructing quantum-mechanical devices for cryptography and computation.
History
Research in quantum information science in Singapore began in 1998. It was initiated by Kwek Leong Chuan, Lai Choy Heng, Oh Choo Hiap and Kuldip Singh as a series of informal seminars at the National University of Singapore. The seminars attracted local researchers and as a result, the Quantum Information Technology Group (informally referred to in Singlish as "quantum lah") was formed.
In February 2002, with support from Singapore's Agency for Science, Technology and Research (A*STAR), research efforts in the field were consolidated. This led to a number of faculty appointments.
In 2007 the Quantum Information Technology Group was selected as the core of Singapore's first Research Centre of Excellence. The Centre for Quantum Technologies was founded in December 2007 with $158 million to spend over ten years.
The |
https://en.wikipedia.org/wiki/Vasanti%20N.%20Bhat-Nayak | Vasanti N. Bhat-Nayak was a mathematician whose research concerned balanced incomplete block designs, bivariegated graphs, graceful graphs, graph equations and frequency partitions.
She earned a Ph.D. from the University of Mumbai in 1970 with the dissertation Some New Results in PBIBD Designs and Combinatorics. S. S. Shrikhande was her advisor.
After completing her doctorate, she remained on the faculty at the university, and eventually served as department head. |
https://en.wikipedia.org/wiki/Natta%20projection | In chemistry, the Natta projection (named for Italian chemist Giulio Natta) is a way to depict molecules with complete stereochemistry in two dimensions in a skeletal formula. In a hydrocarbon molecule with all carbon atoms making up the backbone in a tetrahedral molecular geometry, the zigzag backbone is in the paper plane (chemical bonds depicted as solid line segments) with the substituents either sticking out of the paper toward the viewer (chemical bonds depicted as solid wedges) or away from the viewer (chemical bonds depicted as dashed wedges). The Natta projection is useful for representing the tacticity of a polymer.
See also
Structural formula
Wedge-and-dash notation in skeletal formulas
Haworth projection
Newman projection
Fischer projection |
https://en.wikipedia.org/wiki/Repolarization | In neuroscience, repolarization refers to the change in membrane potential that returns it to a negative value just after the depolarization phase of an action potential which has changed the membrane potential to a positive value. The repolarization phase usually returns the membrane potential back to the resting membrane potential. The efflux of potassium (K+) ions results in the falling phase of an action potential. The ions pass through the selectivity filter of the K+ channel pore.
Repolarization typically results from the movement of positively charged K+ ions out of the cell. The repolarization phase of an action potential initially results in hyperpolarization: the membrane attains a potential, termed the afterhyperpolarization, that is more negative than the resting potential. Repolarization usually takes several milliseconds.
Repolarization is a stage of an action potential in which the cell experiences a decrease in voltage due to the efflux of potassium (K+) ions along their electrochemical gradient. This phase occurs after the cell reaches its highest voltage from depolarization. After repolarization, the cell hyperpolarizes as it returns toward the resting membrane potential (−70 mV in a neuron). Sodium (Na+) and potassium ions inside and outside the cell are moved by the sodium–potassium pump, ensuring that electrochemical equilibrium remains unreached so that the cell can maintain a state of resting membrane potential. In a graph of an action potential, the hyperpolarization section looks like a downward dip that goes lower than the line of the resting membrane potential. In this afterhyperpolarization (the downward dip), the cell sits at a more negative potential than rest (about −80 mV) due to the slow inactivation of voltage-gated K+ delayed rectifier channels, which are the primary K+ channels associated with repolarization. At these low voltages, all of the voltage-gated K+ channels close, and the cell returns to resting potential within a few milliseconds. A |
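A short sketch of the Nernst potential for K+, which sets the direction of the repolarizing efflux described above; the ion concentrations are typical textbook values assumed for illustration:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K (37 degrees C)
F = 96485.0    # Faraday constant, C/mol
z = 1          # valence of K+

K_out, K_in = 5.0, 140.0   # mM, extracellular / intracellular (assumed values)

# Nernst equation: E = (RT / zF) * ln([K]out / [K]in), converted to mV.
E_K = (R * T) / (z * F) * math.log(K_out / K_in) * 1000
print(f"E_K ~ {E_K:.1f} mV")   # about -89 mV, below the ~-70 mV resting potential
```

Because the K+ equilibrium potential lies below the resting potential, an open K+ conductance drives the membrane down past rest, matching the afterhyperpolarization described above.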
https://en.wikipedia.org/wiki/Atychodracon | Atychodracon is an extinct genus of rhomaleosaurid plesiosaurian known from the Late Triassic - Early Jurassic boundary (probably early Hettangian stage) of England. It contains a single species, Atychodracon megacephalus, named in 1846 originally as a species of Plesiosaurus. The holotype of "P." megacephalus was destroyed during a World War II air raid in 1940 and was later replaced with a neotype. The species had a very unstable taxonomic history, being referred to four different genera by various authors until a new genus name was created for it in 2015. Apart from the destroyed holotype and its three partial casts (that survived), a neotype and two additional individuals are currently referred to Atychodracon megacephalus, making it a relatively well represented rhomaleosaurid.
History of discovery
The type species of Atychodracon was first described and named by Samuel Stutchbury in January 1846, as a species of the wastebasket taxon Plesiosaurus. The specific name means "large-headed" in Greek, in reference to the very large skull of "Plesiosaurus" megacephalus compared to the rest of its skeleton, relative to other, actual species of Plesiosaurus. The pliosauroid nature of "Plesiosaurus" megacephalus remained unnoted until a revision by Richard Lydekker in 1889. Lydekker recognized the rhomaleosaurid affinities of "P." megacephalus, but because he and Harry G. Seeley "refused steadfastly to recognize the generic and specific names proposed by one another", he moved "P." megacephalus to the genus Thaumatosaurus, which he regarded as a replacement for Seeley's Rhomaleosaurus, creating the new combination T. megacephalus.
The holotype of Atychodracon is BRSMG Cb 2335 and its casts and digital reproductions. BRSMG Cb 2335 represented a complete and articulated skeleton including the skull and lower jaw measuring 4.960 meters in total body length, and was one of several plesiosaurian specimens displayed in the Bristol City Museum and Art Gal |
https://en.wikipedia.org/wiki/Relativistic%20beaming | Relativistic beaming (also known as Doppler beaming, Doppler boosting, or the headlight effect) is the process by which relativistic effects modify the apparent luminosity of emitting matter that is moving at speeds close to the speed of light. In an astronomical context, relativistic beaming commonly occurs in two oppositely-directed relativistic jets of plasma that originate from a central compact object that is accreting matter. Accreting compact objects and relativistic jets are invoked to explain x-ray binaries, gamma-ray bursts, and, on a much larger scale, active galactic nuclei (quasars are also associated with an accreting compact object, but are thought to be merely a particular variety of active galactic nuclei, or AGNs).
Beaming affects the apparent brightness of a moving object. Consider a cloud of gas moving relative to the observer and emitting electromagnetic radiation. If the gas is moving towards the observer, it will be brighter than if it were at rest, but if the gas is moving away, it will appear fainter. The magnitude of the effect is illustrated by the AGN jets of the galaxies M87 and 3C 31 (see images at right). M87 has twin jets aimed almost directly towards and away from Earth; the jet moving towards Earth is clearly visible (the long, thin blueish feature in the top image), while the other jet is so much fainter it is not visible. In 3C 31, both jets (labeled in the lower figure) are at roughly right angles to our line of sight, and thus, both are visible. The upper jet actually points slightly more in Earth's direction and is therefore brighter.
Relativistically moving objects are beamed due to a variety of physical effects. Light aberration causes most of the photons to be emitted along the object's direction of motion. The Doppler effect changes the energy of the photons by red- or blue-shifting them. Finally, time intervals as measured by clocks moving alongside the emitting object are different from those measured by an observer |
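A numerical sketch of the standard Doppler factor δ = 1/(γ(1 − β cos θ)) and the resulting flux boost; the discrete-blob exponent 3 + α and the parameter values are conventional assumptions for illustration (conventions for continuous jets differ):

```python
import math

def doppler_factor(beta, theta_deg):
    """Relativistic Doppler factor for speed beta at viewing angle theta."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    theta = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

beta, alpha = 0.99, 0.7                 # assumed jet speed and spectral index
for theta in (5.0, 90.0, 175.0):        # approaching, sideways, receding
    d = doppler_factor(beta, theta)
    print(f"theta = {theta:5.1f} deg: delta = {d:6.3f}, flux boost = {d**(3 + alpha):10.3g}")
```

The approaching and receding cases differ by many orders of magnitude in apparent flux, which is why only one of two intrinsically identical jets (as in M87) may be visible.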
https://en.wikipedia.org/wiki/Slurry | A slurry is a mixture of denser solids suspended in liquid, usually water. The most common use of slurry is as a means of transporting solids or separating minerals, the liquid being a carrier that is pumped with a device such as a centrifugal pump. The size of the solid particles may vary from 1 micrometre up to hundreds of millimetres.
The particles may settle below a certain transport velocity and the mixture can behave like a Newtonian or non-Newtonian fluid. Depending on the mixture, the slurry may be abrasive and/or corrosive.
Examples
Examples of slurries include:
Cement slurry, a mixture of cement, water, and assorted dry and liquid additives used in the petroleum and other industries
Soil/cement slurry, also called Controlled Low-Strength Material (CLSM), flowable fill, controlled density fill, flowable mortar, plastic soil-cement, K-Krete, and other names
A mixture of thickening agent, oxidizers, and water used to form a gel explosive
A mixture of pyroclastic material, rocky debris, and water produced in a volcanic eruption and known as a lahar
A mixture of bentonite and water used to make slurry walls
Coal slurry, a mixture of coal waste and water, or crushed coal and water
Slip, a mixture of clay and water used for joining, glazing and decoration of ceramics and pottery.
Slurry oil, the highest-boiling fraction distilled from the effluent of an FCC unit in an oil refinery. It contains a large amount of catalyst in the form of sediment, hence the name slurry.
A mixture of wood pulp and water used to make paper
Manure slurry, a mixture of animal waste, organic matter, and sometimes water often known simply as "slurry" in agricultural use, used as fertilizer after aging in a slurry pit
Meat slurry, a mixture of finely ground meat and water, centrifugally dewatered and used as a food ingredient.
An abrasive substance used in chemical-mechanical polishing
Slurry ice, a mixture of ice crystals, freezing point depressant, and water
A mixture of raw materials |
https://en.wikipedia.org/wiki/Responsible%20drug%20use | Responsible drug use maximizes the benefits and reduces the risks that psychoactive drugs pose to the lives of users. For illegal psychoactive drugs that are not diverted prescription controlled substances, some critics believe that illegal recreational drug use is inherently irresponsible, due to the unpredictable and unmonitored strength and purity of the drugs and the risks of addiction, infection, and other side effects.
Nevertheless, harm reduction advocates claim that the user can be responsible by employing the same general principles applicable to the use of alcohol: avoiding hazardous situations, excessive doses, and hazardous combinations of drugs; avoiding injection; and not using drugs at the same time as activities that may be unsafe without a sober state. Drug use can be thought of as an activity that is potentially beneficial but also risky, analogous to skiing, skydiving, surfing, or mountain climbing, the risks of which can be minimized by using caution and common sense. These advocates also point out that government action (or inaction) makes responsible drug use more difficult, such as by making drugs of known purity and strength unavailable.
Principles
Duncan and Gold argue that to use controlled and other drugs responsibly, a person must adhere to a list of principles. They and others argue that drug users ought to proceed by:
understanding and educating oneself on the effects, risks, side effects and legal status of the drug they are taking
measuring accurate dosages and taking other precautions to reduce the risk of overdose when taking drugs for which an overdose is possible
if possible, drug checking all substances before use to determine their purity and strength
attempting to obtain the purest, highest-quality drugs, free of cutting agents, such as by buying on darknet markets
using drugs only in relaxed and responsible social situations as altered consciousness can be inappropriate in potentially dangerous o |
https://en.wikipedia.org/wiki/Bernard%20Kettlewell | Henry Bernard Davis Kettlewell (24 February 1907 – 11 May 1979) was a British geneticist, lepidopterist and medical doctor, who performed research on the influence of industrial melanism on peppered moth (Biston betularia) coloration, showing why moths are darker in polluted areas. This experiment is cited as a classic demonstration of natural selection in action. After the experiment was recorded on film with Niko Tinbergen, Sewall Wright called the study "the clearest case in which a conspicuous evolutionary process has actually been observed."
Early life
Kettlewell was born in Howden, Yorkshire, and educated at Charterhouse School. During 1926 he studied medicine and zoology at Gonville and Caius College, Cambridge. During 1929 he began clinical training at St Bartholomew's Hospital, London, then during 1935 joined a general medical practice in Cranleigh, Surrey. He also worked as an anaesthetist at St. Luke's Hospital, Guildford. During World War II, from 1939 to 1945, he worked for the Emergency Medical Service at Woking War Hospital.
He emigrated to South Africa during 1949, and from then until 1954 was a researcher at the International Locust Control Centre at Cape Town University, investigating methods of locust control and going on expeditions to the Kalahari Desert, the Knysna Forest, the Belgian Congo, and Mozambique.
During 1952 he was appointed to a Nuffield Research Fellowship in the Department of Genetics of the Department of Zoology at Oxford University. Until 1954 he divided his time between South Africa and Oxford, then he gained the position of Senior Research Officer of the Department of Genetics and spent the rest of his career in Oxford as a genetics researcher. He was assigned to investigate peppered moth evolution under the supervision of E. B. Ford.
Peppered moth experiments
His grant was to study industrial melanism in general and in particular the peppered moth Biston betularia which had been studied by William Bateson during the 1 |
https://en.wikipedia.org/wiki/CONN%20%28functional%20connectivity%20toolbox%29 | CONN is a Matlab-based cross-platform imaging software for the computation, display, and analysis of functional connectivity in fMRI (functional Magnetic Resonance Imaging) in the resting state and during task.
CONN is available as an SPM toolbox, as well as precompiled binaries for MacOS/Windows/Linux environments, and it is freely available for non-commercial use.
Functionality
CONN includes a user-friendly GUI to manage all aspects of functional connectivity analyses, including preprocessing of functional and anatomical volumes, elimination of subject-movement and physiological noise, outlier scrubbing, estimation of multiple connectivity and network measures, and population-level hypothesis testing. In addition the processing pipeline can also be automated using batch scripts.
Preprocessing and denoising
CONN preprocessing pipeline includes steps designed to estimate and correct effects derived from subject motion within the scanner (realignment), correct spatial distortions due to inhomogeneities in the magnetic field (susceptibility distortion correction), correct for temporal misalignment across slices (slice timing correction), identify potential outlier images within each scanning session (outlier identification), classify different tissue types from each subject's anatomy (segmentation), or align functional and anatomical data across different subjects (functional or anatomical normalization). In addition, the BOLD signal at white matter and ventricles can be used to characterize potential motion and physiological noise sources, and the combined effect of these and other noise sources can be removed from the functional data to improve the robustness of functional connectivity measures.
Functional connectivity estimation
CONN computes multiple measures of functional connectivity, including Fisher-transformed Pearson correlation coefficients between the BOLD timeseries from different regions of interest (ROIs), as well as with every voxel in the brain. I |
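A conceptual sketch, not CONN's actual API, of the core ROI-to-ROI measure described above (Fisher-transformed Pearson correlations), with synthetic data standing in for preprocessed, denoised BOLD timeseries:

```python
import numpy as np

rng = np.random.default_rng(42)
n_timepoints, n_rois = 200, 4
bold = rng.standard_normal((n_timepoints, n_rois))
bold[:, 1] += 0.8 * bold[:, 0]          # induce connectivity between ROI 0 and 1

r = np.corrcoef(bold, rowvar=False)     # ROI-to-ROI Pearson correlation matrix
np.fill_diagonal(r, 0.0)                # arctanh is undefined at r = 1
z = np.arctanh(r)                       # Fisher transform -> approx. normal values

print(np.round(z, 2))                   # z[0, 1] stands out from the noise floor
```

The Fisher transform makes the correlation values approximately normally distributed, which is what enables the population-level hypothesis testing the text mentions.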
https://en.wikipedia.org/wiki/List%20of%20broadband%20over%20power%20line%20deployments | BPL deployments are large-scale installations that have been technically approved in a pilot project.
BPL deployments - 2nd Gen (G.hn)
Active BPL deployments
Western Europe:
Germany: In 2017, E.ON initiated its BPL deployment across 100,000 German households by providing 10,000 repeaters and headends for the low voltage part of its grid. The international energy company selected Corinex as its sole provider for BPL technology. This deployment was one of the most relevant for the utility sector in Europe because it represented the first massive BPL deployment in the world.
BPL pilot projects - 1st Gen (UPA)
Inactive pilot projects
North America:
United States: The United Telecom Council publishes the Federal Communications Commission (FCC)-mandated BPL Interference Resolution website, which provides a list of all BPL deployments in the US.
Canada: Quebec: As of 2005, PLC communication technology developed by Ariane Controls is being installed inside and outside existing buildings to control lights and other energy-hungry devices. The inexpensive devices allow energy consumption to be managed better, saving energy and providing a clear return on investment.
Western Europe:
Sweden: Vattenfall is using PLC technology at 1200 baud for automatic meter reading based on an Iskraemeco product.
Central and Eastern Europe, and Eurasia:
Russian Federation: Electro-com has widely deployed BPL/PLC technology and offers internet access service in Moscow, Nizhny Novgorod, Ryazan, Kaluga and Rostov-on-Don, planning to extend coverage to the main Russian cities. Currently the company does not provide other services, though it plans to start providing telephone and television services eventually. Base equipment is a DefiDev modem with a DS2 chipset. The company had 35,000 subscribers and annual growth of 15–20%. The company has, however, halted operations in Moscow in September 2008, having sold its client network to an IDSL interne |
https://en.wikipedia.org/wiki/Dieudonn%C3%A9%20determinant | In linear algebra, the Dieudonné determinant is a generalization of the determinant of a matrix to matrices over division rings and local rings. It was introduced by Jean Dieudonné in 1943.
If K is a division ring, then the Dieudonné determinant is a homomorphism of groups from the group GLn(K) of invertible n by n matrices over K onto the abelianization K×/[K×, K×] of the multiplicative group K× of K.
For example, the Dieudonné determinant of the 2-by-2 matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is the residue class, in K×/[K×, K×], of
$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{cases} -cb & \text{if } a = 0, \\ ad - aca^{-1}b & \text{if } a \neq 0. \end{cases}$$
Properties
Let R be a local ring. There is a determinant map from the matrix ring GL(R) to the abelianised unit group R×ab with the following properties:
The determinant is invariant under elementary row operations
The determinant of the identity is 1
If a row is left multiplied by a in R× then the determinant is left multiplied by a
The determinant is multiplicative: det(AB) = det(A)det(B)
If two rows are exchanged, the determinant is multiplied by −1
If R is commutative, then the determinant is invariant under transposition
Tannaka–Artin problem
Assume that K is finite over its centre F. The reduced norm gives a homomorphism Nn from GLn(K) to F×. We also have a homomorphism from GLn(K) to F× obtained by composing the Dieudonné determinant from GLn(K) to K×/[K×, K×] with the reduced norm N1 from GL1(K) = K× to F× via the abelianization.
The Tannaka–Artin problem is whether these two maps have the same kernel SLn(K). This is true when F is locally compact but false in general.
See also
Moore determinant over a division algebra |
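A numerical sketch for the 2-by-2 case over the quaternions. It relies on the identification of H×/[H×, H×] with the positive reals via the norm (stated here as background, not proven), and checks multiplicativity of the determinant by comparing norms of representatives:

```python
import numpy as np

rng = np.random.default_rng(7)

def qmul(p, q):
    """Hamilton product of quaternions stored as length-4 arrays."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([a*e - b*f - c*g - d*h,
                     a*f + b*e + c*h - d*g,
                     a*g - b*h + c*e + d*f,
                     a*h + b*g - c*f + d*e])

def qinv(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0]) / np.dot(q, q)

def det2(A):
    """Representative of the Dieudonné determinant of [[a, b], [c, d]]
    with a invertible: ad - a c a^{-1} b."""
    a, b, c, d = A
    return qmul(a, d) - qmul(qmul(qmul(a, c), qinv(a)), b)

def matmul2(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return [qmul(a, e) + qmul(b, g), qmul(a, f) + qmul(b, h),
            qmul(c, e) + qmul(d, g), qmul(c, f) + qmul(d, h)]

A = [rng.standard_normal(4) for _ in range(4)]
B = [rng.standard_normal(4) for _ in range(4)]

# |det(AB)| should equal |det(A)| * |det(B)| in the abelianized group:
print(np.linalg.norm(det2(matmul2(A, B))),
      np.linalg.norm(det2(A)) * np.linalg.norm(det2(B)))
```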
https://en.wikipedia.org/wiki/Simon%27s%20problem | In computational complexity theory and quantum computing, Simon's problem is a computational problem that is proven to be solvable exponentially faster on a quantum computer than on a classical (that is, traditional) computer. The quantum algorithm solving Simon's problem, usually called Simon's algorithm, served as the inspiration for Shor's algorithm. Both problems are special cases of the abelian hidden subgroup problem, which is now known to have efficient quantum algorithms.
The problem is set in the model of decision tree complexity or query complexity and was conceived by Daniel R. Simon in 1994. Simon exhibited a quantum algorithm that solves Simon's problem exponentially faster with exponentially fewer queries than the best probabilistic (or deterministic) classical algorithm. In particular, Simon's algorithm uses a linear number of queries and any classical probabilistic algorithm must use an exponential number of queries.
This problem yields an oracle separation between the complexity classes BPP (bounded-error classical query complexity) and BQP (bounded-error quantum query complexity). This is the same separation that the Bernstein–Vazirani algorithm achieves, and different from the separation provided by the Deutsch–Jozsa algorithm, which separates P and EQP. Unlike the Bernstein–Vazirani algorithm, Simon's algorithm's separation is exponential.
Because this problem assumes the existence of a highly-structured "black box" oracle to achieve its speedup, this problem has little practical value. However, without such an oracle, exponential speedups cannot easily be proven, since this would prove that P is different from PSPACE.
Problem description
Given a function (implemented by a black box or oracle) $f : \{0,1\}^n \to \{0,1\}^n$ with the promise that, for some unknown $s \in \{0,1\}^n$, for all $x, y \in \{0,1\}^n$,
$$f(x) = f(y) \text{ if and only if } x \oplus y \in \{0^n, s\},$$
where $\oplus$ denotes bitwise XOR. The goal is to identify $s$ by making as few queries to $f$ as possible. Note that
$$a \oplus b = 0^n \text{ if and only if } a = b.$$
Furthermore, for some $x$ and $s$ in $\{0,1\}^n$, $y = x \oplus s$ is unique (not equal to $x$) if and only if $s \neq 0^n$. |
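A classical illustration of the promise and of the exponential classical query cost: build an oracle hiding s, then recover s by collision search; Simon's quantum algorithm needs only O(n) queries. All names here are illustrative:

```python
def make_simon_oracle(n, s):
    """Build f on {0,1}^n satisfying Simon's promise:
    f(x) == f(y) iff x ^ y in {0, s}."""
    f, values = {}, iter(range(2 ** n))
    for x in range(2 ** n):
        if x not in f:
            f[x] = f[x ^ s] = next(values)   # pair x with x ^ s (x itself if s == 0)
    return f

n, s = 4, 0b1011
f = make_simon_oracle(n, s)

seen = {}                                    # output -> first input that produced it
for x in range(2 ** n):                      # classical brute-force collision search
    if f[x] in seen:
        print(f"collision found: s = {seen[f[x]] ^ x:0{n}b}")   # prints 1011
        break
    seen[f[x]] = x
```

Any pair of inputs with equal outputs XORs to s; classically, finding such a pair requires on the order of 2^(n/2) queries even with randomness, which is the separation the text describes.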
https://en.wikipedia.org/wiki/Baller%E2%80%93Gerold%20syndrome | Baller–Gerold syndrome (BGS) is a rare genetic syndrome that involves premature fusion of the skull bones and malformations of facial, forearm and hand bones. The symptoms of Baller–Gerold syndrome overlap with features of a few other genetic disorders: Rothmund–Thomson syndrome and RAPADILINO syndrome. The prevalence of BGS is unknown, as there have only been a few reported cases, but it is estimated to be less than 1 in a million. The name of the syndrome comes from the researchers Baller and Gerold who discovered the first three cases.
Signs and symptoms
The most common and defining features of BGS are craniosynostosis and radial ray deficiency. The observations of these features allow for a diagnosis of BGS to be made, as these symptoms characterize the syndrome. Craniosynostosis involves the pre-mature fusion of bones in the skull. The coronal craniosynostosis that is commonly seen in patients with BGS results in the fusion of the skull along the coronal suture. Because of the changes in how the bones of the skull are connected together, people with BGS will have an abnormally shaped head, known as brachycephaly. Features commonly seen in those with coronal craniosynostosis are bulging eyes, shallow eye pockets, and a prominent forehead. Radial ray deficiency is another clinical characteristic of those with BGS, and results in the under-development (hypoplasia) or the absence (aplasia) of the bones in the arms and the hands. These bones include the radius, the carpal bones associated with the radius and the thumb. Oligodactyly can also result from radial ray deficiency, meaning that someone with BGS may have fewer than five fingers. Radial ray deficiency that is associated with syndromes (such as BGS) occurs bi-laterally, affecting both arms.
Some of the other clinical characteristics sometimes associated with this disorder are growth retardation and poikiloderma. Although the presentation of BGS may differ between individuals, these characteristics are of |
https://en.wikipedia.org/wiki/Lists%20of%20mathematics%20topics | Lists of mathematics topics cover a variety of topics related to mathematics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right includes links to alphabetical lists of all mathematical articles. This article brings together the same content organized in a manner better suited for browsing.
Lists cover aspects of basic and advanced mathematics, methodology, mathematical statements, integrals, general concepts, mathematical objects, and reference tables.
They also cover equations named after people, societies, mathematicians, journals, and meta-lists.
The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. This list has some items that would not fit in such a classification, such as list of exponential topics and list of factorial and binomial topics, which may surprise the reader with the diversity of their coverage.
Basic mathematics
This branch is typically taught in secondary education or in the first year of university.
Outline of arithmetic
Outline of discrete mathematics
List of calculus topics
List of geometry topics
Outline of geometry
List of trigonometry topics
Outline of trigonometry
List of trigonometric identities
List of logarithmic identities
List of integrals of logarithmic functions
List of set identities and relations
List of topics in logic
Areas of advanced mathematics
As a rough guide, this list is divided into pure and applied sections although in reality, these branches are overlapping and intertwined.
Pure mathematics
Algebra
Algebra includes the study of algebraic structures, which are sets and operations defined o |
https://en.wikipedia.org/wiki/Colorado%20Memory%20Systems | Colorado Memory Systems, Inc. (CMS), was an American technology company independently active from 1985 to 1992 and based in Loveland, Colorado. The company primarily manufactured tape drive systems, especially those using quarter-inch cartridges (QIC), for personal computers and workstations. Colorado Memory Systems was founded by Bill Beierwaltes as an offshoot of his previous company, Colorado Time Systems, also based in Loveland. It was acquired by Hewlett-Packard in 1992.
History
Colorado Memory Systems, Inc., was founded by William "Bill" Beierwaltes in Loveland, Colorado, in 1985, as a division of Colorado Time Systems, another Loveland-based company that he had previously founded in 1972. Whereas Colorado Time Systems focused on computerized timekeeping displays for athletics while also selling a broad range of other products, Beierwaltes founded Colorado Memory Systems chiefly to focus on data storage products for the burgeoning personal computer industry of the 1980s. Before founding Colorado Time Systems in 1972, Beierwaltes was a product manager for Hewlett-Packard from 1964 to 1974, working on the development and marketing for HP's electronic measuring equipment, especially their line of voltmeters.
In March 1985, CMS launched its first products, a line of 60-MB quarter-inch cartridge (QIC) drives manufactured by Rexon's WangTek division and rebadged as Tecmar products for redistribution by IBM. The drives made use of the then-ubiquitous QIC-24 tapes and came in three configurations: an external drive sporting only the QIC tape mechanism (QIC/60AT), another external drive comprising the QIC reader–writer and a 20-MB hard disk drive (QIC/60W20), and an internal drive and controller board (QIC/60H). These drives were the first tape backup products to be resold by IBM for their Personal Computer platform (by then also including the XT and AT). In August 1985, CMS collaborated again with Tecmar to release a bevy of peripherals for Commodore's new Amiga c |
https://en.wikipedia.org/wiki/Davide%20Gaiotto | Davide Silvano Achille Gaiotto (born 11 March 1977) is an Italian mathematical physicist who deals with quantum field theories and string theory. He received the Gribov Medal in 2011 and the New Horizons in Physics Prize in 2013.
Biography
Gaiotto won a silver medal in 1996 as an Italian participant in the International Mathematical Olympiad, and a gold medal in 1995 at the International Physics Olympiad in Canberra. He was an undergraduate student at the Scuola Normale Superiore in Pisa from 1996 to 2000. From 2004 to 2007 he was a post-doctoral researcher at Harvard University, and then until 2011 at the Institute for Advanced Study. Since 2011 he has been working at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario.
He introduced new techniques in the study and construction of four-dimensional N = 2 supersymmetric conformal field theories, which he built from M5-branes wound around Riemann surfaces with punctures. This led to new insights into the dynamics of four-dimensional supersymmetric gauge theories. With Juan Maldacena he studied these gauge theories using the AdS/CFT correspondence. In 2010, with Yuji Tachikawa and Luis Alday, he developed the AGT correspondence (named after the authors), a duality relating the six-dimensional (2,0) superconformal field theory compactified on a surface to a conformal field theory on that surface (Liouville field theory). |
https://en.wikipedia.org/wiki/International%20Community%20for%20Auditory%20Display | The International Community for Auditory Display (ICAD), founded in 1992, provides an annual conference for research in auditory display, the use of sound to display information. Research and implementation of sonification, audification, earcons and speech synthesis are central interests of the ICAD. ICAD is home to auditory display researchers, who come from different disciplines, through its conference and peer-reviewed proceedings. Auditory display researchers have various backgrounds in science, arts, and humanities, like computer science, cognitive science, human factors, systematic musicology and soundscape design. Most of the proceedings are freely available through the Georgia Tech SMARTech repository.
Auditory display professionals are board members of ICAD.
The ICAD presidency has been held by Gregory Kramer (1992–1997), Jim Ballas (1997–2000), Eric Somers (2000–2003), Matti Gröhn (2003–2006), Bruce Walker (2006–2011), Tony Stockman (2011–2016), David Worrall (2016–2018), and Myounghoon 'Philart' Jeon (2018–2022). The current president of ICAD is Paul Vickers. Further information on the community, such as audio examples, details of the annual conference and contact information, can be found on its official website. |
https://en.wikipedia.org/wiki/Florey%20Lecture | The Florey Lecture was a lecture organised by the Royal Society of London.
List of lecturers |
https://en.wikipedia.org/wiki/Skeeter%20syndrome | Skeeter syndrome (papular urticaria) is a localized severe allergic reaction to mosquito bites, consisting of inflammation, peeling skin, blistering, ulceration and sometimes fever. It is caused by allergenic polypeptides in mosquito saliva, and therefore is not contagious. It is one of several forms, being one of the most severe, of allergic responses to mosquito bites, termed mosquito bite allergies.
The condition may vary between individuals based on the reaction size and severity. Some individuals may experience reactions only to some bites and not others, which is thought to reflect differing reactions to different species of mosquitoes.
Although the term seems informal, it has appeared in scientific literature.
Diagnosis
Clinical examination alone cannot distinguish between a response caused by infection, such as cellulitis, and skeeter syndrome. However, skeeter syndrome usually progresses over the course of hours versus cellulitis, which typically evolves over the course of several days. As such, accurate history is imperative when making the diagnosis. Since IgE and IgG are key players in mosquito allergy, diagnosis can be confirmed by an immunosorbent assay measuring IgE and IgG to mosquito saliva antigens.
Differential diagnosis
Skeeter syndrome should not be confused with another type of reactivity to mosquito bites, severe mosquito bite allergy (SMBA). SMBA is most often an Epstein-Barr virus-associated lymphoproliferative disease that complicates ~33% of individuals with chronic active Epstein-Barr virus infection or, in extremely rare cases, individuals with Epstein-Barr virus-positive Hodgkin disease or an Epstein-Barr virus-negative lymphoid disease such as chronic lymphocytic leukemia and mantle cell lymphoma. It is a hypersensitivity reaction characterized by the rapid development of skin redness, swelling, ulcers, necrosis and scarring following mosquito bites. The reaction is often accompanied by relatively severe systemic symptoms such as |
https://en.wikipedia.org/wiki/Relaxation%20labelling | Relaxation labelling is an image-processing methodology. Its goal is to assign a label to each pixel of a given image or to each node of a given graph.
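A minimal sketch of the classic probabilistic relaxation update (in the style of Rosenfeld, Hummel and Zucker) makes the idea concrete. Everything here — the global-neighbourhood simplification, the compatibility values and the iteration count — is an illustrative assumption rather than a fixed part of the method:

    import numpy as np

    def relaxation_labelling(p, compat, iterations=20):
        # p: (n_nodes, n_labels) label probabilities; each row sums to 1.
        # compat: (n_labels, n_labels) compatibilities in [-1, 1]; for brevity,
        # every node is treated as a neighbour of every other node.
        for _ in range(iterations):
            # support[l]: average compatibility of label l with the current
            # label distributions of all nodes.
            support = (p @ compat.T).mean(axis=0)
            p = p * (1.0 + support)                  # reward supported labels
            p = p / p.sum(axis=1, keepdims=True)     # renormalise each node
        return p

    # Two nodes, two labels, with labels preferring global agreement.
    p0 = np.array([[0.6, 0.4], [0.45, 0.55]])
    compat = np.array([[1.0, -1.0], [-1.0, 1.0]])
    print(relaxation_labelling(p0, compat))          # both nodes drift to label 0

Each pass rewards the labels most compatible with the current beliefs of the neighbourhood and renormalises, so an initially ambiguous labelling relaxes toward a consistent one.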
See also
Digital image processing |
https://en.wikipedia.org/wiki/International%20Fairtrade%20Certification%20Mark | The International Fairtrade Certification Mark is an independent certification mark used in over 69 countries. It appears on products as an independent guarantee that a product has been produced according to Fairtrade standards.
The Fairtrade Mark is owned and protected by Fairtrade International (FLO), on behalf of its 25-member and associate member Fairtrade producer networks and labelling initiatives.
For a product to carry the Fairtrade Mark, it must come from FLOCert-inspected and -certified producer organizations. The crops must be marketed in accordance with the International Fairtrade standards set by Fairtrade International. The supply chain is also monitored by FLOCert. To become certified Fairtrade producers, the primary cooperative and its member farmers must operate to certain political standards, imposed from Europe. FLOCert, the for-profit side, handles producer certification, inspecting and certifying producer organisations in more than 50 countries in Africa, Asia, and Latin America. In the Fair trade debate there are many complaints of failure to enforce these standards, with Fairtrade cooperatives, importers and packers profiting by evading them.
As of 2006, the following products currently carry the Fairtrade Mark: coffee, tea, chocolate, cocoa, sugar, bananas, apples, pears, grapes, plums, lemons, oranges, Satsumas, clementines, lychees, avocados, pineapples, mangoes, fruit juices, quinoa, peppers, green beans, coconut, dried fruit, rooibos tea, green tea, cakes and biscuits, honey, muesli, cereal bars, jams, chutney and sauces, herbs and spices, nuts and nut oil, wine, beer, rum, flowers, footballs, rice, yogurt, baby food, sugar body scrub, cotton wool and cotton products.
How it works
The marketing system for Fairtrade and non-Fairtrade coffee is identical in the consuming countries, using mostly the same importing, packing, distributing and retailing firms. Some independent brands operate a virtual company, paying importers, p |
https://en.wikipedia.org/wiki/Insectary%20plant | Insectary plants are those that attract insects. As such, beneficial insectary plants are intentionally introduced into an ecosystem to increase the pollen and nectar resources required by the natural enemies of harmful or unwanted insect pests. Beyond providing effective natural control of pests, the beneficial insects also assist in pollination.
The "friendly insects" include ladybeetles, bees, ground beetles, hoverflies, and parasitic wasps. Other animals that are frequently considered beneficial include lizards, spiders, toads, and hummingbirds. Beneficial insects are as much as ten times more abundant in the insectary plantings area. Mortality of scale insects (caused by natural enemies) can be double with insectary plantings. In addition, a diversity of insectary plants can increase the population of beneficial insects such that these levels can be sustained even when the insectary plants are removed or die off.
For maximum benefit in the garden, insectary plants can be grown alongside desired garden plants that do not have this benefit. The insects attracted to the insectary plants will also help the other nearby garden plants.
Many members of the family Apiaceae (formerly known as Umbelliferae) are excellent insectary plants. Fennel, angelica, coriander (cilantro), dill, and wild carrot all provide in great number the tiny flowers required by parasitic wasps. Various clovers, yarrow, and rue also attract parasitic and predatory insects. Low-growing plants, such as thyme, rosemary, or mint, provide shelter for ground beetles and other beneficial insects. Composite flowers (daisy and chamomile) and mints (spearmint, peppermint, or catnip) will attract predatory wasps, hoverflies, and robber flies. The wasps will catch caterpillars and grubs to feed their young, while the predatory and parasitic flies attack many kinds of insects, including leaf hoppers and caterpillars.
Other insectary plants include: mustard plants such as Brassica juncea, Phacelia tanacetifoli |
https://en.wikipedia.org/wiki/Constructive%20developmental%20framework | The constructive developmental framework (CDF) is a theoretical framework for epistemological and psychological assessment of adults. The framework is based on empirical developmental research showing that an individual's perception of reality is an actively constructed "world of their own", unique to them and which they continue to develop over their lifespan.
CDF was developed by Otto Laske based on the work of Robert Kegan and Michael Basseches, Laske's teachers at Harvard University. The CDF methodology involves three separate instruments that respectively measure a person's social–emotional stage, cognitive level of development, and psychological profile. It provides three epistemological perspectives on individual clients as well as teams. These constructs are designed to probe how an individual and/or group constructs the real world conceptually, and how close an individual's present thinking approaches the complexity of the real world.
Overview
The methodology of CDF is grounded in empirical research on positive adult development which began under Lawrence Kohlberg in the 1960s, continued by Robert Kegan (1982, 1994), Michael Basseches (1984), and Otto Laske (1998, 2006, 2009, 2015, 2018). Laske (1998, 2009) introduced concepts from Georg Wilhelm Friedrich Hegel's philosophy and the Frankfurt School into the framework, making a strict differentiation between social–emotional and cognitive development.
Kegan (1982) described five stages of development, of which the latter four are progressively attained only in adulthood. Basseches (1984) showed that adults potentially transcend formal logical thinking by way of dialectical thinking, in four phases, measurable by a fluidity index. Both Kegan's and Basseches' findings were updated and refined by Laske in 2005 and 2008 respectively. In 2008 and 2015, Laske proposed that dialectical thought forms are an instantiation of Roy Bhaskar's four moments of dialectic (MELD; Bhaskar 1993), and that these ontological mom |
https://en.wikipedia.org/wiki/Photothermal%20spectroscopy | Photothermal spectroscopy is a group of high sensitivity spectroscopy techniques used to measure optical absorption and thermal characteristics of a sample. The basis of photothermal spectroscopy is the change in thermal state of the sample resulting from the absorption of radiation. Light absorbed and not lost by emission results in heating. The heat raises temperature thereby influencing the thermodynamic properties of the sample or of a suitable material adjacent to it. Measurement of the temperature, pressure, or density changes that occur due to optical absorption are ultimately the basis for the photothermal spectroscopic measurements.
As with photoacoustic spectroscopy, photothermal spectroscopy is an indirect method for measuring optical absorption, because it does not directly measure the light involved in the absorption. In another sense, however, photothermal (and photoacoustic) methods measure the absorption directly, rather than, for example, calculating it from the transmission, as in the more usual (transmission) spectroscopic techniques. This fact gives the technique its high sensitivity: in transmission techniques the absorbance is calculated as the difference between the total light impinging on the sample and the transmitted (plus reflected, plus scattered) light, with the usual accuracy problems of taking a small difference between large numbers when the absorption is small. In photothermal spectroscopies, instead, the signal is essentially proportional to the absorption, and is zero when there is zero true absorption, even in the presence of reflection or scattering.
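The accuracy argument can be put in one line. In a transmission measurement the absorbance is inferred from the ratio of two nearly equal intensities, while a photothermal signal scales with the absorbed power itself; the notation below is generic (Beer–Lambert behaviour for a weakly absorbing sample), not taken from a specific source:

$$A = \log_{10}\frac{I_{0}}{I_{t}}\,, \qquad S_{\mathrm{PT}} \propto P_{0}\,\alpha\,\ell\,,$$

where $I_{0}$ and $I_{t}$ are the incident and transmitted intensities, $P_{0}$ the excitation power, $\alpha$ the absorption coefficient and $\ell$ the path length. The first expression requires resolving a small difference between two large signals; the second vanishes identically when $\alpha = 0$.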
There are several methods and techniques used in photothermal spectroscopy. Each of these has a name indicating the specific physical effect measured.
Photothermal lens spectroscopy (PTS or TLS) measures the thermal blooming that occurs when a beam of light heats a transparent sample. It is typically applied for measuring minute qua |
https://en.wikipedia.org/wiki/Akuma%20%28Street%20Fighter%29 | Akuma (悪魔, Japanese for "Devil", "Demon"), known in Japan as , is a fictional character and the secondary antagonist of the Street Fighter series of fighting games created by Capcom. Akuma made his debut in Super Street Fighter II Turbo as a secret character and boss. In the storyline of the Street Fighter video games, he is the younger brother of Gouken, Ryu's and Ken's master. In some games, he also has an alternate version named Shin Akuma or in Japanese and Oni Akuma in Super Street Fighter IV: Arcade Edition. Since his debut, Akuma has appeared in several subsequent titles and has been praised by both fans and critics.
Creation
Akuma was created by Akira Yasuda at the request of Noritaka Funamizu when a new Street Fighter character was needed. Akuma was designed to please fans who had fallen for an April Fools' claim by journalists that there was a hidden character named Sheng Long. Funamizu wanted the character, Akuma, to be based on Ryu's design. While Akuma was to be an evil character, Yasuda wanted to create a major contrast between him and the regular boss, M. Bison.
Akuma has dark red hair, dark skin tone, glowing red eyes with black sclera, wears prayer beads around his neck, a dark gray karate gi and a piece of twine around his waist in lieu of an obi. The kanji "ten" (天) — meaning "Heaven" — can be seen on his back when it appears during certain win animations. Shin Akuma's appearance is very similar to Akuma's; for example, in the Street Fighter Alpha series, Shin Akuma had a purple karate gi instead of a dark gray one and marginally darker skin tone. Akuma's introduction in Super Street Fighter II Turbo stemmed from the development team's desire to introduce a "mysterious and really powerful" character, with his status as a hidden character within the game resulting from later discussions. When asked regarding the presence of Akuma as a secret character in several of Capcom's fighting games, Capcom's Noritaka Funamizu stated that, |
https://en.wikipedia.org/wiki/Sujagi | The Sujagi is a flag bearing the Hanja character 帥, pronounced su in Korean, which denotes a commanding general. The whole term literally means "commanding general's flag". Only one sujagi is known to exist in Korea. It has a faded yellowish-brown background with a black character in its center. It is made of hemp cloth and measures approximately 4.15 m × 4.35 m. |
History
This type of flag was put in a fortress where a commanding general was located. In the case of the extant sujagi in Korea, it represented General Eo Jae-yeon who, in 1871, commanded the Korean military forces on Ganghwa Island, which is off the northwest coast of present-day South Korea, near the capital of Seoul. It was captured by the United States Asiatic Squadron in June of that year during the United States' expedition to Korea. As with other war prizes, it was put into the collection of the museum at the United States Naval Academy in Annapolis, Maryland.
In October 2007, after many years of petitions by South Korea to the United States government, the flag was returned to South Korea on a long-term, ten-year loan.
After being returned, it was displayed at the National Palace Museum of Korea in Seoul until 2009, when it was moved to the Ganghwa History Museum on Ganghwa Island. As of September 2022, the lease had been renewed for the flag to stay in South Korea until at least October 2023. |
https://en.wikipedia.org/wiki/Centronics | Centronics Data Computer Corporation was an American manufacturer of computer printers, now remembered primarily for the parallel interface that bears its name, the Centronics connector.
History
Foundations
Centronics began as a division of Wang Laboratories. Founded and initially operated by Robert Howard (president) and Samuel Lang (vice president and owner of the well known K & L Color Photo Service Lab in New York City), the group produced remote terminals and systems for the casino industry. Printers were developed to print receipts and transaction reports. Wang spun off the business in 1971 and Centronics was formed as a corporation in Hudson, New Hampshire with Howard as president and chairman.
The Centronics Model 101 was introduced at the 1970 National Computer Conference in May. The print head used an innovative seven-wire solenoid impact system. Based on this design, Centronics later developed what is often credited as the first dot matrix impact printer, although the first such printer was actually the OKI Wiredot in 1968.
Howard developed a personal relationship with his neighbor, Max Hugel, the founder and president of Brother International, the United States arm of Brother Industries, Ltd., a manufacturer of sewing machines and typewriters. A business relationship developed when Centronics needed reliable manufacturing of the printer mechanisms—a relationship that would help propel Brother into the printer industry. Hugel would later become executive vice president of Centronics. Print heads and electronics were built in Centronics plants in New Hampshire and Ireland, mechanisms were built in Japan by Brother and the printers were assembled in New Hampshire.
In the 1970s, Centronics formed a relationship with Canon to develop non-impact printers. No products were ever produced, but Canon continued to work on laser printers, eventually developing a highly successful series of engines.
In 1977, Centronics sued competitor Mannesmann AG in a patent dispute regarding the return |
https://en.wikipedia.org/wiki/Yang%20Hui | Yang Hui (, ca. 1238–1298), courtesy name Qianguang (), was a Chinese mathematician and writer during the Song dynasty. Originally from Qiantang (modern Hangzhou, Zhejiang), Yang worked on magic squares, magic circles and the binomial theorem, and is best known for his contribution of presenting Yang Hui's Triangle. This triangle was the same as Pascal's Triangle, discovered by Yang's predecessor Jia Xian. Yang was also a contemporary of the other famous mathematician Qin Jiushao.
Written work
The earliest extant Chinese illustration of 'Pascal's triangle' is from Yang's book Xiangjie Jiuzhang Suanfa () of 1261 AD, in which Yang acknowledged that his method of finding square roots and cube roots using "Yang Hui's Triangle" was invented by the mathematician Jia Xian, who expounded it around 1100 AD, about 500 years before Pascal. Jia described the method in his book (now lost) known as Rújī Shìsuǒ () or Piling-up Powers and Unlocking Coefficients, which is known through the work of his contemporary, the mathematician Liu Ruxie (). Jia described the method used as 'li cheng shi suo' (the tabulation system for unlocking binomial coefficients). The triangle appeared again in a publication of Zhu Shijie's book Jade Mirror of the Four Unknowns () of 1303 AD.
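The additive rule on which the triangle is built is easy to reproduce; this minimal Python sketch (a modern illustration, not from any historical source) prints the first rows:

    # Yang Hui's (Pascal's) triangle: each inner entry is the sum of the
    # two entries above it; the binomial coefficients appear row by row.
    def yanghui_triangle(rows):
        triangle = [[1]]
        for _ in range(rows - 1):
            prev = triangle[-1]
            inner = [prev[i] + prev[i + 1] for i in range(len(prev) - 1)]
            triangle.append([1] + inner + [1])
        return triangle

    for row in yanghui_triangle(6):
        print(row)   # ..., [1, 4, 6, 4, 1], [1, 5, 10, 10, 5, 1]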
Around 1275 AD, Yang published two mathematical books, known as the Xugu Zhaiqi Suanfa () and the Suanfa Tongbian Benmo (, summarily called Yang Hui suanfa ). In the former book, Yang wrote of the arrangement of natural numbers around concentric and non-concentric circles, known as magic circles, and of vertical-horizontal diagrams of complex combinatorial arrangements known as magic squares, providing rules for their construction. In his writing, he harshly criticized the earlier works of Li Chunfeng and Liu Yi (), both of whom were content with using methods without working out their theoretical origins or principles. Displaying a somewhat modern attitude and approach to mathematics, Yang once said:
The men of old changed the name of their |
https://en.wikipedia.org/wiki/Hermann%20Grassmann | Hermann Günther Grassmann (, ; 15 April 1809 – 26 September 1877) was a German polymath known in his day as a linguist and now also as a mathematician. He was also a physicist, general scholar, and publisher. His mathematical work was little noted until he was in his sixties. His work preceded and exceeded the concept which is now known as a vector space. He introduced the Grassmannian, the space which parameterizes all k-dimensional linear subspaces of an n-dimensional vector space V.
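In modern notation (which postdates Grassmann), the Grassmannian of k-dimensional subspaces of an n-dimensional space V can be written

$$\mathrm{Gr}(k, V) = \{\, W \subseteq V \mid W \text{ is a linear subspace},\ \dim W = k \,\}, \qquad \dim \mathrm{Gr}(k, n) = k(n - k),$$

so that, for example, Gr(1, ℝⁿ) is the real projective space ℝP^{n−1} of lines through the origin.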
Biography
Hermann Grassmann was the third of 12 children of Justus Günter Grassmann, an ordained minister who taught mathematics and physics at the Stettin Gymnasium, where Hermann was educated.
Grassmann was an undistinguished student until he obtained a high mark on the examinations for admission to Prussian universities. Beginning in 1827, he studied theology at the University of Berlin, also taking classes in classical languages, philosophy, and literature. He does not appear to have taken courses in mathematics or physics.
Although lacking university training in mathematics, it was the field that most interested him when he returned to Stettin in 1830 after completing his studies in Berlin. After a year of preparation, he sat the examinations needed to teach mathematics in a gymnasium, but achieved a result good enough to allow him to teach only at the lower levels. Around this time, he made his first significant mathematical discoveries, ones that led him to the important ideas he set out in his 1844 paper Die lineale Ausdehnungslehre, ein neuer Zweig der Mathematik, here referred to as A1, later revised in 1862 as Die Ausdehnungslehre: Vollständig und in strenger Form bearbeitet, here referred to as A2.
In 1834 Grassmann began teaching mathematics at the Gewerbeschule in Berlin. A year later, he returned to Stettin to teach mathematics, physics, German, Latin, and religious studies at a new school, the Otto Schule. Over the next four years, Grassmann passed examinations ena |
https://en.wikipedia.org/wiki/Grayshift | Grayshift is an American mobile device forensics company which makes a device named GrayKey to crack iPhones, iPads, and Android devices.
Grayshift was co-founded by David Miles, Braden Thomas, Justin Fisher and Sean Larsson. The company is funded by private investors PeakEquity Partners and C&B Capital.
GrayKey
The GrayKey product has been used by the FBI and U.S., British and Canadian local police forces. Canadian police forces require judicial authorization (court order or warrant) per mobile phone to use GrayKey. GrayKey is estimated to be used in up to 30 countries.
According to media reports, GrayKey costs US$15,000 to US$30,000 per copy depending on the functional options chosen. One thousand agencies currently use GrayKey. The device is a gray box, 4 inches by 4 inches by 2 inches in size, with two Lightning cables. The time to solve an iPhone's passcode can range from a few minutes to several hours, depending on the length of the passcode. It is therefore likely that GrayKey performs a brute-force attack after disabling the passcode attempt limit.
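A back-of-the-envelope estimate is consistent with those reported times; the guess rate below is an assumed figure for illustration only, not a published GrayKey specification:

    # Rough worst-case brute-force timing for numeric passcodes (illustrative).
    GUESSES_PER_SECOND = 12.0   # assumed rate once rate limiting is bypassed

    for digits in (4, 6):
        worst_case_s = 10 ** digits / GUESSES_PER_SECOND
        print(f"{digits}-digit passcode: up to {worst_case_s / 3600:.1f} hours")
    # 4 digits -> about 0.2 hours (minutes); 6 digits -> about 23 hours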
The GrayKey reportedly provides support for iPhones running iOS 9 and later. Apple modified iOS so that external device connections must be authorized by the iPhone owner after it has been unlocked. On newer iPhone models, only unencrypted files and some metadata might be extracted. With earlier models, full data extraction, such as decrypting encrypted files, is possible.
In 2018, hackers obtained the GrayKey source code and attempted to extort a payment of 2 bitcoins from Grayshift after leaking "small chunks of code".
GrayKey with Android support was released in early 2021. |
https://en.wikipedia.org/wiki/Erythranthe | Erythranthe, the monkey-flowers and musk-flowers, is a diverse plant genus with more than 120 members (as of 2022) in the family Phrymaceae. Erythranthe was originally described as a separate genus, then generally regarded as a section within the genus Mimulus, and recently returned to generic rank. Mimulus sect. Diplacus was segregated from Mimulus as a separate genus at the same time. Mimulus remains as a small genus of eastern North America and the Southern Hemisphere. Molecular data show Erythranthe and Diplacus to be evolutionary lines distinct from each other and from Mimulus as strictly defined, although this nomenclature is controversial.
Member species are usually annuals or herbaceous perennials. Flowers are red, pink, or yellow, often in various combinations. A large number of the Erythranthe species grow in moist to wet soils with some growing even in shallow water. They are not very drought resistant, but many of the species now classified as Diplacus are. Species are found at elevations from oceanside to high mountains as well as a wide variety of climates, though most prefer wet areas such as riverbanks.
The largest concentration of species is in western North America, but species are found elsewhere in the United States and Canada, as well as from Mexico to Chile and eastern Asia. Pollination is mostly by either bees or hummingbirds. Member species are widely cultivated and are subject to several pests and diseases. Several species are listed as threatened by the International Union for Conservation of Nature.
Description
Erythranthe is a highly diverse genus with the characteristics unifying the various species being axile placentation and long pedicels. Other characteristics of species can vary widely, especially between the sections, and even within some sections. Some species of Erythranthe are annuals and some are perennials. Flowers are red, pink, purple, or yellow, often in various combinations and shades of those colors. Some species produ |
https://en.wikipedia.org/wiki/Risk-based%20testing | Risk-based testing (RBT) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software, based on the risk of failure: the importance of the function and the likelihood or impact of its failure. In theory, there are an infinite number of possible tests. Risk-based testing uses risk (re-)assessments to steer all phases of the test process, i.e., test planning, test design, test implementation, test execution and test evaluation. This includes, for instance, the ranking of tests and subtests by functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective.
Assessing risks
Comparing the changes between two releases or versions is key in order to assess risk.
Evaluating critical business modules is a first step in prioritizing tests, but it does not include the notion of evolutionary risk. This is then expanded using two methods: change-based testing and regression testing.
Change-based testing allows test teams to assess changes made in a release and then prioritize tests towards modified modules.
Regression testing ensures that a change, such as a bug fix, did not introduce new faults into the software under test. One of the main reasons for regression testing is to determine whether a change in one part of the software has any effect on other parts of the software.
These two methods permit test teams to prioritize tests based on risk, change, and criticality of business modules. Certain technologies can make this kind of test strategy very easy to set up and to maintain with software changes.
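As a toy illustration of such prioritization, each test below is scored as likelihood × impact and boosted when its module changed in the current release; the test names, fields and weighting are invented for this sketch:

    from dataclasses import dataclass

    @dataclass
    class Test:
        name: str
        likelihood: float     # estimated probability of failure (0..1)
        impact: float         # business cost of a failure (0..1)
        module_changed: bool  # was the covered module modified this release?

    def risk_score(t: Test) -> float:
        score = t.likelihood * t.impact
        return 2.0 * score if t.module_changed else score  # change-based boost

    tests = [
        Test("checkout_total", 0.3, 0.9, True),
        Test("profile_avatar", 0.2, 0.1, False),
        Test("login_lockout", 0.4, 0.8, False),
    ]
    for t in sorted(tests, key=risk_score, reverse=True):
        print(f"{risk_score(t):.2f}  {t.name}")   # run highest-risk tests first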
Types of risk
Risk can be identified as the probability that an undetected software bug may have a negative impact on the user of a system.
The methods assess risks along a variety of dimensions:
Business or operational
High use of a subsystem, function or feature
Criticality of a subsystem, |
https://en.wikipedia.org/wiki/Mural%20crown | A mural crown () is a crown or headpiece representing city walls, towers, or fortresses. In classical antiquity, it was an emblem of tutelary deities who watched over a city, and among the Romans a military decoration. Later the mural crown developed into a symbol of European heraldry, mostly for cities and towns, and in the 19th and 20th centuries was used in some republican heraldry.
Usage in ancient times
In Hellenistic culture, a mural crown identified tutelary deities such as the goddess Tyche (the embodiment of the fortunes of a city, familiar to Romans as Fortuna), and Hestia (the embodiment of the protection of a city, familiar to Romans as Vesta). The high cylindrical polos of Rhea/Cybele too could be rendered as a mural crown in Hellenistic times, specifically designating the mother goddess as patron of a city.
The mural crown became an ancient Roman military decoration. The corona muralis (Latin for "walled crown") was a golden crown, or a circle of gold intended to resemble a battlement, bestowed upon the soldier who first climbed the wall of a besieged city or fortress to successfully place the standard (flag) of the attacking army upon it. The Roman mural crown was made of gold, and decorated with turrets, as is the heraldic version. As it was among the highest order of military decorations, it was not awarded to a claimant until after a strict investigation. The rostrata mural crown, composed of the rostra indicative of captured ships, was assigned as naval prize to the first in a boarding party, similar to the naval crown.
The Graeco-Roman goddess Roma's attributes on Greek coinage usually include her mural crown, signifying Rome's status as a loyal protector of Hellenic city-states.
Heraldic use
The Roman military decoration was subsequently employed in European heraldry, where the term denoted a crown modeled after the walls of a castle, which may be tinctured or (gold), argent (silver), gules (red), or proper (i.e. stone-coloured). In 19t |
https://en.wikipedia.org/wiki/Semantic%20analysis%20%28compilers%29 | Semantic analysis or context-sensitive analysis is a process in compiler construction, usually performed after parsing, that gathers necessary semantic information from the source code. It usually includes type checking and checks such as ensuring that a variable is declared before use, requirements that are impossible to describe in the extended Backus–Naur form and thus not easily detected during parsing.
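A toy declare-before-use checker shows why this is context-sensitive: whether a name was declared earlier is a property of the whole program seen so far, not of any single grammar production. The miniature "AST" below is invented for illustration:

    # Declare-before-use check over a toy program: ("decl", name) / ("use", name).
    program = [("decl", "x"), ("use", "x"), ("use", "y")]

    declared: set[str] = set()
    for kind, name in program:
        if kind == "decl":
            declared.add(name)                 # record the declaration
        elif kind == "use" and name not in declared:
            print(f"semantic error: '{name}' used before declaration")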
See also
Attribute grammar
Context-sensitive language
Semantic analysis (computer science) |
https://en.wikipedia.org/wiki/Memorandum | A memorandum (plural: memoranda; from the Latin memorandum, "(that) which is to be remembered"), also known as a briefing note, is a written message that is typically used in a professional setting. Commonly abbreviated memo, these messages are usually brief and are designed to be easily and quickly understood. Memos can thus communicate important information efficiently in order to make dynamic and effective changes.
In law, a memorandum is a record of the terms of a transaction or contract, such as a policy memo, memorandum of understanding, memorandum of agreement, or memorandum of association. In business, a memo is typically used by firms for internal communication, while letters are typically for external communication.
Other memorandum formats include briefing notes, reports, letters, and binders. They may be considered grey literature. Memorandum formatting may vary by office or institution. For example, if the intended recipient is a cabinet minister or a senior executive, the format might be rigidly defined and limited to one or two pages. If the recipient is a colleague, the formatting requirements are usually more flexible.
Policy briefing note
A specific type of memorandum is the policy briefing note (alternatively referred to in various jurisdictions and governing traditions as policy issues paper, policy memoranda, or cabinet submission amongst other terms), a document for transmitting policy analysis into the political decision making sphere. Typically, a briefing note may be denoted as either “for information” or “for decision”.
Origins of term
The origins of the term “briefing” lie in legal “briefs” and the derivative “military briefings”. The plural form of the Latin noun memorandum so derived is properly memoranda, but if the word is deemed to have become a word of the English language, the plural memorandums, abbreviated to memos, may be used. (See also Agenda, Corrigenda, Addenda).
Purpose
There are many important purposes of a memorandum. Bri |
https://en.wikipedia.org/wiki/Relativity%20of%20simultaneity | In physics, the relativity of simultaneity is the concept that distant simultaneity – whether two spatially separated events occur at the same time – is not absolute, but depends on the observer's reference frame. This possibility was raised by mathematician Henri Poincaré in 1900, and thereafter became a central idea in the special theory of relativity.
Description
According to the special theory of relativity introduced by Albert Einstein, it is impossible to say in an absolute sense that two distinct events occur at the same time if those events are separated in space. If one reference frame assigns precisely the same time to two events that are at different points in space, a reference frame that is moving relative to the first will generally assign different times to the two events (the only exception being when motion is exactly perpendicular to the line connecting the locations of both events).
For example, a car crash in London and another in New York appearing to happen at the same time to an observer on Earth, will appear to have occurred at slightly different times to an observer on an airplane flying between London and New York. Furthermore, if the two events cannot be causally connected, depending on the state of motion, the crash in London may appear to occur first in a given frame, and the New York crash may appear to occur first in another. However, if the events are causally connected, precedence order is preserved in all frames of reference.
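Quantitatively, this follows from the Lorentz transformation of time intervals between two events separated by $\Delta x$ and $\Delta t$ in one frame:

$$\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},$$

so events that are simultaneous ($\Delta t = 0$) but spatially separated ($\Delta x \neq 0$) acquire a nonzero $\Delta t'$ in any frame moving along the line joining them.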
History
In 1892 and 1895, Hendrik Lorentz used a mathematical method called "local time", t′ = t − vx/c², for explaining the negative aether drift experiments. However, Lorentz gave no physical explanation of this effect. This was done by Henri Poincaré who already emphasized in 1898 the conventional nature of simultaneity and who argued that it is convenient to postulate the constancy of the speed of light in all directions. However, this paper did not contain any discussion of Lorentz's theory or the possi |
https://en.wikipedia.org/wiki/Organ%20of%20Bojanus | The organs of Bojanus or Bojanus organs are excretory glands that serve the function of kidneys in some of the molluscs. In other words, these are metanephridia that are found in some molluscs, for example in the bivalves. Some other molluscs have another type of organ for excretion called Keber's organ.
The Bojanus organ is named after Ludwig Heinrich Bojanus, who first described it. The excretory system of a bivalve consists of a pair of kidneys called the organs of Bojanus. These are situated one on each side of the body, below the pericardium. Each kidney consists of two parts: (1) a glandular part and (2) a thin-walled, ciliated urinary bladder. |
https://en.wikipedia.org/wiki/Direct%20coupling | In electronics, direct coupling or DC coupling (also called conductive coupling and galvanic coupling) is the transfer of electrical energy by means of physical contact via a conductive medium, in contrast to inductive coupling and capacitive coupling. It is a way of interconnecting two circuits such that, in addition to transferring the AC signal (or information), the first circuit also provides DC bias to the second. Thus, DC blocking capacitors are not used or needed to interconnect the circuits. Conductive coupling passes the full spectrum of frequencies including direct current.
Such coupling may be achieved by a wire, resistor, or common terminal, such as a binding post or metallic bonding.
DC bias
The provision of DC bias only occurs in a group of circuits that forms a single unit, such as an op-amp. Here the internal units or portions of the op-amp (like the input stage, voltage gain stage, and output stage) will be direct coupled and will also be used to set up the bias conditions inside the op-amp (the input stage will also supply the input bias to the voltage gain stage, for example). However, when two op-amps are directly coupled, the first op-amp supplies bias to the next: any DC at its output forms the input for the next. The resulting output of the second op-amp then represents an offset error if it is not the intended one.
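A numerical sketch of how such offsets propagate through two direct-coupled stages; all gains and offsets here are made-up illustrative values:

    # Offset propagation through two direct-coupled gain stages (illustrative).
    A1, A2 = 100.0, 50.0       # voltage gains of stages 1 and 2 (assumed)
    Vos1, Vos2 = 2e-3, 1e-3    # input-referred DC offsets in volts (assumed)

    # With direct coupling, stage 1's amplified offset is a valid DC input
    # to stage 2, so both offsets appear, amplified, at the final output.
    Vout_offset = A2 * (A1 * Vos1 + Vos2)
    print(f"output offset: {Vout_offset:.2f} V")   # 10.05 V -- enough to saturate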
Uses
This technique is used by default in circuits like IC op-amps, since large coupling capacitors cannot be fabricated on-chip. That said, some discrete circuits (such as power amplifiers) also employ direct coupling to cut cost and improve low frequency performance.
Offset error
One advantage or disadvantage (depending on application) of direct coupling is that any DC at the input appears as a valid signal to the system, and so it will be transferred from the input to the output (or between two directly coupled circuits). If this is not a desired result, then the term used for the output signal is output offset er |
https://en.wikipedia.org/wiki/Ekman%20current%20meter | The Ekman current meter is a mechanical flowmeter invented by Vagn Walfrid Ekman, a Swedish oceanographer, in 1903. It comprises a propeller with a mechanism to record the number of revolutions, a compass and a recorder with which to record the direction, and a vane that orients the instrument so the propeller faces the current. It is mounted on a free-swinging vertical axis suspended from a wire and has a weight attached below.
The balanced propeller, with four to eight blades, rotates inside a protective ring. The position of a lever controls the propeller. In the down position the propeller is stopped and the instrument is lowered; after reaching the desired depth, a weight called a messenger is dropped to move the lever into the middle position, which allows the propeller to turn freely. When the measurement has been taken, another weight is dropped to push the lever to its highest position, at which the propeller is again stopped.
The propeller revolutions are counted via a simple mechanism that gears down the revolutions and counts them on an indicator dial. The direction is indicated by a device connected to the directional vane that drops a small metal ball about every 100 revolutions. The ball falls into one of thirty-six compartments in the bottom of the compass box that indicate direction in increments of 10 degrees. If the direction changes while the measurement is being performed the balls will drop into separate compartments and a weighted mean is taken to determine the average current direction.
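Because compass directions wrap around at 360°, that weighted mean is naturally computed as a vector (circular) mean. A small sketch with made-up ball counts per compartment:

    import math

    # Balls per 10-degree compartment: {compartment centre in degrees: count}.
    counts = {350: 2, 0: 5, 10: 3}   # hypothetical reading straddling north

    x = sum(n * math.cos(math.radians(d)) for d, n in counts.items())
    y = sum(n * math.sin(math.radians(d)) for d, n in counts.items())
    mean_dir = math.degrees(math.atan2(y, x)) % 360
    print(f"average current direction: {mean_dir:.1f} deg")   # ~1.0 deg

Note that a naive arithmetic mean of the compartment angles would give roughly 73° for this reading, which is badly wrong for directions straddling north; the vector mean handles the wrap-around correctly.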
This is a simple and reliable instrument whose main disadvantage is that it must be hauled up to be read and reset after each measurement. Ekman solved this problem by designing a repeating current meter which could take up to forty-seven measurements before needing to be hauled up and reset. This device used a more complicated system of dropping small numbered metal balls at regular intervals to record the separate measurements.
Bibliography
Harald U. Sverdrup, |
https://en.wikipedia.org/wiki/Leo%20Fitz | Leopold James "Leo" Fitz is a fictional character that originated in the Marvel Cinematic Universe before appearing in Marvel Comics. The character, created by Joss Whedon, Jed Whedon and Maurissa Tancharoen, first appeared in the pilot episode of Agents of S.H.I.E.L.D. in September 2013, and has continually been portrayed by Iain De Caestecker.
In the series, Fitz is one of S.H.I.E.L.D.'s top scientific minds. His scientific knowledge is vast, and as an engineer and inventor he has developed many of S.H.I.E.L.D.'s staple devices and gadgets. Many of his storylines involve his relationship with his best friend, and later wife, Jemma Simmons; the pair are collectively known as Fitzsimmons. Over the course of the series, Fitz suffers multiple traumas and becomes aware of a darker and more ruthless side to his character. His darker alter ego is commonly known as The Doctor.
Fictional character biography
In season one, Leo Fitz is brought on to S.H.I.E.L.D. agent Phil Coulson's team as an engineering and weapons technology specialist. He has a close bond with fellow agent Jemma Simmons, whom he met at the S.H.I.E.L.D. academy, with both being its Science and Technology division's youngest graduates. Near the end of the season, Fitz and Simmons lock themselves inside a medical unit for safety from rogue agent Grant Ward, who ejects the unit into the ocean. While trapped, Fitz professes his feelings for Simmons before sacrificing himself to save her. They are rescued by Nick Fury, but Fitz sustains damage to his temporal lobe as a result of oxygen deprivation and is left comatose.
In season two, Fitz initially struggles with technology and speech as a result of Ward's actions, but over time becomes a full member of the team again. Near the end of the season Fitz arranges for a date with Simmons when the Kree weapon called "Monolith", which is in S.H.I.E.L.D. custody, breaks free of containment and absorbs Simmons into itself.
In season three, Fitz acquires an ancient Hebr |
https://en.wikipedia.org/wiki/Ecological%20Genetics%20%28book%29 | Ecological Genetics is a 1964 book by the British biologist E. B. Ford on ecological genetics. Ford founded the field and it is considered his magnum opus. The fourth and final edition was published in 1975.
Ford's work was celebrated in 1971 by Ecological Genetics and Evolution, a series of essays edited by Robert Creed and published by Blackwell, Oxford. This included contributions from Cyril Darlington, Miriam Rothschild, Theodosius Dobzhansky, Bryan Clarke, A.J. Cain, Sir Cyril Clarke and others.
Ford and Ronald Fisher represented one side of a dispute with the American Sewall Wright over the relative roles of selection and drift in evolution.
See also
Papilio dardanus (the swallowtail butterfly that is the subject of chapter thirteen) |
https://en.wikipedia.org/wiki/Microwave%20and%20Optical%20Technology%20Letters | Microwave and Optical Technology Letters is a monthly peer-reviewed scientific journal published by Wiley-Blackwell. The editor-in-chief is Kai Chang (Texas A&M University). The journal covers technology that operates in wavelengths ranging from radio frequency to the optical spectrum.
Abstracting and indexing
This journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.392. |
https://en.wikipedia.org/wiki/Russo%E2%80%93Dye%20theorem | In mathematics, the Russo–Dye theorem is a result in the field of functional analysis. It states that in a unital C*-algebra, the closure of the convex hull of the unitary elements is the closed unit ball.
The theorem was published by B. Russo and H. A. Dye in 1966.
Other formulations and generalizations
Results similar to the Russo–Dye theorem hold in more general contexts. For example, in a unital *-Banach algebra, the closed unit ball is contained in the closed convex hull of the unitary elements.
A more precise result is true for the C*-algebra of all bounded linear operators on a Hilbert space: If T is such an operator and ||T|| < 1 − 2/n for some integer n > 2, then T is the mean of n unitary operators.
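A concrete instance of this averaging phenomenon, standard in C*-algebra texts: any self-adjoint element $a$ with $\|a\| \le 1$ is already the mean of two unitaries,

$$a = \tfrac{1}{2}\left(u + u^{*}\right), \qquad u = a + i\sqrt{1 - a^{2}},$$

since $\sqrt{1 - a^{2}}$ commutes with $a$ and therefore $uu^{*} = u^{*}u = a^{2} + (1 - a^{2}) = 1$.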
Applications
This example is due to Russo & Dye, Corollary 1: If U(A) denotes the unitary elements of a C*-algebra A, then the norm of a linear mapping f from A to a normed linear space B is

$$\|f\| = \sup\{\, \|f(U)\| : U \in U(A) \,\}.$$
In other words, the norm of an operator can be calculated using only the unitary elements of the algebra.
Further reading
An especially simple proof of the theorem is given in:
Notes |
https://en.wikipedia.org/wiki/List%20of%20Brazilian%20flags | This article is a list of Brazilian flags.
National flags
Government flags
Ministries
Imperial standards of Brazil
Diplomatic services flags
Military flags
Brazilian Army
Brazilian Navy
Police flags
First-level administrative divisions
This list shows the flags of the 26 Brazilian States and the Federal District.
History
Municipalities
Political flags
Separatist movements flags
Ethnic groups flags
Historical flags
Proposed flags
House flags of Brazilian freight companies
Yacht clubs of Brazil
See also
Flag of Brazil
Hino Nacional Brasileiro |
https://en.wikipedia.org/wiki/Ionomics | Ionomics is the measurement of the total elemental composition of an organism to address biological problems. Questions within physiology, ecology, evolution, and many other fields can be investigated using ionomics, often coupled with bioinformatics, chemometrics and other genetic tools. Observing an organism's ionome is a powerful approach to the functional analysis of its genes and the gene networks. Information about the physiological state of an organism can also be revealed indirectly through its ionome, for example iron deficiency in a plant can be identified by looking at a number of other elements, rather than iron itself. A more typical example is in a blood test, where a number of conditions involving nutrition or disease may be inferred from testing this single tissue for sodium, potassium, iron, chlorine, zinc, magnesium, calcium and copper.
In practice, the total elemental composition of an organism is rarely determined. The number and type of elements measured are limited by the available instrumentation, the assumed value of the element in question, and the added cost of measuring each additional element. Also, a single tissue may be measured instead of the entire organism, as in the example given above of a blood test, or in the case of plants, the sampling of just the leaves or seeds. These are simply issues of practicality.
Various techniques may be fruitfully used to measure elemental composition. Among the best are Inductively-Coupled Plasma Optical Emission Spectroscopy (ICP-OES), Inductively-Coupled Plasma Mass Spectrometry (ICP-MS), X-Ray Fluorescence (XRF), synchrotron-based microXRF, and Neutron activation analysis (NAA). This latter technique has been applied to perform ionomics in the study of breast cancer, colorectal cancer and brain cancer. High-throughput ionomic phenotyping has created the need for data management systems to collect, organize and share the collected data with researchers worldwide. |
https://en.wikipedia.org/wiki/OpenSimplex%20noise | OpenSimplex noise is an n-dimensional (up to 4D) gradient noise function that was developed in order to overcome the patent-related issues surrounding simplex noise, while likewise avoiding the visually-significant directional artifacts characteristic of Perlin noise.
The algorithm shares numerous similarities with simplex noise, but has two primary differences:
Whereas simplex noise starts with a hypercubic honeycomb and squashes it down the main diagonal in order to form its grid structure, OpenSimplex noise instead swaps the skew and inverse-skew factors and uses a stretched hypercubic honeycomb (the numeric values of these factors are computed in the sketch after this list). The stretched hypercubic honeycomb becomes a simplectic honeycomb after subdivision. This means that 2D Simplex and 2D OpenSimplex use different orientations of the triangular tiling, but whereas 3D Simplex uses the tetragonal disphenoid honeycomb, 3D OpenSimplex uses the tetrahedral-octahedral honeycomb.
OpenSimplex noise uses a larger kernel size than simplex noise. The result is a smoother appearance at the cost of performance, as additional vertices need to be determined and factored into each evaluation.
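For reference, the skew and inverse-skew factors mentioned in the first point can be computed directly. The formulas are the standard simplex-grid constants; reading G as the input "stretch" used by OpenSimplex is this sketch's interpretation of the swap described above:

    import math

    def skew_factors(n: int) -> tuple[float, float]:
        # F: skew applied by simplex noise to its input coordinates.
        # G: the inverse (unskew) factor; OpenSimplex stretches its input
        #    by G instead, giving the "stretched" honeycomb described above.
        f = (math.sqrt(n + 1) - 1) / n
        g = (1 - 1 / math.sqrt(n + 1)) / n
        return f, g

    for n in (2, 3, 4):
        f, g = skew_factors(n)
        print(f"{n}D: F = {f:.6f}, G = {g:.6f}")
    # 2D: F = 0.366025, G = 0.211325 -- the constants seen in both algorithms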
OpenSimplex has a variant called "SuperSimplex" (or OpenSimplex2S), which is visually smoother; "OpenSimplex2F" is a faster variant.
See also
Value noise
Worley noise |
https://en.wikipedia.org/wiki/Sex-chromosome%20dosage%20compensation | Dosage compensation is the process by which organisms equalize the expression of genes between members of different biological sexes. Across species, different sexes are often characterized by different types and numbers of sex chromosomes. In order to neutralize the large difference in gene dosage produced by differing numbers of sex chromosomes among the sexes, various evolutionary branches have acquired various methods to equalize gene expression among the sexes. Because sex chromosomes contain different numbers of genes, different species of organisms have developed different mechanisms to cope with this inequality. Replicating the actual gene is impossible; thus organisms instead equalize the expression from each gene. For example, in humans, female (XX) cells randomly silence the transcription of one X chromosome, and transcribe all information from the other, expressed X chromosome. Thus, human females have the same number of expressed X-linked genes per cell as do human males (XY), both sexes having essentially one X chromosome per cell, from which to transcribe and express genes.
Different lineages have evolved different mechanisms to cope with the differences in gene copy numbers between the sexes that are observed on sex chromosomes. Some lineages have evolved dosage compensation, an epigenetic mechanism which restores expression of X or Z specific genes in the heterogametic sex to the same levels observed in the ancestor prior to the evolution of the sex chromosome. Other lineages equalize the expression of the X- or Z- specific genes between the sexes, but not to the ancestral levels, i.e. they possess incomplete compensation with “dosage balance”. One example of this is X-inactivation which occurs in humans. The third documented type of gene dose regulatory mechanism is incomplete compensation without balance (sometimes referred to as incomplete or partial dosage compensation). In this system gene expression of sex-specific loci is reduced in the h |
https://en.wikipedia.org/wiki/Fossa%20for%20lacrimal%20gland | The lacrimal fossa (or fossa for lacrimal gland) is located on the inferior surface of each orbital plate of the frontal bone. It is smooth and concave, and presents, laterally, underneath the zygomatic process, a shallow depression for the lacrimal gland.
See also
Fossa for lacrimal sac |
https://en.wikipedia.org/wiki/86th%20meridian%20east | The meridian 86° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 86th meridian east forms a great circle with the 94th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 86th meridian east passes through:
Arctic Ocean
Kara Sea
Russia — Krasnoyarsk Krai
Pyasina Bay
Russia — Krasnoyarsk Krai, Tomsk Oblast, Kemerovo Oblast, Altai Krai, and the Altai Republic
…
China — Xinjiang and Tibet
…
India — Bihar, Jharkhand, West Bengal, then alternately Jharkhand and Odisha (the meridian crosses the Jharkhand–Odisha border several times)
Indian Ocean
Southern Ocean
Antarctica — Australian Antarctic Territory, claimed by Australia
See also
85th meridian east
87th meridian east |
https://en.wikipedia.org/wiki/Ulnar%20notch%20of%20the%20radius | The articular surface for the ulna is called the ulnar notch (sigmoid cavity) of the radius; it is in the distal radius, and is narrow, concave, smooth, and articulates with the head of the ulna forming the distal radioulnar joint. |
https://en.wikipedia.org/wiki/Randstad%20NV | Randstad NV, commonly known as Randstad and stylized as randstad, is a Dutch multinational human resource consulting firm headquartered in Diemen, Netherlands. It was founded in the Netherlands in 1960 by Gerrit Daleboudt, who asked Frits Goldschmeding to join him, and it now operates in around 39 countries. Randstad NV is listed as RAND on the AEX index of Euronext Amsterdam. As of April 2023, founder Frits Goldschmeding is still the biggest shareholder. The company is named after the Randstad region of the Netherlands.
Core activities
Randstad specializes in human resource services for temporary and permanent jobs, including contract staffing of professionals and senior managers.
In most of these countries, Randstad works according to a unit structure, whereby each unit consists of two consultants who are responsible for service provision to clients and selecting candidates. Randstad promotes these activities under two brand names: Randstad and Tempo Team.
A separate division of Randstad focuses on recruiting supervisors, managers, professionals, interim specialists, and advisors. These people are deployed in temporary positions in middle and senior management, such as engineers, ICT specialists, or marketing & communication specialists.
HR Solutions also involves a number of services such as selection processes, HR consultancy, outplacement, and career support.
Randstad operates under several brands, including Randstad, Randstad Care, Tempo Team, Expectra, Ausy, and Yacht.
Randstad was a sponsor of the English Formula 1 team Williams F1 from 2006 until 2017. In 2019, Randstad became the sponsor of the Italian Formula 1 team Scuderia Toro Rosso, since 2020 called AlphaTauri.
History
1960–1970: The company's launch
Randstad was founded in 1960 by Frits Goldschmeding and Gerrit Daleboudt, who were both studying economics at the time at VU University Amsterdam. When Goldschmeding was supposed to write a thesis at the VU University Amsterdam his professor advised him to write a thesis on |
https://en.wikipedia.org/wiki/Convective%20momentum%20transport | Convective momentum transport usually describes a vertical flux of the momentum of horizontal winds or currents. That momentum is carried like a non-conserved flow tracer by vertical air motions in convection.
In the atmosphere, convective momentum transport by small but vigorous (cumulus type) cloudy updrafts can be understood as an interplay of three main mechanisms:
Vertical advection of ambient momentum due to subsidence of environmental air that compensates the in-cloud upward mass flux,
Detrainment of in-cloud momentum where updrafts stop ascending,
Accelerations by the pressure gradient force around clouds whose inner momentum differs from their environment.
The net effect of these interacting mechanisms depends on the detailed configuration or 'organization' of the convective cloud or storm system.
See also
momentum
vertical motion |
https://en.wikipedia.org/wiki/BeeGFS | BeeGFS (formerly FhGFS) is a parallel file system, developed and optimized for high-performance computing. BeeGFS includes a distributed metadata architecture for scalability and flexibility reasons. It is best known for its high data throughput.
BeeGFS was originally developed at the Fraunhofer Center for High Performance Computing in Germany by a team around Sven Breuner, who later became the CEO of ThinkParQ (2014–2018), the spin-off company that was founded in 2014 to maintain BeeGFS and offer professional services.
Whilst the Community Edition of BeeGFS can be downloaded and used free of charge, the Enterprise Edition must be used under a professional support subscription contract.
History and usage
BeeGFS started in 2005 as an in-house development at Fraunhofer Center for HPC to replace the existing file system on the institute's new compute cluster and to be used in a production environment.
In 2007, the first beta version of the software was announced during ISC07 in Dresden, Germany and introduced to the public during SC07 in Reno, NV. One year later the first stable major release became available.
In 2014, Fraunhofer started its spin-off, the new company called ThinkParQ for BeeGFS. In this process, FhGFS was renamed and became BeeGFS®. While ThinkParQ maintains the software and offers professional services, further feature development will continue in cooperation of ThinkParQ and Fraunhofer.
Because BeeGFS can be used free of charge, it is unknown how many active installations there are. However, in 2014 there were already around 100 customers worldwide that used BeeGFS with commercial support from ThinkParQ and Fraunhofer. Among those are academic users such as universities and research facilities, as well as commercial companies in fields such as finance and the oil & gas industry.
Notable installations include several TOP500 computers such as the Loewe-CSC cluster at the Goethe University Frankfurt, Germany (No. 22 on installatio |
https://en.wikipedia.org/wiki/IPX/SPX | IPX/SPX stands for Internetwork Packet Exchange/Sequenced Packet Exchange. IPX and SPX are networking protocols used initially on networks using the (since discontinued) Novell NetWare operating systems. They also became widely used on networks deploying Microsoft Windows LANs, as these replaced NetWare LANs, but are no longer widely used. IPX/SPX was supported up to and including Windows XP; later Windows versions do not support the protocols, and TCP/IP has taken over for networking.
Protocol layers
IPX and SPX are derived from Xerox Network Systems' IDP and SPP protocols respectively. IPX is a network-layer protocol (layer 3 of the OSI model), while SPX is a transport-layer protocol (layer 4 of the OSI model). The SPX layer sits on top of the IPX layer and provides connection-oriented services between two nodes on the network. SPX is used primarily by client–server applications.
IPX and SPX both provide connection services similar to TCP/IP, with the IPX protocol having similarities to Internet Protocol, and SPX having similarities to TCP. IPX/SPX was primarily designed for local area networks (LANs) and is a very efficient protocol for this purpose (typically SPX's performance exceeds that of TCP on a small LAN, as in place of congestion windows and confirmatory acknowledgements, SPX uses simple NAKs). TCP/IP has, however, become the de facto standard protocol. This is in part due to its superior performance over wide area networks and the Internet (which uses IP exclusively), and also because TCP/IP is a more mature protocol, designed specifically with this purpose in mind.
Despite the protocols' association with NetWare, they are neither required for NetWare communication (as of NetWare 5.x), nor exclusively used on NetWare networks. NetWare communication requires an NCP implementation, which can use IPX/SPX, TCP/IP, or both, as a transport.
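For illustration, the commonly described 30-byte IPX header (checksum, length, transport control, packet type, then destination and source network/node/socket) can be sketched with Python's struct module. This is a sketch from the published field layout, not an official Novell API; the builder function and example addresses are hypothetical.

```python
import struct

# Big-endian layout: checksum, length, transport control, packet type,
# dest network (4B), dest node (6B), dest socket (2B), then the same
# three fields for the source. Total: 30 bytes.
IPX_HEADER = struct.Struct(">HHBB4s6sH4s6sH")

def ipx_header(dst_net, dst_node, dst_sock, src_net, src_node, src_sock,
               length, packet_type=0, transport_control=0):
    return IPX_HEADER.pack(0xFFFF,             # checksum (unused, all ones)
                           length,             # total packet length in bytes
                           transport_control,  # hop count, bumped by routers
                           packet_type,        # e.g. 5 is commonly SPX
                           dst_net, dst_node, dst_sock,
                           src_net, src_node, src_sock)

# Hypothetical addresses: network 1, two different node (MAC-style) IDs.
hdr = ipx_header(b"\x00\x00\x00\x01", b"\x02\x60\x8c\x01\x02\x03", 0x0451,
                 b"\x00\x00\x00\x01", b"\x02\x60\x8c\x04\x05\x06", 0x4003,
                 length=30)
assert len(hdr) == 30
```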
Implementations
Novell was largely responsible for the use of IPX as a popular computer networking pr |
https://en.wikipedia.org/wiki/Binet%20equation | The Binet equation, derived by Jacques Philippe Marie Binet, provides the form of a central force given the shape of the orbital motion in plane polar coordinates. The equation can also be used to derive the shape of the orbit for a given force law, but this usually involves the solution to a second order nonlinear ordinary differential equation. A unique solution is impossible in the case of circular motion about the center of force.
Equation
The shape of an orbit is often conveniently described in terms of the relative distance $r$ as a function of the angle $\theta$. For the Binet equation, the orbital shape is instead more concisely described by the reciprocal $u = 1/r$ as a function of $\theta$. Define the specific angular momentum as $h = L/m$, where $L$ is the angular momentum and $m$ is the mass. The Binet equation, derived in the next section, gives the force in terms of the function $u(\theta)$:
$$F(u^{-1}) = -m h^2 u^2 \left(\frac{d^2 u}{d\theta^2} + u\right)$$
Derivation
Newton's Second Law for a purely central force is
$$F(r) = m\left(\ddot{r} - r\dot{\theta}^2\right).$$
The conservation of angular momentum requires that
$$r^2 \dot{\theta} = h = \text{constant}.$$
Derivatives of $r$ with respect to time may be rewritten as derivatives of $u = 1/r$ with respect to angle:
$$\frac{dr}{dt} = -h\frac{du}{d\theta}, \qquad \frac{d^2 r}{dt^2} = -h^2 u^2 \frac{d^2 u}{d\theta^2}.$$
Combining all of the above, we arrive at
$$F = m\left(\ddot{r} - r\dot{\theta}^2\right) = -m h^2 u^2 \left(\frac{d^2 u}{d\theta^2} + u\right).$$
The general solution is
where $\theta_0$ is the initial angular coordinate of the particle.
Examples
Kepler problem
Classical
The traditional Kepler problem of calculating the orbit of an inverse square law may be read off from the Binet equation as the solution to the differential equation
$$\frac{d^2 u}{d\theta^2} + u = \frac{GM}{h^2}.$$
If the angle $\theta$ is measured from the periapsis, then the general solution for the orbit expressed in (reciprocal) polar coordinates is
$$\frac{\ell}{r} = 1 + \varepsilon \cos\theta.$$
The above polar equation describes conic sections, with $\ell$ the semi-latus rectum (equal to $h^2/GM$) and $\varepsilon$ the orbital eccentricity.
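As a numerical cross-check (an illustrative sketch using SciPy, not part of the source), integrating the Binet equation for the inverse-square law reproduces the conic-section solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Solve u'' + u = GM/h^2 and compare with u = (1 + e*cos(theta)) / l.
GM, h = 1.0, 1.2
l = h**2 / GM                    # semi-latus rectum
e = 0.5                          # orbital eccentricity (chosen arbitrarily)
u0, du0 = (1 + e) / l, 0.0       # start at periapsis, theta = 0

def rhs(theta, y):
    u, du = y
    return [du, GM / h**2 - u]   # the Binet ODE as a first-order system

theta = np.linspace(0, 2 * np.pi, 200)
sol = solve_ivp(rhs, (0, 2 * np.pi), [u0, du0], t_eval=theta,
                rtol=1e-10, atol=1e-12)
exact = (1 + e * np.cos(theta)) / l
print(np.max(np.abs(sol.y[0] - exact)))  # tiny: the orbit is the conic
```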
Relativistic
The relativistic equation derived for Schwarzschild coordinates is
$$\frac{d^2 u}{d\theta^2} + u = \frac{r_s c^2}{2h^2} + \frac{3 r_s}{2} u^2,$$
where $c$ is the speed of light and $r_s$ is the Schwarzschild radius. For the Reissner–Nordström metric we obtain
$$\frac{d^2 u}{d\theta^2} + u = \frac{r_s c^2}{2h^2} + \frac{3 r_s}{2} u^2 - \frac{G q^2}{4\pi\varepsilon_0 c^4}\left(\frac{c^2}{h^2} u + 2 u^3\right),$$
where $q$ is the electric charge and $\varepsilon_0$ is the vacuum permittivity.
Inverse Kepler problem
Consider the inverse Kepler problem. What kind of force law pro |
https://en.wikipedia.org/wiki/Artificial%20white%20blood%20cells | Artificial white blood cells are typically membrane bound vesicles designed to mimic the immunomodulatory behavior of naturally produced leukocytes. While extensive research has been done with regards to artificial red blood cells and platelets for use in emergency blood transfusions, research into artificial white blood cells has been focused on increasing the immunogenic response within a host to treat cancer or deliver drugs in a more favorable fashion. While certain limitations have prevented leukocyte mimicking particles from becoming widely used and FDA approved, more research is being allocated to this area of synthetic blood which has the potential for producing a new form of treatment for cancer and other diseases.
Leukocyte Physiology
Leukocytes, otherwise known as white blood cells (WBCs), come in various types and generally circulate around the body to help ward off pathogenic invaders such as bacteria or viruses, as well as cells turned cancerous. They mainly circulate throughout the vasculature, including capillary beds, bone marrow, and lymph vessels. The five major types of WBCs are neutrophils, eosinophils, basophils, monocytes, and lymphocytes. There also exist leukocytes that do not circulate but instead remain in a particular tissue; these include histiocytes and dendritic cells. Leukocytes typically range in size from 8 to 18 µm in diameter depending on cell type and stage of development, and they make up roughly 1% of the total blood cells in the average human body. Leukocytes maintain the expression of the CD47 and CD45 biomarkers, which indicate to other cells what they are and that they should not be destroyed. Cells like dendritic cells are involved in the innate immune system, whereas cells like lymphocytes are part of the adaptive immune system.
Neutrophils
Through the mechanism of chemotaxis, neutrophils are typically found migrating toward sites of inflammation that are secreting heightened concentrations of inflammatory chemical signals. |
https://en.wikipedia.org/wiki/MicroStrategy | MicroStrategy Incorporated is an American company that provides business intelligence (BI), mobile software, and cloud-based services. Founded in 1989 by Michael J. Saylor, Sanju Bansal, and Thomas Spahr, the firm develops software to analyze internal and external data in order to make business decisions and to develop mobile apps. It is a public company headquartered in Tysons Corner, Virginia, in the Washington metropolitan area. Its primary business analytics competitors include SAP AG Business Objects, IBM Cognos, and Oracle Corporation's BI Platform. Saylor is the Executive Chairman and, from 1989 to 2022, was the CEO.
History
Saylor started MicroStrategy in 1989 with a consulting contract from DuPont, which provided Saylor with $250,000 in start-up capital and office space in Wilmington, Delaware. Saylor was soon joined by company co-founder Sanju Bansal, whom he had met while the two were students at Massachusetts Institute of Technology (MIT). The company produced software for data mining and business intelligence using nonlinear mathematics, an idea inspired by a course on systems-dynamics theory that they took at MIT.
In 1992, MicroStrategy gained its first major client when it signed a $10 million contract with McDonald's. It increased revenues by 100% each year between 1990 and 1996. In 1994, the company's offices and its 50 employees moved from Delaware to Tysons Corner, Virginia.
On June 11, 1998, MicroStrategy became a public company via an initial public offering.
In 2000, the company founded Alarm.com as part of its research and development unit.
On March 20, 2000, after a review of its accounting practices, the company announced that it would restate its financial results for the preceding two years. Its stock price, which had risen from $7 per share to as high as $333 per share in a year, fell $120 per share, or 62%, in a day in what is regarded as the bursting of the dot-com bubble.
In December 2000, the U.S. Securities and Exchange Commis |
https://en.wikipedia.org/wiki/Logarithmically%20convex%20function | In mathematics, a function $f$ is logarithmically convex or superconvex if ${\log} \circ f$, the composition of the logarithm with $f$, is itself a convex function.
Definition
Let $X$ be a convex subset of a real vector space, and let $f : X \to \mathbb{R}$ be a function taking non-negative values. Then $f$ is:
Logarithmically convex if ${\log} \circ f$ is convex, and
Strictly logarithmically convex if ${\log} \circ f$ is strictly convex.
Here we interpret $\log 0$ as $-\infty$.
Explicitly, $f$ is logarithmically convex if and only if, for all $x, y \in X$ and all $t \in [0, 1]$, the two following equivalent conditions hold:
$$\log f(tx + (1-t)y) \le t \log f(x) + (1-t) \log f(y),$$
$$f(tx + (1-t)y) \le f(x)^t f(y)^{1-t}.$$
Similarly, $f$ is strictly logarithmically convex if and only if, in the above two expressions, strict inequality holds for all $t \in (0, 1)$ and $x \ne y$.
The above definition permits $f$ to be zero, but if $f$ is logarithmically convex and vanishes anywhere in $X$, then it vanishes everywhere in the interior of $X$.
Equivalent conditions
If $f$ is a differentiable function defined on an interval $I \subseteq \mathbb{R}$, then $f$ is logarithmically convex if and only if the following condition holds for all $x$ and $y$ in $I$:
$$\log f(x) \ge \log f(y) + \frac{f'(y)}{f(y)}(x - y).$$
This is equivalent to the condition that, whenever $x$ and $y$ are in $I$ and $x > y$,
$$\left(\frac{f(x)}{f(y)}\right)^{\frac{1}{x - y}} \ge \exp\left(\frac{f'(y)}{f(y)}\right).$$
Moreover, $f$ is strictly logarithmically convex if and only if these inequalities are always strict.
If $f$ is twice differentiable, then it is logarithmically convex if and only if, for all $x$ in $I$,
$$f''(x) f(x) \ge f'(x)^2.$$
If the inequality is always strict, then $f$ is strictly logarithmically convex. However, the converse is false: It is possible that $f$ is strictly logarithmically convex and that, for some $x$, we have $f''(x) f(x) = f'(x)^2$. For example, if $f(x) = e^{x^4}$, then $f$ is strictly logarithmically convex, but $f''(0) f(0) = 0 = f'(0)^2$.
Furthermore, $f$ is logarithmically convex if and only if $e^{\alpha x} f(x)$ is convex for all $\alpha \in \mathbb{R}$.
Sufficient conditions
If $f_1, \ldots, f_n$ are logarithmically convex, and if $w_1, \ldots, w_n$ are non-negative real numbers, then $f_1^{w_1} \cdots f_n^{w_n}$ is logarithmically convex.
If $\{f_i\}_{i \in I}$ is any family of logarithmically convex functions, then $g = \sup_{i \in I} f_i$ is logarithmically convex.
If $f$ is convex and $g$ is logarithmically convex and non-decreasing, then $g \circ f$ is logarithmically convex. (A numeric illustration of the definition follows.)
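As a small numeric sanity check of the midpoint form of the definition (an illustration, not from the source): the Gamma function is logarithmically convex on the positive reals (the Bohr–Mollerup theorem), which can be spot-checked with SciPy's log-Gamma:

```python
import numpy as np
from scipy.special import gammaln   # log of the Gamma function

# Midpoint test of log-convexity for f = Gamma on (0, inf):
# log f((x + y)/2) <= (log f(x) + log f(y)) / 2 for sampled pairs.
rng = np.random.default_rng(1)
x, y = rng.uniform(0.1, 10.0, size=(2, 1000))
lhs = gammaln((x + y) / 2)
rhs = (gammaln(x) + gammaln(y)) / 2
print(bool(np.all(lhs <= rhs)))     # True on all sampled pairs
```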
Properties
A logarithmically convex function f is a convex function since it is the composite of the incr |
https://en.wikipedia.org/wiki/Line%E2%80%93sphere%20intersection | In analytic geometry, a line and a sphere can intersect in three ways:
No intersection at all
Intersection in exactly one point
Intersection in two points.
Methods for distinguishing these cases, and determining the coordinates for the points in the latter cases, are useful in a number of circumstances. For example, it is a common calculation to perform during ray tracing.
Calculation using vectors in 3D
In vector notation, the equations are as follows:
Equation for a sphere:
$$\left\Vert \mathbf{x} - \mathbf{c} \right\Vert^2 = r^2$$
$\mathbf{x}$: points on the sphere
$\mathbf{c}$: center point
$r$: radius of the sphere
Equation for a line starting at $\mathbf{o}$:
$$\mathbf{x} = \mathbf{o} + d\mathbf{u}$$
$\mathbf{x}$: points on the line
$\mathbf{o}$: origin of the line
$d$: distance from the origin of the line
$\mathbf{u}$: direction of line (a non-zero vector)
Searching for points that are on the line and on the sphere means combining the equations and solving for $d$, involving the dot product of vectors:
Equations combined:
$$\left\Vert \mathbf{o} + d\mathbf{u} - \mathbf{c} \right\Vert^2 = r^2$$
Expanded and rearranged:
$$d^2 (\mathbf{u} \cdot \mathbf{u}) + 2d\,\bigl(\mathbf{u} \cdot (\mathbf{o} - \mathbf{c})\bigr) + (\mathbf{o} - \mathbf{c}) \cdot (\mathbf{o} - \mathbf{c}) - r^2 = 0$$
The form of a quadratic formula is now observable. (This quadratic equation is an instance of Joachimsthal's equation.)
$$d = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
where
$$a = \mathbf{u} \cdot \mathbf{u} = \left\Vert \mathbf{u} \right\Vert^2, \qquad b = 2\,\bigl(\mathbf{u} \cdot (\mathbf{o} - \mathbf{c})\bigr), \qquad c = (\mathbf{o} - \mathbf{c}) \cdot (\mathbf{o} - \mathbf{c}) - r^2.$$
Simplified:
$$d = \frac{-\bigl(\mathbf{u} \cdot (\mathbf{o} - \mathbf{c})\bigr) \pm \sqrt{\bigl(\mathbf{u} \cdot (\mathbf{o} - \mathbf{c})\bigr)^2 - \left\Vert \mathbf{u} \right\Vert^2 \left(\left\Vert \mathbf{o} - \mathbf{c} \right\Vert^2 - r^2\right)}}{\left\Vert \mathbf{u} \right\Vert^2}$$
Note that in the specific case where $\mathbf{u}$ is a unit vector, and thus $\left\Vert \mathbf{u} \right\Vert^2 = 1$, we can simplify this further to (writing $\hat{\mathbf{u}}$ instead of $\mathbf{u}$ to indicate a unit vector):
$$\Delta = \bigl(\hat{\mathbf{u}} \cdot (\mathbf{o} - \mathbf{c})\bigr)^2 - \left(\left\Vert \mathbf{o} - \mathbf{c} \right\Vert^2 - r^2\right)$$
$$d = -\bigl(\hat{\mathbf{u}} \cdot (\mathbf{o} - \mathbf{c})\bigr) \pm \sqrt{\Delta}$$
If $\Delta < 0$, then it is clear that no solutions exist, i.e. the line does not intersect the sphere (case 1).
If $\Delta = 0$, then exactly one solution exists, i.e. the line just touches the sphere in one point (case 2).
If $\Delta > 0$, two solutions exist, and thus the line touches the sphere in two points (case 3). A short implementation sketch of these three cases follows.
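A minimal Python sketch of the simplified unit-vector form above (the function name and example values are illustrative):

```python
import math

def line_sphere_intersection(o, u, c, r):
    """Distances d along x = o + d*u (u assumed a unit vector) where the
    line meets the sphere |x - c|^2 = r^2: [] for a miss (case 1), one
    value for a tangent (case 2), two values for a secant (case 3)."""
    oc = [oi - ci for oi, ci in zip(o, c)]      # o - c
    b = sum(ui * di for ui, di in zip(u, oc))   # u . (o - c)
    delta = b * b - (sum(di * di for di in oc) - r * r)
    if delta < 0:
        return []                               # case 1: no intersection
    if delta == 0:
        return [-b]                             # case 2: tangent point
    sq = math.sqrt(delta)
    return [-b - sq, -b + sq]                   # case 3: two points

# Example: a ray from the origin along +x against a unit sphere at (3, 0, 0)
print(line_sphere_intersection((0, 0, 0), (1, 0, 0), (3, 0, 0), 1))  # [2.0, 4.0]
```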
See also
Intersection (geometry) § A line and a circle
Analytic geometry
Line–plane intersection
Plane–plane intersection
Plane–sphere intersection |
https://en.wikipedia.org/wiki/Maria%20Chudnovsky | Maria Chudnovsky (born January 6, 1977) is an Israeli-American mathematician working on graph theory and combinatorial optimization. She is a 2012 MacArthur Fellow.
Education and career
Chudnovsky is a professor in the department of mathematics at Princeton University. She grew up in Russia (attended Saint Petersburg Lyceum 30) and Israel, studying at the Technion, and received her Ph.D. in 2003 from Princeton University under the supervision of Paul Seymour. After postdoctoral research at the Clay Mathematics Institute, she became an assistant professor at Princeton University in 2005, and moved to Columbia University in 2006. By 2014, she was the Liu Family Professor of Industrial Engineering and Operations Research at Columbia. She returned to Princeton as a professor of mathematics in 2015.
Research
Chudnovsky's contributions to graph theory include the proof of the strong perfect graph theorem (with Neil Robertson, Paul Seymour, and Robin Thomas) characterizing perfect graphs as being exactly the graphs with no odd induced cycles of length at least 5 or their complements. Other research contributions of Chudnovsky include co-authorship of the first polynomial-time algorithm for recognizing perfect graphs (time bounded by a polynomial of degree 9), a structural characterization of the claw-free graphs, and progress on the Erdős–Hajnal conjecture.
Awards and honors
In 2004 Chudnovsky was named one of the "Brilliant 10" by Popular Science magazine. Her work on the strong perfect graph theorem won for her and her co-authors the 2009 Fulkerson Prize.
In 2012 she was awarded a "genius award" under the MacArthur Fellows Program.
Personal life
She is a citizen of Israel and a permanent resident of the US.
In 2012, she married Daniel Panner, a viola player who teaches at Mannes School of Music and the Juilliard School. They have a son named Rafael. |
https://en.wikipedia.org/wiki/Staphylococcus%20aureus%20alpha%20toxin | Alpha-toxin, also known as alpha-hemolysin (Hla), is the major cytotoxic agent released by bacterium Staphylococcus aureus and the first identified member of the pore forming beta-barrel toxin family. This toxin consists mostly of beta-sheets (68%) with only about 10% alpha-helices. The hly gene on the S. aureus chromosome encodes the 293 residue protein monomer, which forms heptameric units on the cellular membrane to form a complete beta-barrel pore. This structure allows the toxin to perform its major function, development of pores in the cellular membrane, eventually causing cell death.
Function
Alpha-toxin has been shown to play a role in pathogenesis of disease, as hly knockout strains show reductions in invasiveness and virulence. The dosage of toxin can result in two different modes of activity. Low concentrations of toxin bind to specific, but unidentified, cell surface receptors and form the heptameric pores. This pore allows the exchange of monovalent ions, resulting in DNA fragmentation and eventually apoptosis. Higher concentrations result in the toxin absorbing nonspecifically to the lipid bilayer and forming large, Ca2+ permissive pores. This in turn results in massive necrosis and other secondary cellular reactions triggered by the uncontrolled Ca2+ influx.
Structure
The structure of the protein has been solved by X-ray crystallography
and is deposited in the PDB as id code 7ahl. Seven monomers each contribute a long beta-hairpin to a fourteen stranded beta barrel that forms a pore in the cell membrane. This pore is 14 Ångström wide at its narrowest point. This width equals the diameter of approximately 4 calcium ions.
Role in apoptosis
Recently, studies have shown that alpha-toxin plays a role in inducing apoptosis in certain human immune cells. Incubation of T-cells, monocytes, and peripheral blood lymphocytes with either purified alpha-toxin or S. aureus cell lysate resulted in the induction of apoptosis via the intrinsic death pathway. |
https://en.wikipedia.org/wiki/Beta%20bulge | A beta bulge can be described as a localized disruption of the regular hydrogen bonding of beta sheet by inserting extra residues into one or both hydrogen bonded β-strands.
Types
β-bulges can be grouped according to the length of the disruption, the number of residues inserted into each strand, whether the disrupted β-strands are parallel or antiparallel, and by their dihedral angles (which control the placement of their side chains). Two types occur commonly. One, the classic beta bulge, occurs within, or at the edge of, antiparallel beta-sheet; the first residue at the outwards bulge typically has the αR, rather than the normal β, conformation.
The other type is the G1 beta bulge, of which there are two common sorts, both mainly occurring in association with antiparallel sheet; one residue has the αL conformation and is usually a glycine. In one sort, the beta bulge loop, one of the hydrogen bonds of the beta-bulge also forms a beta turn or alpha turn, such that the motif is often at the loop of a beta hairpin. In the other sort, the beta link, the beta bulge occurs in combination with, and overlaps, a type II beta turn.
Effects on structure
At the level of the backbone structure, classic β-bulges can cause a simple aneurysm of the β-sheet, e.g., the bulge in the long β-hairpin of ribonuclease A (residues 88–91). A β-bulge can also cause a β-sheet to fold over and cross itself, e.g., when two residues with left-handed and right-handed α-helical dihedral angles are inserted opposite to each other in a β-hairpin, as occurs at Met9 and Asn16 in pseudoazurin (PDB accession code 1PAZ).
Effect on Functionality of Proteins
Conserved bulges regularly affect protein functionality. The most basic function of bulges is to accommodate an extra residue added due to mutation etc., while maintaining the bonding pattern and thus the overall protein architecture. Other bulges are involved with protein binding sites. In specific cases like the Immunoglobulin family proteins |
https://en.wikipedia.org/wiki/Vasculogenesis | Vasculogenesis is the process of blood vessel formation, occurring by a de novo production of endothelial cells. It is sometimes paired with angiogenesis, as the first stage of the formation of the vascular network, closely followed by angiogenesis.
Process
In the sense distinguished from angiogenesis, vasculogenesis is different in one aspect: whereas angiogenesis is the formation of new blood vessels from pre-existing ones, vasculogenesis is the formation of new blood vessels, in blood islands, when there are no pre-existing ones. For example, if a monolayer of endothelial cells begins sprouting to form capillaries, angiogenesis is occurring. Vasculogenesis, in contrast, is when endothelial precursor cells (angioblasts) migrate and differentiate in response to local cues (such as growth factors and extracellular matrices) to form new blood vessels. These vascular trees are then pruned and extended through angiogenesis.
Occurrences
Vasculogenesis occurs during embryologic development of the circulatory system. Specifically, around blood islands, which first arise in the mesoderm of the yolk sac at 3 weeks of development.
Vasculogenesis can also occur in the adult organism from circulating endothelial progenitor cells (derivatives of stem cells) able to contribute, albeit to varying degrees, to neovascularization. Examples of where vasculogenesis can occur in adults are:
Tumor growth (see HP59)
Revascularization or neovascularization after trauma, for example, after cardiac ischemia or retinal ischemia
Endometriosis - It appears that up to 37% of the microvascular endothelium of the ectopic endometrial tissue originates from endothelial progenitor cells.
See also
Vascular remodelling in the embryo
Vasculogenic mimicry |
https://en.wikipedia.org/wiki/Lema%C3%AEtre%20coordinates | Lemaître coordinates are a particular set of coordinates for the Schwarzschild metric—a spherically symmetric solution to the Einstein field equations in vacuum—introduced by Georges Lemaître in 1932. Changing from Schwarzschild to Lemaître coordinates removes the coordinate singularity at the Schwarzschild radius.
Equations
The original Schwarzschild coordinate expression of the Schwarzschild metric, in natural units ($c = 1$), is given as
$$ds^2 = \left(1 - \frac{r_s}{r}\right) dt^2 - \frac{dr^2}{1 - \dfrac{r_s}{r}} - r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right),$$
where
$ds^2$ is the invariant interval;
$r_s = \dfrac{2GM}{c^2}$ is the Schwarzschild radius;
$M$ is the mass of the central body;
$t, r, \theta, \phi$ are the Schwarzschild coordinates (which asymptotically turn into the flat spherical coordinates);
$c$ is the speed of light;
and $G$ is the gravitational constant.
This metric has a coordinate singularity at the Schwarzschild radius $r = r_s$.
Georges Lemaître was the first to show that this is not a real physical singularity but simply a manifestation of the fact that the static Schwarzschild coordinates cannot be realized with material bodies inside the Schwarzschild radius. Indeed, inside the Schwarzschild radius everything falls towards the centre and it is impossible for a physical body to keep a constant radius.
A transformation of the Schwarzschild coordinate system from $(t, r)$ to the new coordinates $(\tau, \rho)$,
$$\tau = t + \int \frac{\sqrt{r_s/r}}{1 - r_s/r}\,dr, \qquad \rho = t + \int \frac{\sqrt{r/r_s}}{1 - r_s/r}\,dr$$
(the numerator and denominator are switched inside the square roots), leads to the Lemaître coordinate expression of the metric,
$$ds^2 = d\tau^2 - \frac{r_s}{r}\,d\rho^2 - r^2 \left(d\theta^2 + \sin^2\theta \, d\phi^2\right),$$
where
$$r = \left[\frac{3}{2}(\rho - \tau)\right]^{2/3} r_s^{1/3}.$$
The trajectories with ρ constant are timelike geodesics with τ the proper time along these geodesics. They represent the motion of freely falling particles which start out with zero velocity at infinity. At any point their speed is just equal to the escape velocity from that point.
In Lemaître coordinates the metric is non-singular at the Schwarzschild radius $r = r_s$, which instead corresponds to the point $\frac{3}{2}(\rho - \tau) = r_s$. However, there remains a genuine gravitational singularity at the center, where $\rho = \tau$, which cannot be removed by a coordinate change.
The Lemaître coordinate system is synchronous, that is, the global time coordina |
https://en.wikipedia.org/wiki/Man%20page | A man page (short for manual page) is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs (including library and system calls), formal standards and conventions, and even abstract concepts. A user may invoke a man page by issuing the man command.
By default, man typically uses a terminal pager program such as more or less to display its output.
Man pages are often referred to as an "on-line" or "online" form of software documentation even though the man command does not require internet access; the usage dates back to the era when printed, out-of-band manuals were the norm.
History
In the first two years of the history of Unix, no documentation existed. The Unix Programmer's Manual was first published on November 3, 1971. The first actual man pages were written by Dennis Ritchie and Ken Thompson at the insistence of their manager Doug McIlroy in 1971. Aside from the man pages, the Programmer's Manual also accumulated a set of short papers, some of them tutorials (e.g. for general Unix usage, the C programming language, and tools such as Yacc), and others more detailed descriptions of operating system features. The printed version of the manual initially fit into a single binder, but as of PWB/UNIX and the 7th Edition of Research Unix, it was split into two volumes with the printed man pages forming Volume 1.
Later versions of the documentation imitated the first man pages' terseness. Ritchie added a "How to get started" section to the Third Edition introduction, and Lorinda Cherry provided the "Purple Card" pocket reference for the Sixth and Seventh Editions. Versions of the software were named after the revision of the manual; the seventh edition of the Unix Programmer's Manual, for example, came with the 7th Edition or Version 7 of Unix.
For the Fourth Edition the man pages were formatted using the troff typesetting package and its set of -man macros (which were completely revised between the |
https://en.wikipedia.org/wiki/Bauer%E2%80%93Fike%20theorem | In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalue of a complex-valued diagonalizable matrix. In its substance, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, what it says is that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors.
The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960.
The setup
In what follows we assume that:
$A \in \mathbb{C}^{n,n}$ is a diagonalizable matrix;
$V \in \mathbb{C}^{n,n}$ is the non-singular eigenvector matrix such that $A = V \Lambda V^{-1}$, where $\Lambda$ is a diagonal matrix.
If $X \in \mathbb{C}^{n,n}$ is invertible, its condition number in the $p$-norm is denoted by $\kappa_p(X)$ and defined by:
$$\kappa_p(X) = \Vert X \Vert_p \bigl\Vert X^{-1} \bigr\Vert_p.$$
The Bauer–Fike Theorem
Bauer–Fike Theorem. Let $\mu$ be an eigenvalue of $A + \delta A$. Then there exists $\lambda \in \Lambda(A)$ (the spectrum of $A$) such that:
$$|\lambda - \mu| \le \kappa_p(V)\,\Vert \delta A \Vert_p.$$
Proof. We can suppose $\mu \notin \Lambda(A)$, otherwise take $\lambda = \mu$ and the result is trivially true since $\kappa_p(V) \ge 1$. Since $\mu$ is an eigenvalue of $A + \delta A$, we have $\det(A + \delta A - \mu I) = 0$ and so
$$0 = \det\bigl(V^{-1}\bigr)\det(A + \delta A - \mu I)\det(V) = \det\bigl(\Lambda - \mu I + V^{-1} \delta A\, V\bigr).$$
However our assumption, $\mu \notin \Lambda(A)$, implies that $\det(\Lambda - \mu I) \ne 0$, and therefore we can write:
$$\det\Bigl((\Lambda - \mu I)^{-1} V^{-1} \delta A\, V + I\Bigr) = 0.$$
This reveals $-1$ to be an eigenvalue of
$$(\Lambda - \mu I)^{-1} V^{-1} \delta A\, V.$$
Since all $p$-norms are consistent matrix norms we have $|\lambda| \le \Vert A \Vert_p$ where $\lambda$ is an eigenvalue of $A$. In this instance this gives us:
$$1 = |-1| \le \bigl\Vert (\Lambda - \mu I)^{-1} V^{-1} \delta A\, V \bigr\Vert_p \le \bigl\Vert (\Lambda - \mu I)^{-1} \bigr\Vert_p\, \kappa_p(V)\, \Vert \delta A \Vert_p.$$
But $(\Lambda - \mu I)^{-1}$ is a diagonal matrix, the $p$-norm of which is easily computed:
$$\bigl\Vert (\Lambda - \mu I)^{-1} \bigr\Vert_p = \max_{\lambda \in \Lambda(A)} \frac{1}{|\lambda - \mu|} = \frac{1}{\min_{\lambda \in \Lambda(A)} |\lambda - \mu|},$$
whence:
$$\min_{\lambda \in \Lambda(A)} |\lambda - \mu| \le \kappa_p(V)\, \Vert \delta A \Vert_p.$$
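A small numerical illustration of the theorem with NumPy (the matrix, eigenvalues, and perturbation are chosen arbitrarily here, and the 2-norm is used):

```python
import numpy as np

rng = np.random.default_rng(0)

# A diagonalizable matrix A = V diag(lams) V^{-1} and a small perturbation dA.
lams = np.array([1.0, 2.0, 5.0])
V = rng.standard_normal((3, 3))
A = V @ np.diag(lams) @ np.linalg.inv(V)
dA = 1e-3 * rng.standard_normal((3, 3))

mus = np.linalg.eigvals(A + dA)      # perturbed eigenvalues
kappa = np.linalg.cond(V, 2)         # condition number of V in the 2-norm
bound = kappa * np.linalg.norm(dA, 2)

# Every perturbed eigenvalue lies within the Bauer-Fike bound of some lambda.
for mu in mus:
    dist = np.min(np.abs(lams - mu))
    assert dist <= bound
    print(f"min |lambda - mu| = {dist:.2e} <= {bound:.2e}")
```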
An Alternate Formulation
The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix $A$, but knows only an approximate eigenvalue–eigenvector couple $(\tilde\lambda, \tilde{v})$ and needs to bound the error. The following version is useful in such cases.
Bauer–Fike Theorem (Alternate Formulation). Let $(\tilde\lambda, \tilde{v})$ be an approximate eigenvalue–eigenvector couple, and let $r = A\tilde{v} - \tilde\lambda\tilde{v}$. Then there exists $\lambda \in \Lambda(A)$ such that:
$$\bigl|\lambda - \tilde\lambda\bigr| \le \kappa_p(V)\,\frac{\Vert r \Vert_p}{\Vert \tilde{v} \Vert_p}.$$
Proof. We can suppose $\tilde\lambda \notin \Lambda(A)$, otherwise take $\lambda = \tilde\lambda$ and the result is trivially true since $\kappa_p(V) \ge 1$. So $(A - \tilde\lambda I)^{-1}$ exists, and we can write:
$$\tilde{v} = (A - \tilde\lambda I)^{-1} r = V (\Lambda - \tilde\lambda I)^{-1} V^{-1} r,$$
since $A$ is diagonalizable; taking the $p$-norm of both sides, we obtain:
$$\Vert \tilde{v} \Vert_p \le \kappa_p(V)\, \bigl\Vert (\Lambda - \tilde\lambda I)^{-1} \bigr\Vert_p\, \Vert r \Vert_p.$$
However, $(\Lambda - \tilde\lambda I)^{-1}$ is a diagonal matrix and its $p$-norm is easily computed:
$$\bigl\Vert (\Lambda - \tilde\lambda I)^{-1} \bigr\Vert_p = \frac{1}{\min_{\lambda \in \Lambda(A)} |\lambda - \tilde\lambda|},$$
whence:
$$\min_{\lambda \in \Lambda(A)} \bigl|\lambda - \tilde\lambda\bigr| \le \kappa_p(V)\,\frac{\Vert r \Vert_p}{\Vert \tilde{v} \Vert_p}.$$
A Relativ |
https://en.wikipedia.org/wiki/Aconitine | Aconitine is an alkaloid toxin produced by various plant species belonging to the genus Aconitum (family Ranunculaceae), known also commonly by the names wolfsbane and monkshood. Monkshood is notorious for its toxic properties.
Structure and reactivity
Biologically active isolates from Aconitum and Delphinium plants are classified as norditerpenoid alkaloids, which are further subdivided based on the presence or absence of the C18 carbon. Aconitine is a C19-norditerpenoid, as it contains this C18 carbon. It is barely soluble in water, but very soluble in organic solvents such as chloroform or diethyl ether. Aconitine is also soluble in mixtures of alcohol and water if the concentration of alcohol is high enough.
Like many other alkaloids, the basic nitrogen atom in one of the six-membered rings of aconitine can easily form salts and ions, giving it affinity for both polar and lipophilic structures (such as cell membranes and receptors) and making it possible for the molecule to pass the blood–brain barrier. The acetoxyl group at the C8 position can readily be replaced by a methoxy group, by heating aconitine in methanol, to produce an 8-deacetyl-8-O-methyl derivative. If aconitine is heated in its dry state, it undergoes pyrolysis to form pyroaconitine ((1α,3α,6α,14α,16β)-20-ethyl-3,13-dihydroxy-1,6,16-trimethoxy-4-(methoxymethyl)-15-oxoaconitan-14-yl benzoate) with the chemical formula C32H43NO9.
Mechanism of action
Aconitine can interact with the voltage-dependent sodium-ion channels, which are proteins in the cell membranes of excitable tissues, such as cardiac and skeletal muscles and neurons. These proteins are highly selective for sodium ions. They open very quickly to depolarize the cell membrane potential, causing the upstroke of an action potential. Normally, the sodium channels close very rapidly, but the depolarization of the membrane potential causes the opening (activation) of potassium channels and potassium efflux, which result |
https://en.wikipedia.org/wiki/Helikon%20vortex%20separation%20process | The Helikon vortex separation process is an aerodynamic uranium enrichment process designed around a device called a vortex tube. Paul Dirac thought of the idea for isotope separation and tried creating such a device in 1934 in the lab of Peter Kapitza at Cambridge. Other methods of separation were more practical at that time, but this method was designed and used in South Africa for producing reactor fuel with a uranium-235 content of around 3–5%, and 80–93% enriched uranium for use in nuclear weapons. The Uranium Enrichment Corporation of South Africa, Ltd. (UCOR) developed the process, operating a facility at Pelindaba (known as the 'Y' plant) to produce hundreds of kilograms of HEU. Aerodynamic enrichment processes require large amounts of electricity and are not generally considered economically competitive because of high energy consumption and substantial requirements for removal of waste heat. The process does offer other advantages, however, such as its simplicity and the modest precision required, even if it is more expensive overall. The South African enrichment plant was closed on 1 February 1990.
Process
In the vortex separation process a mixture of uranium hexafluoride gas and hydrogen is injected tangentially into a tube at one end through nozzles or holes, at velocities close to the speed of sound. The tube tapers to a small exit aperture at one or both ends. This tangential injection of gas results in a spiral or vortex motion within the tube, and two gas streams are withdrawn at opposite ends of the vortex tube; centrifugal force providing the isotopic separation. The spiral swirling flow decays downstream of the feed inlet due to friction at the tube wall. Consequently, the inside diameter of the tube is typically tapered to reduce decay in the swirling flow velocity. This process is characterized by a separating element with a very small stage cut (the ratio of product flow to feed flow) of about 1/20, and high process-operating pressures.
Due to the extremely diffi |
https://en.wikipedia.org/wiki/Journal%20of%20Cosmology%20and%20Astroparticle%20Physics | The Journal of Cosmology and Astroparticle Physics is an online-only peer-reviewed scientific journal focusing on all aspects of cosmology and astroparticle physics. This encompasses theory, observation, experiment, computation and simulation. It has been published jointly by IOP Publishing and the International School for Advanced Studies since 2003. The journal was part of the SCOAP3 initiative but moved out of the agreement on 1 January 2017.
Abstracting and indexing
Journal of Cosmology and Astroparticle Physics is indexed and abstracted in the following databases: |
https://en.wikipedia.org/wiki/Coopetition | Coopetition or co-opetition (sometimes spelled "coopertition" or "co-opertition") is a neologism coined to describe cooperative competition. Coopetition is a portmanteau of cooperation and competition. Basic principles of co-opetitive structures have been described in game theory, a scientific field that received more attention with the book Theory of Games and Economic Behavior in 1944 and the works of John Forbes Nash on non-cooperative games. Coopetition occurs both at inter-organizational or intra-organizational levels.
Overview
The concept and term coopetition and its variants have been re-coined several times in history.
The concept appeared as early as 1913, being used to describe the relationships among proximate independent dealers of the Sealshipt Oyster System, who were instructed to cooperate for the benefit of the system while competing with each other for customers in the same city.
Inter-organizational
The term and the ideas around co-opetition gained wide attention within the business community after the publication in 1996 of the book by Brandenburger and Nalebuff bearing the same title. It remains the reference work for researchers and practitioners alike.
Giovanni Battista Dagnino and Giovanna Padula conceptualized, in a 2002 conference paper, that at the inter-organisational level coopetition occurs when companies interact with partial congruence of interests. They cooperate with each other to reach a higher value creation, compared to the value created without interaction, while struggling to achieve a competitive advantage.
Often coopetition takes place when companies that are in the same market work together in the exploration of knowledge and research of new products, at the same time that they compete for the market-share of their products and in the exploitation of the knowledge created. In this case, the interactions occur simultaneously and in different levels in the value chain. This is the case in the arr |
https://en.wikipedia.org/wiki/Virtual%20Processor | Virtual Processor (VP) was a virtual machine from Tao Group.
History
The first version, VP1, was the basis of its parallel processing multimedia OS and platform, TAOS. VP1 supported a RISC-like instruction set with 16 32-bit registers, and had data types of 32- and 64-bit integers and 32- and 64-bit IEEE floating point numbers in registers, and also supported 8- and 16-bit integers in memory.
The second version, VP2, was released in 1998 as the basis of a new version of the portable multimedia platform, first known as Elate and then as intent. VP2 supported the same data types and data processing operations as VP1, but had additional features for better support of high level languages such as demarcation of subroutines, by-value parameters, and a very large theoretical maximum number of registers local to the subroutine for use as local variables.
The structure of VPCode, the Virtual Processor's machine code, was intended to be able to represent the constructs required when compiling languages such as C, C++ and Java, and to allow efficient translation into the machine code of any real 32- or 64-bit CPU. |
https://en.wikipedia.org/wiki/Cervical%20vertebrae | In tetrapods, cervical vertebrae (singular: vertebra) are the vertebrae of the neck, immediately below the skull. Truncal vertebrae (divided into thoracic and lumbar vertebrae in mammals) lie caudal (toward the tail) of cervical vertebrae. In sauropsid species, the cervical vertebrae bear cervical ribs. In lizards and saurischian dinosaurs, the cervical ribs are large; in birds, they are small and completely fused to the vertebrae. The vertebral transverse processes of mammals are homologous to the cervical ribs of other amniotes. Most mammals have seven cervical vertebrae, with the only three known exceptions being the manatee with six, the two-toed sloth with five or six, and the three-toed sloth with nine.
In humans, cervical vertebrae are the smallest of the true vertebrae and can be readily distinguished from those of the thoracic or lumbar regions by the presence of a foramen (hole) in each transverse process, through which the vertebral artery, vertebral veins, and inferior cervical ganglion pass. The remainder of this article focuses upon human anatomy.
Structure
By convention, the cervical vertebrae are numbered, with the first one (C1) closest to the skull and higher numbered vertebrae (C2–C7) proceeding away from the skull and down the spine.
The general characteristics of the third through sixth cervical vertebrae are described here. The first, second, and seventh vertebrae are extraordinary, and are detailed later.
The bodies of these four vertebrae are small, and broader from side to side than from front to back.
The anterior and posterior surfaces are flattened and of equal depth; the former is placed on a lower level than the latter, and its inferior border is prolonged downward, so as to overlap the upper and forepart of the vertebra below.
The upper surface is concave transversely, and presents a projecting lip on either side.
The lower surface is concave from front to back, convex from side to side, and presents laterally shallow concavities that |
https://en.wikipedia.org/wiki/S-GPS | Simultaneous GPS or S-GPS is a method to allow a GPS reception and CDMA communications to operate simultaneously in a mobile phone.
Ordinarily, cellular geolocation and a built-in GPS receiver are used to determine the location of an E911 call made from CDMA phones. By using a time-multiplexed scheme called TM-GPS, the reception of the telephone call and the GPS signal are alternated one after the other, requiring only one radio receiver.
Simultaneous GPS allows a cellphone to receive both GPS and voice data at the same time, which improves sensitivity and allows service providers to offer location-based services. The use of two radios with a single antenna imparts new design challenges, such as leakage of the voice transmitter signal into the GPS receiver circuitry. The commercial availability of S-GPS chipsets from manufacturers such as Qualcomm, has led to adoption of the method in newer handsets.
See also
Assisted GPS |
https://en.wikipedia.org/wiki/Existential%20quantification | In predicate logic, an existential quantification is a type of quantifier, a logical constant which is interpreted as "there exists", "there is at least one", or "for some". It is usually denoted by the logical operator symbol ∃, which, when used together with a predicate variable, is called an existential quantifier ("∃x" or "∃(x)" or "(∃x)"). Existential quantification is distinct from universal quantification ("for all"), which asserts that the property or relation holds for all members of the domain. Some sources use the term existentialization to refer to existential quantification.
Basics
Consider a formula that states that some natural number multiplied by itself is 25.
0·0 = 25, or 1·1 = 25, or 2·2 = 25, or 3·3 = 25, ...
This would seem to be a logical disjunction because of the repeated use of "or". However, the ellipsis makes the statement impossible to write out in full, so it cannot be interpreted as a disjunction in formal logic.
Instead, the statement could be rephrased more formally as
For some natural number n, n·n = 25.
This is a single statement using existential quantification.
This statement is more precise than the original one, since the phrase "and so on" does not necessarily include all natural numbers and exclude everything else. And since the domain was not stated explicitly, the phrase could not be interpreted formally. In the quantified statement, however, the natural numbers are mentioned explicitly.
This particular example is true, because 5 is a natural number, and when we substitute 5 for n, we produce "5·5 = 25", which is true.
It does not matter that "n·n = 25" is only true for a single natural number, 5; even the existence of a single solution is enough to prove this existential quantification as being true.
In contrast, "For some even number n, n·n = 25" is false, because there are no even solutions.
The domain of discourse, which specifies the values the variable n is allowed to take, is therefore critical to a statement's truth or falsity. Logical conjunc
https://en.wikipedia.org/wiki/Wireless%20identity%20theft | Wireless identity theft, also known as contactless identity theft or RFID identity theft, is a form of identity theft described as "the act of compromising an individual’s personal identifying information using wireless (radio frequency) mechanics." Numerous articles have been written about wireless identity theft and broadcast television has produced several investigations of this phenomenon. According to Marc Rotenberg of the Electronic Privacy Information Center, wireless identity theft is a serious issue as the contactless (wireless) card design is inherently flawed, increasing the vulnerability to attacks.
Overview
Wireless identity theft is a relatively new technique for gathering individuals' personal information from RF-enabled cards carried on a person in their access control, credit, debit, or government-issued identification cards. Each of these cards carries a radio frequency identification chip which responds to certain radio frequencies. When these "tags" come into contact with radio waves, they respond with a slightly altered signal. The response can contain encoded personally identifying information, including the card holder's name, address, Social Security Number, phone number, and pertinent account or employee information.
Upon capturing (or ‘harvesting’) this data, one is then able to program other cards to respond in an identical fashion (‘cloning’). Many websites are dedicated to teaching people how to do this, as well as supplying the necessary equipment and software.
The financial industry is migrating away from magnetic stripes on debit and credit cards, which require a swipe through a magnetic card reader, toward contactless cards. With contactless reading, the number of transactions per minute can be increased and more transactions can be processed in a shorter time, making for arguably shorter lines at the cashier.
Controversies
Academic researchers and ‘White-Hat’ hackers have analysed and documented the covert theft of RFID credit card in |
https://en.wikipedia.org/wiki/ACOT6 | Acyl-CoA thioesterase 6 is a protein that in humans is encoded by the ACOT6 gene. The protein, also known as C14orf42, is an enzyme with thioesterase activity.
Function
The protein encoded by the ACOT6 gene is part of a family of acyl-CoA thioesterases, which catalyze the hydrolysis of Coenzyme A esters of various molecules to the free acid plus CoA. These enzymes have also been referred to in the literature as acyl-CoA hydrolases, acyl-CoA thioester hydrolases, and palmitoyl-CoA hydrolases. The reaction carried out by these enzymes is as follows:
CoA ester + H2O → free acid + coenzyme A
These enzymes use the same substrates as long-chain acyl-CoA synthetases, but have a unique purpose in that they generate the free acid and CoA, as opposed to long-chain acyl-CoA synthetases, which ligate fatty acids to CoA, to produce the CoA ester. The role of the ACOT- family of enzymes is not well understood; however, it has been suggested that they play a crucial role in regulating the intracellular levels of CoA esters, Coenzyme A, and free fatty acids. Recent studies have shown that Acyl-CoA esters have many more functions than simply an energy source. These functions include allosteric regulation of enzymes such as acetyl-CoA carboxylase, hexokinase IV, and the citrate condensing enzyme. Long-chain acyl-CoAs also regulate opening of ATP-sensitive potassium channels and activation of Calcium ATPases, thereby regulating insulin secretion. A number of other cellular events are also mediated via acyl-CoAs, for example signal transduction through protein kinase C, inhibition of retinoic acid-induced apoptosis, and involvement in budding and fusion of the endomembrane system. Acyl-CoAs also mediate protein targeting to various membranes and regulation of G Protein α subunits, because they are substrates for protein acylation. In the mitochondria, acyl-CoA esters are involved in the acylation of mitochondrial NAD+ dependent dehydrogenases; because these enzymes are resp |
https://en.wikipedia.org/wiki/Shadertoy | Shadertoy.com is an online community and tool for creating and sharing shaders through WebGL, used for both learning and teaching 3D computer graphics in a web browser.
Overview
Shadertoy.com is an online community and platform for computer graphics professionals, academics and enthusiasts who share, learn and experiment with rendering techniques and procedural art through GLSL code. As of mid-2021 there are more than 52,000 public contributions from thousands of users. WebGL allows Shadertoy to access the compute power of the GPU to generate procedural art, animation, models, lighting, state-based logic and sound.
History
Shadertoy.com was created by Pol Jeremias and Inigo Quilez in January 2013 and came online in February the same year.
The roots of the effort are in Quilez's "Shadertoy" section on his computer graphics educational website. With the arrival of the initial WebGL implementation in Mozilla's Firefox in 2009, Quilez created the first online live-coding environment and curated repository of procedural shaders. This content was donated by 18 authors from the demoscene and showcased advanced real-time and interactive animations never before seen on the Web, such as raymarched metaballs, fractals and tunnel effects.
After having worked together on several real-time rendering projects for years, Quilez and Jeremias decided in December 2012 to create a new Shadertoy site that would follow the tradition of the original Shadertoy page, with its demoscene-flavored, resource- and size-constrained real-time graphics content, but would add social and community features and embrace an open-source attitude.
The page came out with the live editor, real-time playback, browsing and searching capabilities, tagging and commenting features. Content wise, Shadertoy provided a fixed and limited set of textures for its users to utilize in creative ways. Over the years Shadertoy added extra features, such as webcam and microphone input support, video, mu |
https://en.wikipedia.org/wiki/Thorpe%E2%80%93Ingold%20effect | The Thorpe–Ingold effect, gem-dimethyl effect, or angle compression is an effect observed in chemistry where increasing steric hindrance favours ring closure and intramolecular reactions. The effect was first reported by Beesley, Thorpe, and Ingold in 1915 as part of a study of cyclization reactions. It has since been generalized to many areas of chemistry.
The comparative rates of lactone formation (lactonization) of various 2-hydroxybenzenepropionic acids illustrate the effect. The placement of an increasing number of methyl groups accelerates the cyclization process.
One application of this effect is addition of a quaternary carbon (e.g., a gem-dimethyl group) in an alkyl chain to increase the reaction rate and/or equilibrium constant of cyclization reactions. An example of this is an olefin metathesis reaction: In the field of peptide foldamers, amino acid residues containing quaternary carbons such as 2-aminoisobutyric acid are used to promote formation of certain types of helices.
One proposed explanation for this effect is that the increased size of the substituents increases the angle between them. As a result, the angle between the other two substituents decreases. By moving them closer together, reactions between them are accelerated. It is thus a kinetic effect.
The effect also has some thermodynamic contribution, as the in silico strain energy decreases on going from cyclobutane to 1-methylcyclobutane and on to 1,1-dimethylcyclobutane, by values between 1.5 and 8 kcal/mol.
A noteworthy example of the Thorpe-Ingold effect in supramolecular catalysis is given by diphenylmethane derivatives provided with guanidinium groups. These compounds are active in the cleavage of the RNA model compound HPNP. Substitution of the methylene group of the parent diphenylmethane spacer with cyclohexylidene and adamantylidene moieties enhances catalytic efficiency, with gem dialkyl effect accelerations of 4.5 and 9.1, respectively.
See also
Chelate effect
Flipp |
https://en.wikipedia.org/wiki/Functional%20near-infrared%20spectroscopy | Functional near-infrared spectroscopy (fNIRS) is an optical brain monitoring technique which uses near-infrared spectroscopy for the purpose of functional neuroimaging. Using fNIRS, brain activity is measured by using near-infrared light to estimate cortical hemodynamic activity which occur in response to neural activity. Alongside EEG, fNIRS is one of the most common non-invasive neuroimaging techniques which can be used in portable contexts. The signal is often compared with the BOLD signal measured by fMRI and is capable of measuring changes both in oxy- and deoxyhemoglobin concentration, but can only measure from regions near the cortical surface. fNIRS may also be referred to as Optical Topography (OT) and is sometimes referred to simply as NIRS.
Description
fNIRS estimates the concentration of hemoglobin from changes in absorption of near infrared light. As light moves or propagates through the head, it is alternately scattered or absorbed by the tissue through which it travels. Because hemoglobin is a significant absorber of near-infrared light, changes in absorbed light can be used to reliably measure changes in hemoglobin concentration. Different fNIRS techniques can also use the way in which light propagates to estimate blood volume and oxygenation. The technique is safe, non-invasive, and can be used with other imaging modalities.
fNIRS is a non-invasive imaging method involving the quantification of chromophore concentration resolved from the measurement of near infrared (NIR) light attenuation or temporal or phasic changes. The technique takes advantage of the optical window in which (a) skin, tissue, and bone are mostly transparent to NIR light (700–900 nm spectral interval) and (b) hemoglobin (Hb) and deoxygenated-hemoglobin (deoxy-Hb) are strong absorbers of light.
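The quantification step described above commonly rests on the modified Beer–Lambert law; a standard formulation is sketched here (not spelled out in the text; HbO and HbR denote oxy- and deoxyhemoglobin, $d$ the source–detector separation, and DPF the differential pathlength factor):

```latex
% Modified Beer-Lambert law: change in optical density at wavelength \lambda
% expressed as a weighted sum of hemoglobin concentration changes.
\Delta OD_{\lambda}
  = \bigl( \varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta[\mathrm{HbO}]
         + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}] \bigr)
    \, d \cdot \mathrm{DPF}(\lambda)
```

Measuring the attenuation change at two (or more) wavelengths yields a small linear system that can be solved for the two concentration changes.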
There are six different ways for infrared light to interact with the brain tissue: direct transmission, diffuse transmission, specular reflection, diffuse reflection, scattering, and a |
https://en.wikipedia.org/wiki/Live%20File%20System | Live File System is the term Microsoft uses to describe the packet writing method of creating discs in Windows Vista and later, which allows writeable optical media to act like mass storage by replicating its file operations. Live File System lets users manage files on recordable and rewriteable optical discs inside the file manager with the familiar workflow known from mass storage media such as USB flash drives and external hard disk drives.
Files can be added incrementally to the media, as well as modified, moved and deleted. These discs use the UDF file system. The supported UDF versions for usage as a live file system are UDF 1.50, UDF 2.00, UDF 2.01, UDF 2.50 for CD-R, CD-RW, DVD±R, DVD±RW and BD-RE, and UDF 2.60 for BD-R.
The Live File System option is used by default by AutoPlay when formatting or erasing CD/DVD -R or -RW media.
Compatibility
Older Windows versions do not have support for reading the latest UDF versions. If users create DVDs/CDs in Windows Vista using UDF 2.50, these may not be readable on other systems, including Windows XP and older (pre-Mac OS 10.5) Apple systems, unless a third-party UDF reader driver is installed. To ensure compatibility of discs created on Windows Vista, UDF 2.01 or lower should be selected.
See also
InCD – Commonly used for packet writing before natively supported since Windows Vista
Image Mastering API |
https://en.wikipedia.org/wiki/Beverly%20Clock | The Beverly Clock is a clock in the 3rd-floor lift foyer of the Department of Physics at the University of Otago, Dunedin, New Zealand. The clock is still running despite never having been manually wound since its construction in 1864 by Arthur Beverly.
Operation
The clock's mechanism is driven by variations in daily temperature and, to a lesser extent, in atmospheric pressure. Either causes the air in an airtight box to expand or contract, which pushes on a diaphragm. A temperature variation of 6 °C (11 °F) over the course of each day creates approximately enough pressure to raise a one-pound weight by one inch (equivalent to about 0.113 J), which drives the clock mechanism.
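A quick arithmetic check of that energy budget, using the one-pound, one-inch figures above (an illustrative sketch):

```python
# Work done raising one pound-mass by one inch against gravity.
weight_n = 0.45359237 * 9.80665   # weight of 1 lb in newtons (~4.448 N)
height_m = 0.0254                 # one inch in metres
print(f"{weight_n * height_m:.3f} J")  # ~0.113 J per daily cycle
```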
A similar mechanism in a commercially available clock that operates on the same principle is the Atmos clock, manufactured by the Swiss watchmaker Jaeger-LeCoultre.
While the clock has not been wound since it was made, it has stopped on a number of occasions, such as when its mechanism needed cleaning or there was a mechanical failure, and when the Physics Department moved to new quarters. Also, on occasions when the ambient temperature does not fluctuate sufficiently to supply the requisite amount of energy, the clock will not function. However, after environmental parameters readjust, the clock begins operating again.
See also
Long-term experiment
Oxford Electric Bell (1840)
Pitch drop experiment (1927)
Cox's timepiece
Clock of the Long Now
Atmos clock, a commercially available clock working on a similar principle
Temperature gradient ocean glider |
https://en.wikipedia.org/wiki/Dipole%20antenna | In radio and telecommunications a dipole antenna or doublet is the simplest and most widely used class of antenna. The dipole is any one of a class of antennas producing a radiation pattern approximating that of an elementary electric dipole with a radiating structure supporting a line current so energized that the current has only one node at each end. A dipole antenna commonly consists of two identical conductive elements such as metal wires or rods. The driving current from the transmitter is applied, or for receiving antennas the output signal to the receiver is taken, between the two halves of the antenna. Each side of the feedline to the transmitter or receiver is connected to one of the conductors. This contrasts with a monopole antenna, which consists of a single rod or conductor with one side of the feedline connected to it, and the other side connected to some type of ground. A common example of a dipole is the "rabbit ears" television antenna found on broadcast television sets.
The dipole is the simplest type of antenna from a theoretical point of view. Most commonly it consists of two conductors of equal length oriented end-to-end with the feedline connected between them. Dipoles are frequently used as resonant antennas. If the feedpoint of such an antenna is shorted, then it will be able to resonate at a particular frequency, just like a guitar string that is plucked. Using the antenna at around that frequency is advantageous in terms of feedpoint impedance (and thus standing wave ratio), so its length is determined by the intended wavelength (or frequency) of operation. The most commonly used is the center-fed half-wave dipole which is just under a half-wavelength long. The radiation pattern of the half-wave dipole is maximum perpendicular to the conductor, falling to zero in the axial direction, thus implementing an omnidirectional antenna if installed vertically, or (more commonly) a weakly directional antenna if horizontal.
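As a rough illustration of the length rule above, a short sketch for sizing a center-fed half-wave dipole (the 0.95 shortening factor is a common rule of thumb for end effects, assumed here rather than taken from the text):

```python
# Approximate physical length of a resonant half-wave dipole.
C = 299_792_458.0                       # speed of light, m/s

def half_wave_dipole_length(freq_hz, k=0.95):
    return k * C / (2 * freq_hz)        # just under half a wavelength

print(f"{half_wave_dipole_length(100e6):.3f} m")  # FM broadcast band, ~1.42 m
```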
Although they may be us |