Stochastic optimization methods also include methods with random iterates. Some stochastic optimization methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.

stochastic semantic analysis
An approach used in computer science as a semantic component of natural language understanding. Stochastic models generally use the definition of segments of words as basic semantic units for the semantic models, and in some cases involve a two-layered approach.

Stanford Research Institute Problem Solver (STRIPS)
An automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International.

subject-matter expert (SME)
A person who has accumulated great knowledge in a particular field or topic, demonstrated by the person's degree, licensure, and/or years of professional experience with the subject.

superintelligence
A hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. Superintelligence may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants), whether or not these high-level intellectual competencies are embodied in agents that act within the physical world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.

supervised learning
The machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias).

support vector machines
In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data used for classification and regression.

swarm intelligence (SI)
The collective behavior of decentralized, self-organized systems, either natural or artificial. The expression was introduced in the context of cellular robotic systems.

symbolic artificial intelligence
The term for the collection of all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic, and search.

synthetic intelligence (SI)
An alternative term for artificial intelligence which emphasizes that the intelligence of machines need not be an imitation or in any way artificial; it can be a genuine form of intelligence.

systems neuroscience
A subdiscipline of neuroscience and systems biology that studies the structure and function of neural circuits and systems. It is an umbrella term encompassing a number of areas of study concerned with how nerve cells behave when connected together to form neural pathways, neural circuits, and larger brain networks.

== T ==

technological singularity
Also simply the singularity. A hypothetical point in the future when technological growth becomes uncontrollable and irreversible, resulting in unfathomable changes to human civilization.

temporal difference learning
A class of model-free reinforcement learning methods which learn by bootstrapping from the current estimate of the value function. These methods sample from the environment, like Monte Carlo methods, and perform updates based on current estimates, like dynamic programming methods.
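The temporal difference learning entry above can be made concrete with a minimal sketch: a TD(0) value update on a toy five-state chain. The chain environment, step size, and discount factor below are illustrative assumptions, not part of the glossary definition.

```python
# TD(0) sketch: update each state's value estimate from the current
# estimate of the successor state (bootstrapping), after every sampled step.
def td0(num_states=5, episodes=500, alpha=0.1, gamma=0.9):
    V = [0.0] * (num_states + 1)  # V[num_states] is the terminal state
    for _ in range(episodes):
        s = 0
        while s < num_states:
            s_next = s + 1  # toy chain: always step right
            r = 1.0 if s_next == num_states else 0.0  # reward only at the end
            # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

After enough episodes the estimates approach the discounted return from each state, so values increase monotonically toward the rewarding terminal state.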
tensor network theory
A theory of brain function (particularly that of the cerebellum) that provides a mathematical model of the transformation of sensory space-time coordinates into motor coordinates and vice versa by cerebellar neuronal networks. The theory was developed as a geometrization of brain function (especially of the central nervous system) using tensors.

TensorFlow
A free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks.

theoretical computer science (TCS)
A subset of general computer science and mathematics that focuses on more mathematical topics of computing and includes the theory of computation.

theory of computation
In theoretical computer science and mathematics, the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?"

Thompson sampling
A heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.

time complexity
The computational complexity that describes the amount of time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to differ by at most a constant factor.

transfer learning
A machine learning technique in which knowledge learned from one task is reused to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks.

transformer
A type of deep
learning architecture that exploits a multi-head attention mechanism. Transformers address some of the limitations of long short-term memory, and became widely used in natural language processing; they can also process other types of data, such as images in the case of vision transformers.

transhumanism
Abbreviated H+ or h+. An international philosophical movement that advocates for the transformation of the human condition by developing and making widely available sophisticated technologies to greatly enhance human intellect and physiology.

transition system
In theoretical computer science, a concept used in the study of computation to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible.

tree traversal
Also tree search. A form of graph traversal: the process of visiting (checking and/or updating) each node in a tree data structure exactly once. Such traversals are classified by the order in which the nodes are visited.

true quantified Boolean formula
In computational complexity theory, the language TQBF is a formal language consisting of the true quantified Boolean formulas. A (fully) quantified Boolean formula is a formula in quantified propositional logic where every variable is quantified (or bound), using either existential or universal quantifiers, at the beginning of the sentence. Such a formula is equivalent to either true or false (since there are no free variables). If such a formula evaluates to true, then that formula is in the language TQBF. It is also known as QSAT (Quantified SAT).

Turing machine
A mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any algorithm.

Turing test
A test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human, developed by Alan Turing in 1950. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only on how closely its answers resemble those a human would give.

type system
In programming languages, a set of rules that assigns a property called type to the various constructs of a computer program, such as variables, expressions, functions, or modules. These types formalize and enforce the otherwise implicit categories the programmer uses for algebraic data types, data structures, or other components (e.g. "string", "array of float", "function returning boolean"). The main purpose of a type system is to reduce possibilities for bugs in computer programs by defining interfaces between different parts of a computer program, and then checking that the parts have been connected in a consistent way. This checking can happen statically (at compile time), dynamically (at run time), or as a combination of static and dynamic checking. Type systems have
other purposes as well, such as expressing business rules, enabling certain compiler optimizations, allowing for multiple dispatch, and providing a form of documentation.

== U ==

unsupervised learning
A type of self-organized Hebbian learning that helps find previously unknown patterns in a data set without pre-existing labels. It is also known as self-organization and allows modeling probability densities of given inputs. It is one of the three basic paradigms of machine learning, alongside supervised and reinforcement learning. Semi-supervised learning has also been described and is a hybridization of supervised and unsupervised techniques.

== V ==

vision processing unit (VPU)
A type of microprocessor designed to accelerate machine vision tasks.

value-alignment complete
Analogous to an AI-complete problem, a value-alignment complete problem is a problem where the AI control problem needs to be fully solved in order to solve it.

== W ==

Watson
A question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first CEO, industrialist Thomas J. Watson.

weak AI
Also narrow AI. Artificial intelligence that is focused on one narrow task.

weak supervision
See semi-supervised learning.

word embedding
A representation of a word in natural language processing. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning.

== X ==

XGBoost
Short for eXtreme Gradient Boosting, XGBoost is an open-source software library which provides a regularizing gradient boosting framework for multiple programming languages.
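As an illustration of the word embedding entry above, closeness in the vector space is usually measured with cosine similarity. The three-dimensional vectors below are invented toy values, not the output of any real embedding model.

```python
import math

# Cosine similarity: the standard notion of "closeness" between word vectors.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" (illustrative values only).
emb = {
    "king":  [0.80, 0.65, 0.10],
    "queen": [0.75, 0.70, 0.15],
    "apple": [0.10, 0.20, 0.90],
}
```

With these toy vectors, "king" sits much closer to "queen" than to "apple", which is the behavior a trained embedding is expected to exhibit for related words.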
{ "page_id": 50336055, "source": null, "title": "Glossary of artificial intelligence" }
Colm P. O'Donnell is an Irish chemist and engineer. He is a professor of biosystems and food engineering at University College Dublin, where he is active in the field of process analytical technology (PAT). He is also head of the university's School of Biosystems and Food Engineering, as well as chairperson of the dairy processing technical committee of the International Federation for Process Analysis and Control (IFPAC). == References == == External links == "Professor Colm O'Donnell". University College Dublin (UCD). Retrieved 5 October 2018.
{ "page_id": 58659130, "source": null, "title": "Colm O'Donnell" }
Shotokuvirae is a kingdom of viruses. == Nomenclature == The kingdom Shotokuvirae is named after Japan's Empress Shotoku (718-770 AD), who reigned over Japan twice, first as Empress Koken and later as Empress Shotoku. She created the world's earliest written record of a plant virus disease: a poem to her followers describing a eupatorium plant that had turned yellow, an infection caused by the geminivirus eupatorium yellow vein virus. == Taxonomy == The following phyla are recognized: Commensaviricota, Cossaviricota, and Cressdnaviricota.
{ "page_id": 63770939, "source": null, "title": "Shotokuvirae" }
Fluorescence-lifetime imaging microscopy or FLIM is an imaging technique based on the differences in the exponential decay rate of the photon emission of a fluorophore from a sample. It can be used as an imaging technique in confocal microscopy, two-photon excitation microscopy, and multiphoton tomography. The fluorescence lifetime (FLT) of the fluorophore, rather than its intensity, is used to create the image in FLIM. Fluorescence lifetime depends on the local micro-environment of the fluorophore, thus precluding any erroneous measurements in fluorescence intensity due to change in brightness of the light source, background light intensity, or limited photo-bleaching. This technique also has the advantage of minimizing the effect of photon scattering in thick layers of sample. Being dependent on the micro-environment, lifetime measurements have been used as an indicator for pH, viscosity, and chemical species concentration. == Fluorescence lifetimes == A fluorophore which is excited by a photon will drop to the ground state with a certain probability based on the decay rates through a number of different (radiative and/or nonradiative) decay pathways. To observe fluorescence, one of these pathways must be by spontaneous emission of a photon. In the ensemble description, the fluorescence emitted will decay with time according to I(t) = I_0 e^{-t/\tau}, where 1/\tau = \sum_i k_i. Here t is time, \tau is the fluorescence lifetime, I_0 is the initial fluorescence at t = 0, and the k_i are the rates for each decay pathway, at least one of which must be the fluorescence decay rate k_f. More importantly, the lifetime \tau is independent of the initial
intensity and of the emitted light. This can be utilized for making non-intensity based measurements in chemical sensing. == Measurement == Fluorescence-lifetime imaging yields images with the intensity of each pixel determined by \tau, which allows one to view contrast between materials with different fluorescence decay rates (even if those materials fluoresce at exactly the same wavelength), and also produces images which show changes in other decay pathways, such as in FRET imaging. === Pulsed illumination === Fluorescence lifetimes can be determined in the time domain by using a pulsed source. When a population of fluorophores is excited by an ultrashort or delta pulse of light, the time-resolved fluorescence will decay exponentially as described above. However, if the excitation pulse or detection response is wide, the measured fluorescence, d(t), will not be purely exponential: the instrumental response function, IRF(t), is convolved or blended with the decay function, F(t), so that d(t) = IRF(t) \otimes F(t). The instrumental response of the source, detector, and electronics can be measured, usually from scattered excitation light. Recovering the decay function (and corresponding lifetimes) poses additional challenges, as division in the frequency domain tends to produce high noise when the denominator is close to zero. ==== TCSPC ==== Time-correlated single-photon counting (TCSPC) is usually employed because it compensates for variations in source intensity and single photon pulse amplitudes. Using commercial TCSPC equipment a fluorescence decay curve can be recorded with a time resolution down to 405 fs. The recorded fluorescence decay histogram obeys Poisson statistics, which is considered in determining goodness of fit during fitting. More specifically, TCSPC records the times at which individual photons are detected by a fast single-photon detector (typically a photo-multiplier tube (PMT) or a
single photon avalanche photo diode (SPAD)) with respect to the excitation laser pulse. The recordings are repeated for multiple laser pulses and, after enough recorded events, one is able to build a histogram of the number of events across all of these recorded time points. This histogram can then be fit to an exponential function that contains the exponential lifetime decay function of interest, and the lifetime parameter can accordingly be extracted. Multi-channel PMT systems with 16 to 64 elements have been commercially available, whereas the recently demonstrated CMOS single-photon avalanche diode (SPAD)-TCSPC FLIM systems can offer an even higher number of detection channels and additional low-cost options. ==== Gating method ==== Pulse excitation is still used in this method. Before the pulse reaches the sample, some of the light is reflected by a dichroic mirror and is detected by a photodiode that activates a delay generator controlling a gated optical intensifier (GOI) that sits in front of the CCD detector. The GOI only allows detection for the fraction of time when it is open after the delay. Thus, with an adjustable delay generator, one is able to collect fluorescence emission after multiple delay times encompassing the time range of the fluorescence decay of the sample. In recent years integrated intensified CCD cameras have entered the market. These cameras consist of an image intensifier, a CCD sensor, and an integrated delay generator. ICCD cameras with gating times down to 200 ps and delay steps of 10 ps allow sub-nanosecond-resolution FLIM. In combination with an endoscope, this technique is used for intraoperative diagnosis of brain tumors. === Phase modulation === Fluorescence lifetimes can be determined in the frequency domain by a phase-modulation method. The method uses a light source that is pulsed or modulated at high frequency (up to 500 MHz), such
as an LED, diode laser, or a continuous-wave source combined with an electro-optic modulator or an acousto-optic modulator. The fluorescence is (a) demodulated and (b) phase shifted; both quantities are related to the characteristic decay times of the fluorophore. Also, y-components of the excitation and fluorescence sine waves will be modulated, and the lifetime can be determined from the modulation ratio of these y-components. Hence, two values for the lifetime can be determined from the phase-modulation method. The lifetimes are determined through fitting procedures of these experimental parameters. An advantage of PMT-based or camera-based frequency-domain FLIM is its fast lifetime image acquisition, making it suitable for applications such as live-cell research. == Analysis == The goal of the analysis algorithm is to extract the pure decay curve from the measured decay and to estimate the lifetime(s). The latter is usually accomplished by fitting single- or multi-exponential functions. A variety of methods have been developed to solve this problem. The most widely used technique is least-squares iterative re-convolution, which is based on the minimization of the weighted sum of the residuals. In this technique theoretical exponential decay curves are convoluted with the instrument response function, which is measured separately, and the best fit is found by iterative calculation of the residuals for different inputs until a minimum is found. For a set of observations d(t_i) of the fluorescence signal in time bin i, the lifetime estimation is carried out by minimization of: \chi^2 = \sum_i [ d(t_i) - d_0(t_i, a, \tau) ]^2. Besides experimental difficulties, including the wavelength-dependent instrument response function, mathematical treatment of the iterative
de-convolution problem is not straightforward, and it is a slow process which in the early days of FLIM made it impractical for pixel-by-pixel analysis. Non-fitting methods are attractive because they offer a very fast solution to lifetime estimation. One of the major and straightforward techniques in this category is the rapid lifetime determination (RLD) method. RLD calculates the lifetimes and their amplitudes directly by dividing the decay curve into two parts of equal width \delta t. The analysis is performed by integrating the decay curve in equal time intervals \delta t: D_0 = \sum_{i=1}^{K/2} I_i \, \delta t and D_1 = \sum_{i=K/2}^{K} I_i \, \delta t, where I_i is the recorded signal in the i-th channel and K is the number of channels. The lifetime can then be estimated using \tau = \delta t / \ln(D_0 / D_1). For multi-exponential decays this equation provides the average lifetime. The method can be extended to analyze bi-exponential decays. One major drawback is that it cannot take into account the instrument response effect, and for this reason the early part of the measured decay curves should be ignored in the analysis. This means that part of the signal is discarded and the accuracy for estimating short lifetimes goes down. One of the interesting features of the convolution theorem is that the transform of a convolution is the product of the transforms of its factors. There are a few techniques which work in transformed space that exploit this property to recover the pure decay curve from the measured curve. Laplace and
Fourier transformation along with Laguerre-Gauss expansion have been used to estimate the lifetime in transformed space. These approaches are faster than the deconvolution-based methods, but they suffer from truncation and sampling problems. Moreover, application of methods like the Laguerre-Gauss expansion is mathematically complicated. In Fourier methods the lifetime of a single-exponential decay curve is given by: \tau = \frac{1}{n\omega} \frac{A_n}{B_n}, where A_n = \frac{\sum_t d(t)\sin(n\omega t)}{\sum_t IRF(t)\sin(n\omega t)} = \frac{n\omega\tau}{1+n^2\omega^2\tau^2}, \quad B_n = \frac{\sum_t d(t)\cos(n\omega t)}{\sum_t IRF(t)\cos(n\omega t)} = \frac{1}{1+n^2\omega^2\tau^2}, \quad \omega = \frac{2\pi}{T}, n is the harmonic number, and T is the total time range of detection. == Applications == FLIM has primarily been used in biology as a method to detect photosensitizers in cells and tumors as well as FRET in instances where ratiometric imaging is difficult. The technique was developed in the late 1980s and early 1990s (gating method: Bugiel et al. 1989, König 1989; phase modulation: Lakowicz et al. 1992) before being more widely applied in the late 1990s. In cell culture, it has been used to study EGF receptor signaling and trafficking. Time-domain FLIM (tdFLIM) has also been used to show the interaction of both types of nuclear intermediate filament proteins
lamins A and B1 in distinct homopolymers at the nuclear envelope, which further interact with each other in higher order structures. FLIM imaging is particularly useful in neurons, where light scattering by brain tissue is problematic for ratiometric imaging. In neurons, FLIM imaging using pulsed illumination has been used to study Ras, CaMKII, Rac, and Ran family proteins. FLIM has been used in clinical multiphoton tomography to detect intradermal cancer cells as well as pharmaceutical and cosmetic compounds. More recently FLIM has also been used to detect flavanols in plant cells. === Autofluorescent coenzymes NAD(P)H and FAD === Multi-photon FLIM is increasingly used to detect auto-fluorescence from coenzymes as markers for changes in mammalian metabolism. === FRET imaging === Since the fluorescence lifetime of a fluorophore depends on both radiative (i.e. fluorescence) and non-radiative (i.e. quenching, FRET) processes, energy transfer from the donor molecule to the acceptor molecule will decrease the lifetime of the donor. Thus, FRET measurements using FLIM can provide a method to discriminate between the states/environments of the fluorophore. In contrast to intensity-based FRET measurements, the FLIM-based FRET measurements are also insensitive to the concentration of fluorophores and can thus filter out artifacts introduced by variations in the concentration and emission intensity across the sample. == See also == Phasor approach to fluorescence lifetime and spectral imaging == References == == External links == Fluorescence Excited-State Lifetime Imaging Lifetime and spectral analysis tools in ImageJ: http://spechron.com Archived 2013-03-11 at the Wayback Machine Fluorescence Lifetime Imaging Microscopy Principle of TCSPC FLIM (Becker&Hickl GmbH)
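As a numerical check of the rapid lifetime determination (RLD) estimate \tau = \delta t / \ln(D_0 / D_1) described in the analysis section above, the following sketch recovers the lifetime of a synthetic mono-exponential decay. The decay parameters are illustrative, and \delta t here denotes the width of each half-window, as in the RLD formula.

```python
import math

# Rapid lifetime determination: split a mono-exponential decay curve into two
# halves of equal width, integrate each, and estimate tau from their ratio.
def rld_lifetime(counts, dt):
    """counts: decay samples per channel; dt: channel width (same time unit as tau)."""
    k = len(counts)
    d0 = sum(counts[:k // 2]) * dt  # integral of the first half (D0)
    d1 = sum(counts[k // 2:]) * dt  # integral of the second half (D1)
    # For a half-window of width delta_t = (k/2)*dt, D0/D1 = exp(delta_t/tau),
    # so tau = delta_t / ln(D0/D1).
    half_window = (k // 2) * dt
    return half_window / math.log(d0 / d1)
```

For an ideal noiseless mono-exponential sampled this way the ratio D_0/D_1 is exactly exp(\delta t / \tau) (the two half-sums differ by a geometric factor), so the estimate recovers \tau up to floating-point error; with real data, the instrument response makes the early channels unreliable, as noted above.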
{ "page_id": 1642813, "source": null, "title": "Fluorescence-lifetime imaging microscopy" }
Androsterone sulfate, also known as 3α-hydroxy-5α-androstan-17-one 3α-sulfate, is an endogenous, naturally occurring steroid and one of the major urinary metabolites of androgens. It is a steroid sulfate which is formed from sulfation of androsterone by the steroid sulfotransferase SULT2A1 and can be desulfated back into androsterone by steroid sulfatase. == See also == Androsterone glucuronide Steroid sulfate C19H30O5S == References == == External links == Metabocard for Androsterone Sulfate (HMDB02759) - Human Metabolome Database
{ "page_id": 54595901, "source": null, "title": "Androsterone sulfate" }
The molecular formula C8H16O2 may refer to: Butyl butyrate Caprylic acid Cyclohexanedimethanol Ethyl hexanoate 2-Ethylhexanoic acid Hexyl acetate Manzanate Pentyl propanoate 2,2,4,4-Tetramethyl-1,3-cyclobutanediol Valproic acid
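All of the isomers above share the same molar mass, which follows from the formula by simple arithmetic. The helper below is an illustrative sketch using standard atomic weights (rounded to three decimal places).

```python
# Molar mass of C8H16O2 from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula_counts):
    """formula_counts: element symbol -> atom count, e.g. {"C": 8, "H": 16, "O": 2}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula_counts.items())

mass = molar_mass({"C": 8, "H": 16, "O": 2})  # about 144.21 g/mol
```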
{ "page_id": 23662911, "source": null, "title": "C8H16O2" }
Kate & Leopold is a 2001 American romantic comedy science fiction film that tells the story of a physicist, Stuart (Liev Schreiber), who accidentally pulls his great-great-grandfather, Leopold (Hugh Jackman), through a time portal from 19th-century New York to the present, where Leopold and Stuart's ex-girlfriend, Kate (Meg Ryan), fall in love. == Plot == On 28 April 1876, Leopold, the orphaned and impoverished 3rd Duke of Albany, lives in New York City with his uncle, Millard Mountbatten, under the care of the butler Otis. Conversant with the arts and sciences, he has designed and modelled a prototype Otis elevator, but Uncle Millard exhorts him to return to reality and marry a wealthy American heiress. While sketching the Brooklyn Bridge during a public meeting dedicated to the completion of its Manhattan tower, Leopold notices Stuart Besser taking photographs with an anachronistically small camera. Stuart is an amateur physicist (and a great-great-grandson of Leopold) from 21st-century New York, who has discovered the existence of gravitational time portals. At the evening ball, during which Leopold is to announce his bride-to-be, he sees Stuart photographing design sketches in the duke's study. Chasing Stuart, he tries to save him from falling off the unfinished bridge, only for both to fall into the time portal. Leopold regains consciousness on a Wednesday morning in the year 2001 in Stuart's apartment at 88 White Street, Manhattan. Stuart explains that the portal will reopen on the next Monday, until which time Leopold should stay in Stuart's apartment. While taking his dog out, Stuart is injured by falling into the empty elevator shaft and, after ranting about his scientific discovery in the hospital, is involuntarily committed to a mental institution. According to Stuart's concept, Leopold's time travel
{ "page_id": 594240, "source": null, "title": "Kate & Leopold" }
to the 21st century has caused a widespread "occlusion" of elevators, and may cause the disappearance of Stuart himself if Leopold does not go back to 1876 on Monday. Leopold is intrigued by the cynical and ambitious Kate McKay, Stuart's ex-girlfriend who lives downstairs. He says that she produces the impression of a "career woman" and, upon learning that she works in market research, ironically remarks, "Mm. A fine avocation for women, research. Perfect for the feminine mind." (Later on, Kate's boss tells her the same thing: "You skew male. You're like a man. A man who understands women—their desires, their needs. You understand them, but you're not really one of them.") Kate shrugs it off and demands that Leopold take Stuart's dog for a walk. Back at the apartment, he befriends Charlie—Kate's brother and an aspiring actor, who believes him to be an actor as well, steadfast to his character. On Thursday morning, Kate becomes impressed by Leopold's eloquent exposition of how important the tastiness of food is to the quality of human life. She takes him to an audition for a TV commercial pitching a fat-free butter, Farmer's Bounty, produced by the English company Jansen Foods, which is being taken over by Kate's company, Camden Research Group. After the successful audition, Kate and Leopold stop by a horse-drawn tourist carriage to hail a taxi, at which moment a thief snatches Kate's briefcase and flees into Central Park. Seeing Kate run after the purse-snatcher, Leopold borrows one of the horses and hurries to help her. Riding together with Kate, he drives the thief into an impasse and forces him to drop the briefcase. Bedazzled by the sight of Leopold riding on a white horse to her rescue, Kate begins to admit that his 19th century dukedom may be "for
real". On Friday, Leopold hires a violinist and invites Kate to a rooftop dinner, which ends with a waltz and their first kiss. On Saturday, they stroll about Lower Manhattan and come across Uncle Millard's home at 1 Hanover Square, where Leopold retrieves a metal box of his boyhood treasures, including his mother's ring, from a drawer hidden in the wall of his old room. In the evening, he tries to propose to Kate, but she falls asleep on his lap. On Sunday, Leopold acts in a Farmer's Bounty commercial but walks off the set upon finding the diet margarine disgusting. Leopold chastises Kate about integrity, to which she counters that he lacks any connection with reality. Realizing that their time together is nearly over, both spend the evening in subdued contemplation. On Monday morning, Stuart escapes from the asylum and sends Leopold back to his own time, which makes the elevators work again. Charlie notices Kate in a photograph taken at Leopold's ball on 28 April 1876 and shows the picture to Stuart, who realizes that Kate's future is in the past. That night, when Kate is about to accept her promotion at the Anglo-American merger meeting held, to her surprise, at 1 Hanover Square, Stuart and Charlie tell her that in order to be with Leopold she has to jump off the Brooklyn Bridge within the next 23 minutes. Kate rejects their suggestion as absurd and goes to give her acceptance speech, during which she sees herself, wearing the same evening dress, in one of Stuart's photographs. Her speech trails off into a musing and ends with an apology for having to leave, at which point the three of them rush to the bridge. Having made it through the portal, Kate appears in 1876 and runs to 1 Hanover Square, where the Anglo-American engagement
is to take place. Just as Leopold is about to announce his bride of convenience, Kate storms into the ballroom, and he instead announces her name, styled as "Kate McKay, of the McKays of Massapequa". Among the shocked guests, Kate and Leopold reunite with a kiss and dance a bridal waltz. Thus Kate turns out to be Stuart's great-great-grandmother. == Cast == == Alternative versions == References suggesting that Kate is Stuart's great-great-grandmother were cut from the film just a few days before the theatrical release; according to director James Mangold, this was due to "2 critics who were horrified by Liev Schreiber's distant relationship to Leo". Those two critics were Roger Ebert and Richard Roeper. The previous four-year relationship between Stuart and Kate would be classified as illegally incestuous in many jurisdictions, because she is his lineal ancestor (see Legality of incest#Table). The following scenes were excised: references suggesting that Kate has a genetic relationship to Stuart; a scene in which Ryan appears in the background of a 19th-century party; and a cameo by director James Mangold, in which he plays a director whose film is being changed to meet the demands of a test screening. The director's cut, running 123 minutes, was released on DVD (Region 4) on 29 January 2003 and on Blu-ray (Region A) on 10 April 2012. The theatrical cut, running 118 minutes, exists only on DVD, first released on 11 June 2002. == Music == The soundtrack to Kate & Leopold was released on December 25, 2001. == Reception == === Critical response === On Rotten Tomatoes, the film has an approval rating of 52% based on reviews from 133 critics and an average rating of 5.3/10. The site's consensus reads: "Though Hugh Jackman charms, Kate & Leopold is bland and predictable, and the time travel scenario lacks logic." On Metacritic,
the film has a weighted average score of 44 out of 100 based on 27 reviews, indicating "mixed or average" reviews. Audiences surveyed by CinemaScore gave the film a grade of B+ on a scale of A to F. Roger Ebert of the Chicago Sun-Times wrote: "Meg Ryan does this sort of thing about as well as it can possibly be done, and after Sleepless in Seattle and You've Got Mail, here is another ingenious plot that teases us with the possibility that true love will fail, while winking that, of course, it will prevail." Peter Travers of Rolling Stone called it "comfort food for bruised romantics". Lael Loewenstein of Variety wrote: "A time-travel romantic comedy whose best elements—Meg Ryan and Hugh Jackman—overcome distracting plot holes, loose threads and assorted contrivances to make for a mostly charming and diverting tale." === Accolades === Hugh Jackman was nominated in 2001 for the Golden Globe Award for Best Actor – Motion Picture Musical or Comedy. The film won the Golden Globe Award for Best Original Song for "Until...", written and performed by Sting. The same song was also nominated for the Academy Award for Best Original Song, and Sting performed it during the ceremony. == References == == External links == Kate & Leopold at IMDb
{ "page_id": 66916675, "source": null, "title": "QTY Code" }
The QTY Code is a design method to transform membrane proteins that are intrinsically insoluble in water into variants with water solubility, while retaining their structure and function. == Similar structures of amino acids == The QTY Code is based on two key molecular structural facts: 1) all 20 natural amino acids are found in alpha-helices regardless of their chemical properties, although some amino acids have a higher propensity to form an alpha-helix; and 2) several amino acids share striking structural similarities despite their very different chemical properties. These may be paired as: Glutamine (Q) vs Leucine (L); Threonine (T) vs Valine (V) and Isoleucine (I); and Tyrosine (Y) vs Phenylalanine (F). The QTY Code systematically replaces water-insoluble amino acids (L, V, I and F) with water-soluble amino acids (Q, T and Y) in transmembrane alpha-helices. Its application to membrane proteins thus converts their water-insoluble form into water-soluble variants. The QTY Code was specifically conceived to render G protein-coupled receptors (GPCRs) water-soluble. Despite substantial transmembrane domain changes, the QTY variants of GPCRs maintain stable structure and ligand-binding activities. === Hydrogen bond interactions between water and the amino acids === The side chain of glutamine (Q) can form 4 hydrogen bonds with 4 water molecules: 2 hydrogen donors from nitrogen and 2 hydrogen acceptors on oxygen. The –OH group of threonine (T) and tyrosine (Y) can form 3 hydrogen bonds with 3 water molecules (2 H-acceptors and 1 H-donor). Color code: Green = carbon, red = oxygen, blue = nitrogen, gray = hydrogen, yellow disks = hydrogen bonds. === Three types of alpha-helices with nearly identical molecular structure === There are 3 types of alpha-helices, all with nearly identical molecular structure: a) a 1.5 Å rise per amino acid, b) a 100° turn per amino acid, c) 3.6 amino acids and 360° per helical turn, and d) 5.4 Å per helical turn. The 3 types are: 1) mostly hydrophobic amino acids, including Leucine (L), Isoleucine (I), Valine (V), Phenylalanine (F), Methionine (M) and Alanine (A), commonly found as the helical transmembrane segments of membrane proteins; 2) mostly hydrophilic amino acids, including Aspartic acid (D), Glutamic acid (E), Glutamine (Q), Lysine (K), Arginine (R), Serine (S), Threonine (T) and Tyrosine (Y), commonly found on the outer layer of water-soluble globular proteins; 3) mixed hydrophobic and hydrophilic amino acids, partitioned into two faces (a hydrophobic face and a hydrophilic face), by analogy with the front and back of our fingers. These alpha-helices sometimes attach to the surface of the membrane lipid bilayer, or are partially buried in the hydrophobic core and partially close to the surface of water-soluble globular proteins. == The QTY code == The QTY Code is likely universally applicable and is also reversible, namely, Q changes back to L, T to V or I, and Y to F. The QTY Code has been used successfully to design many water-soluble variants of chemokine receptors and cytokine receptors, and may likely be applied successfully to other water-insoluble, aggregation-prone proteins. The QTY Code is robust and straightforward: it is the simplest tool for membrane protein design, requiring no sophisticated computer algorithms, so it can be used broadly. The QTY Code has implications for designing additional GPCRs and other membrane proteins, including cytokine receptors that are directly involved in cytokine storm syndrome. The QTY Code has also been applied to water-soluble cytokine receptor variants with the aim of combatting the cytokine storm syndrome (also called cytokine release syndrome) suffered by cancer patients receiving CAR-T therapy. This therapeutic application may be equally applicable to
severely infected COVID-19 patients, for whom cytokine storms often lead to death. == References == == Further reading == Hung, Chien-Lun; Kuo, Yun-Hsuan; Lee, Su Wei; Chiang, Yun-Wei (2021). "Protein Stability Depends Critically on the Surface Hydrogen-Bonding Network: A Case Study of Bid Protein". The Journal of Physical Chemistry B. 125 (30): 8373–8382. doi:10.1021/acs.jpcb.1c03245. PMID 34314184. S2CID 236472005. Zayni, Sonja; Damiati, Samar; Moreno-Flores, Susana; Amman, Fabian; Hofacker, Ivo; Jin, David; Ehmoser, Eva-Kathrin (2021). "Enhancing the Cell-Free Expression of Native Membrane Proteins by in Silico Optimization of the Coding Sequence—An Experimental Study of the Human Voltage-Dependent Anion Channel". Membranes. 11 (10): 741. doi:10.3390/membranes11100741. PMC 8540592. PMID 34677509. Root-Bernstein, Robert; Churchill, Beth (2021). "Co-Evolution of Opioid and Adrenergic Ligands and Receptors: Shared, Complementary Modules Explain Evolution of Functional Interactions and Suggest Novel Engineering Possibilities". Life. 11 (11): 1217. Bibcode:2021Life...11.1217R. doi:10.3390/life11111217. PMC 8623292. PMID 34833093. Vorobieva, Anastassia Andreevna (2021). "Principles and Methods in Computational Membrane Protein Design". Journal of Molecular Biology. 433 (20): 167154. doi:10.1016/j.jmb.2021.167154. PMID 34271008. S2CID 236001242. Martin, Joseph; Sawyer, Abigail (2019). "Elucidating the structure of membrane proteins". BioTechniques. 66 (4): 167–170. doi:10.2144/btn-2019-0030. PMID 30987442. S2CID 149754025. Tegler, Lotta; Corin, Karolina; Pick, Horst; Brookes, Jennifer; Skuhersky, Michael; Vogel, Horst; Zhang, Shuguang (2020). "The G protein coupled receptor CXCR4 designed by the QTY code becomes more hydrophilic and retains cell signaling activity". Scientific Reports. 10 (1) 21371. Bibcode:2020NatSR..1021371T. doi:10.1038/s41598-020-77659-x. PMC 7721705. PMID 33288780. Tao, Fei; Tang, Hongzhi; Zhang, Shuguang; Li, Mengke; Xu, Ping (2022). 
"Enabling QTY Server for Designing Water-Soluble α-Helical Transmembrane Proteins". mBio. 13 (1): e0360421. doi:10.1128/mbio.03604-21. PMC 8764525. PMID 35038913. Smorodina, Eva; Tao, Fei; Qing, Rui; Jin, David; Yang, Steve; Zhang, Shuguang (2022). "Comparing 2 crystal structures and 12 AlphaFold2-predicted human membrane glucose transporters and their water-soluble QTY variants". QRB Discovery. 3 (e5): e5. doi:10.1017/qrd.2022.6. hdl:10852/101308. PMC 10392618. PMID 37529287.
Smorodina, Eva; Diankin, Igor; Tao, Fei; Qing, Rui; Yang, Steve; Zhang, Shuguang (2022). "Structural informatic study of determined and AlphaFold2 predicted molecular structures of 13 human solute carrier transporters and their water-soluble QTY variants". Scientific Reports. 12 (1) 20103. Bibcode:2022NatSR..1220103S. doi:10.1038/s41598-022-23764-y. PMC 9684436. PMID 36418372. Zhang, Shuguang; Egli, Martin (2022). "Hiding in plain sight: three chemically distinct α-helix types". Quarterly Reviews of Biophysics. 55 (e7): e7. doi:10.1017/S0033583522000063. PMID 35722863. Qing, Rui; Hao, Shilei; Smorodina, Eva; Jin, David; Zalevsky, Arthur; Zhang, Shuguang (2022). "Protein Design: From the Aspect of Water Solubility and Stability". Chemical Reviews. 122 (18): 14085–14179. doi:10.1021/acs.chemrev.1c00757. hdl:10852/100513. PMC 9523718. PMID 35921495. Meng, Run; Hao, Shilei; Sun, Changfa; Hou, Zongkun; Hou, Yao; Wang, Lili; Deng, Peiying; Deng, Jia; Yang, Yaying; Xia, Haijian; Wang, Bochu; Qing, Rui; Zhang, Shuguang (2023). "Reverse-QTY code design of active human serum albumin self-assembled amphiphilic nanoparticles for effective anti-tumor drug doxorubicin release in mice". Proceedings of the National Academy of Sciences. 120 (21): e220173120. Bibcode:2023PNAS..12020173M. doi:10.1073/pnas.2220173120. PMC 10214157. PMID 37186820. Qing, Rui; Xue, Mantian; Zhao, Jiayuan; Wu, Lidong; Breitwieser, Andreas; Smorodina, Eva; Schubert, Thomas; Azzellino, Giovanni; Jin, David; Kong, Jing; Palacios, Tomás; Sleytr, Uwe B.; Zhang, Shuguang (2023). "Scalable biomimetic sensing system with membrane receptor dual-monolayer probe and graphene transistor arrays". Science Advances. 9 (29): eadf1402. Bibcode:2023SciA....9F1402Q. doi:10.1126/sciadv.adf1402. PMC 10361598. PMID 37478177. Sajeev-Sheeja, Akash; Smorodina, Eva; Zhang, Shuguang (2023). 
"Structural bioinformatics studies of bacterial outer membrane beta-barrel transporters and their AlphaFold2 predicted water-soluble QTY variants". PLOS ONE. 18 (8): e0290360. Bibcode:2023PLoSO..1890360S. doi:10.1371/journal.pone.0290360. PMC 10443868. PMID 37607179. Li, Mengke; Wang, Yanze; Tao, Fei; Xu, Ping; Zhang, Shuguang (2023). "QTY code designed antibodies for aggregation prevention: A structural bioinformatic and computational study". Proteins: Structure, Function, and Bioinformatics. 92 (2): 206–218. doi:10.1002/prot.26603. PMID 37795805. Li, Mengke; Qing, Rui; Tao, Fei; Xu, Ping; Zhang, Shuguang (2023). "Dynamic Dimerization of
Chemokine Receptors and Potential Inhibitory Role of Their Truncated Isoforms Revealed through Combinatorial Prediction". International Journal of Molecular Sciences. 24 (22): 16266. doi:10.3390/ijms242216266. PMC 10671024. PMID 38003455. Li, Mengke; Qing, Rui; Tao, Fei; Xu, Ping; Zhang, Shuguang (2024). "Inhibitory effect of truncated isoforms on GPCR dimerization predicted by combinatorial computational strategy". Computational and Structural Biotechnology Journal. 23: 278–286. doi:10.1016/j.csbj.2023.12.008. PMC 10762321. PMID 38173876. Pan, Emily; Tao, Fei; Smorodina, Eva; Zhang, Shuguang (2024). "Structural bioinformatics studies of six human ABC transporters and their AlphaFold2-predicted water-soluble QTY variants". QRB Discovery. 5: e1. doi:10.1017/qrd.2024.2. PMC 10988169. PMID 38577032. Li, Mengke; Tang, Hongzhi; Qing, Rui; Wang, Yanze; Liu, Jiongqin; Wang, Rui; Lyu, Shan; Ma, Lina; Xu, Ping; Zhang, Shuguang; Tao, Fei (2024). "Design of a water-soluble transmembrane receptor kinase with intact molecular function by QTY code". Nature Communications. 15 (1) 4293. Bibcode:2024NatCo..15.4293L. doi:10.1038/s41467-024-48513-9. PMC 11164701. PMID 38858360.
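The residue substitution rule described in the article above (L→Q, V→T, I→T, Y substituted for F) is simple enough to sketch in a few lines. This is a minimal illustration; the helper name and the example sequence are hypothetical and not taken from the cited works.

```python
# QTY Code substitution: replace hydrophobic residues in a transmembrane
# alpha-helix with their structurally similar water-soluble counterparts.
# L -> Q (glutamine), V -> T and I -> T (threonine), F -> Y (tyrosine).
QTY = {"L": "Q", "V": "T", "I": "T", "F": "Y"}

def qty_variant(helix: str) -> str:
    """Apply the QTY code to a one-letter amino acid sequence;
    residues outside the code (e.g. M, A, G) are left unchanged."""
    return "".join(QTY.get(res, res) for res in helix.upper())

# Hypothetical transmembrane segment:
print(qty_variant("ALIVFGLMVF"))  # -> AQTTYGQMTY
```

Because the code is reversible in principle (Q back to L, T back to V or I, Y back to F), the inverse mapping could be written the same way, though T is ambiguous between V and I.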
{ "page_id": 64295245, "source": null, "title": "Tensor sketch" }
In statistics, machine learning and algorithms, a tensor sketch is a type of dimensionality reduction that is particularly efficient when applied to vectors that have tensor structure. Such a sketch can be used to speed up explicit kernel methods, bilinear pooling in neural networks, and is a cornerstone in many numerical linear algebra algorithms. == Mathematical definition == Mathematically, a dimensionality reduction or sketching matrix is a matrix $M \in \mathbb{R}^{k \times d}$, where $k < d$, such that for any vector $x \in \mathbb{R}^d$, $|\|Mx\|_2 - \|x\|_2| < \varepsilon \|x\|_2$ with high probability. In other words, $M$ preserves the norm of vectors up to a small error. A tensor sketch has the extra property that if $x = y \otimes z$ for some vectors $y \in \mathbb{R}^{d_1}$, $z \in \mathbb{R}^{d_2}$ such that $d_1 d_2 = d$, then the transformation $M(y \otimes z)$ can be computed more efficiently. Here $\otimes$ denotes the Kronecker product, rather than the outer product, though the two are related by a flattening. The speedup is achieved by first rewriting $M(y \otimes z) = M'y \circ M''z$, where $\circ$ denotes the elementwise (Hadamard) product. Each of $M'y$ and $M''z$ can be computed in time $O(kd_1)$ and $O(kd_2)$, respectively; including the Hadamard product gives overall time $O(d_1 d_2 + kd_1 + kd_2)$. In most use cases this method is significantly faster than computing the full $M(y \otimes z)$, which requires $O(kd) = O(kd_1 d_2)$ time. For higher-order tensors, such as $x = y \otimes z \otimes t$, the savings are even more impressive. == History == The term tensor sketch was coined in 2013, describing a technique by Rasmus Pagh from the same year. Originally it was understood using the fast Fourier transform to do fast convolution of count sketches. Later research generalized it to a much larger class of dimensionality reductions via tensor random embeddings. Tensor random embeddings were introduced in 2010 in a paper on differential privacy and were first analyzed by Rudelson et al. in 2012 in the context of sparse recovery. Avron et al. were the first to study the subspace embedding properties of tensor sketches, particularly focused on applications to polynomial kernels. In this context, the sketch is required not only to preserve the norm of each individual vector with a certain probability but to preserve the norm of all vectors in each individual linear subspace. This is a much stronger property, and it requires larger sketch sizes, but it allows kernel methods to be used very broadly, as explored in the book by David Woodruff. == Tensor random projections == The face-splitting product is defined as the tensor products of the rows (it was proposed by V. Slyusar in 1996 for radar and digital antenna array applications). More directly, let $\mathbf{C} \in \mathbb{R}^{3 \times 3}$ and $\mathbf{D} \in \mathbb{R}^{3 \times 3}$ be two matrices. Then the face-splitting product is
$$\mathbf{C} \bullet \mathbf{D} = \begin{bmatrix} \mathbf{C}_1 \otimes \mathbf{D}_1 \\ \mathbf{C}_2 \otimes \mathbf{D}_2 \\ \mathbf{C}_3 \otimes \mathbf{D}_3 \end{bmatrix},$$
that is, row $i$ of $\mathbf{C} \bullet \mathbf{D}$ consists of the $9$ entries $\mathbf{C}_{i,1}\mathbf{D}_{i,1},\ \mathbf{C}_{i,1}\mathbf{D}_{i,2},\ \mathbf{C}_{i,1}\mathbf{D}_{i,3},\ \mathbf{C}_{i,2}\mathbf{D}_{i,1},\ \ldots,\ \mathbf{C}_{i,3}\mathbf{D}_{i,3}$. The reason this product is useful is the following identity:
$$(\mathbf{C} \bullet \mathbf{D})(x \otimes y) = \mathbf{C}x \circ \mathbf{D}y = \begin{bmatrix} (\mathbf{C}x)_1 (\mathbf{D}y)_1 \\ (\mathbf{C}x)_2 (\mathbf{D}y)_2 \\ \vdots \end{bmatrix},$$
where $\circ$ is the element-wise (Hadamard) product. Since this operation can be computed in linear time, $\mathbf{C} \bullet \mathbf{D}$ can be multiplied on vectors with tensor structure much faster than normal matrices.
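The identity above is easy to verify numerically. The following is a minimal sketch using NumPy; the dimensions and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
k, d1, d2 = 5, 3, 4
C = rng.standard_normal((k, d1))
D = rng.standard_normal((k, d2))
x = rng.standard_normal(d1)
y = rng.standard_normal(d2)

# Face-splitting product: row i is C[i] (Kronecker) D[i], giving a k x (d1*d2) matrix.
face_split = np.vstack([np.kron(C[i], D[i]) for i in range(k)])

# Direct evaluation: form x (Kronecker) y explicitly, O(k*d1*d2) after building the matrix.
slow = face_split @ np.kron(x, y)

# Fast evaluation: two small matrix-vector products and a Hadamard product,
# O(k*d1 + k*d2) in total.
fast = (C @ x) * (D @ y)

assert np.allclose(slow, fast)
```

The assertion confirms that multiplying the face-splitting product against a Kronecker-structured vector never requires materializing the $k \times d_1 d_2$ matrix.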
=== Construction with fast Fourier transform === The tensor sketch of Pham and Pagh computes $C^{(1)}x \ast C^{(2)}y$, where $C^{(1)}$ and $C^{(2)}$ are independent count sketch matrices and $\ast$ is vector convolution. They show that, remarkably, this equals $C(x \otimes y)$ – a count sketch of the tensor product! It turns out that this relation can be seen in terms of the face-splitting product as $C^{(1)}x \ast C^{(2)}y = \mathcal{F}^{-1}(\mathcal{F}C^{(1)}x \circ \mathcal{F}C^{(2)}y)$, where $\mathcal{F}$ is the Fourier transform matrix. Since $\mathcal{F}$ is an orthonormal matrix, $\mathcal{F}^{-1}$ doesn't impact the norm of $Cx$ and may be ignored. What's left is that $C \sim C^{(1)} \bullet C^{(2)}$. On the other hand, $\mathcal{F}(C^{(1)}x \ast C^{(2)}y) = \mathcal{F}C^{(1)}x \circ \mathcal{F}C^{(2)}y = (\mathcal{F}C^{(1)} \bullet \mathcal{F}C^{(2)})(x \otimes y)$. === Application to general matrices === The problem with the original tensor sketch algorithm was that it used count sketch matrices, which aren't always very good dimensionality reductions. In 2020 it was shown that any matrices with sufficiently random independent rows suffice to create a tensor sketch. This allows using matrices with stronger guarantees, such as real Gaussian Johnson–Lindenstrauss matrices. In particular, we get the following theorem: Consider a matrix $T$ with i.i.d. rows $T_1, \dots, T_m \in \mathbb{R}^d$, such that $E[(T_1 x)^2] = \|x\|_2^2$ and $E[(T_1 x)^p]^{1/p} \le \sqrt{ap}\,\|x\|_2$. Let $T^{(1)}, \dots, T^{(c)}$ be independent copies of $T$ and let $M = T^{(1)} \bullet \dots \bullet T^{(c)}$. Then $|\|Mx\|_2 - \|x\|_2| < \varepsilon \|x\|_2$ with probability $1 - \delta$ for any vector $x$ if $m = (4a)^{2c} \varepsilon^{-2} \log 1/\delta + (2ae)\,\varepsilon^{-1} (\log 1/\delta)^c$. In particular, if the entries of $T$ are $\pm 1$ we get $m = O(\varepsilon^{-2} \log 1/\delta + \varepsilon^{-1} (\tfrac{1}{c} \log 1/\delta)^c)$, which matches the normal Johnson–Lindenstrauss bound of $m = O(\varepsilon^{-2} \log 1/\delta)$ when $\varepsilon$ is small. The paper also shows that the dependency on $\varepsilon^{-1} (\tfrac{1}{c} \log 1/\delta)^c$ is necessary for constructions using tensor randomized projections with Gaussian entries. == Variations == === Recursive construction === Because of the exponential dependency on $c$ in tensor
sketches based on the face-splitting product, a different approach was developed in 2020 which applies $M(x \otimes y \otimes \cdots) = M^{(1)}(x \otimes (M^{(2)}y \otimes \cdots))$. We can achieve such an $M$ by letting $M = M^{(c)}(M^{(c-1)} \otimes I_d)(M^{(c-2)} \otimes I_{d^2}) \cdots (M^{(1)} \otimes I_{d^{c-1}})$. With this method, we only apply the general tensor sketch method to order-2 tensors, which avoids the exponential dependency on the number of rows. It can be proved that combining $c$ dimensionality reductions like this only increases $\varepsilon$ by a factor $\sqrt{c}$. === Fast constructions === Given a general matrix $M \in \mathbb{R}^{k \times d}$, computing the matrix-vector product $Mx$ takes $kd$ time. The fast Johnson–Lindenstrauss transform (FJLT), introduced by Ailon and Chazelle in 2006, is a dimensionality reduction matrix that reduces this cost. A version of this method takes $M = \operatorname{SHD}$, where $D$ is a diagonal matrix in which each diagonal entry $D_{i,i}$ is $\pm 1$ independently; the matrix-vector multiplication $Dx$ can be computed in $O(d)$ time. $H$ is a Hadamard matrix, which allows matrix-vector multiplication in time $O(d \log d)$. $S$
is a k × d {\displaystyle k\times d} sampling matrix which is all zeros, except a single 1 in each row. If the diagonal matrix is replaced by one which has a tensor product of ± 1 {\displaystyle \pm 1} values on the diagonal, instead of being fully independent, it is possible to compute SHD ⁡ ( x ⊗ y ) {\displaystyle \operatorname {SHD} (x\otimes y)} fast. For an example of this, let ρ , σ ∈ { − 1 , 1 } 2 {\displaystyle \rho ,\sigma \in \{-1,1\}^{2}} be two independent ± 1 {\displaystyle \pm 1} vectors and let D {\displaystyle D} be a diagonal matrix with ρ ⊗ σ {\displaystyle \rho \otimes \sigma } on the diagonal. We can then split up SHD ⁡ ( x ⊗ y ) {\displaystyle \operatorname {SHD} (x\otimes y)} as follows: SHD ⁡ ( x ⊗ y ) = [ 1 0 0 0 0 0 1 0 0 1 0 0 ] [ 1 1 1 1 1 − 1 1 − 1 1 1 − 1 − 1 1 − 1 − 1 1 ] [ σ 1 ρ 1 0 0 0 0 σ 1 ρ 2 0 0 0 0 σ 2 ρ 1 0 0 0 0 σ 2 ρ 2 ] [ x 1 y 1 x 2 y 1 x 1 y 2 x 2 y 2 ] = ( [ 1 0 0 1 1 0 ] ∙ [ 1 0 1 0 0 1 ] ) ( [ 1 1 1 − 1 ] ⊗ [ 1 1 1 − 1 ] ) ( [ σ 1 0 0 σ 2 ] ⊗ [ ρ 1 0 0 ρ 2 ] ) ( [ x 1 x 2 ] ⊗ [ y 1 y
{ "page_id": 64295245, "source": null, "title": "Tensor sketch" }
2 ] ) = ( [ 1 0 0 1 1 0 ] ∙ [ 1 0 1 0 0 1 ] ) ( [ 1 1 1 − 1 ] [ σ 1 0 0 σ 2 ] [ x 1 x 2 ] ⊗ [ 1 1 1 − 1 ] [ ρ 1 0 0 ρ 2 ] [ y 1 y 2 ] ) = [ 1 0 0 1 1 0 ] [ 1 1 1 − 1 ] [ σ 1 0 0 σ 2 ] [ x 1 x 2 ] ∘ [ 1 0 1 0 0 1 ] [ 1 1 1 − 1 ] [ ρ 1 0 0 ρ 2 ] [ y 1 y 2 ] . {\displaystyle {\begin{aligned}&\operatorname {SHD} (x\otimes y)\\&\quad ={\begin{bmatrix}1&0&0&0\\0&0&1&0\\0&1&0&0\end{bmatrix}}{\begin{bmatrix}1&1&1&1\\1&-1&1&-1\\1&1&-1&-1\\1&-1&-1&1\end{bmatrix}}{\begin{bmatrix}\sigma _{1}\rho _{1}&0&0&0\\0&\sigma _{1}\rho _{2}&0&0\\0&0&\sigma _{2}\rho _{1}&0\\0&0&0&\sigma _{2}\rho _{2}\\\end{bmatrix}}{\begin{bmatrix}x_{1}y_{1}\\x_{2}y_{1}\\x_{1}y_{2}\\x_{2}y_{2}\end{bmatrix}}\\[5pt]&\quad =\left({\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}\bullet {\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}\right)\left({\begin{bmatrix}1&1\\1&-1\end{bmatrix}}\otimes {\begin{bmatrix}1&1\\1&-1\end{bmatrix}}\right)\left({\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}\otimes {\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}\right)\left({\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\otimes {\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}\right)\\[5pt]&\quad =\left({\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}\bullet {\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}\right)\left({\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\,\otimes \,{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}{\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}\right)\\[5pt]&\quad ={\begin{bmatrix}1&0\\0&1\\1&0\end{bmatrix}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\sigma _{1}&0\\0&\sigma _{2}\\\end{bmatrix}}{\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}\,\circ \,{\begin{bmatrix}1&0\\1&0\\0&1\end{bmatrix}}{\begin{bmatrix}1&1\\1&-1\end{bmatrix}}{\begin{bmatrix}\rho _{1}&0\\0&\rho _{2}\\\end{bmatrix}}{\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}}.\end{aligned}}} In other words, SHD = S ( 1 ) 
H D ( 1 ) ∙ S ( 2 ) H D ( 2 ) {\displaystyle \operatorname {SHD} =S^{(1)}HD^{(1)}\bullet S^{(2)}HD^{(2)}} , splits up into two Fast Johnson–Lindenstrauss transformations, and the total reduction takes time O ( d 1 log ⁡ d 1 + d 2 log ⁡ d 2 ) {\displaystyle O(d_{1}\log d_{1}+d_{2}\log d_{2})} rather than d 1 d 2 log ⁡ ( d 1 d 2 ) {\displaystyle d_{1}d_{2}\log(d_{1}d_{2})} as with the direct approach. The same approach can be extended to compute higher degree products, such as SHD ⁡ ( x ⊗ y ⊗ z ) {\displaystyle \operatorname {SHD} (x\otimes y\otimes z)} Ahle et al. shows that if SHD {\displaystyle \operatorname {SHD} } has ε − 2 ( log ⁡ 1 / δ ) c
{ "page_id": 64295245, "source": null, "title": "Tensor sketch" }
+ 1 {\displaystyle \varepsilon ^{-2}(\log 1/\delta )^{c+1}} rows, then | ‖ SHD ⁡ x ‖ 2 − ‖ x ‖ | ≤ ε ‖ x ‖ 2 {\displaystyle |\|\operatorname {SHD} x\|_{2}-\|x\||\leq \varepsilon \|x\|_{2}} for any vector x ∈ R d c {\displaystyle x\in \mathbb {R} ^{d^{c}}} with probability 1 − δ {\displaystyle 1-\delta } , while allowing fast multiplication with degree c {\displaystyle c} tensors. Jin et al., the same year, showed a similar result for the more general class of matrices call RIP, which includes the subsampled Hadamard matrices. They showed that these matrices allow splitting into tensors provided the number of rows is ε − 2 ( log ⁡ 1 / δ ) 2 c − 1 log ⁡ d {\displaystyle \varepsilon ^{-2}(\log 1/\delta )^{2c-1}\log d} . In the case c = 2 {\displaystyle c=2} this matches the previous result. These fast constructions can again be combined with the recursion approach mentioned above, giving the fastest overall tensor sketch. == Data aware sketching == It is also possible to do so-called "data aware" tensor sketching. Instead of multiplying a random matrix on the data, the data points are sampled independently with a certain probability depending on the norm of the point. == Applications == === Explicit polynomial kernels === Kernel methods are popular in machine learning as they give the algorithm designed the freedom to design a "feature space" in which to measure the similarity of their data points. A simple kernel-based binary classifier is based on the following computation: y ^ ( x ′ ) = sgn ⁡ ∑ i = 1 n y i k ( x i , x ′ ) , {\displaystyle {\hat {y}}(\mathbf {x'} )=\operatorname {sgn} \sum _{i=1}^{n}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),} where x i ∈ R d {\displaystyle \mathbf {x} _{i}\in \mathbb
{R} ^{d}} are the data points, y i {\displaystyle y_{i}} is the label of the i {\displaystyle i} th point (either −1 or +1), and y ^ ( x ′ ) {\displaystyle {\hat {y}}(\mathbf {x'} )} is the prediction of the class of x ′ {\displaystyle \mathbf {x'} } . The function k : R d × R d → R {\displaystyle k:\mathbb {R} ^{d}\times \mathbb {R} ^{d}\to \mathbb {R} } is the kernel. Typical examples are the radial basis function kernel, k ( x , x ′ ) = exp ⁡ ( − ‖ x − x ′ ‖ 2 2 ) {\displaystyle k(x,x')=\exp(-\|x-x'\|_{2}^{2})} , and polynomial kernels such as k ( x , x ′ ) = ( 1 + ⟨ x , x ′ ⟩ ) 2 {\displaystyle k(x,x')=(1+\langle x,x'\rangle )^{2}} . When used this way, the kernel method is called "implicit". Sometimes it is faster to do an "explicit" kernel method, in which a pair of functions f , g : R d → R D {\displaystyle f,g:\mathbb {R} ^{d}\to \mathbb {R} ^{D}} are found, such that k ( x , x ′ ) = ⟨ f ( x ) , g ( x ′ ) ⟩ {\displaystyle k(x,x')=\langle f(x),g(x')\rangle } . This allows the above computation to be expressed as y ^ ( x ′ ) = sgn ⁡ ∑ i = 1 n y i ⟨ f ( x i ) , g ( x ′ ) ⟩ = sgn ⁡ ⟨ ( ∑ i = 1 n y i f ( x i ) ) , g ( x ′ ) ⟩ , {\displaystyle {\hat {y}}(\mathbf {x'} )=\operatorname {sgn} \sum _{i=1}^{n}y_{i}\langle f(\mathbf {x} _{i}),g(\mathbf {x'} )\rangle =\operatorname {sgn} \left\langle \left(\sum _{i=1}^{n}y_{i}f(\mathbf {x} _{i})\right),g(\mathbf {x'} )\right\rangle ,} where the value ∑ i = 1
n y i f ( x i ) {\displaystyle \sum _{i=1}^{n}y_{i}f(\mathbf {x} _{i})} can be computed in advance. The problem with this method is that the feature space can be very large. That is D >> d {\displaystyle D>>d} . For example, for the polynomial kernel k ( x , x ′ ) = ⟨ x , x ′ ⟩ 3 {\displaystyle k(x,x')=\langle x,x'\rangle ^{3}} we get f ( x ) = x ⊗ x ⊗ x {\displaystyle f(x)=x\otimes x\otimes x} and g ( x ′ ) = x ′ ⊗ x ′ ⊗ x ′ {\displaystyle g(x')=x'\otimes x'\otimes x'} , where ⊗ {\displaystyle \otimes } is the tensor product and f ( x ) , g ( x ′ ) ∈ R D {\displaystyle f(x),g(x')\in \mathbb {R} ^{D}} where D = d 3 {\displaystyle D=d^{3}} . If d {\displaystyle d} is already large, D {\displaystyle D} can be much larger than the number of data points ( n {\displaystyle n} ) and so the explicit method is inefficient. The idea of tensor sketch is that we can compute approximate functions f ′ , g ′ : R d → R t {\displaystyle f',g':\mathbb {R} ^{d}\to \mathbb {R} ^{t}} where t {\displaystyle t} can even be smaller than d {\displaystyle d} , and which still have the property that ⟨ f ′ ( x ) , g ′ ( x ′ ) ⟩ ≈ k ( x , x ′ ) {\displaystyle \langle f'(x),g'(x')\rangle \approx k(x,x')} . This method was shown in 2020 to work even for high degree polynomials and radial basis function kernels. === Compressed matrix multiplication === Assume we have two large datasets, represented as matrices X , Y ∈ R n × d {\displaystyle X,Y\in \mathbb {R} ^{n\times d}} , and we want to find the
rows i , j {\displaystyle i,j} with the largest inner products ⟨ X i , Y j ⟩ {\displaystyle \langle X_{i},Y_{j}\rangle } . We could compute Z = X Y T ∈ R n × n {\displaystyle Z=XY^{T}\in \mathbb {R} ^{n\times n}} and simply look at all n 2 {\displaystyle n^{2}} possibilities. However, this would take at least n 2 {\displaystyle n^{2}} time, and probably closer to n 2 d {\displaystyle n^{2}d} using standard matrix multiplication techniques. The idea of Compressed Matrix Multiplication is the general identity X Y T = ∑ i = 1 d X i ⊗ Y i {\displaystyle XY^{T}=\sum _{i=1}^{d}X_{i}\otimes Y_{i}} where ⊗ {\displaystyle \otimes } is the tensor product. Since we can compute a (linear) approximation to X i ⊗ Y i {\displaystyle X_{i}\otimes Y_{i}} efficiently, we can sum those up to get an approximation for the complete product. === Compact multilinear pooling === Bilinear pooling is the technique of taking two input vectors, x , y {\displaystyle x,y} from different sources, and using the tensor product x ⊗ y {\displaystyle x\otimes y} as the input layer to a neural network. One paper considered using tensor sketch to reduce the number of variables needed. In 2017 another paper took the FFT of the input features before combining them using the element-wise product. This again corresponds to the original tensor sketch. == References == == Further reading == Ahle, Thomas; Knudsen, Jakob (2019-09-03). "Almost Optimal Tensor Sketch". ResearchGate. Retrieved 2020-07-11. Slyusar, V. I. (1998). "End products in matrices in radar applications" (PDF). Radioelectronics and Communications Systems. 41 (3): 50–53. Slyusar, V. I. (1997-05-20). "Analytical model of the digital antenna array on a basis of face-splitting matrix products" (PDF). Proc. ICATT-97, Kyiv: 108–109. Slyusar, V. I. (1997-09-15). "New operations of matrices product for applications of
radars" (PDF). Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv.: 73–74. Slyusar, V. I. (March 13, 1998). "A Family of Face Products of Matrices and its Properties" (PDF). Cybernetics and Systems Analysis C/C of Kibernetika I Sistemnyi Analiz.- 1999. 35 (3): 379–384. doi:10.1007/BF02733426. S2CID 119661450.
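The explicit polynomial-kernel construction described under Applications can be illustrated with the count-sketch-plus-FFT form of tensor sketch for the degree-2 kernel k(x, x′) = ⟨x, x′⟩². The following is a minimal Python/NumPy sketch; the input dimension, sketch size t, and random seeding are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
d, t = 50, 16384  # input dimension and sketch size (illustrative choices)

# Two independent count-sketch hash/sign pairs
h = [rng.integers(0, t, size=d) for _ in range(2)]
s = [rng.choice([-1.0, 1.0], size=d) for _ in range(2)]

def count_sketch(x, hashes, signs):
    """Count sketch Cx: each coordinate x_i gets a random sign
    and is accumulated into a random bucket."""
    out = np.zeros(t)
    np.add.at(out, hashes, signs * x)
    return out

def tensor_sketch_deg2(x):
    """Sketch of x ⊗ x: multiplying the Fourier transforms of the two
    count sketches element-wise and transforming back computes their
    circular convolution, which is a count sketch of the tensor product."""
    f1 = np.fft.rfft(count_sketch(x, h[0], s[0]))
    f2 = np.fft.rfft(count_sketch(x, h[1], s[1]))
    return np.fft.irfft(f1 * f2, n=t)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
exact = np.dot(x, y) ** 2                                   # degree-2 polynomial kernel
approx = np.dot(tensor_sketch_deg2(x), tensor_sketch_deg2(y))
# approx is an unbiased estimate of exact, with error shrinking as t grows
```

The inner product of the two length-t sketches approximates ⟨x, y⟩² without ever forming the d²-dimensional vectors x ⊗ x and y ⊗ y, mirroring the approximate feature maps f′, g′ described above.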
Cossaviricota is a phylum of viruses. The phylum is named after Yvonne Cossart who discovered Parvovirus B19, the causative pathogen of fifth disease. == Classes == The following classes are recognized: Mouviricetes Papovaviricetes Quintoviricetes == References ==
{ "page_id": 63770960, "source": null, "title": "Cossaviricota" }
Methylation specific oligonucleotide microarray, also known as MSO microarray, was developed as a technique to map epigenetic methylation changes in the DNA of cancer cells. The general process starts with modification of DNA with bisulfite, specifically to convert unmethylated cytosine in CpG sites to uracil, while leaving methylated cytosines untouched. The modified DNA region of interest is amplified via PCR, and during this process uracils are converted to thymine. The amplicons are labelled with a fluorescent dye and hybridized to oligonucleotide probes that are fixed to a glass slide. The probes differentially bind to cytosine and thymine residues, which ultimately allows discrimination between methylated and unmethylated CpG sites, respectively. A calibration curve is produced and compared with the microarray results of the amplified DNA samples. This allows a general quantification of the proportion of methylation present in the region of interest. This microarray technique was developed by Tim Hui-Ming Huang and his laboratory and was officially published in 2002. == Implications for cancer research == Cancer cells often develop atypical methylation patterns at CpG sites in promoters of tumour suppressor genes. High levels of methylation at a promoter lead to downregulation of the corresponding gene and are characteristic of carcinogenesis. It is one of the most consistent changes observed in early stage tumour cells. Methylation specific oligonucleotide microarray allows for the high resolution and high throughput detection of numerous methylation events on multiple gene promoters. Therefore, this technique can be used to detect aberrant methylation in tumour suppressor promoters at an early stage and has been used in gastric and colon cancers and multiple others. 
Because it allows one to detect the presence of atypical methylation in cancer cells, it can also be used to reveal the major cause behind the malignancy, whether its main contributor is mutations on chromosomes or epigenetic
{ "page_id": 4264274, "source": null, "title": "Methylation specific oligonucleotide microarray" }
modifications, as well as which tumour suppressor genes' transcription levels are affected. An interesting use of this microarray includes specific classification of cancers based on the methylation patterns alone, such as differentiating between classes of leukemia, suggesting that different classes of cancer show relatively unique methylation patterns. This technique has also been proposed to monitor cancer treatments that involve modifying the methylation patterns in mutant cancer cells. == References == == External links == Resources, information and specific protocols for DNA Methylation Analysis Software for DNA Methylation Analysis
A grain is a small, hard, dry fruit (caryopsis) – with or without an attached hull layer – harvested for human or animal consumption. A grain crop is a grain-producing plant. The two main types of commercial grain crops are cereals and legumes. After being harvested, dry grains are more durable than other staple foods, such as starchy fruits (plantains, breadfruit, etc.) and tubers (sweet potatoes, cassava, and more). This durability has made grains well suited to industrial agriculture, since they can be mechanically harvested, transported by rail or ship, stored for long periods in silos, and milled for flour or pressed for oil. Thus, the grain market is a major global commodity market that includes crops such as maize, rice, soybeans, wheat and other grains. == Cereal and non-cereal grains == In the grass family, a grain (narrowly defined) is a caryopsis, a fruit with its wall fused on to the single seed inside, belonging to a cereal such as wheat, maize, or rice. More broadly, in agronomy and commerce, seeds or fruits from other plant families are called grains if they resemble cereal caryopses. For example, amaranth is sold as "grain amaranth", and amaranth products may be described as "whole grains". The pre-Hispanic civilizations of the Andes had grain-based food systems, but at higher elevations none of the grains belonged to the cereal family. All three grains native to the Andes (kaniwa, kiwicha, and quinoa) are broad-leaved plants rather than grasses. == Cereal grains == Many different species of cereal are cultivated for their grains. === Warm-season cereals === fonio maize (corn) millets (of multiple species) rice sorghum teff === Cool-season cereals === barley oats rye spelt triticale wheat wild rice == Pseudocereal grains == Starchy grains from broadleaf (dicot) plant families are cultivated as nutritious alternatives to cereals.
{ "page_id": 27988307, "source": null, "title": "Grain" }
The three major pseudocereal grains are: amaranth (Amaranth family) also called kiwicha buckwheat (Smartweed family) quinoa (Amaranth family, formerly classified as Goosefoot family) == Pulses or grain legumes == Pulses or grain legumes, members of the pea family, have a higher protein content than most other plant foods, at around 20%, while soybeans have as much as 35%. As is the case with all other whole plant foods, pulses also contain carbohydrates and fat. Common pulses include: chickpeas common beans common peas (garden peas) fava beans lentils lima beans lupins mung beans peanuts pigeon peas runner beans soybeans == Oilseed grains == Oilseed grains are grown primarily for the extraction of their edible oil. Vegetable oils provide dietary energy and some essential fatty acids. They are also used as fuel and lubricants. === Mustard family === black mustard India mustard rapeseed (including canola) === Aster family === safflower sunflower seed === Other families === flax seed (Flax family) hemp seed (Hemp family) poppy seed (Poppy family) == Historical importance == Because grains are small, hard and dry, they can be stored, measured, and transported more readily than can other kinds of food crops such as fresh fruits, roots and tubers. The development of grain agriculture allowed excess food to be produced and stored easily which could have led to the creation of the first temporary settlements and the division of society into classes. This assumption that grain agriculture led to early settlements and social stratification has been challenged by James Scott in his book Against the Grain. He argues that the transition from hunter-gatherer societies to settled agrarian communities was not a voluntary choice driven by the benefits of increased food production due to the long storage potential of grains, but rather that the shift towards settlements was a coerced
transformation imposed by dominant members of a society seeking to expand control over labor and resources. == Trade == == Occupational safety and health == Those who handle grain at grain facilities may encounter numerous occupational hazards and exposures. Risks include grain entrapment, where workers are submerged in the grain and unable to extricate themselves; explosions caused by fine particles of grain dust; and falls. == See also == == References == == External links ==
Viruses are only able to replicate themselves by commandeering the reproductive apparatus of cells and making them reproduce the virus's genetic structure and particles instead. How viruses do this depends mainly on the type of nucleic acid they contain, DNA or RNA, which is either one or the other but never both. Viruses cannot function or reproduce outside a cell, and are totally dependent on a host cell to survive. Most viruses are species-specific, and related viruses typically only infect a narrow range of plants, animals, bacteria, or fungi. == Life cycle process == === Viral entry === For the virus to reproduce and thereby establish infection, it must enter cells of the host organism and use those cells' materials. To enter the cells, proteins on the surface of the virus interact with proteins of the cell. Attachment, or adsorption, occurs between the viral particle and the host cell membrane. A hole forms in the cell membrane, then the virus particle or its genetic contents are released into the host cell, where replication of the viral genome may commence. === Viral replication === Next, a virus must take control of the host cell's replication mechanisms. It is at this stage that a distinction is made between the susceptibility and the permissibility of a host cell. Permissibility determines the outcome of the infection. After control is established and the environment is set for the virus to begin making copies of itself, replication proceeds quickly, producing copies by the millions. === Viral shedding === After a virus has made many copies of itself, the progeny may begin to leave the cell by several methods. This is called shedding and is the final stage in the viral life cycle. === Viral latency === Some viruses can "hide" within a cell, which may mean that they evade the
{ "page_id": 4395348, "source": null, "title": "Viral life cycle" }
host cell defenses or immune system and may increase the long-term "success" of the virus. This hiding is deemed latency. During this time, the virus does not produce any progeny; it remains inactive until external stimuli, such as light or stress, prompt it to activate. == See also == Kinetic class (virology) Viral phenomena, which get their name from the way in which their propagation is analogous to the propagation of viruses among hosts == References ==
The anal pore or cytoproct is a structure in various single-celled eukaryotes where waste is ejected after the nutrients from food have been absorbed into the cytoplasm. In ciliates, the anal pore (cytopyge) and cytostome are the only regions of the pellicle that are not covered by ridges, cilia or rigid covering. They serve as analogues of, respectively, the anus and mouth of multicellular organisms. The cytopyge's thin membrane allows vacuoles to be merged into the cell surface and emptied. == Location == The anal pore is an exterior opening of microscopic organisms through which undigested food waste, water, or gas is expelled from the body. The anal pore is located on the ventral surface, usually in the posterior half of the cell. The anal pore itself is actually a structure made up of two components: piles of fibres, and microtubules. This structure is found in various unicellular eukaryotes such as Paramecium. == Function == Digested nutrients from the vacuole pass into the cytoplasm, causing the vacuole to shrink and move to the anal pore, where it ruptures to release the waste content to the environment outside of the cell. The cytoproct is used for the excretion of indigestible debris contained in the food vacuoles. Most microorganisms possess an anal pore for excretion, usually in the form of an opening on the pellicle to eject indigestible debris. The opening and closing of the cytoproct resemble a reversible ring of tissue fusion occurring between the inner and outer layers located at the aboral end. An anal pore is not a permanently visible structure; it appears at the time of defecation and then disappears afterward. In Paramecium, the anal pore is a region of pellicle that is not covered by ridges and cilia, and the area has thin pellicles that allow the
{ "page_id": 32379221, "source": null, "title": "Anal pore" }
vacuoles to be merged into the cell surface to be emptied. In ciliates, the cytostome and cytopyge (anal pore) regions are not covered by ridges, cilia or hard coatings like the other parts of the organism. As a food vacuole approaches the cytoproct region it begins to flatten, and its thin membrane allows it to be merged into the cell surface. Once the vacuole attaches to the plasma membrane, it is emptied. The waste excreted by the cell can come as a membrane-bound packaged ball, or as a stream of debris behind the organism. Directly after secretion of the waste products, a deep invagination (a canyon-like structure where the vacuole was) is still present. About 10 to 30 seconds after secretion, the vacuole detaches, and a new thin plasma membrane is formed. After about a minute the organism's cytoproct is closed up again and the process is ready to be repeated. == In marine animals == Ctenophores are marine animals which superficially resemble jellyfish, but have biradial symmetry and use eight bands of transverse ciliated plates to swim. All ctenophores possess a pair of small anal pores located adjacent to the apical sensory organ; these pores are thought to help control osmotic pressure. The pores have sometimes been interpreted as homologous with the anus of bilaterian animals (worms, humans, snails, fish, etc.). Furthermore, ctenophores possess a third tissue layer between the endoderm and ectoderm, another characteristic reminiscent of the Bilateria. Ctenophores possess a functional through-gut from which digested waste products and material distributed via the endodermal canals are expelled to the exterior environment through terminal anal pores, which are specialized to control outflow from the branched endodermal canal system. Ctenophores have no true anus; the central canal opens toward
the aboral end by two small pores, through which a small amount of egestion can take place. == References == == Bibliography == “Introduction to Ctenophora.” Introduction to the Ctenophora, https://ucmp.berkeley.edu/cnidaria/ctenophora.html. Pang, Kevin; Martindale, Mark Q. (December 2008). "Ctenophores". Current Biology. 18 (24): R1119 – R1120. Bibcode:2008CBio...18R1119P. doi:10.1016/j.cub.2008.10.004. PMID 19108762. Presnell, Jason S.; Vandepas, Lauren E.; Warren, Kaitlyn J.; Swalla, Billie J.; Amemiya, Chris T.; Browne, William E. (October 2016). "The Presence of a Functionally Tripartite Through-Gut in Ctenophora Has Implications for Metazoan Character Trait Evolution". Current Biology. 26 (20): 2814–2820. Bibcode:2016CBio...26.2814P. doi:10.1016/j.cub.2016.08.019. PMID 27568594.
In chemical graph theory, the Padmakar–Ivan (PI) index is a topological index of a molecule, used in biochemistry. The Padmakar–Ivan index is a generalization, introduced by Padmakar V. Khadikar and Iván Gutman, of the concept of the Wiener index, introduced by Harry Wiener. The Padmakar–Ivan index of a graph G is the sum over all edges uv of G of the number of edges which are not equidistant from u and v. Let G be a graph and e = uv an edge of G. Here n e u ( e ∣ G ) {\displaystyle n_{eu}(e\mid G)} denotes the number of edges lying closer to the vertex u than to the vertex v, and n e v ( e ∣ G ) {\displaystyle n_{ev}(e\mid G)} is the number of edges lying closer to the vertex v than to the vertex u. The Padmakar–Ivan index of a graph G is defined as PI ⁡ ( G ) = ∑ e ∈ E ( G ) [ n e u ( e ∣ G ) + n e v ( e ∣ G ) ] {\displaystyle \operatorname {PI} (G)=\sum _{e\in E(G)}[n_{eu}(e\mid G)+n_{ev}(e\mid G)]} The PI index is important in the study of quantitative structure–activity relationships and in classification models used in the chemical and biological sciences, engineering, and nanotechnology. == Examples == The PI index of the dendrimer nanostar G n {\displaystyle G_{n}} can be calculated by PI ⁡ ( G n ) = 441 ⋅ 4 n − 639 ⋅ 2 n + 232 , n ≥ 0. {\displaystyle \operatorname {PI} (G_{n})=441\cdot 4^{n}-639\cdot 2^{n}+232,\quad n\geq 0.} == References ==
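A short script makes the definition concrete. This is a minimal Python sketch (the adjacency-dictionary encoding and BFS helper are illustrative choices, not from the source): the distance from an edge f = ab to a vertex x is taken as min(d(x, a), d(x, b)), and f contributes to the term for e = uv exactly when these distances to u and to v differ.

```python
from collections import deque

def bfs_distances(adj, src):
    """Breadth-first-search distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def pi_index(adj):
    """Padmakar-Ivan index of a simple connected graph, given as a
    dict mapping each vertex to the set of its neighbours."""
    dist = {v: bfs_distances(adj, v) for v in adj}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    total = 0
    for u, v in edges:
        for a, b in edges:
            # distance from edge {a, b} to a vertex x: min(d(x,a), d(x,b))
            d_u = min(dist[u][a], dist[u][b])
            d_v = min(dist[v][a], dist[v][b])
            if d_u != d_v:  # edge {a, b} is not equidistant from u and v
                total += 1
    return total

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(pi_index(cycle4))  # → 8
print(pi_index(path4))   # → 6
```

Each edge is equidistant from its own endpoints (both distances are 0), so it never contributes to its own term; in a tree, every other edge always contributes, so a tree with m edges has PI index m(m − 1).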
{ "page_id": 56103252, "source": null, "title": "Padmakar–Ivan index" }
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). The traditional mathematical formulation of Brownian motion is that of the Wiener process, which is often called Brownian motion, even in mathematical sources. This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). This motion is named after the Scottish botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by
{ "page_id": 4436, "source": null, "title": "Brownian motion" }
Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule. Consequently, only probabilistic models applied to molecular populations can be employed to describe it. Two such models from statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, purely probabilistic, class of models is the class of stochastic process models. There exist sequences of both simpler and more complicated stochastic processes which converge (in the limit) to Brownian motion (see random walk and Donsker's theorem). == History == The Roman philosopher-poet Lucretius' scientific poem On the Nature of Things (c. 60 BC) has a remarkable description of the motion of dust particles in verses 113–140 from Book II. He uses this as a proof of the existence of atoms: Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms which move of themselves [i.e., spontaneously]. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible. Although the mingling, tumbling motion of dust particles is caused largely by air currents, the glittering, jiggling motion
of small dust particles is caused chiefly by true Brownian dynamics; Lucretius "perfectly describes and explains the Brownian movement by a wrong example". While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained. The mathematics of much of stochastic analysis, including the mathematics of Brownian motion, was introduced by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented an analysis of the stock and option markets. However, this work was largely unknown until the 1950s. Albert Einstein (in one of his 1905 papers) provided an explanation of Brownian motion in terms of atoms and molecules at a time when their existence was still debated. Einstein proved the relation between the probability distribution of a Brownian particle and the diffusion equation. These equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908, leading to his Nobel prize. Norbert Wiener gave the first complete and rigorous mathematical analysis in 1923, leading to the underlying mathematical concept being called a Wiener process. The instantaneous velocity of the Brownian motion can be defined as v = Δx/Δt, when Δt ≪ τ, where τ is the momentum relaxation time. In 2010, the instantaneous velocity of a Brownian particle (a glass microsphere trapped in air with optical tweezers) was
measured successfully. The velocity data verified the Maxwell–Boltzmann velocity distribution, and the equipartition theorem for a Brownian particle. == Statistical mechanics theories == === Einstein's theory === There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, this volume is the same for all ideal gases, which is 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and the determination of this number is tantamount to the knowledge of the mass of an atom, since the latter is obtained by dividing the molar mass of the gas by the Avogadro constant. The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10¹⁴ collisions per second. He regarded the increment of particle positions in time τ {\displaystyle \tau } in a one-dimensional (x) space (with the coordinates chosen so that the origin lies at the initial position of the particle) as a random variable ( q {\displaystyle q} ) with some probability density function φ ( q ) {\displaystyle \varphi (q)} (i.e., φ ( q ) {\displaystyle \varphi (q)} is the probability density for a jump of
magnitude q {\displaystyle q} , i.e., the probability density of the particle incrementing its position from x {\displaystyle x} to x + q {\displaystyle x+q} in the time interval τ {\displaystyle \tau } ). Further, assuming conservation of particle number, he expanded the number density ρ ( x , t + τ ) {\displaystyle \rho (x,t+\tau )} (number of particles per unit volume around x {\displaystyle x} ) at time t + τ {\displaystyle t+\tau } in a Taylor series, ρ ( x , t + τ ) = ρ ( x , t ) + τ ∂ ρ ( x , t ) ∂ t + ⋯ = ∫ − ∞ ∞ ρ ( x − q , t ) φ ( q ) d q = E q [ ρ ( x − q , t ) ] = ρ ( x , t ) ∫ − ∞ ∞ φ ( q ) d q − ∂ ρ ∂ x ∫ − ∞ ∞ q φ ( q ) d q + ∂ 2 ρ ∂ x 2 ∫ − ∞ ∞ q 2 2 φ ( q ) d q + ⋯ = ρ ( x , t ) ⋅ 1 − 0 + ∂ 2 ρ ∂ x 2 ∫ − ∞ ∞ q 2 2 φ ( q ) d q + ⋯ {\displaystyle {\begin{aligned}\rho (x,t+\tau )={}&\rho (x,t)+\tau {\frac {\partial \rho (x,t)}{\partial t}}+\cdots \\[2ex]={}&\int _{-\infty }^{\infty }\rho (x-q,t)\,\varphi (q)\,dq=\mathbb {E} _{q}{\left[\rho (x-q,t)\right]}\\[1ex]={}&\rho (x,t)\,\int _{-\infty }^{\infty }\varphi (q)\,dq-{\frac {\partial \rho }{\partial x}}\,\int _{-\infty }^{\infty }q\,\varphi (q)\,dq+{\frac {\partial ^{2}\rho }{\partial x^{2}}}\,\int _{-\infty }^{\infty }{\frac {q^{2}}{2}}\varphi (q)\,dq+\cdots \\[1ex]={}&\rho (x,t)\cdot 1-0+{\cfrac {\partial ^{2}\rho }{\partial x^{2}}}\,\int _{-\infty }^{\infty }{\frac {q^{2}}{2}}\varphi (q)\,dq+\cdots \end{aligned}}} where the second equality is by definition of φ {\displaystyle \varphi } . The integral in the first
term is equal to one by the definition of probability, and the second and other even terms (i.e. first and other odd moments) vanish because of space symmetry. What is left gives rise to the following relation: ∂ ρ ∂ t = ∂ 2 ρ ∂ x 2 ⋅ ∫ − ∞ ∞ q 2 2 τ φ ( q ) d q + higher-order even moments. {\displaystyle {\frac {\partial \rho }{\partial t}}={\frac {\partial ^{2}\rho }{\partial x^{2}}}\cdot \int _{-\infty }^{\infty }{\frac {q^{2}}{2\tau }}\varphi (q)\,dq+{\text{higher-order even moments.}}} Where the coefficient after the Laplacian, the second moment of probability of displacement q {\displaystyle q} , is interpreted as mass diffusivity D: D = ∫ − ∞ ∞ q 2 2 τ φ ( q ) d q . {\displaystyle D=\int _{-\infty }^{\infty }{\frac {q^{2}}{2\tau }}\varphi (q)\,dq.} Then the density of Brownian particles ρ at point x at time t satisfies the diffusion equation: ∂ ρ ∂ t = D ⋅ ∂ 2 ρ ∂ x 2 , {\displaystyle {\frac {\partial \rho }{\partial t}}=D\cdot {\frac {\partial ^{2}\rho }{\partial x^{2}}},} Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution ρ ( x , t ) = N 4 π D t exp ⁡ ( − x 2 4 D t ) . {\displaystyle \rho (x,t)={\frac {N}{\sqrt {4\pi Dt}}}\exp {\left(-{\frac {x^{2}}{4Dt}}\right)}.} This expression (which is a normal distribution with the mean μ = 0 {\displaystyle \mu =0} and variance σ 2 = 2 D t {\displaystyle \sigma ^{2}=2Dt} usually called Brownian motion B t {\displaystyle B_{t}} ) allowed Einstein to calculate the moments directly. The first moment is seen to vanish, meaning that the Brownian particle is equally likely to move to the left as it is to move to the right. The
second moment is, however, non-vanishing, being given by

{\displaystyle \mathbb {E} \left[x^{2}\right]=2Dt.}

This equation expresses the mean squared displacement in terms of the time elapsed and the diffusivity. From this expression Einstein argued that the displacement of a Brownian particle is not proportional to the elapsed time, but rather to its square root. His argument is based on a conceptual switch from the "ensemble" of Brownian particles to the "single" Brownian particle: we can speak of the relative number of particles at a single instant just as well as of the time it takes a Brownian particle to reach a given point. The second part of Einstein's theory relates the diffusion constant to physically measurable quantities, such as the mean squared displacement of a particle in a given time interval. This result enables the experimental determination of the Avogadro number and therefore the size of molecules. Einstein analyzed a dynamic equilibrium being established between opposing forces. The beauty of his argument is that the final result does not depend upon which forces are involved in setting up the dynamic equilibrium. In his original treatment, Einstein considered an osmotic pressure experiment, but the same conclusion can be reached in other ways. Consider, for instance, particles suspended in a viscous fluid in a gravitational field. Gravity tends to make the particles settle, whereas diffusion acts to homogenize them, driving them into regions of smaller concentration. Under the action of gravity, a particle acquires a downward speed of v = μmg, where m is the mass of the particle, g is the acceleration due to gravity, and μ is the particle's mobility in the fluid. George Stokes had shown that the mobility for a spherical particle with radius r is

{\displaystyle \mu ={\tfrac {1}{6\pi \eta r}},}

where η is the dynamic viscosity of the fluid. In a state of dynamic equilibrium, and under the hypothesis of isothermal fluid, the particles are distributed according to the barometric distribution

{\displaystyle \rho =\rho _{o}\,\exp \left(-{\frac {mgh}{k_{\text{B}}T}}\right),}

where ρ − ρo is the difference in density of particles separated by a height difference of h = z − zo, kB is the Boltzmann constant (the ratio of the universal gas constant R to the Avogadro constant NA), and T is the absolute temperature. Dynamic equilibrium is established because the more that particles are pulled down by gravity, the greater the tendency for the particles to migrate to regions of lower concentration. The flux is given by Fick's law,

{\displaystyle J=-D{\frac {d\rho }{dh}},}

where J = ρv. Introducing the formula for ρ, we find that

{\displaystyle v={\frac {Dmg}{k_{\text{B}}T}}.}

In a state of dynamical equilibrium, this speed must also be equal to v = μmg. Both expressions for v are proportional to mg, reflecting that the derivation is independent of the type of forces considered. Similarly, one can derive an equivalent formula for identical charged particles of charge q in a uniform electric field of magnitude E, where mg is replaced with the electrostatic force qE. Equating these two expressions yields the Einstein relation for the diffusivity, independent of mg or qE or other such forces:

{\displaystyle {\frac {\mathbb {E} \left[x^{2}\right]}{2t}}=D=\mu k_{\text{B}}T={\frac {\mu RT}{N_{\text{A}}}}={\frac {RT}{6\pi \eta rN_{\text{A}}}}.}

Here the first equality follows from the first part of Einstein's theory, the third equality follows from the definition of the Boltzmann constant as kB = R/NA, and the fourth equality follows from Stokes's formula for the mobility. By measuring the mean squared displacement over a time interval along with the universal gas constant R, the temperature T, the viscosity η, and the particle radius r, the Avogadro constant NA can be determined. The type of dynamical equilibrium proposed by Einstein was not new. It had been pointed out previously by J. J. Thomson in his series of lectures at Yale University in May 1903 that the dynamic equilibrium between the velocity generated by a concentration gradient given by Fick's law and the velocity due to the variation of the partial pressure caused when ions are set in motion "gives us a method of determining Avogadro's constant which is independent of any hypothesis as to the shape or size of molecules, or of the way in which they act upon each other". An identical expression to Einstein's formula for the diffusion coefficient was also found by Walther Nernst in 1888, in which he expressed the diffusion coefficient as the ratio of the osmotic pressure to the ratio of the frictional force and the velocity to which it gives rise. The former was equated to the law of van 't Hoff while the latter was given by Stokes's law. He writes

{\displaystyle k'=p_{o}/k}

for the diffusion coefficient k′, where po is the osmotic pressure and k is the ratio of the frictional force to the molecular viscosity which he assumes is given by Stokes's formula
for the viscosity. Introducing the ideal gas law per unit volume for the osmotic pressure, the formula becomes identical to that of Einstein's. The use of Stokes's law in Nernst's case, as well as in Einstein and Smoluchowski, is not strictly applicable since it does not apply to the case where the radius of the sphere is small in comparison with the mean free path. Confirming Einstein's formula experimentally proved difficult. Initial attempts by Theodor Svedberg in 1906 and 1907 were critiqued by Einstein and by Perrin as not measuring a quantity directly comparable to the formula. Victor Henri in 1908 took cinematographic shots through a microscope and found quantitative disagreement with the formula, but again the analysis was uncertain. Einstein's predictions were finally confirmed in a series of experiments carried out by Chaudesaigues in 1908 and Perrin in 1909. The confirmation of Einstein's theory constituted empirical progress for the kinetic theory of heat. In essence, Einstein showed that the motion can be predicted directly from the kinetic model of thermal equilibrium. The importance of the theory lay in the fact that it confirmed the kinetic theory's account of the second law of thermodynamics as being an essentially statistical law.

=== Smoluchowski model ===

Smoluchowski's theory of Brownian motion starts from the same premise as that of Einstein and derives the same probability distribution ρ(x, t) for the displacement of a Brownian particle along the x-axis in time t. He therefore gets the same expression for the mean squared displacement, E[(Δx)²]. However, when he relates it to a particle of mass m moving at a velocity u which is the result of a frictional force governed by Stokes's law, he finds

{\displaystyle \mathbb {E} \left[(\Delta x)^{2}\right]=2Dt=t{\frac {32}{81}}{\frac {mu^{2}}{\pi \mu a}}=t{\frac {64}{27}}{\frac {{\frac {1}{2}}mu^{2}}{3\pi \mu a}},}

where μ is the viscosity coefficient, and a is the radius of the particle. Associating the kinetic energy mu²/2 with the thermal energy RT/N, the expression for the mean squared displacement is 64/27 times that found by Einstein. The fraction 27/64 was commented on by Arnold Sommerfeld in his necrology on Smoluchowski: "The numerical coefficient of Einstein, which differs from Smoluchowski by 27/64 can only be put in doubt." Smoluchowski attempts to answer the question of why a Brownian particle should be displaced by bombardments of smaller particles when the probabilities for striking it in the forward and rear directions are equal. If the probability of m gains and n − m losses follows a binomial distribution,

{\displaystyle P_{m,n}={\binom {n}{m}}2^{-n},}

with equal a priori probabilities of 1/2, the mean total gain is

{\displaystyle \mathbb {E} \left[2m-n\right]=\sum _{m={\frac {n}{2}}}^{n}(2m-n)P_{m,n}={\frac {n\,n!}{2^{n+1}\left[\left({\frac {n}{2}}\right)!\right]^{2}}}.}

If n is large enough so that Stirling's approximation can be used in the form

{\displaystyle n!\approx \left({\frac {n}{e}}\right)^{n}{\sqrt {2\pi n}},}

then the expected total gain will be

{\displaystyle \mathbb {E} \left[2m-n\right]\approx {\sqrt {\frac {n}{2\pi }}},}

showing that it increases as the square root of the total population. Suppose that a Brownian particle of mass M is surrounded by lighter particles of mass m which are traveling at a speed u. Then, reasons Smoluchowski, in any collision between a surrounding particle and the Brownian particle, the velocity transmitted to the latter will be mu/M. This ratio is of the order of 10⁻⁷ cm/s. But we also have to take into consideration that in a gas there will be more than 10¹⁶ collisions in a second, and even more in a liquid, where we expect 10²⁰ collisions in one second. Some of these collisions will tend to accelerate the Brownian particle; others will tend to decelerate it. If the mean excess of one kind of collision over the other is of the order of 10⁸ to 10¹⁰ collisions in one second, then the velocity of the Brownian particle may be anywhere between 10 and 1000 cm/s. Thus, even though there are equal probabilities for forward and backward collisions, there will be a net tendency to keep the Brownian particle in motion, just as the ballot theorem predicts. These orders of magnitude are not exact because they don't take into consideration the velocity of the Brownian particle, U, which depends on the collisions that tend to accelerate and decelerate it. The larger U is, the greater will be the collisions that will retard it, so that the velocity of a Brownian particle can never increase without limit. Could such a process occur, it would be tantamount to a perpetual motion of the second type. And since equipartition of energy applies, the kinetic energy of the Brownian particle, MU²/2, will be equal, on the average, to
the kinetic energy of the surrounding fluid particle, mu²/2. In 1906 Smoluchowski published a one-dimensional model to describe a particle undergoing Brownian motion. The model assumes collisions with M ≫ m, where M is the test particle's mass and m the mass of one of the individual particles composing the fluid. It is assumed that the particle collisions are confined to one dimension and that it is equally probable for the test particle to be hit from the left as from the right. It is also assumed that every collision always imparts a velocity change of the same magnitude ΔV. If NR is the number of collisions from the right and NL the number of collisions from the left, then after N collisions the particle's velocity will have changed by ΔV(2NR − N). The multiplicity is then simply given by

{\displaystyle {\binom {N}{N_{\text{R}}}}={\frac {N!}{N_{\text{R}}!(N-N_{\text{R}})!}}}

and the total number of possible states is given by 2^N. Therefore, the probability of the particle being hit from the right NR times is:

{\displaystyle P_{N}(N_{\text{R}})={\frac {N!}{2^{N}N_{\text{R}}!(N-N_{\text{R}})!}}.}

As a result of its simplicity, Smoluchowski's 1D model can only qualitatively describe Brownian motion. For a realistic particle undergoing Brownian motion in a fluid, many of the assumptions don't apply. For example, the assumption that on average an equal number of collisions occurs from the right as from the left falls apart once the particle is in motion. Also, in a realistic situation there would be a distribution of different possible values of ΔV, rather than always just one.

=== Langevin equation ===

The diffusion equation
yields an approximation of the time evolution of the probability density function associated with the position of a particle undergoing Brownian movement under the physical definition. The approximation is valid on timescales much larger than the timescale of individual atomic collisions, since it does not include a term to describe the acceleration of particles during collision. The time evolution of the position of the Brownian particle over all time scales is described by the Langevin equation, an equation that involves a random force field representing the effect of the thermal fluctuations of the solvent on the particle. At longer time scales, where acceleration is negligible, individual particle dynamics can be approximated using Brownian dynamics in place of Langevin dynamics.

=== Astrophysics: star motion within galaxies ===

In stellar dynamics, a massive body (star, black hole, etc.) can experience Brownian motion as it responds to gravitational forces from surrounding stars. The rms velocity V of the massive object, of mass M, is related to the rms velocity v⋆ of the background stars by

{\displaystyle MV^{2}\approx mv_{\star }^{2},}

where m ≪ M is the mass of the background stars. The gravitational force from the massive object causes nearby stars to move faster than they otherwise would, increasing both v⋆ and V. The Brownian velocity of Sgr A*, the supermassive black hole at the center of the Milky Way galaxy, is predicted from this formula to be less than 1 km s⁻¹.

== Mathematics ==

In mathematics, Brownian motion is described by the Wiener process, a continuous-time stochastic process named in honor of Norbert Wiener. It is one of the best known Lévy processes (càdlàg stochastic processes with stationary independent increments) and occurs
frequently in pure and applied mathematics, economics and physics. The Wiener process Wt is characterized by four facts:

W0 = 0
Wt is almost surely continuous
Wt has independent increments
Wt − Ws ∼ N(0, t − s) for 0 ≤ s ≤ t

Here N(μ, σ²) denotes the normal distribution with expected value μ and variance σ². The condition that it has independent increments means that if 0 ≤ s1 < t1 ≤ s2 < t2 then Wt1 − Ws1 and Wt2 − Ws2 are independent random variables. In addition, for some filtration Ft, Wt is Ft-measurable for all t ≥ 0. An alternative characterisation of the Wiener process is the so-called Lévy characterisation, which says that the Wiener process is an almost surely continuous martingale with W0 = 0 and quadratic variation [Wt, Wt] = t. A third characterisation is that the Wiener process has a spectral representation as a sine series whose coefficients are independent N(0, 1) random variables. This representation can be obtained using the Kosambi–Karhunen–Loève theorem. The Wiener process can be constructed as the scaling limit of a random walk, or of other discrete-time stochastic processes with stationary independent increments. This is known as Donsker's theorem. Like the random walk, the Wiener process is recurrent in one or two dimensions (meaning that it returns almost surely to any fixed neighborhood
of the origin infinitely often) whereas it is not recurrent in dimensions three and higher. Unlike the random walk, it is scale invariant. A d-dimensional Gaussian free field has been described as "a d-dimensional-time analog of Brownian motion."

=== Statistics ===

The Brownian motion can be modeled by a random walk. In the general case, Brownian motion is a Markov process and is described by stochastic integral equations.

=== Lévy characterisation ===

The French mathematician Paul Lévy proved the following theorem, which gives a necessary and sufficient condition for a continuous Rn-valued stochastic process X to actually be n-dimensional Brownian motion. Hence, Lévy's condition can actually be used as an alternative definition of Brownian motion. Let X = (X1, ..., Xn) be a continuous stochastic process on a probability space (Ω, Σ, P) taking values in Rn. Then the following are equivalent:

X is a Brownian motion with respect to P, i.e., the law of X with respect to P is the same as the law of an n-dimensional Brownian motion, i.e., the push-forward measure X∗(P) is classical Wiener measure on C0([0, ∞); Rn).
Both X is a martingale with respect to P (and its own natural filtration) and, for all 1 ≤ i, j ≤ n, Xi(t) Xj(t) − δij t is a martingale with respect to P (and its own natural filtration), where δij denotes the Kronecker delta.

=== Spectral content ===

The spectral content of a stochastic process Xt can be found from the power spectral density, formally defined as

{\displaystyle S(\omega )=\lim _{T\to \infty }{\frac {1}{T}}\mathbb {E} \left\{\left|\int _{0}^{T}e^{i\omega t}X_{t}\,dt\right|^{2}\right\},}

where E stands for the expected value. The power spectral density of Brownian motion is found to be

{\displaystyle S_{\text{BM}}(\omega )={\frac {4D}{\omega ^{2}}},}

where D is the diffusion coefficient of Xt. For naturally occurring signals, the spectral content can be found from the power spectral density of a single realization, with finite available time, i.e.,

{\displaystyle S^{(1)}(\omega ,T)={\frac {1}{T}}\left|\int _{0}^{T}e^{i\omega t}X_{t}\,dt\right|^{2},}

which for an individual realization of a Brownian motion trajectory is found to have expected value

{\displaystyle \mu _{\text{BM}}(\omega ,T)={\frac {4D}{\omega ^{2}}}\left[1-{\frac {\sin \left(\omega T\right)}{\omega T}}\right]}

and variance

{\displaystyle \sigma _{\text{BM}}^{2}(\omega ,T)=\mathbb {E} \left\{\left(S^{(1)}(\omega ,T)\right)^{2}\right\}-\mu _{\text{BM}}^{2}(\omega ,T)={\frac {20D^{2}}{\omega ^{4}}}\left[1-{\Big (}6-\cos \left(\omega T\right){\Big )}{\frac {2\sin \left(\omega T\right)}{5\omega T}}+{\frac {17-\cos \left(2\omega T\right)-16\cos \left(\omega T\right)}{10\omega ^{2}T^{2}}}\right].}

For sufficiently long realization times, the expected value of the power spectrum of a single trajectory converges to the
formally defined power spectral density S(ω), but its coefficient of variation γ = σ/μ tends to √5/2. This implies the distribution of S⁽¹⁾(ω, T) is broad even in the infinite time limit.

=== Riemannian manifolds ===

Brownian motion is usually considered to take place in Euclidean space. It is natural to consider how such motion generalizes to more complex shapes, such as surfaces or higher-dimensional manifolds. The formalization requires the space to possess some form of a derivative, as well as a metric, so that a Laplacian can be defined. Both of these are available on Riemannian manifolds. Riemannian manifolds have the property that geodesics can be described in polar coordinates; that is, displacements are always in a radial direction, at some given angle. Uniform random motion is then described by Gaussians along the radial direction, independent of the angle, the same as in Euclidean space. The infinitesimal generator (and hence characteristic operator) of Brownian motion on Euclidean Rn is ½Δ, where Δ denotes the Laplace operator. Brownian motion on an m-dimensional Riemannian manifold (M, g) can be defined as diffusion on M with the characteristic operator given by ½ΔLB, half the Laplace–Beltrami operator ΔLB. One of the topics of study is a characterization of the Poincaré recurrence time for such systems.

== Narrow escape ==

The narrow escape problem is a ubiquitous problem in biology, biophysics and cellular biology which has the following formulation: a Brownian particle (ion, molecule, or protein) is confined to a bounded domain (a compartment or a cell) by a reflecting boundary, except for a small window through which it can escape. The narrow
escape problem is that of calculating the mean escape time. This time diverges as the window shrinks, thus rendering the calculation a singular perturbation problem.

== See also ==

== References ==

== Further reading ==

Brown, Robert (1828). "A brief account of microscopical observations made in the months of June, July and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies" (PDF). Philosophical Magazine. 4 (21): 161–173. doi:10.1080/14786442808674769. Archived (PDF) from the original on 9 October 2022. Also includes a subsequent defense by Brown of his original observations, Additional remarks on active molecules.
Chaudesaigues, M. (1908). "Le mouvement brownien et la formule d'Einstein" [Brownian motion and Einstein's formula]. Comptes Rendus (in French). 147: 1044–6.
Clark, P. (1976). "Atomism versus thermodynamics". In Howson, Colin (ed.). Method and Appraisal in the Physical Sciences. Cambridge University Press. ISBN 978-0521211109.
Cohen, Ruben D. (1986). "Self Similarity in Brownian Motion and Other Ergodic Phenomena" (PDF). Journal of Chemical Education. 63 (11): 933–934. Bibcode:1986JChEd..63..933C. doi:10.1021/ed063p933. Archived (PDF) from the original on 9 October 2022.
Dubins, Lester E.; Schwarz, Gideon (15 May 1965). "On Continuous Martingales". Proceedings of the National Academy of Sciences of the United States of America. 53 (3): 913–916. Bibcode:1965PNAS...53..913D. doi:10.1073/pnas.53.5.913. JSTOR 72837. PMC 301348. PMID 16591279.
Einstein, A. (1956). Investigations on the Theory of Brownian Movement. New York: Dover. ISBN 978-0-486-60304-9. Retrieved 6 January 2014.
Henri, V. (1908). "Études cinématographique du mouvement brownien" [Cinematographic studies of Brownian motion]. Comptes Rendus (in French) (146): 1024–6.
Lucretius, On The Nature of Things, translated by William Ellery Leonard.
(on-line version, from Project Gutenberg. See the heading 'Atomic Motions'; this translation differs slightly from the one quoted.)
Nelson, Edward (1967). Dynamical Theories of Brownian
Motion. (PDF version of this out-of-print book, from the author's webpage.) This is primarily a mathematical work, but the first four chapters discuss the history of the topic, in the era from Brown to Einstein.
Pearle, P.; Collett, B.; Bart, K.; Bilderback, D.; Newman, D.; Samuels, S. (2010). "What Brown saw and you can too". American Journal of Physics. 78 (12): 1278–1289. arXiv:1008.0039. Bibcode:2010AmJPh..78.1278P. doi:10.1119/1.3475685. S2CID 12342287.
Perrin, J. (1909). "Mouvement brownien et réalité moléculaire" [Brownian movement and molecular reality]. Annales de chimie et de physique. 8th series. 18: 5–114. See also Perrin's book "Les Atomes" (1914).
von Smoluchowski, M. (1906). "Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen" [On the kinetic theory of Brownian molecular motion and of suspensions]. Annalen der Physik (in German). 21 (14): 756–780. Bibcode:1906AnP...326..756V. doi:10.1002/andp.19063261405.
Svedberg, T. (1907). Studien zur Lehre von den kolloiden Lösungen [Studies on the theory of colloidal solutions] (in German).
Thiele, T. N. Danish version: "Om Anvendelse af mindste Kvadraters Methode i nogle Tilfælde, hvor en Komplikation af visse Slags uensartede tilfældige Fejlkilder giver Fejlene en 'systematisk' Karakter". French version: "Sur la compensation de quelques erreurs quasi-systématiques par la méthode des moindres carrés" [On the compensation of some quasi-systematic errors by the method of least squares], published simultaneously in Vidensk. Selsk. Skr. 5. Rk., naturvid. og mat. Afd., 12: 381–408, 1880.

== External links ==

Einstein on Brownian Motion
Discusses history, botany and physics of Brown's original observations, with videos
"Einstein's prediction finally witnessed one century later": a test to observe the velocity of Brownian motion
Large-Scale Brownian Motion Demonstration
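The relation E[x²] = 2Dt derived in the article, together with the random-walk construction of the Wiener process (Donsker's theorem), can be checked with a short numerical sketch. This example is illustrative only and not part of the article; NumPy and all parameter values here are assumptions.

```python
# Illustrative sketch: simulate many independent 1-D Brownian trajectories
# as sums of Gaussian increments and check E[x^2] ~ 2*D*t.
import numpy as np

rng = np.random.default_rng(42)

D = 0.5          # diffusion coefficient (arbitrary choice)
dt = 1e-3        # time step
n_steps = 1000   # total time T = n_steps * dt = 1.0
n_paths = 20000  # ensemble size

# Each increment is N(0, 2*D*dt); summing them is the random-walk
# construction whose scaling limit is the Wiener process.
increments = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
x = increments.sum(axis=1)   # particle positions at time T
msd = np.mean(x**2)          # ensemble mean squared displacement

T = n_steps * dt
print(msd, 2 * D * T)  # msd should be close to 2*D*T
```

With 20,000 paths the sample mean of x² fluctuates around 2DT = 1.0 with a standard error of about 0.01, so the agreement is visible at the percent level.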
Feature engineering is a preprocessing step in supervised machine learning and statistical modeling which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability. Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists construct dimensionless numbers such as the Reynolds number in fluid dynamics, the Nusselt number in heat transfer, and the Archimedes number in sedimentation. They also develop first approximations of solutions, such as analytical solutions for the strength of materials in mechanics.

== Clustering ==

One of the applications of feature engineering has been the clustering of feature-objects or sample-objects in a dataset. In particular, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These methods include non-negative matrix factorization (NMF), non-negative matrix tri-factorization (NMTF), and non-negative tensor decomposition/factorization (NTF/NTD). The non-negativity constraints on the coefficients of the feature vectors mined by these algorithms yield a parts-based representation, and the different factor matrices exhibit natural clustering properties. Several extensions of these feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering and manifold learning to overcome inherent issues with these algorithms. Other classes of feature engineering algorithms include leveraging a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme.
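The parts-based clustering property described above can be sketched with a small example. This is an illustrative use of scikit-learn's NMF on toy data (the library choice and the data are assumptions, not from the article): because the sample-coefficient matrix W is non-negative, each sample's dominant component serves as a natural cluster label.

```python
# Illustrative sketch: clustering via non-negative matrix factorization (NMF).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Toy non-negative data: two groups of samples with disjoint feature support.
X = np.zeros((40, 6))
X[:20, :3] = 5.0 + rng.random((20, 3))   # group 1 uses features 0-2
X[20:, 3:] = 5.0 + rng.random((20, 3))   # group 2 uses features 3-5

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # sample coefficients (40 x 2), non-negative
H = model.components_        # feature profiles (2 x 6), non-negative

# Non-negativity makes the dominant component of each row of W a natural
# (hard) cluster assignment -- the "parts-based" property in action.
labels = W.argmax(axis=1)
```

Orthogonality-constrained variants mentioned above tighten exactly this step: forcing the rows of W toward orthogonality makes the argmax assignment an explicit hard clustering rather than a soft one.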
An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD), which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering); it is computationally robust to missing information, can identify shape- and scale-based outliers, and can handle high-dimensional data effectively. Coupled matrix and tensor decompositions
{ "page_id": 46207323, "source": null, "title": "Feature engineering" }
are popular in multi-view feature engineering.

== Predictive modelling ==

Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include feature creation from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance; even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. Common causes include:

Feature templates - implementing feature templates instead of coding new features
Feature combinations - combinations that cannot be represented by a linear system

Feature explosion can be limited via techniques such as regularization, kernel methods, and feature selection.

== Automation ==

Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016. The related academic literature can be roughly separated into two types: multi-relational decision tree learning (MRDTL), which uses a supervised algorithm similar to a decision tree, and Deep Feature Synthesis, which uses simpler methods.

=== Multi-relational decision tree learning (MRDTL) ===

Multi-relational Decision Tree Learning (MRDTL) extends traditional decision tree methods to relational databases, handling complex data relationships across tables.
It innovatively uses selection graphs as decision nodes, refined systematically until a specific termination criterion is reached. Most MRDTL studies base implementations on relational databases, which results in many redundant operations. These redundancies can be reduced by using techniques such as
tuple id propagation.

=== Open-source implementations ===

There are a number of open-source libraries and tools that automate feature engineering on relational data and time series:

featuretools is a Python library for transforming time series and relational data into feature matrices for machine learning.
MCMD is an open-source feature engineering algorithm for joint clustering of multiple datasets.
OneBM (One-Button Machine) combines feature transformations with feature selection on relational data. It helps data scientists reduce data exploration time, allowing them to test many ideas in a short time, and it enables non-experts who are not familiar with data science to quickly extract value from their data with little effort, time, and cost.
getML community is an open-source tool for automated feature engineering on time series and relational data. It is implemented in C/C++ with a Python interface, and has been shown to be at least 60 times faster than tsflex, tsfresh, tsfel, featuretools or kats.
tsfresh is a Python library for feature extraction on time series data. It evaluates the quality of the features using hypothesis testing.
tsflex is an open-source Python library for extracting features from time series data. Despite being written entirely in Python, it has been shown to be faster and more memory-efficient than tsfresh, seglearn or tsfel.
seglearn is an extension of the scikit-learn Python library for multivariate, sequential time series data.
tsfel is a Python package for feature extraction on time series data.
kats is a Python toolkit for analyzing time series data.

=== Deep feature synthesis ===

The deep feature synthesis (DFS) algorithm beat 615 of 906 human teams in a competition.

== Feature stores ==

The feature store is where features are stored and organized for the explicit purpose of
being used to either train models (by data scientists) or make predictions (by applications that have a trained model). It is a central location where teams can create or update groups of features drawn from multiple data sources, and build new datasets from those feature groups to train models or to serve applications that do not compute features themselves but simply retrieve them when needed to make predictions. A feature store includes the ability to store the code used to generate features, apply that code to raw data, and serve the features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used. Feature stores can be standalone software tools or built into machine learning platforms. == Alternatives == Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error. Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering. However, deep learning algorithms still require careful preprocessing and cleaning of the input data. In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process. == See also == Covariate Data transformation Feature extraction Feature learning Hashing trick Instrumental variables estimation Kernel method List of datasets for machine learning research Scale co-occurrence matrix Space mapping == References == == Further reading ==
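The deep feature synthesis algorithm mentioned above works by applying aggregation primitives (such as MEAN, MAX and COUNT) across related tables to build a feature matrix. The following toy sketch illustrates the idea in plain Python; it is not the featuretools API, and the table and column names are invented for illustration:

```python
from statistics import mean

# Toy relational data: a parent table (customers) and a child table
# (transactions) linked by customer_id. Names are illustrative only.
customers = [{"customer_id": 1}, {"customer_id": 2}]
transactions = [
    {"customer_id": 1, "amount": 10.0},
    {"customer_id": 1, "amount": 30.0},
    {"customer_id": 2, "amount": 5.0},
]

# Aggregation primitives, in the spirit of deep feature synthesis.
PRIMITIVES = {"MEAN": mean, "MAX": max, "COUNT": len}

def dfs_features(parents, children, key, value):
    """Apply every primitive to each parent's child rows, producing
    one feature row per parent (a tiny 'feature matrix')."""
    matrix = []
    for parent in parents:
        vals = [row[value] for row in children if row[key] == parent[key]]
        feats = dict(parent)
        for name, fn in PRIMITIVES.items():
            feats[f"{name}({value})"] = fn(vals) if vals else None
        matrix.append(feats)
    return matrix

matrix = dfs_features(customers, transactions, "customer_id", "amount")
# matrix[0] -> {'customer_id': 1, 'MEAN(amount)': 20.0,
#               'MAX(amount)': 30.0, 'COUNT(amount)': 2}
```

Real implementations go further by stacking primitives recursively across several tables, which is where the "deep" in deep feature synthesis comes from.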
Chromium sulfate may refer to: Chromium(II) sulfate Chromium(III) sulfate
{ "page_id": 18682204, "source": null, "title": "Chromium sulfate" }
Hydroxybutyraldehyde may refer to: 3-Hydroxybutyraldehyde (acetaldol), an aldol, formally the product of the dimerization of acetaldehyde 4-Hydroxybutyraldehyde, a chemical intermediate == See also == Hydroxybutanal
{ "page_id": 59183452, "source": null, "title": "Hydroxybutyraldehyde" }
Harrington paradox is a notion in environmental and ecological economics describing firms' compliance with environmental regulations. The paradox was first described in Winston Harrington's 1988 paper and was based on research on the monitoring and enforcement of, and compliance with, environmental regulations in the US from the end of the 1970s to the beginning of the 1980s. According to the paradox, firms generally comply with environmental regulations despite the fact that:
- the frequency of environmental monitoring of firms is low;
- when violations are detected, the violating firm is rarely punished;
- the expected fine is low in comparison to the cost of compliance.
== Explanation == Compliance at such a level is contrary to the rational crime theory of Gary Becker, which describes the behavior of profit-maximizing entities. A rational firm will comply with the standards only if the expected fine is higher than the cost of compliance. Several explanations of the paradox have been put forward. While Gary Becker's model assumes random monitoring of firms, actual monitoring strategies follow a targeted approach and more frequently inspect firms that are more likely to be in violation. Targeted monitoring and enforcement was put forward as an explanation of the Harrington paradox by Winston Harrington in his 1988 article. An alternative explanation is allowing firms to report environmental violations voluntarily in order to be treated with leniency by the regulator. Firms may also exhibit altruism or self-image concerns and voluntarily comply with environmental regulation. == Observation == Empirical data documenting the paradox are rare. In research conducted by the Norwegian Climate and Pollution Agency in 2001, no serious violations were revealed, but in the majority of firms (80%) there were minor deviations from standards. The fact that in
{ "page_id": 26349919, "source": null, "title": "Harrington paradox" }
Norway monitoring frequency is low and the fine system for minor violations is light cannot provide strong evidence for the paradox, since major violations carry very strict punishments, which conforms to rational crime theory. == External links == https://books.google.com/books?id=JQ0vBINSp_MC&pg=PA209 https://books.google.com/books?id=2L6PBcD3fQAC&pg=PA198 == Additional materials ==
Harrington, W. (1988). "Enforcement leverage when penalties are restricted". Journal of Public Economics 37, 29–53.
Heyes, A., Rickman, N. (1999). "Regulatory dealing – revisiting the Harrington paradox". Journal of Public Economics 72, 361–378.
Nyborg, K., Telle, K. (2006). "Firms' Compliance to Environmental Regulation: Is There Really a Paradox?". Environmental & Resource Economics 35, 1–18.
Decker, C. S. (2003). "Corporate environmentalism and environmental statutory permitting". Journal of Law and Economics 46.
Nyborg, K., Telle, K. "A dissolving paradox: Firms' compliance to environmental regulation". Memorandum No 02/2, Department of Economics, University of Oslo.
Raymond, M. (1999). "Enforcement leverage when penalties are restricted: a reconsideration under asymmetric information". Journal of Public Economics 73(2), 289–295.
Cohen, M. A. (1999). "Monitoring and Enforcement of Environmental Policy". In International Yearbook of Environmental and Resource Economics, Vol. III, eds. T. Tietenberg and H. Folmer. Edward Elgar.
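Becker's compliance condition described above can be made concrete with a back-of-the-envelope calculation; the numbers below are purely illustrative, chosen only to reflect the conditions of the paradox (rare monitoring, modest fines, costly compliance):

```python
def complies(detection_prob, fine, compliance_cost):
    """Becker-style rational crime model: a profit-maximizing firm
    complies only when the expected fine exceeds the compliance cost."""
    return detection_prob * fine > compliance_cost

# Illustrative numbers only: a 5% chance of detection, a 10,000 fine,
# and a 50,000 cost of compliance.
expected_fine = 0.05 * 10_000          # = 500
print(complies(0.05, 10_000, 50_000))  # False: the model predicts violation,
                                       # yet in practice most firms comply.
```

The gap between this prediction and observed behavior is exactly what targeted monitoring, voluntary self-reporting, and altruism or self-image concerns are invoked to explain.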
The molecular formula C18H34O2 (molar mass: 282.46 g/mol, exact mass: 282.2559 u) may refer to: Elaidic acid Octadecanolide Oleic acid Petroselinic acid
{ "page_id": 23662943, "source": null, "title": "C18H34O2" }
Browning is the process of food turning brown due to the chemical reactions that take place within. The process of browning is one of the chemical reactions that take place in food chemistry and represents an interesting research topic regarding health, nutrition, and food technology. Though there are many different ways food chemically changes over time, browning in particular falls into two main categories: enzymatic versus non-enzymatic browning processes. Browning has many important implications on the food industry relating to nutrition, technology, and economic cost. Researchers are especially interested in studying the control (inhibition) of browning and the different methods that can be employed to maximize this inhibition and ultimately prolong the shelf life of food. == Enzymatic browning == Enzymatic browning is one of the most important reactions that takes place in most fruits and vegetables as well as in seafood. These processes affect the taste, color, and value of such foods. Generally, it is a chemical reaction involving polyphenol oxidase (PPO), catechol oxidase, and other enzymes that create melanins and benzoquinone from natural phenols. Enzymatic browning (also called oxidation of foods) requires exposure to oxygen. It begins with the oxidation of phenols by polyphenol oxidase into quinones, whose strong electrophilic state causes high susceptibility to a nucleophilic attack from other proteins. These quinones are then polymerized in a series of reactions, eventually resulting in the formation of brown pigments (melanosis) on the surface of the food. The rate of enzymatic browning is reflected by the amount of active polyphenol oxidases present in the food. Hence, most research into methods of preventing enzymatic browning has been directed towards inhibiting polyphenol oxidase activity. However, not all browning of food produces negative effects. 
Examples of beneficial enzymatic browning:
- Developing color and flavor in coffee, cocoa beans, and tea.
- Developing color and
{ "page_id": 987492, "source": null, "title": "Food browning" }
flavor in dried fruit such as figs and raisins.
Examples of non-beneficial enzymatic browning:
- Fresh fruit and vegetables, including apples, potatoes, bananas and avocados.
- Oxidation of polyphenols, the major cause of melanosis in crustaceans such as shrimp.
== Control of enzymatic browning == The control of enzymatic browning has always been a challenge for the food industry. A variety of approaches are used to prevent or slow enzymatic browning of foods, each method aimed at targeting specific steps of the chemical reaction. The different types of enzymatic browning control can be classified into two large groups: physical and chemical. Usually, multiple methods are used. The use of sulfites (powerful anti-browning chemicals) has been reconsidered because of the potential hazards they cause. Much research has been conducted regarding the exact control mechanisms that act on the enzymatic process. Besides prevention, control of browning also includes measures intended to restore the color of food after it has browned. For instance, ion-exchange filtration or ultrafiltration can be used in winemaking to remove brown sediments from the solution. === Physical methods ===
- Heat treatment − Treating food with heat, such as blanching or roasting, denatures enzymes and destroys the reactants responsible for browning. Blanching is used, for example, in winemaking, tea processing, storing nuts and bacon, and preparing vegetables for freezing preservation. Meat is often partially browned under high heat before being incorporated into a larger preparation to be cooked at a lower temperature, which produces less browning.
- Cold treatment − Refrigeration and freezing are the most common ways of storing food and preventing decay. The activity of browning enzymes, i.e., the rate of reaction, drops at low temperatures. Thus, refrigeration helps to keep the initial look, color, and flavor of fresh vegetables
and fruits. Refrigeration is also used during the distribution and retailing of fruits and vegetables.
- Oxygen elimination − The presence of oxygen is crucial for enzymatic browning, so eliminating oxygen from the environment helps to slow the browning reaction. Withdrawing air or replacing it with other gases (e.g., N2 or CO2) during preservation, as in vacuum packaging, modified-atmosphere packaging, wine or juice bottling, the use of impermeable films or edible coatings, or dipping into salt or sugar solutions, keeps the food away from direct contact with oxygen. Impermeable films made of plastic or other materials prevent food from being exposed to oxygen in the air and avoid moisture loss. There is increasing activity in developing packaging materials impregnated with antioxidant, antimicrobial, and antifungal substances, such as butylated hydroxytoluene (BHT), butylated hydroxyanisole (BHA), tocopherols, hinokitiol, lysozyme, nisin, natamycin, chitosan, and ε-polylysine. Edible coatings can be made of polysaccharides, proteins, lipids, vegetable skins, plants or other natural products.
- Irradiation − Food irradiation using UV-C, gamma rays, X-rays, and electron beams is another method to extend food shelf life. Ionizing radiation inhibits the vitality of microorganisms responsible for food spoilage and delays the maturation and sprouting of stored vegetables and fruits.
=== Chemical methods ===
- Acidification − Browning enzymes, like other enzymes, are active over a specific range of pH. For example, PPO shows optimal activity at pH 5–7 and is inhibited below pH 3. Acidifying agents and acidity regulators are widely used as food additives to maintain a desired pH in food products. Acidulants, such as citric acid, ascorbic acid, and glutathione, are used as anti-browning agents. Many of these agents also show other anti-browning effects, such as chelating and antioxidant activities.
- Antioxidants − Many antioxidants are used in the food industry as food additives. These compounds react with oxygen and suppress the
initiation of the browning process. They also interfere with intermediate products of the subsequent reactions and inhibit melanin formation. Ascorbic acid, N-acetylcysteine, L-cysteine, 4-hexylresorcinol, erythorbic acid, cysteine hydrochloride, and glutathione are examples of antioxidants that have been studied for their anti-browning properties.
- Chelating agents − Polyphenol oxidase requires copper as a cofactor, so copper-chelating agents inhibit the activity of this enzyme. Many agents possessing chelating activity have been studied and used in different areas of the food industry, such as citric acid, sorbic acid, polyphosphates, hinokitiol, kojic acid, EDTA, porphyrins, polycarboxylic acids, and various proteins. Some of these compounds also have other anti-browning effects, such as acidifying or antioxidant activity. Hinokitiol is used in coating materials for food packaging.
=== Other methods ===
- Natural agents − Different natural products and their extracts, such as onion, pineapple, lemon, and white wine, are known to inhibit or slow the browning of some products. Onion and its extract exhibit potent anti-browning properties by inhibiting PPO activity. Pineapple juice has been shown to have an anti-browning effect on apples and bananas. Lemon juice is used in doughs to make pastry products look brighter, an effect possibly explained by the anti-browning properties of the citric and ascorbic acids in the juice.
- Genetic modification − Arctic apples have been genetically modified to silence the expression of PPO, thereby delaying the browning effect and improving apple eating quality.
== Non-enzymatic browning == The second type of browning, non-enzymatic browning, is a process that also produces brown pigmentation in foods, but without the activity of enzymes. The two main forms of non-enzymatic browning are caramelization and the Maillard reaction.
Both vary in the reaction rate as a function of water activity (in food chemistry, the standard state of water activity is most often defined as the
partial vapor pressure of pure water at the same temperature). Caramelization is a process involving the pyrolysis of sugar. It is used extensively in cooking for the desired nutty flavor and brown color. As the process occurs, volatile chemicals are released, producing the characteristic caramel flavor. The other non-enzymatic reaction is the Maillard reaction, which is responsible for much of the flavor produced when foods are cooked. Examples of foods that undergo the Maillard reaction include breads, steaks, and potatoes. It is a chemical reaction that takes place between the amine group of a free amino acid and the carbonyl group of a reducing sugar, usually with the addition of heat. The sugar interacts with the amino acid, producing a variety of odors and flavors. The Maillard reaction is the basis for producing artificial flavors for processed foods in the flavoring industry, since the type of amino acid involved determines the resulting flavor. Melanoidins are brown, high-molecular-weight heterogeneous polymers that are formed when sugars and amino acids combine through the Maillard reaction at high temperatures and low water activity. Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment to avoid contamination around the outflow of these refineries. == Browning of grapes during winemaking == Like most fruit, grapes vary in the amount of phenolic compounds they contain. This characteristic is used as a parameter in judging the quality of the wine. The browning process during winemaking is initiated by the enzymatic oxidation of phenolic compounds by polyphenol oxidases. Contact between the phenolic compounds in the vacuole of the grape cell and the polyphenol oxidase enzyme (located
in the cytoplasm) triggers the oxidation of the grape. Thus, the initial browning of grapes occurs as a result of "compartmentalization modification" in the cells of the grape. == Implications in food industry and technology == Enzymatic browning affects the color, flavor, and nutritional value of foods, causing large economic losses when produce is not sold to consumers in time. It is estimated that more than 50% of produce is lost as a result of enzymatic browning. The increase in the human population and the consequent depletion of natural resources have prompted many biochemists and food engineers to find new or improved techniques to preserve food for longer by inhibiting the browning reaction, effectively increasing the shelf life of foods and addressing part of the waste problem. A better understanding of the enzymatic browning mechanisms, specifically of the properties of the enzymes and substrates involved in the reaction, may help food technologists to control certain stages in the mechanism and ultimately to inhibit browning. Apples are commonly studied by researchers due to their high phenolic content, which makes them highly susceptible to enzymatic browning. Consistent with other findings on apples and browning activity, a correlation has been found between higher phenolic quantities and increased enzymatic activity in apples. This provides a potential target, and thus hope, for food producers wishing to genetically modify foods to decrease polyphenol oxidase activity and thus decrease browning. An example of such an accomplishment in food engineering is the production of Arctic apples. These apples, engineered by Okanagan Specialty Fruits Inc, are the result of gene splicing, a laboratory technique that has allowed the reduction of polyphenol oxidase. Another closely studied issue is the browning of seafood. Seafood, in particular
shrimp, is a staple consumed by people all over the world. The browning of shrimp, referred to as melanosis, is a great concern for food handlers and consumers. Melanosis mainly occurs during postmortem handling and refrigerated storage. Recent studies have found that a plant extract acting as a polyphenol oxidase inhibitor can serve the same anti-melanosis function as sulfites, but without the health risks. == See also == Browning (partial cooking) Decomposition Gravy Water activity == References ==
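The cold-treatment effect described in this article, enzyme reaction rates dropping at low temperature, is often approximated with a Q10 temperature coefficient. The sketch below assumes Q10 ≈ 2, a common rule of thumb; the actual value for any particular browning enzyme would have to be measured:

```python
def relative_rate(temp_c, ref_temp_c=20.0, q10=2.0):
    """Q10 model: the reaction rate changes by a factor of q10
    for every 10 degree C change in temperature."""
    return q10 ** ((temp_c - ref_temp_c) / 10.0)

# Refrigeration at 4 C compared with room temperature (20 C):
# the rate falls to roughly a third, which is why refrigeration
# slows, but does not stop, enzymatic browning.
print(round(relative_rate(4.0), 2))
```

This also illustrates why freezing is more effective than refrigeration: the further below the reference temperature, the smaller the relative rate.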
The Hayashi rearrangement is the chemical reaction of ortho-benzoylbenzoic acids catalyzed by sulfuric acid or phosphorus pentoxide. This reaction proceeds through electrophilic acylium ion attack with a spiro intermediate. == References ==
{ "page_id": 8196452, "source": null, "title": "Hayashi rearrangement" }
Papovaviricetes is a class of viruses. The class shares the name of an abolished family, Papovaviridae, which was split in 1999 into the two families Papillomaviridae and Polyomaviridae. The class was established in 2019 and takes its name from the former family. == Orders == The following orders are recognized: Sepolyvirales Zurhausenvirales == See also == Bandicoot papillomatosis carcinomatosis virus == References ==
{ "page_id": 63770985, "source": null, "title": "Papovaviricetes" }
Oleosins are structural proteins found in the oil bodies of vascular plant cells. Oil bodies are not considered organelles because they are bounded by a single-layer membrane and lack the double-layer membrane required of organelles. Oleosins are found in plant parts with high oil content that undergo extreme desiccation as part of their maturation process, and they help stabilize the oil bodies. == Components == Oleosins are proteins of 16 kDa to 24 kDa composed of three domains: an N-terminal hydrophilic region of variable length (30 to 60 residues), a central hydrophobic domain of about 70 residues, and a C-terminal amphipathic region of variable length (60 to 100 residues). The central hydrophobic domain is proposed to adopt a beta-strand structure and to interact with the lipids; it is the only domain whose sequence is conserved. Models show oleosins having a hairpin-like hydrophobic region inserted into the triacylglyceride (TAG) core, while the hydrophilic parts remain outside the oil body. Oleosins have been found on the oil bodies of seeds, tapetum cells, and pollen, but not of fruits. On pollen, rather than stabilizing oil bodies, oleosins are believed to be involved in water uptake on the stigma. == Allergic reactions == Allergic reactions to oleosins from hazelnut, peanut and sesame oils have been confirmed, ranging from contact dermatitis to anaphylactic shock. In sesame, these oil-body-associated proteins are ~14 and ~17 kDa, named Ses i 5 and Ses i 4, respectively. Commercial-grade peanut oil is often highly refined, which makes it safe for most peanut-allergic individuals. In contrast, commercial-grade sesame oil is typically an unrefined product with a measurably higher protein content. In addition to being a food ingredient, sesame oil can be present in drug products, dietary supplements and topically applied
{ "page_id": 10162537, "source": null, "title": "Oleosin" }
cosmetics. == Usage == Oleosins provide an easy way of purifying proteins which have been produced recombinantly in plants. If the protein is made as a fusion protein with oleosin and a protease recognition site is incorporated between them, the fusion protein will sit in the membrane of the oil body, which can be easily isolated by centrifugation. The oil droplets can then be mixed with aqueous medium again, and oleosin cleaved from the protein of interest. Centrifugation will cause two phases to separate again, and the aqueous medium now contains the purified protein. == References ==
Particulate organic matter (POM) is a fraction of total organic matter operationally defined as that which does not pass through a filter, with pore sizes typically ranging from 0.053 millimeters (53 μm) to 2 millimeters. Particulate organic carbon (POC) is a closely related term often used interchangeably with POM. POC refers specifically to the mass of carbon in the particulate organic material, while POM refers to the total mass of the particulate organic matter. In addition to carbon, POM includes the mass of the other elements in the organic matter, such as nitrogen, oxygen and hydrogen. In this sense POC is a component of POM, and there is typically about twice as much POM as POC. Many statements that can be made about POM apply equally to POC, and much of what is said in this article about POM could equally be said of POC. Particulate organic matter is sometimes called suspended organic matter, macroorganic matter, or coarse-fraction organic matter. When land samples are isolated by sieving or filtration, this fraction includes partially decomposed detritus and plant material, pollen, and other materials. When sieving to determine POM content, consistency is crucial because the isolated size fractions depend on the force of agitation. POM is readily decomposable, serving many soil functions and supplying terrestrial material to water bodies. It is a source of food for both soil organisms and aquatic organisms and provides nutrients for plants. In water bodies, POM can contribute substantially to turbidity, limiting photic depth, which can suppress primary productivity. POM also enhances soil structure, leading to increased water infiltration, aeration and resistance to erosion. Soil management practices, such as tillage and compost/manure application, alter the POM content of soil and water. == Overview == Particulate organic carbon (POC) is operationally defined as all
{ "page_id": 49090922, "source": null, "title": "Particulate organic matter" }
combustible, non-carbonate carbon that can be collected on a filter. The oceanographic community has historically used a variety of filters and pore sizes, most commonly 0.7, 0.8, or 1.0 μm glass or quartz fiber filters. The biomass of living zooplankton is intentionally excluded from POC through the use of a pre-filter or specially designed sampling intakes that repel swimming organisms. Sub-micron particles, including most marine prokaryotes, which are 0.2–0.8 μm in diameter, are often not captured but should be considered part of POC rather than dissolved organic carbon (DOC), which is usually operationally defined as < 0.2 μm. Typically POC is considered to comprise suspended and sinking particles ≥ 0.2 μm in size, which therefore includes biomass from living microbial cells, detrital material including dead cells, fecal pellets, other aggregated material, and terrestrially derived organic matter. Some studies further divide POC operationally based on its sinking rate or size, with particles ≥ 51 μm sometimes equated to the sinking fraction. Both DOC and POC play major roles in the carbon cycle, but POC is the major pathway by which organic carbon produced by phytoplankton is exported – mainly by gravitational settling – from the surface to the deep ocean and eventually to sediments, and it is thus a key component of the biological pump. == Terrestrial ecosystems == === Soil organic matter === Soil organic matter is anything in the soil of biological origin. Carbon is its key component, comprising about 58% by weight. A simple assessment of total organic matter is obtained by measuring organic carbon in soil. Living organisms (including roots) contribute about 15% of the total organic matter in soil and are critical to the operation of the soil carbon cycle. What follows refers to the remaining 85% of the soil organic matter, the non-living component. As shown below, non-living
organic matter in soils can be grouped into four distinct categories on the basis of size, behaviour and persistence. These categories are arranged in order of decreasing ability to decompose, and each contributes to soil health in different ways.
- Dissolved organic matter (DOM): the organic matter that dissolves in soil water. It comprises relatively simple organic compounds (e.g. organic acids, sugars and amino acids) that decompose easily, and it has a turnover time of less than 12 months. Exudates from plant roots (mucilages and gums) are included here.
- Particulate organic matter (POM): the organic matter that retains evidence of its original cellular structure; it is discussed further in the next section.
- Humus: usually the largest proportion of organic matter in soil, contributing 45 to 75%. It typically adheres to soil minerals and plays an important role in structuring soil. Humus is the end product of soil organism activity, is chemically complex, and has no recognisable characteristics of its origin. It is of very small unit size and has a large surface area in relation to its weight; it holds nutrients, has high water-holding capacity and significant cation exchange capacity, and buffers pH change. Humus decomposes quite slowly and persists in soil for decades.
- Resistant organic matter: has a high carbon content and includes charcoal, charred plant materials, graphite and coal. Turnover times are long, estimated in hundreds of years. It is not biologically active but contributes positively to soil structural properties, including water-holding capacity, cation exchange capacity and thermal properties.
=== Role of POM in soils === Particulate organic matter (POM) includes steadily decomposing plant litter and animal faeces, and the detritus from the activity of microorganisms. Most of it continually undergoes decomposition by microorganisms (when conditions are sufficiently
moist) and usually has a turnover time of less than 10 years. Less active parts may take 15 to 100 years to turn over. Where it is still at the soil surface and relatively fresh, particulate organic matter intercepts the energy of raindrops and protects the soil surface from physical damage. As it decomposes, particulate organic matter provides much of the energy required by soil organisms as well as a steady release of nutrients into the soil environment. Nutrients not taken up by soil organisms may be available for plant uptake. The amount of nutrients released (mineralized) during decomposition depends on the biological and chemical characteristics of the POM, such as the C:N ratio. In addition to nutrient release, decomposers colonizing POM play a role in improving soil structure: fungal mycelia entangle soil particles and release sticky, cement-like polysaccharides into the soil, ultimately forming soil aggregates. Soil POM content is affected by organic inputs and the activity of soil decomposers. The addition of organic materials, such as manure or crop residues, typically results in an increase in POM. Conversely, repeated tillage or soil disturbance increases the rate of decomposition by exposing soil organisms to oxygen and organic substrates, ultimately depleting POM. A reduction in POM content is observed when native grasslands are converted to agricultural land. Soil temperature and moisture also affect the rate of POM decomposition. Because POM is a readily available (labile) source of soil nutrients, a contributor to soil structure, and highly sensitive to soil management, it is frequently used as an indicator of soil quality. === Freshwater ecosystems === In poorly managed soils, particularly on sloped ground, erosion and transport of soil sediment rich in POM can contaminate water bodies. Because POM provides a source of energy
and nutrients, rapid build-up of organic matter in water can result in eutrophication. Suspended organic materials can also serve as a potential vector for the pollution of water with fecal bacteria, toxic metals or organic compounds. == Marine ecosystems == Life and particulate organic matter in the ocean have fundamentally shaped the planet. On the most basic level, particulate organic matter can be defined as both living and non-living matter of biological origin with a size of ≥0.2 μm in diameter, including anything from a small bacterium (0.2 μm in size) to blue whales (20 m in size). Organic matter plays a crucial role in regulating global marine biogeochemical cycles and events, from the Great Oxidation Event in Earth's early history to the sequestration of atmospheric carbon dioxide in the deep ocean. Understanding the distribution, characteristics, dynamics, and changes over time of particulate matter in the ocean is hence fundamental in understanding and predicting the marine ecosystem, from food web dynamics to global biogeochemical cycles. === Measuring POM === Optical particle measurements are emerging as an important technique for understanding the ocean carbon cycle, including contributions to estimates of their downward flux, which sequesters carbon dioxide in the deep sea. Optical instruments can be used from ships or installed on autonomous platforms, delivering much greater spatial and temporal coverage of particles in the mesopelagic zone of the ocean than traditional techniques, such as sediment traps. Technologies to image particles have advanced greatly over the last two decades, but the quantitative translation of these immense datasets into biogeochemical properties remains a challenge. In particular, advances are needed to enable the optimal translation of imaged objects into carbon content and sinking velocities. In addition, different devices often measure different optical properties, leading to difficulties in comparing results. 
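The operational cutoffs used throughout this article can be summarized in a short sketch. The 0.2 μm threshold between dissolved and particulate organic carbon, and the rough factor of two between POM and POC, are the approximate values quoted earlier in the article, not universal constants:

```python
def classify_organic_carbon(diameter_um):
    """Operational split used in the text: dissolved organic carbon
    (DOC) below 0.2 um, particulate organic carbon (POC) at or above it."""
    return "POC" if diameter_um >= 0.2 else "DOC"

def pom_from_poc(poc_mass):
    """POM includes nitrogen, oxygen, hydrogen etc. in addition to
    carbon; as noted above, there is typically about twice as much
    POM as POC, so a rough conversion multiplies by 2."""
    return 2.0 * poc_mass

print(classify_organic_carbon(0.05))  # a sub-threshold particle counts as DOC
print(classify_organic_carbon(1.0))   # a small microbial cell counts as POC
print(pom_from_poc(10.0))             # ~20 mass units of POM per 10 of POC
```

Because these cutoffs are operational (set by filter pore size rather than by any physical boundary), studies using different filters can report systematically different POC values for the same water sample.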
=== Ocean primary production ===
Marine primary production can be divided into new production, from allochthonous nutrient inputs to the euphotic zone, and regenerated production, from nutrient recycling in the surface waters. The total new production in the ocean roughly equates to the sinking flux of particulate organic matter to the deep ocean, about 4 × 10⁹ tons of carbon annually.
=== Model of sinking oceanic particles ===
Sinking oceanic particles encompass a wide range of shapes, porosities, ballast and other characteristics. The model shown in the diagram at the right attempts to capture some of the predominant features that influence the shape of the sinking flux profile (red line). The sinking of organic particles produced in the upper sunlit layers of the ocean forms an important limb of the oceanic biological pump, which impacts the sequestration of carbon and the resupply of nutrients in the mesopelagic ocean. Particles raining out from the upper ocean undergo remineralization by bacteria that colonize their surfaces and interiors, leading to an attenuation in the sinking flux of organic matter with depth. The diagram illustrates a mechanistic model for the depth-dependent sinking particulate mass flux constituted by a range of sinking, remineralizing particles. Marine snow varies in shape, size and character, ranging from individual cells to pellets and aggregates, most of which are rapidly colonized and consumed by heterotrophic bacteria, contributing to the attenuation of the sinking flux with depth.
=== Sinking velocity ===
The recorded sinking velocities of particles in the ocean span from negative (particles float toward the surface) to several kilometres per day (as with salp fecal pellets). When considering the sinking velocity of an individual particle, a first approximation can be obtained from Stokes' law (originally derived for spherical, non-porous particles and laminar flow) combined with White's approximation, which suggests that sinking velocity increases
linearly with excess density (the difference from the water density) and with the square of particle diameter (i.e., linearly with the particle area). Building on these expectations, many studies have tried to relate sinking velocity primarily to size, which has been shown to be a useful predictor for particles generated in controlled environments (e.g., roller tanks). However, strong relationships were only observed when all particles were generated using the same water/plankton community. When particles were made by different plankton communities, size alone was a poor predictor (e.g., Diercks and Asper, 1997), strongly supporting the notion that particle densities and shapes vary widely depending on the source material. Packaging and porosity contribute appreciably to determining sinking velocities. On the one hand, adding ballasting materials, such as diatom frustules, to aggregates may increase sinking velocities owing to the increase in excess density. On the other hand, the addition of ballasting mineral particles to marine particle populations frequently leads to smaller, more densely packed aggregates that sink more slowly because of their smaller size. Mucous-rich particles have been shown to float despite their relatively large sizes, whereas oil- or plastic-containing aggregates have been shown to sink rapidly despite containing substances less dense than seawater. In natural environments, particles are formed through different mechanisms, by different organisms, and under varying environmental conditions that affect aggregation (e.g., salinity, pH, minerals), ballasting (e.g., dust deposition, sediment load; van der Jagt et al., 2018) and sinking behaviour (e.g., viscosity). A universal size-to-sinking-velocity conversion is hence impracticable.
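The Stokes' law first approximation described above can be sketched as a short calculation. This is a minimal illustration of the scaling (linear in excess density, quadratic in diameter) for a spherical, non-porous particle in laminar flow; the example particle values are illustrative, not measurements, and White's correction for higher Reynolds numbers is not included.

```python
# First-approximation sinking velocity from Stokes' law for a spherical,
# non-porous particle in laminar flow: w = g * d^2 * Δρ / (18 * μ).

def stokes_velocity(diameter_m: float, excess_density_kg_m3: float,
                    viscosity_pa_s: float = 1.0e-3, g: float = 9.81) -> float:
    """Sinking velocity in m/s. A negative excess density (particle less
    dense than the water) yields a negative, i.e. upward, velocity."""
    return g * diameter_m**2 * excess_density_kg_m3 / (18 * viscosity_pa_s)

# Illustrative case: a 100 µm particle with 50 kg/m^3 excess density.
w = stokes_velocity(100e-6, 50.0)   # m/s
w_per_day = w * 86400               # m/day
```

The quadratic size dependence means that doubling the diameter quadruples the predicted velocity, which is why size has been such a tempting (if ultimately insufficient) predictor.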
=== Role in the lower aquatic food web === Along with dissolved organic matter, POM drives the lower aquatic food web by providing energy in the form of carbohydrates, sugars, and other polymers that can be degraded. POM in water bodies is derived from