Decision making (DM) can be seen as a purposeful choice of action sequences; it also covers control, a purposeful choice of input sequences. As a rule, it operates under randomness, uncertainty and incomplete knowledge. A range of prescriptive theories have been proposed for making optimal decisions under these conditions. They optimise a sequence of decision rules, i.e. mappings of the available knowledge onto possible actions. This sequence is called a strategy or policy. Among these theories, Bayesian DM is a broadly accepted, axiomatically based theory that solves the design of an optimal decision strategy. It describes random, uncertain or incompletely known quantities as random variables, i.e. by a joint probability expressing belief in their possible values. The strategy that minimises the expected loss (or, equivalently, maximises the expected reward) expressing the decision-maker's goals is then taken as the optimal strategy. While the probabilistic description of beliefs is uniquely and deductively driven by the rules for joint probabilities, the composition and decomposition of the loss function have no such universally applicable formal machinery. Fully probabilistic design (of decision strategies or control, FPD) removes this drawback and expresses the DM goals, too, by an "ideal" probability, which assigns high (small) values to desired (undesired) behaviours of the closed DM loop formed by the influenced part of the world and by the employed strategy. FPD has an axiomatic basis and contains Bayesian DM as a restricted special case. [1] [2] FPD has a range of theoretical consequences, [3] [4] and, importantly, has been successfully applied in quite diverse application domains. [5]
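To make the idea concrete, the following is a minimal sketch of FPD for a one-step decision problem, under toy distributions that are not from the source: the randomized strategy is chosen to minimise the Kullback-Leibler divergence of the closed-loop distribution from the ideal distribution.

```python
import numpy as np

# World model: p(y | a) for observation y after action a (toy numbers).
p_world = np.array([[0.9, 0.1],   # p(y | a=0)
                    [0.2, 0.8]])  # p(y | a=1)

# "Ideal" closed-loop distribution over (a, y): high mass on desired
# behaviours, low mass on undesired ones (also illustrative).
ideal = np.array([[0.05, 0.05],
                  [0.10, 0.80]])

def kl_closed_loop(pi):
    """KL divergence of the closed loop p(a, y) = pi(a) p(y|a) from the ideal."""
    joint = pi[:, None] * p_world
    mask = joint > 0
    return np.sum(joint[mask] * np.log(joint[mask] / ideal[mask]))

# Grid search over randomized policies pi(a) = (1 - q, q).
qs = np.linspace(0.001, 0.999, 999)
best_q = min(qs, key=lambda q: kl_closed_loop(np.array([1 - q, q])))
print(f"optimal probability of action a=1: {best_q:.3f}")
```

The grid search is only a stand-in for the closed-form FPD solution; it illustrates that the optimal FPD strategy is generally randomized, unlike the deterministic optimum of standard loss minimisation.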
https://en.wikipedia.org/wiki/Fully_probabilistic_design
A fully switched network is a computer network which uses only network switches rather than Ethernet hubs on Ethernet networks. [1] The switches provide a dedicated connection to each workstation and allow many conversations to occur simultaneously. Before switches, hub-based networks could only transmit data in one direction at a time, an arrangement called half-duplex. A switch removes this restriction: full-duplex communication is maintained, so data can be transmitted in both directions at the same time, and the network is collision-free. [2] Fully switched networks employ either twisted-pair or fiber-optic cabling, both of which use separate conductors for sending and receiving data. [3] In this type of environment, Ethernet nodes can forgo the collision detection process and transmit at will, since they are the only devices that can access the medium. The core function of a switch is to have each workstation communicate only with the switch rather than directly with other workstations, so that data can be sent from workstation to switch and from switch to workstation simultaneously. The purpose is to decongest network flow to the workstations so that each connection can transmit more effectively, receiving only the transmissions specific to its network address. With the network decongested and data flowing in both directions simultaneously, capacity is in effect doubled when two workstations are trading information: if the network speed is 5 Mbit/s, each workstation can simultaneously send and receive data at 5 Mbit/s.
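A small illustrative calculation of the half- versus full-duplex difference described above; the 5 Mbit/s figure comes from the text, the helper function is hypothetical.

```python
LINK_MBPS = 5.0  # link rate quoted in the example above

def aggregate_throughput(full_duplex: bool) -> float:
    """Combined send+receive capacity between two stations, in Mbit/s."""
    # Half-duplex: the two directions share the medium, one at a time.
    # Full-duplex: each direction gets the full link rate simultaneously.
    return 2 * LINK_MBPS if full_duplex else LINK_MBPS

print(aggregate_throughput(full_duplex=False))  # 5.0  (hub, shared medium)
print(aggregate_throughput(full_duplex=True))   # 10.0 (switch, capacity doubled)
```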
https://en.wikipedia.org/wiki/Fully_switched_network
Fulminating gold is a light- and shock-sensitive yellow to yellow-orange amorphous heterogeneous mixture of different polymeric compounds of predominantly gold(III), ammonia, and chlorine that cannot be described by a chemical formula. Here, "fulminating" has its oldest meaning, "explosive" (from Latin fulmen, lightning, from the verb fulgeo, 'I shine'); the material contains no fulminate ions. The best approximate description is that it is the product of partial hydrolysis of the three-dimensional polymer [Au₂(μ-NH₂)(μ₃-NH)₂]Cl. Upon combustion, it produces a purple vapor. The complex has a square planar molecular geometry with a low spin state. [1] It is generally best to avoid accidentally creating this substance by mixing gold(III) chloride or hydroxide salts with ammonia gas or ammonium salts, as it is prone to explosion at even the slightest touch. [2] Fulminating gold was the first high explosive known, and was noted in western alchemy as early as 1585. Sebald Schwaerzer was the first to isolate this compound and comment on its characteristics in his book Chrysopoeia Schwaertzeriana. Schwaerzer's preparation required dissolving a sample of gold in aqua regia, adding ammonium chloride to the saturated solution, precipitating the solution through lead spheres, and drying over oil of tartar. [3] Chemists of the 16th and 17th centuries were very interested in the novelty of an explosive gold compound, and many chemists of the era were injured upon its detonation. Jöns Jacob Berzelius, a leading chemist of the early 19th century, was one such person: a beaker exploded in his hand, injuring it and impairing his eyesight for several years. [4] It was not until Johann Rudolf Glauber in the 17th century that fulminating gold found uses; he used the purple fumes after detonation to plate objects in gold. [5] Later on, it was used in photography because of its light-sensitive nature. [6] In the 18th and 19th centuries, work continued on finding the chemical formula of fulminating gold. Carl Wilhelm Scheele proved that ammonia drove the formation of the complex and that the gas formed upon detonation was primarily nitrogen. Jean Baptiste Dumas went further and found that, in addition to gold and nitrogen, fulminating gold also contained hydrogen and chlorine. He then decomposed a ground sample with copper(II) oxide to find that it was a salt with an ammonium cation and a gold-nitrogen complex as the anion. Ernst Weitz continued studying the compound with state-of-the-art techniques and concluded that fulminating gold was a mixture of "diamido-imido-aurichloride" and 2Au(OH)₃·3NH₃. He had to contend with the poor solubility of the complex in most solvents, but noted that it did dissolve readily in aqueous gold(III), ammonia, and chloride systems. His conclusion on the formula proved to be incorrect, but offered a fair starting point for later scientists. Due to the great interest in fulminating gold in the early and middle eras of chemistry, there are many ways to synthesize it. [7] Not all synthesis routes yield the same product. According to Steinhauser et al. and Ernst Weitz, a very homogeneous sample can be obtained by hydrolysis of [Au(NH₃)₄](NO₃)₃ with Cl⁻.
They have also noted that different synthetic routes, as well as different amounts of ammonia used when precipitating the product, lead to different ratios of Au, N, H, and Cl. Due to its physical and chemical properties, fulminating gold cannot be crystallized by normal methods, making determination of the crystal structure very difficult. From extensive attempts at crystallization by Steinhauser et al. and from vibrational spectroscopy, it has been concluded that fulminating gold is an amorphous mixture of polymeric compounds that are linked via μ-NH₂ and μ₃-NH bridges. It has also been found that fulminating gold is very slightly soluble in acetonitrile and dimethylformamide. [8] Recent EXAFS (Extended X-Ray Absorption Fine Structure) analyses by Joannis Psilitelis have shown that fulminating gold contains a square-planar tetraamminegold(III) cation with either four or one gold atoms in the second coordination sphere. This geometry is supported by the diamagnetic character of fulminating gold: since it has a d⁸ electron configuration and is diamagnetic, it must have a square planar geometry. [9] It is also known that the unusual colouration of the smoke is caused by the presence of heterogeneous gold nanoparticles. [10] Because of the compound's explosive tendency, industrial techniques involving it for extracting and purifying gold are very few. A novel biogas extraction of precious metals from scrapped electronics worked very well, but the formation of fulminating gold and other precious-metal ammines limits its widespread use. [11] However, there are patents and methods that use fulminating gold as an intermediate in a process for turning low-purity gold into high-purity gold for electronics. [12]
https://en.wikipedia.org/wiki/Fulminating_gold
In geometry, the Fulton–MacPherson compactification of the configuration space of n distinct labeled points in a compact complex manifold is a compact complex manifold that contains the configuration space as an open dense subset and is constructed in a canonical way. [1] The notion was introduced by Fulton & MacPherson (1994).
https://en.wikipedia.org/wiki/Fulton–MacPherson_compactification
Fumagillin is a complex biomolecule used as an antimicrobial agent. It was isolated in 1949 from the fungus Aspergillus fumigatus. [1] It was originally used against infections of the microsporidian parasite Nosema apis in honey bees. [citation needed] Some studies found it to be effective against some myxozoan parasites, including Myxobolus cerebralis, an important parasite of fish; however, in the more rigorous tests required for U.S. Food and Drug Administration approval, it was ineffective. [citation needed] There are reports that fumagillin controls Nosema ceranae, [2] which has recently been hypothesized as a possible cause of colony collapse disorder. [3] [4] The latest report, however, has shown it to be ineffective against N. ceranae. [5] Fumagillin is also being investigated as an inhibitor of malaria parasite growth. [6] [7] Fumagillin has been used in the treatment of microsporidiosis. [8] [9] It is also an amebicide. [10] Fumagillin can block blood vessel formation by binding to the enzyme methionine aminopeptidase 2, [11] and for this reason the compound, together with its semisynthetic derivatives, is being investigated as an angiogenesis inhibitor [12] in the treatment of cancer. The company Zafgen conducted clinical trials using the fumagillin analog beloranib for weight loss, [13] but they were unsuccessful. [14] Fumagillin is toxic to erythrocytes in vitro at concentrations greater than 10 μM. [15] Fumagillin and the related fumagillol (its hydrolysis product) have been targets in total synthesis, with several reported successful strategies: racemic, asymmetric, and formal. [16] [17] [18] [19] [20] [21] [22] [23] [24]
https://en.wikipedia.org/wiki/Fumagillin
Fumaric acid or trans-butenedioic acid is an organic compound with the formula HO₂CCH=CHCO₂H. A white solid, fumaric acid occurs widely in nature. It has a fruit-like taste and has been used as a food additive. Its E number is E297. [3] The salts and esters are known as fumarates. Fumarate can also refer to the C₄H₂O₄²⁻ ion (in solution). Fumaric acid is the trans isomer of butenedioic acid, while maleic acid is the cis isomer. It is produced in eukaryotic organisms from succinate in complex II of the electron transport chain via the enzyme succinate dehydrogenase. Fumaric acid is found in fumitory (Fumaria officinalis), bolete mushrooms (specifically Boletus fomentarius var. pseudo-igniarius), lichen, and Iceland moss. Fumarate is an intermediate in the citric acid cycle used by cells to produce energy in the form of adenosine triphosphate (ATP) from food. It is formed by the oxidation of succinate by the enzyme succinate dehydrogenase, and is then converted by the enzyme fumarase to malate. Human skin naturally produces fumaric acid when exposed to sunlight. [4] [5] Fumarate is also a product of the urea cycle. Fumaric acid has been used as a food acidulant since 1946. It is approved for use as a food additive in the EU, [6] the USA, [7] and Australia and New Zealand. [8] As a food additive, it is used as an acidity regulator and can be denoted by the E number E297. It is generally used in beverages and baking powders, for which requirements are placed on purity. Fumaric acid is used in the making of wheat tortillas as a food preservative and as the acid in leavening. [9] It is generally used as a substitute for tartaric acid and occasionally in place of citric acid, at a rate of 1 g of fumaric acid for every ~1.5 g of citric acid, in order to add sourness, similarly to the way malic acid is used. As well as being a component of some artificial vinegar flavors, such as "salt and vinegar" flavored potato chips, [10] it is also used as a coagulant in stove-top pudding mixes. The European Commission Scientific Committee on Animal Nutrition, part of DG Health, found in 2014 that fumaric acid is "practically non-toxic" but that high doses are probably nephrotoxic after long-term use. [11] Fumaric acid was developed as a medicine to treat the autoimmune condition psoriasis in the 1950s in Germany, as a tablet containing three esters, primarily dimethyl fumarate, marketed as Fumaderm by Biogen Idec in Europe. Biogen would later go on to develop the main ester, dimethyl fumarate, as a treatment for multiple sclerosis. In patients with relapsing-remitting multiple sclerosis, the ester dimethyl fumarate (BG-12, Biogen) significantly reduced relapse and disability progression in a phase 3 trial. It activates the Nrf2 antioxidant response pathway, the primary cellular defense against the cytotoxic effects of oxidative stress. [12] Fumaric acid is used in the manufacture of polyester resins and polyhydric alcohols and as a mordant for dyes. It can be used to make 6-methylcoumarin. [13] When fumaric acid is added to their feed, lambs produce up to 70% less methane during digestion. [14] Fumaric acid is produced by catalytic isomerisation of maleic acid in aqueous solutions at low pH. It precipitates from the reaction solution.
Maleic acid is accessible in large volumes as a hydrolysis product of maleic anhydride, produced by catalytic oxidation of benzene or butane. [3] Fumaric acid was first prepared from succinic acid. [15] A traditional synthesis involves oxidation of furfural (from the processing of maize) using chlorate in the presence of a vanadium-based catalyst. [16] The chemical properties of fumaric acid can be anticipated from its component functional groups: this weak acid forms a diester, it undergoes bromination across the double bond, [17] and it is a good dienophile. The oral LD50 is 10 g/kg. [3] [Diagram: intermediates of the citric acid cycle (acetyl-CoA, oxaloacetate, malate, fumarate, succinate, succinyl-CoA, citrate, cis-aconitate, isocitrate, oxalosuccinate, 2-oxoglutarate) and of the urea cycle (carbamoyl phosphate, L-citrulline, L-ornithine, urea, L-aspartate, L-argininosuccinate, L-arginine, fumarate).]
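A trivial helper based on the citric-acid substitution rate quoted above (1 g of fumaric acid per ~1.5 g of citric acid); the exact ratio varies by recipe, so this is illustrative only.

```python
def fumaric_for_citric(citric_g: float, citric_per_fumaric: float = 1.5) -> float:
    """Grams of fumaric acid replacing a given mass of citric acid."""
    return citric_g / citric_per_fumaric

print(fumaric_for_citric(3.0))  # 2.0 g of fumaric acid replaces 3.0 g of citric acid
```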
https://en.wikipedia.org/wiki/Fumaric_acid
Fumarylacetoacetic acid (fumarylacetoacetate) is an intermediate in the metabolism of tyrosine. It is formed through the conversion of maleylacetoacetate into fumarylacetoacetate by the enzyme maleylacetoacetate isomerase. [1]
https://en.wikipedia.org/wiki/Fumarylacetoacetic_acid
Fumiquinazolines are bio-active isolates of Aspergillus. [2] [3]
https://en.wikipedia.org/wiki/Fumiquinazoline
Fumonisin B2 is a fumonisin mycotoxin produced by the fungi Fusarium verticillioides (formerly Fusarium moniliforme) and Aspergillus niger. [1] It is a structural analog of fumonisin B3 and lacks one hydroxy group compared to fumonisin B1. [2] Fumonisin B2 is more cytotoxic than fumonisin B1 and inhibits sphingosine acyltransferase. Fumonisin B2 and other fumonisins frequently contaminate maize and other crops; more recently, it has been shown using LC–MS/MS that FB2 can contaminate coffee beans as well. [3]
https://en.wikipedia.org/wiki/Fumonisin_B2
The Function-Behaviour-Structure ontology – or, for short, the FBS ontology – is an ontology of design objects, i.e. things that have been or can be designed. The Function-Behaviour-Structure ontology conceptualizes design objects in three ontological categories: function (F), behaviour (B), and structure (S). The FBS ontology has been used in design science as a basis for modelling the process of designing as a set of distinct activities. This article relates to the concepts and models proposed by John S. Gero and his collaborators; similar ideas have been developed independently by other researchers. [1] [2] [3] The ontological categories composing the Function-Behaviour-Structure ontology are defined as follows: function (F) is the teleology of the design object ("what it is for"), behaviour (B) comprises the attributes derived from its structure ("what it does"), and structure (S) comprises the object's components and their relationships ("what it consists of"). [4] [5] The three ontological categories are interconnected: function is connected with behaviour, and behaviour is connected with structure; there is no direct connection between function and structure. The Function-Behaviour-Structure ontology is the basis for two frameworks of designing: the FBS framework and its extension, the situated FBS framework. They represent the process of designing as transformations between function, behaviour and structure, and subclasses thereof. The original version of the FBS framework was published by John S. Gero in 1990. [6] It applies the FBS ontology to the process of designing by further articulating the three ontological categories. In this articulation, behaviour (B) is specialised into expected behaviour (Be) (the "desired" behaviour) and behaviour derived from structure (Bs) (the "actual" behaviour). In addition, two further notions are introduced on top of the existing ontological categories: requirements (R), which represent intentions of the client that come from outside the designer, and description (D), which represents a depiction of the design created by the designer. Based on these articulations, the FBS framework proposes eight processes claimed as fundamental in designing: [4] [7] formulation (transforming requirements and function into expected behaviour), synthesis (generating a structure from the expected behaviour), analysis (deriving actual behaviour from the structure), evaluation (comparing expected behaviour with derived behaviour), documentation (producing the design description), and three types of reformulation (modifying the state spaces of structure, expected behaviour, or function, respectively). The eight fundamental processes in the FBS framework are illustrated using a turbocharger design process. The situated FBS framework was developed by John S. Gero and Udo Kannengiesser in 2000 [7] as an extension of the FBS framework to explicitly capture the role of situated cognition, or situatedness, in designing. [8] [9] The basic assumption underpinning the situated FBS framework is that designing involves interactions between three worlds: the external world (composed of representations outside the designer), the interpreted world (built up inside the designer in terms of sensory experiences, percepts and concepts), and the expected world (the world imagined to result from the designer's actions). [4] [5] [7] The three worlds are interconnected by four classes of interaction. The situated FBS framework is the result of merging the three-world model of situatedness with the original FBS framework, by specialising the ontological categories into external, interpreted and expected variants. [4] [5] [7] Twenty processes connect these specialised ontological categories. They elaborate and extend the eight fundamental processes of the FBS framework, providing more descriptive power with regard to the situatedness of designing. The FBS ontology has been used as a basis for modelling designs (the results of designing) and design processes (the activities of designing) in a number of design disciplines, including engineering design, architecture, human-computer interfaces, human-robot interfaces, construction and software design.
[11] [12] [13] [14] [15] [16] [17] [18] [19] While the FBS ontology has been discussed in terms of its completeness, [20] [21] [22] [23] several research groups have extended it to fit the needs of their specific domains. [24] [25] [26] [27] [28] [29] [30] [31] It has also been used as a schema for coding and analysing behavioural studies of designers. [32] [33] [34] [35] [36] Others have applied the FBS ontology to develop an ontology of systems. [31] [19] [37] For this purpose, the three categories of concepts (Function, Behavior, Structure) are expanded to six by adding the concepts of Context, Principle and State; the FBS ontology thus becomes the FCBPSS ontology. In the FCBPSS ontology, the definitions of Function and Structure remain the same as in the FBS ontology, while Behavior, Context, Principle and State receive their own definitions. The six categories of concepts are related: the structure of a system is the foundation, followed by the state, behavior, and function. The principle aggregates states, together with their underlying structure, into the behavior of the structure.
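To make the FBS categories concrete, the following is a minimal sketch of them as data. The process names follow the published FBS literature summarized above; the turbocharger-style labels are illustrative placeholders, not taken from the source.

```python
from dataclasses import dataclass, field

@dataclass
class DesignState:
    F: set = field(default_factory=set)   # function: what the object is for
    Be: set = field(default_factory=set)  # expected ("desired") behaviour
    Bs: set = field(default_factory=set)  # behaviour derived from structure
    S: set = field(default_factory=set)   # structure: components and relations

state = DesignState()
state.F.add("increase engine power")                 # formulation: R -> F -> Be
state.Be.add("boost pressure of 1.5 bar")
state.S.add("compressor wheel, 50 mm")               # synthesis:   Be -> S
state.Bs.add("boost pressure of 1.4 bar (derived)")  # analysis:    S -> Bs

# evaluation: compare expected behaviour with behaviour derived from structure
if state.Be != state.Bs:
    print("expected and derived behaviour differ; reformulation needed")
```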
https://en.wikipedia.org/wiki/Function-Behaviour-Structure_ontology
Function-Spacer-Lipid (FSL) Kode constructs (Kode Technology) are amphipathic, water-dispersible biosurface engineering constructs that can be used to engineer the surface of cells, viruses and organisms, or to modify solutions and non-biological surfaces with bioactives. [1] [2] [3] FSL Kode constructs spontaneously and stably incorporate into cell membranes; constructs with all these features are also known as Kode constructs. [1] The process of modifying surfaces with FSL Kode constructs is known as "koding", and the resultant "koded" cells, [1] viruses and liposomes are respectively known as kodecytes [1] [4] and kodevirions. [1] [5] All living surfaces are decorated with a diverse range of complex molecules, which are key modulators of chemical communications and of other functions such as protection, adhesion, infectivity, apoptosis, etc. Function-Spacer-Lipid (FSL) Kode constructs can be synthesized to mimic the bioactive components present on biological surfaces, and then to re-present them in novel ways. [1] [2] The architecture of an FSL Kode construct, as implicit in the name, consists of three components: a functional head group, a spacer, and a lipid tail. [3] This structure is analogous to a Lego minifigure in that both have three structural components, with each component having a separate purpose. [1] [2] In the examples shown in all the figures, a Lego 'minifig' has been used for the analogy; it should be appreciated that this is merely a representation, and the true structural similarity between Lego minifigures and FSL Kode constructs is limited (fig 1). The functional group of an FSL is equivalent to a Lego minifigure head, with both being at the extremity and carrying the characteristic functional components. The spacer of the FSL is equivalent to the body of the Lego minifigure, and the arms of the minifigure represent substitutions that may be engineered into the chemical makeup of the spacer. The lipid of the FSL anchors it to lipid membranes and gives the FSL construct its amphipathic nature, which can cause it to self-assemble; because the lipid tail can act directly as an anchor, it is analogous to the legs of a Lego minifigure. The functional group, the spacer and the lipid tail of the FSL Kode construct can each be individually designed, resulting in FSL Kode constructs with specific biological functions. [2] The functional head group is usually the bioactive component of the construct, and the various spacers and lipids affect its presentation, orientation and location on a surface. Critical to the definition of an FSL Kode construct is the requirement that it be dispersible in water and spontaneously and stably incorporate into cell membranes; other lipid bioconjugates that include components similar to FSLs but lack these features are not termed Function-Spacer-Lipid Kode constructs. [2] A large range of functional groups have already been made into FSL Kode constructs. [3] The presentation of the F residue can be multimeric, with controlled and variable spacing, and the mass that can be anchored by an FSL Kode construct can range from 200 Da to more than 1×10⁶ Da. [3] The spacer is an integral part of the FSL Kode construct and gives it several important characteristics, including water dispersibility. [2]
The lipid tail is essential for enabling lipid membrane insertion and retention, and also for giving the construct the amphiphilic character that enables hydrophilic surface coating (due to the formation of bilipid layers). [3] Different membrane lipids that can be used to create FSLs have different membrane physiochemical characteristics and thus can affect the biological function of the FSL; lipids in FSL Kode constructs include DOPE and cholesterol, among others. [2] [Figure: the antigen (flower head) presented in a variety of formats directly on a solid surface, with the construct attaching randomly. The first four examples show monomeric presentation of F: the first with the short 1.9 nm (Ad) spacer, the second and fourth with the longer 7.2 nm CMG2 and 11.5 nm CMG4 spacers respectively; the second and third constructs share the same spacers, but the lipid tail of the third is cholesterol rather than DOPE.] One of the important functions of an FSL construct is that it can optimise the presentation of antigens, both on cell surfaces and on solid-phase membranes. This optimisation is achieved primarily by the spacer, and secondarily by the lipid tail. In a typical immunoassay, the antigen is deposited directly onto the microplate surface and binds to it either in a random fashion or in a preferred orientation, depending on the residues present on the surface of the antigen; usually this deposition process is uncontrolled. In contrast, an FSL Kode construct bound to a microplate presents the antigen away from the surface, in an orientation with a high level of exposure to the environment. Furthermore, typical immunoassays use recombinant peptides rather than discrete peptide antigens. As the recombinant peptide is many times bigger than the epitope of interest, many undesired peptide sequences are also represented on the microplate. These additional sequences may include unwanted microbial-related sequences (as determined by BLAST analysis) that can cause low-level cross-reactivity. Often an immunoassay overcomes this low-level activity by diluting the serum, so that the low-level microbially reactive antibodies are not seen and only high-level specific antibodies produce an interpretable result. In contrast, FSL Kode constructs usually use specifically selected peptide fragments (up to 40 amino acids), [15] [18] thereby overcoming cross-reactivity with microbial sequences and allowing the use of undiluted serum (which increases sensitivity). The F component can be further enhanced by presenting it in multimeric formats with specific spacing; the four types of multimeric format are linear repeating units, linear repeating units with spacing, clusters, and branching (Fig. 4). The FSL Kode construct, by nature of possessing both hydrophobic and hydrophilic regions, is amphiphilic (or amphipathic). This characteristic determines the way in which the construct interacts with surfaces. When present in a solution, the constructs may form simple micelles or adopt more complex bilayer structures, with two simplistic examples shown in Fig. 5a; more complex structures are expected. The actual nature of FSL micelles has not been determined.
However, based on the normal structural behaviour of micelles, it is expected to be determined in part by the combination of functional group, spacer and lipid, together with the temperature, concentration, size and hydrophobicity/hydrophilicity of each FSL Kode construct type. Surface coating will occur via two theoretical mechanisms. The first is direct hydrophobic interaction of the lipid tail with a hydrophobic surface, resulting in a monolayer of FSL at the surface (Fig. 5b): hydrophobic binding of the FSL occurs via its hydrophobic lipid tail interacting directly with the hydrophobic (lipophilic) surface. The second mechanism is the formation of bilayers when the lipid tail is unable to react with a hydrophilic surface. In this case the lipids induce the formation of a bilayer whose outer surface is hydrophilic; this hydrophilic membrane then interacts directly with the hydrophilic surface and will probably encapsulate fibres. This hydrophilic bilayer binding is the expected mechanism by which FSLs are able to bind to fibrous membranes such as paper [19] and glass fibres (Fig. 5c and Fig. 9). After labeling of the surface with the selected F bioactives, the constructs will be present and oriented at the membrane surface. The FSL is expected to be highly mobile within the membrane, and the choice of lipid tail will affect its relative partitioning within the membrane. [2] [7] Unless it shows flip-flop behavior, the construct is expected to remain presented at the surface. However, the modification is not permanent in living cells, and constructs will be lost (consumed) at a rate proportional to the activity at the membrane and the division rate of the cell (with dead cells remaining highly labeled). [1] Additionally, when present in vivo with serum lipids, FSLs will elute from the membrane into the plasma at a rate of about 1% per hour. [5] [10] In fixed or inactive cells (e.g. red cells) stored in serum-free media, the constructs are normally retained. [1] Liposomes are easily koded by simply adding FSL Kode constructs to the preparation, and contacting koded liposomes with microplates or other surfaces can label the microplate surface. Non-biological surface coating occurs via the same two mechanisms: direct hydrophobic interaction of the lipid tail with a hydrophobic surface, resulting in an FSL monolayer, or the formation of bilayers, which probably either encapsulate fibres or bind via the hydrophilic F group; this is the expected mechanism by which FSLs bind to fibrous membranes such as paper [19] and glass fibres. A recent study has found that optimised FSL Kode constructs could, within a few seconds, glycosylate almost any non-biological surface, including metals, glass, plastics, rubbers, and other polymers. [20] FSL constructs have a wide range of uses, and have been used to modify cells, viruses, liposomes and non-biological surfaces. FSL constructs, when in solution (saline) and in contact with cell and virus membranes, will spontaneously incorporate into them. [1] The methodology involves simply preparing a solution of FSL constructs in the range of 1–1000 μg/mL; the actual concentration will depend on the construct and on the quantity of construct required in the membrane.
One part of FSL solution is added to one part of cells (up to a 100% suspension), and they are incubated at a set temperature within the range 4–37 °C (39–99 °F), depending on the temperature compatibility of the cells being modified. The higher the temperature, the faster the rate of FSL insertion into the membrane. For red blood cells, incubation at 37 °C for 2 hours achieves >95% insertion, with at least 50% insertion achieved within 20 minutes. In general, for carbohydrate-based FSLs inserting into red blood cells, insertion for 4 hours at room temperature or 20 hours at 4 °C gives results similar to 1 hour at 37 °C. [1] The resultant kodecytes or kodevirions do not require washing, although washing should be considered if an excess of FSL construct was used in the koding process. FSL Kode constructs have been used for research and development and in diagnostic products, and are currently being investigated as potential therapeutic agents. FSLs have been used to create human red cell kodecytes used to detect and identify blood group allo-antibodies, [13] [14] [15] as ABO sub-group mimics, [11] ABO quality control systems, [4] serologic teaching kits [12] and a syphilis diagnostic. [21] Kodecytes have also demonstrated that FSL-FLRO4 is a suitable reagent for labelling packed red blood cells (PRBC) at any point during routine storage, and may facilitate the development of immunoassays and transfusion models addressing the mechanisms involved in transfusion-related immunomodulation (TRIM). [17] Murine kodecytes have been used experimentally to determine in vivo cell survival [10] and to create model transfusion reactions. [9] [10] Zebrafish kodecytes have been used to observe in vivo cell migration in real time. [16] Kodecytes have been used to create influenza diagnostics. [2] Kodecytes modified with FSL-GB3 could not be infected with HIV. [7] [23] Kodevirions are FSL-modified viruses. Several FSL Kode constructs have been used to label viruses to assist their flow-cytometric visualisation [5] and to track their distribution in real time in animal models. [5] They have also been used to modify the surface of viruses with the intention of targeting them to attack tumors (oncolytic applications). [5] Kodesomes are liposomes that have been decorated with FSL Kode constructs; these have been used to deposit FSL constructs onto microplates to create diagnostic assays, and they also have potential for therapeutic use. [24] FSL constructs can also be used as solutions, in which the construct exists as a clear micellar dispersion. FSL-GB3 as a solution/gel has been used to inhibit HIV infection [7] and to neutralise Shiga toxin. [7] FSL blood group A as a solution has been used to neutralise circulating antibodies in a mouse model and allow an incompatible blood group A (murine kodecyte) transfusion. [9] This model experiment demonstrated the potential of FSLs to neutralise circulating antibody and allow incompatible blood transfusion or organ transplantation. [19] All FSL Kode constructs disperse in water and are therefore compatible with inkjet printers. FSL constructs can be printed with a standard desktop inkjet printer directly onto paper to create immunoassays. [19] An empty ink cartridge is filled with an FSL construct, and words, barcodes, or graphics are printed. A Perspex template is adhered to the surface to create reaction wells.
The method is then a standard EIA procedure, except that serum blocking is not required and undiluted serum can be used. A typical procedure is as follows: add serum, incubate, wash by immersion, add the secondary EIA conjugate, incubate, wash, add NBT/BCIP precipitating substrate and, once developed, stop the reaction by washing (Fig. 9). The result is stable for years.
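As a back-of-the-envelope check on the red-cell insertion times quoted earlier, the following assumes simple first-order insertion kinetics; the source does not state a kinetic model, so this is an illustrative consistency check only.

```python
import math

half_time_min = 20.0              # ~50% insertion within 20 minutes (from the text)
k = math.log(2) / half_time_min   # implied first-order rate constant, per minute

def inserted_fraction(t_min: float) -> float:
    """Fraction inserted after t minutes under the first-order assumption."""
    return 1.0 - math.exp(-k * t_min)

print(f"after 2 h: {inserted_fraction(120):.1%}")  # ~98%, consistent with the quoted >95%
```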
https://en.wikipedia.org/wiki/Function-spacer-lipid_Kode_construct
In engineering, a function is interpreted as a specific process, action or task that a system is able to perform. [1] In the lifecycle of engineering projects, Requirements and Functional specification documents are usually distinguished in sequence: the Requirements document specifies the most important attributes of the requested system, while in the design specification documents the requested functions are frequently realised by physical or software processes and systems. In advertising and marketing of technical products, the number of functions they can perform is often counted and used for promotion. For example, a calculator capable of the basic mathematical operations of addition, subtraction, multiplication, and division would be called a "four-function" model; when other operations are added, for example for scientific, financial, or statistical calculations, advertisers speak of "57 scientific functions", etc. A wristwatch with stopwatch and timer facilities would similarly claim a specified number of functions. To maximise the claim, trivial operations which do not significantly enhance the functionality of a product may be counted.
https://en.wikipedia.org/wiki/Function_(engineering)
A function analysis diagram (FAD) is a method used in engineering design to model and visualize the functions and interactions between components of a system or product. It represents the functional relationships through a diagram consisting of blocks, which represent physical components, and labeled relations/arrows between them, which represent useful or harmful functional interactions. The FAD method was first proposed in a 1997 patent by the company Invention Machine Corporation as part of their TRIZ-based software tools. [1] It has been further developed through research collaborations between academia and industry. [2] [3] FAD modeling is considered more intuitive than traditional function analysis methods like function trees and function structures because it incorporates the physical structure of the product. It captures a richer network of functional relationships than the linear representations of other methods, and the layout of the diagram can mirror the spatial arrangement of components, conveying additional meaning. [2] By explicitly mapping out functional interactions between components, FAD diagrams help capture the rationale for why a product is designed the way it is. Modeling harmful or undesired functions provides a starting point for generating design improvements. [3] FAD diagrams consist of labelled blocks representing the physical components, users, or environmental resources related to the product. The relations between blocks are shown as labelled arrows that describe useful or harmful functional interactions. For example, a piston block can have a "compresses air" relation to a cylinder block. [citation needed] More complex FAD models can be created hierarchically by linking diagrams that focus on different system states or levels of detail. [2] Research has developed techniques for providing overview visualizations of the network of linked FAD diagrams. [3] While natural-language terms are often used for labelling functional interactions in FAD, conventions and shorthands can be defined for recurring relation types to approach a modeling language; examples include shorthand notation for effort and flow transformations in power systems. [3] Intended benefits of FAD modeling have been described in the literature. [2] [3] FAD has been used to model and analyze engineering systems in domains including aerospace, manufacturing, and power systems, and it provides an intuitive representation for sharing and discussing functional knowledge of product designs. [2] [3] While FAD diagrams can be created with general drawing and mapping tools, some engineering design software packages provide specific support for building FAD models.
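A minimal sketch of a FAD represented as a labelled directed graph. Only the piston/cylinder "compresses air" relation comes from the text above; the other component names and relations are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relation:
    source: str            # acting component block
    target: str            # component block acted upon
    function: str          # natural-language label on the arrow
    harmful: bool = False  # useful vs. harmful functional interaction

fad = [
    Relation("piston", "cylinder", "compresses air in"),
    Relation("cylinder", "piston", "guides"),
    Relation("piston", "cylinder", "wears", harmful=True),
]

# Harmful interactions are starting points for design improvement.
for r in fad:
    if r.harmful:
        print(f"{r.source} --{r.function}--> {r.target} (harmful)")
```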
https://en.wikipedia.org/wiki/Function_analysis_diagram
The function block diagram (FBD) is a graphical language for programmable logic controller design [1] that describes the function between input variables and output variables. A function is described as a set of elementary blocks. Input and output variables are connected to blocks by connection lines, and inputs and outputs of the blocks are likewise wired together with connection lines, or links. Single lines may be used to connect two logical points of the diagram, for example an input variable and the input of a block, the output of a block and the input of another block, or the output of a block and an output variable. The connection is oriented, meaning that the line carries associated data from the left end to the right end; the left and right ends of the connection line must be of the same type. A multiple right connection, also called divergence, can be used to broadcast information from its left end to each of its right ends; all ends of the connection must be of the same type. Function Block Diagram is one of five languages for logic or control configuration [2] supported by standard IEC 61131-3 for a control system such as a programmable logic controller (PLC) or a distributed control system (DCS). The other supported languages are ladder logic, sequential function chart, structured text, and instruction list.
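A hedged sketch of FBD dataflow semantics in Python: each assignment plays the role of a connection line carrying data left to right into an elementary block. The start/stop/motor wiring is illustrative, not taken from the standard.

```python
# Input variables
start, stop, sensor_ok = True, False, True

# Elementary blocks wired by "connection lines" (plain assignments)
not_stop = not stop                 # NOT block
run_request = start and not_stop    # AND block
motor = run_request and sensor_ok   # AND block feeding the output variable

print(f"motor output: {motor}")  # True: start pressed, stop released, sensor OK
```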
https://en.wikipedia.org/wiki/Function_block_diagram
In systems engineering, software engineering, and computer science, a function model or functional model is a structured representation of the functions (activities, actions, processes, operations) within the modeled system or subject area. [1] A function model, similar to the activity model or process model, is a graphical representation of an enterprise's functions within a defined scope. The purposes of the function model are to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs. [2] The function model in the fields of systems engineering and software engineering originates in the 1950s and 1960s, but the origin of functional modelling of organizational activity goes back to the late 19th century. In the late 19th century the first diagrams appeared that pictured business activities, actions, processes, or operations, and in the first half of the 20th century the first structured methods for documenting business process activities emerged. One of those methods was the flow process chart, introduced by Frank Gilbreth to members of the American Society of Mechanical Engineers (ASME) in 1921 with the presentation entitled "Process Charts—First Steps in Finding the One Best Way". [3] Gilbreth's tools quickly found their way into industrial engineering curricula. The emergence of the field of systems engineering can be traced back to Bell Telephone Laboratories in the 1940s. [4] The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries to apply the discipline. [5] One of the first to define the function model in this field was the British engineer William Gosling, in his book The design of engineering systems (1962, p. 25). One of the first well-defined function models was the functional flow block diagram (FFBD), developed by the defense-related TRW Incorporated in the 1950s. [7] In the 1960s it was exploited by NASA to visualize the time sequence of events in space systems and flight missions. [8] It is also widely used in classical systems engineering to show the order of execution of system functions. [9] In systems engineering and software engineering a function model is created with a functional modeling perspective. The functional perspective is one of several perspectives possible in business process modelling; other perspectives are, for example, behavioural, organisational or informational. [10] A functional modeling perspective concentrates on describing the dynamic process. The main concept in this modeling perspective is the process; this could be a function, transformation, activity, action, task, etc. A well-known example of a modeling language employing this perspective is that of data flow diagrams, which uses four symbols to describe a process: the process, the flow, the store, and the external entity. With these symbols, a process can be represented as a network; this decomposed process is a DFD, a data flow diagram. In Dynamic Enterprise Modeling a division is made into the Control model, Function Model, Process model and Organizational model. Functional decomposition refers broadly to the process of resolving a functional relationship into its constituent parts in such a way that the original function can be reconstructed from those parts by function composition.
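A minimal sketch of this idea as pure functions, in the spirit of the library-management example mentioned below; the module boundaries and function names are hypothetical.

```python
def items_on_loan(inventory: dict) -> list:
    """Inventory module: pure function from inventory state to loaned items."""
    return [item for item, patron in inventory.items() if patron is not None]

def days_overdue(due_day: int, today: int) -> int:
    """Pure helper: date arithmetic with no hidden state."""
    return max(0, today - due_day)

def late_fee(due_day: int, today: int, daily_rate: float = 0.25) -> float:
    """Fee assessment module: reconstructed by composing the pure helper."""
    return days_overdue(due_day, today) * daily_rate

print(items_on_loan({"Dune": "patron-17", "Emma": None}))  # ['Dune']
print(late_fee(due_day=100, today=107))                    # 1.75
```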
In general, this process of decomposition is undertaken either to gain insight into the identity of the constituent components, or to obtain a compressed representation of the global function, a task which is feasible only when the constituent processes possess a certain level of modularity. Functional decomposition has a prominent role in computer programming, where a major goal is to modularize processes to the greatest extent possible. For example, a library management system may be broken up into an inventory module, a patron information module, and a fee assessment module. In the early decades of computer programming, this was manifested as the "art of subroutining", as it was called by some prominent practitioners. Functional decomposition of engineering systems is a method for analyzing engineered systems. The basic idea is to try to divide a system in such a way that each block of the block diagram can be described without an "and" or "or" in the description. This exercise forces each part of the system to have a pure function. When a system is composed of pure functions, they can be reused or replaced. A usual side effect is that the interfaces between blocks become simple and generic; since the interfaces usually become simple, it is easier to replace a pure function with a related, similar function. The functional approach is extended in multiple diagrammatic techniques and modeling notations. This section gives an overview of the important techniques in chronological order. A functional block diagram is a block diagram that describes the functions and interrelationships of a system. It can picture the functions of a system, their inputs and outputs, and the relationships between them, and it can use additional schematic symbols to show particular properties. [11] Specific function block diagrams are the classic functional flow block diagram and the Function Block Diagram (FBD) used in the design of programmable logic controllers. The functional flow block diagram (FFBD) is a multi-tier, time-sequenced, step-by-step flow diagram of the system's functional flow. [14] The diagram was developed in the 1950s and is widely used in classical systems engineering. The functional flow block diagram is also referred to as a functional flow diagram, functional block diagram, or functional flow. [15] Functional flow block diagrams (FFBD) usually define the detailed, step-by-step operational and support sequences for systems, but they are also used effectively to define processes in developing and producing systems; software development processes also use FFBDs extensively. In the system context, the functional flow steps may include combinations of hardware, software, personnel, facilities, and/or procedures. In the FFBD method, the functions are organized and depicted by their logical order of execution. Each function is shown with respect to its logical relationship to the execution and completion of other functions. A node labeled with the function name depicts each function. Arrows from left to right show the order of execution of the functions, and logic symbols represent sequential or parallel execution. [16] HIPO, for hierarchical input-process-output, is a popular 1970s systems analysis design aid and documentation technique [17] for representing the modules of a system as a hierarchy and for documenting each module. [18] It was used to develop requirements, construct the design, and support implementation of an expert system to demonstrate automated rendezvous.
Verification was then conducted systematically because of the method of design and implementation. [19] The overall design of the system is documented using HIPO charts or structure charts. The structure chart is similar in appearance to an organizational chart, but has been modified to show additional detail; structure charts can be used to display several types of information, but are used most commonly to diagram either data structures or code structures. [18] The N2 chart is a diagram in the shape of a matrix, representing functional or physical interfaces between system elements. It is used to systematically identify, define, tabulate, design, and analyze functional and physical interfaces. It applies to system interfaces and to hardware and/or software interfaces. [14] The N2 diagram has been used extensively to develop data interfaces, primarily in software, but it can also be used to develop hardware interfaces. The basic N2 chart is shown in Figure 2: the system functions are placed on the diagonal, and the remainder of the squares in the N × N matrix represent the interface inputs and outputs. [20] Structured Analysis and Design Technique (SADT) is a software engineering methodology for describing systems as a hierarchy of functions, a diagrammatic notation for constructing a sketch of a software application. It offers building blocks to represent entities and activities, and a variety of arrows to relate boxes. These boxes and arrows have an associated informal semantics. [21] SADT can be used as a functional analysis tool of a given process, using successive levels of detail. The SADT method allows one to define user needs for IT developments, and is used in industrial information systems, but also to explain and present an activity's manufacturing processes and procedures. [22] SADT supplies a specific functional view of any enterprise by describing the functions and their relationships in a company. These functions fulfill the objectives of a company, such as sales, order planning, product design, part manufacturing, and human resource management. SADT can depict simple functional relationships and can reflect data and control flow relationships between different functions. The IDEF0 formalism is based on SADT, developed by Douglas T. Ross in 1985. [23] IDEF0 is a function modeling methodology for describing manufacturing functions, which offers a functional modeling language for the analysis, development, re-engineering, and integration of information systems, business processes, or software engineering analysis. [24] It is part of the IDEF family of modeling languages in the field of software engineering, and is built on the functional modeling language SADT. The IDEF0 functional modeling method is designed to model the decisions, actions, and activities of an organization or system. [25] It was derived from the established graphic modeling language structured analysis and design technique (SADT), developed by Douglas T. Ross and SofTech, Inc. In its original form, IDEF0 includes both a definition of a graphical modeling language (syntax and semantics) and a description of a comprehensive methodology for developing models. [1] The US Air Force commissioned the SADT developers to develop a function model method for analyzing and communicating the functional perspective of a system.
IDEF0 should assist in organizing system analysis and promote effective communication between the analyst and the customer through simplified graphical devices. [ 25 ] Axiomatic design is a top down hierarchical functional decomposition process used as a solution synthesis framework for the analysis, development, re-engineering, and integration of products, information systems, business processes or software engineering solutions. [ 26 ] Its structure is suited mathematically to analyze coupling between functions in order to optimize the architectural robustness of potential functional solution models. In the field of systems and software engineering numerous specific function and functional models and close related models have been defined. Here only a few general types will be explained. A Business Function Model (BFM) is a general description or category of operations performed routinely to carry out an organization's mission. They "provide a conceptual structure for the identification of general business functions ". [ 27 ] It can show the critical business processes in the context of the business area functions. The processes in the business function model must be consistent with the processes in the value chain models. Processes are a group of related business activities performed to produce an end product or to provide a service. Unlike business functions that are performed on a continual basis, processes are characterized by the fact that they have a specific beginning and an end point marked by the delivery of a desired output. The figure on the right depicts the relationship between the business processes, business functions, and the business area's business reference model. [ 28 ] Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a workflow . BPMN was developed by Business Process Management Initiative (BPMI), and is currently maintained by the Object Management Group since the two organizations merged in 2005. The current version of BPMN is 2.0. [ 29 ] The Business Process Model and Notation (BPMN) specification provides a graphical notation for specifying business processes in a Business Process Diagram (BPD). [ 30 ] The objective of BPMN is to support business process management for both technical users and business users by providing a notation that is intuitive to business users yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation to the underlying constructs of execution languages, particularly BPEL4WS . [ 31 ] A Business reference model is a reference model, concentrating on the functional and organizational aspects of the core business of an enterprise , service organization or government agency . In enterprise engineering a business reference model is part of an Enterprise Architecture Framework or Architecture Framework , which defines how to organize the structure and views associated with an Enterprise Architecture . A reference model in general is a model of something that embodies the basic goal or idea of something and can then be looked at as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that perform them. Other types of business reference model can also depict the relationship between the business processes , business functions, and the business area 's business reference model. 
These reference models can be constructed in layers and offer a foundation for the analysis of service components, technology, data, and performance. The Operator Function Model (OFM) is proposed as an alternative to traditional task analysis techniques used by human factors engineers. An operator function model attempts to represent in mathematical form how an operator might decompose a complex system into simpler parts and coordinate control actions and system configurations so that acceptable overall system performance is achieved. The model represents basic issues of knowledge representation, information flow, and decision making in complex systems. Miller (1985) suggests that the network structure can be thought of as a possible representation of an operator's internal model of the system, plus a control structure which specifies how the model is used to solve the decision problems that comprise operator control functions. [32]
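A hedged sketch of the N2 chart described earlier: system functions on the diagonal, interface inputs and outputs in the off-diagonal cells. The three functions and their interface data are illustrative, not from the source.

```python
import numpy as np

functions = ["SENSE", "COMPUTE", "ACTUATE"]
n = len(functions)
chart = np.full((n, n), "-", dtype=object)
for i, name in enumerate(functions):
    chart[i, i] = name                 # functions on the diagonal

chart[0, 1] = "raw measurements"       # sense   -> compute
chart[1, 2] = "commands"               # compute -> actuate
chart[2, 0] = "feedback"               # actuate -> sense

for row in chart:
    print(" | ".join(f"{cell:^18}" for cell in row))
```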
https://en.wikipedia.org/wiki/Function_model
In mathematical analysis, and in applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers ℝ, or a subset of ℝ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers. Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have the structure of an ℝ-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector space, the set of matrices of real numbers of a given size, or an ℝ-algebra, such as the complex numbers or the quaternions. The ℝ-vector space structure of the codomain induces a structure of ℝ-vector space on the functions; if the codomain has a structure of ℝ-algebra, the same is true for the functions. The image of a function of a real variable is a curve in the codomain; in this context, a function that defines a curve is called a parametric equation of the curve. When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions; this is often used in applications. A real function is a function from a subset of ℝ to ℝ, where ℝ denotes as usual the set of real numbers. That is, the domain of a real function is a subset of ℝ, and its codomain is ℝ. It is generally assumed that the domain contains an interval of positive length. For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain; one says that these functions are defined, continuous and differentiable everywhere. This is the case of polynomial functions, the sine and cosine, and the exponential function. Some functions are defined everywhere but not continuous at some points; for example, the Heaviside step function. Some functions are defined and continuous everywhere, but not everywhere differentiable; for example, the absolute value. Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined; for example, the rational functions and the logarithm. Some functions are continuous in their whole domain and not differentiable at some points; this is the case of the square root, which is continuous on its domain but not differentiable at 0. A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, producing another real number, the value of the function, commonly denoted f(x). For simplicity, in this article a real-valued function of a real variable will be called simply a function; to avoid ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variable (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of ℝ, the domain of the function, which is always supposed to contain an interval of positive length.
In other words, a real-valued function of a real variable is a function such that its domain X is a subset of ℝ that contains an interval of positive length. A simple example of a function in one variable could be f : [0, ∞) → ℝ, f(x) = √x, which is the square root of x . The image of a function f(x) is the set of all values of f when the variable x runs in the whole domain of f . For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function . The preimage of a given real number y is the set of the solutions of the equation y = f ( x ). The domain of a function of a real variable is a subset of ℝ that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X , one gets formally a different function, the restriction of f to Y , which is denoted f |_Y . In practice, it is often not harmful to identify f and f |_Y , and to omit the subscript |_Y . Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation . This means that it is often not worthwhile to explicitly define the domain of a function of a real variable. The arithmetic operations may be applied to the functions in the following way: (f + g)(x) = f(x) + g(x), (f − g)(x) = f(x) − g(x) and (fg)(x) = f(x)g(x), for every x at which both f and g are defined. It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals (ℝ-algebras). One may similarly define 1/f : x ↦ 1/f(x), which is a function only if the set of the points x in the domain of f such that f(x) ≠ 0 contains an open subset of ℝ. This constraint implies that the above two algebras are not fields . Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for functions of one or several real variables a rather long time before the formal definition of a topological space and of a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. For defining continuity, it is useful to consider the distance function of ℝ, which is an everywhere defined function of 2 real variables: d(x, y) = |x − y|. A function f is continuous at a point a which is interior to its domain, if, for every positive real number ε, there is a positive real number δ such that |f(x) − f(a)| < ε for all x such that d(x, a) < δ. In other words, δ may be chosen small enough for the image by f of the interval of radius δ centered at a to be contained in the interval of length 2ε centered at f(a). A function is continuous if it is continuous at every point of its domain. The limit of a real-valued function of a real variable is defined as follows. [ 1 ] Let a be a point in the topological closure of the domain X of the function f .
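As a quick worked instance of this definition (an illustration added here; the choice of δ is my own): for f(x) = √x at a point a > 0 one may take δ = ε√a, since

\[ |\sqrt{x}-\sqrt{a}|=\frac{|x-a|}{\sqrt{x}+\sqrt{a}}\le\frac{|x-a|}{\sqrt{a}}<\varepsilon \quad\text{whenever } d(x,a)<\delta=\varepsilon\sqrt{a}. \]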
The function f has a limit L when x tends toward a , denoted lim_{x→a} f(x) = L, if the following condition is satisfied: for every positive real number ε > 0, there is a positive real number δ > 0 such that |f(x) − L| < ε for all x in the domain with d(x, a) < δ. If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a . In this case, we have lim_{x→a} f(x) = f(a). When a is in the boundary of the domain of f , and if f has a limit at a , the latter formula allows one to "extend by continuity" the domain of f to a . One can collect a number of functions, each of a real variable, say y_i = f_i(x), into a vector parametrized by x : y(x) = (f_1(x), f_2(x), …, f_n(x)). The derivative of the vector y is the vector of the derivatives of f_i(x) for i = 1, 2, …, n : dy/dx = (df_1/dx, df_2/dx, …, df_n/dx). One can also perform line integrals along a space curve parametrized by x , with position vector r = r(x), by integrating with respect to the variable x : ∫_a^b F(r(x)) · (dr/dx) dx, where · is the dot product , and x = a and x = b are the start and endpoints of the curve. With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus , integration by parts , and Taylor's theorem . Evaluating a mixture of integrals and derivatives can be done by using the theorem of differentiation under the integral sign . A real-valued implicit function of a real variable is not written in the form " y = f(x) ". Instead, the mapping is from the space ℝ² to the zero element in ℝ (just the ordinary zero 0): φ(x, y) = 0, an equation in the variables. Implicit functions are a more general way to represent functions, since if y = f(x), then we can always define φ(x, y) = y − f(x) = 0, but the converse is not always possible, i.e. not all implicit functions have the form of this equation. Given the functions r_1 = r_1(t), r_2 = r_2(t), …, r_n = r_n(t), all of a common variable t , so that r(t) = (r_1(t), r_2(t), …, r_n(t)), the parametrized n -tuple describes a one-dimensional space curve . At a point r(t = c) = a = (a_1, a_2, …, a_n) for some constant t = c , the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r_1(t), r_2(t), …, r_n(t) with respect to t , namely by the parametrization p(s) = a + s · dr/dt|_{t=c}. The equation of the n -dimensional hyperplane normal to the tangent line at r = a is, in terms of the dot product : (p − a) · dr/dt|_{t=c} = 0, where p = (p_1, p_2, …, p_n) are points in the plane , not on the space curve. The physical and geometric interpretation of dr(t)/dt is the " velocity " of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t ; it is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c , the space curve has a tangent vector dr(t)/dt|_{t=c}, and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c . Any vector in this plane (p − a) must be normal to dr(t)/dt|_{t=c}. Similarly, d²r(t)/dt² is the " acceleration " of the particle, and is a vector normal to the curve directed along the radius of curvature .
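The tangent-line and normal-hyperplane formulas above can be checked numerically. The following sketch is illustrative only: the helix, the step size h, and all names are my own choices, and the derivative is approximated by finite differences rather than computed exactly.

```python
import numpy as np

def r(t):
    # The space curve r(t) = (cos t, sin t, t): a helix in R^3.
    return np.array([np.cos(t), np.sin(t), t])

def dr(t, h=1e-6):
    # Central-difference approximation to dr/dt, the tangent ("velocity") vector.
    return (r(t + h) - r(t - h)) / (2 * h)

c = 1.0
a = r(c)    # point on the curve at t = c
v = dr(c)   # tangent vector at that point

# Points of the tangent line: p(s) = a + s * v.
tangent_line = lambda s: a + s * v

# The normal hyperplane at a consists of the points p with (p - a) . v = 0;
# here is one such point, built from a vector orthogonal to v.
p = a + np.array([-v[1], v[0], 0.0])
print(np.dot(p - a, v))   # ~ 0, confirming p lies in the normal plane
```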
A matrix can also be a function of a single variable. For example, the rotation matrix in 2D, R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]], is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity , the Lorentz transformation matrix for a pure boost (without rotations), say along the x-axis, Λ(β) = [[γ, −γβ, 0, 0], [−γβ, γ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]] with γ = 1/√(1 − β²), is a function of the boost parameter β = v / c , in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light , a constant. Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space . In these spaces, addition, scalar multiplication and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator . This occurs, for instance, in the general time-dependent Schrödinger equation , iħ (d/dt)|Ψ(t)⟩ = Ĥ|Ψ(t)⟩, where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces. A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If f(x) is such a complex-valued function, it may be decomposed as f(x) = g(x) + i h(x), where g and h are real-valued functions. In other words, the study of the complex-valued functions reduces easily to the study of pairs of real-valued functions. The cardinality of the set of real-valued functions of a real variable, ℝ^ℝ = {f : ℝ → ℝ}, is ℶ₂ = 2^𝔠, which is strictly larger than the cardinality of the continuum 𝔠 (the cardinality of the set of all real numbers). This fact is easily verified by cardinal arithmetic: card(ℝ^ℝ) = card(ℝ)^card(ℝ) = 𝔠^𝔠 = (2^ℵ₀)^𝔠 = 2^(ℵ₀·𝔠) = 2^𝔠. Furthermore, if X is a set such that 2 ≤ card(X) ≤ 𝔠, then the cardinality of the set X^ℝ = {f : ℝ → X} is also 2^𝔠, since 2^𝔠 = card(2^ℝ) ≤ card(X^ℝ) ≤ card(ℝ^ℝ) = 2^𝔠. However, the set of continuous functions C⁰(ℝ) = {f : ℝ → ℝ : f continuous} has a strictly smaller cardinality, the cardinality of the continuum, 𝔠. This follows from the fact that a continuous function is completely determined by its values on a dense subset of its domain. [ 2 ] Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic: card(C⁰(ℝ)) ≤ card(ℝ^ℚ) = (2^ℵ₀)^ℵ₀ = 2^(ℵ₀·ℵ₀) = 2^ℵ₀ = 𝔠.
On the other hand, since there is a clear bijection between ℝ and the set of constant functions {f : ℝ → ℝ : f(x) ≡ x₀}, which forms a subset of C⁰(ℝ), card(C⁰(ℝ)) ≥ 𝔠 must also hold. Hence, card(C⁰(ℝ)) = 𝔠.
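To make the matrix-valued functions discussed earlier concrete, here is a minimal numerical sketch (my own illustration): it checks that 2-D rotations compose by adding angles, and that Lorentz boosts, restricted to the 2 × 2 (t, x) block, compose under relativistic velocity addition.

```python
import numpy as np

def R(theta):
    # 2-D rotation matrix as a matrix-valued function of the angle.
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def boost(beta):
    # (t, x) block of the Lorentz boost as a function of beta = v/c.
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma,         -gamma * beta],
                     [-gamma * beta,  gamma       ]])

# Rotations compose by adding angles: R(a) R(b) = R(a + b).
a, b = 0.3, 0.5
assert np.allclose(R(a) @ R(b), R(a + b))

# Boosts compose by relativistic velocity addition:
# B(b1) B(b2) = B((b1 + b2) / (1 + b1 * b2)).
b1, b2 = 0.4, 0.2
assert np.allclose(boost(b1) @ boost(b2), boost((b1 + b2) / (1 + b1 * b2)))
print("both composition laws verified")
```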
https://en.wikipedia.org/wiki/Function_of_a_real_variable
The theory of functions of several complex variables is the branch of mathematics dealing with functions defined on the complex coordinate space ℂ^n, that is, n -tuples of complex numbers . The field dealing with the properties of these functions is called several complex variables (and analytic space ), which the Mathematics Subject Classification has as a top-level heading. As in complex analysis of functions of one variable , which is the case n = 1, the functions studied are holomorphic or complex analytic so that, locally, they are power series in the variables z_i. Equivalently, they are locally uniform limits of polynomials , or locally square-integrable solutions to the n -dimensional Cauchy–Riemann equations . [ 1 ] [ 2 ] [ 3 ] For one complex variable, every domain [ note 1 ] (D ⊂ ℂ) is the domain of holomorphy of some function; in other words, every domain has a function for which it is the domain of holomorphy. [ 4 ] [ 5 ] For several complex variables this is not the case: there exist domains (D ⊂ ℂ^n, n ≥ 2) that are not the domain of holomorphy of any function, so a domain is not always a domain of holomorphy, and the domain of holomorphy is one of the themes in this field. [ 4 ] Patching the local data of meromorphic functions , i.e. the problem of creating a global meromorphic function from zeros and poles, is called the Cousin problem. Also, the interesting phenomena that occur in several complex variables are fundamentally important to the study of compact complex manifolds and complex projective varieties (ℂP^n) [ 6 ] and have a different flavour from complex analytic geometry in ℂ^n or on Stein manifolds ; the former is much closer to the study of algebraic varieties, that is, to algebraic geometry, than to complex analytic geometry. Many examples of such functions were familiar in nineteenth-century mathematics: abelian functions , theta functions , and some hypergeometric series , and also, as an example of an inverse problem, the Jacobi inversion problem . [ 7 ] Naturally, a function of one variable that depends on some complex parameter is also a candidate. The theory, however, for many years did not become a full-fledged field of mathematical analysis , since its characteristic phenomena had not been uncovered. The Weierstrass preparation theorem would now be classed as commutative algebra ; it justified the local picture, ramification , which addresses the generalization of the branch points of Riemann surface theory. With work of Friedrich Hartogs , Pierre Cousin , E. E. Levi , and of Kiyoshi Oka in the 1930s, a general theory began to emerge; others working in the area at the time were Heinrich Behnke , Peter Thullen , Karl Stein , Wilhelm Wirtinger and Francesco Severi . Hartogs proved some basic results, such as that every isolated singularity is removable for every analytic function f : ℂ^n → ℂ whenever n > 1. Naturally the analogues of contour integrals are harder to handle: when n = 2 an integral surrounding a point should be over a three-dimensional manifold (since we are in four real dimensions), while iterating contour (line) integrals over two separate complex variables should come to a double integral over a two-dimensional surface.
This means that the residue calculus will have to take a very different character. After 1945 important work in France, in the seminar of Henri Cartan , and in Germany with Hans Grauert and Reinhold Remmert , quickly changed the picture of the theory. A number of issues were clarified, in particular that of analytic continuation . Here a major difference is evident from the one-variable theory: while for every open connected set D in ℂ we can find a function that will nowhere continue analytically over the boundary, that cannot be said for n > 1. In fact the D of that kind are rather special in nature (especially in complex coordinate spaces ℂ^n and Stein manifolds, satisfying a condition called pseudoconvexity ). The natural domains of definition of functions, continued to the limit, are called Stein manifolds , and their nature was to make sheaf cohomology groups vanish. On the other hand, the Grauert–Riemenschneider vanishing theorem is known as a similar result for compact complex manifolds, and the Grauert–Riemenschneider conjecture is a special case of the conjecture of Narasimhan. [ 4 ] In fact it was the need to put (in particular) the work of Oka on a clearer basis that led quickly to the consistent use of sheaves for the formulation of the theory (with major repercussions for algebraic geometry , in particular from Grauert's work). From this point onwards there was a foundational theory, which could be applied to analytic geometry , [ note 2 ] automorphic forms of several variables, and partial differential equations . The deformation theory of complex structures and complex manifolds was described in general terms by Kunihiko Kodaira and D. C. Spencer . The celebrated paper GAGA of Serre [ 8 ] pinned down the crossover point from géométrie analytique to géométrie algébrique. C. L. Siegel was heard to complain that the new theory of functions of several complex variables had few functions in it, meaning that the special function side of the theory was subordinated to sheaves. The interest for number theory , certainly, is in specific generalizations of modular forms . The classical candidates are the Hilbert modular forms and Siegel modular forms . These days these are associated to algebraic groups (respectively the Weil restriction from a totally real number field of GL(2) , and the symplectic group ), for which it happens that automorphic representations can be derived from analytic functions. In a sense this does not contradict Siegel; the modern theory has its own, different directions. Subsequent developments included the hyperfunction theory and the edge-of-the-wedge theorem , both of which had some inspiration from quantum field theory . There are a number of other fields, such as Banach algebra theory, that draw on several complex variables. The complex coordinate space ℂ^n is the Cartesian product of n copies of ℂ, and when ℂ^n is a domain of holomorphy, ℂ^n can be regarded as a Stein manifold , and a more generalized Stein space. ℂ^n is also considered to be a complex projective variety , a Kähler manifold , [ 9 ] etc. It is also an n -dimensional vector space over the complex numbers , which gives its dimension 2n over ℝ.
[ note 3 ] Hence, as a set and as a topological space , ℂ^n may be identified with the real coordinate space ℝ^{2n}, and its topological dimension is thus 2n. In coordinate-free language, any vector space over the complex numbers may be thought of as a real vector space of twice as many dimensions, where a complex structure is specified by a linear operator J (such that J² = −I) which defines multiplication by the imaginary unit i . Any such space, as a real space, is oriented . On the complex plane thought of as a Cartesian plane , multiplication by a complex number w = u + iv may be represented by the real matrix [[u, −v], [v, u]], with determinant u² + v². Likewise, if one expresses any finite-dimensional complex linear operator as a real matrix (which will be composed of 2 × 2 blocks of the aforementioned form), then its determinant equals the square of the absolute value of the corresponding complex determinant. It is a non-negative number, which implies that the (real) orientation of the space is never reversed by a complex operator. The same applies to Jacobians of holomorphic functions from ℂ^n to ℂ^n. A function f defined on a domain D ⊂ ℂ^n and with values in ℂ is said to be holomorphic at a point z ∈ D if it is complex-differentiable at this point, in the sense that there exists a complex linear map L : ℂ^n → ℂ such that f(z + h) = f(z) + L(h) + o(‖h‖). The function f is said to be holomorphic if it is holomorphic at all points of its domain of definition D . If f is holomorphic, then all the partial maps z ↦ f(z_1, …, z_{i−1}, z, z_{i+1}, …, z_n) are holomorphic as functions of one complex variable : we say that f is holomorphic in each variable separately. Conversely, if f is holomorphic in each variable separately, then f is in fact holomorphic: this is known as Hartogs's theorem , or as Osgood's lemma under the additional hypothesis that f is continuous .
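A minimal numerical sketch of separate holomorphy (my own illustration, written in terms of the Wirtinger derivative ∂/∂z̄_j = (∂/∂x_j + i ∂/∂y_j)/2 introduced just below): it approximates ∂f/∂z̄_j by finite differences for a function that is holomorphic in each variable, and the values come out near zero, as the Cauchy–Riemann equations require.

```python
import numpy as np

def f(z1, z2):
    # A polynomial-plus-exponential function, holomorphic on C^2.
    return z1**2 * z2 + np.exp(z2)

def wirtinger_bar(f, z, j, h=1e-6):
    # Numerical Wirtinger derivative  df/d(conj z_j) = (d/dx + i d/dy)/2,
    # where z_j = x + i y; it vanishes iff f satisfies Cauchy-Riemann in z_j.
    e = np.zeros(len(z), dtype=complex); e[j] = 1.0
    dx = (f(*(z + h * e)) - f(*(z - h * e))) / (2 * h)
    dy = (f(*(z + 1j * h * e)) - f(*(z - 1j * h * e))) / (2 * h)
    return 0.5 * (dx + 1j * dy)

z = np.array([0.3 + 0.4j, -0.1 + 0.2j])
print(abs(wirtinger_bar(f, z, 0)))  # ~ 0: holomorphic in z1
print(abs(wirtinger_bar(f, z, 1)))  # ~ 0: holomorphic in z2
```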
In one complex variable, a function f : ℂ → ℂ defined on the plane is holomorphic at a point p ∈ ℂ if and only if its real part u and its imaginary part v satisfy the so-called Cauchy–Riemann equations at p: ∂u/∂x(p) = ∂v/∂y(p) and ∂u/∂y(p) = −∂v/∂x(p). In several variables, a function f : ℂ^n → ℂ is holomorphic if and only if it is holomorphic in each variable separately, and hence if and only if the real part u and the imaginary part v of f satisfy the Cauchy–Riemann equations: for all i ∈ {1, …, n}, ∂u/∂x_i = ∂v/∂y_i and ∂u/∂y_i = −∂v/∂x_i. Using the formalism of Wirtinger derivatives , this can be reformulated as ∂f/∂z̄_i = 0 for all i ∈ {1, …, n}, or, even more compactly, using the formalism of complex differential forms , as ∂̄f = 0. We prove the sufficiency of the two conditions (A) and (B). Let f be continuous and separately holomorphic on a domain D . Each disk has a rectifiable curve γ; each γ_ν is a piecewise smooth, class C¹ Jordan closed curve (ν = 1, 2, …, n). Let D_ν be the domain surrounded by each γ_ν. The closure of the Cartesian product D_1 × D_2 × ⋯ × D_n is D̄_1 × D̄_2 × ⋯ × D̄_n ⊂ D. Also, take the closed polydisc Δ̄ so that Δ̄ ⊂ D_1 × D_2 × ⋯ × D_n, where Δ̄(z, r) = {ζ = (ζ_1, ζ_2, …, ζ_n) ∈ ℂ^n : |ζ_ν − z_ν| ≤ r_ν for all ν = 1, …, n}, and let {z_ν}_{ν=1}^n be the center of each disk. Using Cauchy's integral formula of one variable repeatedly, [ note 4 ] and because ∂D is a rectifiable Jordan closed curve [ note 5 ] and f is continuous, the order of products and sums can be exchanged, so the iterated integral can be calculated as a multiple integral . Therefore, because the order of products and sums is interchangeable, from (1) we get that f is a class C^∞ function.
From (2), if f is holomorphic on the polydisc {ζ = (ζ_1, ζ_2, …, ζ_n) ∈ ℂ^n : |ζ_ν − z_ν| ≤ r_ν for all ν = 1, …, n} and |f| ≤ M, the following estimate (Cauchy's inequality) is obtained; therefore, Liouville's theorem holds. If a function f is holomorphic on the polydisc {z = (z_1, z_2, …, z_n) ∈ ℂ^n : |z_ν − a_ν| < r_ν for all ν = 1, …, n}, then from Cauchy's integral formula we can see that it can be uniquely expanded into the power series f(z) = ∑_{k_1,…,k_n=0}^∞ c_{k_1,…,k_n} (z_1 − a_1)^{k_1} ⋯ (z_n − a_n)^{k_n}. In addition, an f that satisfies the following conditions is called an analytic function: for each point a = (a_1, …, a_n) ∈ D ⊂ ℂ^n, f(z) is expressed as a power series expansion that is convergent on D . We have already explained that holomorphic functions on a polydisc are analytic. Also, from the theorem derived by Weierstrass, we can see that an analytic function on a polydisc (a convergent power series) is holomorphic. It is possible to define a combination of positive real numbers {r_ν (ν = 1, …, n)} such that the power series ∑_{k_1,…,k_n=0}^∞ c_{k_1,…,k_n} (z_1 − a_1)^{k_1} ⋯ (z_n − a_n)^{k_n} converges uniformly on {z ∈ ℂ^n : |z_ν − a_ν| < r_ν for all ν = 1, …, n} and does not converge uniformly on {z ∈ ℂ^n : |z_ν − a_ν| > r_ν for all ν = 1, …, n}. In this way one obtains a combination of radii of convergence [ note 6 ] analogous to the radius of convergence for one complex variable. This combination is generally not unique, and there are an infinite number of such combinations. Let ω(z) be holomorphic in the annulus {z ∈ ℂ^n : r_ν < |z_ν| < R_ν for all ν = 1, …, n} and continuous on its circumference; then there exists the following (Laurent) expansion. The integral in the second term of the right-hand side is performed so as to see the zero on the left in every plane; this integrated series is uniformly convergent in the annulus r′_ν < |z_ν| < R′_ν, where r′_ν > r_ν and R′_ν < R_ν, and so it is possible to integrate term by term. [ 11 ]
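A small numerical sketch of the power-series expansion above (my own illustration; the test function, the radius r, and the grid size N are arbitrary choices): sampling f(z_1, z_2) = 1/((1 − z_1)(1 − z_2)) on the torus |z_1| = |z_2| = r and taking a 2-D discrete Fourier transform discretizes the iterated Cauchy integrals and recovers the coefficients c_{jk}, which all equal 1 for this f.

```python
import numpy as np

N, r = 64, 0.5
theta = 2 * np.pi * np.arange(N) / N
z1 = r * np.exp(1j * theta)[:, None]   # grid on |z1| = r
z2 = r * np.exp(1j * theta)[None, :]   # grid on |z2| = r

f = 1.0 / ((1 - z1) * (1 - z2))        # = sum_{j,k} z1^j z2^k on the polydisc

# Discretized iterated Cauchy integrals over the torus:
# c_{jk} ~ (mean over the grid of f * exp(-i(j t1 + k t2))) / r^(j+k),
# which is exactly a normalized 2-D DFT of the samples.
coeffs = np.fft.fft2(f) / N**2
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
coeffs = coeffs / r**(j + k)

# Only low-order coefficients are printed, where the aliasing error
# of the discretization (of size ~ r^N) is negligible.
print(np.round(coeffs[:3, :3].real, 6))   # ~ all ones, as expected
```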
The Cauchy integral formula holds only for polydiscs, and in the domain of several complex variables polydiscs are only one of many possible domains, so we introduce the Bochner–Martinelli formula . Suppose that f is a continuously differentiable function on the closure of a domain D in ℂ^n with piecewise smooth boundary ∂D, and let the symbol ∧ denote the exterior or wedge product of differential forms. For ζ, z in ℂ^n, the Bochner–Martinelli kernel ω(ζ, z) is a differential form in ζ of bidegree (n, n − 1); the Bochner–Martinelli formula then states that if z is in the domain D , then f(z) = ∫_{∂D} f(ζ) ω(ζ, z) − ∫_D ∂̄f(ζ) ∧ ω(ζ, z). In particular, if f is holomorphic the second term vanishes, so f(z) = ∫_{∂D} f(ζ) ω(ζ, z). Holomorphic functions of several complex variables satisfy an identity theorem , as in one variable: two holomorphic functions defined on the same connected open set D ⊂ ℂ^n which coincide on an open subset N of D are equal on the whole open set D . This result can be proven from the fact that holomorphic functions have power series expansions, and it can also be deduced from the one-variable case. Contrary to the one-variable case, it is possible that two different holomorphic functions coincide on a set which has an accumulation point; for instance, the maps f(z_1, z_2) = 0 and g(z_1, z_2) = z_1 coincide on the whole complex line of ℂ² defined by the equation z_1 = 0. The maximum principle , inverse function theorem , and implicit function theorem also hold. For a generalized version of the implicit function theorem for complex variables, see the Weierstrass preparation theorem . From the inverse function theorem, the following mapping can be defined: for domains U , V of the n -dimensional complex space ℂ^n, a bijective holomorphic function φ : U → V whose inverse mapping φ⁻¹ : V → U is also holomorphic is called a biholomorphism, and we say that U and V are biholomorphically equivalent or that they are biholomorphic. When n > 1, open balls and open polydiscs are not biholomorphically equivalent; that is, there is no biholomorphic mapping between the two. [ 12 ] This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups . [ 5 ] [ 13 ] However, even in the case of several complex variables, there are some results similar to the results of the theory of uniformization in one complex variable. [ 14 ] Let U , V be domains in ℂ^n such that f ∈ O(U) and g ∈ O(V) (O(U) is the set/ring of holomorphic functions on U ); assume that U, V, U ∩ V ≠ ∅ and that W is a connected component of U ∩ V. If f|_W = g|_W, then f is said to be connected to V , and g is said to be an analytic continuation of f . From the identity theorem, if g exists, then for each way of choosing W it is unique.
When n ≥ 2, the following phenomenon occurs, depending on the shape of the boundary ∂U: there exist domains U , V such that all holomorphic functions f over the domain U have an analytic continuation g ∈ O(V). In other words, there may not exist any function f ∈ O(U) that has ∂U as its natural boundary. This is called the Hartogs phenomenon. Therefore, investigating when domain boundaries become natural boundaries has become one of the main research themes of several complex variables. In addition, when n ≥ 2, the above V may intersect U in parts other than W . This contributed to the advancement of the notion of sheaf cohomology. In polydiscs, Cauchy's integral formula holds and the power series expansion of holomorphic functions is defined; but polydiscs and open unit balls are not biholomorphically equivalent, because the Riemann mapping theorem does not hold, and, while separation of variables is possible in polydiscs, it does not always hold for an arbitrary domain. Therefore, in order to study the domain of convergence of power series, it was necessary to place an additional restriction on the domain; this was the Reinhardt domain. Early knowledge of the properties of this field of study, such as logarithmic convexity and Hartogs's extension theorem, was obtained in the Reinhardt domain setting. Let D ⊂ ℂ^n (n ≥ 1) be a domain with centre at a point a = (a_1, …, a_n) ∈ ℂ^n, such that, together with each point z⁰ = (z⁰_1, …, z⁰_n) ∈ D, the domain also contains the set {z : |z_ν − a_ν| = |z⁰_ν − a_ν|, ν = 1, …, n}. That is, a domain D is called a Reinhardt domain if it satisfies the following condition: [ 15 ] [ 16 ] for arbitrary real numbers θ_ν (ν = 1, …, n), the domain D is invariant under the rotations {z⁰_ν − a_ν} → {e^{iθ_ν}(z⁰_ν − a_ν)}. A Reinhardt domain D is called a complete Reinhardt domain with centre at a point a if, together with each point z⁰ ∈ D, it also contains the polydisc {z : |z_ν − a_ν| ≤ |z⁰_ν − a_ν|, ν = 1, …, n}. A complete Reinhardt domain D is star-like with regard to its centre a . Therefore, the complete Reinhardt domain is simply connected ; also, when considering the boundary of a complete Reinhardt domain, there is a way to prove Cauchy's integral theorem without using the Jordan curve theorem . For a complete Reinhardt domain to be the domain of convergence of a power series, an additional condition is required, which is called logarithmic convexity. A Reinhardt domain D is called logarithmically convex if the image λ(D*) of the set D* = {z = (z_1, …, z_n) ∈ D : z_1 ⋯ z_n ≠ 0} under the mapping λ : z → λ(z) = (ln |z_1|, …, ln |z_n|) is a convex set in the real coordinate space ℝ^n.
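As a concrete check of these definitions (my own worked example): the power series ∑_{k≥0} z_1^k z_2^k converges exactly when |z_1 z_2| < 1, so its domain of convergence is

\[ D = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1 z_2| < 1\}, \]

a complete Reinhardt domain centred at 0. Under λ(z) = (ln |z_1|, ln |z_2|), the image λ(D*) = {(ξ_1, ξ_2) : ξ_1 + ξ_2 < 0} is an open half-plane, hence convex, so D is logarithmically convex, consistent with the characterization given just below.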
Every such domain in ℂ^n is the interior of the set of points of absolute convergence of some power series ∑_{k_1,…,k_n=0}^∞ c_{k_1,…,k_n} (z_1 − a_1)^{k_1} ⋯ (z_n − a_n)^{k_n}, and conversely, the domain of convergence of every power series in z_1, …, z_n is a logarithmically convex Reinhardt domain with centre a = 0. [ note 7 ] However, there is an example of a complete Reinhardt domain D which is not logarithmically convex. [ 17 ] When examining the domain of convergence on the Reinhardt domain, Hartogs found the Hartogs phenomenon, in which holomorphic functions in some domain on ℂ^n all extend to a larger domain. [ 18 ] By Hartogs's extension theorem, the domain of convergence extends from H_ε to Δ². Looking at this from the perspective of the Reinhardt domain, H_ε is the Reinhardt domain containing the center z = 0, and the domain of convergence of H_ε has been extended to the smallest complete Reinhardt domain Δ² containing H_ε. [ 24 ] Thullen 's [ 25 ] classical result says that a 2-dimensional bounded Reinhardt domain containing the origin is biholomorphic to one of the following domains, provided that the orbit of the origin under the automorphism group has positive dimension. Toshikazu Sunada (1978) [ 26 ] established a generalization of Thullen's result. When moving from the theory of one complex variable to the theory of several complex variables, depending on the range of the domain, it may not be possible to define a holomorphic function such that the boundary of the domain becomes a natural boundary. Considering the domains whose boundaries are natural boundaries (in the complex coordinate space ℂ^n these are called domains of holomorphy), the first result on the domain of holomorphy was the holomorphic convexity of H. Cartan and Thullen. [ 27 ] The solution of Levi's problem shows that a pseudoconvex domain is a domain of holomorphy (first for ℂ², [ 28 ] later extended to ℂ^n [ 29 ] [ 30 ]). [ 31 ] Kiyoshi Oka 's [ 34 ] [ 35 ] notion of idéal de domaines indéterminés was interpreted in the theory of sheaf cohomology by H. Cartan and developed further by Serre. [ note 10 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 6 ] In sheaf cohomology, the domain of holomorphy has come to be interpreted through the theory of Stein manifolds. [ 42 ] The notion of the domain of holomorphy is also considered for other complex manifolds, and furthermore for the complex analytic space which is its generalization. [ 4 ] When a function f is holomorphic on a domain D ⊂ ℂ^n and cannot be continued directly beyond D at any point of the domain boundary ∂D, the domain D is called the domain of holomorphy of f , and the boundary is called the natural boundary of f . In other words, the domain of holomorphy D is the largest domain on which the holomorphic function f is holomorphic; D cannot be extended any further.
For several complex variables, i.e. a domain D ⊂ ℂ^n (n ≥ 2), the boundary may fail to be a natural boundary. Hartogs's extension theorem gives an example of a domain where the boundary is not a natural boundary. [ 43 ] Formally, a domain D in the n -dimensional complex coordinate space ℂ^n is called a domain of holomorphy if there do not exist non-empty domains U ⊂ D and V ⊂ ℂ^n, with V ⊄ D and U ⊂ D ∩ V, such that for every holomorphic function f on D there exists a holomorphic function g on V with f = g on U . For the case n = 1, every domain (D ⊂ ℂ) is a domain of holomorphy: we can find a holomorphic function that is not identically 0, but whose zeros accumulate everywhere on the boundary of the domain, which must then be a natural boundary for a domain of definition of its reciprocal. Let G ⊂ ℂ^n be a domain, or alternatively, for a more general definition, let G be an n -dimensional complex analytic manifold . Further let O(G) stand for the set of holomorphic functions on G . For a compact set K ⊂ G, the holomorphically convex hull of K is K̂_G = {z ∈ G : |f(z)| ≤ sup_{w∈K} |f(w)| for all f ∈ O(G)}. One obtains a narrower concept of the polynomially convex hull by taking O(G) instead to be the set of complex-valued polynomial functions on G . The polynomially convex hull contains the holomorphically convex hull. The domain G is called holomorphically convex if for every compact subset K ⊂ G, the hull K̂_G is also compact in G . Sometimes this is just abbreviated as holomorph-convex . When n = 1, every domain G is holomorphically convex, since then K̂_G is the union of K with the relatively compact components of G ∖ K. When n ≥ 1, if D satisfies the above holomorphic convexity, it has the following property: dist(K, D^c) = dist(K̂_D, D^c) for every compact subset K in D , where dist(K, D^c) denotes the distance between K and D^c = ℂ^n ∖ D. Also, at this time, D is a domain of holomorphy. Therefore, every convex domain (D ⊂ ℂ^n) is a domain of holomorphy. [ 5 ]
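For orientation, a standard one-variable example of the hull (added here, not from the source): take G = ℂ and K = {|z| = 1}, the unit circle. By the maximum modulus principle every f ∈ O(ℂ) satisfies |f(z)| ≤ sup_K |f| for |z| ≤ 1, while for |z| > 1 the function f(ζ) = ζ already violates the defining inequality; hence

\[ \hat{K}_G = \{\,z \in \mathbb{C} : |z| \le 1\,\}, \]

the closed unit disc. This matches the n = 1 statement above: the hull adds to K precisely the relatively compact component of ℂ ∖ K (the open disc).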
Hartogs (1906) showed: [ 19 ] let D be a Hartogs domain on ℂ and R a positive function on D such that the set Ω in ℂ² defined by z_1 ∈ D and |z_2| < R(z_1) is a domain of holomorphy; then −log R(z_1) is a subharmonic function on D . [ 4 ] If such a relation holds on the domain of holomorphy of several complex variables, it looks like a more manageable condition than holomorphic convexity. [ note 11 ] The subharmonic function looks like a kind of convex function , so such a domain was named by Levi a pseudoconvex domain (Hartogs pseudoconvexity). Pseudoconvex domains (and the boundary notion of pseudoconvexity) are important, as they allow for the classification of domains of holomorphy. Being a domain of holomorphy is a global property; by contrast, pseudoconvexity is a local analytic or local geometric property of the boundary of a domain. [ 46 ] A function u is called plurisubharmonic if it is upper semi-continuous , and for every complex line φ : Δ → X the function u ∘ φ is subharmonic, where Δ ⊂ ℂ denotes the unit disk. In one complex variable, the necessary and sufficient condition that a real-valued function u = u(z), twice differentiable with respect to z , be subharmonic is Δu = 4(∂²u/∂z∂z̄) ≥ 0. Therefore, if u is of class C², then u is plurisubharmonic if and only if the Hermitian matrix H_u = (λ_{ij}), λ_{ij} = ∂²u/∂z_i∂z̄_j, is positive semidefinite. Equivalently, a C²-function u is plurisubharmonic if and only if √−1 ∂∂̄u is a positive (1,1)-form . [ 47 ] : 39–40 When the Hermitian matrix of u is positive-definite and u is of class C², we call u a strictly plurisubharmonic function. Weakly pseudoconvex is defined as follows: let X ⊂ ℂ^n be a domain. One says that X is pseudoconvex if there exists a continuous plurisubharmonic function φ on X such that the set {z ∈ X : φ(z) ≤ x} is a relatively compact subset of X for all real numbers x , [ note 12 ] i.e. there exists a smooth plurisubharmonic exhaustion function ψ ∈ Psh(X) ∩ C^∞(X). Often, the definition of pseudoconvexity used here is written as: let X be a complex n -dimensional manifold; X is said to be weakly pseudoconvex if there exists a smooth plurisubharmonic exhaustion function ψ ∈ Psh(X) ∩ C^∞(X). [ 47 ] : 49 Let X be a complex n -dimensional manifold. X is strongly (or strictly) pseudoconvex if there exists a smooth strictly plurisubharmonic exhaustion function ψ ∈ Psh(X) ∩ C^∞(X), i.e., Hψ is positive definite at every point. Every strongly pseudoconvex domain is pseudoconvex. [ 47 ] : 49 Strongly pseudoconvex and strictly pseudoconvex (i.e. 1-convex and 1-complete [ 48 ]) are often used interchangeably; [ 49 ] see Lempert [ 50 ] for the technical difference.
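A standard example (added here for illustration): for u(z) = ‖z‖² = ∑_j z_j z̄_j on ℂ^n one computes

\[ \frac{\partial^2 u}{\partial z_i \partial \bar{z}_j} = \delta_{ij}, \]

so H_u is the identity matrix, which is positive definite; hence u is a smooth, strictly plurisubharmonic function. Since the sublevel sets {u ≤ x} are closed balls, hence compact, u is an exhaustion function, and ℂ^n is (strongly) pseudoconvex in the sense just defined.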
If D has a C² boundary , it can be shown that D has a defining function, i.e., there exists ρ : ℂ^n → ℝ of class C² such that D = {ρ < 0} and ∂D = {ρ = 0}. Now, D is pseudoconvex iff for every p ∈ ∂D and every w in the complex tangent space at p , that is, satisfying ∑_{j=1}^n (∂ρ/∂z_j)(p) w_j = 0, we have ∑_{i,j=1}^n (∂²ρ/∂z_i∂z̄_j)(p) w_i w̄_j ≥ 0. If D does not have a C² boundary, the following approximation result can be useful. Proposition 1: if D is pseudoconvex, then there exist bounded , strongly Levi pseudoconvex domains D_k ⊂ D with class C^∞ boundary which are relatively compact in D , such that D = ∪_{k=1}^∞ D_k. This is because once we have a φ as in the definition, we can actually find a C^∞ exhaustion function. When the Levi (–Krzoska) form is positive-definite, it is called strongly Levi (–Krzoska) pseudoconvex, or often simply strongly (or strictly) pseudoconvex. [ 5 ] If for every boundary point p of D there exists an analytic variety B passing through p which lies entirely outside D in some neighborhood around p , except for the point p itself, then the domain D is called Levi total pseudoconvex. [ 52 ] Let n functions φ : z_j = φ_j(u, t) be continuous on Δ : |u| ≤ 1, 0 ≤ t ≤ 1, holomorphic in |u| < 1 when the parameter t is fixed in [0, 1], and assume that the ∂φ_j/∂u are not all zero at any point on Δ. Then the set Q(t) := {z_j = φ_j(u, t) : |u| ≤ 1} is called an analytic disc depending on a parameter t , and B(t) := {z_j = φ_j(u, t) : |u| = 1} is called its shell. If Q(t) ⊂ D (0 < t) and B(0) ⊂ D, Q(t) is called a family of Oka's discs. [ 52 ] [ 53 ] When Q(0) ⊂ D holds for any family of Oka's discs, D is called Oka pseudoconvex. [ 52 ] Oka's proof of Levi's problem showed that an unramified Riemann domain over ℂ^n [ 54 ] is a domain of holomorphy (holomorphically convex) if and only if each of its boundary points is Oka pseudoconvex. [ 29 ] [ 53 ] Locally pseudoconvex is defined as follows: for every point x ∈ ∂D there exist a neighbourhood U of x and a holomorphic function f on U ∩ D (i.e. U ∩ D is holomorphically convex) such that f cannot be extended to any neighbourhood of x . More generally, let ψ : X → Y be a holomorphic map; if every point y ∈ Y has a neighborhood U such that ψ⁻¹(U) admits a C^∞ plurisubharmonic exhaustion function (is weakly 1-complete [ 55 ]), then we say that X is locally pseudoconvex (or locally Stein) over Y .
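Returning to the defining-function condition above, a standard example (my own illustration): for the unit ball D = {z ∈ ℂ^n : ‖z‖ < 1}, take ρ(z) = ‖z‖² − 1, so that D = {ρ < 0} and ∂D = {ρ = 0}. Then ∂²ρ/∂z_i∂z̄_j = δ_{ij}, and the Levi form at any boundary point p is

\[ \sum_{i,j} \delta_{ij}\, w_i \bar{w}_j = \lVert w \rVert^2 > 0 \quad \text{for every } w \neq 0, \]

so the unit ball is strongly Levi pseudoconvex at every boundary point.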
As an older name, it is also called Cartan pseudoconvex. In ℂ^n a locally pseudoconvex domain is itself a pseudoconvex domain, and it is a domain of holomorphy. [ 56 ] [ 52 ] For example, Diederich–Fornæss [ 57 ] found locally pseudoconvex bounded domains Ω with smooth boundary on non-Kähler manifolds such that Ω is not weakly 1-complete. [ 58 ] [ note 13 ] For a domain D ⊂ ℂ^n the following conditions are equivalent: [ note 14 ] The implications 1 ⇔ 2 ⇔ 3, [ note 15 ] 1 ⇒ 4, [ note 16 ] and 4 ⇒ 5 are standard results. Proving 5 ⇒ 1, i.e. constructing a global holomorphic function which admits no extension from non-extendable functions defined only locally, is called the Levi problem (after E. E. Levi ). It was solved for unramified Riemann domains over ℂ^n by Kiyoshi Oka [ note 17 ] (for ramified Riemann domains, pseudoconvexity does not characterize holomorphic convexity [ 66 ]), and then by Lars Hörmander using methods from functional analysis and partial differential equations (a consequence of the ∂̄-problem (equation), solved with L² methods ). [ 1 ] [ 43 ] [ 3 ] [ 67 ] The introduction of sheaves into several complex variables allowed the reformulation of, and solution to, several important problems in the field. Oka introduced the notion which he termed "idéal de domaines indéterminés", or "ideal of indeterminate domains". [ 34 ] [ 35 ] Specifically, it is a set (I) of pairs (f, δ), with f holomorphic on a non-empty open set δ, satisfying closure conditions under multiplication by holomorphic functions and under addition on common domains. The origin of indeterminate domains comes from the fact that domains change depending on the pair (f, δ). Cartan [ 36 ] [ 37 ] translated this notion into the notion of the coherent ( sheaf ) (especially, the coherent analytic sheaf) in sheaf cohomology. [ 67 ] [ 68 ] This name comes from H. Cartan. [ 69 ] Also, Serre (1955) introduced the notion of the coherent sheaf into algebraic geometry, that is, the notion of the coherent algebraic sheaf. [ 70 ] The notion of coherence ( coherent sheaf cohomology ) helped solve the problems of several complex variables. [ 39 ] The definition of the coherent sheaf is as follows. [ 70 ] [ 71 ] [ 72 ] [ 73 ] [ 47 ] : 83–89 A quasi-coherent sheaf on a ringed space (X, O_X) is a sheaf F of O_X- modules which has a local presentation, that is, every point in X has an open neighborhood U on which there is an exact sequence O_X^{⊕I}|_U → O_X^{⊕J}|_U → F|_U → 0 for some (possibly infinite) sets I and J . A coherent sheaf on a ringed space (X, O_X) is a sheaf F satisfying the following two properties: first, F is of finite type over O_X, that is, every point in X has an open neighborhood U on which there is a surjective morphism O_X^n|_U → F|_U for some natural number n ; second, for every open set U ⊆ X, every natural number n , and every morphism φ : O_X^n|_U → F|_U of O_X-modules, the kernel of φ is of finite type. Morphisms between (quasi-)coherent sheaves are the same as morphisms of sheaves of O_X-modules. Also, the (Oka–Cartan) coherence theorem, [ 34 ] which Jean-Pierre Serre (1955) [ 70 ] proves, says that each sheaf that meets the following conditions is coherent.
[ 74 ] From the above theorem, O^p is a coherent sheaf; also, (i) is used to prove Cartan's theorems A and B . In the case of complex functions of one variable, Mittag-Leffler's theorem was able to create a global meromorphic function from given poles and principal parts (the Cousin I problem), and the Weierstrass factorization theorem was able to create a global meromorphic function from given zeros or zero-locus (the Cousin II problem). However, these theorems do not hold in several complex variables, because the singularities of analytic functions in several complex variables are not isolated points; these problems are called the Cousin problems and are formulated in terms of sheaf cohomology. They were first introduced in special cases by Pierre Cousin in 1895. [ 79 ] It was Oka who showed the conditions for solving the first Cousin problem for the domain of holomorphy [ note 18 ] on the complex coordinate space, [ 82 ] [ 83 ] [ 80 ] [ note 19 ] also solving the second Cousin problem with additional topological assumptions. The Cousin problem is a problem related to the analytic properties of complex manifolds, but the only obstructions to solving problems of a complex analytic property are purely topological; [ 80 ] [ 39 ] [ 31 ] Serre called this the Oka principle . [ 84 ] They are now posed, and solved, for an arbitrary complex manifold M , in terms of conditions on M . A manifold M which satisfies these conditions gives one way to define a Stein manifold. The study of the Cousin problems made it clear that in several complex variables global properties can be studied by patching local data, [ 36 ] that is, it developed into the theory of sheaf cohomology (e.g. the Cartan seminar [ 42 ]). [ 39 ] Without the language of sheaves, the problem can be formulated as follows. On a complex manifold M , one is given several meromorphic functions f_i along with domains U_i where they are defined, and where each difference f_i − f_j is holomorphic (wherever the difference is defined). The first Cousin problem then asks for a meromorphic function f on M such that f − f_i is holomorphic on U_i; in other words, f shares the singular behaviour of the given local functions. Now, let K be the sheaf of meromorphic functions and O the sheaf of holomorphic functions on M . The first Cousin problem can always be solved if the following map is surjective: H⁰(M, K) → H⁰(M, K/O). By the long exact cohomology sequence , H⁰(M, K) → H⁰(M, K/O) → H¹(M, O) is exact, and so the first Cousin problem is always solvable provided that the first cohomology group H¹(M, O) vanishes. In particular, by Cartan's theorem B , the Cousin problem is always solvable if M is a Stein manifold.
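A classical one-variable instance of the first Cousin problem (a standard example, added here for illustration): on M = ℂ, which is Stein, prescribe the principal part 1/(z − n) at every integer n. The naive sum of the principal parts diverges, but with Mittag-Leffler convergence terms one obtains

\[ \pi \cot(\pi z) = \frac{1}{z} + \sum_{n \neq 0} \left( \frac{1}{z-n} + \frac{1}{n} \right), \]

a global meromorphic function whose difference with each prescribed principal part is holomorphic near the corresponding n, exactly as the problem demands.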
The second Cousin problem starts with a similar set-up to the first, specifying instead that each ratio f_i/f_j is a non-vanishing holomorphic function (where said ratio is defined). It asks for a meromorphic function f on M such that f/f_i is holomorphic and non-vanishing on U_i. Let O* be the sheaf of holomorphic functions that vanish nowhere, and K* the sheaf of meromorphic functions that are not identically zero. These are both sheaves of abelian groups , and the quotient sheaf K*/O* is well-defined. If the following map φ is surjective, then the second Cousin problem can be solved: H⁰(M, K*) → H⁰(M, K*/O*). The long exact sheaf cohomology sequence associated to the quotient is H⁰(M, K*) → H⁰(M, K*/O*) → H¹(M, O*), so the second Cousin problem is solvable in all cases provided that H¹(M, O*) = 0. The cohomology group H¹(M, O*) for the multiplicative structure on O* can be compared with the cohomology group H¹(M, O) with its additive structure by taking a logarithm. That is, there is an exact sequence of sheaves 0 → 2πiℤ → O → O* → 0, where the leftmost sheaf is the locally constant sheaf with fiber 2πiℤ. The obstruction to defining a logarithm at the level of H¹ is in H²(M, ℤ), from the long exact cohomology sequence H¹(M, O) → H¹(M, O*) → H²(M, 2πiℤ) → H²(M, O). When M is a Stein manifold, the middle arrow is an isomorphism, because H^q(M, O) = 0 for q > 0, so that a necessary and sufficient condition in that case for the second Cousin problem to be always solvable is H²(M, ℤ) = 0 (this condition is called the Oka principle). Since a non-compact (open) Riemann surface [ 85 ] always has a non-constant single-valued holomorphic function [ 86 ] and satisfies the second axiom of countability , the open Riemann surface is in fact a 1-dimensional complex manifold possessing a holomorphic mapping into the complex plane ℂ. (In fact, Gunning and Narasimhan showed (1967) [ 87 ] that every non-compact Riemann surface actually has a holomorphic immersion into the complex plane; in other words, there is a holomorphic mapping into the complex plane whose derivative never vanishes.) [ 88 ] The Whitney embedding theorem tells us that every smooth n -dimensional manifold can be embedded as a smooth submanifold of ℝ^{2n}, whereas it is "rare" for a complex manifold to have a holomorphic embedding into ℂ^n. For example, for an arbitrary compact connected complex manifold X , every holomorphic function on it is constant by Liouville's theorem, and so it cannot have any embedding into complex n -space. That is, for several complex variables, arbitrary complex manifolds do not always have holomorphic functions that are not constants. So, consider the conditions under which a complex manifold has a holomorphic function that is not a constant. Now, if we had a holomorphic embedding of X into ℂ^n, then the coordinate functions of ℂ^n would restrict to nonconstant holomorphic functions on X , contradicting compactness, except in the case that X is just a point. Complex manifolds that can be holomorphically embedded into ℂ^n are called Stein manifolds. Also, Stein manifolds satisfy the second axiom of countability. [ 89 ] A Stein manifold is a complex submanifold of the vector space of n complex dimensions. They were introduced by and named after Karl Stein (1951). [ 90 ] A Stein space is similar to a Stein manifold but is allowed to have singularities.
Stein spaces are the analogues of affine varieties or affine schemes in algebraic geometry. When a univalent domain on C n {\displaystyle \mathbb {C} ^{n}} is regarded as a complex manifold and satisfies the separation condition described later, the condition for it to be a Stein manifold is holomorphic convexity. Therefore, the Stein manifold abstracts the properties of the domain of definition of the (maximal) analytic continuation of an analytic function. Suppose X is a paracompact complex manifold of complex dimension n {\displaystyle n} and let O ( X ) {\displaystyle {\mathcal {O}}(X)} denote the ring of holomorphic functions on X . We call X a Stein manifold if the following conditions hold: [ 91 ] Note that condition (3) can be derived from conditions (1) and (2). [ 92 ] Let X be a connected, non-compact (open) Riemann surface . A deep theorem of Behnke and Stein (1948) [ 86 ] asserts that X is a Stein manifold. Another result, attributed to Hans Grauert and Helmut Röhrl (1956), states moreover that every holomorphic vector bundle on X is trivial. In particular, every line bundle is trivial, so H 1 ( X , O X ∗ ) = 0 {\displaystyle H^{1}(X,{\mathcal {O}}_{X}^{*})=0} . The exponential sheaf sequence leads to the exact sequence H 1 ( X , O X ) → H 1 ( X , O X ∗ ) → H 2 ( X , Z ) → H 2 ( X , O X ) . Now Cartan's theorem B shows that H 1 ( X , O X ) = H 2 ( X , O X ) = 0 {\displaystyle H^{1}(X,{\mathcal {O}}_{X})=H^{2}(X,{\mathcal {O}}_{X})=0} , therefore H 2 ( X , Z ) = 0 {\displaystyle H^{2}(X,\mathbb {Z} )=0} . This is related to the solution of the second (multiplicative) Cousin problem . Cartan extended Levi's problem to Stein manifolds. [ 93 ] This was proved by Bremermann [ 95 ] by embedding the domain in a sufficiently high-dimensional C n {\displaystyle \mathbb {C} ^{n}} and reducing it to the result of Oka. [ 29 ] Grauert proved it for arbitrary complex manifolds M . [ note 21 ] [ 98 ] [ 31 ] [ 96 ] And Narasimhan [ 99 ] [ 100 ] extended Levi's problem to complex analytic spaces , a generalization of complex manifolds that allows singularities. Levi's problem remains unresolved in more general settings; in particular, it is not known under what conditions the Behnke–Stein theorem, which holds for Stein manifolds, can be established for Stein spaces. [ 101 ] Grauert introduced the concept of K-completeness in the proof of Levi's problem. Let X be a complex manifold; X is K-complete if, for each point x 0 ∈ X {\displaystyle x_{0}\in X} , there exist finitely many holomorphic maps f 1 , … , f k {\displaystyle f_{1},\dots ,f_{k}} of X into C p {\displaystyle \mathbb {C} ^{p}} , p = p ( x 0 ) {\displaystyle p=p(x_{0})} , such that x 0 {\displaystyle x_{0}} is an isolated point of the set A = { x ∈ X : f v ( x ) = f v ( x 0 ) for v = 1 , … , k } . [ 98 ] This concept also applies to complex analytic spaces. [ 104 ] These facts imply that a Stein manifold is a closed complex submanifold of complex space, whose complex structure is that of the ambient space (because the embedding is biholomorphic). Numerous further characterizations of such manifolds exist, in particular capturing the property of their having "many" holomorphic functions taking values in the complex numbers. See for example Cartan's theorems A and B , relating to sheaf cohomology . In the GAGA set of analogies, Stein manifolds correspond to affine varieties .
[ 112 ] Stein manifolds are in some sense dual to the elliptic manifolds in complex analysis, which admit "many" holomorphic functions from the complex numbers into themselves. It is known that a Stein manifold is elliptic if and only if it is fibrant in the sense of so-called "holomorphic homotopy theory". Meromorphic functions of one complex variable were studied on compact (closed) Riemann surfaces, since the Riemann–Roch theorem ( Riemann's inequality ) holds for compact Riemann surfaces (therefore the theory of compact Riemann surfaces can be regarded as the theory of (smooth (non-singular) projective) algebraic curves over C {\displaystyle \mathbb {C} } [ 113 ] [ 114 ] ). In fact, a compact Riemann surface has a non-constant single-valued meromorphic function [ 85 ] , and indeed enough meromorphic functions. The simplest compact one-dimensional complex manifold is the Riemann sphere C ^ ≅ C P 1 {\displaystyle {\widehat {\mathbb {C} }}\cong \mathbb {CP} ^{1}} . The abstract notion of a compact Riemann surface is always algebraizable (the Riemann existence theorem , Kodaira embedding theorem ), [ note 25 ] but it is not easy to verify which compact complex analytic spaces are algebraizable. [ 115 ] In fact, Hopf found a class of compact complex manifolds without non-constant meromorphic functions. [ 56 ] However, a result of Siegel gives necessary conditions for a compact complex manifold to be algebraic. [ 116 ] The generalization of the Riemann–Roch theorem to several complex variables was first extended to compact analytic surfaces by Kodaira; [ 117 ] Kodaira also extended the theorem to three-dimensional [ 118 ] and then to n-dimensional Kähler varieties. [ 119 ] Serre formulated the Riemann–Roch theorem as a problem of dimensions of coherent sheaf cohomology , [ 6 ] and also Serre proved Serre duality . [ 120 ] Cartan and Serre proved the following property: [ 121 ] the cohomology group of a coherent sheaf on a compact complex manifold M is finite-dimensional. [ 122 ] Riemann–Roch on a Riemann surface for a vector bundle was proved by Weil in 1938. [ 123 ] Hirzebruch generalized the theorem to compact complex manifolds in 1954, [ 124 ] and Grothendieck generalized it to a relative version (relative statements about morphisms ). [ 125 ] [ 126 ] Next, consider the generalization to higher dimensions of the result that compact Riemann surfaces are projective; in particular, consider the conditions under which a compact complex submanifold X embeds into the complex projective space C P n {\displaystyle \mathbb {CP} ^{n}} . [ note 26 ] The vanishing theorem (first introduced by Kodaira in 1953) gives conditions under which sheaf cohomology groups vanish; the conditions amount to a kind of positivity . As an application of this theorem, the Kodaira embedding theorem [ 127 ] says that a compact Kähler manifold M with a Hodge metric admits a complex-analytic embedding into complex projective space of sufficiently high dimension N . In addition, Chow's theorem [ 128 ] shows that every closed complex analytic subspace (subvariety) of complex projective space is algebraic, that is, the common zero locus of some homogeneous polynomials; this relationship is one example of what is called Serre's GAGA principle . [ 8 ] Such a complex analytic subspace (variety) of complex projective space thus has both algebraic and analytic properties.
Combined with Kodaira's result, it follows that a compact Kähler manifold M with a Hodge metric embeds as an algebraic variety. This result gives an example of a complex manifold with enough meromorphic functions. Broadly, the GAGA principle says that the geometry of projective complex analytic spaces (or manifolds) is equivalent to the geometry of projective complex varieties. The combination of analytic and algebraic methods for complex projective varieties leads to areas such as Hodge theory . Also, the deformation theory of compact complex manifolds has developed as Kodaira–Spencer theory. However, there are counterexamples: compact complex manifolds that cannot be embedded in projective space and are not algebraic. [ 129 ] An analogue of the Levi problem on the complex projective space C P n {\displaystyle \mathbb {CP} ^{n}} was studied by Takeuchi. [ 4 ] [ 130 ] [ 131 ] [ 132 ]
https://en.wikipedia.org/wiki/Function_of_several_complex_variables
In mathematical analysis and its applications, a function of several real variables or real multivariate function is a function with more than one argument , with all arguments being real variables. This concept extends the idea of a function of a real variable to several variables. The "input" variables take real values, while the "output", also called the "value of the function", may be real or complex . However, the study of the complex-valued functions may be easily reduced to the study of the real-valued functions , by considering the real and imaginary parts of the complex function; therefore, unless explicitly specified, only real-valued functions will be considered in this article. The domain of a function of n variables is the subset of R n {\displaystyle \mathbb {R} ^{n}} for which the function is defined. As usual, the domain of a function of several real variables is supposed to contain a nonempty open subset of R n {\displaystyle \mathbb {R} ^{n}} . A real-valued function of n real variables is a function that takes as input n real numbers , commonly represented by the variables x 1 , x 2 , …, x n , for producing another real number, the value of the function, commonly denoted f ( x 1 , x 2 , …, x n ) . For simplicity, in this article a real-valued function of several real variables will be simply called a function . To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the values of the variables are taken in a subset X of R n , the domain of the function, which is always supposed to contain an open subset of R n . In other words, a real-valued function of n real variables is a function such that its domain X is a subset of R n that contains a nonempty open set. An element of X being an n - tuple ( x 1 , x 2 , …, x n ) (usually delimited by parentheses), the general notation for denoting functions would be f (( x 1 , x 2 , …, x n )) . The common usage, much older than the general definition of functions between sets, is to not use double parentheses and to simply write f ( x 1 , x 2 , …, x n ) . It is also common to abbreviate the n -tuple ( x 1 , x 2 , …, x n ) by using a notation similar to that for vectors , like boldface x , underline x , or overarrow x → . This article will use bold. A simple example of a function in two variables could be V ( A , h ) = A h / 3 , which is the volume V of a cone with base area A and height h measured perpendicularly from the base. The domain restricts all variables to be positive since lengths and areas must be positive. For another example of a function in two variables, consider f ( x , y ) = a x + b y , where a and b are real non-zero constants. Using the three-dimensional Cartesian coordinate system , where the xy plane is the domain R 2 and the z axis is the codomain R , one can visualize the image to be a two-dimensional plane, with a slope of a in the positive x direction and a slope of b in the positive y direction. The function is well-defined at all points ( x , y ) in R 2 . The previous example can be extended easily to higher dimensions: f ( x 1 , x 2 , …, x p ) = a 1 x 1 + a 2 x 2 + ⋯ + a p x p for p non-zero real constants a 1 , a 2 , …, a p , which describes a p -dimensional hyperplane . The Euclidean norm f ( x ) = ‖ x ‖ = √ ( x 1 2 + x 2 2 + ⋯ + x n 2 ) is also a function of n variables which is everywhere defined, while g ( x ) = 1 / ‖ x ‖ is defined only for x ≠ (0, 0, …, 0) .
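The everywhere-defined and domain-restricted examples above can be made concrete in code. A minimal Python sketch (the function names `cone_volume` and `reciprocal_norm` are illustrative choices, not from the article):

```python
import math

def cone_volume(A: float, h: float) -> float:
    """Volume of a cone: V(A, h) = A*h/3, defined only for A > 0, h > 0."""
    if A <= 0 or h <= 0:
        raise ValueError("domain restriction: base area and height must be positive")
    return A * h / 3

def reciprocal_norm(*x: float) -> float:
    """g(x) = 1/||x||, defined everywhere except the origin."""
    norm = math.sqrt(sum(t * t for t in x))
    if norm == 0:
        raise ValueError("g is undefined at the origin")
    return 1 / norm

print(cone_volume(3.0, 2.0))      # 2.0
print(reciprocal_norm(3.0, 4.0))  # 0.2, since ||(3, 4)|| = 5
```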
For a non-linear example, consider a function of two variables which takes in all points in X , a disk of radius √ 8 "punctured" at the origin ( x , y ) = (0, 0) in the plane R 2 , and returns a point in R . The domain excludes the origin ( x , y ) = (0, 0) ; if it were included, f would be ill-defined at that point. Using a 3D Cartesian coordinate system with the xy -plane as the domain R 2 , and the z axis the codomain R , the image can be visualized as a curved surface. The function can be evaluated at the point ( x , y ) = (2, √ 3 ) in X , since 2 2 + (√ 3 ) 2 = 7 < 8 . However, the function cannot be evaluated at points outside X , for example at the origin or at points with x 2 + y 2 ≥ 8 , since such values of x and y do not satisfy the domain's rule. The image of a function f ( x 1 , x 2 , …, x n ) is the set of all values of f when the n -tuple ( x 1 , x 2 , …, x n ) runs in the whole domain of f . For a continuous (see below for a definition) real-valued function which has a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function . The preimage of a given real number c is called a level set . It is the set of the solutions of the equation f ( x 1 , x 2 , …, x n ) = c . The domain of a function of several real variables is a subset of R n that is sometimes, but not always, explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X , one gets formally a different function, the restriction of f to Y , which is denoted f | Y {\displaystyle f|_{Y}} . In practice, it is often (but not always) harmless to identify f and f | Y {\displaystyle f|_{Y}} , and to omit the restrictor | Y . Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation . Moreover, many functions are defined in such a way that it is difficult to specify explicitly their domain. For example, given a function f , it may be difficult to specify the domain of the function g ( x ) = 1 / f ( x ) . {\displaystyle g({\boldsymbol {x}})=1/f({\boldsymbol {x}}).} If f is a multivariate polynomial (which has R n {\displaystyle \mathbb {R} ^{n}} as a domain), it is even difficult to test whether the domain of g is also R n {\displaystyle \mathbb {R} ^{n}} . This is equivalent to testing whether a polynomial is always positive, and is the object of an active research area (see Positive polynomial ). The usual operations of arithmetic on the reals may be extended to real-valued functions of several real variables in the following way: It follows that the functions of n variables that are everywhere defined and the functions of n variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals ( R -algebras). This is a prototypical example of a function space . One may similarly define the quotient 1 / f , which is a function only if the set of the points ( x 1 , …, x n ) in the domain of f such that f ( x 1 , …, x n ) ≠ 0 contains an open subset of R n . This constraint implies that the above two algebras are not fields . One can easily obtain a function in one real variable by giving a constant value to all but one of the variables. For example, if ( a 1 , …, a n ) is a point of the interior of the domain of the function f , we can fix the values of x 2 , …, x n to a 2 , …, a n respectively, to get the univariable function x 1 ↦ f ( x 1 , a 2 , …, a n ) whose domain contains an interval centered at a 1 . This function may also be viewed as the restriction of the function f to the line defined by the equations x i = a i for i = 2, …, n .
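As a sketch of this last point, fixing all but one variable of a multivariable function yields a univariate restriction; a small illustrative Python snippet (function names hypothetical):

```python
def f(x, y, z):
    # an everywhere-defined example function of three variables
    return x * x + y * z

def restrict_to_axis(f, a, i):
    """Fix all arguments of f at the point a except the i-th, giving a univariate function."""
    def g(t):
        args = list(a)
        args[i] = t
        return f(*args)
    return g

g = restrict_to_axis(f, (1.0, 2.0, 3.0), 0)  # x1 -> f(x1, 2, 3)
print(g(1.0), g(2.0))  # 7.0 10.0
```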
Other univariable functions may be defined by restricting f to any line passing through ( a 1 , …, a n ) . These are the functions x ↦ f ( a 1 + c 1 x , a 2 + c 2 x , …, a n + c n x ) , where the c i are real numbers that are not all zero. In the next section, we will show that, if the multivariable function is continuous, so are all these univariable functions, but the converse is not necessarily true. Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of several real variables are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. For defining the continuity, it is useful to consider the distance function of R n , which is an everywhere defined function of 2 n real variables: d ( x , y ) = √ ( ( x 1 − y 1 ) 2 + ⋯ + ( x n − y n ) 2 ) . A function f is continuous at a point a = ( a 1 , …, a n ) which is interior to its domain, if, for every positive real number ε , there is a positive real number φ such that | f ( x ) − f ( a )| < ε for all x such that d ( x , a ) < φ . In other words, φ may be chosen small enough for having the image by f of the ball of radius φ centered at a contained in the interval of length 2 ε centered at f ( a ) . A function is continuous if it is continuous at every point of its domain. If a function is continuous at a , then all the univariate functions that are obtained by fixing all the variables x i except one at the value a i are continuous at a i . The converse is false; this means that all these univariate functions may be continuous at a point where the multivariable function itself is not continuous. For an example, consider the function f such that f (0, 0) = 0 , and is otherwise defined by f ( x , y ) = x 2 y / ( x 4 + y 2 ) . The functions x ↦ f ( x , 0) and y ↦ f (0, y ) are both constant and equal to zero, and are therefore continuous. The function f is not continuous at (0, 0) , because, if ε < 1/2 and y = x 2 ≠ 0 , we have f ( x , y ) = 1/2 , even if | x | is very small. Although not continuous, this function has the further property that all the univariate functions obtained by restricting it to a line passing through (0, 0) are also continuous. In fact, we have f ( x , λ x ) = λ x / ( x 2 + λ 2 ) for λ ≠ 0 , which tends to 0 with x . The limit at a point of a real-valued function of several real variables is defined as follows. [ 1 ] Let a = ( a 1 , a 2 , …, a n ) be a point in the topological closure of the domain X of the function f . The function f has a limit L when x tends toward a , denoted L = lim x → a f ( x ) , if the following condition is satisfied: For every positive real number ε > 0 , there is a positive real number δ > 0 such that | f ( x ) − L | < ε for all x in the domain such that d ( x , a ) < δ . If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a . In this case, we have lim x → a f ( x ) = f ( a ) . When a is in the boundary of the domain of f , and if f has a limit at a , the latter formula allows one to "extend by continuity" the domain of f to a . A symmetric function is a function f that is unchanged when two variables x i and x j are interchanged: f ( …, x i , …, x j , … ) = f ( …, x j , …, x i , … ) , where i and j are each one of 1, 2, …, n . For example, f ( x , y , z ) = x y z is symmetric in x , y , z since interchanging any pair of x , y , z leaves f unchanged, but f ( x , y , z , t ) = t ( x + y + z ) is not symmetric in all of x , y , z , t , since interchanging t with x or y or z gives a different function.
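A quick numerical check of the discontinuity counterexample above; a minimal Python sketch (not from the article) evaluating f along a line through the origin and along the parabola y = x²:

```python
def f(x, y):
    """f(0,0) = 0, otherwise x^2 * y / (x^4 + y^2)."""
    if x == 0 and y == 0:
        return 0.0
    return x * x * y / (x ** 4 + y * y)

for t in (0.1, 0.01, 0.001):
    # along the straight line y = 2x the values tend to 0 ...
    print(f(t, 2 * t))
    # ... but along the parabola y = x^2 they stay at 1/2,
    # so f has no limit at the origin and is not continuous there
    print(f(t, t * t))
```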
Suppose the functions ξ 1 = ξ 1 ( x 1 , …, x n ) , …, ξ m = ξ m ( x 1 , …, x n ) , or more compactly ξ = ξ ( x ) , are all defined on a domain X . As the n -tuple x = ( x 1 , x 2 , …, x n ) varies in X , a subset of R n , the m -tuple ξ = ( ξ 1 , ξ 2 , …, ξ m ) varies in another region Ξ a subset of R m . To restate this: ξ maps X into Ξ . Then a function ζ of the functions ξ ( x ) , defined on Ξ , gives a function composition defined on X , [ 2 ] in other terms the mapping x ↦ ζ ( ξ ( x ) ) . Note the numbers m and n do not need to be equal. For example, a function f defined everywhere on R 2 can be rewritten, by introducing intermediate functions of ( x , y ) , as a function that is everywhere defined in R 3 . Function composition can be used to simplify functions, which is useful for carrying out multiple integrals and solving partial differential equations . Elementary calculus is the calculus of real-valued functions of one real variable, and the principal ideas of differentiation and integration of such functions can be extended to functions of more than one real variable; this extension is multivariable calculus . Partial derivatives can be defined with respect to each variable: ∂ f / ∂ x 1 , ∂ f / ∂ x 2 , …, ∂ f / ∂ x n . Partial derivatives themselves are functions, each of which represents the rate of change of f parallel to one of the x 1 , x 2 , …, x n axes at all points in the domain (if the derivatives exist and are continuous; see also below). A first derivative is positive if the function increases along the direction of the relevant axis, negative if it decreases, and zero if there is no increase or decrease. Evaluating a partial derivative at a particular point in the domain gives the rate of change of the function at that point in the direction parallel to a particular axis, a real number. For real-valued functions of a real variable, y = f ( x ) , the ordinary derivative dy / dx is geometrically the slope of the tangent line to the curve y = f ( x ) at all points in the domain. Partial derivatives extend this idea to tangent hyperplanes to the graph of the function. The second order partial derivatives can be calculated for every pair of variables: ∂ 2 f / ( ∂ x i ∂ x j ) . Geometrically, they are related to the local curvature of the function's image at all points in the domain. At any point where the function is well-defined, the function could be increasing along some axes, and/or decreasing along other axes, and/or not increasing or decreasing at all along other axes. This leads to a variety of possible stationary points : global or local maxima , global or local minima , and saddle points , the multidimensional analogue of inflection points for real functions of one real variable. The Hessian matrix is a matrix of all the second order partial derivatives, which are used to investigate the stationary points of the function, important for mathematical optimization . In general, partial derivatives of higher order p take the form ∂ p f / ( ∂ x 1 p 1 ∂ x 2 p 2 ⋯ ∂ x n p n ) , where p 1 , p 2 , …, p n are each integers between 0 and p such that p 1 + p 2 + ⋯ + p n = p , using the definition of zeroth partial derivatives as identity operators: ∂ 0 f / ∂ x i 0 = f . The number of possible partial derivatives increases with p , although some mixed partial derivatives (those with respect to more than one variable) are superfluous, because of the symmetry of second order partial derivatives . This reduces the number of partial derivatives to calculate for some p .
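To make the partial-derivative machinery concrete, here is a small sketch using the SymPy library (the example function is an arbitrary illustration, not from the article):

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 * sp.sin(y) + y**3

# first-order partial derivatives
fx = sp.diff(f, x)   # 2*x*sin(y)
fy = sp.diff(f, y)   # x**2*cos(y) + 3*y**2

# Hessian matrix of all second-order partials;
# it is symmetric because the mixed partials are equal
H = sp.hessian(f, (x, y))
print(fx, fy)
print(H)
```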
A function f ( x ) is differentiable in a neighborhood of a point a if there is an n -tuple of numbers dependent on a in general, A ( a ) = ( A 1 ( a ), A 2 ( a ), …, A n ( a )) , so that [ 3 ] f ( x ) = f ( a ) + A ( a ) · ( x − a ) + α ( x ) | x − a | , where α ( x ) → 0 {\displaystyle \alpha ({\boldsymbol {x}})\to 0} as | x − a | → 0 {\displaystyle |{\boldsymbol {x}}-{\boldsymbol {a}}|\to 0} . This means that if f is differentiable at a point a , then f is continuous at x = a , although the converse is not true: continuity in the domain does not imply differentiability in the domain. If f is differentiable at a then the first order partial derivatives exist at a and ∂ f ( a ) / ∂ x i = A i ( a ) for i = 1, 2, …, n , which can be found from the definitions of the individual partial derivatives, so the partial derivatives of f exist. Assuming an n -dimensional analogue of a rectangular Cartesian coordinate system , these partial derivatives can be used to form a vectorial linear differential operator , called the gradient (also known as " nabla " or " del ") in this coordinate system: ∇ = ( ∂ / ∂ x 1 , ∂ / ∂ x 2 , …, ∂ / ∂ x n ) , used extensively in vector calculus , because it is useful for constructing other differential operators and compactly formulating theorems in vector calculus. Then substituting the gradient ∇ f (evaluated at x = a ) with a slight rearrangement gives f ( x ) − f ( a ) = ∇ f ( a ) · ( x − a ) + α ( x ) | x − a | , where · denotes the dot product . This equation represents the best linear approximation of the function f at all points x within a neighborhood of a . For infinitesimal changes in f and x as x → a : d f = ∇ f ( a ) · d x = ∑ i ( ∂ f / ∂ x i ) d x i , which is defined as the total differential , or simply differential , of f , at a . This expression corresponds to the total infinitesimal change of f , by adding all the infinitesimal changes of f in all the x i directions. Also, df can be construed as a covector with basis vectors as the infinitesimals dx i in each direction and partial derivatives of f as the components. Geometrically ∇ f is perpendicular to the level sets of f , given by f ( x ) = c which for some constant c describes an ( n − 1) -dimensional hypersurface. The differential of a constant is zero: 0 = d f = ∇ f · d x , in which d x is an infinitesimal change in x in the hypersurface f ( x ) = c , and since the dot product of ∇ f and d x is zero, this means ∇ f is perpendicular to d x . In arbitrary curvilinear coordinate systems in n dimensions, the explicit expression for the gradient would not be so simple: there would be scale factors in terms of the metric tensor for that coordinate system. For the above case used throughout this article, the metric is just the Kronecker delta and the scale factors are all 1. If all first order partial derivatives ∂ f ( a ) / ∂ x i evaluated at a point a in the domain exist and are continuous for all a in the domain, f has differentiability class C 1 . In general, if all order p partial derivatives evaluated at a point a exist and are continuous, where p 1 , p 2 , …, p n , and p are as above, for all a in the domain, then f is differentiable to order p throughout the domain and has differentiability class C p . If f is of differentiability class C ∞ , f has continuous partial derivatives of all orders and is called smooth . If f is an analytic function and equals its Taylor series about any point in the domain, the notation C ω denotes this differentiability class. Definite integration can be extended to multiple integration over the several real variables with the notation ∫ R n ⋯ ∫ R 1 f ( x 1 , x 2 , …, x n ) d x 1 ⋯ d x n , where each region R 1 , R 2 , …, R n is a subset of or all of the real line, and their Cartesian product R = R 1 × R 2 × ⋯ × R n gives the region to integrate over as a single set, an n -dimensional hypervolume .
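The multiple-integral notation just introduced can be evaluated numerically; a minimal sketch using SciPy's `dblquad` (the integrand is an arbitrary illustration):

```python
from scipy.integrate import dblquad

# density rho(x, y) integrated over the box R = [0, 1] x [0, 2]
# gives the total amount of the quantity in R;
# note dblquad expects the integrand as func(y, x)
rho = lambda y, x: x * y**2

total, err = dblquad(rho, 0, 1, 0, 2)  # x in [0, 1], y in [0, 2]
print(total)  # exact value: (integral of x over [0,1]) * (integral of y^2 over [0,2]) = 0.5 * 8/3 = 4/3
```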
When evaluated, a definite integral is a real number if the integral converges in the region R of integration (the result of a definite integral may diverge to infinity for a given region, in which case the integral remains ill-defined). The variables are treated as "dummy" or "bound" variables which are substituted for numbers in the process of integration. The integral of a real-valued function of a real variable y = f ( x ) with respect to x has geometric interpretation as the area bounded by the curve y = f ( x ) and the x -axis. Multiple integrals extend the dimensionality of this concept: assuming an n -dimensional analogue of a rectangular Cartesian coordinate system , the above definite integral has the geometric interpretation as the n -dimensional hypervolume bounded by f ( x ) and the x 1 , x 2 , …, x n axes, which may be positive, negative, or zero, depending on the function being integrated (if the integral is convergent). While bounded hypervolume is a useful insight, the more important idea of definite integrals is that they represent total quantities within space. This has significance in applied mathematics and physics: if f is some scalar density field and x are the position vector coordinates, i.e. some scalar quantity per unit n -dimensional hypervolume, then integrating over the region R gives the total amount of quantity in R . The more formal notion of hypervolume is the subject of measure theory . Above we used the Lebesgue measure ; see Lebesgue integration for more on this topic. With the definitions of multiple integration and partial derivatives, key theorems can be formulated, including the fundamental theorem of calculus in several real variables (namely Stokes' theorem ), integration by parts in several real variables, the symmetry of higher partial derivatives and Taylor's theorem for multivariable functions . Evaluating a mixture of integrals and partial derivatives can be done by using the theorem of differentiation under the integral sign . One can collect a number of functions y 1 = f 1 ( x ) , y 2 = f 2 ( x ) , …, y m = f m ( x ) , each of several real variables, into an m -tuple, or sometimes as a column vector or row vector , respectively: all treated on the same footing as an m -component vector field , and use whichever form is convenient. All the above notations have a common compact notation y = f ( x ) . The calculus of such vector fields is vector calculus . For more on the treatment of row vectors and column vectors of multivariable functions, see matrix calculus . A real-valued implicit function of several real variables is not written in the form " y = f (…) ". Instead, the mapping is from the space R n + 1 to the zero element in R (just the ordinary zero 0): ϕ ( x 1 , x 2 , …, x n , y ) = 0 is an equation in all the variables. Implicit functions are a more general way to represent functions, since if y = f ( x 1 , …, x n ) , then we can always define ϕ ( x 1 , …, x n , y ) = y − f ( x 1 , …, x n ) = 0 , but the converse is not always possible, i.e. not all implicit functions have an explicit form. For example, using interval notation , let ϕ ( x , y , z ) = x 2 / a 2 + y 2 / b 2 + z 2 / c 2 − 1 = 0 , with domain X = [− a , a ] × [− b , b ] × [− c , c ] . Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin ( x , y , z ) = (0, 0, 0) with constant semi-major axes a , b , c , along the positive x , y and z axes respectively. In the case a = b = c = r , we have a sphere of radius r centered at the origin. Other quadric surface examples which can be described similarly include the hyperboloid and paraboloid ; more generally, so can any 2D surface in 3D Euclidean space.
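Anticipating the implicit function theorem discussed next, the implicit form of such a surface already determines its slopes wherever ∂ϕ/∂z ≠ 0; a small SymPy sketch for the sphere case (a = b = c = 1, an illustrative choice):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
phi = x**2 + y**2 + z**2 - 1   # phi(x, y, z) = 0: the unit sphere

# wherever d(phi)/dz != 0, z is locally a function z(x, y), and
# phi_x + phi_z * dz/dx = 0, so dz/dx = -phi_x / phi_z
dzdx = -sp.diff(phi, x) / sp.diff(phi, z)
print(sp.simplify(dzdx))        # -x/z
```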
The above example can be solved for x , y or z ; however it is much tidier to write it in an implicit form. For a more sophisticated example: for non-zero real constants A , B , C , ω , this function is well-defined for all ( t , x , y , z ) , but it cannot be solved explicitly for these variables and written as " t = ", " x = ", etc. The implicit function theorem of more than two real variables deals with the continuity and differentiability of the function, as follows. [ 4 ] Let ϕ ( x 1 , x 2 , …, x n , y ) be a continuous function with continuous first order partial derivatives, and let ϕ evaluated at a point ( a , b ) = ( a 1 , a 2 , …, a n , b ) be zero: ϕ ( a , b ) = 0 ; and let the first partial derivative of ϕ with respect to y evaluated at ( a , b ) be non-zero: ∂ ϕ ( a , b ) / ∂ y ≠ 0 . Then, there is an interval [ y 1 , y 2 ] containing b , and a region R containing ( a , b ) , such that for every x in R there is exactly one value of y in [ y 1 , y 2 ] satisfying ϕ ( x , y ) = 0 , and y is a continuous function of x so that ϕ ( x , y ( x )) = 0 . The total differentials of the functions are d y = ∑ i ( ∂ y / ∂ x i ) d x i and d ϕ = ( ∂ ϕ / ∂ y ) d y + ∑ i ( ∂ ϕ / ∂ x i ) d x i . Substituting dy into the latter differential and equating coefficients of the differentials gives the first order partial derivatives of y with respect to x i in terms of the derivatives of the original function, each as a solution of the linear equation ∂ ϕ / ∂ x i + ( ∂ ϕ / ∂ y ) ( ∂ y / ∂ x i ) = 0 for i = 1, 2, …, n . A complex-valued function of several real variables may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If f ( x 1 , …, x n ) is such a complex valued function, it may be decomposed as f = g + i h , where g and h are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions. This reduction works for the general properties. However, for an explicitly given function, the computation of the real and the imaginary part may be difficult. Multivariable functions of real variables arise inevitably in engineering and physics , because observable physical quantities are real numbers (with associated units and dimensions ), and any one physical quantity will generally depend on a number of other quantities. Examples in continuum mechanics include the local mass density ρ of a mass distribution, a scalar field which depends on the spatial position coordinates (here Cartesian to exemplify), r = ( x , y , z ) , and time t : ρ = ρ ( r , t ) . Similarly for electric charge density for electrically charged objects, and numerous other scalar potential fields. Another example is the velocity field , a vector field , which has components of velocity v = ( v x , v y , v z ) that are each multivariable functions of spatial coordinates and time similarly: v = v ( r , t ) . Similarly for other physical vector fields such as electric fields and magnetic fields , and vector potential fields. Another important example is the equation of state in thermodynamics , an equation relating pressure P , temperature T , and volume V of a fluid; in general it has an implicit form f ( P , V , T ) = 0 . The simplest example is the ideal gas law P V = n R T , where n is the number of moles , constant for a fixed amount of substance , and R the gas constant . Much more complicated equations of state have been empirically derived, but they all have the above implicit form. Real-valued functions of several real variables appear pervasively in economics .
In the underpinnings of consumer theory , utility is expressed as a function of the amounts of various goods consumed, each amount being an argument of the utility function. The result of maximizing utility is a set of demand functions , each expressing the amount demanded of a particular good as a function of the prices of the various goods and of income or wealth. In producer theory , a firm is usually assumed to maximize profit as a function of the quantities of various goods produced and of the quantities of various factors of production employed. The result of the optimization is a set of demand functions for the various factors of production and a set of supply functions for the various products; each of these functions has as its arguments the prices of the goods and of the factors of production. Some "physical quantities" may actually be complex valued, such as complex impedance , complex permittivity , complex permeability , and complex refractive index . These are also functions of real variables, such as frequency or time, as well as temperature. In two-dimensional fluid mechanics , specifically in the theory of the potential flows used to describe fluid motion in 2D, the complex potential is a complex valued function of the two spatial coordinates x and y , and other real variables associated with the system. The real part is the velocity potential and the imaginary part is the stream function . The spherical harmonics occur in physics and engineering as the solution to Laplace's equation , as well as the eigenfunctions of the z -component angular momentum operator ; they are complex-valued functions of the real-valued spherical polar angles: Y ℓ m = Y ℓ m ( θ , φ ) . In quantum mechanics , the wavefunction is necessarily complex-valued, but is a function of real spatial coordinates (or momentum components), as well as time t : ψ = ψ ( r , t ) in position space or ψ = ψ ( p , t ) in momentum space, where each is related to the other by a Fourier transform .
https://en.wikipedia.org/wiki/Function_of_several_real_variables
In calculus , a function series is a series where each of its terms is a function , not just a real or complex number . Examples of function series include ordinary power series , Laurent series , Fourier series , Liouville-Neumann series , formal power series , and Puiseux series . There exist many types of convergence for a function series, such as uniform convergence , pointwise convergence , and convergence almost everywhere . Each type of convergence corresponds to a different metric for the space of functions that are added together in the series, and thus a different type of limit . The Weierstrass M-test is a useful result in studying convergence of function series.
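As a worked illustration of the Weierstrass M-test (a standard textbook example, not taken from this article), the series below converges uniformly on all of the real line because its terms are dominated by a convergent numerical series.

```latex
% Weierstrass M-test: if |f_n(x)| <= M_n for all x and sum M_n < infinity,
% then sum f_n converges uniformly.
\[
  f_n(x) = \frac{\sin(nx)}{n^2}, \qquad
  |f_n(x)| \le \frac{1}{n^2} = M_n, \qquad
  \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} < \infty ,
\]
\[
  \text{hence } \sum_{n=1}^{\infty} \frac{\sin(nx)}{n^2}
  \text{ converges uniformly on } \mathbb{R}.
\]
```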
https://en.wikipedia.org/wiki/Function_series
Functional Ensemble of Temperament ( FET ) is a neurochemical model suggesting specific functional roles of the main neurotransmitter systems in the regulation of behaviour. Medications can adjust the release of brain neurotransmitters in cases of depression , anxiety disorder , schizophrenia and other mental disorders, because an imbalance within neurotransmitter systems can emerge as consistent behavioural characteristics that compromise people's lives. All people have a weaker form of such imbalance in at least one of these neurotransmitter systems, and this is what makes each of us distinct from one another. The impact of this weak imbalance in neurochemistry can be seen in the consistent features of behaviour in healthy people (temperament). In this sense temperament (as neurochemically based individual differences) and mental illness represent varying degrees along the same continuum of neurotransmitter imbalance in neurophysiological systems of behavioural regulation. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] In fact, multiple temperament traits (such as Impulsivity , sensation seeking , neuroticism , endurance , plasticity , sociability or extraversion ) have been linked to brain neurotransmitter and hormone systems. [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] By the end of the 20th century, it became clear that the human brain operates with more than a dozen neurotransmitters and a large number of neuropeptides and hormones. The relationships between these different chemical systems are complex, as some of them suppress and some of them induce each other's release during neuronal exchanges. This complexity of relationships devalues the old approach of assigning "inhibitory vs. excitatory" roles to neurotransmitters: the same neurotransmitter can be either inhibitory or excitatory depending on the system it interacts with. It became clear that an impressive diversity of neurotransmitters and their receptors is necessary to meet a wide range of behavioural situations, but the links between temperament traits and specific neurotransmitters are still a matter of research. Several attempts were made to assign specific (single) neurotransmitters to specific (single) traits. For example, dopamine was proposed to be the neurotransmitter of the trait of Extraversion, noradrenaline was linked to anxiety, and serotonin was thought to be the neurotransmitter of an inhibition system. These assignments of neurotransmitter functions appeared to be an oversimplification when confronted by the evidence of much more diverse functionality. [ 16 ] [ 17 ] The research groups led by Petra Netter in Germany, Lars Farde at the Karolinska Institute in Sweden and Trevor Robbins in Cambridge, UK conducted the most extensive studies of the links between temperament/personality traits, or dynamical properties of behaviour, and groups of neurotransmitters. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] The architecture of the Functional Ensemble of Temperament (FET) was developed by Trofimova as the compact version of the Structure of Temperament Questionnaire (STQ-77) in 1997–2007. The differentiation between the rows of the FET inherits the activity-specific approach to the structure of temperament proposed by Rusalov in the mid-1980s.
According to this approach, the traits of temperament (and behavioural regulation) related to motor-physical, social-verbal and mental aspects of activities are based on different neurophysiological systems and should be assessed separately (hence the separation of traits into 3 rows related to these 3 types of activities). The 3-column structure of the FET framework follows Alexander Luria's theory of three functional neuroanatomic systems (sensory-informational, programming and energetic) and is in line with the functional constructivism approach. [ 26 ] This approach views all behaviour as being constructed and generated anew, based on an individual's capacities and the demands of the situation. [ 19 ] The final STQ-77/FET model considers 12 systems (and temperament traits): 9 systems (and traits) regulating the formal functional aspects of behaviour (energetic, dynamic and orientational, each assessed in three domains: intellectual, physical and social-verbal), together with 3 systems related to emotionality ( Neuroticism , Impulsivity and a disposition of Satisfaction (formerly called Self-Confidence)) (see Figure). [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 19 ] [ 33 ] Trofimova's and Rusalov's models of temperament (and the structures of their versions of the STQ) differ in several respects. In 2007–2013 this STQ-77 model of temperament was reviewed and compared to the main findings in neurophysiology, neurochemistry, clinical psychology and kinesiology, resulting in the neurochemical FET model offered by Irina Trofimova, McMaster University. [ 16 ] Trevor Robbins ( Cambridge University ), who collaborated with Trofimova on this project in 2014–2016, suggested a revision of the part of the FET neurochemical hypothesis related to the trait of Intellectual (mental) Endurance (known in cognitive psychology also as "sustained attention "). This neurochemical component of the FET hypothesis was updated in 2015 by underlining a key role of acetylcholine and noradrenaline in sustained attention. [ 16 ] [ 17 ] [ 35 ] In February 2018, at the suggestion of Dr Marina Kolbeneva (Institute of Psychology, Russian Academy of Sciences), the scale Self-Confidence was renamed as the scale of dispositional Satisfaction. [ 19 ] The final STQ-77/FET framework classifies temperament traits and their neurochemical biomarkers into 12 components: nine components regulating the formal functional aspects of behaviour (energetic, dynamic and orientational), each assessed in three domains (intellectual, physical and social-verbal), plus three components related to emotionality ( Neuroticism , Impulsivity and Satisfaction (Self-Confidence)) (see Figure). [ 19 ] [ 34 ] [ 33 ] [ 36 ] The FET framework summarizes the existing literature showing that the nine non-emotionality traits are regulated by the monoamine ( noradrenalin , dopamine , serotonin ), acetylcholine and neuropeptide systems, whereas the three emotionality-related traits emerge as a dysregulation of the opioid receptor systems that have direct control over monoamine systems. Importantly, the FET model suggests that there is no one-to-one correspondence between neurotransmitter systems and temperament traits (or mental disorders); instead, specific ensemble relationships between these systems emerge as temperament traits.
[ 16 ] [ 17 ] [ 31 ] [ 32 ] [ 19 ] The FET framework is based only on the strongest consensus points in the research studying the role of neurotransmitters in behavioural regulation and the components of temperament; it does not list the more controversial links between these multiple systems. † Neurotransmitter systems: 5-HT: serotonin ; DA: dopamine ; NE: noradrenalin ; ACh: acetylcholine ; Glu: glutamate ; OXY: oxytocin ; VSP: vasopressin ; NP: Neuropeptides ; KOR, MOR, DOR: kappa-, mu- and delta- opioid receptors correspondingly; sANS: sympathetic autonomic nervous system ; HPA: hypothalamic–pituitary–adrenal axis . The FET points out that opioid receptor systems are involved not only in the regulation of emotional dispositions but also in amplifying three non-emotionality aspects of behaviour (KOR for orientation, DOR for integration of actions and MOR for approval-maintenance of behaviour). [ 31 ] [ 19 ] This involvement was confirmed for MOR systems, which bind endorphins : experiments show that MOR overstimulation influences hypothalamic serotonin and brain-derived neurotrophic factor release, affecting endurance aspects of behaviour. [ 37 ] [ 38 ] [ 39 ] The interplay within hormonal systems and their interaction with serotonin also appeared to be a factor in social emotions, such as shame and guilt. [ 40 ] The FET framework was proposed to simplify classifications of psychiatric disorders (DSM, ICD) using the 12 functional aspects of behaviour that this model highlights. [ 41 ] [ 11 ] [ 19 ] Clinical studies showed good differential power of the FET framework for various diagnoses of psychopathology. For example, depressed people had low endurance and psychomotor slowdown in their temperament profiles. [ 9 ] [ 10 ] [ 11 ] [ 41 ] In contrast, patients with Generalized Anxiety Disorder had higher impulsivity and neuroticism. [ 9 ] [ 8 ] [ 11 ] The FET developers suggested that every symptom in DSM/ICD diagnoses can be mapped to a specific FET code reflecting a dysregulation within well-documented neurochemical systems. [ 19 ]
https://en.wikipedia.org/wiki/Functional_Ensemble_of_Temperament
Functional Materials is a quarterly peer-reviewed scientific journal published by the Institute for Single Crystals of the National Academy of Sciences of Ukraine . The journal was established in 1994 and covers fundamental and applied research on organic and inorganic functional materials. Functional Materials has been included in the list of scientific journals recognized by the Higher Attestation Commission of Ukraine. This article about a materials science journal is a stub . You can help Wikipedia by expanding it . See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
https://en.wikipedia.org/wiki/Functional_Materials
Functional Materials Letters is an interdisciplinary, peer-reviewed journal published by World Scientific with articles relating to the synthesis, behavior, characterization and application of functional materials. These are materials designed to respond to changes in their environments. Topics covered include ferroelectric, magneto-optical, sustainable energy and shape memory materials. Established in 2008 as a quarterly journal, Funct. Mater. Lett. switched to bimonthly in 2013. The journal is indexed in Inspec . According to the Journal Citation Reports , the journal has a 2020 impact factor of 2.17. [ 1 ] This article about a materials science journal is a stub . You can help Wikipedia by expanding it . See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
https://en.wikipedia.org/wiki/Functional_Materials_Letters
In chemistry and pharmacology, functional analogs are chemical compounds that have similar physical , chemical , biochemical , or pharmacological properties. Functional analogs are not necessarily structural analogs with a similar chemical structure . [ 1 ] An example of pharmacological functional analogs are morphine , heroin and fentanyl , which have the same mechanism of action, but fentanyl is structurally quite different from the other two with significant variance in dosage. [ 2 ] This pharmacology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Functional_analog_(chemistry)
A functional block diagram , in systems engineering and software engineering , is a block diagram that describes the functions and interrelationships of a system . The functional block diagram can picture: [ 1 ] The block diagram can use additional schematic symbols to show particular properties. Since the late 1950s, functional block diagrams have been used in a wide range of applications, from systems engineering to software engineering . They became a necessity in complex systems design to "understand thoroughly from exterior design the operation of the present system and the relationship of each of the parts to the whole." [ 3 ] Many specific types of functional block diagrams have emerged. For example, the functional flow block diagram is a combination of the functional block diagram and the flowchart . Many software development methodologies are built with specific functional block diagram techniques. An example from the field of industrial computing is the Function Block Diagram (FBD), a graphical language for the development of software applications for programmable logic controllers .
https://en.wikipedia.org/wiki/Functional_block_diagram
Functional cloning is a molecular cloning technique that relies on prior knowledge of the encoded protein 's sequence or function for gene identification. [ 1 ] [ 2 ] [ 3 ] In this assay, a genomic or cDNA library is screened to identify the genetic sequence of a protein of interest. Expression cDNA libraries may be screened with antibodies specific for the protein of interest or may rely on selection via the protein function. [ 1 ] Historically, the amino acid sequence of a protein was used to prepare degenerate oligonucleotides which were then probed against the library to identify the gene encoding the protein of interest. [ 2 ] [ 3 ] Once candidate clones carrying the gene of interest are identified, they are sequenced and their identity is confirmed. This method of cloning allows researchers to screen entire genomes without prior knowledge of the location of the gene or the genetic sequence. [ 1 ] This technique can be used to identify genes that encode similar proteins from one organism to another. [ 4 ] Similarly, this technique can be paired with metagenomic libraries to identify novel genes and proteins that perform similar functions, such as the identification of novel antibiotics by screening for beta-lactamase activity or selecting for growth in the presence of penicillin . [ 5 ] The workflow of a functional cloning experiment varies depending on the source of genetic material, the extent of prior knowledge of the protein or gene of interest and the ability to screen for the protein function. In general, a functional cloning experiment consists of four steps: 1) sample collection, 2) library preparation, 3) screening or selection and 4) sequencing . Genetic material is collected from a particular cell type, organism or environmental sample relevant to the biological question. In functional cloning, mRNA is commonly isolated and cDNA is prepared from the isolated mRNA ( RNA extraction ). [ 6 ] In certain circumstances genomic DNA may be isolated, particularly when environmental samples are used as the source of genetic material. [ 1 ] If the starting material is genomic DNA , the DNA is sheared to produce fragments of appropriate length for the vector of choice. The DNA fragments or cDNA are then treated with restriction endonucleases and ligated into plasmid or chromosomal vectors. In the case of assays that screen for the protein or for its function, an expression vector is used to ensure that the gene product is expressed. The vector choice will depend on the origin of the DNA or cDNA, to ensure proper expression and to ensure that the encoded gene will fall within the limits of the vector's insert size. [ 7 ] The choice of host is important to ensure that the codon usage will be similar to that of the donor organism. The host will also need to guarantee that the proper post-translational modifications and protein folding will occur to enable proper functioning of the expressed proteins. [ 7 ] The method of screening the prepared genomic or cDNA libraries for the gene of interest is highly variable depending on the experimental design and biological question. One method of screening is to probe colonies via Southern blotting with degenerate oligonucleotides prepared from the amino acid sequence of the query protein. [ 8 ] In expression libraries, the protein of interest can be identified by screening with an antibody specific for the query protein via Western blotting to identify colonies carrying the gene of interest.
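As a sketch of the historical degenerate-oligonucleotide step, reverse-translating a short peptide enumerates the candidate probe sequences. A minimal Python illustration (the tiny codon table covers only this example and is not a complete genetic code):

```python
from itertools import product

# partial codon table: amino acid -> possible codons (illustrative subset only)
CODONS = {
    "M": ["ATG"],
    "K": ["AAA", "AAG"],
    "F": ["TTT", "TTC"],
}

def degenerate_oligos(peptide: str) -> list[str]:
    """Enumerate every DNA sequence that could encode the peptide."""
    pools = [CODONS[aa] for aa in peptide]
    return ["".join(codons) for codons in product(*pools)]

probes = degenerate_oligos("MKF")
print(len(probes), probes)  # 4 candidate probes to hybridize against the library
```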
In other circumstances, a specific assay can be used to screen or select for the protein's activity. [ 1 ] For example, genes conferring antibiotic resistance can be selected by growing the colonies of the library on media containing a specified antibiotic . [ 5 ] Another example is screening for enzymatic activity by incubating with a substrate that is converted to a colorimetric compound that can easily be visualized. [ 9 ] The final step of functional cloning is to sequence the DNA or cDNA from the clones that were successfully identified in the screen or selection step. The sequence can then be annotated and used for downstream uses, such as protein expression and purification for industrial applications. [ 10 ] The advantages of functional cloning include the ability to screen for novel genes with desired applications in organisms that cannot be cultured, particularly from bacterial or viral specimens. [ 1 ] Additionally, genes encoding proteins with related functions can be identified even when there is low sequence similarity, due to the ability to screen for the protein function alone. Functional cloning allows for gene identification without prior knowledge of the organism's genome sequence or the position of the gene within the genome. [ 1 ] As with other cloning techniques, vector and host choice affect the success of gene identification via functional cloning due to cloning bias. The vector must have an insert size that will accommodate the entire DNA sequence of the expressed protein. Additionally, in expression vectors the promoters and terminators must function within the chosen host organism. The host choice may affect transcription and translation due to differing codon usage , transcriptional and translational machinery or post-translational modifications within the host. [ 7 ] [ 1 ] Other limitations include the labour-intensive library preparation and potential screens, which can be both expensive and time-consuming. [ 7 ] Positional cloning is another molecular cloning technique for identification of a gene of interest. This method uses the exact chromosomal location, instead of function, to guide gene identification. [ 11 ] Because of this, this method focuses on all the genetic material at a chromosomal locus and makes no assumptions about function. [ 11 ] In model organisms such as mice or yeast , this method is used more frequently, as the information about the position of a gene of interest can be obtained from the sequenced genome . However, this method becomes much more cumbersome when sequence information is not available. In this case, linkage analysis can also be used. [ 11 ] Functional cloning, on the other hand, is more readily used in organisms such as bacterial pathogens that are viable but nonculturable, where sequence data are not available but gene homology or protein function is still of interest. [ 11 ] A way to differentiate between functional and positional cloning is to visualize genes as words. Functional cloning is like using a thesaurus to look up words and selecting for new words that have the same meanings (or functions). [ 12 ] Positional cloning is more like picking a specific page of a dictionary and then browsing only that page for any words of interest. [ 12 ] As sequencing technology becomes cheaper and cheaper, it is now more feasible to sequence an unknown genome and then computationally determine homology instead of screening.
[ 13 ] This brings the added benefit of being able to screen for multiple genes of interest at the same time and reduces experimental time. It also allows one to avoid labour-intensive cloning procedures. [ 14 ] However, if this route is taken, there are other biases and hurdles one must consider. By using sequence data, one is able to screen based on homology alone. [ 1 ] A function-based approach, by contrast, allows for the discovery of novel enzymes whose functions would not have been predicted based on DNA sequence alone. [ 1 ] Therefore, while sequencing is less labour-intensive experimentally, it can also lead to missed genes of interest due to differing sequence homology in genes of related function. Gibson assembly is a quick cloning method that uses three primary enzymes: 5' exonuclease , polymerase and ligase . [ 15 ] The exonuclease digests the 5' end of DNA fragments, leaving a 3' overhang. [ 15 ] If there is significant homology (20-40 bp) on each end of the DNA insert, it can anneal with a complementary backbone. [ 15 ] Afterwards the polymerase can fill in the gaps while ligase seals the nicks at the ends. [ 15 ] This method greatly increases the speed and success rate of cloning into a vector backbone. [ 15 ] However, it requires the DNA fragment to have significant homology with the plasmid. [ 15 ] For this reason, the sequence being cloned must be known beforehand. This is not a requirement with functional cloning. TOPO Cloning is a cloning method that uses Taq polymerase . [ 16 ] This is because Taq leaves a single adenosine overhang on the 3' end of PCR reaction products. [ 16 ] Utilizing this knowledge, backbones with a 5' thymine overhang can be used for cloning purposes. [ 16 ] In this case the sequence of the fragment being cloned must be known in order to design PCR primers for it, and the number of TOPO Cloning compatible vectors is relatively small. However, it provides the advantage that reactions only take about 5 minutes. [ 16 ] Gateway recombination cloning is a cloning method in which a DNA fragment is moved from one plasmid backbone to another via a single homologous recombination event. [ 17 ] However, for this method to work, the DNA fragment of interest must be flanked by recombination sites. [ 17 ] While this method is not strictly an alternative, it does allow the movement of DNA fragments from one plasmid to another more quickly than creating a whole new genomic library. The reason this method may be used in conjunction with functional cloning is to put a library under a different promoter or on a backbone with a different selection marker. [ 17 ] This can come in handy if one wants to try functional cloning in a wide range of bacteria, to combat the issue of codon bias . [ 17 ] [ 7 ] Metagenomics is one of the largest fields that commonly uses functional cloning. Metagenomics studies all the genetic material from a specific environmental sample, such as the gut microbiome or lake water. [ 1 ] Functional libraries are created that contain DNA fragments from the environment. [ 1 ] As the original bacterium that a DNA sequence originated from cannot be easily detected, creating metagenomic functional libraries possesses advantages. Less than 1% of all bacteria are easily cultured in the lab, leaving a large percentage of bacteria that cannot be grown. [ 18 ] By using functional libraries, the gene functions of unculturable bacteria can still be studied.
Furthermore, these uncultured microbes provide a source for the discovery of novel enzymes with biotechnological applications. Some novel proteins that have been discovered from marine environments include enzymes such as proteases, amylases, lipases, chitinases, deoxyribonucleases and phosphatases. [ 19 ] There are situations in which it is imperative to determine whether a gene homolog from one source is present in another organism: for example, the identification of novel DNA polymerases for polymerase chain reaction (PCR), which synthesize DNA molecules from deoxyribonucleotides. [ 20 ] While human DNA polymerase works optimally at 37 °C (99 °F), DNA does not denature until 94–98 °C (201–208 °F). [ 20 ] This poses a problem, as at these temperatures the human DNA polymerase would itself denature during the denaturation step of PCR, resulting in a non-functioning polymerase protein and a failed reaction. To combat this, a DNA polymerase from a thermophile , a microorganism that grows at high temperatures, can be used; an example is Taq polymerase, which comes from the thermophilic bacterium Thermus aquaticus . [ 20 ] One could set up a functional cloning screen to find homologous polymerases that have the added advantage of being thermostable at high temperatures. Using this approach, 3173 polymerase, another polymerase enzyme now commonly used in RT-PCR reactions, was discovered. [ 21 ] In RT-PCR reactions, two separate enzymes are commonly used: first, a retroviral reverse transcriptase to convert RNA to cDNA , [ 21 ] and second, a thermostable DNA polymerase to amplify the target sequence. [ 21 ] 3173 polymerase is able to perform both enzymatic functions, making it a better option for RT-PCR. [ 21 ] The enzyme was discovered using functional cloning from a viral host originally found in Octopus hot spring (93 °C) in Yellowstone National Park. [ 21 ] One of the ongoing challenges of treating bacterial infections is antibiotic resistance , which commonly arises when patients do not complete their full course of medication, allowing bacteria to develop resistance to antibiotics over time. [ 22 ] To understand how to combat antibiotic resistance, it is important to understand how the bacterial genome evolves and changes in healthy individuals with no recent antibiotic use, to provide a baseline. [ 22 ] Using a functional cloning-based technique, DNA isolated from human microflora was cloned into expression vectors in Escherichia coli . [ 22 ] Afterwards, antibiotics were applied as a screen. [ 22 ] If a plasmid contained a gene insert that provided antibiotic resistance, the cell survived and was selected on the plate; [ 22 ] if the insert provided no resistance, the cell died and did not form a colony. [ 22 ] Based on selection of the cell colonies that survived, a better picture of the genetic factors contributing to antibiotic resistance was pieced together; most of the resistance genes identified were previously unknown. [ 22 ] By using a functional cloning-based technique, one is able to elucidate genes giving rise to antibiotic resistance and so better understand treatment of bacterial infections.
https://en.wikipedia.org/wiki/Functional_cloning
In the calculus of variations , a field of mathematical analysis , the functional derivative (or variational derivative ) [ 1 ] relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends. In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments , and their derivatives . In an integrand L of a functional, if a function f is varied by adding to it another function δf that is arbitrarily small, and the resulting integrand is expanded in powers of δf , the coefficient of δf in the first order term is called the functional derivative. For example, consider the functional

$$J[f] = \int_a^b L(x, f(x), f'(x))\, dx,$$

where f′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f + δf, f′ + δf′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows: [ 1 ] [ Note 1 ]

$$\delta J = \int_a^b \left( \frac{\partial L}{\partial f}\,\delta f(x) + \frac{\partial L}{\partial f'}\,\frac{d}{dx}\delta f(x) \right) dx = \int_a^b \left( \frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} \right) \delta f(x)\, dx + \frac{\partial L}{\partial f'}(b)\,\delta f(b) - \frac{\partial L}{\partial f'}(a)\,\delta f(a),$$

where the variation in the derivative, δf′, was rewritten as the derivative of the variation, (δf)′, and integration by parts was used in these derivatives.

In this section, the functional differential (or variation or first variation) [ Note 2 ] is defined; then the functional derivative is defined in terms of the functional differential. Suppose B is a Banach space and F is a functional defined on B. The differential of F at a point ρ ∈ B is the linear functional δF[ρ, ·] on B defined [ 2 ] by the condition that, for all φ ∈ B,

$$F[\rho + \phi] - F[\rho] = \delta F[\rho; \phi] + \varepsilon \|\phi\|,$$

where ε is a real number that depends on ‖φ‖ in such a way that ε → 0 as ‖φ‖ → 0. This means that δF[ρ, ·] is the Fréchet derivative of F at ρ. However, this notion of functional differential is so strong it may not exist, [ 3 ] and in those cases a weaker notion, like the Gateaux derivative , is preferred. In many practical cases, the functional differential is defined [ 4 ] as the directional derivative

$$\delta F[\rho, \phi] = \lim_{\varepsilon \to 0} \frac{F[\rho + \varepsilon\phi] - F[\rho]}{\varepsilon} = \left[ \frac{d}{d\varepsilon} F[\rho + \varepsilon\phi] \right]_{\varepsilon = 0}.$$

Note that this notion of the functional differential can even be defined without a norm. In many applications, the domain of the functional F is a space of differentiable functions ρ defined on some space Ω, and F is of the form

$$F[\rho] = \int_\Omega L(x, \rho(x), D\rho(x))\, dx$$

for some function L that may depend on x, the value ρ(x) and the derivative Dρ(x). If this is the case and, moreover, δF[ρ, φ] can be written as the integral of φ times another function (denoted δF/δρ),

$$\delta F[\rho, \phi] = \int_\Omega \frac{\delta F}{\delta \rho}(x)\, \phi(x)\, dx,$$

then this function δF/δρ is called the functional derivative of F at ρ. [ 5 ] [ 6 ] If F is restricted to only certain functions ρ (for example, if there are some boundary conditions imposed), then φ is restricted to functions such that ρ + εφ continues to satisfy these conditions.

Heuristically, φ is the change in ρ, so we 'formally' have φ = δρ, and then this is similar in form to the total differential of a function F(ρ₁, ρ₂, …, ρₙ),

$$dF = \sum_{i=1}^n \frac{\partial F}{\partial \rho_i}\, d\rho_i,$$

where ρ₁, ρ₂, …, ρₙ are independent variables. Comparing the last two equations, the functional derivative δF/δρ(x) has a role similar to that of the partial derivative ∂F/∂ρᵢ, where the variable of integration x is like a continuous version of the summation index i. [ 7 ] One thinks of δF/δρ as the gradient of F at the point ρ, so the value δF/δρ(x) measures how much the functional F will change if the function ρ is changed at the point x. Hence the formula

$$\int \frac{\delta F}{\delta \rho}(x)\, \phi(x)\, dx$$

is regarded as the directional derivative at the point ρ in the direction of φ. This is analogous to vector calculus, where the inner product of a vector v with the gradient gives the directional derivative in the direction of v. Like the derivative of a function, the functional derivative satisfies properties analogous to those of the ordinary derivative (linearity, a product rule, and chain rules), where F[ρ] and G[ρ] are functionals. [ Note 3 ]

A formula to determine functional derivatives for a common class of functionals can be written as the integral of a function and its derivatives. This is a generalization of the Euler–Lagrange equation : indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century). Given a functional

$$F[\rho] = \int f(\boldsymbol{r}, \rho(\boldsymbol{r}), \nabla\rho(\boldsymbol{r}))\, d\boldsymbol{r},$$

and a function φ(r) that vanishes on the boundary of the region of integration, from the previous section Definition,

$$\begin{aligned} \int \frac{\delta F}{\delta \rho(\boldsymbol{r})}\, \phi(\boldsymbol{r})\, d\boldsymbol{r} &= \left[ \frac{d}{d\varepsilon} \int f(\boldsymbol{r}, \rho + \varepsilon\phi, \nabla\rho + \varepsilon\nabla\phi)\, d\boldsymbol{r} \right]_{\varepsilon=0} \\ &= \int \left( \frac{\partial f}{\partial \rho}\,\phi + \frac{\partial f}{\partial \nabla\rho} \cdot \nabla\phi \right) d\boldsymbol{r} \\ &= \int \left[ \frac{\partial f}{\partial \rho}\,\phi + \nabla \cdot \left( \frac{\partial f}{\partial \nabla\rho}\,\phi \right) - \left( \nabla \cdot \frac{\partial f}{\partial \nabla\rho} \right)\phi \right] d\boldsymbol{r} \\ &= \int \left[ \frac{\partial f}{\partial \rho}\,\phi - \left( \nabla \cdot \frac{\partial f}{\partial \nabla\rho} \right)\phi \right] d\boldsymbol{r} \\ &= \int \left( \frac{\partial f}{\partial \rho} - \nabla \cdot \frac{\partial f}{\partial \nabla\rho} \right) \phi(\boldsymbol{r})\, d\boldsymbol{r}. \end{aligned}$$

The second line is obtained using the total derivative , where ∂f/∂∇ρ is a derivative of a scalar with respect to a vector . [ Note 4 ] The third line was obtained by use of a product rule for divergence . The fourth line was obtained using the divergence theorem and the condition that φ = 0 on the boundary of the region of integration. Since φ is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is

$$\frac{\delta F}{\delta \rho(\boldsymbol{r})} = \frac{\partial f}{\partial \rho} - \nabla \cdot \frac{\partial f}{\partial \nabla\rho},$$

where ρ = ρ(r) and f = f(r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. (See the example Coulomb potential energy functional .)
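In one dimension this reduces to the Euler–Lagrange expression, which can be checked symbolically. A minimal sketch using the SymPy library; the integrand ρ² + (ρ′)² is an arbitrary choice for illustration:

```python
# Symbolic check of dF/drho = df/drho - d/dx (df/drho') for
# F[rho] = Integral(rho(x)**2 + rho'(x)**2, x). SymPy's euler_equations
# returns the Euler-Lagrange equations, i.e. the functional derivative
# set equal to zero.
from sympy import Function, symbols, diff
from sympy.calculus.euler import euler_equations

x = symbols('x')
rho = Function('rho')

f = rho(x)**2 + diff(rho(x), x)**2      # example integrand
eqs = euler_equations(f, rho(x), x)

# Expected: 2*rho(x) - 2*rho''(x) = 0, so the functional derivative
# is 2*rho(x) - 2*Derivative(rho(x), (x, 2)).
print(eqs[0])
```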
The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. The functional would be

$$F[\rho(\boldsymbol{r})] = \int f(\boldsymbol{r}, \rho(\boldsymbol{r}), \nabla\rho(\boldsymbol{r}), \nabla^{(2)}\rho(\boldsymbol{r}), \dots, \nabla^{(N)}\rho(\boldsymbol{r}))\, d\boldsymbol{r},$$

where the vector r ∈ ℝⁿ, and ∇⁽ⁱ⁾ is a tensor whose nⁱ components are partial derivative operators of order i,

$$\left[\nabla^{(i)}\right]_{\alpha_1 \alpha_2 \cdots \alpha_i} = \frac{\partial^{\,i}}{\partial r_{\alpha_1} \partial r_{\alpha_2} \cdots \partial r_{\alpha_i}}, \qquad \alpha_1, \alpha_2, \dots, \alpha_i = 1, 2, \dots, n.$$

[ Note 5 ] An analogous application of the definition of the functional derivative yields

$$\begin{aligned} \frac{\delta F[\rho]}{\delta \rho} &= \frac{\partial f}{\partial \rho} - \nabla \cdot \frac{\partial f}{\partial(\nabla\rho)} + \nabla^{(2)} \cdot \frac{\partial f}{\partial\left(\nabla^{(2)}\rho\right)} + \dots + (-1)^N \nabla^{(N)} \cdot \frac{\partial f}{\partial\left(\nabla^{(N)}\rho\right)} \\ &= \frac{\partial f}{\partial \rho} + \sum_{i=1}^N (-1)^i \nabla^{(i)} \cdot \frac{\partial f}{\partial\left(\nabla^{(i)}\rho\right)}. \end{aligned}$$

In the last two equations, the nⁱ components of the tensor ∂f/∂(∇⁽ⁱ⁾ρ) are partial derivatives of f with respect to partial derivatives of ρ,

$$\left[\frac{\partial f}{\partial\left(\nabla^{(i)}\rho\right)}\right]_{\alpha_1 \alpha_2 \cdots \alpha_i} = \frac{\partial f}{\partial \rho_{\alpha_1 \alpha_2 \cdots \alpha_i}}, \qquad \text{where} \quad \rho_{\alpha_1 \alpha_2 \cdots \alpha_i} \equiv \frac{\partial^{\,i}\rho}{\partial r_{\alpha_1}\, \partial r_{\alpha_2} \cdots \partial r_{\alpha_i}},$$

and the tensor scalar product is

$$\nabla^{(i)} \cdot \frac{\partial f}{\partial\left(\nabla^{(i)}\rho\right)} = \sum_{\alpha_1, \alpha_2, \cdots, \alpha_i = 1}^n \frac{\partial^{\,i}}{\partial r_{\alpha_1}\, \partial r_{\alpha_2} \cdots \partial r_{\alpha_i}}\, \frac{\partial f}{\partial \rho_{\alpha_1 \alpha_2 \cdots \alpha_i}}.$$

[ Note 6 ] The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of density-functional theory of electronic structure:

$$T_\mathrm{TF}[\rho] = C_\mathrm{F} \int \rho^{5/3}(\mathbf{r})\, d\mathbf{r}.$$

Since the integrand of T_TF[ρ] does not involve derivatives of ρ(r), the functional derivative of T_TF[ρ] is [ 12 ]

$$\frac{\delta T_\mathrm{TF}}{\delta \rho(\boldsymbol{r})} = C_\mathrm{F}\, \frac{\partial \rho^{5/3}(\mathbf{r})}{\partial \rho(\mathbf{r})} = \frac{5}{3} C_\mathrm{F}\, \rho^{2/3}(\mathbf{r}).$$

The electron-nucleus potential energy is

$$V[\rho] = \int \frac{\rho(\boldsymbol{r})}{|\boldsymbol{r}|}\, d\boldsymbol{r}.$$

Applying the definition of functional derivative,

$$\int \frac{\delta V}{\delta \rho(\boldsymbol{r})}\, \phi(\boldsymbol{r})\, d\boldsymbol{r} = \left[ \frac{d}{d\varepsilon} \int \frac{\rho(\boldsymbol{r}) + \varepsilon\phi(\boldsymbol{r})}{|\boldsymbol{r}|}\, d\boldsymbol{r} \right]_{\varepsilon=0} = \int \frac{\phi(\boldsymbol{r})}{|\boldsymbol{r}|}\, d\boldsymbol{r}.$$

So,

$$\frac{\delta V}{\delta \rho(\boldsymbol{r})} = \frac{1}{|\boldsymbol{r}|}.$$
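Closed-form results like the Thomas–Fermi derivative above are easy to sanity-check numerically. A minimal sketch, assuming a uniform grid and C_F = 1 (all illustrative choices); the finite-difference directional derivative at a single grid point is compared against (5/3)·C_F·ρ^(2/3):

```python
# Finite-difference check of the Thomas-Fermi functional derivative:
# T[rho] = C_F * Integral(rho**(5/3)); expected dT/drho(r) = (5/3)*C_F*rho(r)**(2/3).
import numpy as np

C_F = 1.0                                  # arbitrary constant for the demo
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # arbitrary smooth positive density

def T(r):
    # Riemann-sum discretization of the integral
    return C_F * np.sum(r ** (5.0 / 3.0)) * dx

# Perturb rho only at grid point i (a discrete stand-in for a delta bump);
# (T[rho + eps*delta_i] - T[rho]) / (eps * dx) approximates dT/drho at x[i].
i, eps = 400, 1e-6
bump = np.zeros_like(rho)
bump[i] = 1.0
numeric = (T(rho + eps * bump) - T(rho)) / (eps * dx)
analytic = (5.0 / 3.0) * C_F * rho[i] ** (2.0 / 3.0)
print(numeric, analytic)                   # the two values should agree closely
```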
The classical part of the electron-electron interaction (often called the Hartree energy) is

$$J[\rho] = \frac{1}{2} \iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d\mathbf{r}\, d\mathbf{r}'.$$

From the definition of the functional derivative,

$$\begin{aligned} \int \frac{\delta J}{\delta \rho(\boldsymbol{r})}\, \phi(\boldsymbol{r})\, d\boldsymbol{r} &= \left[ \frac{d}{d\varepsilon}\, J[\rho + \varepsilon\phi] \right]_{\varepsilon=0} \\ &= \left[ \frac{d}{d\varepsilon} \left( \frac{1}{2} \iint \frac{[\rho(\boldsymbol{r}) + \varepsilon\phi(\boldsymbol{r})]\,[\rho(\boldsymbol{r}') + \varepsilon\phi(\boldsymbol{r}')]}{|\boldsymbol{r} - \boldsymbol{r}'|}\, d\boldsymbol{r}\, d\boldsymbol{r}' \right) \right]_{\varepsilon=0} \\ &= \frac{1}{2} \iint \frac{\rho(\boldsymbol{r}')\,\phi(\boldsymbol{r})}{|\boldsymbol{r} - \boldsymbol{r}'|}\, d\boldsymbol{r}\, d\boldsymbol{r}' + \frac{1}{2} \iint \frac{\rho(\boldsymbol{r})\,\phi(\boldsymbol{r}')}{|\boldsymbol{r} - \boldsymbol{r}'|}\, d\boldsymbol{r}\, d\boldsymbol{r}'. \end{aligned}$$

The first and second terms on the right hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. Therefore,

$$\int \frac{\delta J}{\delta \rho(\boldsymbol{r})}\, \phi(\boldsymbol{r})\, d\boldsymbol{r} = \int \left( \int \frac{\rho(\boldsymbol{r}')}{|\boldsymbol{r} - \boldsymbol{r}'|}\, d\boldsymbol{r}' \right) \phi(\boldsymbol{r})\, d\boldsymbol{r},$$

and the functional derivative of the electron-electron Coulomb potential energy functional J[ρ] is [ 13 ]

$$\frac{\delta J}{\delta \rho(\boldsymbol{r})} = \int \frac{\rho(\boldsymbol{r}')}{|\boldsymbol{r} - \boldsymbol{r}'|}\, d\boldsymbol{r}'.$$

The second functional derivative is

$$\frac{\delta^2 J[\rho]}{\delta \rho(\mathbf{r}')\, \delta \rho(\mathbf{r})} = \frac{\partial}{\partial \rho(\mathbf{r}')} \left( \frac{\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \right) = \frac{1}{|\mathbf{r} - \mathbf{r}'|}.$$

In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas-Fermi kinetic energy functional to make it better suit a molecular electron cloud:

$$T_\mathrm{W}[\rho] = \frac{1}{8} \int \frac{\nabla\rho(\mathbf{r}) \cdot \nabla\rho(\mathbf{r})}{\rho(\mathbf{r})}\, d\mathbf{r} = \int t_\mathrm{W}(\mathbf{r})\, d\mathbf{r}, \qquad \text{where} \quad t_\mathrm{W} \equiv \frac{1}{8}\frac{\nabla\rho \cdot \nabla\rho}{\rho} \quad \text{and} \quad \rho = \rho(\boldsymbol{r}).$$

Using a previously derived formula for the functional derivative,

$$\frac{\delta T_\mathrm{W}}{\delta \rho} = \frac{\partial t_\mathrm{W}}{\partial \rho} - \nabla \cdot \frac{\partial t_\mathrm{W}}{\partial \nabla\rho} = -\frac{1}{8}\frac{\nabla\rho \cdot \nabla\rho}{\rho^2} - \left( \frac{1}{4}\frac{\nabla^2\rho}{\rho} - \frac{1}{4}\frac{\nabla\rho \cdot \nabla\rho}{\rho^2} \right), \qquad \text{where} \quad \nabla^2 = \nabla \cdot \nabla,$$

and the result is [ 14 ]

$$\frac{\delta T_\mathrm{W}}{\delta \rho} = \frac{1}{8}\frac{\nabla\rho \cdot \nabla\rho}{\rho^2} - \frac{1}{4}\frac{\nabla^2\rho}{\rho}.$$

The entropy of a discrete random variable is a functional of the probability mass function ,

$$H[p(x)] = -\sum_x p(x) \log p(x).$$

Thus,

$$\begin{aligned} \sum_x \frac{\delta H}{\delta p(x)}\, \phi(x) &= \left[ \frac{d}{d\varepsilon} H[p(x) + \varepsilon\phi(x)] \right]_{\varepsilon=0} \\ &= \left[ -\frac{d}{d\varepsilon} \sum_x\, [p(x) + \varepsilon\phi(x)]\, \log[p(x) + \varepsilon\phi(x)] \right]_{\varepsilon=0} \\ &= -\sum_x\, [1 + \log p(x)]\, \phi(x), \end{aligned}$$

so

$$\frac{\delta H}{\delta p(x)} = -1 - \log p(x).$$
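The entropy result can likewise be verified numerically; a minimal sketch (the distribution and perturbation are arbitrary choices, and the natural logarithm is assumed):

```python
# Check dH/dp(x) = -1 - log p(x) for H[p] = -sum_x p(x) log p(x)
# by comparing a finite-difference directional derivative with the formula.
import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4])        # arbitrary probability mass function
phi = np.array([1.0, -1.0, 2.0, -2.0])    # arbitrary perturbation; sums to 0,
                                          # so p + eps*phi stays normalized

def H(q):
    return -np.sum(q * np.log(q))

eps = 1e-7
numeric = (H(p + eps * phi) - H(p)) / eps          # directional derivative
analytic = np.sum((-1.0 - np.log(p)) * phi)        # sum_x dH/dp(x) * phi(x)
print(numeric, analytic)                           # should agree closely
```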
Consider the exponential functional

$$F[\varphi(x)] = e^{\int \varphi(x) g(x)\, dx}.$$

Using the delta function as a test function,

$$\begin{aligned} \frac{\delta F[\varphi(x)]}{\delta \varphi(y)} &= \lim_{\varepsilon \to 0} \frac{F[\varphi(x) + \varepsilon\delta(x - y)] - F[\varphi(x)]}{\varepsilon} \\ &= \lim_{\varepsilon \to 0} \frac{e^{\int (\varphi(x) + \varepsilon\delta(x - y)) g(x)\, dx} - e^{\int \varphi(x) g(x)\, dx}}{\varepsilon} \\ &= e^{\int \varphi(x) g(x)\, dx} \lim_{\varepsilon \to 0} \frac{e^{\varepsilon \int \delta(x - y) g(x)\, dx} - 1}{\varepsilon} \\ &= e^{\int \varphi(x) g(x)\, dx} \lim_{\varepsilon \to 0} \frac{e^{\varepsilon g(y)} - 1}{\varepsilon} \\ &= e^{\int \varphi(x) g(x)\, dx}\, g(y). \end{aligned}$$

Thus,

$$\frac{\delta F[\varphi(x)]}{\delta \varphi(y)} = g(y)\, F[\varphi(x)].$$

This is particularly useful in calculating the correlation functions from the partition function in quantum field theory . A function can be written in the form of an integral like a functional. For example,

$$\rho(\boldsymbol{r}) = F[\rho] = \int \rho(\boldsymbol{r}')\, \delta(\boldsymbol{r} - \boldsymbol{r}')\, d\boldsymbol{r}'.$$

Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is

$$\frac{\delta \rho(\boldsymbol{r})}{\delta \rho(\boldsymbol{r}')} \equiv \frac{\delta F}{\delta \rho(\boldsymbol{r}')} = \frac{\partial}{\partial \rho(\boldsymbol{r}')}\, \left[ \rho(\boldsymbol{r}')\, \delta(\boldsymbol{r} - \boldsymbol{r}') \right] = \delta(\boldsymbol{r} - \boldsymbol{r}').$$

The functional derivative of the iterated function f(f(x)) is given by

$$\frac{\delta f(f(x))}{\delta f(y)} = f'(f(x))\, \delta(x - y) + \delta(f(x) - y)$$

and

$$\frac{\delta f(f(f(x)))}{\delta f(y)} = f'(f(f(x)))\left( f'(f(x))\, \delta(x - y) + \delta(f(x) - y) \right) + \delta(f(f(x)) - y).$$

In general,

$$\frac{\delta f^N(x)}{\delta f(y)} = f'(f^{N-1}(x))\, \frac{\delta f^{N-1}(x)}{\delta f(y)} + \delta(f^{N-1}(x) - y).$$

Putting in N = 0 gives

$$\frac{\delta f^{-1}(x)}{\delta f(y)} = -\frac{\delta(f^{-1}(x) - y)}{f'(f^{-1}(x))}.$$

In physics, it is common to use the Dirac delta function δ(x − y) in place of a generic test function φ(x), for yielding the functional derivative at the point y (this is a point of the whole functional derivative as a partial derivative is a component of the gradient): [ 15 ]

$$\frac{\delta F[\rho(x)]}{\delta \rho(y)} = \lim_{\varepsilon \to 0} \frac{F[\rho(x) + \varepsilon\delta(x - y)] - F[\rho(x)]}{\varepsilon}.$$

This works in cases when F[ρ(x) + εf(x)] formally can be expanded as a series (or at least up to first order) in ε.
The formula is however not mathematically rigorous, since F[ρ(x) + εδ(x − y)] is usually not even defined. The definition given in a previous section is based on a relationship that holds for all test functions φ(x), so one might think that it should hold also when φ(x) is chosen to be a specific function such as the delta function . However, the latter is not a valid test function (it is not even a proper function). In the definition, the functional derivative describes how the functional F[ρ(x)] changes as a result of a small change in the entire function ρ(x). The particular form of the change in ρ(x) is not specified, but it should stretch over the whole interval on which x is defined. Employing the particular form of the perturbation given by the delta function means that ρ(x) is varied only at the point y; except for this point, there is no variation in ρ(x).
https://en.wikipedia.org/wiki/Functional_derivative
Functional design is a paradigm used to simplify the design of hardware and software devices such as computer software and, increasingly, 3D models . A functional design assures that each modular part of a device has only one responsibility and performs that responsibility with a minimum of side effects on other parts. Functionally designed modules tend to have low coupling . The advantage for implementation is that if a software module has a single purpose, it will be simpler, and therefore easier and less expensive, to design and implement. Systems with functionally designed parts are easier to modify because each part does only what it claims to do. Since maintenance accounts for more than three-quarters of a successful system's life, [ 1 ] this feature is a crucial advantage. It also makes the system easier to understand and document, which simplifies training. The result is that the practical lifetime of a functional system is longer. In a system of programs, a functional module will be easier to reuse because it is less likely to have side effects that appear in other parts of the system. The standard way to assure functional design is to review the description of a module. If the description includes conjunctions such as "and" or "or", then the module has more than one responsibility, and is therefore likely to have side effects; the responsibilities need to be divided into several modules in order to achieve a functional design (a code sketch of this refactoring appears at the end of this article). Every computer system has parts that cannot be functionally pure because they exist to distribute CPU cycles or other resources to different modules. For example, most systems have an "initialization" section that starts up the modules. Other well-known examples are the interrupt vector table and the main loop . Some functions inherently have mixed semantics. For example, a function "move the car from the garage" inherently has a side effect of changing the "car position". In some cases, the mixed semantics can extend over a large topological tree or graph of related concepts. In these unusual cases, functional design is not recommended by some authorities; [ citation needed ] instead, polymorphism , inheritance , or procedural methods may be preferred. Recently several software companies have introduced functional design as a concept to describe a parametric feature-based modeler for 3D modeling and simulation. In this context, they mean a parametric model of an object where the parameters are tied to real-world design criteria, such as an axle that will adjust its diameter based on the strength of the material and the amount of force being applied to it in the simulation. It is hoped that this will create efficiencies in the design process for mechanical and perhaps even architectural/structural assemblies by integrating the results of finite element analysis directly into the behavior of individual objects.
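As an illustration of the single-responsibility heuristic discussed above, a minimal sketch in Python (the function and database names are invented for the example):

```python
# Mixed responsibility: "validate AND save" -- harder to reuse and test,
# and a save failure is a side effect hidden behind validation.
def validate_and_save_order(order, db):
    if order["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    db.insert("orders", order)

# Functional design: each module has one responsibility and no hidden
# side effects; the caller composes them explicitly.
def validate_order(order):
    if order["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return order

def save_order(order, db):
    db.insert("orders", order)

# Usage: save_order(validate_order(order), db)
```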
https://en.wikipedia.org/wiki/Functional_design
Functional divergence is the process by which genes, after gene duplication , shift in function from the ancestral function. Functional divergence can result in either subfunctionalization , where a paralog specializes in one of several ancestral functions, or neofunctionalization , where a totally new functional capability evolves. It is thought that this process of gene duplication and functional divergence is a major originator of molecular novelty and has produced the many large protein families that exist today. [ 1 ] [ 2 ] Functional divergence is just one possible outcome of gene duplication events. Other fates include nonfunctionalization, where one of the paralogs acquires deleterious mutations and becomes a pseudogene , and superfunctionalization (reinforcement), [ 3 ] where both paralogs maintain the original function. While gene, chromosome, or whole genome duplication events are considered the canonical sources of functional divergence of paralogs , orthologs (genes descended from speciation events) can also undergo functional divergence, [ 4 ] [ 5 ] [ 6 ] [ 7 ] and horizontal gene transfer can also result in multiple copies of a gene in a genome, providing the opportunity for functional divergence. Many well known protein families are the result of this process, such as the ancient gene duplication event that led to the divergence of hemoglobin and myoglobin , the more recent duplication events that led to the various subunit expansions (alpha and beta) of vertebrate hemoglobins , [ 8 ] and the expansion of G-protein alpha subunits. [ 9 ]
https://en.wikipedia.org/wiki/Functional_divergence
Functional genomics is a field of molecular biology that attempts to describe gene (and protein ) functions and interactions. Functional genomics makes use of the vast data generated by genomic and transcriptomic projects (such as genome sequencing projects and RNA sequencing ). Functional genomics focuses on the dynamic aspects such as gene transcription , translation , regulation of gene expression and protein–protein interactions , as opposed to the static aspects of the genomic information such as DNA sequence or structures. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach. In order to understand functional genomics, it is important to first define function. In their paper, [ 1 ] Graur et al. define function in two possible ways: "selected effect" and "causal role". The "selected effect" function refers to the function for which a trait (DNA, RNA, protein, etc.) was selected. The "causal role" function refers to the function that a trait is sufficient and necessary for. Functional genomics usually tests the "causal role" definition of function. The goal of functional genomics is to understand the function of genes or proteins, eventually all components of a genome. The term functional genomics is often used to refer to the many technical approaches to studying an organism's genes and proteins, including the "biochemical, cellular, and/or physiological properties of each and every gene product", [ 2 ] while some authors include the study of nongenic elements in their definition. [ 3 ] Functional genomics may also include studies of natural genetic variation over time (such as an organism's development) or space (such as its body regions), as well as functional disruptions such as mutations. The promise of functional genomics is to generate and synthesize genomic and proteomic knowledge into an understanding of the dynamic properties of an organism. This could potentially provide a more complete picture of how the genome specifies function compared to studies of single genes. Integration of functional genomics data is often a part of systems biology approaches. Functional genomics includes function-related aspects of the genome itself, such as mutation and polymorphism (for example, single nucleotide polymorphism (SNP) analysis), as well as the measurement of molecular activities. The latter comprise a number of "- omics " such as transcriptomics ( gene expression ), proteomics ( protein production ), and metabolomics . Functional genomics uses mostly multiplex techniques to measure the abundance of many or all gene products, such as mRNAs or proteins, within a biological sample . A more focused functional genomics approach might test the function of all variants of one gene and quantify the effects of mutants by using sequencing as a readout of activity. Together these measurement modalities endeavor to quantitate the various biological processes and improve our understanding of gene and protein functions and interactions. Systematic pairwise deletion of genes or inhibition of gene expression can be used to identify genes with related function, even if they do not interact physically. Epistasis refers to the fact that the effects of two different gene knockouts may not be additive; that is, the phenotype that results when two genes are inhibited may be different from the sum of the effects of the single knockouts.
Proteins formed by translation of mRNA (messenger RNA, which carries coded information from DNA for protein synthesis) play a major role in regulating gene expression. To understand how they regulate gene expression, it is necessary to identify the DNA sequences that they interact with, and techniques have been developed to identify sites of DNA-protein interaction. These include ChIP-sequencing , CUT&RUN sequencing and Calling Cards. [ 4 ] Assays have also been developed to identify regions of the genome that are accessible; these regions of accessible chromatin are candidate regulatory regions. Such assays include ATAC-seq , DNase-Seq and FAIRE-Seq . Microarrays measure the amount of mRNA in a sample that corresponds to a given gene or probe DNA sequence. Probe sequences are immobilized on a solid surface and allowed to hybridize with fluorescently labeled "target" mRNA. The intensity of fluorescence of a spot is proportional to the amount of target sequence that has hybridized to that spot, and therefore to the abundance of that mRNA sequence in the sample. Microarrays allow for the identification of candidate genes involved in a given process, based on variation between transcript levels for different conditions and shared expression patterns with genes of known function. Serial analysis of gene expression (SAGE) is an alternate method of analysis based on RNA sequencing rather than hybridization. SAGE relies on the sequencing of 10–17 base pair tags which are unique to each gene. These tags are produced from poly-A mRNA and ligated end-to-end before sequencing. SAGE gives an unbiased measurement of the number of transcripts per cell, since it does not depend on prior knowledge of which transcripts to study (as microarrays do). RNA sequencing has taken over from microarray and SAGE technology in recent years, as noted in 2016, and has become the most efficient way to study transcription and gene expression, typically by next-generation sequencing . [ 5 ] A subset of sequenced RNAs are small RNAs, a class of non-coding RNA molecules that are key regulators of transcriptional and post-transcriptional gene silencing, or RNA silencing . Next-generation sequencing is the gold standard tool for non-coding RNA discovery, profiling and expression analysis. Massively parallel reporter assays (MPRAs) are a technology to test the cis-regulatory activity of DNA sequences. [ 6 ] [ 7 ] MPRAs use a plasmid with a synthetic cis-regulatory element upstream of a promoter driving a synthetic gene such as green fluorescent protein. A library of cis-regulatory elements, which can contain from hundreds to thousands of elements, is usually tested in an MPRA. The cis-regulatory activity of the elements is assayed using the downstream reporter activity, and the activity of all library members is assayed in parallel using barcodes for each cis-regulatory element. One limitation of MPRAs is that the activity is assayed on a plasmid and may not capture all aspects of gene regulation observed in the genome. STARR-seq is a technique similar to MPRAs that assays the enhancer activity of randomly sheared genomic fragments. In the original publication, [ 8 ] randomly sheared fragments of the Drosophila genome were placed downstream of a minimal promoter; candidate enhancers among the fragments transcribe themselves using the minimal promoter. By using sequencing as a readout and controlling for input amounts of each sequence, the strength of putative enhancers is assayed by this method.
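Conceptually, the readout of both MPRA and STARR-seq reduces to comparing barcode (or fragment) counts in the RNA output with counts in the DNA input. A minimal sketch of that computation, with invented counts:

```python
# Toy MPRA-style readout: activity of each cis-regulatory element is
# estimated as log2(RNA barcode frequency / DNA input frequency),
# normalizing by sequencing depth. Counts below are invented.
import math

dna_counts = {"element_A": 500, "element_B": 480, "element_C": 510}
rna_counts = {"element_A": 2100, "element_B": 450, "element_C": 60}

dna_total = sum(dna_counts.values())
rna_total = sum(rna_counts.values())

for element in dna_counts:
    dna_freq = dna_counts[element] / dna_total
    rna_freq = rna_counts[element] / rna_total
    activity = math.log2(rna_freq / dna_freq)
    print(element, round(activity, 2))  # >0 suggests enhancer-like activity
```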
Perturb-seq couples CRISPR-mediated gene knockdowns with single-cell gene expression. Linear models are used to calculate the effect of the knockdown of a single gene on the expression of multiple genes. A yeast two-hybrid screen (Y2H) tests a "bait" protein against many potential interacting proteins ("prey") to identify physical protein–protein interactions. This system is based on a transcription factor, originally GAL4, [ 9 ] whose separate DNA-binding and transcription activation domains are both required in order for the protein to cause transcription of a reporter gene . In a Y2H screen, the "bait" protein is fused to the binding domain of GAL4, and a library of potential "prey" (interacting) proteins is recombinantly expressed in a vector with the activation domain. In vivo interaction of bait and prey proteins in a yeast cell brings the activation and binding domains of GAL4 close enough together to result in expression of a reporter gene. It is also possible to systematically test a library of bait proteins against a library of prey proteins to identify all possible interactions in a cell. Mass spectrometry (MS) can identify proteins and their relative levels, and hence can be used to study protein expression. When used in combination with affinity purification , mass spectrometry (AP/MS) can be used to study protein complexes: which proteins interact with one another in complexes, and in which ratios. In order to purify protein complexes, usually a "bait" protein is tagged with a specific protein or peptide that can be used to pull the complex out of a complex mixture. The purification is usually done using an antibody or a compound that binds to the fusion part. The proteins are then digested into short peptide fragments and mass spectrometry is used to identify the proteins based on the mass-to-charge ratios of those fragments. In deep mutational scanning, every possible amino acid change in a given protein is first synthesized. [ 10 ] The activity of each of these protein variants is assayed in parallel using barcodes for each variant. [ 11 ] By comparing the activity to the wild-type protein, the effect of each mutation is identified. While it is possible to assay every single amino-acid change, combinatorics makes two or more concurrent mutations hard to test. Deep mutational scanning experiments have also been used to infer protein structure and protein-protein interactions. [ 12 ] Deep mutational scanning is an example of multiplexed assays of variant effect (MAVEs), a family of methods that involve mutagenesis of a DNA-encoded protein or regulatory element followed by a multiplexed assay for some aspect of function. MAVEs enable the generation of 'variant effect maps' characterizing aspects of the function of every possible single nucleotide change in a gene or functional element of interest. [ 13 ]
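The core computation in a deep mutational scan is an enrichment score per variant, derived from barcode counts before and after selection. A minimal sketch, with invented counts and variant names:

```python
# Toy deep-mutational-scanning scoring: each variant's fitness is the
# log2 change of its barcode frequency across selection, relative to
# wild type. Counts and variant names are invented for illustration.
import math

before = {"WT": 10000, "A23V": 9000, "G45D": 8500, "L77P": 9500}
after  = {"WT": 12000, "A23V": 11500, "G45D":  900, "L77P":  200}

def freq(counts, variant):
    return counts[variant] / sum(counts.values())

wt_ratio = freq(after, "WT") / freq(before, "WT")
for variant in before:
    ratio = freq(after, variant) / freq(before, variant)
    score = math.log2(ratio / wt_ratio)   # ~0 = WT-like; strongly negative = damaging
    print(variant, round(score, 2))
```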
An important functional feature of genes is the phenotype caused by mutations. Mutants can be produced by random mutations or by directed mutagenesis, including site-directed mutagenesis, deletion of complete genes, or other techniques. Gene function can be investigated by systematically "knocking out" genes one by one. This is done by either deletion or disruption of function (such as by insertional mutagenesis ), and the resulting organisms are screened for phenotypes that provide clues to the function of the disrupted gene. Knock-outs have been produced for whole genomes, i.e. by deleting all genes in a genome. For essential genes , this is not possible, so other techniques are used, e.g. deleting a gene while expressing the gene from a plasmid , using an inducible promoter, so that the level of gene product can be changed at will (and thus a "functional" deletion achieved). Site-directed mutagenesis is used to mutate specific bases (and thus amino acids ). This is critical to investigate the function of specific amino acids in a protein, e.g. in the active site of an enzyme . RNA interference (RNAi) methods can be used to transiently silence or knock down gene expression using ~20 base-pair double-stranded RNA, typically delivered by transfection of synthetic ~20-mer short-interfering RNA molecules (siRNAs) or by virally encoded short-hairpin RNAs (shRNAs). RNAi screens, typically performed in cell culture-based assays or experimental organisms (such as C. elegans ), can be used to systematically disrupt nearly every gene in a genome or subsets of genes (sub-genomes); possible functions of disrupted genes can be assigned based on observed phenotypes . CRISPR-Cas9 has been used to delete genes in a multiplexed manner in cell lines. Quantifying the amount of guide RNA for each gene before and after the experiment can point towards essential genes: if a guide RNA disrupts an essential gene, it will lead to the loss of that cell, and hence that guide RNA will be depleted after the screen (a computational sketch of this readout follows below). In a recent CRISPR-Cas9 experiment in mammalian cell lines, around 2,000 genes were found to be essential in multiple cell lines. [ 15 ] [ 16 ] Some of these genes were essential in only one cell line, and most are parts of multi-protein complexes. This approach can be used to identify synthetic lethality by using the appropriate genetic background. CRISPRi and CRISPRa enable loss-of-function and gain-of-function screens in a similar manner. CRISPRi identified ~2,100 essential genes in the K562 cell line. [ 17 ] [ 18 ] CRISPR deletion screens have also been used to identify potential regulatory elements of a gene. For example, a technique called ScanDel was published which attempted this approach: the authors deleted regions outside a gene of interest (HPRT1, involved in a Mendelian disorder) in an attempt to identify regulatory elements of this gene. [ 19 ] Gasperini et al. did not identify any distal regulatory elements for HPRT1 using this approach; however, such approaches can be extended to other genes of interest.
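As noted above, the essentiality readout of such a screen is, at its core, a log-fold change of guide-RNA abundance between the start and the end of the experiment. A minimal sketch with invented counts (real analyses add replicates, normalization and statistical testing, for example with tools such as MAGeCK):

```python
# Toy CRISPR-screen analysis: guides targeting essential genes drop out,
# so their normalized counts fall between T0 and the final timepoint.
import math

t0    = {"geneA_g1": 1000, "geneA_g2": 900, "geneB_g1": 1100, "geneB_g2": 1000}
final = {"geneA_g1":   60, "geneA_g2":  45, "geneB_g1": 1200, "geneB_g2":  950}

t0_total, final_total = sum(t0.values()), sum(final.values())

def lfc(guide):
    # log2 fold change of depth-normalized guide abundance
    return math.log2((final[guide] / final_total) / (t0[guide] / t0_total))

# Average guide-level log-fold changes per gene; strongly negative
# values flag candidate essential genes (here, geneA).
for gene in ["geneA", "geneB"]:
    guides = [g for g in t0 if g.startswith(gene)]
    print(gene, round(sum(lfc(g) for g in guides) / len(guides), 2))
```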
Putative genes can be identified by scanning a genome for regions likely to encode proteins, based on characteristics such as long open reading frames , transcriptional initiation sequences, and polyadenylation sites. A sequence identified as a putative gene must be confirmed by further evidence, such as similarity to cDNA or EST sequences from the same organism, similarity of the predicted protein sequence to known proteins, association with promoter sequences, or evidence that mutating the sequence produces an observable phenotype. The Rosetta stone approach is a computational method for de-novo protein function prediction. It is based on the hypothesis that some proteins involved in a given physiological process may exist as two separate genes in one organism and as a single gene in another. Genomes are scanned for sequences that are independent in one organism and in a single open reading frame in another. If two genes have fused, it is predicted that they have similar biological functions that make such co-regulation advantageous. Because of the large quantity of data produced by these techniques and the desire to find biologically meaningful patterns, bioinformatics is crucial to the analysis of functional genomics data. Examples of techniques in this class are data clustering or principal component analysis for unsupervised machine learning (class detection), as well as artificial neural networks or support vector machines for supervised machine learning (class prediction, classification ). Functional enrichment analysis is used to determine the extent of over- or under-expression (positive or negative regulators in the case of RNAi screens) of functional categories relative to a background set. Gene ontology-based enrichment analysis is provided by DAVID and gene set enrichment analysis (GSEA), [ 20 ] pathway-based analysis by Ingenuity [ 21 ] and Pathway Studio, [ 22 ] and protein complex-based analysis by COMPLEAT. [ 23 ]
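The statistical core of functional enrichment analysis is often a hypergeometric (one-sided Fisher) test per functional category. A minimal sketch using SciPy, with invented set sizes:

```python
# Hypergeometric enrichment test: is a functional category over-represented
# in a hit list compared to the background? Numbers are invented.
from scipy.stats import hypergeom

N = 20000   # background genes
K = 300     # background genes annotated to the category (e.g. a GO term)
n = 150     # genes in the hit list
k = 12      # hit-list genes annotated to the category

# P(X >= k) under sampling without replacement.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```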
New computational methods have been developed for understanding the results of a deep mutational scanning experiment. 'phydms' compares the result of a deep mutational scanning experiment to a phylogenetic tree. [ 24 ] This allows the user to infer whether the selection process in nature applies constraints on a protein similar to those indicated by the deep mutational scan, which may allow an experimenter to choose between different experimental conditions based on how well they reflect nature. Deep mutational scanning has also been used to infer protein-protein interactions. [ 25 ] The authors used a thermodynamic model to predict the effects of mutations in different parts of a dimer. Deep mutational scanning can also be used to infer protein structure: strong positive epistasis between two mutations in a deep mutational scan can be indicative of two parts of the protein that are close to each other in 3-D space, and this information can then be used to infer the structure. A proof of principle of this approach was shown by two groups using the protein GB1. [ 26 ] [ 27 ] Results from MPRA experiments have required machine learning approaches to interpret the data. A gapped k-mer SVM model has been used to infer the k-mers that are enriched within cis-regulatory sequences with high activity compared to sequences with lower activity. [ 28 ] These models provide high predictive power. Deep learning and random forest approaches have also been used to interpret the results of these high-dimensional experiments. [ 29 ] These models are beginning to help develop a better understanding of non-coding DNA function in gene regulation. The ENCODE (Encyclopedia of DNA Elements) project is an in-depth analysis of the human genome whose goal is to identify all the functional elements of genomic DNA, in both coding and non-coding regions. Important results include evidence from genomic tiling arrays that most nucleotides are transcribed as coding transcripts, non-coding RNAs, or random transcripts, the discovery of additional transcriptional regulatory sites, and further elucidation of chromatin-modifying mechanisms. The GTEx project is a human genetics project aimed at understanding the role of genetic variation in shaping variation in the transcriptome across tissues. The project has collected a variety of tissue samples (more than 50 different tissues) from more than 700 post-mortem donors, resulting in a collection of over 11,000 samples. GTEx has helped understand the tissue-sharing and tissue-specificity of eQTLs . [ 30 ] The genomic resource was developed to "enrich our understanding of how differences in our DNA sequence contribute to health and disease." [ 31 ] The Atlas of Variant Effects Alliance (AVE), [ 32 ] founded in 2020, is an international consortium aiming to catalog the impact of all possible genetic variants for disease-related functional genomics by creating variant effect maps that reveal the function of every possible single nucleotide change in a gene or regulatory element. AVE is funded in part through the Brotman Baty Institute at the University of Washington and the National Human Genome Research Institute, via funding from the Center of Excellence in Genome Science grant (NHGRI RM1HG010461). [ 33 ]
https://en.wikipedia.org/wiki/Functional_genomics
In organic chemistry , a functional group is any substituent or moiety in a molecule that causes the molecule's characteristic chemical reactions . The same functional group will undergo the same or similar chemical reactions regardless of the rest of the molecule's composition. [ 1 ] [ 2 ] This enables systematic prediction of chemical reactions and the behavior of chemical compounds, and the design of chemical synthesis . The reactivity of a functional group can be modified by other functional groups nearby. Functional group interconversion can be used in retrosynthetic analysis to plan organic synthesis . A functional group is a group of atoms in a molecule with distinctive chemical properties , regardless of the other atoms in the molecule. The atoms in a functional group are linked to each other and to the rest of the molecule by covalent bonds . For repeating units of polymers , functional groups attach to their nonpolar core of carbon atoms and thus add chemical character to carbon chains. Functional groups can also be charged , e.g. in carboxylate salts ( −COO − ), which turns the molecule into a polyatomic ion or a complex ion . Functional groups binding to a central atom in a coordination complex are called ligands . Complexation and solvation are also caused by specific interactions of functional groups. In the common rule of thumb "like dissolves like", it is the shared or mutually well-interacting functional groups which give rise to solubility . For example, sugar dissolves in water because both share the hydroxyl functional group ( −OH ) and hydroxyls interact strongly with each other. Moreover, when functional groups are more electronegative than the atoms they attach to, the functional groups become polar, and the otherwise nonpolar molecules containing them become polar and hence soluble in some aqueous environments. Combining the names of functional groups with the names of the parent alkanes generates what is termed a systematic nomenclature for naming organic compounds . In traditional nomenclature, the first carbon atom after the carbon that attaches to the functional group is called the alpha carbon ; the second, the beta carbon; the third, the gamma carbon; and so on. If there is another functional group at a carbon, it may be named with the Greek letter, e.g., the gamma-amine in gamma-aminobutyric acid is on the third carbon of the carbon chain attached to the carboxylic acid group. IUPAC conventions call for numeric labeling of the position, e.g. 4-aminobutanoic acid. In traditional names various qualifiers are used to label isomers ; for example, isopropanol (IUPAC name: propan-2-ol) is an isomer of n-propanol (propan-1-ol). The term moiety has some overlap with the term "functional group". However, a moiety is an entire "half" of a molecule, which can be not only a single functional group, but also a larger unit consisting of multiple functional groups. For example, an "aryl moiety" may be any group containing an aromatic ring , regardless of how many functional groups the aryl carries.
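Because a functional group reacts predictably regardless of the parent molecule, functional groups are routinely detected programmatically by substructure search. A minimal sketch, assuming the open-source RDKit cheminformatics library is available; the molecules and SMARTS patterns are illustrative choices:

```python
# Detect functional groups by SMARTS substructure matching with RDKit.
from rdkit import Chem

# SMARTS patterns for two common groups (illustrative, not exhaustive).
patterns = {
    "hydroxyl":        Chem.MolFromSmarts("[OX2H]"),
    "carboxylic acid": Chem.MolFromSmarts("C(=O)[OX2H1]"),
}

molecules = {
    "ethanol":     Chem.MolFromSmiles("CCO"),
    "acetic acid": Chem.MolFromSmiles("CC(=O)O"),
}

for name, mol in molecules.items():
    groups = [g for g, patt in patterns.items() if mol.HasSubstructMatch(patt)]
    print(name, "->", groups)
# ethanol matches hydroxyl; acetic acid matches both patterns, since a
# carboxylic acid contains an O-H as part of its -COOH group.
```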
The following is a list of common functional groups. [ 3 ] In the formulas, the symbols R and R' usually denote an attached hydrogen, or a hydrocarbon side chain of any length, but may sometimes refer to any group of atoms. Hydrocarbons are a class of molecule defined by functional groups called hydrocarbyls , which contain only carbon and hydrogen but vary in the number and order of double bonds; each differs in type (and scope) of reactivity. There are also a large number of branched or ring alkanes that have specific names, e.g., tert-butyl , bornyl , cyclohexyl , etc. There are several functional groups that contain an alkene, such as the vinyl group , allyl group , or acrylic group . Hydrocarbons may form charged structures: positively charged carbocations or negative carbanions . Carbocations are often named with the suffix -um ; examples are the tropylium and triphenylmethyl cations and the cyclopentadienyl anion. Haloalkanes are a class of molecule defined by a carbon– halogen bond. This bond can be relatively weak (in the case of an iodoalkane) or quite stable (as in the case of a fluoroalkane). In general, with the exception of fluorinated compounds, haloalkanes readily undergo nucleophilic substitution reactions or elimination reactions . The substitution on the carbon, the acidity of an adjacent proton, the solvent conditions, etc. can all influence the outcome of the reactivity. Compounds that contain C–O bonds each possess differing reactivity based upon the location and hybridization of the C–O bond, owing to the electron-withdrawing effect of sp-hybridized oxygen (carbonyl groups) and the donating effects of sp2-hybridized oxygen (alcohol groups). Compounds that contain nitrogen in this category may also contain C–O bonds, as in the case of amides ; examples of nitrogen-containing groups include acetimidamide, alkyl nitrates, alkyl nitrites, and the 2-, 3- and 4-pyridyl groups (pyridin-2-yl, pyridin-3-yl, pyridin-4-yl). Compounds that contain sulfur exhibit unique chemistry due to sulfur's ability to form more bonds than oxygen, its lighter analogue on the periodic table. Substitutive nomenclature (marked as prefix in the table) is preferred over functional class nomenclature (marked as suffix in the table) for sulfides, disulfides, sulfoxides and sulfones. Compounds that contain phosphorus exhibit unique chemistry due to the ability of phosphorus to form more bonds than nitrogen, its lighter analogue on the periodic table. Compounds containing boron exhibit unique chemistry due to their having partially filled octets and therefore acting as Lewis acids . Examples of organometallic groups include methyllithium, methylmagnesium chloride, trimethylaluminium, and trimethylsilyl triflate; note that fluorine is too electronegative to be bonded to magnesium, as it becomes an ionic salt instead. These names are used to refer to the moieties themselves or to radical species, and also to form the names of halides and substituents in larger molecules. When the parent hydrocarbon is unsaturated, the suffix ("-yl", "-ylidene", or "-ylidyne") replaces "-ane" (e.g. "ethane" becomes "ethyl"); otherwise, the suffix replaces only the final "-e" (e.g. " ethyne " becomes " ethynyl "). [ 4 ] When used to refer to moieties, multiple single bonds differ from a single multiple bond. For example, a methylene bridge (methanediyl) has two single bonds, whereas a methylidene group has one double bond. Suffixes can be combined, as in methylidyne (triple bond) vs. methylylidene (single bond and double bond) vs. methanetriyl (three single bonds). There are some retained names, such as methylene for methanediyl, 1,x- phenylene for phenyl-1,x-diyl (where x is 2, 3, or 4), [ 5 ] carbyne for methylidyne, and trityl for triphenylmethyl.
https://en.wikipedia.org/wiki/Functional_group
A functional group is a collection of organisms that share characteristics within a community. Ideally, the members perform equivalent ecological tasks because of shared environmental pressures rather than because of a common ancestor or evolutionary relationship; such convergence can produce analogous structures even in the absence of homology . More specifically, these organisms respond in similar ways to external factors of the system they inhabit. [ 1 ] Because the majority of these organisms share an ecological niche , it is practical to assume they require similar structures in order to achieve the greatest fitness : the ability to reproduce successfully and to sustain life by avoiding predators and acquiring food. Rather than being derived from theory, functional groups are determined by direct observation; behavior and overall contribution to the community are common criteria, and researchers use the corresponding observed traits to link species to one another. Although the species themselves are different, variables based on overall function and performance are interchangeable. Members of a group occupy equivalent positions in energy flow , providing a key position within food chains and within the relationships of their environment(s). [ 2 ] An ecosystem is the biological organization comprising the interacting abiotic and biotic factors of an environment. [ 3 ] Whether producer or consumer, every organism maintains a critical position in the ongoing survival of its surroundings. Accordingly, a functional group occupies a specific role within any given ecosystem and its cycling of energy and nutrients. There are generally two types of functional groups, one applied to plants and one to animals. Groups that relate to vegetation science, or flora, are known as plant functional types, abbreviated PFT; these often share identical photosynthetic processes and require comparable nutrients. For example, plants that undergo photosynthesis share the same role of producing chemical energy for others. [ 4 ] In contrast, functional groups of animals are called guilds and typically share feeding types, which can be seen most simply in trophic levels : primary consumers, secondary consumers, tertiary consumers, and quaternary consumers. [ 5 ] Functional diversity is often referred to as "the value and the range of those species and organismal traits that influence ecosystem functioning". [ 6 ] Traits that make an organism unique, such as the way it moves, gathers resources, or reproduces, or the time of year it is active, [ 7 ] add to the overall diversity of an entire ecosystem , and therefore enhance the overall function, or productivity, of that ecosystem. [ 8 ] Functional diversity increases the overall productivity of an ecosystem by allowing an increase in niche occupation. Species have become more diverse through each epoch of geological time, [ 9 ] with plants and insects having some of the most diverse families discovered thus far. [ 10 ] The unique traits of an organism can allow a new niche to be occupied, allow for better defense against predators, and potentially lead to specialization.
Organismal-level functional diversity, which adds to the overall functional diversity of an ecosystem, is important for conservation efforts, especially in systems used for human consumption. [ 11 ] Functional diversity can be difficult to measure accurately, but when done correctly it provides useful insight into the overall function and stability of an ecosystem. [ 12 ] Functional redundancy refers to the phenomenon in which species in the same ecosystem fill similar roles, which results in a sort of "insurance" in the ecosystem: redundant species can readily do the job of a similar species from the same functional niche. [ 13 ] This is possible because similar species have adapted to fill the same niche over time. Functional redundancy varies across ecosystems and can vary from year to year depending on multiple factors, including habitat availability, overall species diversity, competition for resources, and anthropogenic influence. [ 14 ] This variation can lead to fluctuation in overall ecosystem production. It is not always known how many species occupy a functional niche, or how much redundancy, if any, occurs in each niche of an ecosystem. It is hypothesized that each important functional niche is filled by multiple species. As with functional diversity, there is no single clear method for calculating functional redundancy accurately, which can be problematic. One method is to account for the number of species occupying a functional niche, as well as the abundance of each species; this indicates how many individuals in an ecosystem are performing one function (see the sketch below). [ 15 ] Studies relating to functional diversity and redundancy occur in a large proportion of conservation and ecological research. As the human population increases, the need for ecosystem function increases with it; and as habitat destruction and modification continue to grow while suitable habitat for many species shrinks, this research becomes more important. As the human population continues to expand and urbanize, native and natural landscapes are disappearing, replaced with modified and managed land for human consumption. Alterations to landscapes are often accompanied by negative side effects, including fragmentation, species losses, and nutrient runoff, which can affect the stability and productivity of an ecosystem, as well as its functional diversity and functional redundancy, by decreasing species diversity. It has been shown that intense land use affects both species diversity and functional overlap, leaving the ecosystem and the organisms in it vulnerable. [ 16 ] Specifically, bee species, which we rely on for pollination services, have both lower functional diversity and lower species diversity in managed landscapes than in natural habitats, indicating that anthropogenic change can be detrimental to organismal functional diversity, and therefore to overall ecosystem functional diversity. [ 17 ] Additional research demonstrated that the functional redundancy of herbivorous insects in streams varies with stream velocity, showing that environmental factors can alter functional overlap. [ 18 ] When conservation efforts begin, it is still debated whether preserving specific species or preserving functional traits is the more beneficial approach to maintaining ecosystem function. Higher species diversity can lead to an increase in overall ecosystem productivity, but does not necessarily ensure functional overlap.
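A minimal illustration of that counting approach, assuming a hypothetical survey that assigns each species to one functional group along with its abundance (all names and numbers below are invented for the example):

    from collections import defaultdict

    # Hypothetical survey: species -> (functional group, abundance).
    survey = {
        "species A": ("grazer", 120),
        "species B": ("grazer", 45),
        "species C": ("pollinator", 200),
        "species D": ("pollinator", 15),
        "species E": ("decomposer", 60),
    }

    groups = defaultdict(lambda: {"species": 0, "individuals": 0})
    for group, abundance in survey.values():
        groups[group]["species"] += 1
        groups[group]["individuals"] += abundance

    # A group with a single species has no redundancy; more species
    # performing the same function mean more functional "insurance".
    for group, stats in groups.items():
        print(group, stats)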
In ecosystems with high redundancy, losing a species (which lowers overall functional diversity) will not always lower overall ecosystem function, owing to the high functional overlap; in such cases it is most important to conserve a group rather than an individual species. In ecosystems with dominant species that contribute a majority of the biomass output, it may be more beneficial to conserve that single species rather than a functional group. [ 19 ] The ecological concept of keystone species was redefined based on the presence of species with non-redundant trophic dynamics and measured biomass dominance within functional groups, which highlights the conservation benefits of protecting both species and their respective functional groups. [ 20 ] Understanding functional diversity and redundancy, and the roles each plays in conservation efforts, is often hard to accomplish because the tools with which we measure diversity and redundancy cannot be used interchangeably. For this reason, recent empirical work most often analyzes the effects of either functional diversity or functional redundancy, but not both, which does not create a complete picture of the factors influencing ecosystem production. In ecosystems with similar and diverse vegetation, functional diversity is more important for overall ecosystem stability and productivity. [ 21 ] In contrast, a study of the functional diversity of native bee species in highly managed landscapes provided evidence that higher functional redundancy leads to higher fruit production, something humans rely on heavily for food. [ 22 ] A recent paper has argued that until a more accurate measuring technique is universally adopted, it is too early to determine which species, or functional groups, are most vulnerable and susceptible to extinction. [ 23 ] Overall, understanding how extinction affects ecosystems, and which traits are most vulnerable, can help protect ecosystems as a whole. [ 24 ]
https://en.wikipedia.org/wiki/Functional_group_(ecology)
The concept of functional information is an attempt to rigorously define the information content of biological systems. The concept was originated by a group led by Jack W. Szostak in 2003. [ 1 ] They define the functional information of a system, with respect to a specified function, in terms of the fraction of all possible configurations of the system that achieve at least a given degree of that function. [ 2 ] [ 3 ] Note that the functional information of a system E must always be defined relative to a specific function x , without which it has no meaning. In 2023, a group of researchers proposed a law of increasing functional information , which asserts that a tendency to increase in functional information is an inherent property of the universe, encompassing both biological and non-biological systems. [ 4 ] [ 5 ] [ 6 ] This biophysics -related article is a stub . You can help Wikipedia by expanding it .
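A toy numerical sketch of that definition, following the formula I(Ex) = −log₂ F(Ex) from the cited papers, with an invented ensemble of binary strings standing in for the system's configurations and an arbitrary scoring function:

    import math
    from itertools import product

    def degree_of_function(config):
        # Invented stand-in "function": how many 1s the string contains.
        return sum(config)

    n = 10            # configurations are binary strings of length 10
    threshold = 8     # required degree of function, E_x

    configs = list(product([0, 1], repeat=n))
    achieving = sum(1 for c in configs if degree_of_function(c) >= threshold)
    fraction = achieving / len(configs)              # F(E_x)

    functional_information = -math.log2(fraction)    # I(E_x), in bits
    print(f"F(E_x) = {fraction:.4f}, I(E_x) = {functional_information:.2f} bits")

The rarer a sufficiently functional configuration is, the larger the functional information; a function achieved by every configuration carries zero bits.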
https://en.wikipedia.org/wiki/Functional_information
Functional integration is a collection of results in mathematics and physics where the domain of an integral is no longer a region of space, but a space of functions . Functional integrals arise in probability , in the study of partial differential equations , and in the path integral approach to the quantum mechanics of particles and fields. In an ordinary integral (in the sense of Lebesgue integration ) there is a function to be integrated (the integrand) and a region of space over which to integrate the function (the domain of integration). The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region, the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function, the integrand returns a value to add up. Making this procedure rigorous poses challenges that continue to be topics of current research. Functional integration was developed by Percy John Daniell in an article of 1919 [ 1 ] and Norbert Wiener in a series of studies culminating in his articles of 1921 on Brownian motion . They developed a rigorous method (now known as the Wiener measure ) for assigning a probability to a particle's random path. Richard Feynman developed another functional integral, the path integral , useful for computing the quantum properties of systems. In Feynman's path integral, the classical notion of a unique trajectory for a particle is replaced by an infinite sum of classical paths, each weighted differently according to its classical properties. Functional integration is central to quantization techniques in theoretical physics. The algebraic properties of functional integrals are used to develop series used to calculate properties in quantum electrodynamics and the standard model of particle physics. Whereas standard Riemann integration sums a function f ( x ) over a continuous range of values of x , functional integration sums a functional G [ f ], which can be thought of as a "function of a function", over a continuous range (or space) of functions f . Most functional integrals cannot be evaluated exactly but must be evaluated using perturbation methods . The formal definition of a functional integral is

$$\int G[f]\,\mathcal{D}[f] \equiv \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} G[f] \prod_x df(x)\,.$$

However, in most cases the functions f ( x ) can be written as an infinite series in orthogonal functions, such as $f(x) = \sum_n f_n H_n(x)$, and then the definition becomes

$$\int G[f]\,\mathcal{D}[f] \equiv \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} G(f_1; f_2; \ldots) \prod_n df_n\,,$$

which is slightly more understandable. The functional integral is denoted with a capital $\mathcal{D}$. Sometimes the argument is written in square brackets, $\mathcal{D}[f]$, to indicate the functional dependence of the function in the functional integration measure. Most functional integrals are actually infinite, but often the limit of the quotient of two related functional integrals can still be finite.
The functional integrals that can be evaluated exactly usually start with the following Gaussian integral , in which $K(x;y) = K(y;x)$:

$$\frac{\displaystyle\int \exp\left\{-\tfrac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy + \int_{\mathbb{R}} J(x)f(x)\,dx\right\} \mathcal{D}[f]}{\displaystyle\int \exp\left\{-\tfrac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\} \mathcal{D}[f]} = \exp\left\{\tfrac{1}{2}\int_{\mathbb{R}^2} J(x)K^{-1}(x;y)J(y)\,dx\,dy\right\}.$$

By functionally differentiating this with respect to J ( x ) and then setting J to 0, the integrand becomes an exponential multiplied by a monomial in f . To see this, let us use the following notation:

$$G[f,J] = -\tfrac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy + \int_{\mathbb{R}} J(x)f(x)\,dx\,, \qquad W[J] = \int \exp\{G[f,J]\}\,\mathcal{D}[f]\,.$$

With this notation the first equation can be written as

$$\frac{W[J]}{W[0]} = \exp\left\{\tfrac{1}{2}\int_{\mathbb{R}^2} J(x)K^{-1}(x;y)J(y)\,dx\,dy\right\}.$$

Now, taking functional derivatives of the definition of W [ J ] and then evaluating at J = 0, one obtains

$$\left.\frac{\delta W[J]}{\delta J(a)}\right|_{J=0} = \int f(a)\exp\{G[f,0]\}\,\mathcal{D}[f]\,, \qquad \left.\frac{\delta^2 W[J]}{\delta J(a)\,\delta J(b)}\right|_{J=0} = \int f(a)f(b)\exp\{G[f,0]\}\,\mathcal{D}[f]\,,$$

and so on, which is the result anticipated. Moreover, applying the same derivatives to the closed form of W [ J ] above, one arrives at the useful result

$$\left.\frac{\delta^2 W[J]}{\delta J(a)\,\delta J(b)}\right|_{J=0} = W[0]\,K^{-1}(a;b)\,.$$

Putting these results together and returning to the original notation, we have

$$\frac{\displaystyle\int f(a)f(b)\exp\left\{-\tfrac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\}\mathcal{D}[f]}{\displaystyle\int \exp\left\{-\tfrac{1}{2}\int_{\mathbb{R}^2} f(x)K(x;y)f(y)\,dx\,dy\right\}\mathcal{D}[f]} = K^{-1}(a;b)\,.$$

Another useful integral is the functional delta function ,

$$\delta[g] \equiv \int \exp\left\{i\int_{\mathbb{R}} g(x)f(x)\,dx\right\}\mathcal{D}[f]\,,$$

which is useful to specify constraints. Functional integrals can also be done over Grassmann-valued functions $\psi(x)$, for which $\psi(x)\psi(y) = -\psi(y)\psi(x)$; this is useful in quantum electrodynamics for calculations involving fermions . Functional integrals where the space of integration consists of paths (ν = 1) can be defined in many different ways. The definitions fall into two different classes: the constructions derived from Wiener's theory yield an integral based on a measure , whereas the constructions following Feynman's path integral do not. Even within these two broad divisions, the integrals are not identical, that is, they are defined differently for different classes of functions. In the Wiener integral , a probability is assigned to a class of Brownian motion paths. The class consists of the paths w that are known to go through a small region of space at a given time.
The passages through different regions of space are assumed independent of each other, and the displacement between any two points of the Brownian path is assumed to be Gaussian-distributed with a variance that depends on the time t and on a diffusion constant D :

$$p(x,t) = \frac{1}{\sqrt{4\pi D t}}\,\exp\left(-\frac{x^{2}}{4Dt}\right),$$

so that the variance of the displacement after a time t is 2 Dt . The probability for the class of paths can be found by multiplying the probabilities of starting in one region and then being at the next. The Wiener measure can be developed by considering the limit of many small regions.
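The Gaussian moment identity derived in the previous section can be checked numerically in a finite-dimensional discretization, where the functional integral becomes an ordinary N-dimensional Gaussian integral and the expectation of f(a)f(b) becomes a matrix element of K⁻¹. A minimal sketch; the kernel K here is an arbitrary positive-definite matrix invented for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4

    # An arbitrary symmetric positive-definite "kernel" K.
    A = rng.normal(size=(N, N))
    K = A @ A.T + N * np.eye(N)

    # Sampling f ~ N(0, K^{-1}) weights each f by exp(-f.K.f / 2).
    samples = rng.multivariate_normal(np.zeros(N), np.linalg.inv(K), size=200_000)

    # Monte Carlo estimate of <f_a f_b> versus the exact K^{-1}.
    empirical = samples.T @ samples / len(samples)
    print(np.max(np.abs(empirical - np.linalg.inv(K))))  # small, on the order of 1e-3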
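Likewise, the Wiener construction sketched just above can be simulated by building discretized Brownian paths from independent Gaussian increments and checking that the displacement variance grows as 2Dt; the values of D, the horizon, and the step count below are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(1)
    D, T, n_steps, n_paths = 0.5, 2.0, 1000, 50_000
    dt = T / n_steps

    # Each increment is Gaussian with variance 2*D*dt, independent of the rest.
    increments = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)

    print("empirical Var[w(T)]:", paths[:, -1].var())   # ~ 2*D*T = 2.0
    print("theoretical 2DT:   ", 2 * D * T)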
https://en.wikipedia.org/wiki/Functional_integration
Functional integration is the study of how brain regions work together to process information and effect responses. Though functional integration frequently relies on anatomic knowledge of the connections between brain areas, the emphasis is on how large clusters of neurons (numbering in the thousands or millions) fire together under various stimuli. The large datasets required for such a whole-scale picture of brain function have motivated the development of several novel and general methods for the statistical analysis of interdependence, such as dynamic causal modelling and statistical parametric mapping. These datasets are typically gathered in human subjects by non-invasive methods such as EEG / MEG , fMRI , or PET . The results can be of clinical value by helping to identify the regions responsible for psychiatric disorders, as well as to assess how different activities or lifestyles affect the functioning of the brain. A study's choice of imaging modality depends on the desired spatial and temporal resolution. fMRI and PET offer relatively high spatial resolution, with voxel dimensions on the order of a few millimeters, [ 1 ] but their relatively low sampling rate hinders the observation of rapid and transient interactions between distant regions of the brain. These temporal limitations are overcome by MEG, but at the cost of only detecting signals from much larger clusters of neurons. [ 2 ] Functional magnetic resonance imaging (fMRI) is a form of MRI that is most frequently used to take advantage of the difference in magnetism between oxy- and deoxyhemoglobin to assess blood flow to different parts of the brain. Typical sampling intervals for fMRI images are in the tenths of seconds. [ 3 ] Magnetoencephalography (MEG) is an imaging modality that uses very sensitive magnetometers to measure the magnetic fields resulting from ionic currents flowing through neurons in the brain. High-quality MEG machines allow for sub-millisecond sampling intervals. [ 2 ] PET works by introducing a radiolabeled biologically active molecule. The choice of molecule dictates what is visualized: by using a radiolabeled analog of glucose, for example, one can obtain an image whose intensity distribution indicates metabolic activity. PET scanners offer sampling intervals in the tenths of seconds. [ 4 ] Multimodal imaging frequently consists of the coupling of an electrophysiologic measurement technique, such as EEG or MEG, with a hemodynamic one such as fMRI or PET. While the intention is to use the strengths and limitations of each to complement the other, current approaches suffer from experimental limitations. [ 5 ] Some previous work has focused on attempting to use the high spatial resolution of fMRI to determine the (spatial) origin of EEG/MEG signals, so that in future work this spatial information could be extracted from a unimodal EEG/MEG signal. While some studies have seen success in correlating signal origins between modalities to within a few millimeters, the results have not been uniformly positive. Another current limitation is the actual experimental setup: taking measurements using both modalities at once yields inferior signals, but the alternative of measuring each modality separately is confounded by trial-to-trial variability. [ 5 ] In functional integration, a distinction is drawn between functional connectivity and effective connectivity.
Two brain regions are said to be functionally connected if there is a high correlation between the times that the two are firing, though this does not imply causality. Effective connectivity, on the other hand, is a description of the causal relationship between various brain regions. [ 6 ] While statistical assessment of the functional connectivity of multiple brain regions is non-trivial, determining the causality of which brain regions influence which to fire is much thornier, and requires solutions to ill-posed optimization problems. [ 7 ] Dynamic causal modeling (DCM) is a Bayesian method for deducing the structure of a neural system based on the observed hemodynamic (fMRI) or electrophysiologic (EEG/MEG) signal. The first step is to make a prediction as to the relationships between the brain regions of interest, and formulate a system of ordinary differential equations describing the causal relationship between them, although many parameters (and relationships) will be initially unknown. Using previous results on how neural activity is known to translate into fMRI or EEG signals, [ 8 ] one can take the measured signal and determine the likelihood that model parameters have particular values. The elucidated model can then be used to predict relationships between the considered brain regions under different conditions. [ 9 ] A key factor to consider during the design of neuroimaging experiments involving DCM is the relationship between the timing of tasks or stimuli presented to the subject and the ability of DCM to determine the underlying relationships between brain regions, which is partially determined by the temporal resolution of the imaging modality in use. [ 10 ] Statistical parametric mapping (SPM) is a method for determining whether the activation of a particular brain region changes between experimental conditions, stimuli, or over time. The essential idea is simple, and consists of two major steps: first, one performs a univariate statistical test on each individual voxel between each experimental condition. [ 11 ] Second, one analyzes the clustering of the voxels that show statistically significant differences, and determines which brain regions exhibit different levels of activation under different experimental conditions. There is great flexibility in the choice of statistical test (and thus the questions that an experiment can be designed to answer), and common choices include the Student's t test or linear regression . An important consideration with SPM, however, is that the large number of comparisons requires one to control the false positive rate with a more stringent significance threshold. This can be done either by modifying the initial statistical test to decrease the α value so as to make it harder for a particular voxel to exhibit a significant difference (e.g., Bonferroni correction ), or by modifying the clustering analysis in the second step by only considering a brain region's activation to be significant if it contains a certain number of voxels that exhibit a statistical difference (see random field theory ). [ 11 ] Voxel-based morphometry (VBM) is a method that allows one to measure brain tissue composition differences between subjects. To do so, one must first register all images to a standard coordinate system, by mapping them to a reference image. This is done by use of an affine transformation that minimizes the sum-of-squares intensity difference between the experimental image and the reference. 
Once this is done, the proportion of grey or white matter in a voxel can be determined by intensity. This allows one to compare the tissue composition of corresponding brain regions between different subjects. [ 12 ] The ability to visualize whole-brain activity is frequently used in comparing brain function during various sorts of tasks or tests of skill, as well as in comparing brain structure and function between different groups of people. Many previous fMRI studies have seen that spontaneous activation of functionally connected brain regions occurs during the resting state, even in the absence of any sort of stimulation or activity. Human subjects presented with a visual learning task exhibit changes in functional connectivity in the resting state for up to 24 hours, and dynamic functional connectivity studies have even shown changes in functional connectivity during a single scan. By taking fMRI scans of subjects before and after the learning task, as well as on the following day, it was shown that the activity had caused a resting-state change in hippocampal activity. Dynamic causal modeling revealed that the hippocampus also exhibited a new level of effective connectivity with the striatum , though there was no learning-related change in any visual area. [ 13 ] Combining fMRI with DCM on subjects performing a learning task allows one to delineate which brain systems are involved in various sorts of learning, whether implicit or explicit, and to document how long these tasks lead to changes in resting-state brain activation. Voxel-based morphometric measurements of grey matter localization in the brain can be used to predict components of IQ. A set of 35 teenagers was tested for IQ and fMRI scanned over the course of 3.5 years, and their IQ was predicted from the level of grey matter localization. This study was well conducted, but studies of this sort frequently suffer from "double-dipping", where a single dataset is used both to identify the brain regions of interest and to develop a predictive model, which leads to overtraining of the model and an absence of real predictive power. [ 14 ] The study authors avoided double-dipping by using a "leave-one-out" methodology, which involves building a predictive model for each of the n members of a sample based on data from the other n−1 members. This ensures that the model is independent of the subject whose IQ is being predicted, and it resulted in a model capable of explaining 53% of the change in verbal IQ as a function of grey matter density in the left motor cortex. The study also observed the previously reported phenomenon that a ranking of young subjects by IQ does not stay constant as the subjects age, which confounds any measurement of the efficacy of educational programs. [ 14 ] These studies can be cross-validated by attempting to locate and assess patients with lesions or other damage in the identified brain region, and examining whether they exhibit functional deficits relative to the population. This methodology would be hindered by the lack of a "before" baseline measurement, however. The phonological loop is a component of working memory that stores a small set of words that can be maintained indefinitely if not distracted. The concept was proposed by the psychologists Alan Baddeley and Graham Hitch to explain how phrases or sentences can be internalized and used to direct action.
By using statistical parametric mapping to assess differences in cerebral blood flow between participants performing two different tasks, Paulesu et al. [ 15 ] were able to localize the storage component of the phonological loop to the supramarginal gyri . Human subjects were first split into a control and an experimental group. The control group was presented with letters in a language they did not understand, and with non-linguistic visual diagrams. The experimental group was tasked with two activities: the first activity was to remember a string of letters, and was intended to activate all elements of the phonological loop. The second activity asked participants to assess whether given phrases rhymed, and was intended to activate only certain sub-systems involved in vocalization, but specifically not phonological storage. By comparing the first experimental task to the second, as well as to the control group, the study authors observed that the brain regions most significantly activated by the task requiring phonological storage were the supramarginal gyri. This result was backed up by previous literature observations of functional deficits in patients with damage in this area. Though this study was able to precisely localize a specific function anatomically, and the methods of functional integration and imaging are of great value in determining the brain regions involved in certain information processing tasks, the low-level neural circuitry that gives rise to these phenomena remains mysterious. Although fMRI studies of people with schizophrenia and bipolar disorder have yielded some insight into the changes in effective connectivity caused by these diseases, [ 16 ] a comprehensive understanding of the functional remodelling that occurs has not yet been achieved. Montague et al. [ 17 ] note that the almost "unreasonable effectiveness of psychotropic medication" has somewhat stymied progress in this field, and advocate for a large-scale "computational phenotyping" of psychiatric patients. Neuroimaging studies of large numbers of these patients could yield brain activation markers for specific psychiatric illnesses, and also aid in the development of therapeutics and animal models. While a true baseline of brain function in psychiatric patients is near-impossible to obtain, reference values can still be measured by comparing images gathered from patients before and after treatment.
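The two-step logic of statistical parametric mapping described earlier (a univariate test at every voxel, followed by correction for the many comparisons) can be illustrated with a toy computation on synthetic data; the array sizes and effect strength below are invented:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_voxels, n_scans = 5000, 40

    # Synthetic data: condition B "activates" the first 100 voxels.
    cond_a = rng.normal(0.0, 1.0, size=(n_scans, n_voxels))
    cond_b = rng.normal(0.0, 1.0, size=(n_scans, n_voxels))
    cond_b[:, :100] += 1.5

    # Step 1: a univariate test at every voxel.
    t, p = stats.ttest_ind(cond_b, cond_a, axis=0)

    # Step 2: control false positives across 5000 comparisons (Bonferroni).
    alpha = 0.05 / n_voxels
    significant = p < alpha
    print("voxels passing corrected threshold:", significant.sum())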
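Similarly, the "leave-one-out" methodology used in the IQ-prediction study discussed above is straightforward to express in code. A minimal sketch with synthetic data, in which a simple linear fit stands in for whatever predictive model a study would actually use:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 35

    # Synthetic predictor (e.g., a grey-matter measure) and outcome (e.g., IQ).
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(scale=0.8, size=n)

    predictions = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # train on the other n-1 subjects
        slope, intercept = np.polyfit(x[mask], y[mask], deg=1)
        predictions[i] = slope * x[i] + intercept

    # Out-of-sample explained variance; the held-out subject never
    # influences the model used to predict it, avoiding "double-dipping".
    r = np.corrcoef(predictions, y)[0, 1]
    print(f"leave-one-out R^2 = {r**2:.2f}")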
https://en.wikipedia.org/wiki/Functional_integration_(neurobiology)
Functional magnetic resonance spectroscopy of the brain ( fMRS ) uses magnetic resonance imaging (MRI) to study brain metabolism during brain activation. The data generated by fMRS usually show spectra of resonances, instead of a brain image as with MRI. The area under the peaks in the spectrum represents the relative concentrations of metabolites. fMRS is based on the same principles as in vivo magnetic resonance spectroscopy (MRS). However, while conventional MRS records a single spectrum of metabolites from a region of interest, a key interest of fMRS is to detect multiple spectra and study metabolite concentration dynamics during brain function. Therefore, it is sometimes referred to as dynamic MRS , [ 1 ] [ 2 ] event-related MRS [ 3 ] or time-resolved MRS . [ 4 ] A novel variant of fMRS is functional diffusion-weighted spectroscopy (fDWS), which measures the diffusion properties of brain metabolites upon brain activation. [ 5 ] Unlike in vivo MRS, which is intensively used in clinical settings, fMRS is used primarily as a research tool, both in a clinical context, for example, to study metabolite dynamics in patients with epilepsy , migraine and dyslexia , and to study healthy brains. fMRS can also be used to study metabolism dynamics in other parts of the body, for example, in muscles and the heart; however, brain studies have been far more popular. The main goals of fMRS studies are to contribute to the understanding of energy metabolism in the brain, and to test and improve data acquisition and quantification techniques to ensure and enhance the validity and reliability of fMRS studies. Like in vivo MRS, fMRS can probe different nuclei, such as hydrogen ( ¹H ) and carbon ( ¹³C ). The ¹H nucleus is the most sensitive and is most commonly used to measure metabolite concentrations and concentration dynamics, whereas ¹³C is best suited for characterizing fluxes and pathways of brain metabolism. The natural abundance of ¹³C in the brain is only about 1%; therefore, ¹³C fMRS studies usually involve isotope enrichment via infusion or ingestion. [ 6 ] In the literature ¹³C fMRS is commonly referred to as functional ¹³C MRS or just ¹³C MRS . [ 7 ] Typically in MRS a single spectrum is acquired by averaging many spectra over a long acquisition time. [ 8 ] Averaging is necessary because of the complex spectral structures and relatively low concentrations of many brain metabolites, which result in a low signal-to-noise ratio (SNR) in MRS of a living brain. fMRS differs from MRS by acquiring not one but multiple spectra at different time points while the participant is inside the MRI scanner. Thus, temporal resolution is very important, and acquisition times need to be kept short enough to resolve the dynamics of metabolite concentration change. To balance the need for temporal resolution and sufficient SNR, fMRS requires a high magnetic field strength (1.5 T and above). High field strengths have the advantage of increased SNR as well as improved spectral resolution, allowing more metabolites and more detailed metabolite dynamics to be detected. [ 2 ] fMRS is continuously advancing as stronger magnets become more available and better data acquisition techniques are developed, providing increased spectral and temporal resolution. With 7- tesla magnet scanners it is possible to detect around 18 different metabolites in the ¹H spectrum, a significant improvement over less powerful magnets. [ 9 ] [ 10 ]
Temporal resolution has increased from 7 minutes in the first fMRS studies [ 11 ] to 5 seconds in more recent ones. [ 4 ] In fMRS, depending on the focus of the study, either a single- voxel or a multi-voxel spectroscopic technique can be used. In single-voxel fMRS the selection of the volume of interest (VOI) is often done by running a functional magnetic resonance imaging (fMRI) study prior to fMRS to localize the brain region activated by the task. Single-voxel spectroscopy requires shorter acquisition times; therefore it is more suitable for fMRS studies where high temporal resolution is needed and where the volume of interest is known. Multi-voxel spectroscopy provides information about a group of voxels, and data can be presented in 2D or 3D images, but it requires longer acquisition times and therefore decreased temporal resolution. Multi-voxel spectroscopy is usually performed when the specific volume of interest is not known or when it is important to study metabolite dynamics in a larger brain region. [ 12 ] fMRS has several advantages over other functional neuroimaging and brain biochemistry detection techniques. Unlike push-pull cannula , microdialysis and in vivo voltammetry , fMRS is a non-invasive method for studying the dynamics of biochemistry in an activated brain. It is done without exposing subjects to ionizing radiation, as occurs in positron emission tomography (PET) or single-photon emission computed tomography (SPECT) studies. fMRS gives a more direct measurement of the cellular events occurring during brain activation than BOLD fMRI or PET, which rely on hemodynamic responses and show only global neuronal energy uptake during brain activation, while fMRS also gives information about the underlying metabolic processes that support the working brain. [ 6 ] However, fMRS requires very sophisticated data acquisition, quantification methods and interpretation of results. This is one of the main reasons why in the past it received less attention than other MR techniques, but the availability of stronger magnets and improvements in data acquisition and quantification methods are making fMRS more popular. [ 13 ] The main limitations of fMRS are related to signal sensitivity and the fact that many metabolites of potential interest cannot be detected with current fMRS techniques. Because of limited spatial and temporal resolution, fMRS cannot provide information about metabolites in different cell types, for example, whether lactate is used by neurons or by astrocytes during brain activation. The smallest volume that can currently be characterized with fMRS is 1 cm³, which is too large to measure metabolites in different cell types. To overcome this limitation, mathematical and kinetic modeling is used. [ 14 ] [ 15 ] Many brain areas are not suitable for fMRS studies because they are too small (like the small nuclei in the brainstem ) or too close to bone tissue, CSF or extracranial lipids , which could cause inhomogeneity in the voxel and contaminate the spectra. [ 16 ] To avoid these difficulties, in most fMRS studies the volume of interest is chosen from the visual cortex , because it is easily stimulated, has high energy metabolism, and yields good MRS signals. [ 17 ] Unlike in vivo MRS, which is intensively used in clinical settings, [ citation needed ] fMRS is used primarily as a research tool, both in a clinical context, for example, to study metabolite dynamics in patients with epilepsy , [ 18 ] migraine [ 19 ] [ 20 ] [ 17 ] and dyslexia , [ 16 ] [ 21 ] and to study healthy brains.
fMRS can also be used to study metabolism dynamics in other parts of the body, for example, in muscles [ 22 ] and the heart; [ 23 ] however, brain studies have been far more popular. The main goals of fMRS studies are to contribute to the understanding of energy metabolism in the brain, and to test and improve data acquisition and quantification techniques to ensure and enhance the validity and reliability of fMRS studies. [ 24 ] fMRS was developed as an extension of MRS in the early 1990s. [ 11 ] Its potential as a research technology became obvious when it was applied to an important research problem where PET studies had been inconclusive, namely the mismatch between oxygen and glucose consumption during sustained visual stimulation. [ 25 ] The ¹H fMRS studies highlighted the important role of lactate in this process and contributed significantly to research on brain energy metabolism during brain activation. They confirmed the hypothesis that lactate increases during sustained visual stimulation [ 26 ] [ 27 ] [ 28 ] and allowed the generalization of findings based on visual stimulation to other types of stimulation, e.g., auditory stimulation, [ 29 ] motor tasks [ 30 ] and cognitive tasks. [ 16 ] [ 31 ] ¹H fMRS measurements were instrumental in achieving the current consensus among most researchers that lactate levels increase during the first minutes of intense brain activation. However, there are no consistent results about the magnitude of the increase, and questions about the exact role of lactate in brain energy metabolism remain unanswered and are the subject of continuing research. [ 32 ] [ 33 ] ¹³C MRS is a special type of fMRS particularly suited for measuring important neurophysiological fluxes in vivo and in real time to assess metabolic activity in both healthy and diseased brains (e.g., in human tumor tissue [ 34 ] ). These fluxes include the TCA cycle , the glutamate–glutamine cycle , and glucose and oxygen consumption. [ 6 ] ¹³C MRS can provide detailed quantitative information about glucose dynamics that cannot be obtained with ¹H fMRS, because of the low concentration of glucose in the brain and the spread of its resonances over several multiplets in the ¹H MRS spectrum. [ 35 ] ¹³C MRS has been crucial in recognizing that the awake nonstimulated (resting) human brain is highly active, using 70%–80% of its energy for glucose oxidation to support signaling within cortical networks, which is suggested to be necessary for consciousness . [ 36 ] This finding has an important implication for the interpretation of BOLD fMRI data, where this high baseline activity is generally ignored and the response to the task is shown as independent of the baseline activity. ¹³C MRS studies indicate that this approach can misjudge and even completely miss the brain activity induced by the task. [ 37 ] ¹³C MRS findings, together with other results from PET and fMRI studies, have been combined in a model to explain the function of resting-state activity called the default mode network . [ 38 ] Another important benefit of ¹³C MRS is that it provides a unique means of determining the time course of metabolite pools and measuring the turnover rates of the TCA and glutamate–glutamine cycles. As such, it has proved important in aging research by revealing that mitochondrial metabolism is reduced with aging, which may explain the decline in cognitive and sensory processes. [ 39 ] Usually, in ¹H fMRS the water signal is suppressed to detect metabolites with much lower concentrations than water.
However, an unsuppressed water signal can be used to estimate functional changes in the relaxation time T2* during cortical activation. This approach has been proposed as an alternative to the BOLD fMRI technique and has been used to detect the visual response to photic stimulation , motor activation by finger tapping, and activations in language areas during speech processing. [ 40 ] Recently, functional real-time single-voxel proton spectroscopy (fSVPS) has been proposed as a technique for real-time neurofeedback studies in magnetic fields of 7 tesla (7 T) and above. This approach could have potential advantages over BOLD fMRI and is the subject of current research. [ 41 ] fMRS has been used in migraine and pain research. It has supported the important hypothesis of mitochondrial dysfunction in patients who have migraine with aura (MwA). Here the ability of fMRS to measure chemical processes in the brain over time proved crucial for confirming that repetitive photic stimulation causes a larger increase in the lactate level and a larger decrease in the N-acetylaspartate (NAA) level in the visual cortex of MwA patients compared with migraine-without-aura (MwoA) patients and healthy individuals. [ 17 ] [ 19 ] [ 20 ] In pain research fMRS complements fMRI and PET techniques. Although fMRI and PET are continuously used to localize pain-processing areas in the brain, they cannot provide direct information about changes in metabolites during pain processing that could help explain the physiological processes behind pain perception and potentially lead to novel treatments for pain . fMRS overcomes this limitation and has been used to study pain-induced (cold-pressor, heat, dental pain) neurotransmitter level changes in the anterior cingulate cortex , [ 42 ] [ 43 ] anterior insular cortex [ 4 ] and left insular cortex. [ 44 ] These fMRS studies are valuable because they show that some or all Glx compounds ( glutamate , GABA and glutamine ) increase during painful stimuli in the studied brain regions. Cognitive studies frequently rely on the detection of neuronal activity during cognition. The use of fMRS for this purpose is at present mainly at an experimental level but is rapidly increasing; fMRS has been applied to a range of cognitive tasks.
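Since, as noted in the lead of this article, the area under a resonance peak represents relative metabolite concentration, the basic quantification step can be sketched as numerical integration over simulated peaks; the Lorentzian line shapes, positions, and amplitudes below are invented:

    import numpy as np

    def lorentzian(ppm, center, width, amplitude):
        # Standard Lorentzian line shape often used to model a resonance.
        return amplitude * (width / 2) ** 2 / ((ppm - center) ** 2 + (width / 2) ** 2)

    ppm = np.linspace(0.0, 4.0, 4000)

    # Two invented resonances; the second has twice the underlying amplitude.
    spectrum = (lorentzian(ppm, center=1.3, width=0.05, amplitude=1.0)
                + lorentzian(ppm, center=3.0, width=0.05, amplitude=2.0))

    # Integrate each peak over a window around its center (rectangle rule).
    dx = ppm[1] - ppm[0]
    area_a = spectrum[(ppm > 1.0) & (ppm < 1.6)].sum() * dx
    area_b = spectrum[(ppm > 2.7) & (ppm < 3.3)].sum() * dx
    print(f"area ratio b/a = {area_b / area_a:.2f}")   # ~2, matching the amplitudes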
https://en.wikipedia.org/wiki/Functional_magnetic_resonance_spectroscopy_of_the_brain
In the development of vertebrate animals , the functional matrix hypothesis is a phenomenological description of bone growth. It proposes that "the origin, development and maintenance of all skeletal units are secondary, compensatory and mechanically obligatory responses to temporally and operationally prior demands of related functional matrices." [ 1 ] The fundamental basis for this hypothesis, laid out by Columbia anatomy professor Melvin Moss, is that bones do not grow but are grown , [ 2 ] thus stressing the ontogenetic primacy of function over form. [ 3 ] This is in contrast to the current conventional scientific wisdom that genetic , rather than epigenetic (non-genetic), factors control such growth. [ 3 ] The theory was introduced as a chapter in a dental textbook in 1962. [ 4 ]
https://en.wikipedia.org/wiki/Functional_matrix_hypothesis
Functional Molecular Infection Epidemiology ( FMIE ) [ 1 ] is an emerging area of medicine that entails the study of pathogen genes and genomes in the context of their functional association with host niches (adhesion, invasion, adaptation) and the complex interactions they trigger within the host immune system ( cell signaling , apoptosis ) that culminate in varied outcomes of infection. It can also be defined as the correlation of genetic variations in a pathogen or its respective host with a unique function that is important for disease severity, disease progression, or host susceptibility to a particular pathogen. Functional epidemiology implies not merely descriptive host-pathogen genomic associations, but the interplay between pathogen and host genomic variations, functionally demonstrating the role of the genetic variations during infection. [ citation needed ] Functional Molecular Infection Epidemiology differs from classical Molecular Infection Epidemiology mainly in that the latter deals with the tagging and tracking of the infectious agent without much concern for the functional or phenotypic characteristics of the agent being tracked. Functional molecular epidemiology, on the other hand, places more emphasis on genotypic and phenotypic correlates of host-pathogen interaction, adaptation or homeostasis. Furthermore, classical molecular epidemiology largely uses "neutral" markers, such as insertion sequences and intergenic elements, while functional molecular epidemiology harnesses functionally relevant markers such as SNPs and genome coordinates with putative roles in infection biology, both on the pathogen and the host side. Many studies have been conducted which fit the theme of FMIE, for example, on the acquisition and transmission of Mycobacterium avium subsp. paratuberculosis and its role in the development of Type-1 diabetes mellitus when the human gene SLC11A1 undergoes particular mutations in a susceptible host. [ 2 ] The concept of FMIE has become potentially relevant in the aftermath of multiple genome sequencing and resequencing of important bacterial pathogens from many different host/patient populations. [ 3 ] A consortium of scientists in India and Germany (Project BRIDGE) has already been formed under the aegis of the Freie University in Berlin and the University of Hyderabad to explore and investigate the application of FMIE in the public health and veterinary arenas, as part of the DFG-funded project GRK1673 [ 4 ] under the joint leadership of Lothar H. Wieler (Free University of Berlin) and Niyaz Ahmed (University of Hyderabad).
https://en.wikipedia.org/wiki/Functional_molecular_infection_epidemiology
In formal logic and related branches of mathematics , a functional predicate , or function symbol , is a logical symbol that may be applied to an object term to produce another object term. Functional predicates are also sometimes called mappings , but that term has additional meanings in mathematics . In a model , a function symbol will be modelled by a function . Specifically, the symbol F in a formal language is a functional symbol if, given any symbol X representing an object in the language, F ( X ) is again a symbol representing an object in that language. In typed logic , F is a functional symbol with domain type T and codomain type U if, given any symbol X representing an object of type T , F ( X ) is a symbol representing an object of type U . One can similarly define function symbols of more than one variable, analogous to functions of more than one variable; a function symbol in zero variables is simply a constant symbol. Now consider a model of the formal language, with the types T and U modelled by sets [ T ] and [ U ] and each symbol X of type T modelled by an element [ X ] in [ T ]. Then F can be modelled by the set

$$[F] = \{\, ([X], [F(X)]) : [X] \in [T] \,\},$$

which is simply a function with domain [ T ] and codomain [ U ]. It is a requirement of a consistent model that [ F ( X )] = [ F ( Y )] whenever [ X ] = [ Y ]. In a treatment of predicate logic that allows one to introduce new predicate symbols, one will also want to be able to introduce new function symbols. Given the function symbols F and G , one can introduce a new function symbol F ∘ G , the composition of F and G , satisfying ( F ∘ G )( X ) = F ( G ( X )) for all X . Of course, the right side of this equation doesn't make sense in typed logic unless the domain type of F matches the codomain type of G , so this is required for the composition to be defined. One also gets certain function symbols automatically. In untyped logic, there is an identity predicate id that satisfies id( X ) = X for all X . In typed logic, given any type T , there is an identity predicate id T with domain and codomain type T ; it satisfies id T ( X ) = X for all X of type T . Similarly, if T is a subtype of U , then there is an inclusion predicate of domain type T and codomain type U that satisfies the same equation; there are additional function symbols associated with other ways of constructing new types out of old ones. Additionally, one can define functional predicates after proving an appropriate theorem . (If you're working in a formal system that doesn't allow you to introduce new symbols after proving theorems, then you will have to use relation symbols to get around this, as in the next section.) Specifically, if you can prove that for every X (or every X of a certain type), there exists a unique Y satisfying some condition P , then you can introduce a function symbol F to indicate this. Note that P will itself be a relational predicate involving both X and Y . So if there is such a predicate P and a theorem

$$\forall X : T,\ \exists ! Y : U,\ P(X, Y),$$

then you can introduce a function symbol F of domain type T and codomain type U that satisfies

$$\forall X : T,\ P(X, F(X)).$$

Many treatments of predicate logic don't allow functional predicates, only relational predicates . This is useful, for example, in the context of proving metalogical theorems (such as Gödel's incompleteness theorems ), where one doesn't want to allow the introduction of new functional symbols (nor any other new symbols, for that matter).
But there is a method of replacing functional symbols with relational symbols wherever the former may occur; furthermore, this is algorithmic and thus suitable for applying most metalogical theorems to the result. Specifically, if F has domain type T and codomain type U , then it can be replaced with a predicate P of type ( T , U ). Intuitively, P ( X , Y ) means F ( X ) = Y . Then whenever F ( X ) would appear in a statement, you can replace it with a new symbol Y of type U and include another statement P ( X , Y ). To be able to make the same deductions, you need an additional proposition:

$$\forall X : T,\ \exists ! Y : U,\ P(X, Y).$$

(Of course, this is the same proposition that had to be proven as a theorem before introducing a new function symbol in the previous section.) Because the elimination of functional predicates is both convenient for some purposes and possible, many treatments of formal logic do not deal explicitly with function symbols but instead use only relation symbols; another way to think of this is that a functional predicate is a special kind of predicate, specifically one that satisfies the proposition above. This may seem to be a problem if you wish to specify a proposition schema that applies only to functional predicates F ; how do you know ahead of time whether it satisfies that condition? To get an equivalent formulation of the schema, first replace anything of the form F ( X ) with a new variable Y . Then universally quantify over each Y immediately after the corresponding X is introduced (that is, after X is quantified over, or at the beginning of the statement if X is free), and guard the quantification with P ( X , Y ). Finally, make the entire statement a material consequence of the uniqueness condition for a functional predicate above. Let us take as an example the axiom schema of replacement in Zermelo–Fraenkel set theory . (This example uses mathematical symbols .) This schema states (in one form), for any functional predicate F in one variable:

$$\forall A,\ \exists B,\ \forall C,\ \big( C \in A \rightarrow F(C) \in B \big).$$

First, we must replace F ( C ) with some other variable D :

$$\forall A,\ \exists B,\ \forall C,\ \big( C \in A \rightarrow D \in B \big).$$

Of course, this statement isn't correct; D must be quantified over just after C :

$$\forall A,\ \exists B,\ \forall C,\ \forall D,\ \big( C \in A \rightarrow D \in B \big).$$

We still must introduce P to guard this quantification:

$$\forall A,\ \exists B,\ \forall C,\ \forall D,\ \big( P(C, D) \rightarrow ( C \in A \rightarrow D \in B ) \big).$$

This is almost correct, but it applies to too many predicates; what we actually want is:

$$\big( \forall X,\ \exists ! Y,\ P(X, Y) \big) \rightarrow \big( \forall A,\ \exists B,\ \forall C,\ \forall D,\ ( P(C, D) \rightarrow ( C \in A \rightarrow D \in B ) ) \big).$$

This version of the axiom schema of replacement is now suitable for use in a formal language that doesn't allow the introduction of new function symbols. Alternatively, one may interpret the original statement as a statement in such a formal language; it was merely an abbreviation for the statement produced at the end.
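On a finite model, the condition that lets a relation symbol stand in for a function symbol (for every X there is exactly one Y with P(X, Y)) can be checked mechanically. A small sketch with an invented finite domain:

    # Finite model: domain [T], codomain [U], and a relation P within [T] x [U].
    T = {1, 2, 3}
    U = {"a", "b"}
    P = {(1, "a"), (2, "b"), (3, "a")}

    def is_functional(P, T):
        # "For every X there exists a unique Y such that P(X, Y)."
        return all(sum(1 for (x, y) in P if x == t) == 1 for t in T)

    if is_functional(P, T):
        F = dict(P)     # the relation collapses to a function
        print(F[2])     # "b"; F(X) = Y exactly when P(X, Y) holds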
https://en.wikipedia.org/wiki/Functional_predicate
In theoretical physics , the functional renormalization group ( FRG ) is an implementation of the renormalization group (RG) concept which is used in quantum and statistical field theory, especially when dealing with strongly interacting systems. The method combines functional methods of quantum field theory with the intuitive renormalization group idea of Kenneth G. Wilson . This technique allows one to interpolate smoothly between the known microscopic laws and the complicated macroscopic phenomena in physical systems. In this sense, it bridges the transition from the simplicity of microphysics to the complexity of macrophysics. Figuratively speaking, FRG acts as a microscope with a variable resolution. One starts with a high-resolution picture of the known microphysical laws and subsequently decreases the resolution to obtain a coarse-grained picture of macroscopic collective phenomena. The method is nonperturbative, meaning that it does not rely on an expansion in a small coupling constant . Mathematically, FRG is based on an exact functional differential equation for a scale-dependent effective action . In quantum field theory , the effective action $\Gamma$ is an analogue of the classical action functional $S$ and depends on the fields of a given theory. It includes all quantum and thermal fluctuations. Variation of $\Gamma$ yields exact quantum field equations, for example for cosmology or the electrodynamics of superconductors. Mathematically, $\Gamma$ is the generating functional of the one-particle irreducible Feynman diagrams . Interesting physics, such as propagators and effective couplings for interactions, can be straightforwardly extracted from it. In a generic interacting field theory the effective action $\Gamma$, however, is difficult to obtain. FRG provides a practical tool to calculate $\Gamma$ employing the renormalization group concept. The central object in FRG is a scale-dependent effective action functional $\Gamma_k$, often called the average action or flowing action. The dependence on the RG sliding scale $k$ is introduced by adding a regulator (infrared cutoff) $R_k$ to the full inverse propagator $\Gamma_k^{(2)}$. Roughly speaking, the regulator $R_k$ decouples slow modes with momenta $q \lesssim k$ by giving them a large mass, while high-momentum modes are not affected. Thus, $\Gamma_k$ includes all quantum and statistical fluctuations with momenta $q \gtrsim k$. The flowing action $\Gamma_k$ obeys the exact functional flow equation

$$k\,\partial_k \Gamma_k = \tfrac{1}{2}\,\mathrm{STr}\left[ k\,\partial_k R_k \left( \Gamma_k^{(1,1)} + R_k \right)^{-1} \right],$$

derived by Christof Wetterich and Tim R. Morris in 1993. Here $\partial_k$ denotes a derivative with respect to the RG scale $k$ at fixed values of the fields. Furthermore, $\Gamma_k^{(1,1)}$ denotes the functional derivative of $\Gamma_k$ taken once from the left and once from the right, as required by the tensor structure of the equation; this is often written in simplified form as the second derivative of the effective action.
The functional differential equation for $\Gamma_k$ must be supplemented with the initial condition $\Gamma_{k \to \Lambda} = S$, where the "classical action" $S$ describes the physics at the microscopic ultraviolet scale $k = \Lambda$. Importantly, in the infrared limit $k \to 0$ the full effective action $\Gamma = \Gamma_{k \to 0}$ is obtained. In the Wetterich equation $\mathrm{STr}$ denotes a supertrace which sums over momenta, frequencies, internal indices, and fields (taking bosons with a plus and fermions with a minus sign). The exact flow equation for $\Gamma_k$ has a one-loop structure. This is an important simplification compared to perturbation theory , where multi-loop diagrams must be included. The second functional derivative $\Gamma_k^{(2)} = \Gamma_k^{(1,1)}$ is the full inverse field propagator modified by the presence of the regulator $R_k$. The renormalization group evolution of $\Gamma_k$ can be illustrated in the theory space, which is a multi-dimensional space of all possible running couplings $\{c_n\}$ allowed by the symmetries of the problem. At the microscopic ultraviolet scale $k = \Lambda$ one starts with the initial condition $\Gamma_{k=\Lambda} = S$. As the sliding scale $k$ is lowered, the flowing action $\Gamma_k$ evolves in the theory space according to the functional flow equation. The choice of the regulator $R_k$ is not unique, which introduces some scheme dependence into the renormalization group flow; different choices of the regulator correspond to different trajectories through the theory space. At the infrared scale $k = 0$, however, the full effective action $\Gamma_{k=0} = \Gamma$ is recovered for every choice of the cutoff $R_k$, and all trajectories meet at the same point in the theory space. In most cases of interest the Wetterich equation can only be solved approximately. Usually some type of expansion of $\Gamma_k$ is performed, which is then truncated at finite order, leading to a finite system of ordinary differential equations. Different systematic expansion schemes (such as the derivative expansion, vertex expansion, etc.) have been developed. The choice of the suitable scheme should be physically motivated and depends on the given problem. The expansions do not necessarily involve a small parameter (like an interaction coupling constant ), and thus they are, in general, of a nonperturbative nature. Note, however, that due to multiple choices regarding (prefactor) conventions and the concrete definition of the effective action, one can find other (equivalent) versions of the Wetterich equation in the literature. [ 1 ]
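As an illustration of the last point, a truncated flow reduces to ordinary differential equations for a few running couplings, which can be integrated numerically. The sketch below integrates a toy one-coupling beta function of the schematic form familiar from scalar theories near four dimensions; it is an invented example, not a truncation of the Wetterich equation itself:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy flow dg/dt = eps*g - b*g**2, with t = ln(Lambda/k) increasing
    # toward the infrared; eps and b are invented constants.
    eps, b = 0.1, 1.0

    def beta(t, g):
        return eps * g - b * g ** 2

    sol = solve_ivp(beta, t_span=(0.0, 100.0), y0=[0.01])
    # The coupling approaches the fixed point eps/b = 0.1 in the deep infrared.
    print("coupling in the deep infrared:", sol.y[0, -1])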
Contrary to the flow equation for the effective action, this scheme is formulated for the effective interaction

\mathcal{V}[\eta,\eta^{+}] = -\ln Z[G_0^{-1}\eta,\, G_0^{-1}\eta^{+}] - \eta G_0^{-1}\eta^{+},

which generates the n-particle interaction vertices amputated by the bare propagators G_0; Z[η, η⁺] is the "standard" generating functional for the n-particle Green functions. The Wick ordering of the effective interaction with respect to a Green function D can be defined by

\mathcal{W}[\eta,\eta^{+}] = \exp(-\Delta_D)\,\mathcal{V}[\eta,\eta^{+}],

where Δ_D = D δ²/(δη δη⁺) is the Laplacian in field space. This operation is similar to normal ordering and excludes from the interaction all possible terms formed by a convolution of source fields with the respective Green function D. Introducing a cutoff Λ, the Polchinski equation

\frac{\partial}{\partial\Lambda} V_\Lambda(\psi) = -\dot{\Delta}_{G_{0,\Lambda}} V_\Lambda(\psi) + \Delta^{12}_{\dot{G}_{0,\Lambda}} \mathcal{V}^{(1)}_\Lambda \mathcal{V}^{(2)}_\Lambda

takes the form of the Wick-ordered equation

\partial_\Lambda \mathcal{W}_\Lambda = -\Delta_{\dot{D}_\Lambda + \dot{G}_{0,\Lambda}} \mathcal{W}_\Lambda + e^{-\Delta^{12}_{D_\Lambda}}\, \Delta^{12}_{\dot{G}_{0,\Lambda}} \mathcal{W}^{(1)}_\Lambda \mathcal{W}^{(2)}_\Lambda,

where

\Delta^{12}_{\dot{G}_{0,\Lambda}} \mathcal{V}^{(1)}_\Lambda \mathcal{V}^{(2)}_\Lambda = \frac{1}{2}\left(\frac{\delta V_\Lambda(\psi)}{\delta\psi},\; \dot{G}_{0,\Lambda}\, \frac{\delta V_\Lambda(\psi)}{\delta\psi}\right).

The method has been applied to numerous problems in physics.
https://en.wikipedia.org/wiki/Functional_renormalization_group
In software engineering and systems engineering , a functional requirement defines a function of a system or its component, where a function is described as a summary (or specification or statement) of behavior between inputs and outputs. [ 1 ] Functional requirements may involve calculations, technical details, data manipulation and processing, and other specific functionality that define what a system is supposed to accomplish. [ 2 ] Behavioral requirements describe all the cases where the system uses the functional requirements; these are captured in use cases . Functional requirements are supported by non-functional requirements (also known as "quality requirements"), which impose constraints on the design or implementation (such as performance requirements, security, or reliability). Generally, functional requirements are expressed in the form "system must do <requirement>," while non-functional requirements take the form "system shall be <requirement>." [ 3 ] The plan for implementing functional requirements is detailed in the system design, whereas non-functional requirements are detailed in the system architecture . [ 4 ] [ 5 ] As defined in requirements engineering , functional requirements specify particular results of a system. This should be contrasted with non-functional requirements, which specify overall characteristics such as cost and reliability . Functional requirements drive the application architecture of a system, while non-functional requirements drive the technical architecture of a system. [ 4 ] In some cases, a requirements analyst generates use cases after gathering and validating a set of functional requirements. The hierarchy of functional requirements collection and change is, broadly speaking: user/ stakeholder request → analyze → use case → incorporate. Stakeholders make a request; systems engineers attempt to discuss, observe, and understand the aspects of the requirement; use cases, entity relationship diagrams, and other models are built to validate the requirement; and, if documented and approved, the requirement is implemented/incorporated. [ 6 ] Each use case illustrates behavioral scenarios through one or more functional requirements. Often, though, an analyst will begin by eliciting a set of use cases, from which the analyst can derive the functional requirements that must be implemented to allow a user to perform each use case. A typical functional requirement will contain a unique name and number, a brief summary, and a rationale. This information is used to help the reader understand why the requirement is needed, and to track the requirement through the development of the system. [ 7 ] The crux of the requirement is the description of the required behavior, which must be clear and readable. The described behavior may come from organizational or business rules, or it may be discovered through elicitation sessions with users, stakeholders, and other experts within the organization. [ 7 ] Many requirements may be uncovered during use case development. When this happens, the requirements analyst may create a placeholder requirement with a name and summary, and research the details later, to be filled in when they are better known.
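To illustrate the structure just described (unique number and name, brief summary, rationale, and links to use cases), a requirement could be recorded in machine-readable form roughly as follows. The field names and the example requirement are hypothetical, chosen only to mirror the elements listed above.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalRequirement:
    """One functional requirement, following the typical structure:
    unique number and name, brief summary, and rationale."""
    number: str                  # unique identifier used for traceability
    name: str
    summary: str                 # the required behavior, clear and readable
    rationale: str               # why the requirement is needed
    use_cases: list[str] = field(default_factory=list)  # behavioral scenarios
    status: str = "placeholder"  # e.g. placeholder -> approved -> implemented

# Hypothetical example: a placeholder created during use case development,
# to be filled in when the details are better known.
fr_12 = FunctionalRequirement(
    number="FR-12",
    name="Close dialog on OK",
    summary="When the user clicks the OK button, the dialog window closes.",
    rationale="Users need a predictable way to dismiss the dialog.",
    use_cases=["UC-3: Confirm settings"],
)
print(fr_12.number, "-", fr_12.name, f"({fr_12.status})")
```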
https://en.wikipedia.org/wiki/Functional_requirement
Functional selectivity (or agonist trafficking , biased agonism , biased signaling , ligand bias , and differential engagement ) is the ligand -dependent selectivity for certain signal transduction pathways relative to a reference ligand (often the endogenous hormone or peptide) at the same receptor . [ 1 ] Functional selectivity can be present when a receptor has several possible signal transduction pathways. The degree to which each pathway is activated thus depends on which ligand binds to the receptor. [ 2 ] Functional selectivity, or biased signaling, is most extensively characterized at G protein coupled receptors (GPCRs). [ 3 ] A number of biased agonists, such as those at muscarinic M2 receptors tested as analgesics [ 4 ] or antiproliferative drugs, [ 5 ] or those at opioid receptors that mediate pain, show potential at various receptor families to increase beneficial properties while reducing side effects. For example, pre-clinical studies with G protein biased agonists at the μ-opioid receptor show equivalent efficacy for treating pain with reduced risk for addictive potential and respiratory depression . [ 1 ] [ 6 ] Studies within the chemokine receptor system also suggest that GPCR biased agonism is physiologically relevant. For example, a beta-arrestin biased agonist of the chemokine receptor CXCR3 induced greater chemotaxis of T cells relative to a G protein biased agonist. [ 7 ] Functional selectivity has been proposed to broaden conventional definitions of pharmacology . Traditional pharmacology posits that a ligand can be classified as an agonist (full or partial), an antagonist, or, more recently, an inverse agonist through a specific receptor subtype, and that this characteristic will be consistent across all effector ( second messenger ) systems coupled to that receptor. While this dogma has been the backbone of ligand-receptor interactions for decades, more recent data indicate that this classic definition of ligand-protein associations does not hold true for a number of compounds; such compounds may be termed mixed agonist-antagonists . Functional selectivity posits that a ligand may inherently produce a mix of the classic characteristics through a single receptor isoform, depending on the effector pathway coupled to that receptor. For instance, a ligand cannot always easily be classified as an agonist or antagonist, because it can be a little of both, depending on its preferred signal transduction pathways. Thus, such ligands must instead be classified on the basis of their individual effects in the cell, instead of being either an agonist or antagonist to a receptor. These observations were made in a number of different expression systems , and therefore functional selectivity is not just an epiphenomenon of one particular expression system. One notable example of functional selectivity occurs with the 5-HT 2A receptor , as well as the 5-HT 2C receptor . Serotonin , the main endogenous ligand of 5-HT receptors , is a functionally selective agonist at this receptor, activating phospholipase C (which leads to inositol triphosphate accumulation) but not phospholipase A2 , which would result in arachidonic acid signaling. However, the other endogenous compound dimethyltryptamine activates arachidonic acid signaling at the 5-HT 2A receptor, as do many exogenous hallucinogens such as DOB and lysergic acid diethylamide (LSD). Notably, LSD does not activate IP 3 signaling through this receptor to any significant extent.
(Conversely, LSD, unlike serotonin, has negligible affinity for the 5-HT 2C-VGV isoform, is unable to promote calcium release, and is thus functionally selective at 5-HT 2C . [ 8 ] ) Oligomers, specifically 5-HT 2A – mGluR2 (metabotropic glutamate receptor 2) heteromers, mediate this effect. This may explain why some direct 5-HT 2 receptor agonists have psychedelic effects, whereas compounds that indirectly increase serotonin signaling at the 5-HT 2 receptors generally do not, for example: selective serotonin reuptake inhibitors (SSRIs), monoamine oxidase inhibitors (MAOIs), and 5-HT 2A receptor agonists that do not have constitutive activity at the mGluR2 dimer , such as lisuride . [ 9 ] Tianeptine , an atypical antidepressant , is thought to exhibit functional selectivity at the μ-opioid receptor to mediate its antidepressant effects. [ 10 ] [ 11 ] Oliceridine is a μ-opioid receptor agonist that has been described as functionally selective towards G protein and away from β-arrestin2 pathways. [ 12 ] However, recent reports highlight that, rather than functional selectivity or 'G protein bias', this agonist has low intrinsic efficacy. [ 13 ] In vivo , it has been reported to mediate pain relief without tolerance or gastrointestinal side effects. The delta opioid receptor agonists SNC80 and ARM390 demonstrate functional selectivity that is thought to be due to their differing capacity to cause receptor internalization . [ 14 ] While SNC80 causes delta opioid receptors to internalize, ARM390 causes very little receptor internalization. [ 14 ] Functionally, this means that the effects of SNC80 (e.g. analgesia ) do not occur when a subsequent dose follows the first, whereas the effects of ARM390 persist. [ 14 ] However, tolerance to ARM390's analgesia still occurs eventually after multiple doses, though through a mechanism that does not involve receptor internalization. [ 14 ] Interestingly, the other effects of ARM390 (e.g. decreased anxiety) persist after tolerance to its analgesic effects has occurred. [ 14 ] An example of functional selectivity biasing metabolism was demonstrated for the electron transfer protein cytochrome P450 reductase (POR), where binding of small-molecule ligands was shown to alter the protein conformation and the interaction with various redox partner proteins of POR. [ 15 ]
https://en.wikipedia.org/wiki/Functional_selectivity
A functional specification (also, functional spec , specs , functional specifications document (FSD) , functional requirements specification ) in systems engineering and software development is a document that specifies the functions that a system or component must perform (often part of a requirements specification) (ISO/IEC/IEEE 24765-2010). [ 1 ] The documentation typically describes what is needed by the system user as well as requested properties of inputs and outputs (e.g. of the software system). A functional specification is the more technical response to a matching requirements document, e.g. the product requirements document "PRD" [ citation needed ] . Thus it picks up the results of the requirements analysis stage. On more complex systems, multiple levels of functional specifications will typically nest within each other, e.g. on the system level, on the module level, and on the level of technical details. A functional specification does not define the inner workings of the proposed system; it does not include the specification of how the system function will be implemented. A functional requirement in a functional specification might state as follows: "When the user clicks the OK button, the dialog is closed." Such a requirement describes an interaction between an external agent (the user ) and the software system. When the user provides input to the system by clicking the OK button, the program responds (or should respond) by closing the dialog window containing the OK button. There are many purposes for functional specifications. One of the primary purposes on team projects is to achieve some form of team consensus on what the program is to achieve before making the more time-consuming effort of writing source code and test cases , followed by a period of debugging . Typically, such consensus is reached after one or more reviews by the stakeholders on the project at hand after having negotiated a cost-effective way to achieve the requirements the software needs to fulfill. In the ordered industrial software engineering life-cycle ( waterfall model ), the functional specification describes what has to be implemented. The next document, the systems architecture , describes how the functions will be realized using a chosen software environment. In non-industrial, prototypical systems development, functional specifications are typically written after or as part of requirements analysis. When the team agrees that functional specification consensus is reached, the functional spec is typically declared "complete" or "signed off". After this, typically the software development and testing team write source code and test cases using the functional specification as the reference. While testing is performed, the behavior of the program is compared against the expected behavior as defined in the functional specification. One popular method of writing a functional specification document involves drawing or rendering either simple wire frames or accurate, graphically designed UI screenshots. After this has been completed, and the screen examples are approved by all stakeholders, graphical elements can be numbered and written instructions can be added for each number on the screen example. For example, a login screen can have the username field labeled '1' and password field labeled '2,' and then each number can be declared in writing, for use by software engineers and later for beta testing purposes to ensure that functionality is as intended. The benefit of this method is that countless additional details can be attached to the screen examples.
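The numbered-screen-example method from the last paragraph can likewise be captured in a simple data structure. The screen elements and instructions below are hypothetical, echoing the login-screen example in the text.

```python
# Hypothetical annotation of a login-screen wireframe: each numbered
# graphical element gets a written instruction, as described above.
login_screen = {
    1: ("username field", "Accepts 3-32 alphanumeric characters; required."),
    2: ("password field", "Masked input; minimum 8 characters; required."),
    3: ("OK button", "Validates credentials and, on success, closes the dialog."),
}

for number, (element, instruction) in sorted(login_screen.items()):
    print(f"{number}. {element}: {instruction}")
```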
https://en.wikipedia.org/wiki/Functional_specification
A functional spinal unit ( FSU ), or motion segment , is the smallest physiological motion unit of the spine to exhibit biomechanical [ 1 ] characteristics similar to those of the entire spine. [ 2 ] An FSU consists of two adjacent vertebrae , the intervertebral disc , and all adjoining ligaments between them; it excludes other connecting tissues such as muscles . The three-joint complex that results is sometimes referred to as the "articular triad". In vitro studies of isolated or multiple FSUs are often used to measure biomechanical properties of the spine. The typical load-displacement behavior of a cadaveric FSU specimen is nonlinear. Within the total range of passive motion of any FSU, the typical load-displacement curve consists of two regions or 'zones' that exhibit very different biomechanical behavior. In the vicinity of the resting neutral position of the FSU, this load-displacement behavior is highly flexible. This is the region known as the 'neutral zone', the motion region of the joint where the passive osteoligamentous stability mechanisms exert little or no influence. During passive physiological movement of the FSU, motion occurs in this region against minimal internal resistance. It is a region in which a small load causes a relatively large displacement. The 'elastic zone' is the remaining region of FSU motion, continuing from the end of the neutral zone to the point of maximum resistance (provided by the passive osteoligamentous stability mechanism), thus limiting the range of motion. [ 3 ]
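The two-zone behavior can be illustrated with a toy analysis: given sampled load-displacement data, the neutral zone can be estimated as the region around the neutral position where the local stiffness (the slope of the load-displacement curve) stays below some fraction of its maximum. The synthetic curve and the threshold fraction below are illustrative assumptions, not values from the biomechanics literature.

```python
import numpy as np

# Synthetic nonlinear load-displacement curve for an FSU-like joint:
# very compliant near the neutral position, stiffening towards the
# end of the range of motion.
disp = np.linspace(-6.0, 6.0, 601)        # displacement (mm)
load = 0.2 * disp + 0.05 * disp**3        # load (N), illustrative shape

stiffness = np.gradient(load, disp)       # local slope dF/dx
threshold = 0.15 * stiffness.max()        # arbitrary cutoff fraction

neutral = np.abs(disp[stiffness < threshold])
print(f"approximate neutral zone: +/- {neutral.max():.2f} mm")
print(f"elastic zone: from +/- {neutral.max():.2f} mm to +/- {disp.max():.1f} mm")
```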
https://en.wikipedia.org/wiki/Functional_spinal_unit
Examples of molecules with one or more functional groups: methyl acetate, thiophenol, ethylamine, malonic acid, ethanolamine, glycine, glycerol, and ( R )-cysteine. In chemistry , functionality is the presence of functional groups in a molecule . A monofunctional molecule possesses one functional group, a bifunctional (or difunctional ) two, a trifunctional three, and so forth. In organic chemistry (and other fields of chemistry), a molecule's functionality has a decisive influence on its reactivity . In polymer chemistry , the functionality of a monomer refers to its number of polymerizable groups, and affects the formation and the degree of crosslinking of polymers . In organic chemistry, functionality is often used as a synonym for functional group . For example, a hydroxyl group can also be called a HO-function. [ 1 ] [ 2 ] Functionalisation means the introduction of functional groups into a molecule. According to IUPAC , the functionality of a monomer is defined as the number of bonds that a monomer's repeating unit forms in a polymer with other monomers. Thus, in the case of a functionality of f = 2, polymerizing forms a linear polymer (a thermoplastic ). Monomers with a functionality f ≥ 3 lead to branching points, which can lead to cross-linked polymers (a thermosetting polymer ). Monofunctional monomers cannot form polymers, as such molecules lead to chain termination . [ 6 ] From the average functionality of the monomers used, the conversion at which the gel point is reached can be calculated as a function of reaction progress. [ 7 ] Side reactions may increase or decrease the functionality. [ 8 ] However, the IUPAC definition and the use of the term in organic chemistry differ with respect to the functionality of a double bond. [ 6 ] [ 9 ] In polymer chemistry, a double bond possesses a functionality of two (because two points of contact for further polymer chains are present, one on each of the two adjacent carbon atoms), while in organic chemistry the double bond is a functional group and thus has a functionality of one.
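The gel-point statement can be illustrated with the classical Carothers relation, p_gel = 2 / f_avg, which estimates the critical conversion from the number-average functionality of the monomer mixture; the mixture below is a made-up example.

```python
# Carothers relation: estimate the gel-point conversion from the
# number-average functionality f_avg of a monomer mixture,
# p_gel = 2 / f_avg (stoichiometric step-growth systems).

# Hypothetical mixture: (moles, functionality) per monomer.
monomers = [
    (1.0, 2),  # difunctional diol
    (1.0, 2),  # difunctional diacid
    (0.2, 3),  # trifunctional branching agent
]

total_moles = sum(n for n, f in monomers)
total_groups = sum(n * f for n, f in monomers)
f_avg = total_groups / total_moles

p_gel = 2.0 / f_avg
print(f"average functionality f_avg = {f_avg:.3f}")
print(f"estimated gel-point conversion p_gel = {p_gel:.3f}")
```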
https://en.wikipedia.org/wiki/Functionality_(chemistry)
In materials science and mathematics, functionally graded elements are elements used in finite element analysis . [ 1 ] They can be used to describe a functionally graded material . [ 2 ] This applied mathematics –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Functionally_graded_element
In materials science , functionally graded materials ( FGMs ) are characterized by a gradual variation in composition and structure over volume, resulting in corresponding changes in the properties of the material. The materials can be designed for specific functions and applications. Various approaches based on bulk (particulate) processing, preform processing, layer processing, and melt processing are used to fabricate functionally graded materials. The concept of FGM was first considered in Japan in 1984 during a space plane project, where a combination of materials would serve the purpose of a thermal barrier capable of withstanding a surface temperature of 2000 K and a temperature gradient of 1000 K across a 10 mm section. [ 1 ] In recent years this concept has become more popular in Europe, particularly in Germany. A transregional collaborative research center (SFB Transregio) has been funded since 2006 in order to exploit the potential of grading monomaterials, such as steel, aluminium, and polypropylene, by using thermomechanically coupled manufacturing processes. [ 2 ] FGMs can vary in composition, in structure (for example, porosity), or in both, to produce the resulting gradient. The gradient can be categorized as either continuous or discontinuous, the latter exhibiting a stepwise gradient. There are several examples of FGMs in nature, including bamboo and bone, which alter their microstructure to create a material property gradient. [ 3 ] In biological materials, the gradients can be produced through changes in chemical composition, structure, and interfaces, and through the presence of gradients spanning multiple length scales. Specifically within the variation of chemical composition, manipulation of mineralization, the presence of inorganic ions and biomolecules , and the level of hydration have all been known to cause gradients in plants and animals. [ 4 ] The basic structural units of FGMs are elements or material ingredients represented by maxels . The term maxel was introduced in 2005 by Rajeev Dwivedi and Radovan Kovacevic at the Research Center for Advanced Manufacturing (RCAM). [ 5 ] The attributes of a maxel include the location and volume fraction of individual material components. A maxel is also used in the context of additive manufacturing processes (such as stereolithography , selective laser sintering , fused deposition modeling, etc.) to describe a physical voxel (a portmanteau of the words 'volume' and 'element'), which defines the build resolution of either a rapid prototyping or rapid manufacturing process, or the resolution of a design produced by such fabrication means. The transition between the two materials can be approximated by either a power-law or an exponential-law relation. Power law:

E = E_0 z^k,

where E_0 is the Young's modulus at the surface of the material, z is the depth from the surface, and k is a non-dimensional exponent (0 < k < 1). Exponential law:

E = E_0 e^{\alpha z},

where α < 0 indicates a hard surface and α > 0 indicates a soft surface. [ 6 ] There are many areas of application for FGMs. The concept is to make a composite material by varying the microstructure from one material to another material with a specific gradient. This enables the material to have the best properties of both materials.
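A minimal sketch of the two gradation laws quoted above; the numerical values of E_0, k, and α are placeholders.

```python
import numpy as np

E0 = 200e9                           # modulus at the reference surface (Pa), placeholder
z = np.linspace(0.001, 0.010, 10)    # depth from the surface (m)

# Power-law gradation: E(z) = E0 * z^k, with 0 < k < 1
k = 0.5
E_power = E0 * z**k

# Exponential gradation: E(z) = E0 * exp(alpha * z)
alpha = -50.0                        # alpha < 0: hard surface, softening with depth
E_exp = E0 * np.exp(alpha * z)

for zi, Ep, Ee in zip(z, E_power, E_exp):
    print(f"z = {zi*1e3:4.1f} mm   power: {Ep/1e9:7.2f} GPa   exp: {Ee/1e9:7.2f} GPa")
```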
Whether the aim is thermal or corrosion resistance, or a combination of malleability and toughness, the strengths of both constituent materials can be exploited to help avoid corrosion, fatigue, fracture, and stress corrosion cracking. There is a myriad of possible applications and industries interested in FGMs. They span from defense, looking at protective armor, to biomedical, investigating implants, to optoelectronics and energy. [ citation needed ] The aircraft and aerospace industry and the computer circuit industry are very interested in the possibility of materials that can withstand very high thermal gradients. [ 7 ] This is normally achieved by using a ceramic layer connected with a metallic layer. The Air Vehicles Directorate has conducted quasi-static bending tests of functionally graded titanium/ titanium boride specimens. [ 8 ] The tests correlated with finite element analysis (FEA) using a quadrilateral mesh, with each element having its own structural and thermal properties. The Advanced Materials and Processes Strategic Research Programme (AMPSRA) has analyzed the production of a thermal barrier coating using ZrO2 and NiCoCrAlY. Their results have proved successful, but no results of the analytical model have been published. The rendition of the term that relates to additive fabrication processes has its origins at the RMRG (Rapid Manufacturing Research Group) at Loughborough University in the United Kingdom . The term forms part of a descriptive taxonomy of terms relating directly to various particulars of the additive CAD - CAM manufacturing processes, originally established as part of the research conducted by architect Thomas Modeen into the application of the aforementioned techniques in the context of architecture. A gradient of elastic modulus essentially changes the fracture toughness of adhesive contacts. [ 9 ] Additionally, there has been an increased focus on how to apply FGMs to biomedical applications, specifically dental and orthopedic implants. For example, bone is an FGM that exhibits a change in elasticity and other mechanical properties between the cortical and cancellous bone . It logically follows that FGMs for orthopedic implants would be ideal for mimicking the performance of bone. FGMs for biomedical applications have the potential benefit of preventing stress concentrations that could lead to biomechanical failure, and of improving biocompatibility and biomechanical stability. [ 10 ] FGMs in relation to orthopedic implants are particularly important, as the common materials used (titanium, stainless steel, etc.) are stiffer than bone and thus pose a risk of creating abnormal physiological conditions that alter the stress concentration at the interface between the implant and the bone. If the implant is too stiff, it risks causing bone resorption , while an overly flexible implant can compromise stability at the bone-implant interface. Numerous FEM simulations have been carried out to understand the possible FGMs and mechanical gradients that could be implemented into different orthopedic implants, as the gradients and mechanical properties are highly geometry-specific. [ 11 ] An example of an FGM for use in orthopedic implants is a carbon-fiber-reinforced polymer matrix (CFRP) with yttria-stabilized zirconia (YSZ). Varying the amount of YSZ present as a filler in the material resulted in a flexural strength gradation ratio of 1.95. This high gradation ratio and overall high flexibility show promise for use as a supportive material in bone implants. [ 12 ]
There are quite a few FGMs being explored that use hydroxyapatite (HA), due to its osteoconductivity , which assists with osseointegration of implants. However, HA exhibits lower fracture strength and toughness compared to bone, which requires it to be used in conjunction with other materials in implants. One study combined HA with alumina and zirconia via a spark plasma process to create an FGM that shows a mechanical gradient as well as good cellular adhesion and proliferation. [ 13 ] Numerical methods have been developed for modelling the mechanical response of FGMs, with the finite element method being the most popular one. Initially, the variation of material properties was introduced by means of rows (or columns) of homogeneous elements, leading to a discontinuous, step-type variation in the mechanical properties. [ 14 ] Later, Santare and Lambros [ 15 ] developed functionally graded finite elements, where the mechanical property variation takes place at the element level. Martínez-Pañeda and Gallego extended this approach to commercial finite element software. [ 16 ] Contact properties of FGMs can be simulated using the boundary element method (which can be applied both to non-adhesive and adhesive contacts). [ 17 ] Molecular dynamics simulation has also been implemented to study functionally graded materials. M. Islam [ 18 ] studied the mechanical and vibrational properties of functionally graded Cu-Ni nanowires using molecular dynamics simulation. The mechanics of functionally graded material structures has been considered by many authors. [ 19 ] [ 20 ] [ 21 ] [ 22 ] Recently, a new micromechanical model has been developed to calculate the effective elastic Young's modulus for graphene-reinforced plate composites. The model considers the average dimensions of the graphene nanoplates, the weight fraction, and the graphene/ matrix ratio in the representative volume element. The dynamic behavior of this functionally graded polymer-based composite reinforced with graphene fillers is crucial for engineering applications. [ 23 ]
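The idea of functionally graded finite elements, where the property variation enters at the element level, can be sketched for a one-dimensional bar under axial load. Each two-node element is assigned the Young's modulus evaluated at its midpoint from an assumed exponential gradation; the geometry, load, and material numbers are illustrative only, and the sketch is not tied to any of the cited implementations.

```python
import numpy as np

# 1D bar of length L, fixed at x = 0, axial tip load P at x = L.
# The bar is functionally graded: E(x) = E0 * exp(alpha * x).
L, A, P = 0.1, 1e-4, 1000.0     # length (m), cross-section (m^2), load (N)
E0, alpha = 200e9, -10.0        # gradation parameters (placeholders)
n_el = 50                       # number of two-node elements
x = np.linspace(0.0, L, n_el + 1)

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    h = x[e + 1] - x[e]
    E_mid = E0 * np.exp(alpha * 0.5 * (x[e] + x[e + 1]))  # element-level grading
    k_e = E_mid * A / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K[e:e + 2, e:e + 2] += k_e

f = np.zeros(n_el + 1)
f[-1] = P                        # tip load
# Apply the fixed end (u(0) = 0) by removing the first row and column.
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print(f"tip displacement: {u[-1]*1e6:.2f} micrometers")
```

For a homogeneous bar (alpha = 0) this reduces to the textbook result u = PL/(EA); the softening gradation above yields a correspondingly larger tip displacement.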
https://en.wikipedia.org/wiki/Functionally_graded_material
In engineering design , a function–means tree (a.k.a. function/means tree or F/M tree ) is a method for functional decomposition and concept generation. At the top level, main functions are identified. Under each function, a means (or solution element) is attached. Alternative solution elements can also be attached. Each means is in turn decomposed into functions, with means attached to each of them. A well-elaborated function–means tree spans a design space in which all concepts under consideration are represented. In addition to product-level requirements, there might be requirements on sub-functions that are a consequence of means chosen at a higher level. The function–means tree is a tool that can aid in the creative part of the design process. It can also be a tool for mapping requirements to parts in a design. This engineering-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Function–means_tree
A fundamental ephemeris of the Solar System is a model of the objects of the system in space, with all of their positions and motions accurately represented. It is intended to be a high-precision primary reference for prediction and observation of those positions and motions, and it provides a basis for further refinement of the model. It is generally not intended to cover the entire life of the Solar System; usually a short-duration time span, perhaps a few centuries, is represented to high accuracy. Some long ephemerides cover several millennia to medium accuracy. They are published by the Jet Propulsion Laboratory as the Development Ephemeris series. The latest releases include DE430, which covers planetary and lunar ephemerides from Dec 21, 1549 to Jan 25, 2650 with high precision and is intended for general use for modern time periods. DE431 was created to cover a longer time period, from Aug 15, −13200 to March 15, 17191, with slightly less precision, for use with historic observations and far-reaching forecasted positions. DE432 was released as a minor update to DE430 with improvements to the Pluto barycenter in support of the New Horizons mission. [ 1 ] The set of physical laws and numerical constants used in the calculation of the ephemeris must be self-consistent and precisely specified. The ephemeris must be calculated strictly in accordance with this set, which represents the most current knowledge of all relevant physical forces and effects. Current fundamental ephemerides are typically released with exact descriptions of all mathematical models, methods of computation, observational data, and adjustment to the observations at the time of their announcement. [ 2 ] This may not have been the case in the past, as fundamental ephemerides were then computed from a collection of methods derived over a span of decades by many researchers. [ 3 ] The independent variable of the ephemeris is always time. In the case of the most current ephemerides, it is a relativistic coordinate time scale equivalent to the IAU definition of TCB . [ 3 ] In the past, mean solar time (before the discovery of the non-uniform rotation of the Earth ) and ephemeris time (before the implementation of relativistic gravitational equations ) were used. The remainder of the ephemeris can consist of either the mathematical equations and initial conditions which describe the motions of the bodies of the Solar System, of tabulated data calculated from those equations and conditions, or of condensed mathematical representations of the tabulated data. A fundamental ephemeris is the basis from which apparent ephemerides, phenomena, and orbital elements are computed for astronomical, nautical, and surveyors' almanacs. Apparent ephemerides give positions and motions of Solar System bodies as seen by observers from the surface of Earth, and are useful for astronomers, navigators, and surveyors in planning observations and in reducing the data acquired, although much of the work of the latter two has been supplanted by GPS technology. Phenomena are events related to the configurations of Solar System bodies, for instance rise and set times, phases , eclipses and occultations , and have many civil and scientific applications. Orbital elements are descriptions of the motion of a body at a particular instant, used for further short-time-span calculation of the body's position when high accuracy is not required.
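Fundamental ephemerides such as the JPL DE series distribute their tabulated data as blocks of Chebyshev polynomial coefficients, one form of the "condensed mathematical representations" mentioned above. The sketch below shows only the evaluation step, with made-up coefficients; real DE files have specific record layouts, per-body block lengths, and units.

```python
from numpy.polynomial import chebyshev

# Hypothetical Chebyshev coefficients for one coordinate of one body over
# one time block [t0, t1] (real DE ephemerides store such coefficients
# per body, per coordinate, per block).
t0, t1 = 2451545.0, 2451577.0             # Julian date range of the block
coeffs = [1.496e8, 2.3e6, -1.1e4, 7.5e1]  # km, made-up values

def evaluate_position(jd):
    """Map the Julian date to [-1, 1] and evaluate the Chebyshev series."""
    s = 2.0 * (jd - t0) / (t1 - t0) - 1.0
    return chebyshev.chebval(s, coeffs)

print(f"position at mid-block: {evaluate_position(0.5 * (t0 + t1)):.1f} km")
```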
Astronomers have been tasked with computing accurate ephemerides, originally for purposes of sea navigation, from at least the 18th century. In England, Charles II founded the Royal Observatory in 1675, [ 4 ] which began publishing The Nautical Almanac in 1766. [ 5 ] In France, the Bureau des Longitudes was founded in 1795 to publish the Connaissance des Temps . [ 6 ] The early fundamental ephemerides of these publications came from many different sources and authors as the science of celestial mechanics matured. [ 7 ] At the end of the 19th century, the analytical methods of general perturbations reached the probable limits of what could be accomplished by hand calculation. The planetary "theories" of Newcomb [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] and Hill [ 14 ] [ 15 ] formed the fundamental ephemerides of the Nautical Almanac at that time. For the Sun, Mercury, Venus, and Mars, the tabulations of the Astronomical Almanac continued to be derived from the work of Newcomb and Ross [ 16 ] through 1983. In France, the works of LeVerrier [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] and Gaillot [ 22 ] [ 23 ] [ 24 ] formed the fundamental ephemeris of the Connaissance des Temps . From the mid 20th century, work began on numerical integration of the equations of motion on early computing machines for purposes of producing fundamental ephemerides for the Astronomical Almanac . Jupiter, Saturn, Uranus, Neptune, and Pluto were based on the work of Eckert, et al . [ 25 ] and Clemence [ 26 ] through 1983. The fundamental ephemeris of the Moon, always a difficult problem in celestial mechanics, remained a work-in-progress through the early 1980s. It was based originally on the work of Brown, [ 27 ] with updates and corrections by Clemence, et al . [ 28 ] and Eckert, et al . [ 29 ] [ 30 ] [ 31 ] Starting in 1984, a revolution in the methods of producing fundamental ephemerides began. [ 32 ] From 1984 through 2002, the fundamental ephemeris of the Astronomical Almanac was the Jet Propulsion Laboratory 's DE200/LE200 , a fully numerically-integrated ephemeris fitted to modern position and velocity observations of the Sun, Moon, and planets. From 2003 onward (as of Feb 2012), JPL's DE405/LE405 , an integrated ephemeris referred to the International Celestial Reference Frame , has been used. [ 3 ] In France, the Bureau des Longitudes began using their machine-generated semi-analytical theory VSOP82 in 1984, [ 33 ] and their work continued with the founding of the Institut de mécanique céleste et de calcul des éphémérides in 1998 and the INPOP [ 34 ] [ 35 ] series of numerical ephemerides. DE405/LE405 were superseded by DE421/LE421 in 2008. [ 36 ]
https://en.wikipedia.org/wiki/Fundamental_ephemeris
The fundamental frequency , often referred to simply as the fundamental (abbreviated as f 0 or f 1 ), is defined as the lowest frequency of a periodic waveform . [ 1 ] In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. In terms of a superposition of sinusoids , the fundamental frequency is the lowest-frequency sinusoid in the sum of harmonically related frequencies, or the frequency of the difference between adjacent frequencies. In some contexts, the fundamental is usually abbreviated as f 0 , indicating the lowest frequency counting from zero . [ 2 ] [ 3 ] [ 4 ] In other contexts, it is more common to abbreviate it as f 1 , the first harmonic . [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] (The second harmonic is then f 2 = 2⋅ f 1 , etc.) According to Benward and Saker's Music: In Theory and Practice : [ 10 ] Since the fundamental is the lowest frequency and is also perceived as the loudest, the ear identifies it as the specific pitch of the musical tone [ harmonic spectrum ].... The individual partials are not heard separately but are blended together by the ear into a single tone. All sinusoidal and many non-sinusoidal waveforms repeat exactly over time – they are periodic. The period of a waveform is the smallest positive value T for which the following is true:

x(t + T) = x(t) \quad \text{for all } t,

where x(t) is the value of the waveform at time t . This means that the waveform's values over any interval of length T are all that is required to describe the waveform completely (for example, by the associated Fourier series ). Since any multiple of the period T also satisfies this definition, the fundamental period is defined as the smallest period over which the function may be described completely. The fundamental frequency is defined as its reciprocal:

f_0 = \frac{1}{T}.

When the units of time are seconds, the frequency is in s −1 , also known as hertz . For a pipe of length L with one end closed and the other end open, the wavelength of the fundamental harmonic is 4 L , since the standing wave has a node at the closed end and an antinode at the open end. Hence,

\lambda_0 = 4L.

Therefore, using the relation

\lambda_0 = \frac{v}{f_0},

where v is the speed of the wave, the fundamental frequency can be found in terms of the speed of the wave and the length of the pipe:

f_0 = \frac{v}{4L}.

If the ends of the same pipe are now both closed or both opened, the wavelength of the fundamental harmonic becomes 2 L . By the same method as above, the fundamental frequency is found to be

f_0 = \frac{v}{2L}.

In music, the fundamental is the musical pitch of a note that is perceived as the lowest partial present. The fundamental may be created by vibration over the full length of a string or air column, or a higher harmonic chosen by the player. The fundamental is one of the harmonics . A harmonic is any member of the harmonic series, an ideal set of frequencies that are positive integer multiples of a common fundamental frequency. The reason a fundamental is also considered a harmonic is because it is 1 times itself. [ 11 ] The fundamental is the frequency at which the entire wave vibrates. Overtones are other sinusoidal components present at frequencies above the fundamental. All of the frequency components that make up the total waveform, including the fundamental and the overtones, are called partials. Together they form the harmonic series. Overtones which are perfect integer multiples of the fundamental are called harmonics.
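A short numerical illustration of the pipe formulas above, taking the speed of sound in air at room temperature (about 343 m/s) as the wave speed; the pipe length is an arbitrary example.

```python
v = 343.0   # speed of sound in air (m/s), approximately, at 20 degrees C
L = 0.5     # pipe length (m), arbitrary example

f0_closed_open = v / (4 * L)   # one end closed, one end open: f0 = v / 4L
f0_open_open   = v / (2 * L)   # both ends open (or both closed): f0 = v / 2L

print(f"closed-open pipe: f0 = {f0_closed_open:.1f} Hz")
print(f"open-open pipe:   f0 = {f0_open_open:.1f} Hz")
```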
When an overtone is near to being harmonic, but not exact, it is sometimes called a harmonic partial, although such partials are often referred to simply as harmonics. Sometimes overtones are created that are not anywhere near a harmonic, and are just called partials or inharmonic overtones. The fundamental frequency is considered the first harmonic and the first partial . The numbering of the partials and harmonics is then usually the same; the second partial is the second harmonic, etc. But if there are inharmonic partials, the numbering no longer coincides. Overtones are numbered as they appear above the fundamental. So strictly speaking, the first overtone is the second partial (and usually the second harmonic). As this can result in confusion, only harmonics are usually referred to by their numbers, and overtones and partials are described by their relationships to those harmonics. Consider a spring, fixed at one end and having a mass attached to the other; this would be a single degree of freedom (SDoF) oscillator. Once set into motion, it will oscillate at its natural frequency. For a single degree of freedom oscillator, a system in which the motion can be described by a single coordinate, the natural frequency depends on two system properties: mass and stiffness (provided the system is undamped). The natural frequency, or fundamental frequency, ω 0 , can be found using the following equation:

\omega_0 = \sqrt{\frac{k}{m}},

where k is the stiffness of the spring and m is the mass. To determine the natural frequency in Hz, the omega value is divided by 2 π :

f_0 = \frac{\omega_0}{2\pi} = \frac{1}{2\pi}\sqrt{\frac{k}{m}}.

While doing a modal analysis , the frequency of the 1st mode is the fundamental frequency.
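A minimal check of the mass-spring formulas, with illustrative values for stiffness and mass.

```python
import math

k = 1000.0   # spring stiffness (N/m), illustrative
m = 2.5      # attached mass (kg), illustrative

omega0 = math.sqrt(k / m)      # natural angular frequency (rad/s)
f0 = omega0 / (2 * math.pi)    # natural frequency (Hz)

print(f"omega0 = {omega0:.2f} rad/s, f0 = {f0:.2f} Hz")
```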
https://en.wikipedia.org/wiki/Fundamental_frequency
In single-variable differential calculus , the fundamental increment lemma is an immediate consequence of the definition of the derivative f′(a) of a function f at a point a :

f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}.

The lemma asserts that the existence of this derivative implies the existence of a function φ such that

\lim_{h \to 0} \varphi(h) = 0 \qquad \text{and} \qquad f(a+h) = f(a) + f'(a)\,h + \varphi(h)\,h

for sufficiently small but non-zero h . For a proof, it suffices to define

\varphi(h) = \frac{f(a+h) - f(a)}{h} - f'(a)

and verify that this φ meets the requirements. The lemma says, at least when h is sufficiently close to zero, that the difference quotient can be written as the derivative f′ plus an error term φ(h) that vanishes at h = 0. That is, one has

\frac{f(a+h) - f(a)}{h} = f'(a) + \varphi(h).

In that the existence of φ uniquely characterises the number f′(a), the fundamental increment lemma can be said to characterise the differentiability of single-variable functions. For this reason, a generalisation of the lemma can be used in the definition of differentiability in multivariable calculus . In particular, suppose f maps some subset of R^n to R. Then f is said to be differentiable at a if there is a linear function M and a function Φ such that

\lim_{\mathbf{h} \to \mathbf{0}} \Phi(\mathbf{h}) = 0 \qquad \text{and} \qquad f(\mathbf{a}+\mathbf{h}) = f(\mathbf{a}) + M(\mathbf{h}) + \Phi(\mathbf{h})\,\lVert \mathbf{h} \rVert

for non-zero h sufficiently close to 0 . In this case, M is the unique derivative (or total derivative , to distinguish it from the directional and partial derivatives ) of f at a . Notably, M is given by the Jacobian matrix of f evaluated at a . We can write the above equation in terms of the partial derivatives ∂f/∂x_i as

f(\mathbf{a}+\mathbf{h}) = f(\mathbf{a}) + \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}(\mathbf{a})\, h_i + \Phi(\mathbf{h})\,\lVert \mathbf{h} \rVert.
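A quick numerical illustration of the lemma for f(x) = sin(x) at a = 1: the error term φ(h) = (f(a+h) − f(a))/h − f′(a) should shrink as h approaches 0.

```python
import math

f = math.sin
fprime = math.cos   # exact derivative of sin
a = 1.0

def phi(h):
    """Error term from the fundamental increment lemma."""
    return (f(a + h) - f(a)) / h - fprime(a)

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"h = {h:.0e}   phi(h) = {phi(h): .3e}")
```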
https://en.wikipedia.org/wiki/Fundamental_increment_lemma
In physics , the fundamental interactions or fundamental forces are interactions in nature that appear not to be reducible to more basic interactions. There are four fundamental interactions known to exist: gravitation, electromagnetism, the weak interaction, and the strong interaction. [ 1 ] The gravitational and electromagnetic interactions produce long-range forces whose effects can be seen directly in everyday life. The strong and weak interactions produce forces at subatomic scales and govern nuclear interactions inside atoms . Some scientists hypothesize that a fifth force might exist, but these hypotheses remain speculative. Each of the known fundamental interactions can be described mathematically as a field . The gravitational interaction is attributed to the curvature of spacetime , described by Einstein's general theory of relativity . The other three are discrete quantum fields , and their interactions are mediated by elementary particles described by the Standard Model of particle physics . [ 2 ] Within the Standard Model, the strong interaction is carried by a particle called the gluon and is responsible for quarks binding together to form hadrons , such as protons and neutrons . As a residual effect, it creates the nuclear force that binds the latter particles to form atomic nuclei . The weak interaction is carried by particles called W and Z bosons , and also acts on the nucleus of atoms , mediating radioactive decay . The electromagnetic force, carried by the photon , creates electric and magnetic fields , which are responsible for the attraction between orbital electrons and atomic nuclei that holds atoms together, as well as chemical bonding and electromagnetic waves , including visible light ; it forms the basis for electrical technology. Although the electromagnetic force is far stronger than gravity, it tends to cancel itself out within large objects, so over large (astronomical) distances gravity tends to be the dominant force, and it is responsible for holding together the large-scale structures in the universe, such as planets, stars, and galaxies. The historical success of models that show relationships between fundamental interactions has led to efforts to go beyond the Standard Model and combine all four forces into a theory of everything . In his 1687 theory, Isaac Newton postulated space as an infinite and unalterable physical structure existing before, within, and around all objects, while their states and relations unfold at a constant pace everywhere, thus absolute space and time . Inferring that all objects bearing mass approach at a constant rate, but collide by impact proportional to their masses, Newton inferred that matter exhibits an attractive force. His law of universal gravitation implied there to be instant interaction among all objects. [ 3 ] [ 4 ] As conventionally interpreted, Newton's theory of motion modelled a central force without a communicating medium. [ 5 ] [ 6 ] Thus Newton's theory violated the tradition, going back to Descartes , that there should be no action at a distance . [ 7 ] Conversely, during the 1820s, when explaining magnetism, Michael Faraday inferred a field filling space and transmitting that force. Faraday conjectured that ultimately, all forces unified into one. [ 8 ] In 1873, James Clerk Maxwell unified electricity and magnetism as effects of an electromagnetic field whose third consequence was light, travelling at constant speed in vacuum.
If his electromagnetic field theory held true in all inertial frames of reference , this would contradict Newton's theory of motion, which relied on Galilean relativity . [ 9 ] If, instead, his field theory only applied to reference frames at rest relative to a mechanical luminiferous aether —presumed to fill all space whether within matter or in vacuum and to manifest the electromagnetic field—then it could be reconciled with Galilean relativity and Newton's laws. (However, such a "Maxwell aether" was later disproven; Newton's laws did, in fact, have to be replaced.) [ 10 ] The Standard Model of particle physics was developed throughout the latter half of the 20th century. In the Standard Model, the electromagnetic, strong, and weak interactions associate with elementary particles , whose behaviours are modelled in quantum mechanics (QM). For predictive success with QM's probabilistic outcomes, particle physics conventionally models QM events across a field set to special relativity , altogether relativistic quantum field theory (QFT). [ 11 ] Force particles, called gauge bosons — force carriers or messenger particles of underlying fields—interact with matter particles, called fermions . Everyday matter is atoms, composed of three fermion types: up-quarks and down-quarks constituting, as well as electrons orbiting, the atom's nucleus. Atoms interact, form molecules , and manifest further properties through electromagnetic interactions among their electrons absorbing and emitting photons, the electromagnetic field's force carrier, which if unimpeded traverse potentially infinite distance. Electromagnetism's QFT is quantum electrodynamics (QED). The force carriers of the weak interaction are the massive W and Z bosons . Electroweak theory (EWT) covers both electromagnetism and the weak interaction. At the high temperatures shortly after the Big Bang , the weak interaction, the electromagnetic interaction, and the Higgs boson were originally mixed components of a different set of ancient pre-symmetry-breaking fields. As the early universe cooled, these fields split into the long-range electromagnetic interaction, the short-range weak interaction, and the Higgs boson. In the Higgs mechanism , the Higgs field manifests Higgs bosons that interact with some quantum particles in a way that endows those particles with mass. The strong interaction, whose force carrier is the gluon , traversing minuscule distance among quarks, is modeled in quantum chromodynamics (QCD). EWT, QCD, and the Higgs mechanism comprise particle physics ' Standard Model (SM). Predictions are usually made using calculational approximation methods, although such perturbation theory is inadequate to model some experimental observations (for instance bound states and solitons ). Still, physicists widely accept the Standard Model as science's most experimentally confirmed theory. Beyond the Standard Model , some theorists work to unite the electroweak and strong interactions within a Grand Unified Theory [ 12 ] (GUT). Some attempts at GUTs hypothesize "shadow" particles, such that every known matter particle associates with an undiscovered force particle , and vice versa, altogether supersymmetry (SUSY). Other theorists seek to quantize the gravitational field by the modelling behaviour of its hypothetical force carrier, the graviton and achieve quantum gravity (QG). One approach to QG is loop quantum gravity (LQG). 
Still other theorists seek both QG and GUT within one framework, reducing all four fundamental interactions to a Theory of Everything (ToE). The most prevalent aim at a ToE is string theory , although to model matter particles , it added SUSY to force particles —and so, strictly speaking, became superstring theory . Multiple, seemingly disparate superstring theories were unified on a backbone, M-theory . Theories beyond the Standard Model remain highly speculative, lacking great experimental support. In the conceptual model of fundamental interactions, matter consists of fermions , which carry properties called charges and spin ± 1 ⁄ 2 (intrinsic angular momentum ± ħ ⁄ 2 , where ħ is the reduced Planck constant ). They attract or repel each other by exchanging bosons . The interaction of any pair of fermions in perturbation theory can then be modelled thus: The exchange of bosons always carries energy and momentum between the fermions, thereby changing their speed and direction. The exchange may also transport a charge between the fermions, changing the charges of the fermions in the process (e.g., turn them from one type of fermion to another). Since bosons carry one unit of angular momentum, the fermion's spin direction will flip from + 1 ⁄ 2 to − 1 ⁄ 2 (or vice versa) during such an exchange (in units of the reduced Planck constant ). Since such interactions result in a change in momentum, they can give rise to classical Newtonian forces . In quantum mechanics, physicists often use the terms "force" and "interaction" interchangeably; for example, the weak interaction is sometimes referred to as the "weak force". According to the present understanding, there are four fundamental interactions or forces: gravitation , electromagnetism, the weak interaction , and the strong interaction. Their magnitude and behaviour vary greatly, as described in the table below. Modern physics attempts to explain every observed physical phenomenon by these fundamental interactions. Moreover, reducing the number of different interaction types is seen as desirable. Two cases in point are the unification of: Both magnitude ("relative strength") and "range" of the associated potential, as given in the table, are meaningful only within a rather complex theoretical framework. The table below lists properties of a conceptual scheme that remains the subject of ongoing research. The modern (perturbative) quantum mechanical view of the fundamental forces other than gravity is that particles of matter ( fermions ) do not directly interact with each other, but rather carry a charge, and exchange virtual particles ( gauge bosons ), which are the interaction carriers or force mediators. For example, photons mediate the interaction of electric charges , and gluons mediate the interaction of color charges . The full theory includes perturbations beyond simply fermions exchanging bosons; these additional perturbations can involve bosons that exchange fermions, as well as the creation or destruction of particles: see Feynman diagrams for examples. Gravitation is the weakest of the four interactions at the atomic scale, where electromagnetic interactions dominate. Gravitation is the most important of the four fundamental forces for astronomical objects over astronomical distances for two reasons. First, gravitation has an infinite effective range, like electromagnetism but unlike the strong and weak interactions. 
Second, gravity always attracts and never repels; in contrast, astronomical bodies tend toward a near-neutral net electric charge, such that the attraction to one type of charge and the repulsion from the opposite charge mostly cancel each other out. [ 15 ] Even though electromagnetism is far stronger than gravitation, electrostatic attraction is not relevant for large celestial bodies, such as planets, stars, and galaxies, simply because such bodies contain equal numbers of protons and electrons and so have a net electric charge of zero. Nothing "cancels" gravity, since it is only attractive, unlike electric forces, which can be attractive or repulsive. On the other hand, all objects having mass are subject to the gravitational force, which only attracts. Therefore, only gravitation matters on the large-scale structure of the universe. The long range of gravitation makes it responsible for such large-scale phenomena as the structure of galaxies and black holes, and, being only attractive, it slows down the expansion of the universe . Gravitation also explains astronomical phenomena on more modest scales, such as planetary orbits , as well as everyday experience: objects fall; heavy objects act as if they were glued to the ground, and animals can only jump so high. Gravitation was the first interaction to be described mathematically. In ancient times, Aristotle hypothesized that objects of different masses fall at different rates. During the Scientific Revolution , Galileo Galilei experimentally determined that this hypothesis was wrong under certain circumstances—neglecting the friction due to air resistance and buoyancy forces if an atmosphere is present (e.g. the case of a dropped air-filled balloon vs a water-filled balloon), all objects accelerate toward the Earth at the same rate. Isaac Newton's law of Universal Gravitation (1687) was a good approximation of the behaviour of gravitation. Present-day understanding of gravitation stems from Einstein's General Theory of Relativity of 1915, a more accurate (especially for cosmological masses and distances) description of gravitation in terms of the geometry of spacetime . Merging general relativity and quantum mechanics (or quantum field theory ) into a more general theory of quantum gravity is an area of active research. It is hypothesized that gravitation is mediated by a massless spin-2 particle called the graviton . Although general relativity has been experimentally confirmed (at least for weak fields, i.e. not black holes) on all but the smallest scales, there are alternatives to general relativity . These theories must reduce to general relativity in some limit, and the focus of observational work is to establish limits on what deviations from general relativity are possible. Proposed extra dimensions could explain why the gravity force is so weak. [ 16 ] Electromagnetism and the weak interaction appear to be very different at everyday low energies. They can be modeled using two different theories. However, above the unification energy, on the order of 100 GeV , they would merge into a single electroweak force. The electroweak theory is very important for modern cosmology , particularly on how the universe evolved. This is because shortly after the Big Bang, when the temperature was still above approximately 10 15 K , the electromagnetic force and the weak force were still merged as a combined electroweak force.
For contributions to the unification of the weak and electromagnetic interactions between elementary particles , Abdus Salam, Sheldon Glashow, and Steven Weinberg were awarded the Nobel Prize in Physics in 1979. [ 17 ] [ 18 ] Electromagnetism is the force that acts between electrically charged particles. This phenomenon includes the electrostatic force acting between charged particles at rest, and the combined effect of electric and magnetic forces acting between charged particles moving relative to each other. Electromagnetism has an infinite range, as gravity does, but is vastly stronger. It is the force that binds electrons to atoms, and it holds molecules together . It is responsible for everyday phenomena like light , magnets , electricity , and friction . Electromagnetism fundamentally determines all macroscopic, and many atomic-level, properties of the chemical elements . In a four kilogram (~1 gallon) jug of water, there are roughly 2 × 10 8 coulombs of total electron charge. Thus, if we place two such jugs a meter apart, the electrons in one of the jugs repel those in the other jug with a force of roughly 4 × 10 26 N. This force is many times larger than the weight of the planet Earth. The atomic nuclei in one jug also repel those in the other with the same force. However, these repulsive forces are canceled by the attraction of the electrons in jug A with the nuclei in jug B and the attraction of the nuclei in jug A with the electrons in jug B, resulting in no net force. Electromagnetic forces are tremendously stronger than gravity, but tend to cancel out, so that for astronomical-scale bodies, gravity dominates. Electrical and magnetic phenomena have been observed since ancient times, but it was only in the 19th century that James Clerk Maxwell discovered that electricity and magnetism are two aspects of the same fundamental interaction. By 1864, Maxwell's equations had rigorously quantified this unified interaction. Maxwell's theory, restated using vector calculus , is the classical theory of electromagnetism, suitable for most technological purposes. The constant speed of light in vacuum (customarily denoted with a lowercase letter c ) can be derived from Maxwell's equations, which are consistent with the theory of special relativity. Albert Einstein 's 1905 theory of special relativity , however, which follows from the observation that the speed of light is constant no matter how fast the observer is moving, showed that the theoretical result implied by Maxwell's equations has profound implications far beyond electromagnetism on the very nature of time and space. In another work that departed from classical electromagnetism, Einstein also explained the photoelectric effect by utilizing Max Planck's discovery that light was transmitted in 'quanta' of specific energy content based on the frequency, which we now call photons . Starting around 1927, Paul Dirac combined quantum mechanics with the relativistic theory of electromagnetism . Further work in the 1940s, by Richard Feynman , Freeman Dyson , Julian Schwinger , and Sin-Itiro Tomonaga , completed this theory, which is now called quantum electrodynamics , the revised theory of electromagnetism. Quantum electrodynamics and quantum mechanics provide a theoretical basis for electromagnetic behavior such as quantum tunneling , in which a certain percentage of electrically charged particles move in ways that would be impossible under the classical electromagnetic theory, and which is necessary for everyday electronic devices such as transistors to function.
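The jug-of-water estimate follows from Avogadro's number and Coulomb's law and can be reproduced in a few lines; small differences from the rounded figures in the text are expected.

```python
# Estimate the electron charge in 4 kg of water and the Coulomb repulsion
# between the electrons of two such jugs held one metre apart.
M_WATER = 0.018015           # molar mass of H2O (kg/mol)
N_A = 6.02214076e23          # Avogadro constant (1/mol)
E_CHARGE = 1.602176634e-19   # elementary charge (C)
K_E = 8.9875517923e9         # Coulomb constant (N m^2 / C^2)

mass = 4.0                                # kg of water
molecules = mass / M_WATER * N_A
electrons = molecules * 10                # 10 electrons per H2O molecule
q = electrons * E_CHARGE                  # total electron charge (C)

r = 1.0                                   # separation (m)
force = K_E * q**2 / r**2                 # Coulomb's law

print(f"total electron charge: {q:.2e} C")
print(f"repulsive force at 1 m: {force:.2e} N")
```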
The weak interaction or weak nuclear force is responsible for some nuclear phenomena such as beta decay . Electromagnetism and the weak force are now understood to be two aspects of a unified electroweak interaction ; this discovery was the first step toward the unified theory known as the Standard Model . In the theory of the electroweak interaction, the carriers of the weak force are the massive gauge bosons called the W and Z bosons . The weak interaction is the only known interaction that does not conserve parity ; it is left–right asymmetric. The weak interaction even violates CP symmetry but does conserve CPT . The strong interaction , or strong nuclear force , is the most complicated interaction, mainly because of the way it varies with distance. The nuclear force is powerfully attractive between nucleons at distances of about 1 femtometre (fm, or 10⁻¹⁵ metres), but it rapidly decreases to insignificance at distances beyond about 2.5 fm. At distances less than 0.7 fm, the nuclear force becomes repulsive. This repulsive component is responsible for the physical size of nuclei, since the nucleons can come no closer than the force allows. After the nucleus was discovered in 1908, it was clear that a new force, today known as the nuclear force, was needed to overcome the electrostatic repulsion , a manifestation of electromagnetism, of the positively charged protons. Otherwise, the nucleus could not exist. Moreover, the force had to be strong enough to squeeze the protons into a region whose diameter is about 10⁻¹⁵ m , much smaller than that of the entire atom. From the short range of this force, Hideki Yukawa predicted that it was associated with a massive force particle, whose mass is approximately 100 MeV (see the sketch after this paragraph). The 1947 discovery of the pion ushered in the modern era of particle physics. Hundreds of hadrons were discovered from the 1940s to 1960s, and an extremely complicated theory of hadrons as strongly interacting particles was developed; while each of the approaches pursued offered insights, none led directly to a fundamental theory. Murray Gell-Mann along with George Zweig first proposed fractionally charged quarks in 1964. Throughout the 1960s, different authors considered theories similar to the modern fundamental theory of quantum chromodynamics (QCD) as simple models for the interactions of quarks. The first to hypothesize the gluons of QCD were Moo-Young Han and Yoichiro Nambu , who introduced the quark color charge. Han and Nambu hypothesized that it might be associated with a force-carrying field. At that time, however, it was difficult to see how such a model could permanently confine quarks. Han and Nambu also assigned each quark color an integer electrical charge, so that the quarks were fractionally charged only on average, and they did not expect the quarks in their model to be permanently confined. In 1971, Murray Gell-Mann and Harald Fritzsch proposed that the Han/Nambu color gauge field was the correct theory of the short-distance interactions of fractionally charged quarks. A little later, David Gross , Frank Wilczek , and David Politzer discovered that this theory had the property of asymptotic freedom , allowing them to make contact with experimental evidence . They concluded that QCD was the complete theory of the strong interactions, correct at all distance scales.
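Yukawa's range-to-mass estimate rests on the relation r ≈ ħ/(mc): a force mediated by a particle of mass m has a range of roughly that particle's reduced Compton wavelength. A minimal sketch follows; the sample ranges are assumptions chosen to bracket the nuclear force's reach as described above.

```python
# Yukawa's estimate: invert r ~ hbar*c / (m*c^2) to get the mediator mass
# from an assumed force range. The ranges below are illustrative inputs.
hbar_c = 197.327          # hbar * c in MeV * fm
for r_fm in (1.0, 1.4, 2.0):
    mc2 = hbar_c / r_fm   # mediator rest energy, MeV
    print(f"range {r_fm:.1f} fm  ->  mediator mass ~ {mc2:.0f} MeV")
# A ~2 fm range gives ~100 MeV, close to Yukawa's prediction; the pion
# discovered in 1947 has a rest energy of about 140 MeV.
```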
The discovery of asymptotic freedom led most physicists to accept QCD, since it became clear that even the long-distance properties of the strong interactions could be consistent with experiment if the quarks are permanently confined : the strong force increases indefinitely with distance, trapping quarks inside the hadrons. Assuming that quarks are confined, Mikhail Shifman , Arkady Vainshtein and Valentine Zakharov were able to compute the properties of many low-lying hadrons directly from QCD, with only a few extra parameters to describe the vacuum. In 1980, Kenneth G. Wilson published computer calculations based on the first principles of QCD, establishing, to a level of confidence tantamount to certainty, that QCD will confine quarks. Since then, QCD has been the established theory of strong interactions. QCD is a theory of fractionally charged quarks interacting by means of 8 bosonic particles called gluons. The gluons also interact with each other, not just with the quarks, and at long distances the lines of force collimate into strings, loosely modeled by a linear potential, a constant attractive force. In this way, the mathematical theory of QCD not only explains how quarks interact over short distances but also the string-like behavior, discovered by Chew and Frautschi, which they manifest over longer distances. Conventionally, the Higgs interaction is not counted among the four fundamental forces. [ 19 ] [ 20 ] Nonetheless, although not a gauge interaction nor generated by any diffeomorphism symmetry, the Higgs field 's cubic Yukawa coupling produces a weakly attractive fifth interaction. After spontaneous symmetry breaking via the Higgs mechanism , Yukawa terms remain of the form λ i 2 h ψ ¯ i ψ i {\displaystyle {\tfrac {\lambda _{i}}{\sqrt {2}}}\,h\,{\bar {\psi }}_{i}\psi _{i}} with Yukawa coupling λ i = 2 m i / v {\displaystyle \lambda _{i}={\sqrt {2}}\,m_{i}/v} , particle mass m i {\displaystyle m_{i}} (in eV ), and Higgs vacuum expectation value v = 246.22 GeV . Hence coupled particles can exchange a virtual Higgs boson, yielding classical potentials of the Yukawa form V ( r ) = − λ i λ j 4 π e − m H r r {\displaystyle V(r)=-{\frac {\lambda _{i}\lambda _{j}}{4\pi }}\,{\frac {e^{-m_{H}r}}{r}}} (in natural units), with Higgs mass m H = 125.18 GeV . Because the reduced Compton wavelength of the Higgs boson is so small ( 1.576 × 10⁻¹⁸ m , comparable to the W and Z bosons ), this potential has an effective range of a few attometers . Between two electrons, it begins roughly 10¹¹ times weaker than the weak interaction , and grows exponentially weaker at non-zero distances. The fundamental forces may become unified into a single force at very high energies and on a minuscule scale, the Planck scale . [ 21 ] Particle accelerators cannot produce the enormous energies required to experimentally probe this regime. The weak and electromagnetic forces have already been unified with the electroweak theory of Sheldon Glashow , Abdus Salam , and Steven Weinberg , for which they received the 1979 Nobel Prize in physics. [ 22 ] [ 23 ] [ 24 ] Numerous theoretical efforts have been made to systematize the existing four fundamental interactions on the model of electroweak unification. Grand Unified Theories (GUTs) are proposals to show that the three fundamental interactions described by the Standard Model are all different manifestations of a single interaction with symmetries that break down and create separate interactions below some extremely high level of energy. GUTs are also expected to predict some of the relationships between constants of nature that the Standard Model treats as unrelated, as well as predicting gauge coupling unification for the relative strengths of the electromagnetic, weak, and strong forces.
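Returning to the Higgs-mediated force above, the quoted effective range follows from the reduced Compton wavelength alone. A small check (standard constants only; the 1 fm comparison distance is an arbitrary choice):

```python
# Check of the quoted Higgs force range: the range is set by the reduced
# Compton wavelength hbar/(m*c), and the Yukawa potential is suppressed by
# exp(-r/lambda_bar) beyond it.
hbar_c = 197.327e6 * 1e-15   # hbar*c in eV*m
m_H    = 125.18e9            # Higgs mass in eV (value quoted above)

lam = hbar_c / m_H           # reduced Compton wavelength, m
print(f"range ~ {lam:.3e} m")                       # ~1.576e-18 m, as quoted
print(f"suppression at 1 fm: exp(-{1e-15/lam:.0f})")  # exp(-634): negligible
```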
A so-called theory of everything , which would integrate GUTs with a quantum gravity theory, faces a greater barrier, because no quantum gravity theory, whether string theory , loop quantum gravity , or twistor theory , has secured wide acceptance. Some theories look for a graviton to complete the Standard Model list of force-carrying particles, while others, like loop quantum gravity, emphasize the possibility that spacetime itself may have a quantum aspect to it. Some theories beyond the Standard Model include a hypothetical fifth force , and the search for such a force is an ongoing line of experimental physics research. In supersymmetric theories, some particles acquire their masses only through supersymmetry breaking effects, and these particles, known as moduli , can mediate new forces. Another reason to look for new forces is the discovery that the expansion of the universe is accelerating (also known as dark energy ), giving rise to a need to explain a nonzero cosmological constant , and possibly to other modifications of general relativity . Fifth forces have also been suggested to explain phenomena such as CP violation, dark matter , and dark flow .
https://en.wikipedia.org/wiki/Fundamental_interaction
In mathematics , particularly in functional analysis , the fundamental lemma of interpolation theory is a lemma that establishes the relationship between different methods of interpolation in Banach spaces . [ 1 ] The fundamental lemma states the following: Fundamental lemma of interpolation theory. Let A ¯ = ( A 0 , A 1 ) {\displaystyle {\bar {A}}=(A_{0},A_{1})} be a Banach couple and let a ∈ Σ ( A ¯ ) {\displaystyle a\in \Sigma ({\bar {A}})} be such that min ( 1 , 1 / t ) K ( t , a ) → 0 {\displaystyle \min(1,1/t)K(t,a)\to 0} when t → 0 {\displaystyle t\to 0} or t → ∞ {\displaystyle t\to \infty } . Then for each ε > 0 {\displaystyle \varepsilon >0} , there exists a representation a = ∑ n ∈ Z u n {\displaystyle a=\sum _{n\in \mathbb {Z} }u_{n}} satisfying u n ∈ Δ ( A ¯ ) {\displaystyle u_{n}\in \Delta ({\bar {A}})} (with convergence in Σ ( A ¯ ) {\displaystyle \Sigma ({\bar {A}})} ) and J ( 2 n , u n ) ≤ ( α + ε ) K ( 2 n , a ) {\displaystyle J(2^{n},u_{n})\leq (\alpha +\varepsilon )K(2^{n},a)} for all n ∈ Z {\displaystyle n\in \mathbb {Z} } , where α ≤ 3 {\displaystyle \alpha \leq 3} is a constant. [ 2 ] A stronger version of the fundamental lemma, known as the strong fundamental lemma , was developed by mathematicians Alexander Brudnyi and Krugljak. The strong fundamental lemma states that for mutually closed Banach couples, there exists a decomposition with improved estimates on the norms of the components. Specifically, for a ∈ Σ c ( A ¯ ) {\displaystyle a\in \Sigma ^{c}({\bar {A}})} , there exist elements u n ∈ Δ ( A ¯ ) {\displaystyle u_{n}\in \Delta ({\bar {A}})} summing to a whose J -functionals are controlled, uniformly in the parameter, by the K -functional of a with constant 3 + 2 2 ≈ 5.8284 {\displaystyle 3+2{\sqrt {2}}\approx 5.8284} . This constant is currently the best known value, as proven by Dmitriev and later independently by Kaijser using different methods. [ 3 ] The fundamental lemma was first introduced in the context of classical interpolation theory by mathematicians Jacques-Louis Lions and Jaak Peetre in their 1964 paper Sur une classe d'espaces d'interpolation . [ 4 ] The development of stronger versions, including the strong fundamental lemma, indicated a maturation of the theory as its applications expanded. The ongoing search for optimal constants in these results remains an active area of research, with significant contributions from mathematicians like Brudnyi, Krugljak, Cwikel, and others. [ 5 ] The fundamental lemma is particularly useful in establishing the equivalence of the K-method and J-method of interpolation. This equivalence is fundamental to the theory of interpolation spaces , as it allows mathematicians to choose whichever method is more convenient for a given problem. [ 6 ] Furthermore, the lemma has found various applications in the study of K-spaces , a class of interpolation spaces defined by certain monotonicity conditions. Brudnyi and Krugljak used the strong fundamental lemma to show that K-spaces, despite their abstract definition, have a concrete structure characterized by lattice norms acting on K-functionals. [ 5 ] In harmonic analysis , the lemma provides essential tools for studying the behavior of various function spaces. It has been particularly useful in establishing properties of Calderón–Mityagin couples , where all interpolation spaces with respect to the couple are K-spaces. The lemma also appears in the theory of operator ideals and has applications in studying the regularity properties of solutions to partial differential equations . [ 7 ] Other variants of the fundamental lemma have been developed for specific applications, including versions involving the E-functional and continuous parameter formulations.
These variants have proven useful in studying weighted Banach lattices and in establishing relationships between different types of interpolation spaces. [ 8 ]
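The K-functional that the lemma controls is a concrete optimization, and for simple couples it can be computed directly. The sketch below is an added illustration, not from the article; the couple (ℓ¹, ℓ∞) on ℝⁿ and the test vector are assumptions. It evaluates K(t, a) = inf{ ‖a₀‖₁ + t‖a₁‖∞ : a = a₀ + a₁ } by reducing the infimum to a one-dimensional search over a soft-thresholding level.

```python
# K-functional for the couple (l1, l_inf) on R^n. For a fixed level
# lam = ||a1||_inf, the optimal split is soft thresholding, so
#   K(t, a) = min over lam >= 0 of  sum_i max(|a_i| - lam, 0) + t*lam,
# a piecewise-linear convex function of lam whose minimum is attained at
# lam = 0 or at one of the breakpoints |a_i|.
import numpy as np

def K(t, a):
    mags = np.abs(np.asarray(a, dtype=float))
    candidates = np.concatenate(([0.0], mags))
    costs = [np.maximum(mags - lam, 0.0).sum() + t * lam for lam in candidates]
    return min(costs)

a = [3.0, -1.0, 0.5]
for t in (0.5, 1.0, 2.0, 10.0):
    print(f"K({t}, a) = {K(t, a):.3f}")
# K(t, a) is nondecreasing and concave in t, with K(t, a) <= ||a||_1 for all t.
```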
https://en.wikipedia.org/wiki/Fundamental_lemma_of_interpolation_theory
In mathematics , specifically in the calculus of variations , a variation δf of a function f can be concentrated on an arbitrarily small interval, but not a single point. Accordingly, the necessary condition of extremum ( functional derivative equal to zero) appears in a weak formulation (variational form) integrated with an arbitrary function δf . The fundamental lemma of the calculus of variations is typically used to transform this weak formulation into the strong formulation ( differential equation ), free of the integration with arbitrary function. The proof usually exploits the possibility to choose δf concentrated on an interval on which f keeps sign (positive or negative). Several versions of the lemma are in use. Basic versions are easy to formulate and prove. More powerful versions are used when needed. The basic version states: if a continuous function f on an open interval ( a , b ) satisfies ∫ a b f ( x ) h ( x ) d x = 0 {\displaystyle \int _{a}^{b}f(x)h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on ( a , b ), then f is identically zero. Here "smooth" may be interpreted as "infinitely differentiable", [ 1 ] but often is interpreted as "twice continuously differentiable" or "continuously differentiable" or even just "continuous", [ 2 ] since these weaker statements may be strong enough for a given task. "Compactly supported" means "vanishes outside [ c , d ] {\displaystyle [c,d]} for some c {\displaystyle c} , d {\displaystyle d} such that a < c < d < b {\displaystyle a<c<d<b} "; [ 1 ] but often a weaker statement suffices, assuming only that h {\displaystyle h} (or h {\displaystyle h} and a number of its derivatives) vanishes at the endpoints a {\displaystyle a} , b {\displaystyle b} ; [ 2 ] in this case the closed interval [ a , b ] {\displaystyle [a,b]} is used. For the proof, suppose f ( x ¯ ) ≠ 0 {\displaystyle f({\bar {x}})\neq 0} for some x ¯ ∈ ( a , b ) {\displaystyle {\bar {x}}\in (a,b)} . Since f {\displaystyle f} is continuous, it is nonzero with the same sign for some c , d {\displaystyle c,d} such that a < c < x ¯ < d < b {\displaystyle a<c<{\bar {x}}<d<b} . Without loss of generality, assume f ( x ¯ ) > 0 {\displaystyle f({\bar {x}})>0} . Then take an h {\displaystyle h} that is positive on ( c , d ) {\displaystyle (c,d)} and zero elsewhere, for example h ( x ) = exp ⁡ ( − 1 ( x − c ) ( d − x ) ) {\displaystyle h(x)=\exp \left(-{\tfrac {1}{(x-c)(d-x)}}\right)} for x ∈ ( c , d ) and h ( x ) = 0 otherwise. Note this bump function satisfies the properties in the statement, including C ∞ {\displaystyle C^{\infty }} . Since ∫ a b f ( x ) h ( x ) d x > 0 , {\displaystyle \int _{a}^{b}f(x)h(x)\,\mathrm {d} x>0,} we reach a contradiction. [ 3 ] The version for two given functions states: if continuous functions f and g on ( a , b ) satisfy ∫ a b ( f ( x ) h ( x ) + g ( x ) h ′ ( x ) ) d x = 0 {\displaystyle \int _{a}^{b}{\bigl (}f(x)h(x)+g(x)h'(x){\bigr )}\,\mathrm {d} x=0} for all compactly supported smooth functions h , then g is differentiable and g ′ = f . The special case for g = 0 is just the basic version. Here is the special case for f = 0 (often sufficient): if a continuous function g on ( a , b ) satisfies ∫ a b g ( x ) h ′ ( x ) d x = 0 {\displaystyle \int _{a}^{b}g(x)h'(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h , then g is constant. If, in addition, continuous differentiability of g is assumed, then integration by parts reduces both statements to the basic version; this case is attributed to Joseph-Louis Lagrange , while the proof of differentiability of g is due to Paul du Bois-Reymond . The given functions ( f , g ) may be discontinuous, provided that they are locally integrable (on the given interval). In this case, Lebesgue integration is meant, the conclusions hold almost everywhere (thus, in all continuity points), and differentiability of g is interpreted as local absolute continuity (rather than continuous differentiability). [ 8 ] [ 9 ] Sometimes the given functions are assumed to be piecewise continuous , in which case Riemann integration suffices, and the conclusions are stated everywhere except the finite set of discontinuity points. [ 5 ] This necessary condition is also sufficient, since the integrand becomes ( u 0 h ) ′ + ( u 1 h ′ ) ′ + ⋯ + ( u n − 1 h ( n − 1 ) ) ′ . {\displaystyle (u_{0}h)'+(u_{1}h')'+\dots +(u_{n-1}h^{(n-1)})'.} The case n = 1 is just the version for two given functions, since f = f 0 = u 0 ′ {\displaystyle f=f_{0}=u'_{0}} and f 1 = u 0 , {\displaystyle f_{1}=u_{0},} thus, f 0 − f 1 ′ = 0.
{\displaystyle f_{0}-f'_{1}=0.} In contrast, the case n = 2 does not lead to the relation f 0 − f 1 ′ + f 2 ″ = 0 , {\displaystyle f_{0}-f'_{1}+f''_{2}=0,} since the function f 2 = u 1 {\displaystyle f_{2}=u_{1}} need not be differentiable twice. The sufficient condition f 0 − f 1 ′ + f 2 ″ = 0 {\displaystyle f_{0}-f'_{1}+f''_{2}=0} is not necessary. Rather, the necessary and sufficient condition may be written as f 0 − ( f 1 − f 2 ′ ) ′ = 0 {\displaystyle f_{0}-(f_{1}-f'_{2})'=0} for n = 2, f 0 − ( f 1 − ( f 2 − f 3 ′ ) ′ ) ′ = 0 {\displaystyle f_{0}-(f_{1}-(f_{2}-f'_{3})')'=0} for n = 3, and so on; in general, the brackets cannot be opened because of non-differentiability. Generalization to vector-valued functions ( a , b ) → R d {\displaystyle (a,b)\to \mathbb {R} ^{d}} is straightforward; one applies the results for scalar functions to each coordinate separately, [ 11 ] or treats the vector-valued case from the beginning. [ 12 ] The multivariable version states: if a continuous function f on an open set Ω ⊂ R d {\displaystyle \Omega \subset \mathbb {R} ^{d}} satisfies ∫ Ω f ( x ) h ( x ) d x = 0 {\displaystyle \int _{\Omega }f(x)h(x)\,\mathrm {d} x=0} for all compactly supported smooth functions h on Ω, then f is identically zero. Similarly to the basic version, one may consider a continuous function f on the closure of Ω, assuming that h vanishes on the boundary of Ω (rather than compactly supported). [ 13 ] A version for discontinuous multivariable functions holds as well. This lemma is used to prove that extrema of the functional J [ y ] = ∫ x 0 x 1 L ( x , y ( x ) , y ′ ( x ) ) d x {\displaystyle J[y]=\int _{x_{0}}^{x_{1}}L{\bigl (}x,y(x),y'(x){\bigr )}\,\mathrm {d} x} are weak solutions y : [ x 0 , x 1 ] → V {\displaystyle y:[x_{0},x_{1}]\to V} (for an appropriate vector space V {\displaystyle V} ) of the Euler–Lagrange equation ∂ L ∂ y = d d x ∂ L ∂ y ′ . {\displaystyle {\frac {\partial L}{\partial y}}={\frac {\mathrm {d} }{\mathrm {d} x}}{\frac {\partial L}{\partial y'}}.} The Euler–Lagrange equation plays a prominent role in classical mechanics and differential geometry .
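As an added illustration of the Euler–Lagrange equation just stated (the Lagrangian below is an arbitrary example, not from the article), the equation can be derived symbolically with sympy:

```python
# Symbolic Euler-Lagrange derivation for the assumed example Lagrangian
# L = y'^2/2 - y^2/2, whose extremals should satisfy y'' + y = 0.
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

L = sp.Derivative(y(x), x)**2 / 2 - y(x)**2 / 2
print(euler_equations(L, [y(x)], [x]))
# [Eq(-y(x) - Derivative(y(x), (x, 2)), 0)]  i.e.  y'' + y = 0
```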
https://en.wikipedia.org/wiki/Fundamental_lemma_of_the_calculus_of_variations
In computer vision , the fundamental matrix F {\displaystyle \mathbf {F} } is a 3×3 matrix which relates corresponding points in stereo images . In epipolar geometry , with homogeneous image coordinates , x and x ′, of corresponding points in a stereo image pair, Fx describes a line (an epipolar line ) on which the corresponding point x ′ on the other image must lie. That means that the relation x ′ ⊤ F x = 0 {\displaystyle \mathbf {x} '^{\top }\mathbf {F} \mathbf {x} =0} holds for all pairs of corresponding points. Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. Its seven parameters represent the only geometric information about cameras that can be obtained through point correspondences alone. The term "fundamental matrix" was coined by QT Luong in his influential PhD thesis. It is sometimes also referred to as the " bifocal tensor ". As a tensor it is a two-point tensor in that it is a bilinear form relating points in distinct coordinate systems. The above relation which defines the fundamental matrix was published in 1992 by both Olivier Faugeras and Richard Hartley . Although H. Christopher Longuet-Higgins ' essential matrix satisfies a similar relationship, the essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in more general and fundamental terms of projective geometry. This is captured mathematically by the relationship between a fundamental matrix F {\displaystyle \mathbf {F} } and its corresponding essential matrix E {\displaystyle \mathbf {E} } , which is E = K ′ ⊤ F K , {\displaystyle \mathbf {E} =\mathbf {K} '^{\top }\mathbf {F} \mathbf {K} ,} K {\displaystyle \mathbf {K} } and K ′ {\displaystyle \mathbf {K} '} being the intrinsic calibration matrices of the two images involved. The fundamental matrix is a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images. Given the projection of a scene point into one of the images the corresponding point in the other image is constrained to a line, helping the search, and allowing for the detection of wrong correspondences. The relation between corresponding points , which the fundamental matrix represents, is referred to as epipolar constraint , matching constraint , discrete matching constraint , or incidence relation . The fundamental matrix can be determined by a set of point correspondences . Additionally, these corresponding image points may be triangulated to world points with the help of camera matrices derived directly from this fundamental matrix. The scene composed of these world points is within a projective transformation of the true scene. [ 1 ] Say that the image point correspondence x ↔ x ′ {\displaystyle \mathbf {x} \leftrightarrow \mathbf {x'} } derives from the world point X {\displaystyle {\textbf {X}}} under the camera matrices ( P , P ′ ) {\displaystyle \left({\textbf {P}},{\textbf {P}}'\right)} as x = P X , x ′ = P ′ X . {\displaystyle \mathbf {x} ={\textbf {P}}{\textbf {X}},\quad \mathbf {x} '={\textbf {P}}'{\textbf {X}}.} Say we transform space by a general homography matrix H 4 × 4 {\displaystyle {\textbf {H}}_{4\times 4}} such that X 0 = H X {\displaystyle {\textbf {X}}_{0}={\textbf {H}}{\textbf {X}}} . The cameras then transform as P 0 = P H − 1 , P 0 ′ = P ′ H − 1 , {\displaystyle {\textbf {P}}_{0}={\textbf {P}}{\textbf {H}}^{-1},\quad {\textbf {P}}'_{0}={\textbf {P}}'{\textbf {H}}^{-1},} so that P 0 X 0 = P X = x {\displaystyle {\textbf {P}}_{0}{\textbf {X}}_{0}={\textbf {P}}{\textbf {X}}=\mathbf {x} } and likewise for the second camera, leaving the observed image points unchanged. The fundamental matrix can also be derived using the coplanarity condition. [ 2 ] The fundamental matrix expresses the epipolar geometry in stereo images. The epipolar geometry in images taken with perspective cameras appears as straight lines. However, in satellite images , the image is formed during the sensor movement along its orbit ( pushbroom sensor ).
Therefore, there are multiple projection centers for one image scene, and the epipolar line becomes an epipolar curve. Nevertheless, in special conditions such as small image tiles, satellite images can be rectified using the fundamental matrix. The fundamental matrix is of rank 2. Its kernel defines the epipole .
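The estimation of F from point correspondences mentioned above can be sketched in a few lines. The following is a minimal, unnormalized eight-point implementation on synthetic data; the camera intrinsics, pose, and point cloud are all invented for the test, and production code would add Hartley normalization and outlier handling.

```python
# Minimal eight-point sketch: solve x'^T F x = 0 linearly via SVD, then
# enforce the rank-2 constraint. Normalization is omitted for brevity.
import numpy as np

def eight_point(x1, x2):
    """x1, x2: (N, 2) arrays of matching pixel coordinates, N >= 8."""
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                                  # project onto rank-2 matrices
    return U @ np.diag(S) @ Vt

# Synthetic test: cameras P = K[I|0] and P' = K[R|t] viewing random points.
rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([1.0, 0.2, 0.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t[:, None]])

X = np.hstack([rng.uniform(-1, 1, (20, 2)), rng.uniform(4, 8, (20, 1)),
               np.ones((20, 1))])               # homogeneous world points
proj = lambda P, Xh: (Xh @ P.T)[:, :2] / (Xh @ P.T)[:, 2:]
x1, x2 = proj(P1, X), proj(P2, X)

F = eight_point(x1, x2)
xh = lambda x: np.hstack([x, np.ones((len(x), 1))])
residual = np.abs(np.sum(xh(x2) * (xh(x1) @ F.T), axis=1)).max()
print(f"max |x'^T F x| = {residual:.2e}")       # ~0 up to numerical error
```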
https://en.wikipedia.org/wiki/Fundamental_matrix_(computer_vision)
In mathematics, a fundamental matrix of a system of n homogeneous linear ordinary differential equations x ˙ ( t ) = A ( t ) x ( t ) {\displaystyle {\dot {\mathbf {x} }}(t)=A(t)\mathbf {x} (t)} is a matrix-valued function Ψ ( t ) {\displaystyle \Psi (t)} whose columns are linearly independent solutions of the system. [ 1 ] Then every solution to the system can be written as x ( t ) = Ψ ( t ) c {\displaystyle \mathbf {x} (t)=\Psi (t)\mathbf {c} } , for some constant vector c {\displaystyle \mathbf {c} } (written as a column vector of height n ). A matrix-valued function Ψ {\displaystyle \Psi } is a fundamental matrix of x ˙ ( t ) = A ( t ) x ( t ) {\displaystyle {\dot {\mathbf {x} }}(t)=A(t)\mathbf {x} (t)} if and only if Ψ ˙ ( t ) = A ( t ) Ψ ( t ) {\displaystyle {\dot {\Psi }}(t)=A(t)\Psi (t)} and Ψ {\displaystyle \Psi } is a non-singular matrix for all t {\displaystyle t} . [ 2 ] The fundamental matrix is used to express the state-transition matrix , an essential component in the solution of a system of linear ordinary differential equations. [ 3 ] This article about matrices is a stub . You can help Wikipedia by expanding it .
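For a constant coefficient matrix A, a fundamental matrix can be written down explicitly as the matrix exponential Ψ(t) = e^{At}, since its columns solve the system and Ψ(0) = I is non-singular. A brief sketch (the example system is an arbitrary choice):

```python
# Fundamental matrix of x' = A x for constant A, via the matrix exponential.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # example system x' = A x

def Psi(t):
    return expm(A * t)

# Check Psi'(t) = A Psi(t) at t = 0.7 with a centered finite difference.
t, h = 0.7, 1e-6
dPsi = (Psi(t + h) - Psi(t - h)) / (2 * h)
print(np.allclose(dPsi, A @ Psi(t), atol=1e-6))   # True

# General solution x(t) = Psi(t) c; choose c so that x(0) = [1, 0].
c = np.linalg.solve(Psi(0.0), np.array([1.0, 0.0]))
print(Psi(1.0) @ c)                               # the solution at t = 1
```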
https://en.wikipedia.org/wiki/Fundamental_matrix_(linear_differential_equation)
The fundamental plane in a spherical coordinate system is a plane of reference that divides the sphere into two hemispheres . The geocentric latitude of a point is then the angle between the fundamental plane and the line joining the point to the centre of the sphere. [ 1 ] For a geographic coordinate system of the Earth, the fundamental plane is the Equator . Astronomical coordinate systems have varying fundamental planes: [ 2 ] the horizontal system uses the observer's horizon , the equatorial system uses the celestial equator , the ecliptic system uses the ecliptic , and the galactic system uses the galactic plane . This elementary geometry -related article is a stub . You can help Wikipedia by expanding it .
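The definition of geocentric latitude above translates directly into code. In this small illustration (the coordinates are invented), the fundamental plane is taken to be z = 0:

```python
# Geocentric latitude: angle between the fundamental plane (z = 0) and the
# line from the centre of the sphere to the point (x, y, z).
import math

def geocentric_latitude(x, y, z):
    return math.degrees(math.atan2(z, math.hypot(x, y)))

print(geocentric_latitude(1.0, 0.0, 1.0))  # 45.0 degrees above the plane
```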
https://en.wikipedia.org/wiki/Fundamental_plane_(spherical_coordinates)
The fundamental resolution equation is used in chromatography to help relate adjustable chromatographic parameters to resolution: R s = ( N 4 ) ( α − 1 α ) ( k 2 ′ 1 + k 2 ′ ) {\displaystyle R_{s}=\left({\frac {\sqrt {N}}{4}}\right)\left({\frac {\alpha -1}{\alpha }}\right)\left({\frac {k'_{2}}{1+k'_{2}}}\right)} where N {\displaystyle N} = number of theoretical plates and α {\displaystyle \alpha } = selectivity term = k 2 ′ k 1 ′ {\displaystyle {\frac {k'_{2}}{k'_{1}}}} . The N 4 {\displaystyle {\frac {\sqrt {N}}{4}}} term is the column factor, the α − 1 α {\displaystyle {\frac {\alpha -1}{\alpha }}} term is the thermodynamic factor, and the k 2 ′ 1 + k 2 ′ {\displaystyle {\frac {k'_{2}}{1+k'_{2}}}} term is the retention factor . The three factors are not completely independent, but can be treated as such. To increase the resolution of two peaks on a chromatogram , one of the three terms of the equation needs to be modified. The fundamental resolution equation is derived as follows: for two closely spaced peaks, ω 1 = ω 2 {\displaystyle \omega _{1}=\omega _{2}} , and σ 1 = σ 2 {\displaystyle \sigma _{1}=\sigma _{2}} , so R s = t r 2 − t r 1 ω 2 = t r 2 − t r 1 4 σ 2 {\displaystyle R_{s}={\frac {t_{r2}-t_{r1}}{\omega _{2}}}={\frac {t_{r2}-t_{r1}}{4\sigma _{2}}}} where t r 1 {\displaystyle t_{r1}} and t r 2 {\displaystyle t_{r2}} are the retention times of two separate peaks. Since N = ( t r 2 σ 2 ) 2 {\displaystyle N=\left({\frac {t_{r2}}{\sigma _{2}}}\right)^{2}} , then σ = t r 2 N {\displaystyle \sigma ={\frac {t_{r2}}{\sqrt {N}}}} . Using substitution, R S = N ( t r 2 − t r 1 4 t r 2 ) = ( N 4 ) ( 1 − t r 1 t r 2 ) {\displaystyle R_{S}={\sqrt {N}}\left({\frac {t_{r2}-t_{r1}}{4t_{r2}}}\right)=\left({\frac {\sqrt {N}}{4}}\right)\left(1-{\frac {t_{r1}}{t_{r2}}}\right)} . Now using the following equations and solving for t r 1 {\displaystyle t_{r1}} and t r 2 {\displaystyle t_{r2}} : k 1 ′ = t r 1 − t 0 t 0 ; t r 1 = t 0 ( k 1 ′ + 1 ) {\displaystyle k'_{1}={\frac {t_{r1}-t_{0}}{t_{0}}};\quad t_{r1}=t_{0}(k'_{1}+1)} k 2 ′ = t r 2 − t 0 t 0 ; t r 2 = t 0 ( k 2 ′ + 1 ) {\displaystyle k'_{2}={\frac {t_{r2}-t_{0}}{t_{0}}};\quad t_{r2}=t_{0}(k'_{2}+1)} Substituting again gives R s = ( N 4 ) ( 1 − k 1 ′ + 1 k 2 ′ + 1 ) = ( N 4 ) ( k 2 ′ − k 1 ′ 1 + k 2 ′ ) {\displaystyle R_{s}=\left({\frac {\sqrt {N}}{4}}\right)\left(1-{\frac {k'_{1}+1}{k'_{2}+1}}\right)=\left({\frac {\sqrt {N}}{4}}\right)\left({\frac {k'_{2}-k'_{1}}{1+k'_{2}}}\right)} Finally, substituting α = k 2 ′ / k 1 ′ {\displaystyle \alpha =k'_{2}/k'_{1}} once more yields the fundamental resolution equation: R s = ( N 4 ) ( α − 1 α ) ( k 2 ′ 1 + k 2 ′ ) {\displaystyle R_{s}=\left({\frac {\sqrt {N}}{4}}\right)\left({\frac {\alpha -1}{\alpha }}\right)\left({\frac {k'_{2}}{1+k'_{2}}}\right)}
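The equation transcribes directly into a function; the plate count, selectivity, and retention factor below are illustrative values only:

```python
# Fundamental resolution equation: column, thermodynamic and retention factors.
import math

def resolution(N, alpha, k2):
    return (math.sqrt(N) / 4) * ((alpha - 1) / alpha) * (k2 / (1 + k2))

# e.g. 10,000 plates, selectivity 1.05, retention factor 2:
print(f"Rs = {resolution(10_000, 1.05, 2.0):.2f}")   # ~0.79
# Doubling N only improves Rs by sqrt(2), so increasing alpha or k'2 is
# often the more effective way to gain resolution.
```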
https://en.wikipedia.org/wiki/Fundamental_resolution_equation
The fundamental series is a set of spectral lines caused by transitions between d and f orbitals in atoms . Originally the series was discovered in the infrared by Fowler and independently by Arno Bergmann . [ 1 ] This resulted in the name Bergmann series being used for such a set of lines in a spectrum. However, the name was changed, as Bergmann also discovered other series of lines, and other discoverers established other such series. They became known as the fundamental series. [ 2 ] Bergmann observed lithium at 5347 cm⁻¹ , sodium at 5416 cm⁻¹ , and potassium at 6592 cm⁻¹ . [ 2 ] Bergmann observed that the lines in the series in the caesium spectrum were double. His discovery was announced in Contributions to the Knowledge of the Infra-Red Emission Spectra of the Alkalies , Jena 1907. [ 3 ] Carl Runge called this series the "new series". He predicted that the lines of potassium and rubidium would be in pairs. [ 3 ] He expressed the frequencies of the series lines by a formula and predicted a connection of the series limit to the other known series. In 1909 W. M. Hicks produced approximate formulas for the various series, noticed that this series had a simpler formula than the others, and thus called it the "fundamental series", using the letter F. [ 1 ] [ 4 ] Its formula resembled the hydrogen spectrum calculations more closely because of a smaller quantum defect . There is no physical basis to call this series fundamental. [ 5 ] The fundamental series has been described as badly named. [ 6 ] It is the last spectroscopic series to have a special designation. [ 6 ] The next series, involving transitions between F and G subshells, is known as the FG series. [ 6 ] Frequencies of the lines in the series are given by this formula: ν = R [ 3 + d ] 2 − R [ m + f ] 2 , with m = 4 , 5 , 6 , . . . , {\displaystyle \nu ={\frac {R}{\left[3+d\right]^{2}}}-{\frac {R}{\left[m+f\right]^{2}}}{\text{, with }}m=4,5,6,...,} where R is the Rydberg constant , T B S = R [ 3 + d ] 2 {\displaystyle T_{BS}={\frac {R}{\left[3+d\right]^{2}}}} is the series limit, represented by 3D , and R [ m + f ] 2 {\displaystyle {\frac {R}{\left[m+f\right]^{2}}}} is represented by mF . A shortened formula is then given by ν = 3 D − m F {\displaystyle \nu =3D-mF} with values of m being integers from 4 upwards. [ 7 ] The two numbers separated by the "−" are called terms, representing the energy levels of an atom. The limit of the fundamental series is the same as the 3D level. [ 5 ] The terms can have different designations: mF for single-line systems, mΦ for doublets, and mf for triplets. [ 8 ] Lines in the fundamental series are split into compound doublets, due to the D and F subshells having different spin possibilities. The splitting of the D subshell is very small and that of the F subshell even less so, so the fine structure in the fundamental series is harder to resolve than that in the sharp or diffuse series . [ 7 ] The quantum defect for lithium is 0. [ 5 ] The fundamental series lines for sodium, potassium, and rubidium all appear in the near infrared. In rubidium, the valence electron moves from the 4 d level, as the 3 d is contained in an inner shell. They were observed by R von Lamb. Relevant energy levels are 4 p 6 4 d j =5/2 19,355.282 cm⁻¹ and j =3/2 19,355.623 cm⁻¹ , and the first f levels at 4 p 6 4 f j =5/2 26,792.185 cm⁻¹ and j =7/2 26,792.169 cm⁻¹ . [ 11 ]
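The series formula can be evaluated numerically. In the sketch below the quantum defects d and f are set to zero, which the article notes is appropriate for lithium; for other alkalis they would be nonzero fitted values.

```python
# Fundamental series wavenumbers: nu = R/(3+d)^2 - R/(m+f)^2 for m = 4, 5, ...
# With d = f = 0 (lithium), the first line comes out near Bergmann's 5347 cm^-1.
R = 109737.3    # Rydberg constant, cm^-1

def fundamental_series(d=0.0, f=0.0, m_max=8):
    return [(m, R / (3 + d)**2 - R / (m + f)**2) for m in range(4, m_max + 1)]

for m, nu in fundamental_series():
    print(f"3D - {m}F : {nu:9.1f} cm^-1")
# The lines converge toward the series limit R/(3+d)^2 ~ 12193 cm^-1 as m grows.
```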
https://en.wikipedia.org/wiki/Fundamental_series
The fundamental theorem of Riemannian geometry states that on any Riemannian manifold (or pseudo-Riemannian manifold ) there is a unique affine connection that is torsion-free and metric-compatible, called the Levi-Civita connection or (pseudo-) Riemannian connection of the given metric. Because it is canonically defined by such properties, this connection is often automatically used when given a metric . The theorem can be stated as follows: Fundamental theorem of Riemannian Geometry. [ 1 ] Let ( M , g ) be a Riemannian manifold (or pseudo-Riemannian manifold ). Then there is a unique connection ∇ which satisfies the following conditions: first, for any vector fields X , Y , Z , X ( g ( Y , Z ) ) = g ( ∇ X Y , Z ) + g ( Y , ∇ X Z ) {\displaystyle X{\bigl (}g(Y,Z){\bigr )}=g(\nabla _{X}Y,Z)+g(Y,\nabla _{X}Z)} ; second, for any vector fields X , Y , ∇ X Y − ∇ Y X = [ X , Y ] {\displaystyle \nabla _{X}Y-\nabla _{Y}X=[X,Y]} , where [ X , Y ] denotes the Lie bracket of X and Y . The first condition is called metric-compatibility of ∇ . [ 2 ] It may be equivalently expressed by saying that, given any curve in M , the inner product of any two ∇ –parallel vector fields along the curve is constant. [ 3 ] It may also be equivalently phrased as saying that the metric tensor is preserved by parallel transport , which is to say that the metric is parallel when considering the natural extension of ∇ to act on (0,2)-tensor fields: ∇ g = 0 . [ 4 ] It is further equivalent to require that the connection is induced by a principal bundle connection on the orthonormal frame bundle . [ 5 ] The second condition is sometimes called symmetry of ∇ . [ 6 ] It expresses the condition that the torsion of ∇ is zero, and as such is also called torsion-freeness . [ 7 ] There are alternative characterizations. [ 8 ] An extension of the fundamental theorem states that given a pseudo-Riemannian manifold there is a unique connection preserving the metric tensor , with any given vector-valued 2-form as its torsion. The difference between an arbitrary connection (with torsion) and the corresponding Levi-Civita connection is the contorsion tensor . The fundamental theorem asserts both existence and uniqueness of a certain connection, which is called the Levi-Civita connection or (pseudo-) Riemannian connection . However, the existence result is extremely direct, as the connection in question may be explicitly defined by either the second Christoffel identity or Koszul formula as obtained in the proofs below. This explicit definition expresses the Levi-Civita connection in terms of the metric and its first derivatives. As such, if the metric is k -times continuously differentiable, then the Levi-Civita connection is ( k − 1) -times continuously differentiable. [ 9 ] The Levi-Civita connection can also be characterized in other ways, for instance via the Palatini variation of the Einstein–Hilbert action . The proof of the theorem can be presented in various ways. [ 10 ] Here the proof is first given in the language of coordinates and Christoffel symbols , and then in the coordinate-free language of covariant derivatives . Regardless of the presentation, the idea is to use the metric-compatibility and torsion-freeness conditions to obtain a direct formula for any connection that is both metric-compatible and torsion-free. This establishes the uniqueness claim in the fundamental theorem. To establish the existence claim, it must be directly checked that the formula obtained does define a connection as desired. Here the Einstein summation convention will be used, which is to say that an index repeated as both subscript and superscript is being summed over all values. Let m denote the dimension of M .
Recall that, relative to a local chart, a connection is given by m³ smooth functions { Γ i j l } , {\displaystyle \left\{\Gamma _{ij}^{l}\right\},} with ( ∇ X Y ) i = X j ∂ j Y i + X j Y k Γ j k i {\displaystyle (\nabla _{X}Y)^{i}=X^{j}\partial _{j}Y^{i}+X^{j}Y^{k}\Gamma _{jk}^{i}} for any vector fields X and Y . [ 11 ] Torsion-freeness of the connection refers to the condition that ∇ X Y − ∇ Y X = [ X , Y ] for arbitrary X and Y . Written in terms of local coordinates, this is equivalent to 0 = X j Y k ( Γ j k i − Γ k j i ) , {\displaystyle 0=X^{j}Y^{k}(\Gamma _{jk}^{i}-\Gamma _{kj}^{i}),} which by arbitrariness of X and Y is equivalent to the condition Γ i jk = Γ i kj . [ 12 ] Similarly, the condition of metric-compatibility is equivalent to the condition [ 13 ] ∂ k g i j = Γ k i l g l j + Γ k j l g i l . {\displaystyle \partial _{k}g_{ij}=\Gamma _{ki}^{l}g_{lj}+\Gamma _{kj}^{l}g_{il}.} In this way, it is seen that the conditions of torsion-freeness and metric-compatibility can be viewed as a linear system of equations for the connection, in which the coefficients and 'right-hand side' of the system are given by the metric and its first derivative. The fundamental theorem of Riemannian geometry can be viewed as saying that this linear system has a unique solution. This is seen via the following computation: [ 14 ] ∂ i g j l + ∂ j g i l − ∂ l g i j = ( Γ i j p g p l + Γ i l p g j p ) + ( Γ j i p g p l + Γ j l p g i p ) − ( Γ l i p g p j + Γ l j p g i p ) = 2 Γ i j p g p l {\displaystyle {\begin{aligned}\partial _{i}g_{jl}+\partial _{j}g_{il}-\partial _{l}g_{ij}&=\left(\Gamma _{ij}^{p}g_{pl}+\Gamma _{il}^{p}g_{jp}\right)+\left(\Gamma _{ji}^{p}g_{pl}+\Gamma _{jl}^{p}g_{ip}\right)-\left(\Gamma _{li}^{p}g_{pj}+\Gamma _{lj}^{p}g_{ip}\right)\\&=2\Gamma _{ij}^{p}g_{pl}\end{aligned}}} in which the metric-compatibility condition is used three times for the first equality and the torsion-free condition is used three times for the second equality. The resulting formula is sometimes known as the first Christoffel identity . [ 15 ] It can be contracted with the inverse of the metric, g kl , to find the second Christoffel identity : [ 16 ] Γ i j k = 1 2 g k l ( ∂ i g j l + ∂ j g i l − ∂ l g i j ) . {\displaystyle \Gamma _{ij}^{k}={\tfrac {1}{2}}g^{kl}\left(\partial _{i}g_{jl}+\partial _{j}g_{il}-\partial _{l}g_{ij}\right).} This proves the uniqueness of a torsion-free and metric-compatible connection; that is, any such connection must be given by the above formula. To prove the existence, it must be checked that the above formula defines a connection that is torsion-free and metric-compatible. This can be done directly. The above proof can also be expressed in terms of vector fields. [ 17 ] Torsion-freeness refers to the condition that ∇ X Y − ∇ Y X = [ X , Y ] , {\displaystyle \nabla _{X}Y-\nabla _{Y}X=[X,Y],} and metric-compatibility refers to the condition that X ( g ( Y , Z ) ) = g ( ∇ X Y , Z ) + g ( Y , ∇ X Z ) , {\displaystyle X\left(g(Y,Z)\right)=g(\nabla _{X}Y,Z)+g(Y,\nabla _{X}Z),} where X , Y , and Z are arbitrary vector fields. The computation previously done in local coordinates can be written as X ( g ( Y , Z ) ) + Y ( g ( X , Z ) ) − Z ( g ( X , Y ) ) = ( g ( ∇ X Y , Z ) + g ( Y , ∇ X Z ) ) + ( g ( ∇ Y X , Z ) + g ( X , ∇ Y Z ) ) − ( g ( ∇ Z X , Y ) + g ( X , ∇ Z Y ) ) = g ( ∇ X Y + ∇ Y X , Z ) + g ( ∇ X Z − ∇ Z X , Y ) + g ( ∇ Y Z − ∇ Z Y , X ) = g ( 2 ∇ X Y + [ Y , X ] , Z ) + g ( [ X , Z ] , Y ) + g ( [ Y , Z ] , X ) .
{\displaystyle {\begin{aligned}X\left(g(Y,Z)\right)&+Y\left(g(X,Z)\right)-Z\left(g(X,Y)\right)\\&={\Big (}g(\nabla _{X}Y,Z)+g(Y,\nabla _{X}Z){\Big )}+{\Big (}g(\nabla _{Y}X,Z)+g(X,\nabla _{Y}Z){\Big )}-{\Big (}g(\nabla _{Z}X,Y)+g(X,\nabla _{Z}Y){\Big )}\\&=g(\nabla _{X}Y+\nabla _{Y}X,Z)+g(\nabla _{X}Z-\nabla _{Z}X,Y)+g(\nabla _{Y}Z-\nabla _{Z}Y,X)\\&=g(2\nabla _{X}Y+[Y,X],Z)+g([X,Z],Y)+g([Y,Z],X).\end{aligned}}} This reduces immediately to the first Christoffel identity in the case that X , Y , and Z are coordinate vector fields. The equations displayed above can be rearranged to produce the Koszul formula or identity 2 g ( ∇ X Y , Z ) = X ( g ( Y , Z ) ) + Y ( g ( X , Z ) ) − Z ( g ( X , Y ) ) + g ( [ X , Y ] , Z ) − g ( [ X , Z ] , Y ) − g ( [ Y , Z ] , X ) . {\displaystyle 2g(\nabla _{X}Y,Z)=X\left(g(Y,Z)\right)+Y\left(g(X,Z)\right)-Z\left(g(X,Y)\right)+g([X,Y],Z)-g([X,Z],Y)-g([Y,Z],X).} This proves the uniqueness of a torsion-free and metric-compatible connection, since if g ( W , Z ) is equal to g ( U , Z ) for arbitrary Z , then W must equal U . This is a consequence of the non-degeneracy of the metric. In the local formulation above, this key property of the metric was implicitly used, in the same way, via the existence of g kl . Furthermore, by the same reasoning, the Koszul formula can be used to define a vector field ∇ X Y when given X and Y , and it is routine to check that this defines a connection that is torsion-free and metric-compatible. [ 18 ]
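The second Christoffel identity derived above can be checked symbolically for a concrete metric. The following sketch is an added illustration; the round metric on the unit 2-sphere is the assumed example.

```python
# Christoffel symbols from the second Christoffel identity,
#   Gamma^k_ij = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij}),
# for the round metric ds^2 = d(theta)^2 + sin(theta)^2 d(phi)^2.
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
ginv = g.inv()

def christoffel(k, i, j):
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[k, l] *
        (sp.diff(g[j, l], x[i]) + sp.diff(g[i, l], x[j]) - sp.diff(g[i, j], x[l]))
        for l in range(2)))

print(christoffel(0, 1, 1))   # -sin(theta)*cos(theta)   (Gamma^theta_{phi phi})
print(christoffel(1, 0, 1))   # cos(theta)/sin(theta)    (Gamma^phi_{theta phi})
```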
https://en.wikipedia.org/wiki/Fundamental_theorem_of_Riemannian_geometry
The fundamental theorem of algebra , also called d'Alembert's theorem [ 1 ] or the d'Alembert–Gauss theorem , [ 2 ] states that every non- constant single-variable polynomial with complex coefficients has at least one complex root . This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero. Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed . The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity , exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division . Despite its name, it is not fundamental for modern algebra ; it was named when algebra was synonymous with the theory of equations . Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger), [ 3 ] wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard , in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", where "incomplete" means that at least one coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation x 4 = 4 x − 3 , {\displaystyle x^{4}=4x-3,} although incomplete, has four solutions (counting multiplicities): 1 (twice), − 1 + i 2 , {\displaystyle -1+i{\sqrt {2}},} and − 1 − i 2 . {\displaystyle -1-i{\sqrt {2}}.} As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x 4 + a 4 (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x 4 − 4 x 3 + 2 x 2 + 4 x + 4 , but he got a letter from Euler in 1742 [ 4 ] in which it was shown that this polynomial is equal to ( x 2 − ( 2 + α ) x + 1 + 7 + α ) ( x 2 − ( 2 − α ) x + 1 + 7 − α ) , {\displaystyle \left(x^{2}-(2+\alpha )x+1+{\sqrt {7}}+\alpha \right)\left(x^{2}-(2-\alpha )x+1+{\sqrt {7}}-\alpha \right),} with α = 4 + 2 7 . {\displaystyle \alpha ={\sqrt {4+2{\sqrt {7}}}}.} Euler also pointed out that x 4 + a 4 = ( x 2 + a 2 x + a 2 ) ( x 2 − a 2 x + a 2 ) . {\displaystyle x^{4}+a^{4}=\left(x^{2}+a{\sqrt {2}}\,x+a^{2}\right)\left(x^{2}-a{\sqrt {2}}\,x+a^{2}\right).} A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it assumed implicitly a theorem (now known as Puiseux's theorem ), which would not be proved until more than a century later and using the fundamental theorem of algebra. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts assumed implicitly Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b . In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p ( z ). At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored.
Wood's proof had an algebraic gap. [ 5 ] The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981). [ 6 ] The first rigorous proof was published by Argand , an amateur mathematician , in 1806 (and revisited in 1813); [ 7 ] it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849. The first textbook containing a proof of the theorem was Cauchy 's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it. None of the proofs mentioned so far is constructive . It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981. Without using countable choice , it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice). [ 8 ] However, Fred Richman proved a reformulated version of the theorem that does work. [ 9 ] There are several equivalent formulations of the theorem. Two further statements are equivalent to the previous ones, although they do not involve any nonreal complex number: every univariate polynomial of positive degree with real coefficients has a divisor of degree one or two with real coefficients, and every univariate polynomial of positive degree with real coefficients can be factored into a product of real polynomials of degrees one and two. These statements can be proved from previous factorizations by remarking that, if r is a non-real root of a polynomial with real coefficients, its complex conjugate r ¯ {\displaystyle {\overline {r}}} is also a root, and ( x − r ) ( x − r ¯ ) {\displaystyle (x-r)(x-{\overline {r}})} is a polynomial of degree two with real coefficients (this is the complex conjugate root theorem ). Conversely, if one has a factor of degree two, the quadratic formula gives a root. All proofs below involve some mathematical analysis , or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra. [ 10 ] Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial p with complex coefficients, the polynomial q = p p ¯ {\displaystyle q=p{\overline {p}}} has only real coefficients, and, if z is a root of q , then either z or its conjugate is a root of p . Here, p ¯ {\displaystyle {\overline {p}}} is the polynomial obtained by replacing each coefficient of p with its complex conjugate ; the roots of p ¯ {\displaystyle {\overline {p}}} are exactly the complex conjugates of the roots of p . Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p ( z ) of degree n whose dominant coefficient is 1 behaves like z n when | z | is large enough. More precisely, there is some positive real number R such that 1 2 | z | n ≤ | p ( z ) | ≤ 2 | z | n {\displaystyle {\tfrac {1}{2}}|z|^{n}\leq |p(z)|\leq 2|z|^{n}} when | z | > R .
Even without using complex numbers, it is possible to show that a real-valued polynomial p ( x ) of degree n > 2 with p (0) ≠ 0 can always be divided by some quadratic polynomial with real coefficients. [ 11 ] In other words, for some real-valued a and b , the coefficients of the linear remainder on dividing p ( x ) by x 2 − ax − b simultaneously become zero: p ( x ) = ( x 2 − a x − b ) q ( x ) + x R p ( x ) ( a , b ) + S p ( x ) ( a , b ) , {\displaystyle p(x)=(x^{2}-ax-b)\,q(x)+x\,R_{p(x)}(a,b)+S_{p(x)}(a,b),} where q ( x ) is a polynomial of degree n − 2. The coefficients R p ( x ) ( a , b ) and S p ( x ) ( a , b ) are independent of x and completely defined by the coefficients of p ( x ). In terms of representation, R p ( x ) ( a , b ) and S p ( x ) ( a , b ) are bivariate polynomials in a and b . In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b , all the roots of both R p ( x ) ( a , b ) and S p ( x ) ( a , b ) in the variable a are real-valued and alternating each other (interlacing property). Utilizing a Sturm-like chain that contains R p ( x ) ( a , b ) and S p ( x ) ( a , b ) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has a sufficiently large negative value. As S p ( a , b = 0) = p (0) has no roots, interlacing of R p ( x ) ( a , b ) and S p ( x ) ( a , b ) in the variable a fails at b = 0. Topological arguments can be applied on the interlacing property to show that the locus of the roots of R p ( x ) ( a , b ) and S p ( x ) ( a , b ) must intersect for some real-valued a and b < 0. Find a closed disk D of radius r centered at the origin such that | p ( z )| > | p (0)| whenever | z | ≥ r . The minimum of | p ( z )| on D , which must exist since D is compact , is therefore achieved at some point z 0 in the interior of D , but not at any point of its boundary. The maximum modulus principle applied to 1/ p ( z ) implies that p ( z 0 ) = 0. In other words, z 0 is a zero of p ( z ). A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p ( z 0 ) ≠ 0, then, expanding p ( z ) in powers of z − z 0 , we can write p ( z ) = a + c k ( z − z 0 ) k + c k + 1 ( z − z 0 ) k + 1 + ⋯ + c n ( z − z 0 ) n . {\displaystyle p(z)=a+c_{k}(z-z_{0})^{k}+c_{k+1}(z-z_{0})^{k+1}+\cdots +c_{n}(z-z_{0})^{n}.} Here, the c j are simply the coefficients of the polynomial z → p ( z + z 0 ) after expansion, and k is the index of the first non-zero coefficient following the constant term. For z sufficiently close to z 0 this function has behavior asymptotically similar to the simpler polynomial q ( z ) = a + c k ( z − z 0 ) k {\displaystyle q(z)=a+c_{k}(z-z_{0})^{k}} . More precisely, | p ( z ) − q ( z ) | ≤ M | z − z 0 | k + 1 {\displaystyle |p(z)-q(z)|\leq M|z-z_{0}|^{k+1}} for some positive constant M in some neighborhood of z 0 . Therefore, if we define θ 0 = ( arg ⁡ ( a ) + π − arg ⁡ ( c k ) ) / k {\displaystyle \theta _{0}=(\arg(a)+\pi -\arg(c_{k}))/k} and let z = z 0 + r e i θ 0 {\displaystyle z=z_{0}+re^{i\theta _{0}}} tracing a circle of radius r > 0 around z 0 , then for any sufficiently small r (so that the bound M holds), we see that | p ( z ) | ≤ | a + c k ( z − z 0 ) k | + M r k + 1 = | a | − | c k | r k + M r k + 1 . {\displaystyle |p(z)|\leq |a+c_{k}(z-z_{0})^{k}|+Mr^{k+1}=|a|-|c_{k}|r^{k}+Mr^{k+1}.} When r is sufficiently close to 0 this upper bound for | p ( z )| is strictly smaller than | a |, contradicting the definition of z 0 . Geometrically, we have found an explicit direction θ 0 such that if one approaches z 0 from that direction one can obtain values p ( z ) smaller in absolute value than | p ( z 0 )|. Another analytic proof can be obtained along this line of thought observing that, since | p ( z )| > | p (0)| outside D , the minimum of | p ( z )| on the whole complex plane is achieved at z 0 .
If | p ( z 0 )| > 0, then 1/ p is a bounded holomorphic function in the entire complex plane since, for each complex number z , |1/ p ( z )| ≤ |1/ p ( z 0 )|. Applying Liouville's theorem , which states that a bounded entire function must be constant, this would imply that 1/ p is constant and therefore that p is constant. This gives a contradiction, and hence p ( z 0 ) = 0. [ 12 ] Yet another analytic proof uses the argument principle . Let R be a positive real number large enough so that every root of p ( z ) has absolute value smaller than R ; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R , consider the number 1 2 π i ∮ c ( r ) p ′ ( z ) p ( z ) d z , {\displaystyle {\frac {1}{2\pi i}}\oint _{c(r)}{\frac {p'(z)}{p(z)}}\,dz,} where c ( r ) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p ( z ) in the open ball centered at 0 with radius r , which, since r > R , is the total number of zeros of p ( z ). On the other hand, the integral of n / z along c ( r ) divided by 2π i is equal to n . But the difference between the two numbers is 1 2 π i ∮ c ( r ) ( p ′ ( z ) p ( z ) − n z ) d z = 1 2 π i ∮ c ( r ) z p ′ ( z ) − n p ( z ) z p ( z ) d z . {\displaystyle {\frac {1}{2\pi i}}\oint _{c(r)}\left({\frac {p'(z)}{p(z)}}-{\frac {n}{z}}\right)dz={\frac {1}{2\pi i}}\oint _{c(r)}{\frac {zp'(z)-np(z)}{zp(z)}}\,dz.} The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n . Another complex-analytic proof can be given by combining linear algebra with the Cauchy theorem . To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue . [ 13 ] The proof of the latter statement is by contradiction . Let A be a complex square matrix of size n > 0 and let I n be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function R ( z ) = ( z I n − A ) − 1 , {\displaystyle R(z)=(zI_{n}-A)^{-1},} which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R ( z ). Since, by assumption, A has no eigenvalues, the function R ( z ) is an entire function and Cauchy theorem implies that ∮ c ( r ) R ( z ) d z = 0. {\displaystyle \oint _{c(r)}R(z)\,dz=0.} On the other hand, R ( z ) expanded as a geometric series gives: R ( z ) = z − 1 ( I n − z − 1 A ) − 1 = z − 1 ∑ k = 0 ∞ z − k A k . {\displaystyle R(z)=z^{-1}(I_{n}-z^{-1}A)^{-1}=z^{-1}\sum _{k=0}^{\infty }z^{-k}A^{k}.} This formula is valid outside the closed disc of radius ‖ A ‖ {\displaystyle \|A\|} (the operator norm of A ). Let r > ‖ A ‖ . {\displaystyle r>\|A\|.} Then ∮ c ( r ) R ( z ) d z = ∑ k = 0 ∞ ∮ c ( r ) z − k − 1 A k d z = 2 π i I n {\displaystyle \oint _{c(r)}R(z)\,dz=\sum _{k=0}^{\infty }\oint _{c(r)}z^{-k-1}A^{k}\,dz=2\pi iI_{n}} (in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue. Finally, Rouché's theorem gives perhaps the shortest proof of the theorem. Suppose the minimum of | p ( z )| on the whole complex plane is achieved at z 0 ; it was seen in the proof which uses Liouville's theorem that such a number must exist. We can write p ( z ) as a polynomial in z − z 0 : there is some natural number k and there are some complex numbers c k , c k + 1 , ..., c n such that c k ≠ 0 and p ( z ) = p ( z 0 ) + c k ( z − z 0 ) k + c k + 1 ( z − z 0 ) k + 1 + ⋯ + c n ( z − z 0 ) n . {\displaystyle p(z)=p(z_{0})+c_{k}(z-z_{0})^{k}+c_{k+1}(z-z_{0})^{k+1}+\cdots +c_{n}(z-z_{0})^{n}.} If p ( z 0 ) is nonzero, it follows that if a is a k th root of − p ( z 0 )/ c k and if t is positive and sufficiently small, then | p ( z 0 + ta )| < | p ( z 0 )|, which is impossible, since | p ( z 0 )| is the minimum of | p | on D . For another topological proof by contradiction, suppose that the polynomial p ( z ) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle | z | = R into a closed loop, a curve P ( R ). We will consider what happens to the winding number of P ( R ) at the extremes when R is very large and when R = 0.
When R is a sufficiently large number, then the leading term z n of p ( z ) dominates all other terms combined; in other words, | z n | > | a n − 1 z n − 1 + ⋯ + a 1 z + a 0 | . {\displaystyle \left|z^{n}\right|>\left|a_{n-1}z^{n-1}+\cdots +a_{1}z+a_{0}\right|.} When z traverses the circle R e i θ {\displaystyle Re^{i\theta }} once counter-clockwise ( 0 ≤ θ ≤ 2 π ) , {\displaystyle (0\leq \theta \leq 2\pi ),} then z n = R n e i n θ {\displaystyle z^{n}=R^{n}e^{in\theta }} winds n times counter-clockwise ( 0 ≤ θ ≤ 2 π n ) {\displaystyle (0\leq \theta \leq 2\pi n)} around the origin (0,0), and P ( R ) likewise. At the other extreme, with | z | = 0, the curve P (0) is merely the single point p (0), which must be nonzero because p ( z ) is never zero. Thus p (0) must be distinct from the origin (0,0), which denotes 0 in the complex plane. The winding number of P (0) around the origin (0,0) is thus 0. Now changing R continuously will deform the loop continuously . At some R the winding number must change. But that can only happen if the curve P ( R ) includes the origin (0,0) for some R . But then for some z on that circle | z | = R we have p ( z ) = 0, contradicting our original assumption. Therefore, p ( z ) has at least one zero. These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases): every polynomial of odd degree with real coefficients has some real root, and every nonnegative real number has a square root. The second fact, together with the quadratic formula , implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field , then its extension C = R ( √ −1 ) is algebraically closed. As mentioned above, it suffices to check the statement "every non-constant polynomial p ( z ) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2 k divides the degree n of p ( z ). Let a be the coefficient of z n in p ( z ) and let F be a splitting field of p ( z ) over C ; in other words, the field F contains C and there are elements z 1 , z 2 , ..., z n in F such that p ( z ) = a ( z − z 1 ) ( z − z 2 ) ⋯ ( z − z n ) . {\displaystyle p(z)=a(z-z_{1})(z-z_{2})\cdots (z-z_{n}).} If k = 0, then n is odd, and therefore p ( z ) has a real root. Now, suppose that n = 2 k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2 k − 1 m ′ with m ′ odd. For a real number t , define: q t ( z ) = ∏ 1 ≤ i < j ≤ n ( z − z i − z j − t z i z j ) . {\displaystyle q_{t}(z)=\prod _{1\leq i<j\leq n}\left(z-z_{i}-z_{j}-tz_{i}z_{j}\right).} Then the coefficients of q t ( z ) are symmetric polynomials in the z i with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials , that is, in − a 1 , a 2 , ..., (−1) n a n . So q t ( z ) has in fact real coefficients. Furthermore, the degree of q t ( z ) is n ( n − 1)/2 = 2 k −1 m ( n − 1), and m ( n − 1) is an odd number. So, using the induction hypothesis, q t has at least one complex root; in other words, z i + z j + tz i z j is complex for two distinct elements i and j from {1, ..., n }. Since there are more real numbers than pairs ( i , j ), one can find distinct real numbers t and s such that z i + z j + tz i z j and z i + z j + sz i z j are complex (for the same i and j ). So, both z i + z j and z i z j are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that z i and z j are complex numbers, since they are roots of the quadratic polynomial z 2 − ( z i + z j ) z + z i z j .
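Returning to the winding-number proof above, the argument can also be watched numerically. In the sketch below (the polynomial and the radii are arbitrary choices), the computed winding number of P(R) jumps from 0 to n = 3 as R grows, which can only happen because some intermediate circle passes through a zero:

```python
# Winding number of P(R), the image of |z| = R under p. For p(z) = z^3 - z + 2
# the roots have moduli ~1.15, ~1.15 and ~1.52, so the count jumps at those radii.
import numpy as np

p = np.poly1d([1, 0, -1, 2])                 # p(z) = z^3 - z + 2

def winding_number(R, samples=20000):
    theta = np.linspace(0.0, 2.0 * np.pi, samples)
    w = p(R * np.exp(1j * theta))            # the closed curve P(R)
    phase = np.unwrap(np.angle(w))           # continuous argument along the curve
    return round((phase[-1] - phase[0]) / (2 * np.pi))

for R in (0.5, 1.0, 1.3, 3.0):
    print(R, winding_number(R))              # 0, 0, 2, 3
```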
Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics). [ 14 ] For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f ( x ) of odd degree has a root (since ( x 2 + 1) k f ( x ) has a root, where k is chosen so that deg( f ) + 2 k ∈ I ). Another algebraic proof of the fundamental theorem can be given using Galois theory . It suffices to show that C has no proper finite field extension . [ 15 ] Let K / C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R ), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension , as every algebraic extension of a field of characteristic 0 is separable ). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G , so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory , there exists a subextension L of K / R such that Gal( K / L ) = H . As [ L : R ] = [ G : H ] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R , thus [ K : R ] and [ K : C ] are powers of 2. Assuming by way of contradiction that [ K : C ] > 1, we conclude that the 2-group Gal( K / C ) contains a subgroup of index 2, so there exists a subextension M of K / C with [ M : C ] = 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [ K : C ] = 1, and therefore K = C , which completes the proof. There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p ( z ) without zeros implies the existence of a flat Riemannian metric over the sphere S 2 . This leads to a contradiction since the sphere is not flat. A Riemannian surface ( M , g ) is said to be flat if its Gaussian curvature , which we denote by K g , is identically zero. Now, the Gauss–Bonnet theorem , when applied to the sphere S 2 , claims that ∫ S 2 K g d A = 4 π , {\displaystyle \int _{S^{2}}K_{g}\,dA=4\pi ,} which proves that the sphere is not flat. Let us now assume that n > 0 and p ( z ) ≠ 0 {\displaystyle p(z)\neq 0} for each complex number z . Let us define p ∗ ( z ) = z n p ( 1 / z ¯ ) ¯ . {\displaystyle p^{*}(z)=z^{n}\,{\overline {p(1/{\bar {z}})}}.} Obviously, p* ( z ) ≠ 0 for all z in C . Consider the polynomial f ( z ) = p ( z ) p* ( z ). Then f ( z ) ≠ 0 for each z in C . Furthermore, f ( 1 / z ) = f ( z ¯ ) ¯ / z 2 n . {\displaystyle f(1/z)={\overline {f({\bar {z}})}}/z^{2n}.} We can use this functional equation to prove that g , given by g = 1 | f ( w ) | 2 / n | d w | 2 {\displaystyle g={\frac {1}{|f(w)|^{2/n}}}\,|dw|^{2}} for w in C , and g = 1 | f ( 1 / w ) | 2 / n | d ( 1 / w ) | 2 {\displaystyle g={\frac {1}{|f(1/w)|^{2/n}}}\,|d(1/w)|^{2}} for w ∈ S 2 \{0}, is a well defined Riemannian metric over the sphere S 2 (which we identify with the extended complex plane C ∪ {∞}). Now, a simple computation, using the fact that the real part of an analytic function is harmonic, shows that K g = 0 {\displaystyle K_{g}=0} , so the metric is flat, which gives the desired contradiction. Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed , it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers.
Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers: the field of complex numbers is the algebraic closure of the field of real numbers; every polynomial in one variable with real coefficients is the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x² + ax + b with a and b real and a² − 4b < 0; every rational function in one variable with real coefficients can be written as the sum of a polynomial function and rational functions of the form a/(x − b)^n and (ax + b)/(x² + cx + d)^n (this underlies partial fraction decomposition); and every algebraic extension of the real field is isomorphic either to the real field or to the complex field.

While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial z^n + a_{n−1}z^{n−1} + ⋯ + a_1 z + a_0 satisfy an inequality |ζ| ≤ R_∞, where R_∞ := 1 + max{|a_0|, ..., |a_{n−1}|}. As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R_∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients a := (a_0, a_1, ..., a_{n−1}), that is |ζ| ≤ R_p, where R_p is precisely the q-norm of the 2-vector (1, ‖a‖_p), q being the conjugate exponent of p, 1/p + 1/q = 1, for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by R_1 = max{1, Σ_{k<n} |a_k|}, by R_p = [1 + (Σ_{k<n} |a_k|^p)^{q/p}]^{1/q} for 1 < p < ∞, and in particular by R_2 = (Σ_{k≤n} |a_k|²)^{1/2} (where we define a_n to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial).

The case of a generic polynomial of degree n, p(z) = a_n z^n + a_{n−1}z^{n−1} + ⋯ + a_1 z + a_0, is of course reduced to the case of a monic, dividing all coefficients by a_n ≠ 0. Also, in case that 0 is not a root, i.e. a_0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on 1/ζ, that is, the roots of the reversed polynomial a_0 z^n + a_1 z^{n−1} + ⋯ + a_{n−1} z + 1. Finally, the distance |ζ − ζ_0| from the roots ζ to any point ζ_0 can be estimated from below and above, seeing ζ − ζ_0 as zeros of the polynomial P(z + ζ_0), whose coefficients are the Taylor expansion of P(z) at z = ζ_0.

Let ζ be a root of the polynomial z^n + a_{n−1}z^{n−1} + ⋯ + a_1 z + a_0; in order to prove the inequality |ζ| ≤ R_p we can assume, of course, |ζ| > 1. Writing the equation as −ζ^n = a_{n−1}ζ^{n−1} + ⋯ + a_1 ζ + a_0, and using Hölder's inequality, we find |ζ|^n ≤ ‖a‖_p ‖(ζ^{n−1}, ..., ζ, 1)‖_q. Now, if p = 1, this is |ζ|^n ≤ ‖a‖_1 max{|ζ|^{n−1}, ..., |ζ|, 1} = ‖a‖_1 |ζ|^{n−1}, thus |ζ| ≤ max{1, ‖a‖_1} = R_1. In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have ‖(ζ^{n−1}, ..., ζ, 1)‖_q = (|ζ|^{q(n−1)} + ⋯ + |ζ|^q + 1)^{1/q} = ((|ζ|^{qn} − 1)/(|ζ|^q − 1))^{1/q} ≤ |ζ|^n/(|ζ|^q − 1)^{1/q}, thus |ζ|^n ≤ ‖a‖_p |ζ|^n/(|ζ|^q − 1)^{1/q} and simplifying, |ζ|^q − 1 ≤ ‖a‖_p^q. Therefore |ζ| ≤ (1 + ‖a‖_p^q)^{1/q} = R_p holds, for all 1 ≤ p ≤ ∞.
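The bounds R_p are straightforward to evaluate. A short Python check (illustrative; the sample polynomial is an arbitrary choice) compares R_1, R_2 and R_∞ with the actual largest root modulus computed by numpy:

```python
import numpy as np

# Monic polynomial z^4 + 2z^3 - 3z^2 + z - 5: a = (a_0, ..., a_{n-1}).
a = np.array([-5.0, 1.0, -3.0, 2.0])

R1 = max(1.0, np.abs(a).sum())                 # p = 1, q = infinity
R2 = np.sqrt(1.0 + (np.abs(a) ** 2).sum())     # p = q = 2
Rinf = 1.0 + np.abs(a).max()                   # p = infinity, q = 1 (Cauchy bound)

roots = np.roots([1.0, 2.0, -3.0, 1.0, -5.0])  # leading coefficient first
print(max(abs(roots)), R1, R2, Rinf)           # every bound dominates max |zeta|
```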
https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra
In algebra, the fundamental theorem of algebraic K-theory describes the effect on the K-groups of passing from a ring R to R[t] or R[t, t^{−1}]. The theorem was first proved by Hyman Bass for K_0, K_1 and was later extended to higher K-groups by Daniel Quillen. Let G_i(R) be the algebraic K-theory of the category of finitely generated modules over a noetherian ring R; explicitly, we can take G_i(R) = π_i(B^+ f-gen-Mod_R), where B^+ = ΩBQ is given by Quillen's Q-construction. If R is a regular ring (i.e., has finite global dimension), then G_i(R) = K_i(R), the i-th K-group of R. [1] This is an immediate consequence of the resolution theorem, which compares the K-theories of two categories, one of which is included in the other. For a noetherian ring R, the fundamental theorem states: [2] first, that the inclusion R → R[t] induces an isomorphism G_i(R) ≅ G_i(R[t]) for all i; and second, that G_i(R[t, t^{−1}]) ≅ G_i(R) ⊕ G_{i−1}(R) for all i. The proof of the theorem uses the Q-construction. There is also a version of the theorem for the singular case (for K_i); this is the version proved in Grayson's paper. This algebra-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebraic_K-theory
In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. [3][4][5] For example, 1200 = 2^4 · 3 · 5^2 = (2 · 2 · 2 · 2) · 3 · (5 · 5) = 5 · 2 · 5 · 2 · 3 · 2 · 2 = … The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product. The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique (for example, 12 = 2 · 6 = 3 · 4). This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, 2 = 2 · 1 = 2 · 1 · 1 = …

The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers. [6] This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof.

The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements. If two numbers by multiplying one another make some number, and any prime number measure the product, it will also measure one of the original numbers. (In modern terminology: if a prime p divides the product ab, then p divides either a or b or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic. Any composite number is measured by some prime number. (In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent. Any number either is prime or is measured by some prime number. Proposition 32 is derived from proposition 31, and proves that the decomposition is possible. If a number be the least that is measured by prime numbers, it will not be measured by any other prime number except those originally measuring it. (In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. [7] Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case. While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step [8] and stated for the first time the fundamental theorem of arithmetic. [9]

Article 16 of Gauss's Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic. [1] Every positive integer n > 1 can be represented in exactly one way as a product of prime powers n = p_1^{n_1} p_2^{n_2} ⋯ p_k^{n_k}, where p_1 < p_2 < ... < p_k are primes and the n_i are positive integers.
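Existence of the canonical representation is effectively witnessed by trial division. A minimal Python sketch (illustrative only; the helper name canonical is hypothetical):

```python
def canonical(n):
    """Canonical representation of n > 1 as a dict {prime: exponent},
    found by trial division (existence); the theorem says this result
    is the only one possible (uniqueness)."""
    assert n > 1
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:              # whatever remains is itself prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(canonical(1200))     # {2: 4, 3: 1, 5: 2}: four 2s, one 3, two 5s
```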
This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0). This representation is called the canonical representation [10] of n, or the standard form [11][12] of n. For example, 999 = 3^3 · 37, 1000 = 2^3 · 5^3, and 1001 = 7 · 11 · 13. Factors p^0 = 1 may be inserted without changing the value of n (for example, 1000 = 2^3 × 3^0 × 5^3). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, as n = 2^{n_1} 3^{n_2} 5^{n_3} 7^{n_4} ⋯ = ∏_i p_i^{n_i}, where a finite number of the n_i are positive integers, and the others are zero. Allowing negative exponents provides a canonical form for positive rational numbers.

The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a = ∏_i p_i^{a_i} and b = ∏_i p_i^{b_i} themselves: a · b = ∏_i p_i^{a_i + b_i}, gcd(a, b) = ∏_i p_i^{min(a_i, b_i)}, and lcm(a, b) = ∏_i p_i^{max(a_i, b_i)} (a short computational sketch appears below). However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice. Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.

The proof uses Euclid's lemma (Elements VII, 30): if a prime divides the product of two integers, then it must divide at least one of these integers. It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = ab, and 1 < a ≤ b < n. By the induction hypothesis, a = p_1 p_2 ⋯ p_j and b = q_1 q_2 ⋯ q_k are products of primes. But then n = ab = p_1 p_2 ⋯ p_j q_1 q_2 ⋯ q_k is a product of primes.

Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p_1 p_2 ... p_j = q_1 q_2 ... q_k, where each p_i and q_i is prime. We see that p_1 divides q_1 q_2 ... q_k, so p_1 divides some q_i by Euclid's lemma. Without loss of generality, say p_1 divides q_1. Since p_1 and q_1 are both prime, it follows that p_1 = q_1. Returning to our factorizations of n, we may cancel these two factors to conclude that p_2 ... p_j = q_2 ... q_k. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n.

The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. [13] The proof that follows is inspired by Euclid's original version of the Euclidean algorithm. Assume that s is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that s, if it exists, must be a composite number greater than 1. Now, say s = p_1 p_2 ⋯ p_m = q_1 q_2 ⋯ q_n. Every p_i must be distinct from every q_j. Otherwise, if say p_i = q_j, then there would exist some positive integer t = s/p_i = s/q_j that is smaller than s and has two distinct prime factorizations. One may also suppose that p_1 < q_1, by exchanging the two factorizations, if needed.
Setting P = p_2 ⋯ p_m and Q = q_2 ⋯ q_n, one has s = p_1 P = q_1 Q. Also, since p_1 < q_1, one has Q < P. It then follows that s > s − p_1 Q = p_1(P − Q) = (q_1 − p_1)Q > 0. As the positive integers less than s have been supposed to have a unique prime factorization, p_1 must occur in the factorization of either q_1 − p_1 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and p_1 differs from every q_j. The former case is also impossible, as, if p_1 is a divisor of q_1 − p_1, it must be also a divisor of q_1, which is impossible as p_1 and q_1 are distinct primes. Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which would factor uniquely, or a composite that also factors uniquely into primes, or, in the case of the integer 1, not factor into any prime.

The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by Z[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that the composites have unique factorization as a product of primes (up to the order of the factors and multiplication by units). [14] Similarly, in 1844 while working on cubic reciprocity, Eisenstein introduced the ring Z[ω], where ω = (−1 + √−3)/2, ω³ = 1, is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units ±1, ±ω, ±ω² and that it has unique factorization.

However, it was also discovered that unique factorization does not always hold. An example is given by Z[√−5]. In this ring one has [15] 6 = 2 · 3 = (1 + √−5)(1 − √−5). Examples like this caused the notion of "prime" to be modified. In Z[√−5] it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in Z[√−5] (only divisible by itself or a unit) but not prime in Z[√−5] (if it divides a product it must divide one of the factors). The mention of Z[√−5] is required because 2 is prime and irreducible in Z.
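The failure of Euclid's lemma in Z[√−5] can be checked mechanically. The sketch below (illustrative; elements a + b√−5 are represented as integer pairs, and the helper names are hypothetical) verifies that 2 divides the product (1 + √−5)(1 − √−5) = 6 but divides neither factor:

```python
# Elements of Z[sqrt(-5)] as pairs (a, b) meaning a + b*sqrt(-5).

def mul(x, y):
    (a, b), (c, d) = x, y
    # (a + b*r)(c + d*r) with r^2 = -5
    return (a * c - 5 * b * d, a * d + b * c)

def divides(x, y):
    """Does x divide y in Z[sqrt(-5)]? Solve x*q = y for a pair q of
    integers, using Cramer's rule; the determinant is the norm of x."""
    (a, b), (c, d) = x, y
    n = a * a + 5 * b * b                    # norm of x; zero only for x = 0
    e_num, f_num = a * c + 5 * b * d, a * d - b * c
    return n != 0 and e_num % n == 0 and f_num % n == 0

two = (2, 0)
u, v = (1, 1), (1, -1)                       # 1 + sqrt(-5) and 1 - sqrt(-5)
print(mul(u, v))                             # (6, 0): their product is 6
print(divides(two, mul(u, v)))               # True: 2 divides 6
print(divides(two, u), divides(two, v))      # False False: 2 divides neither
```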
Using these definitions of prime and irreducible, it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers Z every irreducible is prime". This is also true in Z[i] and Z[ω], but not in Z[√−5]. The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains. In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.

There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness. Any commutative Möbius monoid satisfies a unique factorization theorem and thus possesses arithmetical properties similar to those of the multiplicative semigroup of positive integers. The fundamental theorem of arithmetic is, in fact, a special case of the unique factorization theorem in commutative Möbius monoids.

The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. The two monographs Gauss published on biquadratic reciprocity have consecutively numbered sections: the first contains §§ 1–23 and the second §§ 24–76. Footnotes referencing these are of the form "Gauss, BQ, § n". Footnotes referencing the Disquisitiones Arithmeticae are of the form "Gauss, DA, Art. n". These are in Gauss's Werke, Vol II, pp. 65–92 and 93–148; German translations are pp. 511–533 and 534–586 of the German edition of the Disquisitiones.
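The canonical-representation formulas for the product, GCD and LCM given earlier translate directly into code; this is the sketch promised above (illustrative; sympy.factorint returns the canonical representation as a dictionary of prime exponents):

```python
from sympy import factorint

def combine(a, b, pick):
    """Build a canonical representation from two others, choosing the
    exponent of each prime with `pick` (sum for products, min for GCD,
    max for LCM), then multiply the prime powers back together."""
    fa, fb = factorint(a), factorint(b)
    n = 1
    for p in sorted(set(fa) | set(fb)):
        n *= p ** pick(fa.get(p, 0), fb.get(p, 0))
    return n

a, b = 1000, 1200
print(combine(a, b, lambda x, y: x + y))   # 1200000 == a * b
print(combine(a, b, min))                  # 200     == gcd(a, b)
print(combine(a, b, max))                  # 6000    == lcm(a, b)
```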
https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes , or rate of change at every point on its domain) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). Roughly speaking, the two operations can be thought of as inverses of each other. The first part of the theorem, the first fundamental theorem of calculus , states that for a continuous function f , an antiderivative or indefinite integral F can be obtained as the integral of f over an interval with a variable upper bound. [ 1 ] Conversely, the second part of the theorem, the second fundamental theorem of calculus , states that the integral of a function f over a fixed interval is equal to the change of any antiderivative F between the ends of the interval. This greatly simplifies the calculation of a definite integral provided an antiderivative can be found by symbolic integration , thus avoiding numerical integration . The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals , an operation that we would now call integration. The origins of differentiation likewise predate the fundamental theorem of calculus by hundreds of years; for example, in the fourteenth century the notions of continuity of functions and motion were studied by the Oxford Calculators and other scholars. The historical relevance of the fundamental theorem of calculus is not the ability to calculate these operations, but the realization that the two seemingly distinct operations (calculation of geometric areas, and calculation of gradients) are actually closely related. Calculus as a unified theory of integration and differentiation started from the conjecture and the proof of the fundamental theorem of calculus. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, [ 2 ] was by James Gregory (1638–1675). [ 3 ] [ 4 ] Isaac Barrow (1630–1677) proved a more generalized version of the theorem, [ 5 ] while his student Isaac Newton (1642–1727) completed the development of the surrounding mathematical theory. Gottfried Leibniz (1646–1716) systematized the knowledge into a calculus for infinitesimal quantities and introduced the notation used today. The first fundamental theorem may be interpreted as follows. Given a continuous function y = f ( x ) {\displaystyle y=f(x)} whose graph is plotted as a curve, one defines a corresponding "area function" x ↦ A ( x ) {\displaystyle x\mapsto A(x)} such that A ( x ) is the area beneath the curve between 0 and x . The area A ( x ) may not be easily computable, but it is assumed to be well defined. The area under the curve between x and x + h could be computed by finding the area between 0 and x + h , then subtracting the area between 0 and x . In other words, the area of this "strip" would be A ( x + h ) − A ( x ) . There is another way to estimate the area of this same strip. As shown in the accompanying figure, h is multiplied by f ( x ) to find the area of a rectangle that is approximately the same size as this strip. 
So: A ( x + h ) − A ( x ) ≈ f ( x ) ⋅ h {\displaystyle A(x+h)-A(x)\approx f(x)\cdot h} Dividing by h on both sides, we get: A ( x + h ) − A ( x ) h ≈ f ( x ) {\displaystyle {\frac {A(x+h)-A(x)}{h}}\approx f(x)} This estimate becomes a perfect equality when h approaches 0: f ( x ) = lim h → 0 A ( x + h ) − A ( x ) h = def A ′ ( x ) . {\displaystyle f(x)=\lim _{h\to 0}{\frac {A(x+h)-A(x)}{h}}\ {\stackrel {\text{def}}{=}}\ A'(x).} That is, the derivative of the area function A ( x ) exists and is equal to the original function f ( x ) , so the area function is an antiderivative of the original function. Thus, the derivative of the integral of a function (the area) is the original function, so that derivative and integral are inverse operations which reverse each other. This is the essence of the Fundamental Theorem. Intuitively, the fundamental theorem states that integration and differentiation are inverse operations which reverse each other. The second fundamental theorem says that the sum of infinitesimal changes in a quantity (the integral of the derivative of the quantity) adds up to the net change in the quantity. To visualize this, imagine traveling in a car and wanting to know the distance traveled (the net change in position along the highway). You can see the velocity on the speedometer but cannot look out to see your location. Each second, you can find how far the car has traveled using distance = speed × time , that is, multiplying the current speed (in kilometers or miles per hour) by the time interval (1 second = 1 3600 {\displaystyle {\tfrac {1}{3600}}} hour). By summing up all these small steps, you can approximate the total distance traveled, in spite of not looking outside the car: distance traveled = ∑ ( velocity at each time ) × ( time interval ) = ∑ v t × Δ t . {\displaystyle {\text{distance traveled}}=\sum \left({\begin{array}{c}{\text{velocity at}}\\{\text{each time}}\end{array}}\right)\times \left({\begin{array}{c}{\text{time}}\\{\text{interval}}\end{array}}\right)=\sum v_{t}\times \Delta t.} As Δ t {\displaystyle \Delta t} becomes infinitesimally small, the summing up corresponds to integration . Thus, the integral of the velocity function (the derivative of position) computes how far the car has traveled (the net change in position). The first fundamental theorem says that the value of any function is the rate of change (the derivative) of its integral from a fixed starting point up to any chosen end point. Continuing the above example using a velocity as the function, you can integrate it from the starting time up to any given time to obtain a distance function whose derivative is that velocity. (To obtain your highway-marker position, you would need to add your starting position to this integral and to take into account whether your travel was in the direction of increasing or decreasing mile markers.) There are two parts to the theorem. The first part deals with the derivative of an antiderivative , while the second part deals with the relationship between antiderivatives and definite integrals . This part is sometimes referred to as the first fundamental theorem of calculus . [ 6 ] Let f be a continuous real-valued function defined on a closed interval [ a , b ] . Let F be the function defined, for all x in [ a , b ] , by F ( x ) = ∫ a x f ( t ) d t . 
{\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.} Then F is uniformly continuous on [ a , b ] and differentiable on the open interval ( a , b ) , and F ′ ( x ) = f ( x ) {\displaystyle F'(x)=f(x)} for all x in ( a , b ) so F is an antiderivative of f . The fundamental theorem is often employed to compute the definite integral of a function f {\displaystyle f} for which an antiderivative F {\displaystyle F} is known. Specifically, if f {\displaystyle f} is a real-valued continuous function on [ a , b ] {\displaystyle [a,b]} and F {\displaystyle F} is an antiderivative of f {\displaystyle f} in [ a , b ] {\displaystyle [a,b]} , then ∫ a b f ( t ) d t = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(t)\,dt=F(b)-F(a).} The corollary assumes continuity on the whole interval. This result is strengthened slightly in the following part of the theorem. This part is sometimes referred to as the second fundamental theorem of calculus [ 7 ] or the Newton–Leibniz theorem . Let f {\displaystyle f} be a real-valued function on a closed interval [ a , b ] {\displaystyle [a,b]} and F {\displaystyle F} a continuous function on [ a , b ] {\displaystyle [a,b]} which is an antiderivative of f {\displaystyle f} in ( a , b ) {\displaystyle (a,b)} : F ′ ( x ) = f ( x ) . {\displaystyle F'(x)=f(x).} If f {\displaystyle f} is Riemann integrable on [ a , b ] {\displaystyle [a,b]} then ∫ a b f ( x ) d x = F ( b ) − F ( a ) . {\displaystyle \int _{a}^{b}f(x)\,dx=F(b)-F(a).} The second part is somewhat stronger than the corollary because it does not assume that f {\displaystyle f} is continuous. When an antiderivative F {\displaystyle F} of f {\displaystyle f} exists, then there are infinitely many antiderivatives for f {\displaystyle f} , obtained by adding an arbitrary constant to F {\displaystyle F} . Also, by the first part of the theorem, antiderivatives of f {\displaystyle f} always exist when f {\displaystyle f} is continuous. For a given function f , define the function F ( x ) as F ( x ) = ∫ a x f ( t ) d t . {\displaystyle F(x)=\int _{a}^{x}f(t)\,dt.} For any two numbers x 1 and x 1 + Δ x in [ a , b ] , we have F ( x 1 + Δ x ) − F ( x 1 ) = ∫ a x 1 + Δ x f ( t ) d t − ∫ a x 1 f ( t ) d t = ∫ x 1 x 1 + Δ x f ( t ) d t , {\displaystyle {\begin{aligned}F(x_{1}+\Delta x)-F(x_{1})&=\int _{a}^{x_{1}+\Delta x}f(t)\,dt-\int _{a}^{x_{1}}f(t)\,dt\\&=\int _{x_{1}}^{x_{1}+\Delta x}f(t)\,dt,\end{aligned}}} the latter equality resulting from the basic properties of integrals and the additivity of areas. According to the mean value theorem for integration , there exists a real number c ∈ [ x 1 , x 1 + Δ x ] {\displaystyle c\in [x_{1},x_{1}+\Delta x]} such that ∫ x 1 x 1 + Δ x f ( t ) d t = f ( c ) ⋅ Δ x . {\displaystyle \int _{x_{1}}^{x_{1}+\Delta x}f(t)\,dt=f(c)\cdot \Delta x.} It follows that F ( x 1 + Δ x ) − F ( x 1 ) = f ( c ) ⋅ Δ x , {\displaystyle F(x_{1}+\Delta x)-F(x_{1})=f(c)\cdot \Delta x,} and thus that F ( x 1 + Δ x ) − F ( x 1 ) Δ x = f ( c ) . {\displaystyle {\frac {F(x_{1}+\Delta x)-F(x_{1})}{\Delta x}}=f(c).} Taking the limit as Δ x → 0 , {\displaystyle \Delta x\to 0,} and keeping in mind that c ∈ [ x 1 , x 1 + Δ x ] , {\displaystyle c\in [x_{1},x_{1}+\Delta x],} one gets lim Δ x → 0 F ( x 1 + Δ x ) − F ( x 1 ) Δ x = lim Δ x → 0 f ( c ) , {\displaystyle \lim _{\Delta x\to 0}{\frac {F(x_{1}+\Delta x)-F(x_{1})}{\Delta x}}=\lim _{\Delta x\to 0}f(c),} that is, F ′ ( x 1 ) = f ( x 1 ) , {\displaystyle F'(x_{1})=f(x_{1}),} according to the definition of the derivative, the continuity of f , and the squeeze theorem . 
[8] Suppose F is an antiderivative of f, with f continuous on [a, b]. Let G(x) = ∫_a^x f(t) dt. By the first part of the theorem, we know G is also an antiderivative of f. Since F′ − G′ = 0, the mean value theorem implies that F − G is a constant function, that is, there is a number c such that G(x) = F(x) + c for all x in [a, b]. Letting x = a, we have F(a) + c = G(a) = ∫_a^a f(t) dt = 0, which means c = −F(a). In other words, G(x) = F(x) − F(a), and so ∫_a^b f(x) dx = G(b) = F(b) − F(a).

This is a limit proof by Riemann sums. To begin, we recall the mean value theorem. Stated briefly, if F is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists some c in (a, b) such that F′(c)(b − a) = F(b) − F(a). Let f be (Riemann) integrable on the interval [a, b], and let f admit an antiderivative F on (a, b) such that F is continuous on [a, b]. Begin with the quantity F(b) − F(a). Let there be numbers x_0, ..., x_n such that a = x_0 < x_1 < x_2 < ⋯ < x_{n−1} < x_n = b. It follows that F(b) − F(a) = F(x_n) − F(x_0). Now, we add each F(x_i) along with its additive inverse, so that the resulting quantity is equal: F(b) − F(a) = F(x_n) + [−F(x_{n−1}) + F(x_{n−1})] + ⋯ + [−F(x_1) + F(x_1)] − F(x_0) = [F(x_n) − F(x_{n−1})] + [F(x_{n−1}) − F(x_{n−2})] + ⋯ + [F(x_2) − F(x_1)] + [F(x_1) − F(x_0)]. The above quantity can be written as the following sum: F(b) − F(a) = Σ_{i=1}^{n} [F(x_i) − F(x_{i−1})].   (1′) The function F is differentiable on the interval (a, b) and continuous on the closed interval [a, b]; therefore, it is also differentiable on each interval (x_{i−1}, x_i) and continuous on each interval [x_{i−1}, x_i]. According to the mean value theorem (above), for each i there exists a c_i in (x_{i−1}, x_i) such that F(x_i) − F(x_{i−1}) = F′(c_i)(x_i − x_{i−1}). Substituting the above into (1′), we get F(b) − F(a) = Σ_{i=1}^{n} [F′(c_i)(x_i − x_{i−1})].   (2′) The assumption implies F′(c_i) = f(c_i). Also, x_i − x_{i−1} can be expressed as Δx_i of partition i. We are describing the area of a rectangle, as the width times the height, and we are adding the areas together. Each rectangle, by virtue of the mean value theorem, describes an approximation of the curve section it is drawn over. Also, Δx_i need not be the same for all values of i, or in other words the widths of the rectangles can differ. What we have to do is approximate the curve with n rectangles.
Now, as the size of the partition intervals gets smaller and n increases, resulting in more partitions to cover the space, we get closer and closer to the actual area of the curve. By taking the limit of the expression as the norm of the partitions approaches zero, we arrive at the Riemann integral. We know that this limit exists because f was assumed to be integrable. That is, we take the limit as the largest of the partition intervals approaches zero in size, so that all other intervals are smaller and the number of partitions approaches infinity. So, we take the limit on both sides of (2′). This gives us lim_{‖Δx_i‖→0} [F(b) − F(a)] = lim_{‖Δx_i‖→0} Σ_{i=1}^{n} [f(c_i) Δx_i]. Neither F(b) nor F(a) is dependent on ‖Δx_i‖, so the limit on the left side remains F(b) − F(a): F(b) − F(a) = lim_{‖Δx_i‖→0} Σ_{i=1}^{n} [f(c_i) Δx_i]. The expression on the right side of the equation defines the integral over f from a to b. Therefore, we obtain F(b) − F(a) = ∫_a^b f(x) dx, which completes the proof.

As discussed above, a slightly weaker version of the second part follows from the first part. Similarly, it almost looks like the first part of the theorem follows directly from the second. That is, suppose G is an antiderivative of f. Then by the second theorem, G(x) − G(a) = ∫_a^x f(t) dt. Now, suppose F(x) = ∫_a^x f(t) dt = G(x) − G(a). Then F has the same derivative as G, and therefore F′ = f. This argument only works, however, if we already know that f has an antiderivative, and the only way we know that all continuous functions have antiderivatives is by the first part of the Fundamental Theorem. [9] For example, if f(x) = e^{−x²}, then f has an antiderivative, namely G(x) = ∫_0^x f(t) dt, and there is no simpler expression for this function. It is therefore important not to interpret the second part of the theorem as the definition of the integral. Indeed, there are many functions that are integrable but lack elementary antiderivatives, and discontinuous functions can be integrable but lack any antiderivatives at all. Conversely, many functions that have antiderivatives are not Riemann integrable (see Volterra's function).

Suppose the following is to be calculated: ∫_2^5 x² dx. Here, f(x) = x² and we can use F(x) = x³/3 as the antiderivative. Therefore: ∫_2^5 x² dx = F(5) − F(2) = 5³/3 − 2³/3 = 125/3 − 8/3 = 117/3 = 39. Suppose d/dx ∫_0^x t³ dt is to be calculated. Using the first part of the theorem with f(t) = t³ gives d/dx ∫_0^x t³ dt = f(x) = x³. This can also be checked using the second part of the theorem.
Specifically, F ( t ) = 1 4 t 4 {\textstyle F(t)={\frac {1}{4}}t^{4}} is an antiderivative of f ( t ) {\displaystyle f(t)} , so d d x ∫ 0 x t 3 d t = d d x F ( x ) − d d x F ( 0 ) = d d x x 4 4 = x 3 . {\displaystyle {\frac {d}{dx}}\int _{0}^{x}t^{3}\,dt={\frac {d}{dx}}F(x)-{\frac {d}{dx}}F(0)={\frac {d}{dx}}{\frac {x^{4}}{4}}=x^{3}.} Suppose f ( x ) = { sin ⁡ ( 1 x ) − 1 x cos ⁡ ( 1 x ) x ≠ 0 0 x = 0 {\displaystyle f(x)={\begin{cases}\sin \left({\frac {1}{x}}\right)-{\frac {1}{x}}\cos \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0\\\end{cases}}} Then f ( x ) {\displaystyle f(x)} is not continuous at zero. Moreover, this is not just a matter of how f {\displaystyle f} is defined at zero, since the limit as x → 0 {\displaystyle x\to 0} of f ( x ) {\displaystyle f(x)} does not exist. Therefore, the corollary cannot be used to compute ∫ 0 1 f ( x ) d x . {\displaystyle \int _{0}^{1}f(x)\,dx.} But consider the function F ( x ) = { x sin ⁡ ( 1 x ) x ≠ 0 0 x = 0. {\displaystyle F(x)={\begin{cases}x\sin \left({\frac {1}{x}}\right)&x\neq 0\\0&x=0.\\\end{cases}}} Notice that F ( x ) {\displaystyle F(x)} is continuous on [ 0 , 1 ] {\displaystyle [0,1]} (including at zero by the squeeze theorem ), and F ( x ) {\displaystyle F(x)} is differentiable on ( 0 , 1 ) {\displaystyle (0,1)} with F ′ ( x ) = f ( x ) . {\displaystyle F'(x)=f(x).} Therefore, part two of the theorem applies, and ∫ 0 1 f ( x ) d x = F ( 1 ) − F ( 0 ) = sin ⁡ ( 1 ) . {\displaystyle \int _{0}^{1}f(x)\,dx=F(1)-F(0)=\sin(1).} The theorem can be used to prove that ∫ a b f ( x ) d x = ∫ a c f ( x ) d x + ∫ c b f ( x ) d x . {\displaystyle \int _{a}^{b}f(x)dx=\int _{a}^{c}f(x)dx+\int _{c}^{b}f(x)dx.} Since, ∫ a b f ( x ) d x = F ( b ) − F ( a ) , ∫ a c f ( x ) d x = F ( c ) − F ( a ) , and ∫ c b f ( x ) d x = F ( b ) − F ( c ) , {\displaystyle {\begin{aligned}\int _{a}^{b}f(x)dx&=F(b)-F(a),\\\int _{a}^{c}f(x)dx&=F(c)-F(a),{\text{ and }}\\\int _{c}^{b}f(x)dx&=F(b)-F(c),\end{aligned}}} the result follows from, F ( b ) − F ( a ) = F ( c ) − F ( a ) + F ( b ) − F ( c ) . {\displaystyle F(b)-F(a)=F(c)-F(a)+F(b)-F(c).} The function f does not have to be continuous over the whole interval. Part I of the theorem then says: if f is any Lebesgue integrable function on [ a , b ] and x 0 is a number in [ a , b ] such that f is continuous at x 0 , then F ( x ) = ∫ a x f ( t ) d t {\displaystyle F(x)=\int _{a}^{x}f(t)\,dt} is differentiable for x = x 0 with F ′( x 0 ) = f ( x 0 ) . We can relax the conditions on f still further and suppose that it is merely locally integrable. In that case, we can conclude that the function F is differentiable almost everywhere and F ′( x ) = f ( x ) almost everywhere. On the real line this statement is equivalent to Lebesgue's differentiation theorem . These results remain true for the Henstock–Kurzweil integral , which allows a larger class of integrable functions. [ 10 ] In higher dimensions Lebesgue's differentiation theorem generalizes the Fundamental theorem of calculus by stating that for almost every x , the average value of a function f over a ball of radius r centered at x tends to f ( x ) as r tends to 0. Part II of the theorem is true for any Lebesgue integrable function f , which has an antiderivative F (not all integrable functions do, though). In other words, if a real function F on [ a , b ] admits a derivative f ( x ) at every point x of [ a , b ] and if this derivative f is Lebesgue integrable on [ a , b ] , then [ 11 ] F ( b ) − F ( a ) = ∫ a b f ( t ) d t . 
{\displaystyle F(b)-F(a)=\int _{a}^{b}f(t)\,dt.} This result may fail for continuous functions F that admit a derivative f ( x ) at almost every point x , as the example of the Cantor function shows. However, if F is absolutely continuous , it admits a derivative F′ ( x ) at almost every point x , and moreover F′ is integrable, with F ( b ) − F ( a ) equal to the integral of F′ on [ a , b ] . Conversely, if f is any integrable function, then F as given in the first formula will be absolutely continuous with F′ = f almost everywhere. The conditions of this theorem may again be relaxed by considering the integrals involved as Henstock–Kurzweil integrals . Specifically, if a continuous function F ( x ) admits a derivative f ( x ) at all but countably many points, then f ( x ) is Henstock–Kurzweil integrable and F ( b ) − F ( a ) is equal to the integral of f on [ a , b ] . The difference here is that the integrability of f does not need to be assumed. [ 12 ] The version of Taylor's theorem that expresses the error term as an integral can be seen as a generalization of the fundamental theorem. There is a version of the theorem for complex functions: suppose U is an open set in C and f : U → C is a function that has a holomorphic antiderivative F on U . Then for every curve γ : [ a , b ] → U , the curve integral can be computed as ∫ γ f ( z ) d z = F ( γ ( b ) ) − F ( γ ( a ) ) . {\displaystyle \int _{\gamma }f(z)\,dz=F(\gamma (b))-F(\gamma (a)).} The fundamental theorem can be generalized to curve and surface integrals in higher dimensions and on manifolds . One such generalization offered by the calculus of moving surfaces is the time evolution of integrals . The most familiar extensions of the fundamental theorem of calculus in higher dimensions are the divergence theorem and the gradient theorem . One of the most powerful generalizations in this direction is the generalized Stokes theorem (sometimes known as the fundamental theorem of multivariable calculus): [ 13 ] Let M be an oriented piecewise smooth manifold of dimension n and let ω {\displaystyle \omega } be a smooth compactly supported ( n − 1) -form on M . If ∂ M denotes the boundary of M given its induced orientation , then ∫ M d ω = ∫ ∂ M ω . {\displaystyle \int _{M}d\omega =\int _{\partial M}\omega .} Here d is the exterior derivative , which is defined using the manifold structure only. The theorem is often used in situations where M is an embedded oriented submanifold of some bigger manifold (e.g. R k ) on which the form ω {\displaystyle \omega } is defined. The fundamental theorem of calculus allows us to pose a definite integral as a first-order ordinary differential equation. ∫ a b f ( x ) d x {\displaystyle \int _{a}^{b}f(x)\,dx} can be posed as d y d x = f ( x ) , y ( a ) = 0 {\displaystyle {\frac {dy}{dx}}=f(x),\;\;y(a)=0} with y ( b ) {\displaystyle y(b)} as the value of the integral.
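Several of the computations above are easy to confirm numerically. The sketch below (illustrative; step sizes, tolerances and the integrand in the last part are arbitrary choices) checks the worked example ∫_2^5 x² dx = 39 with a Riemann sum, checks d/dx ∫_0^x t³ dt = x³ with a difference quotient, and evaluates a definite integral by posing it as the initial value problem just described, using scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1) Riemann-sum check of the second part: integral of x^2 over [2, 5].
n = 100_000
dx = (5 - 2) / n
riemann = sum((2 + (i + 0.5) * dx) ** 2 for i in range(n)) * dx  # midpoint rule
print(riemann)                      # ~39.0 = F(5) - F(2) with F(x) = x^3 / 3

# 2) Difference-quotient check of the first part at x = 1.3:
# the derivative of F(x) = integral of t^3 over [0, x] should be x^3.
def F(x, m=100_000):
    dt = x / m
    return sum(((i + 0.5) * dt) ** 3 for i in range(m)) * dt

x, h = 1.3, 1e-4
print((F(x + h) - F(x - h)) / (2 * h))   # ~2.197 = 1.3 ** 3

# 3) A definite integral posed as dy/dx = f(x), y(a) = 0; y(b) is the answer.
f = lambda t, y: np.exp(-t ** 2)
sol = solve_ivp(f, (0.0, 1.0), y0=[0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])                 # ~0.746824, the integral of exp(-x^2) on [0, 1]
```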
https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus
In abstract algebra, an abelian group (G, +) is called finitely generated if there exist finitely many elements x_1, …, x_s in G such that every x in G can be written in the form x = n_1 x_1 + n_2 x_2 + ⋯ + n_s x_s for some integers n_1, …, n_s. In this case, we say that the set {x_1, …, x_s} is a generating set of G or that x_1, …, x_s generate G. So, finitely generated abelian groups can be thought of as a generalization of cyclic groups.

Every finite abelian group is finitely generated, as are the groups Z^n and all finite direct sums of cyclic groups. The finitely generated abelian groups can be completely classified: by the fundamental theorem below, every finitely generated abelian group is a finite direct sum of copies of Z and of finite cyclic groups, and there are no other examples (up to isomorphism). In particular, the group (Q, +) of rational numbers is not finitely generated: [1] if x_1, …, x_n are rational numbers, pick a natural number k > 1 coprime to all the denominators; then 1/k cannot be generated by x_1, …, x_n. The group (Q*, ·) of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition (R, +) and of non-zero real numbers under multiplication (R*, ·) are also not finitely generated. [1][2]

The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations.

The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. That is, every finitely generated abelian group is isomorphic to a group of the form Z^n ⊕ Z_{q_1} ⊕ Z_{q_2} ⊕ ⋯ ⊕ Z_{q_t}, where n ≥ 0 is the rank, and the numbers q_1, ..., q_t are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q_1, ..., q_t are (up to rearranging the indices) uniquely determined by G; that is, there is one and only one way to represent G as such a decomposition.

The proof of this statement uses the basis theorem for finite abelian groups: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then, G/tG is a torsion-free abelian group and thus it is free abelian. Since G/tG is free, tG is a direct summand of G, which means there exists a subgroup F of G such that G = tG ⊕ F, where F ≅ G/tG. Then, F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. By the basis theorem for finite abelian groups, tG can be written as a direct sum of primary cyclic groups.

We can also write any finitely generated abelian group G as a direct sum of the form Z^n ⊕ Z_{k_1} ⊕ Z_{k_2} ⊕ ⋯ ⊕ Z_{k_u}, where k_1 divides k_2, which divides k_3, and so on up to k_u.
Again, the rank n and the invariant factors k 1 , ..., k u are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism. These statements are equivalent as a result of the Chinese remainder theorem , which implies that Z j k ≅ Z j ⊕ Z k {\displaystyle \mathbb {Z} _{jk}\cong \mathbb {Z} _{j}\oplus \mathbb {Z} _{k}} if and only if j and k are coprime . The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. [ citation needed ] The finitely presented case is solved by Smith normal form , and hence frequently credited to ( Smith 1861 ), [ 3 ] though the finitely generated case is sometimes instead credited to Poincaré in 1900; [ citation needed ] details follow. Group theorist László Fuchs states: [ 3 ] As far as the fundamental theorem on finite abelian groups is concerned, it is not clear how far back in time one needs to go to trace its origin. ... it took a long time to formulate and prove the fundamental theorem in its present form ... The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, [ citation needed ] using a group-theoretic proof, [ 4 ] though without stating it in group-theoretic terms; [ 5 ] a modern presentation of Kronecker's proof is given in ( Stillwell 2012 ), 5.2.2 Kronecker's Theorem, 176–177 . This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. [ 6 ] [ 7 ] Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882. [ 8 ] [ 9 ] The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in ( Smith 1861 ), [ 3 ] as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups. The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). [ citation needed ] This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of a dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part. [ 4 ] Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926. [ 4 ] Stated differently the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of G . The rank of G is defined as the rank of the torsion-free part of G ; this is just the number n in the above formulas. 
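As the history above indicates, both decompositions can be computed from an integer presentation matrix via Smith normal form. A sketch using sympy (illustrative; smith_normal_form is assumed to be available in sympy.matrices.normalforms, and the relation matrix is an arbitrary example):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# An abelian group presented by the relations 2x + 4y = 0 and 6x + 8y = 0
# on generators x, y: its presentation matrix over the integers.
M = Matrix([[2, 4], [6, 8]])
D = smith_normal_form(M, domain=ZZ)
print(D)   # Matrix([[2, 0], [0, 4]]): the group is Z_2 + Z_4

# The nonzero diagonal entries are the invariant factors k_1 | k_2 | ...;
# zero diagonal entries would contribute copies of Z (the rank n).
# Here Z_2 + Z_4 is also the primary decomposition, since 2 and 4
# happen to be prime powers already.
```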
A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: Q {\displaystyle \mathbb {Q} } is torsion-free but not free abelian. Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms , form an abelian category which is a Serre subcategory of the category of abelian groups . Note that not every abelian group of finite rank is finitely generated; the rank 1 group Q {\displaystyle \mathbb {Q} } is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of Z 2 {\displaystyle \mathbb {Z} _{2}} is another one.
https://en.wikipedia.org/wiki/Fundamental_theorem_of_finitely_generated_abelian_groups
In number theory , the fundamental theorem of ideal theory in number fields states that every nonzero proper ideal in the ring of integers of a number field admits unique factorization into a product of nonzero prime ideals . In other words, every ring of integers of a number field is a Dedekind domain . This number theory -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Fundamental_theorem_of_ideal_theory_in_number_fields
In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them. Consider the optimization problem max { c^T x : x ∈ P }, where P = { x ∈ R^n : Ax ≤ b } and c ≠ 0 (if c = 0, every feasible point is trivially optimal). If P is a bounded polyhedron (and thus a polytope) and x* is an optimal solution to the problem, then x* is either an extreme point (vertex) of P, or lies on a face F ⊂ P of optimal solutions.

Suppose, for the sake of contradiction, that x* ∈ int(P). Then there exists some ε > 0 such that the ball of radius ε centered at x* is contained in P, that is B_ε(x*) ⊂ P. Therefore, the point x* + (ε/2)(c/‖c‖) lies in P, and c^T (x* + (ε/2)(c/‖c‖)) = c^T x* + (ε/2)‖c‖ > c^T x*. Hence x* is not an optimal solution, a contradiction. Therefore, x* must live on the boundary of P. If x* is not a vertex itself, it must be the convex combination of vertices of P, say x_1, ..., x_t. Then x* = Σ_{i=1}^t λ_i x_i with λ_i ≥ 0 and Σ_{i=1}^t λ_i = 1. Observe that 0 = c^T x* − c^T (Σ_{i=1}^t λ_i x_i) = Σ_{i=1}^t λ_i (c^T x* − c^T x_i). Since x* is an optimal solution, all terms in the sum are nonnegative. Since the sum is equal to zero, we must have that each individual term is equal to zero. Hence, c^T x* = c^T x_i for each x_i, so every x_i is also optimal, and therefore all points on the face whose vertices are x_1, ..., x_t are optimal solutions.
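The theorem is the reason the simplex method can restrict its search to vertices. A quick check with scipy (illustrative; the objective and constraints are arbitrary choices, and linprog minimizes, so the objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize c^T x = 3x + 2y over the square 0 <= x, y <= 1 with x + y <= 1.5;
# linprog minimizes, so we pass -c.
c = np.array([3.0, 2.0])
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.5])
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)],
              method="highs")
print(res.x)        # (1.0, 0.5): a vertex of the feasible polygon
print(-res.fun)     # 4.0, the maximum, attained at that corner
```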
https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_programming
In mathematics, the fundamental theorem of topos theory states that the slice E/X of a topos E over any one of its objects X is itself a topos. Moreover, if there is a morphism f : A → B in E then there is a functor f* : E/B → E/A which preserves exponentials and the subobject classifier. For any morphism f in E there is an associated "pullback functor" f* := − ↦ (f × − → f), which is key in the proof of the theorem. For any other morphism g in E which shares the same codomain as f, their product f × g is the diagonal of their pullback square, and the morphism which goes from the domain of f × g to the domain of f is opposite to g in the pullback square, so it is the pullback of g along f, which can be denoted as f*g. Note that a topos E is isomorphic to the slice over its own terminal object, i.e. E ≅ E/1, so for any object A in E there is a morphism f : A → 1 and thereby a pullback functor f* : E → E/A, which is why any slice E/A is also a topos.

For a given slice E/B let X/B denote an object of it, where X is an object of the base category. Then B* is a functor which maps: − ↦ (B × −)/B. Now apply f* to B*. This yields f*B* : − ↦ f*((B × −)/B) ≅ (A × −)/A, so this is how the pullback functor f* maps objects of E/B to E/A. Furthermore, note that any element C of the base topos is isomorphic to (1 × C)/1 = 1*C; therefore, if f : A → 1 then f* : 1* → A* and f* : C ↦ A*C, so that f* is indeed a functor from the base topos E to its slice E/A.

Consider a pair of ground formulas ϕ and ψ whose extensions [_|ϕ] and [_|ψ] (where the underscore here denotes the null context) are objects of the base topos. Then ϕ implies ψ if and only if there is a monic from [_|ϕ] to [_|ψ]. If this is the case then, by the theorem, the formula ψ is true in the slice E/[_|ϕ], because the terminal object [_|ϕ]/[_|ϕ] of the slice factors through its extension [_|ψ].
In logical terms, this could be expressed as: ϕ entails ψ in E precisely when ψ holds in the slice E/[_|ϕ], so that slicing E by the extension of ϕ would correspond to assuming ϕ as a hypothesis. Then the theorem would say that making a logical assumption does not change the rules of topos logic.
https://en.wikipedia.org/wiki/Fundamental_theorem_of_topos_theory
There are two fundamental theorems of welfare economics. The first states that in economic equilibrium, a set of complete markets, with complete information, and in perfect competition, will be Pareto optimal (in the sense that no further exchange would make one person better off without making another worse off). The requirements for perfect competition are these: [1] there are no externalities, each actor has perfect information, and firms and consumers take prices as given (no economic actor or group of actors has market power). The theorem is sometimes seen as an analytical confirmation of Adam Smith's "invisible hand" principle, namely that competitive markets ensure an efficient allocation of resources. However, there is no guarantee that the Pareto optimal market outcome is equitable, as there are many possible Pareto efficient allocations of resources differing in their desirability (e.g. one person may own everything and everyone else nothing). [2]

The second theorem states that any Pareto optimum can be supported as a competitive equilibrium for some initial set of endowments. The implication is that any desired Pareto optimal outcome can be supported; Pareto efficiency can be achieved with a suitable redistribution of initial wealth. However, attempts to correct the distribution may introduce distortions, and so full optimality may not be attainable with redistribution. [3] The theorems can be visualized graphically for a simple pure exchange economy by means of the Edgeworth box diagram.

In a discussion of import tariffs Adam Smith wrote that: Every individual necessarily labours to render the annual revenue of the society as great as he can... He is in this, as in many other ways, led by an invisible hand to promote an end which was no part of his intention... By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it. [4] Note that Smith's ideas were not directed towards welfare economics specifically, as this field of economics had not been created at the time. However, his arguments have been credited towards the creation of the branch as well as the fundamental theories of welfare economics. [5]

Walras wrote that 'exchange under free competition is an operation by which all parties obtain the maximum satisfaction subject to buying and selling at a uniform price'. [6] Edgeworth took a step towards the first fundamental theorem in his 'Mathematical Psychics', looking at a pure exchange economy with no production. He included imperfect competition in his analysis. [7] His definition of equilibrium is almost the same as Pareto's later definition of optimality: it is a point such that... in whatever direction we take an infinitely small step, P and Π [the utilities of buyer and seller] do not increase together, but that, while one increases, the other decreases. [8] Instead of concluding that equilibrium was Pareto optimal, Edgeworth concluded that the equilibrium maximizes the sum of utilities of the parties, which is a special case of Pareto efficiency: It seems to follow on general dynamical principles applied to this special case that equilibrium is attained when the total pleasure-energy of the contractors is a maximum relative, or subject, to conditions... [9]

Pareto stated the first fundamental theorem in his Manuale (1906) and with more rigour in its French revision (Manuel, 1909). [10] He was the first to claim optimality under his own criterion or to support the claim by convincing arguments.
[ citation needed ] He defines equilibrium more abstractly than Edgeworth as a state which would maintain itself indefinitely in the absence of external pressures [ 11 ] and shows that in an exchange economy it is the point at which a common tangent to the parties' indifference curves passes through the endowment. [ 12 ] His definition of optimality is given in Chap. VI: We will say that the members of a collectivity enjoy a maximum of ophelimity [i.e. of utility] at a certain position when it is impossible to move a small step away such that the ophelimity enjoyed by each individual in the collectivity increases, or such that it diminishes. [He has previously defined an increase in individual ophelimity as a move onto a higher indifference curve.] That is to say that any small step is bound to increase the ophelimity of some individuals while diminishing that of others. [ 13 ] The following paragraph gives us a theorem: For phenomena of type I [i.e. perfect competition], when equilibrium takes place at a point of tangency of indifference curves, the members of the collectivity enjoy a maximum of ophelimity. He adds that 'a rigorous proof cannot be given without the help of mathematics' and refers to his Appendix. [ 14 ] Wicksell , referring to his definition of optimality, commented: With such a definition it is almost self-evident that this so-called maximum obtains under free competition, because if , after an exchange is effected, it were possible by means of a further series of direct or indirect exchanges to produce an additional satisfaction of needs for the participators, then to that extent such a continued exchange would doubtless have taken place, and the original position could not be one of final equilibrium. [ 15 ] Pareto didn't find it so straightforward. He gives a diagrammatic argument in his text, applying solely to exchange, [ 16 ] and a 32-page mathematical argument in the Appendix [ 17 ] which Samuelson found 'not easy to follow'. [ 18 ] Pareto was hampered by not having a concept of the production–possibility frontier , whose development was due partly to his collaborator Enrico Barone . [ 19 ] His own 'indifference curves for obstacles' seem to have been a false path. Shortly after stating the first fundamental theorem, Pareto asks a question about distribution: Consider a collectivist society which seeks to maximise the ophelimity of its members. The problem divides into two parts. Firstly we have a problem of distribution: how should the goods within a society be shared between its members? And secondly, how should production be organised so that, when goods are so distributed, the members of society obtain the maximum ophelimity? His answer is an informal precursor of the second theorem: Having distributed goods according to the answer to the first problem, the state should allow the members of the collectivity to operate a second distribution, or operate it itself, in either case making sure that it is performed in conformity with the workings of free competition. [ 20 ] Barone , an associate of Pareto, proved an optimality property of perfect competition, [ 21 ] namely that – assuming exogenous prices – it maximises the monetary value of the return from productive activity, this being the sum of the values of leisure, savings, and goods for consumption, all taken in the desired proportions. [ 22 ] He makes no argument that the prices chosen by the market are themselves optimal. His paper wasn't translated into English until 1935. 
It received an approving summary from Samuelson [ 23 ] but seems not to have influenced the development of the welfare theorems as they now stand.

In 1934 Lerner restated Edgeworth's condition for exchange that indifference curves should meet as tangents, presenting it as an optimality property. He stated a similar condition for production, namely that the production–possibility frontier (PPF, to which he gave the alternative name of 'productive indifference curve') should be tangent to an indifference curve for the community. He was one of the originators of the PPF, having used it in a paper on international trade in 1932. [ 24 ] He shows that the two arguments can be presented in the same terms, since the PPF plays the same role as the mirror-image indifference curve in an Edgeworth box. He also mentions that there is no need for the curves to be differentiable, since the same result obtains if they touch at pointed corners. His definition of optimality was equivalent to Pareto's: If... it is possible to move one individual into a preferred position without moving another individual into a worse position... we may say that the relative optimum is not reached... The optimality condition for production is equivalent to the pair of requirements that (i) price should equal marginal cost and (ii) output should be maximised subject to (i). Lerner thus reduces optimality to tangency for both production and exchange, but does not say why the implied point on the PPF should be the equilibrium condition for a free market. Perhaps he considered it already sufficiently well established. [ 25 ] Lerner ascribes to his LSE colleague Victor Edelberg the credit for suggesting the use of indifference curves. Samuelson surmised that Lerner obtained his results independently of Pareto's work. [ 26 ]

Hotelling put forward a new argument to show that 'sales at marginal costs are a condition of maximum general welfare' (under Pareto's definition). He accepted that this condition was satisfied by perfect competition, but argued in consequence that perfect competition could not be optimal, since some beneficial projects would be unable to recoup their fixed costs by charging at this rate (for example, in a natural monopoly). [ 27 ]

Lange's paper 'The Foundations of Welfare Economics' is the source of the now-traditional pairing of two theorems, one governing markets, the other distribution. He justified the Pareto definition of optimality for the first theorem by reference to Lionel Robbins's rejection of interpersonal utility comparisons, [ 28 ] and suggested various ways to reintroduce interpersonal comparisons for the second theorem, such as the adjudications of a democratically elected Congress. Lange believed that such a congress could act in a similar way to a capitalist: through setting price vectors, it could achieve any optimal production plan and thereby attain efficiency and social equality. [ 29 ] His reasoning is a mathematical translation (into Lagrange multipliers) of Lerner's graphical argument. The second theorem does not take its familiar form in his hands; rather he simply shows that the optimisation conditions for a genuine social utility function are similar to those for Pareto optimality.

Samuelson (crediting Abram Bergson for the substance of his ideas) brought Lange's second welfare theorem to approximately its modern form.
[ 30 ] He follows Lange in deriving a set of equations which are necessary for Pareto optimality, and then considers what additional constraints arise if the economy is required to satisfy a genuine social welfare function, finding a further set of equations from which it follows 'that all of the action necessary to achieve a given ethical desideratum may take the form of lump sum taxes or bounties'. [ 31 ]

Arrow's and Debreu's two papers [ 32 ] (written independently and published almost simultaneously) sought to improve on the rigour of Lange's first theorem. Their accounts refer to (short-run) production as well as exchange, expressing the conditions for both through linear functions. Equilibrium for production is expressed by the constraint that the value of a manufacturer's net output, i.e. the dot product of the production vector with the price vector, should be maximised over the manufacturer's production set. This is interpreted as profit maximisation. Equilibrium for exchange is interpreted as meaning that the individual's utility should be maximised over the positions obtainable from the endowment through exchange, these being the positions whose value is no greater than the value of his or her endowment, where the value of an allocation is its dot product with the price vector. Arrow motivated his paper by reference to the need to extend proofs to cover equilibria at the edge of the space, and Debreu by the possibility of indifference curves being non-differentiable. Modern texts follow their style of proof.

In their 1986 paper, "Externalities in Economies with Imperfect Information and Incomplete Markets", Bruce Greenwald and Joseph Stiglitz showed that the fundamental welfare theorems do not hold if there are incomplete markets or imperfect information. [ 33 ] The paper establishes that a competitive equilibrium of an economy with asymmetric information is generically not even constrained Pareto efficient. A government facing the same information constraints as the private individuals in the economy can nevertheless find Pareto-improving policy interventions. [ 34 ] Greenwald and Stiglitz noted several relevant situations, including how moral hazard may render a situation inefficient (e.g. an alcohol tax may be Pareto improving as it reduces automobile accidents). [ 35 ]

In principle, there are two commonly found versions of the fundamental theorems, one relating to an exchange economy in which endowments are exogenously given, and one relating to an economy in which production occurs. The production economy is more general and entails additional assumptions. The assumptions given here are those of a standard graduate microeconomics textbook. [ 36 ] The fundamental theorems do not generally ensure the existence or the uniqueness of equilibria. The second fundamental theorem has more demanding conditions. The following provides a non-exhaustive list of common failures of the assumptions underlying the fundamental theorems. Another instance in which the welfare theorems fail to hold is the canonical overlapping generations model (OLG). A further assumption that is implicit in the statement of the theorem is that the value of total endowments in the economy (some of which might be transformed into other goods via production) is finite. [ 37 ] In the OLG model, the finiteness of endowments fails, giving rise to similar problems as described by Hilbert's paradox of the Grand Hotel.
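To make the dot-product formulation concrete, here is a minimal Python sketch with invented two-good numbers: a producer picks the plan with the highest value p · y from a finite production set, and a consumer maximises a toy utility over the budget set. The production set, endowment, and utility function are our own illustrative choices, not data from the papers discussed above.

```python
import numpy as np

p = np.array([1.0, 1.0])                        # price vector

# Producer: profit p . y maximised over a (finite, toy) production set
production_set = [np.array(y) for y in [(-1.0, 1.5), (-2.0, 2.8), (0.0, 0.0)]]
y_star = max(production_set, key=lambda y: float(p @ y))
print(y_star, p @ y_star)                       # -> [-2.  2.8] 0.8

# Consumer: utility maximised over the budget set {x : p . x <= p . endowment}
endowment = np.array([3.0, 1.0])
wealth = float(p @ endowment)                   # 4.0
u = lambda x: x[0] * x[1]                       # toy utility function
grid = [np.array([a / 10, b / 10]) for a in range(41) for b in range(41)]
x_star = max((x for x in grid if p @ x <= wealth), key=u)
print(x_star, p @ x_star)                       # -> [2. 2.] 4.0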
Whether the assumptions underlying the fundamental theorems are an adequate description of markets is at least partially an empirical question and may differ case by case. The first fundamental theorem holds under general conditions. [ 38 ] A formal statement is as follows: If preferences are locally nonsatiated, and if $(\mathbf{X}^*, \mathbf{Y}^*, \mathbf{p})$ is a price equilibrium with transfers, then the allocation $(\mathbf{X}^*, \mathbf{Y}^*)$ is Pareto optimal. An equilibrium in this sense either relates to an exchange economy only or presupposes that firms are allocatively and productively efficient, which can be shown to follow from perfectly competitive factor and production markets. [ 38 ]

Given a set $G$ of types of goods, we work in the real vector space over $G$, $\mathbb{R}^G$, and use boldface for vector-valued variables. For instance, if $G = \{\text{butter}, \text{cookies}, \text{milk}\}$, then $\mathbb{R}^G$ would be a three-dimensional vector space and the vector $\langle 1, 2, 3 \rangle$ would represent the bundle of goods containing 1 unit of butter, 2 units of cookies and 3 units of milk. Suppose that consumer $i$ has wealth $w_i$ such that $\sum_i w_i = \mathbf{p} \cdot \mathbf{e} + \sum_j \mathbf{p} \cdot \mathbf{y}_j^*$, where $\mathbf{e}$ is the aggregate endowment of goods (i.e. the sum of all consumer and producer endowments) and $\mathbf{y}_j^*$ is the production of firm $j$.

Preference maximization (from the definition of price equilibrium with transfers) implies (using $>_i$ to denote the preference relation for consumer $i$):

$$\text{if } \mathbf{x}_i >_i \mathbf{x}_i^* \text{ then } \mathbf{p} \cdot \mathbf{x}_i > w_i.$$

In other words, if a bundle of goods is strictly preferred to $\mathbf{x}_i^*$, it must be unaffordable at price $\mathbf{p}$. Local nonsatiation additionally implies:

$$\text{if } \mathbf{x}_i \geq_i \mathbf{x}_i^* \text{ then } \mathbf{p} \cdot \mathbf{x}_i \geq w_i.$$

To see why, imagine that $\mathbf{x}_i \geq_i \mathbf{x}_i^*$ but $\mathbf{p} \cdot \mathbf{x}_i < w_i$. Then by local nonsatiation we could find $\mathbf{x}'_i$ arbitrarily close to $\mathbf{x}_i$ (and so still affordable) but which is strictly preferred to $\mathbf{x}_i^*$. But $\mathbf{x}_i^*$ is the result of preference maximization, so this is a contradiction.

An allocation is a pair $(\mathbf{X}, \mathbf{Y})$ where $\mathbf{X} \in \prod_{i \in I} \mathbb{R}^G$ and $\mathbf{Y} \in \prod_{j \in J} \mathbb{R}^G$; i.e., $\mathbf{X}$ is the 'matrix' (allowing potentially infinite rows/columns) whose $i$-th column is the bundle of goods allocated to consumer $i$, and $\mathbf{Y}$ is the 'matrix' whose $j$-th column is the production of firm $j$. We restrict our attention to feasible allocations, which are those allocations in which no consumer sells or producer consumes goods which they lack, i.e., for every good and every consumer, that consumer's initial endowment plus their net demand must be positive, and similarly for producers.
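Before the proof continues, a tiny sketch of this notation; the price vector and wealth level are our own example values:

```python
import numpy as np

G = ["butter", "cookies", "milk"]       # the set of goods from the text
x = np.array([1.0, 2.0, 3.0])           # the bundle <1, 2, 3> in R^G
p = np.array([4.0, 1.0, 2.0])           # an example price vector (our choice)
w = 13.0                                # this consumer's wealth (our choice)

value = float(p @ x)                    # p . x = 4 + 2 + 6 = 12
print(value, value <= w)                # 12.0 True: the bundle is affordable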
Now consider an allocation $(\mathbf{X}, \mathbf{Y})$ that Pareto dominates $(\mathbf{X}^*, \mathbf{Y}^*)$. This means that $\mathbf{x}_i \geq_i \mathbf{x}_i^*$ for all $i$ and $\mathbf{x}_i >_i \mathbf{x}_i^*$ for some $i$. By the above, we know $\mathbf{p} \cdot \mathbf{x}_i \geq w_i$ for all $i$ and $\mathbf{p} \cdot \mathbf{x}_i > w_i$ for some $i$. Summing, we find:

$$\sum_i \mathbf{p} \cdot \mathbf{x}_i > \sum_i w_i = \mathbf{p} \cdot \mathbf{e} + \sum_j \mathbf{p} \cdot \mathbf{y}_j^*.$$

Because $\mathbf{Y}^*$ is profit maximizing, we know $\sum_j \mathbf{p} \cdot \mathbf{y}_j^* \geq \sum_j \mathbf{p} \cdot \mathbf{y}_j$, so $\sum_i \mathbf{p} \cdot \mathbf{x}_i > \mathbf{p} \cdot \mathbf{e} + \sum_j \mathbf{p} \cdot \mathbf{y}_j$. But goods must be conserved, so a feasible allocation would have to satisfy $\sum_i \mathbf{x}_i \leq \mathbf{e} + \sum_j \mathbf{y}_j$, whose value at $\mathbf{p}$ cannot exceed the right-hand side above. Hence, $(\mathbf{X}, \mathbf{Y})$ is not feasible. Since all Pareto-dominating allocations are not feasible, $(\mathbf{X}^*, \mathbf{Y}^*)$ must itself be Pareto optimal. [ 38 ]

Note that while the fact that $\mathbf{Y}^*$ is profit maximizing is simply assumed in the statement of the theorem, the result is only useful/interesting to the extent that such a profit-maximizing allocation of production is possible. Fortunately, for any restriction of the production allocation $\mathbf{Y}^*$ and price to a closed subset on which the marginal price is bounded away from 0 (e.g., any reasonable choice of continuous functions to parameterize possible productions), such a maximum exists. This follows from the fact that the minimal marginal price and finite wealth bound the maximum feasible production from above (and 0 bounds it from below), and Tychonoff's theorem ensures that the product of these compact spaces is compact, guaranteeing the existence of a maximum of whatever continuous function we desire.

The second theorem formally states that, under the assumptions that every production set $Y_j$ is convex and every preference relation $\geq_i$ is convex and locally nonsatiated, any desired Pareto-efficient allocation can be supported as a price quasi-equilibrium with transfers. [ 38 ] Further assumptions are needed to prove this statement for price equilibria with transfers. The proof proceeds in two steps: first, we prove that any Pareto-efficient allocation can be supported as a price quasi-equilibrium with transfers; then, we give conditions under which a price quasi-equilibrium is also a price equilibrium.

Let us define a price quasi-equilibrium with transfers as an allocation $(x^*, y^*)$, a price vector $p$, and a vector of wealth levels $w$ (achieved by lump-sum transfers) with $\sum_i w_i = p \cdot \omega + \sum_j p \cdot y_j^*$ (where $\omega$ is the aggregate endowment of goods and $y_j^*$ is the production of firm $j$) such that (i) $y_j^*$ maximizes profits $p \cdot y_j$ over $Y_j$ for every firm $j$; (ii) for every consumer $i$, if $x_i >_i x_i^*$ then $p \cdot x_i \geq w_i$; and (iii) $\sum_i x_i^* = \omega + \sum_j y_j^*$. The only difference between this definition and the standard definition of a price equilibrium with transfers is in statement (ii): the inequality is weak here ($p \cdot x_i \geq w_i$), making it a price quasi-equilibrium. Later we will strengthen this to make a price equilibrium. [ 38 ]
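The accounting step of the first-theorem proof can be illustrated numerically; the prices, endowment, production plan, and bundles below are invented for the purpose and stand in for a hypothetical Pareto-dominating allocation:

```python
import numpy as np

p = np.array([1.0, 1.0])                         # equilibrium prices
e = np.array([4.0, 4.0])                         # aggregate endowment
y_star = np.array([-2.0, 2.8])                   # profit-maximising production
total_wealth = float(p @ e + p @ y_star)         # Sigma_i w_i = 8.8

# Hypothetical bundles of a dominating allocation, each with p.x_i >= w_i
# and strict inequality for someone, so total value exceeds total wealth:
x = [np.array([3.0, 4.0]), np.array([2.5, 3.0])]
total_value = float(sum(p @ xi for xi in x))
print(total_value, total_wealth)                 # 12.5 > 8.8

# Feasibility would need sum_i x_i <= e + sum_j y_j componentwise, but the
# value comparison above already rules that out at these prices:
print(bool((sum(x) <= e + y_star).all()))        # False: not feasible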
Define $V_i$ to be the set of all consumption bundles strictly preferred to $x_i^*$ by consumer $i$, and let $V$ be the sum of all $V_i$. $V_i$ is convex due to the convexity of the preference relation $\geq_i$, and $V$ is convex because every $V_i$ is convex. Similarly $Y + \{\omega\}$, the (Minkowski) sum of all production sets $Y_j$ plus the aggregate endowment, is convex because every $Y_j$ is convex. We also know that the intersection of $V$ and $Y + \{\omega\}$ must be empty, because if it were not, it would imply that there existed a bundle that is strictly preferred to $(x^*, y^*)$ by everyone and is also attainable. This is ruled out by the Pareto-optimality of $(x^*, y^*)$.

These two convex, non-intersecting sets allow us to apply the separating hyperplane theorem. This theorem states that there exists a price vector $p \neq 0$ and a number $r$ such that $p \cdot z \geq r$ for every $z \in V$ and $p \cdot z \leq r$ for every $z \in Y + \{\omega\}$. In other words, there exists a price vector that defines a hyperplane that perfectly separates the two convex sets.

Next we argue that if $x_i \geq_i x_i^*$ for all $i$, then $p \cdot \left(\sum_i x_i\right) \geq r$. This is due to local nonsatiation: there must be a bundle $x'_i$ arbitrarily close to $x_i$ that is strictly preferred to $x_i^*$ and hence part of $V_i$, so $p \cdot \left(\sum_i x'_i\right) \geq r$. Taking the limit as $x'_i \rightarrow x_i$ does not change the weak inequality, so $p \cdot \left(\sum_i x_i\right) \geq r$ as well; in other words, $\sum_i x_i$ is in the closure of $V$.

Using this relation we see that for $x_i^*$ itself $p \cdot \left(\sum_i x_i^*\right) \geq r$. We also know that $\sum_i x_i^* \in Y + \{\omega\}$, so $p \cdot \left(\sum_i x_i^*\right) \leq r$ as well. Combining these we find that $p \cdot \left(\sum_i x_i^*\right) = r$.

We can use this equation to show that $(x^*, y^*, p)$ fits the definition of a price quasi-equilibrium with transfers. Because $p \cdot \left(\sum_i x_i^*\right) = r$ and $\sum_i x_i^* = \omega + \sum_j y_j^*$, we know that for any firm $j$ and any $y_j \in Y_j$:

$$p \cdot \left(\omega + y_j + \sum_{k \neq j} y_k^*\right) \leq r = p \cdot \left(\omega + \sum_k y_k^*\right),$$

which implies $p \cdot y_j \leq p \cdot y_j^*$. Similarly, whenever $x_i \geq_i x_i^*$ we know:

$$p \cdot \left(x_i + \sum_{k \neq i} x_k^*\right) \geq r = p \cdot \left(\sum_k x_k^*\right),$$

which implies $p \cdot x_i \geq p \cdot x_i^*$. These two statements, along with the feasibility of the allocation at the Pareto optimum, satisfy the three conditions for a price quasi-equilibrium with transfers supported by wealth levels $w_i = p \cdot x_i^*$ for all $i$.
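A minimal numerical check of the separation claim in one concrete case of our own construction: with utility $u(x) = x_1 x_2$ and aggregate bundle $(2, 2)$, the price vector $p = (1, 1)$ with level $r = 4$ separates the strictly-preferred set from the attainable set (the preferred-set inequality is just the AM-GM inequality):

```python
import random

p, r = (1.0, 1.0), 4.0                    # candidate separating price and level
dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

random.seed(0)
ok = True
for _ in range(100_000):
    z = (random.uniform(0.01, 10.0), random.uniform(0.01, 10.0))
    if z[0] * z[1] > 4.0:                 # z strictly preferred to (2, 2): z in V
        ok &= dot(p, z) >= r              # AM-GM: z1 + z2 >= 2*sqrt(z1*z2) > 4
    f = (random.uniform(-10.0, 2.0), random.uniform(-10.0, 2.0))
    ok &= dot(p, f) <= r                  # f attainable: f <= (2, 2) componentwise
print(ok)                                 # True: p . z = r separates the two sets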
We now turn to conditions under which a price quasi-equilibrium is also a price equilibrium; in other words, conditions under which the statement "if $x_i >_i x_i^*$ then $p \cdot x_i \geq w_i$" implies "if $x_i >_i x_i^*$ then $p \cdot x_i > w_i$". For this to be true we now need to assume that the consumption set $X_i$ is convex and the preference relation $\geq_i$ is continuous. Then, if there exists a consumption vector $x'_i$ such that $x'_i \in X_i$ and $p \cdot x'_i < w_i$, a price quasi-equilibrium is a price equilibrium.

To see why, assume to the contrary that $x_i >_i x_i^*$ and $p \cdot x_i = w_i$ for some $x_i$, and that such an $x'_i$ exists. Then by the convexity of $X_i$ we have a bundle $x''_i = \alpha x_i + (1 - \alpha) x'_i \in X_i$ with $p \cdot x''_i < w_i$. By the continuity of $\geq_i$, for $\alpha$ close to 1 we have $\alpha x_i + (1 - \alpha) x'_i >_i x_i^*$. This is a contradiction, because this bundle is preferred to $x_i^*$ and costs less than $w_i$.

Hence, for price quasi-equilibria to be price equilibria it is sufficient that the consumption set be convex, the preference relation be continuous, and a "cheaper" consumption bundle $x'_i$ always exist. One way to ensure the existence of such a bundle is to require wealth levels $w_i$ to be strictly positive for all consumers $i$. [ 38 ]
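The convex-combination argument can be traced with toy numbers (all values below are our own choices, with a continuous Cobb-Douglas-style utility):

```python
p, w = (1.0, 1.0), 4.0
u = lambda z: z[0] * z[1]                 # a continuous toy utility
x_star  = (1.0, 2.0)                      # reference bundle, u = 2
x       = (2.0, 2.0)                      # strictly preferred, costs exactly w
x_cheap = (0.5, 0.5)                      # the "cheaper" bundle, cost 1 < w

alpha = 0.9                               # mix close to x, as in the argument
x_mix = tuple(alpha * a + (1 - alpha) * b for a, b in zip(x, x_cheap))
cost = p[0] * x_mix[0] + p[1] * x_mix[1]  # 3.7 < 4
print(u(x_mix) > u(x_star), cost < w)     # True True: preferred yet cheaper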
https://en.wikipedia.org/wiki/Fundamental_theorems_of_welfare_economics
In thermodynamics, the fundamental thermodynamic relation consists of four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like $G$ (Gibbs free energy) or $H$ (enthalpy). [ 1 ] The relation is generally expressed as a microscopic change in internal energy in terms of microscopic changes in entropy and volume for a closed system in thermal equilibrium in the following way:

$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$$

Here, $U$ is internal energy, $T$ is absolute temperature, $S$ is entropy, $P$ is pressure, and $V$ is volume. This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy $H$ as

$$\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}P,$$

in terms of the Helmholtz free energy $F$ as

$$\mathrm{d}F = -S\,\mathrm{d}T - P\,\mathrm{d}V,$$

and in terms of the Gibbs free energy $G$ as

$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P.$$

The first law of thermodynamics states that

$$\mathrm{d}U = \delta Q - \delta W,$$

where $\delta Q$ and $\delta W$ are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively. According to the second law of thermodynamics we have for a reversible process

$$\mathrm{d}S = \frac{\delta Q}{T}.$$

Hence $\delta Q = T\,\mathrm{d}S$. By substituting this into the first law, we have

$$\mathrm{d}U = T\,\mathrm{d}S - \delta W.$$

Letting $\delta W$ be reversible pressure-volume work done by the system on its surroundings, $\delta W = P\,\mathrm{d}V$, we have

$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V.$$

This equation has been derived in the case of reversible changes. However, since $U$, $S$, and $V$ are thermodynamic state functions that depend only on the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts $n_i$ of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to

$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}n_i.$$

The $\mu_i$ are the chemical potentials corresponding to particles of type $i$. If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to

$$\mathrm{d}U = T\,\mathrm{d}S + \sum_j X_j\,\mathrm{d}x_j + \sum_i \mu_i\,\mathrm{d}n_i.$$

Here the $X_j$ are the generalized forces corresponding to the external parameters $x_j$. (The negative sign used with pressure is unusual and arises because pressure represents a compressive stress that tends to decrease volume. Other generalized forces tend to increase their conjugate displacements.)
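As a sanity check on $\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$, the short Python sketch below uses the Sackur-Tetrode entropy of a monatomic ideal gas (a standard closed form, though not stated in this article) and confirms the identity by finite differences; the state values are arbitrary choices:

```python
import math

kB = 1.380649e-23      # Boltzmann constant (J/K)
h  = 6.62607015e-34    # Planck constant (J s)
m  = 6.6464731e-27     # mass of one helium atom (kg)
N  = 1e22              # particle number

def S(U, V):
    """Sackur-Tetrode entropy S(U, V) of a monatomic ideal gas."""
    return N * kB * (math.log((V / N) * (4 * math.pi * m * U
                                         / (3 * N * h ** 2)) ** 1.5) + 2.5)

def T(U, V, eps=1e-6):
    """Temperature from 1/T = (dS/dU)_V, by central difference."""
    return 2 * eps * U / (S(U * (1 + eps), V) - S(U * (1 - eps), V))

def P(U, V, eps=1e-6):
    """Pressure from P = T (dS/dV)_U, by central difference."""
    return T(U, V) * (S(U, V * (1 + eps)) - S(U, V * (1 - eps))) / (2 * eps * V)

U0, V0, dU, dV = 50.0, 1e-3, 1e-4, 1e-9         # a state and a small step
dS = S(U0 + dU, V0 + dV) - S(U0, V0)
print(dU, T(U0, V0) * dS - P(U0, V0) * dV)      # the two sides agree closely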
The fundamental thermodynamic relation and statistical mechanical principles can be derived from one another. The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system. However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy $E$ is

$$S = k_{\text{B}} \log\left[\Omega(E)\right],$$

where $\Omega(E)$ is the number of quantum states in a small interval between $E$ and $E + \delta E$. Here $\delta E$ is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of $\delta E$. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on $\delta E$. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size $\delta E$. Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have

$$\mathrm{d}S = \frac{\delta Q}{T}.$$

The relevant assumption from statistical mechanics is that all the $\Omega(E)$ states at a particular energy are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as

$$\frac{1}{k_{\text{B}} T} \equiv \beta \equiv \frac{\mathrm{d}\log\left[\Omega(E)\right]}{\mathrm{d}E}.$$

This definition can be derived from the microcanonical ensemble, which is a system with a constant number of particles and a constant volume that does not exchange energy with its environment. Suppose that the system has some external parameter, $x$, that can be changed. In general, the energy eigenstates of the system will depend on $x$. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in. The generalized force, $X$, corresponding to the external parameter $x$ is defined such that $X\,\mathrm{d}x$ is the work performed by the system if $x$ is increased by an amount $\mathrm{d}x$. E.g., if $x$ is the volume, then $X$ is the pressure.
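Before the derivation continues, the definitions $S = k_{\text{B}} \log \Omega(E)$ and $\beta = \mathrm{d}\log\Omega/\mathrm{d}E$ can be illustrated with a toy microcanonical model, a set of $N$ independent two-level units; the model and all numbers are our own illustration:

```python
from math import lgamma, log

N, eps_level = 10 ** 6, 1.0      # N two-level units, level spacing (k_B = 1)

def log_omega(n):
    """log of the number of microstates with n units excited (E = n * eps)."""
    return lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)

def entropy(n):                  # S = k_B log Omega(E), with k_B = 1
    return log_omega(n)

def beta(n):                     # beta = d log Omega / dE, central difference
    return (log_omega(n + 1) - log_omega(n - 1)) / (2 * eps_level)

n = 200_000                      # energy E = 200000 * eps
print(entropy(n), 1.0 / beta(n))
# Analytically beta = log((N - n)/n) = log(4), so T = 1/log(4) ~ 0.72 eps/k_B.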
The generalized force for a system known to be in energy eigenstate $E_r$ is given by

$$X = -\frac{\mathrm{d}E_r}{\mathrm{d}x}.$$

Since the system can be in any energy eigenstate within an interval of $\delta E$, we define the generalized force for the system as the expectation value of the above expression:

$$X = -\left\langle \frac{\mathrm{d}E_r}{\mathrm{d}x} \right\rangle$$

To evaluate the average, we partition the $\Omega(E)$ energy eigenstates by counting how many of them have a value for $\frac{\mathrm{d}E_r}{\mathrm{d}x}$ within a range between $Y$ and $Y + \delta Y$. Calling this number $\Omega_Y(E)$, we have

$$\Omega(E) = \sum_Y \Omega_Y(E).$$

The average defining the generalized force can now be written

$$X = -\frac{1}{\Omega(E)} \sum_Y Y\, \Omega_Y(E).$$

We can relate this to the derivative of the entropy with respect to $x$ at constant energy $E$ as follows. Suppose we change $x$ to $x + \mathrm{d}x$. Then $\Omega(E)$ will change because the energy eigenstates depend on $x$, causing energy eigenstates to move into or out of the range between $E$ and $E + \delta E$. Let's focus again on the energy eigenstates for which $\frac{\mathrm{d}E_r}{\mathrm{d}x}$ lies within the range between $Y$ and $Y + \delta Y$. Since these energy eigenstates increase in energy by $Y\,\mathrm{d}x$, all such energy eigenstates that are in the interval ranging from $E - Y\,\mathrm{d}x$ to $E$ move from below $E$ to above $E$. There are

$$N_Y(E) = \frac{\Omega_Y(E)}{\delta E} Y\,\mathrm{d}x$$

such energy eigenstates. If $Y\,\mathrm{d}x \leq \delta E$, all these energy eigenstates will move into the range between $E$ and $E + \delta E$ and contribute to an increase in $\Omega$. The number of energy eigenstates that move from below $E + \delta E$ to above $E + \delta E$ is, of course, given by $N_Y(E + \delta E)$. The difference

$$N_Y(E) - N_Y(E + \delta E)$$

is thus the net contribution to the increase in $\Omega$. Note that if $Y\,\mathrm{d}x$ is larger than $\delta E$ there will be energy eigenstates that move from below $E$ to above $E + \delta E$. They are counted in both $N_Y(E)$ and $N_Y(E + \delta E)$, therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to $E$ and summing over $Y$ yields the expression

$$\left(\frac{\partial \Omega}{\partial x}\right)_E = -\sum_Y Y \left(\frac{\partial \Omega_Y}{\partial E}\right)_x = \left(\frac{\partial (\Omega X)}{\partial E}\right)_x.$$

The logarithmic derivative of $\Omega$ with respect to $x$ is thus given by

$$\left(\frac{\partial \log(\Omega)}{\partial x}\right)_E = \beta X + \left(\frac{\partial X}{\partial E}\right)_x.$$

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that

$$\left(\frac{\partial S}{\partial x}\right)_E = \frac{X}{T}.$$

Combining this with

$$\left(\frac{\partial S}{\partial E}\right)_x = \frac{1}{T}$$

gives

$$\mathrm{d}S = \left(\frac{\partial S}{\partial E}\right)_x \mathrm{d}E + \left(\frac{\partial S}{\partial x}\right)_E \mathrm{d}x = \frac{\mathrm{d}E}{T} + \frac{X}{T}\,\mathrm{d}x,$$

which we can write as

$$\mathrm{d}E = T\,\mathrm{d}S - X\,\mathrm{d}x.$$

It has been shown that the fundamental thermodynamic relation together with the following three postulates [ 2 ] is sufficient to build the theory of statistical mechanics without the equal a priori probability postulate. For example, in order to derive the Boltzmann distribution, we assume the probability density of microstate $i$ satisfies $\Pr(i) \propto f(E_i, T)$. The normalization factor (partition function) is therefore

$$Z = \sum_i f(E_i, T).$$

The entropy is therefore given by

$$S = -k_{\text{B}} \sum_i \frac{f(E_i, T)}{Z} \log\left(\frac{f(E_i, T)}{Z}\right).$$

If we change the temperature $T$ by $\mathrm{d}T$ while keeping the volume of the system constant, the change of entropy satisfies

$$\mathrm{d}S = \left(\frac{\partial S}{\partial T}\right)_V \mathrm{d}T,$$

where

$$\left(\frac{\partial S}{\partial T}\right)_V = -k_{\text{B}} \sum_i \frac{Z \cdot \frac{\partial f(E_i, T)}{\partial T} \cdot \log f(E_i, T) - \frac{\partial Z}{\partial T} \cdot f(E_i, T) \cdot \log f(E_i, T)}{Z^2} = -k_{\text{B}} \sum_i \frac{\partial}{\partial T}\left(\frac{f(E_i, T)}{Z}\right) \cdot \log f(E_i, T).$$

Considering that

$$\langle E \rangle = \sum_i \frac{f(E_i, T)}{Z} \cdot E_i,$$

we have

$$\mathrm{d}\langle E \rangle = \sum_i \frac{\partial}{\partial T}\left(\frac{f(E_i, T)}{Z}\right) \cdot E_i \cdot \mathrm{d}T.$$

From the fundamental thermodynamic relation, we have

$$-\frac{\mathrm{d}S}{k_{\text{B}}} + \frac{\mathrm{d}\langle E \rangle}{k_{\text{B}} T} + \frac{P}{k_{\text{B}} T}\,\mathrm{d}V = 0.$$

Since we kept $V$ constant when perturbing $T$, we have $\mathrm{d}V = 0$.
Combining the equations above, we have

$$\sum_i \frac{\partial}{\partial T}\left(\frac{f(E_i, T)}{Z}\right) \cdot \left[\log f(E_i, T) + \frac{E_i}{k_{\text{B}} T}\right] \cdot \mathrm{d}T = 0.$$

The laws of physics should be universal, i.e., the above equation must hold for arbitrary systems, and the only way for this to happen is

$$\log f(E_i, T) + \frac{E_i}{k_{\text{B}} T} = 0,$$

that is,

$$f(E_i, T) = \exp\left(-\frac{E_i}{k_{\text{B}} T}\right).$$

It has been shown that the third postulate in the above formalism can be replaced by another postulate. [ 3 ] However, the mathematical derivation will be much more complicated.
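A minimal numerical sketch of the result: with invented energy levels, the weights $f(E_i, T) = e^{-E_i/(k_{\text{B}}T)}$ are normalised by the partition function $Z$, and the Gibbs entropy of the resulting distribution can be evaluated directly:

```python
import math

kB, T = 1.0, 2.0                             # units with k_B = 1
levels = [0.0, 1.0, 2.0, 5.0]                # toy energy levels E_i (our choice)

weights = [math.exp(-E / (kB * T)) for E in levels]
Z = sum(weights)                             # partition function
probs = [wt / Z for wt in weights]
print(probs, sum(probs))                     # normalised probabilities, sum = 1

# Gibbs entropy of this distribution, S = -k_B sum p log p:
S = -kB * sum(pi * math.log(pi) for pi in probs)
print(S)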
https://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation
Funga refers to the fungi of a particular region, habitat, or geological period. In the life sciences, funga is a recent term (2000s) for the kingdom Fungi, analogous to the longstanding fauna for animals and flora for plants. The term seeks to simplify projects oriented toward the implementation of educational and conservation goals, and it highlights parallel terminology referring to treatments of these macroorganisms in particular geographical areas. An official proposal for the term occurred in 2018, despite previous use. [ 1 ] [ 2 ] The Species Survival Commission (SSC) of the International Union for Conservation of Nature (IUCN) in August 2021 called for the recognition of fungi as one of three kingdoms of life, critical to protecting and restoring Earth; funga was recommended by the IUCN in 2021. They ask that the phrase animals and plants be replaced by animals, fungi, and plants, and fauna and flora by fauna, flora, and funga. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Funga
Fungal-bacterial endosymbiosis encompasses the mutualistic relationship between a fungus and intracellular bacteria residing within the fungus. Many examples of endosymbiotic relationships between bacteria and plants, algae and insects exist and have been well characterized; fungal-bacterial endosymbiosis, however, has been less well described. It represents a diverse range of endosymbionts and hosts with respect to the initiation of the association and the benefits provided by and for each partner. Well-studied examples include the Burkholderia species (sp.)/Rhizopus microsporus (R. microsporus), Nostoc punctiforme (N. punctiforme)/Geosiphon pyriforme (G. pyriforme) [ 1 ] [ 2 ] and Candidatus Glomeribacter gigasporarum (Ca. G. sporarum)/Gigaspora margarita (G. margarita) bacteria/fungi associations. What is known about these associations informs our understanding of the ecological interactions of plants, fungi and bacteria.

Bacterial endosymbionts and their fungal partners occur across a diverse set of phyla. Ca. G. sporarum and Burkholderia sp. have been identified as β-proteobacteria, a gram-negative class of bacteria, and N. punctiforme is a cyanobacterium. These phyla are not closely related, showing that the capability of endosymbiosis with fungi is widely spread. A similar pattern is seen with the fungal partners, with examples occurring across broad phyla/divisions such as Glomeromycota, Zygomycota, Ascomycota and Basidiomycota. The common feature of these fungi is that they are often arbuscular or ectomycorrhizal fungi and form symbiotic relations with plants as well as with their bacterial endosymbionts. Though commonalities exist, taxonomic classification does not predict a consistent symbiotic phenotype.

The definition of "endosymbiont" indicates that the bacteria are localized within the cytoplasm of cells or hyphae of the fungal partner. Specifically, the bacteria grow within membranes of their fungal counterpart, commonly referred to as vacuoles or symbiosomes. This feature is common to all fungal-bacterial symbioses, suggesting that internalization of the bacteria via phagocytosis is the main method of incorporation. The bacteria involved may be internalized by the fungi on a cyclic basis or may live obligately within the fungi. The interaction between N. punctiforme and G. pyriforme is an example of a cyclical association which forms at a certain point in their separate life cycles. N. punctiforme forms masses of filaments which gather in the dimmer underground soil while G. pyriforme grows lateral vegetative hyphae occupying the same area. The endosymbiotic relationship is formed when G. pyriforme engulfs and internalizes N. punctiforme in its growing hyphae in specialized compartments. Within the fungus, N. punctiforme replicates for about 6 months, coinciding with the life span of Geosiphon. Ca. G. sporarum, in contrast, is an obligate endosymbiont in the AM (arbuscular mycorrhizal) fungus G. margarita. These bacteria have been observed replicating within vacuoles and have been found in all stages of the life of the fungus, including the spores, vegetative hyphae, and plant cell-associated hyphae. It is thought that the bacteria are transmitted vertically from parent to offspring in the fungi as permanent residents.
Thus, bacterial endosymbionts are typically incorporated into growing fungi either through phagocytosis at some point in the life cycle of the fungus or passed on vertically, forming permanent associations with the fungus. In most cases, bacteria provide the fungus with some form of metabolic benefit, while the fungus often provides a suitable living environment. Burkholderia sp. in R. microsporus have been found to produce rhizoxin, an inhibitor of mitosis originally thought to be produced by R. microsporus itself. The production of rhizoxin by Burkholderia sp., leading to the death of plant cells, allows R. microsporus to gain greater access to nutrients. The bacteria also appear to play a role in dictating asexual spore formation in R. microsporus. The benefit gained by the bacteria in this case is not specifically known. In other cases, such as N. punctiforme and Ca. G. sporarum, nutrient exchange exists between the partners. N. punctiforme are autotrophic cyanobacteria capable of fixing nitrogen and provide G. pyriforme with fixed nitrogen. Ca. G. sporarum, on the other hand, has been found to increase the content of fatty acids, a method of usable organic carbon storage, in G. margarita, while relying heavily on its AM fungal host to provide key nutrients, suggesting that nutrient exchange is a two-way interaction. The AM fungal host in turn relies on the plant host for its nutrients. Interactions between bacteria and fungi are based on benefits to metabolism and represent complex interactions between bacterial, fungal and plant components.

Many of the fungal partners involved in endosymbiotic relationships with bacteria are also in mutualistic or parasitic relationships with plants. The presence of intracellular bacteria living within these fungi adds another level of complexity and suggests that at some level, the plant benefits indirectly from the interaction between fungi and bacteria. About 80% of natural and cultivated plants harbour AM fungi. These interactions increase nutrient availability in the plant and lead to increased plant growth and environmental stress-resistance. There exists a current demand in agriculture to cultivate and optimize crops to increase yield sustainably. Without considering the bacteria that live within AM fungi, like Ca. G. sporarum, as a factor that may contribute to the beneficial nature of AM fungi for plants, we may overlook what makes widespread agricultural application possible. On the other side of the spectrum are the fungi that cause disease in agricultural crops, leading to huge losses, such as R. microsporus, which causes blight in rice seedlings. R. microsporus relies on its bacterial partner of the Burkholderia sp. for the pathogenic toxin. Previous efforts to control infection included the use of harmful pesticides to eliminate the fungi; however, more recent research takes into account the role of the endosymbiotic bacteria in pathogenesis and uses phages to target the bacteria. Fungal-bacterial endosymbiosis thus significantly impacts the global concern of food production, and a deeper understanding of these relationships may offer solutions to these problems.
https://en.wikipedia.org/wiki/Fungal-bacterial_endosymbiosis
Fungal DNA barcoding is the process of identifying species of the biological kingdom Fungi through the amplification and sequencing of specific DNA sequences and their comparison with sequences deposited in a DNA barcode database such as the ISHAM reference database [ 1 ] or the Barcode of Life Data System (BOLD). In this approach, DNA barcoding relies on universal genes that are ideally present in all fungi with the same degree of sequence variation. The interspecific variation, i.e., the variation between species, in the chosen DNA barcode gene should exceed the intraspecific (within-species) variation. [ 2 ]

A fundamental problem in fungal systematics is the existence of teleomorphic and anamorphic stages in their life cycles. These morphs usually differ drastically in their phenotypic appearance, preventing a straightforward association of the asexual anamorph with the sexual teleomorph. Moreover, fungal species can comprise multiple strains that can vary in their morphology or in traits such as carbon and nitrogen utilisation, which has often led to their description as different species, eventually producing long lists of synonyms. [ 3 ] Fungal DNA barcoding can help to identify and associate anamorphic and teleomorphic stages of fungi, and through that to reduce the confusing multitude of fungus names. For this reason, mycologists were among the first to spearhead the investigation of species discrimination by means of DNA sequences, [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] at least 10 years earlier than the DNA barcoding proposal for animals by Paul D. N. Hebert and colleagues in 2003, who popularised the term "DNA barcoding". [ 9 ] [ 10 ]

The success of identification of fungi by means of DNA barcode sequences stands and falls with the quantitative (completeness) and qualitative (level of identification) aspects of the reference database. Without a database covering a broad taxonomic range of fungi, many identification queries will not result in a satisfyingly close match. Likewise, without a substantial curatorial effort to maintain the records at a high taxonomic level of identification, queries – even when they might have a close or exact match in the reference database – will not be informative if the closest match is only identified to phylum or class level. [ 11 ] [ 12 ]

Another crucial prerequisite for DNA barcoding is the ability to unambiguously trace the provenance of DNA barcode data back to the originally sampled specimen, the so-called voucher specimen. This is common practice in biology along with the description of new taxa, where the voucher specimens on which the taxonomic description is based become the type specimens. When the identity of a certain taxon (or a genetic sequence in the case of DNA barcoding) is in doubt, the original specimen can be re-examined to review and ideally solve the issue. Voucher specimens should be clearly labelled as such, including a permanent voucher identifier that unambiguously connects the specimen with the DNA barcode data derived from it. Furthermore, these voucher specimens should be deposited in publicly accessible repositories like scientific collections or herbaria to preserve them for future reference and to facilitate research involving the deposited specimens. [ 13 ]
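The comparison step described above, matching a query sequence against reference entries and reporting the closest hit, can be sketched as follows; the sequences and species names are invented placeholders, and the identity measure is a deliberately naive stand-in for real alignment-based scoring:

```python
references = {                      # invented placeholder "barcodes"
    "Species A": "ACGTACGTTGCA",
    "Species B": "ACGTTCGTTCGA",
    "Species C": "TTGTACGAAGCA",
}

def percent_identity(a, b):
    """Identity over aligned positions (toy: assumes equal-length sequences)."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / min(len(a), len(b))

query = "ACGTACGTTGGA"
best = max(references, key=lambda name: percent_identity(query, references[name]))
print(best, percent_identity(query, references[best]))   # Species A, ~91.7%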
In fungi, the Internal transcribed spacer (ITS) is a roughly 600 base pair long region in the ribosomal tandem repeat gene cluster of the nuclear genome. The region is flanked by the DNA sequences for the ribosomal small subunit (SSU) or 18S subunit at the 5' end, and by the large subunit (LSU) or 28S subunit at the 3' end. [ 14 ] [ 15 ] The Internal Transcribed Spacer itself consists of two parts, ITS1 and ITS2, which are separated from each other by the 5.8S subunit nested between them. Like the flanking 18S and 28S subunits, the 5.8S subunit contains a highly conserved DNA sequence, as they code for structural parts of the ribosome, which is a key component in intracellular protein synthesis. Due to several advantages of ITS (see below) and a comprehensive amount of sequence data accumulated in the 1990s and early 2000s, Begerow et al. (2010) and Schoch et al. (2012) proposed the ITS region as the primary DNA barcode region for the genetic identification of fungi. [ 12 ] [ 2 ] UNITE [ 16 ] is an open ITS barcoding database for fungi and all other eukaryotes.

The conserved flanking regions of 18S and 28S serve as anchor points for the primers used for PCR amplification of the ITS region. [ 17 ] Moreover, the conserved nested 5.8S region allows for the construction of "internal" primers, i.e., primers attaching to complementary sequences within the ITS region. White et al. (1990) proposed such internal primers, named ITS2 and ITS3, along with the flanking primers ITS1 and ITS4 in the 18S and the 28S subunit, respectively. [ 17 ] Due to their almost universal applicability to ITS sequencing in fungi, these primers are still in wide use today. Optimised primers specifically for ITS sequencing in Dikarya (comprising Basidiomycota and Ascomycota) have been proposed by Toju et al. (2012). [ 18 ] For the majority of fungi, the ITS primers proposed by White et al. (1990) have become the standard primers used for PCR amplification. These primers are: [ 17 ] Forward primers: Reverse primers:

A major advantage of using the ITS region as molecular marker and fungal DNA barcode is that the entire ribosomal gene cluster is arranged in tandem repeats, i.e., in multiple copies. [ 15 ] This allows for its PCR amplification and Sanger sequencing even from small material samples (given the DNA is not fragmented due to age or other degenerative influences). [ 14 ] Hence, a high PCR success rate is usually observed when amplifying ITS. However, this success rate varies greatly among fungal groups, from 65% in non-Dikarya (including the now paraphyletic Mucoromycotina, the Chytridiomycota and the Blastocladiomycota) to 100% in Saccharomycotina and Basidiomycota [ 2 ] (with the exception of very low success in Pucciniomycotina). [ 19 ] Furthermore, the choice of primers for ITS amplification can introduce biases towards certain taxonomic fungus groups. [ 20 ] For example, the "universal" ITS primers [ 17 ] fail to amplify about 10% of the tested fungal specimens. [ 19 ]

The tandem repeats of the ribosomal gene cluster cause the problem of significant intragenomic sequence heterogeneity observed among ITS copies of several fungal groups. [ 21 ] [ 22 ] [ 23 ] In Sanger sequencing, this will cause ITS sequence reads of different lengths to superpose each other, potentially rendering the resulting chromatogram unreadable. Furthermore, because of the non-coding nature of the ITS region, which can lead to a substantial amount of indels, it is impossible to consistently align ITS sequences from highly divergent species for larger-scale phylogenetic analyses. [ 9 ] [ 14 ]
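How a flanking primer pair delimits the amplified region can be sketched in a few lines; the template and primer sequences below are invented placeholders (the actual White et al. primer sequences are not reproduced here), and real PCR additionally depends on orientation, mismatches, and melting temperatures:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement: the strand a reverse primer actually binds."""
    return seq.translate(COMPLEMENT)[::-1]

# Toy template: [flank][forward primer site][target][reverse primer site][flank]
template = "GGGG" + "ATTGCC" + "A" * 30 + "TTAGGC" + "TTTT"
fwd = "ATTGCC"     # hypothetical forward primer (matches the plus strand)
rev = "GCCTAA"     # hypothetical reverse primer (matches the minus strand)

start = template.find(fwd)
end = template.find(revcomp(rev)) + len(rev)
print(end - start, template[start:end])   # in-silico amplicon length and sequence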
The degree of intragenomic sequence heterogeneity can be investigated in more detail through molecular cloning of the initially PCR-amplified ITS sequences, followed by sequencing of the clones. This procedure of initial PCR amplification, followed by cloning of the amplicons and finally sequencing of the cloned PCR products, is the most common approach of obtaining ITS sequences for DNA metabarcoding of environmental samples, in which a multitude of different fungal species can be present simultaneously. However, this approach of sequencing after cloning was rarely applied to the ITS sequences that make up the reference libraries used for DNA barcode-aided identification, thus potentially giving an underestimate of the existing ITS sequence variation in many samples. [ 24 ]

The weighted arithmetic mean of the intraspecific (within-species) ITS variability among fungi is 2.51%. This variability, however, can range from 0% in, for example, Serpula lacrymans (n=93 samples), through 0.19% in Tuber melanosporum (n=179), up to 15.72% in Rhizoctonia solani (n=608) or even 24.75% in Pisolithus tinctorius (n=113). In cases of high intraspecific ITS variability, the application of a threshold of 3% sequence variability – a canonical upper value for intraspecific variation – will therefore lead to a higher estimate of operational taxonomic units (OTUs), i.e., putative species, than there actually are in a sample. [ 25 ] In the case of medically relevant fungal species, a stricter threshold of 2.5% ITS variability allows only around 75% of all species to be accurately identified to the species level. [ 1 ] On the other hand, morphologically well-defined but evolutionarily young species complexes or sibling species may only differ (if at all) in a few nucleotides of the ITS sequences. Solely relying on ITS barcode data for the identification of such species pairs or complexes may thus obscure the actual diversity and might lead to misidentification if not accompanied by the investigation of morphological and ecological features and/or comparison of additional diagnostic genetic markers. [ 19 ] [ 24 ] [ 26 ] [ 27 ] For some taxa, ITS (or its ITS2 part) is not variable enough as a fungal DNA barcode, as has been shown, for example, in Aspergillus, Cladosporium, Fusarium and Penicillium. [ 28 ] [ 29 ] [ 30 ] [ 31 ] Efforts to define a universally applicable threshold value of ITS variability that demarcates intraspecific from interspecific (between-species) variability thus remain futile. [ 25 ] Nonetheless, the probability of correct species identification with the ITS region is high in the Dikarya, and especially so in Basidiomycota, where even the ITS1 part is often sufficient to identify the species. [ 32 ] However, its discrimination power is partly superseded by that of the DNA-directed RNA polymerase II subunit RPB1 (see also below). [ 2 ]

Due to the shortcomings of ITS as the primary fungal DNA barcode, the necessity of establishing a second DNA barcode marker was expressed. [ 9 ] Several attempts were made to establish other genetic markers that could serve as additional DNA barcodes, [ 19 ] [ 33 ] [ 34 ] similar to the situation in plants, where the plastidial genes rbcL, matK and trnH-psbA, as well as the nuclear ITS, are often used in combination for DNA barcoding. [ 35 ]
The translational elongation factor 1α is part of the eukaryotic elongation factor 1 complex, whose main function is to facilitate the elongation of the amino acid chain of a polypeptide during the translation process of gene expression. [ 36 ] Stielow et al. (2015) investigated the TEF1α gene, among a number of others, as a potential genetic marker for fungal DNA barcoding. The TEF1α gene coding for the translational elongation factor 1α is generally considered to have a slow mutation rate, and it is therefore generally better suited for investigating older splits deeper in the phylogenetic history of an organism group. Despite this, the authors conclude that TEF1α is the most promising candidate for an additional DNA barcode marker in fungi, as it also features sequence regions of higher mutation rates. [ 19 ] Following this, a quality-controlled reference database was established and merged with the previously existing ISHAM-ITS database for fungal ITS DNA barcodes [ 1 ] to form the ISHAM database. [ 37 ] TEF1α has been successfully used to identify a new species of Cantharellus from Texas and distinguish it from a morphologically similar species. [ 38 ] In the genera Ochroconis and Verruconis (Sympoventuriaceae, Venturiales), however, the marker does not allow distinction of all species. [ 39 ] TEF1α has also been used in phylogenetic analyses at the genus level, e.g. in the case of Cantharellus [ 40 ] and the entomopathogenic Beauveria, [ 41 ] and for the phylogenetics of early-diverging fungal lineages. [ 42 ]

TEF1α primers used in the broad-scale screening of the performance of DNA barcode gene candidates by Stielow et al. (2015) were the forward primer EF1-983F with the sequence 5'-GCYCCYGGHCAYCGTGAYTTYAT-3' and the reverse primer EF1-1567R with the sequence 5'-ACHGTRCCRATACCACCRATCTT-3'. [ 41 ] In addition, a number of new primers were developed, with the primer pair in bold resulting in a high average amplification success of 88%: [ 19 ] Forward primers: Reverse primers:

Primers used for the investigation of Rhizophydiales, and especially Batrachochytrium dendrobatidis, a pathogen of amphibians, are the forward primer tef1F with the nucleotide sequence 5'-TACAARTGYGGTGGTATYGACA-3' and the reverse primer tef1R with the sequence 5'-ACNGACTTGACYTCAGTRGT-3'. [ 43 ] These primers also successfully amplified the majority of Cantharellus species investigated by Buyck et al. (2014), with the exception of a few species for which more specific primers were developed: the forward primer tef-1Fcanth with the sequence 5'-AGCATGGGTDCTYGACAAG-3' and the reverse primer tef-1Rcanth with the sequence 5'-CCAATYTTRTAYACATCYTGGAG-3'. [ 40 ]
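Degenerate positions in primers such as EF1-983F (Y, H, R and similar letters) denote sets of allowed bases in the IUPAC code. A small sketch of expanding such a primer into a regular expression, using the EF1-983F sequence quoted above and an invented compatible template:

```python
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "[AG]", "Y": "[CT]", "W": "[AT]", "S": "[CG]",
         "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
         "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

def primer_to_regex(primer):
    """Translate a degenerate primer into a regex over plain A/C/G/T."""
    return re.compile("".join(IUPAC[base] for base in primer))

ef1_983f = "GCYCCYGGHCAYCGTGAYTTYAT"       # forward primer from the text
pattern = primer_to_regex(ef1_983f)

# One concrete sequence compatible with the degenerate positions (our example):
template = "AAA" + "GCTCCTGGACATCGTGACTTCAT" + "GGG"
match = pattern.search(template)
print(match.start(), match.group())        # 3, the matched primer site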
The D1/D2 domain is part of the nuclear large subunit ( 28S ) ribosomal RNA, and it is therefore located in the same ribosomal tandem repeat gene cluster as the Internal Transcribed Spacer ( ITS ). Unlike the non-coding ITS sequences, however, the D1/D2 domain contains coding sequence. At about 600 base pairs, it is roughly the same nucleotide sequence length as ITS , [ 44 ] which makes amplification and sequencing rather straightforward, an advantage that has led to the accumulation of an extensive amount of D1/D2 sequence data, especially for yeasts . [ 3 ] [ 7 ] [ 44 ] Regarding the molecular identification of basidiomycetous yeasts, D1/D2 (or ITS ) can be used alone. [ 44 ] However, Fell et al. (2000) and Scorzetti et al. (2002) recommend the combined analysis of the D1/D2 and ITS regions, [ 3 ] [ 44 ] a practice that later became the standard required information for describing new taxa of asco- and basidiomycetous yeasts. [ 14 ] When attempting to identify early diverging fungal lineages, the study of Schoch et al. (2012), comparing the identification performance of different genetic markers, showed that the large subunit (as well as the small subunit ) of the ribosomal RNA performs better than ITS or RPB1 . [ 2 ] For basidiomycetous yeasts, the forward primer F63 with the sequence 5'-GCATATCAATAAGCGGAGGAAAAG-3' , and the reverse primer LR3 with the sequence 5'-GGTCCGTGTTTCAAGACGG-3' have been successfully used for PCR amplification of the D1/D2 domain. [ 3 ] The D1/D2 domain of ascomycetous yeasts like Candida can be amplified with the forward primer NL-1 (same as F63 ) and the reverse primer NL-4 (same as LR3 ). [ 6 ] The RNA polymerase II subunit RPB1 is the largest subunit of the RNA polymerase II . In Saccharomyces cerevisiae , it is encoded by the RPO21 gene. [ 46 ] PCR amplification success of RPB1 is very taxon-dependent, ranging from 70 to 80% in Ascomycota down to 14% in early diverging fungal lineages. [ 2 ] Apart from the early diverging lineages, RPB1 has a high rate of species identification in all fungal groups. In the species-rich Pezizomycotina it even outperforms ITS. [ 2 ] In a study comparing the identification performance of four genes, RPB1 was among the most effective genes when combining two genes in the analysis: combined analysis with either ITS or with the large subunit ribosomal RNA yielded the highest identification success. [ 2 ] Other studies have also used RPB2 , the second-largest subunit of the RNA polymerase II, e.g. for studying the phylogenetic relationships among species of the genus Cantharellus [ 40 ] or for a phylogenetic study shedding light on the relationships among early-diverging lineages in the fungal kingdom. [ 42 ] Primers successfully amplifying RPB1 especially in Ascomycota are the forward primer RPB1-Af with the sequence 5'-GARTGYCCDGGDCAYTTYGG-3' , and the reverse primer RPB1-Cr with the sequence 5'-CCNGCDATNTCRTTRTCCATRTA-3' . [ 2 ] The Intergenic Spacer ( IGS ) is the region of non-coding DNA between individual tandem repeats of the ribosomal gene cluster in the nuclear genome , as opposed to the Internal Transcribed Spacer (ITS), which is situated within these tandem repeats. IGS has been successfully used for the differentiation of strains of Xanthophyllomyces dendrorhous [ 47 ] as well as for species distinction in the psychrophilic genus Mrakia ( Cystofilobasidiales ). [ 48 ] Due to these results, IGS has been recommended as a genetic marker for additional differentiation (along with D1/D2 and ITS ) of closely related species and even strains within one species in basidiomycete yeasts. [ 3 ] The recent discovery of additional non-coding RNA genes in the IGS region of some basidiomycetes cautions against uncritical use of IGS sequences for DNA barcoding and phylogenetic purposes. [ 49 ] The cytochrome c oxidase subunit I ( COI ) gene outperforms ITS in DNA barcoding of Penicillium (Ascomycota) species, with species-specific barcodes for 66% of the investigated species versus 25% in the case of ITS . Furthermore, a part of the β-Tubulin A ( BenA ) gene exhibits a higher taxonomic resolution in distinguishing Penicillium species as compared to COI and ITS .
[ 50 ] In the closely related Aspergillus niger complex, however, COI is not variable enough for species discrimination. [ 51 ] In Fusarium , COI exhibits paralogues in many cases, and homologous copies are not variable enough to distinguish species. [ 52 ] COI also performs poorly in the identification of basidiomycete rusts of the order Pucciniales due to the presence of introns . Even when the obstacle of introns is overcome, ITS and the LSU rRNA ( 28S ) outperform COI as DNA barcode markers. [ 53 ] In the subdivision Agaricomycotina , PCR amplification success was poor for COI , even with multiple primer combinations. Successfully sequenced COI samples also included introns and possible paralogous copies, as reported for Fusarium . [ 52 ] [ 54 ] Agaricus bisporus was found to contain up to 19 introns, making the COI gene of this species the longest recorded, with 29,902 nucleotides. [ 55 ] Apart from the substantial difficulties of sequencing COI , COI and ITS generally perform equally well in distinguishing basidiomycete mushrooms. [ 54 ] Topoisomerase I ( TOP1 ) was investigated as an additional DNA barcode candidate by Lewis et al. (2011) based on proteome data, with the developed universal primer pair [ 33 ] being subsequently tested on actual samples by Stielow et al. (2015). The forward primer TOP1_501-F with the sequence 5'-TGTAAAACGACGGCCAGT-ACGAT-ACTGCCAAGGTTTTCCGTACHTACAACGC-3' (where the first section marks the universal M13 forward primer tail, the second part, ACGAT, a spacer, and the third part the actual primer) and the reverse primer TOP1_501-R with the sequence 5'-CAGGAAACAGCTATGA-CCCAGTCCTCGTCAACWGACTTRATRGCCCA-3' (the first section marking the universal M13 reverse primer tail, the second part the actual TOP1 reverse primer) amplify a fragment of approximately 800 base pairs. [ 19 ] TOP1 was found to be a promising DNA barcode candidate marker for ascomycetes, where it can distinguish species in Fusarium and Penicillium – genera in which the primary ITS barcode performs poorly. However, poor amplification success with the TOP1 universal primers is observed in early-diverging fungal lineages and in basidiomycetes except Pucciniomycotina (where ITS PCR success is poor). [ 19 ] Like TOP1 , the phosphoglycerate kinase ( PGK ) gene was among the genetic markers investigated by Lewis et al. (2011) and Stielow et al. (2015) as potential additional fungal DNA barcodes. A number of universal primers were developed, [ 33 ] with the PGK533 primer pair, amplifying a circa 1,000 base pair fragment, being the most successful in most fungi except basidiomycetes. Like TOP1 , PGK is superior to ITS for species differentiation in ascomycete genera like Penicillium and Fusarium , and both PGK and TOP1 perform as well as TEF1α in distinguishing closely related species in these genera. [ 19 ] A citizen science project investigated the agreement between the labelling of dried, commercially sold mushrooms and the DNA barcoding results from these mushrooms. All samples were found to be correctly labelled. However, an obstacle was the unreliability of ITS reference databases in terms of the level of identification, as the two databases (GenBank and UNITE) used for ITS sequence comparison gave different identification results for some of the samples. [ 56 ] [ 57 ] Correct labelling of mushrooms intended for consumption was also investigated by Raja et al. (2016), who used the ITS region for DNA barcoding from dried mushrooms, mycelium powders, and dietary supplement capsules.
In only 30% of the 33 samples did the product label correctly state the binomial fungus name. In another 30%, the genus name was correct, but the species epithet did not match, and in 15% of the cases not even the genus of the binomial name given on the product label matched the result of the obtained ITS barcode. For the remaining 25% of the samples, no ITS sequence could be obtained. [ 58 ] Xiang et al. (2013) showed that, using ITS sequences, the commercially highly valuable caterpillar fungus Ophiocordyceps sinensis and its counterfeit versions ( O. nutans , O. robertsii , Cordyceps cicadae , C. gunnii , C. militaris , and the plant Ligularia hodgsonii ) can be reliably identified to the species level. [ 59 ] A study by Vi Hoang et al. (2019) focused on the identification accuracy for pathogenic fungi using both the primary ( ITS ) and secondary ( TEF1α ) barcode markers. Their results show that in Diutina (a segregate of Candida [ 60 ] ) and Pichia , species identification is straightforward with either ITS or TEF1α alone, as well as with a combination of both. In the Lodderomyces assemblage, which contains three of the five most common pathogenic Candida species ( C. albicans , C. dubliniensis , and C. parapsilosis ), ITS failed to distinguish Candida orthopsilosis and C. parapsilosis , which are part of the Candida parapsilosis complex of closely related species. [ 61 ] TEF1α , on the other hand, allowed identification of all investigated species of the Lodderomyces clade. Similar results were obtained for Scedosporium species, which cause a wide range of diseases, from localised to invasive: ITS could not distinguish between S. apiospermum and S. boydii , whereas with TEF1α all investigated species of this genus could be accurately identified. This study therefore underlines the usefulness of applying more than one DNA barcoding marker for fungal species identification. [ 62 ] Fungal DNA barcoding has been successfully applied to the investigation of foxing phenomena, a major concern in the conservation of paper documents . Sequeira et al. (2019) sequenced ITS from foxing stains and found Chaetomium globosum , Ch. murorum , Ch. nigricolor , Chaetomium sp., Eurotium rubrum , Myxotrichum deflexum , Penicillium chrysogenum , P. citrinum , P. commune , Penicillium sp. and Stachybotrys chartarum to inhabit the investigated paper stains. [ 63 ] Another study investigated fungi that act as biodeteriorating agents in the Old Cathedral of Coimbra , part of the University of Coimbra , a UNESCO World Heritage Site . Sequencing the ITS barcode of ten samples with classical Sanger as well as with Illumina next-generation sequencing techniques, the authors identified 49 fungal species. Aspergillus versicolor , Cladosporium cladosporioides , C. sphaerospermum , C. tenuissimum , Epicoccum nigrum , Parengyodontium album , Penicillium brevicompactum , P. crustosum , P. glabrum , Talaromyces amestolkiae and T. stollii were the most common species isolated from the samples. [ 64 ] Another study concerning objects of cultural heritage investigated the fungal diversity on a canvas painting by Paula Rego using the ITS2 subregion of the ITS marker. Altogether, 387 OTUs (putative species) in 117 genera of 13 different classes of fungi were observed. [ 65 ]
https://en.wikipedia.org/wiki/Fungal_DNA_barcoding
Fungal infection , also known as mycosis , is a disease caused by fungi . [ 5 ] [ 13 ] Different types are traditionally divided according to the part of the body affected: superficial, subcutaneous , and systemic. [ 3 ] [ 6 ] Superficial fungal infections include common tinea of the skin , such as tinea of the body , groin , hands , feet and beard , and yeast infections such as pityriasis versicolor . [ 7 ] Subcutaneous types include eumycetoma and chromoblastomycosis , which generally affect tissues in and beneath the skin. [ 1 ] [ 7 ] Systemic fungal infections are more serious and include cryptococcosis , histoplasmosis , pneumocystis pneumonia , aspergillosis and mucormycosis . [ 3 ] Signs and symptoms range widely. [ 3 ] There is usually a rash with superficial infection. [ 2 ] Fungal infection within the skin or under the skin may present with a lump and skin changes. [ 3 ] Pneumonia -like symptoms or meningitis may occur with a deeper or systemic infection. [ 2 ] Fungi are everywhere, but only some cause disease. [ 13 ] Fungal infection occurs after spores are either breathed in , come into contact with skin or enter the body through the skin , such as via a cut , wound or injection . [ 3 ] It is more likely to occur in people with a weak immune system . [ 14 ] This includes people with illnesses such as HIV/AIDS , and people taking medicines such as steroids or cancer treatments . [ 14 ] Fungi that cause infections in people include yeasts , molds and fungi that are able to exist as both a mold and yeast . [ 3 ] The yeast Candida albicans can live in people without producing symptoms, and is able to cause both superficial mild candidiasis in healthy people, such as oral thrush or vaginal yeast infection , and severe systemic candidiasis in those who cannot fight infection themselves. [ 3 ] Diagnosis is generally based on signs and symptoms, microscopy and culture , sometimes requiring a biopsy and the aid of medical imaging . [ 6 ] Some superficial fungal infections of the skin can appear similar to other skin conditions such as eczema and lichen planus . [ 7 ] Treatment is generally performed using antifungal medicines , usually in the form of a cream or by mouth or injection , depending on the specific infection and its extent. [ 15 ] Some require surgically cutting out infected tissue . [ 3 ] Fungal infections have a world-wide distribution and are common, affecting more than one billion people every year. [ 11 ] An estimated 1.7 million deaths from fungal disease were reported in 2020. [ 12 ] Several, including sporotrichosis , chromoblastomycosis and mycetoma , are neglected . [ 16 ] A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people. [ 17 ] Mycoses are traditionally divided into superficial , subcutaneous, and systemic, where infection is deep, more widespread and involves internal body organs. [ 3 ] [ 11 ] They can affect the nails , vagina , skin and mouth . [ 18 ] Some types, such as blastomycosis , cryptococcus , coccidioidomycosis and histoplasmosis , affect people who live in or visit certain parts of the world. [ 18 ] Others, such as aspergillosis , pneumocystis pneumonia , candidiasis , mucormycosis and talaromycosis , tend to affect people who are unable to fight infection themselves. [ 18 ] Mycoses might not always conform strictly to the three divisions of superficial, subcutaneous and systemic. [ 3 ] Some superficial fungal infections can cause systemic infections in people who are immunocompromised.
[ 3 ] Some subcutaneous fungal infections can invade deeper structures, resulting in systemic disease. [ 3 ] Candida albicans can live in people without producing symptoms, and is able to cause both mild candidiasis in healthy people and severe invasive candidiasis in those who cannot fight infection themselves. [ 3 ] [ 7 ] Superficial mycoses include candidiasis in healthy people, common tinea of the skin , such as tinea of the body , groin , hands , feet and beard , and malassezia infections such as pityriasis versicolor . [ 3 ] [ 7 ] Subcutaneous fungal infections include sporotrichosis , chromoblastomycosis , and eumycetoma . [ 3 ] Systemic fungal infections include histoplasmosis , cryptococcosis , coccidioidomycosis , blastomycosis , mucormycosis , aspergillosis , pneumocystis pneumonia and systemic candidiasis. [ 3 ] Systemic mycoses due to primary pathogens originate normally in the lungs and may spread to other organ systems. Organisms that cause systemic mycoses are inherently virulent . Systemic mycoses due to opportunistic pathogens are infections of people with immune deficiencies who would otherwise not be infected. Examples of immunocompromised conditions include AIDS , alteration of normal flora by antibiotics, immunosuppressive therapy , and metastatic cancer . Examples of opportunistic mycoses include candidiasis , cryptococcosis and aspergillosis . The most common mild mycoses often present with a rash. [ 2 ] Infections within the skin or under the skin may present with a lump and skin changes. [ 3 ] Less common deeper fungal infections may present with pneumonia-like symptoms or meningitis . [ 2 ] Mycoses are caused by certain fungi : yeasts , molds and some fungi that can exist as both a mold and yeast . [ 3 ] [ 6 ] They are everywhere, and infection occurs after spores are either breathed in , come into contact with skin or enter the body through the skin, such as via a cut, wound or injection. [ 3 ] Candida albicans is the most common cause of fungal infection in people, particularly as oral or vaginal thrush, often following the taking of antibiotics. [ 3 ] Fungal infections are more likely in people with weak immune systems . [ 14 ] This includes people with illnesses such as HIV/AIDS, and people taking medicines such as steroids or cancer treatments . [ 14 ] People with diabetes also tend to develop fungal infections. [ 19 ] The very young and the very old are also groups at risk. [ 20 ] Individuals being treated with antibiotics are at higher risk of fungal infections. [ 21 ] Children whose immune systems are not functioning properly (such as children with cancer) are at risk of invasive fungal infections. [ 22 ] During the COVID-19 pandemic , some fungal infections have been associated with COVID-19 . [ 10 ] [ 23 ] [ 24 ] Fungal infections can mimic COVID-19 and occur at the same time as COVID-19, and more serious fungal infections can complicate COVID-19. [ 10 ] A fungal infection may occur after antibiotics for a bacterial infection which has occurred following COVID-19. [ 25 ] The most common serious fungal infections in people with COVID-19 include aspergillosis and invasive candidiasis . [ 26 ] COVID-19–associated mucormycosis is generally less common, but in 2021 was noted to be significantly more prevalent in India. [ 10 ] [ 27 ] Fungal infections occur after spores are breathed in , come into contact with skin or enter the body through a wound.
[ 3 ] Diagnosis is generally by signs and symptoms, microscopy , biopsy , culture and sometimes with the aid of medical imaging . [ 6 ] Some tinea and candidiasis infections of the skin can appear similar to eczema and lichen planus . [ 7 ] Pityriasis versicolor can look like seborrheic dermatitis, pityriasis rosea , pityriasis alba and vitiligo . [ 7 ] Some fungal infections such as coccidioidomycosis , histoplasmosis , and blastomycosis can present with fever , cough , and shortness of breath , thereby resembling COVID-19 . [ 28 ] Keeping the skin clean and dry, as well as maintaining good hygiene , helps prevent topical mycoses. Because some fungal infections are contagious, it is important to wash hands after touching other people or animals. Sports clothing should also be washed after use. Treatment depends on the type of fungal infection, and usually requires topical or systemic antifungal medicines . [ 15 ] Pneumocystosis that does not respond to anti-fungals is treated with co-trimoxazole . [ 29 ] Sometimes, infected tissue needs to be surgically cut away . [ 3 ] Worldwide, every year fungal infections affect more than one billion people. [ 11 ] An estimated 1.6 million deaths from fungal disease were reported in 2017. [ 30 ] The figure has been rising, with an estimated 1.7 million deaths from fungal disease reported in 2020. [ 12 ] Fungal infections also constitute a significant cause of illness and mortality in children. [ 31 ] According to the Global Action Fund for Fungal Infections , every year there are over 10 million cases of fungal asthma, around 3 million cases of long-term aspergillosis of the lungs, 1 million cases of blindness due to fungal keratitis , more than 200,000 cases of meningitis due to cryptococcus, 700,000 cases of invasive candidiasis, 500,000 cases of pneumocystosis of the lungs, 250,000 cases of invasive aspergillosis, and 100,000 cases of histoplasmosis. [ 32 ] In 500 BC, an account of ulcers in the mouth by Hippocrates may have described thrush. [ 33 ] The Paris-based Hungarian microscopist David Gruby first reported that human disease could be caused by fungi in the early 1840s. [ 33 ] During the 2003 SARS outbreak , fungal infections were reported in 14.8–33% of people affected by SARS, and were the cause of death in 25–73.7% of people with SARS. [ 34 ] A wide range of fungal infections occur in other animals, and some can be transmitted from animals to people, such as Microsporum canis from cats. [ 17 ]
https://en.wikipedia.org/wiki/Fungal_disease
Extracellular enzymes or exoenzymes are synthesized inside the cell and then secreted outside the cell, where their function is to break down complex macromolecules into smaller units to be taken up by the cell for growth and assimilation. [ 1 ] These enzymes degrade complex organic matter such as cellulose and hemicellulose into simple sugars that enzyme-producing organisms use as a source of carbon, energy, and nutrients. [ 2 ] Grouped as hydrolases , lyases , oxidoreductases and transferases , [ 1 ] these extracellular enzymes control soil enzyme activity through efficient degradation of biopolymers . Plant residues, animals and microorganisms enter the dead organic matter pool upon senescence [ 3 ] and become a source of nutrients and energy for other organisms. Extracellular enzymes target macromolecules such as carbohydrates ( cellulases ), lignin ( oxidases ), organic phosphates ( phosphatases ), amino sugar polymers ( chitinases ) and proteins ( proteases ) [ 4 ] and break them down into soluble sugars that are subsequently transported into cells to support heterotrophic metabolism. [ 1 ] Biopolymers are structurally complex and require the combined actions of a community of diverse microorganisms and their secreted exoenzymes to depolymerize the polysaccharides into easily assimilable monomers . These microbial communities are ubiquitous in nature, inhabiting both terrestrial and aquatic ecosystems . The cycling of elements from dead organic matter by heterotrophic soil microorganisms is essential for nutrient turnover and energy transfer in terrestrial ecosystems. [ 5 ] Exoenzymes also aid digestion in the guts of ruminants, [ 6 ] termites, [ 7 ] humans and herbivores. By hydrolyzing plant cell wall polymers, microbes release energy that has the potential to be used by humans as biofuel. [ 8 ] Other human uses include waste water treatment, [ 9 ] composting [ 10 ] and bioethanol production. [ 11 ] Extracellular enzyme production supplements the direct uptake of nutrients by microorganisms and is linked to nutrient availability and environmental conditions. The varied chemical structure of organic matter requires a suite of extracellular enzymes to access the carbon and nutrients embedded in detritus . Microorganisms differ in their ability to break down these different substrates and few organisms have the potential to degrade all the available plant cell wall materials. [ 12 ] To detect the presence of complex polymers, some exoenzymes are produced constitutively at low levels, and expression is upregulated when the substrate is abundant. [ 13 ] This sensitivity to the presence of varying concentrations of substrate allows fungi to respond dynamically to the changing availability of specific resources. Benefits of exoenzyme production can also be lost after secretion because the enzymes are liable to denature, degrade or diffuse away from the producer cell. Enzyme production and secretion is an energy intensive process [ 14 ] and, because it consumes resources otherwise available for reproduction, there is evolutionary pressure to conserve those resources by limiting production. [ 15 ] Thus, while most microorganisms can assimilate simple monomers, degradation of polymers is specialized, and few organisms can degrade recalcitrant polymers like cellulose and lignin. [ 16 ] Each microbial species carries specific combinations of genes for extracellular enzymes and is adapted to degrade specific substrates . 
[ 12 ] In addition, the expression of genes that encode enzymes is typically regulated by the availability of a given substrate. For example, the presence of a low-molecular-weight soluble substrate such as glucose will inhibit enzyme production by repressing the transcription of associated cellulose-degrading enzymes. [ 17 ] Environmental conditions such as soil pH , [ 18 ] soil temperature, [ 19 ] moisture content, [ 20 ] and plant litter type and quality [ 21 ] have the potential to alter exoenzyme expression and activity. Variations in seasonal temperatures can shift the metabolic needs of microorganisms in synchrony with shifts in plant nutrient requirements. [ 22 ] Agricultural practices such as fertilizer amendments and tillage can change the spatial distribution of resources, resulting in altered exoenzyme activity in the soil profile . [ 23 ] Introduction of moisture exposes soil organic matter to enzyme catalysis [ 24 ] and also increases loss of soluble monomers via diffusion. Additionally, osmotic shock resulting from water potential changes can impact enzyme activities as microbes redirect energy from enzyme production to synthesizing osmolytes to maintain cellular structures. Most of the extracellular enzymes involved in polymer degradation in leaf litter and soil have been ascribed to fungi. [ 25 ] [ 26 ] [ 27 ] By adapting their metabolism to the availability of varying amounts of carbon and nitrogen in the environment, fungi produce a mixture of oxidative and hydrolytic enzymes to efficiently break down lignocelluloses like wood. During plant litter degradation, cellulose and other labile substrates are degraded first, [ 28 ] followed by lignin depolymerization with increased oxidative enzyme activity and shifts in microbial community composition. In plant cell walls, cellulose and hemicellulose are embedded in a pectin scaffold [ 29 ] that requires pectin-degrading enzymes, such as polygalacturonases and pectin lyases , to weaken the plant cell wall and expose hemicellulose and cellulose to further enzymatic degradation. [ 30 ] Degradation of lignin is catalyzed by enzymes that oxidize aromatic compounds, such as phenol oxidases , peroxidases and laccases. Many fungi have multiple genes encoding lignin-degrading exoenzymes. [ 31 ] The most efficient wood degraders are saprotrophic ascomycetes and basidiomycetes . Traditionally, these fungi are classified as brown rot (Ascomycota and Basidiomycota), white rot (Basidiomycota) and soft rot (Ascomycota), based on the appearance of the decaying material. [ 2 ] Brown rot fungi preferentially attack cellulose and hemicellulose, [ 32 ] while white rot fungi degrade cellulose and lignin. To degrade cellulose, basidiomycetes employ hydrolytic enzymes, such as endoglucanases , cellobiohydrolase and β-glucosidase. [ 33 ] Production of endoglucanases is widely distributed among fungi, and cellobiohydrolases have been isolated in multiple white-rot fungi and in plant pathogens. [ 33 ] β-glucosidases are secreted by many wood-rotting fungi, both white and brown rot fungi, mycorrhizal fungi [ 34 ] and plant pathogens. In addition to cellulose, β-glucosidases can cleave xylose, mannose and galactose. [ 35 ] In white-rot fungi such as Phanerochaete chrysosporium , expression of manganese peroxidase is induced by the presence of manganese, hydrogen peroxide and lignin, [ 36 ] while laccase is induced by the availability of phenolic compounds.
[ 37 ] Production of lignin peroxidase and manganese peroxidase is the hallmark of basidiomycetes and is often used to assess basidiomycete activity, especially in biotechnology applications. [ 38 ] Most white-rot species also produce laccase, a copper-containing enzyme that degrades polymeric lignin and humic substances. [ 39 ] Brown-rot basidiomycetes are most commonly found in coniferous forests, and are so named because they degrade wood to leave a brown residue that crumbles easily. Preferentially attacking hemicellulose in wood, followed by cellulose, these fungi leave lignin largely untouched. [ 40 ] The decayed wood of soft-rot ascomycetes is brown and soft. One soft-rot ascomycete, Trichoderma reesei , is used extensively in industrial applications as a source of cellulases and hemicellulases. [ 41 ] Laccase activity has been documented in T. reesei , in some species of the Aspergillus genus [ 42 ] and in freshwater ascomycetes. [ 43 ] Methods for estimating soil enzyme activities involve sample harvesting prior to analysis, mixing of samples with buffers and the use of substrate. Results can be influenced by sample transport from the field site, storage methods, pH conditions for the assay , substrate concentrations, the temperature at which the assay is run, and sample mixing and preparation. [ 44 ] For hydrolytic enzymes, colorimetric assays using a p-nitrophenol (p-NP)-linked substrate [ 45 ] or fluorometric assays using a 4-methylumbelliferone (MUF)-linked substrate are required. [ 46 ] Oxidative enzymes such as phenol oxidase and peroxidase mediate lignin degradation and humification. [ 47 ] Phenol oxidase activity is quantified by oxidation of L-3,4-dihydroxyphenylalanine (L-DOPA), pyrogallol (1,2,3-trihydroxybenzene), or ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)). Peroxidase activity is measured by running the phenol oxidase assay concurrently with a second assay in which L-DOPA and hydrogen peroxide (H 2 O 2 ) are added to every sample. [ 48 ] The difference in measurements between the two assays is indicative of peroxidase activity. Enzyme assays typically apply substrate proxies that reveal the activities of exo-acting enzymes, which hydrolyze substrates from the terminal position; the activity of endo-acting enzymes, which break down polymers mid-chain, needs to be represented by other substrate proxies. New enzyme assays aim to capture the diversity of enzymes and to assess their potential activity more clearly. [ 49 ] [ 50 ] [ 51 ] With newer technologies available, molecular methods to quantify the abundance of enzyme-coding genes are used to link enzymes with their producers in soil environments. [ 52 ] [ 53 ] Transcriptome analyses are now employed to examine the genetic controls of enzyme expression, [ 54 ] while proteomic methods can reveal the presence of enzymes in the environment and link them to the organisms producing them. [ 55 ] Commonly assayed enzymes include β-glucosidase, esterases, N-acetylglucosaminidase, laccase (polyphenol oxidase, assayed with L-DOPA or ABTS [ 39 ] ) and peroxidase. Industrial applications include laccases, used to soften paper and improve bleaching, [ 61 ] and pectinases, used in the manufacture of yogurt.
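The arithmetic behind such colorimetric assays is a standard-curve inversion. The following Python sketch converts p-NP absorbance readings into a potential activity; the function name, readings, volumes and units are illustrative assumptions, not a standardized protocol:

```python
import numpy as np

def pnp_activity(a_sample, a_blank, std_conc, std_abs,
                 incubation_h, soil_g, assay_ml):
    """Potential hydrolytic activity from a p-NP assay, in µmol p-NP
    released per hour per gram of soil."""
    # Linear standard curve: absorbance = slope * concentration + intercept.
    slope, intercept = np.polyfit(std_conc, std_abs, 1)

    def to_conc(absorbance):  # invert the standard curve -> µmol/ml
        return (absorbance - intercept) / slope

    released = (to_conc(a_sample) - to_conc(a_blank)) * assay_ml  # µmol p-NP
    return released / (incubation_h * soil_g)

# Hypothetical p-NP standards (µmol/ml vs. absorbance) and one sample/blank pair.
std_conc = np.array([0.0, 0.05, 0.10, 0.20])
std_abs = np.array([0.02, 0.31, 0.60, 1.18])
print(pnp_activity(a_sample=0.75, a_blank=0.05, std_conc=std_conc,
                   std_abs=std_abs, incubation_h=1.0, soil_g=1.0, assay_ml=4.0))
```

Fluorometric MUF assays follow the same pattern, with fluorescence readings in place of absorbance.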
https://en.wikipedia.org/wiki/Fungal_extracellular_enzyme_activity
Fungal genomes are among the smallest genomes of eukaryotes . The sizes of fungal genomes range from less than 10 Mbp to hundreds of Mbp. [ 1 ] [ 2 ] The average genome size is approximately 37 Mbp in Ascomycota , 47 Mbp in Basidiomycota and 75 Mbp in Oomycota . [ 1 ] The sizes and gene numbers of the smallest genomes of free-living fungi, such as those of Wallemia ichthyophaga , Wallemia mellicola or Malassezia restricta , are comparable to bacterial genomes . [ 3 ] [ 4 ] [ 5 ] The genome of the extensively researched yeast Saccharomyces cerevisiae contains approximately 12 Mbp and was the first completely sequenced eukaryotic genome. [ 6 ] Due to their compact size, fungal genomes can be sequenced with fewer resources than most other eukaryotic genomes and are thus important models for research. [ 7 ] Some fungi exist as stable haploid, diploid, or polyploid cells; others change ploidy in response to environmental conditions, and aneuploidy is also observed in novel environments or during periods of stress. [ 8 ] The comparison of fungal genomes has been used to study the evolution of fungi, to improve the resolution of the phylogeny of fungal species, and to determine the time of the emergence of, and changes in, species traits and lifestyles, such as the evolution of symbiotic or pathogenic interactions and the evolution of different morphologies. [ 2 ] Major chromosomal rearrangements in fungi were found to be more frequent than in other eukaryotes; macrosynteny in fungi is thus rare. [ 9 ] However, in filamentous ascomycetes genes were found to be conserved within homologous chromosomes, but with randomized orders and orientations, a phenomenon named mesosynteny . [ 9 ] Mesosynteny was also observed in the basidiomycetous genus Rhodotorula . [ 10 ] A comparison of more than 1000 Saccharomyces cerevisiae genomes was used to identify the geographical origin and several domestication events of the species, as well as to map genomic variants onto the species-wide phenotypic landscape of the yeast. [ 11 ] Comparisons of several genomes of the same species led to the discovery of high levels of recombination in species that were previously considered asexual . [ 12 ] [ 13 ] [ 14 ] In the extremely halotolerant black yeast Hortaea werneckii it was discovered that while the species is clonal, both haploid and diploid strains can be found in nature, and the diploid strains are highly heterozygous hybrids, which appear to be stable over large time scales and geographical distances. [ 15 ] While genomic distance measures such as the average nucleotide identity (ANI) are used routinely to distinguish bacterial species, the use of fungal genomes in taxonomy is currently rare. Genome sequences can be used to expand the number of genes used in phylogenetic analyses, but many publicly available genomes lack gene annotations, and popular rDNA markers are typically missing from genomic sequences or are incorrectly assembled. [ 16 ] Suggested overall genome relatedness indices for yeasts include ANI, digital DNA–DNA hybridization (dDDH) and the Kr distance. [ 17 ] Genomic collinearity was suggested as a possible source of markers to resolve species complexes. [ 18 ] Pairwise Kr genomic distances and average nucleotide identity were used in the description of new species within the genera Aureobasidium and Tilletia . [ 19 ] [ 20 ] Alternatively, quick and simple-to-calculate similarity measures based on MinHash also appear to produce usefully accurate estimates of distance between genomes.
For example, a fixed threshold on the genomic distance calculated by tools such as Mash and Dashing was able to determine whether two genomes belong to the same or to different species with over 90% accuracy, indicating that simple measures of genomic distance might be useful to delineate fungal species while still largely supporting the existing fungal taxonomy. [ 21 ]
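The idea behind such MinHash-based measures can be sketched in a few lines of Python. This is a simplified illustration, not the actual implementation of Mash or Dashing; the distance formula d = −(1/k)·ln(2j/(1+j)), with j the estimated Jaccard index, follows the published Mash approach, while the hash function, sketch size and test data are assumptions for demonstration:

```python
import hashlib
import math
import random

def kmer_sketch(seq: str, k: int = 21, sketch_size: int = 1000) -> set[int]:
    """Bottom sketch: the `sketch_size` smallest 64-bit k-mer hashes."""
    hashes = {
        int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(), digest_size=8).digest(), "big")
        for i in range(len(seq) - k + 1)
    }
    return set(sorted(hashes)[:sketch_size])

def mash_distance(a: set[int], b: set[int], k: int = 21) -> float:
    """Distance from the sketch-estimated Jaccard index j: d = -(1/k) ln(2j/(1+j)).
    Comparing the raw union of two bottom sketches is a simplification of the
    proper merged-sketch estimator."""
    j = len(a & b) / len(a | b)
    return 1.0 if j == 0 else -math.log(2 * j / (1 + j)) / k

# Simulate a genome and a relative with ~1.5% effective divergence
# (2% of positions redrawn; a redraw can pick the same base).
random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(50_000))
relative = "".join(c if random.random() > 0.02 else random.choice("ACGT") for c in genome)
print(mash_distance(kmer_sketch(genome), kmer_sketch(relative)))  # close to 0.015
```

A fixed cutoff on this distance can then serve as a crude same-species test of the kind evaluated above.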
https://en.wikipedia.org/wiki/Fungal_genome
Fungal isolates have been researched for decades. Because fungi often exist as thin mycelial monolayers , with no protective shell or immune system and only limited mobility, they have developed the ability to synthesize a variety of unusual compounds for survival. Researchers have discovered fungal isolates with anticancer , antimicrobial , immunomodulatory , and other bio-active properties. The first statins , β-lactam antibiotics , as well as a few important antifungals, were discovered in fungi. BMS manufactures paclitaxel using Penicillium and plant cell fermentation. Fungi can synthesize podophyllotoxin and camptothecin , precursors to etoposide , teniposide , topotecan , and irinotecan . Lentinan , PSK , and PSP are registered anticancer immunologic adjuvants. Irofulven and acylfulvene are anticancer derivatives of illudin S . Clavaric acid is a reversible farnesyltransferase inhibitor. Inonotus obliquus creates the betulinic acid precursor betulin . Flammulina velutipes creates asparaginase . Plinabulin is a fungal isolate derivative currently being researched for anticancer applications. The statins lovastatin and mevastatin , as well as the simvastatin precursor monacolin J, are fungal isolates. Additional fungal isolates that lower cholesterol are zaragozic acids , eritadenine , and nicotinamide riboside . Ciclosporin , mycophenolic acid , mizoribine , FR901483 , and gliotoxin are immunosuppressant fungal isolates. Penicillin , cephalosporins , fusafungine , usnic acid , fusidic acid , fumagillin , brefeldin A , verrucarin A , and alamethicin are antibiotic fungal isolates. The antibiotics retapamulin , tiamulin , and valnemulin are derivatives of the fungal isolate pleuromutilin . Griseofulvin , echinocandins , strobilurin , azoxystrobin , caspofungin , and micafungin are fungal isolates with antifungal activity. The headache medications cafergot , dihydroergotamine , methysergide and methylergometrine , the dementia medications hydergine and nicergoline , and the Parkinson's disease medications lisuride , bromocriptine , cabergoline , and pergolide were all derived from Claviceps isolates. Polyozellus multiplex synthesizes the prolyl endopeptidase inhibitors polyozellin , thelephoric acid , and kynapcins . Boletus badius synthesizes L-theanine . Researchers have discovered other interesting fungal isolates, such as the antihyperglycemic compounds ternatin , aspergillusol A and sclerotiorin , and the antimalarial compounds codinaeopsin , efrapeptins , and antiamoebin . The fungal isolate ergothioneine is actively absorbed and concentrated by the human body via SLC22A4 . Other notable fungal isolates include vitamin D 1 , vitamin D 2 , and vitamin D 4 .
https://en.wikipedia.org/wiki/Fungal_isolate
The fungal loop hypothesis suggests that soil fungi in arid ecosystems connect the metabolic activity of plants and biological soil crusts , which respond to different soil moisture levels. Compiling diverse evidence, such as the limited accumulation of soil organic matter , high phenol oxidative and proteolytic enzyme potentials due to microbial activity, and symbioses between plants and fungi, the fungal loop hypothesis suggests that carbon and nutrients are cycled in biotic pools rather than leached or effluxed to the atmosphere during and between pulses of precipitation . [ 1 ] The fungal loop hypothesis is similar in concept to the microbial loop hypotheses proposed for oceans or soil, but applies specifically to arid ecosystems, which have characteristics not found elsewhere. In arid ecosystems, there is low total precipitation and high variability in the size of rain events (pulses) within and between years. Differences in how plants and decomposers respond to these pulses of precipitation affect biogeochemical cycling within the ecosystem. For example, extracellular enzymes present in the soil become active nearly instantaneously after any moisture pulse, while production in microbes and plants has lag times of various durations and requires pulse events of different sizes. [ 2 ] Arid ecosystems also often have patchy distributions of vascular plants with bare patches of soil in between. Such vegetation reduces radiation and wind speed at the soil surface, which reduces evaporation and thus creates favorable microhabitats for other species. In addition, as plants senesce and become litter , carbon and nitrogen contents increase in the top layers of soil under plant canopies. [ 3 ] Together, these effects create "islands of fertility" where plants are distributed. [ 4 ] In the bare soil between plants, biological soil crusts are often present. Crust microorganisms can fix carbon and nitrogen from the atmosphere as well as trap nitrogen-rich dust. [ 5 ] Therefore, biological soil crusts contribute to the carbon content and nutrient resources of soil surfaces where plant cover is low. Litter produced by plants must be broken down by decomposers into nutrients available to organisms. Both bacteria and fungi produce extracellular enzymes to break down large molecules into compounds that can be taken up by plants. [ 6 ] However, fungi can metabolize at higher temperatures and lower water potentials than bacteria. Therefore, in arid ecosystems where precipitation falls during the hot season, fungi are likely the most important contributors to nutrient cycling, owing to their temperature tolerance and ability to persist during long dry periods. In several sites in the southwestern US, denitrification and nitrification were shown to be mostly carried out by fungi. [ 1 ] In arid ecosystems, many primary producers, such as grasses and biological soil crusts, form symbioses with fungi. Mycorrhizal fungi colonizing plant roots acquire carbon directly from plant roots, provide phosphorus sources to plants, and have been shown to transport water. [ 7 ] Dark septate endophytes (DSE) are also common in many aridland plants and are hypothesized to perform similar roles to mycorrhizal fungi. [ 8 ] Fungi are an integral part of the biological soil crust community, and similar fungal taxa have been found in biological soil crusts and plant root zones, which suggests hyphal connections between these two spatially separate communities.
[ 9 ] The fungal loop hypothesis suggests that biological soil crusts and associated microbes are able to become active after smaller water pulses than vascular plants, which require more water to become active. Fungi are able to take up the nutrients produced by biological soil crusts at lower water potentials and keep them in the biotic pool until larger water pulses allow plants to become active and take up those nutrients. Active plants are then able to contribute excess carbon from photosynthesis to their fungal symbionts. Therefore, root-associated fungi symbiotic with plants and biological soil crusts connect the spatially and temporally distinct activities of crusts and plants. Evidence of conditions favorable to the existence of a fungal loop is readily available. [ 1 ] However, direct experimental tests of the hypothesis in arid ecosystems are still relatively rare. One study used isotopic labeling to trace where nitrate and glutamate moved when provided to biological soil crust or to grass foliage some distance away. The authors found that organic and inorganic N could be dispersed up to 100 cm per day, bidirectionally, between plants and crust. [ 10 ] Other research has shown evidence of bidirectional transport in soil-fungal-plant connections redistributing water in arid ecosystems. [ 11 ]
https://en.wikipedia.org/wiki/Fungal_loop_hypothesis
A fungating lesion is a skin lesion that fungates , that is, becomes like a fungus in its appearance or growth rate. It is marked by ulcerations (breaks on the skin or surface of an organ) and necrosis (death of living tissue) and usually presents a foul odor. This kind of lesion may occur in many types of cancer , including breast cancer , melanoma , and squamous cell carcinoma , especially in advanced disease. The characteristic malodorous smell is caused by dimethyl trisulfide . [ 1 ] It is usually not a fungal infection but rather a neoplastic growth with necrosing portions. There is weak evidence that a 6% miltefosine solution, applied topically to superficial fungating breast lesions of less than 1 cm in size in patients who previously received radiotherapy, surgery, hormonal therapy or chemotherapy for their breast cancer, may slow disease progression. [ 2 ] This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute .
https://en.wikipedia.org/wiki/Fungating_lesion
Information theory is the mathematical study of the quantification , storage , and communication of information . The field was established and formalized by Claude Shannon in the 1940s, [ 1 ] though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley . It is at the intersection of electronic engineering , mathematics , statistics , computer science , neurobiology , physics , and electrical engineering . [ 2 ] [ 3 ] A key measure in information theory is entropy . Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process . For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information , channel capacity , error exponents , and relative entropy . Important sub-fields of information theory include source coding , algorithmic complexity theory , algorithmic information theory and information-theoretic security . Applications of fundamental topics of information theory include source coding/ data compression (e.g. for ZIP files ), and channel coding/ error detection and correction (e.g. for DSL ). Its impact has been crucial to the success of the Voyager missions to deep space, [ 4 ] the invention of the compact disc , the feasibility of mobile phones and the development of the Internet and artificial intelligence . [ 5 ] [ 6 ] [ 3 ] The theory has also found applications in other areas, including statistical inference , [ 7 ] cryptography , neurobiology , [ 8 ] perception , [ 9 ] signal processing , [ 2 ] linguistics , the evolution [ 10 ] and function [ 11 ] of molecular codes ( bioinformatics ), thermal physics , [ 12 ] molecular dynamics , [ 13 ] black holes , quantum computing , information retrieval , intelligence gathering , plagiarism detection , [ 14 ] pattern recognition , anomaly detection , [ 15 ] the analysis of music , [ 16 ] [ 17 ] art creation , [ 18 ] imaging system design, [ 19 ] study of outer space , [ 20 ] the dimensionality of space , [ 21 ] and epistemology . [ 22 ] Information theory studies the transmission, processing, extraction, and utilization of information . Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication , in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem , showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. [ 8 ] Coding theory is concerned with finding explicit methods, called codes , for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. 
In the latter case, it took many years to find the methods Shannon's work proved were possible. [ 23 ] [ 24 ] A third class of information theory codes is cryptographic algorithms (both codes and ciphers ). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis , [ 25 ] such as the unit ban . The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor . [ 26 ] Shannon came to be known as the "father of information theory". [ 27 ] [ 28 ] [ 29 ] Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush . [ 29 ] Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs , all implicitly assuming events of equal probability. Harry Nyquist 's 1924 paper, Certain Factors Affecting Telegraph Speed , contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant ), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley 's 1928 paper, Transmission of Information , uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S , where S is the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit , which has since sometimes been called the hartley in his honor as a unit, scale or measure of information. In 1940, Alan Turing used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs . Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory . In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that "the fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point". With it came the ideas of the information entropy and redundancy of a source, and its relevance through the source coding theorem ; the mutual information and channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem ; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel ; and the bit , a new way of seeing the most fundamental unit of information. Information theory is based on probability theory and statistics, where quantified information is usually described in terms of bits. Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy , which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable. [ 30 ]
Another useful concept is mutual information , defined on two random variables, which describes the measure of information in common between those variables and can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon , based on the binary logarithm . Other units include the nat , which is based on the natural logarithm , and the decimal digit , which is based on the common logarithm . In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0 . This is justified because lim_{p→0+} p log p = 0 for any logarithmic base. Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H , in units of bits (per symbol), is given by H = − Σ_i p_i log_2 ( p_i ) , where p_i is the probability of occurrence of the i -th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e , where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys ) per symbol. Intuitively, the entropy H ( X ) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N ⋅ H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N ⋅ H . If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If 𝕏 is the set of all messages { x_1 , ..., x_n } that X could be, and p ( x ) is the probability of some x ∈ 𝕏 , then the entropy, H , of X is defined as H ( X ) = E_X [ I ( x ) ] = − Σ_{x ∈ 𝕏} p ( x ) log p ( x ) . [ 31 ] (Here, I ( x ) = − log p ( x ) is the self-information , which is the entropy contribution of an individual message, and E_X is the expected value .)
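As a concrete illustration of these definitions, here is a minimal Python sketch; the distributions are arbitrary toy examples:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy H = -sum p_i log(p_i), with 0 log 0 taken as 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(entropy([0.5, 0.5]))          # fair coin: 1 bit per symbol
print(entropy([1/6] * 6))           # fair die: log2(6) ≈ 2.585 bits
print(entropy([1.0, 0.0]))          # certain outcome: 0 bits, no information
print(entropy([0.5, 0.5], math.e))  # same coin in nats: ln 2 ≈ 0.693
```

The fair die carries more entropy than the fair coin, matching the coin/die comparison in the introduction.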
A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p ( x ) = 1/ n ; i.e., most unpredictable, in which case H ( X ) = log n . The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: H_b ( p ) = − p log_2 p − (1 − p ) log_2 (1 − p ) . The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: ( X , Y ) . This implies that if X and Y are independent , then their joint entropy is the sum of their individual entropies. For example, if ( X , Y ) represents the position of a chess piece ( X the row and Y the column), then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. Despite similar notation, joint entropy should not be confused with cross-entropy . The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y ) is the average conditional entropy over Y : H ( X | Y ) = − Σ_{x,y} p ( x , y ) log p ( x | y ) . [ 32 ] Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that H ( X | Y ) = H ( X , Y ) − H ( Y ) . Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication, where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by I ( X ; Y ) = Σ_{x,y} p ( x , y ) log [ p ( x , y ) / ( p ( x ) p ( y ) ) ] , where the summand is the pointwise mutual information , SI ( x , y ). A basic property of the mutual information is that I ( X ; Y ) = H ( X ) − H ( X | Y ) . That is, knowing Y , we can save an average of I ( X ; Y ) bits in encoding X compared to not knowing Y . Mutual information is symmetric : I ( X ; Y ) = I ( Y ; X ) . Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X : I ( X ; Y ) = E_Y [ D_KL ( p ( X | Y = y ) ‖ p ( X ) ) ] . In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y . This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: I ( X ; Y ) = D_KL ( p ( X , Y ) ‖ p ( X ) p ( Y ) ) . Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ 2 test : mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. The Kullback–Leibler divergence (or information divergence , information gain , or relative entropy ) is a way of comparing two distributions: a "true" probability distribution p ( X ) , and an arbitrary probability distribution q ( X ) . If we compress data in a manner that assumes q ( X ) is the distribution underlying some data, when, in reality, p ( X ) is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined as D_KL ( p ‖ q ) = Σ_x p ( x ) log [ p ( x ) / q ( x ) ] .
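Both quantities are short computations over discrete distributions, as this Python sketch shows; the joint distribution is an arbitrary toy example:

```python
import math

def kl_divergence(p, q, base=2):
    """D(p || q): extra bits per datum paid for modeling p-distributed data with q."""
    return sum(pi * math.log(pi / qi, base) for pi, qi in zip(p, q) if pi > 0)

def mutual_information(joint, base=2):
    """I(X;Y) computed as D_KL(joint || product of marginals)."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(
        pxy * math.log(pxy / (px[i] * py[j]), base)
        for i, row in enumerate(joint)
        for j, pxy in enumerate(row)
        if pxy > 0
    )

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.74 bits of extra cost per datum
joint = [[0.4, 0.1],   # X and Y agree 80% of the time...
         [0.1, 0.4]]
print(mutual_information(joint))              # ~0.28 bits learned about X from Y
```

Note that kl_divergence([0.5, 0.5], [0.9, 0.1]) and kl_divergence([0.9, 0.1], [0.5, 0.5]) give different values, illustrating the asymmetry discussed next.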
Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p ( x ) . If Alice knows the true distribution p ( x ) , while Bob believes (has a prior ) that the distribution is q ( x ) , then Bob will be more surprised than Alice, on average, upon seeing the value of X . The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him. Directed information , I ( X^n → Y^n ) , is an information theory measure that quantifies the information flow from the random process X^n = { X_1 , X_2 , …, X_n } to the random process Y^n = { Y_1 , Y_2 , …, Y_n } . The term directed information was coined by James Massey and is defined as I ( X^n → Y^n ) = Σ_{i=1}^{n} I ( X^i ; Y_i | Y^{i−1} ) , where I ( X^i ; Y_i | Y^{i−1} ) is the conditional mutual information I ( X_1 , X_2 , ..., X_i ; Y_i | Y_1 , Y_2 , ..., Y_{i−1} ) . In contrast to mutual information, directed information is not symmetric. I ( X^n → Y^n ) measures the information bits that are transmitted causally from X^n to Y^n . Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, [ 33 ] [ 34 ] the capacity of discrete memoryless networks with feedback, [ 35 ] gambling with causal side information, [ 36 ] compression with causal side information, [ 37 ] real-time control communication settings, [ 38 ] [ 39 ] and statistical physics. [ 40 ] Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information . Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user.
In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal.

Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.

Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, ..., X_1); that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n); that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result, [ 41 ] and either one defines the information rate. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding.

Communications over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel, described by the following simple model. Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by: C = max_f I(X; Y), where the maximum is taken over all possible choices of f(x).

This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity.

In practice many channels have memory. Namely, at time i the channel is given by the conditional probability P(y_i | x_i, x_{i−1}, x_{i−2}, ..., x_1, y_{i−1}, y_{i−2}, ..., y_1).
It is often more convenient to use the notation x^i = (x_i, x_{i−1}, x_{i−2}, ..., x_1), in which case the channel becomes P(y_i | x^i, y^{i−1}). In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not feedback is available [ 33 ] [ 42 ] (if there is no feedback, the directed information equals the mutual information).

Fungible information is the information for which the means of encoding is not important. [ 43 ] Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information. [ 44 ]

Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.

Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and so for cryptographic uses.
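As a concrete instance of the channel capacity defined above, the binary symmetric channel with crossover probability p has a uniform maximising input distribution, and the maximum works out to C = 1 − H_b(p) bits per channel use, with H_b the binary entropy function given earlier. A minimal sketch (Python, standard library only; the sample values of p are arbitrary):

```python
# Capacity of the binary symmetric channel with crossover probability p.
# For this channel the maximising input distribution is uniform and the
# capacity is C = 1 - H_b(p) bits per channel use.
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.11, 0.5):
    print(f"p={p}: C={bsc_capacity(p):.3f} bits per channel use")
```

At p = 0.5 the output is independent of the input and the capacity is zero, in line with the noisy-channel coding theorem stated above.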
One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. [ 45 ]

Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. [ 46 ] : 171 [ 47 ] : 137 Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." [ 46 ] : 91 Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. [ 48 ]

Quantitative information theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. [ 49 ] In this context, an information-theoretical measure is defined on the basis of a reentrant process organization, i.e. the synchronization of neurophysiological activity between groups of neuronal populations; examples include functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH) [ 50 ]) and effective information (Tononi's integrated information theory (IIT) of consciousness [ 51 ] [ 52 ] [ 53 ]). Alternatively, the minimization of free energy is measured on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), an information-theoretical measure which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ]).

Information theory also has applications in the search for extraterrestrial intelligence, [ 59 ] black holes, [ 60 ] bioinformatics, [ 61 ] and gambling. [ 62 ] [ 63 ]
https://en.wikipedia.org/wiki/Fungible_information
Fungicides are pesticides used to kill parasitic fungi or their spores. [ 1 ] [ 2 ] Fungi can cause serious damage in agriculture, resulting in losses of yield and quality. Fungicides are used both in agriculture and to fight fungal infections in animals. Fungicides are also used to control oomycetes, which are not taxonomically/genetically fungi, although they share similar methods of infecting plants.

Fungicides can be contact, translaminar or systemic. Contact fungicides are not taken up into the plant tissue and protect only the plant where the spray is deposited. Translaminar fungicides redistribute the fungicide from the upper, sprayed leaf surface to the lower, unsprayed surface. Systemic fungicides are taken up and redistributed through the xylem vessels. Few fungicides move to all parts of a plant. Some are locally systemic, and some move upward. [ 3 ] [ 4 ] Most fungicides that can be bought retail are sold in liquid form, the active ingredient being present at 0.08% in weaker concentrates, and as high as 0.5% for less potent fungicides. Fungicides in powdered form are usually around 90% sulfur.

Some major threats to agriculture (and the associated diseases) are ascomycetes (e.g. powdery mildew), basidiomycetes (e.g. various rusts), deuteromycetes, and oomycetes (e.g. downy mildew and potato late blight). [ 1 ]

Like other pesticides, fungicides are numerous and diverse. This complexity has led to diverse schemes for classifying fungicides. Classifications are based on inorganic (elemental sulfur and copper salts) vs organic compounds, chemical structures (dithiocarbamates vs phthalimides), and, most successfully, mechanism of action (MOA). These respective classifications reflect the evolution of the underlying science.

Traditional fungicides are simple inorganic compounds like sulfur [ 5 ] and copper salts. While cheap, they must be applied repeatedly and are relatively ineffective. [ 2 ] Other active ingredients in fungicides include neem oil, rosemary oil, jojoba oil, the bacterium Bacillus subtilis, and the beneficial fungus Ulocladium oudemansii. In the 1930s dithiocarbamate-based fungicides, the first organic compounds used for this purpose, became available. These include ferbam, ziram, zineb, maneb, and mancozeb. These compounds are non-specific and are thought to inhibit cysteine-based protease enzymes. Similarly nonspecific are the N-substituted phthalimides; members include captafol, captan, and folpet. Chlorothalonil is also non-specific. [ 2 ]

Specific fungicides target a particular biological process in the fungus. Some fungicides target succinate dehydrogenase, a metabolically central enzyme. Fungi of the class Basidiomycetes, which attack cereals, were the initial focus of these fungicides.

Some of the most common fungal crop pathogens are known to suffer from mycoviruses, and it is likely that mycoviruses are as common for fungi as viruses are for plants and animals, although they are not as well studied. Given the obligately parasitic nature of mycoviruses, it is likely that all of these are detrimental to their hosts, and thus are potential biocontrols/biofungicides. [ 7 ]

Doses that provide the most control of the disease also provide the largest selection pressure to acquire resistance. [ 8 ] In some cases, the pathogen evolves resistance to multiple fungicides, a phenomenon known as cross resistance. These additional fungicides typically belong to the same chemical family, act in the same way, or have a similar mechanism for detoxification.
Sometimes negative cross-resistance occurs, where resistance to one chemical class of fungicides increases sensitivity to a different chemical class of fungicides. This has been seen with carbendazim and diethofencarb. Resistance to two chemically different fungicides can also arise through separate mutation events. For example, Botrytis cinerea is resistant to both azoles and dicarboximide fungicides.

A common mechanism for acquiring resistance is alteration of the target enzyme. For example, the fungus causing Black Sigatoka, an economically important disease of banana, is resistant to the QoI fungicides due to a single nucleotide change resulting in the replacement of one amino acid (glycine) by another (alanine) in the target protein of the QoI fungicides, cytochrome b. [ 9 ] It is presumed that this disrupts the binding of the fungicide to the protein, rendering the fungicide ineffective. Upregulation of target genes can also render the fungicide ineffective. This is seen in DMI-resistant strains of Venturia inaequalis. [ 10 ]

Resistance to fungicides can also be developed by efficient efflux of the fungicide out of the cell. Septoria tritici has developed multiple drug resistance using this mechanism. The pathogen has five ABC-type transporters with overlapping substrate specificities that together work to pump toxic chemicals out of the cell. [ 11 ] In addition to the mechanisms outlined above, fungi may also develop metabolic pathways that circumvent the target protein, or acquire enzymes that enable the metabolism of the fungicide to a harmless substance.

Fungicides that are at risk of losing their potency due to resistance include strobilurins such as azoxystrobin. [ 12 ] Cross-resistance can occur because the active ingredients share a common mode of action. The industry-sponsored Fungicide Resistance Action Committee (FRAC), whose parent organization is CropLife International, [ 13 ] advises on the use of fungicides in crop protection and classifies the available compounds according to their chemical structures and mechanisms of action so as to manage the risks of resistance developing. [ 14 ] The 2024 FRAC poster of fungicides includes all the chemicals mentioned in this article. [ 15 ]

Fungicides pose risks for humans. [ 16 ] Fungicide residues have been found on food for human consumption, mostly from post-harvest treatments. [ 17 ] Some fungicides are dangerous to human health, such as vinclozolin, which has now been removed from use. [ 18 ] Ziram is also a fungicide that is toxic to humans with long-term exposure, and fatal if ingested. [ 19 ] A number of fungicides are also used in human health care.
https://en.wikipedia.org/wiki/Fungicide
Fungiculture is the cultivation of fungi such as mushrooms. Cultivating fungi can yield foods (mostly mushrooms), medicine, construction materials and other products. A mushroom farm is involved in the business of growing fungi. The word is also commonly used to refer to the practice of cultivation of fungi by animals such as leafcutter ants, termites, ambrosia beetles, and marsh periwinkles.

As fungi, mushrooms require different conditions than plants for optimal growth. Plants develop through photosynthesis, a process that converts atmospheric carbon dioxide into carbohydrates, especially cellulose. While sunlight provides an energy source for plants, mushrooms derive all of their energy and growth materials from their growth medium, through biochemical decomposition processes. This does not mean that light is irrelevant, since some fungi use light as a signal for fruiting. [ 1 ] [ 2 ] However, all the materials for growth must already be present in the growth medium. [ 3 ] Mushrooms grow well at relative humidity levels of around 95–100%, and substrate moisture levels of 50 to 75%. [ 1 ]

Instead of seeds, mushrooms reproduce through spores. Spores can be contaminated with airborne microorganisms, which will interfere with mushroom growth and prevent a healthy crop. Mycelium, or actively growing mushroom culture, is placed on a substrate, usually sterilized grains such as rye or millet, and induced to grow into those grains. This is called inoculation. Inoculated grains (or plugs) are referred to as spawn. Spores are another inoculation option, but are less developed than established mycelium. Since they are also easily contaminated, they are only manipulated in laboratory conditions with a laminar flow cabinet.

All mushroom growing techniques require the correct combination of humidity, temperature, substrate (growth medium) and inoculum (spawn or starter culture). Wild harvests, outdoor log inoculation and indoor trays all provide these elements.

Mushrooms can be grown on logs placed outdoors in stacks or piles, as has been done for hundreds of years. [ 4 ] Sterilization is not performed as part of this method. Since production may be unpredictable and seasonal, less than 5% of commercially sold mushrooms are produced this way. [ 5 ] Here, tree logs are inoculated with spawn, then allowed to grow as they would in wild conditions. Fruiting, or pinning, is triggered by seasonal changes, or by briefly soaking the logs in cool water. [ 4 ] Shiitake and oyster mushrooms have traditionally been produced using the outdoor log technique, although controlled techniques such as indoor tray growing or artificial logs made of compressed substrate have been substituted. [ 5 ] [ 6 ] [ 7 ] Shiitake mushrooms that are grown under a forested canopy are considered non-timber forest products. [ 8 ]

In the Northeastern United States, shiitake mushrooms can be cultivated on a variety of hardwood logs including oak, American beech, sugar maple and hophornbeam. Softwood should not be used to cultivate shiitake mushrooms, because the resin of softwoods will often inhibit the growth of the shiitake mushroom, making softwood impractical as a growing substrate. [ 9 ] To produce shiitake mushrooms, 1 metre (3-foot) hardwood logs with a diameter ranging between 10–15 cm (4–6 in) are inoculated with the mycelium of the shiitake fungus.
Inoculation is completed by drilling holes in hardwood logs, filling the holes with cultured shiitake mycelium or inoculum, and then sealing the filled holes with hot wax. After inoculation, the logs are placed under the closed canopy of a coniferous stand and are left to incubate for 12 to 15 months. Once incubation is complete, the logs are soaked in water for 24 hours. Seven to ten days after soaking, shiitake mushrooms will begin to fruit and can be harvested once fully ripe. [ 10 ]

Indoor mushroom cultivation for the purpose of producing a commercial crop was first developed in caves in France. The caves provided a stable environment (temperature, humidity) all year round. The technology for a controlled growth medium and fungal spawn was brought to the UK in the late 1800s in caves created by quarrying near areas such as Bath, Somerset. [ 11 ] Growing indoors allows the ability to control light, temperature and humidity while excluding contaminants and pests. This enables consistent production, regulated by spawning cycles. [ 12 ] By the mid-twentieth century this was typically accomplished in windowless, purpose-built buildings, for large-scale commercial production. Indoor tray growing is the most common commercial technique, followed by containerized growing. The tray technique provides the advantages of scalability and easier harvesting.

There are a series of stages in the farming of the most widely used commercial mushroom species, Agaricus bisporus: composting, fertilizing, spawning, casing, pinning, and cropping. [ 13 ] [ 14 ] In outline:
- Composting and fertilizing: add fertilizer and other additives; remove unwanted NH3.
- Spawning: the substrate must be below 27 to 29 °C (80 to 85 °F) to avoid damaging mycelia; [ 14 ] allow the mycelium to grow through the substrate and form a colony. The time needed depends on substrate dimensions and composition; the stage is finished when the mycelium has propagated through the entire substrate layer.
- Casing: add a top covering or dressing to the colonized substrate; fertilizing with nitrogen increases yields. The casing layer induces pinning.
- Pinning: adjusting temperature, humidity and CO2 will also affect the number of pins and mushroom size.

Complete sterilization is not required or performed during composting. In most cases, a pasteurization step is included to allow some beneficial microorganisms to remain in the growth substrate. [ 13 ] Specific time spans and temperatures required during stages 3–6 will vary with species and variety. Substrate composition and the geometry of the growth substrate will also affect the ideal times and temperatures.

Pinning is the trickiest part for a mushroom grower, since a combination of carbon dioxide (CO2) concentration, temperature, light, and humidity triggers mushrooms towards fruiting. [ 1 ] [ 2 ] [ 13 ] Up until the point when rhizomorphs or mushroom "pins" appear, the mycelium is an amorphous mass spread throughout the growth substrate, unrecognizable as a mushroom. Carbon dioxide concentration becomes elevated during the vegetative growth phase, when the mycelium is sealed in a gas-resistant plastic barrier or bag which traps gases produced by the growing mycelium. To induce pinning, this barrier is opened or ruptured. CO2 concentration then decreases from about 0.08% to 0.04%, the ambient atmospheric level. [ 13 ]

Oyster mushroom farming is rapidly expanding around many parts of the world. Oyster mushrooms are grown in substrate that comprises sterilized wheat, paddy straw and even used coffee grounds, [ 15 ] and they do not require much space compared to other crops.
The per unit production and profit extracted are comparatively higher than for other crops. [ 16 ] Oyster mushrooms can also be grown indoors from kits, most commonly in the form of a box containing growing medium with spores. [ 17 ] [ 18 ]

Mushroom production converts the raw natural ingredients into mushroom tissue, most notably the carbohydrate chitin. [ 1 ] An ideal substrate will contain enough nitrogen and carbohydrate for rapid mushroom growth. Common bulk substrates are mixtures of several such ingredients. [ 12 ] [ 14 ] Mushrooms metabolize complex carbohydrates in their substrate into glucose, which is then transported through the mycelium as needed for growth and energy. While glucose is used as a main energy source, its concentration in the growth medium should not exceed 2%; for ideal fruiting, closer to 1% is best. [ 1 ]

Parasitic insects, bacteria and other fungi all pose risks to indoor production. Sciarid or phorid flies may lay eggs in the growth medium, which hatch into maggots and damage developing mushrooms during all growth stages. Bacterial blotch caused by Pseudomonas bacteria and patches of Trichoderma green mold also pose a risk during the fruiting stage. Pesticides and sanitizing agents are available to use against these infestations. [ 12 ] [ 23 ] Biological controls for sciarid and phorid flies have also been proposed. [ 24 ] Trichoderma green mold can seriously affect mushroom production, as in the mid-1990s in Pennsylvania, where it led to significant crop losses. The contaminating fungus originated from poor hygiene by workers and poorly prepared growth substrates. [ 25 ] Mites in the genus Histiostoma have been found in mushroom farms. Histiostoma gracilipes feeds on mushrooms directly, while H. heinemanni is suspected to spread diseases. [ 26 ] [ 27 ]

Pennsylvania is the top-producing mushroom state in the United States, and celebrates September as "Mushroom Month". [ 29 ] The borough of Kennett Square is a historical and present leader in mushroom production. Pennsylvania currently leads production of Agaricus-type mushrooms, [ 30 ] followed by California, Florida and Michigan. [ 31 ] Several other states also produce mushrooms. [ 32 ] The lower Fraser Valley of British Columbia, which includes Vancouver, has a significant number of producers, about 60 as of 1998. [ 33 ]

Oyster mushroom cultivation has recently expanded in many parts of Europe. Many entrepreneurs now find it a profitable business that can be started with a small investment. Italy, with 785,000 tonnes, and the Netherlands, with 307,000 tonnes, are among the top ten mushroom-producing countries in the world. The world's biggest producer of mushroom spawn [ 34 ] is also situated in France. According to research on the production and marketing of mushrooms, [ 35 ] Poland, the Netherlands, Belgium and Lithuania are the major mushroom-exporting countries in Europe, while countries such as the UK, Germany, France and Russia are considered the major importers.

Oyster mushroom cultivation is a sustainable business in which different natural resources can be used as a substrate. The number of people becoming interested in this field is rapidly increasing. The possibility of creating a viable business in urban environments by using coffee grounds is appealing to many entrepreneurs. Since mushroom cultivation is not a subject available at school, most urban farmers have learned it by doing.
Mastering mushroom cultivation takes time and is costly in missed revenue. For this reason there are numerous companies in Europe specialized in mushroom cultivation that offer training for entrepreneurs and organize events to build community and share knowledge. They also show the potential positive impact of this business on the environment. [ 36 ] [ 37 ] Courses about mushroom cultivation can be attended in many countries around Europe. There is education available for growing mushrooms on coffee grounds, [ 38 ] [ 39 ] more advanced training for larger scale farming, [ 40 ] spawn production and lab work [ 41 ] and growing facilities. [ 42 ] Events are organised at different intervals. The Mushroom Learning Network gathers once a year in Europe. The International Society for Mushroom Science gathers once every five years somewhere in the world.
https://en.wikipedia.org/wiki/Fungiculture
Fungistatics are anti-fungal agents that inhibit the growth of fungi without killing them. [ 1 ] The term fungistatic may be used as both a noun and an adjective. Fungistatics have applications in agriculture, the food industry, the paint industry, and medicine.

Fluconazole is a fungistatic antifungal medication that is administered orally or intravenously. It is used to treat a variety of fungal infections, especially Candida infections of the vagina ("yeast infections"), mouth, throat, and bloodstream. It is also used to prevent infections in people with weak immune systems, including those with neutropenia due to cancer chemotherapy, transplant patients, and premature babies. Its mechanism of action involves interfering with synthesis of the fungal cell membrane.

Itraconazole (R51211), invented in 1984, is a triazole fungistatic antifungal agent prescribed to patients with fungal infections. The drug may be given orally or intravenously. Itraconazole has a broader spectrum of activity than fluconazole (but not as broad as voriconazole or posaconazole). In particular, it is active against Aspergillus, which fluconazole is not. The mechanism of action of itraconazole is the same as that of the other azole antifungals: it inhibits the fungal-mediated synthesis of ergosterol.

Sodium benzoate and potassium sorbate are both examples of fungistatic substances that are widely used in the preservation of food and beverages. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Fungistatics
Fungivory or mycophagy is the process of organisms consuming fungi. Many different organisms have been recorded to gain their energy from consuming fungi, including birds, mammals, insects, plants, amoebas, gastropods, nematodes, bacteria and other fungi. Some of these, which eat only fungi, are called fungivores, whereas others eat fungi as only part of their diet, being omnivores.

Many mammals eat fungi, but only a few feed exclusively on fungi; most are opportunistic feeders and fungi only make up part of their diet. [ 1 ] At least 22 species of primate, including humans, bonobos, colobines, gorillas, lemurs, macaques, mangabeys, marmosets and vervet monkeys, are known to feed on fungi. Most of these species spend less than 5% of their feeding time eating fungi, and fungi therefore form only a small part of their diet. Some species spend longer foraging for fungi, and fungi account for a greater part of their diet; buffy-tufted marmosets spend up to 12% of their time consuming sporocarps, Goeldi's monkeys spend up to 63% of their time doing so, and the Yunnan snub-nosed monkey spends up to 95% of its feeding time eating lichens. Fungi are comparatively very rare in tropical rainforests compared to other food sources such as fruit and leaves, and they are also distributed more sparsely and appear unpredictably, making them a challenging source of food for Goeldi's monkeys. [ 2 ]

Many fungi produce poisons that deter animals from feeding on them: even today humans die from eating poisonous fungi. A natural consequence of this is the virtual absence of obligate vertebrate fungivores, with the diprotodont family Potoridae being the major exception. One of the few extant vertebrate fungivores is the northern flying squirrel, [ 3 ] but it is believed that in the past there were numerous vertebrate fungivores and that toxin development greatly lessened their number and forced these species to abandon fungi or diversify. [ 4 ]

Many terrestrial gastropod mollusks are known to feed on fungi. This is the case for several species of slugs from distinct families. Among them are the Philomycidae (e.g. Philomycus carolinianus and Philomycus flexuolaris) and Ariolimacidae (Ariolimax californianus), which respectively feed on slime molds (myxomycetes) and mushrooms (basidiomycetes). [ 5 ] Species of mushroom-producing fungi used as a food source by slugs include milk-caps, Lactarius spp., the oyster mushroom, Pleurotus ostreatus, and the penny bun, Boletus edulis. Other species belonging to different genera, such as Agaricus, Pleurocybella and Russula, are also eaten by slugs. Slime molds used as a food source by slugs include Stemonitis axifera and Symphytocarpus flaccidus. [ 5 ] Some slugs are selective towards certain parts or developmental stages of the fungi they eat, though this behavior varies greatly. Depending on the species and other factors, slugs may eat only fungi at specific stages of development. In other cases, whole mushrooms can be eaten, without any trace of selectivity. [ 5 ]

In 2008, Euprenolepis procera, a species of ant from the rainforests of South East Asia, was found to harvest mushrooms from the rainforest. Witte & Maschwitz found that their diet consisted almost entirely of mushrooms, representing a previously undiscovered feeding strategy in ants. [ 6 ] Several beetle families, including the Erotylidae, Endomychidae, and certain Tenebrionidae, [ 7 ] are also specialists on fungi, though they may eat other foods occasionally.
Other insects, like fungus gnats and scuttle flies, [ 8 ] utilize fungi at their larval stage. Feeding on fungi is crucial for dead wood eaters, as this is the only way to acquire nutrients not available in nutritionally scarce dead wood. [ 9 ] [ 10 ]

Jays (Perisoreus) are believed to be the first birds in which mycophagy was recorded. Canada jays (P. canadensis), Siberian jays (P. infaustus) and Oregon jays (P. obscurus) have all been recorded to eat mushrooms, with the stomachs of Siberian jays containing mostly fungi in the early winter. The ascomycete Phaeangium lefebvrei, found in north Africa and the Middle East, is eaten by migrating birds in winter and early spring, mainly by species of lark (Alaudidae). Bedouin hunters have been reported to use P. lefebvrei as bait in traps to attract birds. [ 11 ] The ground-foraging superb lyrebird Menura novaehollandiae has also been found to opportunistically forage on fungi. [ 12 ] Fungi are known to form an important part of the diet of the southern cassowary (Casuarius casuarius) of Australia. Bracket fungi have been found in their droppings throughout the year, and Simpson, in the Australasian Mycological Newsletter, suggested it is likely they also eat species of Agaricales and Pezizales, but these have not been found in their droppings since they disintegrate when they are eaten. Emus (Dromaius novaehollandiae) will eat immature Lycoperdon and Bovista fungi if presented to them, as will brush turkeys (Alectura lathami) if offered Mycena, suggesting that species of Megapodiidae may feed opportunistically on mushrooms. [ 13 ]

Mycoparasitism, a form of parasitism, occurs when one fungus feeds on other fungi; our knowledge of it in natural environments is very limited. [ 14 ] Collybia grow on dead mushrooms. The fungal genus Trichoderma produces enzymes such as chitinases which degrade the cell walls of other fungi. [ 15 ] Trichoderma are able to detect other fungi and grow towards them; they then bind to the hyphae of other fungi, using lectins on the host fungi as a receptor, and form an appressorium. Once this is formed, Trichoderma inject toxic enzymes into the host, and probably peptaibol antibiotics, which create holes in the cell wall, allowing Trichoderma to grow inside the host and feed. [ 16 ] Trichoderma are able to digest sclerotia, durable structures which contain food reserves, which is important if they are to control pathogenic fungi in the long term. [ 15 ] Trichoderma species have been recorded as protecting crops from Botrytis cinerea, Rhizoctonia solani, Alternaria solani, Glomerella graminicola, Phytophthora capsici, Magnaporthe grisea and Colletotrichum lindemuthianum, although this protection may not be entirely due to Trichoderma digesting these fungi, but also to them improving plant disease resistance indirectly. [ 16 ]

Bacterial mycophagy is a term coined in 2005 to describe the ability of some bacteria to "grow at the expense of living fungal hyphae". In a 2007 review in the New Phytologist this definition was adapted to include only bacteria which play an active role in gaining nutrition from fungi, excluding those that feed off passive secretions by fungi, or off dead or damaged hyphae. [ 17 ] The majority of our knowledge in this area relates to interactions between bacteria and fungi in the soil and in or around plants; little is known about interactions in marine and freshwater habitats, or those occurring on or inside animals.
It is not known what effects bacterial mycophagy has on fungal communities in nature. [ 17 ] There are three mechanisms by which bacteria feed on fungi: they kill fungal cells, cause them to secrete more material out of their cells, or enter the cells to feed internally, and they are categorised according to these habits.

Those that kill fungal cells are called necrotrophs; the molecular mechanisms of this feeding are thought to overlap considerably with those of bacteria that feed on fungi after they have died naturally. Necrotrophs may kill the fungi by digesting their cell wall or by producing toxins which kill fungi, such as tolaasin, produced by Pseudomonas tolaasii. Both of these mechanisms may be required, since fungal cell walls are highly complex, so many different enzymes are needed to degrade them, and because experiments demonstrate that bacteria that produce toxins cannot always infect fungi. It is likely that these two systems act synergistically, with the toxins killing or inhibiting the fungi and exoenzymes degrading the cell wall and digesting the fungus. Examples of necrotrophs include Staphylococcus aureus, which feed on Cryptococcus neoformans; Aeromonas caviae, which feed on Rhizoctonia solani, Sclerotium rolfsii and Fusarium oxysporum; and some myxobacteria, which feed on Cochliobolus miyabeanus and Rhizoctonia solani. [ 17 ]

Bacteria which manipulate fungi to produce more secretions, which they in turn feed off, are called extracellular biotrophs; many bacteria feed on fungal secretions but do not interact directly with the fungi, and these are called saprotrophs rather than biotrophs. Extracellular biotrophs could alter fungal physiology in three ways: they alter the fungi's development, the permeability of their membranes (including the efflux of nutrients) and their metabolism. The precise signalling molecules that are used to achieve these changes are unknown, but it has been suggested that auxins (better known for their role as a plant hormone) and quorum sensing molecules may be involved. Bacteria have been identified that manipulate fungi in these ways, for example mycorrhiza helper bacteria (MHBs) and Pseudomonas putida, but it remains to be demonstrated whether the changes they cause are directly beneficial to the bacteria. In the case of MHBs, which increase infection of plant roots by mycorrhizal fungi, they may benefit because the fungi gain nutrition from the plant and in turn secrete more sugars. [ 17 ]

The third group, which enter living fungal cells, are called endocellular biotrophs. Some of these are transmitted vertically, whereas others are able to actively invade and subvert fungal cells. The molecular interactions involved are mostly unknown. Many endocellular biotrophs, for example some Burkholderia species, belong to the β-proteobacteria, which also contains species that live inside the cells of mammals and amoebae. Some of them, for example Candidatus Glomeribacter gigasporarum, which colonises the spores of Gigaspora margarita, have reduced genome sizes, indicating that they have become entirely dependent on the metabolic functions of the fungal cells in which they live. When all the endocellular bacteria inside G. margarita were removed, the fungus grew differently and was less fit, suggesting that some bacteria may also provide services to the fungi they live in. [ 17 ]

The ciliate family Grossglockneridae, including the species Grossglockneria acuta, feed exclusively on fungi.
Grossglockneria acuta first attaches itself to a hypha or sporangium via a feeding tube, and a ring-shaped structure around 2 μm in diameter, possibly consisting of degraded cell wall material, is then observed to appear on the fungus. G. acuta then feeds through the hole in the cell wall for, on average, 10 minutes, before detaching itself and moving away. The precise mechanism of feeding is not known, but it conceivably involves enzymes including acid phosphatases, cellulases and chitinases. Microtubules are visible in the feeding tube, as are possible reserves of cell membrane, which may be used to form food vacuoles filled with the cytoplasm of the fungus, via endocytosis, which are then transported back into G. acuta. The holes made by G. acuta bear some similarities to those made by amoebae, but unlike amoebae, G. acuta never engulfs the fungus. [ 18 ]

Around 90% of land plants live in symbiosis with mycorrhizal fungi, [ 19 ] in which fungi gain sugars from plants and plants gain nutrients from the soil via the fungi. Some species of plant have evolved to manipulate this symbiosis, so that they no longer give fungi the sugars that they produce and instead gain sugars from the fungi, a process called myco-heterotrophy. Some plants are only dependent on fungi as a source of sugars during the early stages of their development; these include most of the orchids, as well as many ferns and lycopods. Others are dependent on this food source for their entire lifetime, including some orchids and Gentianaceae, and all species of Monotropaceae and Triuridaceae. [ 20 ] Those that are dependent on fungi but still photosynthesise are called mixotrophs, since they gain nutrition in more than one way; by gaining a significant amount of sugars from fungi, they are able to grow in the deep shade of forests. Examples include the orchids Epipactis, Cephalanthera and Platanthera, and the tribe Pyroleae of the family Ericaceae. [ 19 ] Others, such as Monotropastrum humile, no longer photosynthesise and are totally dependent on fungi for nutrients. [ 20 ] Around 230 such species exist, and this trait is thought to have evolved independently on five occasions outside of the orchid family. Some individuals of the orchid species Cephalanthera damasonium are mixotrophs, but others do not photosynthesise. [ 21 ] Because the fungi that myco-heterotrophic plants gain sugars from in turn gain them from plants that do photosynthesise, these plants are considered indirect parasites of other plants. [ 20 ] The relationship between orchids and orchid mycorrhizae has been suggested to be somewhere between predation and parasitism. [ 21 ]

The precise mechanisms by which these plants gain sugars from fungi are not known and have not been demonstrated scientifically. Two pathways have been proposed: they may either degrade fungal biomass, particularly the fungal hyphae which penetrate plant cells in a similar manner to arbuscular mycorrhizae, or absorb sugars from the fungi by disrupting their cell membranes through mass flow. To prevent the sugars returning to the fungi, they must compartmentalise the sugars or convert them into forms which the fungi cannot use. [ 20 ]

Three insect lineages, beetles, ants and termites, independently evolved the ability to farm fungi between 40 and 60 million years ago.
In a similar way to how human societies became more complex after the development of plant-based agriculture, the same occurred in these insect lineages when they evolved this ability, and these insects are now of major importance in ecosystems. [ 22 ] The methods that insects use to farm fungi share fundamental similarities with human agriculture. Firstly, insects inoculate a particular habitat or substrate with fungi, much in the same way as humans plant seeds in fields. Secondly, they cultivate the fungi by regulating the growing environment to try to improve the growth of the fungus, as well as protecting it from pests and diseases. Thirdly, they harvest the fungus when it is mature and feed on it. Lastly, they are dependent on the fungi they grow, in the same way that humans are dependent on crops. [ 23 ]

Ambrosia beetles, for example Austroplatypus incompertus, farm ambrosia fungi inside of trees and feed on them. The mycangia (organs which carry fungal spores) of ambrosia beetles contain various species of fungus, including species of Ambrosiomyces, Ambrosiella, Ascoidea, Ceratocystis, Dipodascus, Diplodia, Endomycopsis, Monacrosporium and Tuberculariella. [ 24 ] The ambrosia fungi are only found in the beetles and their galleries, suggesting that they and the beetles have an obligate symbiosis. [ 22 ]

Around 330 species of termites in twelve genera of the subfamily Macrotermitinae cultivate a specialised fungus in the genus Termitomyces. The fungus is kept in a specialised part of the nest, in fungus cones. Worker termites eat plant matter, producing faecal pellets which they continuously place on top of the cone. [ 25 ] The fungus grows into this material and soon produces immature mushrooms (nodules), a rich source of protein, sugars and enzymes, which the worker termites eat. The nodules also contain indigestible asexual spores, meaning that the faecal pellets produced by the workers always contain spores of the fungus that colonise the plant material that they defaecate. The Termitomyces also fruits, forming mushrooms above ground, which mature at the same time that the first workers emerge from newly formed nests. The mushrooms produce spores that are wind dispersed, and through this method, new colonies acquire a fungal strain. [ 23 ] In some species, the genetic variation of the fungus is very low, suggesting that spores of the fungus are transmitted vertically from nest to nest, rather than coming from wind-dispersed spores. [ 26 ]

Around 220 described species, and more undescribed species, of ants in the tribe Attini cultivate fungi. They are only found in the New World and are thought to have evolved in the Amazon Rainforest, where they are most diverse today. For these ants, farmed fungi are the only source of food on which their larvae are raised, and they are also an important food for adults. Queen ants carry a small part of the fungus in small pouches in their mouthparts when they leave the nest to mate, allowing them to establish a new fungus garden when they form a new nest. Different lineages cultivate fungi on different substrates; those that evolved earlier do so on a wide range of plant matter, whereas leaf cutter ants are more selective, mainly using only fresh leaves and flowers. The fungi are members of the families Lepiotaceae and Pterulaceae. Other fungi in the genus Escovopsis parasitise the gardens, and antibiotic-producing bacteria also inhabit the gardens.
[ 23 ] [ 27 ] The marine snail Littoraria irrorata, which lives in the salt marshes of the southeastern United States, feeds on fungi that it encourages to grow. It creates and maintains wounds on the grass Spartina alterniflora, which are then infected by fungi, probably of the genera Phaeosphaeria and Mycosphaerella, which are the preferred diet of the snail. The snail also deposits faeces on the wounds that it creates; because the faeces are rich in nitrogen and fungal hyphae, they encourage the growth of the fungi. Juvenile snails raised on uninfected leaves do not grow and are more likely to die, indicating the importance of the fungi in the diet of L. irrorata. [ 28 ]
https://en.wikipedia.org/wiki/Fungivore
Fungus pockets are any of various convergently evolved inoculum -retention and -cultivation organs in a wide range of insect taxa. They are generally [ 1 ] [ 2 ] divided into mycangia (or "mycetangia") [ 3 ] and infrabuccal pockets . Fungus pockets are found in ambrosia beetles , [ 4 ] [ 3 ] bark beetles , termites and attine ants . [ 1 ] [ 2 ] This insect -related article is a stub . You can help Wikipedia by expanding it . This fungus -related article is a stub . You can help Wikipedia by expanding it . This ecology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Fungus_pocket
In architecture, the funicular curve (also funicular polygon, funicular shape, from the Latin fūniculus, "slender rope" [ 1 ]) is an approach used to design compression-only structural forms (like masonry arches) using an equivalence between a rope with hanging weights and a standing arch with its load. This duality was noticed by Robert Hooke in 1675 ("as hangs the flexible line, so, but inverted, will stand the rigid arch"). [ 2 ] If the hanging rope carries just its own weight (in this case it is usually called a "chain" and is equivalent to a free-standing arch with no external load), the resulting curve is a catenary. [ 3 ]

In graphic statics, a funicular polygon is a graphic method of finding the line of action for a combination of forces applied to a solid body at different points, a complement to the force polygon used to obtain the value and direction of the resultant force. [ 4 ] Both polygons were introduced by Pierre Varignon (Nouvelle Mecanique ou Statique, 1725) and became the basis of graphic statics in the second half of the 19th century. [ 5 ]

Multiple ropes with weights can be connected together, forming a hanging chain model of a complete structure. Uses of this "outlandish" method, complicated in comparison even with other pre-computer techniques such as graphic statics, were rare, yet interesting. Usually the technique was used for planar structures as well as ones with rotational symmetry, like domes. The method can also be applied to arbitrary three-dimensional structures, as first shown by Gaudí while designing the church of Colònia Güell. Gaudí built a 1:10 scale hanging chain model of the church that did not survive. He also used a smaller copy that was at the time stored in the Sagrada Família basilica. This small model, on exhibit at the museum of the basilica, is often misinterpreted as a model of the basilica itself. [ 3 ]

This architecture-related article is a stub. You can help Wikipedia by expanding it.
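Hooke's inversion can be made concrete with a small numerical sketch. The Python code below is only an illustration under assumed conditions (invented point loads, unit horizontal spacing, level supports, a chosen horizontal pull H); it applies the standard funicular-polygon equilibrium condition, in which the slope change at each node equals the node load divided by the horizontal force, and then inverts the sag to obtain the compression-only arch:

```python
# Funicular polygon of a chain hung between two level supports and
# loaded with point weights.  Equilibrium at each node requires
# H * (change in slope) = node load, so the chain is a polygon whose
# slopes follow from the cumulative load.  Inverting the sagging shape
# gives the standing, compression-only arch (Hooke's duality).

def funicular_heights(loads, H=10.0, dx=1.0):
    """Node heights of the hanging chain (negative values = sag)."""
    n_seg = len(loads) + 1
    # Cumulative load carried past each segment boundary.
    cum = [0.0]
    for w in loads:
        cum.append(cum[-1] + w)
    # Left-support vertical reaction chosen so the chain returns to the
    # starting height at the far support.
    v0 = sum(cum) / n_seg
    y, ys = 0.0, [0.0]
    for c in cum:
        y += dx * (c - v0) / H    # constant slope along each segment
        ys.append(y)
    return ys

hang = funicular_heights([1.0, 1.0, 3.0, 1.0, 1.0])  # invented loads
arch = [-y for y in hang]        # "so, but inverted, will stand the arch"
print(arch)
```

Increasing H flattens the polygon; every such shape is funicular for the same set of loads, which is why a family of valid arch profiles exists for one loading.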
https://en.wikipedia.org/wiki/Funicular_curve
Funky caching is the generation, display and storage of dynamic content when a requested static web page resource isn't available. The name is based on the idea of treating the web server, serving static pages, as a cache. However, unlike common reverse caches, the funky cache is part of the web server software, and has the ability to dynamically generate this content. It assumes that all pages are potentially generatable on demand. If they are not, the conventional HTTP 404 error is returned, as usual.

The overall advantage is relatively small compared to a conventional cache, and architecturally it is a poor design. However, it does allow small sites with no separate cache layer to achieve some of the advantages of caching (albeit somewhat inflexibly). This is why it became popular at one time for small, single-server dynamic web sites, particularly those built within the PHP community, where the technique originated. A drawback of the technique is that it requires the web server process to have write access to the web content space. For security reasons, this is not usually required or permitted.

It is also known as the ErrorDocument trick, Smarter Caching and Rasmus' Trick, [ 1 ] the latter name in honor of Rasmus Lerdorf, creator of the PHP programming language, who was allegedly the first to present this mechanism (though it is also attributed to Stig Bakken [ 2 ]). One common usage is the replacement of the HTTP 404 ErrorDocument with a dynamic script. The technique can also be seen as a variation of the cache-aside pattern where, instead of being read from the data store, the data is generated dynamically, and where the implementation spans an architecture (in this case the Web server and the Web application language) instead of being implemented in a single system. [ 3 ]

This computing article is a stub. You can help Wikipedia by expanding it.
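A minimal sketch of the idea follows, written in Python with the standard library rather than the original Apache/PHP ErrorDocument setup; the docroot path and the make_page() generator are hypothetical stand-ins. On a miss, the handler generates the page, writes it into the static docroot so the next request is served as a plain file, and returns it; pages that cannot be generated fall through to a conventional 404:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

DOCROOT = "site"  # static docroot; the server process needs write access

def make_page(path):
    """Hypothetical dynamic generator; return None for unknown pages."""
    if path.startswith("/products/"):
        return f"<html><body>Generated page for {path}</body></html>"
    return None  # not generatable -> conventional 404

class FunkyCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        rel = self.path.lstrip("/") or "index.html"
        target = os.path.join(DOCROOT, rel)
        if not os.path.exists(target):
            body = make_page(self.path)     # "cache miss": try to generate
            if body is None:
                self.send_error(404)
                return
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with open(target, "w") as f:    # store as a static file so the
                f.write(body)               # next hit never runs this code
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        with open(target, "rb") as f:
            self.wfile.write(f.read())

if __name__ == "__main__":
    HTTPServer(("", 8000), FunkyCacheHandler).serve_forever()
```

Note how the sketch also exhibits the drawback described above: the server process must be allowed to write into the content space, and stored pages are never invalidated unless something else deletes them.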
https://en.wikipedia.org/wiki/Funky_caching
Fura-2, an aminopolycarboxylic acid, is a ratiometric fluorescent dye which binds to free intracellular calcium. [ 1 ] It was the first widely used dye for calcium imaging, and remains very popular. Fura-2 is excited at 340 nm and 380 nm, and regardless of the calcium level it emits at 510 nm; the ratio of the emission intensities obtained at the two excitation wavelengths is directly related to the amount of intracellular calcium. The use of the ratio automatically cancels out confounding variables, such as variable dye concentration and cell thickness, making Fura-2 one of the most appreciated tools for quantifying calcium levels. The high photon yield of Fura-2 allowed the first real-time (video rate) measurements of calcium inside living cells in 1986. [ 2 ] More recently, genetically encoded calcium indicators based on spectral variants of the green fluorescent protein, such as Cameleons, [ 3 ] have supplemented the use of Fura-2 and other small-molecule dyes for calcium imaging, but Fura-2 remains faster.
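The conversion from the measured 340/380 ratio to a concentration is conventionally done with the Grynkiewicz calibration equation, [Ca2+] = Kd · (R − Rmin)/(Rmax − R) · (Sf/Sb). This equation is standard for Fura-2 but is not stated in the article, and every number in the sketch below is an invented illustration value:

```python
# Sketch of the standard Grynkiewicz et al. (1985) calibration that
# turns a Fura-2 340/380 excitation ratio into a calcium concentration.
# All constants here are illustrative assumptions, not from the article.
def ca_from_ratio(R, R_min, R_max, sf_sb, Kd_nM=140.0):
    """[Ca2+] in nM from the measured 340/380 ratio R.

    R_min, R_max : ratios measured at zero and at saturating calcium
    sf_sb        : 380 nm intensity of Ca-free dye / Ca-saturated dye
    Kd_nM        : Fura-2 dissociation constant (assumed ~140 nM)
    """
    return Kd_nM * (R - R_min) / (R_max - R) * sf_sb

# Example: a mid-range ratio with made-up calibration constants gives a
# plausible resting-level concentration of roughly 90 nM.
print(ca_from_ratio(R=1.2, R_min=0.3, R_max=8.0, sf_sb=5.0))
```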
https://en.wikipedia.org/wiki/Fura-2
Fura-2-acetoxymethyl ester, often abbreviated Fura-2AM, is a membrane-permeant derivative of the ratiometric calcium indicator Fura-2, used in biochemistry to measure cellular calcium concentrations by fluorescence. [ 1 ] When added to cells, Fura-2AM crosses cell membranes, and once inside the cell the acetoxymethyl groups are removed by cellular esterases. Removal of the acetoxymethyl esters regenerates Fura-2, the pentacarboxylate calcium indicator. Measurement of Ca2+-induced fluorescence at both 340 nm and 380 nm allows calcium concentrations to be calculated from the 340/380 ratios. The use of the ratio automatically cancels out certain variables, such as local differences in Fura-2 concentration or cell thickness, that would otherwise lead to artifacts when attempting to image calcium concentrations in cells. This biochemistry article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Fura-2-acetoxymethyl_ester
A furanose is a collective term for carbohydrates that have a chemical structure that includes a five-membered ring system consisting of four carbon atoms and one oxygen atom. The name derives from its similarity to the oxygen heterocycle furan, but the furanose ring does not have double bonds. [ 1 ] The furanose ring is a cyclic hemiacetal of an aldopentose or a cyclic hemiketal of a ketohexose. A furanose ring structure consists of four carbon atoms and one oxygen atom, with the anomeric carbon to the right of the oxygen. The highest-numbered chiral carbon (typically to the left of the oxygen in a Haworth projection) determines whether the structure has a D-configuration or an L-configuration. In an L-configuration furanose, the substituent on the highest-numbered chiral carbon points downwards out of the plane, and in a D-configuration furanose it points upwards. The furanose ring will have either the alpha or the beta configuration, depending on which direction the anomeric hydroxy group is pointing. In a D-configuration furanose, the alpha configuration has the hydroxy pointing down, and beta has the hydroxy pointing up. It is the opposite in an L-configuration furanose. Typically, the anomeric carbon undergoes mutarotation in solution, and the result is an equilibrium mixture of α and β configurations.
https://en.wikipedia.org/wiki/Furanose
A furnace roller or furnace roll is a heat resistant roller used in roller hearth furnaces and other industrial equipment. They are used to allow products to easily move into, through, and out of furnaces , kilns and ovens . [ 1 ] Furnace rollers consist of a cylinder (solid or hollow) or disks mounted via bearings on a central shaft . They can be powered, actively moving items through the furnace, or unpowered. Furnace rollers can be made in many different sizes and from different materials to suit various applications and temperature ranges. Typical materials include nickel chromium and molybdenum . [ 2 ] [ 3 ] Furnace rollers are used in a wide variety of industries including the production of steel and ceramics , and the application of heat cured coatings . This industry -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Furnace_roller
A profilometer is a measuring instrument used to measure a surface's profile, in order to quantify its roughness. Critical dimensions such as step height, curvature and flatness are computed from the surface topography. While the historical notion of a profilometer was a device, similar to a phonograph, that measures a surface as the surface is moved relative to the contact profilometer's stylus, this notion is changing with the emergence of numerous non-contact profilometry techniques. Non-scanning technologies measure the surface topography within a single camera acquisition, so XYZ scanning is no longer needed; as a consequence, dynamic changes of topography can be measured in real time. Contemporary profilometers measure not only static topography but also dynamic topography; such systems are described as time-resolved profilometers.

Optical methods [ 1 ] [ 2 ] include interferometry-based methods such as digital holographic microscopy, vertical scanning interferometry/white light interferometry, phase shifting interferometry, and differential interference contrast microscopy (Nomarski microscopy); focus detection methods such as intensity detection, focus variation, differential detection, the critical angle method, the astigmatic method, the Foucault method, and confocal microscopy; and pattern projection methods such as fringe projection, Fourier profilometry, Moiré, and pattern reflection methods. Contact and pseudo-contact methods [ 1 ] [ 2 ] include the stylus profilometer (mechanical profilometer), [ 3 ] atomic force microscopy, [ 4 ] and scanning tunneling microscopy.

In a stylus profilometer, a diamond stylus is moved vertically into contact with a sample and then moved laterally across the sample for a specified distance and specified contact force. A profilometer can measure small surface variations in vertical stylus displacement as a function of position. A typical profilometer can measure small vertical features ranging in height from 10 nanometres to 1 millimetre. The height position of the diamond stylus generates an analog signal which is converted into a digital signal, stored, analyzed, and displayed. The radius of the diamond stylus ranges from 20 nanometres to 50 μm, and the horizontal resolution is controlled by the scan speed and data signal sampling rate. The stylus tracking force can range from less than 1 to 50 milligrams.

Advantages of contact profilometers include acceptance, surface independence, and resolution; it is a direct technique, with no modeling required. Most of the world's surface finish standards are written for contact profilometers. To follow the prescribed methodology, this type of profilometer is often required. Contacting the surface is often an advantage in dirty environments, where non-contact methods can end up measuring surface contaminants instead of the surface itself. Because the stylus is in contact with the surface, this method is not sensitive to surface reflectance or color. The stylus tip radius can be as small as 20 nanometres, significantly better than white-light optical profiling. Vertical resolution is typically sub-nanometer as well.

An optical profilometer is a non-contact instrument providing much of the same information as a stylus-based profilometer. Many different techniques are currently employed, such as laser triangulation (triangulation sensor), confocal microscopy (used for profiling very small objects), coherence scanning interferometry, and digital holography. Advantages of optical profilometers are speed, reliability and spot size.
For small steps and for 3D scanning, because the non-contact profilometer does not touch the surface, the scan speed is dictated by the light reflected from the surface and the speed of the acquisition electronics. For large steps, however, a 3D scan on an optical profiler can be much slower than a 2D scan on a stylus profiler. Optical profilometers do not touch the surface and therefore cannot be damaged by surface wear or careless operators. Many non-contact profilometers are solid-state, which tends to reduce the required maintenance significantly. The spot size, or lateral resolution, of optical methods ranges from a few micrometres down to sub-micrometre. Non-scanning technologies such as digital holographic microscopy enable 3D topography measurement in real time. Because the 3D topography is measured from a single camera acquisition, the acquisition rate is limited only by the camera frame rate; some systems measure topography at 1000 fps. Such time-resolved systems enable measurement of topography changes, such as the healing of smart materials, or measurement of moving specimens. Time-resolved profilometers can be combined with a stroboscopic unit to measure MEMS vibrations in the MHz range; the stroboscopic unit provides the excitation signal to the MEMS and the trigger signal to the light source and camera. Time-resolved profilometers are also robust against vibrations, since unlike scanning methods their acquisition time is in the milliseconds range. No vertical calibration of a scanning mechanism is needed, because the vertical measurement does not depend on one; in digital holographic microscopy the vertical measurement has an intrinsic calibration based on the laser source wavelength. This suits samples that are not static, where the specimen topography responds to an external stimulus; with on-the-fly measurement, the topography of a moving sample is acquired with a short exposure time, and MEMS vibration measurement can be accomplished when the system is combined with a stroboscopic unit. Optical fiber-based optical profilometers scan surfaces with optical probes which send light interference signals back to the profilometer detector via an optical fiber. Fiber-based probes can be physically located hundreds of meters away from the detector enclosure without signal degradation. Additional advantages of fiber-based optical profilometers are flexibility, long profile acquisition, ruggedness, and ease of incorporation into industrial processes. With the small diameter of certain probes, surfaces can be scanned even inside hard-to-reach spaces, such as narrow crevices or small-diameter tubes. [ 5 ] Because these probes generally acquire one point at a time and at high sample speeds, acquisition of long (continuous) surface profiles is possible. Scanning can take place in hostile environments, including very hot or cryogenic temperatures, or in radioactive chambers, while the detector is located at a distance, in a human-safe environment. [ 6 ] Fiber-based probes are easily installed in-process, such as above moving webs or mounted onto a variety of positioning systems. A furrow profilometer is used for the measurement of the cross-sectional geometry of furrows and corrugations, and is important in furrow assessments. [ 7 ] Profilometers are widely used to evaluate surface finish, texture, and roughness in various industrial processes.
They detect deviations from desired surface specifications and support quality control in manufacturing sectors. In the fabrication of semiconductor devices, profilometers are essential for analyzing surface topography, step height, and thin film structures. They assist in monitoring etching, deposition, and lithography processes. Profilometry is employed in the inspection of implants and biomedical surfaces, ensuring biocompatibility and functional surface characteristics. Profilometers are crucial for inspecting precision optical components such as laser mirrors, prisms, and super-polished glass flats, where sub-nanometer roughness is often required. Advanced profilometers enable high-resolution 3D surface characterization in fields like MEMS (Micro-Electro-Mechanical Systems), microfluidics, and nanotechnology, where accurate measurement of micro- and nano-scale structures is essential. Profilometers are increasingly used in the characterization of textured surfaces in photovoltaic (PV) cells. Accurate surface topography measurements help optimize light trapping and surface passivation in solar cells, improving their efficiency and durability. [ 8 ]
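The roughness parameters such instruments report follow directly from the digitized height profile. As a minimal sketch of the arithmetic only (not of any particular instrument's firmware, and omitting the form removal and waviness filtering that standards such as ISO 4287 prescribe before roughness is reported), the following Python snippet computes the common Ra and Rq parameters from a synthetic stylus trace:

```python
# Minimal sketch: Ra and Rq from a height profile. The profile is synthetic;
# real instruments apply form removal and waviness filtering first.
import numpy as np

def roughness_params(z_um):
    """Ra (arithmetic mean) and Rq (RMS) deviation of a height profile,
    in the profile's own units, relative to the mean line."""
    z = np.asarray(z_um, dtype=float)
    dev = z - z.mean()                 # deviation from the mean line
    ra = np.abs(dev).mean()            # arithmetic average roughness
    rq = np.sqrt((dev**2).mean())      # root-mean-square roughness
    return ra, rq

# Synthetic 1 mm trace sampled every 1 um: long-period waviness plus fine noise
x = np.arange(0, 1000.0)                                   # position, um
rng = np.random.default_rng(0)
z = 0.5 * np.sin(2 * np.pi * x / 250) + 0.05 * rng.standard_normal(x.size)
print(roughness_params(z))
```

On this synthetic trace the Ra value is dominated by the long-period sinusoidal term, which illustrates why the standards require filtering waviness out of the profile before quoting roughness.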
https://en.wikipedia.org/wiki/Furrow_profilometer
In mathematics, particularly in number theory, Hillel Furstenberg's proof of the infinitude of primes is a topological proof that the integers contain infinitely many prime numbers. When examined closely, the proof is less a statement about topology than a statement about certain properties of arithmetic sequences. [ 1 ] [ 2 ] Unlike Euclid's classical proof, Furstenberg's proof is a proof by contradiction. The proof was published in 1955 in the American Mathematical Monthly while Furstenberg was still an undergraduate student at Yeshiva University. Define a topology on the integers $\mathbb{Z}$, called the evenly spaced integer topology, by declaring a subset $U \subseteq \mathbb{Z}$ to be an open set if and only if it is a union of arithmetic sequences $S(a, b)$ for $a \neq 0$, or is empty (which can be seen as a nullary union (empty union) of arithmetic sequences), where $$S(a, b) = \{ an + b : n \in \mathbb{Z} \} = a\mathbb{Z} + b.$$ Equivalently, $U$ is open if and only if for every $x$ in $U$ there is some non-zero integer $a$ such that $S(a, x) \subseteq U$. The axioms for a topology are easily verified: the empty set is open by definition and $\mathbb{Z} = S(1, 0)$ is open; any union of unions of arithmetic sequences is again a union of arithmetic sequences, so arbitrary unions of open sets are open; and if $x$ lies in open sets $U$ and $V$ with $S(a, x) \subseteq U$ and $S(b, x) \subseteq V$, then $S(ab, x) \subseteq U \cap V$, so finite intersections of open sets are open. This topology has two notable properties: first, since any non-empty open set contains an infinite arithmetic sequence, no non-empty finite set can be open; equivalently, the complement of a non-empty finite set cannot be closed. Second, each basis set $S(a, b)$ is closed as well as open, since its complement is the finite union of the other residue classes $S(a, b + j)$ for $j = 1, \dots, a - 1$, each of which is open. The only integers that are not integer multiples of prime numbers are −1 and +1, i.e. $$\mathbb{Z} \setminus \{-1, +1\} = \bigcup_{p\ \mathrm{prime}} S(p, 0).$$ Now, by the first topological property, the set on the left-hand side cannot be closed. On the other hand, by the second topological property, the sets $S(p, 0)$ are closed. So, if there were only finitely many prime numbers, then the set on the right-hand side would be a finite union of closed sets, and hence closed. This would be a contradiction, so there must be infinitely many prime numbers. The evenly spaced integer topology on $\mathbb{Z}$ is the topology induced by the inclusion $\mathbb{Z} \subset \hat{\mathbb{Z}}$, where $\hat{\mathbb{Z}}$ is the profinite integer ring with its profinite topology. It is homeomorphic to the rational numbers $\mathbb{Q}$ with the subspace topology inherited from the real line, [ 3 ] which makes it clear that any finite subset of it, such as $\{-1, +1\}$, cannot be open.
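The set identity at the heart of the proof is easy to spot-check numerically. The following Python sketch is an illustration only — it verifies the identity on a finite window of integers, not the topological argument itself:

```python
# Spot-check: removing every multiple of every prime from a finite window
# of integers leaves exactly -1 and +1.
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

N = 1000
window = set(range(-N, N + 1))
# Union of the residue classes S(p, 0), restricted to the window
multiples = {k for p in primes_up_to(N) for k in window if k % p == 0}
print(sorted(window - multiples))  # [-1, 1]
```

Every integer of absolute value at least 2 in the window has a prime factor no larger than N, and 0 is a multiple of every prime, so only −1 and +1 survive.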
https://en.wikipedia.org/wiki/Furstenberg's_proof_of_the_infinitude_of_primes
Furuno Electric Co., Ltd. ( 古野電気株式会社 , Furuno Denki Kabushiki-gaisha ) (commonly known as Furuno ) is a Japanese electronics company whose main products are marine electronics, including marine radar systems, fish finders, and navigational instruments. The company also manufactures global positioning systems and medical equipment, and entered the weather radar market in 2013. [ 2 ] Furuno Electric Shokai was founded in Nagasaki, Japan in 1948. The same year, Furuno commercialized the world's first practical fish finder. Manufacturing continued to ramp up as the decade came to a close, and by the mid-1950s Furuno was producing a range of marine equipment, including early examples of commercial marine radars. In 1973, Furuno created an early iteration of satellite positioning receivers for vessels at sea. Later that decade, Furuno entered the United States market, establishing a US headquarters as Furuno USA. [ 3 ] Following this expansion and continued growth, Furuno continued to broaden its marine radar product line. In 2009, Furuno acquired San Francisco-based eRide, Inc., a fabless semiconductor company. [ 4 ] Following this acquisition, in 2013, Furuno introduced an X-band weather radar, the smallest of its kind. [ 5 ] In 2015, the company's GNSS receiver modules were used in radio-controlled quadcopters. [ 6 ] Furuno's marine electronic devices have been featured in Licence to Kill (1989) as product placement. [ 7 ]
https://en.wikipedia.org/wiki/Furuno
The Furuta pendulum, or rotational inverted pendulum, consists of a driven arm which rotates in the horizontal plane and a pendulum attached to that arm which is free to rotate in the vertical plane. It was invented in 1992 at Tokyo Institute of Technology by Katsuhisa Furuta [ 1 ] [ 2 ] [ 3 ] [ 4 ] and his colleagues. It is an example of a complex nonlinear oscillator of interest in control system theory. The pendulum is underactuated and extremely non-linear due to the gravitational forces and the coupling arising from the Coriolis and centripetal forces. Since then, dozens, possibly hundreds, of papers and theses have used the system to demonstrate linear and non-linear control laws. [ 5 ] [ 6 ] [ 7 ] The system has also been the subject of two texts. [ 8 ] [ 9 ] Despite the great deal of attention the system has received, very few publications successfully derive (or use) the full dynamics. Many authors [ 3 ] [ 8 ] have considered the rotational inertia of the pendulum only for a single principal axis (or neglected it altogether [ 9 ]). In other words, the inertia tensor has only a single non-zero element (or none), and the remaining two diagonal terms are zero. It is possible to find a pendulum system where the moment of inertia in one of the three principal axes is approximately zero, but not two. A few authors [ 2 ] [ 4 ] [ 6 ] [ 10 ] [ 11 ] [ 12 ] have considered slender symmetric pendulums where the moments of inertia for two of the principal axes are equal and the remaining moment of inertia is zero. Of the dozens of publications surveyed for this article, only a single conference paper [ 13 ] and journal paper [ 14 ] were found to include all three principal inertial terms of the pendulum. Both papers used a Lagrangian formulation but each contained minor (presumably typographical) errors. The equations of motion presented here are an extract from a paper [ 15 ] on the Furuta pendulum dynamics derived at the University of Adelaide. Consider the rotational inverted pendulum mounted to a DC motor as shown in Fig. 1. The DC motor is used to apply a torque $\tau_1$ to Arm 1. The link between Arm 1 and Arm 2 is not actuated but free to rotate. The two arms have lengths $L_1$ and $L_2$. The arms have masses $m_1$ and $m_2$, whose centres of mass lie at distances $l_1$ and $l_2$ from the respective points of rotation. The arms have inertia tensors $\boldsymbol{J}_1$ and $\boldsymbol{J}_2$ (each about the centre of mass of the respective arm). Each rotational joint is viscously damped with damping coefficients $b_1$ and $b_2$, where $b_1$ is the damping provided by the motor bearings and $b_2$ is the damping arising from the pin coupling between Arm 1 and Arm 2. A right-hand coordinate system has been used to define the inputs, states and the Cartesian coordinate systems 1 and 2. The coordinate axes of Arm 1 and Arm 2 are the principal axes, such that the inertia tensors are diagonal. The angular rotation of Arm 1, $\theta_1$, is measured in the horizontal plane, where a counter-clockwise direction (when viewed from above) is positive.
The angular rotation of Arm 2, $\theta_2$, is measured in the vertical plane, where a counter-clockwise direction (when viewed from the front) is positive. When the arm is hanging down in the stable equilibrium position, $\theta_2 = 0$. The torque the servo-motor applies to Arm 1, $\tau_1$, is positive in a counter-clockwise direction (when viewed from above). A disturbance torque, $\tau_2$, is experienced by Arm 2, where a counter-clockwise direction (when viewed from the front) is positive. Before deriving the dynamics of the system, a number of simplifying assumptions must be made. The non-linear equations of motion are given by [ 15 ]

$$\ddot{\theta}_1 \left( J_{1zz} + m_1 l_1^2 + m_2 L_1^2 + (J_{2yy} + m_2 l_2^2)\sin^2(\theta_2) + J_{2xx}\cos^2(\theta_2) \right) + \ddot{\theta}_2\, m_2 L_1 l_2 \cos(\theta_2) - m_2 L_1 l_2 \sin(\theta_2)\, \dot{\theta}_2^2 + \dot{\theta}_1 \dot{\theta}_2 \sin(2\theta_2)\left( m_2 l_2^2 + J_{2yy} - J_{2xx} \right) + b_1 \dot{\theta}_1 = \tau_1$$

and

$$\ddot{\theta}_1\, m_2 L_1 l_2 \cos(\theta_2) + \ddot{\theta}_2 \left( m_2 l_2^2 + J_{2zz} \right) + \tfrac{1}{2}\,\dot{\theta}_1^2 \sin(2\theta_2)\left( -m_2 l_2^2 - J_{2yy} + J_{2xx} \right) + b_2 \dot{\theta}_2 + g\, m_2 l_2 \sin(\theta_2) = \tau_2.$$

Most Furuta pendulums tend to have long slender arms, such that the moment of inertia along the axis of the arms is negligible. In addition, most arms have rotational symmetry, such that the moments of inertia in two of the principal axes are equal. Thus, the inertia tensors may be approximated as follows:

$$\boldsymbol{J}_1 = \mathrm{diag}[J_{1xx}, J_{1yy}, J_{1zz}] = \mathrm{diag}[0, J_1, J_1]$$
$$\boldsymbol{J}_2 = \mathrm{diag}[J_{2xx}, J_{2yy}, J_{2zz}] = \mathrm{diag}[0, J_2, J_2]$$

Further simplifications are obtained by making the following substitutions. The total moment of inertia of Arm 1 about the pivot point (using the parallel axis theorem) is $\hat{J}_1 = J_1 + m_1 l_1^2$. The total moment of inertia of Arm 2 about its pivot point is $\hat{J}_2 = J_2 + m_2 l_2^2$. Finally, define the total moment of inertia the motor rotor experiences when the pendulum (Arm 2) is in its equilibrium position (hanging vertically down): $\hat{J}_0 = \hat{J}_1 + m_2 L_1^2 = J_1 + m_1 l_1^2 + m_2 L_1^2$.
Substituting the previous definitions into the governing differential equations gives the more compact form

$$\ddot{\theta}_1 \left( \hat{J}_0 + \hat{J}_2 \sin^2(\theta_2) \right) + \ddot{\theta}_2\, m_2 L_1 l_2 \cos(\theta_2) - m_2 L_1 l_2 \sin(\theta_2)\, \dot{\theta}_2^2 + \hat{J}_2\, \dot{\theta}_1 \dot{\theta}_2 \sin(2\theta_2) + b_1 \dot{\theta}_1 = \tau_1$$

and

$$\ddot{\theta}_1\, m_2 L_1 l_2 \cos(\theta_2) + \ddot{\theta}_2 \hat{J}_2 - \tfrac{1}{2}\,\hat{J}_2\, \dot{\theta}_1^2 \sin(2\theta_2) + b_2 \dot{\theta}_2 + g\, m_2 l_2 \sin(\theta_2) = \tau_2.$$
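To show how the compact equations are used in practice, here is a minimal numerical integration sketch. The slender-rod parameter values are illustrative assumptions, not taken from reference [ 15 ] or any particular apparatus; the mass matrix and right-hand side are read directly off the two compact equations above.

```python
# Minimal integration sketch of the compact Furuta pendulum equations.
# All parameter values are illustrative placeholders (uniform slender rods).
import numpy as np
from scipy.integrate import solve_ivp

m1, m2 = 0.2, 0.1          # arm masses (kg)
L1, L2 = 0.2, 0.3          # arm lengths (m)
l1, l2 = L1 / 2, L2 / 2    # centre-of-mass distances (m)
J1 = m1 * L1**2 / 12       # slender-rod inertias about each centre of mass
J2 = m2 * L2**2 / 12
J1h = J1 + m1 * l1**2      # J1-hat: Arm 1 inertia about its pivot
J2h = J2 + m2 * l2**2      # J2-hat: Arm 2 inertia about its pivot
J0h = J1h + m2 * L1**2     # J0-hat: inertia seen by the motor at theta2 = 0
b1, b2 = 1e-3, 1e-3        # viscous damping coefficients
g = 9.81
tau1 = tau2 = 0.0          # unforced motion

def rhs(t, x):
    th1, th2, w1, w2 = x
    # Mass matrix: coefficients of the angular accelerations in the
    # two compact equations of motion
    M = np.array([[J0h + J2h * np.sin(th2)**2, m2 * L1 * l2 * np.cos(th2)],
                  [m2 * L1 * l2 * np.cos(th2), J2h]])
    # Centripetal/Coriolis, damping and gravity terms moved to the right-hand side
    c = np.array([tau1 + m2 * L1 * l2 * np.sin(th2) * w2**2
                       - J2h * w1 * w2 * np.sin(2 * th2) - b1 * w1,
                  tau2 + 0.5 * J2h * w1**2 * np.sin(2 * th2)
                       - b2 * w2 - g * m2 * l2 * np.sin(th2)])
    a1, a2 = np.linalg.solve(M, c)
    return [w1, w2, a1, a2]

# Release Arm 2 at 0.2 rad from the hanging equilibrium (theta2 = 0)
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.2, 0.0, 0.0], max_step=1e-3)
print(f"theta2 after 5 s: {sol.y[1, -1]:+.4f} rad")  # lightly damped swing about 0
```

Linearising the same mass matrix and gravity term about $\theta_2 = \pi$ instead yields the unstable upright model that the linear control laws cited above are designed to stabilise.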
https://en.wikipedia.org/wiki/Furuta_pendulum
Furylfuramide (also known as AF-2 ), [ 1 ] systematically trans-2-(2-furyl)-3-(5-nitro-2-furyl)acrylamide, is a synthetic nitrofuran derivative which was widely used as a food preservative in Japan from at least 1965 until it was withdrawn from the market in 1974, when it was observed to be mutagenic to bacteria in vitro and thus suspected of carcinogenicity. This was confirmed later when animal testing [ 4 ] found it to cause benign and malignant tumors in the mammary glands, stomachs, esophagi, and lungs of rodents of both sexes, although evidence of carcinogenicity from human exposure remains insufficient. [ 3 ] This successful use of bacterial mutagenicity as a screen for carcinogenicity confirmed the methodology as a rapid and efficient test, in comparison to animal testing alone, and led to its further development. The availability of such simpler tests in turn gave rise to greater government oversight and testing of compounds to which the public would be exposed. [ 5 ]
https://en.wikipedia.org/wiki/Furylfuramide
Fusarium oxysporum (Schlecht as emended by Snyder and Hansen), [ 1 ] an ascomycete fungus, comprises all the species, varieties and forms recognized by Wollenweber and Reinking [ 2 ] within an infrageneric grouping called section Elegans. It is part of the family Nectriaceae. Although their predominant role in native soils may be as harmless or even beneficial plant endophytes or soil saprophytes, many strains within the F. oxysporum complex are soil-borne pathogens of plants, especially in agricultural settings. While the species, as defined by Snyder and Hansen, has been widely accepted for more than 50 years, [ 3 ] [ 4 ] more recent work indicates this taxon is actually a genetically heterogeneous polytypic morphospecies, [ 5 ] [ 6 ] whose strains represent some of the most abundant and widespread microbes of the global soil microflora. [ 7 ] The Fot1 family of transposable elements was first discovered by Daboussi et al., 1992 in several formae speciales, [ 8 ] [ 9 ] and Davière et al., 2001 and Langin et al., 2003 have since found them in most strains at copy numbers as high as 100. [ 8 ] These diverse and adaptable fungi have been found in soils ranging from the Sonoran Desert to tropical and temperate forest, grasslands and soils of the tundra. [ 10 ] F. oxysporum strains are ubiquitous soil inhabitants that have the ability to exist as saprophytes and degrade lignin [ 11 ] [ 12 ] and complex carbohydrates [ 13 ] [ 14 ] [ 1 ] associated with soil debris. They are pervasive plant endophytes that can colonize plant roots [ 15 ] [ 16 ] and may even protect plants or form the basis of disease suppression. [ 17 ] [ 18 ] Because the hosts of a given forma specialis usually are closely related, many have assumed that members of a forma specialis are also closely related and descended from a common ancestor. [ 19 ] However, results from research conducted on Fusarium oxysporum f. sp. cubense forced scientists to question these assumptions. Researchers used anonymous, single-copy restriction fragment length polymorphisms (RFLPs) to identify 10 clonal lineages from a collection of F. oxysporum f.sp. cubense from across the world. These results showed that pathogens of banana causing Panama disease could be as closely related to pathogens of other hosts, such as melon or tomato, as they are to each other. Exceptional amounts of genetic diversity within F. oxysporum f.sp. cubense were deduced from the high level of chromosomal polymorphisms found among strains, from random amplified polymorphic DNA fingerprints, and from the number and geographic distribution of vegetative compatibility groups. [ 20 ] Given the wide-ranging occurrence of F. oxysporum strains that are nonpathogenic, it is reasonable to conclude that certain pathogenic forms descended from originally nonpathogenic ancestors. Given the association of these fungi with plant roots, a form that is able to grow beyond the cortex and into the xylem could exploit this ability and thereby gain an advantage over fungi that are restricted to the cortex. [ citation needed ] The progression of a fungus into vascular tissue may elicit an immediate host response that successfully restricts the invader, or an ineffective or delayed response that reduces the vital water-conducting capacity and induces wilting. [ 21 ] On the other hand, the plant might be able to tolerate limited growth of the fungus within xylem vessels, preceded by an endophytic association. [ 22 ]
In this case, any further changes in the host or parasite could disturb the relationship, in such a way that fungal activities or a host response would result in the generation of disease symptoms. Pathogenic strains of F. oxysporum have been studied for more than 100 years. The host range of these fungi is broad and includes animals, ranging from arthropods [ 23 ] to humans, [ 24 ] as well as plants, including a range of both gymnosperms and angiosperms. While collectively plant-pathogenic F. oxysporum strains have a broad host range, individual isolates usually cause disease only in a narrow range of plant species. This observation has led to the idea of a "special form", or forma specialis, in F. oxysporum. Formae speciales have been defined as "…an informal rank in classification… used for parasitic fungi characterized from a physiological standpoint (e.g. by the ability to cause disease in particular hosts) but scarcely or not at all from a morphological standpoint." Exhaustive host range studies have been conducted for relatively few formae speciales of F. oxysporum. [ 25 ] For more information on Fusarium oxysporum as a plant pathogen, see Fusarium wilt and Koa wilt. Different strains of F. oxysporum have been used to produce nanomaterials, especially silver nanoparticles. In 2000, the government of Colombia proposed dispersing strains of Crivellia and Fusarium oxysporum, also known as Agent Green, as a biological weapon to forcibly eradicate coca and other illegal crops. [ 26 ] [ self-published source? ] The weaponized strains were developed by the US government, which originally conditioned its approval of Plan Colombia on the use of this weapon but ultimately withdrew the condition. [ 27 ] In February 2001, the EU Parliament issued a declaration specifically against the use of these biological agents in warfare. [ 27 ] The fungus has the ability to dissolve gold and then precipitate it onto its surface, encrusting itself with gold. This phenomenon was first observed in Boddington, Western Australia. [ 28 ] As a result of this discovery, F. oxysporum is currently being evaluated as a possible way to help detect hidden underground gold reserves. [ 29 ] It is also used to manufacture gold nanoparticles. [ 30 ]
https://en.wikipedia.org/wiki/Fusarium_oxysporum
FuseNet is a nuclear fusion-focused educational organization. [ 1 ] Between 2008 and 2013 it was funded by a European Union grant under EURATOM: Fusion Energy Research. [ 2 ] The purpose of FuseNet is to coordinate and facilitate fusion education, to share best practices, to jointly develop educational tools, and to organize educational events. The members of FuseNet have jointly established academic criteria for the award of European Fusion Doctorate and Master Certificates. These criteria are set to stimulate a high level of fusion education throughout Europe. FuseNet is the umbrella organization and single voice for the training and education of the next generation of fusion engineers and scientists, and is recognized as such by the European Commission. This nuclear chemistry–related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/FuseNet
In hydraulic systems, a fuse (or velocity fuse) is a component which prevents the sudden loss of hydraulic fluid pressure. It is a safety feature, designed to allow systems to continue operating, or at least to not fail catastrophically, in the event of a system breach. It does this by stopping or greatly restricting the flow of hydraulic fluid through the fuse if the flow exceeds a threshold. The term "fuse" is used here in analogy with electrical fuses, which perform a similar function. Hydraulic systems rely on high pressures (usually over 7000 kPa) to work properly. If a hydraulic system loses fluid pressure, such as due to a burst hydraulic hose, it will become inoperative and components such as actuators may collapse. This is an undesirable condition in life-critical systems such as aircraft and in heavy machinery such as forklifts. Hydraulic fuses help guard against catastrophic failure of a hydraulic system by automatically isolating the defective branch. When a hydraulic system is damaged, there is generally a rapid flow of hydraulic fluid towards the breach. Most hydraulic fuses detect this flow and seal themselves (or restrict flow) if the flow exceeds a predetermined limit. There are many different fuse designs, but most involve a passive spring-controlled mechanism which closes when the pressure differential across the fuse becomes excessive. Many gas station pumps are equipped with a velocity fuse to limit gasoline flow. The fuse can be heard to engage with a "click" on some pumps if the nozzle trigger is depressed fully, and a slight reduction in fuel flow can be observed; the fuse resets instantly upon releasing the trigger. There are two types of hydraulic fuses. The first acts like a pressure relief valve, venting in case of a pressure surge. The second behaves more or less like a check valve; the difference is that a check valve prevents upstream fluid from flowing back and venting out, whereas a fuse sits before the venting area and stops fluid from venting forward of it. Hydraulic fuses are not a perfect solution to fluid loss. They will probably be ineffective against slow, seeping loss of hydraulic fluid, and they may be unable to prevent fluid loss in the event of a catastrophic system failure involving multiple breaches of hydraulic lines. Also, when a fuse activates, it is likely that the system will no longer function as designed, as hydraulically actuated devices may be present in the section isolated by the fuse. Depending on the system, hydraulic fuses may reset automatically after a delay, or may require manual re-opening. Forklift main hoist cylinders are usually equipped with a fuse, built into the hose adapter at the base of the cylinder, that resets immediately upon stopping the flow. In the design of a spillway for a dam, a fuse plug is a water-retaining structure designed to wash out in a controlled fashion if the main dam is in danger of overtopping due to flood and the normal spillway channel is insufficient to control the overtopping.
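The trip condition of the common spring-controlled design can be illustrated with a toy calculation. The sketch below is an assumption-laden illustration, not a design method: it treats the fuse as a sharp-edged orifice whose flow-induced pressure drop pushes on a spring-loaded poppet, and all dimensions and the spring preload are made-up placeholder values.

```python
# Toy model of a velocity fuse: the fuse closes when the pressure drop across
# its internal orifice, acting on the poppet, exceeds the spring preload.
# All numbers are illustrative placeholders, not design values.
import math

RHO = 870.0   # hydraulic oil density, kg/m^3 (typical assumed value)
CD = 0.62     # discharge coefficient of a sharp-edged orifice (typical value)

def orifice_dp(q, orifice_d):
    """Pressure drop (Pa) across a sharp-edged orifice at flow q (m^3/s)."""
    area = math.pi * orifice_d**2 / 4
    return RHO / 2 * (q / (CD * area))**2

def fuse_trips(q, orifice_d, poppet_d, spring_preload_n):
    """True if the flow-induced force on the poppet exceeds the spring preload."""
    poppet_area = math.pi * poppet_d**2 / 4
    return orifice_dp(q, orifice_d) * poppet_area > spring_preload_n

# Hypothetical fuse: 4 mm orifice, 10 mm poppet, 20 N spring preload
for lpm in (10, 30, 60):                 # flow rates in litres per minute
    q = lpm / 1000 / 60                  # convert to m^3/s
    print(lpm, "L/min ->", "TRIPS" if fuse_trips(q, 4e-3, 10e-3, 20.0) else "open")
```

Because the pressure drop grows with the square of the flow, the trip threshold is sharp: in this toy example the fuse stays open at 10 L/min but closes well before 30 L/min.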
https://en.wikipedia.org/wiki/Fuse_(hydraulic)
Fuse Services Framework is an open source SOAP and REST web services platform based on Apache CXF for use in enterprise IT organizations. [ 1 ] It is productized and supported by the Fuse group at FuseSource Corp. Fuse Services Framework service-enables new and existing systems for use in an enterprise SOA infrastructure. It is a pluggable, small-footprint engine that creates high-performance, secure and robust services using front-end programming APIs like JAX-WS and JAX-RS. It supports multiple transports and bindings and is extensible, so developers can add bindings for additional message formats, allowing all systems to work together without having to communicate through a centralized server. Fuse Services Framework is now a part of Red Hat JBoss Fuse. Fabric8 is a free, Apache 2.0-licensed upstream community for the JBoss Fuse product from Red Hat. This computing article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Fuse_Services_Framework
Fused quartz, fused silica or quartz glass is a glass consisting of almost pure silica (silicon dioxide, SiO 2 ) in amorphous (non-crystalline) form. This differs from all other commercial glasses, such as soda-lime glass, lead glass, or borosilicate glass, in which other ingredients are added which change the glass's optical and physical properties, such as lowering the melt temperature, changing the spectral transmission range, or altering the mechanical strength. Fused quartz, therefore, has high working and melting temperatures, making it difficult to form and less desirable for most common applications, but it is much stronger, more chemically resistant, and exhibits lower thermal expansion, making it more suitable for many specialized uses such as lighting and scientific applications. The terms fused quartz and fused silica are used interchangeably but can refer to different manufacturing techniques, resulting in different trace impurities. However, fused quartz, being in the glassy state, has quite different physical properties from crystalline quartz despite being made of the same substance. [ 2 ] Due to its physical properties it finds specialty uses in semiconductor fabrication and laboratory equipment, for instance. Compared to other common glasses, the optical transmission of pure silica extends well into the ultraviolet and infrared wavelengths, so it is used to make lenses and other optics for these wavelengths. Depending on manufacturing processes, impurities will restrict the optical transmission, resulting in commercial grades of fused quartz optimized for use in the infrared or in the ultraviolet. The low coefficient of thermal expansion of fused quartz makes it a useful material for precision mirror substrates or optical flats. [ 3 ] Fused quartz is produced by fusing (melting) high-purity silica sand, which consists of quartz crystals. There are four basic types of commercial silica glass, distinguished by whether the starting material is natural quartz or a synthetic silicon compound and by the fusion method used. Quartz contains only silicon and oxygen, although commercial quartz glass often contains impurities. Two dominant impurities are aluminium and titanium, [ 5 ] which affect the optical transmission at ultraviolet wavelengths. If water is present in the manufacturing process, hydroxyl (OH) groups may become embedded, which reduces transmission in the infrared. Melting is effected at approximately 2200 °C (4000 °F) using either an electrically heated furnace (electrically fused) or a gas/oxygen-fuelled furnace (flame-fused). [ 6 ] Fused silica can be made from almost any silicon-rich chemical precursor, usually using a continuous process which involves flame oxidation of volatile silicon compounds to silicon dioxide and thermal fusion of the resulting dust (although alternative processes are used). This results in a transparent glass with an ultra-high purity and improved optical transmission in the deep ultraviolet. One common method involves adding silicon tetrachloride to a hydrogen–oxygen flame. [ citation needed ] Fused quartz is normally transparent. The material can, however, become translucent if small air bubbles are allowed to be trapped within. The water content (and therefore infrared transmission) of fused quartz is determined by the manufacturing process. Flame-fused material always has a higher water content, because the hydrocarbons and oxygen fuelling the furnace combine to form hydroxyl [OH] groups within the material. An IR grade material typically has an [OH] content below 10 ppm.
[ 7 ] Many optical applications of fused quartz exploit its wide transparency range, which extends well into the ultraviolet and into the near- and mid-infrared. Fused quartz is the key starting material for optical fiber, used for telecommunications. Because of its strength and high melting point (compared to ordinary glass), fused quartz is used as an envelope for halogen lamps and high-intensity discharge lamps, which must operate at a high envelope temperature to achieve their combination of high brightness and long life. Some high-power vacuum tubes used silica envelopes whose good transmission at infrared wavelengths facilitated radiative cooling of their incandescent anodes. Because of its physical strength, fused quartz was used in deep diving vessels such as the bathysphere and benthoscope, and in the windows of crewed spacecraft, including the Space Shuttle and International Space Station. [ 8 ] Fused quartz was also used in composite armour development. [ 9 ] In the semiconductor industry, its combination of strength, thermal stability, and UV transparency makes it an excellent substrate for projection masks for photolithography. Its UV transparency also finds use in the windows of EPROMs (erasable programmable read-only memory), a type of non-volatile memory chip which is erased by exposure to strong ultraviolet light. EPROMs are recognizable by the transparent fused-quartz window (although some later models use UV-transparent resin) which sits on top of the package, through which the silicon chip is visible and which transmits UV light for erasing. [ 10 ] [ 11 ] Due to its thermal stability and composition, it is used in 5D optical data storage [ 12 ] and in semiconductor fabrication furnaces. [ 13 ] [ 14 ] Fused quartz has nearly ideal properties for fabricating first-surface mirrors such as those used in telescopes. The material behaves in a predictable way and allows the optical fabricator to put a very smooth polish onto the surface and produce the desired figure with fewer testing iterations. In some instances, a high-purity UV grade of fused quartz has been used to make several of the individual uncoated lens elements of special-purpose lenses, including the Zeiss 105 mm f/4.3 UV Sonnar, a lens formerly made for the Hasselblad camera, and the Nikon UV-Nikkor 105 mm f/4.5 (presently sold as the Nikon PF10545MF-UV) lens. These lenses are used for UV photography, as the quartz glass can be transparent at much shorter wavelengths than lenses made with more common flint or crown glass formulas. Fused quartz can be metallised and etched for use as a substrate for high-precision microwave circuits, its thermal stability making it a good choice for narrowband filters and similar demanding applications. Its dielectric constant, lower than that of alumina, allows higher-impedance tracks or thinner substrates. Fused quartz as an industrial raw material is used to make various refractory shapes such as crucibles, trays, shrouds, and rollers for many high-temperature thermal processes including steelmaking, investment casting, and glass manufacture. Refractory shapes made from fused quartz have excellent thermal shock resistance and are chemically inert to most elements and compounds, including virtually all acids, regardless of concentration, except hydrofluoric acid, which is very reactive even in fairly low concentrations. Translucent fused-quartz tubes are commonly used to sheathe electric elements in room heaters, industrial furnaces, and other similar applications.
Owing to its low mechanical damping at ordinary temperatures, it is used for high-Q resonators, in particular for the wine-glass resonator of the hemispherical resonator gyroscope. [ 15 ] [ 16 ] For the same reason, fused quartz is the material used for modern glass instruments such as the glass harp and the verrophone, and is also used for new builds of the historical glass harmonica, giving these instruments a greater dynamic range and a clearer sound than the historically used lead crystal. Quartz glassware is occasionally used in chemistry laboratories when standard borosilicate glass cannot withstand high temperatures or when high UV transmission is required. The cost of production is significantly higher, limiting its use; it is usually found as a single basic element in direct exposure to the heat, such as a tube in a furnace, or as a flask. The extremely low coefficient of thermal expansion, about 5.5 × 10⁻⁷/K (20–320 °C), accounts for its remarkable ability to undergo large, rapid temperature changes without cracking (see thermal shock). Fused quartz is prone to phosphorescence and "solarisation" (purplish discoloration) under intense UV illumination, as is often seen in flashtubes. "UV grade" synthetic fused silica (sold under various tradenames including "HPFS", "Spectrosil", and "Suprasil") has a very low metallic impurity content, making it transparent deeper into the ultraviolet. An optic with a thickness of 1 cm has a transmittance of around 50% at a wavelength of 170 nm, which drops to only a few percent at 160 nm. However, its infrared transmission is limited by strong water absorptions at 2.2 μm and 2.7 μm. "Infrared grade" fused quartz (tradenames "Infrasil", "Vitreosil IR", and others), which is electrically fused, has a greater presence of metallic impurities, limiting its UV transmittance wavelength to around 250 nm, but a much lower water content, leading to excellent infrared transmission up to 3.6 μm wavelength. All grades of transparent fused quartz/fused silica have nearly identical mechanical properties. The optical dispersion of fused quartz can be approximated by the following Sellmeier equation: [ 17 ] $$n^2(\lambda) - 1 = \frac{0.6961663\,\lambda^2}{\lambda^2 - (0.0684043)^2} + \frac{0.4079426\,\lambda^2}{\lambda^2 - (0.1162414)^2} + \frac{0.8974794\,\lambda^2}{\lambda^2 - (9.896161)^2},$$ where the wavelength $\lambda$ is measured in micrometers. This equation is valid between 0.21 and 3.71 μm and at 20 °C. [ 17 ] Its validity was confirmed for wavelengths up to 6.7 μm. [ 4 ] Experimental data for the real (refractive index) and imaginary (absorption index) parts of the complex refractive index of fused quartz reported in the literature over the spectral range from 30 nm to 1000 μm have been reviewed by Kitamura et al. [ 4 ] and are available online. Its quite high Abbe number of 67.8 makes it one of the lowest-dispersion glasses at visible wavelengths, and it has an exceptionally low refractive index in the visible ($n_d$ = 1.4585). Note that fused quartz has a very different, lower refractive index than crystalline quartz, which is birefringent with refractive indices $n_o$ = 1.5443 and $n_e$ = 1.5534 at the same wavelength. Although these forms have the same chemical formula, their differing structures result in different optical and other physical properties.
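As a quick consistency check on the dispersion formula, the following Python sketch evaluates the three-term fit; at the helium d-line it reproduces the $n_d$ = 1.4585 figure quoted above. Only the coefficients printed in the equation are used; nothing here is instrument- or vendor-specific.

```python
# Numerical check of the three-term Sellmeier fit quoted above
# (wavelength in micrometres, valid roughly 0.21-3.71 um at 20 C).
import math

# (B_i, C_i) pairs: oscillator strengths and resonance wavelengths (um)
SELLMEIER = [(0.6961663, 0.0684043),
             (0.4079426, 0.1162414),
             (0.8974794, 9.896161)]

def n_fused_silica(lam_um):
    """Refractive index of fused silica from the Sellmeier equation above."""
    n2 = 1.0 + sum(b * lam_um**2 / (lam_um**2 - c**2) for b, c in SELLMEIER)
    return math.sqrt(n2)

# Helium d-line, 587.6 nm: should reproduce the n_d quoted in the text
print(f"{n_fused_silica(0.5876):.4f}")  # ~1.4585
```

The same function evaluated near the edges of the stated validity range (0.21 μm and 3.71 μm) shows the steep ultraviolet rise and the gentle infrared fall-off of the index that the transmission discussion above describes.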
https://en.wikipedia.org/wiki/Fused_quartz