In mathematics , Kummer's congruences are some congruences involving Bernoulli numbers , found by Ernst Eduard Kummer ( 1851 ). Kubota & Leopoldt (1964) used Kummer's congruences to define the p-adic zeta function . The simplest form of Kummer's congruence states that if p is a prime and h and k are positive even integers not divisible by p − 1 with h ≡ k (mod p − 1), then {\displaystyle {\frac {B_{h}}{h}}\equiv {\frac {B_{k}}{k}}{\pmod {p}},} where the numbers B h are Bernoulli numbers . More generally, if h and k are positive even integers not divisible by p − 1, then {\displaystyle (1-p^{h-1}){\frac {B_{h}}{h}}\equiv (1-p^{k-1}){\frac {B_{k}}{k}}{\pmod {p^{a+1}}}} whenever {\displaystyle h\equiv k{\pmod {\varphi (p^{a+1})}},} where φ( p a +1 ) is the Euler totient function evaluated at p a +1 and a is a non-negative integer. At a = 0 the expression takes the simpler form seen above. The two sides of the Kummer congruence are essentially values of the p-adic zeta function , and the Kummer congruences imply that the p -adic zeta function, defined at negative integers, is continuous, so it can be extended by continuity to all p -adic integers.
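The simplest form of the congruence is easy to check numerically. The sketch below (the helper name and the choice p = 5, h = 2, k = 6 are illustrative, not from the source) computes exact Bernoulli numbers by the standard recurrence and verifies that B h / h − B k / k has positive p-adic valuation:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Exact Bernoulli numbers B_0..B_n via the recurrence
    B_m = -1/(m+1) * sum_{j<m} C(m+1, j) B_j  (convention B_1 = -1/2)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m))
    return B

p, h, k = 5, 2, 6        # h, k even, h ≡ k (mod p - 1), neither divisible by p - 1
B = bernoulli_numbers(k)
diff = B[h] / h - B[k] / k
# B_h/h ≡ B_k/k (mod p): the difference is a rational whose numerator is
# divisible by p while its denominator is prime to p.
assert diff.numerator % p == 0 and diff.denominator % p != 0
print(B[h] / h, B[k] / k, diff)    # 1/12, 1/252, 5/63
```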
https://en.wikipedia.org/wiki/Kummer's_congruence
In mathematics , Kummer's theorem is a formula for the exponent of the highest power of a prime number p that divides a given binomial coefficient . In other words, it gives the p -adic valuation of a binomial coefficient . The theorem is named after Ernst Kummer , who proved it in 1852 ( Kummer 1852 ). Kummer's theorem states that for given integers n ≥ m ≥ 0 and a prime number p , the p -adic valuation ν p ( n m ) {\displaystyle \nu _{p}\!{\tbinom {n}{m}}} of the binomial coefficient ( n m ) {\displaystyle {\tbinom {n}{m}}} is equal to the number of carries when m is added to n − m in base p . An equivalent formulation of the theorem is as follows: Write the base- p {\displaystyle p} expansion of the integer n {\displaystyle n} as n = n 0 + n 1 p + n 2 p 2 + ⋯ + n r p r {\displaystyle n=n_{0}+n_{1}p+n_{2}p^{2}+\cdots +n_{r}p^{r}} , and define S p ( n ) := n 0 + n 1 + ⋯ + n r {\displaystyle S_{p}(n):=n_{0}+n_{1}+\cdots +n_{r}} to be the sum of the base- p {\displaystyle p} digits. Then {\displaystyle \nu _{p}\!{\tbinom {n}{m}}={\frac {S_{p}(m)+S_{p}(n-m)-S_{p}(n)}{p-1}}.} The theorem can be proved by writing ( n m ) {\displaystyle {\tbinom {n}{m}}} as n ! m ! ( n − m ) ! {\displaystyle {\tfrac {n!}{m!(n-m)!}}} and using Legendre's formula . [ 1 ] To compute the largest power of 2 dividing the binomial coefficient ( 10 3 ) {\displaystyle {\tbinom {10}{3}}} , write m = 3 and n − m = 7 in base p = 2 as 3 = 11 2 and 7 = 111 2 . Carrying out the addition 11 2 + 111 2 = 1010 2 in base 2 requires three carries. Therefore the largest power of 2 that divides ( 10 3 ) = 120 = 2 3 ⋅ 15 {\displaystyle {\tbinom {10}{3}}=120=2^{3}\cdot 15} is 3. Alternatively, the form involving sums of digits can be used. The sums of digits of 3, 7, and 10 in base 2 are S 2 ( 3 ) = 1 + 1 = 2 {\displaystyle S_{2}(3)=1+1=2} , S 2 ( 7 ) = 1 + 1 + 1 = 3 {\displaystyle S_{2}(7)=1+1+1=3} , and S 2 ( 10 ) = 1 + 0 + 1 + 0 = 2 {\displaystyle S_{2}(10)=1+0+1+0=2} respectively. Then ν 2 ( 10 3 ) = ( 2 + 3 − 2 ) / ( 2 − 1 ) = 3 {\displaystyle \nu _{2}\!{\tbinom {10}{3}}=(2+3-2)/(2-1)=3} . Kummer's theorem can be generalized to multinomial coefficients ( n m 1 , … , m k ) = n ! m 1 ! ⋯ m k ! {\displaystyle {\tbinom {n}{m_{1},\ldots ,m_{k}}}={\tfrac {n!}{m_{1}!\cdots m_{k}!}}} with n = m 1 + ⋯ + m k {\displaystyle n=m_{1}+\cdots +m_{k}} as follows: {\displaystyle \nu _{p}\!{\tbinom {n}{m_{1},\ldots ,m_{k}}}={\frac {1}{p-1}}\left(\sum _{i=1}^{k}S_{p}(m_{i})-S_{p}(n)\right).}
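Both formulations are easy to check by machine. The following sketch (function names are illustrative) counts base-p carries and compares the result with the digit-sum formula and with the exact valuation of the binomial coefficient:

```python
from math import comb

def carries(m, r, p):
    """Number of carries when adding m and r in base p."""
    count = carry = 0
    while m or r or carry:
        carry = int(m % p + r % p + carry >= p)
        count += carry
        m //= p
        r //= p
    return count

def digit_sum(n, p):
    """Sum of the base-p digits S_p(n)."""
    s = 0
    while n:
        s += n % p
        n //= p
    return s

def nu(n, p):
    """Exact p-adic valuation of the integer n, for cross-checking."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

n, m, p = 10, 3, 2
assert carries(m, n - m, p) \
    == (digit_sum(m, p) + digit_sum(n - m, p) - digit_sum(n, p)) // (p - 1) \
    == nu(comb(n, m), p) == 3
```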
https://en.wikipedia.org/wiki/Kummer's_theorem
Kune was a free/open source distributed social network focused on collaboration rather than just on communication. [ 2 ] That is, it focused on online real-time collaborative editing , decentralized social networking and web publishing, with an emphasis on workgroups rather than just on individuals. [ 3 ] [ 4 ] It aimed to allow the creation of online spaces for collaborative work where organizations and individuals could build projects online, coordinate common agendas, set up virtual meetings, publish on the web, and join organizations with similar interests. It had a special focus on the needs of Free Culture and social movements . [ 5 ] [ 6 ] Kune was a project of the Comunes Collective . The project appears abandoned since 2017, with no new commits, blog entries or site activity. [ 7 ] Kune was programmed using the Java -based GWT on the client side, integrating Apache Wave (formerly Google Wave ) and mainly using the open protocols XMPP and the Wave Federation Protocol . The GWT Java sources on the client side generate obfuscated and deeply optimized JavaScript forming a single-page application . Wave extensions (gadgets, bots) run on top of Kune (as in Facebook apps ) and can be programmed in Java+GWT, JavaScript or Python . The software was under development from 2007 until 2017. [ 8 ] The code was hosted in the Git repository of Gitorious , [ 9 ] with a development site [ 2 ] and its main node [ 10 ] maintained by the Comunes Collective . Kune was 100% free software and was built using only free software. Its software is licensed under the AGPL license, while the art is under a Creative Commons BY-SA license. Kune was born to address a growing concern of the community behind it. Nowadays, groups (a group of friends, activists, an NGO, a small start-up) that need to work together typically use multiple free (as in beer) commercial centralized for-profit services (e.g. Google Docs , Google Groups , Facebook , Wordpress.com , Dropbox , Flickr , eBay ...) in order to communicate and collaborate online. However, "If you're not paying for it, you're the product". [ 11 ] In order to avoid that, such groups of users may ask a technical expert to build them mailing lists, a webpage and maybe to set up an etherpad . However, technicians are then needed for any new list (as the users cannot configure e.g. GNU Mailman themselves), any configuration change, etc., creating a strong dependency and ultimately a bottleneck. [ 12 ] Kune aims to cover all those needs of groups to communicate and collaborate, in a usable way and thus without depending on technical experts. [ 13 ] It aims to be a free/libre web service (and thus in the cloud ), but decentralized like email, so a user can choose the server they want and still interoperate transparently with the rest. [ citation needed ] Unlike most distributed social networks, this software focuses on collaboration and building, not only on communication and sharing. Thus, Kune aims to ultimately replace not just Facebook, but also all the above-mentioned commercial services. Kune has a strong focus on the construction of Free Culture and on eventually facilitating Commons-based peer production . [ 14 ] The origin of Kune lies in the community behind Ourproject.org . Ourproject [ 17 ] aimed to provide for Free Culture (social/cultural projects) what SourceForge and other software forges meant for free software : a collection of communication and collaboration tools that would boost the emergence of community-driven free projects. 
[ 18 ] However, although Ourproject was relatively successful, it was far from the original aims. An analysis of the situation in 2005 [ 19 ] concluded that only the groups that had a techie among them (who would manage Mailman or install a CMS ) were able to move forward, while the rest would abandon the service. Thus, new free collaborative tools were needed, more usable and suitable for anyone, as the available free tools required a high degree of technical expertise. This is why Kune, whose name means "together" in Esperanto , was developed. The first prototypes of Kune were developed using Ruby on Rails and Pyjamas (later known as Pyjs ). However, with the release of Java and the Google Web Toolkit as free software, the community embraced these technologies from 2007 on. [ 20 ] In 2009, with a stable codebase and about to release a major version of Kune, [ 21 ] Google announced the Google Wave project and promised it would be released as free software. Wave was using the same technologies as Kune (Java + GWT, Guice, the XMPP protocol), so it would be easy to integrate after its release. Besides, Wave offered an open federated protocol, easy extensibility (through gadgets), easy version control, and very good real-time editing of documents. Thus, the community decided to halt the development of Kune and wait for its release, in the meantime developing gadgets that would be integrated into Kune later on. [ 22 ] [ 23 ] [ 24 ] In this same period, the community established the Comunes Association (with an acknowledged inspiration in Software in the Public Interest ) as a non-profit legal umbrella for free software tools encouraging the Commons and facilitating the work of social movements . [ 25 ] The umbrella covered Ourproject, Kune and Move Commons, [ 26 ] together with some other minor projects. In November 2010, the free Apache Wave (previously Wave-in-a-Box) was released under the umbrella of the Apache Foundation . Since then, the community began integrating its source code into the previous Kune codebase, [ 27 ] and, with the support of the IEPALA Foundation, [ 28 ] Kune released its Beta and moved to production in April 2012. Since then, Kune has been described as "activism 2.0", [ 29 ] a citizen tool, [ 30 ] [ 31 ] a tool for NGOs, [ 32 ] [ 33 ] a general-purpose multi-tool [ 34 ] (and, following that, criticized for the risk of falling into the second-system effect [ 35 ] ) and an example of the new paradigm. [ 36 ] It was selected as "open website of the week" by the Open University of Catalonia , [ 37 ] and as one of the #Occupy Tech projects. [ 38 ] There are also plans for another federated social network, Lorea (based on Elgg ), to connect with Kune. [ 39 ] Kune has the active support of several organizations and institutions:
https://en.wikipedia.org/wiki/Kune_(software)
In set theory , a branch of mathematics, Kunen's inconsistency theorem , proved by Kenneth Kunen ( 1971 ), shows that several plausible large cardinal axioms are inconsistent with the axiom of choice . Some consequences of Kunen's theorem (or its proof) are: It is not known if Kunen's theorem still holds in ZF (ZFC without the axiom of choice), though Suzuki (1999) showed that there is no definable elementary embedding from V into V . That is, there is no formula J in the language of set theory such that for some parameter p ∈ V , for all sets x ∈ V and y ∈ V : j ( x ) = y ↔ J ( x , y , p ) . {\displaystyle j(x)=y\leftrightarrow J(x,y,p)\,.} Kunen used Morse–Kelley set theory in his proof. If the proof is re-written to use ZFC, then one must add the assumption that replacement holds for formulas involving j . Otherwise one could not even show that j "λ exists as a set. The forbidden set j "λ is crucial to the proof. The proof first shows that it cannot be in M . The other parts of the theorem are derived from that. It is possible to have models of set theory that have elementary embeddings into themselves, at least if one assumes some mild large cardinal axioms. For example, if 0 # exists then there is an elementary embedding from the constructible universe L into itself. This does not contradict Kunen's theorem because if 0 # exists then L cannot be the whole universe of sets.
https://en.wikipedia.org/wiki/Kunen's_inconsistency_theorem
In stochastic calculus , the Kunita–Watanabe inequality is a generalization of the Cauchy–Schwarz inequality to integrals of stochastic processes . It was first obtained by Hiroshi Kunita and Shinzo Watanabe and plays a fundamental role in their extension of Itô's stochastic integral to square-integrable martingales. [ 1 ] Let M , N be continuous local martingales and H , K measurable processes. Then {\displaystyle \int _{0}^{t}|H_{s}||K_{s}|\,|\mathrm {d} \langle M,N\rangle _{s}|\leq {\sqrt {\int _{0}^{t}H_{s}^{2}\,\mathrm {d} \langle M\rangle _{s}}}{\sqrt {\int _{0}^{t}K_{s}^{2}\,\mathrm {d} \langle N\rangle _{s}}},} where the angled brackets indicate the quadratic variation and quadratic covariation operators. The integrals are understood in the Lebesgue–Stieltjes sense.
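In discrete time, the inequality is exactly the Cauchy–Schwarz inequality applied to sums of increment products, and the continuous statement arises in the limit. The following sketch (the correlated-walk setup and all parameters are illustrative assumptions) demonstrates this on simulated paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, rho = 10_000, 1.0, 0.6
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)

# increments of two correlated Brownian motions: d<M,N> ≈ rho dt
dM = rng.normal(0.0, np.sqrt(dt), n)
dN = rho * dM + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt), n)

H, K = np.sin(t), np.cos(t)                 # simple deterministic integrands

# discrete analogues of the brackets, built from increment products
lhs = np.sum(np.abs(H * K * dM * dN))       # ≈ ∫ |H||K| |d<M,N>|
rhs = np.sqrt(np.sum(H**2 * dM**2)) * np.sqrt(np.sum(K**2 * dN**2))
print(lhs <= rhs, float(lhs), float(rhs))   # always True, by Cauchy–Schwarz
```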
https://en.wikipedia.org/wiki/Kunita–Watanabe_inequality
The Kunming-Montreal Global Biodiversity Framework (GBF) is an outcome of the 2022 United Nations Biodiversity Conference . Its tentative title had been the "Post-2020 Global Biodiversity Framework". [ 1 ] The GBF was adopted by the 15th Conference of Parties (COP15) to the Convention on Biological Diversity (CBD) on 19 December 2022. [ 2 ] It has been promoted as a " Paris Agreement for Nature". [ 3 ] [ 4 ] It is one of a handful of agreements under the auspices of the CBD, and it is the most significant to date. It has been hailed as a "huge, historic moment" and a "major win for our planet and for all of humanity." [ 5 ] The Framework is named after two cities: Kunming , which was scheduled to host COP15 in October 2020, but the conference was postponed and Kunming subsequently relinquished the hosting duties due to China's COVID policy ; and Montreal , which is the seat of the Convention on Biological Diversity Secretariat and stepped in to host COP15 after Kunming's cancellation. [ 6 ] Human activities have been causing a crisis of biodiversity loss around the globe. This phenomenon is known as the Holocene extinction , the sixth mass extinction event in Earth's history. [ 7 ] The decline in nature threatens the survival of a million species [ 8 ] and impacts billions of people. [ 9 ] [ 10 ] Due to increasing awareness of the biodiversity crisis, there was pressure from citizens and investors around the world to take action to address the interlinked crises of climate change and biodiversity loss. [ 5 ] [ 11 ] Previous agreements, including the Aichi Biodiversity Targets , had largely failed to achieve their targets for halting biodiversity loss. [ 12 ] In the lead-up to its adoption, it was hoped that the GBF would act as an ambitious, science-based, and comprehensive sister agreement to the Paris Agreement - an international agreement on climate change under the auspices of the United Nations Framework Convention on Climate Change . [ 13 ] COP15, the summit where the GBF was adopted, was described by Elizabeth Maruma Mrema (Executive Secretary of the Convention on Biological Diversity) as a "Paris moment for biodiversity". [ 14 ] The GBF contains four global goals ("Kunming-Montreal Global Goals for 2050") and 23 targets ("Kunming-Montreal 2030 Global Targets"). The four goals are: [ 15 ] The 23 targets are categorized into three areas as: [ 16 ] "Target 3" is especially referred to as the " 30 by 30 " target. [ 17 ] It succeeds the Strategic Plan for Biodiversity 2011-2020 (including the Aichi Biodiversity Targets). [ 18 ] It aims for governments to designate 30% of Earth's terrestrial and aquatic area as protected areas by 2030. [ 15 ] As part of the target, countries must stop subsidizing activities that destroy wilderness, such as mining and industrial fishing. [ 19 ] In parallel to the development of these goals and targets, the concept of nature-positive emerged as a global societal goal for nature that mirrors the mission and vision of the GBF. [ 20 ] Nature-positive refers to the goal of halting and reversing nature loss by 2030 and achieving nature recovery by 2050, [ 21 ] while the Global Biodiversity Framework also aims to halt and reverse the loss of biodiversity to begin the road to nature recovery. [ 22 ] Since the implementation of the GBF, nature-positive has played a role in mainstreaming nature throughout businesses and governance systems to achieve the targets of the framework. 
[ 20 ] The implementation of the GBF will likely lead to the following effects according to the United Nations Environment Programme Finance Initiative : [ 23 ] The GBF is not a legally binding treaty, [ 24 ] but it is expected to have a major impact in countries around the world as they endeavor to meet their targets, through the development of new plans and regulations. [ 25 ] For example, protected areas will be expanded and subsidies for ecologically destructive activities such as fishing will have to be redirected. [ 26 ] Progress towards national targets has been under review at COP16 . By the summit’s end, just 44 out of 196 parties had come up with new biodiversity plans. [ 27 ] [ 28 ]
https://en.wikipedia.org/wiki/Kunming-Montreal_Global_Biodiversity_Framework
Kuno Lorenz (born September 17, 1932 in Vachdorf , Thüringen ) is a German philosopher . He developed a philosophy of dialogue, in connection with the pragmatic theory of action of the Erlangen constructivist school. Lorenz is married to the literary scholar Karin Lorenz-Lindemann [ de ] . After studying mathematics and physics in Tübingen , Hamburg , Bonn and Princeton , Lorenz earned his Ph.D. in 1961 under Paul Lorenzen in Kiel with a thesis about Arithmetic and Logic as Games . In 1969 he received his habilitation degree in philosophy, also under Lorenzen, but this time in Erlangen . In 1970 he was offered the chair of philosophy at the University of Hamburg as successor to Carl Friedrich von Weizsäcker . From 1974 until his retirement in 1997 he taught at the University of Saarland in Saarbrücken. Among his former students is Arno Ros . Lorenz developed (along with Paul Lorenzen ) an approach to arithmetic and logic as dialogue games. In dialogical logic ( game semantics ), tree calculi (generally, of Gentzen type) are written upside down, so that the initial assertion of a proponent stays on top and is defended against an opponent as in a game. This is a linguistically more congenial approach to logic, more suitable as a model for argumentation than formal derivation in a calculus or truth tables . Lorenz presented for the first time a simple demonstration of Gentzen's consistency proof on this game-theoretic basis. If one regards logic and mathematics in this way as a game, an intuitionist approach becomes a more plausible option. Not only logic, but the whole of philosophy is given a dialogical treatment by Lorenz. Only in the mirror of a relative Other is it possible to reflect upon oneself. Lorenz developed a dialogical constructivism from his focus on the dialogical principle ( Martin Buber ) and the process of language games of the later Ludwig Wittgenstein . In addition, the pragmatism of Charles Sanders Peirce and the historicism of Wilhelm Dilthey are juxtaposed as complementary.
https://en.wikipedia.org/wiki/Kuno_Lorenz
Bürgi's Kunstweg is a set of algorithms invented by Jost Bürgi at the end of the 16th century. [ 1 ] They can be used for the calculation of sines to an arbitrary precision. Bürgi used these algorithms to calculate a Canon Sinuum , a table of sines in steps of 2 arc seconds . It is thought that this table had 8 sexagesimal places. Some authors have speculated that this table only covered the range from 0 to 45 degrees, but nothing seems to support this claim. Such tables were extremely important for navigation at sea. Johannes Kepler called the Canon Sinuum the most precise known table of sines. [ 2 ] Bürgi explained his algorithms in his work Fundamentum Astronomiae , which he presented to Emperor Rudolf II in 1592. The principles of the iterative sine table calculation through the Kunstweg are as follows: cells in a column sum up the values of the two previous cells in the same column; the final cell's value is divided by two, and the next iteration starts. Finally, the values of the last column get normalized. Rather accurate approximations of sines are obtained after a few iterations. In 2015, Folkerts et al. showed that this simple process indeed converges to the true sines. [ 3 ] According to Folkerts et al., this was the first step towards difference calculus .
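In modern terms the procedure can be read as power iteration: the second difference of the sine sequence is proportional to the sequence itself, so repeated cumulative summation (with the final value halved) converges, after rescaling, to the sine values. The following is a minimal sketch of that idea under this reading, not a reconstruction of Bürgi's exact tabular layout:

```python
import math

def buergi_sines(n, iterations=8):
    """Approximate sin(k * 90°/n) for k = 0..n by a Bürgi-style iteration:
    one cumulative-sum pass downward (halving the 90° entry first), one
    upward, then rescaling so that the 90° entry equals 1."""
    a = [float(k) for k in range(n + 1)]       # any increasing start column works
    for _ in range(iterations):
        s = [0.0] * (n + 1)
        s[n] = a[n] / 2.0                      # halve the value at 90°
        for j in range(n - 1, 0, -1):          # sums from each row up to 90°
            s[j] = s[j + 1] + a[j]
        new = [0.0] * (n + 1)
        for j in range(1, n + 1):              # cumulative sums from 0° upward
            new[j] = new[j - 1] + s[j]
        a = [v / new[n] for v in new]          # normalize: sin(90°) = 1
    return a

for k, v in enumerate(buergi_sines(9)):        # 10° steps
    print(f"{10 * k:3d}°  {v:.9f}  true {math.sin(math.radians(10 * k)):.9f}")
```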
https://en.wikipedia.org/wiki/Kunstweg
Kuphus is a genus of shipworms , marine bivalve molluscs in the family Teredinidae . While there are four extinct species in the genus, [ 2 ] the only extant species is Kuphus polythalamius (also incorrectly spelled as Kuphus polythalamia ). [ 3 ] [ 4 ] It is the longest bivalve mollusc in the world, and its only known permanent natural habitat is Kalamansig, Sultan Kudarat in the Philippines . [ 5 ] Members of this genus secrete calcareous tubes. Based only on the calcareous tube, this species was originally thought by Linnaeus to be a tube worm, so he placed it in the genus Serpula . Despite the fact that Kuphus polythalamius is now known to be a mollusc, its common name is the giant tube worm . [ 6 ] Since 1981, however, the name "giant tube worm" has also been applied to the hydrothermal vent species Riftia pachyptila , which is indeed a worm, an annelid . The sole living species is: Extinct species are: Large, tusk-shaped, calcareous tubes were occasionally washed up on beaches. There was disagreement among zoologists in the 18th century as to whether the creature which made one of these was a polychaete tube-worm or a mollusc . Linnaeus described the species in 1758. He considered that it was a serpulid worm and named it Serpula arenaria , a name which in 1767 he changed to Serpula polythalamia . There was some confusion as to precisely which taxon he was describing, but S. polythalamius became the type species of the genus Serpula , a genus of polychaete worms. In 1770, Guettard introduced the name Kuphus for the genus, realising that the animal was not a worm but a mollusc. This meant that, according to the ICZN rules , the specific name became Kuphus polythalamius (Linnaeus, 1758). [ 7 ] Fossils of Kuphus polythalamius have been found dating back to the Oligocene . They came from rocks in various tropical and sub-tropical areas including Indonesia, Pakistan, Jamaica, Grenada, South Africa and Somalia. [ 8 ] Fossils of the extinct species Kuphus melitensis are found in Late Oligocene-aged coralline limestone of Malta . [ 2 ] Fossils of the extinct species Kuphus incrassatus have been found in rocks in Jamaica, Mexico, Panama, Puerto Rico, Trinidad and Tobago, Florida and Mississippi. [ 9 ] Another species, Kuphus arenarius , has been recorded in Oligocene- to Miocene-aged limestone layers of the Asmari Formation in Iran. Kuphus fossils are common in sedimentary Tertiary rocks in the Caribbean region; they date back to the Oligocene and Miocene and have been used for absolute dating of the rocks, using the relative proportions of two strontium isotopes in the fossils. [ 10 ] Fossils of the extinct species Kuphus fistula , dating from the Miocene and Pliocene , have been found in various locations in Virginia in the United States. [ 11 ] Fossils found near Warsaw by the paleontologist Friedrich von Huene in 1941 were misidentified as the teeth and jaw parts of a new species of dinosaur , which he named Succinodon putzeri . It was later determined that these were in fact the fossil remains of a marine boring bivalve, a previously undescribed species of Kuphus . [ 12 ] Today, Kuphus polythalamius is found in the western Pacific Ocean, the western and eastern Indian Ocean and the Indo-Malaysian area. [ 13 ] The range includes the Philippines , Indonesia and Mozambique . [ 14 ] However, the only thoroughly studied natural habitat of the species is in Kalamansig, Sultan Kudarat in the Philippines . [ 15 ]
https://en.wikipedia.org/wiki/Kuphus
Kuphus polythalamius (known as the giant tamilok ) is a species of shipworm , a marine bivalve mollusc in the family Teredinidae . The tube of Kuphus polythalamius is known as a crypt; it is a calcareous secretion that enables the animal to live in its preferred habitat, the mud of mangrove swamps. A typical specimen measures 100 cm (40 in) in length and is shaped like a truncated elephant's tusk. The wider, anterior end is closed, has a rounded tip, and is about 110 mm (4.5 in) in diameter. From there the tube tapers to an open, posterior end about 38 mm (1.5 in) in diameter, with a central septum. Siphons project through this end for feeding and respiration. They can be withdrawn inside the tube, and the end can be sealed with a set of specialised plates or "pallets". The two small valves of the mollusc are inside the tube along with the mantle , gut and other soft organs. In an intact but otherwise empty tube found on the strandline , they can be seen by X-ray photography. [ 1 ] The giant clam ( Tridacna gigas ) is generally considered to be the largest bivalve mollusc. It is indeed the heaviest species, growing to over 200 kg (440 lb) and measuring up to 120 cm (47 in) in length, [ 2 ] but Kuphus polythalamius holds the record for the largest bivalve by length. A specimen owned by Victor Dan in the United States has a length of 1,532 mm (60 in), which is considerably longer than the largest giant clam. [ 2 ] [ 3 ] Today, Kuphus polythalamius is found in the western Pacific Ocean, the western and eastern Indian Ocean and the Indo-Malaysian area. [ 4 ] The range includes the Philippines , Indonesia and Mozambique . [ 5 ] However, the only thoroughly studied natural habitat of the species is in Kalamansig, Sultan Kudarat in the Philippines . [ 6 ] Marine biologist Ruth Turner studied shipworms and considered that their common ancestor would have been very like Kuphus polythalamius , the most primitive of the teredinids. She believed that the anatomy of the tube was such that the animal would not have been able to burrow in wood as other modern teredinids do, but would instead have lived buried in soft sediments. [ 1 ] In April 2017, the species became the focus of international attention when the announcement of a scientific study conducted in the Philippines was misinterpreted by foreign news reporters as the discovery of a rare live specimen. [ 7 ] The sample was gunmetal black and very muscular. While other shipworms feed on submerged wood, K. polythalamius was found to rely on bacteria in its gills that use hydrogen sulphide in the water as an energy source to convert carbon dioxide into nutrients. [ 8 ] [ 9 ] In this respect it resembles the unrelated giant tube worm , which actually is a worm. Videos uploaded to YouTube , however, show Philippine scientists dissecting specimens as far back as 2010, after a news feature on a giant tamilok , the local name for the common shipworm , was broadcast on a local TV network. [ 10 ] The report by local media celebrity Jessica Soho suggests that local residents of the province of Sultan Kudarat , on Mindanao island, were familiar enough with the creature to treat it as a delicacy. After the discovery of the species in Sultan Kudarat, various environmental groups launched a campaign to protect the species and its habitat from further destruction and human consumption. Currently, the municipal waters where the species thrives are protected by the local government. [ 6 ]
https://en.wikipedia.org/wiki/Kuphus_polythalamius
In mathematics , the Kuramoto–Sivashinsky equation (also called the KS equation or flame equation ) is a fourth-order nonlinear partial differential equation . It is named after Yoshiki Kuramoto and Gregory Sivashinsky , who derived the equation in the late 1970s to model the diffusive–thermal instabilities in a laminar flame front. [ 1 ] [ 2 ] [ 3 ] It was also derived independently by G. M. Homsy [ 4 ] and A. A. Nepomnyashchii [ 5 ] in 1974, in connection with the stability of a liquid film on an inclined plane, and by R. E. LaQuey et al. [ 6 ] in 1975 in connection with trapped-ion instability. The Kuramoto–Sivashinsky equation is known for its chaotic behavior. [ 7 ] [ 8 ] The 1d version of the Kuramoto–Sivashinsky equation is {\displaystyle u_{t}+u_{xx}+u_{xxxx}+{\tfrac {1}{2}}u_{x}^{2}=0.} An alternate form, {\displaystyle v_{t}+v_{xx}+v_{xxxx}+vv_{x}=0,} is obtained by differentiating with respect to x {\displaystyle x} and substituting v = u x {\displaystyle v=u_{x}} . This is the form used in fluid dynamics applications. [ 9 ] The Kuramoto–Sivashinsky equation can also be generalized to higher dimensions. In spatially periodic domains, one possibility is {\displaystyle u_{t}+\Delta u+\Delta ^{2}u+{\tfrac {1}{2}}|\nabla u|^{2}=0,} where Δ {\displaystyle \Delta } is the Laplace operator , and Δ 2 {\displaystyle \Delta ^{2}} is the biharmonic operator . The Cauchy problem for the 1d Kuramoto–Sivashinsky equation is well-posed in the sense of Hadamard—that is, for given initial data u ( x , 0 ) {\displaystyle u(x,0)} , there exists a unique solution u ( x , 0 ≤ t < ∞ ) {\displaystyle u(x,0\leq t<\infty )} that depends continuously on the initial data. [ 10 ] The 1d Kuramoto–Sivashinsky equation possesses Galilean invariance —that is, if u ( x , t ) {\displaystyle u(x,t)} is a solution, then so is u ( x − c t , t ) − c {\displaystyle u(x-ct,t)-c} , where c {\displaystyle c} is an arbitrary constant. [ 11 ] Physically, since u {\displaystyle u} is a velocity, this change of variable describes a transformation into a frame that is moving with constant relative velocity c {\displaystyle c} . On a periodic domain, the equation also has a reflection symmetry : if u ( x , t ) {\displaystyle u(x,t)} is a solution, then − u ( − x , t ) {\displaystyle -u(-x,t)} is also a solution. [ 11 ] Solutions of the Kuramoto–Sivashinsky equation possess rich dynamical characteristics. [ 11 ] [ 12 ] [ 13 ] Considered on a periodic domain 0 ≤ x ≤ L {\displaystyle 0\leq x\leq L} , the dynamics undergoes a series of bifurcations as the domain size L {\displaystyle L} is increased, culminating in the onset of chaotic behavior. Depending on the value of L {\displaystyle L} , solutions may include equilibria, relative equilibria, and traveling waves —all of which typically become dynamically unstable as L {\displaystyle L} is increased. In particular, the transition to chaos occurs by a cascade of period-doubling bifurcations . [ 13 ] A third-order derivative term representing dispersion of wavenumbers is often encountered in many applications. The dispersively modified Kuramoto–Sivashinsky equation, often called the Kawahara equation , [ 14 ] is given by [ 15 ] {\displaystyle u_{t}+uu_{x}+u_{xx}+\delta _{3}u_{xxx}+u_{xxxx}=0,} where δ 3 {\displaystyle \delta _{3}} is a real parameter. A fifth-order derivative term is also often included, yielding the modified Kawahara equation. [ 16 ] Three forms of sixth-order Kuramoto–Sivashinsky equations are encountered in applications involving tricritical points . [ 17 ] The last of these is referred to as the Nikolaevsky equation , named after V. N. Nikolaevsky, who introduced the equation in 1989, [ 18 ] [ 19 ] [ 20 ] whereas the first two were introduced by P. Rajamanickam and J. Daou in the context of transitions near tricritical points, [ 17 ] i.e., a change in the sign of the fourth-derivative term, with the plus sign approaching a Kuramoto–Sivashinsky type and the minus sign approaching a Ginzburg–Landau type . Applications of the Kuramoto–Sivashinsky equation extend beyond its original context of flame propagation and reaction–diffusion systems . These additional applications include flows in pipes and at interfaces, plasmas , chemical reaction dynamics, and models of ion-sputtered surfaces. [ 9 ] [ 21 ]
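The chaotic dynamics on a periodic domain are easy to observe numerically. The sketch below integrates the fluid-dynamics form pseudospectrally with a simple semi-implicit Euler step (linear terms implicit, nonlinearity explicit); the domain size, resolution, time step and initial condition are illustrative choices, and careful studies typically use higher-order exponential integrators such as ETDRK4 together with dealiasing:

```python
import numpy as np

# u_t + u u_x + u_xx + u_xxxx = 0 on [0, L), periodic boundary conditions
L, N, dt, steps = 32.0 * np.pi, 256, 0.01, 5000
x = L * np.arange(N) / N
u = np.cos(x / 16.0) * (1.0 + np.sin(x / 16.0))   # smooth initial datum
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)      # angular wavenumbers
lin = k**2 - k**4                                 # Fourier symbol of -(d_xx + d_xxxx)

u_hat = np.fft.fft(u)
for _ in range(steps):
    u = np.real(np.fft.ifft(u_hat))
    nonlin = -0.5j * k * np.fft.fft(u * u)        # -(1/2) d_x(u^2) = -u u_x
    u_hat = (u_hat + dt * nonlin) / (1.0 - dt * lin)  # implicit linear step

u = np.real(np.fft.ifft(u_hat))                   # chaotic cellular profile at t = 50
print(u.min(), u.max())
```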
https://en.wikipedia.org/wiki/Kuramoto–Sivashinsky_equation
In mathematics, especially in topology , a Kuranishi structure is a smooth analogue of scheme structure. If a topological space is endowed with a Kuranishi structure, then locally it can be identified with the zero set of a smooth map ( f 1 , … , f k ) : R n + k → R k {\displaystyle (f_{1},\ldots ,f_{k})\colon \mathbb {R} ^{n+k}\to \mathbb {R} ^{k}} , or the quotient of such a zero set by a finite group. Kuranishi structures were introduced by Japanese mathematicians Kenji Fukaya and Kaoru Ono in the study of Gromov–Witten invariants and Floer homology in symplectic geometry, and were named after Masatake Kuranishi . [ 1 ] Let X {\displaystyle X} be a compact metrizable topological space . Let p ∈ X {\displaystyle p\in X} be a point. A Kuranishi neighborhood of p {\displaystyle p} (of dimension k {\displaystyle k} ) is a 5-tuple where They should satisfy that dim ⁡ U p − rank ⁡ E p = k {\displaystyle \dim U_{p}-\operatorname {rank} E_{p}=k} . If p , q ∈ X {\displaystyle p,q\in X} and K p = ( U p , E p , S p , F p , ψ p ) {\displaystyle K_{p}=(U_{p},E_{p},S_{p},F_{p},\psi _{p})} , K q = ( U q , E q , S q , F q , ψ q ) {\displaystyle K_{q}=(U_{q},E_{q},S_{q},F_{q},\psi _{q})} are their Kuranishi neighborhoods respectively, then a coordinate change from K q {\displaystyle K_{q}} to K p {\displaystyle K_{p}} is a triple where In addition, these data must satisfy the following compatibility conditions: A Kuranishi structure on X {\displaystyle X} of dimension k {\displaystyle k} is a collection where In addition, the coordinate changes must satisfy the cocycle condition , namely, whenever q ∈ F p , r ∈ F q {\displaystyle q\in F_{p},\ r\in F_{q}} , we require that over the regions where both sides are defined. In Gromov–Witten theory , one needs to define integration over the moduli space of pseudoholomorphic curves M ¯ g , n ( X , A ) {\displaystyle {\overline {\mathcal {M}}}_{g,n}(X,A)} . [ 2 ] This moduli space is roughly the collection of maps u {\displaystyle u} from a nodal Riemann surface with genus g {\displaystyle g} and n {\displaystyle n} marked points into a symplectic manifold X {\displaystyle X} , such that each component satisfies the Cauchy–Riemann equation If the moduli space is a smooth, compact, oriented manifold or orbifold, then the integration (or a fundamental class ) can be defined. When the symplectic manifold X {\displaystyle X} is semi-positive , this is indeed the case (except for codimension 2 boundaries of the moduli space) if the almost complex structure J {\displaystyle J} is perturbed generically. However, when X {\displaystyle X} is not semi-positive (for example, a smooth projective variety with negative first Chern class), the moduli space may contain configurations for which one component is a multiple cover of a holomorphic sphere u : S 2 → X {\displaystyle u\colon S^{2}\to X} whose intersection with the first Chern class of X {\displaystyle X} is negative. Such configurations make the moduli space very singular so a fundamental class cannot be defined in the usual way. The notion of Kuranishi structure was a way of defining a virtual fundamental cycle, which plays the same role as a fundamental cycle when the moduli space is cut out transversely. It was first used by Fukaya and Ono in defining the Gromov–Witten invariants and Floer homology, and was further developed when Fukaya, Yong-Geun Oh , Hiroshi Ohta, and Ono studied Lagrangian intersection Floer theory . [ 3 ]
https://en.wikipedia.org/wiki/Kuranishi_structure
In point-set topology , Kuratowski's closure-complement problem asks for the largest number of distinct sets obtainable by repeatedly applying the set operations of closure and complement to a given starting subset of a topological space . The answer is 14. This result was first published by Kazimierz Kuratowski in 1922. [ 1 ] It gained additional exposure in Kuratowski's fundamental monograph Topologie (first published in French in 1933; the first English translation appeared in 1966) before achieving fame as a textbook exercise in John L. Kelley 's 1955 classic, General Topology . [ 2 ] Letting S {\displaystyle S} denote an arbitrary subset of a topological space, write k S {\displaystyle kS} for the closure of S {\displaystyle S} and c S {\displaystyle cS} for the complement of S {\displaystyle S} . The following three identities imply that no more than 14 distinct sets are obtainable: (1) k k S = k S {\displaystyle kkS=kS} ; (2) c c S = S {\displaystyle ccS=S} ; (3) k c k c k c k S = k c k S {\displaystyle kckckckS=kckS} . The first two are trivial. The third follows from the identity k i k i S = k i S {\displaystyle kikiS=kiS} , where i S {\displaystyle iS} is the interior of S {\displaystyle S} , which is equal to the complement of the closure of the complement of S {\displaystyle S} , i S = c k c S {\displaystyle iS=ckcS} . (The operation k i = k c k c {\displaystyle ki=kckc} is idempotent.) A subset realizing the maximum of 14 is called a 14-set . The space of real numbers under the usual topology contains 14-sets. Here is one example: ( 0 , 1 ) ∪ ( 1 , 2 ) ∪ { 3 } ∪ ( [ 4 , 5 ] ∩ Q ) {\displaystyle (0,1)\cup (1,2)\cup \{3\}\cup {\bigl (}[4,5]\cap \mathbb {Q} {\bigr )},} where ( 1 , 2 ) {\displaystyle (1,2)} denotes an open interval and [ 4 , 5 ] {\displaystyle [4,5]} denotes a closed interval. Let X {\displaystyle X} denote this set. Then 14 distinct sets are accessible by applying k and c to X {\displaystyle X} . Despite its origin within the context of a topological space, Kuratowski's closure-complement problem is actually more algebraic than topological. A surprising abundance of closely related problems and results have appeared since 1960, many of which have little or nothing to do with point-set topology. [ 3 ] The closure-complement operations yield a monoid that can be used to classify topological spaces. [ 4 ]
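The bound of 14 can be verified purely algebraically: words in k and c reduced by the three identities above form a monoid with exactly 14 elements. A small sketch (the string-rewriting encoding is an illustrative choice):

```python
from collections import deque

# Words in k (closure) and c (complement), rewritten by the identities
# kk = k, cc = identity, kckckck = kck.
RULES = [("kk", "k"), ("cc", ""), ("kckckck", "kck")]

def reduce_word(w):
    """Apply the rewriting rules until no rule matches."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            if lhs in w:
                w = w.replace(lhs, rhs, 1)
                changed = True
    return w

seen, queue = {""}, deque([""])
while queue:                      # breadth-first search over reduced words
    w = queue.popleft()
    for op in "kc":
        v = reduce_word(w + op)
        if v not in seen:
            seen.add(v)
            queue.append(v)

print(len(seen))                  # 14, counting the identity operation
print(sorted(seen, key=lambda w: (len(w), w)))
```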
https://en.wikipedia.org/wiki/Kuratowski's_closure-complement_problem
Kuratowski's free set theorem , named after Kazimierz Kuratowski , is a result of set theory , an area of mathematics . It was largely forgotten for decades, but has recently been applied in solving several lattice theory problems, such as the congruence lattice problem . Denote by [ X ] < ω {\displaystyle [X]^{<\omega }} the set of all finite subsets of a set X {\displaystyle X} . Likewise, for a positive integer n {\displaystyle n} , denote by [ X ] n {\displaystyle [X]^{n}} the set of all n {\displaystyle n} -element subsets of X {\displaystyle X} . For a mapping Φ : [ X ] n → [ X ] < ω {\displaystyle \Phi \colon [X]^{n}\to [X]^{<\omega }} , we say that a subset U {\displaystyle U} of X {\displaystyle X} is free (with respect to Φ {\displaystyle \Phi } ) if for any n {\displaystyle n} -element subset V {\displaystyle V} of U {\displaystyle U} and any u ∈ U ∖ V {\displaystyle u\in U\setminus V} , u ∉ Φ ( V ) {\displaystyle u\notin \Phi (V)} . Kuratowski published in 1951 the following result, which characterizes the infinite cardinals of the form ℵ n {\displaystyle \aleph _{n}} : let n {\displaystyle n} be a positive integer and let X {\displaystyle X} be a set. Then the cardinality of X {\displaystyle X} is greater than or equal to ℵ n {\displaystyle \aleph _{n}} if and only if for every mapping Φ {\displaystyle \Phi } from [ X ] n {\displaystyle [X]^{n}} to [ X ] < ω {\displaystyle [X]^{<\omega }} , there exists an ( n + 1 ) {\displaystyle (n+1)} -element free subset of X {\displaystyle X} with respect to Φ {\displaystyle \Phi } . For n = 1 {\displaystyle n=1} , Kuratowski's free set theorem is superseded by Hajnal's set mapping theorem .
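The definition of a free set is easy to state in code. The toy example below (the divisor mapping and the finite ground set are illustrative choices; the theorem itself concerns infinite cardinals) searches for 2-element free subsets with respect to a map Φ from 1-element subsets to finite subsets:

```python
from itertools import combinations

def is_free(U, Phi, n):
    """U is free w.r.t. Phi if no u in U lies in Phi(V) for any
    n-element subset V of U with u outside V."""
    return all(u not in Phi(frozenset(V))
               for V in combinations(U, n)
               for u in set(U) - set(V))

X = range(1, 13)
# Phi sends {x} to the set of proper divisors of x (a finite subset of X)
Phi = lambda V: frozenset(d for x in V for d in X if x % d == 0 and d != x)

free_pairs = [U for U in combinations(X, 2) if is_free(U, Phi, 1)]
print(free_pairs[:6])   # pairs in which neither element properly divides the other
```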
https://en.wikipedia.org/wiki/Kuratowski's_free_set_theorem
In topology and related branches of mathematics , the Kuratowski closure axioms are a set of axioms that can be used to define a topological structure on a set . They are equivalent to the more commonly used open set definition. They were first formalized by Kazimierz Kuratowski , [ 1 ] and the idea was further studied by mathematicians such as Wacław Sierpiński and António Monteiro , [ 2 ] among others. A similar set of axioms can be used to define a topological structure using only the dual notion of interior operator . [ 3 ] Let X {\displaystyle X} be an arbitrary set and ℘ ( X ) {\displaystyle \wp (X)} its power set . A Kuratowski closure operator is a unary operation c : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {c} :\wp (X)\to \wp (X)} with the following properties: [K1] It preserves the empty set : c ( ∅ ) = ∅ {\displaystyle \mathbf {c} (\varnothing )=\varnothing } ; [K2] It is extensive : for all A ⊆ X {\displaystyle A\subseteq X} , A ⊆ c ( A ) {\displaystyle A\subseteq \mathbf {c} (A)} ; [K3] It is idempotent : for all A ⊆ X {\displaystyle A\subseteq X} , c ( A ) = c ( c ( A ) ) {\displaystyle \mathbf {c} (A)=\mathbf {c} (\mathbf {c} (A))} ; [K4] It preserves binary unions : for all A , B ⊆ X {\displaystyle A,B\subseteq X} , c ( A ∪ B ) = c ( A ) ∪ c ( B ) {\displaystyle \mathbf {c} (A\cup B)=\mathbf {c} (A)\cup \mathbf {c} (B)} . A consequence of c {\displaystyle \mathbf {c} } preserving binary unions is the following condition: [ 4 ] [K4'] It is isotonic : A ⊆ B ⟹ c ( A ) ⊆ c ( B ) {\displaystyle A\subseteq B\implies \mathbf {c} (A)\subseteq \mathbf {c} (B)} . In fact, if we rewrite the equality in [K4] as an inclusion, giving the weaker axiom [K4''] ( subadditivity ): c ( A ∪ B ) ⊆ c ( A ) ∪ c ( B ) {\displaystyle \mathbf {c} (A\cup B)\subseteq \mathbf {c} (A)\cup \mathbf {c} (B)} , then it is easy to see that axioms [K4'] and [K4''] together are equivalent to [K4] (see the next-to-last paragraph of Proof 2 below). Kuratowski (1966) includes a fifth (optional) axiom requiring that singleton sets should be stable under closure: for all x ∈ X {\displaystyle x\in X} , c ( { x } ) = { x } {\displaystyle \mathbf {c} (\{x\})=\{x\}} . He refers to topological spaces which satisfy all five axioms as T 1 -spaces , in contrast to the more general spaces which only satisfy the four listed axioms. Indeed, these spaces correspond exactly to the topological T 1 -spaces via the usual correspondence (see below). [ 5 ] If requirement [K3] is omitted, then the axioms define a Čech closure operator . [ 6 ] If [K1] is omitted instead, then an operator satisfying [K2] , [K3] and [K4'] is said to be a Moore closure operator . [ 7 ] A pair ( X , c ) {\displaystyle (X,\mathbf {c} )} is called a Kuratowski , Čech or Moore closure space depending on the axioms satisfied by c {\displaystyle \mathbf {c} } . The four Kuratowski closure axioms can be replaced by a single condition, given by Pervin, from which axioms [K1] – [K4] can be derived as consequences. [ 8 ] Alternatively, Monteiro (1945) had proposed a weaker axiom [M] that only entails [K2] – [K4] . [ 9 ] Requirement [K1] is independent of [M] : indeed, if X ≠ ∅ {\displaystyle X\neq \varnothing } , the operator c ⋆ : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {c} ^{\star }:\wp (X)\to \wp (X)} defined by the constant assignment A ↦ c ⋆ ( A ) := X {\displaystyle A\mapsto \mathbf {c} ^{\star }(A):=X} satisfies [M] but does not preserve the empty set, since c ⋆ ( ∅ ) = X {\displaystyle \mathbf {c} ^{\star }(\varnothing )=X} . Notice that, by definition, any operator satisfying [M] is a Moore closure operator. A more symmetric alternative to [M] was also proven by M. O. Botelho and M. H. 
Teixeira to imply axioms [K2] – [K4] : [ 2 ] A dual notion to Kuratowski closure operators is that of Kuratowski interior operator , which is a map i : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {i} :\wp (X)\to \wp (X)} satisfying the following similar requirements: [ 3 ] [I2] It is intensive : for all A ⊆ X {\displaystyle A\subseteq X} , i ( A ) ⊆ A {\displaystyle \mathbf {i} (A)\subseteq A} ; [I3] It is idempotent : for all A ⊆ X {\displaystyle A\subseteq X} , i ( i ( A ) ) = i ( A ) {\displaystyle \mathbf {i} (\mathbf {i} (A))=\mathbf {i} (A)} ; For these operators, one can reach conclusions that are completely analogous to what was inferred for Kuratowski closures. For example, all Kuratowski interior operators are isotonic , i.e. they satisfy [K4'] , and because of intensivity [I2] , it is possible to weaken the equality in [I3] to a simple inclusion. The duality between Kuratowski closures and interiors is provided by the natural complement operator on ℘ ( X ) {\displaystyle \wp (X)} , the map n : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {n} :\wp (X)\to \wp (X)} sending A ↦ n ( A ) := X ∖ A {\displaystyle A\mapsto \mathbf {n} (A):=X\setminus A} . This map is an orthocomplementation on the power set lattice, meaning it satisfies De Morgan's laws : if I {\displaystyle {\mathcal {I}}} is an arbitrary set of indices and { A i } i ∈ I ⊆ ℘ ( X ) {\displaystyle \{A_{i}\}_{i\in {\mathcal {I}}}\subseteq \wp (X)} , n ( ⋃ i ∈ I A i ) = ⋂ i ∈ I n ( A i ) , n ( ⋂ i ∈ I A i ) = ⋃ i ∈ I n ( A i ) . {\displaystyle \mathbf {n} \left(\bigcup _{i\in {\mathcal {I}}}A_{i}\right)=\bigcap _{i\in {\mathcal {I}}}\mathbf {n} (A_{i}),\qquad \mathbf {n} \left(\bigcap _{i\in {\mathcal {I}}}A_{i}\right)=\bigcup _{i\in {\mathcal {I}}}\mathbf {n} (A_{i}).} By employing these laws, together with the defining properties of n {\displaystyle \mathbf {n} } , one can show that any Kuratowski interior induces a Kuratowski closure (and vice versa), via the defining relation c := n i n {\displaystyle \mathbf {c} :=\mathbf {nin} } (and i := n c n {\displaystyle \mathbf {i} :=\mathbf {ncn} } ). Every result obtained concerning c {\displaystyle \mathbf {c} } may be converted into a result concerning i {\displaystyle \mathbf {i} } by employing these relations in conjunction with the properties of the orthocomplementation n {\displaystyle \mathbf {n} } . Pervin (1964) further provides analogous axioms for Kuratowski exterior operators [ 3 ] and Kuratowski boundary operators , [ 10 ] which also induce Kuratowski closures via the relations c := n e {\displaystyle \mathbf {c} :=\mathbf {ne} } and c ( A ) := A ∪ b ( A ) {\displaystyle \mathbf {c} (A):=A\cup \mathbf {b} (A)} . Notice that axioms [K1] – [K4] may be adapted to define an abstract unary operation c : L → L {\displaystyle \mathbf {c} :L\to L} on a general bounded lattice ( L , ∧ , ∨ , 0 , 1 ) {\displaystyle (L,\land ,\lor ,\mathbf {0} ,\mathbf {1} )} , by formally substituting set-theoretic inclusion with the partial order associated to the lattice, set-theoretic union with the join operation, and set-theoretic intersections with the meet operation; similarly for axioms [I1] – [I4] . If the lattice is orthocomplemented, these two abstract operations induce one another in the usual way. Abstract closure or interior operators can be used to define a generalized topology on the lattice. 
Since neither unions nor the empty set appear in the requirement for a Moore closure operator, the definition may be adapted to define an abstract unary operator c : S → S {\displaystyle \mathbf {c} :S\to S} on an arbitrary poset S {\displaystyle S} . A closure operator naturally induces a topology as follows. Let X {\displaystyle X} be an arbitrary set. We shall say that a subset C ⊆ X {\displaystyle C\subseteq X} is closed with respect to a Kuratowski closure operator c : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {c} :\wp (X)\to \wp (X)} if and only if it is a fixed point of said operator, or in other words it is stable under c {\displaystyle \mathbf {c} } , i.e. c ( C ) = C {\displaystyle \mathbf {c} (C)=C} . The claim is that the family of all subsets of the total space that are complements of closed sets satisfies the three usual requirements for a topology, or equivalently, the family S [ c ] {\displaystyle {\mathfrak {S}}[\mathbf {c} ]} of all closed sets satisfies the following: [T2] It is complete under arbitrary intersections , i.e. if I {\displaystyle {\mathcal {I}}} is an arbitrary set of indices and { C i } i ∈ I ⊆ S [ c ] {\displaystyle \{C_{i}\}_{i\in {\mathcal {I}}}\subseteq {\mathfrak {S}}[\mathbf {c} ]} , then ⋂ i ∈ I C i ∈ S [ c ] {\textstyle \bigcap _{i\in {\mathcal {I}}}C_{i}\in {\mathfrak {S}}[\mathbf {c} ]} ; Notice that, by idempotency [K3] , one may succinctly write S [ c ] = im ⁡ ( c ) {\displaystyle {\mathfrak {S}}[\mathbf {c} ]=\operatorname {im} (\mathbf {c} )} . [T1] By extensivity [K2] , X ⊆ c ( X ) {\displaystyle X\subseteq \mathbf {c} (X)} and since closure maps the power set of X {\displaystyle X} into itself (that is, the image of any subset is a subset of X {\displaystyle X} ), c ( X ) ⊆ X {\displaystyle \mathbf {c} (X)\subseteq X} we have X = c ( X ) {\displaystyle X=\mathbf {c} (X)} . Thus X ∈ S [ c ] {\displaystyle X\in {\mathfrak {S}}[\mathbf {c} ]} . The preservation of the empty set [K1] readily implies ∅ ∈ S [ c ] {\displaystyle \varnothing \in {\mathfrak {S}}[\mathbf {c} ]} . [T2] Next, let I {\displaystyle {\mathcal {I}}} be an arbitrary set of indices and let C i {\displaystyle C_{i}} be closed for every i ∈ I {\displaystyle i\in {\mathcal {I}}} . By extensivity [K2] , ⋂ i ∈ I C i ⊆ c ( ⋂ i ∈ I C i ) {\textstyle \bigcap _{i\in {\mathcal {I}}}C_{i}\subseteq \mathbf {c} \left(\bigcap _{i\in {\mathcal {I}}}C_{i}\right)} . Also, by isotonicity [K4'] , if ⋂ i ∈ I C i ⊆ C i {\textstyle \bigcap _{i\in {\mathcal {I}}}C_{i}\subseteq C_{i}} for all indices i ∈ I {\displaystyle i\in {\mathcal {I}}} , then c ( ⋂ i ∈ I C i ) ⊆ c ( C i ) = C i {\textstyle \mathbf {c} \left(\bigcap _{i\in {\mathcal {I}}}C_{i}\right)\subseteq \mathbf {c} (C_{i})=C_{i}} for all i ∈ I {\displaystyle i\in {\mathcal {I}}} , which implies c ( ⋂ i ∈ I C i ) ⊆ ⋂ i ∈ I C i {\textstyle \mathbf {c} \left(\bigcap _{i\in {\mathcal {I}}}C_{i}\right)\subseteq \bigcap _{i\in {\mathcal {I}}}C_{i}} . Therefore, ⋂ i ∈ I C i = c ( ⋂ i ∈ I C i ) {\textstyle \bigcap _{i\in {\mathcal {I}}}C_{i}=\mathbf {c} \left(\bigcap _{i\in {\mathcal {I}}}C_{i}\right)} , meaning ⋂ i ∈ I C i ∈ S [ c ] {\textstyle \bigcap _{i\in {\mathcal {I}}}C_{i}\in {\mathfrak {S}}[\mathbf {c} ]} . [T3] Finally, let I {\displaystyle {\mathcal {I}}} be a finite set of indices and let C i {\displaystyle C_{i}} be closed for every i ∈ I {\displaystyle i\in {\mathcal {I}}} . 
From the preservation of binary unions [K4] , and using induction on the number of subsets of which we take the union, we have ⋃ i ∈ I C i = c ( ⋃ i ∈ I C i ) {\textstyle \bigcup _{i\in {\mathcal {I}}}C_{i}=\mathbf {c} \left(\bigcup _{i\in {\mathcal {I}}}C_{i}\right)} . Thus, ⋃ i ∈ I C i ∈ S [ c ] {\textstyle \bigcup _{i\in {\mathcal {I}}}C_{i}\in {\mathfrak {S}}[\mathbf {c} ]} . Conversely, given a family κ {\displaystyle \kappa } satisfying axioms [T1] – [T3] , it is possible to construct a Kuratowski closure operator in the following way: if A ∈ ℘ ( X ) {\displaystyle A\in \wp (X)} and A ↑ = { B ∈ ℘ ( X ) | A ⊆ B } {\displaystyle A^{\uparrow }=\{B\in \wp (X)\ |\ A\subseteq B\}} is the inclusion upset of A {\displaystyle A} , then c κ ( A ) := ⋂ B ∈ ( κ ∩ A ↑ ) B {\displaystyle \mathbf {c} _{\kappa }(A):=\bigcap _{B\in (\kappa \cap A^{\uparrow })}B} defines a Kuratowski closure operator c κ {\displaystyle \mathbf {c} _{\kappa }} on ℘ ( X ) {\displaystyle \wp (X)} . [K1] Since ∅ ↑ = ℘ ( X ) {\displaystyle \varnothing ^{\uparrow }=\wp (X)} , c κ ( ∅ ) {\displaystyle \mathbf {c} _{\kappa }(\varnothing )} reduces to the intersection of all sets in the family κ {\displaystyle \kappa } ; but ∅ ∈ κ {\displaystyle \varnothing \in \kappa } by axiom [T1] , so the intersection collapses to the null set and [K1] follows. [K2] By definition of A ↑ {\displaystyle A^{\uparrow }} , we have that A ⊆ B {\displaystyle A\subseteq B} for all B ∈ ( κ ∩ A ↑ ) {\displaystyle B\in \left(\kappa \cap A^{\uparrow }\right)} , and thus A {\displaystyle A} must be contained in the intersection of all such sets. Hence follows extensivity [K2] . [K3] Notice that, for all A ∈ ℘ ( X ) {\displaystyle A\in \wp (X)} , the family c κ ( A ) ↑ ∩ κ {\displaystyle \mathbf {c} _{\kappa }(A)^{\uparrow }\cap \kappa } contains c κ ( A ) {\displaystyle \mathbf {c} _{\kappa }(A)} itself as a minimal element w.r.t. inclusion. Hence c κ 2 ( A ) = ⋂ B ∈ c κ ( A ) ↑ ∩ κ B = c κ ( A ) {\textstyle \mathbf {c} _{\kappa }^{2}(A)=\bigcap _{B\in \mathbf {c} _{\kappa }(A)^{\uparrow }\cap \kappa }B=\mathbf {c} _{\kappa }(A)} , which is idempotence [K3] . [K4'] Let A ⊆ B ⊆ X {\displaystyle A\subseteq B\subseteq X} : then B ↑ ⊆ A ↑ {\displaystyle B^{\uparrow }\subseteq A^{\uparrow }} , and thus κ ∩ B ↑ ⊆ κ ∩ A ↑ {\displaystyle \kappa \cap B^{\uparrow }\subseteq \kappa \cap A^{\uparrow }} . Since the latter family may contain more elements than the former, we find c κ ( A ) ⊆ c κ ( B ) {\displaystyle \mathbf {c} _{\kappa }(A)\subseteq \mathbf {c} _{\kappa }(B)} , which is isotonicity [K4'] . Notice that isotonicity implies c κ ( A ) ⊆ c κ ( A ∪ B ) {\displaystyle \mathbf {c} _{\kappa }(A)\subseteq \mathbf {c} _{\kappa }(A\cup B)} and c κ ( B ) ⊆ c κ ( A ∪ B ) {\displaystyle \mathbf {c} _{\kappa }(B)\subseteq \mathbf {c} _{\kappa }(A\cup B)} , which together imply c κ ( A ) ∪ c κ ( B ) ⊆ c κ ( A ∪ B ) {\displaystyle \mathbf {c} _{\kappa }(A)\cup \mathbf {c} _{\kappa }(B)\subseteq \mathbf {c} _{\kappa }(A\cup B)} . [K4] Finally, fix A , B ∈ ℘ ( X ) {\displaystyle A,B\in \wp (X)} . Axiom [T2] implies c κ ( A ) , c κ ( B ) ∈ κ {\displaystyle \mathbf {c} _{\kappa }(A),\mathbf {c} _{\kappa }(B)\in \kappa } ; furthermore, axiom [T3] implies that c κ ( A ) ∪ c κ ( B ) ∈ κ {\displaystyle \mathbf {c} _{\kappa }(A)\cup \mathbf {c} _{\kappa }(B)\in \kappa } . 
By extensivity [K2] one has c κ ( A ) ∈ A ↑ {\displaystyle \mathbf {c} _{\kappa }(A)\in A^{\uparrow }} and c κ ( B ) ∈ B ↑ {\displaystyle \mathbf {c} _{\kappa }(B)\in B^{\uparrow }} , so that c κ ( A ) ∪ c κ ( B ) ∈ ( A ↑ ) ∩ ( B ↑ ) {\displaystyle \mathbf {c} _{\kappa }(A)\cup \mathbf {c} _{\kappa }(B)\in \left(A^{\uparrow }\right)\cap \left(B^{\uparrow }\right)} . But ( A ↑ ) ∩ ( B ↑ ) = ( A ∪ B ) ↑ {\displaystyle \left(A^{\uparrow }\right)\cap \left(B^{\uparrow }\right)=(A\cup B)^{\uparrow }} , so that all in all c κ ( A ) ∪ c κ ( B ) ∈ κ ∩ ( A ∪ B ) ↑ {\displaystyle \mathbf {c} _{\kappa }(A)\cup \mathbf {c} _{\kappa }(B)\in \kappa \cap (A\cup B)^{\uparrow }} . Since then c κ ( A ∪ B ) {\displaystyle \mathbf {c} _{\kappa }(A\cup B)} is a minimal element of κ ∩ ( A ∪ B ) ↑ {\displaystyle \kappa \cap (A\cup B)^{\uparrow }} w.r.t. inclusion, we find c κ ( A ∪ B ) ⊆ c κ ( A ) ∪ c κ ( B ) {\displaystyle \mathbf {c} _{\kappa }(A\cup B)\subseteq \mathbf {c} _{\kappa }(A)\cup \mathbf {c} _{\kappa }(B)} . Point 4. ensures additivity [K4] . In fact, these two complementary constructions are inverse to one another: if C l s K ( X ) {\displaystyle \mathrm {Cls} _{\text{K}}(X)} is the collection of all Kuratowski closure operators on X {\displaystyle X} , and A t p ( X ) {\displaystyle \mathrm {Atp} (X)} is the collection of all families consisting of complements of all sets in a topology, i.e. the collection of all families satisfying [T1] – [T3] , then S : C l s K ( X ) → A t p ( X ) {\displaystyle {\mathfrak {S}}:\mathrm {Cls} _{\text{K}}(X)\to \mathrm {Atp} (X)} such that c ↦ S [ c ] {\displaystyle \mathbf {c} \mapsto {\mathfrak {S}}[\mathbf {c} ]} is a bijection, whose inverse is given by the assignment C : κ ↦ c κ {\displaystyle {\mathfrak {C}}:\kappa \mapsto \mathbf {c} _{\kappa }} . First we prove that C ∘ S = 1 C l s K ( X ) {\displaystyle {\mathfrak {C}}\circ {\mathfrak {S}}={\mathfrak {1}}_{\mathrm {Cls} _{\text{K}}(X)}} , the identity operator on C l s K ( X ) {\displaystyle \mathrm {Cls} _{\text{K}}(X)} . For a given Kuratowski closure c ∈ C l s K ( X ) {\displaystyle \mathbf {c} \in \mathrm {Cls} _{\text{K}}(X)} , define c ′ := C [ S [ c ] ] {\displaystyle \mathbf {c} ':={\mathfrak {C}}[{\mathfrak {S}}[\mathbf {c} ]]} ; then if A ∈ ℘ ( X ) {\displaystyle A\in \wp (X)} its primed closure c ′ ( A ) {\displaystyle \mathbf {c} '(A)} is the intersection of all c {\displaystyle \mathbf {c} } -stable sets that contain A {\displaystyle A} . Its non-primed closure c ( A ) {\displaystyle \mathbf {c} (A)} satisfies this description: by extensivity [K2] we have A ⊆ c ( A ) {\displaystyle A\subseteq \mathbf {c} (A)} , and by idempotence [K3] we have c ( c ( A ) ) = c ( A ) {\displaystyle \mathbf {c} (\mathbf {c} (A))=\mathbf {c} (A)} , and thus c ( A ) ∈ ( A ↑ ∩ S [ c ] ) {\displaystyle \mathbf {c} (A)\in \left(A^{\uparrow }\cap {\mathfrak {S}}[\mathbf {c} ]\right)} . Now, let C ∈ ( A ↑ ∩ S [ c ] ) {\displaystyle C\in \left(A^{\uparrow }\cap {\mathfrak {S}}[\mathbf {c} ]\right)} such that A ⊆ C ⊆ c ( A ) {\displaystyle A\subseteq C\subseteq \mathbf {c} (A)} : by isotonicity [K4'] we have c ( A ) ⊆ c ( C ) {\displaystyle \mathbf {c} (A)\subseteq \mathbf {c} (C)} , and since c ( C ) = C {\displaystyle \mathbf {c} (C)=C} we conclude that C = c ( A ) {\displaystyle C=\mathbf {c} (A)} . Hence c ( A ) {\displaystyle \mathbf {c} (A)} is the minimal element of A ↑ ∩ S [ c ] {\displaystyle A^{\uparrow }\cap {\mathfrak {S}}[\mathbf {c} ]} w.r.t. 
inclusion, implying c ′ ( A ) = c ( A ) {\displaystyle \mathbf {c} '(A)=\mathbf {c} (A)} . Now we prove that S ∘ C = 1 A t p ( X ) {\displaystyle {\mathfrak {S}}\circ {\mathfrak {C}}={\mathfrak {1}}_{\mathrm {Atp} (X)}} . If κ ∈ A t p ( X ) {\displaystyle \kappa \in \mathrm {Atp} (X)} and κ ′ := S [ C [ κ ] ] {\displaystyle \kappa ':={\mathfrak {S}}[{\mathfrak {C}}[\kappa ]]} is the family of all sets that are stable under c κ {\displaystyle \mathbf {c} _{\kappa }} , the result follows if both κ ′ ⊆ κ {\displaystyle \kappa '\subseteq \kappa } and κ ⊆ κ ′ {\displaystyle \kappa \subseteq \kappa '} . Let A ∈ κ ′ {\displaystyle A\in \kappa '} : hence c κ ( A ) = A {\displaystyle \mathbf {c} _{\kappa }(A)=A} . Since c κ ( A ) {\displaystyle \mathbf {c} _{\kappa }(A)} is the intersection of an arbitrary subfamily of κ {\displaystyle \kappa } , and the latter is complete under arbitrary intersections by [T2] , then A = c κ ( A ) ∈ κ {\displaystyle A=\mathbf {c} _{\kappa }(A)\in \kappa } . Conversely, if A ∈ κ {\displaystyle A\in \kappa } , then c κ ( A ) {\displaystyle \mathbf {c} _{\kappa }(A)} is the minimal superset of A {\displaystyle A} that is contained in κ {\displaystyle \kappa } . But that is trivially A {\displaystyle A} itself, implying A ∈ κ ′ {\displaystyle A\in \kappa '} . We observe that one may also extend the bijection S {\displaystyle {\mathfrak {S}}} to the collection C l s C ˇ ( X ) {\displaystyle \mathrm {Cls} _{\check {C}}(X)} of all Čech closure operators, which strictly contains C l s K ( X ) {\displaystyle \mathrm {Cls} _{\text{K}}(X)} ; this extension S ¯ {\displaystyle {\overline {\mathfrak {S}}}} is also surjective, which signifies that all Čech closure operators on X {\displaystyle X} also induce a topology on X {\displaystyle X} . [ 11 ] However, this means that S ¯ {\displaystyle {\overline {\mathfrak {S}}}} is no longer a bijection. A pair of Kuratowski closures c 1 , c 2 : ℘ ( X ) → ℘ ( X ) {\displaystyle \mathbf {c} _{1},\mathbf {c} _{2}:\wp (X)\to \wp (X)} such that c 2 ( A ) ⊆ c 1 ( A ) {\displaystyle \mathbf {c} _{2}(A)\subseteq \mathbf {c} _{1}(A)} for all A ∈ ℘ ( X ) {\displaystyle A\in \wp (X)} induce topologies τ 1 , τ 2 {\displaystyle \tau _{1},\tau _{2}} such that τ 1 ⊆ τ 2 {\displaystyle \tau _{1}\subseteq \tau _{2}} , and vice versa. In other words, c 1 {\displaystyle \mathbf {c} _{1}} dominates c 2 {\displaystyle \mathbf {c} _{2}} if and only if the topology induced by the latter is a refinement of the topology induced by the former, or equivalently S [ c 1 ] ⊆ S [ c 2 ] {\displaystyle {\mathfrak {S}}[\mathbf {c} _{1}]\subseteq {\mathfrak {S}}[\mathbf {c} _{2}]} . [ 13 ] For example, c ⊤ {\displaystyle \mathbf {c} _{\top }} clearly dominates c ⊥ {\displaystyle \mathbf {c} _{\bot }} (the latter just being the identity on ℘ ( X ) {\displaystyle \wp (X)} ). Since the same conclusion can be reached substituting τ i {\displaystyle \tau _{i}} with the family κ i {\displaystyle \kappa _{i}} containing the complements of all its members, if C l s K ( X ) {\displaystyle \mathrm {Cls} _{\text{K}}(X)} is endowed with the partial order c ≤ c ′ ⟺ c ( A ) ⊆ c ′ ( A ) {\displaystyle \mathbf {c} \leq \mathbf {c} '\iff \mathbf {c} (A)\subseteq \mathbf {c} '(A)} for all A ∈ ℘ ( X ) {\displaystyle A\in \wp (X)} and A t p ( X ) {\displaystyle \mathrm {Atp} (X)} is endowed with the refinement order, then we may conclude that S {\displaystyle {\mathfrak {S}}} is an antitonic mapping between posets. 
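The correspondence just proved is easy to test on a small finite space. The sketch below (an illustrative three-point topology) builds c κ as the intersection of closed supersets and checks axioms [K1]–[K4] by brute force:

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})
# closed sets of a topology on X, i.e. a family kappa satisfying [T1]-[T3]
kappa = [frozenset(), frozenset({1}), frozenset({2, 3}), X]

def c(A):
    """c_kappa(A): intersection of all members of kappa containing A."""
    out = frozenset(X)
    for C in kappa:
        if A <= C:
            out &= C
    return out

def subsets(S):
    return [frozenset(t) for t in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

P = subsets(X)
assert c(frozenset()) == frozenset()                          # [K1]
assert all(A <= c(A) for A in P)                              # [K2]
assert all(c(c(A)) == c(A) for A in P)                        # [K3]
assert all(c(A | B) == c(A) | c(B) for A in P for B in P)     # [K4]
print("all Kuratowski axioms hold")
```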
In any induced topology (relative to the subset $A$) the closed sets give rise to a new closure operator that is just the original closure operator restricted to $A$: $\mathbf{c}_A(B) = A \cap \mathbf{c}_X(B)$, for all $B \subseteq A$. [ 14 ] A function $f : (X, \mathbf{c}) \to (Y, \mathbf{c}')$ is continuous at a point $p$ iff $p \in \mathbf{c}(A) \Rightarrow f(p) \in \mathbf{c}'(f(A))$, and it is continuous everywhere iff $f(\mathbf{c}(A)) \subseteq \mathbf{c}'(f(A))$ for all subsets $A \in \wp(X)$. [ 15 ] The mapping $f$ is a closed map iff the reverse inclusion holds, [ 16 ] and it is a homeomorphism iff it is both continuous and closed, i.e. iff equality holds. [ 17 ] Let $(X, \mathbf{c})$ be a Kuratowski closure space. A point $p$ is close to a subset $A$ if $p \in \mathbf{c}(A)$. This can be used to define a proximity relation on the points and subsets of a set. [ 21 ] Two sets $A, B \in \wp(X)$ are separated iff $(A \cap \mathbf{c}(B)) \cup (B \cap \mathbf{c}(A)) = \varnothing$. The space $X$ is connected iff it cannot be written as the union of two nonempty separated subsets. [ 22 ]
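Continuing the sketch above (reusing its made-up `X`, `kappa` and `c`), separation and connectedness can be decided directly from the closure operator; here the closed sets $\{1\}$ and $\{2, 3\}$ partition $X$, so this toy space is disconnected:

```python
def separated(A, B, c):
    """A and B are separated when (A ∩ c(B)) ∪ (B ∩ c(A)) is empty."""
    return not (A & c(B)) and not (B & c(A))

def is_connected(X, c):
    """X is connected iff it is not the union of two nonempty separated sets."""
    pts = sorted(X)
    for mask in range(1, 2 ** len(pts) - 1):   # proper nonempty bipartitions
        A = frozenset(p for i, p in enumerate(pts) if mask >> i & 1)
        B = frozenset(X) - A
        if separated(A, B, c):
            return False
    return True

print(is_connected(X, c))   # False: {1} and {2, 3} are separated
```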
https://en.wikipedia.org/wiki/Kuratowski_closure_axioms
In mathematics, the Kuratowski–Ulam theorem, introduced by Kazimierz Kuratowski and Stanisław Ulam (1932) and also called the Fubini theorem for category, is an analog of Fubini's theorem for arbitrary second countable Baire spaces. Let $X$ and $Y$ be second countable Baire spaces (or, in particular, Polish spaces), and let $A \subset X \times Y$. Then, if $A$ has the Baire property, the following are equivalent: 1. $A$ is meager in $X \times Y$; 2. the set $\{x \in X : A_x \text{ is meager in } Y\}$ is comeager in $X$, where $A_x = \{y \in Y : (x, y) \in A\}$ denotes the $x$-section of $A$. Even if $A$ does not have the Baire property, 2. follows from 1. [ 1 ] Note that the theorem still holds (perhaps vacuously) for $X$ an arbitrary Hausdorff space and $Y$ a Hausdorff space with countable π-base. The theorem is analogous to the regular Fubini's theorem for the case where the considered function is a characteristic function of a subset in a product space, with the usual correspondences, namely, meagre set with a set of measure zero, comeagre set with one of full measure, and a set with the Baire property with a measurable set.
https://en.wikipedia.org/wiki/Kuratowski–Ulam_theorem
The Kurchatov Medal, or the Gold Medal in honour of Igor Kurchatov, is an award given for outstanding achievements in nuclear physics and in the field of nuclear energy. The USSR Academy of Sciences established this award on February 9, 1960 in honour of Igor Kurchatov and in recognition of his lifetime contributions to the fields of nuclear physics, nuclear energy and nuclear engineering. [ 1 ] In the USSR, the Kurchatov Medal was awarded every three years starting in 1962. An honorarium was included as part of the award through 1989. In Russia, the Kurchatov Gold Medal was later resumed, and the medal has been awarded by the Russian Academy of Sciences since 1998.
https://en.wikipedia.org/wiki/Kurchatov_Medal
The Kurnakov test, also known as Kurnakov's reaction, is a chemical test that distinguishes pairs of cis- and trans-isomers of [PtA2X2] (A = NH3, X = halide or pseudohalide). Upon treatment with thiourea, the trans-dihalides give less soluble white products, whereas the cis-dihalides give more soluble yellow products. The test is still used to assay samples of the drug cisplatin, but it is mainly of pedagogical interest, as it illustrates the trans effect. The test was devised by Soviet chemist Nikolai Kurnakov. [ 1 ] [ 2 ] [ 3 ] The Kurnakov test is sometimes used to detect transplatin in samples of the drug cisplatin. In hot aqueous solution, the cis-compound reacts with aqueous thiourea (tu) to give a deeper yellow solution, from which yellow needles of [Pt(tu)4]Cl2 deposit on cooling. The trans-compound gives a colourless solution, from which snow-white needles of trans-[Pt(tu)2(NH3)2]Cl2 deposit on cooling. [ 4 ] [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Kurnakov_test
Kuroshima Research Station is a marine research institute in Okinawa, Japan, located on the island of Kuroshima (黒島). [ 1 ] It was established in 1973 as the Yaeyama Marine Park Research Institute, [ 2 ] [ 3 ] for the purpose of managing and utilising the marine park area in Sekisei (石西) lagoon between Ishigaki (石垣) Island and Iriomote (西表) Island, including Kuroshima Island. From the beginning, it operated as an ocean research station, and it existed until 2002 under the financial support of Nagoya Railroad Business Operations Co. Ltd. At present, Kuroshima Research Station belongs to the NPO Sea Turtle Association of Japan, which took over the activities of the institute. [ 3 ] [ 2 ] The institute's activities include 30 years' research into the nesting of sea turtles, including the first confirmed nesting of the hawksbill sea turtle in Japan and confirmation of the nesting of green sea turtles. It has also researched acanthasters and corals. In 2005, it sponsored the Japanese Sea Turtle Conference, which is held every year at sea turtle nesting rookeries in Japan.
https://en.wikipedia.org/wiki/Kuroshima_Research_Station
Kurt Martin Mislow (June 5, 1923 – October 5, 2017) was a German-born American organic chemist who specialized in stereochemistry. Born in Berlin on June 5, 1923, Mislow had moved to London by 1938, after some time in Milan. With the help of his uncle Alfred Eisenstaedt, Mislow's family left London for New York City in 1940. Mislow earned a bachelor's degree in chemistry from Tulane University in 1944, and received a doctorate from the California Institute of Technology, where he was supervised by Linus Pauling. [ 1 ] His thesis was entitled: I. The Synthesis of Potential Antimalarials. Some 2-Substituted 8-(3-Diethylaminopropylamino)-Quinolines. II. Isomorphism in Relation to Serological Specificity. III. A Study of the Hammick Reaction. [ 2 ] Mislow first taught at New York University, then moved to Princeton University in 1964. While at Princeton, Mislow served as Hugh Stott Taylor Professor of Chemistry and led the chemistry department from 1968 to 1974. He became a professor emeritus in 1988. [ 1 ] Over the course of his career, Mislow was named a Guggenheim fellow twice, in 1956 and 1974. Between 1959 and 1963, Mislow held a Sloan Research Fellowship. He became a member of the National Academy of Sciences in 1972, followed by fellowships in the American Academy of Arts and Sciences, granted in 1974, and the American Association for the Advancement of Science, bestowed in 1980. In 1999, Mislow was named a foreign member of the Accademia dei Lincei. The American Chemical Society honored Mislow with several awards, among them the James Flack Norris Award in Physical Organic Chemistry (1975), the William H. Nichols Medal Award (1987), and the Arthur C. Cope Scholar Award (1995). [ 1 ] Mislow died at the age of 94 on October 5, 2017. He was survived by his wife, son, and two grandsons. [ 3 ]
https://en.wikipedia.org/wiki/Kurt_Mislow
Kurt Wüthrich (born 4 October 1938 in Aarberg, Canton of Bern) is a Swiss chemist/biophysicist and Nobel Chemistry laureate, known for developing nuclear magnetic resonance (NMR) methods for studying biological macromolecules. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Born in Aarberg, Switzerland, Wüthrich was educated in chemistry, physics, and mathematics at the University of Bern before pursuing his PhD supervised by Silvio Fallab [ 8 ] at the University of Basel, awarded in 1964. [ 9 ] [ 10 ] After his PhD, Wüthrich continued postdoctoral research with Fallab for a short time before leaving to work at the University of California, Berkeley for two years from 1965 with Robert E. Connick. That was followed by a stint working with Robert G. Shulman at the Bell Telephone Laboratories in Murray Hill, New Jersey from 1967 to 1969. Wüthrich returned to Switzerland, to Zürich, in 1969, where he began his career at the ETH Zürich, rising to Professor of Biophysics by 1980. He currently maintains a laboratory at the ETH Zürich, at The Scripps Research Institute in La Jolla, California, and at the iHuman Institute of ShanghaiTech University. He has also been a visiting professor at the University of Edinburgh (1997–2000), the Chinese University of Hong Kong (where he was an Honorary Professor) and Yonsei University. [ 8 ] During his graduate studies Wüthrich started out working with electron paramagnetic resonance spectroscopy, and the subject of his PhD thesis was "the catalytic activity of copper compounds in autoxidation reactions". [ 11 ] During his time as a postdoc in Berkeley he began working with the newly developed and related technique of nuclear magnetic resonance spectroscopy to study the hydration of metal complexes. When Wüthrich joined the Bell Labs, he was put in charge of one of the first superconducting NMR spectrometers, and started studying the structure and dynamics of proteins. He has pursued this line of research ever since. After returning to Switzerland, Wüthrich collaborated with, among others, Nobel laureate Richard R. Ernst on developing the first two-dimensional NMR experiments, and established the nuclear Overhauser effect as a convenient way of measuring distances within proteins. This research later led to the complete assignment of resonances for, among others, the bovine pancreatic trypsin inhibitor and glucagon. In October 2010, Wüthrich participated in the USA Science and Engineering Festival's Lunch with a Laureate program, in which middle and high school students engage in an informal conversation with a Nobel Prize–winning scientist over a brown-bag lunch. [ 12 ] Wüthrich is also a member of the USA Science and Engineering Festival's Advisory Board [ 13 ] and a supporter of the Campaign for the Establishment of a United Nations Parliamentary Assembly, an organisation which campaigns for democratic reform in the United Nations. [ 14 ] Wüthrich is a member of the Executive Advisory Board of the World.Minds Foundation, where he contributes to international dialogue on science, research, and innovation policy. [ 15 ] He was awarded the Louisa Gross Horwitz Prize from Columbia University in 1991, the Louis-Jeantet Prize for Medicine in 1993, the Otto Warburg Medal in 1999 and half of the Nobel Prize in Chemistry in 2002 for "his development of nuclear magnetic resonance spectroscopy for determining the three-dimensional structure of biological macromolecules in solution".
He received the Bijvoet Medal of the Bijvoet Center for Biomolecular Research of Utrecht University in 2008. [ 16 ] He was elected a Foreign Member of the Royal Society (ForMemRS) in 2010. [ 17 ] He was also awarded the 2018 Fray International Sustainability Award at SIPS 2018 by FLOGEN Star Outreach. [ 18 ] On 2 April 2018, Wüthrich established permanent residency in Shanghai, China, after obtaining a Chinese permanent residence card. [ 19 ] [ 20 ]
https://en.wikipedia.org/wiki/Kurt_Wüthrich
Kurth Kiln was established by the Forests Commission Victoria in 1941 on a site about 7 km north of Gembrook on the Tomahawk Creek. [ 1 ] Dr Ernest Edgar Kurth from the University of Tasmania was commissioned to design the kiln with the aim of mass-producing charcoal as an alternative fuel in response to war-time petrol rationing. [ 2 ] Gembrook was selected as the ideal site for the Kurth Kiln because it fully met the three essential criteria required for successful operation: ample water, abundant wood, and a suitable slope. Dr Kurth was paid £5 for the use of his patented design (No 2563/41) and the total cost of establishing the kiln was 1,799 pounds 17 shillings and 2 pence. [ 3 ] The kiln commenced operation in March 1942, but transport difficulties combined with an oversupply of charcoal from private operators meant the kiln was used only intermittently during 1943 and was shut down soon after. Over the period of its operation, Kurth Kiln produced only 471 tons of charcoal, which represented a tiny fraction of Victoria's total production. [ 4 ] Australia's declaration of war on 3 September 1939 immediately raised concerns about the security of petrol supplies. At the time Australia was totally reliant on imported fuel and had a limited storage capacity. At the start of the war, the country had only three months' supply of petrol, and by May 1940 the Commonwealth Oil Board estimated that only 67 per cent of the total capacity of about 140 million gallons was on hand. [ 5 ] The Federal Government considered increasing the price of fuel to dampen demand. Another plan was that petrol should be merchandised in two colours, blue for commercial vehicles and red for private cars, with red petrol being substantially more expensive than blue. The motoring industry and newspapers resisted the changes. [ 5 ] When France collapsed, the Minister for Supply stressed to Cabinet that continuity of the already erratic deliveries was under threat, and on 6 June 1940 Cabinet finally made the decision that rationing should be introduced to reduce consumption by 50 per cent. Political expediency came into play: petrol rationing was introduced in September 1940 and restrictions were tightened as the War progressed. [ 5 ] On 17 June 1941 the Prime Minister, Robert Menzies, announced further restrictions limiting motorists to two gallons per month (enough for about 1,000 miles per year), so many simply put their cars up on blocks for the duration and switched to public transport or walked. The scheme finally devised for petrol rationing coupons was complicated, and the paperwork was profuse. [ 5 ] Drivers had to apply for a petrol licence, from which they were allocated ration tickets based on an assessment of their needs. Forests Commission firefighting vehicles were exempt from the petrol restrictions. District Foresters were authorised to issue petrol coupons for timber industry trucks, which, in the absence of private cars and utes, often served as family transport too. Country school buses were fitted with gas converters and were banned from running on declared days of Acute Fire Danger. Some of Melbourne's buses also ran on charcoal. [ 6 ] Petrol rationing was not strictly enforced until 1942 but remained in place until February 1950. [ 5 ] During the first half of the twentieth century charcoal production in Victoria was small and employed relatively few people. The charcoal was supplied to blacksmiths, coal depots, gas works and some powerhouses.
The main areas of State forest producing charcoal were in a broad arc across Central Victoria at places like Beaufort, Trentham, Lionville, Macedon, Broadford, the Dandenongs and Gembrook. Charcoal was also produced in East Gippsland, mainly at Nowa Nowa. [ 4 ] The best trees were durable species like red ironbark (Eucalyptus sideroxylon), red box (E. polyanthemos), grey box (E. microcarpa), yellow box (E. melliodora), red gum (E. camaldulensis) and, for lighter grades of charcoal, yellow stringybark (E. muelleriana). [ 7 ] The charcoal was produced by slow and controlled burning of wood in earthen, brick or metal kilns. The simplest method was the "beehive kiln", where wood was stacked vertically into a conical heap to allow air and smoke movement and then covered with a layer of sticks and twigs. The stack was then covered further with a thick layer of earth and ash. Once the stack was alight the small chimney opening at the top was sealed. It then required constant tending to ensure no air entered the stack, and it took about three days to reduce the wood to charcoal. Large beehive kilns each contained about 50 cords of wood and were very labour intensive. They also produced charcoal of uneven consistency with a high ash content and some inevitable soil contamination. Other charcoal producers opted for lined pits in the ground, steel chambers or recycled boilers. [ 7 ] At the urging of the Federal Government, charcoal quickly emerged as a substitute fuel in response to the national petrol rationing. A special vehicle-mounted unit which converted charcoal into flammable gases (principally carbon monoxide and hydrogen) was known to be suitable for powering internal combustion engines. [ 5 ] Charcoal was relatively simple to produce but had a well-deserved reputation for being inconvenient to use for short trips, inefficient, underpowered, dirty, belching black smoke, catching fire and occasionally exploding. [ 8 ] Also, the cost of installing a heavy and cumbersome producer gas kit was about £100, the equivalent of 16 times the average weekly wage, and a bag of charcoal lasted between 30 and 60 miles at a cost of between 6 and 10 shillings. [ 8 ] Charcoal nevertheless had a cost advantage over petrol: earlier tests in 1938-39 indicated savings in the order of 80% for truck operations running on producer gas compared to the cost of petrol. [ 8 ] At the time, a new car, if you could get one, cost around £250 for a small Austin and £525 for a large Buick. Over the war years, some 56,000 gas producer units of varying design were fitted to private and commercial vehicles in Australia. This low conversion rate to producer gas technology represented less than 6.5% of all vehicles on the road by 1944. [ 5 ] The overall consumption of charcoal in Victoria rose thirtyfold from about 110 tons per month before the War to over 3,300 tons by mid-1942. [ 5 ] This massive increase required nearly 280,000 tonnes of dry wood annually to feed the hundreds of kilns set up across State forests, as well as on private land, at a time when labour was critically short. The task of ensuring adequate supplies of charcoal fell to the Forests Commission Victoria (FCV). It subsequently formed the State Charcoal Branch to organise the increased production of charcoal, to build up reserves to meet emergencies and to regulate the cost to consumers.
The assistance of an expert Advisory Panel, representing charcoal producers, manufacturers and distributors of vehicle gas equipment, the Department of Supply and Development, and the Victorian Automobile Chamber of Commerce, was enlisted under the Chairman of the Forests Commission, Alfred Vernon Galbraith. Preliminary arrangements were made for bag supplies, for railway sidings in Melbourne, and for processing of charcoal bought by the Branch in excess of the requirements of private grading firms. In its first year, 17,421 tons of charcoal were produced, compared with 1,650 tons before the War. [ 4 ] Production peaked at 38,922 tons in 1942-43. [ 4 ] The Commission had the added responsibility of providing emergency firewood to Melbourne for heating and cooking purposes as a result of reductions in the supply of coal, electricity and gas. [ 6 ] [ 9 ] The Emergency Firewood Project continued long after the war ended, and over the period from 1941 to 1954 nearly 2 million tons of firewood was produced. [ 10 ] An estimated 221 kilns and 12 pits were producing charcoal by the middle of 1942. Some of the labour was provided by Italian wartime internees. There were also over 600 commercial kilns operating mostly on private property. At least 50 to 60 private charcoal retorts were set up in the Barmah forest alone. [ 6 ] [ 9 ] The majority of the kilns were metal retorts, because charcoal from earthen beehive kilns or unlined pits proved unsuitable for motor vehicles, causing gumming of engine valves and the controls in the gas lines due to the condensation of tar. [ 7 ] The Chairman of the Forests Commission, Alfred Vernon Galbraith, became aware of experiments by Dr Ernest Kurth, Professor of Chemistry at the University of Tasmania, who had been experimenting with the pyrolysis of wood and kiln designs since 1940. His work led to a quarter-size prototype kiln at Dover, Tasmania in March 1941. [ 2 ] A second operational kiln was built at a sawmill near Launceston in January 1942. [ 11 ] In July 1941, Professor Kurth provided details of his kiln design and nine pages of handwritten notes on its operation to Galbraith, which marked the beginning of the project at Gembrook. [ 2 ] The kiln was unusual in that it could operate continually, with top loading of wood billets and bottom recovery of charcoal. A water-cooled grate at the bottom of the stack caused the brittle charred wood to crumble under its own weight into manageable pieces, while at the same time maintaining the charring temperature at the critical point to produce a consistent quality charcoal. Not only was this process said to be 50% faster than any other method then in use, but it was also 10-15% more efficient. The cooled charcoal was raked out at the bottom as more wood was added at the top. [ 2 ] Tests in Tasmania indicated that the prototype could produce about 1.4 tons of charcoal per day, which compared favourably with a single ton every three days from a standard steel kiln. [ 3 ] Seven tons of wood produced one ton of charcoal, with an output of 20 tons per week if the kiln was operated continuously with three shifts per day. The cost of building the kiln was estimated to be about half that required for the five or six portable steel kilns needed to produce the same quantity. Good quality charcoal sold for 4s 6d for a 50 lb bag.
[ 3 ] The claimed advantages could not be ignored by an organisation under pressure to secure Victoria's war-time charcoal supplies, so Galbraith enclosed £5 to Professor Kurth in a letter on 16 July 1941 for the use of his patented design. [ 2 ] The selection of a suitable site on State forest for Professor Kurth's kiln depended on adequate supplies of water and wood, and a suitable gradient. [ 2 ] Firstly, the kiln needed approximately 2000 gallons (9100 litres) of water per day for its cooling system; secondly, it needed about 28 cords (100 cubic metres) of wood per week as feedstock; and thirdly, a slope of approximately 18 feet (6 metres) was required to facilitate top loading. The site chosen at Tomahawk Creek about 7 km north of Gembrook met all these requirements. [ 2 ] Water was supplied by an old mining race from the Tomahawk Creek, which also powered a water wheel operating a vibrating screen to grade the charcoal before bagging. [ 2 ] Much of the Gembrook landscape had been cleared for agriculture, supported a thriving timber industry and had also been mined for gold and gemstones since 1859, so most of the older mature forest had been disturbed in some way. More importantly, a large area of older and damaged messmate trees (Eucalyptus obliqua) in the nearby State forest had been deliberately ringbarked as a silvicultural treatment to encourage new regeneration growth by Forests Commission unemployment relief workers some 10 years earlier, during the 1930s depression. Approximately 145 men had worked over nearly 4000 acres during 1930-31. [ 2 ] This left a large amount of standing dry wood suitable for the kiln's operation within a one-mile radius. Access to roads and the railway station at Gembrook for transport was an important consideration. There was also a critical shortage of labour to operate the kiln during the war years, which was exacerbated by a major timber salvage program underway in the Central Highlands after the deadly 1939 Black Friday bushfires, although the site at Gembrook had not been severely burnt in either 1926 or 1939. [ 4 ] Earthworks commenced in late August 1941. A construction contract for the kiln was awarded to builders Stanley and Nance of Middle Park on 17 October, and the detailed design work was done by the Forests Commission's own architect, Mr S. J. B. Hart. Building commenced in November and by 18 December 1941 was nearly completed. The architect reported on 11 February 1942 to the Secretary of the Forests Commission that the kiln was ready for operations after a small trial run. [ 12 ] The project cost £1799 17s 2d, which included the establishment of the site, construction of the kiln, erection of buildings, purchase of equipment, the connection of telephone and supply of water. It was a sizable investment. [ 13 ] The kiln was a rectangular structure 4 m x 3.5 m x 8 m high on a concrete foundation, and the red brick walls were reinforced with iron strapping. The kiln held 25 tons of 3-foot-long billets per load, which were carried up on a small inclined tramway to be loaded into the top. [ 14 ] The initial firing was on 18 March 1942 and, after some teething problems with the steel doors, the kiln was in full production by mid-1942. [ 2 ] Combustion took two days to complete and the kiln produced 243 tons of charcoal in its first financial year, 1941-42. But in the latter half of the year this reduced to only 29 tons. Production by July 1942 had been so successful that problems of storage soon became apparent.
Commissioner Finton George Gerraty reported that about 70 tons of graded charcoal was stockpiled and the storage sheds were taxed to the limit. Production was suspended to solve this short-term problem. However, at the same time the kiln suffered further structural problems. An inspection in September 1942 indicated that major repairs were required to some loosening brickwork near the inspection doors, and this, together with the emerging oversupply of charcoal, raised serious questions about Kurth Kiln's future. [ 2 ] Continuing transport and distribution difficulties from Gembrook combined with an oversupply of charcoal from private operators meant the kiln was used only intermittently during 1943 and was shut down soon after. [ 2 ] Petrol rationing also eased at the end of the War, reducing the demand for charcoal, but did not end until 1950. [ 5 ] Over the total period of its operation, Kurth Kiln produced 471 tons of charcoal, which represented only a tiny fraction of Victoria's total production. [ 4 ] But Kurth Kiln was a victim of circumstance and not the "white elephant" that these low production figures might suggest. [ 2 ] After the cessation of war hostilities, the Kallista District of the Forests Commission advised in February 1946 that it could absorb 40 returned servicemen for silvicultural, afforestation, fire protection, and utilisation works. This was part of a five-year statewide plan drawn up and approved by the Allied Works Council, with funding allocated for the first two-year period totalling £3,842,175. [ 4 ] Similar camps were established on State forest to house and employ migrants and refugees from war-torn Europe. [ 6 ] To provide accommodation, eighteen masonite huts were purchased from the Army and erected at the site. By July 1946 the Commission decided to make Kurth Kiln its main base camp for the region, to house 80 to 100 men. The forest camp operated continuously until 8 January 1963, when three huts were burnt by bushfires. The remaining buildings began to slowly deteriorate, and the construction of an all-weather road network meant workmen could be housed at nearby townships instead. The site declined through to the 1970s. Three huts were demolished and the material used to modify a remaining one as a caretaker's residence in about 1984 for Ron Thornton, who lived on-site for another 16 years. The sheds were then mainly used for storage of eucalyptus seed needed for regeneration works and of Forests Commission equipment such as pumps and hoses. The small dam next to the kiln was regularly used as a pump school to prepare staff for the summer fire season. [ 2 ] Kurth Kiln began its transformation into a picnic ground in about 1978, led by District Forester Frank May, with works supervised by two local overseers, Tom Steege and Bob Ferris. [ 6 ] The site is now recognised for its historical and scientific significance, with Kurth Kiln being included as an indicative place on the Register of the National Estate (004495) as well as being listed in the Heritage Inventory of archaeological sites maintained by Heritage Victoria (H8022-0013). [ 3 ] Remnants of a historic steel charcoal kiln can also still be found at nearby Tonimbuk [ 15 ] and another at Kinglake West. Parks Victoria is now responsible for Kurth Kiln and has undertaken conservation works on several of the most urgent building repairs. A small and active volunteer friends group, formed in June 1999, helps to protect and interpret the site. [ 3 ]
https://en.wikipedia.org/wiki/Kurth_Kiln
Kutlu Özergin Ülgen is a Turkish biochemical engineer researching pharmacophore modelling to identify pharmacological chaperones used to treat infectious diseases, genetic diseases, and cancer. Ülgen is a professor in the department of chemical engineering at Boğaziçi University. Ülgen completed a B.S. (1987) and M.S. (1989) in chemical engineering at Boğaziçi University. She earned a Ph.D. in chemical engineering at the University of Manchester in 1992. [ 1 ] There she researched Streptomyces coelicolor antibiotic production and bioreactors. Her dissertation was titled Study of antibiotic synthesis by free and immobilised streptomyces coelicolor a3(2). Her doctoral advisor was Ferda Mavituna. [ 2 ] In 1992, Ülgen joined the faculty at Boğaziçi University as an instructor in the department of chemical engineering. She was promoted to assistant professor in 1994, associate professor in 1996, and professor in 2002. She served as head of the chemical engineering department from 2009 to 2011. Ülgen served as associate dean of the faculty of engineering from 2012 to December 2015. [ 1 ] [ 3 ] Ülgen researches pharmacophore modelling to identify pharmacological chaperones to treat infectious diseases, genetic diseases, and cancer. [ 4 ] She uses a systems biology approach to investigate the reconstruction of signaling networks in yeast, worms, and humans. She also researches protein purification, computational physiology, and metabolic pathway engineering. [ 1 ]
https://en.wikipedia.org/wiki/Kutlu_Ö._Ülgen
The Kutta condition is a principle in steady-flow fluid dynamics, especially aerodynamics, that is applicable to solid bodies with sharp corners, such as the trailing edges of airfoils. It is named for German mathematician and aerodynamicist Martin Kutta. Kuethe and Schetzer state the Kutta condition as follows: [ 1 ] : § 4.11 A body with a sharp trailing edge which is moving through a fluid will create about itself a circulation of sufficient strength to hold the rear stagnation point at the trailing edge. In fluid flow around a body with a sharp corner, the Kutta condition refers to the flow pattern in which fluid approaches the corner from above and below, meets at the corner, and then flows away from the body. None of the fluid flows around the sharp corner. The Kutta condition is significant when using the Kutta–Joukowski theorem to calculate the lift created by an airfoil with a sharp trailing edge. The circulation of the flow around the airfoil must take the value that causes the Kutta condition to hold. In 2-D potential flow, if an airfoil with a sharp trailing edge begins to move at an angle of attack through air, the two stagnation points are initially located on the underside near the leading edge and on the topside near the trailing edge, just as with a circular cylinder. As the air passing the underside of the airfoil reaches the trailing edge it must flow around the trailing edge and along the topside of the airfoil toward the stagnation point on the topside. Vortex flow occurs at the trailing edge and, because the radius of the sharp trailing edge is zero, the speed of the air around the trailing edge would be infinite. Though real fluids cannot move at infinite speed, they can move very fast. The high airspeed around the trailing edge causes strong viscous forces to act on the air adjacent to the trailing edge of the airfoil, and the result is that a strong vortex accumulates on the topside of the airfoil, near the trailing edge. As the airfoil begins to move it carries this vortex, known as the starting vortex, along with it. Pioneering aerodynamicists were able to photograph starting vortices in liquids to confirm their existence. [ 2 ] [ 3 ] [ 4 ] The vorticity in the starting vortex is matched by the vorticity in the bound vortex in the airfoil, in accordance with Kelvin's circulation theorem. [ 1 ] : § 2.14 As the vorticity in the starting vortex progressively increases, the vorticity in the bound vortex also progressively increases and causes the flow over the topside of the airfoil to increase in speed. The starting vortex is soon cast off the airfoil and is left behind, spinning in the air where the airfoil left it. The stagnation point on the topside of the airfoil then moves until it reaches the trailing edge. [ 1 ] : §§ 6.2, 6.3 The starting vortex eventually dissipates due to viscous forces. As the airfoil continues on its way, there is a stagnation point at the trailing edge. The flow over the topside conforms to the upper surface of the airfoil. The flow over both the topside and the underside join up at the trailing edge and leave the airfoil travelling parallel to one another. This is known as the Kutta condition. [ 5 ] : § 4.8 When an airfoil is moving at an angle of attack, the starting vortex has been cast off, and the Kutta condition has become established, there is a finite circulation of the air around the airfoil.
The airfoil is generating lift, and the magnitude of the lift is given by the Kutta–Joukowski theorem. [ 5 ] : § 4.5 One of the consequences of the Kutta condition is that the airflow over the topside of the airfoil travels much faster than the airflow along the underside. A parcel of air which approaches the airfoil along the stagnation streamline is cleaved in two at the stagnation point, one half traveling over the topside and the other half traveling along the underside. The flow over the topside is so much faster than the flow along the underside that these two halves never meet again. They do not even re-join in the wake long after the airfoil has passed. There is a popular fallacy called the equal transit-time fallacy that claims the two halves rejoin at the trailing edge of the airfoil. This has been understood as a fallacy since Martin Kutta's discovery. Whenever the speed or angle of attack of an airfoil changes, there is a weak starting vortex which begins to form, either above or below the trailing edge. This weak starting vortex causes the Kutta condition to be re-established for the new speed or angle of attack. As a result, the circulation around the airfoil changes, and so too does the lift in response to the changed speed or angle of attack. [ 6 ] [ 5 ] : § 4.7-4.9 The Kutta condition gives some insight into why airfoils have sharp trailing edges, [ 7 ] even though this is undesirable from structural and manufacturing viewpoints. In irrotational, inviscid, incompressible flow (potential flow) over an airfoil, the Kutta condition can be implemented by calculating the stream function over the airfoil surface. [ 8 ] [ 9 ] The same Kutta condition implementation method is also used for solving two-dimensional subsonic (subcritical) inviscid steady compressible flows over isolated airfoils. [ 10 ] [ 11 ] A viscous correction for the Kutta condition can be found in some recent studies. [ 12 ] The Kutta condition allows an aerodynamicist to incorporate a significant effect of viscosity while neglecting viscous effects in the underlying conservation of momentum equation. It is important in the practical calculation of lift on a wing. The equations of conservation of mass and conservation of momentum applied to an inviscid fluid flow, such as a potential flow, around a solid body result in an infinite number of valid solutions. One way to choose the correct solution would be to apply the viscous equations, in the form of the Navier–Stokes equations. However, these normally do not result in a closed-form solution. The Kutta condition is an alternative method of incorporating some aspects of viscous effects, while neglecting others, such as skin friction and some other boundary layer effects. The condition can be expressed in a number of ways. One is that there cannot be an infinite change in velocity at the trailing edge. Although an inviscid fluid can have abrupt changes in velocity, in reality viscosity smooths out sharp velocity changes. If the trailing edge has a non-zero angle, the flow velocity there must be zero. At a cusped trailing edge, however, the velocity can be non-zero, although it must still be identical above and below the airfoil. Another formulation is that the pressure must be continuous at the trailing edge. The Kutta condition does not apply to unsteady flow.
Experimental observations show that the stagnation point (one of two points on the surface of an airfoil where the flow speed is zero) begins on the top surface of an airfoil (assuming a positive effective angle of attack) as the flow accelerates from zero, and moves rearward as the flow continues to accelerate. Once the initial transient effects have died out, the stagnation point is at the trailing edge, as required by the Kutta condition. Mathematically, the Kutta condition enforces a specific choice among the infinitely many allowed values of circulation.
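To make that last point concrete, here is a minimal Python sketch, not from the article, of the classical flat-plate case: mapping a cylinder of radius a = c/4 onto a thin flat plate by the Joukowski transform, the rear stagnation point sits at the trailing edge only for the single circulation Γ = 4πUa·sin(α), which via the Kutta–Joukowski theorem reproduces the thin-airfoil lift coefficient 2π·sin(α). The function name and the numerical free-stream values are made up for the example:

```python
import math

def kutta_circulation_flat_plate(U, c, alpha):
    """Circulation selected by the Kutta condition for a thin flat plate of
    chord c at angle of attack alpha (radians): for the cylinder of radius
    a = c/4 that the Joukowski transform maps onto the plate, the rear
    stagnation point sits at the trailing edge only when
    Gamma = 4*pi*U*a*sin(alpha)."""
    a = c / 4.0
    return 4.0 * math.pi * U * a * math.sin(alpha)

U, c, alpha = 50.0, 1.5, math.radians(4.0)   # illustrative values only
gamma = kutta_circulation_flat_plate(U, c, alpha)

# Kutta-Joukowski lift per unit span and the resulting lift coefficient,
# which equals the thin-airfoil result 2*pi*sin(alpha):
rho = 1.225
lift = rho * U * gamma
cl = lift / (0.5 * rho * U**2 * c)
print(f"Gamma = {gamma:.3f} m^2/s, L' = {lift:.1f} N/m, Cl = {cl:.4f}")
```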
https://en.wikipedia.org/wiki/Kutta_condition
The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed so large that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. [ 1 ] It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski), who first developed its key ideas in the early 20th century. The Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications. [ 2 ] The Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. [ 3 ] However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils. The theorem applies to two-dimensional inviscid flow around an airfoil section (or any shape of infinite span). The lift per unit span $L'$ of the airfoil is given by [ 4 ] $L' = \rho_{\infty} V_{\infty} \Gamma$ (1), where $\rho_{\infty}$ and $V_{\infty}$ are the fluid density and the fluid velocity far upstream of the airfoil, and $\Gamma$ is the circulation, defined as the line integral $\Gamma = \oint_C V \cos\theta \, ds$ around a closed contour $C$ enclosing the airfoil and followed in the negative (clockwise) direction. As explained below, this path must be in a region of potential flow and not in the boundary layer of the cylinder. The integrand $V \cos\theta$ is the component of the local fluid velocity in the direction tangent to the curve $C$, and $ds$ is an infinitesimal length on the curve $C$. Equation (1) is a form of the Kutta–Joukowski theorem. Kuethe and Schetzer state the Kutta–Joukowski theorem in terms of the force per unit length acting on a right cylinder of any cross section whatsoever: this force is equal to $\rho_{\infty} V_{\infty} \Gamma$ and is perpendicular to the direction of $V_{\infty}$. [ 5 ] A lift-producing airfoil either has camber or operates at a positive angle of attack, the angle between the chord line and the fluid flow far upstream of the airfoil. Moreover, the airfoil must have a sharp trailing edge. [ 6 ] Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as $\mathrm{Re} = \rho V_{\infty} c_A / \mu$, and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside.
In applying the Kutta–Joukowski theorem, the loop must be chosen outside this boundary layer. (For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.) The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition. Kutta and Joukowski showed that for computing the pressure and lift of a thin airfoil for flow at large Reynolds number and small angle of attack, the flow can be assumed inviscid in the entire region outside the airfoil provided the Kutta condition is imposed. This is known as potential flow theory and works remarkably well in practice. Two derivations are presented below. The first is a heuristic argument, based on physical insight. The second is a formal and technical one, requiring basic vector analysis and complex analysis. For a heuristic argument, consider a thin airfoil of chord $c$ and infinite span, moving through air of density $\rho$. Let the airfoil be inclined to the oncoming flow to produce an air speed $V$ on one side of the airfoil, and an air speed $V + v$ on the other side. The circulation is then $\Gamma = (V + v)c - Vc = vc$. The difference in pressure $\Delta P$ between the two sides of the airfoil can be found by applying Bernoulli's equation: $\tfrac{\rho}{2} V^2 + P = \tfrac{\rho}{2}(V + v)^2 + (P - \Delta P)$, so that $\Delta P = \tfrac{\rho}{2}(2Vv + v^2) \approx \rho V v$ for $v \ll V$. The downward force on the air, per unit span, is therefore $\Delta P \, c = \rho V v c = \rho V \Gamma$, and the upward force (lift) on the airfoil is $\rho V \Gamma$. A differential version of this theorem applies on each element of the plate and is the basis of thin-airfoil theory. First of all, the force exerted on each unit length of a cylinder of arbitrary cross section is calculated. [ 7 ] Let this force per unit length (from now on referred to simply as force) be $\mathbf{F}$. So then the total force is $\mathbf{F} = -\oint_C p \, \mathbf{n} \, ds$, where $C$ denotes the borderline of the cylinder, $p$ is the static pressure of the fluid, $\mathbf{n}$ is the unit vector normal to the cylinder, and $ds$ is the arc element of the borderline of the cross section. Now let $\phi$ be the angle between the normal vector and the vertical. Then the components of the above force are $F_x = -\oint_C p \sin\phi \, ds$ and $F_y = -\oint_C p \cos\phi \, ds$. Now comes a crucial step: consider the used two-dimensional space as a complex plane. So every vector can be represented as a complex number, with its first component equal to the real part and its second component equal to the imaginary part of the complex number. Then the force can be represented as $F = F_x + i F_y = -\oint_C p \, (\sin\phi + i \cos\phi) \, ds$. The next step is to take the complex conjugate of the force $F$ and do some manipulation: $\bar{F} = -\oint_C p \, (\sin\phi - i \cos\phi) \, ds = i \oint_C p \, e^{i\phi} \, ds$. Surface segments $ds$ are related to changes $dz$ along them by $dz = -e^{-i\phi} \, ds$, i.e. $d\bar{z} = -e^{i\phi} \, ds$, for a counterclockwise traversal of $C$ with outward normal $\mathbf{n}$. Plugging this back into the integral, the result is $\bar{F} = -i \oint_C p \, d\bar{z}$. Now the Bernoulli equation is used, in order to remove the pressure from the integral. Throughout the analysis it is assumed that there is no outer force field present. The mass density of the flow is $\rho$. Then pressure $p$ is related to velocity $v = v_x + i v_y$ by $p = p_0 - \tfrac{\rho}{2} |v|^2$. With this (the constant $p_0$ integrates to zero around the closed contour) the force becomes $\bar{F} = \tfrac{i\rho}{2} \oint_C |v|^2 \, d\bar{z}$. Only one step is left to do: introduce $w = f(z)$, the complex potential of the flow.
This is related to the velocity components as $w' = v_x - i v_y = \bar{v}$, where the apostrophe denotes differentiation with respect to the complex variable $z$. The velocity is tangent to the borderline $C$, so this means that $v = \pm |v| e^{i\gamma}$, where $\gamma$ is the local direction angle of the contour and $dz = \pm e^{i\gamma} \, ds$. Therefore $\bar{v}^2 \, dz = |v|^2 \, d\bar{z}$, and the desired expression for the force is obtained: $\bar{F} = \tfrac{i\rho}{2} \oint_C \left( \tfrac{dw}{dz} \right)^2 dz$, which is called the Blasius theorem. To arrive at the Joukowski formula, this integral has to be evaluated. From complex analysis it is known that a holomorphic function can be presented as a Laurent series. From the physics of the problem it is deduced that the derivative of the complex potential $w$ will look thus: $w' = a_0 + \tfrac{a_1}{z} + \tfrac{a_2}{z^2} + \cdots$. The function does not contain higher order terms, since the velocity stays finite at infinity. So $a_0$ represents the derivative of the complex potential at infinity: $a_0 = v_{x\infty} - i v_{y\infty}$. The next task is to find out the meaning of $a_1$. Using the residue theorem on the above series, $a_1 = \tfrac{1}{2\pi i} \oint_C w' \, dz$. Now perform the above integration: $\oint_C w' \, dz = \oint_C (v_x - i v_y)(dx + i \, dy) = \oint_C (v_x \, dx + v_y \, dy) + i \oint_C (v_x \, dy - v_y \, dx)$. The first integral is recognized as the circulation, denoted by $\Gamma$. The second integral can be evaluated after some manipulation: $\oint_C (v_x \, dy - v_y \, dx) = \oint_C d\psi$. Here $\psi$ is the stream function. Since the $C$ border of the cylinder is a streamline itself, the stream function does not change on it, and $d\psi = 0$. Hence the above integral is zero and, as a result, $a_1 = \tfrac{\Gamma}{2\pi i}$. Take the square of the series: $(w')^2 = a_0^2 + \tfrac{2 a_0 a_1}{z} + \cdots$. Plugging this back into the Blasius–Chaplygin formula and performing the integration using the residue theorem gives $\bar{F} = \tfrac{i\rho}{2} \cdot 2\pi i \cdot 2 a_0 a_1 = i \rho \Gamma (v_{x\infty} - i v_{y\infty})$. And so the Kutta–Joukowski formula is $F = -i \rho \Gamma (v_{x\infty} + i v_{y\infty})$: a force of magnitude $\rho V_{\infty} |\Gamma|$ perpendicular to the free stream, which for a horizontal free stream reduces to the lift $L' = \rho_{\infty} V_{\infty} \Gamma$ of equation (1), with $\Gamma$ taken positive in the clockwise sense. The lift predicted by the Kutta–Joukowski theorem within the framework of inviscid potential flow theory is quite accurate, even for real viscous flow, provided the flow is steady and unseparated. [ 8 ] In deriving the Kutta–Joukowski theorem, the assumption of irrotational flow was used. When there are free vortices outside of the body, as may be the case for a large number of unsteady flows, the flow is rotational. When the flow is rotational, more complicated theories should be used to derive the lift forces.
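The Blasius-to-Kutta–Joukowski chain can be checked numerically for the textbook complex potential of a uniform stream past a circular cylinder with circulation. The following is a minimal Python sketch, not from the article; the flow values are made up, and Γ is taken positive in the clockwise sense to match the article's convention, so the integrated lift should equal ρUΓ:

```python
import numpy as np

rho, U, a, Gamma = 1.225, 10.0, 1.0, 5.0   # made-up values; Gamma > 0 clockwise

def dw_dz(z):
    """Derivative of the complex potential of a uniform stream past a
    circular cylinder of radius a with clockwise-positive circulation:
    w(z) = U*(z + a**2/z) + 1j*Gamma/(2*pi) * log(z)."""
    return U * (1.0 - a**2 / z**2) + 1j * Gamma / (2.0 * np.pi * z)

# Blasius theorem: Fx - i*Fy = (i*rho/2) * closed integral of (dw/dz)**2 dz,
# evaluated on a circle of radius 2a (any contour enclosing the body works).
n = 4096
theta = 2.0 * np.pi * np.arange(n) / n
z = 2.0 * a * np.exp(1j * theta)
dz_dtheta = 1j * z                           # dz = i*z dtheta on the circle
integral = np.sum(dw_dz(z)**2 * dz_dtheta) * (2.0 * np.pi / n)

F = 0.5j * rho * integral                    # complex force Fx - i*Fy
lift = -F.imag                               # Fy per unit span
print(f"Blasius lift = {lift:.4f} N/m vs rho*U*Gamma = {rho*U*Gamma:.4f} N/m")
```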
https://en.wikipedia.org/wiki/Kutta–Joukowski_theorem
The Kuwajima Taxol total synthesis by the group of Isao Kuwajima of the Tokyo Institute of Technology is one of several efforts in taxol total synthesis published in the 1990s. [ 1 ] [ 2 ] The total synthesis of Taxol is considered a landmark in organic synthesis. This synthesis is a true total synthesis, without any help from small biomolecule precursors, and is also a linear synthesis, with molecule ring construction in the order A, B, C, D. At one point chirality is locked into the molecule via an asymmetric synthesis step, which is unique compared to the other efforts. In common with the other efforts, the tail addition is based on the Ojima lactam. The 20-carbon frame is constructed from several pieces: propargyl alcohol (C1, C2, C14), propionaldehyde (C13, C12, C18), isobutyric acid (C15, C16, C17, C11), trimethyl(phenylthiomethyl)silane (C10), 2-bromobenzaldehyde (C3 to C9), diethylaluminum cyanide (C19) and trimethylsilylmethyl bromide (C20). Ring A synthesis (scheme 1) started by joining the THP-protected propargyl alcohol 1.1 (the C2-C1-C14 fragment) and propionaldehyde 1.2 (fragment C13-C12-C18) in a nucleophilic addition with n-butyllithium to alcohol 1.3. The Lindlar catalyst then reduced the alkyne to the alkene in 1.4 and Swern oxidation converted the alcohol group to the enone group in 1.5. Fragment C11-C15-C16-C17 1.6 was then added as the lithium enolate of isobutyric acid ethyl ester in a conjugate addition to gamma keto ester 1.7. A Claisen condensation closed the ring to 1.8 and the intermediate enol was captured by pivaloyl chloride (piv) as a protective group. The THP group was removed with TsOH to 1.9 and the formed alcohol oxidized by Swern oxidation to aldehyde 1.10. The TIPS silyl enol ether 1.11 was formed by reaction with the triflate TIPSOTf and DBU in DMAP, setting the stage for asymmetric dihydroxylation to hydroxyaldehyde 1.12. The piv protecting group was then replaced by a TIPS group in 1.14 after protecting the aldehyde as the aminal 1.13, and as this group is automatically lost on column chromatography, the step was repeated to aminal 1.15. The C10 fragment was then introduced as the lithium salt of trimethyl(phenylthiomethyl)silane 1.16 in a Peterson olefination to the sulfide 1.17, followed by deprotection to the completed ring A 1.18. The A ring is now complete, with the aldehyde group and the sulfide group in place for anchoring with ring C, forming ring B. The bottom part of ring B was constructed by nucleophilic addition to the aldehyde of 2.1 (scheme 2) with the dibenzyl acetal of 2-bromobenzaldehyde 2.2 as its aryllithium. This step has much in common with the B ring synthesis in the Nicolaou Taxol total synthesis, except that the aldehyde group is located at ring A and not ring B. The diol in 2.3 was protected as the boronic ester 2.4, preparing the molecule for upper-part ring closure with tin tetrachloride to tricycle 2.5 in a Grob fragmentation-like reaction. After deprotection (pinacol) to diol 2.6, DIBAL reduction to triol 2.7 and TBS reprotection (TBSOTf, lutidine) to alcohol 2.8, it was possible to remove the phenylsulfide group with tributyltin hydride and AIBN (see Barton–McCombie deoxygenation) to alcohol 2.9. Palladium-on-carbon hydrogenation removed the benzyl protecting group, allowing the Swern oxidation of 2.10 to ketone 2.11. Completion of the C ring required complete reduction of the arene, placement of para oxygen atoms and, importantly, introduction of the C19 methyl group.
The first assault on the aromatic ring in 3.1 (scheme 3) was launched with a Birch reduction (potassium, ammonia, tetrahydrofuran, -78 °C, then ethanol) to diene 3.2. Deprotection (TBAF) to diol 3.3, reprotection as the benzaldehyde acetal 3.4 and reduction (sodium borohydride) to alcohol 3.5 allowed the oxidation of the diene to the 1,4-butenediol 3.6. In this photochemical [4+2] cycloaddition, singlet oxygen was generated from oxygen and rose bengal, and the intermediate peroxide was reduced with thiourea. The next order of business was introduction of the C19 fragment: the new diol group was protected as the PMP acetal 3.7 (PMP stands for p-methoxyphenyl), allowing the oxidation of the C4 alcohol to ketone 3.8 with the Dess–Martin periodinane. Diethylaluminum cyanide reacted in a conjugate addition with the enone group to nitrile 3.9. The enol was protected as the TBS ether 3.10, allowing for the reduction of the nitrile group first to the aldehyde with DIBAL and then on to the alcohol 3.11 with lithium aluminium hydride. The alcohol group was replaced by bromine in an Appel reaction, which caused an elimination reaction (loss of HBr) to cyclopropane 3.12. Treatment with hydrochloric acid formed ketone 3.13, reaction with samarium(II) iodide gave ring-opening, finally putting the C19 methyl group in place in 3.14, and deprotection (TBAF) and enol-ketone conversion gave hydroxyketone 3.15. By protecting the diol group in triol 4.1 (scheme 4) as the phenylboronic ester 4.2, the remaining alcohol group could be protected as the TBS ether 4.3. After deprotecting the diol group (hydrogen peroxide, sodium bicarbonate) again in 4.4, it was possible to oxidize the C19 alcohol to the ketone 4.5 with Dess–Martin periodinane. In a new round of protections the C7 alcohol was converted to the 2-methoxy-2-propyl (MOP) ether 4.6 with 2-propenyl methyl ether and PPTS, and the C7 ketone was converted to its enolate 4.7 by reaction with KHMDS and N,N-bis(trifluoromethylsulfonyl)aniline. These preambles facilitated the introduction of the final missing C20 fragment as the Grignard reagent trimethylsilylmethylmagnesium bromide, which coupled with the triflate in a tetrakis(triphenylphosphine)palladium(0)-catalysed reaction to the silane 4.8. The trimethylsilyl group was eliminated on addition of NCS to organochloride 4.9. Prior to ring-closing the D ring there was some unfinished business in ring C. A C10 alcohol was introduced by MoOPH oxidation to 4.10, but with the wrong stereochemistry. After acetylation to 4.11 and inversion of configuration with the added base DBN, this problem was remedied in compound 4.12. Next, dihydroxylation with osmium(VIII) oxide formed the diol 4.13, with the primary alcohol, on addition of the base DBU, displacing the chlorine atom in a nucleophilic aliphatic substitution to oxetane 4.14. The C1, C2 and C4 functional groups were put in place next. Starting from oxetane 5.1 (scheme 5), the MOM protecting group was removed in 5.2 (PPTS) and replaced by a TES group (TESCl) in 5.3. The acetal group was removed in 5.4 (hydrogenation, Pd(OH)2, H2) and replaced by a carbonate ester group in 5.5 (triphosgene, pyridine). The tertiary alcohol group was acetylated in 5.6 and in the final step the carbonate group was opened by reaction with phenyllithium to the hydroxyester 5.7. Prior to tail addition the TES protective group was removed in 5.8 (hydrogen fluoride–pyridine) and replaced by a TROC (trichloroethyl carbonate, TROCCl) group in 5.9.
The C13 alcohol protective group was removed in 5.10 (TASF), enabling the tail addition of the Ojima lactam 5.11 (this step is common to all total synthetic efforts to date) to 5.12 with lithium bis(trimethylsilyl)amide. The synthesis was completed with TROC removal (zinc, acetic acid) to taxol 5.13.
https://en.wikipedia.org/wiki/Kuwajima_Taxol_total_synthesis
Kvant-1 (Russian: Квант-1; English: Quantum-I/1) (37KE) was the first module to be attached in 1987 to the Mir Core Module, which formed the core of the Soviet space station Mir. It remained attached to Mir until the entire space station was deorbited in 2001. [ 4 ] The Kvant-1 module contained scientific instruments for astrophysical observations and materials science experiments. It was used to conduct research into the physics of active galaxies, quasars and neutron stars, and it was uniquely positioned for studies of the supernova SN 1987A. Furthermore, it supported biotechnology experiments in anti-viral preparations and fractions. Some additions to Kvant-1 during its lifetime were solar arrays and the Sofora and Rapana girders. The Kvant-1 module was based on the TKS spacecraft and was the first, experimental version of a planned series of '37K' type modules. The 37K modules featured a jettisonable TKS-E type propulsion module, also called the Functional Service Module (FSM). The control system of Kvant-1 had been developed by NPO "Electropribor" (Kharkiv, Ukraine). [ 5 ] After previous engineering tests with the Salyut 6 and Salyut 7 space stations (and temporarily attached TKS-derived space station modules like Kosmos 1267, Kosmos 1443 and Kosmos 1686) it became the first space station module to be attached semi-permanently to the first modular space station in the history of space flight. [ 3 ] Kvant-1 was originally planned to be docked to the Salyut 7 space station; the plans, however, evolved into a launch to Mir, initially considered on board the Soviet Buran space shuttle and finally changed to a launch by the Proton-K rocket. The Kvant spacecraft represented the first use of a new kind of Soviet space station module, designated 37K. An order authorising the beginning of development was issued on 17 September 1979. The basic 37K design consisted of a 4.2 m diameter pressurised cylinder with a docking port at the forward end. It was not equipped with its own propulsion system. The original authorisation was for a total of eight 37K's of various configurations. The 37KE was designated Kvant and was equipped with an astrophysics payload. It also used the Salyut-5B digital flight control computer and Gyrodyne flywheel orientation system developed for Almaz. As the module neared completion, Salyut 7 experienced numerous technical problems and Kvant was retargeted for docking with Mir. But at that time Mir was planned to be in a 65-degree orbit, and Kvant was 800 kg too heavy for the Proton launch vehicle to place in such an orbit. In January 1985 Mir was changed to a 51.6-degree orbit, which solved one problem. But now it was planned that Kvant would dock with the rear port of Mir, requiring the addition of lines to conduct rocket propellant from the Progress tanker spacecraft to Mir's storage tanks. This increased weight again, forcing the FGB to have its propellant load reduced to 60% in the high-pressure tanks and to empty low-pressure tanks. With a reported total launch weight varying between 20,600 and 22,797 kilograms (45,415 and 50,259 lb), [ 3 ] [ 6 ] Kvant-1 was reportedly at that time the heaviest payload lifted by Proton, requiring special custom modifications to its launch vehicle. [ 6 ] Kvant-1 consisted of two pressurized working compartments, one unpressurized experiment compartment and one small airlock for access to the telescopes and film change and retrieval.
It also carried additional life support systems, including an Elektron oxygen generator and equipment for removing carbon dioxide from the air. [ 3 ] Kvant-1 also carried a suite of scientific instruments on board. [ 3 ] [ 6 ] To allow astronomical observations, Kvant-1 carried – in addition to two Earth horizon sensors, two star sensors, and three star trackers [ 6 ] – six gyrodynes, which permitted extremely accurate pointing of the entire Mir complex. As the gyrodynes were powered by electricity, they also significantly reduced the amount of attitude control propellant needed by the Mir base block's control thrusters – saving 15 tons of propellant in the first two years. They did, however, use a great deal of electricity – the average consumption of the Kvant-1 module was estimated to have been 6.90 kW. [ 3 ] [ 6 ] Kvant-1 was originally intended to be launched and docked to Salyut 7 , but delays forced it to be launched to Mir instead. Kvant-1 did not have any propulsion systems of its own; to reach Mir, it was mated with a Functional Service Module (FSM) – carrying propulsion and electrical systems – to act as a space tug. The FSM was derived from the TKS spacecraft , which would later form the basis for the Functional Cargo Block of the Kvant-2 , Kristall , Spektr , and Priroda modules. Kvant-1 and its FSM were launched on March 30, 1987 – at the time of the launch, the Mir station was staffed by the EO-2 crew, whose Soyuz TM-2 spacecraft was already docked at the front port. On April 9, Kvant-1 achieved a soft dock with the aft port on Mir. However, Kvant-1 was not able to achieve a hard dock, which meant that the two spacecraft were only loosely connected – in this configuration, Mir could not orient itself or else damage would occur. The EO-2 crew conducted an emergency EVA on April 11 to investigate the problem. The crew found a piece of debris, probably a trash bag, that had been left by Progress 28. After the crew removed it, Kvant-1 was finally able to achieve a hard dock with the station on the same day. The Kvant-FSM, which contained the now unneeded propulsion of the Kvant-1 module, was finally jettisoned on April 12, revealing Kvant-1's rear docking port. [ 3 ] After the hard dock was achieved and the Kvant-FSM jettisoned, tests of the onboard systems of Kvant-1 were conducted until the end of April. May was spent preparing for the extension of the station's electrical power, with activities that required little electricity, like medical experiments and Earth resources photography – much-needed additional electrical power would enable experiments like the Korund 1-M kiln, which was used to conduct melts lasting several days, and would power Kvant-1's gyrodynes, needed for astronomical observations. For this, Kvant had carried stowed solar arrays, which were attached to the Mir base block during an EVA on June 12. [ 3 ] With the testing of Kvant-1 concluded, additional solar panels installed and Kvant's gyrodynes available, a major step in the construction of the Mir space station was achieved. The X-ray telescope onboard Kvant-1 started with a spectacular target: it was uniquely placed to study supernova SN 1987A in the Large Magellanic Cloud, whose light peaked at Earth in May 1987. The cosmonauts onboard Mir examined the exploding star during 115 sessions between June and September 1987. [ 3 ] In January 1991, support structures designed to hold solar arrays were installed on Kvant-1. In July 1991, the crew constructed the Sofora girder during four EVAs.
The Sofora girder was designed to test new construction techniques, mount a propulsion unit, and act as a place to hold experiments outside the station. In September 1992, the crew installed the VDU propulsion unit, delivered earlier by Progress M-14 , on the end of the Sofora girder. The VDU was designed to increase the station's attitude control capability. The then six-year-old VDU propulsion unit was finally replaced in April 1998 by a new one delivered by Progress M-38. In September 1993, the Rapana girder was constructed on Kvant-1 during two EVAs. The Rapana girder was designed to test girder-assembly techniques for a possible Mir 2 space station. External experiments were also later mounted on the Rapana girder. In June 1996, the Rapana girder was extended during an EVA. On May 22, 1995, one of Kristall 's solar panels was redeployed on Kvant-1. In May 1996, the Mir Cooperative Solar Array, which was delivered with the Mir Docking Module , was deployed on Kvant-1. In November 1997, Kristall's old solar panel attached to Kvant-1 was disposed of, and the all-Russian solar array, also delivered with the Docking Module, was attached in its place. On February 23, 1997, a backup solid-fuel oxygen canister caught fire in the Kvant-1 module. [ 7 ] The fire spewed molten metal, and the crew was concerned that it could melt through the hull of the space station. [ 8 ] Smoke filled the station, and the crew donned respirators to continue breathing, although some respirators were faulty and did not supply oxygen. The fire burned for fourteen minutes and died out after three fire extinguishers had been used up. [ 8 ] [ 9 ] The smoke remained thick for forty-five minutes after the fire was extinguished. After the respirators ran out of oxygen and the smoke began to clear, the crew switched to using filter masks. [ 8 ] [ 10 ]
https://en.wikipedia.org/wiki/Kvant-1
The Kværner process or the Kværner carbon black and hydrogen process (CB&H) is a method of producing carbon black and hydrogen gas from hydrocarbons such as methane , natural gas and biogas with no greenhouse gas pollution. The process was developed in the 1980s by the Norwegian engineering firm Kværner , and was first commercially exploited in 1999. [ 1 ] Further refinement enabled the methane pyrolysis process to be implemented at high volume and low cost. The endothermic reaction separates (i.e. decomposes) hydrocarbons into carbon and hydrogen in a plasma burner at around 1600 °C. The resulting components, carbon particles and hydrogen, are present as a mixture in the form of an aerosol. [ 3 ] In contrast to other reforming methods such as steam reforming and partial oxidation , which have carbon dioxide as a by-product, the Kværner process has no such by-product. The natural gas is efficiently and completely transformed into pure carbon and hydrogen and does not release carbon dioxide into the atmosphere. After separation of the mixture, the carbon particles can be used, for instance, as activated carbon, graphite or industrial soot, or as special kinds of carbon such as carbon discs and carbon cones. The carbon is obtained as a black powdery solid and forms a technical product which may be used e.g. as filler in the rubber industry, as pigment soot for inks and paints, or as raw material for electrical components. The hydrogen may be supplied to the chemical industry or used for generating electricity. [ 4 ] Of the available energy of the feed, approximately 48% is contained in the hydrogen, 40% in the activated carbon and 10% in superheated steam. [ citation needed ] [ 5 ] A variation of this process using plasma arc waste disposal was presented in 2009, in which methane and natural gas are converted to hydrogen, heat and carbon using a plasma converter. [ 6 ]
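For reference, the underlying chemistry is the endothermic decomposition of methane into solid carbon and hydrogen. The reaction and its approximate standard enthalpy below are supplied here for illustration (the enthalpy follows from methane's standard enthalpy of formation of about −74.9 kJ/mol; the figure is not taken from the article):

{\displaystyle \mathrm {CH_{4}(g)\ \longrightarrow \ C(s)+2\,H_{2}(g)} ,\qquad \Delta H_{298}^{\circ }\approx +74.9\ \mathrm {kJ/mol} .}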
https://en.wikipedia.org/wiki/Kværner_process
In mathematics , there are two different results that share the common name of the Ky Fan inequality . One is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval . The result was published on page 5 of the book Inequalities by Edwin F. Beckenbach and Richard E. Bellman (1961), who refer to an unpublished result of Ky Fan . They mention the result in connection with the inequality of arithmetic and geometric means and Augustin Louis Cauchy 's proof of this inequality by forward-backward induction, a method which can also be used to prove the Ky Fan inequality. This Ky Fan inequality is a special case of Levinson's inequality and also the starting point for several generalizations and refinements. The second Ky Fan inequality is used in game theory to investigate the existence of an equilibrium.

If {\displaystyle 0\leq x_{i}\leq {\tfrac {1}{2}}} for i = 1, ..., n , then

{\displaystyle {\frac {{\Bigl (}\prod _{i=1}^{n}x_{i}{\Bigr )}^{1/n}}{{\Bigl (}\prod _{i=1}^{n}(1-x_{i}){\Bigr )}^{1/n}}}\;\leq \;{\frac {{\frac {1}{n}}\sum _{i=1}^{n}x_{i}}{{\frac {1}{n}}\sum _{i=1}^{n}(1-x_{i})}},}

with equality if and only if x 1 = x 2 = ··· = x n . Let {\displaystyle A_{n}} and {\displaystyle G_{n}} denote the arithmetic and geometric mean, respectively, of x 1 , ..., x n , and let {\displaystyle A_{n}'} and {\displaystyle G_{n}'} denote the arithmetic and geometric mean, respectively, of 1 − x 1 , ..., 1 − x n . Then the Ky Fan inequality can be written as

{\displaystyle {\frac {G_{n}}{G_{n}'}}\leq {\frac {A_{n}}{A_{n}'}},}

which shows the similarity to the inequality of arithmetic and geometric means given by G n ≤ A n .

If x i ∈ [0, 1/2] and γ i ∈ [0, 1] for i = 1, ..., n are real numbers satisfying γ 1 + ... + γ n = 1, then

{\displaystyle {\frac {\prod _{i=1}^{n}x_{i}^{\gamma _{i}}}{\prod _{i=1}^{n}(1-x_{i})^{\gamma _{i}}}}\;\leq \;{\frac {\sum _{i=1}^{n}\gamma _{i}x_{i}}{\sum _{i=1}^{n}\gamma _{i}(1-x_{i})}},}

with the convention 0 0 := 0. Equality holds if and only if either γ i x i = 0 for all i = 1, ..., n , or all x i > 0 and there exists x ∈ (0, 1/2] such that x i = x for all i with γ i > 0. The classical version corresponds to γ i = 1/ n for all i = 1, ..., n .

Idea: Apply Jensen's inequality to the strictly concave function

{\displaystyle f(x)=\ln x-\ln(1-x)=\ln {\frac {x}{1-x}},\qquad x\in (0,{\tfrac {1}{2}}].}

Detailed proof: (a) If at least one x i is zero, then the left-hand side of the Ky Fan inequality is zero and the inequality is proved. Equality holds if and only if the right-hand side is also zero, which is the case when γ i x i = 0 for all i = 1, ..., n . (b) Assume now that all x i > 0. If there is an i with γ i = 0, then the corresponding x i > 0 has no effect on either side of the inequality, hence the i th term can be omitted. Therefore, we may assume that γ i > 0 for all i in the following. If x 1 = x 2 = ... = x n , then equality holds. It remains to show strict inequality if not all x i are equal. The function f is strictly concave on (0, 1/2], because we have for its second derivative

{\displaystyle f''(x)=-{\frac {1}{x^{2}}}+{\frac {1}{(1-x)^{2}}}<0,\qquad x\in (0,{\tfrac {1}{2}}).}

Using the functional equation for the natural logarithm and Jensen's inequality for the strictly concave f , we obtain that

{\displaystyle \ln {\frac {\prod _{i=1}^{n}x_{i}^{\gamma _{i}}}{\prod _{i=1}^{n}(1-x_{i})^{\gamma _{i}}}}=\sum _{i=1}^{n}\gamma _{i}f(x_{i})<f{\Bigl (}\sum _{i=1}^{n}\gamma _{i}x_{i}{\Bigr )}=\ln {\frac {\sum _{i=1}^{n}\gamma _{i}x_{i}}{\sum _{i=1}^{n}\gamma _{i}(1-x_{i})}},}

where we used in the last step that the γ i sum to one. Taking the exponential of both sides gives the Ky Fan inequality.

A second inequality is also called the Ky Fan inequality, because of a 1972 paper, "A minimax inequality and its applications". This second inequality is equivalent to the Brouwer Fixed Point Theorem , but is often more convenient. Let S be a compact convex subset of a finite-dimensional vector space V , and let {\displaystyle f(x,y)} be a function from {\displaystyle S\times S} to the real numbers that is lower semicontinuous in x , concave in y and has {\displaystyle f(z,z)\leq 0} for all z in S . Then there exists {\displaystyle x^{*}\in S} such that {\displaystyle f(x^{*},y)\leq 0} for all {\displaystyle y\in S} . This Ky Fan inequality is used to establish the existence of equilibria in various games studied in economics.
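As a quick numerical sanity check of the classical inequality {\displaystyle G_{n}/G_{n}'\leq A_{n}/A_{n}'} , the following short Python sketch (an illustration added here, not part of the original result) tests it on random samples from [0, 1/2]:

```python
import numpy as np

# Spot-check of the Ky Fan inequality G_n/G_n' <= A_n/A_n' for x_i in [0, 1/2].
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.0, 0.5, size=8)
    g_ratio = np.exp(np.mean(np.log(x)) - np.mean(np.log(1.0 - x)))  # G_n / G_n'
    a_ratio = np.mean(x) / np.mean(1.0 - x)                          # A_n / A_n'
    assert g_ratio <= a_ratio + 1e-12                                # allow rounding
print("Ky Fan inequality held in all 1000 trials")
```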
https://en.wikipedia.org/wiki/Ky_Fan_inequality
In mathematics , there are different results that share the common name of the Ky Fan inequality . The Ky Fan inequality presented here is used in game theory to investigate the existence of an equilibrium. Another Ky Fan inequality is an inequality involving the geometric mean and arithmetic mean of two sets of real numbers of the unit interval . Suppose that {\displaystyle E} is a convex compact subset of a Hilbert space and that {\displaystyle f} is a function from {\displaystyle E\times E} to {\displaystyle \mathbb {R} } satisfying the following conditions: for every fixed {\displaystyle y\in E} , the map {\displaystyle x\mapsto f(x,y)} is lower semicontinuous; for every fixed {\displaystyle x\in E} , the map {\displaystyle y\mapsto f(x,y)} is concave; and {\displaystyle f(x,x)\leq 0} for all {\displaystyle x\in E} . Then there exists {\displaystyle e\in E} such that {\displaystyle f(e,y)\leq 0} for all {\displaystyle y\in E} .
https://en.wikipedia.org/wiki/Ky_Fan_inequality_(game_theory)
In mathematics , Ky Fan's lemma (KFL) is a combinatorial lemma about labellings of triangulations. It is a generalization of Tucker's lemma . It was proved by Ky Fan in 1952. [ 1 ] KFL uses the following concepts. {\displaystyle B_{n}} is the closed n -dimensional ball, and its boundary is the ( n − 1)-dimensional sphere. A triangulation T of {\displaystyle B_{n}} is boundary-antipodally-symmetric if the part of T lying on the boundary sphere is antipodally symmetric: whenever σ is a boundary simplex of T , so is −σ. A labeling L of the vertices of T with labels from {+1, −1, +2, −2, ..., + m , − m } (for some integer m ) is boundary-odd if L (− v ) = − L ( v ) for every vertex v on the boundary. An edge of T is complementary if its two endpoints carry labels of the same size and opposite signs (e.g. +2 and −2). A simplex of T is alternating if its vertex labels all have different sizes and alternate in sign when ordered by size (e.g. (+1, −2, +3) or (−1, +2, −3)).

Let T be a boundary-antipodally-symmetric triangulation of {\displaystyle B_{n}} and L a boundary-odd labeling of T . If L has no complementary edge, then L has an odd number of n -dimensional alternating simplices.

By definition, an n -dimensional alternating simplex must have labels with n + 1 different sizes. This means that, if the labeling L uses only n different sizes (i.e. {\displaystyle L:V(T)\to \{+1,-1,+2,-2,\ldots ,+n,-n\}} ), it cannot have an n -dimensional alternating simplex. Hence, by KFL, L must have a complementary edge.

KFL can be proved constructively based on a path-based algorithm. The algorithm starts at a certain point or edge of the triangulation, then goes from simplex to simplex according to prescribed rules, until it is not possible to proceed any more. It can be proved that the path must end in an alternating simplex.

The proof is by induction on n . The basis is {\displaystyle n=1} . In this case, {\displaystyle B_{n}} is the interval {\displaystyle [-1,1]} and its boundary is the set {\displaystyle \{-1,1\}} . The labeling L is boundary-odd, so {\displaystyle L(-1)=-L(+1)} . Without loss of generality, assume that {\displaystyle L(-1)=-1} and {\displaystyle L(+1)=+1} . Start at −1 and go right. At some edge e , the labeling must change from negative to positive. Since L has no complementary edges, e must have a negative label and a positive label with a different size (e.g. −1 and +2); this means that e is a 1-dimensional alternating simplex. Moreover, if at any point the labeling changes again from positive to negative, then this change makes a second alternating simplex, and by the same reasoning as before there must be a third alternating simplex later. Hence, the number of alternating simplices is odd. A small computational illustration of this base case is given below.

The following description illustrates the induction step for {\displaystyle n=2} . In this case {\displaystyle B_{n}} is a disc and its boundary is a circle. The labeling L is boundary-odd, so {\displaystyle L(-v)=-L(v)} for every point v on the boundary. Split the boundary circle into two semi-circles and treat each semi-circle as an interval. By the induction basis, each such interval must contain an alternating simplex, e.g. an edge with labels (+1, −2). Moreover, the number of such edges on both intervals is odd. Using the boundary criterion, on the boundary we have an odd number of edges where the smaller label is positive and the larger negative, and an odd number of edges where the smaller label is negative and the larger positive. We call the former decreasing , the latter increasing . There are two kinds of triangles. By induction, this proof can be extended to any dimension.
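To make the one-dimensional base case concrete, here is a small Python sketch (an illustration written against the definitions given above, not code from the original paper). It generates random boundary-odd labelings of a path triangulation of [−1, 1] with labels in {+1, −1, +2, −2}, skips labelings containing a complementary edge, and confirms that the number of alternating edges is odd:

```python
import random

def random_boundary_odd_labeling(num_vertices):
    """Labels for a path triangulation of [-1, 1]; endpoints get opposite labels."""
    labels = [random.choice([1, -1, 2, -2]) for _ in range(num_vertices)]
    labels[-1] = -labels[0]                      # boundary-odd: L(-v) = -L(v)
    return labels

def alternating_edge_count(labels):
    """Number of alternating edges, or None if a complementary edge exists."""
    edges = list(zip(labels, labels[1:]))
    if any(a == -b for a, b in edges):           # same size, opposite signs
        return None
    # alternating edge: two different sizes with opposite signs, e.g. (+1, -2)
    return sum(abs(a) != abs(b) and a * b < 0 for a, b in edges)

random.seed(1)
for _ in range(10000):
    count = alternating_edge_count(random_boundary_odd_labeling(12))
    if count is not None:
        assert count % 2 == 1                    # Ky Fan's lemma for n = 1
print("every complementary-edge-free labeling had an odd number of alternating edges")
```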
https://en.wikipedia.org/wiki/Ky_Fan_lemma
Kylin ( Chinese : 麒麟 ; pinyin : Qílín ; Wade–Giles : Ch'i²-lin² ) is an operating system developed by academics at the National University of Defense Technology in the People's Republic of China since 2001. It is named after the mythical beast qilin . The first versions were based on FreeBSD and were intended for use by the Chinese military and other government organizations. With version 3.0, Kylin became Linux -based, and there is a version called NeoKylin which was announced in 2010. By 2019, the NeoKylin variant was compatible with more than 4,000 software and hardware products, and it shipped pre-installed on most computers sold in China. Together, Kylin and NeoKylin held a 90% market share of the government sector. [ 1 ] A separate project using Ubuntu as the base Linux operating system was announced in 2013. The first version of Ubuntu Kylin was released in April 2013. In August 2020, v10 of Kylin OS was launched. It is compatible with 10,000 hardware and software products, and it "supports Google 's Android ecosystem ". [ 2 ] In July 2022, an open-source version of Kylin, titled openKylin, was released. [ 3 ] Development of Kylin began in 2001, when the National University of Defense Technology was assigned the mission of developing an operating system under the 863 Program , intended to make China independent of foreign technology. [ 4 ] The aim was "to support several kinds of server platforms, to achieve high performance, high availability and high security, as well as conforming to international standards of Unix and Linux operating systems". [ 4 ] It was created using a hierarchy model, including "the basic kernel layer which is similar to Mach , the system service layer which is similar to BSD and the desktop environment which is similar to Windows ". [ 4 ] It was designed to comply with UNIX standards and to be compatible with Linux applications. [ 4 ] In February 2006, "China Military Online" (a website sponsored by PLA Daily of the Chinese People's Liberation Army ) reported the "successful development of the Kylin server operating system", which it said was "the first 64-bit operating system with high security level ( B2 class )" and "also the first operating system without Linux kernel that has obtained Linux global standard authentification [ sic ] by the international Free Standards Group". [ 5 ] In April 2006, it was reported that the Kylin operating system was largely based on FreeBSD 5.3. An anonymous Chinese student in Australia, who used the pseudonym "Dancefire", carried out a kernel similarity analysis and showed that the similarities between the two operating systems reached 99.45 percent. [ 6 ] [ 7 ] One of Kylin's developers confirmed that Kylin was based on FreeBSD during a speech at the international conference EuroBSDCon 2006. [ 8 ] In 2009, a report presented to the US-China Economic and Security Review Commission stated that the purpose of Kylin was to make Chinese computers impenetrable to competing countries in the cyberwarfare arena. The Washington Post reported that: [ 9 ] China has developed more secure operating software for its tens of millions of computers and is already installing it on government and military systems, hoping to make Beijing 's networks impenetrable to U.S. military and intelligence agencies. The deployment of Kylin was said to have "hardened key Chinese servers". [ 9 ] With the advent of version 3.0, Kylin has used the Linux kernel. [ 10 ]
In December 2010, it was announced that China Standard Software and the National University of Defense Technology had signed a strategic partnership to launch a version called NeoKylin. [ 11 ] China Standard Software is the maker of the "NeoShine Linux" desktop series. NeoKylin is intended for use by government offices, national defense, energy and other sectors of the Chinese economy. [ 11 ] In 2014, Bloomberg News reported that the northeastern city of Siping had migrated its computers from Microsoft Windows to NeoKylin, as part of a government effort to shift computer technology to Chinese suppliers. [ 12 ] In September 2015, US computer maker Dell reported that 42% of the personal computers it sold in China were running NeoKylin. [ 13 ] The operating system of the Tianhe-1 supercomputer is 64-bit Kylin Linux, which is optimized for high-performance parallel computing and supports power management and high-performance virtual computing . [ 14 ] The newer Tianhe-2 also uses Kylin Linux. [ 15 ] In 2013, Canonical reached an agreement with the Ministry of Industry and Information Technology of the People's Republic of China to release an Ubuntu -based Linux OS with features targeted at the Chinese market. [ 16 ] Ubuntu Kylin has been described as "a loose continuation of China's Kylin OS". [ 17 ] It is intended for desktop and laptop computers. [ 18 ] The first official release, Ubuntu Kylin 13.04, was on 25 April 2013. [ 19 ]
https://en.wikipedia.org/wiki/Kylin_(operating_system)
Kyocera Corporation ( 京セラ株式会社 , Kyōsera Kabushiki-gaisha , pronounced [kʲoːseɾa] ) is a Japanese multinational ceramics and electronics manufacturer headquartered in Kyoto , Japan . It was founded as Kyoto Ceramic Company, Limited ( 京都セラミック株式会社 , Kyōto Seramikku Kabushiki-gaisha ) in 1959 by Kazuo Inamori and renamed in 1982. It manufactures industrial ceramics, solar power generating systems, telecommunications equipment, office document imaging equipment, electronic components, semiconductor packages, cutting tools, and components for medical and dental implant systems. Kyocera's original product was a ceramic insulator known as a "kelcima" for use in cathode-ray tubes . The company quickly adapted its technologies to produce an expanding range of ceramic components for electronic and structural applications. In the 1960s, as the NASA space program, the birth of Silicon Valley and the advancement of computer technology created demand for semiconductor integrated circuits (ICs), Kyocera developed ceramic semiconductor packages that remain among its core product lines. In the mid-1970s, Kyocera began expanding its material technologies to produce a diverse range of applied ceramic products, including solar photovoltaic modules; biocompatible tooth- and joint-replacement systems; industrial cutting tools; consumer ceramics, such as ceramic-bladed kitchen knives and ceramic-tipped ballpoint pens; and lab-grown gemstones, including rubies , emeralds , sapphires , opals , alexandrites and padparadschahs . The company acquired electronic equipment manufacturing and radio communication technologies in 1979 through an investment in Cybernet Electronics Corporation, which was merged into Kyocera in 1982. Shortly afterward, Kyocera introduced one of the first portable, battery-powered laptop computers, sold in the U.S. as the Tandy Model 100 , which featured an LCD screen and telephone-modem data transfer capability. Kyocera gained optical technology by acquiring Yashica in 1983, along with Yashica's prior licensing agreement with Carl Zeiss , and manufactured film and digital cameras under the Kyocera, Yashica and Contax trade names until 2005, when the company discontinued all film and digital camera production. In the 1980s, Kyocera marketed audio components, such as CD players , receivers , turntables , and cassette decks . These featured unique elements, including Kyocera ceramic-based platforms. At one time, Kyocera owned the famous KLH brand founded by Henry Kloss , though Kloss and the original Cambridge design and engineering staff had left the company by the time of the Kyocera purchase. In 1989, Kyocera stopped production of audio components and sought a buyer for the KLH brand. In 1989, Kyocera acquired Elco Corporation, a manufacturer of electronic connectors. In 1990, Kyocera's global operations expanded significantly with the addition of AVX Corporation , a global manufacturer of passive electronic components , such as ceramic chip capacitors, filters and voltage suppressors. Expanding sales of photovoltaic solar energy products led the company to create Kyocera Solar Corporation in Japan in 1996, and Kyocera Solar, Inc. in the U.S. in 1999. On August 4, 1999, Kyocera completed its merger with solar energy systems integrator Golden Genesis Company (Nasdaq:GGGO). [ 1 ] In January 2000, Kyocera acquired photocopier manufacturer Mita Industrial Company, following Mita's decline and bankruptcy in the late 1990s. 
[ 2 ] This resulted in the creation of Kyocera Mita Corporation (now Kyocera Document Solutions Corporation), headquartered in Osaka, Japan, with subsidiaries in more than 25 nations. Also in 2000, Kyocera acquired the mobile phone manufacturing operations of Qualcomm Incorporated to form Kyocera Wireless Corp. In 2003, Kyocera Wireless Corp. established Kyocera Wireless India (KWI), a mobile phone subsidiary in Bangalore. KWI has established alliances with several leading players providing CDMA services in India. Kyocera Wireless Corporation was the first to combine BREW capabilities and enhanced color displays on entry-level CDMA handsets, when it demonstrated BREW-enabled handsets at the BREW 2003 Developers Conference. [ 3 ] In 2008, Kyocera acquired Sanyo Mobile , the mobile phone division of Sanyo Electric Co., Ltd. , and its associated operations in Japan, the United States and Canada. In April 2009, Kyocera unveiled its EOS concept phone at CTIA , featuring an OLED display and powered by kinetic energy from the user. The prototype phone also had a foldable design capable of morphing into a variety of shapes. [ 4 ] In 2009, Kyocera sold its Indian R&D Division (Wireless) to Mindtree Limited . [ 5 ] [ 6 ] In March 2010, Kyocera launched its first smartphone ( Zio ) since 2001, after focusing on lower-cost phones. [ 7 ] Also in March 2010, Kyocera announced the merger of its two wholly owned subsidiaries: San Diego–based Kyocera Wireless Corp. and Kyocera Communications, Inc. The merged enterprise continued under the name Kyocera Communications, Inc. Later that month, Kyocera agreed to acquire part of the thin film transistor (TFT) liquid crystal display (LCD) design and manufacturing business of Sony Corporation's subsidiary Sony Mobile Display Corporation. [ 8 ] In October 2010, Kyocera acquired 100% ownership of the shares of TA Triumph-Adler AG (Nuremberg, Germany) and converted the daughter company into TA Triumph-Adler GmbH. TA Triumph-Adler GmbH currently distributes Kyocera-made printing devices and software under the TA Triumph-Adler and UTAX trademarks within the EMEA (Europe-Middle East-Africa) region. TA Triumph-Adler GmbH is located in Nuremberg, Germany, and UTAX GmbH (a subsidiary of TA Triumph-Adler) in Norderstedt, Germany. [ citation needed ] In July 2011, Kyocera's wholly owned Germany-based subsidiary Kyocera Fineceramics GmbH acquired 100% ownership of the shares in Denmark -based industrial cutting tool manufacturing and sales company Unimerco Group A/S. Unimerco had been founded in Denmark in 1964. [ 9 ] [ 10 ] Today, the subsidiary is known as Kyocera Unimerco A/S, and comprises a tooling division and a fastening division. [ 11 ] In February 2012, Kyocera became the sole shareholder of Optrex Corporation, which was subsequently renamed Kyocera Display Corporation. [ citation needed ] In March 2016, Kyocera acquired an international cutting tool company called SGS Tool Company for $89 million. [ 12 ] In August 2017, Kyocera acquired 100% ownership of Senco Industrial Tools. [ 13 ] In November 2020, Kyocera acquired a light source company called SLD Laser, which had developed a product that uses phosphor to convert blue laser light into a broad-spectrum, incoherent, high-luminance white light source. [ 14 ] Kyocera Document Solutions Corporation manufactures a wide range of printers, MFPs, and toner cartridges, which are sold throughout Europe, the Middle East, Africa, Australia and the Americas.
Kyocera printing devices are also marketed under the Copystar name in the Americas and under the TA Triumph-Adler and UTAX names in the EMEA (Europe-Middle East-Africa) region. This division is overseen by Aaron Thomas (North American division President), Henry Goode, and Adam Stevens. In the past, Kyocera manufactured satellite phones for the Iridium network. Three handsets were released in 1999, including one with an unusual docking station which contained the Iridium transceiver and antenna, as well as a pager for the Iridium network. [ 15 ] [ 16 ] Kyocera manufactures mobile phones for wireless carriers in the United States and Canada. Marketing is done by its subsidiary Kyocera International, Inc. Kyocera acquired the terminal business of US digital communications technology company Qualcomm in February 2000, [ 17 ] and became a major supplier of mobile handsets. In 2008, Kyocera also took over the handset business of Sanyo , eventually forming Kyocera Communications, Inc. The Kyocera Communications terminal division is located in San Diego . Kyocera Corporation manufactures and markets phones for the Japanese market which are sold under different brands. Kyocera makes phones for some Japanese wireless carriers, including au , Willcom , SoftBank and Y!mobile . In May 2012, Kyocera released the world's first speaker-less smartphone, the Kyocera Urbano Progresso. This phone produces vibration to conduct sound through the ear canal instead of using a customary speaker, making it easier to hear phone conversations in busy and noisy places. This also benefits those who have difficulty hearing but are not totally deaf. The phone could be used across the world on CDMA, GSM, GPRS and UMTS networks, but was only available in Japan. [ 18 ] Kyocera maintains production bases for photovoltaic cells and solar modules in Japan and China. In 2009, it was announced that Kyocera's solar modules were available as an option on the Toyota Prius . [ 19 ] The company also operates solar power plants, such as the Kagoshima Nanatsujima Mega Solar Power Plant . Kyocera sells ceramic knives via its web store and retail outlets under the name Kyocera Advanced Ceramics. Kyocera's headquarters building in Kyoto is 95 metres (312 ft) tall. A 1,900-panel photovoltaic power system is on the roof and south wall of the building, which can supply 12.5% of the facility's needed energy, generating 182 megawatt hours per year. [ 20 ] Between 1978 and 1998, Kyocera and the International Affairs Board of the City of San Diego sponsored an all-expense-paid tour of Japan for students from the United States called HORIZON (stylized in all capital letters and designated by year: e.g. HORIZON '98). The program's purpose was to acquaint these students with the Japanese people and their culture, and to facilitate friendship and understanding. The program was open to students ages 10–14; applicants were chosen randomly. The brand Mita was the first main sponsor of the Argentine club Independiente , from 1985 to 1992. Mita also sponsored English club Aston Villa F.C. , appearing on shirt fronts from 1984 to 1993, [ 21 ] and Italian club Como 1907 from 1983 to 1989. Between 2005 and 2008, Kyocera also sponsored Reading F.C. and the Brazilian football team Atlético Paranaense , holding the naming rights to their stadium . Kyocera is currently the sponsor of the football club Kyoto Sanga F.C. of the J-League (its hometown team; here the word "Kyocera" is written in Japanese katakana , everywhere else in the Latinized logo).
Kyocera holds the naming rights for the Kyocera Dome Osaka , colloquially known as Osaka Dome. The indoor dome is the home field of the baseball teams Orix Buffaloes and Hanshin Tigers .
https://en.wikipedia.org/wiki/Kyoto_Ceramic_Co.,_Ltd.
The Kyoto Prize in Basic Sciences is awarded once a year by the Inamori Foundation . The Prize is one of three Kyoto Prize categories; the others are the Kyoto Prize in Advanced Technology and the Kyoto Prize in Arts and Philosophy . The first Kyoto Prize in Basic Sciences was awarded to Claude Elwood Shannon for the “Establishment of Mathematical Foundation of Information Theory”. [ 1 ] The Prize is regarded as a prestigious award available in fields that are traditionally not honored with a Nobel Prize . [ 2 ] The Kyoto Prize in Basic Sciences is awarded on a rotating basis to researchers in four different fields.
https://en.wikipedia.org/wiki/Kyoto_Prize_in_Basic_Sciences
Kyoung-Shin Choi ( Korean : 최경신 ) is a professor of chemistry at the University of Wisconsin-Madison . [ 4 ] [ 5 ] Choi's research focuses on the electrochemical synthesis of electrode materials for use in electrochemical and photoelectrochemical devices. Choi studied piano at Yewon Middle School, Korea's first middle school dedicated to the arts. In high school, Choi enjoyed her chemistry and physics classes tremendously and decided to become a scientist. [ 6 ] [ 7 ] Choi attended college at Seoul National University in South Korea , earning her B.S. (major in Food and Nutrition and minor in Chemistry) in 1993 and her M.S. in 1995. [ 6 ] [ 7 ] She worked with Jin-Ho Choy on the crystal structure, pressure-induced phase transitions, and magnetism of chromium-niobium oxide materials that adopt the double perovskite structure . [ 8 ] For her doctoral study, Choi came to the United States in 1995. [ 6 ] [ 7 ] She worked at Michigan State University in the laboratory of Mercouri G. Kanatzidis , earning her Ph.D. in chemistry in 2000. Her graduate work focused on the synthesis of various solid-state antimony- and bismuth-containing chalcogenides [ 9 ] [ 10 ] [ 11 ] using the "molten polychalcogenide salt method." [ 12 ] Choi then conducted postdoctoral studies from 2000 to 2002 at the University of California, Santa Barbara with Galen D. Stucky and Eric W. McFarland . Her postdoctoral research concerned the electrochemical synthesis of nanostructured thin films. [ 13 ] [ 14 ] Choi began her independent career at Purdue University as an assistant professor in 2002, and was later promoted to associate professor. She was a visiting scholar at the National Renewable Energy Laboratory in 2008. In 2012, she moved to the University of Wisconsin-Madison as a full professor of chemistry. [ 15 ] Choi has served as an associate editor of the journal Chemistry of Materials since 2014. [ 16 ] The Choi research group studies electrodes and catalysts for use in photoelectrochemical and electrochemical applications. Earlier work in the group included the crystallization of cuprous oxide in various morphologies, in which the authors used electrochemistry to control the crystallization process and the resultant crystal morphologies. [ 17 ] [ 18 ] The Choi group has extensively studied bismuth vanadate , a photoanode for light-driven water splitting. This material suffers from facile bulk electron-hole recombination , but by combining the bismuth vanadate photoanode with oxygen-evolution catalysts such as FeOOH and NiOOH, Choi and coworkers were able to minimize this deleterious process and achieve higher catalytic efficiencies. [ 19 ] [ 20 ] The Choi group has also studied the stability of the bismuth vanadate catalyst, [ 21 ] as well as the effects of surface composition on the interfacial energetics of photoelectrochemical catalysis. [ 22 ] In one report, Choi and coworkers developed a photoelectrochemical cell (PEC), a device that can split water into hydrogen and oxygen given inputs of light and electricity. PECs are promising devices for hydrogen production , for use in a hydrogen economy . However, the anodic reaction, the oxygen evolution reaction (OER), is slow and limits the overall process. To sidestep this problem, Choi and coworkers paired the hydrogen evolution reaction (HER) with the oxidation of 5-hydroxymethylfurfural (HMF) to 2,5-furandicarboxylic acid (FDCA). [ 23 ] This allowed them to generate FDCA, a valuable commodity chemical used in plastic production, from HMF, which can be derived from cellulose .
[ 24 ]
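For illustration, the electrode pairing described above can be summarized by the following half-reactions, balanced in base (this balancing is supplied here for clarity, treating the HMF-to-FDCA conversion as a six-electron oxidation; it is not a scheme quoted from the paper):

{\displaystyle {\text{cathode (HER):}}\quad 6\,\mathrm {H_{2}O} +6e^{-}\longrightarrow 3\,\mathrm {H_{2}} +6\,\mathrm {OH^{-}} }

{\displaystyle {\text{anode (HMF oxidation):}}\quad \mathrm {C_{6}H_{6}O_{3}} +6\,\mathrm {OH^{-}} \longrightarrow \mathrm {C_{6}H_{4}O_{5}} +4\,\mathrm {H_{2}O} +6e^{-}}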
https://en.wikipedia.org/wiki/Kyoung-Shin_Choi
Kyriacos Costa Nicolaou ( Greek : Κυριάκος Κ. Νικολάου ; born July 5, 1946) [ 1 ] is a Greek Cypriot-American chemist known for his research in the area of natural products total synthesis . He is currently Harry C. and Olga K. Wiess Professor of Chemistry at Rice University , having previously held academic positions at The Scripps Research Institute / UC San Diego and the University of Pennsylvania . [ 2 ] [ 3 ] K. C. Nicolaou was born on July 5, 1946, in Karavas , Cyprus , where he grew up and went to school until the age of 18. In 1964, he went to England, where he spent two years learning English and preparing to enter university. He studied chemistry at the University of London (B.Sc., 1969, Bedford College ; Ph.D., 1972, University College London , with Professors F. Sondheimer and P. J. Garratt). In 1972, he moved to the United States and, after postdoctoral appointments at Columbia University (1972–1973, Professor T. J. Katz) and Harvard University (1973–1976, Professor E. J. Corey ), he joined the faculty at the University of Pennsylvania, where he became the Rhodes-Thompson Professor of Chemistry. While at Penn, he won a Sloan Fellowship . [ 4 ] In 1989, he relocated to San Diego, where he took up a joint appointment at the University of California , San Diego , where he served as Professor of Chemistry, and The Scripps Research Institute , where he was Darlene Shiley Professor of Chemistry and Chairman of the Department of Chemistry. In 1996, he was appointed Aline W. and L.S. Skaggs Professor of Chemical Biology in The Skaggs Institute for Chemical Biology, The Scripps Research Institute. From 2005 to 2011, he directed the Chemical Synthesis Laboratory @ ICES-A*STAR, Singapore. In 2013, Nicolaou moved to Rice University . The Nicolaou group is active in the field of organic chemistry, with research interests in methodology development and total synthesis . [ 5 ] [ 6 ] [ 7 ] [ 8 ] He is responsible for the synthesis of many complex molecules found in nature, such as Taxol and vancomycin . His group's route to Taxol , completed in 1994 at roughly the same time as a synthesis by the group of Robert A. Holton , attracted national news media attention due to Taxol 's structural complexity and its potent anti-cancer activity. [ 9 ] [ 10 ] He is also the co-author of three popular books on total synthesis , and has authored or co-authored several other books. K. C. Nicolaou has received numerous awards and honors.
https://en.wikipedia.org/wiki/Kyriacos_Costa_Nicolaou
The Kármán line (or von Kármán line / v ɒ n ˈ k ɑːr m ɑː n / ) [ 2 ] is a conventional definition of the edge of space ; it is widely but not universally accepted. The international record-keeping body FAI (Fédération aéronautique internationale) defines the Kármán line at an altitude of 100 kilometres (54 nautical miles; 62 miles; 330,000 feet) above mean sea level . While named after Theodore von Kármán , who calculated a theoretical limit of altitude for aeroplane flight at 83.8 km (52.1 mi) above Earth, the later-established Kármán line is more general and has no distinct physical significance: the characteristics of the atmosphere change only gradually across the line, and experts disagree on any distinct boundary where the atmosphere ends and space begins. It lies well above the altitude reachable by conventional airplanes or high-altitude balloons , and is approximately the altitude below which satellites, even on very eccentric trajectories, will decay before completing a single orbit. The Kármán line is mainly used for the legal and regulatory purpose of differentiating between aircraft and spacecraft , which are then subject to different jurisdictions and legislations. While international law does not define the edge of space, or the limit of national airspace, [ 3 ] [ 4 ] most international organizations and regulatory agencies (including the United Nations) accept the FAI's Kármán line definition or something close to it. [ 5 ] As defined by the FAI, the Kármán line was established in the 1960s. [ 6 ] Various countries and entities define space's boundary differently for various purposes. [ 7 ] [ 3 ] [ 8 ] The FAI uses the term Kármán line to define the boundary between aeronautics and astronautics. [ 6 ] The expressions " edge of space " or "near space" are often used (by, for instance, the FAI in some of their publications) [ 10 ] to refer to a region below the boundary of outer space, often meant to include substantially lower regions as well. Thus, certain balloon or airplane flights might be described as "reaching the edge of space". In such statements, "reaching the edge of space" merely refers to going higher than average aeronautical vehicles commonly would. [ 11 ] [ 12 ] There is still no international legal definition of the demarcation between a country's air space and outer space. [ 13 ] In 1963, Andrew G. Haley discussed the Kármán line in his book Space Law and Government . [ 14 ] In a chapter on the limits of national sovereignty , he made a survey of major writers' opinions. [ 14 ] : 82–96 He indicated the inherent imprecision of the line: The line represents a mean or median measurement. It is comparable to such measures used in the law as mean sea level , meander line, tide line; but it is more complex than these. In arriving at the von Kármán jurisdictional line, myriad factors must be considered – other than the factor of aerodynamic lift. These factors have been discussed in a very large body of literature and by a score or more of commentators. They include the physical constitution of the air ; the biological and physiological viability; and still other factors which logically join to establish a point at which air no longer exists and at which airspace ends. [ 14 ] : 78, 79 In the final chapter of his autobiography, Kármán addresses the issue of the edge of outer space : Where space begins ... can actually be determined by the speed of the space vehicle and its altitude above the Earth.
Consider, for instance, the record flight of Captain Iven Carl Kincheloe Jr. in an X-2 rocket plane . Kincheloe flew 2000 miles per hour (3,200 km/h) at 126,000 feet (38,500 m), or 24 miles up. At this altitude and speed, aerodynamic lift still carries 98 percent of the weight of the plane, and only two percent is carried by inertia, or Kepler force , as space scientists call it. But at 300,000 feet (91,440 m) or 57 miles up, this relationship is reversed because there is no longer any air to contribute lift: only inertia prevails. This is certainly a physical boundary, where aerodynamics stops and astronautics begins, and so I thought why should it not also be a jurisdictional boundary? Andrew G. Haley has termed it the Kármán Jurisdictional Line. Below this line, space belongs to each country. Above this level there would be free space. [ 15 ] No atmosphere ends abruptly; it instead becomes progressively less dense with altitude. Depending on how the various layers that make up the space around the Earth are defined (and depending on whether these layers are considered part of the actual atmosphere), the definition of the edge of space could vary considerably: if one were to consider the thermosphere and exosphere part of the atmosphere and not of space, one might have to extend the boundary of space to at least 10,000 km (6,200 miles) above sea level. The Kármán line is thus a largely arbitrary definition based on technical considerations. An aircraft can stay aloft only by constantly traveling forward relative to the air (rather than the ground), so that the wings can generate aerodynamic lift. The thinner the air, the faster the plane must go to generate enough lift to stay up. [ 16 ] At very high speeds, centrifugal force (Kepler force) contributes to maintaining altitude. This is the fictitious force that keeps satellites in circular orbit without any aerodynamic lift. As altitude increases and air density decreases, the speed required to generate enough aerodynamic lift to support the aircraft's weight increases, until the speed becomes so high that the centrifugal contribution becomes significant. At a high enough altitude, the centrifugal force dominates over the lift force, and the aircraft effectively becomes an orbiting spacecraft rather than an aircraft supported by aerodynamic lift. In 1956, von Kármán presented a paper in which he discussed aerothermal limits to flight. The faster aircraft fly, the more heat they generate due to aerodynamic heating from friction with the atmosphere and adiabatic processes . Based on the state of the art at the time, he calculated the speeds and altitudes at which continuous flight was possible—fast enough that enough lift would be generated and slow enough that the vehicle would not overheat. [ 17 ] The chart included an inflection point at around 275,000 feet (52.08 mi; 83.82 km), above which the minimum speed would place the vehicle into orbit . [ 18 ] [ 19 ] The term "Kármán line" was coined by Andrew G. Haley in a 1959 paper, [ 20 ] based on the chart in von Kármán's 1956 paper, but Haley acknowledged that the 275,000 feet (52.08 mi; 83.82 km) limit was theoretical and would change as technology improved: the minimum speed in von Kármán's calculations was based on the speed-to-weight ratio of aircraft of the time, namely the Bell X-2 , and the maximum speed on the cooling technologies and heat-resistant materials then available. [ 18 ]
Haley also cited other technical considerations for that altitude, as it was approximately the altitude limit for an air-breathing jet engine based on the technology of the time. In the same 1959 paper, Haley also referred to 295,000 feet (55.9 mi; 90 km) as the "von Kármán Line", which was the lowest altitude at which free-radical atomic oxygen occurred. [ 18 ] The U.S. Armed Forces define an astronaut as a person who has flown higher than 50 miles (80 km) above mean sea level , approximately the line between the mesosphere and the thermosphere . NASA formerly used the FAI's 100-kilometre (62-mile) figure, though this was changed in 2005 to eliminate any inconsistency between military personnel and civilians flying in the same vehicle. [ 21 ] Three veteran NASA X-15 pilots ( John B. McKay , William H. Dana and Joseph Albert Walker ) were retroactively (two posthumously ) awarded their astronaut wings , as they had flown between 90 km (56 miles) and 108 km (67 miles) during the 1960s, but at the time had not been recognized as astronauts. [ 11 ] The latter altitude, achieved twice by Walker, exceeds the modern international definition of the boundary of space. The United States Federal Aviation Administration also recognizes this line as a space boundary: [ 22 ] Suborbital Flight: Suborbital spaceflight occurs when a spacecraft reaches space but its velocity is such that it cannot achieve orbit. Many people believe that in order to achieve spaceflight, a spacecraft must reach an altitude higher than 100 kilometers (62 miles) above sea level. Works by Jonathan McDowell (Harvard-Smithsonian Center for Astrophysics) [ 23 ] and Thomas Gangale (University of Nebraska-Lincoln) in 2018 [ 18 ] [ 24 ] advocate that the demarcation of space should be at 80 km (50 miles; 260,000 feet), citing as evidence von Kármán's original notes and calculations (which concluded the boundary should be 270,000 ft), confirmation that orbiting objects can survive multiple perigees at altitudes around 80 to 90 km, plus functional, cultural, physical, technological, mathematical, and historical factors. [ 3 ] [ 25 ] More precisely, the paper summarizes: To summarize, the lowest possible sustained circular orbits are at of order 125 km altitude, but elliptical orbits with perigees at 100 km can survive for long periods. In contrast, Earth satellites with perigees below 80 km are highly unlikely to complete their next orbit. It is noteworthy that meteors (travelling much more quickly) usually disintegrate in the 70–100 km altitude range, adding to the evidence that this is the region where the atmosphere becomes important. These findings prompted the FAI to propose holding a joint conference with the International Astronautical Federation (IAF) in 2019 to "fully explore" the issue. [ 10 ] Another definition proposed in international law discussions defines the lower boundary of space as the lowest perigee attainable by an orbiting space vehicle, but does not specify an altitude. [ 26 ] This is the definition adopted by the U.S. military. [ 27 ] : 13 Due to atmospheric drag, the lowest altitude at which an object in a circular orbit can complete at least one full revolution without propulsion is approximately 150 km (93 miles), [ 28 ] whereas an object can maintain an elliptical orbit with perigee as low as about 90 km (56 miles) without propulsion. [ citation needed ] The U.S. government is resisting efforts to specify a precise regulatory boundary. [ 29 ] [ 30 ]
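As a rough numerical illustration of the lift-versus-orbit crossover described above, the sketch below solves for the altitude at which the speed needed for aerodynamic lift equals the circular orbital speed. The vehicle parameters (wing loading, lift coefficient) and the single-scale-height exponential atmosphere are assumptions chosen for illustration only, so the result should not be read as a derivation of any particular boundary:

```python
import math
from scipy.optimize import brentq

R = 6.371e6                    # Earth radius, m
g0 = 9.81                      # surface gravity, m/s^2
rho0, H = 1.225, 7000.0        # sea-level density (kg/m^3), crude scale height (m)
wing_loading = 500.0           # m/S, kg/m^2 (hypothetical aircraft)
CL = 0.5                       # assumed lift coefficient

def g(h):                      # gravity at altitude h
    return g0 * (R / (R + h)) ** 2

def rho(h):                    # isothermal exponential atmosphere (very crude above ~50 km)
    return rho0 * math.exp(-h / H)

def v_lift(h):                 # speed at which lift balances weight: 0.5*rho*v^2*S*CL = m*g
    return math.sqrt(2.0 * wing_loading * g(h) / (rho(h) * CL))

def v_orbit(h):                # circular orbital speed at altitude h
    return math.sqrt(g(h) * (R + h))

h_cross = brentq(lambda h: v_lift(h) - v_orbit(h), 1e3, 200e3)
print(f"lift/orbit crossover at ~ {h_cross / 1000:.0f} km")
```

With these particular numbers the crossover comes out near 60 km; heavier wing loadings or a more realistic upper-atmosphere density profile push it higher, consistent with the spread of proposed boundaries discussed above.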
While the Kármán line is defined for Earth only, several scientists have estimated the corresponding altitudes for Mars and Venus . Isidoro Martínez arrived at 80 km (50 miles) and 250 km (160 miles), respectively, [ 31 ] while Nicolas Bérend arrived at 113 km (70 miles) and 303 km (188 miles). [ 32 ] In 2014, Oscar Sharp directed The Kármán Line , a British live-action drama short film starring Olivia Colman as Sarah, a wife and mother who suddenly begins levitating, rising slowly until she eventually crosses the eponymous Kármán line into outer space. [ 33 ]
https://en.wikipedia.org/wiki/Kármán_line
In isotropic turbulence the Kármán–Howarth equation (after Theodore von Kármán and Leslie Howarth 1938), which is derived from the Navier–Stokes equations , is used to describe the evolution of the non-dimensional longitudinal autocorrelation. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Consider the two-point velocity correlation tensor for homogeneous turbulence

{\displaystyle R_{ij}(\mathbf {r} ,t)=\langle u_{i}(\mathbf {x} ,t)\,u_{j}(\mathbf {x} +\mathbf {r} ,t)\rangle .}

For isotropic turbulence, this correlation tensor can be expressed in terms of two scalar functions, using the invariant theory of the full rotation group, first derived by Howard P. Robertson in 1940, [ 6 ]

{\displaystyle R_{ij}={u'}^{2}\left[{\bigl (}f(r)-g(r){\bigr )}{\frac {r_{i}r_{j}}{r^{2}}}+g(r)\,\delta _{ij}\right],}

where {\displaystyle u'} is the root mean square turbulent velocity and {\displaystyle u_{1},\ u_{2},\ u_{3}} are the turbulent velocity components in the three directions. Here, {\displaystyle f(r)} is the longitudinal correlation and {\displaystyle g(r)} is the lateral correlation of the velocity at two different points. From the continuity equation, we have

{\displaystyle g(r,t)=f(r,t)+{\frac {r}{2}}{\frac {\partial f(r,t)}{\partial r}}.}

Thus {\displaystyle f(r,t)} uniquely determines the two-point correlation function. Theodore von Kármán and Leslie Howarth derived the evolution equation for {\displaystyle f(r,t)} from the Navier–Stokes equation as

{\displaystyle {\frac {\partial }{\partial t}}({u'}^{2}f)={u'}^{3}\left({\frac {\partial }{\partial r}}+{\frac {4}{r}}\right)h+2\nu {u'}^{2}\left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {4}{r}}{\frac {\partial }{\partial r}}\right)f,}

where {\displaystyle h(r,t)} is the longitudinal triple correlation, which uniquely determines the triple correlation tensor (just as f determines the two-point tensor). L. G. Loitsianskii derived an integral invariant for the decay of the turbulence by taking the fourth moment of the Kármán–Howarth equation in 1939, [ 7 ] [ 8 ] i.e.,

{\displaystyle {\frac {\partial }{\partial t}}\left({u'}^{2}\int _{0}^{\infty }r^{4}f\,dr\right)={u'}^{3}\lim _{r\to \infty }(r^{4}h)+2\nu {u'}^{2}\lim _{r\to \infty }\left(r^{4}{\frac {\partial f}{\partial r}}\right).}

If {\displaystyle f(r)} decays faster than {\displaystyle r^{-3}} as {\displaystyle r\rightarrow \infty } and also, in this limit, if we assume that {\displaystyle r^{4}h} vanishes, we have the quantity

{\displaystyle \Lambda ={u'}^{2}\int _{0}^{\infty }r^{4}f(r,t)\,dr,}

which is invariant. Lev Landau and Evgeny Lifshitz showed that this invariant is equivalent to conservation of angular momentum . [ 9 ] However, Ian Proudman and W.H. Reid showed that this invariant does not always hold, since {\displaystyle \lim _{r\rightarrow \infty }(r^{4}h)} is not in general zero, at least in the initial period of the decay. [ 10 ] [ 11 ] In 1967, Philip Saffman showed that this integral depends on the initial conditions and that the integral can diverge under certain conditions. [ 12 ] For viscosity-dominated flows, during the decay of turbulence, the Kármán–Howarth equation reduces to a heat equation once the triple correlation tensor is neglected, i.e.,

{\displaystyle {\frac {\partial }{\partial t}}({u'}^{2}f)=2\nu {u'}^{2}\left({\frac {\partial ^{2}}{\partial r^{2}}}+{\frac {4}{r}}{\frac {\partial }{\partial r}}\right)f.}

With suitable boundary conditions, the solution to the above equation is given by [ 13 ]

{\displaystyle {u'}^{2}f(r,t)={\frac {\mathrm {const.} }{(\nu t)^{5/2}}}\,e^{-r^{2}/8\nu t},}

so that {\displaystyle {u'}^{2}\sim t^{-5/2}.}
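The decaying solution quoted above can be verified directly. The following sympy sketch (an illustration; A is an arbitrary constant) confirms that {\displaystyle {u'}^{2}f=A\,t^{-5/2}e^{-r^{2}/8\nu t}} satisfies the viscous-limit equation:

```python
import sympy as sp

r, t, nu, A = sp.symbols('r t nu A', positive=True)
U = A * t**sp.Rational(-5, 2) * sp.exp(-r**2 / (8 * nu * t))   # U = u'^2 f
# residual of  dU/dt = 2*nu*(U_rr + (4/r) U_r)
residual = sp.diff(U, t) - 2 * nu * (sp.diff(U, r, 2) + (4 / r) * sp.diff(U, r))
print(sp.simplify(residual))   # prints 0
```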
https://en.wikipedia.org/wiki/Kármán–Howarth_equation
Kármán–Moore theory is a linearized theory for supersonic flows over a slender body, named after Theodore von Kármán and Norton B. Moore, who developed the theory in 1932. [ 1 ] [ 2 ] The theory, in particular, provides an explicit formula for the wave drag , which converts the kinetic energy of the moving body into outgoing sound waves behind the body. [ 3 ] Consider a slender body with pointed edges at the front and back. The supersonic flow past this body will be nearly parallel to the {\displaystyle x} -axis everywhere, since the shock waves formed (one at the leading edge and one at the trailing edge) will be weak; as a consequence, the flow will be potential everywhere and can be described using the velocity potential {\displaystyle \varphi =xv_{1}+\phi } , where {\displaystyle v_{1}} is the incoming uniform velocity and {\displaystyle \phi } characterises the small deviation from the uniform flow. In the linearized theory, {\displaystyle \phi } satisfies

{\displaystyle \beta ^{2}{\frac {\partial ^{2}\phi }{\partial x^{2}}}-{\frac {\partial ^{2}\phi }{\partial y^{2}}}-{\frac {\partial ^{2}\phi }{\partial z^{2}}}=0,}

where {\displaystyle \beta ^{2}=(v_{1}^{2}-c_{1}^{2})/c_{1}^{2}=M_{1}^{2}-1} , {\displaystyle c_{1}} is the sound speed in the incoming flow and {\displaystyle M_{1}} is the Mach number of the incoming flow. This is just the two-dimensional wave equation, and {\displaystyle \phi } is a disturbance propagated with an apparent time {\displaystyle x/v_{1}} and with an apparent velocity {\displaystyle v_{1}/\beta } . Let the origin {\displaystyle (x,y,z)=(0,0,0)} be located at the leading end of the pointed body. Further, let {\displaystyle S(x)} be the cross-sectional area (perpendicular to the {\displaystyle x} -axis) and {\displaystyle l} the length of the slender body, so that {\displaystyle S(x)=0} for {\displaystyle x<0} and for {\displaystyle x>l} . Of course, in supersonic flows, disturbances (i.e., {\displaystyle \phi } ) can propagate only into the region behind the Mach cone . The weak Mach cone for the leading edge is given by {\displaystyle x-\beta r=0} , whereas the weak Mach cone for the trailing edge is given by {\displaystyle x-\beta r=l} , where {\displaystyle r^{2}=y^{2}+z^{2}} is the squared radial distance from the {\displaystyle x} -axis. The disturbance far away from the body behaves just like a propagating cylindrical wave. In front of the cone {\displaystyle x-\beta r=0} , the solution is simply given by {\displaystyle \phi =0} . Between the cones {\displaystyle x-\beta r=0} and {\displaystyle x-\beta r=l} , the solution is given by [ 3 ]

{\displaystyle \phi =-{\frac {v_{1}}{2\pi }}\int _{0}^{x-\beta r}{\frac {S'(\xi )\,d\xi }{\sqrt {(x-\xi )^{2}-\beta ^{2}r^{2}}}},}

whereas behind the cone {\displaystyle x-\beta r=l} , the solution is given by

{\displaystyle \phi =-{\frac {v_{1}}{2\pi }}\int _{0}^{l}{\frac {S'(\xi )\,d\xi }{\sqrt {(x-\xi )^{2}-\beta ^{2}r^{2}}}},}

since {\displaystyle S'(\xi )=0} for {\displaystyle \xi >l} . The solution described above is exact for all {\displaystyle r} when the slender body is a solid of revolution. If this is not the case, the solution valid at large distances acquires corrections associated with the non-linear distortion of the shock profile, whose strength is proportional to {\displaystyle (M_{1}-1)^{1/8}r^{-3/4}} and a factor depending on the shape function {\displaystyle S(x)} . [ 4 ] The drag force {\displaystyle F} is just the {\displaystyle x} -component of the momentum carried away per unit time. To calculate this, consider a cylindrical surface with a large radius and with its axis along the {\displaystyle x} -axis.
The momentum flux density crossing through this surface is simply given by {\displaystyle \Pi _{xr}=\rho v_{r}(v_{1}+v_{x})\approx \rho _{1}(\partial \phi /\partial r)(v_{1}+\partial \phi /\partial x)} . Integrating {\displaystyle \Pi _{xr}} over the cylindrical surface gives the drag force. Due to symmetry, the first term in {\displaystyle \Pi _{xr}} upon integration gives zero, since the net mass flux {\displaystyle \rho v_{r}} is zero on the cylindrical surface considered. The second term gives the non-zero contribution,

{\displaystyle F=-2\pi r\rho _{1}\int _{-\infty }^{+\infty }{\frac {\partial \phi }{\partial r}}{\frac {\partial \phi }{\partial x}}\,dx.}

At large distances, the values {\displaystyle x-\xi \sim \beta r} (the wave region) are the most important in the solution for {\displaystyle \phi } ; this is because, as mentioned earlier, {\displaystyle \phi } is like a disturbance propagating with a speed {\displaystyle v_{1}/\beta } and an apparent time {\displaystyle x/v_{1}} . This means that we can approximate the expression in the denominator as {\displaystyle (x-\xi )^{2}-\beta ^{2}r^{2}\approx 2\beta r(x-\xi -\beta r).} Then we can write, for example,

{\displaystyle \phi \approx -{\frac {v_{1}}{2\pi {\sqrt {2\beta r}}}}\int _{0}^{x-\beta r}{\frac {S'(\xi )\,d\xi }{\sqrt {x-\beta r-\xi }}}.}

From this expression, we can calculate {\displaystyle \partial \phi /\partial r} , which is also equal to {\displaystyle -\beta \,\partial \phi /\partial x} since we are in the wave region. The factor {\displaystyle 1/{\sqrt {r}}} appearing in front of the integral need not be differentiated, since this would give rise only to a small correction proportional to {\displaystyle 1/r} . Effecting the differentiation and returning to the original variables, we find

{\displaystyle {\frac {\partial \phi }{\partial r}}=-\beta {\frac {\partial \phi }{\partial x}}={\frac {v_{1}}{2\pi }}{\sqrt {\frac {\beta }{2r}}}\int _{0}^{x-\beta r}{\frac {S''(\xi )\,d\xi }{\sqrt {x-\beta r-\xi }}}.}

Substituting this in the drag force formula gives us

{\displaystyle F={\frac {\rho _{1}v_{1}^{2}}{4\pi }}\int _{0}^{L}\!\!\int _{0}^{X}\!\!\int _{0}^{X}{\frac {S''(\xi _{1})\,S''(\xi _{2})\,d\xi _{1}\,d\xi _{2}\,dX}{{\sqrt {X-\xi _{1}}}{\sqrt {X-\xi _{2}}}}},\qquad X=x-\beta r.}

This can be simplified by carrying out the integration over {\displaystyle X} . When the integration order is changed, the limit for {\displaystyle X} ranges from {\displaystyle \mathrm {max} (\xi _{1},\xi _{2})} to {\displaystyle L\to \infty } . Upon integration, we have

{\displaystyle F={\frac {\rho _{1}v_{1}^{2}}{4\pi }}\int _{0}^{l}\!\!\int _{0}^{l}S''(\xi _{1})\,S''(\xi _{2})\left[\ln 4L-\ln |\xi _{1}-\xi _{2}|\right]d\xi _{1}\,d\xi _{2}.}

The integral containing the term {\displaystyle L} is zero because {\displaystyle S'(0)=S'(l)=0} (of course, in addition to {\displaystyle S(0)=S(l)=0} ). The final formula for the wave drag force may be written as

{\displaystyle F=-{\frac {\rho _{1}v_{1}^{2}}{4\pi }}\int _{0}^{l}\!\!\int _{0}^{l}S''(\xi _{1})\,S''(\xi _{2})\ln |\xi _{1}-\xi _{2}|\,d\xi _{1}\,d\xi _{2},}

or

{\displaystyle F={\frac {\rho _{1}v_{1}^{2}}{4\pi }}\int _{0}^{l}\!\!\int _{0}^{l}S''(\xi _{1})\,S''(\xi _{2})\ln {\frac {1}{|\xi _{1}-\xi _{2}|}}\,d\xi _{1}\,d\xi _{2}.}

The drag coefficient, defined here with {\displaystyle l^{2}} as the reference area, is then given by {\displaystyle C_{d}=F/({\tfrac {1}{2}}\rho _{1}v_{1}^{2}l^{2})} . Since {\displaystyle F\sim \rho _{1}v_{1}^{2}S^{2}/l^{2}} , as follows from the formula derived above, {\displaystyle C_{d}\sim S^{2}/l^{4}} , indicating that the drag coefficient is proportional to the square of the cross-sectional area and inversely proportional to the fourth power of the body length. The shape with the smallest wave drag for a given volume {\displaystyle V} and length {\displaystyle l} can be obtained from the wave drag force formula. This shape is known as the Sears–Haack body . [ 5 ] [ 6 ]
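To make the final formula concrete, the following Python sketch evaluates the wave-drag double integral numerically for an illustrative smooth area distribution S(x) = S_max sin²(πx/l), which satisfies S = S' = 0 at both ends as the derivation requires; the freestream values and body dimensions are arbitrary assumptions:

```python
import numpy as np

# Numerical evaluation of the Karman-Moore wave-drag formula
#   F = -(rho1 v1^2 / 4 pi) * int int S''(x1) S''(x2) ln|x1 - x2| dx1 dx2
# for the illustrative body S(x) = S_max * sin^2(pi x / l).
rho1, v1 = 1.225, 600.0        # freestream density (kg/m^3) and speed (m/s)
l, S_max = 10.0, 0.5           # body length (m), maximum cross-section (m^2)

def S2(x):                     # second derivative of S(x)
    return 2.0 * S_max * (np.pi / l) ** 2 * np.cos(2.0 * np.pi * x / l)

n = 1000                       # midpoint rule; the log singularity is integrable
x = (np.arange(n) + 0.5) * l / n
X1, X2 = np.meshgrid(x, x)
K = np.zeros_like(X1)
off_diag = X1 != X2
K[off_diag] = np.log(np.abs(X1[off_diag] - X2[off_diag]))
np.fill_diagonal(K, np.log(l / (2.0 * n)) - 1.0)  # cell average of ln|x1-x2| on the diagonal
F = -(rho1 * v1**2 / (4.0 * np.pi)) * (l / n) ** 2 * np.sum(S2(X1) * S2(X2) * K)
print(f"wave drag ~ {F:.0f} N")  # scales as S_max^2 / l^2, as noted above
```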
https://en.wikipedia.org/wiki/Kármán–Moore_theory
Károly Bezdek (born May 28, 1955, in Budapest , Hungary) is a Hungarian - Canadian mathematician . He is a professor as well as a Canada Research Chair of mathematics and the director of the Centre for Computational and Discrete Geometry at the University of Calgary in Calgary , Alberta, Canada. He is also a professor (on leave) of mathematics at the University of Pannonia in Veszprém , Hungary . His main research interests are in geometry, in particular in combinatorial , computational , convex , and discrete geometry . He has authored 3 books and more than 130 research papers. He is a founding Editor-in-Chief of the e-journal Contributions to Discrete Mathematics (CDM). Károly Bezdek was born in Budapest , Hungary, but grew up in Dunaújváros , Hungary. His parents are Károly Bezdek Sr. (mechanical engineer) and Magdolna Cserey. His brother András Bezdek is also a mathematician. Károly and his brother scored at the top level in several mathematics and physics competitions for high school and university students in Hungary. Károly's list of awards includes winning the first prize in the traditional KöMaL (Hungarian Mathematical Journal for High School Students) contest in the academic year 1972–1973, as well as winning the first prize for the research results presented at the National Science Conference for Hungarian Undergraduate Students (TDK) in 1978. Károly entered the Faculty of Science of the Eötvös Loránd University in Hungary, and completed his Diploma in Mathematics in 1978. Bezdek is married to Éva Bezdek, and has three sons: Dániel, [ 1 ] Máté [ 2 ] and Márk. [ 3 ] [ 4 ] Károly Bezdek received his Ph.D. (1980) as well as his Habilitation degree (1997) in mathematics from Eötvös Loránd University in Budapest , Hungary, and his Candidate of Mathematical Sciences degree (1985) as well as his Doctor of Mathematical Sciences degree (1995) from the Hungarian Academy of Sciences . [ 5 ] He has been a faculty member of the Department of Geometry at Eötvös Loránd University in Budapest since 1978. In particular, he was the chair of that department between 1999 and 2006 and a full professor between 1998 and 2012. During 1978–2003, while on a number of special leaves from Eötvös Loránd University , he held numerous visiting positions at research institutions in Canada, Germany, the Netherlands , and the United States. This included a period of about 7 years at the Department of Mathematics of Cornell University in Ithaca , New York. Between 1998 and 2001 Bezdek was appointed a Széchenyi Professor of mathematics at Eötvös Loránd University in Budapest , Hungary. Since 2003 Károly Bezdek has been the Canada Research Chair of computational and discrete geometry at the Department of Mathematics and Statistics of the University of Calgary and the director of the Centre for Computational and Discrete Geometry at the University of Calgary . Between 2006 and 2010 Bezdek was an associate member of the Alfréd Rényi Institute of Mathematics in Budapest , Hungary. Since 2010 Bezdek has been a full professor (on leave) at the Department of Mathematics of the University of Pannonia in Veszprém , Hungary. Between July and December 2011 Bezdek was a program co-chair of the six-month thematic program on discrete geometry and its applications at the Fields Institute in Toronto , Ontario, Canada. He is also one of the three founding editors-in-chief of the free peer-reviewed electronic journal Contributions to Discrete Mathematics.
[ 6 ] His research interests are in combinatorial , computational , convex and discrete geometry , including some aspects of geometric analysis , rigidity and optimization . He is the author of more than 130 research papers and has written three research monographs. In particular, he is known for the following works: his three research monographs "Classical Topics in Discrete Geometry" (CMS Books in Mathematics, Springer , New York, 2010), "Lectures on Sphere Arrangements - the Discrete Geometric Side" (Fields Institute Monographs, Springer , New York, 2013), and "Volumetric Discrete Geometry" (Discrete Mathematics and Its Applications, Chapman and Hall - CRC Press, Boca Raton, FL, 2019, co-authored with Zs. Lángi) lead the reader to the frontiers of discrete geometry . The conference proceedings "Discrete Geometry and Optimization" (Fields Institute Communications, Springer , New York, 2013), edited jointly by him, Antoine Deza ( McMaster University ) and Yinyu Ye ( Stanford University ), reflect and stimulate the fruitful interplay between discrete geometry and optimization . [ 19 ] His awards include the 2020 Immigrant of Distinction Award for Lifetime Achievement of the City of Calgary (22 October 2020), [ 20 ] the 2017 Research Excellence Award of the University of Calgary (15 May 2017), [ 21 ] and the 2015 László Fejes Tóth Prize (Hungarian: Fejes Tóth László-díj, 19 June 2015). [ 22 ]
https://en.wikipedia.org/wiki/Károly_Bezdek
Károly Ereky ( German : Karl Ereky ; 20 October 1878 – 17 June 1952) was a Hungarian agricultural engineer. The term ' biotechnology ' was coined by him in 1919. [ 1 ] He is regarded by some as the "father" of biotechnology. [ 2 ] [ 3 ] [ 4 ] Ereky was born on 18 October 1878 in Esztergom , Hungary, as Károly Wittmann. His father was István Wittmann and his mother Mária Dukai Takách. (Among her relatives was Judit Dukai Takách (1795–1836), the first Hungarian female poet.) In 1893 he changed his name to Ereky. He had three brothers: Jenő, Ferenc and István. Ereky finished grammar school at Sümeg and Székesfehérvár. He attended the Technical University of Budapest and in 1900 received a degree in technical engineering. There may be a family connection between Ereky and his compatriot Franz Wittmann , the prominent electrical engineer and inventor of the Wittmann oscilloscope. He then worked as a machine designer for several paper and food industry companies in Vienna , Austria, until 1905. He moved to Budapest and became an assistant professor at the József Technical University. In 1919 he became the Hungarian Minister of Food. He wrote over one hundred publications, written in Hungarian and published in German. Ereky was also proficient in speaking both German and English. In 1922 he wrote a book on the mechanisms of chlorophyll and how it can be used for animal feeding. In 1925 he wrote a book on leaf proteins as a possible food source, which he also promoted as a commercial product. Ereky coined the word "biotechnology" in Hungary in 1919 in a book he published in Berlin called Biotechnologie der Fleisch-, Fett- und Milcherzeugung im landwirtschaftlichen Grossbetriebe (Biotechnology of Meat, Fat and Milk Production in an Agricultural Large-Scale Farm), where he described a technology based on converting raw materials into a more useful product. [ 5 ] He built a slaughterhouse for a thousand pigs and also a fattening farm with space for 50,000 pigs, raising over 100,000 pigs a year. The enterprise was enormous, becoming one of the largest and most profitable meat and fat operations in the world. Ereky further developed a theme that would be reiterated through the 20th century: biotechnology could provide solutions to societal crises, such as food and energy shortages. For Ereky, the term "biotechnology" indicated the process by which raw materials could be biologically upgraded into socially useful products. The book sold several thousand copies within a few weeks in Germany. In 1921 the book was translated into Dutch. On 19 September 1946, Ereky was sentenced to 12 years in prison in Vác by the People's Tribunal for his counter-revolutionary role in Hungary. He died in prison on 17 June 1952 at the age of 74.
https://en.wikipedia.org/wiki/Károly_Ereky
A Kégresse track is a kind of rubber or canvas continuous track which uses a flexible belt rather than interlocking metal segments. It can be fitted to a conventional car or truck to turn it into a half-track , suitable for use over rough or soft ground. Conventional front wheels and steering are used, although skis may also be fitted. A snowmobile is a smaller ski-only type. The mechanism incorporates an articulated bogie , fitted to the rear of the vehicle, with a large drive wheel at one end, a large unpowered idler wheel at the other, and several small guide wheels in between, over which runs a reinforced flexible belt. The belt is fitted with metal or rubber treads to grip the ground. Adolphe Kégresse designed the original system whilst working for Tsar Nicholas II of Russia as a chauffeur and as the head of the royal garage between 1906 and 1917. He applied it to several cars including a Russo-Balt and a 12-cylinder Packard . After 1917, the Putilov Ironworks also fitted the system to a number of Austin and Rolls-Royce armoured cars. Following the Russian Revolution , Kégresse returned to his native France, where the system was used on Citroën cars between 1921 and 1937 for off-road and military vehicles. Expeditions across undeveloped parts of Asia, America, and Africa were undertaken by Citroën, demonstrating all-terrain capabilities. During World War II , the Wehrmacht captured many Citroën half-track vehicles and armored them for their own use. [ 1 ] The British firm Burford developed the Burford–Kégresse, an armoured personnel carrier conversion of their 30 cwt trucks. The rear-axle-powered Kégresse tracks were produced under license from Citroën. A 1921 prototype passed trials and the British Army placed an order, but in continuous operation the tracks wore and broke. By 1929, the vehicles were taken out of service and later scrapped. Citroën-Kégresse vehicles served in the Polish motorized artillery during the 1930s. [ 2 ] Domestically produced Kégresse half-track trucks included the 1934 Półgąsienicowy ("half-track car"), better known as the C4P, derived from the 4.5-ton Polski Fiat 621 truck. The C4P was designed by the BiRZ Badań Technicznych Broni Pancernych (Warsaw Armored Weapons and Technical Research Bureau) in 1934. The engine and cab received some modifications and the front axle was reinforced to integrate the 4x4 transmission. Production began in 1936 at Państwowe Zakłady Inżynierii 's Warsaw plant. By 1939, more than 400 had been produced, including at least 80 artillery tractors . The FN-Kégresse 3T was a half-track vehicle used by Belgian armed forces as an artillery tractor between 1934 and 1940. 130 were built, with some 100 in service before the German invasion . In the late 1920s, the US Army purchased Citroën-Kégresse vehicles for evaluation, followed by a licence to produce the tracks. A 1939 prototype went into production with the M2 and M3 half-track versions. More than 41,000 vehicles in over 70 versions were produced between 1940 and 1944.
https://en.wikipedia.org/wiki/Kégresse_track
Köhler illumination is a method of specimen illumination used for transmitted and reflected light (trans- and epi-illuminated) optical microscopy . Köhler illumination acts to generate an even illumination of the sample and ensures that an image of the illumination source (for example a halogen lamp filament ) is not visible in the resulting image. Köhler illumination is the predominant technique for sample illumination in modern scientific light microscopy. It requires additional optical elements which are more expensive and may not be present in more basic light microscopes. Prior to Köhler illumination, critical illumination was the predominant technique for sample illumination. Critical illumination has the major limitation that the image of the light source (typically a light bulb ) falls in the same plane as the image of the specimen, i.e., the bulb filament is visible in the final image. The image of the light source is often referred to as the filament image . Critical illumination therefore gives uneven illumination of the sample; bright regions in the filament image illuminate those regions of the sample more strongly. Uneven illumination is undesirable as it can introduce artifacts such as glare and shadowing in the image. Various methods can be used to diffuse the filament image, including reducing power to the light source or using an opal glass bulb or an opal glass diffuser between the bulb and the sample. These methods are all, to some extent, functional at reducing the unevenness of illumination; however, they all reduce the intensity of illumination and alter the range of wavelengths of light which reach the sample. To address these limitations August Köhler designed a method of illumination which uses a perfectly defocused image of the light source to illuminate the sample. This work was published in 1893 in the Zeitschrift für wissenschaftliche Mikroskopie [ 1 ] and was soon followed by publication of an English translation in the Journal of the Royal Microscopical Society . [ 2 ] Köhler illumination has also been developed in the context of nonimaging optics . [ 3 ] The primary limitation of critical illumination is the formation of an image of the light source in the specimen image plane. Köhler illumination addresses this by ensuring the image of the light source is perfectly defocused in the sample plane and its conjugate image planes . In a ray diagram of the illumination light path, this can be seen as the image-forming rays passing parallel through the sample. Köhler illumination requires several optical components to function: a collector lens and/or field lens, a field diaphragm, a condenser diaphragm, and a condenser lens. These components lie in this order between the light source and the specimen and control the illumination of the specimen. The collector/field lenses act to collect light from the light source and focus it at the plane of the condenser diaphragm. The condenser lens acts to project this light, without focusing it, through the sample. This illumination scheme creates two sets of conjugate image planes, one with the light source and its images and one with the specimen and its images. The light-source image planes comprise the lamp filament, the condenser diaphragm, the back focal plane of the objective, and the exit pupil of the eye or camera; the specimen image planes comprise the field diaphragm, the specimen plane, the intermediate image plane at the eyepiece field stop, and the retina or camera sensor. The primary advantage of Köhler illumination is the uniform illumination of the sample. This reduces image artifacts and provides high sample contrast. Uniform illumination of the sample is also critical for advanced illumination techniques such as phase contrast and differential interference contrast microscopy.
Adjusting the condenser diaphragm alters sample contrast . Furthermore, altering the size of the condenser diaphragm allows adjustment of sample depth of field by altering the effective numerical aperture of the microscope. The role of the condenser diaphragm is analogous to the aperture in photography although the condenser diaphragm of a microscope functions by controlling illumination of the specimen, while the aperture of a camera functions by controlling illumination of the detector. Altering the condenser diaphragm allows the amount of light entering the sample to be freely adjusted without altering the wavelengths of light present, in contrast to reducing power to the light source with critical illumination (which changes the color temperature of the lamp). This adjustment is always coupled to an alteration of the numerical aperture of the system, as stated above, and so adjustment of the illumination source intensity by other means is still necessary. By adjustment of the field diaphragm, the image of the field diaphragm aperture in the sample plane is set to a size slightly larger than the imaged region of the sample (which corresponds in turn to the portion of the sample image thrown into the eyepiece field stop ). As the field diaphragm, sample, and eyepiece field stop all lie on conjugate image planes , this adjustment allows the illuminating rays to completely fill the eyepiece field of view, while minimizing the amount of extraneous light which must be blocked by the eyepiece field stop. Such extraneous light scatters inside the system and degrades contrast. Microscopes using Köhler illumination must be routinely checked for correct alignment. The realignment procedure tests whether the correct optical components are in focus at the two sets of conjugate image planes; the light source image planes and the specimen image planes. Alignment of optical components on the specimen image plane is typically performed by first loading a test specimen and bringing it into focus by moving the objective or the specimen. The field diaphragm is then partially closed; the edges of the diaphragm should be in the same conjugate image planes as the specimen, therefore should appear in focus. The focus can be adjusted by raising or lowering the condenser lenses and diaphragm. Finally, the field diaphragm is reopened to just beyond the field of view. In order to test the alignment of components on the light source image plane, the eyepiece must be removed to allow observation of the intermediate image plane (the position of the eyepiece diaphragm) either directly or by using a phase telescope / Bertrand lens . The light source (e.g. the bulb filament) and the edges of the condenser diaphragm should appear in focus. Any optical components at the back focal plane of the objective (e.g. the phase ring for phase contrast microscopy) and at the condenser diaphragm (e.g. the annulus for phase contrast microscopy) should also appear in focus.
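The defocusing principle can be illustrated with elementary ray optics. The following minimal Python sketch uses only the thin-lens equation with assumed focal lengths and distances (all numerical values are hypothetical): once the collector images the filament into the front focal plane of the condenser (the condenser diaphragm plane), the condenser relays that filament image to infinity, so the filament is perfectly defocused at the specimen.

```python
# Minimal thin-lens sketch of the Koehler principle (illustrative values only):
# the collector images the filament into the front focal plane of the
# condenser, so the condenser re-images the filament at infinity, i.e. the
# filament is perfectly defocused in the specimen plane.

def image_distance(f, s_obj):
    """Thin-lens equation 1/f = 1/s_obj + 1/s_img; distances in mm."""
    if abs(1.0 / f - 1.0 / s_obj) < 1e-12:
        return float("inf")   # object in the focal plane -> image at infinity
    return 1.0 / (1.0 / f - 1.0 / s_obj)

f_collector, f_condenser = 30.0, 10.0   # assumed focal lengths (mm)
s_filament = 45.0                        # filament-to-collector distance (mm)

# The collector forms the filament image 90 mm behind itself; the condenser
# diaphragm is placed in exactly this plane.
s_img = image_distance(f_collector, s_filament)
print(f"filament image (condenser diaphragm plane): {s_img:.1f} mm")

# For the condenser, that filament image lies in its front focal plane,
# so the filament is re-imaged at infinity:
print("filament image after condenser:",
      image_distance(f_condenser, f_condenser))   # inf -> defocused
```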
https://en.wikipedia.org/wiki/Köhler_illumination
Köhler theory describes the vapor pressure of aqueous aerosol particles in thermodynamic equilibrium with a humid atmosphere. It is used in atmospheric sciences and meteorology to determine the humidity at which a cloud is formed. Köhler theory combines the Kelvin effect , which describes the change in vapor pressure due to a curved surface, with Raoult's Law , which relates the vapor pressure to the solute concentration. [ 1 ] [ 2 ] [ 3 ] It was initially published in 1936 by Hilding Köhler , Professor of Meteorology at Uppsala University. The Köhler equation relates the saturation ratio S {\displaystyle S} over an aqueous solution droplet of fixed dry mass to its wet diameter D {\textstyle D} as: [ 4 ] S ( D ) = a w exp ⁡ ( 4 σ d v w R T D ) , {\displaystyle S(D)=a_{w}\exp {\left({\frac {4\sigma _{d}v_{w}}{RTD}}\right)},} with a w {\textstyle a_{w}} the water activity of the solution droplet, σ d {\textstyle \sigma _{d}} the surface tension of the solution droplet, v w {\textstyle v_{w}} the partial molar volume of water, R {\textstyle R} the universal gas constant and T {\textstyle T} the temperature. In practice, simplified formulations of the Köhler equation are often used. The Köhler curve is the visual representation of the Köhler equation. It shows the saturation ratio S {\displaystyle S} – or the supersaturation s = ( S − 1 ) ⋅ 100 % {\displaystyle s=\left(S-1\right)\cdot 100\%} – at which the droplet is in equilibrium with the environment over a range of droplet diameters. The exact shape of the curve is dependent upon the amount and composition of the solutes present in the atmosphere. The Köhler curves where the solute is sodium chloride are different from those where the solute is sodium nitrate or ammonium sulfate . The figure above shows three Köhler curves of sodium chloride. Consider (for droplets containing solute with a dry diameter equal to 0.05 micrometers) a point on the graph where the wet diameter is 0.1 micrometers and the supersaturation is 0.35%. Since the ambient supersaturation is larger than the equilibrium supersaturation of the droplet, the droplet will grow. Because an ambient supersaturation of 0.35% exceeds the peak of this Köhler curve, the growing droplet never encounters equilibrium, and thus it grows without bound, as long as the level of supersaturation is maintained. However, if the supersaturation is only 0.3%, the drop will only grow until about 0.5 micrometers. The supersaturation at which the drop will grow without bound is called the critical supersaturation. The diameter at which the curve peaks is called the critical diameter. In practice, simpler versions of the Köhler equation are often used. To derive these, solutes are assumed to be electrolytes that dissociate fully into a fixed number of ions given by the van 't Hoff factor i {\textstyle i} . Also, mixing volumes are neglected and the molar volume of water is calculated by v w = M w ρ w {\textstyle v_{w}={\frac {M_{w}}{\rho _{w}}}} , where ρ w {\textstyle \rho _{w}} and M w {\textstyle M_{w}} are density and molar mass of water, respectively. It is further assumed that the droplets are dilute at high humidity, which allows the following simplifications: the surface tension of the solution is replaced by that of pure water ( σ d ≈ σ w {\textstyle \sigma _{d}\approx \sigma _{w}} ), the water activity is approximated as a w ≈ 1 − i n s / n w {\textstyle a_{w}\approx 1-in_{s}/n_{w}} , and the moles of water are computed from the droplet volume as n w ≈ π ρ w D 3 / ( 6 M w ) {\textstyle n_{w}\approx \pi \rho _{w}D^{3}/(6M_{w})} , where n s {\textstyle n_{s}} is the number of moles of solute. Given these assumptions, the Köhler equation is simplified to: S = ( 1 − 6 M w i n s π ρ w D 3 ) ⋅ exp ⁡ ( 4 σ w M w ρ w R T D ) {\displaystyle S=\left(1-{\frac {6M_{w}in_{s}}{\pi \rho _{w}D^{3}}}\right)\cdot \exp {\left({\frac {4\sigma _{w}M_{w}}{\rho _{w}RTD}}\right)}} To further simplify the equation, exp ⁡ ( a / r ) {\textstyle \exp {\left(a/r\right)}} is approximated by 1 + a / r {\textstyle 1+a/r} and terms proportional to 1 / D 4 {\textstyle 1/D^{4}} are neglected.
This results in the often-used equation: [ 2 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] S = 1 + 4 σ w M w ρ w R T D − 6 M w i n s π ρ w D 3 = 1 + A D − B D 3 {\displaystyle S=1+{\frac {4\sigma _{w}M_{w}}{\rho _{w}RTD}}-{\frac {6M_{w}in_{s}}{\pi \rho _{w}D^{3}}}=1+{\frac {A}{D}}-{\frac {B}{D^{3}}}} with the coefficients A ≈ 2.4 ⋅ 10 − 9 m {\textstyle A\approx 2.4\cdot 10^{-9}\ \mathrm {m} } and B ≈ i n s ⋅ 3.4 ⋅ 10 − 5 m 3 m o l − 1 {\textstyle B\approx in_{s}\cdot 3.4\cdot 10^{-5}\ \mathrm {m^{3}mol^{-1}} } at T = 273.15 K {\textstyle T=273.15\ \mathrm {K} } . This equation allows the critical diameter and the critical saturation ratio (given by the maximum of the Köhler curve) to be derived analytically as S c r i t = 1 + 4 A 3 27 B , D c r i t = 3 B A {\displaystyle S_{\mathrm {crit} }=1+{\sqrt {\frac {4A^{3}}{27B}}},\qquad D_{\mathrm {crit} }={\sqrt {\frac {3B}{A}}}} Another form of the Köhler equation is derived from the logarithmic form of the equation above: ln ⁡ ( S ) = ln ⁡ ( 1 − 6 M w i n s π ρ w D 3 ) + 4 σ w M w ρ w R T D {\displaystyle \ln(S)=\ln {\left(1-{\frac {6M_{w}in_{s}}{\pi \rho _{w}D^{3}}}\right)}+{\frac {4\sigma _{w}M_{w}}{\rho _{w}RTD}}} With ln ⁡ ( 1 − x ) ≈ − x {\textstyle \ln {\left(1-x\right)}\approx -x} as x → 0 {\textstyle x\rightarrow 0} , this leads to: [ 2 ] [ 3 ] ln ⁡ ( S ) = 4 σ w M w ρ w R T D − 6 M w i n s π ρ w D 3 {\displaystyle \ln \left(S\right)={\frac {4\sigma _{w}M_{w}}{\rho _{w}RTD}}-{\frac {6M_{w}in_{s}}{\pi \rho _{w}D^{3}}}} and ln ⁡ ( S c r i t ) = 4 A 3 27 B , D c r i t = 3 B A {\displaystyle \ln {\left(S_{\mathrm {crit} }\right)}={\sqrt {\frac {4A^{3}}{27B}}},\qquad D_{\mathrm {crit} }={\sqrt {\frac {3B}{A}}}}
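A short numerical sketch of the simplified equation above: for an assumed amount of fully dissociated solute (the value of n_s below is an arbitrary example), the critical diameter and critical supersaturation from the analytical expressions can be checked against the maximum of the sampled curve.

```python
# Numerical sketch of the simplified Koehler curve S(D) = 1 + A/D - B/D^3,
# using the coefficient values quoted above at T = 273.15 K. The solute
# amount n_s (moles of dissolved NaCl, i = 2) is an illustrative assumption.
import numpy as np

A = 2.4e-9                      # Kelvin-term coefficient (m)
i, n_s = 2, 1e-17               # van 't Hoff factor; assumed moles of solute
B = i * n_s * 3.4e-5            # Raoult-term coefficient (m^3)

D = np.logspace(-8, -5, 500)    # wet diameters from 10 nm to 10 um
S = 1 + A / D - B / D**3        # saturation ratio along the curve

D_crit = np.sqrt(3 * B / A)                 # analytical maximum of the curve
S_crit = 1 + np.sqrt(4 * A**3 / (27 * B))

print(f"critical diameter:        {D_crit*1e6:.3f} um")
print(f"critical supersaturation: {(S_crit-1)*100:.3f} %")
print(f"curve max (numerical):    {(S.max()-1)*100:.3f} %")  # agrees with S_crit
```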
https://en.wikipedia.org/wiki/Köhler_theory
In kinetics , König's theorem or König's decomposition is a mathematical relation derived by Johann Samuel König that assists with the calculations of angular momentum and kinetic energy of bodies and systems of particles. The theorem is divided into two parts. The first part expresses the angular momentum of a system as the sum of the angular momentum of the center of mass and the angular momentum of the particles relative to the center of mass . [ 1 ] L → = r → C o M × ∑ i m i v → C o M + L → ′ = L → C o M + L → ′ {\displaystyle \displaystyle {\vec {L}}={\vec {r}}_{CoM}\times \sum \limits _{i}m_{i}{\vec {v}}_{CoM}+{\vec {L}}'={\vec {L}}_{CoM}+{\vec {L}}'} Considering an inertial reference frame with origin O, the angular momentum of the system can be defined as: L → = ∑ i ( r → i × m i v → i ) {\displaystyle {\vec {L}}=\sum \limits _{i}({\vec {r}}_{i}\times m_{i}{\vec {v}}_{i})} The position of a single particle can be expressed as: r → i = r → C o M + r → i ′ {\displaystyle {\vec {r}}_{i}={\vec {r}}_{CoM}+{\vec {r}}'_{i}} And so we can define the velocity of a single particle: v → i = v → C o M + v → i ′ {\displaystyle {\vec {v}}_{i}={\vec {v}}_{CoM}+{\vec {v}}'_{i}} The first equation becomes: L → = ∑ i ( r → C o M + r → i ′ ) × m i ( v → C o M + v → i ′ ) = M r → C o M × v → C o M + r → C o M × ∑ i m i v → i ′ + ( ∑ i m i r → i ′ ) × v → C o M + ∑ i r → i ′ × m i v → i ′ {\displaystyle {\vec {L}}=\sum \limits _{i}({\vec {r}}_{CoM}+{\vec {r}}'_{i})\times m_{i}({\vec {v}}_{CoM}+{\vec {v}}'_{i})=M{\vec {r}}_{CoM}\times {\vec {v}}_{CoM}+{\vec {r}}_{CoM}\times \sum \limits _{i}m_{i}{\vec {v}}'_{i}+\left(\sum \limits _{i}m_{i}{\vec {r}}'_{i}\right)\times {\vec {v}}_{CoM}+\sum \limits _{i}{\vec {r}}'_{i}\times m_{i}{\vec {v}}'_{i}} But the following terms are equal to zero: ∑ i m i r → i ′ = 0 {\displaystyle \sum \limits _{i}m_{i}{\vec {r}}'_{i}=0} ∑ i m i v → i ′ = 0 {\displaystyle \sum \limits _{i}m_{i}{\vec {v}}'_{i}=0} So we prove that: L → = ∑ i r → i ′ × m i v → i ′ + M r → C o M × v → C o M {\displaystyle {\vec {L}}=\sum \limits _{i}{\vec {r}}'_{i}\times m_{i}{\vec {v}}'_{i}+M{\vec {r}}_{CoM}\times {\vec {v}}_{CoM}} where M is the total mass of the system. The second part expresses the kinetic energy of a system of particles in terms of the velocities of the individual particles and the center of mass . Specifically, it states that the kinetic energy of a system of particles is the sum of the kinetic energy associated with the movement of the center of mass and the kinetic energy associated with the movement of the particles relative to the center of mass .
[ 2 ] K = K ′ + K CoM {\displaystyle K=K'+K_{\text{CoM}}} The total kinetic energy of the system is: K = ∑ i 1 2 m i v i 2 {\displaystyle K=\sum _{i}{\frac {1}{2}}m_{i}v_{i}^{2}} As in the first part, we substitute the velocity: K = ∑ i 1 2 m i ( v → C o M + v → i ′ ) ⋅ ( v → C o M + v → i ′ ) = 1 2 M v C o M 2 + v → C o M ⋅ ∑ i m i v → i ′ + ∑ i 1 2 m i v i ′ 2 {\displaystyle K=\sum _{i}{\frac {1}{2}}m_{i}\left({\vec {v}}_{CoM}+{\vec {v}}'_{i}\right)\cdot \left({\vec {v}}_{CoM}+{\vec {v}}'_{i}\right)={\frac {1}{2}}Mv_{CoM}^{2}+{\vec {v}}_{CoM}\cdot \sum _{i}m_{i}{\vec {v}}'_{i}+\sum _{i}{\frac {1}{2}}m_{i}{v'_{i}}^{2}} We know that v ¯ C o M ⋅ ∑ i m i v ¯ i ′ = 0 , {\displaystyle {\bar {v}}_{CoM}\cdot \sum _{i}m_{i}{\bar {v}}'_{i}=0,} so if we define: K ′ = ∑ i 1 2 m i v i ′ 2 {\displaystyle K'=\sum _{i}{\frac {1}{2}}m_{i}{v'_{i}}^{2}} K CoM = ∑ i 1 2 m i v CoM 2 = 1 2 M v CoM 2 {\displaystyle K_{\text{CoM}}=\sum _{i}{\frac {1}{2}}m_{i}v_{\text{CoM}}^{2}={\frac {1}{2}}Mv_{\text{CoM}}^{2}} we're left with: K = K ′ + K CoM {\displaystyle K=K'+K_{\text{CoM}}} The theorem can also be applied to rigid bodies , stating that the kinetic energy K of a rigid body, as viewed by an observer fixed in some inertial reference frame N, can be written as: N K = 1 2 m ⋅ N v ¯ ⋅ N v ¯ + 1 2 N H ¯ ⋅ N ω R {\displaystyle ^{N}K={\frac {1}{2}}m\cdot {^{N}\mathbf {\bar {v}} }\cdot {^{N}\mathbf {\bar {v}} }+{\frac {1}{2}}{^{N}\!\mathbf {\bar {H}} }\cdot ^{N}{\!\!\mathbf {\omega } }^{R}} where m {\displaystyle {m}} is the mass of the rigid body; N v ¯ {\displaystyle {^{N}\mathbf {\bar {v}} }} is the velocity of the center of mass of the rigid body, as viewed by an observer fixed in an inertial frame N; N H ¯ {\displaystyle {^{N}\!\mathbf {\bar {H}} }} is the angular momentum of the rigid body about the center of mass, also taken in the inertial frame N; and N ω R {\displaystyle ^{N}{\!\!\mathbf {\omega } }^{R}} is the angular velocity of the rigid body R relative to the inertial frame N. [ 3 ]
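Both parts of the decomposition are easy to verify numerically. The following small Python sketch (a check on randomly generated particles, not part of the original formulation) confirms L = L_CoM + L′ and K = K_CoM + K′.

```python
# Quick numerical check of Koenig's decomposition for a random system of
# point particles: L = L_CoM + L' and K = K_CoM + K'.
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 5.0, size=8)           # masses
r = rng.normal(size=(8, 3))                 # positions
v = rng.normal(size=(8, 3))                 # velocities

M = m.sum()
r_com = (m[:, None] * r).sum(axis=0) / M
v_com = (m[:, None] * v).sum(axis=0) / M
rp, vp = r - r_com, v - v_com               # primed (CoM-frame) quantities

L = (m[:, None] * np.cross(r, v)).sum(axis=0)
L_com = M * np.cross(r_com, v_com)
L_prime = (m[:, None] * np.cross(rp, vp)).sum(axis=0)

K = 0.5 * (m * (v**2).sum(axis=1)).sum()
K_com = 0.5 * M * (v_com**2).sum()
K_prime = 0.5 * (m * (vp**2).sum(axis=1)).sum()

assert np.allclose(L, L_com + L_prime)      # first part of the theorem
assert np.isclose(K, K_com + K_prime)       # second part of the theorem
print("Koenig decomposition verified")
```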
https://en.wikipedia.org/wiki/König's_theorem_(kinetics)
The fortifications of the former East Prussian capital Königsberg (now Kaliningrad ) consist of numerous defensive walls , forts, bastions and other structures. They make up the First and the Second Defensive Belt, built in 1626–1634 and 1843–1859, respectively. [ 2 ] The 15-metre-thick First Belt was erected due to Königsberg's vulnerability during the Polish–Swedish wars . [ 2 ] The Second Belt was largely constructed in the place of the first one, which was in bad condition. [ 2 ] The new belt included twelve bastions, three ravelins , seven spoil banks and two fortresses, surrounded by a water moat . [ 2 ] Ten brick gates served as entrances and passages through the defensive lines and were equipped with movable bridges . [ 2 ] The Königsberg fortifications became largely obsolete even before the completion of construction due to the rapid development of artillery . [ 2 ] Following the military setbacks of Nazi Germany , however, they became strategically important again (particularly during the East Prussian offensive in 1945). The Astronomic Bastion (German: Bastion Sternwarte ) was erected in 1855–1860 and received its name from its proximity to the Königsberg Observatory . [ 3 ] The bastion's wall was demolished in 1910. [ 3 ] Subsequently, the bastion was used to accommodate the Russian OMON for some time. [ 4 ] Later the structure was bought by the Russian MP Asanbuba Niudyurbegov. [ 4 ] The Bronsart Fort ( German : Bronsart bei Mandein ) was constructed in 1875–80 and is named after the Prussian general Paul Bronsart von Schellendorff . It did not suffer much during military actions, remaining in quite good condition. [ 3 ] The Dohna Tower ( German : Dohnaturm ) was built in 1858 in Neo-Romanesque style [ 1 ] and is named after the Prussian politician Friedrich Ferdinand Alexander zu Dohna-Schlobitten . Following its restoration after World War II, the tower came to house the Amber Museum. [ citation needed ] The King Friedrich Wilhelm I Fort, originally known as Quednau, is the largest fort of Königsberg. [ 3 ] The two-storeyed Gneisenau Fort was named after the Prussian field marshal August von Gneisenau . It was heavily damaged by Soviet troops during World War II. [ 3 ] The erection of the Grolman Bastion, named after the Prussian general Karl von Grolman , was finished in 1851. It is strengthened with casemates and caponiers inside its wall and consists of the lesser Oberteich and Kupferteich Bastions. [ citation needed ] The construction of the stone Pillau Citadel started at the beginning of the 17th century. The citadel gained its final appearance by the beginning of the 18th century. [ citation needed ] The large Stein Fort was named after the Prussian statesman Baron vom Stein . It remained in better condition than some other fortifications because it lay somewhat away from the places of the main Soviet attacks during World War II. [ citation needed ] The Barnekow Fort is one of the small forts, named after the Prussian general Albert von Barnekow . [ 3 ]
https://en.wikipedia.org/wiki/Königsberg_fortifications
The Körber European Science Prize is a science and technology award , presented annually by the Körber Foundation in Hamburg , honoring outstanding scientists working in Europe for their promising research projects. The prize is endowed with 1 million euros (until 2018: 750,000 euros) and promotes research projects in the life sciences and physical sciences . [ 1 ] The prize was initiated by the entrepreneur Kurt A. Körber with the help of Reimar Lüst , the president of the Max Planck Society . The first award was in 1985. At first, European research teams were honored, but since 2005, only individuals qualify. [ 2 ] Candidates for the prize need not be from Europe , but they must be living in Europe. [ 3 ] Renowned scientists from all over Europe, grouped into two Search Committees, select promising candidates. The awards are annual and alternate between the life and physical sciences. Those who are shortlisted are then asked to submit a detailed proposal for a research project, which is then judged in two rounds of assessment by the Search Committee. The work of the Search Committee is supported by international experts. A maximum of five candidates are subsequently recommended to the Trustee Committee which, based on a summary of expert assessments, previous publications, and scientific career history, decides on the new prizewinner. Personal applications are not allowed. All prizewinners receive a certificate and one million euros (until 2018: 750,000 euros) in prize money. The prizewinners can keep 10 percent of the money for themselves and must spend the rest on research in Europe within three to five years. Aside from these restrictions they alone can decide how to use the money. [ 3 ] The prize is presented every year in the Great Hall of Hamburg City Hall in the presence of the Mayor of the Free and Hanseatic City of Hamburg and 600 guests from science, industry, politics, and society.
https://en.wikipedia.org/wiki/Körber_European_Science_Prize
Küpfmüller's uncertainty principle, formulated by Karl Küpfmüller in 1924, states that the product of the rise time of a bandlimited signal and its bandwidth is a constant: [ 1 ] Δ t ⋅ Δ f = k {\displaystyle \Delta t\cdot \Delta f=k} with k {\displaystyle k} either 1 {\displaystyle 1} or 1 2 {\displaystyle {\frac {1}{2}}} . A bandlimited signal u ( t ) {\displaystyle u(t)} with Fourier transform u ^ ( f ) {\displaystyle {\hat {u}}(f)} is given by the multiplication of any signal u ^ _ ( f ) {\displaystyle {\underline {\hat {u}}}(f)} with a rectangular function of width Δ f {\displaystyle \Delta f} in the frequency domain, where the rectangular function is g ^ ( f ) = 1 {\displaystyle {\hat {g}}(f)=1} for | f | ≤ Δ f / 2 {\displaystyle |f|\leq \Delta f/2} and 0 {\displaystyle 0} otherwise. This multiplication with a rectangular function acts as a bandlimiting filter and results in u ^ ( f ) = g ^ ( f ) u ^ _ ( f ) =: u ^ _ ( f ) | Δ f . {\displaystyle {\hat {u}}(f)={\hat {g}}(f){\underline {\hat {u}}}(f)=:{{\underline {\hat {u}}}(f)}{{\Big |}_{\Delta f}}.} Applying the convolution theorem , we also know u ( t ) = ( g ∗ u _ ) ( t ) {\displaystyle u(t)=(g*{\underline {u}})(t)} . Since the Fourier transform of a rectangular function is a sinc function si {\displaystyle \operatorname {si} } and vice versa, it follows directly by definition that g ( t ) = Δ f si ⁡ ( π Δ f t ) {\displaystyle g(t)=\Delta f\,\operatorname {si} (\pi \,\Delta f\,t)} . Now the first root g ( Δ t ) = 0 {\displaystyle g(\Delta t)=0} is at Δ t = ± 1 Δ f {\displaystyle \Delta t=\pm {\frac {1}{\Delta f}}} . This is the rise time Δ t {\displaystyle \Delta t} of the pulse g ( t ) {\displaystyle g(t)} . Since the rise time influences how fast g ( t ) {\displaystyle g(t)} can go from 0 to its maximum, it affects how fast the bandlimited signal transitions from 0 to its maximal value. We have the important finding that the rise time is inversely related to the frequency bandwidth, Δ t ⋅ Δ f ≥ 1 {\displaystyle \Delta t\cdot \Delta f\geq 1} : the lower the rise time, the wider the frequency bandwidth needs to be. Equality is given as long as Δ t {\displaystyle \Delta t} is finite. Since a real signal has both positive and negative frequencies of the same frequency band, Δ f {\displaystyle \Delta f} becomes 2 ⋅ Δ f {\displaystyle 2\cdot \Delta f} , which leads to k = 1 2 {\displaystyle k={\frac {1}{2}}} instead of k = 1 {\displaystyle k=1} .
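The relation can be checked numerically. The sketch below (with illustrative bandwidth values; note that NumPy's sinc uses the convention sin(πx)/(πx)) locates the first zero of the ideal lowpass impulse response and confirms Δt·Δf ≈ 1.

```python
# Numerical sketch of the rise-time/bandwidth trade-off: the impulse
# response of an ideal lowpass of bandwidth df is a sinc whose first zero
# (taken here as the rise time) lies at 1/df, so dt * df = 1.
import numpy as np

def first_zero_of_sinc(df, t_max=1.0, n=2_000_000):
    """Locate the first positive zero of g(t) = df * sinc(df * t)."""
    t = np.linspace(1e-9, t_max, n)
    g = df * np.sinc(df * t)          # numpy sinc(x) = sin(pi x)/(pi x)
    sign_change = np.nonzero(np.diff(np.sign(g)))[0][0]
    return t[sign_change]

for df in (10.0, 100.0, 1000.0):      # bandwidths in Hz (illustrative)
    dt = first_zero_of_sinc(df)
    print(f"df = {df:7.1f} Hz -> rise time dt ~ {dt:.6f} s, dt*df ~ {dt*df:.3f}")
```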
https://en.wikipedia.org/wiki/Küpfmüller's_uncertainty_principle
In fluid dynamics , the Küssner effect describes the unsteady aerodynamic forces on an airfoil or hydrofoil caused by encountering a transverse gust . This is directly related to the Küssner function , used in describing the effect. Both the effect and the function are named after Hans Georg Küssner (1900–1984), a German aerodynamics engineer. [ 1 ] Küssner derived an approximate model for an airfoil encountering a sudden step-like change in the transverse gust velocity ; or, equivalently, as seen from a frame of reference moving with the airfoil, a sudden change in the angle of attack . The airfoil is modelled as a flat plate in a potential flow , moving with constant horizontal velocity. [ 2 ] For this case he derived the impulse response function (known as the Küssner function [ 3 ] ) needed to compute the unsteady lift and moment exerted by the air on the airfoil.
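The Küssner function has no simple closed form, but a two-term exponential fit, ψ(s) ≈ 1 − 0.5e^(−0.13s) − 0.5e^(−s) with s the distance travelled in half-chords, is commonly quoted in the aeroelasticity literature. Below is a small sketch under that assumed approximation (the gust ratio is likewise an arbitrary example) of the gradual lift build-up after entering a sharp-edged gust.

```python
# Lift build-up on an airfoil entering a sharp-edged transverse gust, using a
# commonly quoted exponential approximation of the Kuessner function
# (psi(s) ~ 1 - 0.5 exp(-0.13 s) - 0.5 exp(-s), s in half-chords travelled).
# The gust ratio w0/U and the use of this particular fit are assumptions.
import numpy as np

def kuessner_psi(s):
    """Approximate Kuessner function: fraction of the final gust lift."""
    return 1.0 - 0.5 * np.exp(-0.13 * s) - 0.5 * np.exp(-s)

w0_over_U = 0.05                       # gust velocity / flight speed (assumed)
s = np.linspace(0.0, 20.0, 6)          # reduced time in half-chords
cl = 2.0 * np.pi * w0_over_U * kuessner_psi(s)  # thin-airfoil steady limit
for si, cli in zip(s, cl):
    print(f"s = {si:5.1f} half-chords: CL = {cli:.4f}")
```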
https://en.wikipedia.org/wiki/Küssner_effect
A kīpuka is an area of land surrounded by one or more younger lava flows. A kīpuka forms when lava flows on either side of a hill , ridge , or older lava dome as it moves downslope or spreads from its source. Older and more weathered than their surroundings, kīpukas often appear to be like islands within a sea of lava flows. They are often covered with soil and late ecological successional vegetation that provide visual contrast as well as habitat for animals in an otherwise inhospitable environment. In volcanic landscapes, kīpukas play an important role as biological reservoirs or refugia for plants and animals, from which the covered land can be recolonized. [ 1 ] Kīpuka, along with ʻaʻā and pāhoehoe , are Hawaiian words related to volcanology that have entered the lexicon of geology . Descriptive proverbs and poetical sayings in Hawaiian oral tradition also use the word, in an allusive sense, to mean a place where life or culture endures, regardless of any encroachment or interference. [ 2 ] [ 3 ] By extension, from the appearance of island "patches" within a highly contrasted background, any similarly noticeable variation or change of form, such as an opening in a forest, or a clear place in a congested setting, may be colloquially called kīpuka . [ 4 ] Kīpukas provide useful study sites for ecological research because they facilitate replication ; multiple kīpukas in a system (isolated by the same lava flow) will tend to have uniform substrate age and successional characteristics, but are often isolated enough from their neighbors to provide meaningful, comparable differences in size, invasion , etc. They are also receptive to experimental treatments . Kīpukas along Saddle Road on Hawaiʻi have served as the natural laboratory for a variety of studies, examining ecological principles like island biogeography , [ 5 ] food web control , [ 6 ] and biotic resistance to invasiveness. [ 7 ] In addition, Drosophila silvestris populations inhabit kīpukas, making kīpukas useful for understanding the fragmented population structure and reproductive isolation of this fly species. [ 8 ]
https://en.wikipedia.org/wiki/Kīpuka
Kőnig's lemma or Kőnig's infinity lemma is a theorem in graph theory due to the Hungarian mathematician Dénes Kőnig who published it in 1927. [ 1 ] It gives a sufficient condition for an infinite graph to have an infinitely long path. The computability aspects of this theorem have been thoroughly investigated by researchers in mathematical logic , especially in computability theory . This theorem also has important roles in constructive mathematics and proof theory . Let G {\displaystyle G} be a connected , locally finite , infinite graph . This means that every two vertices can be connected by a finite path, each vertex is adjacent to only finitely many other vertices, and the graph has infinitely many vertices. Then G {\displaystyle G} contains a ray : a simple path (a path with no repeated vertices) that starts at one vertex and continues from it through infinitely many vertices. Another way of stating the theorem is: "If the human race never dies out, somebody now living has a line of descendants that will never die out". [ 2 ] A useful special case of the lemma is that every infinite tree contains either a vertex of infinite degree or an infinite simple path. If it is locally finite, it meets the conditions of the lemma and has a ray, and if it is not locally finite then it has an infinite-degree vertex. The construction of a ray, in a graph G {\displaystyle G} that meets the conditions of the lemma, can be performed step by step, maintaining at each step a finite path that can be extended to reach infinitely many vertices (not necessarily all along the same path as each other). To begin this process, start with any single vertex v 1 {\displaystyle v_{1}} . This vertex can be thought of as a path of length zero, consisting of one vertex and no edges. By the assumptions of the lemma, each of the infinitely many vertices of G {\displaystyle G} can be reached by a simple path that starts from v 1 {\displaystyle v_{1}} . Next, as long as the current path ends at some vertex v i {\displaystyle v_{i}} , consider the infinitely many vertices that can be reached by simple paths that extend the current path, and for each of these vertices construct a simple path to it that extends the current path. There are infinitely many of these extended paths, each of which connects from v i {\displaystyle v_{i}} to one of its neighbors, but v i {\displaystyle v_{i}} has only finitely many neighbors. Therefore, it follows by a form of the pigeonhole principle that at least one of these neighbors is used as the next step on infinitely many of these extended paths. Let v i + 1 {\displaystyle v_{i+1}} be such a neighbor, and extend the current path by one edge, the edge from v i {\displaystyle v_{i}} to v i + 1 {\displaystyle v_{i+1}} . This extension preserves the property that infinitely many vertices can be reached by simple paths that extend the current path. Repeating this process for extending the path produces an infinite sequence of finite simple paths, each extending the previous path in the sequence by one more edge. The union of all of these paths is the ray whose existence was promised by the lemma. The computability aspects of Kőnig's lemma have been thoroughly investigated. For this purpose it is convenient to state Kőnig's lemma in the form that any infinite finitely branching subtree of ω < ω {\displaystyle \omega ^{<\omega }} has an infinite path. 
Here ω {\displaystyle \omega } denotes the set of natural numbers (thought of as an ordinal number ) and ω < ω {\displaystyle \omega ^{<\omega }} the tree whose nodes are all finite sequences of natural numbers, where the parent of a node is obtained by removing the last element from a sequence. Each finite sequence can be identified with a partial function from ω {\displaystyle \omega } to itself, and each infinite path can be identified with a total function. This allows for an analysis using the techniques of computability theory. A subtree of ω < ω {\displaystyle \omega ^{<\omega }} in which each sequence has only finitely many immediate extensions (that is, the tree has finite degree when viewed as a graph) is called finitely branching . Not every infinite subtree of ω < ω {\displaystyle \omega ^{<\omega }} has an infinite path, but Kőnig's lemma shows that any finitely branching infinite subtree must have such a path. For any subtree T {\displaystyle T} of ω < ω {\displaystyle \omega ^{<\omega }} the notation Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} denotes the set of nodes of T {\displaystyle T} through which there is an infinite path. Even when T {\displaystyle T} is computable the set Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} may not be computable. Whenever a subtree T {\displaystyle T} of ω < ω {\displaystyle \omega ^{<\omega }} has an infinite path, the path is computable from Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} , step by step, greedily choosing a successor in Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} at each step. The restriction to Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} ensures that this greedy process cannot get stuck. There exist non-finitely branching computable subtrees of ω < ω {\displaystyle \omega ^{<\omega }} that have no arithmetical path, and indeed no hyperarithmetical path. [ 3 ] However, every computable subtree of ω < ω {\displaystyle \omega ^{<\omega }} with a path must have a path computable from Kleene's O , the canonical Π 1 1 {\displaystyle \Pi _{1}^{1}} complete set. This is because the set Ext ⁡ ( T ) {\displaystyle \operatorname {Ext} (T)} is always Σ 1 1 {\displaystyle \Sigma _{1}^{1}} (for the meaning of this notation, see analytical hierarchy ) when T {\displaystyle T} is computable. A finer analysis has been conducted for computably bounded trees. A subtree of ω < ω {\displaystyle \omega ^{<\omega }} is called computably bounded or recursively bounded if there is a computable function f {\displaystyle f} from ω {\displaystyle \omega } to ω {\displaystyle \omega } such that for every sequence in the tree and every natural number n {\displaystyle n} , the n {\displaystyle n} th element of the sequence is at most f ( n ) {\displaystyle f(n)} . Thus f {\displaystyle f} gives a bound for how "wide" the tree is. The following basis theorems apply to infinite, computably bounded, computable subtrees of ω < ω {\displaystyle \omega ^{<\omega }} : any such tree has a path computable from 0 ′ {\displaystyle 0'} , the Turing jump of the empty set (the Kreisel basis theorem); any such tree has a path of low degree (the low basis theorem of Jockusch and Soare); and any such tree has a path of hyperimmune-free degree (the hyperimmune-free basis theorem). A weak form of Kőnig's lemma, which states that every infinite binary tree has an infinite branch, is used to define the subsystem WKL 0 of second-order arithmetic . This subsystem has an important role in reverse mathematics . Here a binary tree is one in which every term of every sequence in the tree is 0 or 1, which is to say the tree is computably bounded via the constant function 2. The full form of Kőnig's lemma is not provable in WKL 0 , but is equivalent to the stronger subsystem ACA 0 .
The proof given above is not generally considered to be constructive , because at each step it uses a proof by contradiction to establish that there exists an adjacent vertex from which infinitely many other vertices can be reached, and because of the reliance on a weak form of the axiom of choice . Facts about the computational aspects of the lemma suggest that no proof can be given that would be considered constructive by the main schools of constructive mathematics . The fan theorem of L. E. J. Brouwer ( 1927 ) is, from a classical point of view, the contrapositive of a form of Kőnig's lemma. A subset S of { 0 , 1 } < ω {\displaystyle \{0,1\}^{<\omega }} is called a bar if any function from ω {\displaystyle \omega } to the set { 0 , 1 } {\displaystyle \{0,1\}} has some initial segment in S . A bar is detachable if every sequence is either in the bar or not in the bar (this assumption is required because the theorem is ordinarily considered in situations where the law of the excluded middle is not assumed). A bar is uniform if there is some number N {\displaystyle N} so that any function from ω {\displaystyle \omega } to { 0 , 1 } {\displaystyle \{0,1\}} has an initial segment in the bar of length no more than N {\displaystyle N} . Brouwer's fan theorem says that any detachable bar is uniform. This can be proven in a classical setting by considering the bar as an open covering of the compact topological space { 0 , 1 } ω {\displaystyle \{0,1\}^{\omega }} . Each sequence in the bar represents a basic open set of this space, and these basic open sets cover the space by assumption. By compactness, this cover has a finite subcover. The N of the fan theorem can be taken to be the length of the longest sequence whose basic open set is in the finite subcover. This topological proof can be used in classical mathematics to show that the following form of Kőnig's lemma holds: for any natural number k , any infinite subtree of the tree { 0 , … , k } < ω {\displaystyle \{0,\ldots ,k\}^{<\omega }} has an infinite path. Kőnig's lemma may be considered to be a choice principle; the first proof above illustrates the relationship between the lemma and the axiom of dependent choice . At each step of the induction, a vertex with a particular property must be selected. Although it is proved that at least one appropriate vertex exists, if there is more than one suitable vertex there may be no canonical choice. In fact, the full strength of the axiom of dependent choice is not needed; as described below, the axiom of countable choice suffices. If the graph is countable, the vertices are well-ordered and one can canonically choose the smallest suitable vertex. In this case, Kőnig's lemma is provable in second-order arithmetic with arithmetical comprehension , and, a fortiori, in ZF set theory (without choice). Kőnig's lemma is essentially the restriction of the axiom of dependent choice to entire relations R {\displaystyle R} such that for each x {\displaystyle x} there are only finitely many z {\displaystyle z} such that x R z {\displaystyle xRz} . Although the axiom of choice is, in general, stronger than the principle of dependent choice, this restriction of dependent choice is equivalent to a restriction of the axiom of choice. 
In particular, when the branching at each node is done on a finite subset of an arbitrary set not assumed to be countable, the form of Kőnig's lemma that says "Every infinite finitely branching tree has an infinite path" is equivalent to the principle that every countable set of finite sets has a choice function, that is to say, the axiom of countable choice for finite sets. [ 4 ] This form of the axiom of choice (and hence of Kőnig's lemma) is not provable in ZF set theory. In the category of sets , the inverse limit of any inverse system of non-empty finite sets is non-empty. This may be seen as a generalization of Kőnig's lemma and can be proved with Tychonoff's theorem , viewing the finite sets as compact discrete spaces, and then using the finite intersection property characterization of compactness.
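The greedy construction in the proof of the lemma can be made concrete for trees presented by a finitely branching successor function. In the following Python sketch all names are hypothetical, and, since membership in Ext(T) is not computable in general, a bounded depth search stands in for the test "infinitely many vertices are reachable from this child".

```python
# Sketch of the greedy ray construction from the proof above, for a tree
# presented by a finitely branching `children` function. Deciding whether a
# child has infinitely many descendants (the set Ext(T) discussed above) is
# not computable in general, so as an illustration we approximate it by
# "has a descendant at depth `horizon`". All names here are hypothetical.

def children(node):
    # Example tree on tuples: even-sum nodes branch, odd-sum nodes die out.
    # This keeps the tree infinite but finitely branching.
    return [node + (b,) for b in (0, 1)] if sum(node) % 2 == 0 else []

def reaches_depth(node, depth):
    """Bounded stand-in for 'infinitely many vertices below node'."""
    if depth == 0:
        return True
    return any(reaches_depth(c, depth - 1) for c in children(node))

def greedy_ray(root=(), steps=10, horizon=25):
    path = [root]
    for _ in range(steps):
        # Pigeonhole step: some child still leads arbitrarily deep.
        nxt = next(c for c in children(path[-1])
                   if reaches_depth(c, horizon))
        path.append(nxt)
    return path

print(greedy_ray())   # first 10 vertices along an infinite ray
```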
https://en.wikipedia.org/wiki/Kőnig's_lemma
In the mathematical area of graph theory , Kőnig's theorem , proved by Dénes Kőnig ( 1931 ), describes an equivalence between the maximum matching problem and the minimum vertex cover problem in bipartite graphs . It was discovered independently, also in 1931, by Jenő Egerváry in the more general case of weighted graphs . A vertex cover in a graph is a set of vertices that includes at least one endpoint of every edge, and a vertex cover is minimum if no other vertex cover has fewer vertices. [ 1 ] A matching in a graph is a set of edges no two of which share an endpoint, and a matching is maximum if no other matching has more edges. [ 2 ] It is obvious from the definition that any vertex-cover set must be at least as large as any matching set (since for every edge in the matching, at least one vertex is needed in the cover). In particular, the minimum vertex cover set is at least as large as the maximum matching set. Kőnig's theorem states that, in any bipartite graph , the minimum vertex cover set and the maximum matching set have in fact the same size. [ 3 ] In any bipartite graph , the number of edges in a maximum matching equals the number of vertices in a minimum vertex cover . [ 3 ] The bipartite graph shown in the above illustration has 14 vertices; a matching with six edges is shown in blue, and a vertex cover with six vertices is shown in red. There can be no smaller vertex cover, because any vertex cover has to include at least one endpoint of each matched edge (as well as of every other edge), so this is a minimum vertex cover. Similarly, there can be no larger matching, because any matched edge has to include at least one endpoint in the vertex cover, so this is a maximum matching. Kőnig's theorem states that the equality between the sizes of the matching and the cover (in this example, both numbers are six) applies more generally to any bipartite graph. The following proof provides a way of constructing a minimum vertex cover from a maximum matching. Let G = ( V , E ) {\displaystyle G=(V,E)} be a bipartite graph and let A , B {\displaystyle A,B} be the two parts of the vertex set V {\displaystyle V} . Suppose that M {\displaystyle M} is a maximum matching for G {\displaystyle G} . Construct the flow network G ∞ ′ {\displaystyle G'_{\infty }} derived from G {\displaystyle G} in such a way that there are edges of capacity 1 {\displaystyle 1} from the source s {\displaystyle s} to every vertex a ∈ A {\displaystyle a\in A} and from every vertex b ∈ B {\displaystyle b\in B} to the sink t {\displaystyle t} , and of capacity + ∞ {\displaystyle +\infty } from a {\displaystyle a} to b {\displaystyle b} for any ( a , b ) ∈ E {\displaystyle (a,b)\in E} . The size | M | {\displaystyle |M|} of the maximum matching in G {\displaystyle G} is the size of a maximum flow in G ∞ ′ {\displaystyle G'_{\infty }} , which, in turn, is the size of a minimum cut in the network G ∞ ′ {\displaystyle G'_{\infty }} , as follows from the max-flow min-cut theorem . Let ( S , T ) {\displaystyle (S,T)} be a minimum cut. Let A = A S ∪ A T {\displaystyle A=A_{S}\cup A_{T}} and B = B S ∪ B T {\displaystyle B=B_{S}\cup B_{T}} , such that A S , B S ⊂ S {\displaystyle A_{S},B_{S}\subset S} and A T , B T ⊂ T {\displaystyle A_{T},B_{T}\subset T} .
Then the minimum cut is composed only of edges going from s {\displaystyle s} to A T {\displaystyle A_{T}} or from B S {\displaystyle B_{S}} to t {\displaystyle t} , as any edge from A S {\displaystyle A_{S}} to B T {\displaystyle B_{T}} would make the size of the cut infinite. Therefore, the size of the minimum cut is equal to | A T | + | B S | {\displaystyle |A_{T}|+|B_{S}|} . On the other hand, A T ∪ B S {\displaystyle A_{T}\cup B_{S}} is a vertex cover, as any edge that is not incident to vertices from A T {\displaystyle A_{T}} and B S {\displaystyle B_{S}} must be incident to a pair of vertices from A S {\displaystyle A_{S}} and B T {\displaystyle B_{T}} , which would contradict the fact that there are no edges between A S {\displaystyle A_{S}} and B T {\displaystyle B_{T}} . Thus, A T ∪ B S {\displaystyle A_{T}\cup B_{S}} is a minimum vertex cover of G {\displaystyle G} . [ 4 ] No vertex in a vertex cover can cover more than one edge of M {\displaystyle M} (because two matched edges sharing an endpoint would prevent M {\displaystyle M} from being a matching in the first place), so if a vertex cover with | M | {\displaystyle |M|} vertices can be constructed, it must be a minimum cover. [ 5 ] To construct such a cover, let U {\displaystyle U} be the set of unmatched vertices in A {\displaystyle A} (possibly empty), and let Z {\displaystyle Z} be the set of vertices that are either in U {\displaystyle U} or are connected to U {\displaystyle U} by alternating paths (paths that alternate between edges that are in the matching and edges that are not in the matching). Let K = ( A ∖ Z ) ∪ ( B ∩ Z ) . {\displaystyle K=(A\setminus Z)\cup (B\cap Z).} Every edge e {\displaystyle e} in E {\displaystyle E} either belongs to an alternating path (and has a right endpoint in K {\displaystyle K} ), or it has a left endpoint in K {\displaystyle K} . For, if e {\displaystyle e} is matched but not in an alternating path, then its left endpoint cannot be in an alternating path (because two matched edges can not share a vertex) and thus belongs to A ∖ Z {\displaystyle A\setminus Z} . Alternatively, if e {\displaystyle e} is unmatched but not in an alternating path, then its left endpoint cannot be in an alternating path, for such a path could be extended by adding e {\displaystyle e} to it. Thus, K {\displaystyle K} forms a vertex cover. [ 6 ] Additionally, every vertex in K {\displaystyle K} is an endpoint of a matched edge. For, every vertex in A ∖ Z {\displaystyle A\setminus Z} is matched because Z {\displaystyle Z} is a superset of U {\displaystyle U} , the set of unmatched left vertices. And every vertex in B ∩ Z {\displaystyle B\cap Z} must also be matched, for if there existed an alternating path to an unmatched vertex then changing the matching by removing the matched edges from this path and adding the unmatched edges in their place would increase the size of the matching. However, no matched edge can have both of its endpoints in K {\displaystyle K} . Thus, K {\displaystyle K} is a vertex cover of cardinality equal to | M | {\displaystyle |M|} , and must be a minimum vertex cover. [ 6 ] To explain this proof, we first have to extend the notion of a matching to that of a fractional matching - an assignment of a weight in [0,1] to each edge, such that the sum of weights near each vertex is at most 1 (an integral matching is a special case of a fractional matching in which the weights are in {0,1}).
Similarly we define a fractional vertex-cover - an assignment of a non-negative weight to each vertex, such that the sum of weights in each edge is at least 1 (an integral vertex-cover is a special case of a fractional vertex-cover in which the weights are in {0,1}). The maximum fractional matching size in a graph G = ( V , E ) {\displaystyle G=(V,E)} is the solution of the following linear program : maximize 1 E · x subject to x ≥ 0 E and A G · x ≤ 1 V , where x is a vector of size | E | in which each element represents the weight of an edge in the fractional matching. 1 E is a vector of | E | ones, so the objective is the size of the matching. 0 E is a vector of | E | zeros, so the first constraint states that the weights are non-negative. 1 V is a vector of | V | ones and A G is the incidence matrix of G, so the second constraint states that the sum of weights near each vertex is at most 1. Similarly, the minimum fractional vertex-cover size in G = ( V , E ) {\displaystyle G=(V,E)} is the solution of the following LP: minimize 1 V · y subject to y ≥ 0 V and A G T · y ≥ 1 E , where y is a vector of size |V| in which each element represents the weight of a vertex in the fractional cover. Here, the objective is the size of the cover, the first constraint represents the non-negativity of the weights, and the second constraint represents the requirement that the sum of weights near each edge must be at least 1. Now, the minimum fractional cover LP is exactly the dual linear program of the maximum fractional matching LP. Therefore, by the LP duality theorem, both programs have the same solution. This fact is true not only in bipartite graphs but in arbitrary graphs: In any graph, the largest size of a fractional matching equals the smallest size of a fractional vertex cover. What makes bipartite graphs special is that, in bipartite graphs, both these linear programs have optimal solutions in which all variable values are integers. This follows from the fact that in the fractional matching polytope of a bipartite graph, all extreme points have only integer coordinates, and the same is true for the fractional vertex-cover polytope. Therefore the above theorem implies: [ 7 ] In any bipartite graph, the largest size of a matching equals the smallest size of a vertex cover. The constructive proof described above provides an algorithm for producing a minimum vertex cover given a maximum matching. Thus, the Hopcroft–Karp algorithm for finding maximum matchings in bipartite graphs may also be used to solve the vertex cover problem efficiently in these graphs. [ 8 ] Despite the equivalence of the two problems from the point of view of exact solutions, they are not equivalent for approximation algorithms . Bipartite maximum matchings can be approximated arbitrarily accurately in constant time by distributed algorithms ; in contrast, approximating the minimum vertex cover of a bipartite graph requires at least logarithmic time. [ 9 ] In the graph shown in the introduction, take L {\displaystyle L} to be the set of vertices in the bottom layer of the diagram and R {\displaystyle R} to be the set of vertices in the top layer of the diagram. From left to right, label the vertices in the bottom layer with the numbers 1, …, 7 and label the vertices in the top layer with the numbers 8, …, 14. The set U {\displaystyle U} of unmatched vertices from L {\displaystyle L} is {1}.
The alternating paths starting from U are 1–10–3–13–7, 1–10–3–11–5–13–7, 1–11–5–13–7, 1–11–5–10–3–13–7, and all subpaths of these starting from 1. The set Z is therefore {1,3,5,7,10,11,13}, resulting in L ∖ Z = {2,4,6}, R ∩ Z = {10,11,13} and the minimum vertex cover K = {2,4,6,10,11,13}. For graphs that are not bipartite, the minimum vertex cover may be larger than the maximum matching. Moreover, the two problems are very different in complexity: maximum matchings can be found in polynomial time for any graph, while minimum vertex cover is NP-complete. The complement of a vertex cover in any graph is an independent set, so a minimum vertex cover is complementary to a maximum independent set; finding maximum independent sets is another NP-complete problem. The equivalence between matching and covering articulated in Kőnig's theorem allows minimum vertex covers and maximum independent sets to be computed in polynomial time for bipartite graphs, despite the NP-completeness of these problems for more general graph families. [ 10 ] Kőnig's theorem is named after the Hungarian mathematician Dénes Kőnig. Kőnig had announced in 1914 and published in 1916 the results that every regular bipartite graph has a perfect matching, [ 11 ] and more generally that the chromatic index of any bipartite graph (that is, the minimum number of matchings into which it can be partitioned) equals its maximum degree [ 12 ] – the latter statement is known as Kőnig's line coloring theorem. [ 13 ] However, Bondy & Murty (1976) attribute Kőnig's theorem itself to a later paper of Kőnig (1931). According to Biggs, Lloyd & Wilson (1976), Kőnig attributed the idea of studying matchings in bipartite graphs to his father, mathematician Gyula Kőnig. In Hungarian, Kőnig's name has a double acute accent, but his theorem is sometimes spelled (incorrectly) in German characters, with an umlaut. Kőnig's theorem is equivalent to many other min-max theorems in graph theory and combinatorics, such as Hall's marriage theorem and Dilworth's theorem. Since bipartite matching is a special case of maximum flow, the theorem also results from the max-flow min-cut theorem. [ 14 ]
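As promised above, the matching–cover equivalence can be checked mechanically. The sketch below uses the NetworkX library on a small hypothetical graph (the edge list is illustrative, not the graph of the diagram): Hopcroft–Karp produces a maximum matching, and to_vertex_cover applies Kőnig's construction to it.

```python
# Machine check of Kőnig's theorem on a hypothetical bipartite graph.
import networkx as nx
from networkx.algorithms import bipartite

G = nx.Graph([(1, 8), (2, 8), (3, 9), (3, 10)])  # left: 1, 2, 3; right: 8, 9, 10
left = {1, 2, 3}

matching = bipartite.hopcroft_karp_matching(G, top_nodes=left)
cover = bipartite.to_vertex_cover(G, matching, top_nodes=left)

# The matching dict lists each matched vertex twice (once per direction).
print(len(matching) // 2, len(cover))  # 2 2: matching and cover sizes agree
```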
A graph is said to be perfect if, in every induced subgraph, the chromatic number equals the size of the largest clique. Any bipartite graph is perfect, [ 15 ] because each of its subgraphs is either bipartite or independent; in a bipartite graph that is not independent the chromatic number and the size of the largest clique are both two, while in an independent set the chromatic number and clique number are both one. A graph is perfect if and only if its complement is perfect, [ 16 ] and Kőnig's theorem can be seen as equivalent to the statement that the complement of a bipartite graph is perfect. For, each color class in a coloring of the complement of a bipartite graph is of size at most 2, and the classes of size 2 form a matching; a clique in the complement of a graph G is an independent set in G; and, as we have already described, an independent set in a bipartite graph G is a complement of a vertex cover in G. Thus, any matching M in a bipartite graph G with n vertices corresponds to a coloring of the complement of G with n−|M| colors, which by the perfection of complements of bipartite graphs corresponds to an independent set in G with n−|M| vertices, which corresponds to a vertex cover of G with |M| vertices. Conversely, Kőnig's theorem proves the perfection of the complements of bipartite graphs, a result proven in a more explicit form by Gallai (1958). One can also connect Kőnig's line coloring theorem to a different class of perfect graphs, the line graphs of bipartite graphs. If G is a graph, the line graph L(G) has a vertex for each edge of G, and an edge for each pair of adjacent edges in G. Thus, the chromatic number of L(G) equals the chromatic index of G. If G is bipartite, the cliques in L(G) are exactly the sets of edges in G sharing a common endpoint. Now Kőnig's line coloring theorem, stating that the chromatic index equals the maximum vertex degree in any bipartite graph, can be interpreted as stating that the line graph of a bipartite graph is perfect. [ 17 ] Since line graphs of bipartite graphs are perfect, the complements of line graphs of bipartite graphs are also perfect. A clique in the complement of the line graph of G is just a matching in G. And a coloring in the complement of the line graph of G, when G is bipartite, is a partition of the edges of G into subsets of edges sharing a common endpoint; the endpoints shared by each of these subsets form a vertex cover for G. Therefore, Kőnig's theorem itself can also be interpreted as stating that the complements of line graphs of bipartite graphs are perfect. [ 17 ] Kőnig's theorem can be extended to weighted graphs. Jenő Egerváry (1931) considered graphs in which each edge e has a non-negative integer weight w_e. The weight vector is denoted by w. The w-weight of a matching is the sum of weights of the edges participating in the matching. A w-vertex-cover is a multiset of vertices ("multiset" means that each vertex may appear several times), in which each edge e is adjacent to at least w_e vertices. Egerváry's theorem says: In any edge-weighted bipartite graph, the maximum w-weight of a matching equals the smallest number of vertices in a w-vertex-cover. The maximum w-weight of a fractional matching is given by the LP: [ 18 ]

Maximize w · x
subject to: x ≥ 0_E
and: A_G · x ≤ 1_V.

And the minimum number of vertices in a fractional w-vertex-cover is given by the dual LP:

Minimize 1_V · y
subject to: y ≥ 0_V
and: A_G^T · y ≥ w.

As in the proof of Kőnig's theorem, the LP duality theorem implies that the optimal values are equal (for any graph), and the fact that the graph is bipartite implies that these programs have optimal solutions in which all values are integers. One can consider a graph in which each vertex v has a non-negative integer weight b_v. The weight vector is denoted by b. The b-weight of a vertex-cover is the sum of b_v for all v in the cover. A b-matching is an assignment of a non-negative integral weight to each edge, such that the sum of weights of edges adjacent to any vertex v is at most b_v. Egerváry's theorem can be extended, using a similar argument, to graphs that have both edge-weights w and vertex-weights b: [ 18 ] In any edge-weighted vertex-weighted bipartite graph, the maximum w-weight of a b-matching equals the minimum b-weight of vertices in a w-vertex-cover.
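The duality argument above can also be checked numerically. The sketch below uses SciPy's linprog on a small hypothetical unweighted graph (the weighted case is the same with w in place of the all-ones objective); since linprog minimizes, the matching LP is solved by negating its objective.

```python
# Numeric check: max fractional matching = min fractional vertex cover.
import numpy as np
from scipy.optimize import linprog

# Hypothetical bipartite graph: left vertices 0-2, right vertices 3-5.
edges = [(0, 3), (0, 4), (1, 4), (2, 4), (2, 5)]
n, m = 6, len(edges)

A = np.zeros((n, m))               # incidence matrix: rows = vertices, cols = edges
for j, (u, v) in enumerate(edges):
    A[u, j] = A[v, j] = 1

# Matching LP: maximize 1·x s.t. A x <= 1, x >= 0 (objective negated for minimization).
match = linprog(-np.ones(m), A_ub=A, b_ub=np.ones(n),
                bounds=(0, None), method="highs")
# Cover LP (the dual): minimize 1·y s.t. A^T y >= 1, y >= 0.
cover = linprog(np.ones(n), A_ub=-A.T, b_ub=-np.ones(m),
                bounds=(0, None), method="highs")

print(-match.fun, cover.fun)       # 3.0 3.0: the optima coincide, as duality predicts
print(match.x, cover.x)            # basic optimal solutions; integral, the graph being bipartite
```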
https://en.wikipedia.org/wiki/Kőnig's_theorem_(graph_theory)
In set theory, Kőnig's theorem states that if the axiom of choice holds, I is a set, κ_i and λ_i are cardinal numbers for every i in I, and κ_i < λ_i for every i in I, then

∑_{i∈I} κ_i < ∏_{i∈I} λ_i.

The sum here is the cardinality of the disjoint union of sets of cardinality κ_i, and the product is the cardinality of the Cartesian product. However, without the use of the axiom of choice, the sum and the product cannot be defined as cardinal numbers, and the meaning of the inequality sign would need to be clarified. Kőnig's theorem was introduced by Kőnig (1904) in the slightly weaker form that the sum of a strictly increasing sequence of nonzero cardinal numbers is less than their product. The precise statement of the result: if I is a set, A_i and B_i are sets for every i in I, and A_i < B_i for every i in I, then

∑_{i∈I} A_i < ∏_{i∈I} B_i,

where < means strictly less than in cardinality, i.e. there is an injective function from A_i to B_i, but not one going the other way. The union involved need not be disjoint (a non-disjoint union can't be any bigger than the disjoint version, also assuming the axiom of choice). In this formulation, Kőnig's theorem is equivalent to the axiom of choice. [ 1 ] (Of course, Kőnig's theorem is trivial if the cardinal numbers κ_i and λ_i are finite and the index set I is finite. If I is empty, then the left sum is the empty sum and therefore 0, while the right product is the empty product and therefore 1.) Kőnig's theorem is remarkable because of the strict inequality in the conclusion. There are many easy rules for the arithmetic of infinite sums and products of cardinals in which one can only conclude a weak inequality ≤, for example: if m_i < n_i for all i in I, then one can only conclude

∑_{i∈I} m_i ≤ ∑_{i∈I} n_i,

since, for example, setting m_i = 1 and n_i = 2, where the index set I is the natural numbers, yields the sum ℵ₀ for both sides, and we have an equality. If we take m_i = 1 and n_i = 2 for each i in κ, then the left side of the above inequality is just κ, while the right side is 2^κ, the cardinality of functions from κ to {0, 1}, that is, the cardinality of the power set of κ. Thus, Kőnig's theorem gives us an alternate proof of Cantor's theorem. (Historically of course Cantor's theorem was proved much earlier.)
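Written out, this special case is a one-line computation; a compact LaTeX restatement of the argument in the preceding paragraph:

```latex
\kappa \;=\; \sum_{i \in \kappa} 1
\;<\; \prod_{i \in \kappa} 2
\;=\; 2^{\kappa},
```

which is precisely Cantor's theorem κ < 2^κ.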
One way of stating the axiom of choice is "an arbitrary Cartesian product of non-empty sets is non-empty". Let B_i be a non-empty set for each i in I. Let A_i = {} for each i in I. Thus by Kőnig's theorem, we have:

0 = ∑_{i∈I} |A_i| < ∏_{i∈I} |B_i|.

That is, the Cartesian product of the given non-empty sets B_i has a larger cardinality than the sum of empty sets. Thus it is non-empty, which is just what the axiom of choice states. Since the axiom of choice follows from Kőnig's theorem, we will use the axiom of choice freely and implicitly when discussing consequences of the theorem. Kőnig's theorem also has important consequences for cofinality of cardinal numbers: if κ is an infinite cardinal, then κ < κ^cf(κ). If κ is regular, then this follows from Cantor's theorem. If κ is singular, then κ is a limit cardinal. Choose a strictly increasing cf(κ)-sequence of cardinals approaching κ. Let λ be their sum. Each summand is less than κ, so, by Kőnig's theorem, λ is less than the product of cf(κ) copies of κ. We finish the proof by showing that λ = κ. Since each summand is a lower bound for λ and the summands approach κ, λ ≥ κ. For the other inequality, λ ≤ cf(κ)·κ = κ. According to Easton's theorem, the next consequence of Kőnig's theorem is the only nontrivial constraint on the continuum function for regular cardinals: if κ is an infinite cardinal and λ ≥ 2, then κ < cf(λ^κ). Let μ = λ^κ. Suppose that, contrary to this corollary, κ ≥ cf(μ). Then using the previous corollary,

μ < μ^cf(μ) ≤ μ^κ = (λ^κ)^κ = λ^(κ·κ) = λ^κ = μ,

a contradiction. Assuming Zermelo–Fraenkel set theory, including especially the axiom of choice, we can prove the theorem. Remember that we are given A_i < B_i for every i in I, and we want to show

∑_{i∈I} A_i < ∏_{i∈I} B_i.

The axiom of choice implies that the condition A < B is equivalent to the condition that there is no function from A onto B and B is nonempty. So we are given that there is no function from A_i onto B_i ≠ {}, and we have to show that any function f from the disjoint union of the As to the product of the Bs is not surjective and that the product is nonempty. That the product is nonempty follows immediately from the axiom of choice and the fact that the factors are nonempty. For each i choose a b_i in B_i not in the image of A_i under the composition of f with the projection to B_i. Then the product of the elements b_i is not in the image of f, so f does not map the disjoint union of the As onto the product of the Bs.
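In symbols, the diagonal step at the end of this proof reads as follows (a compact LaTeX restatement of the argument just given, with π_i denoting the projection onto the i-th factor):

```latex
b_i \in B_i \setminus \pi_i\bigl(f[A_i]\bigr)
\quad\text{for each } i \in I,
\qquad\text{hence}\qquad
(b_i)_{i \in I} \notin \operatorname{ran}(f).
```

Such a b_i exists for every i precisely because no composition π_i ∘ f maps A_i onto B_i.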
https://en.wikipedia.org/wiki/Kőnig's_theorem_(set_theory)
L'Hôpital's rule (/ˌloʊpiːˈtɑːl/, loh-pee-TAHL), also known as Bernoulli's rule, is a mathematical theorem that allows evaluating limits of indeterminate forms using derivatives. Application (or repeated application) of the rule often converts an indeterminate form to an expression that can be easily evaluated by substitution. The rule is named after the 17th-century French mathematician Guillaume de l'Hôpital. Although the rule is often attributed to de l'Hôpital, the theorem was first introduced to him in 1694 by the Swiss mathematician Johann Bernoulli. L'Hôpital's rule states that for functions f and g which are defined on an open interval I and differentiable on I ∖ {c} for a (possibly infinite) accumulation point c of I, if lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or ±∞, and g′(x) ≠ 0 for all x in I ∖ {c}, and lim_{x→c} f′(x)/g′(x) exists, then

lim_{x→c} f(x)/g(x) = lim_{x→c} f′(x)/g′(x).

The differentiation of the numerator and denominator often simplifies the quotient or converts it to a limit that can be directly evaluated by continuity. Guillaume de l'Hôpital (also written l'Hospital [ a ]) published this rule in his 1696 book Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation: Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook on differential calculus. [ 1 ] [ b ] However, it is believed that the rule was discovered by the Swiss mathematician Johann Bernoulli. [ 3 ] The general form of l'Hôpital's rule covers many cases. Let c and L be extended real numbers: real numbers, as well as positive and negative infinity. Let I be an open interval containing c (for a two-sided limit) or an open interval with endpoint c (for a one-sided limit, or a limit at infinity if c is infinite). On I ∖ {c}, the real-valued functions f and g are assumed differentiable with g′(x) ≠ 0. It is also assumed that lim_{x→c} f′(x)/g′(x) = L, a finite or infinite limit. If either lim_{x→c} f(x) = lim_{x→c} g(x) = 0 or lim_{x→c} |f(x)| = lim_{x→c} |g(x)| = ∞, then lim_{x→c} f(x)/g(x) = L. Although we have written x → c throughout, the limits may also be one-sided limits (x → c⁺ or x → c⁻), when c is a finite endpoint of I. In the second case, the hypothesis that f diverges to infinity is not necessary; in fact, it is sufficient that lim_{x→c} |g(x)| = ∞. The hypothesis that g′(x) ≠ 0 appears most commonly in the literature, but some authors sidestep this hypothesis by adding other hypotheses which imply g′(x) ≠ 0. For example, [ 4 ] one may require in the definition of the limit lim_{x→c} f′(x)/g′(x) = L that the function f′(x)/g′(x) must be defined everywhere on an interval I ∖ {c}. [ c ]
Another method [ 5 ] is to require that both f and g be differentiable everywhere on an interval containing c. All four conditions for l'Hôpital's rule are necessary: (1) the limits of f and g are both 0 or both ±∞, so that the quotient has an indeterminate form; (2) f and g are differentiable on I ∖ {c}; (3) g′(x) ≠ 0 for all x in I ∖ {c}; and (4) the limit of f′(x)/g′(x) exists. Where one of the above conditions is not satisfied, l'Hôpital's rule is not valid in general, and its conclusion may be false in certain cases. The necessity of the first condition can be seen by considering the counterexample where the functions are f(x) = x + 1 and g(x) = 2x + 1 and the limit is x → 1. The first condition is not satisfied for this counterexample because lim_{x→1} f(x) = lim_{x→1} (x + 1) = (1) + 1 = 2 ≠ 0 and lim_{x→1} g(x) = lim_{x→1} (2x + 1) = 2(1) + 1 = 3 ≠ 0. This means that the form is not indeterminate. The second and third conditions are satisfied by f(x) and g(x). The fourth condition is also satisfied with

lim_{x→1} f′(x)/g′(x) = lim_{x→1} (x + 1)′/(2x + 1)′ = lim_{x→1} 1/2 = 1/2.

But the conclusion fails, since

lim_{x→1} f(x)/g(x) = lim_{x→1} (x + 1)/(2x + 1) = 2/3 ≠ 1/2.

Differentiability of functions is a requirement because if a function is not differentiable, then the derivative of the function is not guaranteed to exist at each point in I. The fact that I is an open interval carries over from the hypotheses of Cauchy's mean value theorem. The functions need not be differentiable at c itself, because l'Hôpital's rule only requires the derivative to exist as the function approaches c; the derivative does not need to be taken at c. For example, let f(x) = sin x for x ≠ 0 and f(0) = 1, let g(x) = x, and let c = 0. In this case, f(x) is not differentiable at c. However, since f(x) is differentiable everywhere except c, lim_{x→c} f′(x) still exists. Thus, since lim_{x→c} f(x)/g(x) has the indeterminate form 0/0 and lim_{x→c} f′(x)/g′(x) exists, l'Hôpital's rule still holds. The necessity of the condition that g′(x) ≠ 0 near c can be seen by the following counterexample due to Otto Stolz. [ 6 ] Let f(x) = x + sin x cos x and g(x) = f(x)e^{sin x}. Then there is no limit for f(x)/g(x) as x → ∞. However,

f′(x)/g′(x) = 2cos²x / (e^{sin x}(2cos²x + (x + sin x cos x)cos x)) = 2cos x / (e^{sin x}(2cos x + x + sin x cos x)),

which tends to 0 as x → ∞, although it is undefined at infinitely many points (wherever cos x = 0).
Further examples of this type were found by Ralph P. Boas Jr. [ 7 ] The requirement that the limit lim_{x→c} f′(x)/g′(x) exists is essential; if it does not exist, the original limit lim_{x→c} f(x)/g(x) may nevertheless exist. Indeed, as x approaches c, the functions f or g may exhibit many oscillations of small amplitude but steep slope, which do not affect lim_{x→c} f(x)/g(x) but do prevent the convergence of lim_{x→c} f′(x)/g′(x). For example, if f(x) = x + sin(x), g(x) = x and c = ∞, then

f′(x)/g′(x) = (1 + cos(x))/1,

which does not approach a limit since cosine oscillates infinitely between 1 and −1. But the ratio of the original functions does approach a limit, since the amplitude of the oscillations of f becomes small relative to g:

lim_{x→∞} f(x)/g(x) = lim_{x→∞} (x + sin(x))/x = lim_{x→∞} (1 + sin(x)/x) = 1.

In a case such as this, all that can be concluded is that

liminf_{x→c} f′(x)/g′(x) ≤ liminf_{x→c} f(x)/g(x) ≤ limsup_{x→c} f(x)/g(x) ≤ limsup_{x→c} f′(x)/g′(x),

so that if the limit of f/g exists, then it must lie between the inferior and superior limits of f′/g′. (In the example, 1 does indeed lie between 0 and 2.) Note also that by the contrapositive form of the rule, if lim_{x→c} f(x)/g(x) does not exist, then lim_{x→c} f′(x)/g′(x) also does not exist. In the following computations, we indicate each application of l'Hôpital's rule by the symbol =_H. Sometimes l'Hôpital's rule is invoked in a tricky way: suppose f(x) + f′(x) converges as x → ∞ and that e^x · f(x) diverges to positive or negative infinity. Then:

lim_{x→∞} f(x) = lim_{x→∞} (e^x · f(x))/e^x =_H lim_{x→∞} (e^x (f(x) + f′(x)))/e^x = lim_{x→∞} (f(x) + f′(x)),

and so, lim_{x→∞} f(x) exists and lim_{x→∞} f′(x) = 0. (This result remains true without the added hypothesis that e^x · f(x) diverges to positive or negative infinity, but the justification is then incomplete.) Sometimes L'Hôpital's rule does not reduce to an obvious limit in a finite number of steps, unless some intermediate simplifications are applied. Examples include the following: A common logical fallacy is to use L'Hôpital's rule to prove the value of a derivative by computing the limit of a difference quotient. Since applying l'Hôpital requires knowing the relevant derivatives, this amounts to circular reasoning or begging the question, assuming what is to be proved.
For example, consider the proof of the derivative formula for powers of x:

lim_{h→0} ((x + h)ⁿ − xⁿ)/h.

Applying L'Hôpital's rule and finding the derivatives with respect to h yields nxⁿ⁻¹ as expected, but this computation requires the use of the very formula that is being proven. Similarly, to prove lim_{x→0} sin(x)/x = 1, applying L'Hôpital requires knowing the derivative of sin(x) at x = 0, which amounts to calculating lim_{h→0} sin(h)/h in the first place; a valid proof requires a different method such as the squeeze theorem. Other indeterminate forms, such as 1^∞, 0⁰, ∞⁰, 0·∞, and ∞ − ∞, can sometimes be evaluated using L'Hôpital's rule. We again indicate applications of L'Hôpital's rule by =_H. For example, to evaluate a limit involving ∞ − ∞, convert the difference of two functions to a quotient: L'Hôpital's rule can be used on indeterminate forms involving exponents by using logarithms to "move the exponent down". Here is an example involving the indeterminate form 0⁰:

lim_{x→0⁺} xˣ = lim_{x→0⁺} e^{x·ln x} = exp(lim_{x→0⁺} x·ln x).

It is valid to move the limit inside the exponential function because this function is continuous. Now the exponent x has been "moved down". The limit lim_{x→0⁺} x·ln x is of the indeterminate form 0·∞ dealt with in an example above: L'Hôpital may be used to determine that

lim_{x→0⁺} x·ln x = 0.

Thus

lim_{x→0⁺} xˣ = e⁰ = 1.

The most common indeterminate forms and the transformations which precede applying l'Hôpital's rule are the following:
- 0/0 and ∞/∞: apply the rule directly to f/g.
- 0·∞: rewrite f·g as f/(1/g) or g/(1/f).
- ∞ − ∞: combine f − g into a single quotient (for instance over a common denominator) before applying the rule.
- 0⁰, 1^∞, ∞⁰: rewrite f^g as exp(g·ln f) and evaluate the limit of g·ln f.
The Stolz–Cesàro theorem is a similar result involving limits of sequences, but it uses finite difference operators rather than derivatives. Consider the parametric curve in the xy-plane with coordinates given by the continuous functions g(t) and f(t), the locus of points (g(t), f(t)), and suppose f(c) = g(c) = 0. The slope of the tangent to the curve at (g(c), f(c)) = (0, 0) is the limit of the ratio f(t)/g(t) as t → c. The tangent to the curve at the point (g(t), f(t)) is the velocity vector (g′(t), f′(t)) with slope f′(t)/g′(t). L'Hôpital's rule then states that the slope of the curve at the origin (t = c) is the limit of the tangent slope at points approaching the origin, provided that this is defined. The proof of L'Hôpital's rule is simple in the case where f and g are continuously differentiable at the point c and where a finite limit is found after the first round of differentiation. This is only a special case of L'Hôpital's rule, because it only applies to functions satisfying stronger conditions than required by the general rule. However, many common functions have continuous derivatives (e.g. polynomials, sine and cosine, exponential functions), so this special case covers most applications. Suppose that f and g are continuously differentiable at a real number c, that f(c) = g(c) = 0, and that g′(c) ≠ 0. Then

lim_{x→c} f(x)/g(x) = lim_{x→c} ((f(x) − f(c))/(x − c)) / ((g(x) − g(c))/(x − c)) = f′(c)/g′(c) = lim_{x→c} f′(x)/g′(x).

This follows from the difference quotient definition of the derivative.
The last equality follows from the continuity of the derivatives at c. The limit in the conclusion is not indeterminate because g′(c) ≠ 0. The proof of a more general version of L'Hôpital's rule is given below. The following proof is due to Taylor (1952), where a unified proof for the 0/0 and ±∞/±∞ indeterminate forms is given. Taylor notes that different proofs may be found in Lettenmeyer (1936) and Wazewski (1949). Let f and g be functions satisfying the hypotheses in the General form section. Let I be the open interval in the hypothesis with endpoint c. Considering that g′(x) ≠ 0 on this interval and g is continuous, I can be chosen smaller so that g is nonzero on I. [ d ] For each x in the interval, define

m(x) = inf f′(t)/g′(t) and M(x) = sup f′(t)/g′(t)

as t ranges over all values between x and c. (The symbols inf and sup denote the infimum and supremum.) From the differentiability of f and g on I, Cauchy's mean value theorem ensures that for any two distinct points x and y in I there exists a ξ between x and y such that

(f(x) − f(y))/(g(x) − g(y)) = f′(ξ)/g′(ξ).

Consequently,

m(x) ≤ (f(x) − f(y))/(g(x) − g(y)) ≤ M(x)

for all choices of distinct x and y in the interval. The value g(x) − g(y) is always nonzero for distinct x and y in the interval, for if it were not, the mean value theorem would imply the existence of a p between x and y such that g′(p) = 0. The definition of m(x) and M(x) will result in an extended real number, and so it is possible for them to take on the values ±∞. In the following two cases, m(x) and M(x) will establish bounds on the ratio f/g. Case 1: lim_{x→c} f(x) = lim_{x→c} g(x) = 0. For any x in the interval I, and point y between x and c,

m(x) ≤ (f(y) − f(x))/(g(y) − g(x)) = (f(y)/g(x) − f(x)/g(x)) / (g(y)/g(x) − 1) ≤ M(x),

and therefore as y approaches c, f(y)/g(x) and g(y)/g(x) become zero, and so

m(x) ≤ f(x)/g(x) ≤ M(x).

Case 2: lim_{x→c} |g(x)| = ∞. For every x in the interval I, define S_x = {y | y is between x and c}. For every point y between x and c,

m(x) ≤ (f(y) − f(x))/(g(y) − g(x)) = (f(y)/g(y) − f(x)/g(y)) / (1 − g(x)/g(y)) ≤ M(x).

As y approaches c, both f(x)/g(y) and g(x)/g(y) become zero, and therefore

m(x) ≤ liminf_{y→c} f(y)/g(y) ≤ limsup_{y→c} f(y)/g(y) ≤ M(x).

The limit superior and limit inferior are necessary since the existence of the limit of f/g has not yet been established. It is also the case that [ e ]

lim_{x→c} m(x) = lim_{x→c} M(x) = L.

In case 1, the squeeze theorem establishes that lim_{x→c} f(x)/g(x) exists and is equal to L.
In case 2, the inequality

m(x) ≤ liminf_{y→c} f(y)/g(y) ≤ limsup_{y→c} f(y)/g(y) ≤ M(x)

holds for every x in the interval, and the squeeze theorem again asserts that liminf_{x→c} f(x)/g(x) = limsup_{x→c} f(x)/g(x) = L, and so the limit lim_{x→c} f(x)/g(x) exists and is equal to L. This is the result that was to be proven. In case 2 the assumption that f(x) diverges to infinity was not used within the proof. This means that if |g(x)| diverges to infinity as x approaches c and both f and g satisfy the hypotheses of L'Hôpital's rule, then no additional assumption is needed about the limit of f(x): it could even be the case that the limit of f(x) does not exist. In this case, L'Hôpital's theorem is actually a consequence of Cesàro–Stolz. [ 9 ] In the case when |g(x)| diverges to infinity as x approaches c and f(x) converges to a finite limit at c, then L'Hôpital's rule would be applicable, but not absolutely necessary, since basic limit calculus will show that the limit of f(x)/g(x) as x approaches c must be zero. A simple but very useful consequence of L'Hôpital's rule is that the derivative of a function cannot have a removable discontinuity. That is, suppose that f is continuous at a, and that f′(x) exists for all x in some open interval containing a, except perhaps for x = a. Suppose, moreover, that lim_{x→a} f′(x) exists. Then f′(a) also exists and

f′(a) = lim_{x→a} f′(x).

In particular, f′ is also continuous at a. Thus, if a function is not continuously differentiable near a point, the derivative must have an essential discontinuity at that point. Consider the functions h(x) = f(x) − f(a) and g(x) = x − a. The continuity of f at a tells us that lim_{x→a} h(x) = 0. Moreover, lim_{x→a} g(x) = 0 since a polynomial function is always continuous everywhere. Applying L'Hôpital's rule shows that

f′(a) := lim_{x→a} (f(x) − f(a))/(x − a) = lim_{x→a} h′(x)/g′(x) = lim_{x→a} f′(x).
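A symbolic check of the rule and of the failed converse; a minimal sketch using the SymPy library (the first limit is a standard 0/0 example, not taken from the text above; the second reproduces the oscillation counterexample discussed earlier):

```python
# SymPy checks of l'Hôpital's rule and of the oscillation counterexample.
import sympy as sp

x = sp.symbols('x')

# 0/0 form: lim_{x->0} (1 - cos x)/x**2 = 1/2, directly and after one
# application of the rule (differentiate numerator and denominator).
f, g = 1 - sp.cos(x), x**2
print(sp.limit(f / g, x, 0))                          # 1/2
print(sp.limit(sp.diff(f, x) / sp.diff(g, x), x, 0))  # 1/2

# The oscillation example: f = x + sin x, g = x at infinity.
f, g = x + sp.sin(x), x
print(sp.limit(f / g, x, sp.oo))                      # 1: the original ratio converges
# f'/g' = 1 + cos x oscillates between 0 and 2; SymPy reports a range of
# accumulation points rather than a single number, i.e. no limit exists.
print(sp.limit(sp.diff(f, x) / sp.diff(g, x), x, sp.oo))
```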
https://en.wikipedia.org/wiki/L'Hôpital's_rule
L-3 SmartDeck is a fully integrated cockpit system originally developed by L-3 Avionics Systems [ 1 ] and acquired in 2010 by Esterline CMC Electronics through an exclusive licensing agreement. SmartDeck is one of the many systems available today known as a “glass cockpit.” Popularized by large transport category aircraft in the 1980s, the glass cockpit is a high-technology cockpit configuration in which the traditional flight instruments and gauges are replaced by computer screens that combine information into an organized and user-friendly format. As computer technology advances, glass cockpit systems are declining in cost and becoming available in smaller general aviation aircraft. These technologies are often able to offer pilots more flight information than would be available in a conventional-style cockpit, and many feature a high level of automation that can aid the pilot in navigation and system monitoring. L-3 created SmartDeck as an alternative to other glass cockpit systems currently on the market. The major design objectives of integration and ease of use were achieved by designing the menu structure with a “three-clicks-or-less” philosophy similar to the Apple iPod and by incorporating navigation, weather, traffic and terrain avoidance, communication, flight controls, engine monitoring and enhanced vision into one cockpit system. This is achieved by combining a number of L-3's situational awareness technologies into the system. At the National Business Aviation Association annual convention in October 2010, CMC Electronics announced that it had acquired the SmartDeck technology from L-3, and L-3 ceased all development. CMC has continued the development and, as of March 2012, was expecting to announce a launch customer in the near future. The user interface for a basic SmartDeck system consists of one primary flight display (PFD), one multi-function display (MFD), one flight display controller (FDC), and a center console unit (CCU) display system. Other components include two air data attitude and heading reference systems (ADAHRS), two data concentrators, two magnetometers, two WAAS GPS receivers, two nav/com radios with a PS Engineering audio panel, a transponder and the S-TEC Intelliflight 1950 Integrated Digital Flight Control System (DFCS). SmartDeck interfaces with the L-3 Avionics SkyWatch collision avoidance system, Landmark terrain awareness warning system (TAWS B), Stormscope lightning detection system and IRIS Infrared Imaging System, among other avionics technologies. The SmartDeck system is customizable for different customers and platforms. SmartDeck features a high level of redundancy that offers added safety in the event of a system failure. The dual ADAHRS continuously compare flight data and alert the pilot if the difference between the two units exceeds a predefined tolerance; during an ADAHRS miscompare, both flight displays will act as PFDs and the discrepancy will be highlighted. This is known as reversionary mode, a condition in which both screens combine all the standard PFD information with a number of key MFD functions. Each component in the system is connected via a dual IEEE 1394 interface, also known as FireWire. This high-speed connection interface is widely used in computing and is also used on military aircraft such as the F-22 Raptor and F-35 Lightning II.
Users can monitor the system health on the MFD during flight and will be notified in the event of a failed connection; however, the system will continue to function normally as long as part of the redundant network connections remain linked. The chief purpose of the SmartDeck Primary Flight Display is to provide the attitude, airspeed, altitude, turn rate, vertical speed and course information available in the standard six pack of a conventional cockpit. In addition, the PFD gives autopilot mode information, abbreviated engine parameters, glide slope and localizer information and winds aloft. Quick-reference true airspeed, ground speed, density altitude, outside air temperature, bearing, ground track, DME data, and time en route data are also displayed on the PFD. Dedicated buttons along the bottom of the PFD are used to change the reference bugs for indicated airspeed, course, heading, altitude and vertical speed as well as the barometer setting and source for navigation information. The reference bug settings also control the autopilot and flight director. SmartDeck's PFD is also equipped with synthetic vision, a 3D rendering of obstacles, terrain and airports that allows the pilot to see "through" weather and darkness. The image moves in real time with the aircraft and presents a clear view of the outside environment. SmartDeck's MFD contains a host of flight information available on a number of “pages” dedicated to different functions. Each page features its own menu and submenus that are used to control the display options. The amount of information available on each screen is customizable, and much of the information can be combined onto one page to decrease the need for frequently changing screens. The map page is displayed for the majority of a routine flight on the MFD to aid the pilot in navigation and to assist with situational awareness. A moving map can be displayed in a VFR or IFR format on the MFD with an aircraft icon that represents the aircraft's present position. A number of selectable options allow the pilot to easily customize the detail level of the moving map. Selectable map overlays include: Additionally, pilot-selectable traffic, weather and terrain information is available on dedicated thumbnails or overlaid on the map. A thumbnail overlay for an enhanced vision display is also available. During instrument approaches or while performing SIDs and STARs, a chart overlay option is available on the map page. Chart overlay gives aircraft position on the designated Jeppesen chart in lieu of the map. This function allows the pilot to maintain additional situational awareness throughout the approach and departure phases of the flight. The auxiliary page combines a large amount of aircraft system data into one easy-to-navigate page. The various submenus of the auxiliary page display aircraft systems, such as engine parameters and electrical; system health, which displays connections of different components; and subsystems, like GPS or transponder functionality. Also available on the Aux page are normal, abnormal and emergency checklists, aircraft performance charts and a setup page for customization of the PFD and MFD screens. Checklist progress is maintained when switching to other pages, giving the pilot quick access to procedures without hindering safe navigation. The SmartDeck CCU is a smaller display screen used for entering flight plan data, obtaining airport information, and entering nav/com frequencies or transponder codes.
SmartDeck is the only glass cockpit system in the light aircraft market that includes a display dedicated to such functions. Because radio frequencies, flight plan data and airport info can also be manipulated on the MFD, SmartDeck provides a “feature in use” annunciation if the user is accessing or modifying information in two places at once. When airways or instrument approaches are loaded into a flight plan, the CCU will automatically change to the appropriate navigation frequencies as the flight progresses. The system displays the location identifier next to communication frequencies when selected from the database and identifies the Morse code ID for navigation frequencies. A save feature allows up to 30 flight plans with as many as 100 waypoints to be saved on the unit. The S-TEC Intelliflight 1950 DFCS is the integrated autopilot used with SmartDeck. It is a two-axis attitude-based digital autopilot with a flight director. Autopilot controls are located on the CCU and include heading, nav, approach, indicated airspeed hold, vertical speed hold, and altitude hold buttons. With the autopilot engaged, the system can fly full instrument approaches and holds automatically as well as pilot created holds using the “place hold” function. After the desired mode is activated, autopilot parameters such as vertical speed and heading are selected using dedicated buttons along the bottom of the PFD and changed with a concentric control knob on the Flight Data Controller. The various autopilot modes include: SmartDeck has received Technical Standard Order (TSO) Authorization and Supplemental Type Certification (STC) from the FAA . The system was certified in a Cirrus SR22 . A limited STC is available through aftermarket dealers for installation on the Cirrus SR22 G2 model aircraft. L-3 was also awarded the development phase for Cirrus’ new “ Cirrus Vision SF50 ”. Later in the program, Cirrus decided to switch to a similar system by Garmin, prompting L-3 to sue them for $18M. Following FAA certification, SmartDeck will compete directly with the Garmin G1000 , Avidyne Entegra , Chelton FlightLogic and the Collins Pro Line series.
https://en.wikipedia.org/wiki/L-3_SmartDeck
L-Norpseudoephedrine, or (−)-norpseudoephedrine, is a psychostimulant drug of the amphetamine family. It is one of the four optical isomers of phenylpropanolamine, the other three being cathine ((+)-norpseudoephedrine), (−)-norephedrine, and (+)-norephedrine, as well as one of the two enantiomers of norpseudoephedrine (the other being cathine). [ 1 ] Similarly to cathine, L-norpseudoephedrine acts as a releasing agent of norepinephrine (EC50 = 30 nM) and, to a lesser extent, of dopamine (EC50 = 294 nM). [ 2 ] Due to the 10-fold difference in its potency for inducing the release of the two neurotransmitters, however, L-norpseudoephedrine could be called a modestly selective or preferential norepinephrine releasing agent, similarly to related compounds like ephedrine and pseudoephedrine.
https://en.wikipedia.org/wiki/L-Norpseudoephedrine
L-Photo-leucine is a synthetic derivative of the l-leucine amino acid that is used in place of its natural analog and is characterized by its photo-reactivity, which makes it suitable for observing and characterizing protein-protein interactions (PPI). When a protein containing this amino acid (A) is exposed to ultraviolet light while interacting with another protein (B), the complex formed from these two proteins (AB) remains attached and can be isolated for study. Photo-leucine, as well as another photo-reactive amino acid derived from methionine, photo-methionine, were first synthesized in 2005 by Monika Suchanek, Anna Radzikowska and Christoph Thiele [ 2 ] from the Max Planck Institute of Molecular Cell Biology and Genetics, with the objective of identifying protein-protein interactions through a simple western blot test that would provide high specificity. The resemblance of the photo-reactive amino acids to the natural ones allows the former to avoid the extensive control mechanisms that take place during protein synthesis within the cell. As mentioned in the introduction, l-photo-leucine is a synthetic derivative of the l-leucine amino acid. l-Photo-leucine is characterized by the presence of a diazirine ring linked to the R radical of the original amino acid. This three-membered ring, similar in shape to cyclopropene, consists of a carbon atom attached to two nitrogen atoms through covalent single bonds. These two nitrogen atoms are simultaneously connected to each other by a double covalent bond. The diazirine carbon is located in the position where the 2nd carbon atom of the R radical of l-leucine would theoretically be, linked to the 1st and 3rd carbons of this theoretical R radical. The diazirine ring confers on photo-leucine its photoreactive property. When irradiated with UV light, it splits, releasing nitrogen in gas form and leaving a highly reactive carbon atom (see Diazirine). In protein-protein interactions (PPI), this atom attaches to the complex formed by the two proteins under study. The rest of the amino acid has the same structure as the original l-leucine molecule, which includes, as every amino acid, an amino group and a carboxyl group bonded to an α-carbon, and a radical that is attached to this carbon atom. The R chain contains, in this case, a diazirine ring and two extra carbon atoms, each connected to the diazirine carbon as has been previously mentioned. [ 3 ] For use in biology experiments, only the l-enantiomer of the photo-leucine amino acid is synthesized, so that it can substitute for natural l-leucine. (Natural proteins consist only of l-amino acids; see homochirality.) l-Photo-leucine resembles l-leucine in its structure. However, the former contains a photo-activatable diazirine ring, which the latter does not, and which yields a reactive carbene after the light-induced loss of nitrogen, a fact that confers on l-photo-leucine its properties. This photo-reactive amino acid is synthesized by α-bromination of the azi-carboxylic acid followed by aminolysis of the azi-bromo-carboxylic acid. The classic procedure for synthesizing photo-leucine is based on the following steps: Recently, the synthesis of photo-leucine has been improved. This new way of synthesizing photo-leucine requires Boc-(S)-photo-leucine, which is prepared via ozonolysis of a commercially available product, followed by formation of the diazirine by the method of Church and Weiss.
This route represents a significant improvement over the original six-step synthesis of (S)-photo-leucine, which proceeded in low yield and required enzymatic resolution of a racemic intermediate. [ 4 ] l-Photo-leucine acquires its function after being exposed to UV light. This causes the diazirine ring of l-photo-leucine to lose its nitrogen atoms in the form of nitrogen gas, leaving its carbon atom as a reactive free radical. The bonds established between this carbon, belonging to one protein (A), and atoms belonging to another protein (B) are responsible for the cross-linking properties of l-photo-leucine, which allow it to attach these two peptide chains into a single complex (AB). The appropriate wavelength to activate the l-photo-leucine molecule ranges from 320 to 370 nanometers. Lamps with higher power are more effective in accomplishing this objective and do so in less time. The ideal wavelength for the activation of the photo-leucine amino acid is 345 nm. To increase efficiency, a shallow and uncovered plate must be used. Also, rotation of the samples located under the UV light may be necessary to make sure they receive even UV irradiation, and thus, yet again, improve the cross-linking efficiency. If the cross-linking is done in vivo, within living cells, these must be exposed to the UV radiation for a period of 15 minutes or less. In the absence of the original amino acid (l-leucine) in an environment, l-photo-leucine is used just like its naturally occurring analog by the protein-processing mechanisms of the cell. Therefore, it can substitute for leucine in the primary structure of a protein. This property of photo-leucine is very useful for studying protein-protein interactions (PPIs), because the photo-leucine molecule, owing to its molecular structure, participates in the covalent cross-linking of proteins in the protein-protein interaction (PPI) domains when it is activated by ultraviolet (UV) light. This allows researchers to determine and describe stable and transient protein interactions within cells without using any additional chemical cross-linkers, which could damage the cell structure being studied. The study of these protein-protein interactions is important because they are crucial in organizing cellular processes in space and time. In fact, interest in protein-protein interactions is not confined only to basic research: many of the interactions involved in viral fusion or in growth-factor signaling are promising targets for antiviral and anticancer drugs. Photo-affinity labeling is a powerful tool to identify protein targets of biologically active small molecules and to probe the structure of ligand binding sites, which is why photo-reactive amino acids, including photo-leucine, are so useful. Monika Suchanek, Anna Radzikowska and Christoph Thiele carried out an experiment in which they successfully labeled proteins from monkey kidney cells (COS7). [ 2 ] These cells were grown in a high-glucose medium, from which a 3 cm² sample was removed to proceed with the western blotting. At about 70% confluence, the initial medium was replaced by another one lacking the amino acids methionine, leucine, isoleucine and valine, as well as phenol red. Afterwards, photo-amino acids were added to a final concentration of 4 mM for photo-leucine and photo-isoleucine and 1.7 mM for photo-methionine, and the cells were cultivated for 22 hours.
Once the time was over, the cells were washed using PBS and UV-irradiated for 1 to 3 minutes using a 200-W high-pressure mercury lamp with a glass filter that removed wavelengths under 310 nm. This did not affect the viability of the cells (which was only altered after 10 minutes of irradiation). The cells were then lysed and subjected to western blotting to analyse the isolated cross-linked complexes. MacKinnon A. L. et al. used photo-leucine to label proteins in a crude membrane fraction, which allowed them to identify the central part of a translocation channel within the membrane that is the target of the cyclodepsipeptide inhibitor. [ 4 ] Traditionally, the recognition of protein-protein interactions was carried out through chemical cross-linking, which involved the use of a moderately reactive bifunctional reagent, commonly attached to free amino groups. However, photochemical cross-linking is much more specific due to the short lifetime of the excited intermediates. In addition, photochemical cross-linking does not interfere with antibody recognition, whilst the former does. But photo-leucine's advantages go further: in addition to these benefits, it does not have notable negative effects. For example, although unnatural amino acids are in general toxic to cells, photo-leucine has been proved not to have any substantial effect on cell viability. These results have been corroborated by many experiments. For example, an assay with Escherichia coli β-galactosidase showed that the addition of either of the three photo-amino acids or of a mixture of them had no effect on enzyme activity. This helps to conclude that photo-amino acids are nontoxic to cultivated mammalian cells and can, at least partially, functionally replace their natural forms. However, currently photo-reactive amino acids are used in combination with chemical cross-linkers in order to achieve the most reliable results possible within protein-protein interaction studies. [ 2 ]
https://en.wikipedia.org/wiki/L-Photo-leucine
L-Photo-methionine is a photo-reactive amino acid derivative of L-methionine that was first synthesized in 2005. [ 1 ] Proteins are long polymer chains of amino acids, which can vary widely in structure and size. Proteins can interact with each other (protein-protein interactions or PPI), and these interactions affect cellular processes and pathways. [ 1 ] Such interactions, for example in viral fusion and in growth-factor signaling, looked promising as targets for antiviral or anti-cancer drugs, so research was needed to understand them. [ 1 ] With that, research began to show that proteins function in supramolecular complexes rather than as isolated entities. [ 1 ] The scientists Monika Suchanek, Anna Radzikowska, and Christoph Thiele reasoned that the most direct way to study these interactions in their natural environment was to create a new way of photo-cross-linking proteins, which led to the synthesis of L-photo-methionine and, in that same study, L-photo-leucine. [ 1 ] Racemic photo-methionine is synthesized from 4,4'-azi-pentanal by the Strecker amino acid synthesis. [ 1 ] The L enantiomer is separated by enzymatic resolution of the acetamide. As previously mentioned, L-photo-methionine can be used to study protein-protein interactions with the proteins in their native environment. This is possible because of how the amino acid behaves when exposed to UV light. [ 1 ] To first verify the synthesis, a radioactive carbon (14C) was added during the synthesis so that proper spectroscopic methods could be performed. With the synthesis verified, photo-methionine's photo-reactivity comes from the diazirine ring. Once this ring is exposed to UV light, nitrogen leaves as nitrogen gas (N2), forming the highly reactive intermediate carbene. [ 1 ] Photo-activation of amino acids provides the ability for photo-cross-linking in proteins. [ 1 ] This type of cross-linking has three major advantages: there is greater specificity due to the short-lived intermediates; the amino acid remains functional; and, most importantly, it is not toxic (meaning it should not disrupt the protein's function or structure dramatically). [ 1 ] Research found that this activation is the rate-limiting step, not the cross-linkage. [ 2 ] The scientists Miquel Vila-Perelló, Matthew R. Pratt, Frej Tulin, and Tom W. Muir wanted to create an efficient synthesis, as the original had required an enzymatic resolution and had a low yield. [ 3 ] They started with L-glutamic acid with protecting groups on the carboxylic acid (tert-butyl) and on the amine (Boc). This synthesis will not be detailed here as the classic one was; to find the actual steps, look to the reference. [ 3 ] Once they had synthesized L-photo-methionine, the yield was 32%, roughly six times higher than that of the original synthesis. [ 3 ] The product (with an Fmoc protecting group on the amine) then underwent more synthetic steps to study whether an amino-acid cross-linker and a post-translational modification (PTM) could be introduced site-specifically into the same protein, to capture a covalent interaction of the amino acid that is dependent on the PTM. [ 3 ] PTMs regulate protein-protein interactions that have characteristics that are transient and substoichiometric, making these difficult to detect by standard methods. [ 3 ]
So, in order to see if it would work, the MH2 domain of Smad2 was used, because this signaling protein is known to form stable homo-trimers once it comes into contact with receptor-phosphorylated serine residues. [ 3 ] Expressed protein ligation (known as EPL) was used to synthesize Smad2-MH2-CSpSM-photo-Met (1). The product containing the cross-linker (photo-Met) was studied against a control protein, HA-MH2-CSpSMpS (which lacks photo-methionine, 2), using SDS-PAGE and western blotting with an anti-HA antibody. [ 3 ] 1 generated two major cross-linked species with molecular weights consistent with a dimer and a trimer of Smad2-MH2. Without the cross-linker, the dimer and trimer were barely detected: in the non-irradiated 1, and in 2 both before and after UV irradiation. [ 3 ] This proved that l-photo-methionine can be used with EPL and could be used to detect a transient MH2-MH2 interaction that was dependent on a PTM. [ 3 ] As mentioned before, the scientists Monika Suchanek, Anna Radzikowska, and Christoph Thiele wanted to study protein-protein interactions in their natural environment; specifically, the membrane proteins (in a complex: SCAP, Insig-1, and SREBP) that regulate cholesterol homeostasis, in order to learn their function and the complex's structure. [ 1 ] What they found was that this photo-reactive amino acid was incorporated efficiently into proteins by mammalian cells without the need for modified tRNAs (transfer RNAs) or AARSs (aminoacyl-tRNA synthetases), which allowed the specific cross-linking needed. [ 1 ] This cross-linking could be determined by western blotting, and they discovered a direct interaction between Insig-1 and PGRMC1 (a progesterone-binding membrane protein). [ 1 ] All four of the membrane proteins are found in the endoplasmic reticulum, and the complex responds to low cholesterol levels. [ 1 ] Cells (COS7) expressing HA (hemagglutinin)-tagged PGRMC1 and Myc-tagged Insig-1 were grown with and without photo-Met. In the presence of photo-Met, Insig-1 and SCAP cross-linked with PGRMC1; the Insig-1 cross-link in particular gave a strong band. [ 1 ] The cross-linking was detected by immunoprecipitating detergent extracts with an antibody to HA; the precipitate was then tested for Insig-1 using western blotting with the antibody for Myc. [ 1 ] An identical band was found performing the detection in the reverse order, meaning the Myc antibody was used for immunoprecipitation, followed by blotting with the HA antibody. [ 1 ] So, the method was proven to work in that photo-Met could cross-link proteins, but the physiological implications of this cross-linking have yet to be determined. Protein structure was also studied using a protein nanoprobe (one that enables cross-linking) that introduced photo-methionine within the protein (during recombinant expression), which allowed the protein to keep its preserved structure while its interactions were mapped out. The model used was a region of contact surface that is involved in a well-known interaction (homodimerization) between two molecules of the 14-3-3ζ protein. Once the photo-methionine is introduced and has been activated using UV light, it can cross-link with no group specificity, and the links formed have zero length. High-resolution mass spectrometry (MS, even MS/MS) can then be used to determine the cross-linked residues and the reaction radius, allowing the researchers to characterize and study the homodimerization of the protein.
The use of high-resolution MS with photo-methionine has its advantages: it again allows the protein to remain in its native state, the time scales are reasonable, and only small quantities of the protein are needed. There are also fewer limitations on reaction specificity with a photo-active cross-linker (photo-methionine) compared to chemical cross-linking. This method of photo-initiated cross-linking from a protein nanoprobe in tandem with MS could be useful for characterizing not only homodimer formation but also oligomers and, in theory, heteromers (such as the composition of a protein-protein mixture and its functionality). [ 4 ] Cytochrome b5 was synthesized with photo-methionine to map its protein-protein interactions and identify its structure, in order to study the mammalian mixed-function oxidase system (also known as the MFO). [ 5 ] This system is located in the membrane of the endoplasmic reticulum and is composed of cytochrome P450, NADPH:cytochrome P450 reductase, and cytochrome b5 along with NADH:cytochrome b5 reductase. [ citation needed ] Once photo-methionine had been incorporated into cytochrome b5 (meaning photo-Met was substituted in place of methionine, giving photo-cyt b5), photo-cyt b5 and cytochrome P450 were placed under UV light and the products were studied using SDS-PAGE; this method showed three cross-links. [ citation needed ] Photo-methionine proved successful in mapping photo-cyt b5, as MALDI-TOF analysis showed three oligomers (from chymotryptic peptides) composed of photo-cyt b5 and cytochrome P450 in molecular-weight ratios of 1:1, 1:2, and 2:1. [ citation needed ] What makes photo-methionine so useful in studying cytochrome P450 and cytochrome b5 is that this method mapped protein-protein interfaces not only in regions exposed to solvent, but also in their native environment, the membrane. [ 5 ] A typical cross-linking method can only work in solvent-exposed regions, demonstrating once again that photo-methionine is useful for mapping protein-protein interactions with the proteins in their native environment. [ 5 ] Laminins are non-collagenous proteins found in basement membranes that form networks through non-covalent self-interactions. Nidogens (also known as entactins) are sulfated monomeric glycoproteins that are ubiquitously present in the basement membranes of higher organisms. Nidogens help with the formation of the basement membrane. When both laminins and nidogens are present, they interact with each other in a complex with 1:1 stoichiometry. To study the short arm of laminin γ1, photo-methionine was introduced into nidogen-1, laminin γ1 LEb2-4, and the laminin γ1 short arm to see whether this photo-cross-linking method could map out the structure. MS/MS analysis performed before cross-linking found that only 13-25% of methionines had been incorporated, but once cross-linking was performed, whether UV-A-induced or mediated by another cross-linker, BS2G (a homobifunctional cross-linker), the percentage of photo-methionines increased to 35%. Both cross-linkers provided additional structural insight, both computationally and experimentally, that helped in understanding the functions. [ 6 ] The structures of cyclooxygenase-2 (COX-2) and microsomal prostaglandin E2 synthase-1 (mPGES-1) were studied using both photo-methionine (photo-activatable) and bifunctional cross-linkers.
Photo-methionine used with COX-2 showed, just as the bifunctional cross-linker did, a dimeric structure consistent with the crystal structure of the enzyme. For mPGES-1, human cells (A549) treated with disuccinimidyl suberate (a chemical cross-linker) yielded a dimer of 33 kDa and a trimer of 45 kDa, while treatment with photo-methionine yielded a dimer of the same molecular weight (33 kDa) and two putative trimers (50 kDa and 55 kDa). Once an mPGES-1 inhibitor (MF63) was introduced, it inhibited the formation of the 50 kDa and 55 kDa complexes; the dimer and trimer yielded by the chemical cross-linker were not affected by the inhibitor. However, neither photo-methionine nor disuccinimidyl suberate showed any protein-protein interactions between COX-2 and mPGES-1, which could be due to various reasons. For photo-methionine, one reason could be the low incorporation achieved at the time, which was 0.7%; since mPGES-1 has 152 amino acids, only about one photo-methionine would be incorporated per monomer. That also means no one specific methionine would be replaced, resulting in a heterogeneous population of mPGES-1 and thus in different cross-linking. Even though it did not show any protein-protein interactions, the method could be used to detect inhibitor-induced protein conformational changes in cell membranes, on top of determining oligomeric structures. [ 7 ] Photo-methionine can also be used to label recombinant proteins in Escherichia coli cells, though methionine in general is a rare amino acid, which means it can give only limited structural data. [ 8 ] Nevertheless, photo-methionine was incorporated into the 17 kDa Ca2+-regulating protein calmodulin (CaM), which has nine methionines, and studied via mass spectrometry (MS). [ 9 ] What makes this method different is the use of a mineral salts medium instead of DMEM (Dulbecco's Modified Eagle's Medium) or dialyzed fetal bovine serum for the incorporation into the cells. [ 9 ] Using the mineral salts medium allowed the cells to be grown in it from the beginning, eliminating the complicated steps of other protocols (incubating the cells in LB medium, followed by washing, and further incubating in the depleted medium); this meant that photo-methionine could be incorporated from the very beginning of cell growth, with a high yield above 30%. [ 9 ] Photo-methionine showed no damage during the cell growth process, and once it was photo-activated by UV-A light, MS peaks of photo-methionine-labeled CaM revealed nine distinct cross-link sites. [ 9 ] Photo-methionine can thus be used not only for mapping 3D protein structure and studying protein-protein interactions, but now also for probing hydrophobic regions in proteins. [ 9 ]
https://en.wikipedia.org/wiki/L-Photo-methionine
An L-ribonucleic acid aptamer (L-RNA aptamer, trade name Spiegelmer) is an RNA-like molecule built from L-ribose units. [ 1 ] It is an artificial oligonucleotide named for being a mirror image of natural oligonucleotides. L-RNA aptamers are a form of aptamers. Due to their L-nucleotides, they are highly resistant to degradation by nucleases. [ 2 ] L-RNA aptamers are considered potential drugs and are currently being tested in clinical trials. L-RNA aptamers, built using L-ribose, are the enantiomers of natural oligonucleotides, which are made with D-ribose. Nucleic acid aptamers, including L-RNA aptamers, contain adenosine monophosphate, guanosine monophosphate, cytidine monophosphate and uridine monophosphate, each comprising a phosphate group, a nucleobase and a ribose sugar. Like other aptamers, L-RNA aptamers are able to bind molecules such as peptides, proteins, and substances of low molecular weight. The affinity of L-RNA aptamers to their target molecules often lies in the pico- to nanomolar range and is thus comparable to that of antibodies. [ 3 ] L-RNA aptamers themselves have low antigenicity. In contrast to other aptamers, L-RNA aptamers have high stability in blood serum, since they are less susceptible to hydrolytic cleavage by enzymes. [ 4 ] They are excreted by the kidneys in a short time due to their low molar mass (which is below the renal threshold). L-RNA aptamers modified to a higher molar mass, such as PEGylated L-RNA aptamers, show a prolonged plasma half-life. Unlike other aptamers, L-RNA aptamers cannot be made directly using systematic evolution of ligands by exponential enrichment (SELEX), as L-nucleic acids are not amenable to enzymatic methods, such as the polymerase chain reaction (PCR), used in SELEX. Therefore, the selection is done with mirrored target molecules. The first step is the production of the target's enantiomer. In the case of peptides and small proteins that are produced synthetically, an enantiomer is made using synthetic D-amino acids. If the target is a larger protein molecule, beyond synthetic abilities, the enantiomer of an epitope is produced. [ 4 ] A conventional library of existing molecules (up to 10^16 different oligonucleotides) serves as the starting point for the subsequent SELEX process. Selection, separation, and amplification using the mirror image of the target molecule are performed. The sequence of the oligonucleotide selected using SELEX is determined with the help of DNA sequencing. This information is used for the synthesis of the oligonucleotide's enantiomer, the L-RNA aptamer, using L-nucleotides. L-RNA aptamers have been obtained for the chemokines CCL2 and CXCL12, the complement component C5a, and ghrelin. They are currently in preclinical or clinical development. Proof-of-concept for an anti-CCL2/MCP-1 L-RNA aptamer has recently been demonstrated in diabetic nephropathy patients. [ 2 ] They can also be used as diagnostic agents. [ 4 ]
https://en.wikipedia.org/wiki/L-Ribonucleic_acid_aptamer
The L-arabinose operon, also called the ara or araBAD operon, is an operon required for the breakdown of the five-carbon sugar L-arabinose in Escherichia coli. [ 1 ] The L-arabinose operon contains three structural genes: araB, araA and araD (collectively known as araBAD), which encode three metabolic enzymes required for the metabolism of L-arabinose. [ 2 ] AraB (ribulokinase), AraA (an isomerase), and AraD (an epimerase), produced by these genes, catalyse the conversion of L-arabinose to an intermediate of the pentose phosphate pathway, D-xylulose-5-phosphate. [ 2 ] The structural genes of the L-arabinose operon are transcribed from a common promoter into a single transcript, an mRNA. [ 3 ] The expression of the L-arabinose operon is controlled as a single unit by the product of the regulatory gene araC and the catabolite activator protein (CAP)-cAMP complex. [ 4 ] The regulator protein AraC is sensitive to the level of arabinose and plays a dual role, as both an activator in the presence of arabinose and a repressor in its absence, to regulate the expression of araBAD. [ 5 ] The AraC protein not only controls the expression of araBAD but also auto-regulates its own expression at high AraC levels. [ 6 ] The L-arabinose operon is composed of structural genes and regulatory regions, including the operator region (araO1, araO2) and the initiator region (araI1, araI2). [ 7 ] The structural genes, araB, araA and araD, encode enzymes for L-arabinose catabolism. There is also a CAP binding site to which the CAP-cAMP complex binds, facilitating catabolite repression and resulting in positive regulation of araBAD when the cell is starved of glucose. [ 8 ] The regulatory gene, araC, is located upstream of the L-arabinose operon and encodes the arabinose-responsive regulatory protein AraC. Both araC and araBAD have a discrete promoter where RNA polymerase binds and initiates transcription. [ 4 ] araBAD and araC are transcribed in opposite directions from the araBAD promoter (PBAD) and the araC promoter (PC) respectively. [ 2 ] Both L-ribulose 5-phosphate and D-xylulose-5-phosphate are metabolites of the pentose phosphate pathway, which links the metabolism of 5-carbon sugars to that of 6-carbon sugars. [ 6 ] The L-arabinose system is not only under the control of the CAP-cAMP activator, but is also positively or negatively regulated through the binding of the AraC protein. AraC functions as a homodimer, which can control transcription of araBAD through interaction with the operator and initiator regions of the L-arabinose operon. Each AraC monomer is composed of two domains: a DNA-binding domain and a dimerisation domain. [ 9 ] The dimerisation domain is responsible for arabinose binding. [ 10 ] AraC undergoes a conformational change upon arabinose binding and has two distinct conformations. [ 6 ] The conformation is determined purely by the binding of the allosteric inducer arabinose. [ 11 ] AraC can also negatively autoregulate its own expression when the concentration of AraC becomes too high. AraC synthesis is repressed through the binding of dimeric AraC to the operator region (araO1). When arabinose is absent, cells do not need the araBAD products for breaking down arabinose. Therefore, dimeric AraC acts as a repressor: one monomer binds to the operator of the araBAD gene (araO2), while the other monomer binds to a distant DNA half-site known as araI1. [ 12 ] This leads to the formation of a DNA loop.
[ 13 ] This orientation blocks RNA polymerase from binding to the araBAD promoter. [ 14 ] Therefore, transcription of the structural genes araBAD is inhibited. [ 15 ] Expression of the araBAD operon is activated in the absence of glucose and in the presence of arabinose. When arabinose is present, AraC and CAP work together and function as activators. [ 16 ] AraC acts as an activator in the presence of arabinose. AraC undergoes a conformational change when arabinose binds to its dimerization domain. As a result, the AraC-arabinose complex falls off araO2 and breaks the DNA loop, and it becomes more energetically favourable for AraC-arabinose to bind to two adjacent DNA half-sites, araI1 and araI2. One of the monomers binds araI1, the other binds araI2; in other words, binding of AraC to araI2 is allosterically induced by arabinose. In this configuration one of the AraC monomers is placed near the araBAD promoter, which helps to recruit RNA polymerase to the promoter to initiate transcription. [ 17 ] CAP acts as a transcriptional activator only in the absence of E. coli's preferred sugar, glucose. [ 18 ] When glucose is absent, high levels of the CAP/cAMP complex bind to the CAP binding site, a site between araI1 and araO1. [ 19 ] Binding of CAP/cAMP is responsible for opening up the DNA loop between araI1 and araO2, increasing the binding affinity of the AraC protein for araI2 and thereby promoting RNA polymerase binding to the araBAD promoter to switch on the expression of the araBAD genes required for metabolising L-arabinose. The expression of araC is negatively regulated by its own protein product, AraC. At high AraC levels, the excess AraC binds to the operator of the araC gene, araO1, which physically blocks RNA polymerase from accessing the araC promoter. [ 20 ] Therefore, the AraC protein inhibits its own expression at high concentrations. [ 16 ] The L-arabinose operon has been a focus of research in molecular biology since 1970, and has been investigated extensively at the genetic, biochemical, physiological and biotechnical levels. [ 3 ] The L-arabinose operon is commonly used in protein expression systems, as the araBAD promoter can be used to produce targeted expression under tight regulation. By fusing the araBAD promoter to a gene of interest, the expression of the target gene can be regulated solely by arabinose: for example, the pGLO plasmid contains a green fluorescent protein gene under the control of the PBAD promoter, allowing GFP production to be induced by arabinose.
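The overall regulatory logic described above can be summarized in a short, purely illustrative Python sketch; the function name and its qualitative output labels are hypothetical, not from any published model:

```python
def ara_bad_expression(arabinose: bool, glucose: bool) -> str:
    """Qualitative araBAD output under the AraC/CAP regulation described above."""
    if not arabinose:
        # AraC bridges araO2 and araI1, forming the repressive DNA loop.
        return "off: AraC DNA loop blocks the araBAD promoter"
    if glucose:
        # AraC-arabinose activates, but without CAP-cAMP the extra boost is absent.
        return "low: AraC-arabinose bound at araI1/araI2, no CAP-cAMP"
    # No glucose: cAMP is high, so CAP-cAMP plus AraC-arabinose fully activate.
    return "high: CAP-cAMP and AraC-arabinose recruit RNA polymerase"

for ara in (False, True):
    for glc in (False, True):
        print(f"arabinose={ara}, glucose={glc} -> {ara_bad_expression(ara, glc)}")
```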
https://en.wikipedia.org/wiki/L-arabinose_operon
L-form bacteria, also known as L-phase bacteria, L-phase variants or cell wall-deficient bacteria (CWDB), are growth forms derived from different bacteria. They lack cell walls. [ 1 ] Two types of L-forms are distinguished: unstable L-forms, spheroplasts that are capable of dividing but can revert to the original morphology, and stable L-forms, which are unable to revert to the original bacteria. L-form bacteria were first isolated in 1935 by Emmy Klieneberger-Nobel, who named them "L-forms" after the Lister Institute in London where she was working. [ 2 ] She first interpreted these growth forms as symbionts related to pleuropneumonia-like organisms (PPLOs, later commonly called mycoplasmas). [ 3 ] Mycoplasmas (in current scientific classification, Mollicutes), parasitic or saprotrophic species of bacteria, also lack a cell wall (peptidoglycan/murein is absent). [ 4 ] [ 5 ] Morphologically, they resemble L-form bacteria. Therefore, mycoplasmas were formerly sometimes considered stable L-forms or, because of their small size, even viruses, but phylogenetic analysis has identified them as bacteria that lost their cell walls in the course of evolution. [ 6 ] Both mycoplasmas and L-form bacteria are resistant to penicillin. After the discovery of PPLOs (mycoplasmas/Mollicutes) and L-form bacteria, their mode of reproduction (proliferation) became a major subject of discussion. In 1954, continual observations of live cells using phase-contrast microscopy showed that L-form bacteria (previously also called L-phase bacteria) and pleuropneumonia-like organisms (PPLOs, now mycoplasmas/Mollicutes) do not proliferate by binary fission, but by a uni- or multi-polar budding mechanism. Microphotograph series of growing microcultures of different strains of L-form bacteria, PPLOs and, as a control, a Micrococcus species (dividing by binary fission) were presented. [ 3 ] Additionally, electron microscopic studies were performed. [ 7 ] Bacterial morphology is determined by the cell wall. Since the L-form has no cell wall, its morphology differs from that of the bacterial strain from which it is derived. Typical L-form cells are spheres or spheroids. For example, L-forms of the rod-shaped bacterium Bacillus subtilis appear round when viewed by phase-contrast microscopy or by transmission electron microscopy. [ 8 ] Although L-forms can develop from Gram-positive as well as Gram-negative bacteria, in a Gram stain test the L-forms always colour Gram-negative, due to the lack of a cell wall. The cell wall is important for cell division, which, in most bacteria, occurs by binary fission. This process usually requires a cell wall and components of the bacterial cytoskeleton such as FtsZ. The ability of L-form bacteria and mycoplasmas to grow and divide in the absence of both of these structures is highly unusual, and may represent a form of cell division that was important in early forms of life. This mode of division seems to involve the extension of thin protrusions from the cell's surface, which then pinch off to form new cells. The lack of a cell wall in L-forms means that division is disorganised, giving rise to a variety of cell sizes, from very tiny to very big. [ 1 ] L-forms can be generated in the laboratory from many bacterial species that usually have cell walls, such as Bacillus subtilis or Escherichia coli.
This is done by inhibiting peptidoglycan synthesis with antibiotics or by treating the cells with lysozyme, an enzyme that digests cell walls. The L-forms are generated in a culture medium that has the same osmolarity as the bacterial cytosol (an isotonic solution), which prevents cell lysis by osmotic shock. [ 2 ] L-form strains can be unstable, tending to revert to the normal form of the bacteria by regrowing a cell wall, but this can be prevented by long-term culture of the cells under the same conditions that were used to produce them, allowing wall-disabling mutations to accumulate by genetic drift. [ 9 ] Some studies have identified mutations that occur as these strains are derived from normal bacteria. [ 1 ] [ 2 ] One such point mutation, D92E, in the enzyme yqiD/ispA (P54383), which is involved in the mevalonate pathway of lipid metabolism, increased the frequency of L-form formation 1,000-fold. [ 1 ] The reason for this effect is not known, but it is presumed that the increase is related to this enzyme's role in making a lipid important in peptidoglycan synthesis. Another induction methodology relies on nanotechnology and landscape ecology. Microfluidic devices can be built in order to challenge peptidoglycan synthesis by extreme spatial confinement. After biological dispersal through a constricted (sub-micrometre scale) biological corridor connecting adjacent micro-habitat patches, L-form-like cells can be derived [ 10 ] using a microfluidics-based (synthetic) ecosystem implementing an adaptive landscape [ 11 ] that selects for shape-shifting phenotypes similar to L-forms. Some publications have suggested that L-form bacteria might cause diseases in humans [ 12 ] and other animals, [ 13 ] but, as the evidence linking these organisms to disease is fragmentary and frequently contradictory, this hypothesis remains controversial. [ 14 ] [ 15 ] The two extreme viewpoints on this question are that L-form bacteria are either laboratory curiosities of no clinical significance or important but unappreciated causes of disease. [ 5 ] Research on L-form bacteria is continuing. For example, L-form organisms have been observed in mouse lungs after experimental inoculation with Nocardia caviae, [ 16 ] [ 17 ] and a recent study suggested that these organisms may infect immunosuppressed patients who have undergone bone marrow transplants. [ 18 ] The formation of strains of bacteria lacking cell walls has also been proposed to be important in the acquisition of bacterial antibiotic resistance. [ 19 ] [ 20 ] L-form bacteria may be useful in research on early forms of life, and in biotechnology. These strains are being examined for possible uses in biotechnology as host strains for recombinant protein production. [ 21 ] [ 22 ] [ 23 ] Here, the absence of a cell wall can allow production of large amounts of secreted proteins that would otherwise accumulate in the periplasmic space of bacteria. [ 24 ] [ 25 ] L-form bacteria are seen as persister cells and a source of recurrent infection, and have become of medical interest. [ 26 ]
https://en.wikipedia.org/wiki/L-form_bacteria
L-Selectride is an organoboron compound with the chemical formula Li[(CH3CH2CH(CH3))3BH]. A colorless salt, it is usually dispensed as a solution in THF. As a particularly basic and bulky borohydride, it is used for the stereoselective reduction of ketones. [ 1 ] Like other borohydrides, reductions are effected in two steps: delivery of the hydride equivalent to give the lithium alkoxide, followed by hydrolytic workup. The selectivity of this reagent is illustrated by its reduction of all three methylcyclohexanones to the less stable methylcyclohexanols in >98% yield. Under certain conditions, L-selectride can selectively reduce enones by conjugate addition of hydride, owing to the greater steric hindrance the bulky hydride reagent experiences at the carbonyl carbon relative to the (also-electrophilic) β-position. [ 2 ] L-Selectride can also stereoselectively reduce carbonyl groups in a 1,2-fashion, again due to the steric nature of the hydride reagent. [ 3 ] It reduces ketones to alcohols. [ 4 ] N-Selectride and K-Selectride are related compounds that have sodium and potassium cations, respectively, instead of lithium. These reagents can sometimes be used as alternatives to, for instance, sodium amalgam reductions in inorganic chemistry. [ 5 ]
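The reaction scheme originally accompanying the two-step description above is not reproduced here; a schematic reconstruction for a generic ketone R2C=O (with sec-Bu denoting the sec-butyl groups, and the boron-containing byproduct drawn in simplified form) would be:

$$\mathrm{R_2C{=}O} + \mathrm{Li[(\mathit{sec}\text{-}Bu)_3BH]} \longrightarrow \mathrm{R_2CH{-}OLi} + (\mathit{sec}\text{-}Bu)_3\mathrm{B}$$

$$\mathrm{R_2CH{-}OLi} + \mathrm{H_2O} \longrightarrow \mathrm{R_2CH{-}OH} + \mathrm{LiOH}$$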
https://en.wikipedia.org/wiki/L-selectride
Luitzen Egbertus Jan " Bertus " Brouwer [ a ] (27 February 1881 – 2 December 1966) was a Dutch mathematician and philosopher who worked in topology , set theory , measure theory and complex analysis . [ 2 ] [ 4 ] [ 5 ] Regarded as one of the greatest mathematicians of the 20th century, he is known as one of the founders of modern topology, particularly for establishing his fixed-point theorem and the topological invariance of dimension . [ 6 ] [ 7 ] [ 8 ] Brouwer also became a major figure in the philosophy of intuitionism , a constructivist school of mathematics which argues that math is a cognitive construct rather than a type of objective truth . This position led to the Brouwer–Hilbert controversy , in which Brouwer sparred with his formalist colleague David Hilbert . Brouwer's ideas were subsequently taken up by his student Arend Heyting and Hilbert's former student Hermann Weyl . In addition to his mathematical work, Brouwer also published the short philosophical tract Life, Art, and Mysticism (1905). Brouwer was born to Dutch Protestant parents. [ 9 ] Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The most important were his fixed point theorem , the topological invariance of degree, and the topological invariance of dimension . Among mathematicians generally, the best known is the first one, usually referred to now as the Brouwer fixed point theorem. It is a corollary to the second, concerning the topological invariance of degree, which is the best known among algebraic topologists. The third theorem is perhaps the hardest. Brouwer also proved the simplicial approximation theorem in the foundations of algebraic topology , which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes , of the treatment of general continuous mappings. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences . [ 10 ] He was an Invited Speaker of the ICM in 1908 at Rome [ 11 ] and in 1912 at Cambridge, UK. [ 12 ] He was elected to the American Philosophical Society in 1943. [ 13 ] Brouwer founded intuitionism , a philosophy of mathematics that challenged the then-prevailing formalism of David Hilbert and his collaborators, who included Paul Bernays , Wilhelm Ackermann , and John von Neumann (cf. Kleene (1952), p. 46–59). A variety of constructive mathematics , intuitionism is a philosophy of the foundations of mathematics . [ 14 ] It is sometimes (simplistically) characterized by saying that its adherents do not admit the law of excluded middle as a general axiom in mathematical reasoning, although it may be proven as a theorem in some special cases. Brouwer was a member of the Significs Group . It formed part of the early history of semiotics —the study of symbols—around Victoria, Lady Welby in particular. The original meaning of his intuitionism probably cannot be completely disentangled from the intellectual milieu of that group. In 1905, at the age of 24, Brouwer expressed his philosophy of life in a short tract Life, Art and Mysticism , which has been described by the mathematician Martin Davis as "drenched in romantic pessimism" (Davis (2002), p. 94). Arthur Schopenhauer had a formative influence on Brouwer, not least because he insisted that all concepts be fundamentally based on sense intuitions. 
[ 15 ] [ 16 ] [ 17 ] Brouwer then "embarked on a self-righteous campaign to reconstruct mathematical practice from the ground up so as to satisfy his philosophical convictions"; indeed his thesis advisor refused to accept his Chapter II "as it stands, ... all interwoven with some kind of pessimism and mystical attitude to life which is not mathematics, nor has anything to do with the foundations of mathematics" (Davis, p. 94 quoting van Stigt, p. 41). Nevertheless, in 1908: "After completing his dissertation, Brouwer made a conscious decision to temporarily keep his contentious ideas under wraps and to concentrate on demonstrating his mathematical prowess" (Davis (2000), p. 95); by 1910 he had published a number of important papers, in particular the Fixed Point Theorem. Hilbert—the formalist with whom the intuitionist Brouwer would ultimately spend years in conflict—admired the young man and helped him receive a regular academic appointment (1912) at the University of Amsterdam (Davis, p. 96). It was then that "Brouwer felt free to return to his revolutionary project which he was now calling intuitionism " (ibid). He was combative as a young man. According to Mark van Atten, this pugnacity reflected his combination of independence, brilliance, high moral standards and extreme sensitivity to issues of justice. [ 5 ] He was involved in a very public and eventually demeaning controversy with Hilbert in the late 1920s over editorial policy at Mathematische Annalen , at the time a leading journal. According to Abraham Fraenkel , Brouwer espoused Germanic Aryanness and Hilbert removed him from the editorial board of Mathematische Annalen after Brouwer objected to contributions from Ostjuden . [ 18 ] In later years Brouwer became relatively isolated; the development of intuitionism at its source was taken up by his student Arend Heyting . Dutch mathematician and historian of mathematics Bartel Leendert van der Waerden attended lectures given by Brouwer in later years, and commented: "Even though his most important research contributions were in topology, Brouwer never gave courses in topology, but always on — and only on — the foundations of his intuitionism. It seemed that he was no longer convinced of his results in topology because they were not correct from the point of view of intuitionism, and he judged everything he had done before, his greatest output, false according to his philosophy." [ 19 ] Davis (2002) also offers remarks about his last years.
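For reference, the fixed-point theorem mentioned above admits a concise modern statement (with $D^n$ denoting the closed $n$-dimensional ball):

$$\text{If } f\colon D^n \to D^n \text{ is continuous, then there exists } x_0 \in D^n \text{ with } f(x_0) = x_0.$$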
https://en.wikipedia.org/wiki/L._E._J._Brouwer
The L1 and L2 interpreted languages were developed by Bell Labs in the 1950s to provide floating-point arithmetic capabilities, simplified memory access, and other enhancements for the IBM 650 digital computer, and to allow users to more easily develop application-specific code for these machines. L1 was developed by Michael Wolontis and Dolores Leagus and was released in September 1955. Later, Richard Hamming and Ruth A. Weiss developed the L2 package, which enhanced L1 by providing additional mathematical capabilities tailored to more engineering-oriented applications. L1 and L2 were widely used within Bell Labs, and also by outside users, who usually called them "Bell 1 and Bell 2." According to Bell Labs, "In the late 1950s, at least half the IBM 650s doing scientific and engineering work used either Bell 1 or Bell 2." [ 1 ]
https://en.wikipedia.org/wiki/L1_and_L2_(programming_language)
L4S (for Low Latency, Low Loss and Scalable Throughput) is an IETF network protocol and congestion control technology designed to lower network latency [ 1 ] by reducing bufferbloat throughout the Internet. L4S uses novel congestion control mechanisms to reduce queuing in the network. [ 1 ] It uses Explicit Congestion Notification (ECN) to transmit information about path latency problems, and allows congested nodes to use the ECN bits to send information back to senders that lets them adjust their transmit rate, reducing the need for data buffering within router queues. L4S has the advantage of being an incremental technology that can start to provide latency improvements without having to be adopted throughout the entire Internet. [ 2 ] L4S is specified in RFC 9330. It uses ECT(1), the last codepoint of the Internet Protocol header's ECN field that had not previously been assigned, to signal that traffic is from an L4S-capable sender. [ 3 ] The full set of four ECN codepoints for packets is thus: [ 4 ] Not-ECT (00, sender not ECN-capable), ECT(1) (01, L4S-capable transport), ECT(0) (10, classic ECN-capable transport), and CE (11, congestion experienced). Routers can thus treat L4S traffic differently from non-L4S traffic, knowing that L4S endpoints will respond by throttling back traffic in a more controlled way than would be possible using classic ECN. This is done by treating L4S traffic differently for both queuing and marking. [ 5 ] As of January 2025, Internet service providers had started to roll out L4S in their production networks, with Comcast being an early adopter. [ 6 ] Apple has incorporated L4S support in its newer operating systems since 2023. [ 7 ] Linux support for L4S, in the form of TCP Prague, is available on an experimental basis, and is expected to be merged into the main Linux kernel tree. [ 5 ]
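To make the codepoint layout concrete, here is a minimal Python sketch, with the function name and labels invented for illustration, that classifies a packet by the two ECN bits at the bottom of the IPv4 TOS / IPv6 Traffic Class byte:

```python
# ECN lives in the two least-significant bits of the IPv4 TOS /
# IPv6 Traffic Class octet (RFC 3168; ECT(1) repurposed by L4S).
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def classify_ecn(tos: int) -> str:
    """Label the ECN codepoint carried in a TOS / Traffic Class byte."""
    return {
        NOT_ECT: "not ECN-capable",
        ECT_0: "classic ECN-capable transport",
        ECT_1: "L4S-capable transport",
        CE: "congestion experienced",
    }[tos & 0b11]

print(classify_ecn(0b00000001))  # ECT(1) set -> "L4S-capable transport"
```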
https://en.wikipedia.org/wiki/L4S
Lagos Deep Offshore Logistics Base (LADOL), officially LADOL Free Zone, also known as LADOL Base or by the initials LFZ, is a privately owned industrial Free Zone logistics and engineering facility located on an island in the Port of Apapa, Lagos, Nigeria. LADOL was designed to provide logistics, engineering and other support services to offshore oil & gas exploration and production companies operating in and around West Africa. LADOL's developer, LiLe, began the construction of the logistics and engineering base in 2001 and commenced full operations in 2006. In June 2006, LADOL was designated as a Free Zone pursuant to the Nigeria Export Processing Zones Act No. 63 1992. Completed at a cost of US$150 million, LADOL's initial infrastructure included: a 200 m quay with an 8.5 m draft, a 25-ton/m2 high-load-bearing area and additional 30-ton bollards at either end, able to accommodate up to six supply vessels and three heavy-lift vessels; a hotel; a warehouse; an office complex; roads; water treatment; and underground reticulation. In 2015, with the support of Total Upstream Nigeria Limited, LADOL was further expanded to include a new US$300 million Floating Production Storage and Offloading (FPSO) vessel fabrication and integration facility. The FPSO vessel fabrication and integration facility, currently operated by SHI-MCI FZE, a Nigerian Local Content initiative-driven incorporated joint venture between Samsung Heavy Industries and LADOL's shipyard operator Mega-Construction and Integration FZE, was initiated to fabricate and integrate the Total Egina FPSO in Nigeria and other similar projects expected to be carried out in Africa. The next phase of LADOL's expansion has been reported to include a dry dock that would be the largest in West Africa and attract as many as 100,000 direct and indirect jobs. [ 1 ]
https://en.wikipedia.org/wiki/LADOL
LAMA2 muscular dystrophy (LAMA2-MD) is a genetically determined muscle disease caused by pathogenic mutations in the LAMA2 gene. It is a subtype of a larger group of genetic muscle diseases known collectively as congenital muscular dystrophies. The clinical presentation of LAMA2-MD varies according to the age at presentation. The severe forms present at birth and are known as early-onset LAMA2 congenital muscular dystrophy type 1A, or MDC1A. The mild forms are known as late-onset LAMA2 muscular dystrophy, or late-onset LAMA2-MD. [ 1 ] [ 2 ] The nomenclature LGMDR23 can be used interchangeably with late-onset LAMA2-MD. [ 3 ] Suggestive clinical features include muscular hyperlaxity or hypotonia, growth retardation, progressive spine and joint contractures, and cardiac and respiratory failure. [ 1 ] [ 2 ] By consensus, the term congenital muscular dystrophy generally refers to a diverse group of childhood-onset muscle diseases, usually presenting in the first two years of life and mostly inherited in an autosomal recessive mode. Congenital muscular dystrophies have known phenotype-genotype profiles and produce muscle degenerative pathology. [ 4 ] There are two types of LAMA2 muscular dystrophy (LAMA2-MD). The first is the congenital type, known as early-onset LAMA2 congenital muscular dystrophy type 1A or MDC1A. It presents at birth and has a relatively severe clinical presentation. Characteristically it manifests in muscle weakness, hyperlaxity or hypotonia, respiratory difficulties and developmental delay. [ 1 ] [ 4 ] The second is late-onset LAMA2 muscular dystrophy, or late-onset LAMA2-MD. Its age of presentation ranges from early childhood to adulthood. It usually has a mild clinical presentation, in the form of progressive spine and joint contractures and cardiac and respiratory failure. [ 1 ] Delayed development of motor milestones, as well as loss of ambulatory capacity, is usually more severe in the congenital type 1A (MDC1A). [ 1 ] [ 5 ] Skeletal muscle weakness is a characteristic feature; it is more evident in the proximal muscles of the extremities. Facial and neck weakness have also been reported. [ 6 ] Scoliosis is a side curvature or abnormal deviation of the spine with an element of rotation. Scoliosis is usually rigid and progressive, and may be accompanied by lordosis. [ 7 ] The clinical orthopedic features of congenital type 1A (MDC1A), in terms of type, distribution, laterality, deformity progression, chronological order of muscle and joint involvement, etc., have shown a fairly characteristic pattern. [ 7 ] This is important to the differential diagnosis between LAMA2-MD and other subtypes of congenital muscular dystrophies, among others. [ 7 ] LAMA2-MD, especially MDC1A, usually manifests in progressive contractures of large joints such as the knees, ankles, elbows and hips. Contractures tend to be bilateral, that is, involving both the left and right sides. [ 7 ] Observing the chronological order of development of joint contractures, namely early versus late in the disease course, can offer differential diagnostic clues for congenital muscular dystrophies such as MDC1A and LMNA-related muscular dystrophy, among other genetic muscle diseases. [ 7 ] [ 8 ] [ 9 ] Of note, any unique clinical orthopedic features of LAMA2-MD should be put into context with the other clinical features and the characteristic brain and muscle imaging, muscle immunostaining and genetic testing findings.
An international retrospective early natural history study of LAMA2-MD proposed a classification based on motor or ambulatory capacity, in which patients who attain the ability to sit and remain seated are classified as LAMA2-MD1 or LAMA2-RD1 and those who attain the ability to walk independently are classified as LAMA2-MD2 or LAMA2-RD2. [ 10 ] A study of a large series of LAMA2-MD patients showed that bone mineral density was reduced in all adults and most children. Fragility fractures were reported occasionally. [ 6 ] Respiratory insufficiency can occur in both types of LAMA2-MD. Respiratory tract infections are a cause of death in the congenital type 1A (MDC1A). [ 5 ] [ 11 ] Cardiac involvement in LAMA2-MD may manifest in dilated cardiomyopathy and systolic dysfunction. Cardiac screening and surveillance are important in LAMA2-MD, aimed at timely diagnosis and management of subclinical cardiac involvement. [ 12 ] [ 13 ] Epilepsy is a fairly common manifestation of both types of LAMA2-MD; however, the age at occurrence of the first epileptic fit is earlier in the congenital type 1A (MDC1A). Screening for epilepsy should be included in the workup. Intelligence is usually normal. [ 14 ] [ 15 ] Epilepsy and intellectual disability were associated with motor dysfunction, namely the inability to sit and/or walk. Epilepsy, and to a lesser extent intellectual disability, were also strongly correlated with cortical abnormalities on brain MRI. [ 15 ] LAMA2-MD is caused by pathogenic variants, or mutations, in the LAMA2 gene, which encodes the alpha2 chain of laminin-211 (laminin-alpha2), previously known as laminin type 2 or merosin. Laminin-211 is important to the function and integrity of the sarcolemma of muscle fibers. [ 16 ] Laminin-alpha2 is also present in extra-muscular locations such as the central and peripheral nervous system. [ 17 ] Pathogenic variants of the LAMA2 gene that lead to loss of function are accompanied by complete deficiency of laminin-alpha2 (merosin) and result in a severe clinical picture or phenotype, namely early-onset MDC1A. Pathogenic variants of the LAMA2 gene accompanied by partial deficiency of laminin-alpha2 result in a milder clinical picture, namely late-onset LAMA2 muscular dystrophy (late-onset LAMA2-MD). The disease is inherited in an autosomal recessive mode. [ 2 ] [ 18 ] Correlating the characteristic clinical picture with the specific imaging, laboratory and muscle biopsy findings is essential to the diagnosis of LAMA2-MD. The presence of pathogenic variants in the LAMA2 gene on genetic (DNA) testing of the affected individual confirms the diagnosis of LAMA2-MD. [ 1 ] [ 2 ] [ 15 ] Abnormal white matter signal on brain MRI is a near-universal sign in patients with LAMA2-MD. These white matter abnormalities appear as hyperintense signals on T2-weighted and FLAIR brain MRI images, especially in locations that are normally myelinated in the immature brain, such as the periventricular area. Occasional MRI abnormalities include cortical malformations such as polymicrogyria, lissencephaly and pachygyria. [ 1 ] [ 2 ] [ 15 ] [ 5 ] In LAMA2-MD there seems to be a directly proportional relationship between the magnitude of white matter and cortical abnormalities on brain MRI and the degree of motor dysfunction in terms of the ability to sit and walk. [ 15 ] Muscle MRI, especially whole-body muscle MRI, can provide important diagnostic clues. Some studies have shown a reasonably characteristic pattern of muscle involvement on whole-body muscle MRI in LAMA2-MD patients.
[ 19 ] [ 20 ] This relates to the involvement versus sparing of particular muscles or muscle groups. For example, sparing of the gracilis and sartorius muscles [ 20 ] and the adductor longus muscle [ 19 ] [ 21 ] has been linked to LAMA2-MD. On the other hand, studies showed a specific predilection for involvement of the gluteus maximus and anterior thigh muscles, [ 20 ] [ 21 ] the adductor magnus muscle, [ 19 ] [ 21 ] and the serratus anterior muscle [ 19 ] [ 20 ] in LAMA2-MD, and so forth. Abnormal muscle texture or geometry on muscle MRI, such as the presence of a granular pattern of involvement in a muscle, has been suggested as a diagnostic clue. [ 20 ] Similarly, a homogeneous pattern of involvement of a group of muscles, e.g., the anterior compartment of the thigh, can be used to support the diagnosis. [ 19 ] A homogeneous pattern refers to involvement of all individual muscles of a muscle compartment to the same extent. Moreover, whole-body muscle MRI can be indicative of clinical disease severity and duration of LAMA2-MD, and can also help establish phenotype-genotype correlations. [ 19 ] [ 21 ] However, these muscle MRI features may overlap with those of other subtypes of congenital muscular dystrophy. Additionally, some inconsistencies between the above muscle imaging studies can be noted. Thus, more longitudinal studies with larger cohorts and standardized methodologies are needed to arrive at a more uniform and consistent muscle MRI signature in LAMA2-MD. It is therefore paramount to correlate muscle imaging findings with clinical, neuro-imaging, laboratory and genetic testing findings. [ 20 ] [ 22 ] [ 23 ] There is an inversely proportional relationship between the quantity of laminin-alpha2 (merosin) found on immunohistochemistry and disease severity. That is, a more marked degree of laminin-alpha2 deficiency, e.g., total or near-total deficiency, is associated with more pronounced muscle degenerative pathology such as myofibrosis, necrosis and fiber size variation, and with a more severe clinical picture. A less marked degree of laminin-alpha2 (merosin) deficiency (residual staining) is associated with less pronounced muscle degenerative pathology and a milder clinical picture. Generally, congenital muscular dystrophy type 1A (MDC1A) is known to have a more severe clinical picture than late-onset LAMA2-MD; however, the degree of deficiency of laminin-alpha2 (merosin) on immunohistochemistry in MDC1A can vary. Clinical disease severity associated with total laminin-alpha2 (merosin) deficiency usually manifests in early onset of symptoms, loss of ambulatory capacity and respiratory difficulties. [ 5 ] There is no definite cure available for LAMA2-MD. However, preclinical studies on experimental animal models of laminin-alpha2-chain-deficient congenital muscular dystrophy are showing favorable, if early, results. Generally, these preclinical studies are geared toward investigating the various factors behind disease initiation and progression, and toward exploring potential ameliorating or curative therapies. [ 24 ] Some preclinical studies focus on combating substances that regulate and promote muscle fibrosis in the pathogenesis of LAMA2-MD, e.g., TGF-β; this may reduce muscle fibrosis and subsequently enhance healthy muscle architecture. [ 25 ] [ 26 ] [ 27 ] Alternatively, preclinical studies can be geared toward enhancing proteins that are involved in muscle regeneration. Laminin-alpha2 (laminin-211) and the laminin-221 complex are important ligands for the muscle cell receptors integrin-α7β1 and α-dystroglycan.
In LAMA2-CMD, the laminin-alpha2 deficiency results in malfunctioning or downregulation of integrin-α7β1 and α-dystroglycan. This disrupts the proper linkage between the basal lamina and the muscle cell membrane, and consequently the contractile mechanism is disrupted. Integrin-α7β1 is important to satellite cell function, and to myoblast adhesion and viability; it is thus an important contributor to skeletal muscle regeneration. Cell therapies that compensate for the deficiency or downregulation of integrin-α7β1 have the potential to delay or control the muscle degenerative process and preserve muscle architecture in LAMA2-CMD patients. Additionally, the use of laminin-111 treatment in experimental mouse models of LAMA2-CMD has shown satisfactory results in terms of increased life expectancy, muscle function and regeneration. [ 28 ] Currently, treatment is mainly supportive and palliative. It is directed at anticipating and preventing or alleviating the systemic complications associated with disease progression. This refers to management of the respiratory, cardiac, orthopedic and rehabilitative, central nervous system (e.g., epilepsy), gastrointestinal and other aspects of the disease. [ 4 ] Prognosis depends on the subtype of LAMA2-MD. Nearly all children with early-onset or congenital muscular dystrophy type 1A (MDC1A) are unable to walk independently; nevertheless, children with MDC1A are usually able to sit. In contrast, patients with late-onset LAMA2-MD are usually able to walk independently. Of note, in both types of LAMA2-MD developmental motor milestones are delayed. Additionally, the prognosis depends on the degree of surveillance and supportive care that patients receive with regard to the multisystem manifestations and potential complications of LAMA2-MD. This refers to prompt and timely management of the orthopedic, cardiopulmonary, epileptic and gastrointestinal systems, among others. The multisystem manifestations may affect the quality of life of patients with LAMA2-MD. [ 4 ] It is estimated that congenital muscular dystrophies occur in between 0.563 per 100,000 (in Italy) [ 29 ] and 2.5 per 100,000 (in western Sweden). [ 30 ] The prevalence data on congenital muscular dystrophy type 1A (MDC1A) vary by geographic location or population. [ 29 ] [ 31 ] For example, in the United Kingdom MDC1A constituted about 37% of all congenital muscular dystrophy subtypes, making it the most common subtype. [ 31 ] In Qatar, MDC1A constituted 48% of congenital muscular dystrophy subtypes, with an estimated point prevalence of 0.8 per 100,000, in a patient cohort from the Gulf and Middle East. [ 32 ] In contrast, in Australia it constituted 16% of all congenital muscular dystrophy subtypes, making it the third most common subtype. [ 33 ] A scoping review of the clinical orthopedic manifestations of congenital muscular dystrophy subtypes reported that the most common subtype was MDC1A, accounting for 37% of the total study sample. [ 7 ]
https://en.wikipedia.org/wiki/LAMA2_related_congenital_muscular_dystrophy
LANDR Audio is a cloud-based music creation platform developed by MixGenius, an artificial intelligence company based in Montreal, Quebec. Since launching with its flagship automated mastering service in 2014, LANDR has expanded its offerings to include distribution services, a music samples library, virtual studio technology (VSTs) and plug-ins, a service marketplace for musicians, and online video conferencing. MixGenius launched an automated mastering service in 2014 under the name LANDR, meant to represent the left and right audio channels. The engine, developed over several years, was built by analyzing thousands of mastered tracks and by researching and analyzing the workflows of mastering engineers. The engine performs the standard mastering processes, such as equalization, dynamic compression, audio excitement or saturation, and limiting/maximizing. [ 1 ] Under CEO Pascal Pilon, the company, now mainly referred to as LANDR Audio, continues to add services to its platform with the goal of bridging the gap between DIY musicians and the professional music market. LANDR has also created educational materials to help musicians improve their music production skills; this educational content is disseminated through its blog, social media, and YouTube channel. Traditional music mastering is a post-production process by which a mastering engineer cleans up and normalizes an audio track to achieve a uniform and consistent master recording from which copies can be reliably made. The LANDR AI engine recreates this process to produce release-ready masters that conform to both physical and digital distribution quality standards. The LANDR engine analyzes uploaded tracks and creates a mastering chain catered to the style and genre of each track. Users can then use presets or choose to customize their masters using various settings and features. Users can also choose the file format of their final master or master in batches for consistent sound across multi-track releases. The engine offers various output formats, though WAV is the standard choice for music distribution. LANDR Distribution allows users to release and monetize their music on digital streaming platforms like Spotify, Apple Music, and TikTok. LANDR users can currently distribute to 70 digital service providers (DSPs) and aggregators, giving access to over 150 digital streaming stores and platforms. LANDR has also been named a preferred partner of both Apple Music and Spotify since 2020. The curated library hosts over one million samples from various third-party providers and is updated weekly with new content. The Samples marketplace also hosts AI-led tools to help users search and preview samples in context. Selector suggests complementary samples to users as they browse. Creator, a browser-based audio interface, allows users to preview and play with samples while they browse the library. Tracks made with Creator can also be shared directly to TikTok. In 2021, LANDR launched a proprietary Samples plug-in to allow users to browse and preview the Samples library from within their preferred digital audio workstation (DAW). LANDR hosts a variety of free, subscription-based, and rent-to-own VSTs and plug-ins for DAWs. The rent-to-own program allows users to pay a monthly fee over a set period of time before owning their product license outright.
Aimed at music professionals and DIY creators alike, the LANDR Network service marketplace allows users to create profiles, upload music, and connect with fellow artists and professionals to buy and sell their services. It also offers online collaboration tools like Sessions, video conferencing that allows users to sync and share DAW audio, and Projects, a collaborative online workspace. LANDR won the Technovation Award at Canadian Music Week in 2014 [ 2 ] and has continued to grow in popularity since. The company was number 18 on CNBC's February 28, 2017 edition of its "Upstart 25" list. [ 3 ] In a Pitchfork feature about mastering, Jordan Kisner noted that responses from users of LANDR were mixed, with some finding the site's technology not "flexible or intelligent" and observing that "You get what you pay for: a computer algorithm, rather than a live engineer with taste and experience." [ 4 ] However, despite hesitancy from the community to embrace artificial intelligence in music production spaces, LANDR received positive feedback from industry figures such as Bob Weir (Grateful Dead), [ 5 ] Tiga, [ 6 ] and Nas. [ 7 ] Notably, the engine was used by Gwen Stefani's team at the 2016 Grammys, [ 8 ] cementing its place as a professional audio production tool.
https://en.wikipedia.org/wiki/LANDR
The Lincoln Adaptable Real-time Information Assurance Testbed (LARIAT) is a physical [ 1 ] computing platform developed by the MIT Lincoln Laboratory as a testbed for network security applications. [ 2 ] Use of the platform is restricted to the United States military, though some academic organizations can also use it under certain conditions. [ 3 ] LARIAT was designed to help with the development and testing of intrusion detection (ID) and information assurance (IA) technologies. [ 4 ] Initially created in 2002, [ 5 ] LARIAT was the first simulated platform for ID testing [ 6 ] and was created to improve upon a preexisting non-simulated testbed built for DARPA's 1998 and 1999 ID analyses. [ 4 ] LARIAT is used by the United States military for training purposes and automated systems testing. [ 7 ] The platform simulates users, reflects vulnerabilities caused by design flaws and user interactions, [ 8 ] and allows for interaction with real-world programs such as web browsers and office suites while simulating realistic user activity on these applications. [ 9 ] These virtual users are managed by Markov models, which allow them to act differently from each other in a realistic way. [ 7 ] This results in a realistic simulation of an active network of users that can then be targeted with malicious attacks, testing the effectiveness of the attacks against network defenses while also testing the effectiveness of intrusion detection methods and software in a simulated real-world environment with legitimate user traffic mixed in among the malicious traffic on the network. This matters because network intrusion detection software cannot as easily find instances of malicious network traffic when it is mixed in with non-malicious network traffic generated by legitimate users of the network. [ 9 ] The traffic generators used by the testbed run on a modified version of Linux, [ 10 ] and a Java-based [ 10 ] graphical user interface called Director [ 7 ] is provided to allow users of the platform to configure and control testing parameters and to monitor the resulting network traffic. [ 4 ] [ 9 ] Cyberwarfare training programs, such as those at the Korea Institute of Military Science and Technology's research center, use the principles and methodologies of the LARIAT platform in the development of simulated threat generators for cyberwarfare training. [ 11 ] In non-security contexts, systems such as artificial intelligence programs build on the principles of the LARIAT platform to study and then simulate real-time user input and activity for automated testing systems. [ 12 ] The MIT Lincoln Laboratory designed the Lincoln Laboratory Simulator (LLSIM) as a fully virtualized Java-based successor to LARIAT that can be run on a single computer without the need for dedicated physical network hardware or expensive testbeds. [ 5 ] [ 13 ] It is not a full replacement for LARIAT, however, as it does not generate low-level data such as network packets. While this makes it more scalable than LARIAT, since it simplifies certain processes, it cannot be used for certain ID testing purposes that LARIAT can be. [ 14 ]
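As an illustration of the Markov-model idea described above, here is a minimal Python sketch; the states, transition probabilities, and function are hypothetical stand-ins, not LARIAT's actual models:

```python
import random

# Hypothetical per-user Markov chain: each state is a user activity, and
# each row gives the probabilities of the next activity.
TRANSITIONS = {
    "idle":   {"idle": 0.5, "browse": 0.3, "email": 0.2},
    "browse": {"browse": 0.6, "idle": 0.2, "email": 0.2},
    "email":  {"email": 0.4, "idle": 0.4, "browse": 0.2},
}

def simulate_user(start, steps, rng):
    """Walk the chain, returning one simulated user action per step."""
    state, trace = start, []
    for _ in range(steps):
        choices, weights = zip(*TRANSITIONS[state].items())
        state = rng.choices(choices, weights=weights)[0]
        trace.append(state)
    return trace

# Differently seeded users behave differently, yet each plausibly.
print(simulate_user("idle", 8, random.Random(1)))
print(simulate_user("idle", 8, random.Random(2)))
```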
https://en.wikipedia.org/wiki/LARIAT
LASNEX is a computer program that simulates the interactions between x-rays and a plasma , along with many effects associated with these interactions. The program is used to predict the performance of inertial confinement fusion (ICF) devices such as the Nova laser or proposed particle beam "drivers". Versions of LASNEX have been used since the late 1960s or early 1970s, and the program has been constantly updated. LASNEX's existence was mentioned in John Nuckolls ' seminal paper in Nature in 1972 that first widely introduced the ICF concept, [ 1 ] saying it was "...like breaking an enemy code. It tells you how many divisions to bring to bear on a problem." [ 2 ] LASNEX uses a 2-dimensional finite element method (FEM) for calculations, breaking down the experimental area into a grid of arbitrary polygons . Each node on the grid records values for various parameters in the simulation . Values for thermal (low-energy) electrons and ions, super-thermal (high-energy and relativistic) electrons, x-rays from the laser, reaction products and the electric and magnetic fields are all stored for each node. The simulation engine then evolves the system forward through time, reading values from the nodes, applying formulas, and writing them back out. The process is very similar to other FEM systems, like those used in aerodynamics . [ 3 ] In spite of numerous problems in very early ICF research, LASNEX offered clear suggestions that slight increases in performance would be all that was needed to reach ignition . [ 2 ] By the late 1970s further work with LASNEX indicated that the issue was not energy as much as the number of laser beams, and suggested that the Shiva laser with 10 kJ of energy in 20 beams would reach ignition. It did not, failing to contain the Rayleigh–Taylor instability . [ 2 ] A review of the progress by The New York Times the following year noted that the system "fell short of the more optimistic estimates by a factor of 10,000". [ 2 ] Real-world results from the Shiva project were then used to tune the LASNEX code, which now predicted that a somewhat larger machine, the Nova laser , would reach ignition. It did not; although Nova demonstrated fusion reactions on a large scale, it was far from ignition. [ 2 ] Nova's results were also used to tune the LASNEX system, which once again predicted that ignition could be reached, this time with a significantly larger machine. Given the past failures and rising costs, the Department of Energy decided to directly test the concept with a series of underground nuclear tests known as "Halite" and "Centurion", depending on which lab was handling the experiment. Halite/Centurion placed typical ICF targets in hohlraums , metal cylinders intended to smooth out the driver's energy so it shines on the fuel target evenly. The hohlraum/fuel assemblies were then placed at various distances from a small atomic bomb, detonation of which released significant quantities of x-rays. These x-rays heated the hohlraums until they glowed in the x-ray spectrum (having been heated "x-ray hot" as opposed to "white hot") and it was this smooth x-ray illumination that started the fusion reactions within the fuel. These results demonstrated that the amount of energy needed to cause ignition was approximately 100 MJ, about 25 times greater than any machine that was being considered.
[ 2 ] The data from Halite/Centurion was used to further tune LASNEX, which then predicted that careful shaping of the laser pulse would reduce the energy required by a factor of about 100, to between 1 and 2 MJ, so a design with a total output of 4 MJ was chosen to be on the safe side. This emerged as the National Ignition Facility concept. [ 2 ] In 2022, NIF achieved ignition, triggering a self-sustaining fusion reaction which released 3.15 MJ of energy using 2.05 MJ of laser energy. [ 4 ] For these reasons, LASNEX is somewhat controversial in the ICF field. [ 5 ] More precisely, LASNEX has generally predicted a device's low-energy behaviour quite closely, but becomes increasingly inaccurate as energy levels are increased. [ 6 ] Advanced 3D versions of the same basic concept, like ICF3D [ 7 ] and HYDRA, [ 8 ] continue to drive modern ICF design, and have likewise failed to closely match experimental performance.
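LASNEX itself is not publicly available, so the following Python sketch is only a generic illustration of the read-update-write time-stepping loop described above. It uses simple heat diffusion on a regular grid as a stand-in for the coupled physics LASNEX evolves, and a finite-difference grid rather than LASNEX's arbitrary-polygon finite elements; all names and parameters are invented for illustration.

    import numpy as np

    # Generic explicit time stepping on a 2-D grid of nodes; plain heat
    # diffusion stands in for the coupled radiation-hydrodynamics that a
    # code like LASNEX actually solves at each node.
    nx, ny, dt, alpha = 50, 50, 0.1, 0.2   # grid size, time step, diffusivity
    temperature = np.zeros((nx, ny))
    temperature[nx // 2, ny // 2] = 1000.0  # initial hot spot

    def step(t):
        """Read node values, apply the update rule, write them back."""
        new = t.copy()
        new[1:-1, 1:-1] += alpha * dt * (
            t[2:, 1:-1] + t[:-2, 1:-1] + t[1:-1, 2:] + t[1:-1, :-2]
            - 4.0 * t[1:-1, 1:-1]
        )
        return new

    for _ in range(100):
        temperature = step(temperature)
    print(f"peak value after 100 steps: {temperature.max():.1f}")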
https://en.wikipedia.org/wiki/LASNEX
The LAS (LASer) format is a file format designed for the interchange and archiving of lidar point cloud data. It is an open, binary format specified by the American Society for Photogrammetry and Remote Sensing (ASPRS). The format is widely used [ 1 ] and regarded as an industry standard for lidar data. [ 2 ] [ 3 ] A LAS file consists of the following overall sections: a public header block, any number of optional variable length records (VLRs), the point data records, and, in later versions of the specification, extended variable length records (EVLRs). A LAS file contains point records in one of the point data record formats defined by the LAS specification; as of LAS 1.4, there are 11 point data record formats (0 through 10) available. All point data records must be of the same format within the file. The various formats differ in the data fields available, such as GPS time , RGB and NIR color, and wave packet information. The 3D point coordinates are represented within the point data records by 32-bit integers, to which a scaling and offset defined in the public header must be applied in order to obtain the actual coordinates. As the number of bytes used per point data record is explicitly given in the public header block, it is possible to add user-defined fields in "extra bytes" to the fields given by the specification-defined point data record formats. A standardized way of interpreting such extra bytes was introduced in the LAS 1.4 specification, in the form of a specific EVLR. [ 4 ] The LAS file format itself is not compressed, but the open source project LASzip [ 6 ] defines the open file format LAZ, [ 7 ] which losslessly compresses LAS data.
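The coordinate scaling described above can be shown concretely. The Python sketch below applies the rule the specification prescribes, actual = (raw × scale) + offset; the header values and field names here are illustrative stand-ins, not bytes read from a real file.

    # Recover real-world coordinates from the raw 32-bit integers stored
    # in a LAS point record: actual = (raw * scale) + offset.
    # Illustrative header values, not parsed from an actual file.
    header = {
        "x_scale": 0.01, "y_scale": 0.01, "z_scale": 0.01,  # centimetre precision
        "x_offset": 500000.0, "y_offset": 4100000.0, "z_offset": 0.0,
    }

    def decode_point(raw_x, raw_y, raw_z, h):
        """Apply the public-header scale and offset to one point record."""
        return (
            raw_x * h["x_scale"] + h["x_offset"],
            raw_y * h["y_scale"] + h["y_offset"],
            raw_z * h["z_scale"] + h["z_offset"],
        )

    print(decode_point(1234567, 7654321, 15250, header))
    # -> (512345.67, 4176543.21, 152.5)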
https://en.wikipedia.org/wiki/LAS_file_format
LAVIS is a software tool created by the TOOL Corporation of Japan. Described as a "layout visualisation platform", it supports a variety of formats such as GDSII , OASIS and LEF / DEF [ 1 ] [ 2 ] and can be used as a platform for common IC processes. [ 2 ]
https://en.wikipedia.org/wiki/LAVIS_(software)
LB buffer , also known as lithium borate buffer , is a buffer solution used in agarose electrophoresis , typically for the separation of nucleic acids such as DNA and RNA . It is made up of lithium borate ( lithium hydroxide monohydrate and boric acid ). LB® is a registered (USPTO) trademark of Faster Better Media LLC, which owns US patent 7,163,610 covering low-conductance lithium borate polynucleotide electrophoresis. Lithium borate buffer has a lower conductivity , produces crisper resolution, and can be run at higher speeds than gels made with TBE or TAE (5-50 V/cm as compared to 5-10 V/cm). At a given voltage, the heat generation and thus the gel temperature is much lower than with TBE/TAE buffers, so the voltage can be increased to speed up electrophoresis and a gel run takes only a fraction of the usual time. [ 1 ] Downstream applications, such as isolation of DNA from a gel slice or Southern blot analysis, work as expected with lithium borate gels. [ 2 ] [ 3 ] SB buffer , containing sodium borate , is similar to lithium borate buffer and has nearly all of its advantages at a somewhat lower cost, but the lithium buffer permits the use of even higher voltages, due to the lower conductivity of lithium ions as compared to sodium ions, and has better resolution for fragments above 4 kb.
https://en.wikipedia.org/wiki/LB_buffer
In chemistry, ligand close packing theory ( LCP theory ), sometimes called the ligand close packing model , describes how ligand – ligand repulsions affect the geometry around a central atom. [ 1 ] It was developed by R. J. Gillespie and others from 1997 onwards [ 2 ] and is said to sit alongside VSEPR theory, [ 1 ] which was originally developed by R. J. Gillespie and R. Nyholm . [ 3 ] The inter-ligand distances in a wide range of molecules have been determined. The example below shows a series of related molecules: [ 4 ] The consistency of the interligand distances (F-F and O-F) in these molecules is striking, and this phenomenon is repeated across a wide range of molecules; it forms the basis for LCP theory. [ 5 ] From a study of known structural data a series of inter-ligand distances has been determined, [ 1 ] and it has been found that there is a constant inter-ligand radius for a given central atom. The table below shows the inter-ligand radius (pm) for some of the period 2 elements: The ligand radius should not be confused with the ionic radius . In LCP theory a lone pair is treated as a ligand. Gillespie terms the lone pair a lone pair domain and states that these lone pair domains push the ligands together until they reach the interligand distance predicted by the relevant inter-ligand radii. [ 1 ] An example demonstrating this is shown below, where the F-F distance is the same in the AF3 and AF4+ species. LCP and VSEPR make very similar predictions as to geometry, but LCP theory has the advantage that its predictions are more quantitative, particularly for the second period elements Be, B, C, N, O and F. Ligand–ligand repulsions are important when [ 1 ]
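The inter-ligand distances on which the model rests follow from elementary geometry: for two ligands bonded to the same central atom, the ligand-ligand separation is fixed by the two bond lengths and the bond angle via the law of cosines. The short Python sketch below illustrates the calculation; the numerical inputs are made up for illustration and are not measured values from the table above.

    import math

    def interligand_distance(r1, r2, angle_deg):
        """Law of cosines: separation of two ligands given the two
        central-atom bond lengths r1, r2 (pm) and the bond angle."""
        theta = math.radians(angle_deg)
        return math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(theta))

    # Illustrative inputs only: two 130 pm bonds at the tetrahedral angle.
    d = interligand_distance(130.0, 130.0, 109.47)
    print(f"ligand...ligand separation: {d:.0f} pm")  # ~212 pm
    # For identical ligands, the inter-ligand radius is half this distance.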
https://en.wikipedia.org/wiki/LCP_theory
An LCR meter is a type of electronic test equipment used to measure the inductance (L), capacitance (C), and resistance (R) of an electronic component . [ 1 ] In the simpler versions of this instrument the impedance was measured internally and converted for display to the corresponding capacitance or inductance value. Readings should be reasonably accurate if the capacitor or inductor device under test does not have a significant resistive component of impedance. More advanced designs measure true inductance or capacitance, as well as the equivalent series resistance of capacitors and the Q factor of inductive components. Usually the device under test (DUT) is subjected to an AC voltage source . The meter measures the voltage across and the current through the DUT. From the ratio of these the meter can determine the magnitude of the impedance. The phase angle between the voltage and current is also measured in more advanced instruments; in combination with the impedance magnitude, the equivalent capacitance or inductance, and resistance, of the DUT can be calculated and displayed. The meter must assume either a parallel or a series model for these two elements (the reactive and resistive parts). An ideal capacitor has no characteristics other than capacitance, but there are no physical ideal capacitors. All real capacitors have a little inductance, a little resistance, and some defects causing inefficiency. These can be seen as inductance or resistance in series with the ideal capacitor or in parallel with it. The same is true of inductors. Even resistors can have inductance (especially if they are wire wound types) and capacitance as a consequence of the way they are constructed. The most useful assumption, and the one usually adopted, is that LR measurements have the elements in series (as is necessarily the case in an inductor's coil) and that CR measurements have the elements in parallel (as is necessarily the case between a capacitor's 'plates'). Leakage is a special case in capacitors, as the leakage is necessarily across the capacitor plates, that is, in parallel. An LCR meter can also be used to measure the inductance variation with respect to the rotor position in permanent magnet machines. (However, care must be taken, as some LCR meters, in particular those intended for electronic component measurements, will be damaged by the EMF generated by turning the rotor of a permanent-magnet motor.) Handheld LCR meters typically have selectable test frequencies of 100 Hz, 120 Hz, 1 kHz, 10 kHz, and, for top-end meters, 100 kHz. The display resolution and measurement range capability will typically change with the applied test frequency, since the circuitry is more or less sensitive for a given component (i.e., an inductor or capacitor) as the test frequency changes. Benchtop LCR meters sometimes have selectable test frequencies of more than 100 kHz, with the high-end Keysight E4982A operating up to 3 GHz. They often include options to superimpose a DC voltage or current on the AC measuring signal. Lower-end meters might offer the possibility to externally supply these DC voltages or currents, while higher-end devices can supply them internally. In addition, benchtop meters typically allow the use of special fixtures (i.e., Kelvin wiring, that is to say, 4-wire connections ) to measure SMD components, air-core coils or transformers. Inductance, capacitance, resistance, and dissipation factor (DF) can also be measured by various bridge circuits . 
They involve adjusting variable calibrated elements until the signal at a detector becomes null, rather than measuring impedance and phase angle. Early commercial LCR bridges used a variety of techniques involving the matching or "nulling" of two signals derived from a single source. The first signal was generated by applying the test signal to the unknown, and the second signal was generated by using a combination of known-value R and C standards. The signals were summed through a detector (normally a panel meter, with or without some level of amplification). When the value of the standards was adjusted until a "null" was observed on the panel meter, indicating zero current, it could be assumed that the current magnitude through the unknown was equal to that of the standard, and that the phase was exactly the reverse (180 degrees apart). The combination of standards selected could be arranged to read out the C and DF of the unknown directly. An example of this type of measuring instrument is the GenRad /IET Labs Model 1620 and 1621 Capacitance Bridges.
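The calculation described above, from impedance magnitude and phase angle to equivalent component values, is straightforward to sketch. The following Python example assumes a simple series model (R = |Z| cos θ for the resistance and X = |Z| sin θ for the reactance, with L = X / 2πf for inductive and C = 1 / 2πf|X| for capacitive reactance); it is a textbook illustration, not the firmware of any particular meter.

    import math

    def series_equivalents(z_mag, phase_deg, freq_hz):
        """Resolve a measured impedance into a series R plus L or C,
        as a basic LCR meter does from |Z| and the phase angle."""
        theta = math.radians(phase_deg)
        r = z_mag * math.cos(theta)   # series resistance (ohms)
        x = z_mag * math.sin(theta)   # reactance (ohms)
        if x >= 0:                    # positive reactance: inductive
            return {"R": r, "L": x / (2 * math.pi * freq_hz)}
        return {"R": r, "C": 1 / (2 * math.pi * freq_hz * abs(x))}

    # Example: 1 kHz test signal, |Z| = 160 ohms, phase -80 degrees
    # (current leads voltage, so the DUT is mostly capacitive).
    print(series_equivalents(160.0, -80.0, 1000.0))
    # -> R of about 27.8 ohms and C of about 1.0e-06 F (roughly 1 uF)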
https://en.wikipedia.org/wiki/LCR_meter
LEAPER (Leveraging endogenous ADAR for programmable editing of RNA) is a genetic engineering technique in molecular biology by which RNA can be edited. The technique relies on engineered strands of RNA to recruit native ADAR enzymes to swap out different compounds in RNA. Developed by researchers at Peking University in 2019, the technique, some have claimed, is more efficient than the CRISPR gene editing technique . [ 1 ] Initial studies have claimed editing efficiencies of up to 80%. As opposed to DNA gene editing techniques (e.g., using CRISPR-Cas proteins to make modifications directly to a defective gene), LEAPER targets the messenger RNA (mRNA) transcribed from the same gene before it is translated into a protein. [ 3 ] Post-transcriptional RNA modification typically involves the strategy of converting adenosine-to-inosine (A-to-I), since inosine (I) demonstrably mimics guanosine (G) during translation into a protein. A-to-I editing is catalyzed by adenosine deaminase acting on RNA (ADAR) enzymes, whose substrates are double-stranded RNAs. [ 4 ] Three human ADAR genes have been identified, with the activity profiles of the ADAR1 (official symbol ADAR) and ADAR2 (ADARB1) proteins the best characterized. LEAPER achieves targeted RNA editing through the use of short engineered ADAR-recruiting RNAs (arRNAs): guide-like RNA strands, typically between 100 and 150 nt in length for high editing efficiency, designed to recruit endogenous ADAR proteins to a target site. [ 2 ] This results in a change in which protein is synthesized during translation . The technique was discovered by a team of researchers at Peking University in Beijing , China. The discovery was announced in the journal Nature Biotechnology in July 2019. [ 5 ] Chinese researchers have utilized LEAPER to restore functional enzyme activity in cells from patients with Hurler syndrome . They have claimed that LEAPER could have the potential to treat almost half of all known hereditary disorders. [ 5 ] Highly specific editing efficiencies of up to 80% can be achieved when the arRNA (e.g., the 151-nt arRNA151) is delivered via a plasmid or viral vector or as a synthetic oligonucleotide , though this efficiency varies significantly across cell types. [ 4 ] Based on these preliminary results, LEAPER may have the most therapeutic promise in conditions where no functional protein is produced but a partial restoration of protein expression would provide therapeutic benefit. For example, LEAPER restored α-L-iduronidase (IDUA) activity in cells from patients with IDUA-defective Hurler syndrome, and a W53X truncation mutant of p53 was edited using arRNA151 to achieve "normal" p53 translation and functional p53-mediated transcriptional responses. [ 4 ] LEAPER is analogous to CRISPR Cas-13 in that it targets RNA before proteins are synthesized. However, LEAPER is simpler and more efficient, as it only requires the arRNA, rather than a Cas protein and a guide RNA. [ 5 ] According to the developers of LEAPER, it has the potential to be easier and more precise than any CRISPR technique. [ 6 ] LEAPER also eliminates health concerns and technical barriers arising from the introduction of exogenous proteins. [ 7 ] It has also been called more ethical as it does not change DNA and thus does not result in heritable changes, unlike methods using CRISPR Cas-9. [ 8 ]
https://en.wikipedia.org/wiki/LEAPER_gene_editing
Leadership in Energy and Environmental Design ( LEED ) is a green building certification program used worldwide. [ 4 ] Developed by the non-profit U.S. Green Building Council (USGBC), it includes a set of rating systems for the design, construction, operation, and maintenance of green buildings , homes, and neighborhoods, which aims to help building owners and operators be environmentally responsible and use resources efficiently. As of 2024 there were over 195,000 LEED-certified buildings and over 205,000 LEED-accredited professionals in 186 countries worldwide. [ 5 ] In the US, the District of Columbia consistently leads in LEED-certified square footage per capita, [ 6 ] followed in 2022 by the top-ranking states of Massachusetts, Illinois, New York, California, and Maryland. [ 6 ] Outside the United States, the top-ranking countries for 2022 were Mainland China, India, Canada, Brazil, and Sweden. [ 7 ] LEED Canada has developed a separate rating system adapted to the Canadian climate and regulations. Many U.S. federal agencies, state and local governments require or reward LEED certification. Incentives can include tax credits, zoning allowances, reduced fees, and expedited permitting. Office, healthcare, and education-related buildings are the most frequent LEED-certified buildings in the US (over 60%), followed by warehouses, distribution centers, retail projects and multifamily dwellings (another 20%). [ 8 ] Studies have found that for-rent LEED office spaces generally have higher rents and occupancy rates and lower capitalization rates. LEED is a design tool rather than a performance-measurement tool and has tended to focus on energy modeling rather than actual energy consumption. [ 9 ] [ 10 ] It has been criticized for a point system that can lead to inappropriate design choices and the prioritization of LEED certification points over actual energy conservation; [ 11 ] [ 12 ] for lacking climate specificity; [ 12 ] for not sufficiently addressing issues of climate change and extreme weather; [ 13 ] and for not incorporating principles of a circular economy . [ 14 ] Draft versions of LEED v5 were released for public comment in 2024, and the final version of LEED v5 is expected to appear in 2025. [ 15 ] It may address some of the previous criticisms. [ 15 ] [ 16 ] [ 17 ] [ 18 ] Despite concerns, LEED has been described as a "transformative force in the design and construction industry". [ 11 ] LEED is credited with providing a framework for green building, expanding the use of green practices and products in buildings, encouraging sustainable forestry, and helping professionals to consider buildings in terms of the well-being of their occupants and as part of larger systems. [ 11 ] In April 1993, the U.S. Green Building Council (USGBC) was founded by Rick Fedrizzi , the head of environmental marketing at Carrier, real estate developer David Gottfried , and environmental lawyer Michael Italiano. Representatives from 60 firms and nonprofits met at the American Institute of Architects to discuss organizing within the building industry to support green building and develop a green building rating system. [ 19 ] [ 20 ] [ 21 ] Also influential early on was architect Bob Berkebile. 
[ 22 ] [ 23 ] Fedrizzi served as the volunteer founding chair of USGBC from 1993 to 2004, and became its CEO in 2004. On November 4, 2016, he was succeeded as president and CEO of USGBC by Mahesh Ramanujam. [ 20 ] [ 25 ] Ramanujam served as CEO until 2021. Peter Templeton became interim president and CEO of USGBC as of November 1, 2021. [ 26 ] [ 27 ] A key player in developing the Leadership in Energy and Environmental Design (LEED) green certification program was Natural Resources Defense Council (NRDC) senior scientist Robert K. Watson . [ 28 ] [ 29 ] It was Watson, sometimes referred to as the "Founding Father of LEED", [ 28 ] who created the acronym. [ 29 ] Over two decades, Watson led a broad-based consensus process, bringing together non-profit organizations, government agencies, architects, engineers, developers, builders, product manufacturers and other industry leaders. The original planning group consisted of Watson, Mike Italiano, architect Bill Reed (founding LEED Technical Committee co-chair 1994–2003), [ 30 ] [ 31 ] [ 32 ] architect Sandy Mendler, [ 30 ] [ 33 ] [ 34 ] builder Gerard Heiber [ 30 ] [ 33 ] [ 35 ] and engineer Richard Bourne. [ 30 ] Tom Paladino and Lynne Barker (formerly King) co-chaired the LEED Pilot Committee [ 31 ] from 1996–2001. [ 36 ] Scot Horst chaired the LEED Steering Committee [ 37 ] beginning in 2005 and was deeply involved in the development of LEED 2009. [ 38 ] Joel Ann Todd took over as chair of the steering committee from 2009 to 2013, working to develop LEED v4 [ 39 ] and introducing social equity credits. [ 40 ] Other steering committee chairs include Chris Schaffner (2019) [ 41 ] and Jennifer Sanguinetti (2020). [ 42 ] Chairs of the USGBC's Energy and Atmosphere Technical Advisory Group for LEED technology have included Gregory Kats . [ 43 ] The LEED initiative has been strongly supported by the USGBC Board of Directors, including Chair of the Board of Directors Steven Winter (1999–2003). [ 44 ] The current chair of the Board of Directors is Anyeley Hallová (2023). [ 45 ] LEED has grown from one standard for new construction to a comprehensive system of interrelated standards covering aspects from the design and construction to the maintenance and operation of buildings. [ 48 ] [ 49 ] LEED has also grown from six committee volunteers to an organization of 122,626 volunteers, professionals and staff. [ 50 ] As of 2023, more than 185,000 LEED projects representing over 28 billion square feet (2.6 × 10⁹ m²) have been proposed worldwide, and more than 105,000 projects representing over 12 billion square feet (1.1 × 10⁹ m²) have been certified in 185 countries. [ 51 ] However, lumber, chemical and plastics trade groups have lobbied to weaken the application of LEED guidelines in several southern states. In 2013, the states of Alabama, Georgia and Mississippi effectively banned the use of LEED in new public buildings, in favor of other industry standards that the USGBC considers too lax. [ 52 ] [ 53 ] [ 54 ] LEED has been described as a target of a type of disinformation attack known as astroturfing , involving "fake grassroots organizations usually sponsored by large corporations". [ 55 ] Unlike model building codes, such as the International Building Code , only members of the USGBC and specific "in-house" committees may add to, subtract from, or edit the standard, subject to an internal review process. 
Proposals to modify the LEED standards are offered and publicly reviewed by USGBC's member organizations, of which there were 4551 as of October 2023. [ 56 ] LEED has evolved since 1998 to more accurately represent and incorporate emerging green building technologies. LEED has developed building programs specific to new construction (NC), core and shell (CS), commercial interiors (CI), existing buildings (EB), neighborhood development (ND), homes (LEED for Homes), retail, schools, and healthcare. [ 57 ] The pilot version, LEED New Construction (NC) v1.0, led to LEED NCv2.0, LEED NCv2.2 in 2005, LEED 2009 ( a.k.a. LEED v3) in 2009, and LEED v4 in November 2013. LEED 2009 was deprecated for new projects registered from October 31, 2016. [ 58 ] LEED v4.1 was released on April 2, 2019. [ 59 ] Draft versions of LEED v5 have been released and revised in response to public comment during 2024. The official final version of LEED v5 is expected to be released in 2025. Future updates to the standard are planned to occur every five years. [ 15 ] LEED forms the basis for other sustainability rating systems such as the U.S. Environmental Protection Agency 's (EPA) Labs21 and LEED Canada. The Australian Green Star is based on both LEED and the UK's Building Research Establishment Environmental Assessment Methodology ( BREEAM ). [ 60 ] LEED 2009 encompasses ten rating systems for the design, construction and operation of buildings, homes and neighborhoods. Five overarching categories correspond to the specialties available under the LEED professional program: Green Building Design & Construction, Green Interior Design & Construction, Green Building Operations & Maintenance, Green Neighborhood Development, and Green Home Design and Construction. [ 61 ] LEED v3 aligned credits across all LEED rating systems, weighted by environmental priority. [ 63 ] It reflects a continuous development process, with a revised third-party certification program and online resources. Under LEED 2009, an evaluated project scores points to a possible maximum of 100 across six categories: sustainable sites (SS), water efficiency (WE), energy and atmosphere (EA), materials and resources (MR), indoor environment quality (IEQ) and design innovation (INNO). Each of these categories also includes mandatory requirements, which receive no points. Up to 10 additional points may be earned: 4 for regional priority credits and 6 for innovation in design. Additional performance categories for residences (LEED for Homes) recognize the importance of transportation access, open space, and outdoor physical activity, and the need for buildings and settlements to educate occupants. [ c ] [ 64 ] [ 65 ] Buildings can qualify for four levels of certification: Certified (40–49 points), Silver (50–59 points), Gold (60–79 points), and Platinum (80 points and above). The aim of LEED 2009 is to allocate points "based on the potential environmental impacts and human benefits of each credit". These are weighed using the environmental impact categories of the EPA's Tools for the Reduction and Assessment of Chemical and Other Environmental Impacts (TRACI) and the environmental-impact weighting scheme developed by the National Institute of Standards and Technology (NIST). [ 67 ] Prior to LEED 2009 evaluation and certification, a building must comply with minimum requirements including environmental laws and regulations, occupancy scenarios, building permanence and pre-rating completion, site boundaries and area-to-site ratios. Its owner must share data on the building's energy and water use for five years after occupancy (for new construction) or date of certification (for existing buildings). 
[ 68 ] The credit weighting process has the following steps: First, a collection of reference buildings are assessed to estimate the environmental impacts of similar buildings. NIST weightings are then applied to judge the relative importance of these impacts in each category. Data regarding actual impacts on environmental and human health are then used to assign points to individual categories and measures. This system results in a weighted average for each rating scheme based upon actual impacts and the relative importance of those impacts to human health and environmental quality. [ 67 ] The LEED council also appears to have assigned credit and measure weightings based upon the market implications of point allocation. [ 67 ] From 2010, buildings can use carbon offsets to achieve green power credits for LEED-NC (new construction certification). [ 69 ] For LEED BD+C v4 credits, the IEQ category addresses thermal , visual, and acoustic comfort as well as indoor air quality . [ 70 ] Laboratory and field research have directly linked occupants' satisfaction and performance to the building's thermal conditions. [ 71 ] Energy reduction goals can be supported while improving thermal satisfaction. For example, providing occupants control over the thermostat or operable windows allows for comfort across a wider range of temperatures. [ 72 ] [ 73 ] On April 2, 2019, the USGBC released LEED v4.1, a new version of the LEED green building program, designed for use with cities, communities and homes. [ 59 ] [ 49 ] However, LEED v4.1 was never officially balloted. [ 15 ] An update to v4, proposed as of November 22, 2022, took effect on March 1, 2024. Any projects that register under LEED v4 after March 1, 2024 must meet these updated guidelines. [ 74 ] In January 2023, USGBC began to develop LEED v5. LEED v5 is the first version of the LEED rating system to be based on the June 2022 Future of LEED principles. [ 75 ] The LEED v5 rating system will cover both new construction and existing buildings. [ 76 ] [ 77 ] [ 78 ] An initial draft version was discussed at Greenbuild 2023. [ 76 ] [ 77 ] [ 78 ] The beta draft of LEED v5 was released for an initial period of public comment on April 3, 2024. [ 15 ] Changes were made in response to nearly 6,000 comments. A second public comment period was opened for the revised version, from September 27 to October 28, 2024. [ 16 ] The official release of the final version of LEED v5 is expected to occur in 2025. Future updates of the certification system are planned to occur every five years. [ 15 ] LEED v5 reorganizes the credits system and prerequisites, and has a greater focus on decarbonization of buildings. The scorecard expresses three global goals of climate action (worth 50% of the certification points), quality of life (25%) and conservation and ecological restoration (25%) in terms of five principles: decarbonization, ecosystems, equity, health and resilience. [ 79 ] [ 80 ] One of the responses to public comments was to emphasize a data-driven approach to Operations and Maintenance by more clearly identifying performance-based credits (80% of points) and decoupling them from strategic credits (20%). [ 16 ] In 2003, the Canada Green Building Council (CAGBC) received permission to create LEED Canada-NC v1.0, which was based upon LEED-NC 2.0. [ 81 ] As of 2021, Canada ranked second in the world (not including the USA) in its number of LEED-certified projects and square feet of space. 
[ 82 ] Buildings in Canada such as Winnipeg's Canadian Museum for Human Rights are LEED certified due to practices including the use of rainwater harvesting , green roofs, and natural lighting. [ 83 ] As of March 18, 2022, the Canada Green Building Council took over direct oversight for LEED™ green building certification of projects in Canada, formerly done by GBCI Canada. CAGBC will continue to work with Green Business Certification Inc. (GBCI) and USGBC while consolidating certification and credentialing for CAGBC's Zero Carbon Building Standards, LEED, TRUE, [ 84 ] and Investor Ready Energy Efficiency (IREE). [ 85 ] IREE is a model supported by CAGBC and the Canada Infrastructure Bank (CIB) for the verification of proposed retrofit projects. [ 86 ] [ 87 ] LEED certification is granted by the Green Building Certification Institute (GBCI), which arranges third-party verification of a project's compliance with the LEED requirements. [ 88 ] The certification process for design teams consists of the design application, under the purview of the architect and the engineer and documented in the official construction drawings, and the construction application, under the purview of the building contractor and documented during the construction and commissioning of the building. [ 89 ] A fee is required to register the building, and to submit the design and construction applications. Total fees are assessed based on building area, ranging from a minimum of $2,900 to over $1 million for a large project. [ 90 ] "Soft" costs – i.e., added costs to the building project to qualify for LEED certification – may range from 1% to 6% of the total project cost. The average cost increase was about 2%, or an extra $3–$5 per square foot. [ 91 ] The application review and certification process is conducted through LEED Online, USGBC's web-based service. The GBCI also utilizes LEED Online to conduct its reviews. [ 92 ] Applicants have the option of achieving credit points by building energy models. [ d ] One model represents the building as designed, and a second model represents a baseline building in the same location, with the same geometry and occupancy. Depending on location (climate) and building size, the standard provides requirements for heating, ventilation and air-conditioning (HVAC) system type, and wall and window definitions. This allows for a comparison with emphasis on factors that heavily influence energy consumption when considering design decisions. [ 93 ] [ 94 ] The LEED for Homes rating system was first piloted in 2005. [ 95 ] It has been available in countries including the U.S., [ 96 ] Canada, [ 97 ] Sweden, [ 98 ] and India. [ 99 ] LEED for Homes projects are low-rise residential buildings. [ 100 ] The process of the LEED for Homes rating system differs significantly from the LEED rating system for new construction. [ 101 ] Unlike LEED, LEED for Homes requires an on-site inspection. [ 102 ] LEED for Homes projects are required to work with either an American [ 103 ] or a Canadian provider organization [ 104 ] and a green rater. The provider organization helps the project through the process while overseeing the green raters, individuals who conduct two mandatory site inspections: the thermal bypass inspection and the final inspection. [ 105 ] The provider and rater assist in the certification process but do not themselves certify the project. [ 102 ] In addition to certifying projects pursuing LEED, USGBC's Green Business Certification Inc. 
(GBCI) offers various accreditations to people who demonstrate knowledge of the LEED rating system, including LEED Accredited Professional (LEED AP), LEED Green Associate, and LEED Fellow. [ 106 ] [ 107 ] The Green Building Certification Institute (GBCI) describes its LEED professional accreditation as "demonstrat[ing] current knowledge of green building technologies, best practices" and the LEED rating system, to assure the holder's competency as one of "the most qualified, educated, and influential green building professionals in the marketplace." [ 108 ] Critics of LEED certification such as Auden Schendler and Randy Udall have pointed out that the process is slow, complicated, and expensive. In 2005, they published an article titled "LEED is Broken; Let's Fix It", in which they argued that the certification process "makes green building more difficult than it needs to be" and called for changes "to make LEED easier to use and more popular" to better accelerate the transition to green building. [ 109 ] Schendler and Udall also identified a pattern which they call "LEED brain", in which participants may become focused on "point mongering" and pick and choose design elements that don't actually go well together or don't fit local conditions, to gain points. The public relations value of LEED certification begins to drive the development of buildings rather than focusing on design. They give the example of debating whether to add a reflective roof, used to counter "heat island" effects in urban areas, to a building high in the Rocky Mountains. [ 109 ] [ 110 ] : 230 A 2012 USA Today review of 7,100 LEED-certified commercial buildings found that designers tended to choose easier points, such as using recycled materials, rather than more challenging ones that could increase the energy efficiency of a building. [ 11 ] Critics such as David Owen and Jeff Speck also point out that LEED certification focuses on the building itself, and does not take into account factors such as the location in which the building stands, or how employee commutes may be affected by a relocation. In Green Metropolis (2009), Owen discusses an environmentally-friendly building in San Bruno, California , built by Gap Inc. , which was located 16 miles (26 km) from the company's corporate headquarters in downtown San Francisco , and 15 miles (24 km) from Gap's corporate campus in Mission Bay . Although the company added shuttle buses between buildings, "no bus is as green as an elevator". [ 110 ] : 232–33 Similarly, in Walkable City (2013), Jeff Speck describes the relocation of the Environmental Protection Agency ' s Region 7 Headquarters from downtown Kansas City, Missouri , to a LEED-certified building 20 miles (32 km) away in the suburb of Lenexa, Kansas . Kaid Benfield of the Natural Resources Defense Council estimated that the carbon emissions associated with the additional miles driven were almost three times higher than before, a change from 0.39 metric tons per person per month to 1.08 metric tons of carbon dioxide per person per month. Speck writes that "The carbon saved by the new building's LEED status, if any, will be a small fraction of the carbon wasted by its location". [ 111 ] Both Speck and Owen make the point that a building-centric standard that doesn't consider location will inevitably undervalue the benefits of people living closer together in cities, compared to the costs of automobile-oriented suburban sprawl. 
[ 111 ] [ 110 ] : 221–35 LEED is a design tool and as such has focused on energy modeling, rather than being a performance-measurement tool that measures actual energy consumption. [ 9 ] [ 11 ] [ 12 ] LEED uses modeling software to predict future energy use based on intended use. Buildings certified under LEED do not have to prove energy or water efficiency in practice to receive LEED certification points. This has led to criticism of LEED's ability to accurately determine the efficiency of buildings, [ 11 ] and concerns about the accuracy of its predictive models. [ 112 ] [ 113 ] [ 114 ] Research papers provide most of what is known about the performance and effectiveness of LEED models and buildings. Much of the available research predates 2014, and therefore applies to buildings that were designed under early versions of the LEED rating and certification systems, LEED v3 (2009) or earlier. Research papers have tended to address performance and effectiveness of LEED in two credit category areas: energy [ 115 ] (EA) and indoor environment quality (IEQ). [ 116 ] Many early analyses should be considered at best preliminary. [ 115 ] [ 117 ] Studies should be repeated with longer data history and larger building samples, include newer LEED certified buildings, and clearly identify green-building rating schemes and certification levels of individual buildings. Buildings may also need to be grouped according to location, since local conditions and regulation may influence building design and confound assessment results. [ 118 ] [ 115 ] In 2018, Pushkar examined LEED-NC 2009 (v3) Certified-level projects from countries in northern (Finland, Sweden) and southern (Turkey, Spain) regions of Europe to see how different types of credits are understood and applied. Pushkar found that credit achievements were similar within regions and countries for Indoor Environmental Quality (EQ), Materials and Resources (MR), Sustainable Sites (SS), and Water Efficiency (WE), but differed for Energy and Atmosphere (EA). Sustainable Sites (SS) and Water Efficiency (WE) were high achievement areas, scoring 80–100% and 70–75%; Indoor Environmental Quality was intermediate (40–60%); and Materials and Resources (MR) was low (20–40%). Energy and Atmosphere (EA) was intermediate (60–65%) in northern Europe, and low (40%) in southern Europe. These results reflect the extent to which different credits have been chosen by modellers. [ 118 ] [ 119 ] Because LEED focuses on the design of the building and not on its actual energy consumption, it has been suggested that LEED buildings should be tracked to discover whether the potential energy savings from the design are being realized in practice. [ 120 ] In 2009, architectural scientist Guy Newsham (et al.) of the National Research Council of Canada (NRC) re-analyzed a dataset of 100 LEED certified (v3 or earlier version) buildings. [ 115 ] The data included only "medium use" buildings, and did not include 21 laboratories, data centers and supermarkets which were expected to have higher energy activity. Researchers further attempted to match each building with a conventional building within the Commercial Building Energy Consumption Survey (CBECS) database according to building type and occupancy. [ 115 ] On average, the LEED buildings consumed 18 to 39% less energy by floor area than the conventional buildings. However, 28 to 35% of LEED-certified buildings used more energy than their conventional counterparts. 
[ 115 ] [ 65 ] The paper found no correlation between the number of energy points achieved or LEED certification level and measured building performance. [ 115 ] In 2009 physicist John Scofield published an article in response to Newsham et al., analyzing the same database of LEED buildings and arriving at different conclusions. [ 121 ] Scofield criticized the earlier analysis for focusing on energy per floor area instead of total energy consumption. Scofield considered source energy [ 122 ] (accounting for energy losses during generation and transmission) as well as site energy , and used area-weighted energy use intensities (EUIs) (energy per unit area per year) when comparing buildings, to account for the fact that larger buildings tend to have larger EUIs. [ 121 ] Scofield concluded that, collectively, the LEED-certified buildings showed no significant source energy consumption savings or greenhouse gas emission reductions when compared to non-LEED buildings, although they did consume 10–17% less site energy. [ 121 ] Scofield notes the difficulties of building analysis, given both the lack of a randomly selected sample of LEED buildings, and the diversity of factors involved when selecting a comparison group of non-LEED buildings. In 2013 Scofield identified 21 LEED-certified New York City office buildings with publicly available energy performance data for 2011, out of 953 office buildings in New York City with such data. [ 123 ] Results differed with certification level. LEED-Gold buildings were found to use 20% less source energy than conventional buildings. However, buildings at the Silver and Certified levels used 11 to 15% more source energy, on average, than conventional buildings. (Data was not available for Platinum-level buildings.) [ 123 ] An analysis of 132 LEED buildings based on municipal energy benchmarking data from Chicago in 2015 showed that LEED-certified buildings used about 10% less energy on site than comparable conventional buildings. However, the study did not show differences in use of source energy. [ 65 ] [ 124 ] In 2014, architect Gwen Fuertes and engineer Stefano Schiavon [ 125 ] published the first study to analyze plug loads using LEED-documented data from certified projects. The study compared plug load assumptions made by 92 energy modeling practitioners against ASHRAE and Title 24 requirements, and evaluated the plug load calculation methodology used by 660 LEED-CI [ 126 ] and 429 LEED-NC [ 127 ] certified projects. They found that energy modelers only considered the energy consumption of predictable plug loads, such as refrigerators, computers and monitors. Overall the results suggested a disconnect between assumptions in the models and the actual performance of buildings. [ 112 ] [ 113 ] [ 114 ] Energy modeling might be a source of error during the LEED design phase. Engineers Christopher Stoppel and Fernanda Leite evaluated the predicted and actual energy consumption of two twin buildings using the energy model during the LEED design phase and the utility meter data after one year of occupancy. The study's results suggest that mechanical systems turnover and occupancy assumptions differ significantly from predicted to actual values. [ 128 ] In a 2019 review, Amiri et al. suggest that judging energy efficiency based on source energy may not be appropriate where the availability of energy types depends on city council or government policies. 
If some types of source energy are not supported locally, there is no opportunity to choose the types of energy promoted by the LEED scoring system. Amiri emphasizes that many studies have weaknesses due to the lack of randomly selected samples of LEED buildings, and the difficulty of selecting comparison groups of non-LEED buildings. Amiri also notes that the standards for building design have changed significantly over time. For example, newer non-LEED buildings may routinely use features such as high-quality windows which were rarely used in older buildings. Comparisons of LEED and non-LEED buildings therefore need to consider age as well as size, use, occupant behavior, and location aspects such as climate zone. [ 65 ] Zhang et al. (2019) examined renewable energy assessment methods and different assessment systems, and noted that LEED-US addresses management problems at the pre-occupancy phase. [ 129 ] Interest in post-occupancy evaluation (POE), the process of evaluating building performance after occupation, is increasing. This is due in part to concerns about differences between energy models in the design phase and actual use of buildings. POE research emphasizes the need to collect and analyze actual occupancy data from existing buildings, to better understand how people are using spaces and resources. [ 130 ] Asensio and Delmas (2017) carefully matched and compared buildings that did and did not participate in LEED, Energy Star, and Better Buildings Challenge programs in Los Angeles, California. They examined data for monthly energy consumption between 2005 and 2012, for more than 175,000 commercial buildings. Buildings from all three programs displayed "high magnitude" energy savings, ranging from 18–19% for Better Buildings and Energy Star to 30% for LEED-rated buildings. The three programs saved 210 million kilowatt-hours, equal to 145 kilotons of CO2 equivalent emissions per year. [ 131 ] The Centers for Disease Control and Prevention (CDC) defines indoor environmental quality (IEQ) as "the quality of a building's environment in relation to the health and wellbeing of those who occupy space within it." [ 132 ] The USGBC includes the following considerations for attaining IEQ credits: indoor air quality , the level of volatile organic compounds (VOC), lighting, thermal comfort , and daylighting and views. In consideration of a building's indoor environmental quality, published studies have also included factors such as: acoustics, building cleanliness and maintenance, colors and textures, workstation size, ceiling height, window access and shading, surface finishes, furniture adaptability and comfort. [ 133 ] [ 116 ] [ 134 ] The most widely used method for post-occupancy evaluation (POE) in IEQ-related studies is occupant surveys. [ 130 ] In 2013, architectural physicist Sergio Altomonte and Stefano Schiavon used occupant surveys from the Center for the Built Environment at Berkeley's database [ 135 ] to study IEQ occupant satisfaction in 65 LEED buildings and 79 non-LEED buildings. They analyzed 15 IEQ-related factors including the ease of interaction, building cleanliness, the comfort of furnishing, the amount of light, building maintenance, colors and textures, workplace cleanliness, the amount of space, furniture adjustability, visual comfort, air quality, visual privacy , noise, temperature, and sound privacy. Occupants reported being slightly more satisfied in LEED buildings with the air quality and slightly more dissatisfied with the amount of light. 
Overall, occupants of both LEED and non-LEED buildings had equal satisfaction with the building overall and with the workspace. [ 133 ] The authors noted that the data may not be representative of the entire building stock and a randomized approach was not used in the data assessment. [ 133 ] Newsham et al. (2013) carried out an evaluation using both occupant interviews and physical site measurements. [ 116 ] Field studies and post-occupancy evaluations (POE) were performed in 12 "green" and 12 conventional buildings across Canada and the northern United States. Most but not all of the "green" buildings were LEED-certified. 2545 occupants completed a questionnaire. On-site, 974 randomly selected workstations were measured for thermal conditions, air quality, acoustics, lighting, workstation size, ceiling height, window access and shading, and surface finishes. Responses were positive in the areas of environmental satisfaction, satisfaction with thermal conditions, satisfaction with outside views, aesthetic appearance, reduced disturbance from HVAC noise, workplace image, night-time sleep quality, mood, physical symptoms, and reduced number of airborne particulates. The green buildings were rated more highly and, in the case of airborne particulates, exhibited superior performance compared with the conventional buildings. [ 116 ] Schiavon and Altomonte (2014) [ 136 ] found that occupants have equivalent satisfaction levels in LEED and non-LEED buildings when evaluated independently from the following factors: office type, spatial layout, distance from windows, building size, gender, age, type of work, time at workspace, and weekly working hours. LEED certified buildings may provide higher satisfaction in open spaces than in enclosed offices, in smaller buildings than in larger buildings, and to occupants having spent less than one year in their workspaces rather than to those who have used their workspace longer. This study suggests that the positive value of LEED certification as measured by occupant satisfaction may decrease with time. [ 136 ] In 2015, environmental health scientist Joseph Allen (et al.) [ 137 ] reviewed studies of indoor environmental quality and the potential health benefits of green-certified buildings. He concluded that green buildings provide better indoor environmental quality with direct benefits to the human health of occupants, compared to non-green buildings. Statistically significant measures from different studies included decreased symptoms of sick building syndrome, decreased sick days, decreased respiratory symptoms during the daytime and asthma symptoms at night, and lowered levels of PM2.5, NO2, and nicotine. However, Allen noted that the frequent use of subjective health performance indicators was a limitation of many of the studies reviewed. He proposed a framework to encourage the use of direct, objective, and leading "Health Performance Indicators" in building assessment. [ 137 ] The daylight credit was updated in LEED v4 to include a simulation option for daylight analysis that uses spatial daylight autonomy ( SDA ) and annual sunlight exposure ( ASE ) metrics to evaluate daylight quality in LEED projects. SDA is a metric that measures the annual sufficiency of daylight levels in interior spaces and ASE describes the potential for visual discomfort by direct sunlight and glare. These metrics are approved by the Illuminating Engineering Society of North America (IES) and codified in the LM-83-12 standard. 
For SDA, LEED recommends a minimum of 300 lux for at least 50% of total occupied hours of the year for at least 55% of the occupied floor area. The threshold recommended by LEED for ASE is that no more than 10% of regularly occupied floor area can be exposed to more than 1000 lux of direct sunlight for more than 250 hours per year. Additionally, LEED requires window shades to be closed when more than 2% of a space is subject to direct sunlight above 1000 lux. According to building scientist Christopher Reinhart, the direct sunlight requirement is a very stringent approach that can discourage good daylight design. Reinhart proposed the application of the direct sunlight criterion only in spaces that require stringent control of sunlight (e.g. desks, white boards, etc.). [ 138 ] In 2024, Kent et al. compared satisfaction of people in buildings that had received either WELL certification or LEED certification. Ratings of buildings certified with WELL and LEED were matched on six dimensions: award level, years in building, time in workspace, type of workspace, proximity to a window, and floor height. Satisfaction with the overall building and one's workspace were high under both rating systems. However, satisfaction with LEED-certified buildings (73% and 71%) tended to be lower than that for WELL-certified buildings (94% and 87%). This may be because WELL is a human-centered standard for building design that focuses primarily on comfort, health, and well-being. In contrast, only 10% of the credits in LEED certification relate to indoor environmental quality (IEQ). Differences may also reflect the age of the buildings, which was not matched for in the design. [ 139 ] Water systems involve both water and energy as resources. Outside buildings, the acquisition, treatment, and transportation of water are involved; inside buildings, onsite water treatment, heating, and wastewater treatment are issues. Data on the energy use of specific water and wastewater systems is becoming increasingly available, and energy use can sometimes be estimated from public sources. LEED v4 includes a number of credits related to Water Efficiency (WE). Points are awarded for Outdoor Water Use Reduction, Indoor Water Use Reduction and Building-level Water Metering based on predetermined percentage reductions in water or energy use. [ 140 ] [ 141 ] There has been criticism that the LEED rating system is not sufficiently sensitive to local environmental conditions and does not vary enough to reflect them. For example, there are 16 climate zones in California , with unique weather and temperature patterns. The availability of electricity, water and other resources differs widely in different regions, making it important to consider interconnected systems and supply chain issues. Greer et al. (2019) reviewed renewable energy assessment methods and examined the effectiveness of LEED v4 buildings in California. They examined relationships between the climate mitigation points given for water efficiency (WE) and energy efficiency (EA) and used baseline energy and water budgets to calculate the avoided GHG emissions of buildings. Their calculations demonstrate mitigation of expected climate change and indicate high variability in environmental outcomes within the state. [ 140 ] While LEED v4 introduced "Impact Categories" as system goals, Greer suggests that closer linkages are needed between design points and outcomes, and that issues like supply chains, infrastructure, and regionalized variability should be considered. 
They report that impacts like the mitigation of expected climate change pollution can be calculated, and while "LEED points do not equally reward equal impact mitigation", such differences could be reconciled to better align LEED credits and goals. [ 140 ] The rise in LEED certification has also brought forth a new era of construction and building research and ideation. Architects and designers have begun stressing the importance of occupant health over high efficiency within new construction and have been trying to engage in more conversations with health professionals. They also design buildings to perform better and analyze performance data to maintain that performance over time. Another way LEED has affected research is that designers and architects focus on creating spaces that are modular and flexible, to ensure a longer lifespan, while sourcing products that remain resilient through consistent use. [ 142 ] Innovation in LEED architecture is linked with new designs and high-quality construction. One example is the use of nanoparticle technology for consolidation and conservation effects in cultural heritage buildings. [ 143 ] This practice began with the use of calcium hydroxide nanoparticles in porous structures to improve mechanical strength . Titanium, silica, and aluminum-based compounds may also be used. [ 144 ] Material technology and construction techniques could be among the first issues to consider in building design. For the facade of high-rise buildings , such as the Empire State Building , the surface area provides opportunities for design innovation. [ 145 ] VOCs released from construction materials into the air are another challenge to address. [ 146 ] In Milan , a university-corporate partnership sought to produce semi-transparent solar panels to take the place of ordinary windows in glass-facade high-rise buildings. [ 147 ] Similar concepts are under development elsewhere, with considerable market potential. [ 148 ] [ 149 ] The Manzara Adalar skyscraper project in Istanbul , designed by Zaha Hadid , saw considerable innovation through the use of communal rooms, outdoor spaces, and natural lighting [ 150 ] as part of the Urban Transformation Project of the Kartal port region. [ 151 ] [ 152 ] [ 153 ] Other credit areas include: Materials and Resources (MR), and Regional Priority (RP). [ 118 ] When a LEED rating is pursued, the cost of initial design and construction may rise, and manufactured building components that meet LEED specifications may not be widely available. There are also added costs in USGBC correspondence, LEED design-aide consultants, and the hiring of the required Commissioning Authority , which are not in themselves necessary for an environmentally responsible project unless LEED certification is being sought. [ 154 ] Proponents argue that these higher initial costs can be mitigated by the savings incurred over time due to the projected lower-than-industry-standard operational costs typical of a LEED certified building. This life cycle costing is a method for assessing the total cost of ownership, taking into account all costs of acquiring, owning and operating, and the eventual disposal of a building. [ 155 ] [ 156 ] [ 157 ] Additional economic payback may come in the form of employee productivity gains incurred as a result of working in a healthier environment. Studies suggest that an initial up-front investment of 2% extra yields over ten times that initial investment over the life cycle of the building. 
[ 158 ] LEED has been developed and continuously modified by workers in the green building industry, especially in the ten largest metro areas in the U.S.; however, LEED certified buildings have been slower to penetrate small and middle markets. [ 159 ] [ 160 ] From a financial perspective, studies from 2008 and 2009 found that LEED for-rent office spaces generally commanded higher rents and had higher occupancy rates. [ 161 ] [ 162 ] [ 163 ] Analysis of CoStar Group property data estimated the extra cost at 3% for basic certification, with an additional 2.5% for silver-certified buildings. [ 164 ] More recent studies have confirmed earlier findings that certified buildings achieve significantly higher rents, sale prices and occupancy rates as well as lower capitalization rates, potentially reflecting lower investment risk. [ 165 ] Many federal, state, and local governments and school districts have adopted various types of LEED initiatives and incentives. LEED incentive programs can include tax credits, tax breaks, density zoning bonuses, reduced fees, priority or expedited permitting, free or reduced-cost technical assistance, grants and low-interest loans. [ 166 ] [ 167 ] [ 168 ] In the United States, states that have provided incentives include California , New York , [ 28 ] Delaware , Hawaii , Illinois , Maryland , Nevada , New Mexico , North Carolina , Pennsylvania , and Virginia . [ 169 ] Cincinnati , Ohio, provides property tax abatements for newly constructed or rehabilitated commercial or residential properties that earn LEED certification. [ 170 ] Beginning in June 2013, USGBC has offered free LEED certification to the first LEED-certified project in each country that does not yet have one. [ 171 ] [ 172 ] The USGBC and Canada Green Building Council maintain online directories of U.S. LEED-certified and LEED Canada-certified projects. [ 50 ] [ 173 ] In 2012 the USGBC launched the Green Building Information Gateway (GBIG) to connect green building efforts and projects worldwide. It provides searchable access to a database of activities, buildings, places and collections of green building-related information from many sources and programs, including LEED projects. [ 174 ] A number of sites, including the Canada Green Building Council (CaGBC) Project Database, list resources relating to LEED buildings in Canada. [ 175 ] The Philip Merrill Environmental Center in Annapolis, Maryland was the first building to receive a LEED-Platinum rating, under version 1.0. It was recognized as one of the "greenest" buildings in the U.S. at the time it was built in 2001. Sustainability issues ranging from energy use to material selection were given serious consideration throughout the design and construction of this facility. [ 176 ] The first LEED Platinum-rated building outside the U.S. is the CII Sohrabji Godrej Green Business Centre (CII GBC) in Hyderabad, India, [ 177 ] certified in 2003 under LEED version 2.0. [ 178 ] [ 179 ] [ 180 ] [ 181 ] [ 182 ] The Coastal Maine Botanical Gardens Bosarge Family Education Center , completed in 2011, achieved LEED Platinum certification and became known as "Maine's greenest building". [ 183 ] In October 2011 Apogee Stadium at the University of North Texas became the first newly built stadium in the country to achieve Platinum-level certification. 
[ 184 ] In Pittsburgh, Sota Construction Services' corporate headquarters [ 185 ] earned a LEED Platinum rating in 2012 with one of the highest scores by percentage of total points earned in any LEED category, making it one of the top ten greenest buildings in the world. It featured a super-efficient thermal envelope using cob walls, a geothermal well, radiant-heat flooring, a roof-mounted solar panel array, and daylighting features. [ 186 ] When it received LEED Platinum in 2012, Manitoba Hydro Place in downtown Winnipeg was the most energy-efficient office tower in North America and the only office tower in Canada with a Platinum rating. The office tower employs south-facing winter gardens to capture solar energy during the harsh Manitoba winters and uses glass extensively to maximize natural light. [ 187 ] [ 188 ] [ 189 ] Pittsburgh 's 1,500,000-square-foot (140,000 m 2 ) David L. Lawrence Convention Center was the first Gold LEED-certified convention center and the largest "green" building in the world when it opened in 2003. [ 190 ] It earned Platinum certification in 2012, becoming the only convention center with certifications for both the original building and new construction. [ 191 ] The Cashman Equipment building in Henderson, Nevada became the first construction equipment dealership to receive LEED Gold certification in 2009. The headquarters of a Caterpillar dealership, it is the largest LEED industrial complex in Nevada . [ 192 ] [ 193 ] Around 2010, the Empire State Building underwent a $550 million renovation, including $120 million towards energy efficiency and eco-friendliness. [ 195 ] It received a LEED Gold rating in 2011, and at the time was the tallest LEED-certified building in the United States. [ 196 ] In July 2014, the San Francisco 49ers ' Levi's Stadium became the first NFL venue to earn a LEED Gold certification. [ 197 ] The Minnesota Vikings ' U.S. Bank Stadium equaled this feat with a Gold certification in Building Design and Construction in 2017, as well as a Platinum certification in Operations and Maintenance in 2019, a first for any professional sports stadium. [ 198 ] In San Francisco's Presidio , the Letterman Digital Arts Center earned a Gold certification in 2013. It was built almost entirely from the recycled remains of the Letterman Army Hospital , which previously occupied the site. [ 199 ] Although originally constructed in 1973, Willis Tower , a commercial office building located in Chicago, adopted and implemented a new set of sustainable practices in 2018, earning the property LEED Gold certification under the LEED for Existing Buildings: O&M™ rating system and making it the tallest LEED-certified building in the United States. [ 200 ] In September 2012, The Crystal in London became the world's first building awarded both LEED Platinum and BREEAM Outstanding status. It generates its own energy using solar power and ground-source heat pumps and utilizes extensive KNX technologies to automate the building's environmental controls. [ 201 ] In Pittsburgh , the visitor center of Phipps Conservatory & Botanical Gardens received Silver certification, [ 202 ] its Center for Sustainable Landscapes received Platinum certification and fulfilled the Living Building Challenge for net-zero energy , [ 203 ] and its greenhouse facility received Platinum certification. It may be the only greenhouse in the world to have achieved such a rating.
[ 204 ] Torre Mayor , at one time the tallest building in Mexico, achieved LEED Gold certification as an existing building [ 205 ] and eventually reached Platinum certification under LEED v4.1. [ 206 ] [ 207 ] The building is designed to withstand magnitude-8.5 earthquakes and has upgraded many of its systems, including air handling and water treatment. [ 205 ] [ 207 ] In 2017, [ 208 ] Kaiser Permanente , the largest integrated health system in the United States, [ 60 ] opened California's first LEED Platinum certified hospital, the Kaiser Permanente San Diego Medical Center. By 2020, Kaiser Permanente owned 40 LEED certified buildings. [ 208 ] Its construction of LEED buildings was one of multiple initiatives that enabled Kaiser Permanente to report net-zero carbon emissions in 2020. [ 60 ] As of 2022, the University of California, Irvine had 32 LEED-certified buildings across its campus: 21 were LEED Platinum certified and 11 were LEED Gold. [ 209 ] Extreme structures that have received LEED certification include: the Amorepacific Headquarters in Seoul by David Chipperfield Architects ; [ 210 ] the SFMOMA expansion by Snøhetta in San Francisco , California; [ 211 ] the Centro Botín in Santander , Spain, by Renzo Piano Building Workshop in collaboration with Luis Vidal + Architects; [ 212 ] and an office building in London by Allford Hall Monaghan Morris . [ 213 ]
https://en.wikipedia.org/wiki/LEED
The Low Energy Gamma-Ray Imager ( LEGRI ) was a payload on the first mission of the Spanish MINISAT platform, active from 1997 to 2002. The objective of LEGRI was to demonstrate the viability of HgI₂ detectors for space astronomy , providing imaging and spectroscopic capabilities in the 10–100 keV range. LEGRI was successfully launched on April 21, 1997, on a Pegasus XL rocket; the instrument was activated on May 19, 1997, and remained active until February 2002. [ 1 ] The LEGRI system included the Detector Unit, Mask Unit, Power Supply, Digital Processing Unit, Star Sensor, and Ground Support Unit. [ 1 ] The LEGRI consortium included: [ 1 ]
https://en.wikipedia.org/wiki/LEGRI
The LEO ( Lyons Electronic Office ) was a series of early computer systems created by J. Lyons and Co. The first in the series, the LEO I, was the first computer used for commercial business applications. The prototype LEO I was modelled closely on the Cambridge EDSAC . Its construction was overseen by Oliver Standingford, Raymond Thompson and David Caminer of J. Lyons and Co. LEO I ran its first business application in 1951. In 1954 Lyons formed LEO Computers Ltd to market LEO I and its successors LEO II and LEO III to other companies. LEO Computers eventually became part of English Electric LEO Computers (EEL), then English Electric Leo Marconi (EELM), then English Electric Computers (EEC), where the same team developed the faster LEO 360 and even faster LEO 326 models. It then passed to International Computers Limited (ICL) and ultimately Fujitsu . LEO series computers remained in use until 1981. J. Lyons and Co. was one of the UK's leading catering and food manufacturing companies in the first half of the 20th century. In 1947, two of its senior managers, Oliver Standingford and Raymond Thompson, were sent to the United States to look at new business methods developed during World War II . During the visit, they met Herman Goldstine , one of the original developers of ENIAC , the first general-purpose electronic computer. Standingford and Thompson saw the potential of computers to help solve the problem of administering a major business enterprise. They also learned from Goldstine that, back in the UK, Douglas Hartree and Maurice Wilkes were building another such machine, the pioneering EDSAC computer, at the University of Cambridge . [ 1 ] On their return to the UK, Standingford and Thompson visited Hartree and Wilkes in Cambridge and were favourably impressed with their technical expertise and vision. Hartree and Wilkes estimated that EDSAC was 12–18 months from completion, but said that this interval could be shortened by additional funding. Standingford and Thompson wrote a report to the Lyons board recommending that Lyons should acquire or build a computer to meet its business needs. The board agreed that, as a first step, Lyons would provide Hartree and Wilkes with £2,500 for the EDSAC project, and would also provide them with the services of a Lyons electrical engineer, Ernest Lenaerts. EDSAC was completed and ran its first program in May 1949. [ 2 ] Following the successful completion of EDSAC, the Lyons board agreed to start the construction of their own machine, expanding on the EDSAC design. The LEO computer room, which took up around 2,500 square feet of floor space, was at Cadby Hall in Hammersmith. [ 3 ] The Lyons machine was christened Lyons Electronic Office, or LEO. On the recommendation of Wilkes, Lyons recruited John Pinkerton , a radar engineer and research student at Cambridge, as team leader for the project. Lenaerts returned to Lyons to work on the project, and Wilkes provided training for Lyons engineer Derek Hemy, who would be responsible for writing LEO's programs. On 15 February 1951 the computer, carrying out a simple test program, was shown to HRH Princess Elizabeth . [ 4 ] The first business application to be run on LEO was Bakery Valuations, which computed the costs of ingredients used in bread and cakes. [ 5 ] This was successfully run on 5 September 1951, [ 4 ] and LEO took over Bakery Valuations calculations completely on 29–30 November 1951.
[ 6 ] [ 4 ] Mary Coombs was employed in 1952 as the first female programmer to work on LEO, and as such she is recognized as the first female commercial programmer. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Five files of archive material on the LEO Computer patent are held at the British Library and can be accessed through the British Library Archives catalogue. [ 12 ] LEO I's clock speed was 500 kHz, with most instructions taking about 1.5 ms to execute. [ 13 ] [ 14 ] [ 15 ] To be useful for business applications, the computer had to be able to handle a number of data streams, input and output, simultaneously. Therefore, its chief designer, John Pinkerton , designed the machine to have multiple input/output buffers . In the first instance, these were linked to fast paper tape readers and punches, fast punched card readers and punches, and a 100-line-per-minute tabulator. Later, other devices, including magnetic tape, were added. Its ultrasonic delay-line memory , based on tanks of mercury , held 2K (2048) 35-bit words (i.e., 8¾ kilobytes ; the arithmetic is checked in the sketch below) and was four times as large as that of EDSAC. The systems analysis was carried out by David Caminer . [ 16 ] Lyons used LEO I initially for valuation jobs, but its role was extended to include payroll , inventory , and so on. One of its early tasks was the processing of daily orders, which were phoned in every afternoon by the shops and used to calculate the overnight production requirements, assembly instructions, delivery schedules, invoices, costings, and management reports. This was the first instance of an integrated management information system. [ 17 ] The LEO project was also a pioneer in outsourcing : in 1956, Lyons started doing the payroll calculations for Ford UK and others on the LEO I machine. The success of this led to the company dedicating one of its LEO II machines to bureau services. Later, the system was used for scientific computations as well. Met Office staff used a LEO I before the Met Office bought its own computer, a Ferranti Mercury , in 1959. [ 18 ] In 1954, with the decision to proceed with LEO II and interest from other commercial companies, Lyons formed LEO Computers Ltd. The first LEO III was completed in 1961; it was a solid-state machine with ferrite core memory and a 13.2 μs cycle time. [ 19 ] It was microprogrammed and was controlled by a multitasking "Master program" operating system, which allowed concurrent running of as many as 12 application programs. Users of LEO computers programmed in two coding languages: Intercode , [ 20 ] a low-level assembler-type language; and CLEO ( acronym : Clear Language for Expressing Orders), the COBOL equivalent. [ 21 ] One of the features that LEO III shared with many computers of the day was a loudspeaker connected to the central processor via a divide-by-100 circuit and an amplifier, which enabled operators to tell whether a program was looping by the distinctive sound it made. [ 22 ] Another quirk was that many intermittent faults were due to faulty connectors and could be temporarily fixed by briskly strumming the card handles. [ citation needed ] Some LEO III machines purchased in the mid-to-late 1960s remained in commercial use at GPO Telephones, the forerunner of British Telecom , until 1981, primarily producing telephone bills. [ 5 ] [ 19 ] They were kept running using parts from redundant LEOs purchased by the GPO. [ citation needed ] In 1963, LEO Computers Ltd was merged into English Electric Company , and this led to the breaking up of the team that had inspired LEO computers.
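As an aside on the hardware figures quoted above, the following minimal Python sketch (an illustration added here, not period software) cross-checks the 8¾-kilobyte memory total and the EDSAC word count implied by the "four times as large" comparison:

```python
# Cross-check of the LEO I delay-line memory figures quoted above.
words = 2048           # "2K" words of mercury delay-line storage
bits_per_word = 35     # LEO I used 35-bit words

total_bits = words * bits_per_word   # 71,680 bits
kilobytes = total_bits / 8 / 1024    # 8.75, i.e. the quoted 8¾ kilobytes

# "Four times as large as that of EDSAC" implies EDSAC held 512 words.
edsac_words = words // 4

print(total_bits, kilobytes, edsac_words)  # 71680 8.75 512
```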
The company continued to build the LEO III, and went on to build the faster LEO 360 and even faster LEO 326 models, which had been designed by the LEO team before the takeover. English Electric LEO Computers (EEL) (1963), then English Electric Leo Marconi (EELM) (1964), later English Electric Computers (EEC) (1967), eventually merged with International Computers and Tabulators (ICT) and others to form International Computers Limited (ICL) in 1968. In the 1980s, there were still ICL 2900 mainframes running LEO programs, using an emulator written in ICL 2960 microcode at the Dalkeith development centre. [ 23 ] At least one modern emulator has been developed which can run some original LEO III software on a modern server. [ 24 ] ICL was bought by Fujitsu in 1990. Whether its investment in LEO actually benefited J. Lyons is unclear. Nick Pelling notes that before LEO I the company already had a proven, industry-leading system using clerks that gave it "near-real-time management information on more or less all aspects of its business", and that no jobs were lost when the system was computerized. In addition, LEO Computers lost money on many of its sales because of unrealistically low prices. [ 25 ] In 2018, the Centre for Computing History , along with the LEO Computers Society, was awarded funding from the Heritage Lottery Fund for a project aiming to bring together, preserve, archive and digitise a range of LEO Computers artefacts and documents. [ 26 ] The Centre's museum gallery has an area dedicated to LEO, and as of 2021 they are also working on a LEO virtual reality project. [ 27 ] [ 3 ] In November 2021, to coincide with the 70th anniversary of the first successful full program run on LEO I, the project released a film about the history of LEO, which went on to win Video of the Year in the Association of British Science Writers Awards in July 2022. [ 28 ] [ 29 ]
https://en.wikipedia.org/wiki/LEO_(computer)
LEXO is the original version of what is now the BURNOUT temperature-regulating tumbler brand from manufacturer ThermAvant International, LLC, based in Columbia, Missouri . The creator of LEXO, Hongbin "Bill" Ma, is a professor of mechanical and aerospace engineering and director of the Center for Thermal Management at the University of Missouri . [ 1 ] After noticing how often he forgot his coffee while waiting for it to cool, Ma began working on a "cup with constant temperature" in the summer of 2015. [ 2 ] [ 3 ] The LEXO was released to the general public in December 2016. The LEXO uses bio-based phase-change and advanced heat transfer materials to absorb the initial heat of the beverage and cool it to a more drinkable temperature. [ 4 ] When the temperature begins to drop, the LEXO slowly releases the stored heat back into the drink. [ 5 ] The LEXO can also insulate cold liquids. [ 6 ] The LEXO has three layers of 18/8 stainless steel and a BPA-free plastic lid. [ 7 ]
https://en.wikipedia.org/wiki/LEXO
This page provides supplementary data and solvent coefficients for linear free-energy relationships (LFERs). The LFER used to obtain partition coefficients between two condensed phases (e.g., water–solvent systems) takes the form $\log P_{s}=c+eE+sS+aA+bB+vV$, while the LFER used to obtain gas–solvent partition coefficients takes the form $\log K_{s}=c+eE+sS+aA+bB+lL$.
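As a worked illustration of the first form, the short Python sketch below evaluates $\log P_{s}$ for a solute. Note that the system coefficients (c, e, s, a, b, v) and solute descriptors (E, S, A, B, V) used here are made-up placeholders for demonstration, not values taken from this data page:

```python
# Minimal sketch: evaluating an Abraham-type LFER,
#   log P_s = c + e*E + s*S + a*A + b*B + v*V
# All numeric values below are illustrative placeholders.

def log_p(coeffs, solute):
    """coeffs: (c, e, s, a, b, v) for one solvent system;
    solute: (E, S, A, B, V) Abraham solute descriptors."""
    c, e, s, a, b, v = coeffs
    E, S, A, B, V = solute
    return c + e * E + s * S + a * A + b * B + v * V

system = (0.1, 0.5, -1.0, 0.0, -3.5, 4.0)  # placeholder coefficients
solute = (0.8, 0.9, 0.3, 0.5, 1.2)         # placeholder descriptors
print(round(log_p(system, solute), 2))     # 2.65
```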
https://en.wikipedia.org/wiki/LFER_solvent_coefficients_(data_page)
Linux Foundation Energy (known as LF Energy ) is an initiative launched by the US-based Linux Foundation in 2018 to improve the power grid . [ 1 ] [ 2 ] [ 3 ] Its aim is to spur the uptake of digital technologies within the electricity sector and adjoining sectors using open source software and practices, with a key application being a smarter grid. [ 4 ] [ 5 ] LF Energy was formed in 2018. [ 6 ] The organization was founded by Shuli Goodman, who served as its executive director. [ 7 ] [ 8 ] [ 9 ] RTE supported the creation of LF Energy from early 2018 and became its first Strategic Member. [ 10 ] LF Energy is an umbrella organization that includes energy companies such as Alliander and RTE. [ 11 ] Energy company executives such as Arjan Stam (Director of System Operations at Alliander) and Lucian Balea (Director of Open Source) have joined LF Energy as governing board members. [ 2 ] LF Energy helped develop Alliander's open source program office after Alliander joined the organization in 2019. The organization formally launched in May 2019. [ 11 ] LF Energy launched the open industrial IoT platform GXF ( Grid eXchange Fabric ) in collaboration with Alliander in February 2020. [ 12 ] [ 13 ] [ 14 ] LF Energy partnered with GE Renewable Energy , Schneider Electric , National Grid , and RTE (Réseau de Transport d'Électricité) to launch the Digital Substation Automation Systems (DSAS) initiative and the related Configuration Modules for Power Industry Automation Systems ( CoMPAS ) project in 2020. [ 15 ] The DSAS initiative aims to use open-source technology to convert electrical substations into digital substations, accelerating progress towards carbon neutrality ; [ 16 ] CoMPAS itself provides software modules for automation systems in the power industry. [ 17 ] In 2020, LF Energy launched the second DSAS open-source project, SEAPATH , which provides a platform for virtualized automation for power grids and substations. [ 18 ] In 2021, LF Energy collaborated with Sony Computer Science Laboratory on the microgrid initiative Hyphae , which aims to automate peer-to-peer renewable energy distribution. [ 19 ] [ 20 ] [ 21 ] The organization also introduced the SOGNO software, initially funded by the European Commission Horizon 2020 programs. [ 22 ] [ 23 ] Its focus is on grid automation using microservices and control rooms. [ 24 ] Microsoft partnered with LF Energy as part of its 100/100/0 program in September 2021. [ 25 ] Google joined LF Energy as a Strategic Member as part of its 24/7 Carbon Free Energy initiative in early 2022. [ 26 ] [ 27 ] In early 2022, LF Energy launched the EVerest project, which aims to provide open source software for electric vehicle charging infrastructure. [ 28 ] [ 29 ] LF Energy was also one of the organizations that took part in the Carbon Call , an initiative aimed at developing reliable measurement and accounting of carbon emissions. [ 30 ] [ 31 ] Shuli Goodman died on 3 January 2023. [ 32 ]
https://en.wikipedia.org/wiki/LF_Energy
LGA 1567, also known as Socket LS , is a CPU socket used in the high-end server segment. It has 1567 protruding pins that make contact with pads on the processor. It supports Intel's Nehalem -based Xeon 7500 and Xeon 6500 series processors, codenamed Beckton , first released in March 2010. The 6500 series is scalable up to 2 sockets, while the 7500 series is scalable up to 4 or 8 sockets on a supporting motherboard . [ 1 ] In this server segment it succeeded Socket 604 , which was first launched in 2002. Its own successor is LGA 2011-1 (Socket R2), a modification of LGA 2011 . Later on, the Xeon E7 series, using the Westmere-EX architecture, reused the same socket. Dell also manufactures the proprietary "FlexMem Bridge" module that installs into two of the LGA 1567 sockets of certain PowerEdge servers to allow the use of additional memory slots with only two processors installed. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/LGA_1567
LGA 3647 is an Intel microprocessor-compatible socket used by Xeon Phi x200 ("Knights Landing"), [ 1 ] Xeon Phi 72x5 ("Knights Mill"), Skylake-SP , Cascade Lake-SP , and Cascade Lake-W microprocessors. [ 2 ] The socket supports a six-channel memory controller, non-volatile 3D XPoint memory DIMMs, Intel Ultra Path Interconnect (UPI) as a replacement for QuickPath Interconnect (QPI) , and 100G Omni-Path interconnect. It also introduces a new mounting mechanism: instead of a retention lever, the CPU is secured by the cooler's mounting pressure and its screws. There are two sub-versions of this socket that differ in the ILM ( Independent Loading Mechanism ): the pitch of the center screws changed slightly and, more visibly, the guiding pins sit in different corners. The socket keying and the matching notches on the processor are in different locations for each sub-version, preventing insertion of an incompatible processor and use of the wrong heatsink in a system. The more common P0 variant has two sub-options for heatsink mounting, designated square ILM and narrow ILM; the choice depends on the server and mainboard design (likely based on space constraints).
https://en.wikipedia.org/wiki/LGA_3647
The contributions of LGBTQ individuals to architecture are significant, although the direct influence of their sexuality on the style, layout, and materials of their designs is still a subject of debate. Recent queer theoretical frameworks have explored how LGBTQ people shape, inhabit, and alter functions of architectural spaces, offering insights into how architectural practices reflect broader social and cultural dynamics, particularly regarding identity, visibility, and marginalization. [ 1 ] LGBTQ contributions to architecture span a wide spectrum of design practices, ranging from flamboyant and ostentatious styles to more restrained and conventional forms. While the works of Walpole and Beckford have often been associated with flamboyance, it is crucial to recognize that many LGBTQ architects and designers, such as Charles Robert Ashbee , contributed to more conventional movements like Arts & Crafts , without the overt markers of queerness. Focusing solely on the flamboyant aspects risks reinforcing stereotypes and overlooks the diversity of LGBTQ architectural contributions. [ 2 ] The lack of reliable data on LGBTQIA+ architects presents a significant challenge to achieving equity within the profession. This gap perpetuates the invisibility of queer identities and undermines efforts to address systemic exclusion. For instance, national data sources like the U.S. Census fail to offer meaningful insights into LGBTQIA+ populations, and architectural organizations such as NCARB and NAAB do not yet track these identities comprehensively. The absence of such critical metrics hinders the ability to measure progress or address the needs of LGBTQIA+ professionals. [ 3 ] While the concept of a distinct "queer architecture" remains a matter of scholarly debate, it is evident that architectural forms have responded to the shifting relationship between LGBTQ individuals and broader societal structures. The built environment has served as a means of negotiating identity, privacy, and visibility, providing spaces that reflect the tensions between societal pressures and personal expression. Whether through overtly dramatic designs or more subtle adaptations, architecture has offered LGBTQ individuals both refuge and resistance in the face of a hostile environment. [ 2 ] Antinous, a youth from Bithynia, became the beloved of Roman Emperor Hadrian (r. 117-138 CE) around 123 CE. Their relationship was rooted in the Greek tradition of erastes and eromenos, where an older man ( erastes ) took on an educational and affectionate role toward a younger man ( eromenos ). Hadrian, known for his preference for male lovers, brought Antinous into his circle and educated him both socially and intellectually. The bond between them had a profound impact on Hadrian's personal life and, later, on his architectural endeavors. [ 4 ] After Antinous' death in 130 CE, Hadrian, deeply affected, commissioned the construction of a city in his lover's honor, Antinopolis , near Hermopolis in Egypt. Modeled after Alexandria , the city's layout and the surrounding monuments reflected Hadrian's desire to immortalize Antinous. The city served not only as a tribute to his lover but also as a means of asserting his personal connection to the divine, as Antinous was deified following his death. [ 5 ] The cult of Antinous spread rapidly throughout the Roman Empire, establishing temples, altars, and statues in his honor. 
This cult, associated with healing powers, also symbolized the intersection of sexuality and religion in Roman society. Antinous was venerated as a god who, having once been mortal, could empathize with human suffering. His statues were focal points of worship, often receiving daily offerings and being treated with the same reverence as other deities. [ 4 ] However, the rise of Christianity led to resistance against the cult of Antinous, as Christian writers condemned it as immoral. Despite this, the cult persisted until it was officially outlawed by Emperor Theodosius I in 391 CE. Through the architectural projects he initiated in Antinous' name, Hadrian's sexuality and personal relationships left a lasting imprint on the built environment, reflecting how personal affection and sexuality could shape public life in ancient Rome. [ 4 ] The late 18th and early 19th-century Gothic style , particularly as exemplified by William Beckford and Horace Walpole , has been the subject of queer architectural analysis. Both Beckford and Walpole were key figures in the development of eccentric, fantastical architectural forms that blended personal identity with artistic expression. Their works, while rooted in Gothic aesthetics, reflect a broader social and cultural queerness, evident in their social circles and artistic endeavors. [ 2 ] LGBTQ artists and designers made pivotal contributions to 20th-century English modernism , with figures such as Enid Marx and interior designers featured in Vogue magazine playing central roles in defining a distinctive modernist aesthetic. The so-called "Amusing style," characterized by playful, whimsical, and gender-fluid design elements, challenged conventional gender norms and social expectations, reflecting broader shifts in cultural attitudes toward gender and identity during the interwar period. [ 2 ] The design of St Ann's Court by architect Christopher Tunnard and his partner Gerald Schlesinger exemplifies the ways in which LGBTQ individuals navigated societal homophobia during the early 20th century. Completed in 1937, the building incorporates features such as retractable screens in the master bedroom, allowing the couple to conceal their relationship from the public eye. This architectural response to social hostility highlights how design can accommodate both the desire for privacy and the need to navigate a homophobic society. [ 2 ] Walpole's Strawberry Hill serves as an important case study in how architecture can reflect the social dynamics and personal identities of its creator. The building's whimsical Gothic Revival style , characterized by theatrical and elaborate features, provided a setting for exclusive same-sex social gatherings. The space functioned as both a public display and a private retreat, illustrating how architectural design could serve as a subtle form of resistance to prevailing societal norms. [ 2 ] William Beckford's Fonthill Abbey , begun in 1796, represents an extravagant and highly personal interpretation of Georgian 'Gothick' architecture. Its excessive and ambitious design, particularly the 276-foot-high tower, epitomized Beckford's unique vision but also resulted in structural instability. Viewed through a modern lens, the building can be considered an example of 'camp'—intentionally exaggerated, self-aware, and theatrical architecture, marking a departure from the more scholarly Gothic Revival of the period. [ 2 ] The remodeling of Shibden Hall by Anne Lister offers a notable example of subversive queer architecture. 
Lister's modifications, which included the addition of a Gothic tower and library, reflected her need to balance societal expectations of respectability with her desire for personal privacy. Inspired in part by the Ladies of Llangollen's Plas Newydd, [ 6 ] Lister's architectural choices demonstrate how design can navigate the complex intersection of public persona and private identity, offering a space for both social engagement and seclusion. [ 2 ] Queer Space: Architecture and Same-Sex Desire by Aaron Betsky argues that queer spaces subvert traditional architectural forms, which often uphold heteronormative societal structures. This subversion manifests in the adaptive reuse of spaces, such as transforming bathhouses or dance clubs into sites of liberation and self-expression. These venues are not merely functional; they symbolize the rejection of rigid boundaries and linear designs in favor of fluidity, openness, and unpredictability—reflecting queer identities. [ 1 ] A central theme in Betsky's analysis is the transient and performative quality of queer spaces. He describes spaces like nightclubs and cruising grounds as "queer architectures" because they are less about permanence and more about experiences and interactions. For example, the design of gay clubs prioritizes sensory engagement—through dramatic lighting, mirrored surfaces, and flexible layouts—emphasizing movement and transformation rather than static functionality. This focus on temporality mirrors the precarious place of queer communities within broader social structures. [ 1 ] Betsky's work highlights how queer spaces often dissolve traditional divisions between public and private realms. Parks, alleys, and other urban landscapes become sites of intimacy and exploration, repurposed to meet the needs of queer individuals. Similarly, domestic spaces, such as the homes used for underground drag balls, take on public functions, fostering community and collective identity. This blurring of boundaries reflects a rejection of fixed spatial norms in favor of fluidity and adaptability. [ 1 ] Betsky also examines how gay communities have reshaped urban environments, turning marginalized neighborhoods into vibrant cultural centers. Spaces like New York's Fire Island , San Francisco's Castro District , and the underground ballroom scenes exemplify how queer individuals reimagine urban landscapes. Through both physical and symbolic transformations, these spaces serve as sites of resistance, solidarity, and visibility, challenging the invisibility imposed by mainstream architectural practices. [ 1 ] Betsky's notion of "queering" architecture extends beyond physical spaces to the reimagining of design principles. He critiques traditional architecture for its rigidity and argues that queer design practices emphasize fluidity, ambiguity, and subversion. This ethos is evident in spaces like the Haus of Gaga (named in reference to the contemporary artist Lady Gaga and the Bauhaus ), where flamboyant design elements and playful reinterpretations of conventional forms create environments that celebrate queerness as an aesthetic and political stance. [ 1 ] The Organization of Lesbian and Gay Architects + Designers (OLGAD), established in 1991 in New York City, played a pivotal role in reclaiming queer architectural history and fostering discourse on "queer design." By identifying historically significant spaces and recognizing the contributions of LGBTQ architects, OLGAD [ 7 ] connected architecture with broader movements for queer visibility.
Its 1994 Guide to Lesbian & Gay New York Historical Landmarks expanded the understanding of LGBTQ place-based history beyond the Stonewall Inn , influencing the recognition of sites like the Stonewall Inn on the National Register of Historic Places and as a National Historic Landmark. Evolving from OLGAD's efforts, the New York City LGBT Historic Sites Project, launched in 2015, continues to preserve and highlight over 140 queer spaces across the city, showcasing the lasting impact of LGBTQ narratives on architectural and urban history. [ 1 ] The research series Where Are My People? Queer in Architecture [ 8 ] explores the experiences of LGBTQIA+ individuals within the architecture profession, highlighting their contributions and the challenges they face. The findings reveal significant underrepresentation of LGBTQIA+ individuals across all major architectural organizations, such as NCARB , AIA and ACSA , with reported figures of less than 2% membership for LGBTQIA+ individuals and even fewer identifying as non-binary . These disparities underscore the marginalization of queer identities in architecture, worsened by the societal reluctance to collect or analyze comprehensive data on this population. Visibility remains limited, as many individuals may choose not to disclose their identities due to fear of discrimination or safety concerns, reflecting broader patterns of social exclusion. [ 8 ] The study emphasizes the importance of understanding the intersectional nature of LGBTQIA+ identities and their impact on architectural practice. These identities intersect with race, gender, class, and other social categories, shaping how individuals experience and contribute to the discipline. Survey respondents consistently cited their heightened awareness of marginalization, a perspective that influences their approach to architecture. These intersections challenge traditional narratives about who architecture serves, urging a redefinition of the profession's priorities to include broader and more inclusive responses to human needs. [ 8 ] A recurring theme in the study is the role of LGBTQIA+ architects in challenging traditional norms and stereotypes. Respondents highlighted their commitment to atypical space-making practices that resist the status quo and address the needs of marginalized communities. This counter-normative approach often manifests in advocating for more inclusive and diverse spaces that serve populations beyond conventional frameworks. Such efforts align with broader discussions about the potential of architecture to foster equity and justice in the built environment. [ 8 ] The research underscores the importance of continued advocacy and data collection to support LGBTQIA+ individuals in architecture. By keeping surveys open and updating findings, initiatives like Where Are My People? aim to track progress and adapt to evolving challenges. These efforts are not merely about representation but about transforming architectural practice to better reflect the diversity of society. Encouraging participation from LGBTQIA+ professionals and expanding the discourse on equity and inclusion will be crucial for fostering a more just and innovative architectural discipline. [ 8 ]
https://en.wikipedia.org/wiki/LGBTQIA+_Architecture_Contributions_and_Subversion
The LG G Watch (model W100, codenamed Dory ) is an Android Wear -based smartwatch announced and released by LG and Google on June 25, 2014. It was released along with the Samsung Gear Live as a launch device for Android Wear , a modified version of Android designed specifically for smartwatches and other wearables. [ 2 ] It is compatible with all smartphones running Android 4.3 or higher that support Bluetooth LE . As of June 2014, the G Watch was only available in the United States and Canada at US$229, or in the United Kingdom for £159, on the Google Play Store. [ 3 ] As of July 2014, the G Watch was also made available in Australia , France , Germany , India , Ireland , Italy , Japan , South Korea , and Spain . [ 4 ] The G Watch R is a variant featuring a round face and an OLED screen. [ 5 ] The G Watch has IP67 certification for dust and water resistance. It has a user-replaceable buckle-based strap. The watch has no buttons and uses an always-on rectangular display. The G Watch runs Android Wear, which features a notification system based on Google Now technology and enables the watch to receive spoken commands from the user. [ 6 ] Users may also install the open-source AsteroidOS [ 7 ] or postmarketOS . [ 8 ] JR Raphael of Computerworld praised the LG G Watch's superior dimmed-mode display, comfortable band, and easy-to-use charging cradle, but criticized its uninspired design and the display's poor outdoor visibility compared to the Samsung Gear Live . [ 9 ]
https://en.wikipedia.org/wiki/LG_G_Watch
The LG G Watch R (model W110) is an Android Wear -based smartwatch announced and released by LG and Google on October 25, 2014. [ 1 ] It is the second round-faced smartwatch, after the Motorola Moto 360 , but, unlike the 360, it is the first to feature a fully circular display. It is the successor to LG's original LG G Watch , which has a rectangular display. The G Watch R has IP67 certification for dust and water resistance and a user-replaceable buckle-based strap. The watch is built around a 1.2 GHz quad-core Qualcomm Snapdragon 400 processor with 4 GB of internal storage and 512 MB of RAM. It is encased in a brushed aluminum and stainless steel body, which holds the P-OLED display . The smartwatch has Bluetooth LE connectivity, a barometer (used, among other things, to measure atmospheric pressure and estimate altitude), an accelerometer , a gyroscope , and a heart rate monitor . Wi-Fi connectivity was enabled in an official patch. While the watch does include a microphone, the lack of a speaker makes it impossible to make calls on it.
https://en.wikipedia.org/wiki/LG_G_Watch_R
The LG Watch Sport is a smartwatch released by LG Corporation on February 9, 2017. [ 1 ] The device is one of the first smartwatches to ship with Android Wear version 2.0, with LTE connectivity and Android Pay support. Its controls and sensors comprise a rotating crown, a button, a 6-axis sensor (gyroscope/accelerometer), a barometer, a microphone, NFC, a heart rate monitor, GPS, 3G, and Wi-Fi 802.11 b/g/n.
https://en.wikipedia.org/wiki/LG_Watch_Sport
The LG Watch Style is a smartwatch released by LG Corporation on 9 February 2017. [ 2 ] The device is one of the first smartwatches to ship with Android Wear version 2.0. [ 3 ] Its controls and sensors comprise a rotating crown, a button, a 6-axis sensor (gyroscope/accelerometer), and a microphone.
https://en.wikipedia.org/wiki/LG_Watch_Style
The LG Watch Urbane is a smartwatch released by LG Corporation on April 27, 2015. [ 1 ] There are gold and silver models, each with a 22 mm-wide interchangeable strap . The watch has IP67 dust and water resistance. [ 2 ] The LG Watch Urbane runs Android Wear and is equipped with a Qualcomm Snapdragon 400 SoC , 512 MB of LPDDR2 RAM, and 4 GB of eMMC storage. The display is a P-OLED panel with capacitive touch input and a resolution equivalent to a 320×320 square display. The watch communicates with its companion Android phone or iPhone using Bluetooth 4.1 LE, and has 2.4 GHz 802.11 b/g/n Wi-Fi for synchronizing Google services data. It carries nine-axis movement sensors (gyroscope, accelerometer, compass), a barometer, and a heart rate sensor, as well as a microphone used for Google's speech recognition . Unlike newer Wear devices it has no speaker, so it can only vibrate for alerts. The watch charges through contacts on its back, which connect via sprung "pogo" pins to a magnetically clamped puck; the puck has a microUSB connector and thus requires an external power source. A second model, the Watch Urbane LTE , has cellular connectivity and runs webOS instead of Android. [ 3 ]
https://en.wikipedia.org/wiki/LG_Watch_Urbane
Several smartphone models introduced by LG Electronics between 2015 and 2016 were discovered by users to have manufacturing defects, all of which eventually cause the devices to become unstable or suffer from a bootloop , rendering them effectively inoperable. The LG G4 (2015) has been the model most closely associated with these failures, with LG stating that the issues were the result of a "loose contact between components". Similar issues have also been reported, to a smaller extent, with the G4's successors and sister models, including the Nexus 5X , LG V10 and LG G Flex 2 . [ 1 ] In March 2017, a class-action lawsuit was filed against LG over its handling of these hardware failures. When officially acknowledging the bootloop issues with the G4, LG stated that they were caused by a "loose contact between components"; Android Authority explained that "a loose connection between power supply or memory components could certainly cause a phone to fail to boot up properly, due to a lack of system stability or not being able to access vital memory. It's also possible that a faulty connection to other components, such as the camera or fingerprint scanner, could cause a similar problem. This could be down to important setup communications not being sent or received between peripherals correctly." [ 2 ] Early reports of "bootloop" issues with the LG G4 appeared on forums such as Reddit as early as September 2015; LG was initially inconsistent in accepting warranty claims on affected devices, leading some users to go through their wireless carrier instead. A petition was started calling upon LG to acknowledge the bootloop issues. [ 2 ] In January 2016, LG officially acknowledged that some G4 models suffered from a manufacturing issue "resulting from a loose contact between components" that caused them to experience symptoms such as failure to reboot. Andrew Williams in Trusted Reviews was more specific, saying that "the cause of the problem has been confirmed as a fault in the soldering of one of the connectors on the device's main board". [ 3 ] LG stated that it was not known how many devices were affected by the defect, as when or whether it occurs depended on "usage behavior". LG said that users with booting issues should contact the local carrier where the G4 was purchased or a nearby LG Service Center for repair under warranty. Purchasers of G4 devices from non-carrier retailers should contact an LG Service Center "with the understanding that warranty conditions will differ". LG apologized "for the inconvenience caused to some of our customers who initially received incorrect diagnoses". [ 4 ] [ 5 ] In September 2016, reports began to circulate that similar failures were being encountered with the Nexus 5X manufactured for Google by LG, particularly whilst upgrading to Android 7.0 "Nougat" . Google stated that this was a hardware issue, and that it only impacted a small number of users. LG later stated that it would provide full refunds for affected devices, as the Nexus 5X was reaching the conclusion of its production run. [ 6 ] [ 7 ] [ 2 ] The LG V10's hardware is very similar to that of the G4; it was identified in a class-action lawsuit as suffering from nearly identical forms of hardware failure to the G4. [ 8 ] In response to reports that a model of its successor, the LG V20 , experienced a similar bootloop issue, LG claimed that the failure had been caused by the usage of a non-compliant third-party USB-C cable.
[ 9 ] To a lesser extent, users have also reported bootloop issues with the G4's predecessor and successor, the G3 and G5. However, these have not occurred to the same extent as with the G4, and LG has not acknowledged any hardware defects in those models. [ 2 ] On December 1, 2018, European LG G710EM models using a T-Mobile or T-Mobile-based SIM card began suffering from a bootloop issue. [ 10 ] A fix was released with firmware version V10p. In March 2017, a U.S. lawsuit was filed against LG Electronics in the state of California for unjust enrichment , unfair trade practices , and warranty law violations, seeking damages and an order for LG to repair all affected G4 and V10 devices. The lawsuit claimed that LG continued to produce and distribute LG G4 and V10 smartphones with the defect even after it acknowledged the issue, and that LG failed to recall or "offer an adequate remedy to consumers" who bought the two models, or to provide any remedy for devices that fell outside of the one-year warranty period. A party to the suit claimed that LG had issued them multiple warranty replacement phones that eventually suffered from the same hardware failure. [ 8 ] The lawsuit was expanded the following month to include the G5, Nexus 5X, and V20. [ 11 ] The lawsuit was never certified as a class action , and was sent to arbitration . [ 12 ] In January 2018, LG agreed to pay the participants in the lawsuit a $700 credit towards the purchase of an LG smartphone, or $425. [ 12 ] Since the lawsuit was not certified as a class action, consumers not actually participating in the lawsuit do not receive this payment. In January 2018, due to stock shortages, Google's wireless network Project Fi began to offer the Moto X4 as an alternative replacement for bootloop-affected Nexus 5X owners. [ 13 ]
https://en.wikipedia.org/wiki/LG_smartphone_bootloop_issues
LHASA ( Logic and Heuristics Applied to Synthetic Analysis ) is a computer program developed in 1971 by the research group of Elias James Corey at the Harvard University Department of Chemistry . The program uses artificial intelligence techniques to discover sequences of reactions which may be used to synthesize a molecule . [ 1 ] This program was one of the first to use a graphical interface to input and display chemical structures. [ 2 ]
https://en.wikipedia.org/wiki/LHASA
LHD (load, haul, dump) loaders are similar to conventional front-end loaders but are developed for the toughest hard rock mining applications, with overall production economy, safety, and reliability in mind. They are extremely rugged, highly maneuverable, and exceptionally productive. More than 75% of the world's underground metal mines use LHDs for handling the muck from their excavations. [ 1 ] LHDs have powerful prime movers , advanced drive train technology, heavy planetary axles, four-wheel drive , articulated steering , and ergonomic controls. Their narrower, longer, and lower profile makes them well suited to underground applications where height and width are limited. Since length is not a limitation in tunnels and declines, LHD loaders are designed with generous length, which improves axle weight distribution and allows bucket capacity to be increased. The two-part construction with central articulation helps in tracking and maneuverability. In mining , moving heavy equipment is constrained, and an LHD sometimes has to be shifted through a shaft in dismantled form. [ 1 ] LHD tramming capacity varies from 1 tonne up to 17–25 tonnes , and bucket size varies from 0.8 to 10 m³. Bucket height ranges from 1.8 to 2.5 m. LHDs are available in both diesel and electric versions. The diesel version is easily transportable from one location to another and has diesel engines with a power rating of 75 to 150 HP or more; engines are either water- or air-cooled. LHDs with electric motor drives generally have a capacity of 75 to 150 HP and operate at a medium voltage of 380 to 550 volts. Flexible trailing cables with a reeling/unreeling facility feed power to the machine. These drives operate hydraulic pumps and hydraulic motors for the various movements of the bucket and for vehicle traction and steering. The speed of the vehicle is controlled mechanically. The transmission is a hydrostatic drive: the prime mover drives a variable-displacement pump hydraulically connected to a hydraulic motor, which drives the axle via a gearbox. Speed is controlled by changing the displacement volume of the axial pump. The power train consists of a closed-loop hydraulic transmission, parking brakes, a two-stage gearbox, and drive lines. [ 2 ] Service, emergency, and parking brakes with fire-resistant hydraulic fluid are used. Headlights, audible warning signals, backup alarms, and portable fire extinguishers are provided, and a special cabin protects the operator. A safety device shuts off the engine if exhaust gases exceed a temperature of 85 °C (or another set value). [ 1 ] For electric shock safety, these LHDs' power source (gate end box) is equipped with earth-conductivity protection using a pilot core [ 3 ] in the electric trailing cable, which isolates power completely when earth continuity is broken. Most LHDs come equipped with remote control capabilities, which are crucial for clearing material in areas where the stope lacks top protection against loose muck falling. Some LHD models offer remote tramming functionality, enabling them to handle a daily ore capacity of 8,000 tons. [ 4 ] Two LHD OEMs ( Caterpillar and Sandvik ) have developed commercial auto-tramming systems, called Minegem and AutoMine respectively. [ 5 ]
https://en.wikipedia.org/wiki/LHD_(load,_haul,_dump_machine)
LIBOX was a free platform that allowed users to access and share their high-definition media collections, including video, photos and music , across various devices and with friends. LIBOX offered this service for free thanks to a patent-pending combination of peer-to-peer , grid and distributed computing technologies. LIBOX consisted of a downloadable desktop application that worked on both Windows PCs and Macs , and a web-based interface. The service could be accessed from any Web browser and placed no limitations on the amount of media that could be added or the number of people with which it could be shared . [ 1 ] LIBOX was founded in 2008 by Erez Pilosof , who previously founded Walla! , the first major web portal in Israel . Pilosof created LIBOX to allow users to manage and share media across all their devices while keeping its original high quality. He saw that, as a consumer, trying to store media on several different devices and in many different partial areas online was becoming an annoyance; it "seemed very limited and tedious and problematic". [ 2 ] Pilosof created LIBOX as a way to provide a smooth and dependable way for people to enjoy their media anywhere. The company started working on the patent-pending technology to power LIBOX in the fall of 2008, released an alpha version in October 2009 and launched a beta version on June 22, 2010. The company received funding from investors such as Evergreen Venture Partners and Rhodium to help grow the platform. [ 3 ] LIBOX closed down in 2011. The distributed LIBOX platform effectively created private clouds that communicated between devices and Web browsers through a combination of algorithms , grids and peer-to-peer networking technologies. Files were not uploaded to an external server but streamed straight from the computer of the user that held the file. [ 4 ] The mixture of technologies allowed LIBOX to place no limit on how much media could be added to the platform, while keeping the service free for users. The LIBOX platform used a single interface rendered in HTML across desktop, web and mobile applications . The LIBOX mobile applications were expected to be made available in the summer of 2010; the platform allowed users to "simply add a song to the Libox desktop application", making it "instantly available on a user's smartphone and any other computer through a web browser." [ 5 ] With native mobile applications, LIBOX was to allow users to take a photo or record a song on a mobile device and automatically add it to their LIBOX library, accessible across their devices. Ultimately, it synced all media regardless of "file formats, folders, settings" and allowed users not to worry about "file quality loss or cloud storage capacity." [ 5 ] LIBOX also allowed users to share their media collection with other LIBOX users. The platform let users create contact lists and instantly invite friends to enjoy their media. LIBOX used the same technology to sync media as it did to share it with friends.
https://en.wikipedia.org/wiki/LIBOX
In chemistry and physics, LIESST ( Light-Induced Excited Spin-State Trapping ) is a method of changing the electronic spin state of a compound by means of irradiation with light. [ 1 ] Many transition metal complexes with electronic configuration d⁴–d⁷ are capable of spin crossover (as are d⁸ complexes when the molecular symmetry is lower than O h ). [ 2 ] Spin crossover refers to a transition from the high-spin (HS) state to the low-spin (LS) state, or vice versa. Alternatives to LIESST include using thermal changes and pressure to induce spin crossover. The metal most commonly exhibiting spin crossover is iron, with the first known example, an iron(III) tris(dithiocarbamato) complex, reported by Cambi [ 3 ] et al. in 1931. For iron(II) complexes, LIESST involves excitation of the low-spin complex with green light to an excited singlet state; two successive steps of intersystem crossing then trap the complex in the metastable high-spin state. Converting the high-spin complex back to the low-spin state requires excitation with red light. [ 1 ]
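For the archetypal octahedral iron(II) case, the cascade just described is commonly drawn as below; this LaTeX sketch uses term symbols assumed from octahedral (O_h) ligand-field labels rather than taken from the text above.

```latex
% Forward LIESST in an octahedral Fe(II) complex (assumed O_h labels):
% green-light excitation, then two intersystem crossings (ISC) into the
% metastable high-spin quintet state.
{}^{1}A_{1} \xrightarrow{\,h\nu\,(\text{green})\,} {}^{1}T_{1}
\xrightarrow{\text{ISC}} {}^{3}T_{1}
\xrightarrow{\text{ISC}} {}^{5}T_{2}
```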
https://en.wikipedia.org/wiki/LIESST
LIGA is a fabrication technology used to create high- aspect-ratio microstructures. The term is a German acronym for Lithographie, Galvanoformung, Abformung – lithography , electroplating , and molding – which are the three main processing steps. There are two main LIGA fabrication technologies: X-ray LIGA , which uses X-rays produced by a synchrotron to create high-aspect-ratio structures, and UV LIGA , a more accessible method which uses ultraviolet light to create structures with relatively low aspect ratios. Notable characteristics of X-ray LIGA-fabricated structures include: X-ray LIGA is a fabrication process in microtechnology that was developed in the early 1980s [ 1 ] by a team under the leadership of Erwin Willy Becker and Wolfgang Ehrfeld at the Institute for Nuclear Process Engineering ( Institut für Kernverfahrenstechnik , IKVT) at the Karlsruhe Nuclear Research Center, since renamed the Institute for Microstructure Technology ( Institut für Mikrostrukturtechnik , IMT) at the Karlsruhe Institute of Technology (KIT). LIGA was one of the first major techniques to allow on-demand manufacturing of high-aspect-ratio structures (structures that are much taller than they are wide) with lateral precision below one micrometer. In the process, an X-ray-sensitive polymer photoresist, typically PMMA , bonded to an electrically conductive substrate, is exposed to parallel beams of high-energy X-rays from a synchrotron radiation source through a mask partly covered with a strongly X-ray-absorbing material. Chemical removal of exposed (or unexposed) photoresist results in a three-dimensional structure, which can be filled by the electrodeposition of metal. The resist is chemically stripped away to produce a metallic mold insert. The mold insert can be used to produce parts in polymers or ceramics through injection molding . The LIGA technique's unique value is the precision obtained by the use of deep X-ray lithography (DXRL). The technique enables microstructures with high aspect ratios and high precision to be fabricated in a variety of materials (metals, plastics, and ceramics). Many of its practitioners and users are associated with, or are located close to, synchrotron facilities. UV LIGA utilizes an inexpensive ultraviolet light source, like a mercury lamp , to expose a polymer photoresist, typically SU-8 . Because heating and transmittance are not an issue in optical masks, a simple chromium mask can be substituted for the technically sophisticated X-ray mask. These reductions in complexity make UV LIGA much cheaper and more accessible than its X-ray counterpart. However, UV LIGA is not as effective at producing precision molds and is thus used when cost must be kept low and very high aspect ratios are not required. X-ray masks are composed of a transparent low- Z carrier, a patterned high- Z absorber, and a metallic ring for alignment and heat removal. Due to extreme temperature variations induced by the X-ray exposure, carriers are fabricated from materials with high thermal conductivity to reduce thermal gradients. Currently [ when? ] , vitreous carbon and graphite are considered the best materials, as their use significantly reduces side-wall roughness. Silicon , silicon nitride , titanium , and diamond are also used as carrier substrates but are not preferred, as the required thin membranes are comparatively fragile and titanium masks tend to round sharp features due to edge fluorescence.
Absorbers are made of gold, nickel, copper, tin, lead, or other X-ray-absorbing metals. Masks can be fabricated in several fashions. The most accurate and expensive masks are those created by electron-beam lithography , which provides resolutions as fine as 0.1 μm in resist 4 μm thick and 3-μm features in resist 20 μm thick. An intermediate method is the plated photomask, which provides 3-μm resolution and can be outsourced at a cost on the order of $1000 per mask. The least expensive method is a direct photomask, which provides 15-μm resolution in resist 80 μm thick. In summary, masks can cost between $1000 and $20,000 and take between two weeks and three months for delivery. Due to the small size of the market, each LIGA group typically has its own mask-making capability. Future trends in mask creation include larger formats, from a diameter of 100 mm to 150 mm , and smaller feature sizes. The starting material is a flat substrate , such as a silicon wafer or a polished disc of beryllium, copper, titanium, or another material. The substrate, if not already electrically conductive, is covered with a conductive plating base, typically applied through sputtering or evaporation . The fabrication of high-aspect-ratio structures requires a photoresist able to form a mold with vertical sidewalls; thus, the photoresist must have a high selectivity and be relatively free from stress when applied in thick layers. The typical choice, poly(methyl methacrylate) ( PMMA ), is applied to the substrate by a glue-down process in which a precast, high-molecular-weight sheet of PMMA is attached to the plating base on the substrate. The applied photoresist is then milled down to the precise height by a fly cutter prior to pattern transfer by X-ray exposure. Because the layer must be relatively free from stress, this glue-down process is preferred over alternative methods such as casting. Further, the cutting of the PMMA sheet by the fly cutter requires specific operating conditions and tools to avoid introducing stress and crazing in the photoresist. A key enabling technology of LIGA is the synchrotron, capable of emitting high-power, highly collimated X-rays. This high collimation permits relatively large distances between the mask and the substrate without the penumbral blurring that occurs with other X-ray sources. In the electron storage ring or synchrotron , a magnetic field constrains electrons to follow a circular path, and the radial acceleration of the electrons causes electromagnetic radiation to be emitted forward. The radiation is thus strongly collimated in the forward direction and can be assumed to be parallel for lithographic purposes. Because of the much higher flux of usable collimated X-rays, shorter exposure times become possible. Photon energies for a LIGA exposure span approximately 2.5 to 15 keV . Unlike in optical lithography, there are multiple exposure limits, identified as the top dose, bottom dose, and critical dose, whose values must be determined experimentally for a proper exposure. The exposure must be sufficient to meet the requirements of the bottom dose, the exposure below which a photoresist residue will remain, and the top dose, the exposure above which the photoresist will foam. The critical dose is the exposure at which unexposed resist begins to be attacked. Due to the insensitivity of PMMA, a typical exposure time for a 500-μm-thick PMMA layer is six hours.
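The interplay between top and bottom dose can be made concrete with a simple Beer–Lambert attenuation model: the dose deposited at depth falls off roughly exponentially, so the surface must stay below the foaming (top-dose) limit while the bottom of the resist still clears the development (bottom-dose) limit. The sketch below is illustrative only; it assumes a single effective attenuation length for the filtered synchrotron spectrum, and the numerical values are assumptions of the right order of magnitude, not measured LIGA process parameters.

import math

def dose_at_depth(top_dose, depth_um, attenuation_length_um):
    """Absorbed dose at a given depth, assuming simple exponential
    (Beer-Lambert) attenuation of the X-ray flux in the resist."""
    return top_dose * math.exp(-depth_um / attenuation_length_um)

# Illustrative values (assumed, not measured): a 500-um-thick PMMA sheet
# and an effective attenuation length of ~300 um for the filtered beam.
top = 20.0         # kJ/cm^3, kept at the foaming (top-dose) limit
thickness = 500.0  # um
atten = 300.0      # um

bottom = dose_at_depth(top, thickness, atten)
print(f"bottom dose = {bottom:.1f} kJ/cm^3 (top:bottom = {top/bottom:.1f}:1)")
# The exposure time is chosen so that this bottom dose still clears the
# resist during development while the surface stays below the top dose.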
During exposure, secondary radiation effects such as Fresnel diffraction , mask and substrate fluorescence , and the generation of Auger electrons and photoelectrons can lead to overexposure. During exposure, the X-ray mask and the mask holder are heated directly by X-ray absorption and cooled by forced convection from nitrogen jets. The temperature rise in the PMMA resist comes mainly from heat conducted from the substrate backward into the resist and from the mask plate through the inner cavity air forward into the resist, with direct X-ray absorption in the resist being only a tertiary contribution. Thermal effects include chemistry variations due to resist heating and geometry-dependent mask deformation. For high-aspect-ratio structures, the resist-developer system is required to have a ratio of dissolution rates in the exposed and unexposed areas of 1000:1. The standard, empirically optimized developer is a mixture of tetrahydro-1,4-oxazine (20%), 2-aminoethanol-1 (5%), 2-(2-butoxyethoxy)ethanol (60%), and water (15%). This developer provides the required ratio of dissolution rates and reduces stress-related cracking from swelling in comparison to conventional PMMA developers. After development, the substrate is rinsed with deionized water and dried either in a vacuum or by spinning. At this stage, the PMMA structures can be released as the final product (e.g., optical components) or can be used as molds for subsequent metal deposition. In the electroplating step, nickel, copper, or gold is plated upward from the metalized substrate into the voids left by the removed photoresist. Taking place in an electrolytic cell, the current density, temperature, and solution are carefully controlled to ensure proper plating. In the case of nickel deposition from NiCl 2 in a KCl solution, Ni is deposited on the cathode (the metalized substrate) via Ni 2+ + 2e − → Ni, while Cl 2 evolves at the anode via 2Cl − → Cl 2 + 2e − . Difficulties associated with plating into PMMA molds include voids, where hydrogen bubbles nucleate on contaminants; chemical incompatibility, where the plating solution attacks the photoresist; and mechanical incompatibility, where film stress causes the plated layer to lose adhesion. These difficulties can be overcome through empirical optimization of the plating chemistry and environment for a given layout. After exposure, development, and electroplating, the resist is stripped. One method for removing the remaining PMMA is to flood-expose the substrate and use the developing solution to cleanly remove the resist. Alternatively, chemical solvents can be used. Stripping a thick resist chemically is a lengthy process, taking two to three hours in acetone at room temperature. In multilayer structures, it is common practice to protect metal layers against corrosion by backfilling the structure with a polymer-based encapsulant. At this stage, metal structures can be left on the substrate (e.g., microwave circuitry) or released as the final product (e.g., gears). After stripping, the released metallic components can be used for mass replication through standard means such as stamping or injection molding . In the 1990s, LIGA was a cutting-edge MEMS fabrication technology, resulting in the design of components showcasing the technique's unique versatility. Several companies that began using the LIGA process later changed their business model (e.g., Steag microParts becoming Boehringer Ingelheim microParts, Mezzo Technologies).
Currently, only two companies, HTmicro and microworks, continue their work in LIGA, benefiting from the limitations of other competing fabrication technologies. UV LIGA, due to its lower production cost, is employed more broadly by several companies, such as Veco, Tecan, Temicon, and Mimotec in Switzerland, which supply the Swiss watch market with metal parts made of nickel and nickel-phosphorus.
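As a rough illustration of the electroplating step described earlier, Faraday's law of electrolysis relates the plated metal thickness to the charge passed per unit area. The sketch below assumes a current density and current efficiency that are plausible but invented; it is a back-of-the-envelope estimate, not a qualified LIGA plating recipe.

# Estimate nickel plating time from Faraday's law of electrolysis.
M_NI = 58.69    # g/mol, molar mass of nickel
RHO_NI = 8.908  # g/cm^3, density of nickel
Z = 2           # electrons per Ni(2+) ion reduced at the cathode
F = 96485.0     # C/mol, Faraday constant

def plating_time_hours(thickness_um, current_density_A_cm2, efficiency=0.95):
    """Time to plate a given nickel thickness at a given current density,
    assuming a constant (invented) cathodic current efficiency."""
    thickness_cm = thickness_um * 1e-4
    # Charge per unit area needed to deposit the requested thickness:
    charge_per_area = thickness_cm * RHO_NI * Z * F / (M_NI * efficiency)
    return charge_per_area / current_density_A_cm2 / 3600.0

# Filling a 500-um-deep PMMA mold at an assumed 10 mA/cm^2:
print(f"{plating_time_hours(500, 0.010):.1f} h")  # roughly two days

The slow, well-controlled deposition implied by such low current densities is one reason the plating chemistry and environment must be optimized empirically for each layout.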
https://en.wikipedia.org/wiki/LIGA
In bioinformatics , LIGPLOT is a computer program that generates schematic 2-D representations of protein - ligand complexes from standard Protein Data Bank file input. [ 1 ] LIGPLOT is used to generate images for the PDBsum resource, which summarises molecular structure.
https://en.wikipedia.org/wiki/LIGPLOT
LINE1 (an abbreviation of Long interspersed nuclear element-1 , also known as L1 and LINE-1 ) is a family of related class I transposable elements in the DNA of many groups of eukaryotes , including animals and plants, classified with the long interspersed nuclear elements (LINEs). [ 1 ] L1 transposons are most ubiquitous in mammals, where they make up a significant fraction of the total genome length; [ 1 ] [ 2 ] for example, they comprise approximately 17% of the human genome . [ 3 ] Active L1s can interrupt the genome through insertions, deletions, rearrangements, and copy number variations . [ 4 ] L1 activity has contributed to the instability and evolution of genomes and is tightly regulated in the germline by DNA methylation , histone modifications , and piRNA . [ 5 ] L1s can further impact genome variation through mispairing and unequal crossing over during meiosis due to their repetitive DNA sequences. [ 4 ] L1 gene products are also required by many non-autonomous Alu and SVA SINE retrotransposons. Mutations induced by L1 and its non-autonomous counterparts have been found to cause a variety of heritable and somatic diseases. [ 6 ] [ 7 ] In 2011, human L1 was reportedly discovered in the genome of the gonorrhea bacterium, evidently having arrived there by horizontal gene transfer . [ 8 ] [ 9 ] A typical L1 element is approximately 6,000 base pairs (bp) long and consists of two non-overlapping open reading frames (ORFs) which are flanked by untranslated regions (UTRs) and target site duplications. In humans, ORF2 is thought to be translated by an unconventional termination/reinitiation mechanism, [ 10 ] while mouse L1s contain an internal ribosome entry site (IRES) upstream of each ORF. [ 11 ] The 5' UTRs of mouse L1s contain a variable number of GC-rich tandemly repeated monomers of around 200 bp, followed by a short non-monomeric region. Human 5' UTRs are ~900 bp in length and do not contain repeated motifs. All families of human L1s harbor, at their most 5' extremity, a binding motif for the transcription factor YY1 . [ 12 ] Younger families also have two binding sites for SOX -family transcription factors, and both the YY1 and SOX sites were shown to be required for human L1 transcription initiation and activation. [ 13 ] [ 14 ] Both mouse and human 5' UTRs also contain a weak antisense promoter of unknown function. [ 15 ] [ 16 ] The first ORF of L1 encodes a 500-amino-acid, 40-kDa protein that lacks homology with any protein of known function. In vertebrates, it contains a conserved C-terminal domain and a highly variable coiled-coil N-terminus that mediates the formation of ORF1 trimeric complexes. ORF1 trimers have the RNA-binding and nucleic acid chaperone activities that are necessary for retrotransposition. [ 17 ] The second ORF of L1 encodes a 150-kDa protein that has endonuclease and reverse transcriptase activity. The structure of the ORF2 protein was solved in 2023. Its protein core contains three domains of previously unknown function, termed the "tower", the "EN-linker", and the "wrist"/RNA-binding domain, which binds the poly(A) tail of Alu RNA, as well as a C-terminal domain that binds the Alu RNA stem-loop. The nicking and reverse transcriptase activities of L1 ORF2p are boosted by single-stranded DNA structures likely present at active replication forks . Unlike viral RTs, L1 ORF2p can be primed by RNA, including RNA hairpin primers produced by the Alu element.
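As a rough summary of the element anatomy described above, the following sketch encodes the canonical human L1 layout as a simple data structure. The lengths are approximations inferred from the figures in this article (a ~900 bp 5' UTR, a 500-amino-acid ORF1, a ~150 kDa ORF2) and are placeholders for illustration, not reference annotation coordinates.

from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    length_bp: int  # approximate length in base pairs

# Canonical human L1 layout, using rough lengths for illustration only.
HUMAN_L1 = [
    Feature("5' UTR (YY1/SOX sites, weak antisense promoter)", 900),
    Feature("ORF1 (RNA-binding chaperone, 500 aa)", 1500),     # ~3 bp/codon
    Feature("ORF2 (endonuclease + reverse transcriptase)", 3800),
    Feature("3' UTR + poly(A)", 200),
]

total = sum(f.length_bp for f in HUMAN_L1)
print(f"approximate element length: {total} bp")  # ~6,400 bp, near the ~6 kb cited
for f in HUMAN_L1:
    print(f"  {f.name}: ~{f.length_bp} bp")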
As with other transposable elements, the host organism keeps a heavy check on LINE1 to prevent it from becoming overly active. In the primitive eukaryote Entamoeba histolytica , ORF2 is massively expressed in antisense , resulting in no detectable amounts of its protein product. [ 18 ] L1 activity has been observed in numerous types of cancer , with particularly extensive insertions found in colorectal and lung cancers. [ 19 ] It is currently unclear whether these insertions are causes or secondary effects of cancer progression. However, at least two cases have found somatic L1 insertions causative of cancer by disrupting the coding sequences of the genes APC and PTEN in colon and endometrial cancer, respectively. [ 4 ] Quantification of L1 copy number by qPCR, or of L1 methylation levels by bisulfite sequencing , is used as a diagnostic biomarker in some types of cancer. L1 hypomethylation in colon tumor samples is correlated with cancer stage progression. [ 20 ] [ 21 ] Furthermore, less invasive blood assays for L1 copy number or methylation levels are indicative of breast or bladder cancer progression and may serve as methods for early detection. [ 22 ] [ 23 ] Higher L1 copy numbers have been observed in the human brain compared to other organs. [ 24 ] [ 25 ] Studies of animal models and human cell lines have shown that L1s become active in neural progenitor cells (NPCs), and that experimental deregulation or overexpression of L1 increases somatic mosaicism . This phenomenon is negatively regulated by Sox2 , which is downregulated in NPCs, and by MeCP2 and methylation of the L1 5' UTR. [ 26 ] Human cell lines modeling the neurological disorder Rett syndrome , which carry MeCP2 mutations, exhibit increased L1 transposition, suggesting a link between L1 activity and neurological disorders. [ 27 ] [ 26 ] Current studies are aimed at investigating the potential roles of L1 activity in various neuropsychiatric disorders including schizophrenia , autism spectrum disorders , epilepsy , bipolar disorder , Tourette syndrome , and drug addiction . [ 28 ] L1s are also highly expressed in the octopus brain, suggesting a convergent mechanism in complex cognition. [ 29 ] Increased RNA levels of Alu , which requires L1 proteins, are associated with a form of age-related macular degeneration , a degenerative disease of the retina. [ 30 ] The naturally occurring mouse retinal degeneration model rd7 is caused by an L1 insertion in the Nr2e3 gene. [ 31 ] In 2021, a study proposed that L1 elements may be responsible for potential endogenisation of the SARS-CoV-2 genome in Huh7 mutant cancer cells, [ 32 ] which would possibly explain why some patients test PCR-positive for SARS-CoV-2 even after clearance of the virus. These results, however, have been criticized as "mechanistically plausible but likely very rare", [ 33 ] as misleading and infrequent, [ 34 ] or as artefactual. [ 35 ]
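The qPCR-based copy-number quantification mentioned above is commonly reported as a fold change using the 2^(−ΔΔCt) (Livak) method. The sketch below assumes a single-copy reference gene and ideal amplification efficiency; all Ct values are invented for illustration and do not come from this article's sources.

def relative_copy_number(ct_l1_sample, ct_ref_sample,
                         ct_l1_control, ct_ref_control):
    """Livak 2^(-ddCt) relative quantification: L1 signal normalized to a
    single-copy reference gene and expressed relative to a control sample.
    Assumes ~100% amplification efficiency for both targets."""
    d_ct_sample = ct_l1_sample - ct_ref_sample
    d_ct_control = ct_l1_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Invented example: tumor DNA crosses the L1 threshold one cycle earlier
# than matched normal tissue at equal reference-gene signal.
print(f"fold change: {relative_copy_number(14.0, 22.0, 15.0, 22.0):.1f}x")  # 2.0x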
https://en.wikipedia.org/wiki/LINE1
LISA Pathfinder , formerly Small Missions for Advanced Research in Technology-2 ( SMART-2 ), was an ESA spacecraft that was launched on 3 December 2015 on board Vega flight VV06 . [ 3 ] [ 4 ] [ 5 ] The mission tested technologies needed for the Laser Interferometer Space Antenna (LISA), an ESA gravitational wave observatory planned to be launched in 2035. The scientific phase started on 8 March 2016 and lasted almost sixteen months. [ 6 ] In April 2016 ESA announced that LISA Pathfinder had demonstrated that the LISA mission is feasible. The estimated mission cost was €400 million. [ 7 ] LISA Pathfinder placed two test masses in a nearly perfect gravitational free fall, and controlled and measured their relative motion with unprecedented accuracy. The laser interferometer measured the relative position and orientation of the masses to an accuracy of less than 0.01 nanometres, [ 8 ] a precision estimated to be sufficient to detect gravitational waves in the follow-on mission, the Laser Interferometer Space Antenna (LISA). The interferometer was a model of one arm of the final LISA interferometer, but reduced from millions of kilometres to 40 cm. The reduction did not change the accuracy of the relative position measurement, nor did it affect the various technical disturbances produced by the spacecraft surrounding the experiment, whose measurement was the main goal of LISA Pathfinder. The sensitivity to gravitational waves, however, is proportional to the arm length, and so was reduced several billion-fold compared to the planned LISA experiment. LISA Pathfinder was an ESA-led mission. It involved European space companies and research institutes from France, Germany, Italy, the Netherlands, Spain, Switzerland, and the UK, as well as the US space agency NASA. [ 9 ] LISA Pathfinder was a proof-of-concept mission to prove that two masses can fly through space, untouched but shielded by the spacecraft, and maintain their relative positions to the precision needed to realise a full gravitational wave observatory planned for launch in 2035. The primary objective was to measure deviations from geodesic motion . Much of the experimentation in gravitational physics requires measuring the relative acceleration between free-falling, geodesic reference test particles. [ 10 ] In LISA Pathfinder, precise inter-test-mass tracking by optical interferometry allowed scientists to assess the relative acceleration of the two test masses, situated about 38 cm apart in a single spacecraft. The science of LISA Pathfinder consisted of measuring, and creating an experimentally anchored physical model for, all the spurious effects – including stray forces and optical measurement limits – that limit the ability to create and measure the perfect constellation of free-falling test particles that would be ideal for the LISA follow-up mission. [ 11 ] For the follow-up mission, LISA , [ 12 ] the test masses will be pairs of 2 kg gold/platinum cubes housed in each of three separate spacecraft 2.5 million kilometres apart. [ 13 ] LISA Pathfinder was assembled by Airbus Defence and Space in Stevenage (UK), under contract to the European Space Agency.
It carried a European "LISA Technology Package" comprising inertial sensors, an interferometer, and associated instrumentation, as well as two drag-free control systems: a European one using cold gas micro-thrusters (similar to those used on Gaia ), and a US-built "Disturbance Reduction System" using the European sensors and an electric propulsion system that uses ionised droplets of a colloid accelerated in an electric field . [ 14 ] The colloid thruster (or " electrospray thruster") system was built by Busek and delivered to JPL for integration with the spacecraft. [ 15 ] The LISA Technology Package (LTP) was integrated by Airbus Defence and Space Germany, but the instruments and components were supplied by contributing institutions across Europe. The noise-rejection requirements on the interferometer were very stringent, meaning that the physical response of the interferometer to changing environmental conditions, such as temperature, had to be minimised. On the follow-up mission, eLISA, environmental factors will influence the measurements the interferometer takes. These environmental influences include stray electromagnetic fields and temperature gradients, which could be caused by the Sun heating the spacecraft unevenly, or even by warm instrumentation inside the spacecraft itself. Therefore, LISA Pathfinder was designed to find out how such environmental influences change the behaviour of the inertial sensors and the other instruments. LISA Pathfinder flew with an extensive instrument package which could measure temperature and magnetic fields at the test masses and at the optical bench. The spacecraft was even equipped to stimulate the system artificially: it carried heating elements which could warm the spacecraft's structure unevenly, causing the optical bench to distort and enabling scientists to see how the measurements changed with varying temperatures. [ 16 ] Mission control for LISA Pathfinder was at ESOC in Darmstadt, Germany, with science and technology operations controlled from ESAC in Madrid, Spain . [ 17 ] The spacecraft was first launched by Vega flight VV06 into an elliptical LEO parking orbit. From there it executed a short burn each time perigee was passed, slowly raising the apogee closer to the intended halo orbit around the Earth–Sun L 1 point. [ 1 ] [ 18 ] [ 19 ] The spacecraft reached its operational location in orbit around the Lagrange point L 1 on 22 January 2016, where it underwent payload commissioning. [ 20 ] Testing started on 1 March 2016. [ 21 ] In April 2016 ESA announced that LISA Pathfinder had demonstrated that the LISA mission is feasible. [ 22 ] On 7 June 2016, ESA presented the first results of two months' worth of science operations, showing that the technology developed for a space-based gravitational wave observatory was exceeding expectations. The two cubes at the heart of the spacecraft were falling freely through space under the influence of gravity alone, unperturbed by other external forces, to a factor of 5 better than the requirements for LISA Pathfinder. [ 23 ] [ 24 ] [ 25 ] In February 2017, BBC News reported that the gravity probe had exceeded its performance goals. [ 26 ] LISA Pathfinder was deactivated on 30 June 2017. [ 27 ] On 5 February 2018, ESA published the final results: the precision of the measurements had improved further, beyond the current goals for the future LISA mission, owing to the venting of residual air molecules and a better understanding of disturbances. [ 28 ]
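The several-billion-fold reduction in gravitational-wave sensitivity noted earlier follows directly from the definition of strain, h = ΔL/L: for a fixed displacement resolution δL, the smallest detectable strain scales as 1/L. The sketch below uses the arm lengths and the 0.01 nm displacement figure quoted in this article, treated purely as a scaling argument rather than a full noise model.

# Why gravitational-wave sensitivity scales with arm length: h = dL / L,
# so for a fixed displacement resolution dL the strain floor goes as 1 / L.
DISPLACEMENT_RES = 1e-11  # m, the <0.01 nm figure quoted above

ARM_LPF = 0.40            # m, LISA Pathfinder's on-board interferometer arm
ARM_LISA = 2.5e9          # m, planned LISA inter-spacecraft separation

h_lpf = DISPLACEMENT_RES / ARM_LPF
h_lisa = DISPLACEMENT_RES / ARM_LISA
print(f"LPF  strain floor: {h_lpf:.1e}")   # ~2.5e-11
print(f"LISA strain floor: {h_lisa:.1e}")  # ~4.0e-21
print(f"ratio: {ARM_LISA / ARM_LPF:.1e}")  # ~6e9, the 'several billion-fold'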
https://en.wikipedia.org/wiki/LISA_Pathfinder
LISICON is an acronym for LIthium Super Ionic CONductor, [ 1 ] which refers to a family of solids with the chemical formula Li 2+2x Zn 1−x GeO 4 . The first example of this structure was discovered in 1977, with the chemical formula Li 14 Zn(GeO 4 ) 4 . The crystal structure of LISICON consists of a network of [Li 11 Zn(GeO 4 ) 4 ] 3− units as well as 3 loosely bonded Li + ions. The weaker bonds allow the lithium ions to move easily from site to site without needing to break strong bonds to do so. In addition, this structure forms large "bottlenecks" between the interstitial positions which these ions occupy, further lowering the energy required to move from site to site. These two factors allow the lithium ions to diffuse quickly and easily through the structure. However, because of the shape of the channels through which the lithium ions can diffuse, they are limited to two-dimensional diffusion. LISICON compounds have relatively high ionic conductivity, on the order of 10 −6 S/cm at 25 °C. [ 2 ] [ 3 ] [ 4 ] [ 5 ] LISICONs readily react with lithium metal and atmospheric gases such as CO 2 ; as a result, their conductivity decreases with time. [ 6 ] There are other LISICON-type solid electrolytes which make use of other elements to achieve higher ionic conductivities. One such material has the chemical formula Li (3+x) Ge x V (1−x) O 4 , where the value of x is between 0 and 1. Two compositions, Li 3.5 Ge 0.5 V 0.5 O 4 and Li 3.6 Ge 0.6 V 0.4 O 4 , had ionic conductivities of 4×10 −5 S/cm and 10 −5 S/cm respectively, an order-of-magnitude improvement upon the base LISICON structure. These materials show good thermal stability and are stable in contact with CO 2 and the ambient atmosphere, addressing some of the problems of the original structure. [ 2 ] [ 7 ] There are also materials with the chemical formula Li (4−x) Si (1−x) P x O 4 , a solid solution between Li 4 SiO 4 and Li 3 PO 4 . This solid solution can be formed over the whole composition range at room temperature. The highest ionic conductivities are achieved at the compositions Li 3.5 Si 0.5 P 0.5 O 4 and Li 3.4 Si 0.4 P 0.6 O 4 , with conductivity on the order of 10 −6 S/cm. This results from the substitution of some Si 4+ by P 5+ in the lattice, which adds interstitial Li ions that diffuse much more easily. [ 8 ] The ionic conductivity is further improved by doping with Cl − to replace O 2− in the lattice. The compositions Li 10.42 Si 1.5 P 1.5 Cl 0.08 O 11.92 and Li 10.42 Ge 1.5 P 1.5 Cl 0.08 O 11.92 achieved ionic conductivities of 1.03×10 −5 S/cm and 3.7×10 −5 S/cm respectively. This is theorized to be due to the larger Cl − ion widening the "bottlenecks" between interstitial points, and to the weakening of the ionic bonding experienced by Li + ions because of chlorine's lower electronegativity . [ 9 ] The conductivities are almost 100 times higher in thio-LISICONs, where oxygen is replaced by sulfur, i.e. the corresponding thiosilicates . [ 6 ] The bonding between S 2− and Li + is weaker than that between O 2− and Li + , allowing the Li + ions in the sulfide structure to be far more mobile than their oxide counterparts. Ceramic thio-LISICON materials based on the chemical formula Li (4−x) Ge (1−x) P x S 4 are promising electrolyte materials, with ionic conductivities on the order of 10 −3 S/cm to 10 −2 S/cm. [ 2 ] LISICONs can be used as the solid electrolyte in lithium-based solid-state batteries , [ 2 ] such as the solid-state nickel–lithium battery .
For this application, solid lithium electrolytes require ionic conductivities greater than 10 −4 S/cm, negligible electronic conductivity, and a wide electrochemical stability window. [ 2 ]
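Transport in solid electrolytes of this kind is commonly described by an Arrhenius law, σ = (A/T)·exp(−Ea/kB·T), so a modest drop in activation energy (weaker Li + bonding, wider bottlenecks) yields a large gain in conductivity. In the sketch below, the prefactor and activation energies are invented values, tuned only to reproduce the orders of magnitude quoted in this article, and are not fitted literature parameters.

import math

K_B = 8.617e-5  # eV/K, Boltzmann constant

def conductivity(T_kelvin, prefactor, activation_energy_eV):
    """Arrhenius-type ionic conductivity, sigma = (A / T) * exp(-Ea / kB T)."""
    return (prefactor / T_kelvin) * math.exp(-activation_energy_eV / (K_B * T_kelvin))

# Invented parameters illustrating how a ~0.12 eV drop in activation
# energy raises room-temperature conductivity by roughly two orders
# of magnitude, consistent with the oxide vs. thio-LISICON comparison.
T = 298.0   # K
A = 6.0e5   # S*K/cm, assumed common prefactor
for label, ea in [("oxide LISICON (assumed Ea)", 0.55),
                  ("thio-LISICON (assumed Ea)", 0.43)]:
    print(f"{label}: {conductivity(T, A, ea):.1e} S/cm")  # ~1e-6 and ~1e-4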
https://en.wikipedia.org/wiki/LISICON