An interchromatin granule is a cluster in the nucleus of a mammalian cell that is enriched in pre-mRNA splicing factors. Interchromatin granules are located in the interchromatin regions of mammalian cell nuclei. [ 1 ] [ 2 ] [ a ] They usually appear as irregularly shaped structures that vary in size and number. They can be observed by immunofluorescence microscopy . [ 2 ] [ 7 ]
Interchromatin granules are structures undergoing constant change, and their components exchange continuously with the nucleoplasm , active transcription sites and other nuclear locations. [ 2 ] [ 7 ] [ 8 ]
Research on the dynamics of interchromatin granules has provided new insight into the functional organisation of the nucleus and gene expression.
Interchromatin granule clusters vary in size from one to several micrometers in diameter. They are composed of 20–25 nm granules [ 9 ] that are connected by thin fibrils in a beaded-chain arrangement.
Interchromatin granule clusters (IGCs) may represent small nuclear ribonucleoproteins (snRNPs) that have completed their maturation process and can be supplied to nearby areas containing perichromatin fibers where splicing is taking place. [ 7 ] Other proteins, such as RNA polymerase II and certain transcription factors , as well as poly-adenylated RNA may also be present. [ 2 ] The maturation of snRNPs takes place in part in Cajal bodies , and IGCs may donate splicing factors to the snRNP. [ 8 ] | https://en.wikipedia.org/wiki/Interchromatin_granule |
The Intercollegiate Biomathematics Alliance ( IBA ) is a syndicate of organizations focused on connecting both academic and non-academic institutions to promote the study of biomathematics , ecology , and other related fields. [ 1 ] Biomathematics is a scientific area connecting biology, ecology, mathematics, and computer science. [ 2 ] Founded in 2014 by executive director Olcay Akman of Illinois State University, [ 3 ] the Intercollegiate Biomathematics Alliance helps organizations work together and share resources that are not regularly available at all institutions. [ 4 ] The IBA is still young and typically attracts smaller colleges around the United States that tend to benefit more from being part of a consortium. However, in recent years, universities such as Arizona State University have joined, and the IBA continues to maintain connections with larger research groups such as the Mathematical Bioscience Institute (MBI) and the National Institute for Mathematical and Biological Synthesis (NIMBioS). [ 3 ]
In 2007, Olcay Akman of the mathematics department and Steven Juliano of the biological sciences department started a master's degree program at Illinois State University . The program grew and is now operated under the same umbrella as the IBA, the Center for Collaborative Studies in Mathematical Biology. [ 2 ] In 2008, the first BEER (Biomathematics Ecology Education and Research) conference was held at Illinois State University with only 10 speakers and fewer than 50 attendees. By 2014, the BEER conference was the second largest biomathematics conference globally, with more than 100 speakers. [ 2 ] Also in 2014, other universities were asked to collaborate with the common goal of educating students about biomathematics, and this led to the creation of the Intercollegiate Biomathematics Alliance (IBA). [ 2 ]
The IBA is not the first to create a network of institutions. Morehouse College in Atlanta, GA participates in its own network of institutions that helps to provide students with greater access to resources. [ 5 ] Similarly, Massachusetts Institute of Technology houses a consortium for research in energy, the MIT Energy Initiative. This network brings together the university and companies to expand research experiences and broaden educational perspectives. [ 6 ] By pooling together resources, these consortia attempt to unite organizations under a common goal and share resources in infrastructure, intellect, and academia.
As of 2021, the Intercollegiate Biomathematics Alliance has 9 member institutions. [ 1 ] In 2019, the IBA had 11 member institutions. IBA members pay dues based on their institutional size. [ 7 ] Individuals are also able to become members of the IBA with reduced rates for students. [ 1 ]
There is some incentive beyond collaboration efforts to become an IBA member. The organization offers reduced registration fees to the International Symposium on BEER, access to distance education courses, a copy of Spora-Journal of Biomathematics , and travel funding. [ 8 ]
BEAM is an undergraduate research grant that supports both faculty members and students. BEAM also provides some support for participants at CURE. [ 1 ]
BEER (Biomathematics Ecology Education and Research) is an annual research symposium that takes place in the fall. The first BEER symposium took place in 2008 at Illinois State University with only 10 speakers and 30 attendees. [ 2 ] By 2014, BEER was the second largest biomathematics conference globally. [ 2 ] In 2017, the 10th annual BEER symposium was celebrated at Illinois State University. [ 9 ] BEER has also been hosted by other institutions such as Arizona State University (2018) and University of Wisconsin- La Crosse (2019). In 2020, the 13th annual BEER symposium was hosted virtually due to the COVID-19 pandemic. BEER is expected to be hosted in 2021 by the University of Richmond in Richmond, VA. [ 1 ]
IBA-CLOUD is a high-performance computing cluster server available for IBA members to use remotely to assist in research. [ 1 ] [ 10 ] [ 2 ]
Started in 2016, CURE is an undergraduate research workshop and experience. Students typically meet for a few days to work on their scientific research skills before choosing a faculty member to work with throughout the summer. [ 11 ] [ 7 ] Students come from around the country and some will present their work at BEER in the following fall. [ 3 ]
PEER is a service that the IBA provides for the scientific community. An appropriate IBA member will work together with individuals from other scientific fields to assist in experimental design, data analysis, and writing. [ 1 ]
A program of courses designed to strengthen the mathematical biology background of students before they apply for graduate programs. Courses are available online and in person in the following areas: mathematical modeling, data analysis, computer science, and biological sciences. [ 1 ]
Letters in Biomathematics (LiB) is an open access peer-reviewed international journal dedicated to showcasing the most current research in biomathematics and related fields. [ 12 ]
Spora: A Journal of Biomathematics is an open-access research journal for undergraduate and graduate research in the field of biomathematics. Currently there are five published volumes of Spora and 31 total published papers. [ 13 ] [ 14 ]
The IBA grants fellowship awards to outstanding scholars who have made significant contributions to the field of mathematical biology. [ 15 ] | https://en.wikipedia.org/wiki/Intercollegiate_Biomathematics_Alliance |
The interconnect bottleneck refers to limits on integrated circuit (IC) performance that arise from the speed of connections between components rather than from the internal speed of the components themselves.
In 2006 it was predicted to be a "looming crisis" by 2010. [ 1 ]
Improved performance of computer systems has been achieved, in large part, by downscaling the IC minimum feature size. This allows the basic IC building block, the transistor , to operate at a higher frequency, performing more computations per second. However, downscaling of the minimum feature size also results in tighter packing of the wires on a microprocessor , which increases parasitic capacitance and signal propagation delay. Consequently, the delay due to the communication between the parts of a chip becomes comparable to the computation delay itself. This phenomenon, known as an "interconnect bottleneck", is becoming a major problem in high-performance computer systems. [ 2 ]
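As an informal illustration of why long-wire delay worsens with scaling, the sketch below estimates the RC delay of a fixed-length wire as its cross-section shrinks. All numbers are assumed, representative values (approximate copper resistivity, an assumed capacitance per micron), not data from the article or from any particular process; resistance grows roughly with the inverse square of the feature size while capacitance per unit length stays roughly constant, so the delay of a long wire grows as features shrink.

```python
# Illustrative only: assumed representative values, not data for any real process.
def wire_rc_delay(width_nm, thickness_nm, length_um,
                  resistivity_ohm_m=1.7e-8,   # approximate bulk copper resistivity
                  cap_per_um_fF=0.2):          # assumed wire capacitance per micron
    """Order-of-magnitude RC delay (picoseconds) of a metal wire."""
    area_m2 = (width_nm * 1e-9) * (thickness_nm * 1e-9)
    resistance_ohm = resistivity_ohm_m * (length_um * 1e-6) / area_m2
    capacitance_f = cap_per_um_fF * 1e-15 * length_um
    return resistance_ohm * capacitance_f * 1e12

# A fixed 1 mm wire gets slower as the wire cross-section shrinks with the process.
for feature_nm in (90, 45, 22):
    delay_ps = wire_rc_delay(feature_nm, feature_nm, length_um=1000)
    print(f"{feature_nm} nm wire: ~{delay_ps:.0f} ps")
```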
This interconnect bottleneck can be addressed by using optical interconnects to replace long metallic interconnects. [ 3 ] Such hybrid optical/electronic interconnects promise better performance even with larger designs. Optics is widely used in long-distance communications, but it has not yet been widely used in chip-to-chip or on-chip interconnections (at the centimetre or micrometre scale) because these are not yet industry-manufacturable, owing to higher costs and a lack of fully mature technologies. As optical interconnections move from computer-network applications to chip-level interconnections, requirements for high connection density and alignment reliability have become critical for the effective use of these links. There are still many materials, fabrication, and packaging challenges in integrating optical and electronic technologies. | https://en.wikipedia.org/wiki/Interconnect_bottleneck |
A fixed link or fixed crossing is a permanent, unbroken road or rail connection across water that uses some combination of bridges, tunnels, and causeways and does not involve intermittent connections such as drawbridges or ferries . [ 1 ] A bridge–tunnel combination is commonly used for major fixed links.
This is a list of proposed and actual transport links between continents and to offshore islands. See also list of bridge–tunnels for another list of fixed links including links across rivers, bays and lakes.
In 1890 William Gilpin first proposed to connect all the continents by land via the Cosmopolitan Railway . Significant elements of that proposal, such as the English Channel Tunnel , have been constructed since that era. However, the improvement of the global shipping industry and the advent of international air travel have reduced the demand for many intercontinental land connections.
There is no public highway connection between Great Britain and the European mainland ; only a rail connection, the Channel Tunnel.
A cross channel tunnel was first proposed in 1802 and construction actually started in 1881 before being abandoned. Roll-on/roll-off ferry services provided links across the channel for vehicles.
A road tunnel was proposed in 1979, but not considered viable. Construction of the Channel Tunnel started in 1988 and the tunnel opened in 1994. Automobiles and lorries / transport trucks are loaded onto the Eurotunnel Shuttle 's enclosed railway cars (similar to auto rack / motorail railway cars) for the trip through the tunnel. A service road tunnel runs the entire length of the crossing, but is closed to general use and used only during emergencies and for maintenance. Cyclists – both amateur and professional – have crossed the channel via the tunnel on special occasions. [ 2 ]
There have been proposals at various times for a second channel tunnel of some kind. [ 3 ]
Various ferry services link Ireland to Britain and France. A number of options for an Irish Sea fixed crossing have been proposed over the years, but none are currently under serious consideration. [ citation needed ] Additionally, there was a short-lived proposal for an underground roundabout beneath the Isle of Man , connecting tunnels to Ireland, Scotland, and two to England. [ 4 ] Another proposal was for an additional route from Scotland to the Isle of Man. [ 5 ]
In July 2009, the States of Jersey were considering the feasibility of building a 14-mile (23 km) tunnel between Jersey (a British Crown dependency ) and Lower Normandy in France . [ 6 ] [ 7 ] [ 8 ] There was a revival of the idea in 2018 [ 9 ] and 2021. [ 10 ] It was reported in the local media that a link between Jersey and the neighbouring island of Guernsey would cost £2.6 billion. [ 11 ]
The Crimean Bridge is a pair of parallel bridges constructed by the Russian Federation following the annexation of Crimea , to span the Kerch Strait between the Taman Peninsula of Krasnodar Krai and the Kerch Peninsula of Crimea . The bridge complex provides for both vehicular traffic and for rail. During the Russian invasion of Ukraine, the bridge has been attacked or damaged on three occasions: 8 October 2022 , 17 July 2023 and 12 August 2023. [ 12 ] As of August 2023, the bridge reopened at a limited capacity. [ 13 ]
The Øresund Bridge links southern Sweden to the Danish island of Zealand . Zealand is linked to the Danish mainland and the rest of Europe by the Great Belt Fixed Link . Most travellers between Sweden and Germany, both by road and train, use the 160 km (99 mi) shorter route with a ferry over the Fehmarn Belt southwestwards towards Hamburg or southwards to Rostock . The Fehmarn Belt Fixed Link is planned to be opened in 2029. A Gedser-Rostock Bridge is also under consideration but has been put back as the Fehmarn Belt crossing is now under construction. Proposals also exist for a fixed link from Rügen to southern Sweden, linking Berlin and the Øresund region .
Ferry services link Sweden to Finland via Åland . There are proposals for fixed links between Sweden and Finland. A tunnel could be built between Sweden and Åland, about 50 km (30 mi) long and 100–200 m (330–660 ft) deep, with the greatest depth around Märket , requiring a small detour. The area between Åland and Finland is shallow with many islands, which could be connected with bridges, some of which already exist. Between Umeå and Vaasa further north, there is a proposal to build the Kvarken Bridge , a series of bridges, the longest 26 km (16 mi), totalling 40 km (25 mi). None of these proposals have been seriously investigated.
Ferry services link Finland to Estonia, as do overland rail and road routes via Saint Petersburg in Russia. Rail Baltica is a proposal for a rail link from Finland to Estonia, Latvia, Lithuania and Poland, bypassing Russia via a Helsinki to Tallinn Tunnel . The gulf has heavy ferry traffic, and the port of Helsinki has the largest number of international passengers of any port in Europe, most of them travelling to or from Tallinn. Finland and Estonia share close linguistic, cultural, economic and historical ties, and proponents of what they call " Talsinki " (a portmanteau of the names of the two capitals) point to the Øresund region as an example of a cross-national metropolitan area linked by an underwater bridge-tunnel. A combination of a Finland–Estonia and a Finland–Sweden fixed link would reduce the need for ferries on the route the MS Estonia was on when it sank in 1994 with a loss of 852 lives, the biggest peacetime maritime disaster in the Baltic.
The Strait of Messina has busy ferry traffic. The Strait of Messina Bridge is planned, [ citation needed ] but the construction date has been postponed several times.
There is a project to link Elba with mainland Italy (through Piombino in Tuscany ) by crossing the Piombino Channel with a 16 km road tunnel. [ 14 ] The feasibility project was launched by Cacelli Partners Ltd of Riccardo Cacelli, in collaboration with the Adu London studio of the architect David Ulivagnoli. [ 15 ] [ 16 ]
There is a project for an underwater tunnel that would link Palau (in Sardinia ) with the island of La Maddalena , crossing a 3 km stretch of sea. [ 17 ] [ 18 ]
There has been interest from Hyperloop One in using Hyperloop to connect the islands. [ 19 ] [ 20 ]
There were proposals to build a railway and highway bridge over the Adriatic Sea to connect Italy and Croatia , from Ancona to Zadar along a 120 km route. The idea was presented by the Roman architect Giorgio De Romanis, and also called for the creation of a special company, "Il ponte sull'Adriatico Srl". The bridge would be suspended above the sea at a height of between 30 and 70 meters, and would also allow the laying of pipes for water, oil and gas, as well as the accommodation of telecommunication cables. [ 21 ] [ 22 ] The idea received the support of the ex-governor of the Marche , Gian Mario Spacca . Some sources considered the project more realistic than a bridge between Calabria and Sicily. [ 23 ]
The Vlora-Otranto Tunnel is a proposed undersea tunnel that aims to connect Vlorë , in Albania , with Otranto , in Italy , across the Strait of Otranto , a narrow strip of water that separates the Adriatic Sea from the Ionian Sea and is about 71 kilometers (44 miles) wide at its narrowest point.
The Boknafjord tunnel (the main part of the Rogfast project) is under construction and in 2033 will become the world's longest and deepest undersea road tunnel, 26.7 kilometres (88,000 ft) long and reaching 392 metres (1,286 ft) below sea level. It will connect the island of Bokn with the mainland at Stavanger under the open Bokna Fjord .
Tunnels and bridges are an important part of the Faroe Islands transportation network. The longest proposed one is the 25 km Suðuroyartunnilin (the Suðuroy Tunnel).
The Gibraltar Tunnel is proposed to be a rail tunnel linking Africa and Europe. A tunnel would likely be an electrified rail tunnel with car shuttles due to the depth of the Strait of Gibraltar (up to 900 metres (3,000 ft)) and the length of the tunnel making it a great challenge to remove vehicle exhaust. Similar considerations led to the Channel Tunnel linking the UK and France not being a highway tunnel. There have also been proposals for a bridge over the strait, although ship traffic would complicate this solution. Car ferries currently operate across the strait.
The proposed Strait of Sicily Tunnel would link Sicily to Tunisia . Together with the proposed Strait of Messina Bridge from Sicily to Italy this would provide a fixed link between Italy and Tunisia.
The Turkish Straits are the channel between European Turkey and Asian Turkey and consist of (from south to north) the Dardanelles , the Sea of Marmara and the Bosphorus . [ 24 ] [ 25 ] [ 26 ]
Three suspension bridges cross the Bosphorus. The first of these, the Bosphorus Bridge , is 1,074 m (3,524 ft) long and was completed in 1973. The second, named Fatih Sultan Mehmet (Bosporus II) Bridge , is 1,090 m (3,576 ft) long, and was completed in 1988 about 5 km (3 mi) north of the first bridge. The Bosphorus Bridge forms part of the O1 Motorway , while the Fatih Sultan Mehmet Bridge forms part of the Trans-European Motorway .
Construction of a third suspension bridge, the Yavuz Sultan Selim Bridge , began on May 29, 2013; [ 27 ] it was opened to traffic on August 26, 2016. [ 28 ] The bridge was built near the northern end of the Bosporus, between the villages of Garipçe on the European side and Poyrazköy on the Asian side. [ 29 ] It is part of the "Northern Marmara Motorway", which will be further integrated with the existing Black Sea Coastal Highway, and will allow transit traffic to bypass city traffic.
The Marmaray project, featuring a 13.7 km (8.5 mi) long undersea railway tunnel , opened on 29 October 2013. [ 30 ] Approximately 1,400 m (4,593 ft) of the tunnel runs under the strait, at a depth of about 55 m (180 ft).
An undersea water supply tunnel with a length of 5,551 m (18,212 ft), [ 31 ] named the Bosporus Water Tunnel , was constructed in 2012 to transfer water from the Melen Creek in Düzce Province (to the east of the Bosporus strait, in northwestern Anatolia ) to the European side of Istanbul, a distance of 185 km (115 mi). [ 31 ] [ 32 ]
The Eurasia Tunnel is a road tunnel between Kazlicesme and Goztepe , which began construction in February 2011 and opened to traffic on 21 December 2016. The Great Istanbul Tunnel , a proposed undersea road and railway tunnel, will connect Şişli and Beykoz districts.
The Çanakkale 1915 Bridge opened in 2022, crossing the strait between the cities of Gelibolu and Lapseki .
The Mubarak Peace Bridge , also known as the Egyptian-Japanese Friendship Bridge, Al Salam Bridge, or Al Salam Peace Bridge, is a road bridge crossing the Suez Canal at El-Qantara, whose name means "the bridge" in Egyptian Arabic . The bridge links the continents of Africa and Asia.
The Saudi–Egypt Causeway is a proposal for a causeway and bridge between the Sinai Peninsula in Egypt and the northern part of Saudi Arabia . This would provide a direct road route between Egypt and Saudi Arabia without going through Israel or Jordan. A causeway faces considerable political hurdles as the disruption of Israeli shipping access to the Red Sea was seen as a casus belli by Israel ahead of the Six-Day War . There is a car ferry between Safaga , Egypt and Duba, Saudi Arabia . The two uninhabited islands in the strait ( Tiran island and Sanafir island ), which might be used for a bridge, tunnel or causeway, were disputed between Egypt and Saudi Arabia until President Abdel Fatah al-Sisi of Egypt officially ceded them to Saudi Arabia in 2016/2017. [ 33 ] [ 34 ] The potential construction of a fixed link was cited in some media reports as contributing to the cession. [ 35 ] [ 36 ]
The Bridge of the Horns is a proposed construction project to build a bridge between the coasts of Djibouti and Yemen across the Bab-el-Mandeb , the strait between the Red Sea and Gulf of Aden . [ 37 ] There are no ferry services on this route as of 2018.
Saudi Arabia has considered developing a 440 km tunnel across the Red Sea to link its industrial city of Jazan with the port of Massawa in Eritrea . The idea had the support of Pakistan and the Chinese Belt and Road Initiative . [ 38 ] [ 39 ]
There is a proposal for a Palk Strait bridge between India and Sri Lanka . The Indian Boat Mail train and ferry service linked India and Sri Lanka until the First World War . An India–Sri Lanka HVDC Interconnection is under consideration to link the electricity networks of the two countries.
Mainland Peninsular Malaysia is linked to Penang Island by two road bridges: the Penang Bridge and the Sultan Abdul Halim Muadzam Shah Bridge (Penang Second Bridge). To the south, it is linked to Singapore Island across the Straits of Johor by the Johor–Singapore Causeway and the Malaysia–Singapore Second Link ; the former also carries Malaysia's West Coast Line to the island.
There are proposals to link Johor (in Malaysia ) and Riau (in Indonesia ) via a Malacca Strait Bridge or an underwater tunnel crossing the Strait of Malacca and some islands. The longest single connection would be 17.5 kilometres, and the total length would be between 39 and 40 kilometres. [ 40 ] [ 41 ] There is also a proposal for a Singapore Strait crossing linking Singapore with the Riau archipelago of Indonesia , most likely via the island of Batam . [ 42 ] Both projects would link Indonesia (specifically the islands of Sumatra and Java) to mainland Asia.
Passenger and vehicle ferries link the various islands of Indonesia , the Philippines , Singapore , Malaysia , and Papua New Guinea .
There are proposals to link Java , the most populated Island of Indonesia , to Sumatra via a proposed Sunda Strait Bridge and from Sumatra to Singapore and/or Malaysia via the Malacca Strait Bridge .
The Guangdong–Hainan Ferry, or the Yuehai Ferry (part of the Guangdong–Hainan Railway ) [ 43 ] is a vehicle and train ferry connecting Hainan Island to Guangdong in mainland China . The ferries run across the Qiongzhou Strait , between Zhanjiang , Guangdong and Haikou , Hainan . A road-rail bridge has been proposed. [ 44 ]
Bohai Strait tunnel project is a proposed connection that would connect the Chinese cities of Yantai and Dalian across the Bohai Strait . [ 45 ]
The Taiwan Strait Tunnel Project is a proposed undersea tunnel to connect Pingtan in the People's Republic of China to Hsinchu in northern Taiwan as part of the G3 Beijing–Taipei Expressway . First proposed in 1996, [ 46 ] the project has since been subject to a number of academic discussions and feasibility studies, including by the China Railway Engineering Corporation . [ 47 ] There exist cross strait ferries, both within outlying islands of Taiwan and between the PRC and Taiwan. The political status of Taiwan complicates any such proposal.
The Korean government has considered building underwater tunnels to China; the proposed route would run between Incheon and Weihai , with consideration being given to building an intermediate artificial island along the 341-kilometre route. Other Korean cities, such as Ongjin , Hwaseong and Pyeongtaek , have been considered as starting points for routes to China. [ 45 ] The proposal is also part of the Chinese comprehensive development plan for the Bohai area . [ 48 ]
The Jeju Undersea Tunnel is a project to connect the South Korean provinces of South Jeolla and Jeju across the Jeju Strait , with intermediate stops at the islands of Bogildo and Chujado . [ 49 ] The total length of the proposed railway is 167 km, including a 66 km surface interval from Mokpo to Haenam, a 28 km bridge section from Haenam to Bogil Island, and a 73 km stretch from Bogil to Chuja and Jeju Islands.
The Busan–Geoje Fixed Link (or Geoga Bridge) is an 8.2-km bridge-tunnel fixed link that connects the South Korean city of Busan to Geoje Island . [ 50 ]
The " Korea Japan Friendship Tunnel System " is a proposal for a fixed link from the city of Fukuoka on Kyūshū , Japan, to the port city of Busan in Korea via four islands. The maximum ocean depth in this area is 146 m (479 ft). Similar proposals have been discussed for decades by Korean and Japanese politicians. A road bridge links Kyūshū to the main Japanese island of Honshu .
The Seikan Tunnel has provided a rail link from the main Japanese Island of Honshu to the northernmost Japanese island of Hokkaido since 1988. The proposed Sakhalin-Hokkaido Tunnel would link Hokkaido to the Russian island of Sakhalin . When combined with the proposed Sakhalin Tunnel between Sakhalin and the Russian Mainland and an extension of the Baikal Amur Mainline this would give a rail link from Japan to Russia and the mainland of Asia.
The Hong Kong–Zhuhai–Macau Bridge links Hong Kong , Macau , and Zhuhai in mainland China . Opened on October 24, 2018, it is the longest fixed crossing in the world. [ citation needed ]
Shikoku and Kyushu are the only adjacent major Japanese islands not directly connected by a fixed link. Road travel between the two is possible only via Honshu , a detour of up to 600 km.
Since 1995, the Ōita and Ehime prefectures have jointly conducted research into the technical feasibility of bridges over the Hōyo Strait , along with basic research into natural and social conditions; in 1998 the "Hoyo Kaikyo Bridge Survey Report" concluded that a bridge would be technically feasible. The main bridge proposed in the report is a four-span suspension bridge with a central tower height of 376 m, a central span length of 3,000 m, and a bridge length of about 8,400 m; with the strait crossed by two bridges, the total length would be about 12.7 km. [ 51 ] The total project cost is currently estimated at about 1.3 trillion yen (US$12.1 billion).
The Hoyo Kaikyo Route Promotion Council conducted a survey comparing various crossing technologies (bridges, tunnels) and modes of transportation (automobiles, railways) in 1997, and "Transportation method comparison study report" was published. According to the report, in the case of bridges, road bridges are technically possible, but due to the long span, it is difficult to use them as railway bridges and combined bridges. [ 52 ]
The Qatar Bahrain Causeway was a planned causeway between the two Arab states of Qatar and Bahrain . It was expected that a ferry service would be established between the two countries in 2017. [ 53 ]
Due to the Qatar diplomatic crisis and Bahrain's siding with Saudi Arabia, the bridge is very unlikely ever to be built.
The King Fahd Causeway is a series of bridges and causeways connecting Saudi Arabia and Bahrain . At 25 km (16 mi), the western terminus of the causeway is the al-Khour neighbourhood of Khobar, Saudi Arabia and the eastern terminus is Al Jasra, Bahrain.
Iran and Qatar (which would provide most of the project's financing) have plans for an underwater tunnel connecting the two countries, planned to be the longest tunnel in the world at 190 km. [ 54 ] It would link the Iranian port of Bandar-e Deyr to an unspecified location in Qatar across the Persian Gulf with both road and railway sections; however, a road tunnel is not considered very feasible due to the long distance. The idea had the support of the managing director of Iran's Ports and Maritime Organisation, Ali Akbar Safaei , and Iranian President Ebrahim Raisi , who expected the creation of a joint Qatar–Iran committee with the Emir of Qatar, Tamim bin Hamad Al Thani . [ 55 ] [ 56 ] [ 57 ] [ 58 ] [ 59 ] It would also create a direct overland route between Saudi Arabia and Iran. [ 60 ]
Iran has proposed building a bridge over the Strait of Hormuz that would link Iran economically to the GCC countries and Yemen through a 39 km road link between Oman's Musandam exclave and southern Iran . The idea had the support of the Iranian ambassador to Oman, Ali Akbar Sibeveih . [ 61 ] [ 62 ] [ 63 ]
The Persian Gulf Bridge between Qeshm and Bandar Abbas ( Iran ) would be a 2.4 km (1.5 mi) long road-rail bridge connecting Qeshm Island to mainland Iran , from the historic port of Laft to Pahal port in Bandar Abbas ( Hormozgan Province ). An undersea tunnel was proposed instead of a bridge but was rejected due to high costs. [ 64 ]
In 2010, Oman's ruler, Sultan Qaboos bin Saeed, asked the government to plan for the construction of a bridge, carrying a railroad, connecting Masirah Island on Oman's eastern coast to the mainland. [ 65 ]
The Saudis were exploring two options, a 400 km tunnel or a bridge, to link Gwadar (in Pakistan ) with Muscat (in Oman ) at the mouth of the Strait of Hormuz . The objective was to route its trade (including oil supplies) away from Iran and Qatar, because of the Iran–Saudi Arabia proxy conflict , and also to extend the Chinese Belt and Road Initiative . [ 38 ] [ 39 ]
A 1,200-mile (1,900 km) tunnel carrying maglev trains has been proposed to connect India 's biggest city, Mumbai, with the emirate of Fujairah ( United Arab Emirates ) under the Arabian Sea, a high-speed railway line intended to transport passengers, tourists and workers in just two hours. It had the support of Abdullah Alshehhi , chief consultant of the Abu Dhabi National Advisory Office (a Masdar City -based consultancy), and the Gulf Cooperation Council . [ 66 ] [ 67 ] [ 68 ] It is considered the longest submarine tunnel project in the world and would reach a depth of 15,000 feet below the surface of the Indian Ocean . [ 69 ] [ 70 ]
Future plans also include train stations at Karachi (Pakistan) and Muscat (Oman), provision for a road within the tunnel for cars and trucks, a floating hotel, shopping centres and fuel stations, and pipelines for oil and water. Expansion of the project might tie it into the One Belt One Road Initiative, linking the China-Pakistan economic corridor at Gwadar Port with the UAE through the Fujairah port to complement the Chinese silk road. [ 71 ]
One of the longest tunnels in the world, and, depending on definitions (total length versus length actually under water), either the longest or the second longest underwater tunnel, ahead of or behind the Channel Tunnel, the Seikan Tunnel links Japan's northernmost main island Hokkaido to Honshu. Initially built only to Cape gauge , the rail line running through the tunnel has since been converted to dual gauge to allow standard gauge services, particularly Shinkansen . The Tōya Maru accident of 1954, in which a train ferry sank in a typhoon , killing over a thousand people, was a major factor in tilting the decision towards construction of the tunnel. The tunnel opened in 1988 and the Hokkaido Shinkansen started running through it in 2016.
The Philippines is planning to build a bridge that will span Manila Bay and connect the provinces of Bataan and Cavite . The Bataan–Cavite Interlink Bridge , once completed, will be 32.15 kilometers (19.98 mi) long and will consist of two cable-stayed bridges, with a span of 400 and 900 meters (1,300 and 3,000 ft) each. The National Economic and Development Authority (NEDA) approved the bridge project in early 2020 with a budget of ₱175.7 billion . The implementation of the bridge project is projected to last six years. [ 72 ]
In October 2020, the Department of Public Works and Highways (DPWH) signed a $59 million engineering design contract, awarded to the joint venture of T. Y. Lin International from the US and Korea's Pyunghwa Engineering Consultants Ltd., who are working in tandem with Geneva-based Renardet S.A. and local firm DCCD Engineering Corporation. [ 73 ]
As of March 2023, the project's detailed engineering design is already 70% complete, according to DPWH. The construction of the bridge is targeted to start in late 2023. [ 74 ]
There is a proposal to span the Bering Strait with a bridge or tunnel called the Intercontinental Peace Bridge , the TKM-World Link or the AmerAsian Peace Tunnel . This would link the American Cape Prince of Wales with the Russian Cape Dezhnev . The link would consist of three tunnels connecting Alaska and Russia via two islands: Little Diomede (USA) and Big Diomede (Russia) . The longest single tunnel would be 24 miles (39 km). Since the strait at the site of the proposed crossing has a maximum known depth of 170 feet (52 m), the tunnels might be dug with conventional tunnel boring machines of the type used to build the Channel Tunnel. The three-tunnel proposal is considered [ who? ] preferable to a bridge due to severe environmental conditions, especially the inescapable winter ice damage.
Each proposed tunnel would be shorter than some current tunnels. The Channel Tunnel linking England with mainland Europe is about 31.34 miles (50.44 km) long; the Seikan Tunnel , an ocean tunnel linking Hokkaido with Honshu in Japan is 33.46 miles (53.85 km) long; and the Swiss Gotthard Base Tunnel through the Alps , opened in 2016, is 35.7 miles (57.5 km) long.
For a bridge or tunnel to be useful, a road or railway must be built to connect to it, despite the very difficult climate and very sparse local population. In Alaska, a 700-mile (1,100 km) link would be needed, and in Russia, a link more than 1,200 miles (1,900 km) long would have to be constructed. Until around 2010, such road connections were suggested only by enthusiasts, but at that time both the Russian government and the Alaskan state government started to consider such roads. The Alaska Railroad is currently the only railroad in Alaska and is not connected to the wider North American rail network, but plans for an A2A Railway linking it to Alberta, Canada, and from there to the rest of the North American rail network are under consideration.
There are no viable options for a fixed link between Asia and Oceania . Excluding the Indonesia–Papua New Guinea border that divides the island of New Guinea , the shortest distances between Southeast Asia and Oceania still cover a significant distance across the sea between Indonesia and Australia . The closest would be Badu Island in Queensland 's Torres Strait Islands , which is 123 kilometres (76 mi) away from Merauke Regency 's coastal border with Papua New Guinea. Otherwise, in the Arafura Sea , Rimbija Island in the Northern Territory 's uninhabited Wessel Islands is 300 kilometres (190 mi) south of Yos Sudarso Island , also in Merauke Regency; and across the Timor Sea , the northern tip of Melville Island is still 317 kilometres (197 mi) south of Selaru in the Tanimbar Islands of Maluku . [ 75 ]
A tunnel or bridge between the Australian mainland and the island of New Guinea , bridging the Torres Strait , is not considered economically feasible owing to the great distance. Cape York in northern Queensland is 140 km away from New Guinea . This is a very long distance compared to existing tunnels or bridges, and the demand for car travel is low; as of 2009, [ 76 ] there were no car ferries between Australia and Papua New Guinea . Passenger travel is by air or private boat only.
The Cook Strait between the North Island and South Island of New Zealand has been suggested for a fixed link. The length would be at least 22 km, and the water depth is around 200 meters. [ 77 ] The project is generally considered too complicated and costly to be realised.
Ferry services link Vancouver Island to British Columbia on the Canadian Mainland and to the State of Washington in the US .
Proposals have been made for a fixed link to Vancouver Island for over a century. Because of the extreme depth and soft seabed of the Georgia Strait , and the potential for seismic activity, a bridge or tunnel would face monumental engineering, safety, and environmental challenges at a prohibitive cost. [ 78 ]
Prince Edward Island is linked to New Brunswick on the Canadian mainland by the Confederation Bridge which opened in 1997.
Various proposals have been considered for a fixed link consisting of bridges, tunnels, and/or causeways across the Strait of Belle Isle , connecting the Province of Newfoundland and Labrador 's mainland Labrador region with the island of Newfoundland . This strait has a minimum width of 17.4 km (10.8 mi).
Nine bridges and 13 tunnels (including railroad tunnels) connect the New York City boroughs of Brooklyn and Queens , on Long Island , to Manhattan and Staten Island and, via these, to Newark in New Jersey and The Bronx on the mainland of New York state. However, no fixed crossing of the Long Island Sound exists east of New York City; most traffic from the mainland United States must pass through the city to access Long Island. Passenger and auto ferries connect Suffolk County on Long Island northward across the Sound to the mainland of New York state and eastward [ clarification needed ] to the state of Connecticut . There have been various proposals, none successful, to replace these ferries with a fixed link across Long Island Sound to provide an alternate route around New York City for Long Island-bound traffic.
The Chesapeake Bay Bridge–Tunnel (CBBT) is a 23-mile-long (37 km) fixed link crossing the mouth of the United States' Chesapeake Bay , connecting the Delmarva Peninsula with Virginia Beach, Virginia . It opened in 1964.
The Overseas Highway is a 113-mile (182 km) highway carrying U.S. Route 1 (US 1) through the Florida Keys to Key West .
Ferry services between the US and Cuba and between Cuba and Haiti were common before 1960, but were suspended due to the United States embargo against Cuba . After the normalization of U.S.-Cuba diplomatic relations by U.S. President Barack Obama and Cuban President Raúl Castro , some American companies began plans to provide regular ferry services between Florida and Cuba. However, President Donald Trump reinstated many travel restrictions towards Cuba during his term, including prohibition of direct ferry services. [ 79 ]
There is only one regular ferry to Havana from a foreign port: Cancún , Mexico . [ 80 ]
A ferry travels between Mayagüez in Puerto Rico and Santo Domingo in the Dominican Republic . [ 81 ]
There have been proposals for a direct link between Key West (US) to Havana (Cuba) by tunnel or bridge [ 82 ] [ 83 ] and also for a direct tunnel between Florida and The Bahamas . [ 84 ]
In 2009, there was a proposal to build a bridge between Ceiba (on the main island of Puerto Rico ) and Vieques island, with an estimated cost of $600 million. The main goal was to cut travel time to and from the small island town, which is currently served by daily ferry runs. [ 85 ]
After the independence of Trinidad and Tobago , members of the government have spoken of constructing a physical link between the islands of Trinidad and Tobago , wanting to physically unify the country. [ 86 ] As public discussion and commentary ensued over feasibility and cost, [ 87 ] an alternative proposal was made for a Gulf of Paria crossing, a shorter connection between Trinidad and Venezuela . [ 88 ]
In 2017, China showed interest in the construction of a mega bridge in the Caribbean Sea to connect Tobago and Trinidad. [ 89 ] [ 90 ]
In modern times, the Department of Civil & Environmental Engineering at the University of the West Indies at St. Augustine has developed studies for a possible bridge linking Venezuela and Tobago, but only as a case study, without official support. [ 91 ]
In Venezuela, the General Rafael Urdaneta Bridge connects Maracaibo with much of the rest of the country, crossing the narrowest part of Lake Maracaibo in Zulia (northwestern Venezuela). There is also a plan for a second bridge over Lake Maracaibo , a mixed road-rail bridge that would link the Zulia cities of Santa Cruz de Mara and Punta de Palmas , located on opposite sides of the lake, in the Miranda Municipality. [ 92 ]
A notable break in the Pan-American Highway is a section of land located in the Darién Province in Panama and the Colombian border called the Darién Gap . It is an 87-kilometre (54 mi) stretch of rainforest. The gap has been crossed by adventurers on bicycle, motorcycle, all-terrain vehicle , and foot, dealing with jungle, swamp, insects, kidnapping, and other hazards.
Some people, groups, indigenous populations, and governments are opposed to completing the Darién portion of the highway. Reasons for opposition include protecting the rain forest, containing the spread of tropical diseases, protecting the livelihood of indigenous peoples in the area, and reducing the spread of drug trafficking and its associated violence from Colombia.
The Hercilio Luz Bridge was the first bridge constructed to link the Island of Santa Catarina to mainland Brazil . Two additional crossings connecting the island to the mainland exist: the Colombo Salles Bridge and the Pedro Ivo Campos Bridge .
The Chacao Channel bridge is a construction project for a bridge crossing the Chacao Channel , intended to unite the Isla Grande de Chiloé with the Chilean continental territory in the Los Lagos Region . The opening of the bridge is planned for 2025. [ 93 ] [ 94 ] [ 95 ] It will be the longest suspension bridge in Latin America. [ 96 ] Earlier suggestions of a connection by tunnel were rejected due to financial problems. [ 97 ]
At the end of the 19th century, Argentine President Domingo Sarmiento presented the " Argirópolis " project, which included building railway bridges uniting both countries via Martín García island.
Several land connection projects across the Río de la Plata have been evaluated by the governments of Argentina and Uruguay (and Mercosur ), with the objective of erecting a link for road traffic, rail traffic, or both. Although most of the proposals involve the construction of bridges, others also mention sub-fluvial tunnels as a possible alternative. The project would link Colonia del Sacramento in Uruguay to Punta Lara in Argentina. [ 98 ] [ 99 ] [ 100 ] [ 101 ]
A transatlantic tunnel is a theoretical tunnel that would span the Atlantic Ocean between North America and Europe, perhaps enabling mass transit. Some proposals envision technologically advanced trains reaching speeds of 500 to 8,000 kilometres per hour (300 to 5,000 mph). [ 102 ] Most conceptions of the tunnel envision it between the United States and the United Kingdom ‒ or more specifically between New York City and London.
Advantages compared to air travel would be increased speed and use of electricity instead of oil-based fuel.
The main barriers to constructing such a tunnel are cost, with estimates of between $88 billion and $175 billion, as well as the limits of current materials science . [ 103 ]
The proposed routes are a direct link from the US to Europe, or from the US, crossing Canada , Greenland , Iceland and the Faroe Islands , to the United Kingdom , using an underwater vacuum tube train . [ 104 ] [ 105 ]
The Maritime Research Institute Netherlands (Marin), in 2019, tested a model trans-Atlantic underwater tunnel between the United States and Europe capable of supporting hyperloop . [ 106 ] [ 107 ] [ 108 ] | https://en.wikipedia.org/wiki/Intercontinental_and_transoceanic_fixed_links |
An intercooler is a heat exchanger used to cool a gas after compression. [ 1 ] Often found in turbocharged engines, intercoolers are also used in air compressors , air conditioners , refrigeration and gas turbines .
Most commonly used with turbocharged engines, an intercooler is used to counteract the heat of compression and heat soak in the pressurised intake air. By reducing the temperature of the intake air, the air becomes denser (allowing more fuel to be injected, resulting in increased power) and less likely to suffer from pre-ignition or knocking . Additional cooling can be provided by externally spraying a fine mist onto the intercooler surface, or even into the intake air itself , to further reduce intake charge temperature through evaporative cooling .
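As a rough illustration of the density gain described above, the sketch below treats the intake charge as an ideal gas cooled at approximately constant boost pressure; the temperatures used are assumed, illustrative values rather than figures from the article.

```python
# Illustrative sketch (assumed numbers): ideal-gas estimate of how much denser
# the intake charge becomes when an intercooler cools it at roughly constant
# boost pressure.  For an ideal gas, density is proportional to P / T.

def charge_density_gain(t_before_c, t_after_c):
    """Ratio of charge density after vs. before cooling at constant pressure."""
    t_before_k = t_before_c + 273.15
    t_after_k = t_after_c + 273.15
    return t_before_k / t_after_k

# e.g. compressor discharge at 120 C cooled to 50 C by the intercooler
gain = charge_density_gain(120, 50)
print(f"Density increase: {100 * (gain - 1):.1f}%")   # roughly +21.7%
```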
Intercoolers can vary dramatically in size, shape and design, depending on the performance and space requirements of the system. Many passenger cars use either front-mounted intercoolers located in the front bumper or grill opening, or top-mounted intercoolers located above the engine. An intercooling system can use an air-to-air design, an air-to-liquid design, or a combination of both.
In automotive engines where multiple stages of forced-induction are used (e.g. a sequential twin-turbo or twin-charged engine), the intercooling usually takes place after the last turbocharger/supercharger. However it is also possible to use separate intercoolers for each stage of the turbocharging/supercharging, such as in the JCB Dieselmax land speed record racing car. Some aircraft engines also use an intercooler for each stage of the forced induction. [ citation needed ] In engines with two-stage turbocharging, the term intercooler can specifically refer to the cooler between the two turbochargers and the term aftercooler is used for the cooler located between the second-stage turbo and the engine. However, the terms intercooler and charge-air cooler are also often used regardless of the location in the intake system. [ 2 ]
Air-to-air intercoolers are heat exchangers that transfer heat from the intake air directly to the atmosphere. Alternatively, air-to-liquid intercoolers transfer the heat from the intake air to intermediate liquid (usually water), which in turn transfers the heat to the atmosphere. The heat exchanger that transfers the heat from the fluid to the atmosphere operates in a similar fashion to the main radiator in a water-cooled engine's cooling system, or in some cases the engine's cooling system is also used for the intercooling system. Air-to-liquid intercoolers are usually heavier than their air-to-air counterparts, due to additional components making up the system (e.g. water circulation pump, radiator, fluid, and plumbing).
The majority of marine engines use air-to-liquid intercoolers, since the water of the lake, river or sea can easily be accessed for cooling purposes. In addition, most marine engines are located in closed compartments where obtaining a good flow of cooling air for an air-to-air unit would be difficult. Marine intercoolers take the form of a tubular heat exchanger with the air passing around a series of tubes within the cooler casing, and sea water circulating inside the tubes. The main materials used for this kind of application are meant to resist sea water corrosion: Copper-Nickel for the tubes and bronze for the sea water covers.
An alternative to using intercoolers, rarely used these days, was to inject excess fuel into the combustion chamber so that the vaporization process would cool the cylinders and prevent knocking. However, the downsides of this method were increased fuel consumption and exhaust gas emissions . [ 3 ]
Intercoolers are used to remove the waste heat from the first stage of two-stage air compressors. Two-stage air compressors are manufactured because of their inherent efficiency. The cooling action of the intercooler is principally responsible for this higher efficiency, bringing it closer to Carnot efficiency . Removing the heat-of-compression from the discharge of the first stage has the effect of densifying the air charge. This, in turn, allows the second stage to produce more work from its fixed compression ratio. Adding an intercooler to the setup requires additional investments. | https://en.wikipedia.org/wiki/Intercooler |
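The effect of intercooling on two-stage compression can be illustrated with a simple ideal-gas estimate. The sketch below assumes isentropic stages, air treated as an ideal gas, and an ideal intercooler that returns the charge to the inlet temperature between stages; the pressure ratio and temperatures are illustrative values only, not figures from the article.

```python
# Sketch (assumed ideal gas, isentropic stages): compare specific compression
# work for a single stage against two stages with an ideal intercooler that
# returns the air to the inlet temperature between stages.

GAMMA = 1.4          # ratio of specific heats for air
R = 287.0            # J/(kg K), specific gas constant for air
T_IN = 293.15        # K, assumed inlet temperature

def isentropic_stage_work(pressure_ratio, t_in=T_IN):
    """Specific work (J/kg) for one isentropic compression stage."""
    k = (GAMMA - 1.0) / GAMMA
    return GAMMA / (GAMMA - 1.0) * R * t_in * (pressure_ratio ** k - 1.0)

overall_ratio = 9.0
single = isentropic_stage_work(overall_ratio)

# Splitting the pressure ratio equally and intercooling back to T_IN
# minimises the total work for two stages.
per_stage = overall_ratio ** 0.5
two_stage = 2 * isentropic_stage_work(per_stage)

print(f"single stage : {single / 1000:.1f} kJ/kg")    # ~257 kJ/kg
print(f"two stages   : {two_stage / 1000:.1f} kJ/kg") # ~217 kJ/kg with intercooling
```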
Interdependence theory is a social exchange theory developed in social psychology that examines how interpersonal relationships are defined through interpersonal interdependence, which is "the process by which interacting people influence one another's experiences". [ 1 ] p. 65 Originally proposed by Harold H. Kelley and John Thibaut in 1959, the theory provides a conceptual framework for analyzing the structure of interpersonal situations and how individuals' outcomes depend not only on their own actions but also on the actions of others.
The most basic principle of the theory is encapsulated in the equation I = ƒ[A, B, S], which states that all interpersonal interactions (I) are a function (ƒ) of the given situation (S), plus the actions and characteristics of the individuals (A & B) in the interaction. [ 2 ] [ 3 ] This equation represents how people's behaviors, thoughts, and emotions in relationships are influenced by both situational structures and psychological processes .
The theory's four basic assumptions are the principle of structure, the principle of transformation, the principle of interaction, and the principle of adaptation, each of which is described below.
Interdependence theory was first introduced by Harold Kelley and John Thibaut in 1959 in their book, The Social Psychology of Groups . [ 4 ] This book drew inspiration from social exchange theory and game theory , and provided key definitions and concepts instrumental to the development of the interdependence framework. [ 5 ] [ 4 ] [ 6 ] The theory was completely formalized in 1978 in their second book, Interpersonal Relations: A Theory of Interdependence. [ 7 ] Harold Kelley continued the development of interdependence theory in 2003 with the book An Atlas of Interpersonal Situations , [ 8 ] which expanded on the previous work by adding two further dimensions of interdependence and by analyzing 21 specific situation types. [ 6 ] [ 5 ] [ 8 ] In addition, the work of Kelley and Thibaut built on the work of Kurt Lewin , who first defined interdependence and stated: "The essence of a group is not the similarity or dissimilarity of its members, but their interdependence . . . A change in the state of any subpart changes the state of any other subpart . . . Every move of one member will, relatively speaking, deeply affect the other members, and the state of the group" (pp. 84–88). [ 9 ] [ 6 ] [ 4 ]
All interactions are set within the context of their given situation (known in Interdependence theory as structure). In order to best analyze this factor, Interdependence theory presents a taxonomy of situations that includes the six dimensions listed below. A key concept with the Principle of Structure is Affordance, or what the situation affords (makes possible) for the individuals within the interaction. [ 6 ] [ 5 ]
Transformation is a psychological process through which individuals consider the possible outcomes that result from both their own actions and the actions of others, weighing these outcomes (rewards and costs) against possible courses of action.
The Principle of Interaction (also referred to as the SABI model) is used to assess the variables that affect any given interaction. This model states that interactions (I) are a function (ƒ) of the situation (S), plus Person A's (A) motives, traits, and actions and Person B's (B) motives, traits, and actions (I = ƒ[A, B, S]). [ 10 ] [ 5 ]
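A toy sketch of the SABI idea follows. The situation is represented as a small outcome matrix and the numbers are entirely hypothetical, chosen only to show that each person's outcome depends jointly on both parties' actions and on the situation; it is not a model taken from the theory's original texts.

```python
# Toy illustration (hypothetical numbers) of I = f(A, B, S): the situation S is
# a 2x2 outcome matrix, and persons A and B each choose one of two actions.

# S: outcomes[(a_action, b_action)] = (outcome_for_A, outcome_for_B)
situation = {
    ("cooperate", "cooperate"): (8, 8),
    ("cooperate", "defect"):    (2, 10),
    ("defect",    "cooperate"): (10, 2),
    ("defect",    "defect"):    (4, 4),
}

def interaction(a_action, b_action, s=situation):
    """I = f(A, B, S): outcomes depend on both actions and the situation."""
    return s[(a_action, b_action)]

print(interaction("cooperate", "defect"))   # (2, 10): A's outcome hinges on B's choice
```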
There are several factors that individuals bring to the Interaction. They are their consideration of Outcomes, Comparison Level, and Comparison Level for Alternatives.
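On the usual reading of these concepts, satisfaction is judged by comparing obtained outcomes with the comparison level (CL), while dependence on the relationship is judged by comparing them with the comparison level for alternatives (CLalt). The minimal sketch below uses made-up numbers to illustrate that reading; it is a simplified interpretation, not a formula quoted from Thibaut and Kelley.

```python
# Minimal sketch (illustrative numbers): satisfaction compares the obtained
# outcome to the comparison level (CL); dependence compares it to the
# comparison level for alternatives (CLalt).

def evaluate(outcome, cl, cl_alt):
    return {
        "satisfaction": outcome - cl,      # > 0: relationship feels rewarding
        "dependence":   outcome - cl_alt,  # > 0: better than the best alternative
    }

print(evaluate(outcome=7, cl=5, cl_alt=3))
# {'satisfaction': 2, 'dependence': 4}  -> satisfying and stable
print(evaluate(outcome=4, cl=6, cl_alt=2))
# {'satisfaction': -2, 'dependence': 2} -> unsatisfying, yet still dependent
```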
Adaptation refers to the process by which repeated exposure to similar situations gives rise to habitual responses that have, on average, produced positive outcomes. In addition to this experience-based process, adaptation can also arise from the rules of social norms . [ 10 ] [ 5 ] For example, person A might enter a situation similar to ones he or she has experienced before; based on these previous experiences, person A's actions are guided toward obtaining the same positive outcomes that the earlier situations produced. Similarly, social norms guide individuals toward specific, socially approved actions.
Interdependence theory has been used by academics to "analyze group dynamics, power and dependence, social comparison, conflict and cooperation, attribution and self-presentation, trust and distrust, emotions, love and commitment, coordination and communication, risk and self-regulation, performance and motivation, social development, and neuroscientific model of social interaction" (Van Lange & Balliet, 2014, p. 67). [ 6 ] [ 8 ] [ 16 ] [ 17 ]
In addition, the theory provides a practical framework for understanding the underlying psychological factors that motivate the individuals with whom you interact (in both personal and professional settings), as well as a framework for understanding the psychological factors that motivate your own actions when interacting with others.
Sources: [ 10 ] [ 5 ] | https://en.wikipedia.org/wiki/Interdependence_theory |
An interdigital transducer (IDT) is a device that consists of two interlocking comb-shaped arrays of metallic electrodes (in the fashion of a zipper). These metallic electrodes are deposited on the surface of a piezoelectric substrate , such as quartz or lithium niobate , to form a periodic structure. [ 1 ]
An IDT's primary function is to convert electric signals to surface acoustic waves (SAW) by generating periodically distributed mechanical forces via the piezoelectric effect (an input transducer).
The same principle is applied to the conversion of SAW back to electric signals (an output transducer). These processes of generation and reception of SAW can be used in different types of SAW signal processing devices, such as band pass filters, delay lines, resonators, sensors, etc.
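A commonly cited design relation is that the launched surface acoustic wave has a wavelength equal to the IDT electrode period, so the transducer's centre frequency is the substrate's SAW velocity divided by that period. The sketch below uses this relation with assumed, approximate velocity values for two common substrates; actual velocities depend on the crystal cut and propagation direction.

```python
# Rough sketch (textbook relation, assumed typical values): the SAW wavelength
# equals the IDT electrode period p, so the centre frequency is f0 = v / p,
# where v is the SAW velocity of the piezoelectric substrate.

def idt_centre_frequency(saw_velocity_m_s, electrode_period_um):
    """Centre frequency in MHz for an IDT with the given finger-pair period."""
    wavelength_m = electrode_period_um * 1e-6
    return saw_velocity_m_s / wavelength_m / 1e6

# Assumed approximate SAW velocities (orientation dependent)
print(idt_centre_frequency(3159, 10.0))  # ST-cut quartz, 10 um period  -> ~316 MHz
print(idt_centre_frequency(3990, 4.0))   # 128-deg YX lithium niobate   -> ~998 MHz
```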
The IDT was first proposed by Richard M. White and F. W. Voltmer in 1965.
| https://en.wikipedia.org/wiki/Interdigital_transducer |
Interdigitation is the interlinking of biological components that resembles the fingers of two hands being locked together. It can be a naturally occurring or man-made state.
Naturally occurring interdigitation includes skull sutures, which develop during periods of brain growth, remain thin and straight at first, and later develop complex fractal interdigitations that provide interlocking strength. [ 1 ] A layer of the retina where photoreception occurs is called the interdigitation zone. [ 2 ] Adhesion or diffusive bonding occurs when sections of polymer chains from one surface interdigitate with those of an adjacent surface. In the dermis , dermal papillae (DP) (singular papilla, diminutive of Latin papula, 'pimple') are small, nipple-like extensions of the dermis into the epidermis , also known as interdigitations. The distal convoluted tubule (DCT), a portion of the kidney nephron , can be recognized by several distinct features, including lateral membrane interdigitations with neighboring cells. [ 3 ]
Some hypotheses contend that crown shyness , the interdigitation of canopy branches, leads to "reciprocal pruning" of adjacent trees. [ 4 ]
Interdigitation is also found in biological research. Interdigitation fusion is a method of preparing calcium- and phosphate-loaded liposomes. [ 5 ] Drugs inserted in the bilayer biomembrane may influence the lateral organization of the lipid membrane, with interdigitation of the membrane to fill volume voids. [ 6 ] A similar interdigitation process involves investigating dissipative particle dynamics (DPD) simulations by adding alcohol molecules to the bilayers of double-tail lipids. [ 7 ] Pressure-induced interdigitation is used to study hydrostatic pressure of bicellular dispersions containing anionic lipids. [ 8 ] | https://en.wikipedia.org/wiki/Interdigitation |
The interesting number paradox is a humorous paradox which arises from the attempt to classify every natural number as either "interesting" or "uninteresting". The paradox states that every natural number is interesting. [ 1 ] The " proof " is by contradiction : if there exists a non-empty set of uninteresting natural numbers, there would be a smallest uninteresting number – but the smallest uninteresting number is itself interesting because it is the smallest uninteresting number, thus producing a contradiction .
"Interestingness" concerning numbers is not a formal concept in normal terms, but an innate notion of "interestingness" seems to run among some number theorists . Famously, in a discussion between the mathematicians G. H. Hardy and Srinivasa Ramanujan about interesting and uninteresting numbers, Hardy remarked that the number 1729 of the taxicab he had ridden seemed "rather a dull one", and Ramanujan immediately answered that it is interesting, being the smallest number that is the sum of two cubes in two different ways . [ 2 ] [ 3 ]
Attempting to classify all numbers this way leads to a paradox or an antinomy [ 4 ] of definition. Any hypothetical partition of natural numbers into interesting and uninteresting sets seems to fail. Since the definition of interesting is usually a subjective, intuitive notion, it should be understood as a semi-humorous application of self-reference in order to obtain a paradox.
The paradox is alleviated if "interesting" is instead defined objectively: for example, the smallest natural number that does not appear in an entry of the On-Line Encyclopedia of Integer Sequences (OEIS) was originally found to be 11630 on 12 June 2009. [ 5 ] The number fitting this definition later became 12407 from November 2009 until at least November 2011, then 13794 as of April 2012, until it appeared in sequence OEIS : A218631 as of 3 November 2012. Since November 2013, that number was 14228, at least until 14 April 2014. [ 5 ] In May 2021, the number was 20067. (This definition of uninteresting is possible only because the OEIS lists only a finite number of terms for each entry. [ 6 ] For instance, OEIS : A000027 is the sequence of all natural numbers , and if continued indefinitely would contain all positive integers. As it is, the sequence is recorded in its entry only as far as 77.) Depending on the sources used for the list of interesting numbers, a variety of other numbers can be characterized as uninteresting in the same way. [ 7 ] For instance, the mathematician and philosopher Alex Bellos suggested in 2014 that a candidate for the lowest uninteresting number would be 224 because it was, at the time, "the lowest number not to have its own page on [the English-language version of] Wikipedia ". [ 8 ] As of August 2024, this number is 315 .
However, as there are many significant results in mathematics that make use of self-reference (such as Gödel's incompleteness theorems ), the paradox illustrates some of the power of self-reference, [ nb 1 ] and thus touches on serious issues in many fields of study. The paradox can be related directly to Gödel's incompleteness theorems if one defines an "interesting" number as one that can be computed by a program that contains fewer bits than the number itself. [ 9 ] Similarly, instead of trying to quantify the subjective feeling of interestingness, one can consider the length of a phrase needed to specify a number. For example, the phrase "the least number not expressible in fewer than eleven words" sounds like it should identify a unique number, but the phrase itself contains only ten words, and so the number identified by the phrase would have an expression in fewer than eleven words after all. This is known as the Berry paradox . [ 10 ]
In 1945, Edwin F. Beckenbach published a short letter in The American Mathematical Monthly suggesting that
One might conjecture that there is an interesting fact concerning each of the positive integers. Here is a "proof by induction" that such is the case. Certainly, 1, which is a factor of each positive integer, qualifies, as do 2, the smallest prime; 3, the smallest odd prime; 4, Bieberbach's number; etc . Suppose the set S of positive integers concerning each of which there is no interesting fact is not vacuous, and let k be the smallest member of S . But this is a most interesting fact concerning k ! Hence S has no smallest member and therefore is vacuous. Is the proof valid? [ 11 ]
Constance Reid included the paradox in the 1955 first edition of her popular mathematics book From Zero to Infinity , but removed it from later editions. [ 12 ] Martin Gardner presented the paradox as a "fallacy" in his Scientific American column in 1958, including it with six other "astonishing assertions" whose purported proofs were also subtly erroneous. [ 1 ] A 1980 letter to The Mathematics Teacher mentions a jocular proof that "all natural numbers are interesting" having been discussed three decades earlier. [ 13 ] In 1977, Greg Chaitin referred to Gardner's statement of the paradox and pointed out its relation to an earlier paradox of Bertrand Russell on the existence of a smallest undefinable ordinal (despite the fact that all sets of ordinals have a smallest element and that "the smallest undefinable ordinal" would appear to be a definition). [ 4 ] [ 14 ]
In The Penguin Dictionary of Curious and Interesting Numbers (1987), David Wells commented that 39 "appears to be the first uninteresting number", a fact that made it "especially interesting", and thus 39 must be simultaneously interesting and dull. [ 15 ] | https://en.wikipedia.org/wiki/Interesting_number_paradox |
An interface in the Java programming language is an abstract type that is used to declare a behavior that classes must implement. It is similar to a protocol . Interfaces are declared using the interface keyword , and may only contain method signatures and constant declarations (variable declarations that are declared to be both static and final ). In all versions of Java before Java 8, interface methods could not contain an implementation (method bodies). Starting with Java 8, default [ 1 ] : 99 and static [ 1 ] : 7 methods may have an implementation in the interface definition. [ 2 ] Then, in Java 9, private and private static methods were added. A Java interface can therefore now contain several different kinds of methods.
Interfaces cannot be instantiated , but rather are implemented. A class that implements an interface must implement all of the non-default methods described in the interface, or be an abstract class . Object references in Java may be specified to be of an interface type; in each case, they must either be null , or be bound to an object that implements the interface.
One benefit of using interfaces is that they simulate multiple inheritance . All classes in Java must have exactly one base class , the only exception being java.lang.Object (the root class of the Java type system ); multiple inheritance of classes is not allowed. However, an interface may inherit multiple interfaces and a class may implement multiple interfaces.
Interfaces are used to encode similarities which the classes of various types share, but do not necessarily constitute a class relationship. For instance, a human and a parrot can both whistle ; however, it would not make sense to represent Humans and Parrots as subclasses of a Whistler class. Rather, they would most likely be subclasses of an Animal class (likely with intermediate classes), but both would implement the Whistler interface.
Another use of interfaces is being able to use an object without knowing its type of class, but rather only that it implements a certain interface. For instance, if one were annoyed by a whistling noise, one may not know whether it is a human or a parrot, because all that could be determined is that a whistler is whistling. The call whistler.whistle() will call the implemented method whistle of object whistler no matter what class it has, provided it implements Whistler . In a more practical example, a sorting algorithm may expect an object of type Comparable . Thus, without knowing the specific type, it knows that objects of that type can somehow be sorted.
For example, the whistling behaviour mentioned above can be captured in an interface.
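A minimal sketch of such an interface (the whistle method name is assumed for illustration):

public interface Whistler {
    void whistle();   // implicitly public and abstract
}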
Interfaces are defined with the following syntax (compare to Java's class definition ):
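In outline, and with placeholder names, a declaration follows this pattern:

[visibility] interface InterfaceName [extends OtherInterface1, OtherInterface2, ...] {
    // constant declarations (implicitly public, static and final)
    // abstract method signatures
}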
Example: public interface Interface1 extends Interface2 { }
The body of the interface contains abstract methods , but since all methods in an interface are, by definition, abstract, the abstract keyword is not required. Since the interface specifies a set of exposed behaviors, all methods are implicitly public .
Thus, a simple interface may be written as follows.
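One possible sketch, assuming a helper type Prey for illustration:

interface Prey { }                     // assumed helper type

public interface Predator {
    boolean chasePrey(Prey p);         // implicitly public and abstract
    void eatPrey(Prey p);
}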
The member type declarations in an interface are implicitly static, final and public, but otherwise they can be any type of class or interface. [ 3 ]
The syntax for implementing an interface uses the implements keyword in the class declaration. For example:
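Continuing the hypothetical Predator sketch above, a Lion class might implement it as follows:

public class Lion implements Predator {
    @Override
    public boolean chasePrey(Prey p) {
        // chasing logic would go here
        return p != null;
    }

    @Override
    public void eatPrey(Prey p) {
        // eating logic would go here
    }
}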
If a class implements an interface and does not implement all its methods, it must be marked as abstract . If a class is abstract, one of its subclasses is expected to implement its unimplemented methods, though if any of the abstract class' subclasses do not implement all interface methods, the subclass itself must be marked again as abstract .
Classes can implement multiple interfaces:
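For instance, reusing the assumed Predator and Whistler sketches, a single class can implement both:

public class WhistlingHunter implements Predator, Whistler {
    @Override public boolean chasePrey(Prey p) { return p != null; }
    @Override public void eatPrey(Prey p) { /* eating logic */ }
    @Override public void whistle() { System.out.println("whistling"); }
}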
Interfaces can share common class methods:
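One way to read this: two interfaces may declare a method with the same signature, and a single implementation in the class then satisfies both (an illustrative sketch):

interface LandAnimal    { void eat(); }
interface AquaticAnimal { void eat(); }

class Amphibian implements LandAnimal, AquaticAnimal {
    @Override public void eat() { }    // one method satisfies both interfaces
}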
However, a given class cannot implement the same interface multiple times, not even as a generic interface with different type arguments:
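A sketch of code that the compiler rejects, using the standard Comparable interface for illustration:

// Does not compile: the same generic interface cannot be implemented
// twice with different type arguments.
class Badge implements Comparable<Integer>, Comparable<String> {
    public int compareTo(Integer other) { return 0; }
    public int compareTo(String other)  { return 0; }
}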
Interfaces are commonly used in the Java language for callbacks , [ 4 ] as Java does not allow multiple inheritance of classes, nor does it allow the passing of methods (procedures) as arguments. Therefore, in order to pass a method as a parameter to a target method, current practice is to define and pass a reference to an interface as a means of supplying the signature and address of the parameter method to the target method rather than defining multiple variants of the target method to accommodate each possible calling class.
Interfaces can extend several other interfaces, using the same extends formula as described above. For example,
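the following sketch (with an assumed Venomous interface; Predator is as in the earlier sketch)

interface Venomous {
    void injectVenom(Prey p);
}

interface VenomousPredator extends Predator, Venomous {
    // additional members may be declared here
}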
is legal and defines a subinterface. It allows multiple inheritance, unlike classes. Predator and Venomous may possibly define or inherit methods with the same signature, say kill(Prey p) . When a class implements VenomousPredator it will implement both methods simultaneously.
Some common Java interfaces are: | https://en.wikipedia.org/wiki/Interface_(Java) |
In the physical sciences , an interface is the boundary between two spatial regions occupied by different matter , or by matter in different physical states . The interface between matter and air , or matter and vacuum , is called a surface , and studied in surface science . In thermal equilibrium , the regions in contact are called phases , and the interface is called a phase boundary . An example for an interface out of equilibrium is the grain boundary in polycrystalline matter.
The importance of the interface depends on the type of system: the bigger the quotient area/volume, the greater the effect the interface will have. Consequently, interfaces are very important in systems with large interface area-to-volume ratios, such as colloids .
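For example, a spherical droplet of radius $r$ has $A/V = \frac{4\pi r^2}{(4/3)\pi r^3} = \frac{3}{r}$, so halving the droplet radius doubles the relative contribution of the interface.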
Interfaces can be flat or curved. For example, oil droplets in a salad dressing are spherical but the interface between water and air in a glass of water is mostly flat.
Surface tension is the physical property which governs interface processes involving liquids. For a liquid film on flat surfaces, the liquid-vapor interface remains flat to minimize interfacial area and system free energy . For a liquid film on rough surfaces, the surface tension tends to keep the meniscus flat, while the disjoining pressure makes the film conformal to the substrate. The equilibrium meniscus shape is a result of the competition between the capillary pressure and the disjoining pressure. [ 1 ]
Interfaces may cause various optical phenomena , such as refraction . Optical lenses serve as an example of a practical application of the interface between glass and air.
One topical interface system is the gas-liquid interface between aerosols and other atmospheric molecules.
| https://en.wikipedia.org/wiki/Interface_(matter) |
Interface and colloid science is an interdisciplinary intersection of branches of chemistry , physics , nanoscience and other fields dealing with colloids , heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between that of a true solution and a suspension , i.e. between 1 and 1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Like the particles of a true solution, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper, but colloidal particles are big enough to be blocked by parchment paper or an animal membrane.
Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology , and microfluidics, among others.
There are many books dedicated to this scientific discipline, [ 1 ] [ 2 ] [ 3 ] [ 4 ] and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology . [ 5 ] | https://en.wikipedia.org/wiki/Interface_and_colloid_science |
Interface bloat is a phenomenon in software design where an interface incorporates too many (often unnecessary) operations or elements, causing issues such as difficult navigation and reduced usability. [ 1 ] [ 2 ]
While the term bloat can refer to a variety of phenomena in software design, [ 3 ] interface bloat refers to the phenomenon where the user interface (UI) becomes unnecessarily complex and overloaded with features, options, or elements that can overwhelm users. [ 4 ] This often leads to a cluttered experience, decreased usability, and increased difficulty for users to accomplish their tasks efficiently. [ 1 ] [ 2 ] Interface bloat can arise from various sources, including the addition of excessive functionality without proper consideration of user needs, the merging of disparate features, or pressure to include numerous options to cater to a broader audience. [ 2 ]
| https://en.wikipedia.org/wiki/Interface_bloat |
Interface conditions describe the behaviour of the electromagnetic fields (the electric field E, the electric displacement field D, the magnetic flux density B and the magnetic field H) at the interface of two materials. The differential forms of Maxwell's equations require that there is always an open neighbourhood around the point to which they are applied in which the fields are differentiable. At the interface of two different media with different values of electrical permittivity and magnetic permeability , the fields are in general not differentiable, so that condition does not apply.
However, the interface conditions for the electromagnetic field vectors can be derived from the integral forms of Maxwell's equations .
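For the electric field E, the resulting condition can be written in the standard form
$\mathbf{n}_{12} \times (\mathbf{E}_2 - \mathbf{E}_1) = \mathbf{0}$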
where: $\mathbf{n}_{12}$ is the normal vector from medium 1 to medium 2.
Therefore, the tangential component of E is continuous across the interface.
Two of our sides are infinitesimally small, leaving only
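a term of the form (standard shrinking-loop argument; the notation is assumed here)
$(\mathbf{E}_2 \cdot \hat{\mathbf{l}})\,l - (\mathbf{E}_1 \cdot \hat{\mathbf{l}})\,l = 0,$
where $\hat{\mathbf{l}}$ is a unit vector along the long sides of the loop and $l$ is their length.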
After dividing by l, and rearranging,
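one obtains, in the notation above,
$(\mathbf{E}_2 - \mathbf{E}_1) \cdot \hat{\mathbf{l}} = 0.$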
This argument works for any tangential direction. The difference in the electric fields dotted into any tangential vector is zero, meaning only the components of $\mathbf{E}$ parallel to the normal vector can change between the media. Thus, the difference of the electric field vectors is parallel to the normal vector, and two parallel vectors always have a cross product of zero.
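For the displacement field D, the corresponding condition takes the standard form
$\mathbf{n}_{12} \cdot (\mathbf{D}_2 - \mathbf{D}_1) = \sigma_s$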
Here, $\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2, and $\sigma_s$ is the surface charge density between the media (free charges only, not those arising from polarization of the materials).
This can be deduced by using Gauss's law and similar reasoning as above.
Therefore, the normal component of D has a step of surface charge on the interface surface. If there is no surface charge on the interface, the normal component of D is continuous.
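For the magnetic flux density B, the corresponding condition takes the standard form
$\mathbf{n}_{12} \cdot (\mathbf{B}_2 - \mathbf{B}_1) = 0$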
where: $\mathbf{n}_{12}$ is the normal vector from medium 1 to medium 2.
Therefore, the normal component of B is continuous across the interface (the same in both media).
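For the magnetic field H, the corresponding condition takes the standard form
$\mathbf{n}_{12} \times (\mathbf{H}_2 - \mathbf{H}_1) = \mathbf{j}_s$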
where: $\mathbf{n}_{12}$ is the unit normal vector from medium 1 to medium 2, and $\mathbf{j}_s$ is the surface current density between the two media (free currents only, not those arising from polarisation of the materials).
Therefore, the tangential component of H is discontinuous across the interface by an amount equal to the magnitude of the surface current density. The normal components of H in the two media are in the inverse ratio of the permeabilities. [ 1 ]
If there are no charges or surface currents at the interface, then the tangential component of H and the normal component of D are both continuous.
If there are charges or surface currents at the interface, then the tangential component of H and the normal component of D are not continuous. [ 1 ]
The boundary conditions must not be confused with the interface conditions. For numerical calculations, the space in which the calculation of the electromagnetic field is carried out must be restricted by some boundaries. This is done by assuming conditions at the boundaries which are physically correct and numerically solvable in finite time. In some cases, the boundary conditions reduce to a simple interface condition. The most usual and simple example is a fully reflecting (electric wall) boundary, where the outer medium is considered to be a perfect conductor. In some cases, it is more complicated: for example, reflection-less (i.e. open) boundaries are simulated as a perfectly matched layer or a magnetic wall, which do not reduce to a single interface condition. | https://en.wikipedia.org/wiki/Interface_conditions_for_electromagnetic_fields |
In the context of chemistry and molecular modelling , the Interface force field (IFF) is a force field for classical molecular simulations of atoms , molecules , and assemblies up to the large nanometer scale, covering compounds from across the periodic table . [ 1 ] It employs a consistent classical Hamiltonian energy function for metals, oxides, and organic compounds, linking biomolecular and materials simulation platforms into a single platform. The reliability is often higher than that of density functional theory calculations at more than a million times lower computational cost. IFF includes a physical-chemical interpretation for all parameters as well as a surface model database that covers different cleavage planes and surface chemistry of included compounds. The Interface Force Field is compatible with force fields for the simulation of primarily organic compounds and can be used with common molecular dynamics and Monte Carlo codes. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Structures and energies of included chemical elements and compounds are rigorously validated and property predictions are up to a factor of 100 more accurate relative to earlier models.
IFF was developed by Hendrik Heinz and his research group in 2013, based on preliminary work dating back to 2003 that includes a new rationale for atomic charges, use of energy expressions, interpretation of parameters, and a series of outperforming force field parameters for minerals, metals, and polymers. [ 1 ] The force fields covered new chemical space and were one to two orders of magnitude more accurate than prior models where available, with apparently no restrictions to extend them further across the periodic table.
As early as the late 1960s, interatomic potentials were developed, for example, for amino acids and later served the CHARMM program. The fraction of covered chemical space was small, however, considering the size of the periodic table, and compatible interatomic potentials for inorganic compounds remained largely unavailable. [ 6 ] Differing energy functions and a lack of interpretation and validation of parameters restricted modeling to isolated compounds with unpredictable errors. Assumptions of formal charges, a lack of rationale for Lennard-Jones parameters and even for bonded terms, fixed atoms, as well as other approximations often led to collapsed structures and random energy differences when allowing atom mobility. A concept for consistent simulations of inorganic-organic interfaces, which formed the basis of IFF, was first introduced in 2003. [ 7 ]
A major obstacle was the poor definition of atomic charges in molecular models, especially for inorganic compounds, due to reliance on quantum chemistry calculations and partitioning methods that may be suitable for field-based but not for point-based charge distributions necessary in force fields. As a result, uncertainties in quantum-mechanically derived point charges were often 100% or higher, clearly unsuited to quantify chemical bonding or chemical processes in force fields and in molecular simulations. [ 8 ] IFF utilizes a method to assign atomic charges that translates chemical bonding accurately into molecular models, including metals, oxides, minerals, and organic molecules. The models reproduce multipole moments internal to a chemical compound on the basis of experimental data for electron deformation densities, dipole moments (often known to <1% error), as well as consideration of atomization energies , ionization energies , coordination numbers , and trends relative to other chemically similar compounds in the periodic table (the Extended Born Model). [ 8 ] The method ensures a combination of experimental data and theory to represent chemical bonding and yields up to ten times more reliable and reproducible atomic charges in comparison to the use of quantum methods, with typical uncertainties of 5%. [ 9 ] [ 10 ] This approach is essential to carry out consistent all-atom simulations of compounds across the periodic table that vary widely in the type of chemical bonding and in internal polarity. IFF also allows the inclusion of specific features of the electronic structure such as π electrons in graphitic materials and aromatic compounds [ 11 ] as well as image charges in metals. [ 12 ]
Another distinctive characteristic of IFF is the systematic reproduction of structures and energies to validate the classical Hamiltonian . First, the quality of structural predictions is assessed by validation of lattice parameters and densities from X-ray data, which has been common in molecular simulations. Second, in addition, IFF uses surface and cleavage energies for solids from experimental measurements to ensure a reliable potential energy surface. Third, in addition, force field parameters and reference data are considered at standard temperature and pressure . This protocol is far more practical than using lattice parameters at a temperature of 0 K and cohesive (vaporization) energies at up to 3000 K, which is commonly the case to assess ab-initio calculations, as then the conditions are far from practical utility and experimental data for validation may be limited or not at all available. [ 13 ] As a result of the advances in IFF, hydration energies, adsorption energies, thermal, and mechanical properties can often be computed in quantitative agreement with measurements without further parameter modifications. The IFF parameters also have a physical-chemical interpretation and allow chemical analogy as an effective method to derive parameters for chemically similar, yet not parameterized compounds in good accuracy.
Alternative approaches based on gray-box or black-box fitting of force field parameters, e.g., using lattice parameters and mechanical properties (the 2nd derivative of the energy) as target quantities, lack interpretability and frequently incur 50% to 500% error in surface and interfacial energies, which is usually not sufficient to accelerate materials design. [ 1 ]
IFF covers metals, oxides, 2D materials, cement minerals, and organic compounds. [ 1 ] The typical accuracy is ~0.5% for lattice parameters, ~5% for surface energies, and ~10% for elastic moduli, including documented variations for individual compounds. All-atom models and simulation inputs for bulk materials and interfaces can be built using Materials Studio, [ 2 ] VMD , LAMMPS , CHARMM-GUI , as well as other editing programs. [ 14 ] Simulations and analysis can be carried out using many molecular dynamics programs such as Discover, Forcite, LAMMPS , NAMD , GROMACS , and CHARMM . IFF employs the same potential energy function as other common force fields (CHARMM, [ 15 ] AMBER, [ 16 ] OPLS-AA, [ 17 ] CVFF, [ 18 ] DREIDING, [ 19 ] GROMOS, [ 20 ] PCFF, [ 21 ] COMPASS), including options for 12-6 and 9-6 Lennard-Jones potentials , and can be used standalone or as a plugin to these force fields to utilize existing parameters.
Accurate interatomic potentials are essential to analyze assemblies of atoms, molecules, and nanostructures up to the small microscale. IFF is used in molecular dynamics simulations of nanomaterials and biological interfaces. Structures of up to tens of thousands of atoms can be analyzed on a workstation, and up to a billion atoms using supercomputing . Examples include properties of metals and alloys, [ 22 ] [ 23 ] mineral-organic interfaces, [ 24 ] protein- and DNA-nanomaterial interactions, [ 25 ] earth and building materials, carbon nanostructures, batteries, and polymer composites. [ 26 ] [ 27 ] The simulations visualize atomically resolved processes and quantify relationships to macroscale properties that are elusive from experiments due to limitations in imaging and tracking of atoms. Modeling thereby complements experimental studies by X-ray diffraction , electron microscopy and tomography, such as transmission electron microscopy and atomic force microscopy , as well as several types of spectroscopy , calorimetry , and electrochemical measurements. Knowledge of the 3D atomic structures and dynamic changes over time is key to understanding the function of sensors, molecular signatures of diseases, and material properties. Computations with IFF can also be used to screen large numbers of hypothetical materials for guidance in synthesis and processing.
A database in IFF provides simulation-ready models of crystal structures and crystallographic surfaces of metals and minerals. Often, variable surface chemistry is important, such as in pH-responsive surfaces of silica , hydroxyapatite , and cement minerals . [ 28 ] The model options in the database incorporate extensive experimental data, which can be selected and customized by users. For example, models for silica cover the flexible area density of silanol groups and siloxide groups according to data from differential thermal gravimetry , spectroscopy, zeta potentials , surface titration, and pK values . [ 29 ] Similarly, hydroxyapatite minerals in bone and teeth display surfaces that differ in dihydrogenphosphate versus monohydrogenphosphate content as a function of pH value. The surface chemistry is often as critical as good interatomic potentials to predict the dynamics of electrolyte interfaces, molecular recognition, and surface reactions.
IFF is primarily a classical potential with limited applicability to chemical reactions. Quantitative simulation of reactions is, however, a natural extension due to the interpretable representation of chemical bonding and electronic structure. Simulations of the relative activity of Pd nanoparticle catalysts in C-C Stille coupling , hydration reactions, and cis-trans isomerization reactions of azobenzene have been reported. [ 30 ] A general pathway to simulate reactions is via QM/MM simulations. [ 31 ] Other pathways to implement reactions are user-defined changes in bond connectivity during the simulations, and the use of a Morse potential instead of a harmonic bond potential to enable bond breaking in stress-strain simulations. | https://en.wikipedia.org/wiki/Interface_force_field |
Interfacial polymerization is a type of step-growth polymerization in which polymerization occurs at the interface between two immiscible phases (generally two liquids), resulting in a polymer that is constrained to the interface. [ 1 ] [ 2 ] [ 3 ] There are several variations of interfacial polymerization, which result in several types of polymer topologies, such as ultra- thin films , [ 4 ] [ 5 ] nanocapsules , [ 6 ] and nanofibers , [ 7 ] to name just a few. [ 1 ] [ 2 ]
Interfacial polymerization (then termed "interfacial polycondensation") was first discovered by Emerson L. Wittbecker and Paul W. Morgan in 1959 as an alternative to the typically high-temperature and low-pressure melt polymerization technique. [ 3 ] As opposed to melt polymerization, interfacial polymerization reactions can be accomplished using standard laboratory equipment and under atmospheric conditions. [ 3 ]
This first interfacial polymerization was accomplished using the Schotten–Baumann reaction , [ 3 ] a method to synthesize amides from amines and acid chlorides . In this case, a polyamide , usually synthesized via melt polymerization, was synthesized from diamine and diacid chloride monomers. [ 1 ] [ 3 ] The diacid chloride monomers were placed in an organic solvent (benzene) and the diamine monomers in a water phase, such that when the monomers reached the interface they would polymerize. [ 3 ]
Since 1959, interfacial polymerization has been extensively researched and used to prepare not only polyamides but also polyanilines , polyimides , polyurethanes , polyureas , polypyrroles , polyesters , polysulfonamides, polyphenyl esters and polycarbonates . [ 2 ] [ 8 ] In recent years, polymers synthesized by interfacial polymerization have been used in applications where a particular topological or physical property is desired, such as conducting polymers for electronics, water purification membranes , and cargo-loading microcapsules. [ 1 ] [ 2 ]
The most commonly used interfacial polymerization methods fall into 3 broad types of interfaces: liquid-solid interfaces, liquid-liquid interfaces, and liquid-in-liquid emulsion interfaces. [ 1 ] In the liquid-liquid and liquid-in-liquid emulsion interfaces, either one or both liquid phases may contain monomers. [ 1 ] [ 3 ] There are also other interface categories, rarely used, including liquid-gas, solid-gas, and solid-solid. [ 1 ]
In a liquid-solid interface, polymerization begins at the interface, and results in a polymer attached to the surface of the solid phase. In a liquid-liquid interface with monomer dissolved in one phase, polymerization occurs on only one side of the interface, whereas in liquid-liquid interfaces with monomer dissolved in both phases, polymerization occurs on both sides. [ 2 ] An interfacial polymerization reaction may proceed either stirred or unstirred. In a stirred reaction, the two phases are combined using vigorous agitation, resulting in a higher interfacial surface area and a higher polymer yield. [ 2 ] [ 3 ] In the case of capsule synthesis, the size of the capsule is directly determined by the stirring rate of the emulsion. [ 2 ]
Although interfacial polymerization appears to be a relatively straightforward process, there are several experimental variables that can be modified in order to design specific polymers or modify polymer characteristics. [ 2 ] [ 3 ] Some of the more notable variables include the identity of the organic solvent, monomer concentration, reactivity, solubility, the stability of the interface, and the number of functional groups present on the monomers. [ 2 ] [ 3 ] The identity of the organic solvent is of utmost importance, as it affects several other factors such as monomer diffusion, reaction rate, and polymer solubility and permeability. [ 3 ] The number of functional groups present on the monomer is also important, as it affects the polymer topology: a di-substituted monomer will form linear chains whereas a tri- or tetra-substituted monomer forms branched polymers. [ 3 ]
Most interfacial polymerizations are synthesized on a porous support in order to provide additional mechanical strength, allowing delicate nano films to be used in industrial applications. [ 2 ] In this case, a good support would consist of pores ranging from 1 to 100 nm. [ 2 ] Free-standing films, by contrast, do not use a support, and are often used to synthesize unique topologies such as micro- or nanocapsules. [ 2 ] In the case of polyurethanes and polyamides especially, the film can be pulled continuously from the interface in an unstirred reaction, forming "ropes" of polymeric film. [ 3 ] [ 8 ] As the polymer precipitates, it can be withdrawn continuously.
The molecular weight distribution of polymers synthesized by interfacial polymerization is broader than the Flory–Schulz distribution due to the high concentration of monomers near the interfacial site. [ 9 ] Because the two solutions used in this reaction are immiscible and the rate of reaction is high, this reaction mechanism tends to produce a small number of long polymer chains of high molecular weight . [ 10 ]
Interfacial polymerization has proven difficult to model accurately due to its nature as a nonequilibrium process . [ 7 ] [ 9 ] [ 11 ] These models provide either analytical or numerical solutions. [ 9 ] [ 11 ] The wide range of variables involved in interfacial polymerization has led to several different approaches and several different models. [ 1 ] [ 7 ] [ 9 ] [ 11 ] One of the more general models of interfacial polymerization, summarized by Berezkin and co-workers, involves treating interfacial polymerization as a heterogenous mass transfer combined with a second-order chemical reaction. [ 9 ] In order to take into account different variables, this interfacial polymerization model is divided into three scales, yielding three different models: the kinetic model, the local model, and the macrokinetic model. [ 9 ]
The kinetic model is based on the principles of kinetics, assumes uniform chemical distribution, and describes the system at a molecular level. [ 9 ] This model takes into account thermodynamic qualities such as mechanisms, activation energies, rate constants, and equilibrium constants. [ 9 ] The kinetic model is typically incorporated into either the local or the macrokinetic model in order to provide greater accuracy. [ 9 ]
The local model is used to determine the characteristics of polymerization at a section around the interface, termed the diffusion boundary layer. [ 9 ] This model can be used to describe a system in which the monomer distribution and concentration are inhomogeneous, and is restricted to a small volume. [ 9 ] Parameters determined using the local model include the mass transfer weight, the degree of polymerization, topology near the interface, and the molecular weight distribution of the polymer. [ 9 ] Using local modeling, the dependence of monomer mass transfer characteristics and polymer characteristics as a function of kinetic, diffusion, and concentration factors can be analyzed. [ 9 ] One approach to calculating a local model can be represented by the following differential equation:
$\frac{\partial c_i}{\partial t} = \frac{\partial}{\partial y}\left(D_i\,\frac{\partial c_i}{\partial y}\right) + J_i$
in which $c_i$ is the molar concentration of functional groups in the $i$th component of a monomer or polymer, $t$ is the elapsed time, $y$ is a coordinate normal to the surface/interface, $D_i$ is the molecular diffusion coefficient of the functional groups of interest, and $J_i$ is the thermodynamic rate of reaction. [ 9 ] Although precise, no analytical solution exists for this differential equation, and as such solutions must be found using approximate or numerical techniques. [ 9 ]
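As an illustration, a minimal explicit finite-difference sketch of this equation for two reacting functional-group species, with $J_i$ treated as a simple second-order loss term and with assumed parameter values, might look as follows:

public class LocalModelSketch {
    public static void main(String[] args) {
        int n = 200;                                   // grid points across the diffusion boundary layer
        double dy = 1e-8, dt = 1e-9;                   // assumed grid spacing [m] and time step [s]
        double D = 1e-11, k = 1e3;                     // assumed diffusion coefficient and rate constant
        double[] cA = new double[n], cB = new double[n];
        for (int i = 0; i < n; i++) {                  // A initially on one side of the interface, B on the other
            cA[i] = i < n / 2 ? 1.0 : 0.0;
            cB[i] = i < n / 2 ? 0.0 : 1.0;
        }
        for (int step = 0; step < 100000; step++) {
            double[] a = cA.clone(), b = cB.clone();
            for (int i = 1; i < n - 1; i++) {
                double reaction = k * cA[i] * cB[i];   // J_i modelled as a second-order sink
                a[i] += dt * (D * (cA[i + 1] - 2 * cA[i] + cA[i - 1]) / (dy * dy) - reaction);
                b[i] += dt * (D * (cB[i + 1] - 2 * cB[i] + cB[i - 1]) / (dy * dy) - reaction);
            }
            cA = a; cB = b;
        }
        System.out.printf("interface concentrations: cA=%.4f cB=%.4f%n", cA[n / 2], cB[n / 2]);
    }
}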
In the macrokinetic model, the progression of an entire system is predicted. One important assumption of the macrokinetic model is that each mass transfer process is independent, and can therefore be described by a local model. [ 9 ] The macrokinetic model may be the most important, as it can provide feedback on the efficiency of the reaction process, important in both laboratory and industrial applications. [ 9 ]
More specific approaches to modeling interfacial polymerization are described by Ji and co-workers, and include modeling of thin-film composite (TFC) membranes, [ 11 ] tubular fibers, hollow membranes, [ 7 ] and capsules. [ 1 ] [ 12 ] These models take into account both reaction- and diffusion-controlled interfacial polymerization under non-steady-state conditions. [ 7 ] [ 11 ] One model is for thin film composite (TFC) membranes, and describes the thickness of the composite film as a function of time:
$t = -\left(\frac{E_0}{B_0} + \frac{A_0 D_0}{B_0^2} + \frac{C_0 A_0^2}{B_0^2}\right)\ln\!\left(1 - \frac{X}{X_{max}}\right) - \frac{C_0}{2B_0}X^2 - \left(\frac{D_0}{B_0} + \frac{C_0 A_0}{B_0^2}\right)X$
where $A_0$, $B_0$, $C_0$, $D_0$, and $E_0$ are constants determined by the system, $X$ is the film thickness, and $X_{max}$ is the maximum value of the film thickness, which can be determined experimentally. [ 11 ]
Another model for interfacial polymerization of capsules, or encapsulation, is also described:
$t = A_0 R_{min}^5 E_0 I_4 + B_0 R_{min}^4 E_0 I_3 + C_0 R_{min}^2 E_0 I_2 + D_0 R_{min} E_0 I_1$
where $A_0$, $B_0$, $C_0$, $D_0$, $E_0$, $I_1$, $I_2$, $I_3$, and $I_4$ are constants determined by the system and $R_{min}$ is the minimum value of the inside diameter of the polymeric capsule wall. [ 12 ]
There are several assumptions made by these and similar models, including but not limited to uniformity of monomer concentration, temperature, and film density, and second-order reaction kinetics. [ 7 ] [ 11 ]
Interfacial polymerization has found much use in industrial applications, especially as a route to synthesize conducting polymers for electronics. [ 1 ] [ 2 ] Conductive polymers synthesized by interfacial polymerization such as polyaniline (PANI), Polypyrrole (PPy), poly(3,4-ethylenedioxythiophene), and polythiophene (PTh) have found applications as chemical sensors , [ 13 ] fuel cells , [ 14 ] supercapacitors , and nanoswitches. [ 1 ]
PANI nanofibers are the most commonly used for sensing applications. [ 1 ] [ 2 ] These nanofibers have been shown to detect various gaseous chemicals, such as hydrogen chloride (HCl), ammonia (NH 3 ), hydrazine (N 2 H 4 ), chloroform (CHCl 3 ), and methanol (CH 3 OH). [ 1 ] PANI nanofibers can be further fine-tuned by doping and modifying the polymer chain conformation, among other methods, to increase selectivity to certain gases. [ 1 ] [ 2 ] [ 13 ] A typical PANI chemical sensor consists of a substrate, an electrode, and a selective polymer layer. [ 13 ] PANI nanofibers, like other chemiresistors , detect by a change in electrical resistance/conductivity in response to the chemical environment. [ 13 ]
PPy-coated ordered mesoporous carbon (OMC) composites can be used in direct methanol fuel cell applications. [ 1 ] [ 14 ] The polymerization of PPy onto the OMC reduces interfacial electrical resistances without altering the open mesopore structure, making PPy-coated OMC composites a more ideal material for fuel cells than plain OMCs. [ 14 ]
Composite polymer films synthesized via a liquid-solid interface are the most commonly used to synthesize membranes for reverse osmosis and other applications. [ 1 ] [ 2 ] [ 4 ] One added benefit of using polymers prepared by interfacial polymerization is that several properties, such as pore size and interconnectivity, can be fine-tuned to create a more ideal product for specific applications. [ 1 ] [ 4 ] [ 5 ] For example, synthesizing a polymer with a pore size somewhere between the molecular size of hydrogen gas (H 2 ) and carbon dioxide (CO 2 ) results in a membrane selectively permeable to H 2 , but not to CO 2 , effectively separating the compounds. [ 1 ] [ 5 ]
Compared to previous methods of capsule synthesis, interfacial polymerization is an easily modified synthesis that results in capsules with a wide range of properties and functionalities. [ 1 ] [ 2 ] Once synthesized, the capsules can enclose drugs, [ 6 ] quantum dots , [ 1 ] and other nanoparticles, to list a few examples. Further fine-tuning of the chemical and topological properties of these polymer capsules could prove an effective route to create drug-delivery systems. [ 1 ] [ 6 ] | https://en.wikipedia.org/wiki/Interfacial_polymerization |
Interfacial rheology is a branch of rheology that studies the flow of matter at the interface between a gas and a liquid or at the interface between two immiscible liquids. The measurement is done with surfactants, nanoparticles or other surface-active compounds present at the interface. Unlike in bulk rheology, the deformation of the bulk phase is not of interest in interfacial rheology, and its effect is aimed to be minimized. Instead, the flow of the surface-active compounds is of interest.
The deformation of the interface can be done either by changing the size or shape of the interface. Therefore interfacial rheological methods can be divided into two categories: dilational and shear rheology methods.
In dilatational interfacial rheology, the size of the interface is changing over time. The change in the surface stress or surface tension of the interface is being measured during this deformation. Based on the response, interfacial viscoelasticity is calculated according to well established theories: [ 1 ] [ 2 ]
$|E| = \frac{d\gamma}{d\ln A} = A\,\frac{d\gamma}{dA}$
$E' = |E|\cos\delta$
$E'' = |E|\sin\delta$
where
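in the usual notation, $|E|$ is the magnitude of the complex dilatational viscoelastic modulus, $E'$ and $E''$ are its elastic (storage) and viscous (loss) parts, $\gamma$ is the surface or interfacial tension, $A$ is the interfacial area, and $\delta$ is the phase angle between the imposed area oscillation and the measured surface tension response.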
Most commonly, the measurement of dilational interfacial rheology is conducted with an optical tensiometer combined to a pulsating drop module. A pendant droplet with surface active molecules in it is formed and pulsated sinusoidally. The changes in the interfacial area causes changes in the molecular interactions which then changes the surface tension. [ 3 ] Typical measurements include performing a frequency sweep for the solution to study the kinetics of the surfactant.
In another measurement method suitable especially for insoluble surfactants, a Langmuir trough is used in an oscillating barrier mode. In this case, two barriers that limit the interfacial area are being oscillated sinusoidally and the change in surface tension measured. [ 4 ]
In interfacial shear rheology, the interfacial area remains the same throughout the measurement. Instead, the interfacial area is sheared in order to be able to measure the surface stress present. The equations are similar to dilatational interfacial rheology but shear modulus is often marked with G instead of E like in dilational methods. In a general case, G and E are not equal. [ 5 ]
Since interfacial rheological properties are relatively weak, their measurement poses challenges for the measurement equipment. For high sensitivity, it is essential to maximize the contribution of the interface while minimizing the contribution of the bulk phase. The Boussinesq number, Bo, depicts how sensitive a measurement method is for detecting the interfacial viscoelasticity. [ 5 ]
The commercialized measurement techniques for interfacial shear rheology include the magnetic needle method, the rotating ring method and the rotating bicone method. [ 6 ] The magnetic needle method, developed by Brooks et al., [ 7 ] has the highest Boussinesq number of the commercialized methods. In this method, a thin magnetic needle is oscillated at the interface using a magnetic field. By following the movement of the needle with a camera, the viscoelastic properties of the interface can be detected. This method is often used in combination with a Langmuir trough in order to be able to conduct the experiment as a function of the packing density of the molecules or particles.
When surfactants are present in a liquid, they tend to adsorb in the liquid-air or liquid-liquid interface. Interfacial rheology deals with the response of the adsorbed interfacial layer on the deformation. The response depends on the layer composition, and thus interfacial rheology is relevant in many applications in which adsorbed layer play a crucial role, for example in development surfactants , foams and emulsions . Many biological systems like pulmonary surfactant and meibum are dependent on interfacial viscoelasticity for their functionality. [ 8 ] Interfacial rheology has been employed to understand the structure-function relationship of these physiological interfaces, how compositional deviations cause diseases such as infant respiratory distress syndrome or dry eye syndrome , and has helped to develop therapies like artificial pulmonary surfactant replacements and eye drops . [ 9 ]
Interfacial rheology enables the study of surfactant kinetics, and the viscoelastic properties of the adsorbed interfacial layer correlate well with emulsion and foam stability. Surfactants and surface-active polymers are used for stabilising emulsions and foams in the food and cosmetic industries. Proteins are surface active and adsorb at the interface, where they can change conformation and influence the interfacial properties. [ 10 ] Natural surfactants like asphaltenes and resins stabilize water-oil emulsions in crude oil applications, and by understanding their behavior the crude oil separation process can be enhanced. The efficiency of enhanced oil recovery can also be optimized. [ 11 ]
Specialized setups that allow bulk exchange during interfacial rheology measurements are used to investigate the response of adsorbed proteins or surfactants upon changes in pH or salinity . [ 12 ] These setups can also be used to mimic more complex conditions like the gastric environment, to investigate the in vitro displacement or enzymatic hydrolysis of polymers adsorbed at oil-water interfaces and to understand how the respective emulsions are digested in the stomach. [ 13 ]
Interfacial rheology also allows the probing of bacterial adsorption and biofilm formation at liquid-air or liquid-liquid interfaces. [ 14 ]
In food science, interfacial rheology was used to understand the stability of emulsions like mayonnaise , [ 15 ] the stability of espresso foam , [ 16 ] the film formed on black tea , [ 17 ] or the formation of kombucha biofilms . [ 18 ] | https://en.wikipedia.org/wiki/Interfacial_rheology |
In information theory , the interference channel is the basic model used to analyze the effect of interference in communication channels. The model consists of two pairs of users communicating through a shared channel. The problem of interference between two mobile users in close proximity or crosstalk between two parallel landlines are two examples where this model is applicable.
Unlike in the point-to-point channel , where the amount of information that can be sent through the channel is limited by the noise that distorts the transmitted signal, in the interference channel the presence of the signal from the other user may also impair the communication. However, since the transmitted signals are not purely random (otherwise they would not be decodable), the receivers may be able to reduce the effect of the interference by partially or totally decoding the undesired signal.
The mathematical model for this channel is the following:
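In a common formulation, the channel is specified by two input alphabets $\mathcal{X}_1$ and $\mathcal{X}_2$, two output alphabets $\mathcal{Y}_1$ and $\mathcal{Y}_2$, and a conditional probability distribution $p(y_1, y_2 | x_1, x_2)$,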
where, for $i \in \{1, 2\}$:
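$x_i \in \mathcal{X}_i$ is the symbol sent by transmitter $i$, and $y_i \in \mathcal{Y}_i$ is the symbol observed by receiver $i$, which attempts to decode only the message intended for it (standard definitions).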
The capacity of this channel model is not known in general; the capacity has been calculated only for special cases of $p(y_1, y_2 | x_1, x_2)$, e.g., in the case of strong interference or deterministic channels. [ 1 ]
| https://en.wikipedia.org/wiki/Interference_channel |
Interference microscopy involves measuring differences in the optical path between two beams of light that have been split.
Types include:
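Examples are classical interference microscopy (e.g. Mach–Zehnder-type arrangements), differential interference contrast (DIC) microscopy, fluorescence interference contrast microscopy and interference reflection microscopy (an illustrative, not necessarily exhaustive, selection).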
| https://en.wikipedia.org/wiki/Interference_microscopy |
Interference of footings is a phenomenon observed when two footings are closely spaced. When buildings are constructed near one another, architectural requirements or limited construction space may force engineers to place foundation footings close together, and when foundations in similar soil conditions are placed close to each other, the ultimate bearing capacity of each foundation may change because the failure surfaces in the soil interfere.
Foundations, or groups of foundations, are important components of a structure through which the structural loads are transmitted to the underlying foundation soil or bed on which the foundations rest. The structural loads must be transmitted to the foundation soil safely, so that the foundation soil fails neither in shear nor by excessive settlement; foundations are therefore designed on two criteria, namely the bearing capacity criterion and the settlement criterion. Many classical theories for isolated foundations have been postulated by pioneers such as Terzaghi (1943), Meyerhof (1963), Hansen (1970) and Vesic (1973). In general, according to Terzaghi (1943), when an isolated shallow foundation is loaded, the stress or failure zone in the foundation soil extends horizontally on either side of the footing to about twice the width of the footing and vertically downward to about three times the width of the footing. As long as the stress or failure zones of individual footings do not interfere, each footing behaves as an isolated footing. However, in many situations, such as lack of construction space, structural restrictions, rapid urbanization, the architecture of the building, or structures built close to each other, foundations or groups of foundations may be placed close together. In such cases the stress isobars or failure zones of closely spaced footings may interfere with each other, a phenomenon called interference. Owing to footing interference, the failure mechanism, load-settlement behaviour, bearing capacity, settlement and rotational characteristics of an isolated footing may be altered, and the classical theories postulated in the literature for isolated footings can no longer be applied. Due to interference, the stress isobars of the individual interacting footings coalesce to form a single isobar of larger dimensions, altering the characteristic behaviour of an isolated footing. The study of interference of closely spaced footings is therefore of significant practical importance.
Stuart [ 1 ] was the first to study the interference phenomenon of closely spaced surface strip footings. He examined the effect of footing interference on the ultimate bearing capacity of strip footings by theoretical analysis using the limit equilibrium method, assuming a non-linear failure surface whose cross-section is composed of a logarithmic spiral and a straight-line portion tangent to the curvilinear portion. Stuart (1962) also carried out a few small-scale laboratory experiments, compared the results of the theoretical analysis with the experimental results, and concluded that the ultimate bearing capacity of two interfering footings increases as the spacing between the footings decreases, attaining a peak magnitude at some spacing termed the critical spacing. The study of Stuart (1962) was further extended by West and Stuart (1965), who performed a series of small-scale laboratory tests to examine the effect of interference on the bearing capacity of strip footings resting on the surface of a cohesionless soil bed. West and Stuart [ 2 ] (1965) also carried out theoretical analyses using the method of stress characteristics to observe the eccentricity of the load and the reactions at the base of the footing resulting from the interference effect for footings resting on the surface of sand. The results obtained from this theory were smaller than those observed by Stuart (1962) using the limit equilibrium method; however, the trend was similar to the variation observed by Stuart (1962), and the experimental results reasonably matched those of the theoretical analysis. Later researchers have studied interfering footings by theoretical or numerical techniques, making use of the following methods: the method of stress characteristics, analytical methods, probabilistic approaches, upper-bound limit analysis, lower-bound limit analysis, the finite element method, the finite difference method and the distinct element method. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
The bearing capacity of soil is influenced by many factors, for instance soil strength, foundation width and depth, soil weight and surcharge, and the spacing between foundations. These factors are related to the loads exerted on the soil and considerably affect the bearing capacity. | https://en.wikipedia.org/wiki/Interference_of_the_footings |
Interference reflection microscopy ( IRM ), also called Reflection Interference Contrast Microscopy ( RICM ) or Reflection Contrast Microscopy ( RCM ) depending on the specific optical elements used, is an optical microscopy technique that leverages thin-film interference effects to form an image of an object on a glass surface. The intensity of the signal is a measure of proximity of the object to the glass surface. This technique can be used to study events at the cell membrane without the use of a (fluorescent) label as is the case for TIRF microscopy .
In 1964, Adam S. G. Curtis coined the term Interference Reflection Microscopy ( IRM ), using it in the field of cell biology to study embryonic chick heart fibroblasts. [ 1 ] [ 2 ] He used IRM to look at adhesion sites and distances of fibroblasts, noting that contact with the glass was mostly limited to the cell periphery and the pseudopodia . [ 1 ]
In 1975, Johan Sebastiaan Ploem introduced an improvement to IRM (published in a book chapter [ 3 ] ), which he called Reflection Contrast Microscopy ( RCM ). [ 4 ] The improvement is to use a so-called anti-flex objective and crossed polarizers to further reduce stray light in the optical system. Today, this scheme is mainly referred to as Reflection Interference Contrast Microscopy ( RICM ), [ 5 ] [ 6 ] the name of which was introduced by Bareiter-Hahn and Konrad Beck in 1979. [ 7 ]
However, the term IRM is sometimes used to describe an RICM setup. The multiplicity of names used to describe the technique has caused some confusion, and was discussed as early as 1985 by Verschueren. [ 8 ]
To form an image of the attached cell, light of a specific wavelength is passed through a polarizer . This linearly polarized light is reflected by a beam splitter towards the objective , which focuses the light on the specimen. The glass surface is reflective to a certain degree and will reflect the polarized light. Light that is not reflected by the glass will travel into the cell and be reflected by the cell membrane. Three situations can occur. First, when the membrane is close to the glass, the light reflected from the glass is shifted by half a wavelength, so that the light reflected from the membrane is out of phase with the light reflected from the glass and the two cancel each other out ( interference ). This interference results in a dark pixel in the final image (the left case in the figure). Second, when the membrane is not attached to the glass, the reflection from the membrane has a smaller phase shift compared to the light reflected from the glass, and therefore the two will not cancel each other out, resulting in a bright pixel in the image (the right case in the figure). Third, when there is no specimen, only the reflected light from the glass is detected and will appear as bright pixels in the final image.
The reflected light will travel back to the beam splitter and pass through a second polarizer, which eliminates scattered light, before reaching the detector (usually a CCD camera ) in order to form the final picture. The polarizers can increase the efficiency by reducing scattered light; however in a modern setup with a sensitive digital camera, they are not required. [ 9 ]
Reflection is caused by a change in the refractive index, so at every boundary a part of the light will be reflected. The amount of reflection is given by the reflection coefficient {\displaystyle r_{12}} , according to the following rule: [ 8 ]
{\displaystyle r_{12}={\frac {n_{1}-n_{2}}{n_{1}+n_{2}}}}
Reflectivity {\displaystyle R} is the ratio of the reflected light intensity ( {\displaystyle I_{r}} ) to the incoming light intensity ( {\displaystyle I_{i}} ): [ 8 ]
{\displaystyle R={\frac {I_{r}}{I_{i}}}=\left\lbrack {\frac {n_{1}-n_{2}}{n_{1}+n_{2}}}\right\rbrack ^{2}={r_{12}}^{2}}
Using typical refractive indices for glass (1.50–1.54, see list ), water (1.31, see list ), the cell membrane (1.48) [ 10 ] and the cytosol (1.35), [ 10 ] one can calculate the fraction of light being reflected by each interface. The amount of reflection increases as the difference between refractive indices increases, resulting in a large reflection from the interface between the glass surface and the culture medium (about equal to water: 1.31–1.33). This means that without a cell the image will be bright, whereas when the cell is attached, the difference between medium and the membrane causes a large reflection that is slightly shifted in phase, causing interference with the light reflected by the glass. Because the amplitude of the light reflected from the medium-membrane interface is decreased due to scattering, the attached area will appear darker but not completely black. Because the cone of light focused on the sample gives rise to different angles of incident light, there is a broad range of interference patterns. When the patterns differ by less than 1 wavelength (the zero-order fringe), the patterns converge, resulting in increased intensity. This can be obtained by using an objective with a numerical aperture greater than 1. [ 8 ]
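These reflectivities are straightforward to evaluate numerically. The following is a minimal sketch; the indices 1.52, 1.33, 1.48 and 1.35 are the approximate values quoted above, and real samples will differ. It shows that the glass/medium interface produces the largest of the three reflections.

```python
# Minimal sketch: normal-incidence reflectivity R = ((n1 - n2)/(n1 + n2))^2
# from the formula above, using the approximate refractive indices quoted in the text.

def reflectivity(n1: float, n2: float) -> float:
    """Fraction of the intensity reflected at a boundary between media with indices n1 and n2."""
    r12 = (n1 - n2) / (n1 + n2)     # reflection coefficient
    return r12 ** 2

n_glass, n_medium, n_membrane, n_cytosol = 1.52, 1.33, 1.48, 1.35

interfaces = {
    "glass / medium": (n_glass, n_medium),
    "medium / membrane": (n_medium, n_membrane),
    "membrane / cytosol": (n_membrane, n_cytosol),
}
for name, (n1, n2) in interfaces.items():
    print(f"{name:20s} R = {reflectivity(n1, n2):.4f}")
```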
In order to image cells using IRM, a microscope needs at least the following elements: 1) a light source, such as a halogen lamp, 2) an optical filter (which passes a small range of wavelengths), and 3) a beam splitter (which reflects 50% and transmits 50% of the chosen wavelength).
The light source needs to produce high intensity light, as a lot of light will be lost by the beam splitter and the sample itself. Different wavelengths result in different IRM images; Bereiter-Hahn and colleagues showed that for their PtK 2 cells, light with a wavelength of 546 nm resulted in better contrast than blue light with a wavelength of 436 nm. [ 7 ] There have been many refinements to the basic theory of IRM, most of which increase the efficiency and yield of the image formation. By placing polarizers and a quarter wave plate between the beam splitter and the specimen, the linear polarized light can be converted into circular polarized light and afterwards be converted back to linear polarized light, which increases the efficiency of the system. The circular polarizer article discusses this process in detail. Furthermore, by including a second polarizer, which is rotated 90° compared to the first polarizer, stray light can be prevented from reaching the detector, increasing the signal to noise ratio (see Figure 2 of Verschueren [ 8 ] ).
There are several ways IRM can be used to study biological samples. Early examples of uses of the technique focused on cell adhesion [ 1 ] and cell migration . [ 11 ]
More recently, the technique has been used to study exocytosis in chromaffin cells . [ 9 ] When imaged using DIC, chromaffin cells appear as round cells with small protrusions. When the same cell is imaged using IRM, the footprint of the cell on the glass can be clearly seen as a dark area with small protrusions. When vesicles fuse with the membrane, they appear as small light circles within the dark footprint (bright spots in the top cell in the right panel).
An example of vesicle fusion in chromaffin cells using IRM is shown in movie 1. Upon stimulation with 60 mM potassium , multiple bright spots begin to appear inside the dark footprint of the chromaffin cell as a result of exocytosis of dense core granules . Because IRM does not require a fluorescent label, it can be combined with other imaging techniques, such as epifluorescence and TIRF microscopy, to study protein dynamics together with vesicle exocytosis and endocytosis. Another benefit of the lack of fluorescent labels is reduced phototoxicity . | https://en.wikipedia.org/wiki/Interference_reflection_microscopy
Interferome is an online bioinformatics database of interferon-regulated genes (IRGs). [ 1 ] These Interferon Regulated Genes are also known as Interferon Stimulated Genes (ISGs). The database contains information on type I (IFN alpha, beta), type II (IFN gamma) and type III (IFN lambda) regulated genes and is regularly updated. It is used by the interferon and cytokine research community [ 2 ] both as an analysis tool and an information resource. Interferons were identified as antiviral proteins more than 50 years ago. However, their involvement in immunomodulation , cell proliferation , inflammation and other homeostatic processes has since been identified. These cytokines are used as therapeutics in many diseases such as chronic viral infections, cancer and multiple sclerosis . [ 3 ] These interferons regulate the transcription of approximately 2000 genes in an interferon subtype, dose, cell type and stimulus dependent manner. This database of interferon regulated genes is an attempt at integrating information from high-throughput experiments and molecular biology databases to gain a detailed understanding of interferon biology.
Interferome comprises the following data sets:
Interferome offers many ways of searching and retrieving data from the database:
Interferome is managed by a team at Monash University : Monash Institute of Medical Research and the University of Cambridge | https://en.wikipedia.org/wiki/Interferome |
Interferometric microscopy or imaging interferometric microscopy is a concept of microscopy which is related to holography , synthetic-aperture imaging , and off-axis dark-field illumination techniques. Interferometric microscopy allows enhancement of the resolution of optical microscopy through interferometric ( holographic ) registration of several partial images (amplitude and phase) and their numerical combination.
In interferometric microscopy, the image of a micro-object is synthesized numerically as a coherent combination of partial images with registered amplitude and phase. [ 1 ] [ 2 ] For registration of partial images, a conventional holographic set-up is used with a reference wave, as is usual in optical holography . Capturing multiple exposures allows the numerical emulation of a large numerical aperture objective from images obtained with an objective lens of smaller numerical aperture. [ 1 ] Similar techniques allow scanning and precise detection of small particles. [ 3 ] As the combined image keeps both amplitude and phase information, interferometric microscopy can be especially efficient for phase objects, [ 3 ] allowing detection of slight variations of the index of refraction, which shift the phase of the light passing through by a small fraction of a radian.
Although interferometric microscopy has been demonstrated only for optical images (visible light), this technique may find application in high resolution atom optics , or optics of neutral atom beams (see Atomic de Broglie microscope ), where the numerical aperture is usually very limited. [ 4 ] | https://en.wikipedia.org/wiki/Interferometric_microscopy
Interferometric scattering microscopy ( iSCAT ) refers to a class of methods that detect and image a subwavelength object by interfering the light scattered by it with a reference light field. The underlying physics is shared by other conventional interferometric methods such as phase contrast or differential interference contrast , or reflection interference microscopy. The key feature of iSCAT is the detection of elastic scattering from subwavelength particles, also known as Rayleigh scattering , in addition to reflected or transmission signals from supra-wavelength objects. Typically, the challenge is the detection of tiny signals on top of large and complex, speckle-like backgrounds. iSCAT has been used to investigate nanoparticles such as viruses, proteins, lipid vesicles, DNA, exosomes, metal nanoparticles, semiconductor quantum dots, charge carriers and single organic molecules without the need for a fluorescent label.
The principle of interference plays a central role in many imaging methods, including bright-field imaging because it can be described as the interference between the illumination field and the one that has interacted with the object, i.e. through extinction. In fact, even microscopy based on the interference with an external light field is more than one hundred years old.
The first iSCAT-type of measurements were performed in the biophysics community in the 1990s. [ 1 ] A systematic development of the method for the detection of nano-objects started in the early 2000s as a general effort to explore fluorescence-free options for studying single molecules and nano-objects. [ 2 ] In particular, gold nanoparticles down to a size of 5 nm were imaged via the interference of their scattered light with a reflected beam from the cover-slip supporting them. Using a supercontinuum laser additionally allowed for recording the particles' plasmon spectra. [ 2 ] The early measurements were limited by residual speckle-like background. A new approach to background subtraction and the acronym iSCAT were introduced in 2009. [ 3 ] Since then, a series of important works has been reported by various groups. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Notably, further innovations in background and noise suppression have led to the development of new quantification methods such as mass photometry (originally introduced as iSCAMS), in which ultrasensitive and accurate interferometric detection is converted into a quantitative means for measuring the molecular mass of single biomolecules. [ 8 ]
When a reference light is superposed with an object's scattered light, the intensity at the detector can be described by [ 2 ] [ 7 ]
{\displaystyle I_{det}\propto |{\overline {E_{r}}}+{\overline {E_{s}}}|^{2}=I_{r}+I_{s}+2E_{r}E_{s}\cos \phi }
where {\textstyle {\overline {E_{r}}}=E_{r}e^{i\phi _{r}}} and {\displaystyle {\overline {E_{s}}}=E_{s}e^{i\phi _{s}}} are the complex electric fields of the reference and scattered light. The resulting terms are the intensity of the reference beam {\displaystyle (I_{r}=|{\overline {E_{r}}}|^{2})} , the pure scattered light from the object {\displaystyle (I_{s}=|{\overline {E_{s}}}|^{2})} , and the cross-term {\displaystyle (2E_{r}E_{s}\cos \phi )} which contains a phase {\displaystyle \phi =\phi _{r}-\phi _{s}} . This phase comprises a Gouy phase component from the variations of the wave vectors, a scattering phase component from the material properties of the object, and a sinusoidally modulating phase component which depends on the position of the particle.
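To get a feel for the magnitudes involved, here is a minimal numerical sketch; the field amplitudes and the relative phase are illustrative assumptions, not values from the article. It shows why, for a weak scatterer, the detectable part of the signal is essentially the cross-term.

```python
import math

# Minimal sketch (illustrative amplitudes and phase, not from the article):
# evaluating the terms of I_det = I_r + I_s + 2*E_r*E_s*cos(phi) for progressively
# weaker scattered fields. The pure scattering term I_s falls off quadratically
# with E_s, while the interferometric cross-term falls off only linearly.

E_r = 1.0          # reference field amplitude (arbitrary units)
phi = math.pi      # assumed relative phase (gives dark, negative contrast)

for E_s in (1e-1, 1e-2, 1e-3):
    I_r, I_s = E_r ** 2, E_s ** 2
    cross = 2 * E_r * E_s * math.cos(phi)
    contrast = (I_s + cross) / I_r     # relative deviation of I_det from the background I_r
    print(f"E_s = {E_s:.0e}: I_s = {I_s:.1e}, cross-term = {cross:+.1e}, contrast = {contrast:+.1e}")
```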
In general, the reference beam can take a different path than the scattered light within the optical setup, as long as they are coherent and interfere on the detector. However, the technique becomes simpler and more stable if both beams share the same optical path. Therefore, the reflected light off the cover-slip or the transmitted beam through the sample is typically used as the reference. For the interference to occur, it is necessary that both light waves (scattered light and reference light) are coherent. Interestingly, a light source with a large coherence length on the order of meters or more (like in modern narrow-band laser systems) is typically not needed. In the most common iSCAT realization schemes where the reflected light of a cover-slip is used as a reference and the scattering particle is not more than a few hundreds of nanometers above the glass, even "incoherent" light, e.g. from LEDs, can be used. [ 9 ]
iSCAT has been used in multiple applications. These can be grouped roughly as: | https://en.wikipedia.org/wiki/Interferometric_scattering_microscopy |
Interferon beta-1a (also interferon beta 1-alpha ) is a cytokine in the interferon family used to treat multiple sclerosis (MS). [ 5 ] It is produced by mammalian cells, while interferon beta-1b is produced in modified E. coli . [ 6 ] Some research indicates that interferon injections may result in an 18–38% reduction in the rate of MS relapses. [ 7 ]
Interferon beta has not been shown to slow the advance of disability. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Interferons are not a cure for MS (there is no known cure); the claim is that interferons may slow the progress of the disease if started early and continued for the duration of the disease. [ 12 ]
The earliest clinical presentation of relapsing-remitting multiple sclerosis is the clinically isolated syndrome (CIS), that is, a single attack of a single symptom. During a CIS, there is a subacute attack suggestive of demyelination which should be included in the spectrum of MS phenotypes. [ 13 ] Treatment with interferons after an initial attack decreases the risk of developing clinical definite MS. [ 14 ] [ 15 ]
Medications are modestly effective at decreasing the number of attacks in relapsing-remitting multiple sclerosis [ 16 ] and in reducing the accumulation of brain lesions, which is measured using gadolinium - enhanced magnetic resonance imaging (MRI). [ 14 ] Interferons reduce relapses by approximately 30%, and their safety profile makes them the first-line treatments. [ 14 ] Nevertheless, not all patients are responsive to these therapies. It is known that 30% of MS patients are non-responsive to interferon beta. [ 17 ] They can be classified into genetic, pharmacological and pathogenetic non-responders. [ 17 ] One of the factors related to non-responsiveness is the presence of high levels of interferon beta neutralizing antibodies . Interferon therapy, and especially interferon beta 1b, induces the production of neutralizing antibodies, usually in the second 6 months of treatment, in 5 to 30% of treated patients. [ 14 ] Moreover, a subset of RRMS patients with especially active MS, sometimes called "rapidly worsening MS", are normally non-responders to interferon beta 1a. [ 18 ] [ 19 ]
While more studies of the long-term effects of the drugs are needed, [ 12 ] [ 14 ] existing data on the effects of interferons indicate that early-initiated long-term therapy is safe and it is related to better outcomes. [ 12 ]
Interferon beta-1a is available only in injectable forms, and can cause skin reactions at the injection site that may include cutaneous necrosis . Skin reactions with interferon beta are more common with subcutaneous administration and vary greatly in their clinical presentation. [ 20 ] They usually appear within the first month of treatment, although their frequency and severity diminish after six months of treatment. [ 20 ] Skin reactions are more prevalent in women. [ 20 ] Mild skin reactions usually do not impede treatment, whereas necroses appear in around 5% of patients and lead to the discontinuation of the therapy. [ 20 ] Also over time, a visible dent at the injection site due to the local destruction of fat tissue, known as lipoatrophy , may develop; however, this rarely occurs with interferon treatment. [ 21 ]
Interferons , a subclass of cytokines , are produced in the body during illnesses such as influenza in order to help fight the infection. They are responsible for many of the symptoms of influenza infections, including fever , muscle aches , fatigue , and headaches . [ 22 ] Many patients report influenza-like symptoms hours after taking interferon beta that usually improve within 24 hours, such symptoms being related to the temporary increase of cytokines. [ 14 ] [ 20 ] This reaction tends to disappear after 3 months of treatment, and its symptoms can be treated with over-the-counter nonsteroidal anti-inflammatory drugs , such as ibuprofen , that reduce fever and pain. [ 20 ] Another common transient secondary effect with interferon-beta is a functional deterioration of already existing symptoms of the disease. [ 20 ] Such deterioration is similar to the one produced in MS patients due to heat, fever or stress ( Uhthoff's phenomenon ), usually appears within 24 hours of treatment, is more common in the initial months of treatment, and may last several days. [ 20 ] A symptom especially sensitive to worsening is spasticity . [ 20 ] Interferon-beta can also reduce the numbers of white blood cells ( leukopenia ), lymphocytes ( lymphopenia ) and neutrophils ( neutropenia ), as well as affect liver function. [ 20 ] In most cases these effects are non-dangerous and reversible after cessation or reduction of treatment. [ 20 ] Nevertheless, it is recommended that all patients be monitored through laboratory blood analyses , including liver function tests , to ensure safe use of interferons. [ 20 ]
To help prevent injection-site reactions, patients are advised to rotate injection sites and use an aseptic injection technique. Injection devices are available to optimize the injection process. Side effects are often onerous enough that many patients ultimately discontinue taking interferons [ citation needed ] (or glatiramer acetate , a comparable disease-modifying therapy requiring regular injections).
Interferon beta balances the expression of pro- and anti-inflammatory agents in the brain, and reduces the number of inflammatory cells that cross the blood brain barrier . [ 23 ] Overall, therapy with interferon beta leads to a reduction of neuron inflammation. [ 23 ] Moreover, it is also thought to increase the production of nerve growth factor and consequently improve neuronal survival. [ 23 ] In vitro, interferon beta reduces production of Th17 cells which are a subset of T lymphocytes believed to have a role in the pathophysiology of MS. [ 24 ]
Avonex was approved in the US in 1996, [ 25 ] and in the European Union in 1997, and is registered in more than 80 countries worldwide. [ citation needed ] It is the leading MS therapy in the US, with around 40% of the overall market, and in the European Union, with around 30% of the overall market. [ citation needed ] It is produced by the Biogen biotechnology company, originally under competition protection in the US under the Orphan Drug Act .
Avonex is sold in three formulations, a lyophilized powder requiring reconstitution, a pre-mixed liquid syringe kit, and a pen; it is administered via intramuscular injection . [ 1 ]
Rebif is a disease-modifying drug (DMD) used to treat multiple sclerosis in cases of clinically isolated syndromes as well as relapsing forms of multiple sclerosis and is similar to the interferon beta protein produced by the human body. It is co-marketed by Merck Serono and Pfizer in the US under an exception to the Orphan Drug Act . [ citation needed ] It was approved in the European Union in 1998, and in the US in 2002; it has since been approved in more than 90 countries worldwide including Canada and Australia. [ citation needed ] EMD Serono has had sole rights to Rebif in the US since January 2016. [ 26 ] [ 27 ] Rebif is administered via subcutaneous injection . [ 2 ]
Cinnovex is the brand name of recombinant Interferon beta-1a, which is manufactured as biosimilar/biogeneric in Iran . It is produced in a lyophilized form and sold with distilled water for injection. Cinnovex was developed at the Fraunhofer Society in collaboration with CinnaGen , and is the first therapeutic protein from a Fraunhofer laboratory to be approved as biogeneric / biosimilar medicine. There are several clinical studies to prove the similarity of CinnoVex and Avonex. [ 28 ] A more water-soluble variant is currently being investigated by the Vakzine Projekt Management (VPM) GmbH in Braunschweig, Germany.
Plegridy is a brand name of a pegylated form of Interferon beta-1a. Plegridy's advantage is that it needs to be injected only once every two weeks. [ 29 ]
Closely related to interferon beta-1a is interferon beta-1b , which is also indicated for MS, but is formulated with a different dose and administered with a different frequency. Each drug has a different safety/efficacy profile. [ 30 ] Interferon beta-1b is marketed only by Bayer in the US as Betaseron, and outside the US as Betaferon.
In the United States, as of 2015, the cost is between US$1,284 and US$1,386 per 30 mcg vial. [ 31 ] As of 2020, the National Average Drug Acquisition Cost (NADAC) in the United States for Avonex was $6,872.94 for a 30 mcg kit. [ 32 ]
Avonex and Rebif are on the top ten best-selling multiple sclerosis drugs of 2013. [ 33 ]
It is an example of a specialty drug that would only be available through a specialty pharmacy . This is because it requires a refrigerated chain of distribution and costs $17,000 a year. [ 34 ]
Interferon beta-1a administered subcutaneously or intravenously was investigated since March 2020 as a potential treatment in patients hospitalized with COVID-19 in a multinational Solidarity trial (initially in combination with lopinavir ) but it did not reduce in-hospital mortality compared to local standard of care. [ 35 ]
SNG001, an inhalation formulation of interferon beta-1a, is being developed as a treatment for COVID-19 by Synairgen . [ 36 ] [ 37 ] A pilot trial in hospitalized patients showed higher odds of clinical improvement with SNG001 compared to placebo [ 38 ] and in January 2021 a phase 3 trial in this population started. [ 39 ] | https://en.wikipedia.org/wiki/Interferon_beta-1a |
Interferon-γ release assays (IGRA) are medical tests used in the diagnosis of some infectious diseases, especially tuberculosis . Interferon-γ (IFN-γ) release assays rely on the fact that T-lymphocytes will release IFN-γ when exposed to specific antigens. These tests are mostly developed for the field of tuberculosis diagnosis , but in theory, may be used in the diagnosis of other diseases that rely on cell-mediated immunity, e.g. cytomegalovirus , leishmaniasis , and COVID-19 . [ 1 ] For example, in patients with cutaneous adverse drug reactions , the challenge of peripheral blood lymphocytes with the drug causing the reaction produced a positive test result for half of the drugs tested. [ 2 ]
There are currently [ when? ] two IFN-γ release assays available for the diagnosis of tuberculosis:
The former test quantitates the amount of IFN-γ produced in response to the ESAT-6 and CFP-10 antigens from Mycobacterium tuberculosis , which are distinguishable from those present in BCG and most other non-tuberculous mycobacteria . The latter test determines the total number of individual effector T cells expressing IFN-γ . [ citation needed ]
The indications for the test are still disputed. It has been evaluated for the diagnosis of latent tuberculosis in HIV patients (who frequently have a negative Mantoux test ). [ 4 ]
| https://en.wikipedia.org/wiki/Interferon_gamma_release_assay
In hydrology , interflow is the lateral movement of water in the unsaturated zone, or vadose zone , that returns to the surface or enters a stream. [ 1 ] Interflow is sometimes used interchangeably with throughflow ; [ 1 ] however, throughflow is specifically the subcomponent of interflow that returns to the surface, as overland flow, prior to entering a stream or becoming groundwater. [ 2 ] Interflow occurs when water infiltrates (see infiltration (hydrology) ) into the subsurface, hydraulic conductivity decreases with depth, and lateral flow proceeds downslope. [ 1 ] As water accumulates in the subsurface, saturation may occur, and interflow may exfiltrate as return flows, becoming overland flow. [ 1 ] | https://en.wikipedia.org/wiki/Interflow |
An intergenic region is a stretch of DNA sequences located between genes . [ 1 ] Intergenic regions may contain functional elements and junk DNA .
Intergenic regions may contain a number of functional DNA sequences such as promoters and regulatory elements , enhancers , spacers , and (in eukaryotes) centromeres . [ 2 ] They may also contain origins of replication , scaffold attachment regions , and transposons and viruses . [ 2 ]
Non-functional DNA elements such as pseudogenes and repetitive DNA , both of which are types of junk DNA , can also be found in intergenic regions, although they may also be located within genes in introns. [ 2 ] It is possible that these regions contain as yet unidentified functional elements, such as non-coding genes or regulatory sequences. [ 3 ] This indeed occurs occasionally, but the amount of functional DNA discovered usually constitutes only a tiny fraction of the overall amount of intergenic or intronic DNA. [ 3 ]
In humans, intergenic regions comprise about 50% of the genome , whereas this number is much less in bacteria (15%) and yeast (30%). [ 4 ]
As with most other non-coding DNA, the GC-content of intergenic regions varies considerably among species. For example, in Plasmodium falciparum , many intergenic regions have an AT content of 90%. [ 5 ]
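As a small illustration of the quantity discussed above, here is a minimal sketch; the sequence is a made-up, AT-rich example rather than data from any organism.

```python
# Minimal sketch (hypothetical sequence, not from the article): computing the
# AT and GC content of a DNA string, the quantity referred to above.

def at_gc_content(seq: str) -> tuple[float, float]:
    """Return (AT fraction, GC fraction) of a DNA sequence."""
    seq = seq.upper()
    at = sum(seq.count(base) for base in "AT")
    gc = sum(seq.count(base) for base in "GC")
    total = at + gc
    return at / total, gc / total

intergenic = "ATATTTAAATATTAATTTTAAATATATTTTAAATTATAGCATATTTTA"  # hypothetical AT-rich region
at, gc = at_gc_content(intergenic)
print(f"AT content: {at:.0%}, GC content: {gc:.0%}")
```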
Functional elements in intergenic regions will evolve slowly because their sequence is maintained by negative selection . In species with very large genomes, a large percentage of intergenic regions is probably junk DNA and it will evolve at the neutral rate of evolution. [ 6 ] [ 7 ] [ verification needed ] Junk DNA sequences are not maintained by purifying selection but gain-of-function mutations with deleterious fitness effects can occur. [ 8 ]
Phylostratigraphic inference and bioinformatics methods have shown that intergenic regions can—on geological timescales—transiently evolve into open reading frame sequences that mimic those of protein coding genes, and can therefore lead to the evolution of novel protein-coding genes in a process known as de novo gene birth . [ 9 ] | https://en.wikipedia.org/wiki/Intergenic_region |
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services ( IPBES ) is an intergovernmental organization established to improve communication between science and policy on issues of biodiversity and ecosystem services . [ 1 ] It serves a similar role to the Intergovernmental Panel on Climate Change (IPCC). [ 2 ] The IPBES Bureau has agreed, on the basis of a proposal by the secretariat for the purposes of brand unity and brand recognition, to adopt a common pronunciation of the IPBES acronym. In keeping with widespread linguistic convention, the acronym is officially pronounced as “ip-bes” – “ip” as in “hip” and “bes” as in “best”. [ 3 ]
In 2010 a resolution by the 65th session of the United Nations General Assembly urged the United Nations Environment Programme to convene a plenary meeting to establish IPBES. [ 4 ] [ 5 ] In 2013 an initial conceptual framework was adopted for the prospective IPBES plenary. [ 5 ]
From 29 April to 4 May 2019, representatives of the 132 IPBES members met in Paris, France, to discuss the Global Assessment Report on Biodiversity and Ecosystem Services [ 6 ] and to adopt its summary for policymakers (SPM). On 6 May 2019, the 40-page document was released, aiming to empower policymakers with the knowledge and evidence to make better decisions when developing policies and taking actions for the benefit of humans and nature. [ 7 ] [ 8 ]
On October 29, 2020, the organization issued a preliminary report through Zenodo on its workshop, held virtually on 27–31 July 2020, [ 12 ] which proposes a plan for international cooperation to lower the risk of pandemics . Lowering the frequency and severity of pandemics through the implementation of worldwide policies is the objective of the organization. An article published by Medical News Today on November 7, 2020, explains the information in the report. [ 13 ]
The 5th IPBES Plenary in 2017 noted that the concept of nature's contributions to people would be used in current and future IPBES assessments. The concept of “nature's contributions to people” has since replaced the use of the phrase “nature's benefits to people” that had been used in the conceptual framework as initially adopted in 2013. This change was met with objection from some scientists, who worried that the new term would be confusing and that NCPs were not significantly different from ecosystem services. [ 14 ]
In June 2021, IPBES and IPCC released a co-sponsored workshop report on biodiversity and climate change . The workshop produced a summary report covering outcomes, [ 15 ] and a 250-page scientific outcome report. [ 16 ]
The Nexus Assessment is a report by IPBES released on 17 December 2024. It examines how biodiversity, water, food and health are interconnected, and is the most ambitious scientific assessment of these links carried out to date. It also examines more than five dozen different ways of dealing with the problem, in order to maximize the benefits across the five 'nexus elements': biodiversity, water, food, health and climate change. [ 27 ]
The IPBES Assessment Report on the Underlying Causes of Biodiversity Loss and the Determinants of Transformative Change and Options for Achieving the 2050 Vision for Biodiversity – also known as the Transformative Change Assessment – came out on 18 December 2024. It builds on the 2019 IPBES Global Assessment Report, which found that the only way to achieve global development goals is through transformative change. [ 28 ]
In October 2022, the IPBES and the IPCC shared the Gulbenkian Prize for Humanity , because the two intergovernmental organisations "produce scientific knowledge, alert society, and inform decision-makers to make better choices for combatting climate change and the loss of biodiversity". [ 29 ]
The Blue Planet Prize is awarded annually to individuals and organisations that have demonstrated exceptional accomplishments in scientific research and its practical applications. These achievements have contributed to the identification of solutions to pressing global environmental challenges.
The 2024 laureates include the IPBES, recognised as "the leading global authority on the state of knowledge and science about biodiversity, ecosystem services and nature's contributions to people". [ 30 ] | https://en.wikipedia.org/wiki/Intergovernmental_Science-Policy_Platform_on_Biodiversity_and_Ecosystem_Services |
In zoology , intergradation is the way in which two distinct subspecies are connected via areas where populations are found that have the characteristics of both. There are two types of intergradation: primary and secondary intergradation.
This occurs in cases where two subspecies are connected via one or more intermediate populations, each of which is in turn intermediate to its adjacent populations and exhibits more or less the same amount of variability as any other population within the species . Adjacent populations and subspecies are subject to cline intergradation, and in these situations it is usually taken for granted that the clines are causally related (by natural selection ) to environmental gradients . [ 1 ]
When contact is reestablished between a geographically isolated subspecies and the main body of the species or another isolated subspecies, interbreeding takes place as long as the isolate has not yet evolved an effective set of isolating mechanisms. Consequently, a relatively distinct zone or belt of hybridization will develop, depending on the degree of genetic and phenotypic difference that was achieved by the previously isolated subspecies . [ 1 ] | https://en.wikipedia.org/wiki/Intergradation
In materials science , intergranular corrosion ( IGC ), also known as intergranular attack ( IGA ), is a form of corrosion where the boundaries of crystallites of the material are more susceptible to corrosion than their insides. ( Cf. transgranular corrosion.)
This situation can happen in otherwise corrosion-resistant alloys, when the grain boundaries are depleted of corrosion-inhibiting elements such as chromium by some mechanism (known as grain boundary depletion ). In nickel alloys and austenitic stainless steels , where chromium is added for corrosion resistance, the mechanism involved is precipitation of chromium carbide at the grain boundaries, resulting in the formation of chromium-depleted zones adjacent to the grain boundaries (this process is called sensitization ). A minimum of around 12% chromium is required to ensure passivation, a mechanism by which an ultra-thin invisible film, known as the passive film, forms on the surface of stainless steels. This passive film protects the metal from corrosive environments. The self-healing property of the passive film makes the steel stainless. Selective leaching often involves grain boundary depletion mechanisms.
These zones also act as local galvanic couples , causing local galvanic corrosion . This condition happens when the material is heated to temperatures around 700 °C for too long a time, and often occurs during welding or an improper heat treatment . When zones of such material form due to welding, the resulting corrosion is termed weld decay . Stainless steels can be stabilized against this behavior by addition of titanium , niobium , or tantalum , which form titanium carbide , niobium carbide and tantalum carbide preferentially to chromium carbide, by lowering the content of carbon in the steel (and, in the case of welding, also in the filler metal) to under 0.02%, or by heating the entire part above 1000 °C and quenching it in water, leading to dissolution of the chromium carbide in the grains and then preventing its precipitation. Another possibility is to keep the welded parts thin enough so that, upon cooling, the metal dissipates heat too quickly for chromium carbide to precipitate. The ASTM A923, [ 1 ] ASTM A262, [ 2 ] and other similar tests are often used to determine when stainless steels are susceptible to intergranular corrosion. The tests require etching with chemicals that reveal the presence of intermetallic particles, sometimes combined with Charpy V-Notch and other mechanical testing.
Another related kind of intergranular corrosion is termed knifeline attack ( KLA ). Knifeline attack impacts steels stabilized by niobium, such as 347 stainless steel. Titanium, niobium, and their carbides dissolve in steel at very high temperatures. Under some cooling regimes (depending on the rate of cooling), niobium carbide does not precipitate, and the steel then behaves like unstabilized steel, forming chromium carbide instead. This affects only a thin zone several millimeters wide in the immediate vicinity of the weld, making it difficult to spot and increasing the corrosion speed. Structures made of such steels have to be heated as a whole to about 1065 °C (1950 °F), when the chromium carbide dissolves and niobium carbide forms. The cooling rate after this treatment is not important, as the carbon that would otherwise pose a risk of formation of chromium carbide is already sequestered as niobium carbide. [1]
Aluminium -based alloys may be sensitive to intergranular corrosion if there are layers of materials acting as anodes between the aluminium-rich crystals. High-strength aluminium alloys, especially when extruded or otherwise subjected to a high degree of working, can undergo exfoliation corrosion , where the corrosion products build up between the flat, elongated grains and separate them, resulting in a lifting or leafing effect and often propagating from the edges of the material through its entire structure. [2] Intergranular corrosion is a concern especially for alloys with a high content of copper .
Other kinds of alloys can undergo exfoliation as well; the sensitivity of cupronickel increases together with its nickel content. A broader term for this class of corrosion is lamellar corrosion . Alloys of iron are susceptible to lamellar corrosion, as the volume of iron oxides is about seven times higher than the volume of the original metal, leading to the formation of internal tensile stresses tearing the material apart. A similar effect leads to the formation of lamellae in stainless steels, due to the difference in thermal expansion between the oxides and the metal. [3]
Copper-based alloys become sensitive when depletion of copper content in the grain boundaries occurs.
Anisotropic alloys, where extrusion or heavy working leads to formation of long, flat grains, are especially prone to intergranular corrosion. [4]
Intergranular corrosion induced by environmental stresses is termed stress corrosion cracking . Intergranular corrosion can be detected by ultrasonic and eddy current methods.
Sensitization refers to the precipitation of carbides at grain boundaries in a stainless steel or alloy, causing the steel or alloy to be susceptible to intergranular corrosion or intergranular stress corrosion cracking.
Certain alloys, when exposed to a temperature characterized as a sensitizing temperature, become particularly susceptible to intergranular corrosion. In a corrosive atmosphere, the grain interfaces of these sensitized alloys become very reactive and intergranular corrosion results. This is characterized by a localized attack at and adjacent to grain boundaries with relatively little corrosion of the grains themselves. The alloy disintegrates (grains fall out) and/or loses its strength.
The photos show the typical microstructure of a normalized (unsensitized) type 304 stainless steel and a heavily sensitized steel. The samples were polished and etched before the photos were taken, and the sensitized areas show as wide, dark lines where the etching fluid has caused corrosion. The dark lines consist of carbides and corrosion products.
Intergranular corrosion is generally considered to be caused by the segregation of impurities at the grain boundaries or by enrichment or depletion of one of the alloying elements in the grain boundary areas. Thus in certain aluminium alloys , small amounts of iron have been shown to segregate in the grain boundaries and cause intergranular corrosion. Also, it has been shown that the zinc content of a brass is higher at the grain boundaries and subject to such corrosion. High-strength aluminium alloys such as the Duralumin -type alloys (Al-Cu) which depend upon precipitated phases for strengthening are susceptible to intergranular corrosion following sensitization at temperatures of about 120 °C. Nickel -rich alloys such as Inconel 600 and Incoloy 800 show similar susceptibility. Die-cast zinc alloys containing aluminum exhibit intergranular corrosion by steam in a marine atmosphere. Cr-Mn and Cr-Mn-Ni steels are also susceptible to intergranular corrosion following sensitization in the temperature range of 420 °C–850 °C. In the case of the austenitic stainless steels , when these steels are sensitized by being heated in the temperature range of about 520 °C to 800 °C, depletion of chromium in the grain boundary region occurs, resulting in susceptibility to intergranular corrosion. Such sensitization of austenitic stainless steels can readily occur because of temperature service requirements, as in steam generators , or as a result of subsequent welding of the formed structure.
Several methods have been used to control or minimize the intergranular corrosion of susceptible alloys, particularly of the austenitic stainless steels . For example, a high-temperature solution heat treatment , commonly termed solution- annealing , quench -annealing or solution-quenching, has been used. The alloy is heated to a temperature of about 1,060 °C to 1,120 °C and then water quenched. This method is generally unsuitable for treating large assemblies, and also ineffective where welding is subsequently used for making repairs or for attaching other structures.
Another control technique for preventing intergranular corrosion involves incorporating strong carbide formers or stabilizing elements such as niobium or titanium in the stainless steels. Such elements have a much greater affinity for carbon than does chromium ; carbide formation with these elements reduces the carbon available in the alloy for formation of chromium carbides . Such a stabilized titanium-bearing austenitic chromium-nickel-copper stainless steel is shown in U.S. Pat. No. 3,562,781. Or the stainless steel may initially be reduced in carbon content below 0.03 percent so that insufficient carbon is provided for carbide formation. These techniques are expensive and only partially effective since sensitization may occur with time. The low-carbon steels also frequently exhibit lower strengths at high temperatures. | https://en.wikipedia.org/wiki/Intergranular_corrosion |
In fracture mechanics , intergranular fracture , intergranular cracking or intergranular embrittlement occurs when a crack propagates along the grain boundaries of a material, usually when these grain boundaries are weakened. [ 1 ] The more commonly seen transgranular fracture occurs when the crack grows through the material grains. As an analogy, in a wall of bricks, intergranular fracture would correspond to a fracture that takes place in the mortar that keeps the bricks together.
Intergranular cracking is likely to occur if there is a hostile environmental influence and is favored by larger grain sizes and higher stresses . [ 1 ] Intergranular cracking is possible over a wide range of temperatures. [ 2 ] While transgranular cracking is favored by strain localization (which in turn is encouraged by smaller grain sizes), intergranular fracture is promoted by strain homogenization resulting from coarse grains. [ 3 ]
Embrittlement , or loss of ductility, is often accompanied by a change in fracture mode from transgranular to intergranular fracture. [ 4 ] This transition is particularly significant in the mechanism of impurity-atom embrittlement. [ 4 ] Additionally, hydrogen embrittlement is a common category of embrittlement in which intergranular fracture can be observed. [ 5 ]
Intergranular fracture can occur in a wide variety of materials, including steel alloys, copper alloys, aluminum alloys, and ceramics. [ 6 ] [ 7 ] [ 3 ] In metals with multiple lattice orientations, when one lattice ends and another begins, the fracture changes direction to follow the new grain. This results in a fairly jagged-looking fracture with straight edges along the grains, and a shiny surface may be seen. In ceramics, intergranular fractures propagate through grain boundaries, producing smooth bumpy surfaces where grains can be easily identified.
Though it is easy to identify intergranular cracking, pinpointing the cause is more complex as the mechanisms are more varied, compared to transgranular fracture. [ 6 ] There are several other processes that can lead to intergranular fracture or preferential crack propagation at the grain boundaries: [ 8 ] [ 6 ]
From an energy standpoint, the energy released by intergranular crack propagation is higher than that predicted by Griffith theory , implying that the additional energy term to propagate a crack comes from a grain-boundary mechanism. [ 9 ]
Intergranular fracture can be categorized into the following: [ 6 ]
At room temperature, intergranular fracture is commonly associated with altered cohesion resulting from segregation of solutes or impurities at the grain boundaries. [ 10 ] Examples of solutes known to influence intergranular fracture are sulfur, phosphorus, arsenic, and antimony specifically in steels, lead in aluminum alloys, and hydrogen in numerous structural alloys. [ 10 ] At high impurity levels, especially in the case of hydrogen embrittlement , the likelihood of intergranular fracture is greater. [ 6 ] Solutes like hydrogen are hypothesized to stabilize and increase the density of strain-induced vacancies, [ 11 ] leading to microcracks and microvoids at grain boundaries. [ 5 ]
Intergranular cracking is dependent on the relative orientation of the common boundary between two grains. The path of intergranular fracture typically occurs along the highest-angle grain boundary. [ 6 ] In a study, it was shown that cracking was never exhibited for boundaries with misorientation of up to 20 degrees, regardless of boundary type. [ 12 ] At greater angles, large areas of cracked, uncracked, and mixed behavior were seen. The results imply that the degree of grain boundary cracking, and hence intergranular fracture, is largely determined by boundary porosity, or the amount of atomic misfit. [ 12 ] | https://en.wikipedia.org/wiki/Intergranular_fracture |
The Interim Register of Marine and Nonmarine Genera ( IRMNG ) is a taxonomic database which attempts to cover published genus names for all domains of life (also including subgenera in zoology), from 1758 in zoology (1753 in botany) up to the present, arranged in a single, internally consistent taxonomic hierarchy, for the benefit of Biodiversity Informatics initiatives plus general users of biodiversity (taxonomic) information. In addition to containing over 500,000 published genus name instances as at July 2024 (also including subgeneric names in zoology), the database holds over 1.7 million species names (1.3 million listed as "accepted"), although this component of the data is not maintained in as current or complete state as the genus-level holdings. IRMNG can be queried online for access to the latest version of the dataset and is also made available as periodic snapshots or data dumps for import/upload into other systems as desired. The database was commenced in 2006 at the then CSIRO Division of Marine and Atmospheric Research in Australia and, since 2016, has been hosted at the Flanders Marine Institute (VLIZ) in Belgium.
IRMNG contains scientific names (only) of the genera (plus zoological subgenera, see below), a subset of species , and principal higher ranks of most plants, animals and other kingdoms , both living and extinct, within a standardized taxonomic hierarchy, with associated machine-readable information on habitat (e.g. marine /nonmarine) and extant / fossil status for the majority of entries. [ 1 ] The database aspires to provide complete coverage of both accepted and unaccepted genus names across all kingdoms, with a subset only of species names included as a secondary activity. The names in IRMNG fall within the governance of the International Code of Zoological Nomenclature for zoology (covering animals, zoological protists, and trace fossils attributable to the activities of animals), the International Code of Nomenclature for algae, fungi, and plants (ICN or ICNafp) for botany including those groups, the International Code of Nomenclature of Prokaryotes for Bacteria and Archaea , and the International Committee on Taxonomy of Viruses for that group. [ 2 ]
In its July 2024 release, IRMNG contained 508,851 genus names, of which 240,625 were listed as "accepted", 123,117 "unaccepted", 23,192 of "other" status i.e. interim unpublished, nomen dubium , nomen nudum , taxon inquirendum or temporary name, and 121,917 as "uncertain" (unassessed for taxonomic status at this time). [ 3 ] The data originate from a range of (frequently domain-specific) print, online and database sources, including (among others) Nomenclator Zoologicus for animals and Index Nominum Genericorum for plants, and are reorganised into a common data structure to support a variety of online queries, generation of individual taxon pages, and bulk data supply to other biodiversity informatics projects. IRMNG content can be queried and displayed freely via the web, and download files of the data down to the taxonomic rank of genus as at specific dates are available in the Darwin Core Archive (DwC-A) format. The data include homonyms (with their authorities), including both available (validly published) and selected unavailable names. [ 4 ]
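As an illustration of how such a snapshot might be consumed downstream, here is a minimal sketch. It assumes a locally downloaded Darwin Core Archive whose core file is a tab-separated taxon.txt using the standard Darwin Core terms taxonRank and taxonomicStatus; the archive file name and exact layout are assumptions and should be checked against the archive's meta.xml. The script tallies genus records by taxonomic status.

```python
import csv
import io
import zipfile

# Minimal sketch (assumed file name and layout): count IRMNG genus records per
# taxonomic status in a Darwin Core Archive export.

counts: dict[str, int] = {}
with zipfile.ZipFile("irmng_dwca.zip") as archive:                       # assumed file name
    with io.TextIOWrapper(archive.open("taxon.txt"), encoding="utf-8") as fh:  # assumed core file
        reader = csv.DictReader(fh, delimiter="\t")
        for row in reader:
            if row.get("taxonRank", "").lower() == "genus":              # standard DwC term
                status = row.get("taxonomicStatus") or "unknown"
                counts[status] = counts.get(status, 0) + 1

for status, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{status}: {n}")
```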
Since in zoology (only) names of subgenera are included, along with genera, in the "genus-group" and are deemed by the "principle of coordination" to have been simultaneously published at both ranks even if not explicitly so at the time of original publication, [ 5 ] they are included as available generic names in the IRMNG compilation but marked as "unaccepted names" (not currently used as the accepted name for a genus) except where they are currently in use as the accepted name for a genus. By contrast, the botanical Code (ICN), which covers Algae, Fungi and Plants, lacks such a provision, so subgenera published under that Code are not included in IRMNG except where they have been explicitly re-ranked to be botanical genera, in which case a new "name" is considered to have been created at that point with authorship given in the form of the original author(s) of the subgenus in parentheses, followed by the name of the author(s) responsible for the newly elevated status. [ 6 ] [ a ]
Estimates for "accepted names" as held at May 2023 are as follows, broken down by kingdom, following the methodology used in Rees et al., 2020 (updated using 2023 data):
IRMNG was initiated and designed by Australian biologist and data manager Tony Rees in 2006. [ 1 ] [ 7 ] For his work on this and other projects, GBIF awarded him the 2014 Ebbe Nielsen Prize . [ 7 ] From 2006 to 2014 IRMNG was located at CSIRO Marine and Atmospheric Research , and was moved to the Flanders Marine Institute (VLIZ) over the period 2014–2016; from 2016 onwards all releases have been available via its new website www.irmng.org which is hosted by VLIZ. [ 1 ] [ 8 ] VLIZ also hosts the World Register of Marine Species (WoRMS), using a common infrastructure. [ 9 ]
Content from IRMNG is used by several global Biodiversity Informatics projects including Open Tree of Life , [ 10 ] the Global Biodiversity Information Facility (GBIF), [ 11 ] and the Encyclopedia of Life (EOL), [ 12 ] in addition to others including the Atlas of Living Australia [ 13 ] and the Global Names Architecture (GNA)'s Global Names Resolver . [ 14 ] From 2018 onwards, IRMNG data are also being used to populate the taxonomic hierarchy and provide generic names for a range of taxa in the areas of protists (kingdoms Protozoa and Chromista ) and plant algae ( Charophyta , Chlorophyta , Glaucophyta and Rhodophyta ) in the Catalogue of Life . [ 15 ] | https://en.wikipedia.org/wiki/Interim_Register_of_Marine_and_Nonmarine_Genera |
In Einstein 's theory of general relativity , the interior Schwarzschild metric (also interior Schwarzschild solution or Schwarzschild fluid solution ) is an exact solution for the gravitational field in the interior of a non-rotating spherical body which consists of an incompressible fluid (implying that density is constant throughout the body) and has zero pressure at the surface. This is a static solution, meaning that it does not change over time. It was discovered by Karl Schwarzschild in 1916, who earlier had found the exterior Schwarzschild metric . [ 1 ]
The interior Schwarzschild metric is framed in a spherical coordinate system with the body's centre located at the origin, plus the time coordinate. Its line element is [ 2 ] [ 3 ]
where
This solution is valid for {\displaystyle r\leq r_{g}} . For a complete metric of the sphere's gravitational field, the interior Schwarzschild metric has to be matched with the exterior one,
at the surface. It can easily be seen that the two have the same value at the surface, i.e., at {\displaystyle r=r_{g}} .
Defining a parameter {\displaystyle {\mathcal {R}}^{2}=r_{g}^{3}/r_{s}} , we get
We can also define an alternative radial coordinate {\displaystyle \eta =\arcsin {\frac {r}{\mathcal {R}}}} and a corresponding parameter {\displaystyle \eta _{g}=\arcsin {\frac {r_{g}}{\mathcal {R}}}=\arcsin {\sqrt {\frac {r_{s}}{r_{g}}}}} , yielding [ 4 ]
With {\displaystyle g_{rr}=(1-r_{s}r^{2}/r_{g}^{3})^{-1}} and the area {\displaystyle A=4\pi r^{2}} , the integral for the proper volume is
which is larger than the volume of a Euclidean reference shell.
The fluid has a constant density by definition. It is given by
where {\displaystyle \kappa =8\pi G/c^{2}} is the Einstein gravitational constant . [ 3 ] [ 5 ] It may be counterintuitive that the density is the mass divided by the volume of a sphere with radius {\displaystyle r_{g}} , which seems to disregard that this is less than the proper radius, and that space inside the body is curved so that the volume formula for a "flat" sphere shouldn't hold at all. However, {\displaystyle M} is the mass measured from the outside, for example by observing a test particle orbiting the gravitating body (the " Kepler mass"), which in general relativity is not necessarily equal to the proper mass. This mass difference exactly cancels out the difference of the volumes.
The pressure of the incompressible fluid can be found by calculating the Einstein tensor {\displaystyle G_{\mu \nu }} from the metric. The Einstein tensor is diagonal (i.e., all off-diagonal elements are zero), meaning there are no shear stresses , and has equal values for the three spatial diagonal components, meaning pressure is isotropic . Its value is
As expected, the pressure is zero at the surface of the sphere and increases towards the centre. It becomes infinite at the centre if {\displaystyle \cos \eta _{g}=1/3} , which corresponds to {\displaystyle r_{s}={\frac {8}{9}}r_{g}} or {\displaystyle \eta _{g}\approx 70.5^{\circ }} , which is true for a body that is extremely dense or large. Such a body suffers gravitational collapse into a black hole . As this is a time dependent process, the Schwarzschild solution does not hold any longer. [ 2 ] [ 3 ]
Gravitational redshift for radiation from the sphere's surface (for example, light from a star) is
From the stability condition {\displaystyle \cos \eta _{g}>1/3} it follows that {\displaystyle z<2} . [ 3 ]
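As a quick numerical illustration, here is a minimal sketch using standard physical constants and textbook mass and radius values; the surface-redshift expression z = (1 - r_s/r_g)^(-1/2) - 1 is the one implied by the relations quoted above, since cos η g is the square root of 1 - r_s/r_g. The script computes the Schwarzschild radius, checks the stability condition r_s < (8/9) r_g, and evaluates the surface redshift.

```python
# Minimal sketch: Schwarzschild radius, stability condition and surface redshift
# for two familiar bodies (standard constants; mass and radius are textbook figures).

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m s^-1

def schwarzschild_radius(mass_kg: float) -> float:
    return 2 * G * mass_kg / c ** 2

def surface_redshift(mass_kg: float, radius_m: float) -> float:
    # z = (1 - r_s/r_g)^(-1/2) - 1, which stays below 2 when cos(eta_g) > 1/3
    r_s = schwarzschild_radius(mass_kg)
    return (1 - r_s / radius_m) ** -0.5 - 1

bodies = {"Earth": (5.97e24, 6.371e6), "Sun": (1.989e30, 6.963e8)}   # (mass kg, radius m)

for name, (mass, r_g) in bodies.items():
    r_s = schwarzschild_radius(mass)
    stable = r_s < (8 / 9) * r_g     # pressure stays finite at the centre
    print(f"{name}: r_s = {r_s:.3e} m, stable = {stable}, surface z = {surface_redshift(mass, r_g):.2e}")
```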
The spatial curvature of the interior Schwarzschild metric can be visualized by taking a slice (1) with constant time and (2) through the sphere's equator, i.e. {\displaystyle t=const.,\theta =\pi /2} . This two-dimensional slice can be embedded in a three-dimensional Euclidean space and then takes the shape of a spherical cap with radius {\displaystyle {\mathcal {R}}} and half opening angle {\displaystyle \eta _{g}} . Its Gaussian curvature {\displaystyle K} is proportional to the fluid's density and equals {\displaystyle {\mathcal {R}}^{-2}=r_{s}/r_{g}^{3}=\rho \kappa /3} . As the exterior metric can be embedded in the same way (yielding Flamm's paraboloid ), a slice of the complete solution can be drawn like this: [ 5 ] [ 6 ]
In this graphic, the blue circular arc represents the interior metric, and the black parabolic arcs with the equation {\displaystyle w=2{\sqrt {r_{s}(r-r_{s})}}} represent the exterior metric, or Flamm's paraboloid. The {\displaystyle \eta } -coordinate is the angle measured from the centre of the cap, that is, from "above" the slice. The proper radius of the sphere – intuitively, the length of a measuring rod spanning from its centre to a point on its surface – is half the length of the circular arc, or {\displaystyle \eta _{g}{\mathcal {R}}} .
This is a purely geometric visualization and does not imply a physical "fourth spatial dimension" into which space would be curved. (Intrinsic curvature does not imply extrinsic curvature .)
Here are the relevant parameters for some astronomical objects, disregarding rotation and inhomogeneities such as deviation from the spherical shape and variation in density.
The interior Schwarzschild solution was the first static spherically symmetric perfect fluid solution that was found. It was published on 24 February 1916, only three months after Einstein's field equations and one month after Schwarzschild's exterior solution. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Interior_Schwarzschild_metric |
In abstract algebra , an interior algebra is a certain type of algebraic structure that encodes the idea of the topological interior of a set. Interior algebras are to topology and the modal logic S4 what Boolean algebras are to set theory and ordinary propositional logic . Interior algebras form a variety of modal algebras .
An interior algebra is an algebraic structure with the signature
⟨ S , ·, +, ′, 0, 1, I ⟩
where
⟨ S , ·, +, ′, 0, 1⟩
is a Boolean algebra and postfix I designates a unary operator , the interior operator , satisfying the identities:
x I ≤ x
( x I ) I = x I
( x · y ) I = x I · y I
1 I = 1
x I is called the interior of x .
The dual of the interior operator is the closure operator C defined by x C = (( x ′) I )′. x C is called the closure of x . By the principle of duality , the closure operator satisfies the identities:
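x ≤ x C ,  x CC = x C ,  ( x + y ) C = x C + y C ,  0 C = 0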
If the closure operator is taken as primitive, the interior operator can be defined as x I = (( x ′) C )′. Thus the theory of interior algebras may be formulated using the closure operator instead of the interior operator, in which case one considers closure algebras of the form ⟨ S , ·, +, ′, 0, 1, C ⟩, where ⟨ S , ·, +, ′, 0, 1⟩ is again a Boolean algebra and C satisfies the above identities for the closure operator. Closure and interior algebras form dual pairs, and are paradigmatic instances of "Boolean algebras with operators." The early literature on this subject (mainly Polish topology) invoked closure operators, but the interior operator formulation eventually became the norm [ citation needed ] following the work of Wim Blok .
Elements of an interior algebra satisfying the condition x I = x are called open . The complements of open elements are called closed and are characterized by the condition x C = x . An interior of an element is always open and the closure of an element is always closed. Interiors of closed elements are called regular open and closures of open elements are called regular closed . Elements that are both open and closed are called clopen . 0 and 1 are clopen.
An interior algebra is called Boolean if all its elements are open (and hence clopen). Boolean interior algebras can be identified with ordinary Boolean algebras as their interior and closure operators provide no meaningful additional structure. A special case is the class of trivial interior algebras, which are the single element interior algebras characterized by the identity 0 = 1.
Interior algebras, by virtue of being algebraic structures , have homomorphisms . Given two interior algebras A and B , a map f : A → B is an interior algebra homomorphism if and only if f is a homomorphism between the underlying Boolean algebras of A and B , that also preserves interiors and closures. Hence:
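f ( x I ) = f ( x ) I and f ( x C ) = f ( x ) C , for all x in A .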
Topomorphisms are another important, and more general, class of morphisms between interior algebras. A map f : A → B is a topomorphism if and only if f is a homomorphism between the Boolean algebras underlying A and B , that also preserves the open and closed elements of A . Hence:
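if x I = x then f ( x ) I = f ( x ), and if x C = x then f ( x ) C = f ( x ), for all x in A .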
(Such morphisms have also been called stable homomorphisms and closure algebra semi-homomorphisms .) Every interior algebra homomorphism is a topomorphism, but not every topomorphism is an interior algebra homomorphism.
Early research often considered mappings between interior algebras that were homomorphisms of the underlying Boolean algebras but that did not necessarily preserve the interior or closure operator. Such mappings were called Boolean homomorphisms . (The terms closure homomorphism or topological homomorphism were used in the case where these were preserved, but this terminology is now redundant as the standard definition of a homomorphism in universal algebra requires that it preserves all operations.) Applications involving countably complete interior algebras (in which countable meets and joins always exist, also called σ-complete ) typically made use of countably complete Boolean homomorphisms also called Boolean σ-homomorphisms —these preserve countable meets and joins.
The earliest generalization of continuity to interior algebras was Sikorski 's, based on the inverse image map of a continuous map . This is a Boolean homomorphism, preserves unions of sequences and includes the closure of an inverse image in the inverse image of the closure. Sikorski thus defined a continuous homomorphism as a Boolean σ-homomorphism f between two σ-complete interior algebras such that f ( x ) C ≤ f ( x C ). This definition had several difficulties: The construction acts contravariantly producing a dual of a continuous map rather than a generalization. On the one hand σ-completeness is too weak to characterize inverse image maps (completeness is required), on the other hand it is too restrictive for a generalization. (Sikorski remarked on using non-σ-complete homomorphisms but included σ-completeness in his axioms for closure algebras .) Later J. Schmid defined a continuous homomorphism or continuous morphism for interior algebras as a Boolean homomorphism f between two interior algebras satisfying f ( x C ) ≤ f ( x ) C . This generalizes the forward image map of a continuous map—the image of a closure is contained in the closure of the image. This construction is covariant but not suitable for category theoretic applications as it only allows construction of continuous morphisms from continuous maps in the case of bijections. (C. Naturman returned to Sikorski's approach while dropping σ-completeness to produce topomorphisms as defined above. In this terminology, Sikorski's original "continuous homomorphisms" are σ-complete topomorphisms between σ-complete interior algebras.)
Given a topological space X = ⟨ X , T ⟩ one can form the power set Boolean algebra of X :
and extend it to an interior algebra
where I is the usual topological interior operator. For all S ⊆ X it is defined by
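S I = ⋃ { O ∈ T | O ⊆ S }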
For all S ⊆ X the corresponding closure operator is given by
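S C = X ∖ ( X ∖ S ) I = ⋂ { F ⊆ X | F is closed and S ⊆ F }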
S I is the largest open subset of S and S C is the smallest closed superset of S in X . The open, closed, regular open, regular closed and clopen elements of the interior algebra A ( X ) are just the open, closed, regular open, regular closed and clopen subsets of X respectively in the usual topological sense.
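As a small illustration of the A ( X ) construction, the following Python sketch (illustrative code only; the space, topology and helper names are chosen purely for the example) computes interiors and closures on the power set of a three-point topological space and checks the interior-operator identities:

    # Interior algebra A(X) of a small finite topological space X = ({1,2,3}, T).
    from itertools import chain, combinations

    X = frozenset({1, 2, 3})
    # A topology on X: contains the empty set and X, closed under unions and intersections.
    T = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

    def subsets(s):
        """All subsets of s -- the elements of the power set Boolean algebra."""
        s = list(s)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1))]

    def interior(S):
        """S^I: the union of all open sets contained in S (its largest open subset)."""
        return frozenset().union(*(O for O in T if O <= S))

    def closure(S):
        """S^C = ((S')^I)': the smallest closed superset of S."""
        return X - interior(X - S)

    # Check the interior-operator identities on every element of the algebra.
    assert interior(X) == X                                        # 1^I = 1
    for S in subsets(X):
        assert interior(S) <= S                                    # x^I <= x
        assert interior(interior(S)) == interior(S)                # x^II = x^I
        for Q in subsets(X):
            assert interior(S & Q) == interior(S) & interior(Q)    # (x.y)^I = x^I.y^I

    # The open elements (S^I = S) are exactly the open sets of the topology.
    assert {S for S in subsets(X) if interior(S) == S} == T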
Every complete atomic interior algebra is isomorphic to an interior algebra of the form A ( X ) for some topological space X . Moreover, every interior algebra can be embedded in such an interior algebra giving a representation of an interior algebra as a topological field of sets . The properties of the structure A ( X ) are the very motivation for the definition of interior algebras. Because of this intimate connection with topology, interior algebras have also been called topo-Boolean algebras or topological Boolean algebras .
Given a continuous map between two topological spaces
we can define a complete topomorphism
by
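A ( f ) : A ( Y ) → A ( X ) , A ( f )( S ) = f −1 [ S ] (the inverse image of S under the continuous map f : X → Y )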
for all subsets S of Y . Every complete topomorphism between two complete atomic interior algebras can be derived in this way. If Top is the category of topological spaces and continuous maps and Cit is the category of complete atomic interior algebras and complete topomorphisms then Top and Cit are dually isomorphic and A : Top → Cit is a contravariant functor that is a dual isomorphism of categories. A ( f ) is a homomorphism if and only if f is a continuous open map .
Under this dual isomorphism of categories many natural topological properties correspond to algebraic properties, in particular connectedness properties correspond to irreducibility properties:
The modern formulation of topological spaces in terms of topologies of open subsets, motivates an alternative formulation of interior algebras: A generalized topological space is an algebraic structure of the form
where ⟨ B , ·, +, ′, 0, 1⟩ is a Boolean algebra as usual, and T is a unary relation on B (subset of B ) such that:
T is said to be a generalized topology in the Boolean algebra.
Given an interior algebra its open elements form a generalized topology. Conversely given a generalized topological space
we can define an interior operator on B by b I = Σ{ a ∈ T | a ≤ b } thereby producing an interior algebra whose open elements are precisely T . Thus generalized topological spaces are equivalent to interior algebras.
Considering interior algebras to be generalized topological spaces, topomorphisms are then the standard homomorphisms of Boolean algebras with added relations, so that standard results from universal algebra apply.
The topological concept of neighbourhoods can be generalized to interior algebras: An element y of an interior algebra is said to be a neighbourhood of an element x if x ≤ y I . The set of neighbourhoods of x is denoted by N ( x ) and forms a filter . This leads to another formulation of interior algebras:
A neighbourhood function on a Boolean algebra is a mapping N from its underlying set B to its set of filters, such that:
The mapping N of elements of an interior algebra to their filters of neighbourhoods is a neighbourhood function on the underlying Boolean algebra of the interior algebra. Moreover, given a neighbourhood function N on a Boolean algebra with underlying set B , we can define an interior operator by x I = max{ y ∈ B | x ∈ N ( y ) }, thereby obtaining an interior algebra. N ( x ) will then be precisely the filter of neighbourhoods of x in this interior algebra. Thus interior algebras are equivalent to Boolean algebras with specified neighbourhood functions.
In terms of neighbourhood functions, the open elements are precisely those elements x such that x ∈ N ( x ) . In terms of open elements x ∈ N ( y ) if and only if there is an open element z such that y ≤ z ≤ x .
Neighbourhood functions may be defined more generally on (meet)-semilattices producing the structures known as neighbourhood (semi)lattices . Interior algebras may thus be viewed as precisely the Boolean neighbourhood lattices i.e. those neighbourhood lattices whose underlying semilattice forms a Boolean algebra.
Given a theory (set of formal sentences) M in the modal logic S4 , we can form its Lindenbaum–Tarski algebra :
where ~ is the equivalence relation on sentences in M given by p ~ q if and only if p and q are logically equivalent in M , and M / ~ is the set of equivalence classes under this relation. Then L ( M ) is an interior algebra. The interior operator in this case corresponds to the modal operator □ ( necessarily ), while the closure operator corresponds to ◊ ( possibly ). This construction is a special case of a more general result for modal algebras and modal logic.
The open elements of L ( M ) correspond to sentences that are only true if they are necessarily true, while the closed elements correspond to those that are only false if they are necessarily false.
Because of their relation to S4 , interior algebras are sometimes called S4 algebras or Lewis algebras , after the logician C. I. Lewis , who first proposed the modal logics S4 and S5 .
Since interior algebras are (normal) Boolean algebras with operators , they can be represented by fields of sets on appropriate relational structures. In particular, since they are modal algebras , they can be represented as fields of sets on a set with a single binary relation , called a Kripke frame . The Kripke frames corresponding to interior algebras are precisely the preordered sets . Preordered sets (also called S4-frames ) provide the Kripke semantics of the modal logic S4 , and the connection between interior algebras and preorders is deeply related to their connection with modal logic.
Given a preordered set X = ⟨ X , «⟩ we can construct an interior algebra
from the power set Boolean algebra of X where the interior operator I is given by
The corresponding closure operator is given by
S I is the set of all worlds inaccessible from worlds outside S , and S C is the set of all worlds accessible from some world in S . Every interior algebra can be embedded in an interior algebra of the form B ( X ) for some preordered set X giving the above-mentioned representation as a field of sets (a preorder field ).
This construction and representation theorem is a special case of the more general result for modal algebras and Kripke frames. In this regard, interior algebras are particularly interesting because of their connection to topology . The construction provides the preordered set X with a topology , the Alexandrov topology , producing a topological space T ( X ) whose open sets are:
The corresponding closed sets are:
In other words, the open sets are the ones whose worlds are inaccessible from outside (the up-sets ), and the closed sets are the ones for which every outside world is inaccessible from inside (the down-sets ). Moreover, B ( X ) = A ( T ( X )).
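A compact sketch of the B ( X ) construction and its Alexandrov topology (illustrative Python only; the three-world relation R below encodes "accessible from" and is an arbitrary reflexive, transitive example):

    # B(X) for a small preordered set of "worlds"; R[v] is the set of worlds
    # accessible from v.  R must be reflexive and transitive.
    from itertools import chain, combinations

    X = frozenset({"a", "b", "c"})
    R = {"a": {"a", "b"}, "b": {"b"}, "c": {"b", "c"}}

    def subsets(s):
        s = list(s)
        return [frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1))]

    def interior(S):
        """S^I: worlds not accessible from any world outside S."""
        return frozenset(w for w in X if all(w not in R[v] for v in X - S))

    def closure(S):
        """S^C: worlds accessible from some world in S."""
        return frozenset(w for w in X if any(w in R[v] for v in S))

    for S in subsets(X):
        assert interior(S) <= S and interior(interior(S)) == interior(S)
        assert closure(S) == X - interior(X - S)        # duality: x^C = ((x')^I)'

    # The open elements form the Alexandrov topology T(X), so B(X) = A(T(X)).
    open_sets = {S for S in subsets(X) if interior(S) == S}
    assert frozenset() in open_sets and X in open_sets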
Any monadic Boolean algebra can be considered to be an interior algebra where the interior operator is the universal quantifier and the closure operator is the existential quantifier. The monadic Boolean algebras are then precisely the variety of interior algebras satisfying the identity x IC = x I . In other words, they are precisely the interior algebras in which every open element is closed or equivalently, in which every closed element is open. Moreover, such interior algebras are precisely the semisimple interior algebras. They are also the interior algebras corresponding to the modal logic S5 , and so have also been called S5 algebras .
In the relationship between preordered sets and interior algebras they correspond to the case where the preorder is an equivalence relation , reflecting the fact that such preordered sets provide the Kripke semantics for S5 . This also reflects the relationship between the monadic logic of quantification (for which monadic Boolean algebras provide an algebraic description ) and S5 where the modal operators □ ( necessarily ) and ◊ ( possibly ) can be interpreted in the Kripke semantics using monadic universal and existential quantification, respectively, without reference to an accessibility relation.
The open elements of an interior algebra form a Heyting algebra and the closed elements form a dual Heyting algebra. The regular open elements and regular closed elements correspond to the pseudo-complemented elements and dual pseudo-complemented elements of these algebras respectively and thus form Boolean algebras. The clopen elements correspond to the complemented elements and form a common subalgebra of these Boolean algebras as well as of the interior algebra itself. Every Heyting algebra can be represented as the open elements of an interior algebra and the latter may be chosen to be an interior algebra generated by its open elements—such interior algebras correspond one-to-one with Heyting algebras (up to isomorphism) being the free Boolean extensions of the latter.
Heyting algebras play the same role for intuitionistic logic that interior algebras play for the modal logic S4 and Boolean algebras play for propositional logic . The relation between Heyting algebras and interior algebras reflects the relationship between intuitionistic logic and S4 , in which one can interpret theories of intuitionistic logic as S4 theories closed under necessity . The one-to-one correspondence between Heyting algebras and interior algebras generated by their open elements reflects the correspondence between extensions of intuitionistic logic and normal extensions of the modal logic S4.Grz .
Given an interior algebra A , the closure operator obeys the axioms of the derivative operator , D . Hence we can form a derivative algebra D ( A ) with the same underlying Boolean algebra as A by using the closure operator as a derivative operator.
Thus interior algebras are derivative algebras . From this perspective, they are precisely the variety of derivative algebras satisfying the identity x D ≥ x . Derivative algebras provide the appropriate algebraic semantics for the modal logic wK4 . Hence derivative algebras stand to topological derived sets and wK4 as interior/closure algebras stand to topological interiors/closures and S4 .
Given a derivative algebra V with derivative operator D , we can form an interior algebra I ( V ) with the same underlying Boolean algebra as V , with interior and closure operators defined by x I = x · x ′ D ′ and x C = x + x D , respectively. Thus every derivative algebra can be regarded as an interior algebra. Moreover, given an interior algebra A , we have I ( D ( A )) = A . However, D ( I ( V )) = V does not necessarily hold for every derivative algebra V .
Stone duality provides a category theoretic duality between Boolean algebras and a class of topological spaces known as Boolean spaces . Building on nascent ideas of relational semantics (later formalized by Kripke ) and a result of R. S. Pierce, Jónsson , Tarski and G. Hansoul extended Stone duality to Boolean algebras with operators by equipping Boolean spaces with relations that correspond to the operators via a power set construction . In the case of interior algebras the interior (or closure) operator corresponds to a pre-order on the Boolean space. Homomorphisms between interior algebras correspond to a class of continuous maps between the Boolean spaces known as pseudo-epimorphisms or p-morphisms for short. This generalization of Stone duality to interior algebras based on the Jónsson–Tarski representation was investigated by Leo Esakia and is also known as the Esakia duality for S4-algebras (interior algebras) and is closely related to the Esakia duality for Heyting algebras.
Whereas the Jónsson–Tarski generalization of Stone duality applies to Boolean algebras with operators in general, the connection between interior algebras and topology allows for another method of generalizing Stone duality that is unique to interior algebras. An intermediate step in the development of Stone duality is Stone's representation theorem , which represents a Boolean algebra as a field of sets . The Stone topology of the corresponding Boolean space is then generated using the field of sets as a topological basis . Building on the topological semantics introduced by Tang Tsao-Chen for Lewis's modal logic, McKinsey and Tarski showed that by generating a topology equivalent to using only the complexes that correspond to open elements as a basis, a representation of an interior algebra is obtained as a topological field of sets —a field of sets on a topological space that is closed with respect to taking interiors or closures. By equipping topological fields of sets with appropriate morphisms known as field maps , C. Naturman showed that this approach can be formalized as a category theoretic Stone duality in which the usual Stone duality for Boolean algebras corresponds to the case of interior algebras having redundant interior operator (Boolean interior algebras).
The pre-order obtained in the Jónsson–Tarski approach corresponds to the accessibility relation in the Kripke semantics for an S4 theory, while the intermediate field of sets corresponds to a representation of the Lindenbaum–Tarski algebra for the theory using the sets of possible worlds in the Kripke semantics in which sentences of the theory hold. Moving from the field of sets to a Boolean space somewhat obfuscates this connection. By treating fields of sets on pre-orders as a category in its own right this deep connection can be formulated as a category theoretic duality that generalizes Stone representation without topology. R. Goldblatt had shown that with restrictions to appropriate homomorphisms such a duality can be formulated for arbitrary modal algebras and Kripke frames. Naturman showed that in the case of interior algebras this duality applies to more general topomorphisms and can be factored via a category theoretic functor through the duality with topological fields of sets. The latter represent the Lindenbaum–Tarski algebra using sets of points satisfying sentences of the S4 theory in the topological semantics. The pre-order can be obtained as the specialization pre-order of the McKinsey–Tarski topology. The Esakia duality can be recovered via a functor that replaces the field of sets with the Boolean space it generates. Via a functor that instead replaces the pre-order with its corresponding Alexandrov topology, an alternative representation of the interior algebra as a field of sets is obtained where the topology is the Alexandrov bico-reflection of the McKinsey–Tarski topology. The approach of formulating a topological duality for interior algebras using both the Stone topology of the Jónsson–Tarski approach and the Alexandrov topology of the pre-order to form a bi-topological space has been investigated by G. Bezhanishvili, R.Mines, and P.J. Morandi. The McKinsey–Tarski topology of an interior algebra is the intersection of the former two topologies.
Grzegorczyk proved the first-order theory of closure algebras undecidable . [ 1 ] [ 2 ] Naturman demonstrated that the theory is hereditarily undecidable (all its subtheories are undecidable) and demonstrated an infinite chain of elementary classes of interior algebras with hereditarily undecidable theories. | https://en.wikipedia.org/wiki/Interior_algebra |
In mathematics , the interior extremum theorem , also known as Fermat's theorem , is a theorem which states that at the local extrema of a differentiable function , its derivative is always zero. It belongs to the mathematical field of real analysis and is named after French mathematician Pierre de Fermat .
By using the interior extremum theorem, the potential extrema of a function \(f\), with derivative \(f'\), can be found by solving the equation \(f'(x) = 0\). The interior extremum theorem gives only a necessary condition for extreme function values, as some stationary points are inflection points (not a maximum or minimum). The function's second derivative , if it exists, can sometimes be used to determine whether a stationary point is a maximum or a minimum.
Pierre de Fermat proposed in a collection of treatises titled Maxima et minima a method to find maximum or minimum, similar to the modern interior extremum theorem, albeit with the use of infinitesimals rather than derivatives. [ 1 ] : 456–457 [ 2 ] : 2 After Marin Mersenne passed the treatises onto René Descartes , Descartes was doubtful, remarking "if [...] he speaks of wanting to send you still more papers, I beg of you to ask him to think them out more carefully than those preceding". [ 2 ] : 3 Descartes later agreed that the method was valid. [ 2 ] : 8
One way to state the interior extremum theorem is that, if a function has a local extremum at some point and is differentiable there, then the function's derivative at that point must be zero. In precise mathematical language:
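(one standard formulation)

    \text{Let } f : (a,b) \to \mathbb{R} \text{ be differentiable at } x_{0} \in (a,b). \text{ If } f \text{ has a local extremum at } x_{0}, \text{ then } f'(x_{0}) = 0.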
Another way to understand the theorem is via the contrapositive statement: if the derivative of a function at any point is not zero, then there is not a local extremum at that point. Formally:
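(again in a standard formulation)

    \text{If } f \text{ is differentiable at } x_{0} \in (a,b) \text{ and } f'(x_{0}) \neq 0, \text{ then } x_{0} \text{ is not a local extremum of } f.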
The global extrema of a function f on a domain A occur only at boundaries , non-differentiable points, and stationary points.
If \(x_{0}\) is a global extremum of f , then one of the following is true: \(x_{0}\) is a boundary point of A , f is not differentiable at \(x_{0}\), or \(x_{0}\) is a stationary point of f (that is, \(f'(x_{0}) = 0\)). [ 2 ] : 1
A similar statement holds for the partial derivatives of multivariate functions . Suppose that a real-valued function \(f = f(t_{1}, t_{2}, \ldots, t_{k})\) of k real variables has an extremum at a point \(C = (a_{1}, a_{2}, \ldots, a_{k})\). If \(f\) is differentiable at \(C\), then \(\frac{\partial f}{\partial t_{i}}(C) = 0\) for every \(i = 1, 2, \ldots, k\). [ 4 ] : 16
The statement can also be extended to differentiable manifolds . If \(f : M \to \mathbb{R}\) is a differentiable function on a manifold \(M\), then its local extrema must be critical points of \(f\), in particular points where the exterior derivative \(df\) is zero. [ 5 ] [ better source needed ]
The interior extremum theorem is central for determining maxima and minima of piecewise differentiable functions of one variable: an extremum is either a stationary point (that is, a zero of the derivative), a non-differentiable point (that is, a point where the function is not differentiable), or a boundary point of the domain of the function. Since the number of these points is typically finite, computing the values of the function at these points and comparing them yields the maximum and the minimum. [ 6 ] : 25 [ 2 ] : 1
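A worked instance of this recipe (an illustrative sketch using sympy; the function and interval are arbitrary choices):

    # Global extrema of f(x) = x**3 - 3*x on the closed interval [-3, 2]:
    # compare the boundary points with the stationary points found from f'(x) = 0.
    # (f is differentiable everywhere, so there are no non-differentiable candidates.)
    import sympy as sp

    x = sp.symbols("x")
    f = x**3 - 3*x
    a, b = -3, 2

    stationary = [s for s in sp.solve(sp.diff(f, x), x) if a < s < b]   # [-1, 1]
    candidates = [a, b] + stationary
    values = {c: f.subs(x, c) for c in candidates}

    print(values)   # {-3: -18, 2: 2, -1: 2, 1: -2}
    # Global minimum: f(-3) = -18.  Global maximum: 2, attained at x = -1 and x = 2.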
Suppose that \(x_{0}\) is a local maximum. (A similar argument applies if \(x_{0}\) is a local minimum.) Then there is some neighbourhood around \(x_{0}\) such that \(f(x_{0}) \geq f(x)\) for all \(x\) within that neighbourhood. If \(x > x_{0}\), then the difference quotient \(\frac{f(x) - f(x_{0})}{x - x_{0}}\) is non-positive for \(x\) in this neighbourhood. This implies \(\lim_{x \to x_{0}^{+}} \frac{f(x) - f(x_{0})}{x - x_{0}} \leq 0\). Similarly, if \(x < x_{0}\), then the difference quotient is non-negative, and so \(\lim_{x \to x_{0}^{-}} \frac{f(x) - f(x_{0})}{x - x_{0}} \geq 0\). Since \(f\) is differentiable, the above limits must both be equal to \(f'(x_{0})\). This is only possible if both limits are equal to 0, so \(f'(x_{0}) = 0\). [ 7 ] : 182 | https://en.wikipedia.org/wiki/Interior_extremum_theorem
Interkinesis or interphase II is a period of rest that cells of some species enter during meiosis between meiosis I and meiosis II . [ 1 ] [ 2 ] No DNA replication occurs during interkinesis; however, replication does occur during the interphase I stage of meiosis (see meiosis I ). During interkinesis, the spindle of the first meiotic division disassembles and the microtubules reassemble into two new spindles for the second meiotic division. [ 3 ] Interkinesis follows telophase I; however, many plants skip telophase I and interkinesis, going immediately into prophase II. Each chromosome still consists of two chromatids . During this stage the number of other organelles may also increase.
| https://en.wikipedia.org/wiki/Interkinesis
Interlaced video (also known as interlaced scan ) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth . The interlaced signal contains two fields of a video frame captured consecutively. This enhances motion perception to the viewer, and reduces flicker by taking advantage of the characteristics of the human visual system. [ 1 ]
This effectively doubles the time resolution (also called temporal resolution ) as compared to non-interlaced footage (for frame rates equal to field rates). Interlaced signals require a display that is natively capable of showing the individual fields in a sequential order. CRT displays and ALiS plasma displays are made for displaying interlaced signals.
Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan ) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all odd-numbered lines in the image; the other contains all even-numbered lines.
Sometimes in interlaced video a field is called a frame which can lead to confusion. [ 2 ]
A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25 of a second (or 25 frames per second ), but interlacing also produces a new half-frame every 1/50 of a second (or 50 fields per second). [ 3 ] To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal (which adds input lag ).
The European Broadcasting Union argued against interlaced video in production and broadcasting. Until the early 2010s, they recommended 720p 50 fps (frames per second) for the current production format—and were working with the industry to introduce 1080p 50 as a future-proof production standard. 1080p 50 offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats, such as 720p 50 and 1080i 50. [ 4 ] [ 5 ] The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be eliminated because some information is lost between frames.
Despite arguments against it, [ 6 ] [ 7 ] television standards organizations continue to support interlacing. It is still included in digital video transmission formats such as DV , DVB , and ATSC . New video compression standards like High Efficiency Video Coding are optimized for progressive scan video but can still support interlaced video.
Progressive scan captures, transmits, and displays an image in a path similar to text on a page—line by line, top to bottom.
The interlaced scan pattern in a standard definition CRT display also completes such a scan, but in two passes (two fields). The first pass displays the first and all odd numbered lines, from the top left corner to the bottom right corner. The second pass displays the second and all even numbered lines, filling in the gaps in the first scan.
This scan of alternate lines is called interlacing . A field is an image that contains only half of the lines needed to make a complete picture. In the days of CRT displays, the afterglow of the display's phosphor aided this effect.
Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate . To prevent flicker, all analog broadcast television systems used interlacing.
Format identifiers like 576i50 and 720p50 specify the frame rate for progressive scan formats, but for interlaced formats they typically specify the field rate (which is twice the frame rate). This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate. To avoid confusion, SMPTE and EBU always use frame rate to specify interlaced formats, e.g., 480i60 is 480i/30, 576i50 is 576i/25, and 1080i50 is 1080i/25. This convention assumes that one complete frame in an interlaced signal consists of two fields in sequence.
One of the most important factors in analog television is signal bandwidth, measured in megahertz. The greater the bandwidth, the more expensive and complex the entire production and broadcasting chain. This includes cameras, storage systems, broadcast systems—and reception systems: terrestrial, cable, satellite, Internet, and end-user displays ( TVs and computer monitors ).
For a fixed bandwidth, interlace provides a video signal with twice the display refresh rate for a given line count (versus progressive scan video at a similar frame rate—for instance 1080i at 60 half-frames per second, vs. 1080p at 30 full frames per second). The higher refresh rate improves the appearance of an object in motion, because it updates its position on the display more often, and when an object is stationary, human vision combines information from multiple similar half-frames to produce the same perceived resolution as that provided by a progressive full frame. This technique is only useful, though, if source material is available at higher refresh rates. Cinema films are typically recorded at 24 fps and therefore do not benefit from interlacing. For live broadcast television, however, interlacing was a solution that reduced the maximum video bandwidth to 5 MHz without reducing the effective picture scan rate of 60 Hz.
Given a fixed bandwidth and high refresh rate, interlaced video can also provide a higher spatial resolution than progressive scan. For instance, 1920×1080 pixel resolution interlaced HDTV with a 60 Hz field rate (known as 1080i60 or 1080i/30) has a similar bandwidth to 1280×720 pixel progressive scan HDTV with a 60 Hz frame rate (720p60 or 720p/60), but achieves approximately twice the spatial resolution for low-motion scenes.
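To make the comparison concrete, a rough uncompressed sample-rate calculation (an illustrative sketch only; real broadcast bandwidth also depends on blanking intervals, chroma subsampling and compression):

    # Raw luma sample rates of the two HD formats compared above.
    def samples_per_second(width, lines_per_image, images_per_second):
        return width * lines_per_image * images_per_second

    rate_1080i = samples_per_second(1920, 540, 60)   # 60 fields/s, each field 540 lines
    rate_720p  = samples_per_second(1280, 720, 60)   # 60 full frames/s

    print(rate_1080i)   # 62208000  (about 62 million samples per second)
    print(rate_720p)    # 55296000  (about 55 million samples per second)
    # Similar raw rates, but 1080i offers roughly twice the per-frame pixel count,
    # which only translates into visible extra detail for low-motion scenes.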
However, bandwidth benefits only apply to an analog or uncompressed digital video signal. With digital video compression, as used in all current digital TV standards, interlacing introduces additional inefficiencies. [ 9 ] EBU has performed tests that show that the bandwidth savings of interlaced video over progressive video is minimal, even with twice the frame rate. I.e., 1080p50 signal produces roughly the same bit rate as 1080i50 (aka 1080i/25) signal, [ 5 ] and 1080p50 actually requires less bandwidth to be perceived as subjectively better than its 1080i/25 (1080i50) equivalent when encoding a "sports-type" scene. [ 10 ]
Interlacing can be exploited to produce 3D TV programming, especially with a CRT display and color-filtered glasses, by transmitting the color-keyed picture for each eye in alternating fields. This does not require significant alterations to existing equipment. Shutter glasses can be used as well, provided they are synchronised with the display. If a progressive scan display is used to view such programming, any attempt to deinterlace the picture will render the effect useless. For color-filtered glasses the picture has to be either buffered and shown as if it were progressive with alternating color-keyed lines, or each field has to be line-doubled and displayed as discrete frames. The latter procedure is the only way to support shutter glasses on a progressive display.
Interlaced video is designed to be captured, stored, transmitted, and displayed in the same interlaced format. Because each interlaced video frame is two fields captured at different moments in time, interlaced video frames can exhibit motion artifacts known as interlacing effects , or combing , if recorded objects move fast enough to be in different positions when each individual field is captured. These artifacts may be more visible when interlaced video is displayed at a slower speed than it was captured, or in still frames.
While there are simple methods to produce somewhat satisfactory progressive frames from the interlaced image, for example by doubling the lines of one field and omitting the other (halving vertical resolution), or anti-aliasing the image in the vertical axis to hide some of the combing, more sophisticated methods can sometimes produce far better results. If there is only sideways (X axis) motion between the two fields and this motion is uniform across the full frame, it is possible to align the scanlines and crop the left and right ends that exceed the frame area to produce a visually satisfactory image. Minor Y axis motion can be corrected similarly by aligning the scanlines in a different sequence and cropping the excess at the top and bottom. Often the middle of the picture is the most important area to get right, and whether only X or Y axis alignment correction is applied, or both, most artifacts will occur towards the edges of the picture. However, even these simple procedures require motion tracking between the fields, and a rotating or tilting object, or one that moves in the Z axis (away from or towards the camera), will still produce combing, possibly looking even worse than if the fields were joined by a simpler method.
Some deinterlacing processes can analyze each frame individually and decide the best method. The best and only perfect conversion in these cases is to treat each frame as a separate image, but that may not always be possible. For framerate conversions and zooming it would mostly be ideal to line-double each field to produce a double rate of progressive frames, resample the frames to the desired resolution and then re-scan the stream at the desired rate, either in progressive or interlaced mode.
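The simplest of the approaches mentioned above, line-doubling each field ("bob" deinterlacing), can be sketched in a few lines of illustrative numpy code (not a production-quality deinterlacer):

    # Split one interlaced frame into its two fields and line-double each field,
    # yielding two full-height progressive frames at twice the original frame rate.
    import numpy as np

    def bob_deinterlace(frame):
        """frame: 2-D array of shape (lines, width) holding one interlaced frame."""
        top    = frame[0::2, :]    # field containing lines 0, 2, 4, ...
        bottom = frame[1::2, :]    # field containing lines 1, 3, 5, ...
        # np.repeat doubles every line of a field, restoring the full frame height.
        return np.repeat(top, 2, axis=0), np.repeat(bottom, 2, axis=0)

    frame = np.arange(6 * 4).reshape(6, 4)        # a toy 6-line, 4-pixel-wide frame
    first, second = bob_deinterlace(frame)
    print(first.shape, second.shape)              # (6, 4) (6, 4)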
Interlace introduces a potential problem called interline twitter , a form of moiré . This aliasing effect only shows up under certain circumstances—when the subject contains vertical detail that approaches the horizontal resolution of the video format. For instance, a finely striped jacket on a news anchor may produce a shimmering effect. This is twittering . Television professionals avoid wearing clothing with fine striped patterns for this reason. Professional video cameras or computer-generated imagery systems apply a low-pass filter to the vertical resolution of the signal to prevent interline twitter.
Interline twitter is the primary reason that interlacing is less suited to computer displays. Each scanline on a high-resolution computer monitor typically displays discrete pixels, none of which spans the scanline above or below. When the interlaced signal delivers 60 fields per second, a pixel (or, more critically for e.g. windowing systems or underlined text, a horizontal line) that spans only one scanline in height is visible for the 1/60 of a second that would be expected of a 60 Hz progressive display, but is then followed by 1/60 of a second of darkness (whilst the opposite field is scanned), reducing the per-line/per-pixel refresh rate to 30 Hz with quite obvious flicker.
To avoid this, standard interlaced television sets typically do not display sharp detail. When computer graphics appear on a standard television set, the screen is either treated as if it were half the resolution of what it actually is (or even lower), or rendered at full resolution and then subjected to a low-pass filter in the vertical direction (e.g. a "motion blur" type with a 1-pixel distance, which blends each line 50% with the next, maintaining a degree of the full positional resolution and preventing the obvious "blockiness" of simple line doubling whilst actually reducing flicker to less than what the simpler approach would achieve). If text is displayed, it is large enough so that any horizontal lines are at least two scanlines high. Most fonts for television programming have wide, fat strokes, and do not include fine-detail serifs that would make the twittering more visible; in addition, modern character generators apply a degree of anti-aliasing that has a similar line-spanning effect to the aforementioned full-frame low-pass filter.
This animation demonstrates the interline twitter effect using the Indian Head test card . On the left are two progressive scan images. Center are two interlaced images. Right are two images with line doublers . Top are original resolution, bottom are with anti-aliasing. The two interlaced images use half the bandwidth of the progressive one. The interlaced scan (center) precisely duplicates the pixels of the progressive image (left), but interlace causes details to twitter. A line doubler operating in "bob" (interpolation) mode would produce the images at far right. Real interlaced video blurs such details to prevent twitter, as seen in the bottom row, but such softening (or anti-aliasing) comes at the cost of image clarity. But even the best line doubler could never restore the bottom center image to the full resolution of the progressive image.
ALiS plasma panels and the old CRTs can display interlaced video directly, but modern computer video displays and TV sets are mostly based on LCD technology, which mostly use progressive scanning.
Displaying interlaced video on a progressive scan display requires a process called deinterlacing . This can be an imperfect technique, especially if the frame rate is not doubled in the deinterlaced output. Providing the best picture quality for interlaced video signals without doubling the frame rate requires expensive and complex devices and algorithms, and can cause various artifacts. For television displays, deinterlacing systems are integrated into progressive scan TV sets that accept an interlaced signal, such as a broadcast SDTV signal.
Most modern computer monitors do not support interlaced video, besides some legacy medium-resolution modes (and possibly 1080i as an adjunct to 1080p), and support for standard-definition video (480/576i or 240/288p) is particularly rare given its much lower line-scanning frequency vs typical "VGA"-or-higher analog computer video modes. Playing back interlaced video from a DVD, digital file or analog capture card on a computer display instead requires some form of deinterlacing in the player software and/or graphics hardware, which often uses very simple methods to deinterlace. This means that interlaced video often has visible artifacts on computer systems. Computer systems may be used to edit interlaced video, but the disparity between computer video display systems and interlaced television signal formats means that the video content being edited cannot be viewed properly without separate video display hardware.
Currently manufactured TV sets employ a system of intelligently extrapolating the extra information that would be present in a progressive signal entirely from an interlaced original. In theory, this should simply be a matter of applying the appropriate algorithms to the interlaced signal, as all the information should be present in that signal. In practice, results are variable and depend on the quality of the input signal and the amount of processing power applied to the conversion. The biggest impediment, at present, is artifacts in lower quality interlaced signals (generally broadcast video), as these are not consistent from field to field. On the other hand, high bit rate interlaced signals, such as those from HD camcorders operating in their highest bit rate mode, work well.
Deinterlacing algorithms temporarily store a few frames of interlaced images and then extrapolate extra frame data to make a smooth flicker-free image. This frame storage and processing results in a slight display lag that is visible in business showrooms with a large number of different models on display. Unlike the old unprocessed NTSC signal, the screens do not all follow motion in perfect synchrony. Some models appear to update slightly faster or slower than others. Similarly, the audio can have an echo effect due to different processing delays.
When motion picture film was developed, the movie screen had to be illuminated at a high rate to prevent visible flicker . The exact rate necessary varies by brightness — 50 Hz is (barely) acceptable for small, low brightness displays in dimly lit rooms, whilst 80 Hz or more may be necessary for bright displays that extend into peripheral vision. The film solution was to project each frame of film three times using a three-bladed shutter: a movie shot at 16 frames per second illuminated the screen 48 times per second. Later, when sound film became available, the higher projection speed of 24 frames per second enabled a two-bladed shutter to produce 48 times per second illumination—but only in projectors incapable of projecting at the lower speed.
This solution could not be used for television. To store a full video frame and display it twice requires a frame buffer —electronic memory ( RAM )—sufficient to store a video frame. This method did not become feasible until the late 1980s, with the advent of digital technology. In addition, avoiding on-screen interference patterns caused by studio lighting and the limits of vacuum tube technology required that CRTs for TV be scanned at the AC line frequency (60 Hz in the US, 50 Hz in Europe).
Several different interlacing schemes have been patented since 1914 in the context of still or moving image transmission, but few of them were practicable. [ 11 ] [ 12 ] [ 13 ] In 1926, Ulises Armand Sanabria demonstrated television to 200,000 people attending the Chicago Radio World's Fair. Sanabria's system was mechanically scanned using a 'triple interlace' Nipkow disc with three offset spirals and was thus a 3:1 scheme rather than the usual 2:1. It transmitted 45-line images at 15 frames per second. With 15 frames per second and a 3:1 interlace, the field rate was 45 fields per second, yielding (for the time) a very steady image. He did not apply for a patent for his interlaced scanning until May 1931. [ 13 ]
In 1930, German Telefunken engineer Fritz Schröter first formulated and patented the concept of breaking a single image frame into successive interlaced lines, based on his earlier experiments with phototelegraphy. [ 11 ] [ 14 ] In the US, RCA engineer Randall C. Ballard patented the same idea in 1932, initially for the purpose of reformatting sound film to television rather than for the transmission of live images. [ 11 ] [ 15 ] [ 16 ] Commercial implementation began in 1934 as cathode-ray tube screens became brighter, increasing the level of flicker caused by progressive (sequential) scanning. [ 12 ]
In 1936, when the UK was setting analog standards, early thermionic valve based CRT drive electronics could only scan at around 200 lines in 1/50 of a second (i.e. approximately a 10 kHz repetition rate for the sawtooth horizontal deflection waveform). Using interlace, a pair of 202.5-line fields could be superimposed to become a sharper 405 line frame (with around 377 used for the actual image, and yet fewer visible within the screen bezel; in modern parlance, the standard would be "377i"). The vertical scan frequency remained 50 Hz, but visible detail was noticeably improved. As a result, this system supplanted John Logie Baird 's 240 line mechanical progressive scan system that was also being trialled at the time.
From the 1940s onward, improvements in technology allowed the US and the rest of Europe to adopt systems using increasingly higher line-scan frequencies and more radio signal bandwidth to produce higher line counts at the same frame rate, thus achieving better picture quality. However the fundamentals of interlaced scanning were at the heart of all of these systems. The US adopted the 525 line system, later incorporating the composite color standard known as NTSC , Europe adopted the 625 line system, and the UK switched from its idiosyncratic 405 line system to (the much more US-like) 625 to avoid having to develop a (wholly) unique method of color TV. France switched from its similarly unique 819 line monochrome system to the more European standard of 625. Europe in general, including the UK, then adopted the PAL color encoding standard, which was essentially based on NTSC, but inverted the color carrier phase with each line (and frame) in order to cancel out the hue-distorting phase shifts that dogged NTSC broadcasts. France instead adopted its own unique, twin-FM-carrier based SECAM system, which offered improved quality at the cost of greater electronic complexity, and was also used by some other countries, notably Russia and its satellite states. Though the color standards are often used as synonyms for the underlying video standard - NTSC for 525i/60, PAL/SECAM for 625i/50 - there are several cases of inversions or other modifications; e.g. PAL color is used on otherwise "NTSC" (that is, 525i/60) broadcasts in Brazil , as well as vice versa elsewhere, along with cases of PAL bandwidth being squeezed to 3.58 MHz to fit in the broadcast waveband allocation of NTSC, or NTSC being expanded to take up PAL's 4.43 MHz.
Interlacing was ubiquitous in displays until the 1970s, when the needs of computer monitors resulted in the reintroduction of progressive scan, including on regular TVs or simple monitors based on the same circuitry; most CRT based displays are entirely capable of displaying both progressive and interlace regardless of their original intended use, so long as the horizontal and vertical frequencies match, as the technical difference is simply that of either starting/ending the vertical sync cycle halfway along a scanline every other frame (interlace), or always synchronising right at the start/end of a line (progressive). Interlace is still used for most standard definition TVs, and the 1080i HDTV broadcast standard, but not for LCD , micromirror ( DLP ), or most plasma displays ; these displays do not use a raster scan to create an image (their panels may still be updated in a left-to-right, top-to-bottom scanning fashion, but always in a progressive fashion, and not necessarily at the same rate as the input signal), and so cannot benefit from interlacing (where older LCDs use a "dual scan" system to provide higher resolution with slower-updating technology, the panel is instead divided into two adjacent halves that are updated simultaneously ): in practice, they have to be driven with a progressive scan signal. The deinterlacing circuitry to get progressive scan from a normal interlaced broadcast television signal can add to the cost of a television set using such displays. Currently, progressive displays dominate the HDTV market.
In the 1970s, computers and home video game systems began using TV sets as display devices. At that point, a 480-line NTSC signal was well beyond the graphics abilities of low cost computers, so these systems used a simplified video signal that made each video field scan directly on top of the previous one, rather than each line between two lines of the previous field, along with relatively low horizontal pixel counts. This marked the return of progressive scanning not seen since the 1920s. Since each field became a complete frame on its own, modern terminology would call this 240p on NTSC sets, and 288p on PAL . While consumer devices were permitted to create such signals, broadcast regulations prohibited TV stations from transmitting video like this. Computer monitor standards such as the TTL-RGB mode available on the CGA and e.g. BBC Micro were further simplifications to NTSC, which improved picture quality by omitting modulation of color, and allowing a more direct connection between the computer's graphics system and the CRT.
By the mid-1980s, computers had outgrown these video systems and needed better displays. Most home and basic office computers suffered from the use of the old scanning method, with the highest display resolution being around 640x200 (or sometimes 640x256 in 625-line/50 Hz regions), resulting in a severely distorted tall narrow pixel shape, making the display of high resolution text alongside realistic proportioned images difficult (logical "square pixel" modes were possible but only at low resolutions of 320x200 or less). Solutions from various companies varied widely. Because PC monitor signals did not need to be broadcast, they could consume far more than the 6, 7 and 8 MHz of bandwidth that NTSC and PAL signals were confined to. IBM's Monochrome Display Adapter and Enhanced Graphics Adapter as well as the Hercules Graphics Card and the original Macintosh computer generated video signals of 342 to 350p, at 50 to 60 Hz, with approximately 16 MHz of bandwidth, some enhanced PC clones such as the AT&T 6300 (aka Olivetti M24 ) as well as computers made for the Japanese home market managed 400p instead at around 24 MHz, and the Atari ST pushed that to 71 Hz with 32 MHz bandwidth - all of which required dedicated high-frequency (and usually single-mode, i.e. not "video"-compatible) monitors due to their increased line rates. The Commodore Amiga instead created a true interlaced 480i60/576i50 RGB signal at broadcast video rates (and with a 7 or 14 MHz bandwidth), suitable for NTSC/PAL encoding (where it was smoothly decimated to 3.5~4.5 MHz). This ability (plus built-in genlocking ) resulted in the Amiga dominating the video production field until the mid-1990s, but the interlaced display mode caused flicker problems for more traditional PC applications where single-pixel detail is required, with "flicker-fixer" scan-doubler peripherals plus high-frequency RGB monitors (or Commodore's own specialist scan-conversion A2024 monitor) being popular, if expensive, purchases amongst power users. 1987 saw the introduction of VGA , on which PCs soon standardized, as well as Apple's Macintosh II range which offered displays of similar, then superior resolution and color depth, with rivalry between the two standards (and later PC quasi-standards such as XGA and SVGA) rapidly pushing up the quality of display available to both professional and home users.
In the late 1980s and early 1990s, monitor and graphics card manufacturers introduced newer high resolution standards that once again included interlace. These monitors ran at higher scanning frequencies, typically allowing a 75 to 90 Hz field rate (i.e. 37.5 to 45 Hz frame rate), and tended to use longer-persistence phosphors in their CRTs, all of which was intended to alleviate flicker and shimmer problems. Such monitors proved generally unpopular, outside of specialist ultra-high-resolution applications such as CAD and DTP which demanded as many pixels as possible, with interlace being a necessary evil and better than trying to use the progressive-scan equivalents. Whilst flicker was often not immediately obvious on these displays, eyestrain and lack of focus nevertheless became a serious problem, and the trade-off for a longer afterglow was reduced brightness and poor response to moving images, leaving visible and often off-colored trails behind. These colored trails were a minor annoyance for monochrome displays, and the generally slower-updating screens used for design or database-query purposes, but much more troublesome for color displays and the faster motions inherent in the increasingly popular window-based operating systems, as well as the full-screen scrolling in WYSIWYG word-processors, spreadsheets, and of course for high-action games. Additionally, the regular, thin horizontal lines common to early GUIs, combined with low color depth that meant window elements were generally high-contrast (indeed, frequently stark black-and-white), made shimmer even more obvious than with otherwise lower fieldrate video applications. As rapid technological advancement made it practical and affordable, barely a decade after the first ultra-high-resolution interlaced upgrades appeared for the IBM PC, to provide sufficiently high pixel clocks and horizontal scan rates for hi-rez progressive-scan modes in first professional and then consumer-grade displays, the practice was soon abandoned. For the rest of the 1990s, monitors and graphics cards instead made great play of their highest stated resolutions being "non-interlaced", even where the overall framerate was barely any higher than what it had been for the interlaced modes (e.g. SVGA at 56p versus 43i to 47i), and usually including a top mode technically exceeding the CRT's actual resolution (number of color-phosphor triads) which meant there was no additional image clarity to be gained through interlacing and/or increasing the signal bandwidth still further. This experience is why the PC industry today remains against interlace in HDTV, and lobbied for the 720p standard, and continues to push for the adoption of 1080p (at 60 Hz for NTSC legacy countries, and 50 Hz for PAL); however, 1080i remains the most common HD broadcast resolution, if only for reasons of backward compatibility with older HDTV hardware that cannot support 1080p - and sometimes not even 720p - without the addition of an external scaler, similar to how and why most SD-focussed digital broadcasting still relies on the otherwise obsolete MPEG2 standard embedded into e.g. DVB-T . | https://en.wikipedia.org/wiki/Interlaced_video |
The interleukin-1 receptor (IL-1R) associated kinase ( IRAK ) family [ 1 ] plays a crucial role in the protective response to pathogens introduced into the human body by inducing acute inflammation followed by additional adaptive immune responses. IRAKs are essential components of the Interleukin-1 receptor signaling pathway and of some Toll-like receptor signaling pathways. Toll-like receptors (TLRs) detect microorganisms by recognizing specific pathogen-associated molecular patterns (PAMPs), and IL-1R family members respond to the interleukin-1 (IL-1) family of cytokines. These receptors initiate an intracellular signaling cascade through adaptor proteins, primarily MyD88 . [ 2 ] [ 3 ] This is followed by the activation of IRAKs. TLRs and IL-1R members have a highly conserved amino acid sequence in their cytoplasmic domain called the Toll/Interleukin-1 (TIR) domain. [ 4 ] Stimulation of different TLRs/IL-1Rs results in similar signaling cascades because of their homologous TIR motifs, leading to the activation of mitogen-activated protein kinases (MAPKs) and the IκB kinase (IKK) complex, which initiates a nuclear factor-κB (NF-κB)- and AP-1-dependent transcriptional response of pro-inflammatory genes. [ 5 ] [ 4 ] Understanding the key players and their roles in the TLR/IL-1R pathway is important because mutations causing abnormal regulation of Toll/IL-1R signaling lead to a variety of acute inflammatory and autoimmune diseases. [ 6 ]
IRAKs are membrane proximal putative serine-threonine kinases. Four IRAK family members have been described in humans: IRAK1 , IRAK2 , IRAKM , and IRAK4 . Two are active kinases, IRAK-1 and IRAK-4, and two are inactive, IRAK-2 and IRAK-M, but all regulate the nuclear factor-κB (NF-κB) and mitogen-activated protein kinase (MAPK) pathways. [ 5 ]
Each IRAK family member has distinctive features, which are described in the sections below.
IRAKs were first identified in 1994 by Michael Martin and colleagues when they successfully co-precipitated a protein kinase with type I interleukin-1 receptors (IL-1RI) from human T cells. They speculated that this kinase was the link between the T cell's transmembrane IL-1 receptor and the cytosolic signalling pathway's downstream components. [ 11 ]
The name “IRAK” came from Zhaodan Cao and colleagues in 1995. DNA sequence analysis of IRAK's domains revealed many conserved amino acids shared with the serine/threonine-specific protein kinase Pelle in Drosophila , which functions downstream of a Toll receptor. Cao's lab confirmed that the kinase's activity is necessarily associated with the IL-1 receptor by immunoprecipitating IL-1 receptors from different cell types treated with or without IL-1. Even cells without over-expressed IL-1 receptors showed kinase activity when exposed to IL-1, and a protein kinase could be co-precipitated with their endogenous IL-1 receptors. Thus the human IL-1 receptor's accessory protein was named Interleukin-1 Receptor-Associated Kinase. [ 12 ]
In 1997, MyD88 was identified as the cytosolic protein that recruits IRAKs to the cytosolic domains of IL-1 receptors, mediating IL-1's signal transduction to the downstream cytosolic signaling cascade. [ 13 ] Subsequent studies associated IRAKs with multiple interleukin-triggered signalling pathways and identified multiple IRAK types. [ 14 ] [ 5 ]
All IRAK family members are multidomain proteins consisting of a conserved N-terminal Death Domain (DD) and a central kinase domain (KD). The DD is a protein interaction motif that is important for interacting with other signaling molecules such as the adaptor protein MyD88 and other IRAK members. The KD is responsible for the kinase activity of IRAK proteins and consists of 12 subdomains. All IRAK KDs have an ATP-binding pocket with an invariable lysine residue in subdomain II; however, only IRAK-1 and IRAK-4 have an aspartate residue in the catalytic site of subdomain VI, which is thought to be critical for kinase activity. It is thought that IRAK-2 and IRAK-M are catalytically inactive because they lack this aspartate residue in the KD. [ 5 ]
IRAK-1 contains a region that is rich in serine, proline, and threonine (proST). It is thought that IRAK-1 undergoes hyperphosphorylation in this region. The proST region also contains two proline (P), glutamic acid (E), serine (S) and threonine (T)-rich (PEST) sequences that are thought to promote the degradation of IRAK-1. [ 5 ] [ 15 ]
Interleukin-1 receptors (IL-1Rs) are cytokine receptors that transduce an intracellular signaling cascade in response to the binding of the inflammatory cytokine interleukin-1 (IL-1). This signaling cascade results in the initiation of transcription of certain genes involved in inflammation. Because IL-1Rs do not possess intrinsic kinase activity, they rely on the recruitment of adaptor molecules, such as IRAKs, to transduce their signals.
IL-1 binding to the IL-1R complex triggers the recruitment of the adaptor molecule MyD88 through interactions with the TIR domain. MyD88 brings IRAK-4 to the receptor complex. Preformed complexes of the adaptor molecule Tollip and IRAK-1 are also recruited to the receptor complex, allowing IRAK-1 to bind MyD88. IRAK-1 binding to MyD88 brings it into close proximity with IRAK-4 so that IRAK-4 can phosphorylate and activate IRAK-1. Once phosphorylated, IRAK-1 recruits the adaptor protein TNF receptor associated factor 6 (TRAF6), and the IRAK-1-TRAF6 complex dissociates from the IL-1R complex. The IRAK-1-TRAF6 complex interacts with a pre-existing complex at the plasma membrane consisting of TGF-β activated kinase 1 (TAK1) and two TAK binding proteins, TAB1 and TAB2. TAK1 is a mitogen-activated protein kinase kinase kinase (MAPKKK). This interaction leads to the phosphorylation of TAB2 and TAK1, which then translocate to the cytosol with TRAF6 and TAB1. IRAK-1 remains at the membrane and is targeted for degradation by ubiquitination. Once the TAK1-TRAF6-TAB1-TAB2 complex is in the cytosol, ubiquitination of TRAF6 triggers the activation of TAK1 kinase activity. TAK1 can then activate two transcription pathways, the nuclear factor-κB (NF-κB) pathway and the mitogen-activated protein kinase (MAPK) pathway. To activate the NF-κB pathway, TAK1 phosphorylates the IκB kinase (IKK) complex, which subsequently phosphorylates the NF-κB inhibitor, IκB, targeting it for degradation by the proteasome. Once IκB is removed, the NF-κB proteins p65 and p50 are free to translocate into the nucleus and activate transcription of proinflammatory genes. To activate the MAPK pathway, TAK1 phosphorylates MAPK kinases (MKK) 3/4/6, which then phosphorylate members of the MAPK family, c-Jun N-terminal kinase (JNK) and p38. Phosphorylated JNK/p38 can then translocate into the nucleus and phosphorylate and activate transcription factors such as c-Fos and c-Jun. [ 5 ]
Toll-like receptors (TLRs) are important innate immune receptors that recognize pathogen-associated molecular patterns (PAMPs) and initiate the appropriate immune response to eliminate a particular pathogen. PAMPs are conserved motifs associated with microorganisms that are not found in host cells, such as bacterial lipopolysaccharide (LPS) and viral double-stranded RNA. TLRs are similar to IL-1Rs in that they do not possess intrinsic kinase activity and require adaptor molecules to relay their signals. Stimulation of TLRs can also result in NF-κB- and MAPK-mediated transcription, similar to the IL-1R signaling pathway. [ 15 ] [ 16 ]
It has been shown that IRAK-1 is essential for TLR7 and TLR9 interferon (IFN) induction. TLR7 and TLR9 in plasmacytoid dendritic cells (pDCs) recognize viral nucleic acids and trigger the production of interferon-α (IFN-α), an important cytokine for inducing an antiviral state in host cells. TLR7- and TLR9-mediated IFN-α induction requires the formation of a complex consisting of MyD88, TRAF6 and the interferon regulatory factor 7 (IRF7). IRF7 is a transcription factor that translocates into the nucleus when activated and initiates transcription of IFN-α. IRAK-1 was shown to directly phosphorylate IRF7 in vitro , and the kinase activity of IRAK-1 was shown to be essential for IRF7 transcriptional activation. [ 16 ] It was subsequently shown that IRAK-1 is required for the activation of interferon regulatory factor 5 (IRF5). IRF5 is another transcription factor that induces IFN production following stimulation of TLR7, TLR8 and TLR9 by specific viruses. In order to be activated, IRF5 must be polyubiquitinated by TRAF6. It has been shown that TRAF6-mediated ubiquitination of IRF5 is dependent on the kinase activity of IRAK-1. [ 17 ] [ 18 ]
IRAK-1 has also been shown to play a critical role in TLR4 interleukin-10 (IL-10) induction. TLR4 recognizes bacterial LPS and triggers the transcription of IL-10, a cytokine regulating the inflammatory response. IL-10 transcription is activated by signal transducer and activator of transcription 3 (STAT3). IRAK-1 forms a complex with STAT3 and the IL-10 promoter element in the nucleus and is required for STAT3 phosphorylation and activation of IL-10 transcription. [ 19 ]
IRAK-2 plays an important role in TLR-mediated NF-κB activation. Knocking down IRAK-2 has been shown to impair NF-κB activation by TLR3, TLR4 and TLR8. The mechanism by which IRAK-2 functions is still unknown; however, IRAK-2 has been shown to interact with a TIR adaptor protein that does not bind to IRAK-1, called Mal/TIRAP. Mal/TIRAP has been specifically implicated in TLR2- and TLR4-mediated NF-κB signaling. In addition, it has been shown that IRAK-2 is recruited to the TLR3 receptor. IRAK-2 is the only IRAK family member that is known to play a role in TLR3 signaling. [ 20 ] [ 15 ]
One of the most distinct features of IRAK-M is that it is a negative regulator of TLR signaling that prevents excessive inflammation. It is thought that IRAK-M enhances the binding of MyD88 to IRAK-1 and IRAK-4, preventing IRAK-1 from dissociating from the receptor complex and inducing downstream NF-κB and MAPK signaling. It has also been shown that IRAK-M negatively regulates the alternative NF-κB pathway in TLR2 signaling. The alternative NF-κB pathway is predominantly triggered by CD40, the lymphotoxin β receptor (LT), and the receptor for B-cell activating factor of the TNF family (BAFF receptor). The alternative NF-κB pathway involves the activation of NF-κB-inducing kinase (NIK) and subsequent phosphorylation of the transcription factors p100/RelB in an IKKα-dependent mechanism. It was observed that IRAK-M knockout resulted in increased induction of the alternative NF-κB pathway but not the classical pathway. The mechanism by which IRAK-M inhibits NF-κB signaling is still unknown. [ 15 ] [ 20 ]
IRAK-4 is an essential component of MyD88 mediated signaling pathways and is therefore critical for both IL-1R and TLR signaling. MyD88 acts as a scaffold protein for the interaction between IRAK-1 and IRAK-4, allowing IRAK-4 to phosphorylate IRAK-1, leading to autophosphorylation and activation of IRAK-1 [1,2]. IRAK-4 is critical for IL-1R and TLR NF-κB and MAPK signaling pathways as well as TLR7/9 MyD88-mediated interferon activation. [ 21 ]
Interleukin 1 is a cytokine that acts locally and systemically in the innate immune system. IL-1α and IL-1β are known for causing inflammation, but can also induce other proinflammatory cytokines and fever. Because IRAKs are a crucial step in the IL-1 receptor signalling pathway, deficiencies or over-expression of IRAKs can cause suboptimal or overactive cellular responses to IL-1α and IL-1β. Thus, interleukin-1 receptor-associated kinases are promising therapeutic targets for autoimmune-, immunodeficiency-, and cancer-related disorders. [ 22 ] [ 23 ]
Inflammation signalling is known to be a major factor in many cancer types, and an inflammatory microenvironment is a key aspect of human tumours. IL-1β, which activates the inflammatory signalling pathway containing IRAKs, is directly involved in tumour cell growth, angiogenesis, invasion, and metastasis. In tumour cells containing the L265P MyD88 mutant, protein-signalling complexes spontaneously assemble, activating IRAK-4's kinase activity and promoting inflammation and growth independently of interleukin-1 signalling. IRAK-4-inhibiting drugs are thus a potential therapeutic treatment for lymphoid malignancies with the L265P MyD88 mutation, especially Waldenström's macroglobulinaemia, in which BTK and IRAK1/4 inhibitors have shown promising but unconfirmed results. [ 24 ]
In 2013, Garrett Rhyasen and his colleagues at the University of Cincinnati studied the contribution of active IRAK-1 and IRAK-4 in human myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML). They found that IRAK1 knockout therapy induced apoptosis and impaired leukemic progenitor activity. They also established that IRAK4, while imperative to the proliferation of human hematologic malignancies, is not imperative to the pathogenesis of MDS/AML. [ 25 ] Further testing of IRAK-inhibitory therapy could prove important for the development of cancer therapies. [ 24 ] [ 25 ]
Autoimmune disorders such as multiple sclerosis, rheumatoid arthritis, lupus and psoriasis are caused by deregulation of the innate immune system that induces chronic inflammation. [ 26 ] In most cases, IRAK-1 and IRAK-4 are suspected to be the most effective targets for inhibitory drugs, as their functions are integral to the cytokine pathways that induce chronic inflammation. [ 27 ]
Mutations in the gene for IRAK-M have been identified as contributors to early onset asthma. Compromised IRAK-M leads to overproduction of inflammatory cytokines in the lungs, eventually triggering T cell mediated allergic reactions and exacerbation of asthma symptoms. Researchers have proposed that increasing IRAK-M function in these individuals may moderate asthma symptoms. [ 28 ] | https://en.wikipedia.org/wiki/Interleukin-1_receptor_associated_kinase |
Interleukin-38 (IL-38) is a member of the interleukin-1 ( IL-1 ) family and the interleukin-36 ( IL-36 ) subfamily. It is important for inflammation and host defense . This cytokine is named IL-1F10 in humans and has a three-dimensional structure similar to that of the IL-1 receptor antagonist ( IL-1Ra ). The organisation of the IL-1F10 gene is conserved with other members of the IL-1 family within chromosome 2q13. IL-38 produced by mammalian cells may bind the IL-1 receptor type I . It is expressed in the basal epithelia of skin, in proliferating B cells of the tonsil, in the spleen and in other tissues. This cytokine plays an important role in the regulation of innate and adaptive immunity. [ 1 ]
IL-38 probably originated from a common ancestral gene, an ancient IL-1RN gene. [ 2 ] This cytokine has 41% homology with IL-1Ra and 43% homology with IL-36Ra . IL-38 is expressed in skin , spleen, tonsil , thymus , heart , placenta and fetal liver . [ 3 ] In tissues which do not play a special role in the immune response , IL-38 is expressed at low levels, similar to other members of the IL-1 family. [ 4 ] In disease settings, especially when activation of the inflammatory response is dysregulated, the expression of IL-38 is altered, for example in ankylosing spondylitis ( spondylitis ankylopoetica ), [ 5 ] cardiovascular disease , [ 6 ] rheumatoid arthritis [ 7 ] and hidradenitis suppurativa . [ 8 ]
Based on the consensus cleavage site of the IL-1 family, it is predicted that two amino acids (AA) are removed to generate a processed 3–152 AA IL-38 protein. The protease that cleaves IL-38 is still unknown, as is the form of IL-38 that is the natural variant present in the human body . It has been reported that the 20–152 AA form of IL-38 has increased biological activity . [ 9 ]
IL-38 has a non-characteristic dose–response curve and binds to IL-36R (IL-1R6). This cytokine blocks the Candida -induced interleukin-17 (IL-17) response better at low concentrations than at higher concentrations, even though induction of the cytokine is not blocked. [ 10 ] It is therefore possible that IL-38 released by apoptotic cells binds to the Three Immunoglobulin Domain-containing IL-1 receptor-related 2 (TIGIRR-2, gene name IL1RAPL1, also known as IL-1R9), in which case IL-38 would have an antagonistic effect on the induction of inflammatory cytokines . IL-38 may thus be the first ligand of TIGIRR-2, a former orphan receptor of the IL-1 family. [ 9 ]
Studies have shown that IL-38 could play an important role in rheumatic diseases. [ 11 ] [ 12 ] [ 13 ] IL-38 is also one of five proteins associated with C-reactive protein (CRP) levels in the serum . [ 14 ] The association of IL-38 with CRP could mean that IL-38 also plays a role in inflammatory diseases such as cardiovascular disease.
Knockdown of IL-38 with siRNA in peripheral blood mononuclear cells increased the production of interleukin-6 (IL-6) , APRIL and CCL-2 in response to TLR ligands, so IL-38 acted as an antagonist in this case. [ 15 ] There are also studies which show an agonistic effect. [ 9 ] [ 10 ] [ 16 ] One study compared the function of full-length and truncated IL-38 and showed that high concentrations of truncated IL-38 decreased production of IL-6 in response to interleukin-1β (IL-1β) in human macrophages , while the full-length form increased IL-6 at the same concentrations. IL-38 could therefore have both agonistic and antagonistic effects, depending on its processing and concentration. [ 9 ]
When a spontaneous murine model of systemic lupus erythematosus (SLE) was treated with recombinant IL-38, the mice had fewer symptoms such as proteinuria and skin lesions. [ 17 ] Serum levels of IL-17 and interleukin-22 were also lower in these mice, supporting the in vitro observation that IL-38 can inhibit Th17 responses. Patients with SLE had higher serum concentrations of IL-38 than healthy controls, and patients with active disease had higher serum concentrations of IL-38 than patients with the inactive form. [ 15 ]
Sjögren's syndrome is a disease related to SLE. Gland biopsies of patients with primary Sjögren's syndrome show that the expression of IL-38 is increased there. The IL-36 axis is important for the modulation of this disease. IL-38 is probably an antagonist of IL-36 signaling, similar to IL-36Ra, which may play an important role in the pathogenesis of this autoimmune disease. [ 18 ]
IL-38 has also been found in the synovium of patients with rheumatoid arthritis, as well as in mice with collagen-induced arthritis (CIA). IL-38 concentrations correlated with IL-1β. Overexpression of IL-38 ameliorated disease in murine models of collagen-induced arthritis and serum transfer-induced arthritis, but not in antigen-induced arthritis. TNF production and IL-17 responses were decreased in these models. These data show that IL-38 could have anti-inflammatory properties in rheumatoid arthritis and could possibly be used in a therapeutic strategy. [ 19 ] | https://en.wikipedia.org/wiki/Interleukin_38 |
Gene infobox identifiers (human and mouse entries, respectively):
PDB structures: 1IRL, 1M47, 1M48, 1M49, 1M4A, 1M4B, 1M4C, 1NBP, 1PW6, 1PY2, 1QVN, 1Z92, 2B5I, 2ERJ, 3QAZ, 3QB1, 3INK, 4NEJ, 4NEM
Entrez Gene: 3558, 16183
Ensembl: ENSG00000109471, ENSMUSG00000027720
UniProt: P60568, P04351
RefSeq (mRNA): NM_000586, NM_008366
RefSeq (protein): NP_000577, NP_032392
Interleukin-2 ( IL-2 ) is an interleukin , a type of cytokine signaling molecule that forms part of the immune system . It is a 15.5–16 kDa protein [ 5 ] that regulates the activities of white blood cells (leukocytes, often lymphocytes ) that are responsible for immunity. IL-2 is part of the body's natural response to microbial infection and to discriminating between foreign ("non-self") and "self". IL-2 mediates its effects by binding to IL-2 receptors , which are expressed by lymphocytes. The major sources of IL-2 are activated CD4 + T cells and activated CD8 + T cells . [ 6 ] Put briefly, the function of IL-2 is to stimulate the growth of helper, cytotoxic and regulatory T cells.
IL-2 is a member of a specific family of cytokines, each member of which has a four alpha helix bundle ; this cytokine family also includes IL-4 , IL-7 , IL-9 , IL-15 and IL-21 . IL-2 signals through the IL-2 receptor , a complex consisting of three chains, termed alpha ( CD25 ), beta ( CD122 ) and gamma ( CD132 ). The gamma chain is common to all family members. [ 6 ]
The IL-2 receptor (IL-2R) α subunit binds IL-2 with low affinity (K d ~ 10 −8 M). Interaction of IL-2 and CD25 alone does not lead to signal transduction, owing to the short intracellular chain of CD25, but when CD25 is combined with the β and γ subunits it increases the affinity of the IL-2R 100-fold. [ 7 ] [ 5 ] Heterodimerization of the β and γ subunits of IL-2R is essential for signalling in T cells . [ 8 ] IL-2 can signal either via the intermediate-affinity dimeric CD122/CD132 IL-2R (K d ~ 10 −9 M) or via the high-affinity trimeric CD25/CD122/CD132 IL-2R (K d ~ 10 −11 M). [ 7 ] Dimeric IL-2R is expressed by memory CD8 + T cells and NK cells , whereas regulatory T cells and activated T cells express high levels of trimeric IL-2R. [ 5 ]
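The quoted affinities can be compared directly, since a lower dissociation constant K d corresponds to tighter binding. As a purely illustrative calculation based on the values given above (not a figure taken from the cited sources):

\[ \frac{K_d^{\text{dimeric}}}{K_d^{\text{trimeric}}} \approx \frac{10^{-9}\ \text{M}}{10^{-11}\ \text{M}} = 100, \qquad \frac{K_d^{\alpha\ \text{alone}}}{K_d^{\text{trimeric}}} \approx \frac{10^{-8}\ \text{M}}{10^{-11}\ \text{M}} = 1000 \]

so the trimeric CD25/CD122/CD132 receptor binds IL-2 roughly 100-fold more tightly than the dimeric CD122/CD132 form, and roughly 1000-fold more tightly than the α subunit alone.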
Instructions to express proteins in response to an IL-2 signal (IL-2 signal transduction) can be relayed via three different signaling pathways : (1) the JAK-STAT pathway, (2) the PI3K/Akt/mTOR pathway and (3) the MAPK/ERK pathway. [ 5 ] Signalling commences when IL-2 binds to its receptor, following which the cytoplasmic domains of CD122 and CD132 heterodimerize . This leads to the activation of the Janus kinases JAK1 and JAK3 , which subsequently phosphorylate T338 on CD122. This phosphorylation recruits STAT transcription factors , predominantly STAT5 , which dimerize and migrate to the cell nucleus , where they bind to DNA and promote the expression of their target genes. [ 9 ] The proteins expressed by means of the three pathways include bcl-6 (the PI3K/Akt/mTOR pathway), CD25 and prdm-1 (the JAK-STAT pathway) and certain cyclins (the MAPK/ERK pathway).
Regulation of IL-2 gene expression occurs at multiple levels and by different mechanisms. One of the checkpoints (in other words, one of the conditions that must be met before IL-2 is expressed) is signaling through the conjunction of a T cell receptor (TCR) with an HLA-peptide complex. That conjunction sets up a phospholipase C (PLC)-dependent signalling pathway, which signals the cell's protein-making machinery to express IL-2. PLC activates three major transcription factors and their pathways: NFAT , NF-κB and AP-1 . Optimal activation of these pathways, and of IL-2 expression, additionally requires costimulation through CD28. In summary, before a cell will make IL-2 via this pathway there have to be two interactions: engagement of the TCR by the HLA-peptide complex on the one hand, and CD28 costimulation on the other; mere ligation of IL-2 to its receptor is of too low affinity to enable the pathway.
At the same time, Oct-1 is expressed and helps the activation. Oct-1 is expressed in T lymphocytes, and Oct-2 is induced after cell activation.
NFAT has multiple family members, all of which are located in the cytoplasm. Signaling goes through calcineurin: NFAT is dephosphorylated and therefore translocates to the nucleus.
AP-1 is a dimer composed of the c-Jun and c-Fos proteins. It cooperates with other transcription factors, including NF-κB and Oct.
NF-κB is translocated to the nucleus after costimulation through CD28. NF-κB is a heterodimer, and there are two NF-κB binding sites on the IL-2 promoter.
IL-2 has essential roles in key functions of the immune system, tolerance and immunity , primarily via its direct effects on T cells . In the thymus , where T cells mature, it prevents autoimmune diseases by promoting the differentiation of certain immature T cells into regulatory T cells , which suppress other T cells that are otherwise primed to attack normal healthy cells in the body. IL-2 enhances activation-induced cell death (AICD) . [ 5 ] IL-2 also promotes the differentiation of T cells into effector T cells and into memory T cells when the initial T cell is also stimulated by an antigen , thus helping the body fight off infections. [ 6 ] Together with other polarizing cytokines, IL-2 stimulates naive CD4 + T cell differentiation into T h 1 and T h 2 lymphocytes while it impedes differentiation into T h 17 and follicular T h lymphocytes. [ 10 ] [ 11 ]
IL-2 increases the cell killing activity of both natural killer cells and cytotoxic T cells . [ 11 ]
Its expression and secretion are tightly regulated and function as part of both transient positive and negative feedback loops in mounting and dampening immune responses. Through its role in the development of T cell immunologic memory, which depends upon the expansion of the number and function of antigen-selected T cell clones, it plays a key role in enduring cell-mediated immunity . [ 6 ] [ 12 ]
IL-2 has been discovered in all classes of jawed vertebrates, including sharks, at a similar genomic location. [ 13 ] [ 14 ] In fish, IL-2 shares a single receptor alpha chain with its related cytokines IL-15 and IL-15-like (IL-15L). [ 15 ] This "IL-15Rα" receptor chain is similar to mammalian IL-15Rα, [ 16 ] and in tetrapod evolution a duplication of its coding gene plus further diversification created mammalian IL-2Rα. [ 17 ] [ 18 ] Sequence and structural analyses of grass carp IL-2 suggest that fish IL-2 binds IL-15Rα in a manner reminiscent of how mammalian IL-15 binds to IL-15Rα. [ 18 ] [ 19 ]
Despite fish IL-2 and IL-15 sharing the same IL-15Rα chain, the stability of fish IL-2 is independent of it, whereas IL-15 and especially IL-15L depend on binding to (co-presentation with) IL-15Rα for their stability and function. [ 15 ] This suggests that, as in mammals, fish IL-2, in contrast to fish IL-15 and IL-15L, does not rely on "in trans" presentation by its receptor alpha chain. As a free cytokine, mammalian IL-2 that is secreted by activated T cells is important for a negative feedback loop through the stimulation of regulatory T cells, the latter being the cells with the highest constitutive IL-2Rα (aka CD25) expression. [ 20 ] [ 21 ] Besides this negative feedback loop, mammalian IL-2 also participates in a positive feedback loop because activated T cells enhance their own IL-2Rα expression. [ 20 ] [ 21 ] As in mammals, fish IL-2 also stimulates T cell proliferation [ 22 ] and appears to preferentially stimulate regulatory T cells. [ 23 ] Fish IL-2 induces the expression of cytokines of both type 1 (Th1) and type 2 (Th2) immunity. [ 15 ] [ 24 ]
As has been found in some studies on mammalian IL-2, [ 25 ] data suggest that fish IL-2 can form homodimers and that this is an ancient property of the IL-2/15/15L-family cytokines. [ 15 ]
Homologues of IL-2 have not been reported for jawless fish (hagfish and lamprey) or invertebrates.
While the causes of itchiness are poorly understood, some evidence indicates that IL-2 is involved in itchy psoriasis . [ 26 ]
Aldesleukin is a form of recombinant interleukin-2. It is manufactured using recombinant DNA technology and is marketed as a protein therapeutic and branded as Proleukin. It has been approved by the Food and Drug Administration (FDA) with a black box warning and in several European countries for the treatment of cancers ( malignant melanoma , renal cell cancer ) in large intermittent doses and has been extensively used in continuous doses. [ 27 ] [ 28 ] [ 29 ]
Interking is a recombinant IL-2 with a serine at residue 125, sold by Shenzhen Neptunus. [ 30 ]
Neoleukin 2/15 is a computationally designed mimic of IL-2 that was designed to avoid common side effects. [ 31 ] However, clinical trials into this candidate were discontinued. [ 32 ]
Various dosages of IL-2 are used across the United States and around the world. The efficacy and side effects of the different dosages are often a point of disagreement.
The commercial interest in local IL-2 therapy has been very low. Because only a very low dose of IL-2 is used, treating a patient would require only about $500 worth (commercial value) of the patented IL-2. The commercial return on investment is too low to stimulate additional clinical studies for the registration of intratumoral IL-2 therapy.
Usually, in the U.S., the higher dosage option is used, affected by cancer type, response to treatment and general patient health. Patients are typically treated for five consecutive days, three times a day, for fifteen minutes. The following approximately 10 days help the patient to recover between treatments. IL-2 is delivered intravenously on an inpatient basis to enable proper monitoring of side effects. [ 33 ]
A lower dose regimen involves injection of IL-2 under the skin typically on an outpatient basis. It may alternatively be given on an inpatient basis over 1–3 days, similar to and often including the delivery of chemotherapy . [ 33 ]
Intralesional IL-2 is commonly used to treat in-transit melanoma metastases and has a high complete response rate. [ 34 ]
In preclinical and early clinical studies, local application of IL-2 in the tumor has been shown to be clinically more effective in anticancer therapy than systemic IL-2 therapy, over a broad range of doses, without serious side effects. [ 35 ]
Tumour blood vessels are more vulnerable than normal blood vessels to the actions of IL-2. When IL-2 is injected inside a tumor (i.e. local application), a process mechanistically similar to vascular leak syndrome occurs in the tumor tissue only. Disruption of the blood flow inside the tumor effectively destroys tumor tissue. [ 36 ]
In local application, the systemic dose of IL-2 is too low to cause side effects, since the total dose is about 100- to 1000-fold lower. In clinical studies, the most important side effect reported by patients was painful injection at the irradiated site. In the case of irradiation of nasopharyngeal carcinoma, five-year disease-free survival increased from 8% to 63% with local IL-2 therapy. [ 37 ]
Systemic IL-2 has a narrow therapeutic window , and the level of dosing usually determines the severity of the side effects. [ 38 ] In the case of local IL-2 application, the therapeutic window spans several orders of magnitude. [ 35 ]
A number of common side effects are associated with IL-2 therapy. [ 33 ]
More serious and dangerous side effects are sometimes seen, such as breathing problems, serious infections , seizures , allergic reactions , heart problems, kidney failure or a variety of other possible complications. [ 33 ] The most common adverse effect of high-dose IL-2 therapy is vascular leak syndrome (VLS; also termed capillary leak syndrome ). It is caused by lung endothelial cells expressing high-affinity IL-2R. As a result of IL-2 binding, these cells cause increased vascular permeability. Intravascular fluid thus extravasates into organs, predominantly the lungs, which can lead to life-threatening pulmonary or brain oedema. [ 39 ]
Other drawbacks of IL-2 cancer immunotherapy are its short half-life in circulation and its ability to predominantly expand regulatory T cells at high doses. [ 5 ] [ 6 ]
Intralesional IL-2 used to treat in-transit melanoma metastases is generally well tolerated. [ 34 ] This is also the case for intralesional IL-2 in other forms of cancer, like nasopharyngeal carcinoma. [ 37 ]
Eisai markets a drug called denileukin diftitox (trade name Ontak), which is a recombinant fusion protein of the human IL-2 ligand and the diphtheria toxin . [ 40 ] This drug binds to IL-2 receptors and introduces the diphtheria toxin into cells that express those receptors, killing the cells. In some leukemias and lymphomas, malignant cells express the IL-2 receptor, so denileukin diftitox can kill them. In 1999 Ontak was approved by the U.S. Food and Drug Administration (FDA) for treatment of cutaneous T cell lymphoma (CTCL). [ 41 ]
IL-2 does not follow the classical dose–response curve of chemotherapeutics. The immunological activities of high- and low-dose IL-2 contrast sharply. This might be related to the different distribution of IL-2 receptor components (CD25, CD122, CD132) on different cell populations, so that different cells are activated by high- and low-dose IL-2. In general, high doses are immunosuppressive, while low doses can stimulate type 1 immunity. [ 42 ] Low-dose IL-2 has been reported to reduce hepatitis C and B infection. [ 43 ]
IL-2 has been used in clinical trials for the treatment of chronic viral infections and as a booster (adjuvant) for vaccines. The use of large doses of IL-2 given every 6–8 weeks in HIV therapy, similar to its use in cancer therapy, was found to be ineffective in preventing progression to an AIDS diagnosis in two large clinical trials published in 2009. [ 44 ]
More recently, low-dose IL-2 has shown early success in modulating the immune system in diseases like type 1 diabetes and vasculitis. [ 45 ] There are also promising studies looking to use low-dose IL-2 in ischaemic heart disease. [ 46 ]
IL-2 cannot fully accomplish its promise as an immunotherapeutic agent because of the significant drawbacks listed above. Some of these issues can be overcome using IL-2 immune complexes (IL-2 IC), which are composed of IL-2 and an anti-IL-2 monoclonal antibody (mAb) and can potentiate the biological activity of IL-2 in vivo . The main mechanism of this phenomenon in vivo is the prolongation of the cytokine's half-life in circulation. Depending on the clone of the anti-IL-2 mAb, IL-2 IC can selectively stimulate either CD25 high cells (IL-2/JES6-1 complexes) or CD122 high cells (IL-2/S4B6 complexes). IL-2/S4B6 immune complexes have high stimulatory activity for NK cells and memory CD8 + T cells and could thus replace conventional IL-2 in cancer immunotherapy . On the other hand, IL-2/JES6-1 complexes highly selectively stimulate regulatory T cells and could potentially be useful for transplantation and for the treatment of autoimmune diseases . [ 47 ] [ 5 ]
According to an immunology textbook: "IL-2 is particularly important historically, as it is the first type I cytokine that was cloned, the first type I cytokine for which a receptor component was cloned, and was the first short-chain type I cytokine whose receptor structure was solved. Many general principles have been derived from studies of this cytokine including its being the first cytokine demonstrated to act in a growth factor–like fashion through specific high-affinity receptors, analogous to the growth factors being studied by endocrinologists and biochemists". [ 48 ] : 712
In the mid-1960s, studies reported "activities" in leukocyte-conditioned media that promoted lymphocyte proliferation. [ 49 ] : 16 In the mid-1970s, it was discovered that T-cells could be selectively proliferated when normal human bone marrow cells were cultured in conditioned medium obtained from phytohemagglutinin -stimulated normal human lymphocytes. [ 48 ] : 712 The key factor was isolated from cultured mouse cells in 1979 and from cultured human cells in 1980. [ 50 ] The gene for human IL-2 was cloned in 1982 after an intense competition. [ 51 ] : 76
Commercial activity to bring an IL-2 drug to market was intense in the 1980s and 1990s. By 1983, Cetus Corporation had created a proprietary recombinant version of IL-2 (Aldesleukin, later branded as Proleukin), with the alanine removed from its N-terminal and residue 125 replaced with serine. [ 51 ] : 76–77 [ 52 ] : 201 [ 53 ] Amgen later entered the field with its own proprietary, mutated, recombinant protein and Cetus and Amgen were soon competing scientifically and in the courts; Cetus won the legal battles and forced Amgen out of the field. [ 51 ] : 151 By 1990 Cetus had gotten aldesleukin approved in nine European countries but in that year, the U.S. Food and Drug Administration (FDA) refused to approve Cetus' application to market IL-2. [ 29 ] The failure led to the collapse of Cetus, and in 1991 the company was sold to Chiron Corporation . [ 54 ] [ 55 ] Chiron continued the development of IL-2, which was finally approved by the FDA as Proleukin for metastatic renal carcinoma in 1992. [ 56 ]
By 1993 aldesleukin was the only approved version of IL-2, but Roche was also developing a proprietary, modified, recombinant IL-2 called teceleukin, with a methionine added at its N-terminus, and Glaxo was developing a version called bioleukin, with a methionine added at its N-terminus and residue 125 replaced with alanine. Dozens of clinical trials had been conducted of recombinant or purified IL-2, alone, in combination with other drugs, or using cell therapies, in which cells were taken from patients, activated with IL-2, then reinfused. [ 53 ] [ 57 ] Novartis acquired Chiron in 2006 [ 58 ] and licensed the US aldesleukin business to Prometheus Laboratories in 2010, [ 59 ] before global rights to Proleukin were subsequently acquired by Clinigen in 2018 and 2019. | https://en.wikipedia.org/wiki/Interleukin_2 |
Gene infobox identifiers (human and mouse entries, respectively):
PDB structures: 3OG6, 3OG4
Entrez Gene: 282618, 330496
Ensembl: ENSG00000182393, ENSMUSG00000059128
UniProt: Q8IU54, Q4VK74
RefSeq (mRNA): NM_172140, NM_001024673
RefSeq (protein): NP_742152, NP_001019844
Interleukin-29 ( IL-29 ) is a cytokine that belongs to the type III interferon group, also termed interferons λ (IFN-λ). IL-29 (alternative name IFNλ1) plays an important role in the immune response against pathogens, and especially against viruses, by mechanisms similar to those of type I interferons , but it targets primarily cells of epithelial origin and hepatocytes . [ 5 ] [ 6 ]
IL-29 is encoded by the IFNL1 gene located on chromosome 19 in humans. [ 5 ] [ 7 ] It is a pseudogene in mice, meaning the IL-29 protein is not produced in that species. [ 5 ]
IL-29 is, together with the rest of the IFN-λ group, structurally related to the IL-10 family , but its primary amino acid sequence (and also its function) is more similar to that of type I interferons. [ 5 ] It binds to a heterodimeric receptor composed of one subunit, IFNLR1, specific for IFN-λ and a second subunit, IL10RB , shared among the IL-10 family cytokines. [ 5 ]
IL-29 exhibits antiviral effects by inducing signaling pathways similar to those of type I interferons. [ 5 ] The IL-29 receptor signals through JAK-STAT pathways, leading to activated expression of interferon-stimulated genes and production of antiviral proteins. [ 8 ] Further consequences of IL-29 signaling include upregulated expression of MHC class I molecules [ 5 ] and enhanced expression of costimulatory molecules and chemokine receptors on pDCs , which are the main producers of IFN-α . [ 8 ]
IL-29 expression is dominant in virus-infected epithelial cells of the respiratory , gastrointestinal and urogenital tracts, as well as in other mucosal tissues and skin . Hepatocytes infected by HCV or HBV stimulate the immune response by producing IL-29 (and IFN-λ in general) rather than type I interferons. [ 5 ] [ 6 ] It is also produced by maturing macrophages, dendritic cells and mast cells. [ 6 ]
IL-29 also plays a role in defense against pathogens other than viruses. [ 5 ] It affects the function of both the innate and the adaptive immune system. Besides the antiviral effects described above, IL-29 modulates cytokine production by other cells: for example, it increases the secretion of IL-6 , IL-8 and IL-10 by monocytes and macrophages , enhances the responsiveness of macrophages to IFN-γ through increased expression of IFNGR1 , and stimulates T cell polarization towards the Th1 phenotype; a B cell response to IL-29 has also been reported. [ 8 ]
The impact of IL-29 on cancer cells is complex and depends on the cancer cell type. It shows protective, tumor-inhibiting effects in many cases, such as skin , lung , colorectal or hepatocellular cancer, but tumor-promoting effects on multiple myeloma cells. [ 6 ] IFN-λ has potential as a cancer therapy , with effects on more restricted cell types and fewer side effects than type I interferons. [ 5 ] [ 6 ]
Abnormal expression of IL-29 could be involved in the pathogenesis of autoimmune diseases by enhancing the production of inflammatory cytokines , chemokines, and other autoimmune-related components. High levels of IL-29 in serum or in disease-specific tissue have been observed in patients with rheumatoid arthritis , osteoarthritis , systemic lupus erythematosus , Sjögren's syndrome , psoriasis , atopic dermatitis , Hashimoto's thyroiditis , systemic sclerosis and uveitis . [ 8 ] | https://en.wikipedia.org/wiki/Interleukin_29 |
Gene infobox identifiers (human and mouse entries, respectively):
PDB structures: 4DKC, 4DKD, 4DKE, 4DKF
Entrez Gene: 146433, 76527
Ensembl: ENSG00000157368, ENSMUSG00000031750
UniProt: Q6ZMJ4, Q8R1R4
RefSeq (mRNA): NM_001172771, NM_001172772, NM_152456; NM_001135100, NM_029646
RefSeq (protein): NP_001166242, NP_001166243, NP_689669; NP_001128572, NP_083922
Interleukin 34 (IL-34) is a protein belonging to a group of cytokines called interleukins . It was originally identified in humans by large-scale screening of secreted proteins ; chimpanzee, murine, rat and chicken interleukin 34 orthologs have also been found. The protein is composed of 241 amino acids , has a mass of 39 kilodaltons, and forms homodimers . IL-34 increases the growth or survival of immune cells known as monocytes ; it elicits its activity by binding the colony stimulating factor 1 receptor .
Messenger RNA (mRNA) expression of human IL-34 is most abundant in spleen but occurs in several other tissues: thymus , liver , small intestine , colon , prostate gland , lung , heart , brain , kidney , testes , and ovary . The discovery of IL-34 protein in the red pulp of the spleen suggests involvement in growth and development of myeloid cells , consistent with its activity on monocytes. [ 5 ]
Interleukin-34 at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
| https://en.wikipedia.org/wiki/Interleukin_34 |
Gene infobox identifiers (human and mouse entries, respectively):
PDB structures: 4Q3H
Entrez Gene: 3579, 12765
Ensembl: ENSG00000180871, ENSMUSG00000026180
UniProt: P25025, P35343
RefSeq (mRNA): NM_001168298, NM_001557; NM_009909
RefSeq (protein): NP_001161770, NP_001548; NP_034039
Interleukin 8 receptor, beta is a chemokine receptor . IL8RB is also known as CXCR2 , and CXCR2 is now the name recommended by the IUPHAR Committee on Receptor Nomenclature and Drug Classification. [ 5 ]
The protein encoded by this gene is a member of the G-protein-coupled receptor family. This protein is a receptor for interleukin 8 (IL8). It binds to IL8 with high affinity and transduces the signal through a G-protein-activated second messenger system (G i/o -coupled [ 6 ] ). This receptor also binds to chemokine (C-X-C motif) ligand 1 ( CXCL1 /MGSA), a protein with melanoma growth stimulating activity, and has been shown to be a major component required for serum-dependent melanoma cell growth. In addition, it binds the ligands CXCL2 , CXCL3 , and CXCL5 .
The angiogenic effects of IL8 in intestinal microvascular endothelial cells are found to be mediated by this receptor. Knockout studies in mice suggested that this receptor controls the positioning of oligodendrocyte precursors in the developing spinal cord by arresting their migration. IL8RB , IL8RA , which encodes another high-affinity IL8 receptor, and IL8RBP , a pseudogene of IL8RB , form a gene cluster in a region mapped to chromosome 2q33-q36. [ 7 ]
Mutations in CXCR2 are associated with hematological traits . [ 8 ]
Knock-down studies of the chemokine receptor CXCR2 alleviate both replicative and oncogene-induced senescence (OIS) and diminish the DNA-damage response. Also, ectopic expression of CXCR2 results in premature senescence via a p53 -dependent mechanism. [ 9 ] | https://en.wikipedia.org/wiki/Interleukin_8_receptor,_beta |
Interleukin receptors are a family of cytokine receptors for interleukins . They belong to the immunoglobulin superfamily .
There are two main families of Interleukin receptors, Type 1 and Type 2 .
Type 1 interleukin receptors include: [ 1 ] [ 2 ]
Type 2 interleukin receptors are Type II cytokine receptors . They include: [ 3 ]
| https://en.wikipedia.org/wiki/Interleukin_receptor |
Interlocus contest evolution (ICE) is a process of intergenomic conflict by which different loci within a single genome antagonistically coevolve . ICE supposes that the Red Queen process , which is characterized by a never-ending antagonistic evolutionary arms race , applies not only to species but also to genes within the genome of a species. [ 1 ]
Because sexual recombination allows different gene loci to evolve semi-autonomously, genes have the potential to coevolve antagonistically. ICE occurs when "an allelic substitution at one locus selects for a new allele at the interacting locus, and vice versa." As a result, ICE can lead to a chain reaction of perpetual gene substitution at antagonistically interacting loci, and no stable equilibrium can be achieved. The rate of evolution thus increases at that locus. [ 1 ]
ICE is thought to be the dominant mode of evolution for genes controlling social behavior. [ 1 ] The ICE process can explain many biological phenomena, including intersexual conflict, parent-offspring conflict , and interference competition.
A fundamental conflict between the sexes lies in differences in investment: males generally invest predominantly in fertilization while females invest predominantly in offspring. [ 2 ] This conflict manifests itself in many traits associated with sexual reproduction . Genes expressed in only one sex are selectively neutral in the other sex; male- and female-linked genes can therefore be acted upon separately by selection and will evolve semi-autonomously. [ 1 ] Thus, one sex of a species may evolve to better itself rather than better the species as a whole, sometimes with negative results for the opposite sex: loci will antagonistically coevolve to enhance male reproductive success at females' expense on the one hand, and to enhance female resistance to male coercion on the other. [ 3 ] This is an example of intralocus sexual conflict , and is unlikely to be resolved fully throughout the genome. However, in some cases this conflict may be resolved by the restriction of the gene's expression to only the sex that it benefits, resulting in sexual dimorphism . [ 4 ]
The ICE theory can explain the differentiation of the human X- and Y-chromosomes . Semi-autonomous evolution may have promoted genes beneficial to females on the X-chromosome even when detrimental to males, and genes beneficial to males on the Y-chromosome even when detrimental to females. As the X-chromosome is three times as prevalent as the Y-chromosome (the X-chromosome accounts for 3/4 of the sex chromosomes in offspring, while the Y-chromosome accounts for only 1/4), the Y-chromosome has a reduced opportunity for rapid evolution. Thus, the Y-chromosome has "shed" its genes to leave only the essential ones (such as the SRY gene ), which gives rise to the differences in the X- and Y-chromosomes. [ 5 ]
A father, mother and offspring may differ in the optimal resource allocation to the offspring. This co-evolutionary conflict can be considered in the context of ICE. Selection will favor genes in the male to maximize female investment in the current offspring, no matter the consequences to the female's reproduction later in life, while selection will favor genes in the female that increase her overall lifetime fitness . Genes expressed in the offspring will be selected to produce an intermediary level of resource allocation between the male-benefit and female-benefit loci. This three-way conflict again occurs when parents feed their offspring, as the optimum feeding rate and optimum point in time to discontinue feeding differ between father, mother and offspring. [ 1 ]
ICE can also explain the theory of interference competition , which is most likely to be associated with opposing sets of genes that determine the outcome of competition between individuals. Different sets of genes may code for signal or receiver phenotypes , such as in the context of threat displays : when a competing male can win more contests by intimidation, rather than by fighting, selection will favor the accumulation of deceitful genes that may not be honest indicators of the male’s fighting capability. [ 1 ]
For example, primitive male elephant seals may have used the lowest frequencies in the threat call of a rival as an indication of body size. The elephant seal's enormous nose may have evolved as a resonating device to amplify low frequencies, [ 6 ] illustrating selection that favors the production of low-frequency threat vocalizations. However, this counter-selects for receptor systems that provide an increased threshold required for intimidation, which in turn selects for deeper threat vocalizations. The rapid divergence of threat displays among closely related species provides further evidence in support of the co-evolutionary arms race within the genome of a single species, driven by the ICE process. [ 1 ] | https://en.wikipedia.org/wiki/Interlocus_contest_evolution |
Interlocus sexual conflict is a type of sexual conflict that occurs through the interaction of a set of antagonistic alleles at two or more different loci , or the location of a gene on a chromosome, in males and females, resulting in the deviation of either or both sexes from the fitness optima for the traits. [ 1 ] A co-evolutionary arms race is established between the sexes in which either sex evolves a set of antagonistic adaptations that is detrimental to the fitness of the other sex. [ 2 ] The potential for reproductive success in one organism is strengthened while the fitness of the opposite sex is weakened. Interlocus sexual conflict can arise due to aspects of male–female interactions such as mating frequency, fertilization , relative parental effort, female remating behavior, and female reproductive rate. [ 3 ]
As the sexes demonstrate a significant investment discrepancy for reproduction, interlocus sexual conflict can arise. To achieve reproductive success , a species member will display reproductive characteristics that enhance their ability to reproduce, regardless of whether the fitness of their mate is negatively affected. [ 4 ] Sperm production by males is substantially less biologically costly than egg production by females, and sperm are produced in much greater quantities. Consequently, males invest more energy into mating frequency, while females are choosier with mates and invest their energy into offspring quality. [ 5 ]
The evolutionary pathways resulting from interlocus sexual conflict form part of interlocus contest evolution , a theory describing the coevolution of different loci in a species through the process of intergenomic conflict . [ 6 ] This has led to the proposal that sexual antagonistic coevolution is fueled by interlocus sexual conflict. [ 6 ]
Well-evidenced examples come exclusively from the insect world, with the majority of research being conducted in yellow dung flies, Scathophaga stercoraria , and fruit flies, Drosophila melanogaster . Examples outside of these taxa are theoretical, though currently not well studied. [ 7 ]
Interlocus sexual conflict differs from intralocus sexual conflict , a similar theory in which a set of antagonistic alleles resides on the same locus in both sexes.
The first model of interlocus sexual conflict, the genetic threshold model, was developed by Parker to explain sexual conflict among yellow dung flies. [ 2 ] Further investigation of sexual conflict theory remained relatively limited until Rice predicted that genes for sexually antagonistic traits exist at the same loci of the sex chromosomes in both sexes, which led to the development of the concept of intralocus sexual conflict. Rice's genetic model of X-linkage influencing sexual dimorphism demonstrated that alleles for reproductive traits will persist if they increase the fitness of one sex, regardless of the associated cost for their mate. [ 8 ]
An expansion of Parker's genetic threshold model was later used to examine how sex-linked harming alleles, or mutant alleles that cause males to harm females during reproduction, proliferate within a population and initiate interlocus sexual conflict. [ 9 ] In a population of fruit flies where a Y-linked harming allele decreases the fitness of a female mate, an indirect cost is imposed on the male's fitness. Consequently, the harming allele is only favored in circumstances where harming males sire more offspring than normal males, that is, where harming males are at a fitness advantage.
The chase-away sexual selection model, proposed by Holland and Rice, enabled the prediction that mating discrimination by females will drive the evolution of male display features toward extreme phenotypes . As a result, an arms race develops where female mate choice drives male morphology. [ 10 ] A model of antagonistic coevolution by Arnqvist and Rowe highlighted the example of abdominal spines in female water striders, Gerris incognitus , to demonstrate how this arms race leads to evolutionary adaptations in females. Female water striders achieve control over copulatory acts by using their spines as defense against aggressive males. [ 11 ]
Interlocus sexual conflict forms the basis for interlocus contest evolution (ICE), characterized by the coevolution of genes at different loci in a species through intergenomic conflict. [ 6 ] In other words, a disequilibrium forms as alleles for reproductive traits are substituted at different loci in opposing sexes, resulting in rapid evolution of the trait at the locus, which further fuels an arms race between the sexes.
The Red Queen hypothesis postulates that evolution of a trait in one species will drive antagonistic coevolution in an opposing species and can be used to explain coevolution in cases of predatory behaviour, host-parasite relationships, and sexual selection. [ 12 ] Of interest to interlocus sexual conflict, the Red Queen hypothesis allows for the evolution of traits that enhance reproductive fitness. [ 6 ] ICE extends from this hypothesis, proposing that antagonistic coevolution does not require opposing species, but can be applied to genes at different loci in a single species.
The genetic basis of the distinction between interlocus sexual conflict and intralocus sexual conflict is the location of the interacting antagonistic alleles . Conflict in which the antagonistic alleles are located at the same locus is termed intralocus sexual conflict. [ 9 ] This occurs when males and females undergo different selective pressures at the same locus, resulting in either sex limiting the fitness of the other sex. [ 13 ]
Importantly, many examples of sexual conflict are not categorized into interlocus sexual conflict or intralocus sexual conflict, as the genetic locations of the interacting alleles for these traits are not known or specified. It is critical to note when interpreting information regarding sexual conflict that these terms are sometimes used interchangeably, despite this being incorrect. [ 14 ]
Sexual antagonistic coevolution is characterized by an arms race between the sexes in which one sex experiences changes in morphology or behaviour to compensate for the negative effects of the reproductive traits of the opposite sex. Both sexes strive to maintain an optimal fitness level, but do so at the expense of their mate's fitness. For interlocus sexual conflict to be a valid cause of antagonistic coevolution, the harm induced by the males across all loci has to outweigh the indirect benefits that the females gain by interacting with males. [ 15 ]
Through Parker's genetic threshold model, it was discovered that female yellow dung flies can be injured in battles between male suitors. Males are selected to evolve traits for competitive ability that would increase their reproductive success, but females would evolve a set of antagonistic adaptations to reduce their chances of being injured during these interactions. [ 2 ] Male yellow dung flies use pheromones, seminal fluid proteins (SFPs), and aggressive behaviour attributable to their size to manipulate females during courtship. As yellow dung flies are a polyandrous species, females obtain sperm from multiple males, which is stored for fertilization . Larger males have a competitive advantage in displacing the sperm of other males, enhancing the likelihood of their sperm fertilizing the eggs. [ 16 ] This phenomenon is termed sperm competition . In response, females have evolved larger spermathecae , spermicides , and an enhanced ability to select sperm based on the fitness of male suitors.
Scathophaga stercoraria displaying either polyandry or monogamy differ in female fitness. When females are placed in enforced polyandrous or monogamous mating conditions, females from polyandrous conditions exhibit substantially reduced fitness, displaying decreased egg production, decreased number of offspring, and a shortened life span compared to monogamous females after only one mating experience. [ 17 ] Initially, it was suggested that the sexy son hypothesis was enough to compensate for the direct impact of antagonistic coevolution on female fitness. [ 9 ] However, the detrimental fitness impact in females singly-mated with a polyandrous male suggests adaptations to resist harm by males requires competition, and is therefore better explained by interlocus sexual conflict. [ 17 ]
Drosophila melanogaster are a promiscuous species in which mate choice is a recurring event, fostering the development of interlocus sexual conflict. [ 18 ]
The ejaculate of male fruit flies contains seminal fluid proteins (SFPs) that play a significant role in determining female fitness. [ 18 ] SFPs are capable of influencing processes such as oogenesis , [ 19 ] sperm storage, [ 20 ] and the onset of ovulation . [ 21 ] This ultimately leads to a decrease in female fitness, as increasing behaviours such as egg-laying can decrease the success of fertilization, [ 22 ] delay remating, [ 19 ] and impact the female's life span. [ 23 ] In response to the negative effects of SFPs, female fruit flies have evolved resistance tactics to hyperactive males and refractoriness, resulting in interlocus sexual conflict. This has been supported in studies revealing the rapid evolution of SFP genes. [ 24 ]
In a study examining fruit flies under polygamous and monogamous conditions, it was discovered that antagonistic coevolution decreases in monogamy, as the organisms mate with only one opposite-sex member and there is no competition among males to mate with the female. [ 18 ]
In another laboratory study, a mutation that reduces the attractiveness of females was introduced into the genome of the experimental females. By reducing the attractiveness of the females expressing the trait, the mutation provided females with resistance to the direct costs of re-mating and male courtship. These results show that the resistance allele significantly accumulated in the experimental group, suggesting that the direct costs of male-courtship are greater than the indirect benefits of male-courtship. [ 15 ]
Reciprocal crosses of Drosophila melanogaster have been used to investigate the evolution of sexual traits under allopatric conditions. In divergent populations, organisms will respond adaptively to local mates but not foreign mates. As a result, the female remating rate decreased significantly upon introduction of foreign males. Females are most resistant to males they coevolved with in local conditions, but show limited defense against foreign males. [ 25 ] | https://en.wikipedia.org/wiki/Interlocus_sexual_conflict |
Intermedia is an art theory term coined in the mid-1960s by Fluxus artist Dick Higgins to describe the strategies of interdisciplinarity that occur within artworks existing between artistic genres. [ 1 ] [ 2 ] [ 3 ] It was also used by John Brockman to refer to works in expanded cinema that were associated with Jonas Mekas ' Film-Makers’ Cinematheque. [ 4 ] [ 5 ] Gene Youngblood also described intermedia, beginning in 1967 in his Intermedia column for the Los Angeles Free Press , as part of a global network of multiple media that was expanding consciousness . Youngblood gathered and expanded upon intermedia ideas from this series of columns in his 1970 book Expanded Cinema , with an introduction by Buckminster Fuller . Over the years, intermedia has been used almost interchangeably with multi-media and more recently with the categories of digital media , technoetics , electronic media and post-conceptualism .
Areas such as those between drawing and poetry , or between painting and theatre , could be described as intermedia. With repeated occurrences, these new genres between genres could develop their own names (e.g. visual poetry , performance art ); historically, an example is haiga , which combined brush painting and haiku into one composition. [ 6 ]
Dick Higgins described the tendency of what he thought was the most interesting and best in the new art to cross boundaries of recognized media or even to fuse the boundaries of art with media [ 7 ] that had not previously been considered for art forms, including computers.
Part of the reason that Duchamp 's objects are fascinating while Picasso 's voice is fading is that the Duchamp pieces are truly between media, between sculpture and something else, while a Picasso is readily classifiable as a painted ornament. Similarly, by invading the land between collage and photography , the German John Heartfield produced what are probably the greatest graphics of our century ...
With characteristic modesty, Dick Higgins often noted that Samuel Taylor Coleridge had first used the term. [ 8 ]
In 1968, Hans Breder founded the first university program in the United States to offer an M.F.A. in intermedia. The Intermedia Area at The University of Iowa graduated artists such as Ana Mendieta and Charles Ray . In addition, the program developed a substantial visiting artist tradition, bringing artists such as Dick Higgins , Vito Acconci , Allan Kaprow , Karen Finley , Robert Wilson , Eric Andersen and others to work directly with Intermedia students. Two other prominent university programs that focus on intermedia are the Intermedia program at Arizona State University and the Intermedia M.F.A. at the University of Maine , founded and directed by Fluxus scholar and author Owen Smith. Additionally, the Roski School of Fine Arts at the University of Southern California features Intermedia as an area of emphasis in its B.A. and B.F.A. programs. The University of Maryland, Baltimore County offers an M.F.A. in Intermedia and Digital Art . Concordia University in Montreal , QC offers a B.F.A. in Intermedia/Cyberarts. [ 9 ] Herron School of Art and Design , Indiana University , Purdue University , Indianapolis, has an M.F.A. program with Photography and Intermedia degrees. [ 10 ] The University of Oregon offers a Master of Music degree in Intermedia Music Technology . [ 11 ] The Pacific Northwest College of Art offers a B.F.A. in Intermedia. [ 12 ]
In the United Kingdom , Edinburgh College of Art (within the University of Edinburgh ) introduced a BA (Hons) Degree in Intermedia Arts, and intermedia can be a focus of study in Masters programmes. [ 13 ] The Academy of Fine Arts [AVU] in Prague offers a Masters in Intermedia Studies founded by Milan Knížák [ 14 ] and The Hungarian University of Fine Arts has an Intermedia Program. [ 15 ] | https://en.wikipedia.org/wiki/Intermedia |
Intermedia was the third notable hypertext project to emerge from Brown University , after HES (1967) and FRESS (1969). Intermedia was started in 1985 by Norman Meyrowitz , who had been associated with earlier hypertext research at Brown. The Intermedia project coincided with the establishment of the Institute for Research in Information and Scholarship (IRIS). Some of the materials that came from Intermedia, authored by Meyrowitz, Nancy Garrett, and Karen Catlin, were used in the development of HTML.
Intermedia ran on A/UX version 1.1. Intermedia was programmed using an object-oriented toolkit and standard DBMS functions. Intermedia supported bi-directional, dual-anchor links for both text and graphics. Small icons were used as anchor markers. Intermedia properties included author, creation date, title, and keywords. Link information was stored by the system apart from the source text. More than one such set of data could be kept, which allowed each user to have their own "web" of information. Intermedia had complete multi-user support, with three levels of access rights: read, write, and annotate, similar to Unix permissions.
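The following Python sketch is illustrative only; the names and structures are hypothetical and are not Intermedia's actual code or data model. It shows one way to represent bi-directional, dual-anchor links kept in a per-user "web" separate from the documents, so that a link can be found from either of its anchors:

```python
# Illustrative sketch (not Intermedia's actual design): bi-directional,
# dual-anchor links stored in a "web" that is separate from the documents.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Anchor:
    document: str          # document identifier
    span: tuple            # e.g. (start, end) text offsets, or a graphic region

@dataclass
class Link:
    source: Anchor
    target: Anchor
    author: str            # link properties, as described above
    title: str = ""
    keywords: tuple = ()

@dataclass
class Web:
    """A per-user collection of links, stored apart from the documents."""
    links: list = field(default_factory=list)

    def add(self, link: Link) -> None:
        self.links.append(link)

    def links_at(self, anchor: Anchor) -> list:
        # Bi-directional: a link is found from either of its two anchors.
        return [l for l in self.links
                if l.source == anchor or l.target == anchor]

web = Web()
a = Anchor("essay.txt", (120, 134))
b = Anchor("diagram.pict", (10, 10, 40, 40))
web.add(Link(a, b, author="ngarrett", title="figure reference"))
assert web.links_at(b)[0].source == a   # traversal works from both ends
```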
As promising as Intermedia was, it used a lot of resources for its time (it required 4 MB of RAM and 80 MB of hard drive space in 1989). It was also tightly tied to A/UX, a less popular Unix-like operating system that ran on Apple Macintosh computers, and so was not very portable. In 1991, changes in A/UX and a lack of funding ended the Intermedia project. | https://en.wikipedia.org/wiki/Intermedia_(hypertext) |
Intermediate Data Format (IDF) files are used to exchange design data between electronic design automation (EDA) software and solid modeling mechanical computer-aided design (CAD) software.
The format was devised by David Kehmeier at the Mentor Graphics Corporation. [ 1 ]
The EMN file contains the PCB outline, the positions of the parts, the positions of holes and milling, and the keep-out and keep-in regions.
The EMP file contains the outline and height of the parts.
Some CAD software allows the use of a map file to load more detailed part models. [ 2 ]
STEP, also known as ISO 10303-21, has both advantages and disadvantages compared with IDF.
If both MCAD and ECAD software support STEP, both programs can interchange more detailed models (at the cost of increased file size).
STEP models that render correctly in the ECAD software can cause problems in the MCAD software [ citation needed ] .
IDF does allow the communication of keep-out areas and part placements more directly.
IDF is a very simple and robust format. If necessary, the files can be edited by hand in a text editor.
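As an illustration of the kind of information the EMN and EMP files carry, the sketch below models it as plain data structures. This is an assumed, simplified representation for reasoning about the ECAD-to-MCAD handoff, not the actual IDF file grammar:

```python
# Illustrative data model only -- not the real IDF syntax. It captures the
# kinds of information the text says the EMN (board) and EMP (part) files
# carry, so the handoff can be reasoned about programmatically.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]   # x, y in board units

@dataclass
class PartOutline:            # EMP-style record: outline and height of a part
    name: str
    polygon: List[Point]
    height: float

@dataclass
class Placement:              # EMN-style record: where a part sits on the board
    part: str
    x: float
    y: float
    rotation: float
    side: str                 # "TOP" or "BOTTOM"

@dataclass
class Board:                  # EMN-style board description
    outline: List[Point]
    holes: List[Point]
    keep_out: List[List[Point]]
    placements: List[Placement]

def tallest_part(board: Board, library: dict) -> float:
    """Example use: find the maximum component height for enclosure checks."""
    return max(library[p.part].height for p in board.placements)
```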
| https://en.wikipedia.org/wiki/Intermediate_Data_Format |
In igneous petrology , an intermediate composition refers to the chemical composition of a rock that has 51.5–63 wt% SiO 2 , intermediate between felsic and mafic compositions. Typical intermediate rocks include andesite and trachyandesite among volcanic rocks and diorite and granodiorite among plutonic rocks .
Volcanic rocks : Subvolcanic rocks : Plutonic rocks
Picrite basalt : — : Peridotite
Basalt : Diabase (Dolerite) : Gabbro
Andesite : Microdiorite : Diorite
Dacite : Microgranodiorite : Granodiorite
Rhyolite : Microgranite : Granite
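As a rough illustration of where the intermediate field sits on the silica scale, the sketch below classifies a rock by its SiO2 content. The 51.5–63 wt% window follows the definition above; the other cut-offs are approximate conventional values and are an assumption, not taken from this article:

```python
def classify_by_silica(sio2_wt_percent: float) -> str:
    """Rough classification by SiO2 content. The 51.5-63 wt% window for
    'intermediate' follows the text; the other cut-offs are approximate."""
    if sio2_wt_percent > 63:
        return "felsic"
    if sio2_wt_percent >= 51.5:
        return "intermediate"
    if sio2_wt_percent >= 45:
        return "mafic"
    return "ultramafic"

print(classify_by_silica(57))   # andesite/diorite range -> 'intermediate'
```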
| https://en.wikipedia.org/wiki/Intermediate_composition |
In communications and electronic engineering , an intermediate frequency ( IF ) is a frequency to which a carrier wave is shifted as an intermediate step in transmission or reception. [ 1 ] The intermediate frequency is created by mixing the carrier signal with a local oscillator signal in a process called heterodyning , resulting in a signal at the difference or beat frequency . Intermediate frequencies are used in superheterodyne radio receivers , in which an incoming signal is shifted to an IF for amplification before final detection is done.
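A short numerical sketch (illustrative values only) shows the effect of heterodyning: multiplying a carrier by a local oscillator produces components at the difference (beat) and sum frequencies, and the lower, difference-frequency component is what a superheterodyne receiver keeps as the IF:

```python
# Numerical sketch of heterodyning: mixing a carrier with a local oscillator
# produces components at the sum and difference (beat) frequencies.
# Frequencies chosen to match the 1500 kHz / 1450 kHz historical example below.
import numpy as np

fs = 20_000_000            # sample rate, Hz
t = np.arange(0, 0.002, 1 / fs)
f_rf, f_lo = 1_500_000, 1_450_000

mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.astype(int)))   # [50000, 2950000]: difference and sum
```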
Conversion to an intermediate frequency is useful for several reasons. When several stages of filters are used, they can all be set to a fixed frequency, which makes them easier to build and to tune. Lower-frequency transistors generally have higher gains, so fewer stages are required. It is also easier to make sharply selective filters at lower, fixed frequencies.
There may be several such stages of intermediate frequency in a superheterodyne receiver; two or three stages are called double (alternatively, dual ) or triple conversion , respectively.
Intermediate frequencies are used for three general reasons. [ 2 ] [ 3 ] At very high ( gigahertz ) frequencies, signal processing circuitry performs poorly. Active devices such as transistors cannot deliver much amplification ( gain ). [ 1 ] Ordinary circuits using capacitors and inductors must be replaced with cumbersome high frequency techniques such as striplines and waveguides . So a high frequency signal is converted to a lower IF for more convenient processing. For example, in satellite dishes , the microwave downlink signal received by the dish is converted to a much lower IF at the dish so that a relatively inexpensive coaxial cable can carry the signal to the receiver inside the building. Bringing the signal in at the original microwave frequency would require an expensive waveguide .
In receivers that can be tuned to different frequencies, a second reason is to convert the various different frequencies of the stations to a common frequency for processing. It is difficult to build multistage amplifiers , filters , and detectors that can have all stages track the tuning of different frequencies, but it is comparatively easy to build tunable oscillators . Superheterodyne receivers tune in different frequencies by adjusting the frequency of the local oscillator on the input stage, and all processing after that is done at the same fixed frequency: the IF. Without using an IF, all the complicated filters and detectors in a radio or television would have to be tuned in unison each time the frequency was changed as was necessary in the early tuned radio frequency receivers (TRF). A more important advantage is that it gives the receiver a constant bandwidth over its tuning range. The bandwidth of a filter is proportional to its center frequency. In receivers like the TRF in which the filtering is done at the incoming RF frequency, as the receiver is tuned to higher frequencies, its bandwidth increases.
The main reason for using an intermediate frequency is to improve frequency selectivity . [ 1 ] In communication circuits, a very common task is to separate out, or extract, signals or components of a signal that are close together in frequency. This is called filtering . Some examples are: picking up a radio station among several that are close in frequency, or extracting the chrominance subcarrier from a TV signal. With all known filtering techniques the filter's bandwidth increases proportionately with the frequency. So a narrower bandwidth and more selectivity can be achieved by converting the signal to a lower IF and performing the filtering at that frequency. FM and television broadcasting with their narrow channel widths, as well as more modern telecommunications services such as cell phones and cable television , would be impossible without using frequency conversion. [ 4 ]
Perhaps the most commonly used intermediate frequencies for broadcast receivers are around 455 kHz for AM receivers and 10.7 MHz for FM receivers. In special purpose receivers other frequencies can be used. A dual-conversion receiver may have two intermediate frequencies, a higher one to improve image rejection and a second, lower one, for desired selectivity. A first intermediate frequency may even be higher than the input signal, so that all undesired responses can be easily filtered out by a fixed-tuned RF stage. [ 5 ]
In a digital receiver, the analog-to-digital converter (ADC) operates at low sampling rates, so the incoming RF signal must be mixed down to an IF before it can be processed. The intermediate frequency is typically in a lower frequency range than the transmitted RF signal. However, the choice of IF depends mostly on the available components, such as mixers , filters and amplifiers, that can operate at the lower frequency. Other factors also enter into the choice, because a lower IF is more susceptible to noise and a higher IF can cause clock jitter.
Modern satellite television receivers use several intermediate frequencies. [ 6 ] The 500 television channels of a typical system are transmitted from the satellite to subscribers in the Ku microwave band, in two subbands of 10.7–11.7 and 11.7–12.75 GHz. The downlink signal is received by a satellite dish . In the box at the focus of the dish, called a low-noise block downconverter (LNB), each block of frequencies is converted to the IF range of 950–2150 MHz by two fixed frequency local oscillators at 9.75 and 10.6 GHz. One of the two blocks is selected by a control signal from the set top box inside, which switches on one of the local oscillators. This IF is carried into the building to the television receiver on a coaxial cable. At the cable company's set top box , the signal is converted to a lower IF of 480 MHz for filtering, by a variable frequency oscillator. [ 6 ] This is sent through a 30 MHz bandpass filter, which selects the signal from one of the transponders on the satellite, which carries several channels. Further processing selects the channel desired, demodulates it and sends the signal to the television.
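The IF plan quoted above can be checked with simple arithmetic: subtracting each fixed local oscillator frequency from the ends of its Ku subband reproduces the 950–2150 MHz first IF range (sketch for illustration):

```python
# Arithmetic check of the L-band IF plan described above: each Ku subband,
# mixed against its fixed local oscillator, lands in roughly 950-2150 MHz.
bands_ghz = {"low": (10.70, 11.70), "high": (11.70, 12.75)}
lo_ghz = {"low": 9.75, "high": 10.60}

for name, (f1, f2) in bands_ghz.items():
    lo = lo_ghz[name]
    print(name, f"{(f1 - lo) * 1000:.0f}-{(f2 - lo) * 1000:.0f} MHz")
# low  950-1950 MHz
# high 1100-2150 MHz
```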
An intermediate frequency was first used in the superheterodyne radio receiver, invented by American scientist Major Edwin Armstrong in 1918, during World War I . [ 7 ] [ 8 ] A member of the Signal Corps , Armstrong was building radio direction finding equipment to track German military signals at the then-very high frequencies of 500 to 3500 kHz. The triode vacuum tube amplifiers of the day would not amplify stably above 500 kHz; however, it was easy to get them to oscillate above that frequency. Armstrong's solution was to set up an oscillator tube that would create a frequency near the incoming signal and mix it with the incoming signal in a mixer tube, creating a heterodyne or signal at the lower difference frequency where it could be amplified easily. For example, to pick up a signal at 1500 kHz the local oscillator would be tuned to 1450 kHz. Mixing the two created an intermediate frequency of 50 kHz, which was well within the capability of the tubes. The name superheterodyne was a contraction of supersonic heterodyne , to distinguish it from receivers in which the heterodyne frequency was low enough to be directly audible, and which were used for receiving continuous wave (CW) Morse code transmissions (not speech or music).
After the war, in 1920, Armstrong sold the patent for the superheterodyne to Westinghouse , who subsequently sold it to RCA . The increased complexity of the superheterodyne circuit compared to earlier regenerative or tuned radio frequency receiver designs slowed its use, but the advantages of the intermediate frequency for selectivity and static rejection eventually won out; by 1930, most radios sold were 'superhets'. During the development of radar in World War II , the superheterodyne principle was essential for downconversion of the very high radar frequencies to intermediate frequencies. Since then, the superheterodyne circuit, with its intermediate frequency, has been used in virtually all radio receivers. | https://en.wikipedia.org/wiki/Intermediate_frequency |
In mathematical logic , a superintuitionistic logic is a propositional logic extending intuitionistic logic . Classical logic is the strongest consistent superintuitionistic logic; thus, consistent superintuitionistic logics are called intermediate logics (the logics are intermediate between intuitionistic logic and classical logic). [ 1 ]
A superintuitionistic logic is a set L of propositional formulas in a countable set of variables p i satisfying the following properties: all axioms of intuitionistic logic belong to L ; if F and F → G both belong to L , then G also belongs to L (closure under modus ponens); and if F ( p 1 , p 2 , ..., p n ) belongs to L and G 1 , G 2 , ..., G n are any formulas, then F ( G 1 , G 2 , ..., G n ) belongs to L (closure under substitution).
Such a logic is intermediate if furthermore L is not the set of all formulas.
There exists a continuum of different intermediate logics and just as many such logics exhibit the disjunction property (DP).
Superintuitionistic or intermediate logics form a complete lattice with intuitionistic logic as the bottom and the inconsistent logic (in the case of superintuitionistic logics) or classical logic (in the case of intermediate logics) as the top. Classical logic is the only coatom in the lattice of superintuitionistic logics; the lattice of intermediate logics also has a unique coatom, namely SmL [ citation needed ] .
The tools for studying intermediate logics are similar to those used for intuitionistic logic, such as Kripke semantics . For example, Gödel–Dummett logic has a simple semantic characterization in terms of total orders . Specific intermediate logics may be given by semantical description.
Others are often given by adding one or more axioms to intuitionistic logic (usually denoted as intuitionistic propositional calculus IPC , but also Int , IL or H ). Examples include:
Generalized variants of the above (but actually equivalent principles over intuitionistic logic) are, respectively,
This list is, for the most part, not any sort of ordering. For example, LC is known not to prove all theorems of SmL , but it does not directly compare in strength to BD 2 . Likewise, e.g., KP does not compare to SL . The list of equalities for each logic is by no means exhaustive either. For example, as with WPEM and De Morgan's law, several forms of DGP using conjunction may be expressed.
Even (¬¬ p ∨ ¬ p ) ∨ (¬¬ p → p ), a further weakening of WPEM, is not a theorem of IPC .
It may also be worth noting that, taking all of intuitionistic logic for granted, the equalities notably rely on explosion. For example, over mere minimal logic , PEM as a principle is already equivalent to consequentia mirabilis, but there it does not imply the stronger DNE or PP, and it is not comparable to DGP.
Going on:
Furthermore:
The propositional logics SL and KP do have the disjunction property DP. Kleene realizability logic and the strong Medvedev's logic do have it as well. There is no unique maximal logic with DP on the lattice.
Note that if a consistent theory validates WPEM but still has independent statements when assuming PEM, then it cannot have DP.
Given a Heyting algebra H , the set of propositional formulas that are valid in H is an intermediate logic. Conversely, given an intermediate logic it is possible to construct its Lindenbaum–Tarski algebra , which is then a Heyting algebra.
An intuitionistic Kripke frame F is a partially ordered set , and a Kripke model M is a Kripke frame with valuation such that { x ∣ M , x ⊩ p } {\displaystyle \{x\mid M,x\Vdash p\}} is an upper subset of F . The set of propositional formulas that are valid in F is an intermediate logic. Given an intermediate logic L it is possible to construct a Kripke model M such that the logic of M is L (this construction is called the canonical model ). A Kripke frame with this property may not exist, but a general frame always does.
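As an illustration of Kripke semantics for intermediate logics, the following sketch (the encoding of formulas is ad hoc and purely illustrative) evaluates formulas on the two-world chain 0 ≤ 1: the excluded middle fails at the root, while an instance of the linearity axiom ( p → q ) ∨ ( q → p ) holds, as expected on a totally ordered frame:

```python
# A minimal intuitionistic Kripke-model evaluator on the two-world chain
# 0 <= 1 (an illustrative sketch; the formula encoding is ad hoc).
WORLDS = (0, 1)

def up(w):                      # worlds accessible from w in the chain
    return [v for v in WORLDS if v >= w]

def forces(w, phi, val):
    """val maps atoms to upward-closed sets of worlds."""
    kind = phi[0]
    if kind == "atom":
        return w in val[phi[1]]
    if kind == "and":
        return forces(w, phi[1], val) and forces(w, phi[2], val)
    if kind == "or":
        return forces(w, phi[1], val) or forces(w, phi[2], val)
    if kind == "imp":
        return all(not forces(v, phi[1], val) or forces(v, phi[2], val)
                   for v in up(w))
    if kind == "not":
        return all(not forces(v, phi[1], val) for v in up(w))
    raise ValueError(kind)

p, q = ("atom", "p"), ("atom", "q")
val = {"p": {1}, "q": set()}                 # p becomes true only at world 1

pem = ("or", p, ("not", p))                  # p or not-p
lin = ("or", ("imp", p, q), ("imp", q, p))   # (p -> q) or (q -> p)

print(forces(0, pem, val))   # False: excluded middle fails at the root
print(forces(0, lin, val))   # True: the linearity axiom holds on this chain
```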
Let A be a propositional formula. The Gödel–Tarski translation of A is defined recursively as follows: T ( p n ) = □ p n for a propositional variable p n ; T (¬ A ) = □ ¬ T ( A ); T ( A ∧ B ) = T ( A ) ∧ T ( B ); T ( A ∨ B ) = T ( A ) ∨ T ( B ); and T ( A → B ) = □ ( T ( A ) → T ( B )).
If M is a modal logic extending S4 then ρ M = { A | T ( A ) ∈ M } is a superintuitionistic logic, and M is called a modal companion of ρ M . In particular:
For every intermediate logic L there are many modal logics M such that L = ρ M . | https://en.wikipedia.org/wiki/Intermediate_logic |
An Intermediate Luminosity Optical Transient (ILOT) is an astronomical object which undergoes an optically detectable explosive event with an absolute magnitude ( M ) brighter than a classical nova ( M ~ −8) but fainter than that of a supernova ( M ~ −17). That nine magnitude range corresponds to a factor of nearly 4000 in luminosity, so the ILOT class may include a wide variety of objects. The term ILOT first appeared in a 2009 paper discussing the nova-like event NGC 300 OT2008-1 . [ 1 ] As the term has gained more widespread use, [ 2 ] it has begun to be applied to some objects like KjPn 8 and CK Vulpeculae for which no transient event has been observed, but which may have been dramatically affected by an ILOT event in the past. [ 3 ] [ 4 ] The number of ILOTs known is expected to increase substantially when the Vera C. Rubin Observatory becomes operational.
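The "nearly 4000" figure follows from the usual magnitude–luminosity relation, under which a difference of Δm magnitudes corresponds to a luminosity ratio of 10^(Δm/2.5):

```python
# Quick check of the "nearly 4000" luminosity factor quoted above:
# a difference of dm magnitudes corresponds to a ratio of 10**(dm / 2.5).
delta_m = (-8) - (-17)          # nova (~ -8) versus supernova (~ -17)
print(10 ** (delta_m / 2.5))    # ~ 3981
```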
A very wide variety of objects have been classified as ILOTs in the astronomical literature. Kashi and Soker proposed a model for the outburst of ASASSN-15qi, [ 5 ] in which a Jupiter-mass planet is tidally destroyed and accreted onto a young main sequence star. [ 6 ] Luminous red novae , believed to be caused by the merger of two stars, are classified as ILOTs. [ 7 ] Some luminous blue variables , such as η Car , have been classified as ILOTs. [ 8 ] Some objects which have been classified as failed supernovae may be ILOTs. [ 9 ] The common thread tying all of these objects together is a transfer of a large amount of mass (0.001 M ⊙ to a few M ⊙ ) from a planet or star to a companion star, over a short period of time, leading to a massive eruption. That large range in accretion mass explains the large range in ILOT event brightness. [ 10 ] | https://en.wikipedia.org/wiki/Intermediate_luminosity_optical_transient |
Intermediate mesoderm or intermediate mesenchyme is a narrow section of the mesoderm (one of the three primary germ layers ) located between the paraxial mesoderm and the lateral plate of the developing embryo . [ 1 ] The intermediate mesoderm develops into vital parts of the urogenital system ( kidneys , gonads and respective tracts).
Factors regulating the formation of the intermediate mesoderm are not fully understood. It is believed that bone morphogenic proteins , or BMPs, specify regions of growth along the dorsal-ventral axis of the mesoderm and play a central role in formation of the intermediate mesoderm. [ 2 ] Vg1/ Nodal signalling is an identified regulator of intermediate mesoderm formation acting through BMP signalling. [ 3 ] Excess Vg1/Nodal signalling during early gastrulation stages results in expansion of the intermediate mesoderm at the expense of the adjacent paraxial mesoderm, whereas inhibition of Vg1/Nodal signalling represses intermediate mesoderm formation. [ 4 ] A link has been established between Vg1/Nodal signalling and BMP signalling, whereby Vg1/Nodal signalling regulates intermediate mesoderm formation by modulating the growth-inducing effects of BMP signalling. [ 4 ]
Other necessary markers of intermediate mesoderm induction include the odd-skipped related gene ( Osr1 ) and the paired-box-2 gene ( Pax2 ), which require intermediate levels of BMP signalling to activate. [ 3 ] Markers of early intermediate mesoderm formation are often not exclusive to the intermediate mesoderm. This can be seen in early stages of intermediate mesoderm differentiation, where higher levels of BMP stimulate growth of lateral plate tissue, whilst lower concentrations lead to paraxial mesoderm and somite formation. [ 5 ] Expression of Osr1 , which encodes a zinc-finger DNA-binding protein, and of the LIM-type homeobox gene ( Lhx1 ) overlaps the intermediate mesoderm as well as the lateral plate. Osr1 has expression domains encompassing the entire length of the anterior-posterior (AP) axis from the first somites. It is not until the 4th-8th somite stage that markers with greater specificity to the intermediate mesoderm are identified, including Pax2/8 genes activated from the 6th somite (Bouchard, 2002). Lhx1 expression also becomes more restricted to the intermediate mesoderm. [ 1 ] Genetic analyses in animal studies show that Lhx1 , Osr1 and Pax2/8 signalling are all critical in specification of the intermediate mesoderm into its early derivatives. [ 5 ]
As development proceeds, the intermediate mesoderm differentiates sequentially along the anterior-posterior axis into three successive stages of the early mammalian and avian urogenital system, named pronephros , mesonephros and metanephros respectively ( anamniote embryos form only a pronephros and mesonephros). [ 2 ] The intermediate mesoderm will eventually develop into the kidney and parts of both male and female reproductive systems.
Early kidney structures include the pronephros and mesonephros, whose complexity, size and duration can vary greatly between vertebrate species. [ 1 ] The adult kidney, also referred to as the metanephric kidney , forms at the posterior end of the intermediate mesoderm after the degeneration of previous, less complex kidney structures. [ 1 ]
During early development (approximately day 22 in humans ), the pronephric duct forms from the intermediate mesoderm, ventral to the anterior somites. The cells of the pronephric duct migrate caudally whilst inducing adjacent mesenchyme to form the tubules of the initial kidney-like structure called the pronephros. [ 6 ] This process is regulated by Pax2/8 markers. [ 7 ] The pronephros is active in adult forms of some primitive fish and acts as the primary excretory system in amphibian larvae and embryonic forms of more advanced fish . [ 8 ] In mammals, however, the pronephric tubules and the anterior portion of the pronephric duct degenerate by 3.5 weeks, to be succeeded by the mesonephros, the embryonic kidney. [ 6 ]
The mesonephros consists of a set of new tubules, formed from the lateral and ventral sides of the gonadal ridge, that join the cloaca . [ 5 ] The mesonephros functions between the 6th and 10th weeks of embryological life in mammals as a temporary kidney, but serves as the permanent excretory organ of aquatic vertebrates. By 8 weeks post- conception , the human mesonephros reaches maximum size and begins to regress, with complete regression occurring by week 16. [ 6 ] Despite its transiency, the mesonephros is crucial for the development of structures such as the Wolffian duct (or mesonephric duct), which in turn gives rise to the ureteric bud of the metanephric kidney. [ 9 ]
The permanent kidney of amniotes , the metanephros, develops during the 10th week in human embryos and is formed by the reciprocal interactions of the metanephrogenic blastema (or metanephrogenic mesenchyme) and the ureteric bud. [ 6 ] Glial cell line-derived neurotrophic factor (GDNF) secreted by the metanephrogenic blastema activates the receptor tyrosine kinase RET , via the co-receptor GFRα1, and triggers outgrowth of Ret-positive cells from the nephric duct towards the GDNF signal, promoting ureteric bud outgrowth and invasion. [ 1 ] Once the bud invades the metanephrogenic blastema, a permissive signal in the form of Wnt proteins is activated and stimulates the condensation of metanephric mesenchymal cells around the ureteric bud tips, beginning the polarisation of the blastema to generate the epithelial cells of parts of the nephron : the proximal tubules , loops of Henle and the distal convoluted tubules . [ 1 ] The ureteric bud secretes FGF2 (fibroblast growth factor 2) and BMP7 (bone morphogenic protein 7) to prevent apoptosis in the kidney mesenchyme. [ 2 ] Condensing mesenchyme then secretes paracrine factors that mediate branching of the ureteric bud to give rise to the ureter and collecting duct of the adult kidney. [ 10 ]
Wilms' tumor (WT), also known as nephroblastoma, is an embryonic tumor originating from metanephric blastemal cells that are incapable of completing the mesenchymal-epithelial transition (MET), a crucial process during kidney differentiation involving the transition from a multipolar, spindle-shaped mesenchymal cell to a planar assembly of polarized epithelial cells. [ 11 ] As a consequence, WTs have a triphasic histology composed of three morphogenically distinct cell types: undifferentiated blastemal cells, epithelial cells, and stromal cells. [ 11 ] The Wnt/ βcatenin signalling pathway is crucial for initiating MET, where specifically the WNT4 protein is required for induction of epithelial renal vesicles and the transition from mesenchymal to epithelial cells. [ 12 ] WTs are often a result of genetic deletions or inactivating mutations in WT1 (Wilms tumor 1), which subsequently inhibit Wnt/βcatenin signalling and prevent MET progression. [ 11 ] [ 12 ]
Persistent Müllerian duct syndrome (PMDS) is a congenital disorder of male sexual development and is a form of pseudohermaphroditism . Males with PMDS retain normal male reproductive organs and external genitalia , but also possess internal female reproductive organs such as the uterus and fallopian tubes . [ 13 ] PMDS is primarily caused by a mutation in the anti-Müllerian hormone (AMH) gene (PMDS Type 1) or AMHR2 gene (PMDS Type 2). In PMDS Type 1, AMH is either not produced, produced in deficient quantities, defective, or secreted at the wrong critical time for male differentiation. PMDS Type 2 is a result of AMH receptor insensitivity to AMH molecules. [ 14 ] In a smaller percentage of cases, the cause of PMDS is not fully understood but is related to complex malformations of the urogenital region and paramesonephric ducts during male gonadal development. [ 13 ] | https://en.wikipedia.org/wiki/Intermediate_mesoderm |
In mathematical analysis , the intermediate value theorem states that if f {\displaystyle f} is a continuous function whose domain contains the interval [ a , b ] , then it takes on any given value between f ( a ) {\displaystyle f(a)} and f ( b ) {\displaystyle f(b)} at some point within the interval.
This has two important corollaries :
This captures an intuitive property of continuous functions over the real numbers : given f {\displaystyle f} continuous on [ 1 , 2 ] {\displaystyle [1,2]} with the known values f ( 1 ) = 3 {\displaystyle f(1)=3} and f ( 2 ) = 5 {\displaystyle f(2)=5} , then the graph of y = f ( x ) {\displaystyle y=f(x)} must pass through the horizontal line y = 4 {\displaystyle y=4} while x {\displaystyle x} moves from 1 {\displaystyle 1} to 2 {\displaystyle 2} . It represents the idea that the graph of a continuous function on a closed interval can be drawn without lifting a pencil from the paper.
The intermediate value theorem states the following:
Consider an interval I = [ a , b ] {\displaystyle I=[a,b]} of real numbers R {\displaystyle \mathbb {R} } and a continuous function f : I → R {\displaystyle f\colon I\to \mathbb {R} } . Then
Remark: Version II states that the set of function values has no gap. For any two function values c , d ∈ f ( I ) {\displaystyle c,d\in f(I)} with c < d {\displaystyle c<d} all points in the interval [ c , d ] {\displaystyle {\bigl [}c,d{\bigr ]}} are also function values, [ c , d ] ⊆ f ( I ) . {\displaystyle {\bigl [}c,d{\bigr ]}\subseteq f(I).} A subset of the real numbers with no internal gap is an interval. Version I is naturally contained in Version II .
The theorem depends on, and is equivalent to, the completeness of the real numbers . The intermediate value theorem does not apply to the rational numbers Q because gaps exist between rational numbers; irrational numbers fill those gaps. For example, the function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} for x ∈ Q {\displaystyle x\in \mathbb {Q} } satisfies f ( 0 ) = 0 {\displaystyle f(0)=0} and f ( 2 ) = 4 {\displaystyle f(2)=4} . However, there is no rational number x {\displaystyle x} such that f ( x ) = 2 {\displaystyle f(x)=2} , because 2 {\displaystyle {\sqrt {2}}} is an irrational number.
Despite the above, there is a version of the intermediate value theorem for polynomials over a real closed field ; see the Weierstrass Nullstellensatz .
The theorem may be proven as a consequence of the completeness property of the real numbers as follows: [ 3 ]
We shall prove the first case, f ( a ) < u < f ( b ) {\displaystyle f(a)<u<f(b)} . The second case is similar.
Let S {\displaystyle S} be the set of all x ∈ [ a , b ] {\displaystyle x\in [a,b]} such that f ( x ) < u {\displaystyle f(x)<u} . Then S {\displaystyle S} is non-empty since a {\displaystyle a} is an element of S {\displaystyle S} . Since S {\displaystyle S} is non-empty and bounded above by b {\displaystyle b} , by completeness, the supremum c = sup S {\displaystyle c=\sup S} exists. That is, c {\displaystyle c} is the smallest number that is greater than or equal to every member of S {\displaystyle S} .
Note that, due to the continuity of f {\displaystyle f} at a {\displaystyle a} , we can keep f ( x ) {\displaystyle f(x)} within any ε > 0 {\displaystyle \varepsilon >0} of f ( a ) {\displaystyle f(a)} by keeping x {\displaystyle x} sufficiently close to a {\displaystyle a} . Since f ( a ) < u {\displaystyle f(a)<u} is a strict inequality, consider the implication when ε {\displaystyle \varepsilon } is the distance between u {\displaystyle u} and f ( a ) {\displaystyle f(a)} . No x {\displaystyle x} sufficiently close to a {\displaystyle a} can then make f ( x ) {\displaystyle f(x)} greater than or equal to u {\displaystyle u} , which means there are values greater than a {\displaystyle a} in S {\displaystyle S} . A more detailed proof goes like this:
Choose ε = u − f ( a ) > 0 {\displaystyle \varepsilon =u-f(a)>0} . Then ∃ δ > 0 {\displaystyle \exists \delta >0} such that ∀ x ∈ [ a , b ] {\displaystyle \forall x\in [a,b]} , | x − a | < δ ⟹ | f ( x ) − f ( a ) | < u − f ( a ) ⟹ f ( x ) < u . {\displaystyle |x-a|<\delta \implies |f(x)-f(a)|<u-f(a)\implies f(x)<u.} Consider the interval [ a , min ( a + δ , b ) ) = I 1 {\displaystyle [a,\min(a+\delta ,b))=I_{1}} . Notice that I 1 ⊆ [ a , b ] {\displaystyle I_{1}\subseteq [a,b]} and every x ∈ I 1 {\displaystyle x\in I_{1}} satisfies the condition | x − a | < δ {\displaystyle |x-a|<\delta } . Therefore for every x ∈ I 1 {\displaystyle x\in I_{1}} we have f ( x ) < u {\displaystyle f(x)<u} . Hence c {\displaystyle c} cannot be a {\displaystyle a} .
Likewise, due to the continuity of f {\displaystyle f} at b {\displaystyle b} , we can keep f ( x ) {\displaystyle f(x)} within any ε > 0 {\displaystyle \varepsilon >0} of f ( b ) {\displaystyle f(b)} by keeping x {\displaystyle x} sufficiently close to b {\displaystyle b} . Since u < f ( b ) {\displaystyle u<f(b)} is a strict inequality, consider the similar implication when ε {\displaystyle \varepsilon } is the distance between u {\displaystyle u} and f ( b ) {\displaystyle f(b)} . Every x {\displaystyle x} sufficiently close to b {\displaystyle b} must then make f ( x ) {\displaystyle f(x)} greater than u {\displaystyle u} , which means there are values smaller than b {\displaystyle b} that are upper bounds of S {\displaystyle S} . A more detailed proof goes like this:
Choose ε = f ( b ) − u > 0 {\displaystyle \varepsilon =f(b)-u>0} . Then ∃ δ > 0 {\displaystyle \exists \delta >0} such that ∀ x ∈ [ a , b ] {\displaystyle \forall x\in [a,b]} , | x − b | < δ ⟹ | f ( x ) − f ( b ) | < f ( b ) − u ⟹ f ( x ) > u . {\displaystyle |x-b|<\delta \implies |f(x)-f(b)|<f(b)-u\implies f(x)>u.} Consider the interval ( max ( a , b − δ ) , b ] = I 2 {\displaystyle (\max(a,b-\delta ),b]=I_{2}} . Notice that I 2 ⊆ [ a , b ] {\displaystyle I_{2}\subseteq [a,b]} and every x ∈ I 2 {\displaystyle x\in I_{2}} satisfies the condition | x − b | < δ {\displaystyle |x-b|<\delta } . Therefore for every x ∈ I 2 {\displaystyle x\in I_{2}} we have f ( x ) > u {\displaystyle f(x)>u} . Hence c {\displaystyle c} cannot be b {\displaystyle b} .
With c ≠ a {\displaystyle c\neq a} and c ≠ b {\displaystyle c\neq b} , it must be the case c ∈ ( a , b ) {\displaystyle c\in (a,b)} . Now we claim that f ( c ) = u {\displaystyle f(c)=u} .
Fix some ε > 0 {\displaystyle \varepsilon >0} . Since f {\displaystyle f} is continuous at c {\displaystyle c} , ∃ δ 1 > 0 {\displaystyle \exists \delta _{1}>0} such that ∀ x ∈ [ a , b ] {\displaystyle \forall x\in [a,b]} , | x − c | < δ 1 ⟹ | f ( x ) − f ( c ) | < ε {\displaystyle |x-c|<\delta _{1}\implies |f(x)-f(c)|<\varepsilon } .
Since c ∈ ( a , b ) {\displaystyle c\in (a,b)} and ( a , b ) {\displaystyle (a,b)} is open, ∃ δ 2 > 0 {\displaystyle \exists \delta _{2}>0} such that ( c − δ 2 , c + δ 2 ) ⊆ ( a , b ) {\displaystyle (c-\delta _{2},c+\delta _{2})\subseteq (a,b)} . Set δ = min ( δ 1 , δ 2 ) {\displaystyle \delta =\min(\delta _{1},\delta _{2})} . Then we have f ( x ) − ε < f ( c ) < f ( x ) + ε {\displaystyle f(x)-\varepsilon <f(c)<f(x)+\varepsilon } for all x ∈ ( c − δ , c + δ ) {\displaystyle x\in (c-\delta ,c+\delta )} . By the properties of the supremum, there exists some a ∗ ∈ ( c − δ , c ] {\displaystyle a^{*}\in (c-\delta ,c]} that is contained in S {\displaystyle S} , and so f ( c ) < f ( a ∗ ) + ε < u + ε . {\displaystyle f(c)<f(a^{*})+\varepsilon <u+\varepsilon .} Picking a ∗ ∗ ∈ ( c , c + δ ) {\displaystyle a^{**}\in (c,c+\delta )} , we know that a ∗ ∗ ∉ S {\displaystyle a^{**}\not \in S} because c {\displaystyle c} is the supremum of S {\displaystyle S} . This means that f ( c ) > f ( a ∗ ∗ ) − ε ≥ u − ε . {\displaystyle f(c)>f(a^{**})-\varepsilon \geq u-\varepsilon .} Both inequalities u − ε < f ( c ) < u + ε {\displaystyle u-\varepsilon <f(c)<u+\varepsilon } are valid for all ε > 0 {\displaystyle \varepsilon >0} , from which we deduce f ( c ) = u {\displaystyle f(c)=u} as the only possible value, as stated.
We will only prove the case of f ( a ) < u < f ( b ) {\displaystyle f(a)<u<f(b)} , as the f ( a ) > u > f ( b ) {\displaystyle f(a)>u>f(b)} case is similar. [ 4 ]
Define g ( x ) = f ( x ) − u {\displaystyle g(x)=f(x)-u} , which is equivalent to f ( x ) = g ( x ) + u {\displaystyle f(x)=g(x)+u} and lets us rewrite f ( a ) < u < f ( b ) {\displaystyle f(a)<u<f(b)} as g ( a ) < 0 < g ( b ) {\displaystyle g(a)<0<g(b)} ; we then have to prove that g ( c ) = 0 {\displaystyle g(c)=0} for some c ∈ [ a , b ] {\displaystyle c\in [a,b]} , which is more intuitive. We further define the set S = { x ∈ [ a , b ] : g ( x ) ≤ 0 } {\displaystyle S=\{x\in [a,b]:g(x)\leq 0\}} . Because g ( a ) < 0 {\displaystyle g(a)<0} , we know that a ∈ S {\displaystyle a\in S} , so S {\displaystyle S} is not empty. Moreover, as S ⊆ [ a , b ] {\displaystyle S\subseteq [a,b]} , we know that S {\displaystyle S} is bounded and non-empty, so by completeness the supremum c = sup ( S ) {\displaystyle c=\sup(S)} exists.
There are three cases for the value of g ( c ) {\displaystyle g(c)} , namely g ( c ) < 0 , g ( c ) > 0 {\displaystyle g(c)<0,g(c)>0} and g ( c ) = 0 {\displaystyle g(c)=0} . For contradiction, let us assume that g ( c ) < 0 {\displaystyle g(c)<0} . Then, by the definition of continuity, for ϵ = 0 − g ( c ) {\displaystyle \epsilon =0-g(c)} , there exists a δ > 0 {\displaystyle \delta >0} such that x ∈ ( c − δ , c + δ ) {\displaystyle x\in (c-\delta ,c+\delta )} implies that | g ( x ) − g ( c ) | < − g ( c ) {\displaystyle |g(x)-g(c)|<-g(c)} , which is equivalent to g ( x ) < 0 {\displaystyle g(x)<0} . Note that c < b {\displaystyle c<b} , since g ( b ) > 0 {\displaystyle g(b)>0} while g ( c ) < 0 {\displaystyle g(c)<0} . If we choose x = c + δ N {\displaystyle x=c+{\frac {\delta }{N}}} , where N > δ b − c + 1 {\displaystyle N>{\frac {\delta }{b-c}}+1} , then as 1 < N {\displaystyle 1<N} , x < c + δ {\displaystyle x<c+\delta } , from which we get g ( x ) < 0 {\displaystyle g(x)<0} and c < x < b {\displaystyle c<x<b} , so x ∈ S {\displaystyle x\in S} . However, x > c {\displaystyle x>c} , contradicting the fact that c {\displaystyle c} is an upper bound of S {\displaystyle S} ; so g ( c ) ≥ 0 {\displaystyle g(c)\geq 0} . Assume then that g ( c ) > 0 {\displaystyle g(c)>0} . We similarly choose ϵ = g ( c ) − 0 {\displaystyle \epsilon =g(c)-0} and know that there exists a δ > 0 {\displaystyle \delta >0} such that x ∈ ( c − δ , c + δ ) {\displaystyle x\in (c-\delta ,c+\delta )} implies | g ( x ) − g ( c ) | < g ( c ) {\displaystyle |g(x)-g(c)|<g(c)} . We can rewrite this as − g ( c ) < g ( x ) − g ( c ) < g ( c ) {\displaystyle -g(c)<g(x)-g(c)<g(c)} , which implies that g ( x ) > 0 {\displaystyle g(x)>0} on ( c − δ , c + δ ) {\displaystyle (c-\delta ,c+\delta )} . If we now choose x = c − δ 2 {\displaystyle x=c-{\frac {\delta }{2}}} , then g ( x ) > 0 {\displaystyle g(x)>0} and a < x < c {\displaystyle a<x<c} . Since no point of ( c − δ , c + δ ) {\displaystyle (c-\delta ,c+\delta )} lies in S {\displaystyle S} and c {\displaystyle c} is an upper bound of S {\displaystyle S} , it follows that x {\displaystyle x} is also an upper bound for S {\displaystyle S} . However, x < c {\displaystyle x<c} , which contradicts the leastness of the least upper bound c {\displaystyle c} , which means that g ( c ) > 0 {\displaystyle g(c)>0} is impossible. Combining both results, g ( c ) = 0 {\displaystyle g(c)=0} , that is f ( c ) = u {\displaystyle f(c)=u} , is the only remaining possibility.
Remark: The intermediate value theorem can also be proved using the methods of non-standard analysis , which places "intuitive" arguments involving infinitesimals on a rigorous [ clarification needed ] footing. [ 5 ]
A form of the theorem was postulated as early as the 5th century BCE, in the work of Bryson of Heraclea on squaring the circle . Bryson argued that, as circles larger than and smaller than a given square both exist, there must exist a circle of equal area. [ 6 ] The theorem was first proved by Bernard Bolzano in 1817. Bolzano used the following formulation of the theorem: [ 7 ]
Let f , φ {\displaystyle f,\varphi } be continuous functions on the interval between α {\displaystyle \alpha } and β {\displaystyle \beta } such that f ( α ) < φ ( α ) {\displaystyle f(\alpha )<\varphi (\alpha )} and f ( β ) > φ ( β ) {\displaystyle f(\beta )>\varphi (\beta )} . Then there is an x {\displaystyle x} between α {\displaystyle \alpha } and β {\displaystyle \beta } such that f ( x ) = φ ( x ) {\displaystyle f(x)=\varphi (x)} .
The equivalence between this formulation and the modern one can be shown by setting φ {\displaystyle \varphi } to the appropriate constant function . Augustin-Louis Cauchy provided the modern formulation and a proof in 1821. [ 8 ] Both were inspired by the goal of formalizing the analysis of functions and the work of Joseph-Louis Lagrange . The idea that continuous functions possess the intermediate value property has an earlier origin. Simon Stevin proved the intermediate value theorem for polynomials (using a cubic as an example) by providing an algorithm for constructing the decimal expansion of the solution. The algorithm iteratively subdivides the interval into 10 parts, producing an additional decimal digit at each step of the iteration. [ 9 ] Before the formal definition of continuity was given, the intermediate value property was given as part of the definition of a continuous function. Proponents include Louis Arbogast , who assumed the functions to have no jumps, satisfy the intermediate value property and have increments whose sizes corresponded to the sizes of the increments of the variable. [ 10 ] Earlier authors held the result to be intuitively obvious and requiring no proof. The insight of Bolzano and Cauchy was to define a general notion of continuity (in terms of infinitesimals in Cauchy's case and using real inequalities in Bolzano's case), and to provide a proof based on such definitions.
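A sketch of a Stevin-style digit-by-digit search (a simplified reconstruction, not Stevin's original procedure) shows how the intermediate value property justifies such an algorithm: each pass splits the bracketing interval into ten parts and keeps the subinterval over which the function changes sign, yielding one further decimal digit:

```python
# Sketch of a Stevin-style decimal root search: repeatedly split the current
# interval into 10 parts and keep the subinterval across which f changes sign,
# gaining one decimal digit per step (assumes f continuous with f(a) < 0 < f(b)).
def stevin_root(f, a, b, digits=6):
    for _ in range(digits):
        step = (b - a) / 10
        for k in range(10):
            lo = a + k * step
            if f(lo) <= 0 <= f(lo + step):
                a, b = lo, lo + step
                break
    return a, b

lo, hi = stevin_root(lambda x: x**3 - 2, 1.0, 2.0)   # cube root of 2
print(lo, hi)   # brackets 1.259921... to six decimal places
```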
A Darboux function is a real-valued function f that has the "intermediate value property," i.e., that satisfies the conclusion of the intermediate value theorem: for any two values a and b in the domain of f , and any y between f ( a ) and f ( b ) , there is some c between a and b with f ( c ) = y . The intermediate value theorem says that every continuous function is a Darboux function. However, not every Darboux function is continuous; i.e., the converse of the intermediate value theorem is false.
As an example, take the function f : [0, ∞) → [−1, 1] defined by f ( x ) = sin(1/ x ) for x > 0 and f (0) = 0 . This function is not continuous at x = 0 because the limit of f ( x ) as x tends to 0 does not exist; yet the function has the intermediate value property. Another, more complicated example is given by the Conway base 13 function .
In fact, Darboux's theorem states that all functions that result from the differentiation of some other function on some interval have the intermediate value property (even though they need not be continuous).
Historically, this intermediate value property has been suggested as a definition for continuity of real-valued functions; [ 11 ] this definition was not adopted.
The Poincaré–Miranda theorem is a generalization of the intermediate value theorem from a (one-dimensional) interval to a (two-dimensional) rectangle, or more generally, to an n -dimensional cube .
Vrahatis [ 12 ] presents a similar generalization to triangles, or more generally, n -dimensional simplices . Let D n be an n -dimensional simplex with n +1 vertices denoted by v 0 ,..., v n . Let F =( f 1 ,..., f n ) be a continuous function from D n to R n , that never equals 0 on the boundary of D n . Suppose F satisfies the following conditions:
Then there is a point z in the interior of D n on which F ( z )=(0,...,0).
It is possible to normalize the f i such that f i ( v i )>0 for all i ; then the conditions become simpler:
The theorem can be proved based on the Knaster–Kuratowski–Mazurkiewicz lemma . It can be used for approximations of fixed points and zeros. [ 13 ]
The intermediate value theorem is closely linked to the topological notion of connectedness and follows from the basic properties of connected sets in metric spaces and connected subsets of R in particular:
In fact, connectedness is a topological property and (*) generalizes to topological spaces : If X {\displaystyle X} and Y {\displaystyle Y} are topological spaces, f : X → Y {\displaystyle f\colon X\to Y} is a continuous map, and X {\displaystyle X} is a connected space , then f ( X ) {\displaystyle f(X)} is connected. The preservation of connectedness under continuous maps can be thought of as a generalization of the intermediate value theorem, a property of continuous, real-valued functions of a real variable, to continuous functions in general spaces.
Recall the first version of the intermediate value theorem, stated previously:
Intermediate value theorem ( Version I ) — Consider a closed interval I = [ a , b ] {\displaystyle I=[a,b]} in the real numbers R {\displaystyle \mathbb {R} } and a continuous function f : I → R {\displaystyle f\colon I\to \mathbb {R} } . Then, if u {\displaystyle u} is a real number such that min ( f ( a ) , f ( b ) ) < u < max ( f ( a ) , f ( b ) ) {\displaystyle \min(f(a),f(b))<u<\max(f(a),f(b))} , there exists c ∈ ( a , b ) {\displaystyle c\in (a,b)} such that f ( c ) = u {\displaystyle f(c)=u} .
The intermediate value theorem is an immediate consequence of these two properties of connectedness: [ 14 ]
By (**) , I = [ a , b ] {\displaystyle I=[a,b]} is a connected set. It follows from (*) that the image, f ( I ) {\displaystyle f(I)} , is also connected. For convenience, assume that f ( a ) < f ( b ) {\displaystyle f(a)<f(b)} . Then once more invoking (**) , f ( a ) < u < f ( b ) {\displaystyle f(a)<u<f(b)} implies that u ∈ f ( I ) {\displaystyle u\in f(I)} , or f ( c ) = u {\displaystyle f(c)=u} for some c ∈ I {\displaystyle c\in I} . Since u ≠ f ( a ) , f ( b ) {\displaystyle u\neq f(a),f(b)} , c ∈ ( a , b ) {\displaystyle c\in (a,b)} must actually hold, and the desired conclusion follows. The same argument applies if f ( b ) < f ( a ) {\displaystyle f(b)<f(a)} , so we are done. Q.E.D.
The intermediate value theorem generalizes in a natural way: Suppose that X is a connected topological space and ( Y , <) is a totally ordered set equipped with the order topology , and let f : X → Y be a continuous map. If a and b are two points in X and u is a point in Y lying between f ( a ) and f ( b ) with respect to < , then there exists c in X such that f ( c ) = u . The original theorem is recovered by noting that R is connected and that its natural topology is the order topology.
The Brouwer fixed-point theorem is a related theorem that, in one dimension, gives a special case of the intermediate value theorem.
In constructive mathematics , the intermediate value theorem is not true. Instead, the weakened conclusion one must accept is that the value can only be located within a range that may be made arbitrarily small.
A similar result is the Borsuk–Ulam theorem , which says that a continuous map from the n {\displaystyle n} -sphere to Euclidean n {\displaystyle n} -space will always map some pair of antipodal points to the same place.
Take f {\displaystyle f} to be any continuous function on a circle. Draw a line through the center of the circle, intersecting it at two opposite points A {\displaystyle A} and B {\displaystyle B} . Define d {\displaystyle d} to be f ( A ) − f ( B ) {\displaystyle f(A)-f(B)} . If the line is rotated 180 degrees, the value − d will be obtained instead. Due to the intermediate value theorem there must be some intermediate rotation angle for which d = 0 , and as a consequence f ( A ) = f ( B ) at this angle.
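A numerical version of this argument (with an arbitrarily chosen continuous function, purely for illustration) locates such an angle by exploiting the sign change of d over a half turn and bisecting, which the intermediate value theorem justifies:

```python
# Numerical illustration of the rotating-diameter argument: d flips sign
# after a half turn, so the IVT gives an angle with f(A) = f(B).
# The function f on the circle is an arbitrary continuous example.
import math

def f(x, y):                          # any continuous function on the circle
    return x**3 + 0.5 * math.sin(3 * y) + 0.2 * y

def d(theta, r=1.0):
    ax, ay = r * math.cos(theta), r * math.sin(theta)
    return f(ax, ay) - f(-ax, -ay)    # B is the antipode of A

lo, hi = 0.0, math.pi                 # d(hi) = -d(lo), so a sign change exists
for _ in range(60):                   # bisection, justified by the IVT
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if d(lo) * d(mid) <= 0 else (mid, hi)

theta = (lo + hi) / 2
print(theta, abs(d(theta)) < 1e-9)    # an angle where f(A) ~ f(B)
```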
In general, for any continuous function whose domain is some closed convex n {\displaystyle n} -dimensional shape and any point inside the shape (not necessarily its center), there exist two antipodal points with respect to the given point whose functional value is the same.
The theorem also underpins the explanation of why rotating a wobbly table will bring it to stability (subject to certain easily met constraints). [ 16 ] | https://en.wikipedia.org/wiki/Intermediate_value_theorem |
An intermetallic (also called intermetallic compound , intermetallic alloy , ordered intermetallic alloy , long-range-ordered alloy ) is a type of metallic alloy that forms an ordered solid-state compound between two or more metallic elements. Intermetallics are generally hard and brittle, with good high-temperature mechanical properties. [ 1 ] [ 2 ] [ 3 ] They can be classified as stoichiometric or nonstoichiometric. [ 1 ]
The term "intermetallic compounds" applied to solid phases has long been in use. However, Hume-Rothery argued that the term is misleading, as it suggests a fixed stoichiometry and a clear decomposition into species . [ 4 ]
In 1967 Gustav Ernst Robert Schulze [ de ] defined intermetallic compounds as solid phases containing two or more metallic elements, with optionally one or more non-metallic elements, whose crystal structure differs from that of the other constituents . [ 5 ] This definition includes:
The definition of metal includes: [ citation needed ]
Homogeneous and heterogeneous solid solutions of metals, and interstitial compounds such as carbides and nitrides are excluded under this definition. However, interstitial intermetallic compounds are included, as are alloys of intermetallic compounds with a metal. [ citation needed ]
In common use, the research definition, including post-transition metals and metalloids , is extended to include compounds such as cementite , Fe 3 C. These compounds, sometimes termed interstitial compounds , can be stoichiometric , and share properties with the above intermetallic compounds. [ citation needed ]
The term intermetallic is used [ 6 ] to describe compounds involving two or more metals such as the cyclopentadienyl complex Cp 6 Ni 2 Zn 4 .
A B2 intermetallic compound has equal numbers of atoms of two metals such as aluminum and iron, arranged as two interpenetrating simple cubic lattices of the component metals. [ 7 ]
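The two interpenetrating simple cubic sublattices can be generated directly; the sketch below (with an illustrative lattice parameter, not a measured value) places one species on the cube corners and the other at the body centres of an FeAl-like B2 cell:

```python
# Sketch of a B2 (CsCl-type) cell such as FeAl: one species on the cube
# corners, the other at the body centre, i.e. two interpenetrating simple
# cubic sublattices. The lattice parameter below is only illustrative.
import numpy as np

a = 2.9e-10                               # cell edge in metres (illustrative)
al_sublattice = np.array([[0.0, 0.0, 0.0]])          # fractional coordinates
fe_sublattice = np.array([[0.5, 0.5, 0.5]])

def cartesian(fractional, n=2):
    """Replicate the cell n times along each axis and return positions."""
    shifts = np.array([[i, j, k] for i in range(n)
                                  for j in range(n)
                                  for k in range(n)], dtype=float)
    return a * (shifts[:, None, :] + fractional[None, :, :]).reshape(-1, 3)

al_atoms = cartesian(al_sublattice)       # a simple cubic array of Al
fe_atoms = cartesian(fe_sublattice)       # an identical array, offset by a/2
print(len(al_atoms), len(fe_atoms))       # equal numbers of each species
```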
Intermetallic compounds are generally brittle at room temperature and have high melting points. Cleavage or intergranular fracture modes are typical of intermetallics because of the limited number of independent slip systems required for plastic deformation. However, some intermetallics, such as Nb–15Al–40Ti, have ductile fracture modes. Others can exhibit improved ductility when alloyed with elements, such as boron, that increase grain boundary cohesion. [ 8 ] They may offer a compromise between ceramic and metallic properties when hardness and/or resistance to high temperatures is important enough to sacrifice some toughness and ease of processing. They can display desirable magnetic and chemical properties, due to their strong internal order and mixed ( metallic and covalent / ionic ) bonding, respectively. Intermetallics have given rise to various novel materials developments. [ citation needed ]
Examples include alnico and the hydrogen storage materials in nickel metal hydride batteries. Ni 3 Al , which is the hardening phase in the familiar nickel-base superalloys , and the various titanium aluminides have attracted interest for turbine blade applications; the latter are also used in small quantities for grain refinement of titanium alloys . Silicides , intermetallics involving silicon, serve as barrier and contact layers in microelectronics . [ 9 ] Others include:
The unintended formation of intermetallics can cause problems. For example, intermetallics of gold and aluminium can be a significant cause of wire bond failures in semiconductor devices and other microelectronics devices. The management of intermetallics is a major issue in the reliability of solder joints between electronic components. [ citation needed ]
Intermetallic particles often form during solidification of metallic alloys, and can be used as a dispersion strengthening mechanism. [ 1 ]
Examples of intermetallics through history include:
German type metal is described as breaking like glass, without bending, softer than copper, but more fusible than lead. [ 12 ] : 454 The chemical formula does not agree with the one above; however, the properties match with an intermetallic compound or an alloy of one. [ citation needed ] | https://en.wikipedia.org/wiki/Intermetallic |
Intermetallic particles form during solidification of metallic alloys.
Al–Si–Cu–Mg alloys form plate-like β-Al5FeSi intermetallic phases, as well as phases such as α-Al8Fe2Si and Al2Cu. The size and morphology of these intermetallic phases control the mechanical properties of these alloys, especially strength and ductility. [ 1 ] The size of these phases depends on the secondary dendrite arm spacing of the primary phase in the microstructure, [ 2 ] as well as on the Si content of the alloy. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
An in-situ synchrotron diffraction experiment [ 9 ] on the Elektron alloy WE43 (Mg4Y3Nd) shows that this alloy forms the following intermetallic phases: Mg12Nd, Mg14Y4Nd, and Mg24Y5.
| https://en.wikipedia.org/wiki/Intermetallic_particle |
In dynamical systems , intermittency is the irregular alternation of phases of apparently periodic and chaotic dynamics ( Pomeau–Manneville dynamics ), or different forms of chaotic dynamics (crisis-induced intermittency). [ 1 ] [ 2 ]
Experimentally, intermittency appears as long periods of almost periodic behavior interrupted by chaotic behavior. As control variables change, the chaotic behavior becomes more frequent until the system is fully chaotic. This progression is known as the intermittency route to chaos .
Pomeau and Manneville described three routes to intermittency where a nearly periodic system shows irregularly spaced bursts of chaos. [ 3 ] These (type I, II and III) correspond to the approach to a saddle-node bifurcation , a subcritical Hopf bifurcation , or an inverse period-doubling bifurcation . In the apparently periodic phases the behaviour is only nearly periodic, slowly drifting away from an unstable periodic orbit . Eventually the system gets far enough away from the periodic orbit to be affected by chaotic dynamics in the rest of the state space , until it gets close to the orbit again and returns to the nearly periodic behaviour. Since the time spent near the periodic orbit depends sensitively on how closely the system entered its vicinity (in turn determined by what happened during the chaotic period) the length of each phase is unpredictable.
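Type-I intermittency is easy to reproduce numerically with the logistic map just below the tangent (saddle-node) bifurcation at r = 1 + √8 ≈ 3.8284 that opens its period-3 window; the sketch below (with a threshold chosen for illustration) counts long nearly period-3 laminar stretches separated by chaotic bursts:

```python
# Type-I (Pomeau-Manneville) intermittency in the logistic map just below the
# tangent bifurcation at r = 1 + sqrt(8) ~ 3.8284 that opens the period-3
# window: long, nearly period-3 "laminar" stretches interrupted by bursts.
r = 3.8282                      # slightly below the bifurcation value
x = 0.5
laminar, lengths = 0, []
for _ in range(200_000):
    x3 = x
    for _ in range(3):          # third iterate: nearly fixed during a laminar phase
        x3 = r * x3 * (1 - x3)
    if abs(x3 - x) < 1e-3:      # third iterate barely moves -> laminar
        laminar += 1
    else:                       # chaotic burst: close off the laminar episode
        if laminar:
            lengths.append(laminar)
        laminar = 0
    x = r * x * (1 - x)

print(len(lengths), max(lengths))   # many laminar episodes, some very long
```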
Another kind, on-off intermittency, occurs when a previously transversally stable chaotic attractor with dimension less than the embedding space begins to lose stability. Near unstable orbits within the attractor, orbits can escape into the surrounding space, producing a temporary burst before returning to the attractor. [ 4 ]
In crisis-induced intermittency a chaotic attractor suffers a crisis , where two or more attractors cross the boundaries of each other's basin of attraction . As an orbit moves through the first attractor it can cross over the boundary and become attracted to the second attractor, where it will stay until its dynamics moves it across the boundary again.
Intermittent behaviour is commonly observed in fluid flows that are turbulent or near the transition to turbulence. In highly turbulent flows, intermittency is seen in the irregular dissipation of kinetic energy [ 5 ] and the anomalous scaling of velocity increments. [ 6 ] Understanding and modeling atmospheric flow and turbulence under such conditions are further complicated by “turbulence intermittency,” which manifests as periods of strong turbulent activity interspersed in a more quiescent airflow. [ 7 ] It is also seen in the irregular alternation between turbulent and non-turbulent fluid that appear in turbulent jets and other turbulent free shear flows. In pipe flow and other wall bounded shear flows, there are intermittent puffs that are central to the process of transition from laminar to turbulent flow. Intermittent behavior has also been experimentally demonstrated in circuit oscillators and chemical reactions. | https://en.wikipedia.org/wiki/Intermittency |
The intermittent inductive automatic train stop (also referred to as IIATS or just automatic train stop or ATS ) is a train protection system used in North American mainline railroad and rapid transit systems. It makes use of magnetic reluctance to trigger a passing train to take some sort of action. The system was developed in the 1920s by the General Railway Signal Company as an improvement on existing mechanical train stop systems and saw limited adoption before being overtaken by more advanced cab signaling and automatic train control systems. Despite this, the system remains in use today.
The technology works by having the state of a track mounted shoe read by a receiver mounted to a truck on the leading locomotive or car. In the standard implementation the shoe is mounted to the ties a few inches outside the right hand running rail, although in theory the shoe could be mounted anywhere on the ties. [ 1 ] The system is binary with the shoe presenting either an on or off state to the receiver. In order to be failsafe when the shoe is energized it presents an off state to the receiver, while the non-energized state presents an on state which triggers an action. This allows things like permanent speed restrictions or other hazards to be protected by non-active devices.
The receiver consists of a two coil electromagnet carefully aligned to pass about 1.5 inches above the surface of the inductor shoe. The inductor shoe consists of two metal plates set into a streamlined housing designed to deflect impacts of debris or misaligned receivers. The metal plates are connected through a choke circuit in the body of the shoe. When the choke circuit is open magnetic flux in the receiver's primary coil is able to induce a voltage in the receiver's secondary coil which in turn triggers an action in the locomotive. When the circuit is closed the choke eliminates the magnetic field and the voltage induced by it allowing the locomotive to pass without activation. Where unconditional activation was desired specially shaped metal plates could be used in place of a fully functional shoe, however the design of the system can result in accidental activations when the train passes over switches or other metal objects in the track area.
The most common use case for the ATS system was to alert the railroad engineer of an impending hazard and if the alert was not acknowledged, stop the train by means of a full service application of the brakes . When attached to signals the shoe would be energized when the signal was displaying a clear indication. Any other signal indication would de-energize the shoe and trigger an alarm in the cab. If the engineer did not cancel the alarm within 5–8 seconds a penalty brake application would be initiated and could not be reset until the train came to a complete stop. [ 1 ] Unlike mechanical train stops or other train stop systems, IIATS was not generally used to automatically stop a train if it passed a stop signal and in practice could not be used for this purpose as the shoes were placed only a few feet from the signal they protected and would not present sufficient braking distance for the train to stop.
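The alert-and-penalty behaviour described above amounts to a simple state machine. The following sketch is purely illustrative and is not vendor code; the 6-second acknowledgement window (a value inside the cited 5–8 second range), the class name and the method names are all assumptions made for the example.

```python
class IIATSCabUnit:
    """Hypothetical model of the IIATS cab logic described in the text."""

    ACK_WINDOW_S = 6.0            # assumed value within the cited 5-8 s range

    def __init__(self):
        self.alarm_since = None   # time at which the cab alarm started
        self.penalty_brake = False

    def pass_inductor(self, shoe_energized, now):
        # An energized shoe (clear signal) presents an "off" state: no action.
        # A de-energized shoe presents an "on" state and starts the alarm.
        if not shoe_energized and not self.penalty_brake:
            self.alarm_since = now

    def acknowledge(self):
        # The engineer cancels the alarm, but only before the penalty applies.
        if not self.penalty_brake:
            self.alarm_since = None

    def tick(self, now, speed_mph):
        # An unacknowledged alarm past the window latches a full-service
        # penalty brake that resets only once the train has fully stopped.
        if self.alarm_since is not None and now - self.alarm_since > self.ACK_WINDOW_S:
            self.penalty_brake = True
        if self.penalty_brake and speed_mph == 0:
            self.penalty_brake = False
            self.alarm_since = None
```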
On bi-directionally signaled lines two shoes would be needed, one for each direction of travel as locomotives would only have a sensor to detect the shoes on one side of the train. The receivers can also be designed for easy removal to prevent damage when operating in non-equipped territory or to cut costs when only a small portion of the railroad requires ATS equipped locomotives. Inert inductors are sometimes placed in advance of certain speed restrictions as an alert or at engine terminals to test the functionality of the ATS system.
On a few light rail lines IIATS has been employed in a manner similar to mechanical train stops, stopping the train if it passes an absolute stop signal. It is useful where light rail shares tracks with mainline railroad trains, as mechanical trips may be damaged by or interfere with freight operations, and because light rail vehicles can be brought to a stop much more quickly than a mainline railroad train without requiring complex signal overlaps.
Starting in the 1930s the US Interstate Commerce Commission , in its role as a federal railroad regulator, encouraged railroads to adopt new safety technologies to decrease the rate of railroad accidents. IIATS was offered by the General Railway Signal Company of Rochester, NY as one such technology and it was adopted by the New York Central railroad for use on its high speed Water Level Route between New York and Chicago and on a number of other lines. The Southern Railway also chose to adopt ATS on most of its main lines eventually covering 2700 route miles. In addition the Chicago and North Western Railway installed the system on some of its Chicago area commuter lines.
After the Naperville train disaster caused by a missed signal, the ICC required additional technical safety systems for any train traveling at or above 80 mph, with the rule taking effect in 1951. For those railroads still interested in high speed operations, IIATS met the minimum ICC requirements at a lower cost than other cab signaling or automatic train control systems; however, with rail travel facing increased competition from cars and airplanes, most railroads simply chose to accept the new speed limit. Only the Atchison, Topeka and Santa Fe chose to fully equip its Chicago to Los Angeles and Los Angeles to San Diego main lines in support of the Super Chief and other premier high speed trains.
IIATS installations reached their peak in 1954 with a total of 8650 road miles, 14400 track miles, and 3850 locomotives equipped with the system. However, with the collapse of long distance passenger rail travel and the general North American railroad industry malaise, in 1971 the bankrupt Penn Central was permitted to remove IIATS from its Water Level Route, along with the Southern and other railroads with test or pilot IIATS systems. Even the ATSF and successor BNSF were gradually allowed by regulators to remove IIATS from parts of previously equipped lines due to the reduced passenger traffic. At the dawn of the 21st century the only IIATS equipped lines were the MetroLink and Coaster line between San Diego and Fullerton, [ 1 ] parts of the former ATSF Super Chief route in California, Arizona, New Mexico, Colorado, Kansas and Missouri, and the former Chicago and North Western Railway North Line and Northwest Line out of Chicago, operated by Union Pacific on behalf of Metra .
When the NJ Transit River Line opened in 2004 it featured a new IIATS system. This is a light rail system running on shared track with main line freight traffic, and IIATS is used to enforce a full stop at equipped signals instead of serving as a warning system.
Intermittent rhythmic delta activity ( IRDA ) is a type of brain wave abnormality found in electroencephalograms (EEG). [ 1 ]
It can be classified based on the area of the brain from which it originates, for example as frontal (FIRDA), occipital (OIRDA), or temporal (TIRDA) intermittent rhythmic delta activity.
It can have a number of different causes, some benign or unknown, but it is also commonly associated with lesions , tumors , and encephalopathies . [ 3 ] Association with periventricular white matter disease and cortical atrophy has been documented, and IRDA is more likely to show up during acute metabolic derangements such as uremia and hyperglycemia . [ 4 ]
This article about a medical condition affecting the nervous system is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Intermittent_rhythmic_delta_activity |
A piped water supply and distribution system is intermittent when water is available for less than 24 hours a day or not on all days of the week. [ 1 ] [ 2 ] Alongside continuity, the defining factors of service are water pressure and equity. [ 3 ] [ 4 ] At least 45 countries have intermittent water supply (IWS) systems. [ 5 ] It is contrasted with a continuous or " 24/7 " water supply, the service standard. [ 6 ] [ 7 ] No system is intentionally designed to be intermittent, but systems may become intermittent because of system overexpansion, leakage and other factors. [ 8 ] [ 9 ] As of 2022, there was no feasible method for modelling IWS, including no computer-aided tools. [ 1 ] Contamination issues can be associated with an intermittent water distribution system. [ 10 ] The global public health impact includes millions of cases of infections and diarrhea, and 1560 deaths annually. [ 11 ]
A continuous supply is not practical in all situations. [ 3 ] In the short term, an IWS may have some benefits. [ 12 ] These may include addressing demand with a limited supply in a more economical manner. [ 13 ] An intermittent supply may be temporary (e.g., when water reserves are low) or permanent (e.g., where the piped system cannot sustain a continuous supply). [ 6 ] Associated factors resulting from an intermittent supply include water extraction by users at the same time, resulting in low pressure and a possible higher peak demand. [ 14 ]
A large share of water supply systems around the world are intermittent; in other words, intermittent water supply is a norm. [ 15 ] [ 16 ] About 1.3 billion people have a piped supply that is intermittent, including large populations in Africa, Asia, and Latin America. [ 1 ] [ 14 ] This does not include those who do not get piped water at all, about 2.7 billion people. [ 1 ] Countries with intermittent supply in some areas and continuous supply in others include India [ 17 ] and South Africa. [ 18 ] In India, various cities are at various stages of constructing 24/7 supply systems, such as Chandigarh , [ 19 ] Delhi , [ 20 ] Shimla , [ 21 ] and Coimbatore . [ 22 ] In Cambodia, Phnom Penh increased coverage from 25% to 85% and duration from 10 to 24 hours a day between 1993 and 2004. [ 23 ]
Installation of storage and pumps at residences may offset the intermittency of the water supply. [ 6 ] Roof tanks are a common feature in countries where the water supply is intermittent. [ 24 ] In Jordan , most houses have one or more ground or roof tanks. An intermittent supply can be supplemented with other non-piped sources such as packaged drinking and cooking water bought from local shops or delivered to the house. [ 25 ] | https://en.wikipedia.org/wiki/Intermittent_water_supply |
Intermodulation ( IM ) or intermodulation distortion ( IMD ) is the amplitude modulation of signals containing two or more different frequencies , caused by nonlinearities or time variance in a system. The intermodulation between frequency components will form additional components at frequencies that are not just at harmonic frequencies ( integer multiples ) of either, like harmonic distortion , but also at the sum and difference frequencies of the original frequencies and at sums and differences of multiples of those frequencies.
Intermodulation is caused by non-linear behaviour of the signal processing (physical equipment or even algorithms) being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, or more approximately by a Taylor series . [ 1 ]
Practically all audio equipment has some non-linearity, so it will exhibit some amount of IMD, which however may be low enough to be imperceptible by humans. Due to the characteristics of the human auditory system , the same percentage of IMD is perceived as more bothersome when compared to the same amount of harmonic distortion. [ 2 ] [ 3 ]
Intermodulation is also usually undesirable in radio, as it creates unwanted spurious emissions , often in the form of sidebands . For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference , which can reduce audio clarity or increase spectrum usage.
IMD is only distinct from harmonic distortion in that the stimulus signal is different. The same nonlinear system will produce both total harmonic distortion (with a solitary sine wave input) and IMD (with more complex tones). In music, for instance, IMD is intentionally applied to electric guitars using overdriven amplifiers or effects pedals to produce new tones at sub harmonics of the tones being played on the instrument. See Power chord#Analysis .
IMD is also distinct from intentional modulation (such as a frequency mixer in superheterodyne receivers ) where signals to be modulated are presented to an intentional nonlinear element ( multiplied ). See non-linear mixers such as mixer diodes and even single- transistor oscillator-mixer circuits. However, while the intermodulation products of the received signal with the local oscillator signal are intended, superheterodyne mixers can, at the same time, also produce unwanted intermodulation effects from strong signals near in frequency to the desired signal that fall within the passband of the receiver.
A linear time-invariant system cannot produce intermodulation. If the input of a linear time-invariant system is a signal of a single frequency, then the output is a signal of the same frequency; only the amplitude and phase can differ from the input signal.
Non-linear systems generate harmonics in response to sinusoidal input, meaning that if the input of a non-linear system is a signal of a single frequency $f_a$, then the output is a signal which includes a number of integer multiples of the input frequency (i.e. some of $f_a, 2f_a, 3f_a, 4f_a, \ldots$).
Intermodulation occurs when the input to a non-linear system is composed of two or more frequencies. Consider an input signal that contains three frequency components at $f_a$, $f_b$, and $f_c$, which may be expressed as

$$x(t) = M_a \sin(2\pi f_a t + \phi_a) + M_b \sin(2\pi f_b t + \phi_b) + M_c \sin(2\pi f_c t + \phi_c)$$
where the $M$ and $\phi$ values are the amplitudes and phases of the three components, respectively.
We obtain our output signal, $y(t)$, by passing our input through a non-linear function $G$: $y(t) = G\big(x(t)\big)$.
$y(t)$ will contain the three frequencies of the input signal, $f_a$, $f_b$, and $f_c$ (which are known as the fundamental frequencies), as well as a number of linear combinations of the fundamental frequencies, each in the form

$$k_a f_a + k_b f_b + k_c f_c ,$$
where $k_a$, $k_b$, and $k_c$ are arbitrary integers which can assume positive or negative values. These are the intermodulation products (or IMPs ).
In general, each of these frequency components will have a different amplitude and phase, which depends on the specific non-linear function being used, and also on the amplitudes and phases of the original input components.
More generally, given an input signal containing an arbitrary number $N$ of frequency components $f_a, f_b, \ldots, f_N$, the output signal will contain a number of frequency components, each of which may be described by

$$k_a f_a + k_b f_b + \cdots + k_N f_N ,$$
where the coefficients $k_a, k_b, \ldots, k_N$ are arbitrary integer values.
The order $O$ of a given intermodulation product is the sum of the absolute values of the coefficients,

$$O = |k_a| + |k_b| + \cdots + |k_N| .$$
For example, in our original example above, third-order intermodulation products (IMPs) occur where $|k_a| + |k_b| + |k_c| = 3$.
In many radio and audio applications, odd-order IMPs are of most interest, as they fall within the vicinity of the original frequency components and may therefore interfere with the desired behaviour. For example, third-order intermodulation distortion ( IMD3 ) of a circuit can be seen by looking at a signal that is made up of two sine waves , one at $f_1$ and one at $f_2$. When you cube the sum of these sine waves you will get sine waves at various frequencies, including $2f_2 - f_1$ and $2f_1 - f_2$. If $f_1$ and $f_2$ are large but very close together, then $2f_2 - f_1$ and $2f_1 - f_2$ will be very close to $f_1$ and $f_2$.
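The appearance of these third-order products can be demonstrated numerically. The sketch below is not from the article; the sample rate, tone frequencies and the cubic distortion coefficient are arbitrary choices made for illustration.

```python
import numpy as np

# Two closely spaced tones through a memoryless cubic nonlinearity:
# third-order products appear at 2*f1 - f2 and 2*f2 - f1.
fs = 48000                       # sample rate in Hz (assumed)
f1, f2 = 1000.0, 1100.0          # input tone frequencies in Hz (assumed)
t = np.arange(fs) / fs           # one second of signal -> 1 Hz FFT bins
x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

y = x + 0.1 * x**3               # weakly nonlinear "device" (assumed model)

spectrum = np.abs(np.fft.rfft(y)) / len(y)

for f in (f1, f2, 2*f1 - f2, 2*f2 - f1):
    k = int(round(f))            # bin index; spacing is exactly 1 Hz here
    print(f"{f:7.0f} Hz : {spectrum[k]:.4f}")
```

Running this shows nonzero components at 900 Hz and 1200 Hz, close to the original 1000 Hz and 1100 Hz tones, exactly as described above.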
As explained in a previous section , intermodulation can only occur in non-linear systems. Non-linear systems are generally composed of active components, meaning that the components must be biased with an external power source which is not the input signal (i.e. the active components must be "turned on").
Passive intermodulation (PIM), however, occurs in passive devices (which may include cables, antennas etc.) that are subjected to two or more high power tones. [ 4 ] The PIM product is the result of the two (or more) high power tones mixing at device nonlinearities such as junctions of dissimilar metals or metal-oxide junctions, such as loose corroded connectors. The higher the signal amplitudes, the more pronounced the effect of the nonlinearities, and the more prominent the intermodulation that occurs — even though upon initial inspection, the system would appear to be linear and unable to generate intermodulation.
The requirement for "two or more high power tones" need not be discrete tones. Passive intermodulation can also occur between different frequencies (i.e. different "tones") within a single broadband carrier. These PIMs would show up as sidebands in a telecommunication signal, which interfere with adjacent channels and impede reception.
Passive intermodulation is a major concern in modern communication systems when a single antenna is used for both high power transmit signals and low power receive signals (or when a transmit antenna is in close proximity to a receive antenna). Although the power in the passive intermodulation signal is typically many orders of magnitude lower than the power of the transmit signal, it is often on the same order of magnitude as (and possibly higher than) the power of the receive signal. Therefore, if a passive intermodulation product finds its way into the receive path, it cannot be filtered or separated from the receive signal, and the receive signal is effectively swamped by it. [ 5 ]
Ferromagnetic materials are the most common materials to avoid and include ferrites, nickel, (including nickel plating) and steels (including some stainless steels). These materials exhibit hysteresis when exposed to reversing magnetic fields, resulting in PIM generation.
Passive intermodulation can also be generated in components with manufacturing or workmanship defects, such as cold or cracked solder joints or poorly made mechanical contacts. If these defects are exposed to high radio frequency currents, passive intermodulation can be generated. As a result, radio frequency equipment manufacturers perform factory PIM tests on components, to eliminate passive intermodulation caused by these design and manufacturing defects.
Passive intermodulation can also be inherent in the design of a high power radio frequency component where radio frequency current is forced to narrow channels or restricted.
In the field, passive intermodulation can be caused by components that were damaged in transit to the cell site, installation workmanship issues and by external passive intermodulation sources. Some of these include:
IEC 62037 is the international standard for passive intermodulation testing and gives specific details as to passive intermodulation measurement setups. The standard specifies the use of two +43 dBm (20 W) tones for the test signals for passive intermodulation testing. This power level has been used by radio frequency equipment manufacturers for more than a decade to establish PASS / FAIL specifications for radio frequency components.
Slew-induced distortion (SID) can produce intermodulation distortion (IMD) when the first signal is slewing (changing voltage) at the limit of the amplifier's power bandwidth product. This induces an effective reduction in gain, partially amplitude-modulating the second signal. If SID only occurs for a portion of the signal, it is called "transient" intermodulation distortion. [ 6 ]
Intermodulation distortion in audio is usually specified as the root mean square (RMS) value of the various sum-and-difference signals as a percentage of the original signal's root mean square voltage, although it may be specified in terms of individual component strengths, in decibels , as is common with radio frequency work. Audio system measurements (Audio IMD) include SMPTE standard RP120-1994 [ 6 ] where two signals (at 60 Hz and 7 kHz, with 4:1 amplitude ratios) are used for the test; many other standards (such as DIN, CCIF) use other frequencies and amplitude ratios. Opinion varies over the ideal ratio of test frequencies (e.g. 3:4, [ 7 ] or almost — but not exactly — 3:1 for example).
After feeding the equipment under test with low distortion input sinewaves, the output distortion can be measured by using an electronic filter to remove the original frequencies, or spectral analysis may be made using Fourier transformations in software or a dedicated spectrum analyzer , or when determining intermodulation effects in communications equipment, may be made using the receiver under test itself.
In radio applications, intermodulation may be measured as adjacent channel power ratio . Intermodulation signals in the GHz range generated by passive devices (PIM: passive intermodulation) are hard to test. Manufacturers of these scalar PIM instruments are Summitek and Rosenberger. The newest developments are PIM instruments that also measure the distance to the PIM source. Anritsu offers a radar-based solution with low accuracy, and Heuermann offers a frequency-converting vector network analyzer solution with high accuracy.
An intermolecular force ( IMF ; also secondary force ) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction
or repulsion which act between atoms and other types of neighbouring particles (e.g. atoms or ions ). Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond , involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. [ 1 ] Both sets of forces are essential parts of force fields frequently used in molecular mechanics .
The first reference to the nature of microscopic forces is found in Alexis Clairaut 's work Théorie de la figure de la Terre, published in Paris in 1743. [ 2 ] Other scientists who have contributed to the investigation of microscopic forces include: Laplace , Gauss , Maxwell , Boltzmann and Pauling .
Attractive intermolecular forces are categorized into the following types:
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity , pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials , such as the Mie potential , Buckingham potential or Lennard-Jones potential .
In the broadest sense, it can be understood as such interactions between any particles ( molecules , atoms , ions and molecular ions ) in which chemical (that is, ionic, covalent or metallic) bonds are not formed. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme or between a molecule and a catalyst , but several such weak interactions with the required spatial configuration of the active center of the enzyme lead to a significant restructuring of the energy state of the molecules or substrate, which ultimately leads to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme, therefore the importance of these interactions is especially great in biochemistry and molecular biology , [ 3 ] and is the basis of enzymology .)
A hydrogen bond refers to the attraction between a hydrogen atom that is covalently bonded to an element with high electronegativity , usually nitrogen , oxygen , or fluorine , and another highly electronegative atom. [ 4 ] The hydrogen bond is often described as a strong electrostatic interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii , and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence . The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is equal to the common number between number of hydrogens the donor has and the number of lone pairs the acceptor has.
Each water molecule can engage in four hydrogen bonds: the oxygen atom's two lone pairs each interact with a hydrogen of a neighbouring molecule, and each of its two hydrogen atoms interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides , which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary , tertiary , and quaternary structures of proteins and nucleic acids . It also plays an important role in the structure of polymers , both synthetic and natural. [ 5 ]
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge. [ 6 ] It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and show in the solid state usually contact determined only by the van der Waals radii of the ions.
Inorganic as well as organic ions display, in water at moderate ionic strength I , similar salt bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. [ 7 ] The ΔG values are additive and approximately a linear function of the charges; the interaction of e.g. a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy ). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl 3 ).
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide . The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.
The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding. [ 8 ]
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy . The polar water molecules surround the ions in water, and the energy released during the process is known as the hydration enthalpy. The interaction is of immense importance in explaining the stability of various ions (like Cu 2+ ) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule. [ 9 ]
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies. [ 10 ]
The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction , named after Willem Hendrik Keesom . [ 11 ] These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent. [ 10 ]
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation:
where $d$ = electric dipole moment, $\varepsilon_0$ = permittivity of free space, $\varepsilon_r$ = dielectric constant of surrounding material, $T$ = temperature, $k_\text{B}$ = Boltzmann constant, and $r$ = distance between molecules.
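For reference, a commonly quoted form of the angle-averaged Keesom energy between two permanent dipoles $d_1$ and $d_2$, as given in standard texts such as Israelachvili (numerical prefactor conventions vary between sources), is:

```latex
w(r) = -\,\frac{d_1^{2}\,d_2^{2}}
              {3\,(4\pi\varepsilon_0\varepsilon_r)^{2}\,k_\text{B}\,T\,r^{6}}
```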
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions. [ 10 ]
The induced dipole forces appear from the induction (also termed polarization ), which is the attractive interaction between a permanent multipole on one molecule and a dipole induced (by the former di/multi-pole) on another. [ 12 ] [ 13 ] [ 14 ] This interaction is called the Debye force , named after Peter J. W. Debye .
One example of an induction interaction between permanent dipole and induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. [ 12 ] [ 13 ] The angle averaged interaction is given by the following equation:
where $\alpha_2$ = polarizability.
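For reference, a commonly quoted form of the angle-averaged Debye (induction) energy between a molecule with permanent dipole moment $d$ and a molecule with polarizability $\alpha_2$, as given in standard texts (prefactor conventions vary between sources), is:

```latex
w(r) = -\,\frac{d^{2}\,\alpha_2}
              {(4\pi\varepsilon_0\varepsilon_r)^{2}\,r^{6}}
```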
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force .
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom-atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range. [ 10 ]
Any such comparison of typical interaction strengths is approximate; the actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. [ 18 ] For static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for large moving systems such as enzyme molecules interacting with substrate molecules. [ 19 ] Here the numerous intermolecular bonds (most often hydrogen bonds ) form an active intermediate state in which some covalent bonds are broken while others are formed, in this way enabling the thousands of enzymatic reactions that are so important for living organisms .
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential ). [ 20 ] [ 21 ] In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor ).
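For reference, the Lennard-Jones pair potential mentioned above is commonly written as $V(r) = 4\varepsilon\,[(\sigma/r)^{12} - (\sigma/r)^{6}]$, repulsive at short range and attractive at long range. The short sketch below is illustrative only; the parameter values are arbitrary choices, not taken from the article.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential: repulsive ~r**-12, attractive ~r**-6."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

# The minimum of the potential sits at r = 2**(1/6) * sigma, where V = -epsilon.
r = np.linspace(0.9, 3.0, 1000)
v = lennard_jones(r)
r_min = r[np.argmin(v)]
print(f"numerical minimum near r = {r_min:.3f} (analytic: {2**(1/6):.3f})")
```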
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding , [ 22 ] van der Waals force [ 23 ] and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. [ 24 ] One of the most helpful methods to visualize this kind of intermolecular interactions, that we can find in quantum chemistry, is the non-covalent interaction index , which is based on the electron density of the system. London dispersion forces play a big role with this.
Concerning electron density topology, methods based on the electron density gradient have emerged recently, notably the IBSI (Intrinsic Bond Strength Index), [ 25 ] which relies on the IGM (Independent Gradient Model) methodology. [ 26 ] [ 27 ] [ 28 ]
Intermuscular coherence is a measure that quantifies correlations between the activity of two muscles , which is often assessed using electromyography . The correlations in muscle activity are quantified in the frequency domain , [ 1 ] and the measure is therefore referred to as intermuscular coherence . [ 2 ]
The synchronisation of motor units within a single muscle in animals and humans has been known for decades. The early studies that investigated the relationship of EMG activity used time-domain cross-correlation to quantify common input. [ 3 ] [ 4 ] The explicit notion of synchrony between motor units of two different muscles was reported later. [ 5 ] In the 1990s, coherence analysis was introduced to examine the frequency content of common input. [ 2 ]
Intermuscular coherence can be used to investigate the neural circuitry involved in motor control. Correlated muscle activity indicates common input to the motor unit pools of both muscles [ 6 ] [ 7 ] and reflects shared neural pathways (including cortical, subcortical and spinal) that contribute to muscle activity and movement. [ 8 ] The strength of intermuscular coherence is dependent on the relationship between muscles and is generally stronger between muscle pairs that are anatomically and functionally closely related. [ 9 ] [ 10 ] Intermuscular coherence can therefore be used to identify impairments in motor pathways. [ 11 ] [ 12 ]
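In practice, intermuscular coherence is usually estimated from two simultaneously recorded EMG signals with a magnitude-squared coherence estimator. The snippet below is an illustrative sketch, not a published analysis pipeline: it uses synthetic signals with an assumed shared 10 Hz drive and the coherence function from SciPy.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                    # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s of synthetic data
rng = np.random.default_rng(0)

common = np.sin(2 * np.pi * 10 * t)          # shared "neural drive" (assumed 10 Hz)
emg1 = common + rng.normal(scale=1.0, size=t.size)
emg2 = 0.8 * common + rng.normal(scale=1.0, size=t.size)

# Magnitude-squared coherence between the two signals (Welch-style estimate).
f, Cxy = coherence(emg1, emg2, fs=fs, nperseg=2048)
peak = f[np.argmax(Cxy)]
print(f"peak coherence {Cxy.max():.2f} at {peak:.1f} Hz")
```

A peak near the shared drive frequency and low coherence elsewhere is the pattern interpreted as common input to the two motor unit pools.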
This medical diagnostic article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Intermuscular_coherence |
Internal Coordinate Mechanics ( ICM ) is a software program and algorithm to predict low-energy conformations of molecules by sampling the space of internal coordinates ( bond lengths , bond angles and dihedral angles ) defining molecular geometry . In ICM each molecule is constructed as a tree from an entry atom, where each subsequent atom is built iteratively from the preceding three atoms via three internal variables. Rings are kept rigid or imposed via additional restraints. ICM is used for modelling peptides and interactions with substrates and coenzymes . [ 1 ]
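The core geometric step, placing each new atom from the three preceding atoms and three internal variables, can be sketched as follows. This is an illustrative reconstruction of the general internal-to-Cartesian conversion (often called the NeRF construction), not code from the ICM program; the function name, the dihedral sign convention and the example values are assumptions.

```python
import numpy as np

def place_atom(a, b, c, bond, angle, dihedral):
    """Place atom d given atoms a, b, c and internal coordinates:
    |c-d| = bond, angle(b, c, d) = angle, dihedral(a, b, c, d) = dihedral
    (angles in radians; dihedral = 0 puts d cis to a in this convention)."""
    bc = c - b
    bc = bc / np.linalg.norm(bc)
    n = np.cross(b - a, bc)
    n = n / np.linalg.norm(n)          # normal of the a-b-c plane
    m = np.cross(n, bc)                # in-plane axis perpendicular to bc
    d_local = np.array([-bond * np.cos(angle),
                         bond * np.sin(angle) * np.cos(dihedral),
                         bond * np.sin(angle) * np.sin(dihedral)])
    return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n

# Example: grow a fourth atom of a chain (bond length/angle values are illustrative).
a = np.array([0.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 0.0])
c = np.array([1.5, 0.0, 0.0])
d = place_atom(a, b, c, bond=1.5, angle=np.deg2rad(111.0),
               dihedral=np.deg2rad(180.0))
print(d)
```

Because each atom depends only on the three atoms before it in the tree, changing one torsion angle moves the whole subtree rigidly, which is what makes sampling in internal coordinates efficient.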
ICM is also a programming environment for various tasks in computational chemistry and computational structural biology , sequence analysis and rational drug design . The original goal was to develop algorithms for energy optimization of several biopolymers with respect to an arbitrary subset of internal coordinates such as bond lengths, bond angles, torsion angles and phase angles. The efficient and general global optimization method which evolved from the original ICM method is still the central piece of the program. It is this basic algorithm which is used for peptide prediction, homology modeling and loop simulations, flexible macromolecular docking and energy refinement. However, the complexity of problems related to structure prediction and analysis, as well as the desire for perfection, compactness and consistency, led to the program's expansion into neighboring areas such as graphics, chemistry, sequence analysis and database searches, mathematics , statistics and plotting.
The original meaning became too narrow, but the name was kept. The current integrated ICM shell combines hundreds of variables, functions, commands, database and web tools, and novel algorithms for structure prediction and analysis into a powerful, yet compact program which is still called ICM. The seven principal areas are centered on a general core of shell language and data analysis and visualization.
The Internal Market Information System ( IMI ) is an IT-based network that links public bodies in the European Economic Area . It was developed by the European Commission together with the Member States of the European Union to speed up cross-border administrative cooperation. IMI allows public administrations at national, regional and local level to identify their counterparts in other countries and to exchange information with them. Pre-translated questions and answers as well as machine translation make it possible for them to use their own language to communicate.
Internal market legislation of the European Union (EU) makes it mandatory for competent authorities in Member States to assist their counterparts abroad by providing them with information. Some legislation also requires communication between Member States and the European Commission (for example for the notification of national implementing measures of European Union law ). IMI has been developed in order to facilitate this day-to-day exchange of information.
IMI was launched in February 2008. Development and maintenance has been funded by the programme Interoperability Solutions for European Public Administrations (ISA) since July 2010. ISA is the successor to the IDABC program, which initially funded IMI and came to an end on 31 December 2009.
IMI is one of the governance tools of the Single Market. Other such tools are Your Europe, [ 1 ] Your Europe Advice, [ 2 ] Solvit [ 3 ] and the Points of Single Contact. [ 4 ] IMI applies a " Privacy by Design " approach – integrating privacy and data protection compliance in all stages of the design of IMI – which has been developed in consultation with the European Data Protection Supervisor (EDPS).
IMI has been rolled out in a decentralised way. Therefore, the practical implementation of IMI is the responsibility of the individual Member States. There are several actors that play a role in the IMI network.
Competent authorities are the end users of IMI. They are public bodies that have been given the responsibility to deal with certain elements of application of internal market legislation. They can function on national, regional or local level.
There is one national IMI coordinator (NIMIC) per Member State, often located in a national ministry. Their task is to ensure the smooth operation of IMI in their country. IMI coordinators may delegate some of their responsibilities to additional coordinators who are in charge of, for example, one legislative area or a geographical region, depending on each Member State’s administrative structures.
The European Commission is responsible for maintenance and development of the tool, helpdesk services and training. It also manages and supports the network of IMI coordinators, promotes further expansion of IMI and reports on the functioning of the system.
IMI offers a number of workflows to its users in order to facilitate different types of administrative cooperation across the Member states of the European Economic Area .
When a competent authority needs information from a counterpart abroad, it can send a request for information. This exchange mechanism uses lists of pre-translated questions and answers available in all EU languages. It is also possible to attach documents. Only the competent authorities that are directly involved in an information exchange have access to the content. A practical example of an information request is when a German teacher would like to continue his activities in Portugal . The Portuguese authority needs to verify the authenticity of his scanned diploma. It can then use IMI to send an information request to its partner authority in Germany . This authority can accept the request and send an answer back to the authority in Portugal. Due to the pre-translated question and answer sets, both authorities can communicate in their own language.
Notifications are based on one-to-many information exchanges where authorities can alert or notify one or more competent authorities and/or the European Commission . For example, the Services Directive requires that Member States alert each other of possible dangers to the health and safety of people or to the environment caused in the provision of services. [ 5 ]
IMI information repositories are databases storing specific information for certain policy areas. An example of such a repository is the directory of registers maintained by competent authorities throughout the European Economic Area . This directory is equipped with multilingual search functions. The content of a repository can be accessed either by a restricted group of competent authorities or by all IMI users.
IMI is used in all Member States of the European Economic Area for the administrative cooperation required by the Directive on the Recognition of Professional Qualifications (2005/36/EC), [ 6 ] by the Directive on services in the internal market (2006/123/EC) and, on a pilot basis, by the Posting of Workers Directive. [ 7 ] Since November 2012, it provides a repository for information on licence holders for the cross-border road transport of Euro cash [ 8 ] and an IT platform for the problem solving network Solvit . IMI is being expanded to cover further legislative areas. For example, the Directive on Patients' Rights in Cross-border Healthcare. [ 9 ]
IMI aims to "become a flexible toolkit at the service of administrative cooperation, contributing to the improved governance of the Internal Market ." [ 10 ]
The IMI Regulation [ 11 ] which came into force in December 2012 is an EU law establishing a comprehensive legal framework for IMI. [ 12 ] It provides a complete set of rules for the processing of personal data in IMI and prescribes a method for future expansion of IMI to additional policy areas. | https://en.wikipedia.org/wiki/Internal_Market_Information_System |
In geometry , an angle of a polygon is formed by two adjacent sides . For a simple polygon (non-self-intersecting), regardless of whether it is convex or non-convex , this angle is called an internal angle (or interior angle) if a point within the angle is in the interior of the polygon. A polygon has exactly one internal angle per vertex .
If every internal angle of a simple polygon is less than a straight angle ( π radians or 180°), then the polygon is called convex .
In contrast, an external angle (also called a turning angle or exterior angle) is an angle formed by one side of a simple polygon and a line extended from an adjacent side . [ 1 ] : pp. 261–264
The interior angle concept can be extended in a consistent way to crossed polygons such as star polygons by using the concept of directed angles . In general, the interior angle sum in degrees of any closed polygon, including crossed (self-intersecting) ones, is then given by 180( n − 2 k )° , where n is the number of vertices, and the strictly positive integer k is the number of total (360°) revolutions one undergoes by walking around the perimeter of the polygon. In other words, the sum of all the exterior angles is 2 πk radians or 360 k degrees. Example: for ordinary convex polygons and concave polygons , k = 1 , since the exterior angle sum is 360°, and one undergoes only one full revolution by walking around the perimeter.
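As a worked instance of this formula (a standard example, with the figures chosen here for illustration rather than taken from the text): a regular pentagram is a star polygon with n = 5 vertices traversed with k = 2 full revolutions, so

```latex
\text{interior angle sum} = 180^\circ\,(n - 2k) = 180^\circ\,(5 - 4) = 180^\circ ,
\qquad
\text{exterior angle sum} = 360^\circ \cdot k = 720^\circ ,
```

giving 36° at each point of the star.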
Consider a polyhedron that is topologically equivalent to a sphere , such as any convex polyhedron . Any vertex of the polyhedron will have several facets that meet at that vertex. Each of these facets will have an interior angle at that vertex and the sum of the interior angles at a vertex can be said to be the interior angle associated with that vertex of the polyhedron. The value of 2 π radians (or 360 degrees) minus that interior angle can be said to be the exterior angle associated with that vertex, also known by other names such as angular defect . The sum of these exterior angles across all vertices of the polyhedron will necessarily be 4 π radians (or 720 degrees), and the sum of the interior angles will necessarily be 2 π ( n − 2) radians (or 360( n − 2) degrees) where n is the number of vertices. A proof of this can be obtained by using the formulas for the sum of interior angles of each facet together with the fact that the Euler characteristic of a sphere is 2.
For example, a rectangular solid will have three rectangular facets meeting at any vertex, with each of these facets having a 90° internal angle at that vertex, so each vertex of the rectangular solid is associated with an interior angle of 3 × 90° = 270° and an exterior angle of 360° − 270° = 90° . The sum of these exterior angles over all eight vertices is 8 × 90° = 720° . The sum of these interior angles over all eight vertices is 8 × 270° = 2160° . | https://en.wikipedia.org/wiki/Internal_and_external_angles |
The internal carotid venous plexus is a network of veins surrounding the internal carotid artery as it passes through the carotid canal . [ 2 ] The plexus interconnects the internal jugular vein (extracranially) and cavernous sinus (intracranially). [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Internal_carotid_venous_plexus |
An internal combustion engine ( ICE or IC engine ) is a heat engine in which the combustion of a fuel occurs with an oxidizer (usually air) in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine, the expansion of the high- temperature and high- pressure gases produced by combustion applies direct force to some component of the engine. The force is typically applied to pistons ( piston engine ), turbine blades ( gas turbine ), a rotor (Wankel engine) , or a nozzle ( jet engine ). This force moves the component over a distance. This process transforms chemical energy into kinetic energy which is used to propel, move or power whatever the engine is attached to.
The first commercially successful internal combustion engines were invented in the mid-19th century. The first modern internal combustion engine, the Otto engine , was designed in 1876 by the German engineer Nicolaus Otto . [ 1 ] The term internal combustion engine usually refers to an engine in which combustion is intermittent , such as the more familiar two-stroke and four-stroke piston engines, along with variants, such as the six-stroke piston engine and the Wankel rotary engine . A second class of internal combustion engines use continuous combustion: gas turbines , jet engines and most rocket engines , each of which are internal combustion engines on the same principle as previously described. [ 1 ] [ 2 ] In contrast, in external combustion engines , such as steam or Stirling engines , energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids for external combustion engines include air, hot water, pressurized water or even boiler -heated liquid sodium .
While there are many stationary applications, most ICEs are used in mobile applications and are the primary power supply for vehicles such as cars , aircraft and boats . ICEs are typically powered by hydrocarbon -based fuels like natural gas , gasoline , diesel fuel , or ethanol . Renewable fuels like biodiesel are used in compression ignition (CI) engines and bioethanol or ETBE (ethyl tert-butyl ether) produced from bioethanol in spark ignition (SI) engines. As early as 1900 the inventor of the diesel engine, Rudolf Diesel , was using peanut oil to run his engines. [ 3 ] Renewable fuels are commonly blended with fossil fuels. Hydrogen , which is rarely used, can be obtained from either fossil fuels or renewable energy.
Various scientists and engineers contributed to the development of internal combustion engines. In 1791, John Barber developed the gas turbine . In 1794 Thomas Mead patented a gas engine . Also in 1794, Robert Street patented an internal combustion engine, which was also the first to use liquid fuel , and built an engine around that time. In 1798, John Stevens built the first American internal combustion engine. In 1807, French engineers Nicéphore Niépce (who went on to invent photography ) and Claude Niépce ran a prototype internal combustion engine, using controlled dust explosions, the Pyréolophore , which was granted a patent by Napoleon Bonaparte . This engine powered a boat on the Saône river in France. [ 4 ] [ 5 ] In the same year, Swiss engineer François Isaac de Rivaz invented a hydrogen-based internal combustion engine and powered the engine by electric spark. In 1808, De Rivaz fitted his invention to a primitive working vehicle – "the world's first internal combustion powered automobile". [ 6 ] In 1823, Samuel Brown patented the first internal combustion engine to be applied industrially.
In 1854, in the UK, the Italian inventors Eugenio Barsanti and Felice Matteucci obtained the certification: "Obtaining Motive Power by the Explosion of Gases". In 1857 the Great Seal Patent Office conceded them patent No.1655 for the invention of an "Improved Apparatus for Obtaining Motive Power from Gases". [ 7 ] [ 8 ] [ 9 ] [ 10 ] Barsanti and Matteucci obtained other patents for the same invention in France, Belgium and Piedmont between 1857 and 1859. [ 11 ] [ 12 ] In 1860, Belgian engineer Jean Joseph Etienne Lenoir produced a gas-fired internal combustion engine. [ 13 ] In 1864, Nicolaus Otto patented the first atmospheric gas engine. In 1872, American George Brayton invented the first commercial liquid-fueled internal combustion engine. In 1876, Nicolaus Otto began working with Gottlieb Daimler and Wilhelm Maybach , patented the compressed charge, four-cycle engine. In 1879, Karl Benz patented a reliable two-stroke gasoline engine. Later, in 1886, Benz began the first commercial production of motor vehicles with an internal combustion engine, in which a three-wheeled, four-cycle engine and chassis formed a single unit. [ 14 ] In 1892, Rudolf Diesel developed the first compressed charge, compression ignition engine. In 1926, Robert Goddard launched the first liquid-fueled rocket. In 1939, the Heinkel He 178 became the world's first jet aircraft .
At one time, the word engine (via Old French , from Latin ingenium , "ability") meant any piece of machinery —a sense that persists in expressions such as siege engine . A "motor" (from Latin motor , "mover") is any machine that produces mechanical power . Traditionally, electric motors are not referred to as "engines"; however, combustion engines are often referred to as "motors". (An electric engine refers to a locomotive operated by electricity.)
In boating, an internal combustion engine that is installed in the hull is referred to as an engine, but the engines that sit on the transom are referred to as motors. [ 15 ]
Reciprocating piston engines are by far the most common power source for land and water vehicles , including automobiles , motorcycles , ships and to a lesser extent, locomotives (some are electrical but most use diesel engines [ 16 ] [ 17 ] ). Rotary engines of the Wankel design are used in some automobiles, aircraft and motorcycles. These are collectively known as internal-combustion-engine vehicles (ICEV). [ 18 ]
Where high power-to-weight ratios are required, internal combustion engines appear in the form of combustion turbines , or sometimes Wankel engines. Powered aircraft typically use an ICE which may be a reciprocating engine. Airplanes can instead use jet engines and helicopters can instead employ turboshafts , both of which are types of turbines. In addition to providing propulsion, aircraft may employ a separate ICE as an auxiliary power unit . Wankel engines are fitted to many unmanned aerial vehicles .
ICEs drive large electric generators that power electrical grids. They are found in the form of combustion turbines with typical electrical outputs on the order of 100 MW. Combined cycle power plants use the high temperature exhaust to boil and superheat water steam to run a steam turbine . Thus, the efficiency is higher because more energy is extracted from the fuel than what could be extracted by the combustion engine alone.
Combined cycle power plants achieve efficiencies in the range of 50–60%. On a smaller scale, stationary engines like gas engines or diesel generators are used for backup or for providing electrical power to areas not connected to an electric grid .
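As a rough illustration of why combining cycles raises overall efficiency, the sketch below adds a steam bottoming stage that recovers part of the gas turbine's exhaust heat. The 40% and 35% stage efficiencies are assumed round numbers for illustration, not figures from the text.

```python
def combined_cycle_efficiency(eta_gas_turbine: float, eta_steam_cycle: float) -> float:
    """Overall efficiency when a steam cycle recovers part of the gas turbine's exhaust heat.

    The steam stage can only work on the heat rejected by the gas turbine,
    hence the (1 - eta_gas_turbine) factor.
    """
    return eta_gas_turbine + (1.0 - eta_gas_turbine) * eta_steam_cycle


# Assumed stage efficiencies, for illustration only.
print(f"{combined_cycle_efficiency(0.40, 0.35):.0%}")  # -> 61%, consistent with the figures cited for modern plants
```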
Small engines (usually 2‐stroke single cylinder gasoline/petrol engines) are a common power source for lawnmowers , string trimmers , chainsaws , leaf blowers , pressure washers , radio-controlled cars , snowmobiles , jet skis , outboard motors , mopeds , and motorcycles .
There are several possible ways to classify internal combustion engines.
By number of strokes:
By type of ignition:
By mechanical/thermodynamic cycle (these cycles are infrequently used but are commonly found in hybrid vehicles , along with other vehicles manufactured for fuel efficiency [ 20 ] ):
The base of a reciprocating internal combustion engine is the engine block , which is typically made of cast iron (due to its good wear resistance and low cost) [ 22 ] or aluminum . In the latter case, the cylinder liners are made of cast iron or steel, [ 23 ] or a coating such as nikasil or alusil . The engine block contains the cylinders . In engines with more than one cylinder they are usually arranged either in 1 row ( straight engine ) or 2 rows ( boxer engine or V engine ); 3 or 4 rows are occasionally used ( W engine ) in contemporary engines, and other engine configurations are possible and have been used. Single-cylinder engines (or thumpers ) are common for motorcycles and other small engines found in light machinery. On the outer side of the cylinder, passages that contain cooling fluid are cast into the engine block, whereas some heavy-duty engines instead use removable cylinder sleeves that can be replaced. [ 22 ] Water-cooled engines contain passages in the engine block where cooling fluid circulates (the water jacket ). Some small engines are air-cooled, and instead of having a water jacket the cylinder block has fins protruding away from it to cool the engine by directly transferring heat to the air. The cylinder walls are usually finished by honing to obtain a cross hatch , which is able to retain more oil. Too rough a surface would quickly harm the engine through excessive wear of the piston.
The pistons are short cylindrical parts which seal one end of the cylinder from the high pressure of the compressed air and combustion products and slide continuously within it while the engine is in operation. In smaller engines the pistons are made of aluminum, while in larger applications they are typically made of cast iron. [ 22 ] In performance applications, pistons can also be titanium or forged steel for greater strength. The top surface of the piston is called its crown and is typically flat or concave. Some two-stroke engines use pistons with a deflector head . Pistons are open at the bottom and hollow except for an integral reinforcement structure (the piston web). When an engine is working, the gas pressure in the combustion chamber exerts a force on the piston crown which is transferred through its web to a gudgeon pin . Each piston has rings fitted around its circumference that mostly prevent the gases from leaking into the crankcase or the oil into the combustion chamber. [ 24 ] A ventilation system drives the small amount of gas that escapes past the pistons during normal operation (the blow-by gases) out of the crankcase so that it does not accumulate, contaminating the oil and creating corrosion. [ 22 ] In two-stroke gasoline engines the crankcase is part of the air–fuel path and, due to this continuous flow, they do not need a separate crankcase ventilation system.
The cylinder head is attached to the engine block by numerous bolts or studs . It has several functions. The cylinder head seals the cylinders on the side opposite to the pistons; it contains short ducts (the ports ) for intake and exhaust and the associated intake valves that open to let the cylinder be filled with fresh air and exhaust valves that open to allow the combustion gases to escape. The valves are often poppet valves [ 25 ] [ 26 ] but they can also be rotary valves [ 27 ] or sleeve valves . [ 28 ] However, 2-stroke crankcase scavenged engines connect the gas ports directly to the cylinder wall without poppet valves; the piston controls their opening and occlusion instead. The cylinder head also holds the spark plug in the case of spark ignition engines and the injector for engines that use direct injection. All CI (compression ignition) engines use fuel injection, usually direct injection but some engines instead use indirect injection . SI (spark ignition) engines can use a carburetor or fuel injection as port injection or direct injection . Most SI engines have a single spark plug per cylinder but some have 2 . A head gasket prevents the gas from leaking between the cylinder head and the engine block. The opening and closing of the valves is controlled by one or several camshafts and springs—or in some engines—a desmodromic mechanism that uses no springs. The camshaft may press directly on the stem of the valve or may act upon a rocker arm , again, either directly or through a pushrod .
The crankcase is sealed at the bottom with a sump that collects the falling oil during normal operation to be cycled again. The cavity created between the cylinder block and the sump houses a crankshaft that converts the reciprocating motion of the pistons to rotational motion. The crankshaft is held in place relative to the engine block by main bearings , which allow it to rotate. Bulkheads in the crankcase form a half of every main bearing; the other half is a detachable cap. In some cases a single main bearing deck is used rather than several smaller caps. A connecting rod is connected to offset sections of the crankshaft (the crankpins ) in one end and to the piston in the other end through the gudgeon pin and thus transfers the force and translates the reciprocating motion of the pistons to the circular motion of the crankshaft. The end of the connecting rod attached to the gudgeon pin is called its small end, and the other end, where it is connected to the crankshaft, the big end. The big end has a detachable half to allow assembly around the crankshaft. It is fastened to the connecting rod by removable bolts.
The cylinder head has an intake manifold and an exhaust manifold attached to the corresponding ports. The intake manifold connects to the air filter directly, or to a carburetor when one is present, which is then connected to the air filter . It distributes the air incoming from these devices to the individual cylinders. The exhaust manifold is the first component in the exhaust system . It collects the exhaust gases from the cylinders and drives them to the following component in the path. The exhaust system of an ICE may also include a catalytic converter and muffler . The final section in the path of the exhaust gases is the tailpipe .
The top dead center (TDC) of a piston is the position where it is nearest to the valves; bottom dead center (BDC) is the opposite position where it is furthest from them. A stroke is the movement of a piston from TDC to BDC or vice versa, together with the associated process. While an engine is in operation, the crankshaft rotates continuously at a nearly constant speed . In a 4-stroke ICE, each piston experiences 2 strokes per crankshaft revolution in the following order. Starting the description at TDC, these are: [ 29 ] [ 30 ]
The defining characteristic of this kind of engine is that each piston completes a cycle every crankshaft revolution. The 4 processes of intake, compression, power and exhaust take place in only 2 strokes so that it is not possible to dedicate a stroke exclusively for each of them. Starting at TDC the cycle consists of:
While a 4-stroke engine uses the piston as a positive displacement pump to accomplish scavenging taking 2 of the 4 strokes, a 2-stroke engine uses the last part of the power stroke and the first part of the compression stroke for combined intake and exhaust. The work required to displace the charge and exhaust gases comes from either the crankcase or a separate blower. For scavenging, expulsion of burned gas and entry of fresh mix, two main approaches are described: Loop scavenging, and Uniflow scavenging. SAE news published in the 2010s that 'Loop Scavenging' is better under any circumstance than Uniflow Scavenging. [ 19 ]
Some SI engines are crankcase scavenged and do not use poppet valves. Instead, the crankcase and the part of the cylinder below the piston is used as a pump. The intake port is connected to the crankcase through a reed valve or a rotary disk valve driven by the engine. For each cylinder, a transfer port connects in one end to the crankcase and in the other end to the cylinder wall. The exhaust port is connected directly to the cylinder wall. The transfer and exhaust port are opened and closed by the piston. The reed valve opens when the crankcase pressure is slightly below intake pressure, to let it be filled with a new charge; this happens when the piston is moving upwards. When the piston is moving downwards the pressure in the crankcase increases and the reed valve closes promptly, then the charge in the crankcase is compressed. When the piston is moving downwards, it also uncovers the exhaust port and the transfer port and the higher pressure of the charge in the crankcase makes it enter the cylinder through the transfer port, blowing the exhaust gases. Lubrication is accomplished by adding two-stroke oil to the fuel in small ratios. Petroil refers to the mix of gasoline with the aforesaid oil. This kind of 2-stroke engine has a lower efficiency than comparable 4-strokes engines and releases more polluting exhaust gases for the following conditions:
The main advantage of 2-stroke engines of this type is mechanical simplicity and a higher power-to-weight ratio than their 4-stroke counterparts. Despite having twice as many power strokes per crankshaft revolution, less than twice the power of a comparable 4-stroke engine is attainable in practice.
In the US, 2-stroke engines were banned for road vehicles due to the pollution. Off-road only motorcycles are still often 2-stroke but are rarely road legal. However, many thousands of 2-stroke lawn maintenance engines are in use. [ citation needed ]
Using a separate blower avoids many of the shortcomings of crankcase scavenging, at the expense of increased complexity which means a higher cost and an increase in maintenance requirement. An engine of this type uses ports or valves for intake and valves for exhaust, except opposed piston engines , which may also use ports for exhaust. The blower is usually of the Roots-type but other types have been used too. This design is commonplace in CI engines, and has been occasionally used in SI engines.
CI engines that use a blower typically use uniflow scavenging . In this design the cylinder wall contains several intake ports uniformly spaced along the circumference just above the position that the piston crown reaches when at BDC. An exhaust valve or several like that of 4-stroke engines is used. The final part of the intake manifold is an air sleeve that feeds the intake ports. The intake ports are placed at a horizontal angle to the cylinder wall (i.e., they are in the plane of the piston crown) to give a swirl to the incoming charge to improve combustion. The largest reciprocating ICEs are low-speed CI engines of this type; they are used for marine propulsion (see marine diesel engine ) or electric power generation and achieve the highest thermal efficiencies among internal combustion engines of any kind. Some diesel–electric locomotive engines operate on the 2-stroke cycle. The most powerful of them have a brake power of around 4.5 MW or 6,000 HP . The EMD SD90MAC class of locomotives are an example of such. The comparable class GE AC6000CW , whose prime mover has almost the same brake power, uses a 4-stroke engine.
An example of this type of engine is the Wärtsilä-Sulzer RTA96-C turbocharged 2-stroke diesel, used in large container ships. It is the most efficient and powerful reciprocating internal combustion engine in the world with a thermal efficiency over 50%. [ 31 ] [ 32 ] [ 33 ] For comparison, the most efficient small four-stroke engines are around 43% thermally-efficient (SAE 900648); [ citation needed ] size is an advantage for efficiency due to the increase in the ratio of volume to surface area.
See the external links for an in-cylinder combustion video in a 2-stroke, optically accessible motorcycle engine.
Dugald Clerk developed the first two-cycle engine in 1879. It used a separate cylinder which functioned as a pump in order to transfer the fuel mixture to the cylinder. [ 19 ]
In 1899 John Day simplified Clerk's design into the type of 2 cycle engine that is very widely used today. [ 34 ] Day cycle engines are crankcase scavenged and port timed. The crankcase and the part of the cylinder below the exhaust port is used as a pump. The operation of the Day cycle engine begins when the crankshaft is turned so that the piston moves from BDC upward (toward the head) creating a vacuum in the crankcase/cylinder area. The carburetor then feeds the fuel mixture into the crankcase through a reed valve or a rotary disk valve (driven by the engine). Cast-in ducts run from the crankcase to the port in the cylinder to provide for intake, and another runs from the exhaust port to the exhaust pipe. The height of the port in relationship to the length of the cylinder is called the "port timing".
On the first upstroke of the engine there would be no fuel inducted into the cylinder as the crankcase was empty. On the downstroke, the piston now compresses the fuel mix, which has lubricated the piston in the cylinder and the bearings due to the fuel mix having oil added to it. As the piston moves downward it first uncovers the exhaust, but on the first stroke there is no burnt fuel to exhaust. As the piston moves downward further, it uncovers the intake port which has a duct that runs to the crankcase. Since the fuel mix in the crankcase is under pressure, the mix moves through the duct and into the cylinder.
Because there is nothing in the cylinder to prevent the fuel mixture from flowing directly out of the exhaust port before the piston rises far enough to close the port, early engines used a high-domed piston to slow down the flow of fuel. Later the fuel was "resonated" back into the cylinder using an expansion chamber design. When the piston rose close to TDC, a spark ignited the fuel. As the piston is driven downward with power, it first uncovers the exhaust port where the burned fuel is expelled under high pressure and then the intake port, and the cycle repeats.
Later engines used a type of porting devised by the Deutz company to improve performance. It was called the Schnürle Reverse Flow system. DKW licensed this design for all their motorcycles. Their DKW RT 125 was one of the first motor vehicles to achieve over 100 mpg as a result. [ 35 ]
Internal combustion engines require ignition of the mixture, either by spark ignition (SI) or compression ignition (CI) . Before the invention of reliable electrical methods, hot tube and flame methods were used. Experimental engines with laser ignition have been built. [ 36 ]
The spark-ignition engine was a refinement of the early engines which used Hot Tube ignition. When Bosch developed the magneto it became the primary system for producing electricity to energize a spark plug. [ 37 ] Many small engines still use magneto ignition. Small engines are started by hand cranking using a recoil starter or hand crank. Prior to the development of the automotive starter by Charles F. Kettering of Delco, all gasoline-engined automobiles used a hand crank. [ 38 ]
Larger engines typically power their starting motors and ignition systems using the electrical energy stored in a lead–acid battery . The battery's charged state is maintained by an automotive alternator or (previously) a generator which uses engine power to create electrical energy storage.
The battery supplies electrical power for starting when the engine has a starting motor system, and supplies electrical power when the engine is off. The battery also supplies electrical power during rare run conditions where the alternator cannot maintain more than 13.8 volts (for a common 12 V automotive electrical system). As alternator voltage falls below 13.8 volts, the lead-acid storage battery increasingly picks up electrical load. During virtually all running conditions, including normal idle conditions, the alternator supplies primary electrical power.
Some systems disable alternator field (rotor) power during wide-open throttle conditions. Disabling the field reduces alternator pulley mechanical loading to nearly zero, maximizing crankshaft power. In this case, the battery supplies all primary electrical power.
Gasoline engines take in a mixture of air and gasoline and compress it by the movement of the piston from bottom dead center to top dead center, where the mixture reaches maximum compression. The compression ratio compares the cylinder volume when the piston is at bottom dead center (the swept volume plus the combustion chamber volume) with the volume when it is at top dead center (the combustion chamber alone). Early engines had compression ratios of 6 to 1. As compression ratios were increased, the efficiency of the engine increased as well.
With early induction and ignition systems the compression ratios had to be kept low. With advances in fuel technology and combustion management, high-performance engines can run reliably at a 12:1 ratio. With low-octane fuel, a problem would occur as the compression ratio increased: the fuel would ignite prematurely due to the resulting rise in temperature. Charles Kettering developed a lead additive which allowed higher compression ratios; the additive was progressively abandoned for automotive use from the 1970s onward, partly due to lead poisoning concerns.
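A minimal sketch of the ratio just described: the compression ratio is the cylinder volume at bottom dead center (swept volume plus combustion chamber volume) divided by the volume at top dead center. The bore, stroke and chamber volume below are assumed example values, not figures from the text.

```python
import math

def compression_ratio(bore_mm: float, stroke_mm: float, chamber_volume_cc: float) -> float:
    """Compression ratio = (swept volume + clearance volume) / clearance volume."""
    swept_cc = math.pi * (bore_mm / 10 / 2) ** 2 * (stroke_mm / 10)  # bore/stroke converted from mm to cm
    return (swept_cc + chamber_volume_cc) / chamber_volume_cc

# Assumed example cylinder: 86 mm bore, 86 mm stroke, 50 cc combustion chamber.
print(f"{compression_ratio(86, 86, 50):.1f}:1")  # roughly 11:1
```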
The fuel mixture is ignited at different points in the piston's travel within the cylinder. At low rpm, the spark is timed to occur close to the piston reaching top dead center. To produce more power as rpm rises, the spark is advanced so that it occurs progressively earlier in the piston's travel, while the fuel is still being compressed. [ 39 ]
The necessary high voltage, typically 10,000 volts, is supplied by an induction coil or transformer. The induction coil is a fly-back system, using interruption of electrical primary system current through some type of synchronized interrupter. The interrupter can be either contact points or a power transistor. The problem with this type of ignition is that as RPM increases the availability of electrical energy decreases. This is especially a problem, since the amount of energy needed to ignite a more dense fuel mixture is higher. The result was often a high RPM misfire.
Capacitor discharge ignition was developed. It produces a rising voltage that is sent to the spark plug. CD system voltages can reach 60,000 volts. [ 40 ] CD ignitions use step-up transformers . The step-up transformer uses energy stored in a capacitor to generate an electric spark . With either system, a mechanical or electrical control system provides a carefully timed high voltage to the proper cylinder. This spark, via the spark plug, ignites the air-fuel mixture in the engine's cylinders.
While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold weather starting problems under extreme conditions. For years, the solution was to park the car in heated areas. In some parts of the world, the oil was actually drained and heated overnight and returned to the engine for cold starts. In the early 1950s, the gasoline Gasifier unit was developed, where, on cold weather starts, raw gasoline was diverted to the unit where part of the fuel was burned causing the other part to become a hot vapor sent directly to the intake valve manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates. [ 41 ]
For ignition, diesel, PPC and HCCI engines rely solely on the high temperature and pressure created by the engine in its compression process. The compression ratio is usually twice that of a gasoline engine or more. Diesel engines take in air only, and shortly before peak compression, spray a small quantity of diesel fuel into the cylinder via a fuel injector that allows the fuel to instantly ignite. HCCI type engines take in both air and fuel, but continue to rely on an unaided auto-combustion process, due to the higher pressures and temperatures. This is also why diesel and HCCI engines are more susceptible to cold-starting issues, although they run just as well in cold weather once started. Light duty diesel engines with indirect injection in automobiles and light trucks employ glowplugs (or other pre-heating: see Cummins ISB#6BT ) that pre-heat the combustion chamber just before starting to reduce no-start conditions in cold weather. Most diesels also have a battery and charging system; nevertheless, this system is secondary and is added by manufacturers as a luxury for the ease of starting, turning fuel on and off (which can also be done via a switch or mechanical apparatus), and for running auxiliary electrical components and accessories. Most new engines rely on electrical and electronic engine control units (ECU) that also adjust the combustion process to increase efficiency and reduce emissions.
Surfaces in contact and in relative motion to other surfaces require lubrication to reduce wear and noise and to increase efficiency by reducing the power wasted in overcoming friction , or to make the mechanism work at all. Also, the lubricant used can reduce excess heat and provide additional cooling to components. At the very least, an engine requires lubrication in the following parts:
In 2-stroke crankcase scavenged engines, the interior of the crankcase, and therefore the crankshaft, connecting rod and bottom of the pistons are sprayed by the two-stroke oil in the air-fuel-oil mixture which is then burned along with the fuel. The valve train may be contained in a compartment flooded with lubricant so that no oil pump is required.
In a splash lubrication system no oil pump is used. Instead the crankshaft dips into the oil in the sump and, due to its high speed, splashes oil onto the connecting rods and the bottom of the pistons. The connecting rod big end caps may have an attached scoop to enhance this effect. The valve train may also be sealed in a flooded compartment, or open to the crankshaft in a way that it receives splashed oil and allows it to drain back to the sump. Splash lubrication is common for small 4-stroke engines.
In a forced (also called pressurized ) lubrication system , lubrication is accomplished in a closed-loop which carries motor oil to the surfaces serviced by the system and then returns the oil to a reservoir. The auxiliary equipment of an engine is typically not serviced by this loop; for instance, an alternator may use ball bearings sealed with their own lubricant. The reservoir for the oil is usually the sump, and when this is the case, it is called a wet sump system. When there is a different oil reservoir the crankcase still catches it, but it is continuously drained by a dedicated pump; this is called a dry sump system.
On its bottom, the sump contains an oil intake covered by a mesh filter which is connected to an oil pump then to an oil filter outside the crankcase. From there the oil is diverted to the crankshaft main bearings and valve train. The crankcase contains at least one oil gallery (a conduit inside a crankcase wall) to which oil is introduced from the oil filter. The main bearings contain a groove around all or half of their circumference; the oil enters these grooves from channels connected to the oil gallery. The crankshaft has drillings that take oil from these grooves and deliver it to the big end bearings. All big end bearings are lubricated this way. A single main bearing may provide oil for 0, 1 or 2 big end bearings. A similar system may be used to lubricate the piston, its gudgeon pin and the small end of its connecting rod; in this system, the connecting rod big end has a groove around the crankshaft and a drilling connected to the groove which distributes oil from there to the bottom of the piston and from there to the cylinder.
Other systems are also used to lubricate the cylinder and piston. The connecting rod may have a nozzle to throw an oil jet to the cylinder and bottom of the piston. That nozzle is in movement relative to the cylinder it lubricates, but always pointed towards it or the corresponding piston.
Typically forced lubrication systems have a lubricant flow higher than what is required to lubricate satisfactorily, in order to assist with cooling. Specifically, the lubricant system helps to move heat from the hot engine parts to the cooling liquid (in water-cooled engines) or fins (in air-cooled engines) which then transfer it to the environment. The lubricant must be designed to be chemically stable and maintain suitable viscosities within the temperature range it encounters in the engine.
Common cylinder configurations include the straight or inline configuration , the more compact V configuration , and the wider but smoother flat or boxer configuration . Aircraft engines can also adopt a radial configuration , which allows more effective cooling. More unusual configurations such as the H , U , X , and W have also been used.
Multiple cylinder engines have their valve train and crankshaft configured so that pistons are at different parts of their cycle. It is desirable to have the pistons' cycles uniformly spaced (this is called even firing ) especially in forced induction engines; this reduces torque pulsations [ 42 ] and makes inline engines with more than 3 cylinders statically balanced in their primary forces. However, some engine configurations require odd firing to achieve better balance than what is possible with even firing. For instance, a 4-stroke I2 engine has better balance when the angle between the crankpins is 180° because the pistons move in opposite directions and inertial forces partially cancel, but this gives an odd firing pattern where one cylinder fires 180° of crankshaft rotation after the other, then no cylinder fires for 540°. With an even firing pattern, the pistons would move in unison and the associated forces would add.
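A small sketch of the even-firing spacing discussed above: a four-stroke cycle spans 720° of crankshaft rotation, so evenly spaced firing means one ignition every 720°/N. The cylinder counts below are arbitrary examples for illustration.

```python
def even_firing_interval(cylinders: int, strokes: int = 4) -> float:
    """Crankshaft degrees between ignitions when firing is evenly spaced.

    A 4-stroke cycle spans 720 deg of crank rotation, a 2-stroke cycle 360 deg.
    """
    cycle_degrees = 360 * strokes / 2
    return cycle_degrees / cylinders

for n in (2, 4, 6):
    print(f"{n} cylinders: fire every {even_firing_interval(n):.0f} deg")
# The 180/540 deg pattern of the odd-firing I2 described above deviates from
# the even 360 deg interval this function returns for n = 2.
```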
Multiple crankshaft configurations do not necessarily need a cylinder head at all because they can instead have a piston at each end of the cylinder called an opposed piston design. Because fuel inlets and outlets are positioned at opposed ends of the cylinder, one can achieve uniflow scavenging, which, as in the four-stroke engine is efficient over a wide range of engine speeds. Thermal efficiency is improved because of a lack of cylinder heads. This design was used in the Junkers Jumo 205 diesel aircraft engine, using two crankshafts at either end of a single bank of cylinders, and most remarkably in the Napier Deltic diesel engines. These used three crankshafts to serve three banks of double-ended cylinders arranged in an equilateral triangle with the crankshafts at the corners. It was also used in single-bank locomotive engines , and is still used in marine propulsion engines and marine auxiliary generators.
Most truck and automotive diesel engines use a cycle reminiscent of a four-stroke cycle, but with temperature increase by compression causing ignition, rather than needing a separate ignition system. This variation is called the diesel cycle. In the diesel cycle, diesel fuel is injected directly into the cylinder so that combustion occurs at constant pressure, as the piston moves.
The Otto cycle is the most common cycle for most cars' internal combustion engines that use gasoline as a fuel. It consists of the same major steps as described for the four-stroke engine: Intake, compression, ignition, expansion and exhaust.
In 1879, Nicolaus Otto manufactured and sold a double expansion engine (the double and triple expansion principles had ample usage in steam engines), with two small cylinders at both sides of a low-pressure larger cylinder, where a second expansion of exhaust stroke gas took place; the owner returned it, alleging poor performance. In 1906, the concept was incorporated in a car built by EHV ( Eisenhuth Horseless Vehicle Company ); [ 43 ] and in the 21st century Ilmor designed and successfully tested a 5-stroke double expansion internal combustion engine, with high power output and low SFC (Specific Fuel Consumption). [ 44 ]
The six-stroke engine was invented in 1883. Four kinds of six-stroke engines use a regular piston in a regular cylinder (Griffin six-stroke, Bajulaz six-stroke, Velozeta six-stroke and Crower six-stroke), firing every three crankshaft revolutions. These systems capture the waste heat of the four-stroke Otto cycle with an injection of air or water.
The Beare Head and "piston charger" engines operate as opposed-piston engines , two pistons in a single cylinder, firing every two revolutions rather than every four like a four-stroke engine.
The first internal combustion engines did not compress the mixture. The first part of the piston downstroke drew in a fuel-air mixture, then the inlet valve closed and, in the remainder of the down-stroke, the fuel-air mixture fired. The exhaust valve opened for the piston upstroke. These attempts at imitating the principle of a steam engine were very inefficient. There are a number of variations of these cycles, most notably the Atkinson and Miller cycles .
Split-cycle engines separate the four strokes of intake, compression, combustion and exhaust into two separate but paired cylinders. The first cylinder is used for intake and compression. The compressed air is then transferred through a crossover passage from the compression cylinder into the second cylinder, where combustion and exhaust occur. A split-cycle engine is really an air compressor on one side with a combustion chamber on the other.
Previous split-cycle engines have had two major problems—poor breathing (volumetric efficiency) and low thermal efficiency. However, new designs are being introduced that seek to address these problems. The Scuderi Engine addresses the breathing problem by reducing the clearance between the piston and the cylinder head through various turbocharging techniques. The Scuderi design requires the use of outwardly opening valves that enable the piston to move very close to the cylinder head without the interference of the valves. Scuderi addresses the low thermal efficiency via firing after top dead center (ATDC).
Firing ATDC can be accomplished by using high-pressure air in the transfer passage to create sonic flow and high turbulence in the power cylinder.
Jet engines use a number of rows of fan blades to compress air which then enters a combustor where it is mixed with fuel (typically JP fuel) and then ignited. The burning of the fuel raises the temperature of the air which is then exhausted out of the engine creating thrust. A modern turbofan engine can operate at as high as 48% efficiency. [ 45 ]
There are six sections to a turbofan engine:
A gas turbine compresses air and uses it to turn a turbine . It is essentially a jet engine which directs its output to a shaft. There are three stages to a turbine: 1) air is drawn through a compressor where the temperature rises due to compression, 2) fuel is added in the combustor , and 3) hot air is exhausted through turbine blades which rotate a shaft connected to the compressor.
A gas turbine is a rotary machine similar in principle to a steam turbine and it consists of three main components: a compressor, a combustion chamber, and a turbine. The temperature of the air, after being compressed in the compressor, is increased by burning fuel in it. The heated air and the products of combustion expand in a turbine, producing work output. About 2 ⁄ 3 of the work drives the compressor: the rest (about 1 ⁄ 3 ) is available as useful work output. [ 47 ]
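A back-of-the-envelope sketch of the work split quoted above: if roughly two thirds of the turbine's gross work drives the compressor, only the remaining third is available at the shaft. The 30 MW gross figure is an assumed example, not a value from the text.

```python
def net_shaft_output(gross_turbine_work_mw: float, compressor_fraction: float = 2 / 3) -> float:
    """Useful shaft work left after the compressor takes its share of the turbine work."""
    return gross_turbine_work_mw * (1.0 - compressor_fraction)

# Assumed 30 MW of gross turbine work, for illustration.
print(f"{net_shaft_output(30.0):.0f} MW net")  # -> 10 MW, about one third of the gross work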
Gas turbines are among the most efficient internal combustion engines. The General Electric 7HA and 9HA turbine combined cycle electrical plants are rated at over 61% efficiency. [ 48 ]
A gas turbine is a rotary machine somewhat similar in principle to a steam turbine. It consists of three main components: compressor, combustion chamber, and turbine. The air is compressed by the compressor where a temperature rise occurs. The temperature of the compressed air is further increased by combustion of injected fuel in the combustion chamber which expands the air. This energy rotates the turbine which powers the compressor via a mechanical coupling. The hot gases are then exhausted to provide thrust.
Gas turbine cycle engines employ a continuous combustion system where compression, combustion, and expansion occur simultaneously at different places in the engine—giving continuous power. Notably, the combustion takes place at constant pressure, rather than with the Otto cycle, constant volume.
The Wankel engine (rotary engine) does not have piston strokes. It operates with the same separation of phases as the four-stroke engine with the phases taking place in separate locations in the engine. In thermodynamic terms it follows the Otto engine cycle, so may be thought of as a "four-phase" engine. While it is true that three power strokes typically occur per rotor revolution, due to the 3:1 revolution ratio of the rotor to the eccentric shaft, only one power stroke per shaft revolution actually occurs. The drive (eccentric) shaft rotates once during every power stroke instead of twice (crankshaft), as in the Otto cycle, giving it a greater power-to-weight ratio than piston engines. This type of engine was most notably used in the Mazda RX-8 , the earlier RX-7 , and other vehicle models. The engine is also used in unmanned aerial vehicles, where the small size and weight and the high power-to-weight ratio are advantageous.
Forced induction is the process of delivering compressed air to the intake of an internal combustion engine. A forced induction engine uses a gas compressor to increase the pressure, temperature and density of the air . An engine without forced induction is considered a naturally aspirated engine .
Forced induction is used in the automotive and aviation industry to increase engine power and efficiency. It particularly helps aviation engines, as they need to operate at high altitude.
Forced induction is achieved by a supercharger , where the compressor is directly powered from the engine shaft or, in the turbocharger , from a turbine powered by the engine exhaust.
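A rough sketch of why forced induction raises power: treating air as an ideal gas, charge density scales with intake pressure and inversely with absolute temperature, so boosting pressure packs more air, and therefore more fuel, into each intake stroke. The pressures and temperatures below are assumed illustrative values.

```python
def charge_density_ratio(boost_kpa: float, intake_temp_k: float,
                         ambient_kpa: float = 101.325, ambient_temp_k: float = 288.0) -> float:
    """Ratio of boosted charge density to naturally aspirated density (ideal-gas approximation)."""
    return (boost_kpa / ambient_kpa) * (ambient_temp_k / intake_temp_k)

# Assumed example: 150 kPa absolute manifold pressure, charge heated to 320 K by compression.
print(f"{charge_density_ratio(150.0, 320.0):.2f}x the naturally aspirated air mass")  # ~1.33x
```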
All internal combustion engines depend on combustion of a chemical fuel , typically with oxygen from the air (though it is possible to inject nitrous oxide to do more of the same thing and gain a power boost). The combustion process typically results in the production of a great quantity of thermal energy, as well as the production of steam and carbon dioxide and other chemicals at very high temperature; the temperature reached is determined by the chemical make up of the fuel and oxidizers (see stoichiometry ), as well as by the compression and other factors.
The most common modern fuels are made up of hydrocarbons and are derived mostly from fossil fuels ( petroleum ). Fossil fuels include diesel fuel , gasoline and petroleum gas , and the rarer use of propane . Except for the fuel delivery components, most internal combustion engines that are designed for gasoline use can run on natural gas or liquefied petroleum gases without major modifications. Large diesels can run with air mixed with gases and a pilot diesel fuel ignition injection. Liquid and gaseous biofuels , such as ethanol and biodiesel (a form of diesel fuel that is produced from crops that yield triglycerides such as soybean oil), can also be used. Engines with appropriate modifications can also run on hydrogen gas, wood gas , or charcoal gas , as well as from so-called producer gas made from other convenient biomass. Experiments have also been conducted using powdered solid fuels, such as the magnesium injection cycle .
Presently, fuels used include:
Even fluidized metal powders and explosives have seen some use. Engines that use gases for fuel are called gas engines and those that use liquid hydrocarbons are called oil engines; however, gasoline engines are also often colloquially referred to as "gas engines" (" petrol engines " outside North America).
The main limitations on fuels are that the fuel must be easily transportable through the fuel system to the combustion chamber , and that it must release sufficient energy in the form of heat upon combustion to make practical use of the engine.
Diesel engines are generally heavier, noisier, and more powerful at lower speeds than gasoline engines . They are also more fuel-efficient in most circumstances and are used in heavy road vehicles, some automobiles (increasingly so for their increased fuel efficiency over gasoline engines), ships, railway locomotives , and light aircraft . Gasoline engines are used in most other road vehicles including most cars, motorcycles , and mopeds . In Europe , sophisticated diesel-engined cars have taken over about 45% of the market since the 1990s. There are also engines that run on hydrogen , methanol , ethanol , liquefied petroleum gas (LPG), biodiesel , paraffin and tractor vaporizing oil (TVO).
Hydrogen could eventually replace conventional fossil fuels in traditional internal combustion engines. Alternatively, fuel cell technology may come to deliver on its promise and the use of internal combustion engines could even be phased out.
Although there are multiple ways of producing free hydrogen, those methods require converting combustible molecules into hydrogen or consuming electric energy. Unless that electricity is produced from a renewable source—and is not required for other purposes—hydrogen does not solve any energy crisis . In many situations the disadvantage of hydrogen, relative to carbon fuels, is its storage . Liquid hydrogen has extremely low density (14 times lower than water) and requires extensive insulation—whilst gaseous hydrogen requires heavy tankage. Even when liquefied, hydrogen has a higher specific energy than gasoline, but its volumetric energy density is still roughly five times lower. However, the energy density of hydrogen is considerably higher than that of electric batteries, making it a serious contender as an energy carrier to replace fossil fuels. The 'Hydrogen on Demand' process (see direct borohydride fuel cell ) creates hydrogen as needed, but has other issues, such as the high price of the sodium borohydride that is the raw material.
Since air is plentiful at the surface of the earth, the oxidizer is typically atmospheric oxygen, which has the advantage of not being stored within the vehicle. This increases the power-to-weight and power-to-volume ratios. Other materials are used for special purposes, often to increase power output or to allow operation under water or in space.
Cooling is required to remove excessive heat—high temperature can cause engine failure, usually from wear (due to high-temperature-induced failure of lubrication), cracking or warping. The two most common forms of engine cooling are air cooling and water cooling . Most modern automotive engines are both water and air-cooled, as the water/liquid-coolant is carried to air-cooled fins and/or fans, whereas larger engines may be singularly water-cooled as they are stationary and have a constant supply of water through water-mains or fresh-water, while most power tool engines and other small engines are air-cooled. Some engines (air or water-cooled) also have an oil cooler . In some engines, especially for turbine engine blade cooling and liquid rocket engine cooling , fuel is used as a coolant, as it is simultaneously preheated before injecting it into a combustion chamber.
Internal combustion engines must have their cycles started. In reciprocating engines this is accomplished by turning the crankshaft (or, in a Wankel engine, the rotor shaft), which induces the cycles of intake, compression, combustion, and exhaust. The first engines were started with a turn of their flywheels , while the first vehicle (the Daimler Reitwagen) was started with a hand crank. All ICE-engined automobiles were started with hand cranks until Charles Kettering developed the electric starter for automobiles. [ 51 ] This method is now the most widely used, even among non-automobiles.
As diesel engines have become larger and their mechanisms heavier, air starters have come into use. [ 52 ] This is due to the lack of torque in electric starters. Air starters work by pumping compressed air into the cylinders of an engine to start it turning.
Two-wheeled vehicles may have their engines started in one of four ways:
There are also starters where a spring is compressed by a crank motion and then used to start an engine.
Some small engines use a pull-rope mechanism called "recoil starting", as the rope rewinds itself after it has been pulled out to start the engine. This method is commonly used in push lawn mowers and other settings where only a small amount of torque is needed to turn an engine over.
Turbine engines are frequently started by an electric motor or by compressed air.
Engine types vary greatly in a number of different ways:
Once ignited and burnt, the combustion products—hot gases—have more available thermal energy than the original compressed fuel-air mixture (which had higher chemical energy ). This available energy is manifested as a higher temperature and pressure that can be converted into kinetic energy by the engine. In a reciprocating engine, the high-pressure gases inside the cylinders drive the engine's pistons.
Once the available energy has been removed, the remaining hot gases are vented (often by opening a valve or exposing the exhaust outlet) and this allows the piston to return to its previous position (top dead center, or TDC). The piston can then proceed to the next phase of its cycle, which varies between engines. Any thermal energy that is not translated into work is normally considered a waste product and is removed from the engine either by an air or liquid cooling system.
Internal combustion engines are considered heat engines (since the release of chemical energy in combustion has the same effect as heat transfer into the engine) and as such their theoretical efficiency can be approximated by idealized thermodynamic cycles . The thermal efficiency of a theoretical cycle cannot exceed that of the Carnot cycle , whose efficiency is determined by the difference between the lower and upper operating temperatures of the engine. The upper operating temperature of an engine is limited by two main factors: the thermal operating limits of the materials, and the auto-ignition resistance of the fuel. All metals and alloys have a thermal operating limit, and there is significant research into ceramic materials that can be made with greater thermal stability and desirable structural properties. Higher thermal stability allows for a greater temperature difference between the lower (ambient) and upper operating temperatures, hence greater thermodynamic efficiency. Also, as the cylinder temperature rises, the fuel becomes more prone to auto-ignition. This occurs when the cylinder temperature nears the auto-ignition temperature of the charge. At this point, ignition can spontaneously occur before the spark plug fires, causing excessive cylinder pressures. Auto-ignition can be mitigated by using fuels with high auto-ignition resistance ( octane rating ); however, this still puts an upper bound on the allowable peak cylinder temperature.
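The Carnot bound mentioned above depends only on the absolute temperatures of the hot and cold reservoirs. The sketch below uses assumed temperatures purely for illustration; they are not figures from the text.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper bound on thermal efficiency for any heat engine operating between two temperatures."""
    return 1.0 - t_cold_k / t_hot_k

# Assumed peak combustion temperature of 2300 K and ambient of 300 K.
print(f"{carnot_efficiency(2300.0, 300.0):.0%}")  # -> ~87%; real engines fall far short of this ideal limit
```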
The thermodynamic limits assume that the engine is operating under ideal conditions: a frictionless world, ideal gases, perfect insulators, and operation for infinite time. Real world applications introduce complexities that reduce efficiency. For example, a real engine runs best at a specific load, termed its power band . The engine in a car cruising on a highway is usually operating significantly below its ideal load, because it is designed for the higher loads required for rapid acceleration. [ citation needed ] In addition, factors such as wind resistance reduce overall system efficiency. Vehicle fuel economy is measured in miles per gallon or in liters per 100 kilometers. The volume of hydrocarbon assumes a standard energy content.
Even when aided with turbochargers and stock efficiency aids, most engines retain an average efficiency of about 18–20%. [ 53 ] However, the latest technologies in Formula One engines have seen a boost in thermal efficiency past 50%. [ 54 ] There are many inventions aimed at increasing the efficiency of IC engines. In general, practical engines are always compromised by trade-offs between different properties such as efficiency, weight, power, heat, response, exhaust emissions, or noise. Sometimes economy also plays a role in not only the cost of manufacturing the engine itself, but also manufacturing and distributing the fuel. Increasing the engine's efficiency brings better fuel economy but only if the fuel cost per energy content is the same.
For stationary and shaft engines including propeller engines, fuel consumption is measured by calculating the brake specific fuel consumption , which measures the mass flow rate of fuel consumption divided by the power produced.
For internal combustion engines in the form of jet engines, the power output varies drastically with airspeed and a less variable measure is used: thrust-specific fuel consumption (TSFC), the mass of propellant needed to produce a given impulse, measured either in pounds of propellant per hour per pound-force of thrust or in grams of propellant per kilonewton-second of impulse.
For rockets, TSFC can be used, but typically other equivalent measures are traditionally used, such as specific impulse and effective exhaust velocity .
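A minimal sketch of the two consumption metrics just described; the fuel flow, power and thrust numbers below are assumed example values, not data from the text.

```python
def brake_specific_fuel_consumption(fuel_kg_per_h: float, power_kw: float) -> float:
    """BSFC in grams of fuel per kilowatt-hour of brake power."""
    return fuel_kg_per_h * 1000.0 / power_kw

def thrust_specific_fuel_consumption(fuel_g_per_s: float, thrust_kn: float) -> float:
    """TSFC in grams of fuel per kilonewton-second of thrust."""
    return fuel_g_per_s / thrust_kn

# Assumed examples: a 100 kW shaft engine burning 24 kg/h, and a jet making 50 kN of thrust on 700 g/s of fuel.
print(f"BSFC: {brake_specific_fuel_consumption(24.0, 100.0):.0f} g/kWh")       # -> 240 g/kWh
print(f"TSFC: {thrust_specific_fuel_consumption(700.0, 50.0):.1f} g/(kN*s)")   # -> 14.0
```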
Internal combustion engines such as reciprocating internal combustion engines produce air pollution emissions, due to incomplete combustion of carbonaceous fuel. The main derivatives of the process are carbon dioxide CO 2 , water and some soot —also called particulate matter (PM). [ 55 ] The effects of inhaling particulate matter have been studied in humans and animals and include asthma, lung cancer, cardiovascular issues, and premature death. [ 56 ] There are, however, some additional products of the combustion process that include nitrogen oxides and sulfur and some uncombusted hydrocarbons, depending on the operating conditions and the fuel-air ratio.
Carbon dioxide emissions from internal combustion engines (particularly ones using fossil fuels such as gasoline and diesel) contribute to human-induced climate change . Increasing the engine's fuel efficiency can reduce, but not eliminate, the amount of CO 2 emissions as carbon-based fuel combustion produces CO 2 . Since removing CO 2 from engine exhaust is impractical, there is increasing interest in alternatives. Sustainable fuels such as biofuels , synfuels , and electric motors powered by batteries are examples.
Not all of the fuel is completely consumed by the combustion process. A small amount of fuel is present after combustion, and some of it reacts to form oxygenates, such as formaldehyde or acetaldehyde , or hydrocarbons not originally present in the input fuel mixture. Incomplete combustion usually results from insufficient oxygen to achieve the perfect stoichiometric ratio. The flame is "quenched" by the relatively cool cylinder walls, leaving behind unreacted fuel that is expelled with the exhaust. When running at lower speeds, quenching is commonly observed in diesel (compression ignition) engines that run on natural gas. Quenching reduces efficiency and increases knocking, sometimes causing the engine to stall. Incomplete combustion also leads to the production of carbon monoxide (CO). Further chemicals released are benzene and 1,3-butadiene that are also hazardous air pollutants .
Increasing the amount of air in the engine reduces emissions of incomplete combustion products, but also promotes reaction between oxygen and nitrogen in the air to produce nitrogen oxides ( NO x ). NO x is hazardous to both plant and animal health, and leads to the production of ozone ( O 3 ). Ozone is not emitted directly; rather, it is a secondary air pollutant, produced in the atmosphere by the reaction of NO x and volatile organic compounds in the presence of sunlight. Ground-level ozone is harmful to human health and the environment. Though the same chemical substance, ground-level ozone should not be confused with stratospheric ozone , or the ozone layer , which protects the earth from harmful ultraviolet rays.
Carbon fuels containing sulfur produce sulfur monoxides (SO) and sulfur dioxide ( SO 2 ) contributing to acid rain .
In the United States, nitrogen oxides, PM , carbon monoxide, sulfur dioxide, and ozone, are regulated as criteria air pollutants under the Clean Air Act to levels where human health and welfare are protected. Other pollutants, such as benzene and 1,3-butadiene, are regulated as hazardous air pollutants whose emissions must be lowered as much as possible depending on technological and practical considerations.
NO x , carbon monoxide and other pollutants are frequently controlled via exhaust gas recirculation which returns some of the exhaust back into the engine intake. Catalytic converters are used to convert exhaust chemicals to CO 2 (a greenhouse gas ), H 2 O (water vapour, also a greenhouse gas) and N 2 (nitrogen).
The emission standards used by many countries have special requirements for non-road engines which are used by equipment and vehicles that are not operated on the public roadways. The standards are separated from the road vehicles. [ 57 ]
Significant contributions to noise pollution are made by internal combustion engines. Automobile and truck traffic operating on highways and street systems produce noise, as do aircraft flights due to jet noise, particularly supersonic-capable aircraft. Rocket engines create the most intense noise.
Internal combustion engines continue to consume fuel and emit pollutants while idling. Idling is reduced by stop-start systems .
The mass of carbon dioxide released when one litre of diesel fuel (or gasoline) is combusted can be estimated as follows: [ 58 ]
As a good approximation, the chemical formula of diesel is $\mathrm{C}_n\mathrm{H}_{2n}$. In reality, diesel is a mixture of different molecules. As carbon has a molar mass of 12 g/mol and hydrogen (atomic) has a molar mass of about 1 g/mol, the fraction by weight of carbon in diesel is roughly $\tfrac{12}{14}$.
The reaction of diesel combustion is given by:
$2\,\mathrm{C}_n\mathrm{H}_{2n} + 3n\,\mathrm{O}_2 \rightarrow 2n\,\mathrm{CO}_2 + 2n\,\mathrm{H}_2\mathrm{O}$
Carbon dioxide has a molar mass of 44 g/mol as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol). So 12 g of carbon yields 44 g of carbon dioxide.
Diesel has a density of 0.838 kg per litre.
Putting everything together the mass of carbon dioxide that is produced by burning 1 litre of diesel can be calculated as:
$0.838\,\mathrm{kg/L} \cdot \tfrac{12}{14} \cdot \tfrac{44}{12} = 2.63\,\mathrm{kg/L}$
The figure obtained with this estimation is close to the values found in the literature.
For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14, the estimated value of carbon dioxide emission from burning 1 litre of gasoline is:
$0.75\,\mathrm{kg/L} \cdot \tfrac{6 \cdot 12}{6 \cdot 12 + 14 \cdot 1} \cdot \tfrac{44}{12} = 2.3\,\mathrm{kg/L}$
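The two estimates above can be reproduced with a short calculation; the densities and carbon mass fractions used here are the ones given in the text.

```python
def co2_per_litre(density_kg_per_l: float, carbon_mass_fraction: float) -> float:
    """Mass of CO2 (kg) from burning one litre of fuel: each 12 g of carbon yields 44 g of CO2."""
    return density_kg_per_l * carbon_mass_fraction * (44.0 / 12.0)

print(f"diesel:   {co2_per_litre(0.838, 12 / 14):.2f} kg CO2 per litre")                     # ~2.63
print(f"gasoline: {co2_per_litre(0.75, (6 * 12) / (6 * 12 + 14 * 1)):.2f} kg CO2 per litre")  # ~2.30
```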
The term parasitic loss is often applied to devices that take energy from the engine in order to enhance the engine's ability to create more energy or convert energy to motion. In the internal combustion engine, almost every mechanical component, including the drivetrain , causes parasitic loss and could thus be characterized as a parasitic load.
Bearings , oil pumps, piston rings , valve springs, flywheels , transmissions , driveshafts , and differentials all act as parasitic loads that rob the system of power. These parasitic loads can be divided into two categories: those inherent to the working of the engine and those drivetrain losses incurred in the systems that transfer power from the engine to the road (such as the transmission, driveshaft, differentials and axles).
For example, the former category (engine parasitic loads) includes the oil pump used to lubricate the engine, which is a necessary parasite that consumes power from the engine (its host). Another example of an engine parasitic load is a supercharger , which derives its power from the engine and creates more power for the engine. The power that the supercharger consumes is parasitic loss and is usually expressed in kilowatt or horsepower . While the power that the supercharger consumes in comparison to what it generates is small, it is still measurable or calculable. One of the desirable features of a turbocharger over a supercharger is the lower parasitic loss of the former. [ 59 ]
Drivetrain parasitic losses include both steady state and dynamic loads. Steady state loads occur at constant speeds and may originate in discrete components such as the torque converter , the transmission oil pump , and/or clutch drag, and in seal/bearing drag, churning of lubricant and gear windage / friction found throughout the system. Dynamic loads occur under acceleration and are caused by inertia of rotating components and/or increased friction. [ 60 ]
While rules of thumb such as a 15% power loss from drivetrain parasitic loads have been commonly repeated, the actual loss of energy due to parasitic loads varies between systems. It can be influenced by powertrain design, lubricant type and temperature and many other factors. [ 60 ] [ 61 ] In automobiles, drivetrain loss can be quantified by measuring the difference between power measured by an engine dynamometer and a chassis dynamometer . However, this method is primarily useful for measuring steady state loads and may not accurately reflect losses due to dynamic loads. [ 60 ] More advanced methods can be used in a laboratory setting, such as measuring in-cylinder pressure, flow rates and temperatures at certain points, and testing of individual parts or sub-assemblies to determine friction and pumping losses. [ 62 ]
For example, in a dynamometer test by Hot Rod magazine , a Ford Mustang equipped with a modified 357ci small-block Ford V8 engine and an automatic transmission had a measured drivetrain power loss averaging 33%. In the same test, a Buick equipped with a modified 455ci V8 engine and a 4-speed manual transmission was measured to have an average drivetrain power loss of 21%. [ 63 ]
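The percentages in the dynamometer comparison above follow directly from the engine and chassis readings. The power figures in this sketch are assumed examples, not the magazine's data.

```python
def drivetrain_loss_percent(engine_dyno_hp: float, chassis_dyno_hp: float) -> float:
    """Fraction of engine power lost between the flywheel and the drive wheels, as a percentage."""
    return (engine_dyno_hp - chassis_dyno_hp) / engine_dyno_hp * 100.0

# Assumed example readings: 400 hp at the flywheel, 300 hp at the wheels.
print(f"{drivetrain_loss_percent(400.0, 300.0):.0f}% drivetrain loss")  # -> 25%
```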
Laboratory testing of a heavy-duty diesel engine determined that 1.3% of the fuel energy input was lost to parasitic loads of engine accessories such as water and oil pumps. [ 62 ]
Automotive engineers and tuners commonly make design choices that reduce parasitic loads in order to improve efficiency and power output. These may involve the choice of major engine components or systems, such as the use of a dry sump lubrication system over a wet sump system. Alternatively, this can be effected through substitution of minor components available as aftermarket modifications, such as exchanging a directly engine-driven fan for one equipped with a fan clutch or an electric fan. [ 63 ] Another modification to reduce parasitic loss, usually seen in track-only cars, is the replacement of an engine-driven water pump with an electric water pump. [ 64 ] The reduction in parasitic loss from these changes may be due to reduced friction or many other variables that cause the design to be more efficient. [ citation needed ] | https://en.wikipedia.org/wiki/Internal_combustion_engine
An internal control region is a sequence of DNA located within the coding region of eukaryotic genes that binds regulatory proteins such as activators or repressors . This region can recruit RNA polymerase or contribute to splicing .
Internal conversion is an atomic decay process where an excited nucleus interacts electromagnetically with one of the orbital electrons of an atom. This causes the electron to be emitted (ejected) from the atom. [ 1 ] [ 2 ] Thus, in internal conversion (often abbreviated IC), a high-energy electron is emitted from the excited atom, but not from the nucleus. For this reason, the high-speed electrons resulting from internal conversion are not called beta particles , since the latter come from beta decay , where they are newly created in the nuclear decay process.
IC is possible whenever gamma decay is possible, except if the atom is fully ionized . In IC, the atomic number does not change, and thus there is no transmutation of one element to another. Also, neutrinos and the weak force are not involved in IC.
Since an electron is lost from the atom, a vacancy appears in one of the electron shells, which is subsequently filled by other electrons that descend into that empty, lower energy level, emitting characteristic X-ray (s), Auger electron (s), or both in the process. The atom thus emits high-energy electrons and X-ray photons, none of which originate in the nucleus. The atom supplies the energy needed to eject the electron, which in turn causes the subsequent events and the other emissions.
Since primary electrons from IC carry a fixed (large) part of the characteristic decay energy, they have a discrete energy spectrum, rather than the spread (continuous) spectrum characteristic of beta particles . Whereas the energy spectrum of beta particles plots as a broad hump, the energy spectrum of internally converted electrons plots as a single sharp peak (see example below).
In the quantum model of the electron, there is non-zero probability of finding the electron within the nucleus. In internal conversion, the wavefunction of an inner shell electron (usually an s electron) penetrates the nucleus. When this happens, the electron may couple to an excited energy state of the nucleus and take the energy of the nuclear transition directly, without an intermediate gamma ray being first produced. The kinetic energy of the emitted electron is equal to the transition energy in the nucleus, minus the binding energy of the electron to the atom.
Most IC electrons come from the K shell (the 1s state), as these two electrons have the highest probability of being found within the nucleus. However, the s states in the L, M, and N shells (i.e., the 2s, 3s, and 4s states) are also able to couple to the nuclear fields and cause IC electron ejections from those shells (called L or M or N internal conversion). Ratios of K-shell to L-, M-, or N-shell internal conversion probabilities have been tabulated for various nuclides. [ 3 ]
An amount of energy exceeding the atomic binding energy of the s electron must be supplied to that electron in order to eject it from the atom to result in IC; that is to say, internal conversion cannot happen if the decay energy of the nucleus is less than a certain threshold.
Though s electrons are more likely to undergo IC because of their superior nuclear penetration compared to electrons with greater orbital angular momentum, spectral studies show that p electrons (from the L shell and higher) are occasionally ejected in the IC process. There are also a few radionuclides in which the decay energy is not sufficient to convert (eject) a 1s (K shell) electron; to decay by internal conversion, these nuclides must eject electrons from the L, M, or N shells (i.e., 2s, 3s, or 4s electrons), whose binding energies are lower.
After the IC electron is emitted, the atom is left with a vacancy in one of its electron shells, usually an inner one. This hole will be filled with an electron from one of the higher shells, which causes another outer electron to fill its place in turn, causing a cascade. Consequently, one or more characteristic X-rays or Auger electrons will be emitted as the remaining electrons in the atom cascade down to fill the vacancies.
The decay scheme on the left shows that 203 Hg produces a continuous beta spectrum with maximum energy 214 keV, that leads to an excited state of the daughter nucleus 203 Tl. This state decays very quickly (within 2.8×10 −10 s) to the ground state of 203 Tl, emitting a gamma quantum of 279 keV.
The figure on the right shows the electron spectrum of 203 Hg, measured by means of a magnetic spectrometer . It includes the continuous beta spectrum and K-, L-, and M-lines due to internal conversion. Since the binding energy of the K electrons in 203 Tl is 85 keV, the K line has an energy of 279 − 85 = 194 keV. Due to lesser binding energies, the L- and M-lines have higher energies. Due to the finite energy resolution of the spectrometer, the "lines" have a Gaussian shape of finite width.
Internal conversion is favored whenever the energy available for a gamma transition is small, and it is also the primary mode of de-excitation for 0 + →0 + (i.e. E0) transitions. The 0 + →0 + transitions occur where an excited nucleus has zero-spin and positive parity , and decays to a ground state which also has zero-spin and positive parity (such as all nuclides with even number of protons and neutrons). In such cases, de-excitation cannot take place by emission of a gamma ray, since this would violate conservation of angular momentum, hence other mechanisms like IC predominate. This also shows that internal conversion (contrary to its name) is not a two-step process where a gamma ray would be first emitted and then converted.
The competition between IC and gamma decay is quantified in the form of the internal conversion coefficient which is defined as α = e / γ {\displaystyle \alpha =e/{\gamma }} where e {\displaystyle e} is the rate of conversion electrons and γ {\displaystyle \gamma } is the rate of gamma-ray emission observed from a decaying nucleus. For example, in the decay of the excited state at 35 keV of 125 Te (which is produced by the decay of 125 I ), 7% of decays emit energy as a gamma ray, while 93% release energy as conversion electrons. Therefore, this excited state of 125 Te has an IC coefficient of α = 93 / 7 = 13.3 {\displaystyle \alpha =93/7=13.3} .
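A minimal sketch of the coefficient defined above, reproducing the 125 Te example (the percentages are taken from the text; the function name is illustrative only):

```python
def internal_conversion_coefficient(conversion_electron_rate, gamma_rate):
    """alpha = (rate of conversion electrons) / (rate of gamma-ray emission)."""
    return conversion_electron_rate / gamma_rate

# 35 keV excited state of Te-125: 93% conversion electrons, 7% gamma rays (from the example above)
alpha = internal_conversion_coefficient(93.0, 7.0)
print(f"alpha = {alpha:.1f}")  # ~13.3
```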
For increasing atomic number (Z) and decreasing gamma-ray energy, IC coefficients increase. For example, calculated IC coefficients for electric dipole (E1) transitions, for Z = 40, 60, and 80, are shown in the figure. [ 4 ]
The energy of the emitted gamma ray is a precise measure of the difference in energy between the excited states of the decaying nucleus. In the case of conversion electrons, the binding energy must also be taken into account: The energy of a conversion electron is given as E = ( E i − E f ) − E B {\displaystyle E=(E_{i}-E_{f})-E_{B}} , where E i {\displaystyle E_{i}} and E f {\displaystyle E_{f}} are the energies of the nucleus in its initial and final states, respectively, while E B {\displaystyle E_{B}} is the binding energy of the electron.
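A short sketch of the energy bookkeeping described above, using the 203 Hg → 203 Tl example from the spectrum discussion (the 279 keV transition energy and 85 keV K-shell binding energy come from the text; the L-shell binding energy shown is a hypothetical placeholder):

```python
def conversion_electron_energy(transition_energy_kev, binding_energy_kev):
    """E = (E_i - E_f) - E_B: nuclear transition energy minus the electron's atomic binding energy."""
    return transition_energy_kev - binding_energy_kev

transition_kev = 279.0        # 203Tl excited-state to ground-state transition energy (from the text)
k_shell_binding_kev = 85.0    # K-shell binding energy in 203Tl (from the text)
l_shell_binding_kev = 15.0    # hypothetical placeholder for an L-shell binding energy

print(conversion_electron_energy(transition_kev, k_shell_binding_kev))  # 194.0 keV (K line)
print(conversion_electron_energy(transition_kev, l_shell_binding_kev))  # 264.0 keV (illustrative L line)
```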
Nuclei with zero spin and high excitation energies (more than about 1.022 MeV) also cannot rid themselves of energy by (single) gamma emission, because of the constraint imposed by conservation of angular momentum, but they do have enough decay energy to decay by pair production . [ 5 ] In this type of decay, an electron and positron are both emitted from the atom at the same time, and conservation of angular momentum is satisfied by having these two product particles spin in opposite directions.
IC should not be confused with the similar photoelectric effect . When a gamma ray emitted by the nucleus of an atom hits another atom, it may be absorbed producing a photoelectron of well-defined energy (this used to be called "external conversion"). In IC, however, the process happens within one atom, and without a real intermediate gamma ray.
Just as an atom may produce an IC electron instead of a gamma ray if energy is available from within the nucleus, so an atom may produce an Auger electron instead of an X-ray if an electron is missing from one of the low-lying electron shells. (The first process can even precipitate the second one.) Like IC electrons, Auger electrons have a discrete energy, resulting in a sharp energy peak in the spectrum.
Electron capture also involves an inner shell electron, which in this case is captured by the nucleus (changing the atomic number) and leaves the atom (not the nucleus) in an excited state. The atom missing an inner electron can relax by a cascade of X-ray emissions as higher energy electrons in the atom fall to fill the vacancy left in the electron cloud by the captured electron. Such atoms also typically exhibit Auger electron emission. Electron capture, like beta decay, also typically results in excited atomic nuclei, which may then relax to the state of lowest nuclear energy by any of the methods permitted by spin constraints, including gamma decay and internal conversion.
Internal conversion is a transition from a higher to a lower electronic state in a molecule or atom. [ 1 ] It is sometimes called "radiationless de-excitation", because no photons are emitted. It differs from intersystem crossing in that, while both are radiationless methods of de-excitation, the molecular spin state for internal conversion remains the same, whereas it changes for intersystem crossing.
The energy of the electronically excited state is given off to vibrational modes of the molecule. The excitation energy is transformed into heat.
A classic example of this process is the fluorescence of quinine sulfate , which can be quenched by the use of various halide salts . [ citation needed ] The excited molecule can then de-excite by increasing the thermal energy of the surrounding solvated ions .
Several natural molecules perform fast internal conversion. The ability to transform the excitation energy of a photon into heat can be a crucial property for photoprotection by molecules such as melanin . [ 2 ] Fast internal conversion reduces the excited state lifetime, and thereby prevents bimolecular reactions. Bimolecular electron transfer always produces reactive chemical species ( free radicals ). [ citation needed ] Nucleic acids (specifically single, free nucleotides, not those bound in a DNA or RNA strand) have an extremely short excited-state lifetime because of fast internal conversion. [ 3 ]
Both melanin and DNA have some of the fastest internal conversion rates. [ citation needed ]
In applications that make use of bimolecular electron transfer the internal conversion is undesirable. For example, it is advantageous to have a long-lived excited state in Grätzel cells (Dye-sensitized solar cells). [ citation needed ] | https://en.wikipedia.org/wiki/Internal_conversion_(chemistry) |
In nuclear physics , the internal conversion coefficient describes the rate of internal conversion .
The internal conversion coefficient may be empirically determined by the following formula: α = number of de-excitations via electron emission number of de-excitations via gamma-ray emission {\displaystyle \alpha ={\frac {\text{number of de-excitations via electron emission}}{\text{number of de-excitations via gamma-ray emission}}}}
Because single gamma-ray emission is forbidden for E0 (electric monopole) nuclear transitions, no equivalent coefficient can be defined for them.
There are theoretical calculations that can be used to derive internal conversion coefficients. Their accuracy is not generally under dispute, but since the quantum mechanical models they depend on only take into account electromagnetic interactions between the nucleus and electrons , there may be unforeseen effects.
Internal conversion coefficients can be looked up from tables, but this is time-consuming. Computer programs have been developed (see the BrIcc Program ) which present internal conversion coefficients quickly and easily.
Theoretical calculations of interest include those of Rösel [1] , Hager and Seltzer [2] , and Band [3] ; the Band calculation has been superseded by the Band-Raman calculation [4] , implemented as BrIcc.
The Hager-Seltzer calculations omit the M and higher-energy shells on the grounds (usually valid) that those orbitals have little electron density at the nucleus and can be neglected. To a first approximation this assumption is valid, as confirmed by comparing internal conversion coefficients for several isotopes for transitions of about 100 keV.
The Band and Band-Raman calculations assume that the M shell may contribute to internal conversion to a non-negligible extent, and they incorporate a general term (called "N+") that accounts for the small effect of any higher shells. The Rösel calculation works like the Band calculation but does not assume that all shells contribute, and so it generally terminates at the N shell.
Additionally, the Band-Raman calculation can now consider ("frozen orbitals") or neglect ("no hole") the effect of the electron vacancy; the frozen-orbitals approximation is considered generally superior. [5] | https://en.wikipedia.org/wiki/Internal_conversion_coefficient |
An internal drainage board ( IDB ) is a type of operating authority which is established in areas of special drainage need in England and Wales with permissive powers to undertake work to secure clean water drainage and water level management within drainage districts . The area of an IDB is not determined by county or metropolitan council boundaries, but by water catchment areas within a given region. IDBs are geographically concentrated in the Broads , Fens in East Anglia and Lincolnshire , Somerset Levels and Yorkshire .
In comparison with public bodies in other countries, IDBs are most similar to the Waterschappen of the Netherlands , Consorzi di bonifica e irrigazione of Italy , wateringen of Flanders and Northern France, Watershed Districts of Minnesota , United States and Marsh Bodies of Nova Scotia , Canada.
Much of their work involves the maintenance of rivers, drainage channels ( rhynes ), ordinary watercourses , pumping stations and other critical infrastructure , facilitating drainage of new developments, the ecological conservation and enhancement of watercourses , monitoring and advising on planning applications and making sure that any development is carried out in line with legislation ( NPPF ). IDBs are not responsible for watercourses designated as main rivers within their drainage districts ; the supervision of these watercourses is undertaken by the Environment Agency .
The precursors to internal drainage boards date back to 1252; however, the majority of today's IDBs were established by the national government following the passing of the Land Drainage Act 1930 and today predominantly operate under the Land Drainage Act 1991 [ 1 ] under which, an IDB is required to exercise a general supervision over all matters relating to water level management of land within its district. Some IDBs may also have other duties, powers and responsibilities under specific legislation for the district (for instance the Middle Level Commissioners are also a navigation authority). IDBs are responsible to Defra from whom all legislation/regulations affecting them are issued. The work of an IDB is closely linked with that of the Environment Agency which has a range of functions providing a supervisory role over them.
Defra brought IDBs under the jurisdiction of the Local Government Ombudsman (LGO) from 1 April 2004, and introduced a model complaints procedure for IDBs to operate. This move was aimed to increase the accountability of IDBs to the general public who have an interest in the way that IDBs are run and operate by providing an independent means of review. At this time Defra also revised and re-issued model statutory rules and procedures under which IDBs operate. [ 2 ]
There are 112 internal drainage boards in England as of 2018 [update] , covering 1.2 million hectares (9.7% of England's total land area). Areas around The Wash , the Lincolnshire Coast, the lower reaches of the Trent and the Yorkshire Ouse , the Somerset Levels and the Fens have concentrations of adjacent IDBs covering broad areas of lowland. In other parts of the country IDBs stretch in narrow ‘fingers’ up river valleys, separated by less low-lying areas, especially in Norfolk and Suffolk , Sussex , Kent , West Yorkshire , Herefordshire / Shropshire and the northern Vale of York . The largest IDB (Lindsey Marsh DB) covers 52,757 hectares and the smallest (Cawdle Fen IDB) 181 hectares. 24 of the county councils in England include one or more IDBs in their area, as do six metropolitan districts and 109 unitary authorities or district councils .
The Association of Drainage Authorities holds a definitive record of all IDBs within England and Wales and their boundaries. [ 3 ]
The Environment Agency acts as the internal drainage board for one internal drainage district in East Sussex. In Wales internal drainage districts are managed by Natural Resources Wales .
The internal drainage districts in England are administered by bodies designated variously as internal drainage boards (IDB), Environment Agency-administered internal drainage districts (IDD), water level management boards (WLMB) and water management boards (WMB).
IDBs have an important role in reducing flood risk through management of water levels and drainage in their districts. The water level management activities of internal drainage boards cover 1.2 million hectares of England, 9.7% of the total land area, reducing the flood risk to around 600,000 people who live or work in IDB districts and to around 879,000 properties located within them; many thousands of people outside these boundaries also benefit from reduced flood risk as a result of IDB water level management activities. Several forms of critical infrastructure fall within IDB districts: 56 major power stations (28%) are located within an internal drainage district, along with 68 other major industrial premises and 208 km of motorway. A recent publication by the Association of Drainage Authorities identified that 53% of the installed capacity (potential maximum power output) of major power stations in England and Wales is located within an IDB district.
Although of much reduced significance since the 1980s, many IDB districts in Yorkshire and Nottinghamshire lie in areas of coal reserves and drainage has been significantly affected by subsidence from mining. IDBs have played an important role in monitoring and mitigating the effects of this activity and have worked in close collaboration with the coal companies and the Coal Authority.
The fundamental role of an internal drainage board is to manage the water level within its district. The majority of lowland rivers and watercourses have been heavily modified by man or are totally artificial channels. All are engineered structures designed and constructed for the primary function of conveying surplus run-off to their outfall efficiently and safely, managing water levels to sustain a multitude of land functions. As with any engineered structure it must be maintained in order to function at or near its design capacity. Annual or bi-annual vegetation clearance and periodic de-silting (dredging) of these rivers and watercourses is therefore an essential component of the whole life cycle of these watercourses.
Accommodating sustainability within the design and maintenance process for lowland rivers and watercourses has to address three essential elements:
Many IDBs are redesigning watercourses to create a two-stage or bermed channel. These have been extensively created in the Lindsey Marsh Drainage Board area of East Lincolnshire to accommodate the three elements of lowland watercourse sustainability.
Berms are created at or near the normal retained water level in the system. They are sometimes replanted with vegetation removed from the watercourse prior to improvement works, but are often left to re-colonise naturally. In all cases this additional part of the channel profile allows enhanced environmental value to develop. The area created above the berm also provides additional flood storage capacity, while the low level channel can be maintained in such a manner that design conveyance conditions are achieved and flood risk controlled.
Widening the channel and the berm allows the berm to be used safely as access for machinery carrying out channel maintenance. The in-channel habitat that develops can be retained for a much longer period during the summer months, flood storage is provided for rare or extreme events, and a buffer zone is created between the channel and any adjacent land use.
The timing of vegetation clearance works is essential to striking a sustainable balance in lowland watercourses. The Conveyance Estimating System (CES) is a modelling tool developed through a Defra / Environment Agency research collaboration. IDBs use CES to estimate the seasonal variation of conveyance owing to vegetation growth and other physical parameters which they use to assess the impact of varying the timing of vegetation clearance operations. This is critical during the spring and early summer, the prime nesting season for aquatic birds, the breeding season for many protected mammal species such as water voles and the season when many rare species of plant life flower and seed. Many IDBs have developed vegetation control strategies in co-ordination with Natural England .
111 IDB districts require pumping to some degree for water level management and 79 are purely gravity boards (where no pumping is required). 53 IDBs have more than 95% of their area dependent on pumping. This means in England some 635,722 hectares (2,454.54 sq mi) of land in IDB districts rely on pumping, almost 51% of the total. A new pumping station was commissioned in April 2011 by the Middle Level Commissioners at Wiggenhall St Germans, Norfolk. The station replaced its 73-year-old predecessor and is vital to the flood risk management of 700 km 2 (270 sq mi) of surrounding Fenland and 20,000 residential properties. When running at full capacity, it is capable of draining five Olympic-size swimming pools every 2 minutes. [ 15 ]
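As a rough back-of-the-envelope check of the pumping capacity quoted above, the following sketch converts it into a flow rate, assuming a nominal 2,500 m³ Olympic-size pool (the pool volume is an assumption, not stated in the source):

```python
POOL_VOLUME_M3 = 2500.0   # assumed nominal volume of an Olympic-size swimming pool
pools = 5
seconds = 2 * 60          # "five pools every 2 minutes", from the text above

flow_m3_per_s = pools * POOL_VOLUME_M3 / seconds
print(f"Approximate pumping capacity: {flow_m3_per_s:.0f} m^3/s")  # ~104 m^3/s
```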
During times of heavy rainfall and high river levels IDBs:
An IDB's priorities during flooding are:
Some IDBs are able to provide a 24-hour contact number and most extend office hours during severe emergencies. [ 16 ]
Associated with the powers to regulate activities that may impede drainage, IDBs provide comments to local planning authorities on developments in their district and when asked, make recommendations on measures required to manage flood risk and to provide adequate drainage.
Internal drainage boards in England have responsibilities associated with 398 Sites of Special Scientific Interest plus other designated environmental areas, in coordination with Natural England . Slow-flowing drainage channels such as those managed by IDBs can form an important habitat for a diverse community of aquatic and emergent plants, invertebrates and higher organisms. IDB channels form one of the last refuges in the UK of the BAP-registered spined loach ( Cobitis taenia ), a small nocturnal bottom-feeding fish that has been recorded only in the lower parts of the Trent and Great Ouse catchments, and in some small rivers and drains in Lincolnshire and East Anglia. [ 17 ] All IDBs are currently engaging with their own individual biodiversity action plans, which will further enhance their environmental role.
Many IDBs are involved with assisting major wetland biodiversity projects with organisations such as the RSPB , National Trust and the Wildfowl and Wetlands Trust . Many smaller conservation projects are co-ordinated with Wildlife Trusts and local authorities. Current projects include: The Great Fen Project (Middle Level Commissioners), [ 18 ] Newport Wetlands Reserve (Caldicot and Wentlooge Levels IDB) and WWT Welney (MLC). Middle Level Commissioners launched a three-year Otter Recovery Project in December 2007. It will build 33 otter holts and 15 other habitat areas. [ 19 ]
All properties within a drainage district are deemed to derive benefit from the activities of an IDB. Every property is therefore subject to a drainage rate paid annually to the IDB.
For the purposes of rating, properties are divided into agricultural land and buildings, and other land.
Occupiers of all "other land" pay Council Tax or non-domestic rates to the local authority who then are charged by the board. This charge is called the "Special Levy". The board, therefore, only demands drainage rates direct on agricultural land and buildings. The basis of this is that each property has been allotted an "annual value" which were last revised in the early 1990s. The annual value is an amount equal to the yearly rent, or the rent that might be reasonably expected if let on a tenancy from year to year commencing 1 April 1988. The annual value remains the same from year to year. Each year the board lays a rate "in the £" to meet its estimated expenditure. This is multiplied by the annual value to produce the amount of drainage rate due on each property. [ 20 ]
Under Section 141 of the Water Resources Act 1991 [ 21 ] the Environment Agency may issue a precept to an IDB to recover a contribution that the agency considers fair towards their expenses.
Under Section 57 of the Land Drainage Act 1991, [ 22 ] in cases where a drainage district receives water from land at a higher level, the IDB may make an application to the Environment Agency for a contribution towards the expenses of dealing with that water.
District drainage commissioners (DDCs) are internal drainage boards set up under local legislation rather than the Land Drainage Act 1991 and its predecessor legislation. The majority of the provisions of the Land Drainage Acts, do however, apply to such commissioners and they are statutory public bodies. The most important in terms of size and revenue is the Middle Level Commissioners .
The majority of internal drainage boards are members of the Association of Drainage Authorities (ADA), their representative organisation. Through ADA the collective views of drainage authorities and other members involved in water level management are represented to government, regulators, other policy makers and stakeholders. [ 23 ] At a European level ADA represents IDBs through EUWMA . [ 24 ]
In 2013 it was announced that the Caldicot and Wentlooge Levels Internal Drainage Board was to be abolished in April 2015, after officials at the Wales Audit Office detailed a series of irregularities, including overpaying its chief executive, misuse of public funds, financial irregularities, and unlawful actions. [ 25 ] [ 26 ] | https://en.wikipedia.org/wiki/Internal_drainage_board |
The internal energy of a thermodynamic system is the energy of the system as a state function , measured as the quantity of energy necessary to bring the system from its standard internal state to its present internal state of interest, accounting for the gains and losses of energy due to changes in its internal state, including such quantities as magnetization . [ 1 ] [ 2 ] It excludes the kinetic energy of motion of the system as a whole and the potential energy of position of the system as a whole, with respect to its surroundings and external force fields. It includes the thermal energy, i.e. , the constituent particles' kinetic energies of motion relative to the motion of the system as a whole. Without a thermodynamic process, the internal energy of an isolated system cannot change, as expressed in the law of conservation of energy , a foundation of the first law of thermodynamics . [ 3 ] The concept was introduced by Rudolf Clausius to describe systems characterized by temperature changes, with temperature added to the set of state parameters alongside the position variables known from mechanics (and their conjugate generalized force parameters), in a manner analogous to the potential energy of conservative force fields such as the gravitational and electrostatic fields. Without transfer of matter, internal energy changes equal the algebraic sum of the heat transferred and the work done. In a thermally isolated system (one with no heat transfer), internal energy changes equal the work done on the system.
The internal energy cannot be measured absolutely. Thermodynamics concerns changes in the internal energy, not its absolute value. The processes that change the internal energy are transfers, into or out of the system, of substance, or of energy, as heat , or by thermodynamic work . [ 4 ] These processes are measured by changes in the system's properties, such as temperature, entropy , volume, electric polarization, and molar constitution . The internal energy depends only on the internal state of the system and not on the particular choice from many possible processes by which energy may pass into or out of the system. It is a state variable , a thermodynamic potential , and an extensive property . [ 5 ]
Thermodynamics defines internal energy macroscopically, for the body as a whole. In statistical mechanics , the internal energy of a body can be analyzed microscopically in terms of the kinetic energies of microscopic motion of the system's particles from translations , rotations , and vibrations , and of the potential energies associated with microscopic forces, including chemical bonds .
The unit of energy in the International System of Units (SI) is the joule (J). The internal energy relative to the mass with unit J/kg is the specific internal energy . The corresponding quantity relative to the amount of substance with unit J/ mol is the molar internal energy . [ 6 ]
The internal energy of a system depends on its entropy S, its volume V and its number of massive particles: U ( S , V ,{ N j }) . It expresses the thermodynamics of a system in the energy representation . As a function of state , its arguments are exclusively extensive variables of state. Alongside the internal energy, the other cardinal function of state of a thermodynamic system is its entropy, as a function, S ( U , V ,{ N j }) , of the same list of extensive variables of state, except that the entropy, S , is replaced in the list by the internal energy, U . It expresses the entropy representation . [ 7 ] [ 8 ] [ 9 ]
Each cardinal function is a monotonic function of each of its natural or canonical variables. Each provides its characteristic or fundamental equation, for example U = U ( S , V ,{ N j }) , that by itself contains all thermodynamic information about the system. The fundamental equations for the two cardinal functions can in principle be interconverted by solving, for example, U = U ( S , V ,{ N j }) for S , to get S = S ( U , V ,{ N j }) .
In contrast, Legendre transformations are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions . The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy. [ 8 ] [ 10 ] [ 11 ]
For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.
The internal energy U {\displaystyle U} of a given state of the system is determined relative to that of a standard state of the system, by adding up the macroscopic transfers of energy that accompany a change of state from the reference state to the given state: Δ U = ∑ i E i , {\displaystyle \Delta U=\sum _{i}E_{i},}
where Δ U {\displaystyle \Delta U} denotes the difference between the internal energy of the given state and that of the reference state,
and the E i {\displaystyle E_{i}} are the various energies transferred to the system in the steps from the reference state to the given state.
It is the energy needed to create the given state of the system from the reference state. From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, U micro,pot {\displaystyle U_{\text{micro,pot}}} , and microscopic kinetic energy, U micro,kin {\displaystyle U_{\text{micro,kin}}} , components: U = U micro,pot + U micro,kin . {\displaystyle U=U_{\text{micro,pot}}+U_{\text{micro,kin}}.}
The microscopic kinetic energy of a system arises as the sum of the motions of all the system's particles with respect to the center-of-mass frame, whether it be the motion of atoms, molecules, atomic nuclei, electrons, or other particles. The microscopic potential energy algebraic summative components are those of the chemical and nuclear particle bonds, and the physical force fields within the system, such as due to internal induced electric or magnetic dipole moment , as well as the energy of deformation of solids ( stress - strain ). Usually, the split into microscopic kinetic and potential energies is outside the scope of macroscopic thermodynamics.
Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational , electrostatic , or electromagnetic fields . It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the system with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.
For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, or even possible to consider all of the energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions include only the components relevant to the system under study. Indeed, in most systems under consideration, especially in thermodynamics, it is impossible to calculate the total internal energy. [ 12 ] Therefore, a convenient null reference point may be chosen for the internal energy.
The internal energy is an extensive property : it depends on the size of the system, or on the amount of substance it contains.
At any temperature greater than absolute zero , microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy . A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy .
The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as the temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the thermal energy . [ 13 ] [ 14 ] The scaling property between temperature and thermal energy is the entropy change of the system.
Statistical mechanics considers any system to be statistically distributed across an ensemble of N {\displaystyle N} microstates . In a system that is in thermodynamic contact equilibrium with a heat reservoir, each microstate has an energy E i {\displaystyle E_{i}} and is associated with a probability p i {\displaystyle p_{i}} . The internal energy is the mean value of the system's total energy, i.e., the sum of all microstate energies, each weighted by its probability of occurrence: U = ∑ i = 1 N p i E i . {\displaystyle U=\sum _{i=1}^{N}p_{i}E_{i}.}
This is the statistical expression of the law of conservation of energy .
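A small sketch of the ensemble average above, assuming a canonical (Boltzmann) distribution over a handful of hypothetical microstate energies:

```python
import math

def boltzmann_internal_energy(energies_j, temperature_k, k_b=1.380649e-23):
    """U = sum_i p_i E_i, with canonical probabilities p_i = exp(-E_i / kT) / Z."""
    weights = [math.exp(-e / (k_b * temperature_k)) for e in energies_j]
    z = sum(weights)                                      # partition function
    return sum(w / z * e for w, e in zip(weights, energies_j))

# Hypothetical three-level system (energies in joules) at 300 K
levels = [0.0, 2.0e-21, 5.0e-21]
print(boltzmann_internal_energy(levels, 300.0))
```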
Thermodynamics is chiefly concerned with the changes in internal energy Δ U {\displaystyle \Delta U} .
For a closed system, with mass transfer excluded, the changes in internal energy are due to heat transfer Q {\displaystyle Q} and due to thermodynamic work W {\displaystyle W} done by the system on its surroundings. [ note 1 ] Accordingly, the internal energy change Δ U {\displaystyle \Delta U} for a process may be written Δ U = Q − W (closed system, no transfer of substance) . {\displaystyle \Delta U=Q-W\quad {\text{(closed system, no transfer of substance)}}.}
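A trivial bookkeeping sketch of the closed-system statement above, with the sign convention that W is work done by the system on its surroundings (the numbers are hypothetical):

```python
def internal_energy_change(heat_in_j, work_by_system_j):
    """Delta U = Q - W for a closed system with no transfer of substance."""
    return heat_in_j - work_by_system_j

# Hypothetical: a gas absorbs 500 J of heat and does 200 J of expansion work on its surroundings
print(internal_energy_change(500.0, 200.0))  # 300.0 J increase in internal energy
```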
When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible .
A second mechanism by which the internal energy of a closed system changes is the performance of work by the system on its surroundings. Such work may be simply mechanical, as when the system expands to drive a piston, or, for example, when the system changes its electric polarization so as to drive a change in the electric field in the surroundings.
If the system is not closed, the third mechanism that can increase the internal energy is transfer of substance into the system. This increase, Δ U m a t t e r {\displaystyle \Delta U_{\mathrm {matter} }} cannot be split into heat and work components. [ 4 ] If the system is so set up physically that heat transfer and work that it does are by pathways separate from and independent of matter transfer, then the transfers of energy add to change the internal energy: Δ U = Q − W + Δ U matter (matter transfer pathway separate from heat and work transfer pathways) . {\displaystyle \Delta U=Q-W+\Delta U_{\text{matter}}\quad {\text{(matter transfer pathway separate from heat and work transfer pathways)}}.}
If a system undergoes certain phase transformations while being heated, such as melting and vaporization, it may be observed that the temperature of the system does not change until the entire sample has completed the transformation. The energy introduced into the system while the temperature does not change is called latent energy or latent heat , in contrast to sensible heat, which is associated with temperature change.
Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas consists of particles considered as point objects that interact only by elastic collisions and fill a volume such that their mean free path between collisions is much larger than their diameter. Such systems approximate monatomic gases such as helium and other noble gases . For an ideal gas the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not possess rotational or vibrational degrees of freedom, and are not electronically excited to higher energies except at very high temperatures .
Therefore, the internal energy of an ideal gas depends solely on its temperature (and the number of gas particles): U = U ( N , T ) {\displaystyle U=U(N,T)} . It is not dependent on other thermodynamic quantities such as pressure or density.
The internal energy of an ideal gas is proportional to its amount of substance (number of moles) N {\displaystyle N} and to its temperature T {\displaystyle T} : U = c V N T , {\displaystyle U=c_{V}NT,}
where c V {\displaystyle c_{V}} is the isochoric (at constant volume) molar heat capacity of the gas; c V {\displaystyle c_{V}} is constant for an ideal gas. The internal energy of any gas (ideal or not) may be written as a function of the three extensive properties S {\displaystyle S} , V {\displaystyle V} , N {\displaystyle N} (entropy, volume, number of moles ). In the case of the ideal gas it takes the form [ 15 ] U ( S , V , N ) = c o n s t ⋅ e S / ( c V N ) V − R / c V N ( R + c V ) / c V , {\displaystyle U(S,V,N)=\mathrm {const} \cdot e^{S/(c_{V}N)}\,V^{-R/c_{V}}\,N^{(R+c_{V})/c_{V}},}
where c o n s t {\displaystyle \mathrm {const} } is an arbitrary positive constant and where R {\displaystyle R} is the universal gas constant . It is easily seen that U {\displaystyle U} is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex . Knowing temperature and pressure to be the derivatives T = ∂ U ∂ S , {\displaystyle T={\frac {\partial U}{\partial S}},} P = − ∂ U ∂ V , {\displaystyle P=-{\frac {\partial U}{\partial V}},} the ideal gas law P V = N R T {\displaystyle PV=NRT} immediately follows, as shown below.
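For completeness, the derivation indicated above can be sketched by differentiating the explicit form of U ( S , V , N ) given for the ideal gas (a sketch; notation as above):

```latex
\[
T \;=\; \Bigl(\frac{\partial U}{\partial S}\Bigr)_{V,N} \;=\; \frac{U}{c_V N},
\qquad
P \;=\; -\Bigl(\frac{\partial U}{\partial V}\Bigr)_{S,N} \;=\; \frac{R}{c_V}\,\frac{U}{V}.
\]
% Eliminating U using U = c_V N T:
\[
P V \;=\; \frac{R}{c_V}\,U \;=\; \frac{R}{c_V}\,(c_V N T) \;=\; N R T .
\]
```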
The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or the negative of work done by the system on its surroundings. [ note 1 ]
This relationship may be expressed in infinitesimal terms using the differentials of each term, though only the internal energy is an exact differential . [ 16 ] : 33 For a closed system, with transfers only as heat and work, the change in the internal energy is d U = δ Q − δ W , {\displaystyle \mathrm {d} U=\delta Q-\delta W,}
expressing the first law of thermodynamics . It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).
For example, the mechanical work done by the system may be related to the pressure P {\displaystyle P} and volume change d V {\displaystyle \mathrm {d} V} . The pressure is the intensive generalized force, while the volume change is the extensive generalized displacement: δ W = P d V . {\displaystyle \delta W=P\,\mathrm {d} V.}
This defines the direction of work, W {\displaystyle W} , to be energy transfer from the working system to the surroundings, indicated by a positive term. [ note 1 ] Taking the direction of heat transfer Q {\displaystyle Q} to be into the working fluid and assuming a reversible process , the heat is δ Q = T d S , {\displaystyle \delta Q=T\,\mathrm {d} S,}
where T {\displaystyle T} denotes the temperature , and S {\displaystyle S} denotes the entropy .
The change in internal energy becomes d U = T d S − P d V . {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-P\,\mathrm {d} V.}
The expression relating changes in internal energy to changes in temperature and volume is d U = C V d T + [ T ( ∂ P ∂ T ) V − P ] d V . {\displaystyle \mathrm {d} U=C_{V}\,\mathrm {d} T+\left[T\left({\frac {\partial P}{\partial T}}\right)_{V}-P\right]\mathrm {d} V.}
This is useful if the equation of state is known.
In the case of an ideal gas, we can derive that d U = C V d T {\displaystyle dU=C_{V}\,dT} , i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.
The expression relating changes in internal energy to changes in temperature and volume is d U = C V d T + [ T ( ∂ P ∂ T ) V − P ] d V . {\displaystyle \mathrm {d} U=C_{V}\,\mathrm {d} T+\left[T\left({\frac {\partial P}{\partial T}}\right)_{V}-P\right]\mathrm {d} V.}
The equation of state is the ideal gas law P V = N R T . {\displaystyle PV=NRT.}
Solving for pressure gives P = N R T V . {\displaystyle P={\frac {NRT}{V}}.}
Substituting into the internal energy expression: d U = C V d T + [ T ( ∂ P ∂ T ) V − N R T V ] d V . {\displaystyle \mathrm {d} U=C_{V}\,\mathrm {d} T+\left[T\left({\frac {\partial P}{\partial T}}\right)_{V}-{\frac {NRT}{V}}\right]\mathrm {d} V.}
Taking the derivative of pressure with respect to temperature: ( ∂ P ∂ T ) V = N R V . {\displaystyle \left({\frac {\partial P}{\partial T}}\right)_{V}={\frac {NR}{V}}.}
Replacing: d U = C V d T + [ N R T V − N R T V ] d V , {\displaystyle \mathrm {d} U=C_{V}\,\mathrm {d} T+\left[{\frac {NRT}{V}}-{\frac {NRT}{V}}\right]\mathrm {d} V,}
which simplifies to d U = C V d T . {\displaystyle \mathrm {d} U=C_{V}\,\mathrm {d} T.}
To express d U {\displaystyle \mathrm {d} U} in terms of d T {\displaystyle \mathrm {d} T} and d V {\displaystyle \mathrm {d} V} , the term d S = ( ∂ S ∂ T ) V d T + ( ∂ S ∂ V ) T d V {\displaystyle \mathrm {d} S=\left({\frac {\partial S}{\partial T}}\right)_{V}\mathrm {d} T+\left({\frac {\partial S}{\partial V}}\right)_{T}\mathrm {d} V}
is substituted in the fundamental thermodynamic relation d U = T d S − P d V . {\displaystyle \mathrm {d} U=T\,\mathrm {d} S-P\,\mathrm {d} V.}
This gives d U = T ( ∂ S ∂ T ) V d T + [ T ( ∂ S ∂ V ) T − P ] d V . {\displaystyle \mathrm {d} U=T\left({\frac {\partial S}{\partial T}}\right)_{V}\mathrm {d} T+\left[T\left({\frac {\partial S}{\partial V}}\right)_{T}-P\right]\mathrm {d} V.}
The term T ( ∂ S ∂ T ) V {\displaystyle T\left({\frac {\partial S}{\partial T}}\right)_{V}} is the heat capacity at constant volume C V . {\displaystyle C_{V}.}
The partial derivative of S {\displaystyle S} with respect to V {\displaystyle V} can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A {\displaystyle A} is given by d A = − S d T − P d V . {\displaystyle \mathrm {d} A=-S\,\mathrm {d} T-P\,\mathrm {d} V.}
The symmetry of second derivatives of A {\displaystyle A} with respect to T {\displaystyle T} and V {\displaystyle V} yields the Maxwell relation : ( ∂ S ∂ V ) T = ( ∂ P ∂ T ) V . {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{T}=\left({\frac {\partial P}{\partial T}}\right)_{V}.}
This gives the expression above.
When considering fluids or solids, an expression in terms of the temperature and pressure is usually more useful: d U = ( C P − α P V ) d T + ( β T P − α T ) V d P , {\displaystyle \mathrm {d} U=\left(C_{P}-\alpha PV\right)\mathrm {d} T+\left(\beta _{T}P-\alpha T\right)V\,\mathrm {d} P,}
where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to C P = C V + V T α 2 β T . {\displaystyle C_{P}=C_{V}+VT{\frac {\alpha ^{2}}{\beta _{T}}}.}
The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion α ≡ 1 V ( ∂ V ∂ T ) P {\displaystyle \alpha \equiv {\frac {1}{V}}\left({\frac {\partial V}{\partial T}}\right)_{P}}
and the isothermal compressibility β T ≡ − 1 V ( ∂ V ∂ P ) T {\displaystyle \beta _{T}\equiv -{\frac {1}{V}}\left({\frac {\partial V}{\partial P}}\right)_{T}}
by writing d V = ( ∂ V ∂ T ) P d T + ( ∂ V ∂ P ) T d P = V ( α d T − β T d P ) {\displaystyle \mathrm {d} V=\left({\frac {\partial V}{\partial T}}\right)_{P}\mathrm {d} T+\left({\frac {\partial V}{\partial P}}\right)_{T}\mathrm {d} P=V\left(\alpha \,\mathrm {d} T-\beta _{T}\,\mathrm {d} P\right)}
and equating d V {\displaystyle \mathrm {d} V} to zero and solving for the ratio d P / d T {\displaystyle \mathrm {d} P/\mathrm {d} T} . This gives ( ∂ P ∂ T ) V = α β T . {\displaystyle \left({\frac {\partial P}{\partial T}}\right)_{V}={\frac {\alpha }{\beta _{T}}}.}
Substituting these last two relations into the expression for d U {\displaystyle \mathrm {d} U} in terms of d T {\displaystyle \mathrm {d} T} and d V {\displaystyle \mathrm {d} V} given above yields the expression in terms of temperature and pressure at the start of this paragraph.
The internal pressure is defined as the partial derivative of the internal energy with respect to the volume at constant temperature: π T = ( ∂ U ∂ V ) T . {\displaystyle \pi _{T}=\left({\frac {\partial U}{\partial V}}\right)_{T}.}
In addition to including the entropy S {\displaystyle S} and volume V {\displaystyle V} terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains: U = U ( S , V , N 1 , … , N n ) , {\displaystyle U=U(S,V,N_{1},\ldots ,N_{n}),}
where N j {\displaystyle N_{j}} are the molar amounts of constituents of type j {\displaystyle j} in the system. Because the internal energy is an extensive function of the extensive variables S {\displaystyle S} , V {\displaystyle V} , and the amounts N j {\displaystyle N_{j}} , it may be written as a linearly homogeneous function of first degree: [ 17 ] U ( α S , α V , α N 1 , … , α N n ) = α U ( S , V , N 1 , … , N n ) , {\displaystyle U(\alpha S,\alpha V,\alpha N_{1},\ldots ,\alpha N_{n})=\alpha U(S,V,N_{1},\ldots ,N_{n}),}
where α {\displaystyle \alpha } is a factor describing the growth of the system. The differential internal energy may be written as d U = ∂ U ∂ S d S + ∂ U ∂ V d V + ∑ i ∂ U ∂ N i d N i = T d S − P d V + ∑ i μ i d N i , {\displaystyle \mathrm {d} U={\frac {\partial U}{\partial S}}\mathrm {d} S+{\frac {\partial U}{\partial V}}\mathrm {d} V+\sum _{i}{\frac {\partial U}{\partial N_{i}}}\mathrm {d} N_{i}=T\,\mathrm {d} S-P\,\mathrm {d} V+\sum _{i}\mu _{i}\,\mathrm {d} N_{i},}
which shows (or defines) temperature T {\displaystyle T} to be the partial derivative of U {\displaystyle U} with respect to entropy S {\displaystyle S} and pressure P {\displaystyle P} to be the negative of the similar derivative with respect to volume V {\displaystyle V} : T = ∂ U ∂ S , P = − ∂ U ∂ V , {\displaystyle T={\frac {\partial U}{\partial S}},\qquad P=-{\frac {\partial U}{\partial V}},}
and where the coefficients μ i {\displaystyle \mu _{i}} are the chemical potentials for the components of type i {\displaystyle i} in the system. The chemical potentials are defined as the partial derivatives of the internal energy with respect to the variations in composition: μ i = ( ∂ U ∂ N i ) S , V , N j ≠ i . {\displaystyle \mu _{i}=\left({\frac {\partial U}{\partial N_{i}}}\right)_{S,V,N_{j\neq i}}.}
As conjugate variables to the composition { N j } {\displaystyle \lbrace N_{j}\rbrace } , the chemical potentials are intensive properties , intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Under conditions of constant T {\displaystyle T} and P {\displaystyle P} , because of the extensive nature of U {\displaystyle U} and its independent variables, using Euler's homogeneous function theorem , the differential d U {\displaystyle \mathrm {d} U} may be integrated and yields an expression for the internal energy: U = T S − P V + ∑ i μ i N i . {\displaystyle U=TS-PV+\sum _{i}\mu _{i}N_{i}.}
The sum over the composition of the system is the Gibbs free energy : G = ∑ i μ i N i , {\displaystyle G=\sum _{i}\mu _{i}N_{i},}
that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for { N j } {\displaystyle \lbrace N_{j}\rbrace } .
For an elastic medium the potential energy component of the internal energy has an elastic nature expressed in terms of the stress σ i j {\displaystyle \sigma _{ij}} and strain ε i j {\displaystyle \varepsilon _{ij}} involved in elastic processes. In Einstein notation for tensors, with summation over repeated indices, for unit volume, the infinitesimal statement is d U = T d S + σ i j d ε i j . {\displaystyle \mathrm {d} U=T\,\mathrm {d} S+\sigma _{ij}\,\mathrm {d} \varepsilon _{ij}.}
Euler's theorem yields for the internal energy (per unit volume): [ 18 ] U = T S + 1 2 σ i j ε i j . {\displaystyle U=TS+{\tfrac {1}{2}}\sigma _{ij}\varepsilon _{ij}.}
For a linearly elastic material, the stress is related to the strain by σ i j = C i j k l ε k l , {\displaystyle \sigma _{ij}=C_{ijkl}\varepsilon _{kl},}
where the C i j k l {\displaystyle C_{ijkl}} are the components of the 4th-rank elastic constant tensor of the medium.
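An illustrative sketch (not from the source) of the stress–strain relation above for the special case of an isotropic linear elastic material, where the elastic constant tensor reduces to the two Lamé parameters; the material constants and strain below are hypothetical placeholders:

```python
import numpy as np

def isotropic_stress(strain, lam, mu):
    """sigma_ij = lambda * tr(eps) * delta_ij + 2 * mu * eps_ij (isotropic special case of sigma_ij = C_ijkl eps_kl)."""
    return lam * np.trace(strain) * np.eye(3) + 2.0 * mu * strain

def elastic_energy_density(strain, lam, mu):
    """Elastic energy per unit volume, (1/2) * sigma_ij * eps_ij."""
    sigma = isotropic_stress(strain, lam, mu)
    return 0.5 * np.tensordot(sigma, strain)

# Hypothetical material (Lame parameters in Pa) under a small uniaxial strain
lam, mu = 5.0e10, 3.0e10
eps = np.diag([1.0e-4, 0.0, 0.0])
print(elastic_energy_density(eps, lam, mu))  # energy density in J/m^3
```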
Elastic deformations, such as sound , passing through a body, or other forms of macroscopic internal agitation or turbulent motion create states when the system is not in thermodynamic equilibrium. While such energies of motion continue, they contribute to the total energy of the system; thermodynamic internal energy pertains only when such motions have ceased.
James Joule studied the relationship between heat, work, and temperature. He observed that friction in a liquid, such as caused by its agitation with work by a paddle wheel, caused an increase in its temperature, which he described as producing a quantity of heat . Expressed in modern units, he found that c. 4186 joules of energy were needed to raise the temperature of one kilogram of water by one degree Celsius. [ 19 ] | https://en.wikipedia.org/wiki/Internal_energy |
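A simple arithmetic sketch based on Joule's figure quoted above; the mass and temperature rise are hypothetical example values.

```python
def heating_energy_j(mass_kg, delta_t_c, specific_heat=4186.0):
    """Energy needed to warm water: E = m * c * delta_T, with c ~ 4186 J/(kg.degC) as quoted above."""
    return mass_kg * specific_heat * delta_t_c

# Hypothetical example: warming 2 kg of water by 10 degrees Celsius
print(heating_energy_j(2.0, 10.0))  # 83720.0 J
```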
The internal environment (or milieu intérieur in French ; French pronunciation: [mi.ljø ɛ̃.te.ʁjœʁ] ) was a concept developed by Claude Bernard , [ 1 ] [ 2 ] a French physiologist in the 19th century, to describe the interstitial fluid and its physiological capacity to ensure protective stability for the tissues and organs of multicellular organisms .
Claude Bernard used the French phrase milieu intérieur (internal environment in English) in several works from 1854 until his death in 1878. He most likely adopted it from the histologist Charles Robin , who had employed the phrase "milieu de l’intérieur" as a synonym for the ancient hippocratic idea of humors . Bernard was initially only concerned with the role of the blood but he later included that of the whole body in ensuring this internal stability. [ 3 ] He summed up his idea as follows:
The fixity of the milieu supposes a perfection of the organism such that the external variations are at each instant compensated for and equilibrated.... All of the vital mechanisms, however varied they may be, have always one goal, to maintain the uniformity of the conditions of life in the internal environment.... The stability of the internal environment is the condition for the free and independent life. [ 4 ]
Bernard's work regarding the internal environment of regulation was supported by work in Germany at the same time. While Rudolf Virchow placed the focus on the cell, others, such as Carl von Rokitansky (1804–1878) continued to study humoral pathology particularly the matter of microcirculation . Von Rokitansky suggested that illness originated in damage to this vital microcirculation or internal system of communication. Hans Eppinger (1879–1946), a professor of internal medicine in Vienna, further developed von Rokitansky's point of view and showed that every cell requires a suitable environment which he called the ground substance for successful microcirculation. This work of German scientists was continued in the 20th century by Alfred Pischinger (1899–1982) who defined the connections between the ground substance or extracellular matrix and both the hormonal and autonomic nervous systems and saw therein a complex system of regulation for the body as a whole and for cellular functioning, which he termed the ground regulatory ( das System der Grundregulation ). [ 5 ]
Bernard created his concept to replace the ancient idea of life forces with that of a mechanistic process in which the body's physiology was regulated through multiple mechanical equilibrium adjustment feedbacks. [ 6 ] Walter Cannon 's later notion of homeostasis (while also mechanistic) lacked this concern, and was even advocated in the context of such ancient notions as vis medicatrix naturae . [ 6 ]
Cannon, in contrast to Bernard, saw the self-regulation of the body as a requirement for the evolutionary emergence and exercise of intelligence, and further placed the idea in a political context: "What corresponds in a nation to the internal environment of the body? The closest analogue appears to be the whole intricate system of production and distribution of merchandise". [ 7 ] He suggested, as an analogy to the body's own ability to ensure internal stability, that society should preserve itself with a technocratic bureaucracy, "biocracy". [ 6 ]
The idea of milieu intérieur, it has been noted, led Norbert Wiener to the notion of cybernetics and negative feedback creating self-regulation in the nervous system and in nonliving machines, and that "today, cybernetics, a formalization of Bernard's constancy hypothesis, is viewed as one of the critical antecedents of contemporary cognitive science". [ 3 ]
Bernard's idea was initially ignored in the 19th century. This happened in spite of Bernard being highly honored as the founder of modern physiology (he indeed received the first French state funeral for a scientist). Even the 1911 edition of Encyclopædia Britannica does not mention it. His ideas about milieu intérieur only became central to the understanding of physiology in the early part of the 20th century. [ 3 ] It was only with Joseph Barcroft , Lawrence J. Henderson , and particularly Walter Cannon and his idea of homeostasis , that it received its present recognition and status. [ 6 ] The current 15th edition notes it as being Bernard's most important idea.
In addition to providing the basis for understanding the internal physiology in terms of the interdependence of the cellular and extracellular matrix or ground system, Bernard's fruitful concept of the milieu intérieur has also led to significant research regarding the system of communication that allows for the complex dynamics of homeostasis. [ 8 ]
Initial work was conducted by Albert Szent-Györgyi , who concluded that organic communication could not be explained solely by the random collisions of molecules, and who studied energy fields as well as the connective tissue. He was aware of earlier work by Moglich and Schon (1938) [ 9 ] and Jordan (1938) [ 10 ] on non-electrolytic mechanisms of charge transfer in living systems. This was further explored and advanced by Szent-Györgyi in 1941 in a Koranyi Memorial Lecture in Budapest, published in both Science and Nature , wherein he proposed that proteins are semiconductors capable of rapid transfer of free electrons within an organism. This idea was received with skepticism, but it is now generally accepted that most if not all parts of the extracellular matrix have semiconductor properties. [ 11 ] [ 12 ] The Koranyi Lecture triggered a growing molecular-electronics industry, using biomolecular semiconductors in nanoelectronic circuits.
In 1988 Szent-Györgyi stated that "Molecules do not have to touch each other to interact. Energy can flow through... the electromagnetic field", which "along with water, forms the matrix of life." This water is related also to the surfaces of proteins, DNA and all living molecules in the matrix. This is a structured water that provides stability for metabolic functioning, and it is related to collagen, the major protein in the extracellular matrix, [ 13 ] and to DNA. [ 14 ] [ 15 ] The structured water can form channels of energy flow for protons (unlike electrons, which flow through the protein structure to create bio-electricity ). Mitchell (1976) refers to this flow as 'proticity'. [ 16 ]
Work in Germany over the last half-century has also focused on the internal communication system, in particular as it relates to the ground system. This work has led to their characterization of the ground system or extracellular matrix interaction with the cellular system as a 'ground regulatory system', seeing therein the key to homeostasis, a body-wide communication and support system, vital to all functions. [ 5 ]
In 1953 a German doctor and scientist, Reinhold Voll, discovered that points used in acupuncture had different electrical properties from the surrounding skin, namely a lower resistance. Voll further discovered that the measurement of the resistances at the points gave valuable indications as to the state of the internal organs. Further research was done by Dr. Alfred Pischinger, the originator of the concept of the 'system of ground regulation', as well as Drs. Helmut Schimmel and Hartmut Heine, using Voll's method of electro-dermal screening. This further research suggested that the gene is not so much the controller as the repository of blueprints on how cells and higher systems should operate, and that the actual regulation of biological activities (see Epigenetic cellular biology ) lies in a 'system of ground regulation'. This system is built on the ground substance, a complex connective tissue between all the cells, often also called the extra-cellular matrix. This ground substance is made up of 'amorphous' and 'structural' ground substance. The former is "a transparent, half-fluid gel produced and sustained by the fibroblast cells of the connective tissues " consisting of highly polymerized sugar-protein complexes. [ 17 ] [ unreliable source? ]
The ground substance, according to German research, determines what enters and exits the cell and maintains homeostasis, which requires a rapid communication system to respond to complex signals (see also Bruce Lipton ).
This is made possible by the diversity of molecular structures of the sugar polymers of the ground substance, the ability to swiftly generate new such substances, and their high interconnectedness. This creates a redundance that makes possible the controlled oscillation of values above and below the dynamic homeostasis present in all living creatures. This is a kind of fast-responding, "short term memory" of the ground substance. Without this labile capacity, the system would quickly move to an energetic equilibrium, which would bring inactivity and death . [ 17 ]
For its biochemical survival, every organism requires the ability to rapidly construct, destroy and reconstruct the constituents of the ground substance. [ 17 ]
Between the molecules that make up the ground substance there are minimal surfaces of potential energy . The charging and discharging of the materials of the ground substance cause 'biofield oscillations' (photon fields). The interference of these fields creates short-lived (from 10⁻⁹ up to 10⁻⁵ seconds) tunnels through the ground substance. Through these tunnels, shaped like the hole through a donut, large chemicals may traverse from capillaries through the ground substance and into the functional cells of organs and back again. All metabolic processes depend upon this transport mechanism. [ 17 ]
Major ordering energy structures in the body are created by the ground substance, such as collagen , which not only conducts energy but generates it, due to its piezoelectric properties.
Like quartz crystal, collagen in the ground substance and the more stable connective tissues ( fascia , tendons , bones , etc.) transforms mechanical energy (pressure, torsion, stretch) into electromagnetic energy , which then resonates through the ground substance (Athenstaedt, 1974). However, if the ground substance is chemically imbalanced, the energy resonating through the body loses coherence. [ 17 ]
This is what occurs in the adaptation response described by Hans Selye . When the ground regulation is out of balance, the probability of chronic illness increases. Research by Heine indicates that unresolved emotional traumas release the neurotransmitter substance P , which causes the collagen to take on a hexagonal structure that is more ordered than its usual structure, putting the ground substance out of balance; this is what he calls an "emotional scar", providing "an important scientific verification that diseases can have psychological causes." [ 17 ] (see also Bruce Lipton )
While the initial work on identifying the importance of the ground regulatory system was done in Germany, more recent work examining the implications of inter and intra-cellular communication via the extra-cellular matrix has taken place in the U.S. and elsewhere. [ clarification needed ]
Structural continuity between extracellular , cytoskeletal and nuclear components was discussed by Hay, [ 18 ] Berezny et al. [ 19 ] and Oschman. [ 20 ] Historically, these elements have been referred to as ground substances, and because of their continuity, they act to form a complex, interlaced system that reaches into and contacts every part of the body. Even as early as 1851 it was recognized that the nerve and blood systems do not directly connect to the cell, but are mediated by and through an extracellular matrix. [ 21 ]
Recent research regarding the electrical charges of the various glycoprotein components of the extracellular matrix shows that because of the high density of negative charges on glycosaminoglycans (provided by sulfate and carboxylate groups of the uronic acid residues) the matrix is an extensive redox system capable of absorbing and donating electrons at any point. [ 22 ] This electron transfer function reaches into the interiors of cells as the cytoplasmic matrix is also strongly negatively charged. [ 23 ] The entire extracellular and cellular matrix functions as a biophysical storage system or accumulator for electrical charge.
From thermodynamic , energetic and geometrical considerations, molecules of the ground substance are considered to form minimal physical and electrical surfaces, such that, based on the mathematics of minimal surfaces, minuscule changes can lead to significant changes in distant areas of the ground substance. [ 24 ] This discovery is seen as having implications for many physiological and biochemical processes, including membrane transport , antigen–antibody interactions , protein synthesis , oxidation reactions , actin–myosin interactions, and sol to gel transformations in polysaccharides . [ 25 ]
One description of the charge transfer process in the matrix is, "highly vectoral electron transport along biopolymer pathways". [ 26 ] Other mechanisms involve clouds of negative charge created around the proteoglycans in the matrix. There are also soluble and mobile charge transfer complexes in cells and tissues (e.g. Slifkin, 1971; [ 27 ] Gutman, 1978; [ 28 ] Mattay, 1994 [ 29 ] ).
Rudolph A. Marcus of the California Institute of Technology found that when the driving force increases beyond a certain level, electron transfer will begin to slow down instead of speed up (Marcus, 1999) [ 30 ] and he received a Nobel Prize in chemistry in 1992 for this contribution to the theory of electron transfer reactions in chemical systems. The implication of the work is that a vectoral electron transport process may be greater the smaller the potential, as in living systems . | https://en.wikipedia.org/wiki/Internal_environment |
Internal heat is the heat source from the interior of celestial objects , such as stars , brown dwarfs , planets , moons , dwarf planets , and (in the early history of the Solar System ) even asteroids such as Vesta , resulting from contraction caused by gravity (the Kelvin–Helmholtz mechanism ), nuclear fusion , tidal heating , core solidification ( heat of fusion released as molten core material solidifies), and radioactive decay . The amount of internal heating depends on mass ; the more massive the object, the more internal heat it has; also, for a given density, the more massive the object, the greater the ratio of mass to surface area, and thus the greater the retention of internal heat. The internal heating keeps celestial objects warm and active.
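As a sketch of the geometric argument behind this scaling (the uniform-density sphere is an idealization used here for illustration, not a claim of the cited sources): for a sphere of uniform density ρ {\displaystyle \rho } and radius R {\displaystyle R} , the ratio of mass to surface area is M / A = 4 3 π ρ R 3 / ( 4 π R 2 ) = ρ R / 3 {\displaystyle M/A={\tfrac {4}{3}}\pi \rho R^{3}/(4\pi R^{2})=\rho R/3} , so at fixed density the ratio grows in proportion to the radius, and hence as the cube root of the mass, which is why more massive bodies retain their internal heat more effectively.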
In the early history of the Solar System, radioactive isotopes having a half-life on the order of a few million years (such as aluminium-26 and iron-60 ) were sufficiently abundant to produce enough heat to cause internal melting of some moons and even some asteroids, such as Vesta , noted above. After these radioactive isotopes had decayed to insignificant levels, the heat generated by longer-lived radioactive isotopes (such as potassium-40 , thorium-232 , and uranium-235 and uranium-238 ) was insufficient to keep these bodies molten unless they had an alternative source of internal heating, such as tidal heating. Thus, Earth's Moon , which has no alternative source of internal heating, is now geologically dead, whereas a moon as small as Enceladus , which has sufficient tidal heating (or at least had it recently) and some remaining radioactive heating, is able to maintain active and directly detectable cryovolcanism .
The internal heating within terrestrial planets powers tectonic and volcanic activities. Of the terrestrial planets in the Solar System, Earth has the most internal heating because it is the largest. Mercury and Mars have no ongoing visible surface effects of internal heating because they are only 5 and 11% the mass of Earth respectively; they are nearly "geologically dead" (however, see Mercury's magnetic field and Geological history of Mars ). Earth, being more massive, has a great enough ratio of mass to surface area for its internal heating to drive plate tectonics and volcanism .
The giant planets have much greater internal heating than terrestrial planets, due to their greater mass and greater compressibility, which make more energy available from gravitational contraction. Jupiter , the most massive planet in the Solar System, has the most internal heating, with a core temperature estimated to be 36,000 K. For the outer planets of the Solar System, internal heating, rather than sunlight, powers the weather and wind, whereas sunlight powers the weather of the terrestrial planets. The internal heating within giant planets raises their temperatures above their effective temperatures ; in the case of Jupiter, this makes the planet about 40 K warmer than its effective temperature. A combination of external and internal heating (which may be a combination of tidal heating and electromagnetic heating) is thought to make giant planets that orbit very close to their stars ( hot Jupiters ) into " puffy planets " (external heating is not thought to be sufficient by itself).
Brown dwarfs have greater internal heating than gas giants but not as great as stars. The internal heating within brown dwarfs (initially generated by gravitational contraction) is great enough to ignite and sustain fusion of deuterium with hydrogen to helium ; for the largest brown dwarfs, it is also enough to ignite and sustain fusion of lithium with hydrogen, but not fusion of hydrogen with itself. Like gas giants, brown dwarfs can have weather and wind powered by internal heating. Brown dwarfs are substellar objects not massive enough to sustain hydrogen-1 fusion reactions in their cores, unlike main-sequence stars. Brown dwarfs occupy the mass range between the heaviest gas giants and the lightest stars, with an upper limit around 75 to 80 Jupiter masses (MJ). Brown dwarfs heavier than about 13 MJ are thought to fuse deuterium and those above ~65 MJ, fuse lithium as well.
The internal heating within stars is so great that (after an initial phase of gravitational contraction) they ignite and sustain thermonuclear reaction of hydrogen (with itself) to form helium , and can make heavier elements (see Stellar nucleosynthesis ). The Sun for example has a core temperature of 13,600,000 K. The more massive and older the stars are, the more internal heating they have. During the end of its lifecycle, the internal heating of a star increases dramatically, caused by the change of composition of the core as successive fuels for fusion are consumed, and the resulting contraction (accompanied by faster consumption of the remaining fuel). Depending upon the mass of the star, the core may become hot enough to fuse helium (forming carbon and oxygen and traces of heavier elements), and for sufficiently massive stars even large quantities of heavier elements. Fusion to produce elements heavier than iron and nickel no longer produces energy, and since stellar cores massive enough to attain the temperatures required to produce these elements are too massive to form stable white dwarf stars, a core collapse supernova results, producing a neutron star or a black hole , depending upon the mass. Heat generated by the collapse is trapped within a neutron star and only escapes slowly, due to the small surface area; heat cannot be conducted out of a black hole at all (however, see Hawking radiation ). | https://en.wikipedia.org/wiki/Internal_heating |
In the subject area of control theory , an internal model is a process that simulates the response of the system in order to estimate the outcome of a system disturbance. The internal model principle was first articulated in 1976 by B. A. Francis and W. M. Wonham [ 1 ] as an explicit formulation of the Conant and Ashby good regulator theorem. [ 2 ] It stands in contrast to classical control, in that the classical feedback loop fails to explicitly model the controlled system (although the classical controller may contain an implicit model). [ 3 ] [ 4 ]
The internal model theory of motor control argues that the motor system is controlled by the constant interactions of the “ plant ” and the “ controller .” The plant is the body part being controlled, while the internal model itself is considered part of the controller. Information from the controller, such as information from the central nervous system (CNS) , feedback information, and the efference copy , is sent to the plant which moves accordingly.
Internal models can be controlled through either feed-forward or feedback control. Feed-forward control computes its input into a system using only the current state and its model of the system. It does not use feedback, so it cannot correct for errors in its control. In feedback control, some of the output of the system can be fed back into the system's input, and the system is then able to make adjustments or compensate for errors from its desired output. Two primary types of internal models have been proposed: forward models and inverse models. In simulations, models can be combined to solve more complex movement tasks.
In their simplest form, forward models take the input of a motor command to the “plant” and output a predicted position of the body.
The motor command input to the forward model can be an efference copy, as seen in Figure 1. The output from that forward model, the predicted position of the body, is then compared with the actual position of the body. The actual and predicted position of the body may differ due to noise introduced into the system by either internal (e.g. body sensors are not perfect, sensory noise) or external (e.g. unpredictable forces from outside the body) sources. If the actual and predicted body positions differ, the difference can be fed back as an input into the entire system again so that an adjusted set of motor commands can be formed to create a more accurate movement.
Inverse models use the desired and actual position of the body as inputs to estimate the necessary motor commands which would transform the current position into the desired one. For example, in an arm reaching task, the desired position (or a trajectory of consecutive positions) of the arm is input into the postulated inverse model, and the inverse model generates the motor commands needed to control the arm and bring it into this desired configuration (Figure 2). Inverse internal models are also closely connected with the uncontrolled manifold hypothesis (UCM) .
Theoretical work has shown that in models of motor control, when inverse models are used in combination with a forward model, the efference copy of the motor command output from the inverse model can be used as an input to a forward model for further predictions. For example, if, in addition to reaching with the arm, the hand must be controlled to grab an object, an efference copy of the arm motor command can be input into a forward model to estimate the arm's predicted trajectory. With this information, the controller can then generate the appropriate motor command telling the hand to grab the object. It has been proposed that if they exist, this combination of inverse and forward models would allow the CNS to take a desired action (reach with the arm), accurately control the reach and then accurately control the hand to grip an object. [ 5 ]
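A minimal numerical sketch of this pairing of an inverse and a forward model is given below (in Python). It is not drawn from the cited literature; the one-dimensional plant, the gain values and the noise level are hypothetical, chosen only to illustrate how an efference copy feeds the forward model and how the prediction error corrects the internal estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(position, command):
    # "Plant": the body part being moved; it responds imperfectly and noisily.
    return position + 0.9 * command + rng.normal(0.0, 0.01)

def inverse_model(desired, estimate):
    # Inverse model: estimate the motor command needed to reach the desired position.
    return desired - estimate

def forward_model(estimate, efference_copy):
    # Forward model: predict the next position from an efference copy of the command.
    return estimate + efference_copy

position, estimate, target = 0.0, 0.0, 1.0
for step in range(10):
    command = inverse_model(target, estimate)      # controller output
    predicted = forward_model(estimate, command)   # prediction from the efference copy
    position = plant(position, command)            # actual (noisy) movement
    error = position - predicted                   # sensory feedback vs. prediction
    estimate = predicted + 0.5 * error             # correct the internal estimate
    print(f"step {step}: position={position:.3f}, prediction error={error:+.3f}")
```

In this sketch the prediction error shrinks as the internal estimate converges on the actual arm position, mimicking the adaptive role attributed to the forward model.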
With the assumption that new models can be acquired and pre-existing models can be updated, the efference copy is important for the adaptive control of a movement task. Throughout the duration of a motor task, an efference copy is fed into a forward model known as a dynamics predictor whose output allows prediction of the motor output. When applying adaptive control theory techniques to motor control, efference copy is used in indirect control schemes as the input to the reference model.
A wide range of scientists have contributed to progress on the internal model hypothesis. Michael I. Jordan , Emanuel Todorov and Daniel Wolpert contributed significantly to the mathematical formalization. Sandro Mussa-Ivaldi , Mitsuo Kawato , Claude Ghez , Reza Shadmehr , Randy Flanagan and Konrad Kording contributed numerous behavioral experiments. The DIVA model of speech production developed by Frank H. Guenther and colleagues uses combined forward and inverse models to produce auditory trajectories with simulated speech articulators. Two inverse internal models for the control of speech production [ 6 ] were developed by Iaroslav Blagouchine & Eric Moreau. [ 7 ] Both models combine optimum principles and the equilibrium-point hypothesis (motor commands λ are taken as coordinates of the internal space). The input motor command λ is found by minimizing the length of the path traveled in the internal space, either under the acoustical constraint (the first model), or under both the acoustical and mechanical constraints (the second model). The acoustical constraint is related to the quality of the produced speech (measured in terms of formants ), while the mechanical one is related to the stiffness of the tongue's body. The first model, in which the stiffness remains uncontrolled, is in agreement with the standard UCM hypothesis . In contrast, the second optimum internal model, in which the stiffness is prescribed, displays appropriate variability of speech (at least within a reasonable range of stiffness) and is in agreement with the more recent versions of the uncontrolled manifold hypothesis (UCM) . There is also a rich clinical literature on internal models, including work from John Krakauer , [ 8 ] Pietro Mazzoni , Maurice A. Smith , Kurt Thoroughman , Joern Diedrichsen , and Amy Bastian .
Internal oxidation , in corrosion of metals , is the process of formation of corrosion products (e.g. a metal oxide ) within the metal bulk. In other words, the corrosion products are created away from the metal surface, and they are isolated from the surface. [ 1 ]
Internal oxidation occurs when some components of the alloy are oxidized in preference to the balance of the bulk. [ clarification needed ] The oxidizer is often oxygen diffusing through the metal bulk from the interface, but it can be also another element (for example sulfur or nitrogen ).
Internal oxidation is a well-known corrosion mechanism of nickel-based alloys in the temperature range of 500 to 1200 °C. [ 2 ]
Internal oxidation is distinct from selective leaching .
| https://en.wikipedia.org/wiki/Internal_oxidation |
Internal pressure is a measure of how the internal energy of a system changes when it expands or contracts at constant temperature . It has the same dimensions as pressure , the SI unit of which is the pascal .
Internal pressure is usually given the symbol π T {\displaystyle \pi _{T}} . It is defined as a partial derivative of internal energy with respect to volume at constant temperature: π T = ( ∂ U ∂ V ) T {\displaystyle \pi _{T}=\left({\frac {\partial U}{\partial V}}\right)_{T}} .
Internal pressure can be expressed in terms of temperature, pressure and their mutual dependence: π T = T ( ∂ p ∂ T ) V − p {\displaystyle \pi _{T}=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p} .
This equation is one of the simplest thermodynamic equations . More precisely, it is a thermodynamic property relation, since it holds true for any system and connects the equation of state to one or more thermodynamic energy properties. Here we refer to it as a "thermodynamic equation of state."
The fundamental thermodynamic equation gives the exact differential of the internal energy : d U = T d S − p d V {\displaystyle \operatorname {d} U=T\operatorname {d} S-p\operatorname {d} V} .
Dividing this equation by d V {\displaystyle \operatorname {d} V} at constant temperature gives: ( ∂ U ∂ V ) T = T ( ∂ S ∂ V ) T − p {\displaystyle \left({\frac {\partial U}{\partial V}}\right)_{T}=T\left({\frac {\partial S}{\partial V}}\right)_{T}-p} .
And using one of the Maxwell relations : ( ∂ S ∂ V ) T = ( ∂ p ∂ T ) V {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{T}=\left({\frac {\partial p}{\partial T}}\right)_{V}} , this gives the expression above, π T = T ( ∂ p ∂ T ) V − p {\displaystyle \pi _{T}=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p} .
In a perfect gas , there are no potential energy interactions between the particles, so any change in the internal energy of the gas is directly proportional to the change in the kinetic energy of its constituent species and therefore also to the change in temperature: d U = C V d T {\displaystyle \operatorname {d} U=C_{V}\operatorname {d} T} .
The internal pressure is taken to be at constant temperature, therefore π T = ( ∂ U ∂ V ) T = 0 , {\displaystyle \pi _{T}=\left({\frac {\partial U}{\partial V}}\right)_{T}=0,}
i.e. the internal energy of a perfect gas is independent of the volume it occupies. The above relation can be used as a definition of a perfect gas.
The relation π T = 0 {\displaystyle \pi _{T}=0} can be proved without the need to invoke any molecular arguments. It follows directly from the thermodynamic equation of state if we use the ideal gas law p V = n R T {\displaystyle pV=nRT} . We have π T = T ( ∂ p ∂ T ) V − p = T n R V − p = p − p = 0. {\displaystyle \pi _{T}=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p=T{\frac {nR}{V}}-p=p-p=0.}
Real gases have non-zero internal pressures because their internal energy changes as the gases expand isothermally - it can increase on expansion ( π T > 0 {\displaystyle \pi _{T}>0} , signifying presence of dominant attractive forces between the particles of the gas) or decrease ( π T < 0 {\displaystyle \pi _{T}<0} , dominant repulsion).
In the limit of infinite volume these internal pressures reach the value of zero: lim V → ∞ π T = 0 , {\displaystyle \lim _{V\to \infty }\pi _{T}=0,}
corresponding to the fact that all real gases can be approximated to be perfect in the limit of a suitably large volume. The above considerations are summarized on the graph on the right.
If a real gas can be described by the van der Waals equation p = n R T V − n b − a n 2 V 2 , {\displaystyle p={\frac {nRT}{V-nb}}-a{\frac {n^{2}}{V^{2}}},}
it follows from the thermodynamic equation of state that π T = a n 2 V 2 . {\displaystyle \pi _{T}=a{\frac {n^{2}}{V^{2}}}.}
Since the parameter a {\displaystyle a} is always positive, so is its internal pressure: internal energy of a van der Waals gas always increases when it expands isothermally.
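As a check, the same result can be obtained symbolically; the following minimal sketch (in Python, using the SymPy library; the symbol names are chosen only for illustration) applies the thermodynamic equation of state π T = T ( ∂ p ∂ T ) V − p {\displaystyle \pi _{T}=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p} directly to the van der Waals equation:

```python
import sympy as sp

T, V, n, R, a, b = sp.symbols('T V n R a b', positive=True)

# van der Waals equation of state
p = n*R*T/(V - n*b) - a*n**2/V**2

# thermodynamic equation of state: pi_T = T*(dp/dT)_V - p
pi_T = sp.simplify(T*sp.diff(p, T) - p)
print(pi_T)   # a*n**2/V**2, which is always positive, as stated above
```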
The a {\displaystyle a} parameter models the effect of attractive forces between molecules in the gas. However, real non-ideal gases may be expected to exhibit a sign change between positive and negative internal pressures under the right environmental conditions if repulsive interactions become important, depending on the system of interest. Loosely speaking, this would tend to happen under conditions of temperature and pressure such that Z {\displaystyle Z} , the compression factor of the gas, is greater than 1.
In addition, through the use of the Euler chain relation it can be shown that π T = − ( ∂ U ∂ T ) V ( ∂ T ∂ V ) U . {\displaystyle \pi _{T}=-\left({\frac {\partial U}{\partial T}}\right)_{V}\left({\frac {\partial T}{\partial V}}\right)_{U}.}
Defining μ J = ( ∂ T ∂ V ) U {\displaystyle \mu _{J}=\left({\frac {\partial T}{\partial V}}\right)_{U}} as the "Joule coefficient" [ 1 ] and recognizing ( ∂ U ∂ T ) V {\displaystyle \left({\frac {\partial U}{\partial T}}\right)_{V}} as the heat capacity at constant volume = C V {\displaystyle =C_{V}} , we have π T = − C V μ J . {\displaystyle \pi _{T}=-C_{V}\mu _{J}.}
The coefficient μ J {\displaystyle \mu _{J}} can be obtained by measuring the temperature change for a constant- U {\displaystyle U} experiment, i.e., an adiabatic free expansion (see below). This coefficient is often small, and usually negative at modest pressures (as predicted by the van der Waals equation).
James Joule tried to measure the internal pressure of air in his expansion experiment by adiabatically pumping high pressure air from one metal vessel into another evacuated one. The water bath in which the system was immersed did not change its temperature, signifying that no change in the internal energy occurred. Thus, the internal pressure of the air was apparently equal to zero and the air acted as a perfect gas. The actual deviations from the perfect behaviour were not observed since they are very small and the specific heat capacity of water is relatively high.
Much later, in 1925 Frederick Keyes and Francis Sears published measurements of the Joule effect for carbon dioxide at T 1 {\displaystyle T_{1}} = 30 °C, P 1 {\displaystyle P_{1}} = (13.3-16.5) atm using improved measurement techniques and better controls. Under these conditions the temperature dropped when the pressure was adiabatically lowered, which indicates that μ J {\displaystyle \mu _{J}} is negative. This is consistent with the van der Waals gas prediction that π T {\displaystyle \pi _{T}} is positive. [ 2 ] | https://en.wikipedia.org/wiki/Internal_pressure |
An internal ribosome entry site , abbreviated IRES , is an RNA element that allows for translation initiation in a cap-independent manner, as part of the greater process of protein synthesis . Initiation of eukaryotic translation nearly always occurs at and is dependent on the 5' cap of mRNA molecules, where the translation initiation complex forms and ribosomes engage the mRNA. IRES elements, however, allow ribosomes to engage the mRNA and begin translation independently of the 5' cap.
IRES sequences were first discovered in 1988 in the poliovirus (PV) and encephalomyocarditis virus (EMCV) RNA genomes in the laboratories of Nahum Sonenberg [ 1 ] and Eckard Wimmer , [ 2 ] respectively. They are described as distinct regions of RNA molecules that are able to recruit the eukaryotic ribosome to the mRNA. This process is also known as cap-independent translation. It has been shown that IRES elements have a distinct secondary or even tertiary structure , but similar structural features at the levels of either primary or secondary structure that are common to all IRES segments have not been reported to date.
Use of IRES sequences in molecular biology soon became common as a tool for expressing multiple genes from a single transcriptional unit in a genetic vector . In such vectors, translation of the first cistron is initiated at the 5' cap, and translation of any downstream cistron is enabled by an IRES element appended at its 5' end. [ 3 ]
IRES elements are most commonly found in the 5' untranslated region , but may also occur elsewhere in mRNAs. The mRNA of viruses of the Dicistroviridae family possess two open reading frames (ORFs), and translation of each is directed by a distinct IRES. It has also been suggested that some mammalian cellular mRNAs also have IRESs, although this has been a matter of dispute. [ 4 ] [ 5 ] A number of these cellular IRES elements are located within mRNAs encoding proteins involved in stress survival , and other processes critical to survival. As of September 2009, there are 60 animal and 8 plant viruses reported to contain IRES elements and 115 mRNA sequences containing them as well. [ 6 ]
IRESs are often used by viruses as a means to ensure that viral translation is active when host translation is inhibited. These mechanisms of host translation inhibition are varied, and can be initiated by both virus and host, depending on the type of virus. However, in the case of most picornaviruses, such as poliovirus , this is accomplished by viral proteolytic cleavage of eIF4G so that it cannot interact with the 5'cap binding protein eIF4E . Interaction between these two eukaryotic initiation factors (eIFs) of the eIF4F complex is necessary for 40S ribosomal subunit recruitment to the 5' end of mRNAs, which is further thought to occur with mRNA 5'cap to 3' poly(A) tail loop formation. The virus may even use partially-cleaved eIF4G to aid in initiation of IRES-mediated translation.
Cells may also use IRESs to increase translation of certain proteins during mitosis and programmed cell death . In mitosis, the cell dephosphorylates eIF4E so that it has little affinity for the 5'cap . As a result, the 40S ribosomal subunit , and the translational machinery is diverted to IRES within the mRNA. Many proteins involved in mitosis are encoded by IRES mRNA. In programmed cell death, cleavage of eIF-4G, such as performed by viruses, decreases translation. Lack of essential proteins contributes to the death of the cell, as does translation of IRES mRNA sequences coding proteins involved in controlling cell death. [ 7 ]
To date, the mechanism of viral IRES function is better characterized than the mechanism of cellular IRES function, [ 8 ] which is still a matter of debate. HCV -like IRESs directly bind the 40S ribosomal subunit so that their initiator codons are located in the ribosomal P-site without mRNA scanning. These IRESs still use the eukaryotic initiation factors (eIFs) eIF2 , eIF3 , eIF5 , and eIF5B , but do not require the factors eIF1 , eIF1A , and the eIF4F complex. In contrast, picornavirus IRESs do not bind the 40S subunit directly, but are recruited instead through the eIF4G -binding site. [ 9 ] Many viral IRES (and cellular IRES) require additional proteins to mediate their function, known as IRES trans -acting factors (ITAFs). The role of ITAFs in IRES function is still under investigation.
Testing of sequences for potential IRES function has generally relied on the use of bicistronic reporter assays . In these tests, a candidate IRES segment is introduced into a plasmid between two cistrons encoding two different reporter proteins. A promoter upstream of the first cistron drives transcription of both cistrons in a single mRNA. Cells are transfected with the plasmid and assays are subsequently performed to quantitate expression of the two reporters in the cells. An increase in the ratio of expression of the downstream reporter relative to the upstream reporter is taken as evidence for IRES activity in the test sequence. However, without characterization of the mRNA species produced from such plasmids, other explanations for the increase in this ratio cannot be ruled out. [ 4 ] [ 5 ] For example, there are multiple known cases of suspected IRES elements that were later reported as having promoter function. Unexpected splicing activity within several reported IRES elements has also been shown to be responsible for the apparent IRES function observed in bicistronic reporter tests. [ 10 ] A promoter or splice acceptor within a test sequence can result in the production of monocistronic mRNA from which the downstream cistron is translated by conventional cap-dependent, rather than IRES-mediated, initiation. A later study that documented a variety of unexpected aberrant mRNA species arising from reporter plasmids revealed that splice acceptor sites can mimic both IRES and promoter elements in tests employing such plasmids, further highlighting the need for caution in the interpretation of reporter assay results in the absence of careful RNA analysis. [ 11 ]
IRES sequences are often used in molecular biology to co-express multiple genes under the control of the same promoter, thereby mimicking a polycistronic mRNA. Over the past few decades, IRES sequences have been used to develop hundreds of genetically modified rodent animal models. [ 12 ] The advantage of this technique is that molecular handling is improved. A drawback of IRES is that the expression of each subsequent gene is decreased. [ 13 ]
Other viral elements used to establish polycistronic mRNA in eukaryotes are 2A-peptides . Here, the potential decrease in gene expression and the degree of incomplete separation of proteins are context dependent. [ 14 ] | https://en.wikipedia.org/wiki/Internal_ribosome_entry_site |
In a chemical analysis , the internal standard method involves adding the same amount of a chemical substance to each sample and calibration solution. The internal standard responds proportionally to changes in the analyte and provides a similar, but not identical, measurement signal. It must also be absent from the sample matrix to ensure there is no other source of the internal standard present. Taking the ratio of analyte signal to internal standard signal and plotting it against the analyte concentrations in the calibration solutions will result in a calibration curve . The calibration curve can then be used to calculate the analyte concentration in an unknown sample. [ 1 ]
Selecting an appropriate internal standard accounts for random and systematic sources of uncertainty that arise during sample preparation or instrument fluctuation. This is because the ratio of analyte relative to the amount of internal standard is independent of these variations. If the measured value of the analyte is erroneously shifted above or below the actual value, the internal standard measurements should shift in the same direction. [ 1 ]
The ratio plot provides a good way to compensate for variation in detector sensitivity, but it may be biased; it should be replaced by relative-concentration/relative-calibration calculations if the response variability is caused by differences in the mass of the analysed sample and the traditional (non-internal-standard) calibration curve of the analyte is not linear through the origin. [ 2 ]
The earliest recorded use of the internal standard method dates back to Gouy's flame spectroscopy work in 1877, where he used an internal standard to determine if the excitation in his flame was consistent. [ 3 ] [ 4 ] His experimental procedure was later reintroduced in the 1940s, when recording flame photometers became readily available. [ 3 ] The use of internal standards continued to grow, being applied to a wide range of analytical techniques including nuclear magnetic resonance (NMR) spectroscopy , chromatography , and inductively coupled plasma spectroscopy .
In NMR spectroscopy, e.g. of the nuclei 1 H, 13 C and 29 Si, frequencies depend on the magnetic field, which is not the same across all experiments. Therefore, frequencies are reported as relative differences to tetramethylsilane (TMS), an internal standard that George Tiers proposed in 1958 and that the International Union of Pure and Applied Chemistry has since endorsed. [ 5 ] [ 6 ] The relative difference to TMS is called chemical shift . [ 7 ]
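In practical terms the chemical shift is a field-independent ratio; one common way of writing it (given here for illustration, with the reference frequency approximated by the spectrometer operating frequency) is δ = ν sample − ν TMS ν spectrometer × 10 6 {\displaystyle \delta ={\frac {\nu _{\text{sample}}-\nu _{\text{TMS}}}{\nu _{\text{spectrometer}}}}\times 10^{6}} ppm, which is why shifts referenced to TMS can be compared between spectrometers operating at different magnetic fields.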
TMS works as an ideal standard because it is relatively inert and its identical methyl protons produce a strong upfield signal, isolated from most other protons. [ 7 ] It is soluble in most organic solvents and is removable via distillation due to its low boiling point. [ 1 ]
In practice, the difference between the signals of common solvents and TMS are known. Therefore, no TMS needs to be added to commercial deuterated solvents, as modern instruments are capable of detecting the small quantities of protonated solvent present. By specifying the lock solvent to be used, modern spectrometers are able to correctly reference the sample; in effect, the solvent itself serves as the internal standard. [ 1 ]
In chromatography, internal standards are used to determine the concentration of other analytes by calculating response factor . The selected internal standard should have a similar retention time and derivatization . It must be stable and not interfere with the sample components. This mitigates the uncertainty that can occur in preparatory steps such as sample injection. [ 1 ]
In gas chromatography-mass spectrometry (GC-MS), deuterated compounds with similar structures to the analyte commonly act as effective internal standards. [ 8 ] However, there are non-deuterated internal standards such as norleucine , which is popular in the analysis of amino acids because it can be separated from accompanying peaks. [ 9 ] [ 10 ] [ 11 ]
Selecting an internal standard for liquid chromatography-mass spectrometry (LC-MS) depends on the employed ionization method. The internal standard needs a comparable ionization response and fragmentation pattern to the analyte. [ 12 ] LC-MS internal standards are often isotopically analogous to the structure of the analyte, using isotopes such as deuterium ( 2 H), 13 C, 15 N and 18 O. [ 13 ]
Selecting an internal standard in inductively coupled plasma spectroscopy can be difficult, because signals from the sample matrix can overlap with those belonging to the analyte. Yttrium is a common internal standard that is naturally absent in most samples. It has both a mid-range mass and emission lines that don't interfere with many analytes. The intensity of the yttrium signal is what the signal from the analyte gets compared to. [ 1 ] [ 14 ]
In Inductively coupled plasma-mass spectrometry (ICP-MS), species with a similar mass to the analyte usually serve as good internal standards, though not in every case. Factors that also contribute to the effectiveness of an internal standard in ICP-MS include how close its ionization potential , change in enthalpy , and change in entropy are to the analyte. [ 15 ]
Inductively coupled plasma-optical emission spectroscopy (ICP-OES) internal standards can be selected by observing how the analyte and internal standard signals change with varying experimental conditions. This includes making adjustments to the sample matrix or instrumentation settings and evaluating whether the selected internal standard is reacting in the same way the analyte is. [ 16 ]
One way to visualize the internal standard method is to create one calibration curve that doesn't use the method and one calibration curve that does. Suppose there are known concentrations of nickel in a set of calibration solutions: 0 ppm, 1.6 ppm, 3.2 ppm, 4.8 ppm, 6.4 ppm, and 8 ppm. Each solution also has 5 ppm yttrium to act as an internal standard. If these solutions are measured using ICP-OES, the intensity of the yttrium signal should be consistent across all solutions. If not, the intensity of the nickel signal is likely imprecise as well.
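A minimal sketch of this comparison (in Python) is shown below; the signal intensities and the drift factor are invented for illustration, and only the nickel concentrations and the 5 ppm yttrium level are taken from the example above.

```python
import numpy as np

# Known nickel concentrations of the calibration solutions (ppm)
ni_conc = np.array([0.0, 1.6, 3.2, 4.8, 6.4, 8.0])

# Hypothetical intensities: a common drift factor mimics instrument
# fluctuation that affects the nickel and yttrium signals in the same way.
drift = np.array([1.00, 0.97, 1.03, 0.95, 1.04, 0.99])
ni_signal = 1200.0 * ni_conc * drift + 15.0   # analyte response
y_signal = 5000.0 * drift                     # 5 ppm yttrium internal standard

def r_squared(x, y):
    # Coefficient of determination for a straight-line fit of y against x.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print("R^2 without internal standard:", r_squared(ni_conc, ni_signal))
print("R^2 with internal standard:   ", r_squared(ni_conc, ni_signal / y_signal))
```

Dividing the analyte signal by the internal-standard signal cancels the common drift, which is why the ratio-based calibration curve is straighter, as the next two paragraphs describe.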
The calibration curve that does not use the internal standard method ignores the uncertainty between measurements. The coefficient of determination (R 2 ) for this plot is 0.9985.
In the calibration curve that uses the internal standard, the y-axis is the ratio of the nickel signal to the yttrium signal. This ratio is unaffected by uncertainty in the nickel measurements, as it should affect the yttrium measurements in the same way. This results in a higher R 2 , 0.9993. | https://en.wikipedia.org/wiki/Internal_standard |
Internal transcribed spacer ( ITS ) is the spacer DNA situated between the small-subunit ribosomal RNA (rRNA) and large-subunit rRNA genes in the chromosome or the corresponding transcribed region in the polycistronic rRNA precursor transcript.
In bacteria and archaea , there is a single ITS, located between the 16S and 23S rRNA genes. Conversely, there are two ITSs in eukaryotes : ITS1 is located between 18S and 5.8S rRNA genes, while ITS2 is between 5.8S and 28S (in opisthokonts , or 25S in plants) rRNA genes. ITS1 corresponds to the ITS in bacteria and archaea, while ITS2 originated as an insertion that interrupted the ancestral 23S rRNA gene. [ 1 ] [ 2 ]
In bacteria and archaea , the ITS occurs in one to several copies, as do the flanking 16S and 23S genes. When there are multiple copies, these do not occur adjacent to one another. Rather, they occur in discrete locations in the circular chromosome. It is not uncommon in bacteria to carry tRNA genes in the ITS. [ 3 ] [ 4 ]
In eukaryotes, genes encoding ribosomal RNA and spacers occur in tandem repeats that are thousands of copies long, each separated by regions of non-transcribed DNA termed intergenic spacer (IGS) or non-transcribed spacer (NTS).
Each eukaryotic ribosomal cluster contains the 5' external transcribed spacer (5' ETS), the 18S rRNA gene, the ITS1, the 5.8S rRNA gene, the ITS2, the 26S or 28S rRNA gene, and finally the 3' ETS. [ 5 ]
During rRNA maturation, ETS and ITS pieces are excised. As non-functional by-products of this maturation, they are rapidly degraded. [ 6 ]
Sequence comparison of the eukaryotic ITS regions is widely used in taxonomy and molecular phylogeny because of several favorable properties: [ 7 ]
For example, ITS markers have proven especially useful for elucidating phylogenetic relationships among the following taxa.
ITS2 is known to be more conserved than ITS1 is. All ITS2 sequences share a common core of secondary structure, [ 26 ] while ITS1 structures are only conserved in much smaller taxonomic units. Regardless of the scope of conservation, structure-assisted comparison can provide higher resolution and robustness. [ 27 ]
The ITS region is the most widely sequenced DNA region in molecular ecology of fungi [ 28 ] and has been recommended as the universal fungal barcode sequence. [ 29 ] It has typically been most useful for molecular systematics at the species to genus level, and even within species (e.g., to identify geographic races). Because of its higher degree of variation than other genic regions of rDNA (for example, small- and large-subunit rRNA), variation among individual rDNA repeats can sometimes be observed within both the ITS and IGS regions. In addition to the universal ITS1+ITS4 primers [ 30 ] [ 31 ] used by many labs, several taxon-specific primers have been described that allow selective amplification of fungal sequences (e.g., see Gardes & Bruns 1993 paper describing amplification of basidiomycete ITS sequences from mycorrhiza samples). [ 32 ] Despite shotgun sequencing methods becoming increasingly utilized in microbial sequencing, the low biomass of fungi in clinical samples make the ITS region amplification an area of ongoing research. [ 33 ] [ 34 ] | https://en.wikipedia.org/wiki/Internal_transcribed_spacer |
Internal waves are gravity waves that oscillate within a fluid medium, rather than on its surface. To exist, the fluid must be stratified : the density must change (continuously or discontinuously) with depth/height due to changes, for example, in temperature and/or salinity. If the density changes over a small vertical distance (as in the case of the thermocline in lakes and oceans or an atmospheric inversion ), the waves propagate horizontally like surface waves, but do so at slower speeds as determined by the density difference of the fluid below and above the interface. If the density changes continuously, the waves can propagate vertically as well as horizontally through the fluid.
Internal waves, also called internal gravity waves, go by many other names depending upon the fluid stratification, generation mechanism, amplitude, and influence of external forces. If propagating horizontally along an interface where the density rapidly decreases with height, they are specifically called interfacial (internal) waves. If the interfacial waves are large amplitude they are called internal solitary waves or internal solitons . If moving vertically through the atmosphere where substantial changes in air density influences their dynamics, they are called anelastic (internal) waves. If generated by flow over topography, they are called Lee waves or mountain waves . If the mountain waves break aloft, they can result in strong warm winds at the ground known as Chinook winds (in North America) or Foehn winds (in Europe). If generated in the ocean by tidal flow over submarine ridges or the continental shelf, they are called internal tides. If they evolve slowly compared to the Earth's rotational frequency so that their dynamics are influenced by the Coriolis effect , they are called inertia gravity waves or, simply, inertial waves . Internal waves are usually distinguished from Rossby waves , which are influenced by the change of Coriolis frequency with latitude.
An internal wave can readily be observed in the kitchen by slowly tilting back and forth a bottle of salad dressing - the waves exist at the interface between oil and vinegar.
Atmospheric internal waves can be visualized by wave clouds : at the wave crests air rises and cools in the relatively lower pressure, which can result in water vapor condensation if the relative humidity is close to 100%. Clouds that reveal internal waves launched by flow over hills are called lenticular clouds because of their lens-like appearance. Less dramatically, a train of internal waves can be visualized by rippled cloud patterns described as herringbone sky or mackerel sky . The outflow of cold air from a thunderstorm can launch large amplitude internal solitary waves at an atmospheric inversion . In northern Australia, these result in Morning Glory clouds , used by some daredevils to glide along like a surfer riding an ocean wave. Satellites over Australia and elsewhere reveal these waves can span many hundreds of kilometers.
Undulations of the oceanic thermocline can be visualized by satellite because the waves increase the surface roughness where the horizontal flow converges, and this increases the scattering of sunlight (as in the image at the top of this page showing waves generated by tidal flow through the Strait of Gibraltar ).
According to Archimedes' principle , the weight of an immersed object is reduced by the weight of fluid it displaces. This holds for a fluid parcel of density ρ {\displaystyle \rho } surrounded by an ambient fluid of density ρ 0 {\displaystyle \rho _{0}} . Its weight per unit volume is g ( ρ − ρ 0 ) {\displaystyle g(\rho -\rho _{0})} , in which g {\displaystyle g} is the acceleration of gravity. Dividing by a characteristic density, ρ 00 {\displaystyle \rho _{00}} , gives the definition of the reduced gravity: g ′ = g ρ − ρ 0 ρ 00 . {\displaystyle g^{\prime }=g\,{\frac {\rho -\rho _{0}}{\rho _{00}}}.}
If ρ > ρ 0 {\displaystyle \rho >\rho _{0}} , g ′ {\displaystyle g^{\prime }} is positive though generally much smaller than g {\displaystyle g} . Because water is much more dense than air, the displacement of water by air from a surface gravity wave feels nearly the full force of gravity ( g ′ ∼ g {\displaystyle g^{\prime }\sim g} ). The displacement of the thermocline of a lake, which separates warmer surface from cooler deep water, feels the buoyancy force expressed through the reduced gravity. For example, the density difference between ice water and room temperature water is 0.002 times the characteristic density of water. So the reduced gravity is 0.2% that of gravity. It is for this reason that internal waves move in slow motion relative to surface waves.
Whereas the reduced gravity is the key variable describing buoyancy for interfacial internal waves, a different quantity is used to describe buoyancy in continuously stratified fluid whose density varies with height as ρ 0 ( z ) {\displaystyle \rho _{0}(z)} . Suppose a water column is in hydrostatic equilibrium and a small parcel of fluid with density ρ 0 ( z 0 ) {\displaystyle \rho _{0}(z_{0})} is displaced vertically by a small distance Δ z {\displaystyle \Delta z} . The buoyant restoring force results in a vertical acceleration, given by [ 1 ] [ 2 ] d 2 Δ z d t 2 = g ρ 00 d ρ 0 d z Δ z . {\displaystyle {\frac {\operatorname {d} ^{2}\Delta z}{\operatorname {d} t^{2}}}={\frac {g}{\rho _{00}}}{\frac {\operatorname {d} \rho _{0}}{\operatorname {d} z}}\,\Delta z.}
This is the spring equation, whose solution predicts oscillatory vertical displacement about z 0 {\displaystyle z_{0}} in time with frequency given by the buoyancy frequency : N = ( − g ρ 00 d ρ 0 d z ) 1 / 2 . {\displaystyle N=\left(-{\frac {g}{\rho _{00}}}{\frac {\operatorname {d} \rho _{0}}{\operatorname {d} z}}\right)^{1/2}.}
The above argument can be generalized to predict the frequency, ω {\displaystyle \omega } , of a fluid parcel that oscillates along a line at an angle Θ {\displaystyle \Theta } to the vertical: ω = N cos ⁡ Θ . {\displaystyle \omega =N\cos \Theta .}
This is one way to write the dispersion relation for internal waves whose lines of constant phase lie at an angle Θ {\displaystyle \Theta } to the vertical. In particular, this shows that the buoyancy frequency is an upper limit of allowed internal wave frequencies.
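As a numerical illustration (the density gradient below is a typical open-ocean value assumed for the example, not taken from a particular source), the buoyancy frequency and the allowed wave frequencies can be evaluated as follows:

```python
import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
rho00 = 1025.0    # characteristic seawater density [kg/m^3]
drho_dz = -0.01   # ambient density gradient [kg/m^4] (assumed typical value)

N = np.sqrt(-g / rho00 * drho_dz)   # buoyancy frequency [rad/s]
print(f"N = {N:.3e} rad/s, buoyancy period = {2*np.pi/N/60:.1f} min")

for theta_deg in (0, 30, 60, 80):
    omega = N * np.cos(np.radians(theta_deg))   # dispersion relation: omega = N cos(theta)
    print(f"theta = {theta_deg:2d} deg -> omega = {omega:.2e} rad/s")
```

For these assumed values the buoyancy period is on the order of ten minutes, and the wave frequency falls from N toward zero as the lines of constant phase tilt away from the vertical.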
The theory for internal waves differs in the description of interfacial waves and vertically propagating internal waves. These are treated separately below.
In the simplest case, one considers a two-layer fluid in which a slab of fluid with uniform density ρ 1 {\displaystyle \rho _{1}} overlies a slab of fluid with uniform density ρ 2 {\displaystyle \rho _{2}} . Arbitrarily the interface between the two layers is taken to be situated at z = 0. {\displaystyle z=0.} The fluid in the upper and lower layers is assumed to be irrotational . So the velocity in each layer is given by the gradient of a velocity potential , u → = ∇ ϕ , {\displaystyle {{\vec {u}}=\nabla \phi ,}} and the potential itself satisfies Laplace's equation : ∇ 2 ϕ = 0. {\displaystyle \nabla ^{2}\phi =0.}
Assuming the domain is unbounded and two-dimensional (in the x − z {\displaystyle x-z} plane), and assuming the wave is periodic in x {\displaystyle x} with wavenumber k > 0 , {\displaystyle k>0,} the equations in each layer reduce to a second-order ordinary differential equation in z {\displaystyle z} . Insisting on bounded solutions the velocity potential in each layer is ϕ 1 = A e − k z cos ⁡ ( k x − ω t ) {\displaystyle \phi _{1}=Ae^{-kz}\cos(kx-\omega t)} in the upper layer ( z > 0 {\displaystyle z>0} )
and ϕ 2 = − A e k z cos ⁡ ( k x − ω t ) {\displaystyle \phi _{2}=-Ae^{kz}\cos(kx-\omega t)} in the lower layer ( z < 0 {\displaystyle z<0} ),
with A {\displaystyle A} the amplitude of the wave and ω {\displaystyle \omega } its angular frequency . In deriving this structure, matching conditions have been used at the interface requiring continuity of mass and pressure. These conditions also give the dispersion relation : [ 3 ] ω 2 = g ′ k , {\displaystyle \omega ^{2}=g^{\prime }k,}
in which the reduced gravity g ′ {\displaystyle g^{\prime }} is based on the density difference between the upper and lower layers: g ′ = g ρ 2 − ρ 1 ρ 2 + ρ 1 , {\displaystyle g^{\prime }=g\,{\frac {\rho _{2}-\rho _{1}}{\rho _{2}+\rho _{1}}},}
with g {\displaystyle g} the Earth's gravity . Note that the dispersion relation is the same as that for deep water surface waves by setting g ′ = g . {\displaystyle g^{\prime }=g.}
The structure and dispersion relation of internal waves in a uniformly stratified fluid are found through the solution of the linearized conservation of mass, momentum, and internal energy equations assuming the fluid is incompressible and the background density varies by a small amount (the Boussinesq approximation ). Assuming the waves are two dimensional in the x-z plane, the respective equations are ∂ u ∂ x + ∂ w ∂ z = 0 , {\displaystyle {\frac {\partial u}{\partial x}}+{\frac {\partial w}{\partial z}}=0,} ρ 00 ∂ u ∂ t = − ∂ p ∂ x , {\displaystyle \rho _{00}{\frac {\partial u}{\partial t}}=-{\frac {\partial p}{\partial x}},} ρ 00 ∂ w ∂ t = − ∂ p ∂ z − ρ g , {\displaystyle \rho _{00}{\frac {\partial w}{\partial t}}=-{\frac {\partial p}{\partial z}}-\rho g,} and ∂ ρ ∂ t = − w d ρ 0 d z , {\displaystyle {\frac {\partial \rho }{\partial t}}=-w{\frac {\operatorname {d} \rho _{0}}{\operatorname {d} z}},}
in which ρ {\displaystyle \rho } is the perturbation density, p {\displaystyle p} is the pressure, and ( u , w ) {\displaystyle (u,w)} is the velocity. The ambient density changes linearly with height as given by ρ 0 ( z ) {\displaystyle \rho _{0}(z)} and ρ 00 {\displaystyle \rho _{00}} , a constant, is the characteristic ambient density.
Solving the four equations in four unknowns for a wave of the form exp [ i ( k x + m z − ω t ) ] {\displaystyle \exp[i(kx+mz-\omega t)]} gives the dispersion relation ω 2 = N 2 k 2 k 2 + m 2 = N 2 cos 2 ⁡ Θ , {\displaystyle \omega ^{2}=N^{2}{\frac {k^{2}}{k^{2}+m^{2}}}=N^{2}\cos ^{2}\Theta ,}
in which N {\displaystyle N} is the buoyancy frequency and Θ = tan − 1 ( m / k ) {\displaystyle \Theta =\tan ^{-1}(m/k)} is the angle of the wavenumber vector to the horizontal, which is also the angle formed by lines of constant phase to the vertical.
The phase velocity and group velocity found from the dispersion relation predict the unusual property that they are perpendicular and that the vertical components of the phase and group velocities have opposite sign: if a wavepacket moves upward to the right, the crests move downward to the right.
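This property can be checked directly from the dispersion relation; the following minimal sketch (in Python, with arbitrary illustrative wavenumbers and buoyancy frequency) evaluates the two velocity vectors:

```python
import numpy as np

N = 1e-2            # buoyancy frequency [rad/s] (assumed value)
k, m = 1e-3, 2e-3   # horizontal and vertical wavenumbers [rad/m] (illustrative)

K2 = k**2 + m**2
omega = N * k / np.sqrt(K2)                  # dispersion relation

c_phase = omega / K2 * np.array([k, m])      # phase velocity vector
c_group = np.array([ N * m**2 / K2**1.5,     # d(omega)/dk
                    -N * k * m / K2**1.5])   # d(omega)/dm

print("phase . group =", np.dot(c_phase, c_group))     # ~0: the vectors are perpendicular
print("vertical components:", c_phase[1], c_group[1])  # opposite signs
```

The dot product vanishes for any choice of wavenumbers, and the vertical components of the two velocities always carry opposite signs, reproducing the behaviour described above.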
Most people think of waves as a surface phenomenon, which acts between water (as in lakes or oceans) and the air. Where low density water overlies high density water in the ocean , internal waves propagate along the boundary. They are especially common over the continental shelf regions of the world oceans and where brackish water overlies salt water at the outlet of large rivers. There is typically little surface expression of the waves, aside from slick bands that can form over the trough of the waves.
Internal waves are the source of a curious phenomenon called dead water , first reported in 1893 by the Norwegian oceanographer Fridtjof Nansen , in which a boat may experience strong resistance to forward motion in apparently calm conditions. This occurs when the ship is sailing on a layer of relatively fresh water whose depth is comparable to the ship's draft. This causes a wake of internal waves that dissipates a huge amount of energy. [ 4 ]
Internal waves typically have much lower frequencies and higher amplitudes than surface gravity waves because the density differences (and therefore the restoring forces) within a fluid are usually much smaller. Wavelengths vary from centimetres to kilometres with periods of seconds to hours respectively.
The atmosphere and ocean are continuously stratified: potential density generally increases steadily downward. Internal waves in a continuously stratified medium may propagate vertically as well as horizontally. The dispersion relation for such waves is curious: For a freely-propagating internal wave packet , the direction of propagation of energy ( group velocity ) is perpendicular to the direction of propagation of wave crests and troughs ( phase velocity ). An internal wave may also become confined to a finite region of altitude or depth, as a result of varying stratification or wind . Here, the wave is said to be ducted or trapped , and a vertically standing wave may form, where the vertical component of group velocity approaches zero. A ducted internal wave mode may propagate horizontally, with parallel group and phase velocity vectors , analogous to propagation within a waveguide .
At large scales, internal waves are influenced both by the rotation of the Earth as well as by the stratification of the medium. The frequencies of these geophysical wave motions vary from a lower limit of the Coriolis frequency ( inertial motions ) up to the Brunt–Väisälä frequency , or buoyancy frequency (buoyancy oscillations). Above the Brunt–Väisälä frequency , there may be evanescent internal wave motions, for example those resulting from partial reflection . Internal waves at tidal frequencies are produced by tidal flow over topography/bathymetry, and are known as internal tides . Similarly, atmospheric tides arise from, for example, non-uniform solar heating associated with diurnal motion .
Cross-shelf transport, the exchange of water between coastal and offshore environments, is of particular interest for its role in delivering meroplanktonic larvae to often disparate adult populations from shared offshore larval pools. [ 5 ] Several mechanisms have been proposed for the cross-shelf transport of planktonic larvae by internal waves. The prevalence of each type of event depends on a variety of factors including bottom topography, stratification of the water body, and tidal influences.
Similarly to surface waves, internal waves change as they approach the shore. As the ratio of wave amplitude to water depth becomes such that the wave “feels the bottom,” water at the base of the wave slows down due to friction with the sea floor. This causes the wave to become asymmetrical and the face of the wave to steepen, and finally the wave will break, propagating forward as an internal bore. [ 6 ] [ 7 ] Internal waves are often formed as tides pass over a shelf break. [ 8 ] The largest of these waves are generated during springtides and those of sufficient magnitude break and progress across the shelf as bores. [ 9 ] [ 10 ] These bores are evidenced by rapid, step-like changes in temperature and salinity with depth, the abrupt onset of upslope flows near the bottom and packets of high frequency internal waves following the fronts of the bores. [ 11 ]
The arrival of cool, formerly deep water associated with internal bores into warm, shallower waters corresponds with drastic increases in phytoplankton and zooplankton concentrations and changes in plankter species abundances. [ 12 ] Additionally, while both surface waters and those at depth tend to have relatively low primary productivity, thermoclines are often associated with a chlorophyll maximum layer. These layers in turn attract large aggregations of mobile zooplankton [ 13 ] that internal bores subsequently push inshore. Many taxa can be almost absent in warm surface waters, yet plentiful in these internal bores. [ 12 ]
While internal waves of higher magnitudes will often break after crossing over the shelf break, smaller trains will proceed across the shelf unbroken. [ 10 ] [ 14 ] At low wind speeds these internal waves are evidenced by the formation of wide surface slicks, oriented parallel to the bottom topography, which progress shoreward with the internal waves. [ 15 ] [ 16 ] Waters above an internal wave converge and sink in its trough and upwell and diverge over its crest. [ 15 ] The convergence zones associated with internal wave troughs often accumulate oils and flotsam that occasionally progress shoreward with the slicks. [ 17 ] [ 18 ] These rafts of flotsam can also harbor high concentrations of larvae of invertebrates and fish an order of magnitude higher than the surrounding waters. [ 18 ]
Thermoclines are often associated with chlorophyll maximum layers. [ 13 ] Internal waves represent oscillations of these thermoclines and therefore have the potential to transfer these phytoplankton rich waters downward, coupling benthic and pelagic systems. [ 19 ] [ 20 ] Areas affected by these events show higher growth rates of suspension feeding ascidians and bryozoans , likely due to the periodic influx of high phytoplankton concentrations. [ 21 ] Periodic depression of the thermocline and associated downwelling may also play an important role in the vertical transport of planktonic larvae.
Large steep internal waves containing trapped, reverse-oscillating cores can also transport parcels of water shoreward. [ 22 ] These non-linear waves with trapped cores had previously been observed in the laboratory [ 23 ] and predicted theoretically. [ 24 ] These waves propagate in environments characterized by high shear and turbulence and likely derive their energy from waves of depression interacting with a shoaling bottom further upstream. [ 22 ] The conditions favorable to the generation of these waves are also likely to suspend sediment along the bottom as well as plankton and nutrients found along the benthos in deeper water. | https://en.wikipedia.org/wiki/Internal_wave |
Internal working model of attachment is a psychological approach that attempts to describe the development of mental representations, specifically the worthiness of the self and expectations of others' reactions to the self. This model is a result of interactions with primary caregivers which become internalized, and is therefore an automatic process. [ 1 ] John Bowlby implemented this model in his attachment theory in order to explain how infants act in accordance with these mental representations. It is an important aspect of general attachment theory .
Such internal working models guide future behavior as they generate expectations of how attachment figures will respond to one's behavior. [ 2 ] For example, a parent rejecting the child's need for care conveys that close relationships should be avoided in general, resulting in maladaptive attachment styles.
The most influential figure for the idea of the internal working model of attachment is Bowlby, who laid the groundwork for the concept in the 1960s. He was inspired by both psychoanalysis, especially object relations theory , and more recent research into ethology, evolution and information-processing.
In psychoanalytic theory, there has been the idea of an inner or representational world (proposed by Freud ) as well as the internalization of relationships ( Fairbairn , Winnicott ). According to Freud first schemata evolve out of experiences regarding need fulfilment via the attachment figure. [ 3 ] He argued that the resulting mental representation is an internal copy of the external world made up from memories, and thinking serves the role of experimental action. Fairbairn and Winnicott proposed that these early patterns of relationships become internalized and govern future relationships. [ 2 ]
However, the ethological-evolutionary aspects of the theory received more attention. Bowlby was interested in separation distress and bonding in animals. He noticed that many infant behaviours are organized around the goal of maintaining proximity to the caregiver. [ 4 ] He proposed that human infants, like other mammals, must have an attachment motivational-behavioural system which enhances chances for survival. [ 2 ] Ainsworth observed mother-infant interaction and came to the conclusion that individual differences in reaction to separation could not be explained by the simple absence or presence of the caregiver but must be the result of a cognitive process. [ 4 ]
However, when Bowlby developed his attachment theory, cognitive psychology was still in its infancy. Only in 1967 did Neisser propose a theory of mental representation based on schemas, which later led to the development of schema theory . Such scripts have been suggested as the basis of the structure of internal working models. [ 5 ]
The term internal working model, however, was coined quite early, by Craik (1943). What he called an internal working model was a more elaborate and modern version of the psychoanalytical idea of the internal world. [ 2 ] In essence, he claimed that humans carry in their minds a small-scale representation, or model, of reality and of their own potential actions within it. [ 6 ]
In summary, Bowlby remodelled Freud's work on relationship development in terms of newer fields of research (evolutionary biology, ethology, information-processing theory), drawing both on Craik's idea of representations as the formation and use of dynamic models and on Piaget 's theory of cognitive development. [ 4 ]
There are several hypothesized functions of an internal working model of attachment, both in terms of its evolutionary origins and inherent functioning.
Bowlby proposed that proximity-seeking behaviour evolved out of selection pressure. [ 4 ] In the context of survival, a healthy internal working model helps the infant to maintain proximity to their caregiver in the face of threat or danger. [ 7 ] This is especially important for species with prolonged periods of development, like humans. Due to the relative immaturity of the infant at birth, offspring that manage to maintain a close relationship with their caregiver by seeking proximity have a survival advantage. [ 4 ] A close emotional bond to the caregiver is therefore crucial for protection from physical harm, and thus the internal working model mediates attachment. [ 8 ] This regulation is enforced via a motivational-behavioural system, motivating both infant and caregiver to seek proximity. [ 6 ] Specifically, caregiving is regulated by behavioural processes complementary to the infant's proximity-seeking, e.g. the baby smiles and the adult feels rewarded as a result. [ 4 ]
Having an adequate internal model or representation of the self and the caregiver also serves the adaptive function of ensuring appropriate interpretation of, prediction of, and response to the environment. [ 6 ] Craik especially emphasized that organisms capable of forming complex internal working models have higher chances of survival. [ 4 ] The better the internal working model can simulate reality, the better the individual's capacity to plan and respond. [ 6 ] According to Bowlby, individuals form models both of the world and of the self within it. These models, initially the product of specific experiences of reality, then aid future attention to, perception of, and interpretation of the world, which in turn creates certain expectations about possible future events, allowing foresightful and appropriate behaviour. Hence, having adequate representations of the self and caregivers serves an adaptive function. [ 8 ] [ 6 ]
Lastly, if the infant can be sure about the availability of the attachment figure, it will be less prone to fear due to the supportive presence or secure base function of the caregiver, which makes exploration of the environment and hence learning possible. [ 6 ] This felt security is the primary goal of all working models. [ 8 ] Ainsworth researched the secure base phenomenon in her strange situation procedure in which an infant uses their mother as a secure base. [ 4 ] The attachment system provides the child with a sense of security in the form of this base, which supports exploration of the environment and hence independence. [ 7 ] A securely attached child will, in turn, achieve a balance between intimacy and independence. [ 8 ] This corresponds to a balance between the attachment system which serves the function of protection and the exploration system which facilitates learning. [ 4 ]
The function of other attachment styles can be explained in terms of an imbalance between intimacy and independence, that is, a preoccupation with one of these goals. This overriding chronic goal is intimacy in preoccupied children and independence or self-protection in dismissive children; in the fearful child, there is a conflicting chronic goal of achieving both intimacy and independence at the same time, or an approach-avoidance conflict, owing to relative inflexibility in comparison with secure attachment.
The internal working model functions largely outside of conscious awareness. Those subconscious aspects might be especially important for the function of self-protection and serve as a defence mechanism in the face of contradicting models, where one of them operates within the subconscious to prevent a threat to the self. This is mostly the case for dismissive-avoidant attachment where conflicting ideas of the caregiver as both loving and neglecting cause the defence mechanism of downplaying the need for intimacy, not relying on the attachment figure, and emphasizing independence. [ 8 ]
Infants develop different types of internal working models depending on two factors: the responsiveness and accessibility of the parent, and the worthiness of the self to be loved and supported. Thus, by the age of three years, infants will have developed several expectations about how attachment figures will react to their need for help and will have started to evaluate how worthy of support the self is in general. [ 9 ] These internalized representations of the self, of attachment figures, and of relationships are constructed as a result of experiences with primary caregivers. They guide the individual's expectations about relationships throughout life, subsequently influencing social behavior, perception of others and the development of self-esteem. [ 10 ]
Essentially, four different internal working models can be defined which are based on positive or negative images of self and others. [ 7 ] Children who feel securely attached seek their parent as a secure base and are willing to explore their environment. In adulthood, they hold a positive model of self and others, therefore, feeling comfortable with intimacy and autonomy. On the contrary, adults who develop a fearful-avoidant internal working model (negative self, negative others) construct defense mechanisms in order to protect themselves from being rejected by others. Consequently, they avoid intimate relationships. The third category is classified as the preoccupied model, indicating a combination of negative self-evaluation and the appreciation of others, which makes them overly dependent on their environment. Finally, dismissive-avoidant adults aim for independence as they view themselves as valuable and autonomous. They rarely open up and mainly rely on themselves due to lack of trust in others. [ 7 ]
Internal working models are considered to result from generalized representations of past interactions between the attachment figure and the child. [ 11 ] [ 2 ] [ 3 ] Thus, in forming an internal working model a child takes into account past experiences with the caregiver as well as the outcomes of past attempts to establish contact with the caregiver. [ 3 ] One important factor in the establishment of generalized representations is caregiver behaviour. [ 8 ] Accordingly, a child whose caretaker exhibits high levels of parental sensitivity , responsiveness and reliability is likely to develop a positive internal working model of the self. Conversely, frequent experiences of unreliability and neglect by the attachment figure foster the emergence of negative internal working models of self and others. [ 12 ]
As infants have been shown to possess the social and cognitive capacities necessary to form internal working models, initial development of these may occur within the first year of life. [ 11 ] [ 3 ] Once established, internal working models are assumed to remain largely consistent over time, developing primarily in complexity and sophistication. [ 5 ] As such, internal working models of young children may include representations of past instances of caregiver responsiveness or availability, while older children's and adults' internal working models may integrate more advanced cognitive abilities such as the imagination of hypothetical future interactions. [ 8 ] However, changes to internal representations of attachment relationships can occur. This is most likely to happen upon repeated experiences that are incompatible with the internal working model in place at the time. [ 11 ] One way this can happen is during major periods (meaning weeks or months) of absence of the attachment figure. [ 11 ] During such prolonged absence, a child's expectation of the caregiver's availability to respond is continuously violated. This results in a change of behaviour toward the caregiver upon reunion, reflecting changes in the child's internal working model of the relationship. [ 3 ]
Internal working models are subject to intergenerational transmission, meaning that parents' internal working model patterns may be passed on to their children. [ 2 ] [ 13 ] Indeed, high correlations have been found between security of early infant attachment and parental internal working model security. [ 3 ] [ 13 ] A central aspect in intergenerational transmission of internal working models is that caretakers themselves are influenced in their behaviour toward children by their own internal working models. For instance, a parent with a secure and consistent internal working model is likely to interpret an infant's attachment signals appropriately, whereas a parent with an insecure internal working model is less likely to do so. [ 2 ] In the latter case, the infant itself might be drawn to construct a negative working model of the self and the relationship. Furthermore, a parent with a negative, poorly organized and inconsistent working model might fail to provide useful feedback about the parent-infant dyad and other relationships, thus disrupting the infant's forming of a well-adapted working model at an early stage. [ 2 ] The result will be a negative, disorganized internal working model employed by the infant.
One mechanism by which attachment (and thus, internal working models of attachment) can be transmitted is joint reminiscing about past events or memories. For instance, mothers who are securely attached tend to communicate about past events in more elaborate ways than do mothers who are not securely attached. [ 5 ] While reminiscing together about past events, securely attached mothers will then engage in more elaborate reasoning with their child, thereby stimulating the development of a more elaborate, coherent internal working model by the child itself. [ 5 ] [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Internal_working_model_of_attachment |
Internally grooved copper tubes, also known as "microfin tubes", are a small-diameter coil technology for modern air conditioning and refrigeration systems. Grooved coils facilitate more efficient heat transfer than smooth coils. [ 1 ] [ 2 ] Small-diameter coils have better rates of heat transfer than the conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins that have been the standard in the HVAC industry for many years. Small-diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. They have lower material costs because they require less refrigerant, fin, and coil material. They enable the design of smaller and lighter high-efficiency air conditioners and refrigerators because the evaporator and condenser coils are smaller and lighter.
With MicroGroove technology, heat transfer is enhanced by grooving the inside surface of the tube. This increases the surface-to-volume ratio, mixes the refrigerant , and homogenizes refrigerant temperatures across the tube. [ 3 ] [ 4 ] [ 5 ] [ 6 ]
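The effect of grooving on the inside surface area can be illustrated with a rough geometric estimate. The Python sketch below compares the wetted perimeter of a smooth bore with that of a microfinned bore; the fin count and fin height are assumed placeholder values for a nominal 5 mm tube, not manufacturer specifications, and each fin is approximated as a simple rib that adds two side walls to the perimeter.

    import math

    def inner_perimeter(bore_diameter_m, n_fins=0, fin_height_m=0.0):
        # Smooth-bore perimeter plus two side walls per rib-like fin
        return math.pi * bore_diameter_m + n_fins * 2.0 * fin_height_m

    bore = 4.5e-3                                  # assumed bore of a nominal 5 mm tube, metres
    smooth = inner_perimeter(bore)                 # smooth tube
    finned = inner_perimeter(bore, n_fins=50, fin_height_m=0.15e-3)  # assumed 50 fins, 0.15 mm high
    print(f"inner surface area enhancement = {finned / smooth:.2f}x")

Under these assumed dimensions the internal surface area per unit length roughly doubles, which is the kind of enhancement the grooving aims for; the actual factor depends on the real fin geometry.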
Tubes with MicroGroove technology can be made with copper or aluminium . Copper fins are an attractive alternative to aluminium due to the better corrosion resistance of copper and its antimicrobial benefits. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
To use smaller tubes instead of conventionally sized tubes in air conditioners, heat exchangers must be redesigned, including the fins and tube circuits. [ 12 ] Design optimization requires the use of computational fluid dynamics to analyze airflow around the tubes and fins, as well as computer simulations of refrigerant flow and temperatures inside the tubes. This is important because the overall heat transfer coefficient of a coil is a function of convection from the refrigerant inside the tube to the tube wall, conduction through the tube wall, and dissipation through the fins. [ 13 ] [ 14 ] [ 15 ]
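As a minimal sketch of the series thermal-resistance reasoning described above, the following Python function combines refrigerant-side convection, conduction through the tube wall and air-side dissipation through the fins into an overall conductance for one metre of tube. All heat transfer coefficients, areas and the fin efficiency are assumed illustrative numbers, not data from any published coil design.

    import math

    def overall_UA(h_in, A_in, k_wall, D_out, D_in, L, h_out, A_out, fin_eff):
        # Overall conductance UA [W/K] of a finned tube, series-resistance model
        R_in = 1.0 / (h_in * A_in)                                      # refrigerant -> inner wall
        R_wall = math.log(D_out / D_in) / (2.0 * math.pi * k_wall * L)  # conduction through the wall
        R_out = 1.0 / (fin_eff * h_out * A_out)                         # outer wall -> air via fins
        return 1.0 / (R_in + R_wall + R_out)

    # Assumed values for a 5 mm copper tube, 1 m long
    UA = overall_UA(h_in=3000.0, A_in=math.pi * 4.5e-3 * 1.0, k_wall=385.0,
                    D_out=5.0e-3, D_in=4.5e-3, L=1.0,
                    h_out=60.0, A_out=0.25, fin_eff=0.85)
    print(f"UA = {UA:.1f} W/K")

In a model of this kind the air-side term usually dominates, which is why fin design receives as much attention as the tube itself.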
Engineering considerations for using MicroGroove include:
Published experiments on MicroGroove coil performance and energy efficiency take into account the effects of fin spacing and fin design, tube diameter, and tube circuitry. [ 17 ] Tube circuitry is substantially different from that of conventional coils. Coils should be optimized with respect to the number of paths between the inlet and outlet manifolds. Typically, smaller diameter tubes require more paths of shorter lengths. Published research on tube circuitry [ 18 ] and on fin design for heat exchangers made with 4 mm tubes [ 19 ] is available.
Research on a heat exchanger redesign with 5 mm diameter tubes demonstrated a 5% greater heat exchange capacity than that of the same size heat exchanger with 7 mm diameter tubes. Also, the refrigerant charge of the 5 mm diameter tubes was less than that of the 7 mm diameter tubes. [ 20 ] In China, Chigo, Gree, and Kelon are producing air conditioners with coils that have 5 mm diameter tubes. [ 21 ]
A variety of fin designs have been developed for use with small-diameter copper tubes. The performance of slotted and louvered fin designs has been evaluated and compared as a function of various fin dimensions. Simulations have been used to optimize fin design performance. [ 22 ]
The phasing out of CFC and HCFC refrigerants (e.g., HCFC-22 , also known as R22 ) due to ozone depletion and global warming concerns has helped to spur innovations in cooling technologies. [ 23 ] [ 24 ] Natural refrigerants such as carbon dioxide ( R744 ) and propane ( R290 ), as well as R-410A, have become attractive replacements for air conditioning and refrigeration applications.
Higher pressures are typically required to condense these new environmentally friendly refrigerants compared to those being phased out. Small-diameter copper tubes are more desirable in applications with higher pressures. For tubes of the same wall thickness, smaller diameter tubes can withstand higher pressures than larger diameter tubes. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 25 ] Hence, as tube diameters decrease, burst pressures increase. This is because allowable working pressure is directly proportional to wall thickness and inversely proportional to diameter. By designing coils with shorter tube lengths, less work is required to circulate the refrigerant. Therefore, the pressure drop penalty of small-diameter tubes can be offset.
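The proportionality stated above corresponds to the thin-walled (Barlow) approximation for the allowable internal pressure of a tube; it is offered here only as an illustration, not as the rating formula used by any particular standard:

\[ P \approx \frac{2\,S\,t}{D} \]

where \(P\) is the allowable working pressure, \(S\) the allowable stress of the tube material, \(t\) the wall thickness and \(D\) the tube diameter. For a fixed wall thickness, reducing the diameter from 9.52 mm to 5 mm roughly doubles the pressure the tube can hold.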
Carbon dioxide (R744) refrigerants are used in modern vending machines , refrigerated supermarket display cases, ice-skating rinks , and other emerging applications. [ 26 ] [ 27 ] Microgroove's smaller diameter copper tubes have the strength to withstand the very high gas cooler and burst pressures of R744 while allowing for lower overall refrigerant volumes. [ 28 ]
Propane ( R290 ) is an eco-friendly refrigerant with outstanding thermodynamic properties. [ 29 ] [ 26 ] The pressure requirements for R290 are much less than for carbon dioxide, but R290 is extremely flammable. Research has demonstrated that MicroGroove is suitable for R290-charged room air conditioners because the refrigerant charge requirement is dramatically reduced with smaller diameter copper tubes. The risk of tube explosions is dramatically reduced as well. [ 30 ] [ 31 ] Research conducted with propane in MicroGroove has implications for heat exchanger coils used in refrigerators , heat pumps and commercial air conditioning systems. [ 32 ]
In a design study of functionally equivalent 5 kW HVAC heat exchangers, tube materials in the coils weighed 3.09 kg for 9.52 mm diameter tube, 2.12 kg for 7 mm diameter tube, and 1.67 kg for 5 mm diameter tube. Tube weight was reduced by 31% when copper tube diameters were downsized from 3/8 inch to 7 mm, and by 46% when downsized from 3/8 inch to 5 mm. The weights of the fin materials in the coils were 3.55 kg for the 9.52 mm coils, 2.61 kg for the 7 mm coils, and 1.55 kg for the 5 mm coils. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 33 ]
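The percentage reductions quoted above follow directly from the listed tube masses; a quick check in Python, using only the figures from the cited study:

    tube_mass_kg = {"9.52 mm": 3.09, "7 mm": 2.12, "5 mm": 1.67}   # tube material per coil
    baseline = tube_mass_kg["9.52 mm"]                              # 3/8 inch reference coil
    for size in ("7 mm", "5 mm"):
        reduction = (baseline - tube_mass_kg[size]) / baseline * 100.0
        print(f"{size}: {reduction:.0f}% less tube material than 3/8 inch")
    # prints 31% for 7 mm and 46% for 5 mm, matching the percentages in the text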
Copper is an antimicrobial material. Biological buildup can be reduced with copper coils, which helps to maintain high levels of energy efficiency for longer periods of time and avoids energy-efficiency drop-off over time.
The use of copper coils to inhibit the growth of fungi and bacteria is a recent development in innovative air conditioning and refrigeration products. OEM companies, such as Chigo in China and Hydronic in France, are now manufacturing all-copper antimicrobial air conditioning systems to improve indoor air quality. [ 24 ]
Smaller diameter refrigerant paths can also be realized with extruded aluminium tubes. These have been designed with several microchannels in one flat, ribbon-like tube. Aluminium microchannel technology offers significant advantages over conventional copper-aluminium round-tube plate-fin coils, including improved heat transfer performance and reduced refrigerant charge. [ 34 ] However, copper MicroGroove offers higher heat transfer efficiencies than aluminium microchannel tubes, and it enables smaller refrigerant volumes because the tube ends of MicroGroove coils are connected by small U-joints rather than large headers. [ 35 ]
Copper tubes are often produced by a cast and roll process. Copper ingots are cast into mother tubes and these tubes are then drawn to a final shape, annealed , and enhanced with an inner surface texture to improve heat transfer performance. The production of small diameter copper tubes requires only the addition of one or two additional drawing passes to achieve 5 mm tube diameters. [ 36 ] [ 37 ]
Existing air conditioner coils made of round copper tubes and aluminium fins (CTAF coils) typically are mechanically assembled using tube expansion. [ 37 ] [ 25 ]
The equipment used in manufacturing Microgroove products expands the tubes circumferentially (i.e., the circumference of the tube is increased without changing the length). This "non-shrinkage" expansion allows for better control of tube lengths in preparation for subsequent assembly operations. Tubes are inserted, or laced, into the holes in a stack of precisely spaced fins. Expanders are inserted into the tubes and the tube diameters are increased slightly until mechanical contact is achieved between the tubes and fins. The high ductility of copper allows for this process to be performed accurately and precisely. Heat exchanger coils made in this manner have excellent durability and heat transfer properties. [ 38 ] [ 39 ]
The small-diameter tube project in China involves manufacturers who together account for more than 80 percent of HVAC production of approximately 75 million units. Several OEMs in North America are marketing residential air-conditioner products with copper tubes. [ 25 ] Air-conditioner OEMs, including Guangdong Chigo Air Conditioning, [ 40 ] the Refrigeration Research Institute of Guangdong Midea Refrigeration Appliances Group, [ 41 ] and Shanghai Golden Dragon Refrigeration Technology Co., Ltd. [ 42 ] have described the benefits of small-diameter copper tubes versus the standard for various designs and diameters. ACR coils from original equipment manufacturers (OEMs) Gree, Haier, Midea, Chigo and HiSense Kelon are also available. [ 43 ] | https://en.wikipedia.org/wiki/Internally_grooved_copper_tube |
An internalnet is a computer network composed of devices inside and on the human body . Such a system could be used to link nanochondria , bionic implants, wearable computers , and other devices.
| https://en.wikipedia.org/wiki/Internalnet
The International Academy for Production Engineering (CIRP) is a professional body for research into production engineering . The acronym CIRP comes from the organisation's French name, Collège International pour la Recherche en Productique. [ 1 ]
CIRP was founded in 1951 as the International Institution for Production Engineering Research. [ 2 ] [ 3 ]
CIRP uses different platforms for scientific exchange. One of these is the "CIRP Global Web Conference on Production Engineering Research", launched in 2011. [ 4 ] The CSI of Naples University , through Ing. Fabrizio Pietrafesa, provided technical support for multimedia content in 2014.
| https://en.wikipedia.org/wiki/International_Academy_for_Production_Engineering
The International Academy of Mathematical Chemistry (IAMC) was founded in Dubrovnik , Croatia , in 2005 by Milan Randić . [ 1 ] It is an organization devoted to the interface of chemistry and mathematics; its predecessors have been around since the 1930s. [ 1 ] There are 88 Academy members (as of 2011) from around the world (27 countries), including six scientists awarded the Nobel Prize . [ 1 ]
| https://en.wikipedia.org/wiki/International_Academy_of_Mathematical_Chemistry
The International Academy of Quantum Molecular Science ( IAQMS ) is an international scientific learned society covering all applications of quantum theory to chemistry and chemical physics . It was created in Menton in 1967. The founding members were Raymond Daudel , Per-Olov Löwdin , Robert G. Parr , John Pople and Bernard Pullman . Its foundation was supported by Louis de Broglie . [ 1 ]
Originally, the academy had 25 regular members under 65 years of age. This was later raised to 30, and then to 35. There is no limit on the number of members over 65 years of age. The members are "chosen among the scientists of all countries who have distinguished themselves by the value of their scientific work, their role of pioneer or leader of a school in the broad field of quantum chemistry , i.e. the application of quantum mechanics to the study of molecules and macromolecules ". [ 2 ] As of 2006, the academy consisted of 90 members. The academy organizes the International Congress of Quantum Chemistry every three years.
The academy awards a medal to a young member of the scientific community who has distinguished themselves by a pioneering and important contribution. The award has been made every year since 1967. [ 3 ]
Presidents and vice-presidents of the academy since its inception: [ 2 ] | https://en.wikipedia.org/wiki/International_Academy_of_Quantum_Molecular_Science |
The International Academy of Wood Science (IAWS) [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] is an international academy and a non-profit assembly of wood scientists, recognizing all fields of wood science with their associated technological domains and securing a worldwide representation. [ 9 ]
Since June 2023, the academy has been represented by Dr. Stavros Avramidis , a Greek-Canadian professor and wood scientist who serves as the 19th President of the IAWS, [ 10 ] and by Dr. Ingo Burgert, a Swiss wood engineer who is the elected vice-president for the period 2023–2026. [ 11 ]
The academy was first established on June 2, 1966, at the Centre Technique du Bois in Paris .
The development and establishment of the International Academy of Wood Science involved many people, but the individual who decided to found a wood academy was Professor Franz Gustav Kollmann, who had studied in the Wood Research (German: Holzforschung ) department at the Technical University of Munich , Germany, and was then working in industry. [ 12 ] He was also the first elected President of the academy in the years 1966–1972.
Since 1967, the official scientific journal of the IAWS has been the journal Wood Science and Technology . [ 13 ]
The academy has the objective of promoting, at the international level, the concerted development of wood science and its standing, by recognizing meritorious wood scientists through their election as Fellows, thereby honouring distinguished achievements in the science of wood, and by promoting a high standard of research and publication. In addition, the academy holds annual plenary meetings, including business meetings and technical sessions, in the form of international scientific conferences. [ 14 ]
Fellows of the IAWS are wood scientists who are elected as figures actively engaged in wood research in the broadest sense, their election being evidence of high scientific standards. New Fellows are nominated and evaluated by Fellows. The executive committee determines the number of nominees to be accepted as Fellows each year, based on those evaluations.
The tasks of the Fellows of the IAWS are to:
The executive committee of the IAWS consists of the following officers: [ 15 ] | https://en.wikipedia.org/wiki/International_Academy_of_Wood_Science |
The International Association for Bridge and Structural Engineering (IABSE) [ 1 ] is a non-profit organisation whose mission is to promote the exchange of knowledge and to advance the practice of structural engineering worldwide in the service of the profession and society, taking into consideration technical, economic, environmental, aesthetic and social aspects.
IABSE deals with all kinds of structures, composed of any kind of material, and with all phases of the construction process, as well as education and research . The association's name in French is "Association Internationale des Ponts et Charpentes (AIPC)" and in German "Internationale Vereinigung für Brückenbau und Hochbau (IVBH)". It was founded in 1929 and has its seat in Zurich . IABSE publishes the quarterly journal Structural Engineering International (SEI), available online via Ingenta .
The Outstanding Structure Award has been presented annually since 2000. It recognises the most remarkable, innovative, creative, or otherwise stimulating structures completed within the last few years. As a project award, it recognizes the team effort of the engineer, the architect, the contractor, and the owner involved in completion of the project. [ 2 ]
The Anton Tedesko Medal is awarded by the IABSE Foundation to honour a Laureate and support a Fellow of the association for study leave for a promising young engineer to gain practical experience in a prestigious engineering firm, outside his/her home country. [ 3 ] | https://en.wikipedia.org/wiki/International_Association_for_Bridge_and_Structural_Engineering |
The International Association for Cereal Science and Technology (ICC) was founded in 1955 and was originally called the International Association for Cereal Chemistry . It was set up to develop international standard testing procedures for cereals and flour . It currently has more than fifty member countries and is headquartered in Vienna , Austria . The ICC celebrated its 50th anniversary in 2005.
The International Association for Cereal Science and Technology (ICC) was founded in 1955 in Hamburg , Germany, and was originally named the International Association for Cereal Chemistry. [ 1 ] The founding took place at the third International Bread Congress. The impetus was to develop "internationally approved and accepted standard testing procedures for cereals and flour". [ 1 ] Leading scientists involved in the establishment of the association included Dr. Friedrich Schweitzer (Austria), Dr. Fuchs and Dr. Paul F. Pelshenke (Germany), Dr. Hintzer (The Netherlands), Prof. Maes (Belgium), Prof. Buré (France), and Dr. Widhe (Sweden). The United States and Canada were represented by Dr. Shellenberger, Dr. Zeleny, and Dr. Andersen. [ 1 ] Schweitzer was the first president of the ICC. [ 1 ]
In 1978, the name of the association was changed to the International Association for Cereal Science and Technology, as members felt it better reflected the association's scope. [ 1 ] The ICC celebrated its 50th anniversary in 2005. [ 1 ] [ 2 ] Its headquarters are in Vienna. [ 1 ]
The ICC holds an international congress, the Cereals and Bread Congress, as well as local meetings and symposia around the world. Friedrich Schweitzer organised the first international meeting of the association, which was held in Vienna on 5–8 December 1956, and a second ten years later, also in Vienna. [ 1 ]
The ICC confers a number of awards and honours significant contributions through election as a Fellow of the ICC Academy . The foremost medal is the Clyde H. Bailey Medal , which was initiated in 1969 and is awarded for "outstanding achievements in cereals science and technology". [ 1 ] The Friedrich Schweitzer Medal , created in 1989, is awarded for distinguished service to the ICC. [ 1 ] | https://en.wikipedia.org/wiki/International_Association_for_Cereal_Science_and_Technology
The International Association for Engineering and Food (IAEF) is a global body of around 25 delegates representing professional engineering societies whose activities include food engineering. The organization identifies the sites for ICEF events. ICEF, the International Congress on Engineering and Food , is a congress in the field of food engineering . It is usually held on a four-year cycle at different locations. [ citation needed ] | https://en.wikipedia.org/wiki/International_Association_for_Engineering_and_Food
The International Association for Hydro-Environment Engineering and Research ( IAHR ), founded in 1935, is a worldwide, non-profit, independent organisation of engineers and water specialists working in fields related to the hydro-environment and in particular with reference to hydraulics and its practical application. IAHR was called the International Association of Hydraulic Engineering and Research until 2009.
Activities range from river and maritime hydraulics to water resources development, flood risk management and eco-hydraulics, through to ice engineering , hydroinformatics and continuing education and training. IAHR stimulates and promotes both research and its application, and by so doing strives to contribute to sustainable development, the optimisation of world water resources management and industrial flow processes. IAHR accomplishes its goals by a wide variety of member activities including: working groups, research agenda, congresses, specialty conferences, workshops and short courses; Journals, Monographs and Proceedings; by collaborating with international organisations such as UN Water , UNESCO , WMO , IDNDR , GWP , ICSU ; and by co-operation with other water-related national and international organisations.
IAHR publishes several international scientific journals in collaboration with Taylor & Francis and Elsevier – the Journal of Hydraulic Research , the International Journal of River Basin Management , the International Journal of Applied Water Engineering and Research , the Revista Iberoamericana del Agua (RIBAGUA) jointly with the World Council of Civil Engineers (WCCE), the Journal of Ecohydraulics and the Journal of Hydro-Environment Research with the Korean Water Resources Association. It also publishes Hydrolink , a quarterly magazine that is now free access.
The activities of IAHR are carried out by two full-time professional secretariats with offices in Madrid , Spain, which is hosted by the consortium Spain Water ( CEDEX , Direccion General del Agua , Direccion General de Costas, MAPAMA, Spain ), and in Beijing , China, hosted by IWHR . [ 1 ] [ 2 ]
The governing body of the association is a council elected by member ballot every two years. The current president is Prof. Joseph Hun-wei Lee (Hong Kong, China). The current vice-presidents are: Prof. Silke Wieprecht (Germany), Dr. Robert Ettema (USA), and Prof. Hyoseop Woo (South Korea). Dr. Ramon Gutierrez-Serret and Dr. Peng Jing are secretaries general.
IAHR is a Scientific Associate of the International Council for Science (ICSU) and is a partner organisation of UN-Water .
The IAHR World Congress is one of the most important activities of the International Association for Hydro-Environment Engineering and Research (IAHR) which typically attracts between 800 and 1500 participants from around the world. The 2022 IAHR World Congress, under the overall theme "From Snow to Sea", took place in Granada, Spain.
IAHR publishes the Journal of Hydraulic Research [ 3 ] in partnership with Taylor & Francis .
IAHR publishes the International Journal of River Basin Management together with the International Association of Hydrological Sciences and INBO and in partnership with Taylor & Francis . [ 4 ]
IAHR publishes the International Journal of Applied Water Engineering and Research together with the World Council of Civil Engineers and in partnership with Taylor & Francis . [ 5 ]
The IAHR Asia Pacific Division publishes the Journal of Hydro-Environment Research [ 6 ] in collaboration with the Korean Water Resources Association (KWRA) and Elsevier.
The IAHR Latin America Division publishes the Revista Iberoamericana del Agua [ 7 ] in collaboration with the World Council of Civil Engineers (WCCE) | https://en.wikipedia.org/wiki/International_Association_for_Hydro-Environment_Engineering_and_Research |
The International Association for Life-Cycle Civil Engineering ( IALCCE ) is an international organization founded in October 2006. Its declared mission is " to be the premier international organization for the advancement of the state-of-the-art in the field of life-cycle civil engineering ".
The activities of the IALCCE cover all aspects of life-cycle assessment, design, maintenance, rehabilitation and monitoring of civil engineering systems. Eight international symposia have been organized since the foundation of IALCCE. The inaugural IALCCE Symposium was held in Varenna, Lake Como, Italy, in June 2008, under the auspices of Politecnico di Milano. Following IALCCE 2008, a series of symposia have been organized in Taipei, Taiwan (IALCCE 2010), Vienna, Austria (IALCCE 2012), Tokyo, Japan (IALCCE 2014), Delft, Netherlands (IALCCE 2016), Ghent, Belgium (IALCCE 2018), Shanghai, China (IALCCE 2020), and Milan, Italy (IALCCE 2023). These events have been very successful, both technically and academically, and the IALCCE Symposia have become established events in the field of life-cycle civil engineering and related topics. With IALCCE 2023 returning to Italy, the 15th anniversary of the IALCCE Symposia was celebrated in the country where these events were initiated. The Ninth International Symposium on Life-Cycle Civil Engineering (IALCCE 2025) will be held in Melbourne, Australia, on July 15–19, 2025.
The outcomes of the IALCCE International Symposia are collected in a Book Series "Life-Cycle of Civil Engineering Systems", published by CRC Press, Taylor & Francis Group. The proceedings published in this Series will serve as a valuable reference to all concerned with life-cycle performance of civil engineering systems.
Extended versions of selected papers included in the proceedings of the IALCCE Symposia have been considered for publication in special issues of Structure and Infrastructure Engineering, an international peer-reviewed journal included in the ISI Science Citation Index and endorsed by IALCCE. Seven special issues have been published, with 80+ journal papers (about 5% of the papers presented at IALCCE Symposia) and 1000+ pages. A special issue dedicated to IALCCE 2023 is currently in progress. | https://en.wikipedia.org/wiki/International_Association_for_Life_Cycle_Civil_Engineering
The International Association for Sports Surface Sciences (ISSS) is the union of labs and experts in the field of sports surfaces. [ 1 ] It was founded in 1985 in Switzerland . Its aims are the exchange of information and ideas regarding testing sports surfaces such as sports hall floors, synthetic surfaces of athletic tracks and artificial turf surfaces. [ 2 ] Members are located all over the world.
The ISSS is officially related to the IAAF . The ISSS has a board of directors consisting of Hans J. Kolitzus (CH/D), Vic Watson (GB), Ties Joosten (NL) and Alastair Cox (GB). The head office is located in Switzerland. The ISSS organizes regular Technical Meetings with experts from the industry to discuss issues of common interest. | https://en.wikipedia.org/wiki/International_Association_for_Sports_Surface_Sciences
The International Association for the Advancement of Space Safety (IAASS) is a non-profit organization committed to furthering international cooperation and scientific advancement in space systems safety. Its aim is to advance the science and application of space safety. IAASS was legally established on April 16, 2004 in the Netherlands . It became a member of the International Astronautical Federation (IAF) in October 2004. [ 1 ] The IAASS is based on the intellectual interaction of individual members who together shape the technical vision of the association, and make the association services available to stakeholders (on a non-profit basis).
In June 2006, former US Senator John Glenn , the first American to orbit the Earth, became an Honorary Member. In June 2010, IAASS was granted Observer status at the United Nations COPUOS ( Committee on the Peaceful Uses of Outer Space ).
The association counts more than 200 professional members from 25 countries, 55% of the members are from industry, while the remaining 45% come from space agencies, governmental institutions and academia.
A 2005 report by the Space and Advanced Communications Research Institute (SACRI) of George Washington University , titled Space Safety Report: Vulnerabilities and Risk Reduction In U.S. Human Space Flight Programs , suggested that the newly formed IAASS might help improve the safety of the International Space Station (ISS). It also recommended that NASA work with the IAASS to develop safety standards and to advance space debris minimization and control. [ 2 ]
The first IAASS conference was held in Nice, France in October 2005. The European Space Agency sponsored the second IAASS Conference "Space Safety in a Global World" in May 2007 in Chicago. [ 3 ] The third conference was held 21–23 October 2008 in Rome, Italy. The fourth conference was held 19–21 May in Huntsville, USA. The fifth conference was held 17–19 October 2011 in Versailles, France, and the 6th conference 21–23 May 2013 in Montreal, Canada.
Space safety is defined as freedom from man-made or natural harmful conditions. Harmful conditions are defined as those conditions that can cause death, injury, illness, damage to or loss of systems, facilities, equipment or property, or damage to the environment.
This definition of space safety includes humans on board, personnel directly involved in system integration and operation, personnel not directly involved but co-located, as well as the general public. For unmanned systems such as robotic satellites, damage due to non-malicious external causes that translates into degradation or loss of mission objectives is also included in the definition of safety, for example the unwanted collision of a satellite with another satellite or with space debris. Fig. 1 shows the various fields of space safety, their national, international or global scope of interest, and the principal means for achieving safety (by design or by operations), although a mixture is generally used.
Whereas safety refers to threats that are non-voluntary in nature (design errors, malfunctions, human errors, etc.), security refers to threats that are voluntary, i.e. of an aggressive nature, such as the use of anti-satellite weapons. In some languages a single term is used for both, which may sometimes be confusing.
Absolute freedom from harmful conditions is impossible to achieve. To be absolutely safe a system, product, device, material or environment should never cause or have the potential to cause an accident. In the realization and operation of systems the term safety is generally used to mean acceptable risk level, not absolute safety.
Acceptable risk level is not the same as personal acceptance of risk; it refers to risk acceptability by the stakeholders' community or by society in a broad sense. Acceptable risk levels vary from system to system, and evolve with time due to socio-economic changes and technological advancement. Implementing proven best practices at the state of the art is a prerequisite for achieving an acceptable risk level, or in other words for making a system safe. Best practices are traditionally established by government regulations and norms, and/or by industrial standards. Without such a reference the term safety, or acceptable risk, becomes meaningless. In other words, compliance with regulations, norms and standards represents the safety yardstick of a system.
The Journal of Space Safety Engineering ( JSSE ) is a quarterly publication of the International Association for the Advancement of Space Safety (IAASS). JSSE serves applied scientists, engineers, policy makers and safety advocates with a platform to develop, promote and coordinate the science, technology and practice of space safety.
The journal has a distinguished Editorial Board with extensive qualifications, ensuring that the journal maintains high scientific and technical standards and has a broad international coverage.
The Space Safety Magazine ( SSM ) is a quarterly print magazine and a daily news website, jointly published by the International Association for Advancement of Space Safety (IAASS) and the International Space Safety Foundation (ISSF). Space Safety Magazine [ 4 ] is focused on safety related issues affecting space as well as safety on Earth from space events and objects. | https://en.wikipedia.org/wiki/International_Association_for_the_Advancement_of_Space_Safety |
The International Association for the Study of Dreams ( IASD ) is a multi-disciplinary [ 1 ] professional nonprofit organization for scientific dream research ( oneirology ), [ 2 ] [ 3 ] founded in 1983 [ 4 ] and headquartered in the U.S. [ 5 ] [ 6 ]
The organization was originally named the Association for the Study of Dreams ( ASD ). [ nb 1 ] [ 5 ] [ 7 ]
Attracting "a 'rainbow coalition' of scientists, scholars, therapists, cultural practitioners, artists, and the general public", [ 8 ] the organization publishes scientific research across all dream -related subjects, including dreams in analytical psychology , oneirology , dreamwork , oneiromancy , and lucid dreaming via its:
Writing in 1989, psychology professor Harry T. Hunt states that "on an organizational level, the Sleep Research Society (srs) and its small cluster of researchers focusing on physiological, neurocognitive, and content analysis approaches to dreams have been supplemented by a more eclectic organization, the Association for the Study of Dreams (asp) [sic]. Within ASD, a diverse group of Freudian, Jungian, existential, and other psychologists interested primarily in dream interpretation and 'dreamwork' has banded together with others attempting to relate dreams to altered states of consciousness and transpersonal psychology, and a small number of srs experimenters." [ 18 ]
Writing more recently, in 2017, historian and academic Jonson Miller states that "[t]he IASD is a scholarly association for the study of dreams, including dream interpretation, dreams in culture, creativity and dreams, the physiology of dreaming, and lucid dreaming. They publish two magazines and a newsletter, hold conferences (both traditional and online), and provide classes on dream work. Their website has many useful resources, including bibliographies, videos, podcasts, recordings from past conferences, and even images from dream art exhibitions." [ 19 ]
The nonprofit has historically been led by the following researchers: [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/International_Association_for_the_Study_of_Dreams |