Tuft cells are chemosensory cells in the epithelial lining of the intestines. Similar tufted cells are found in the respiratory epithelium, where they are known as brush cells.[1] The name "tuft" refers to the brush-like microvilli projecting from the cells. Ordinarily very few tuft cells are present, but their numbers have been shown to increase greatly during parasitic infection.[2] Several studies have proposed a role for tuft cells in defense against parasitic infection. In the intestine, tuft cells are the sole source of secreted interleukin 25 (IL-25).[3][4][5] ATOH1 is required for tuft cell specification but not for maintenance of a mature differentiated state, and knockdown of Notch results in increased numbers of tuft cells.[5] The human gastrointestinal (GI) tract contains tuft cells along its entire length, located between the crypts and villi. All of these cells express DCLK1 at their basal pole. They do not have the same morphology as described in animal studies, but they show an apical brush border of the same thickness. Colocalization of synaptophysin and DCLK1 has been found in the duodenum, suggesting that these cells play a neuroendocrine role in this region. A specific marker of intestinal tuft cells is the microtubule-associated kinase doublecortin-like kinase 1 (DCLK1). Tuft cells positive for this kinase are important in gastrointestinal chemosensation and inflammation, and can contribute to repair after injuries in the intestine.[6] One key to understanding the role of tuft cells is that they share many characteristics with chemosensory cells in taste buds: for instance, they express many taste receptors and much of the taste-signaling apparatus. This suggests that tuft cells could function as chemoreceptive cells that sense chemical signals around them. Newer research indicates that tuft cells are indeed activated through the taste receptor apparatus, and that they can also be triggered by various small molecules, such as succinate and aeroallergens. Tuft cells are known to secrete various molecules that are important for biological functions. They thus act as danger sensors that trigger the secretion of biologically active mediators, although the signals they respond to and the mediators they secrete are wholly dependent on context. For example, tuft cells in the urethra respond to bitter compounds through activation of the taste receptor. This results in a rise in intracellular Ca2+ and the release of acetylcholine, which is thought to activate various other cells in the proximity, leading to the bladder detrusor reflex and greater emptying of the bladder.[7] Tuft cells in the intestines of mice are activated by parasitic infections, leading to secretion of IL-25, the key activator of type 2 innate lymphoid cells. This initiates and amplifies the type 2 cytokine response, characterized by secretion of cytokines from ILC2 cells.[7] Tissue remodeling during the type 2 immune response depends on the cytokine interleukin (IL)-13. This interleukin is produced mainly by group 2 innate lymphoid cells (ILC2s) and type 2 helper T cells (Th2 cells) located in the lamina propria. During worm infection the number of tuft cells also rises dramatically: hyperplasia of tuft cells and goblet cells is a hallmark of type 2 infection and is regulated by a feed-forward signalling circuit.
IL-25 produced by tuft cells induces IL-13 production by ILC2s in the lamina propria. IL-13 then interacts with uncommitted epithelial progenitors to bias their lineage selection toward goblet and tuft cells. As a result, IL-13 is responsible for a dramatic remodeling of the enterocyte-dominated epithelium into an epithelium dominated by tuft and goblet cells. Without IL-25 from tuft cells, worm clearance is delayed. The type 2 immune response depends on tuft cells and is severely reduced in their absence, confirming an important physiological function for these cells during worm infection.[8] Activation of Th2 cells is an important part of this feed-forward loop. Activation of tuft cells in the intestine is linked to the metabolite succinate, which is produced by parasites and binds to the specific receptor Sucnr1 on the tuft cell surface. The role of intestinal tuft cells may also be important for local regeneration in the intestine after an infection.[7] Tuft cells were identified for the first time, by electron microscopy, in the trachea and gastrointestinal tract of rodents, owing to their distinctive morphology. The characteristic tubulovesicular system and apical bundle of microfilaments, connected to a tuft of long and thick microvilli reaching into the lumen, gave these cells their name and defines the tufted morphology as a whole.[1] The distribution and size of tuft cell microvilli are very different from those of neighbouring enterocytes. Tuft cells, in comparison with enterocytes, also lack a terminal web at the base of the apical microvilli.[9] Other characteristics of tuft cells include: a quite narrow apical membrane, which makes tuft cells appear pinched at the top; prominent actin microfilaments that extend into the cell and end just above the nucleus; numerous but largely empty apical vesicles forming a tubulovesicular network; a Golgi apparatus on the apical side of the nucleus; a deficiency of rough endoplasmic reticulum; and desmosomes with tight junctions that anchor tuft cells to their neighbours.[8] The shape of the tuft cell body varies and depends on the organ. Tuft cells in the intestine are cylindrical and narrow at the apical and basal ends; alveolar tuft cells are flatter in comparison with intestinal ones, and gall bladder tuft cells have a cuboidal shape. These differences may reflect organ-specific functions. Tuft cells express chemosensory proteins, such as TRPM5 and α-gustducin; these proteins suggest that neighbouring neurons can innervate tuft cells.[9] Tuft cells can be identified by staining for cytokeratin 18, neurofilaments, actin filaments, acetylated tubulin, and DCLK1, which differentiates tuft cells from enterocytes.[5] Tuft cells are found in the intestine and stomach, and as pulmonary brush cells in the respiratory tract, from nose to alveoli.[10] A loss of tolerance to environmental antigens causes inflammatory bowel disease (IBD) and Crohn's disease (CD) in genetically susceptible people. Helminth colonization induces a type 2 immune response, causes mucosal healing, and can achieve clinical remission. During an intense infection, tuft cells can drive their own specification, and tuft cell hyperplasia is a key response leading to expulsion of the worm. This suggests that modulation of tuft cell function may be effective in the treatment of Crohn's disease.
[11] Tuft cells have been shown to use taste receptors in the detection of many different helminth species. The clearance of helminths in mice that lacked taste receptor function (Trpm5 or α-gustducin knockouts) or sufficient tuft cells (Pou2f3 knockout) was impaired compared to that of wild-type mice, showing that tuft cells play a protective role during helminth infections. IL-25 derived from tuft cells was observed to mediate the protective response, initiating type 2 immune responses.[12] Tuft cells were first discovered in the trachea of the rat and in the mouse stomach.[5] In the late 1920s, Dr. Chlopkov was working on a project on the developmental stages of goblet cells in the intestines. In the microscope he found a cell with a bundle of unusually long microvilli rising into the intestinal lumen. He thought he had found an early-stage intestinal goblet cell, but it was actually the first report of a new epithelial lineage that we now call the tuft cell. In 1956, two scientists, Rhodin and Dalhamn, described tuft cells in the rat trachea; later the same year, Järvi and Keyriläinen found similar cells in the mouse stomach.[8] Tuft cells are generally located in the columnar epithelium of organs derived from endoderm. In rodents, they have been definitively found, for example, in the trachea, thymus, glandular stomach, gall bladder, small intestine, colon, auditory tube, pancreatic duct and urethra. Tuft cells are mostly isolated cells and make up <1% of the epithelium. In the mouse gall bladder and the rat bile and pancreatic ducts, tuft cells are more abundant but still isolated.[8]
https://en.wikipedia.org/wiki/Tuft_cell
Tufting is a type of textile manufacturing in which a thread is inserted on a primary base. It is an ancient technique for making warm garments, especially mittens. After the knitting is done, short U-shaped loops of extra yarn are introduced through the fabric from the outside so that their ends point inwards (e.g., towards the hand inside the mitten). Usually, the tuft yarns form a regular array of "dots" on the outside, sometimes in a contrasting color (e.g., white on red). On the inside, the tuft yarns may be tied for security, although they need not be. The ends of the tuft yarns are then frayed so that they will subsequently felt, creating a dense, insulating layer within the knitted garment. Machine tufting was first developed by carpet manufacturers in Dalton, Georgia.[1] A tufted piece is completed in three steps: tufting, gluing, then backing and finishing. When tufting, the work is completed from the back side of the finished piece. A loop-pile machine sends yarn through the primary backing and leaves the loops uncut; a cut-pile machine produces plush or shaggy carpet by cutting the yarn as it comes through to the front of the piece.[1] Tufted rugs can be made with coloured yarn to create a design, or plain yarn can be tufted and then dyed in a separate process.[1] A tufting gun is a tool commonly used to automate the tufting process, particularly in rug making. The yarn is fed through a hollow needle that penetrates the stretched cloth backing to an adjustable depth.[2] Tufting guns can usually create two types of rugs: cut pile or loop pile. A cut pile rug's yarn is snipped as each loop is formed, leaving "U" shapes in the backing when viewed in side profile, while a loop pile rug is not snipped and forms a continuous "M" or "W".[3] Tufting guns are useful for both mass production and home use due to their flexibility in scale and color variation. Tufting requires the use of specialised primary backing fabric, which is often composed of woven polypropylene.[4] Primary backing fabric is produced with a range of densities and weaving styles, allowing for use with different gauges of needles.[4] Primary backing fabric must be stretched tightly on the frame so that it is stable enough to withstand the pressure of the tufting gun and taut enough for the yarn to be held in place.[5] Tufting frames are generally constructed of wood, with carpet tacks or grippers around the edge to hold the primary backing fabric in place. Eye hooks are an important addition to a tufting frame; they are used as yarn feeders and keep the tension consistent. The frame must be sturdy and can be either freestanding or clamped to a table top. It is important to keep pressure and speed consistent when tufting so that the amount of yarn per square inch of fabric is consistent.[5] Any mistakes in the design can be corrected throughout the tufting process by simply pulling out yarn strands from the primary backing fabric and re-tufting the area.[5] Designs can be drawn directly onto the primary backing fabric, either freehand or with the aid of a projector. After tufting is completed, the tufted piece requires a coat of latex glue on the back to keep the tufts anchored in place. Latex glue is beneficial for tufted pieces as it provides flexibility and dimensional stability. The piece should remain stretched on the frame until the glue has finished drying, to avoid loss of shape and the possibility of mildew.
[5] A secondary backing layer is then applied, providing further dimensional stability and protection for the finished piece as well as improving its appearance.[5] A wide variety of materials can be used for the secondary backing fabric, depending on the intended use of the piece. Felt, canvas, drill and other harder-wearing materials can be used for floor rugs; backing fabric for wall hangings, however, need only be aesthetic, as it is only required to cover the glue layer and does not need to be hard-wearing. Wool is the traditional fibre used in pile tufting and is considered a high-quality material, especially for pieces designed to be used in high-traffic areas.[6] Wool can be spun into yarn by two systems, woollen or worsted. Worsted yarn is more favourable for tufting when the finished product will be used in high-traffic areas, as it produces a hard, flat surface that is tightly woven together. This is due to the tightly wound, fine yarn created in the worsted process. In comparison, woollen yarn used in tufting traps more air in the finished product and gives a bulkier finish.[4] Different yarn fibres can be used depending on the final use of the tufted object and the desired effect. Cotton and acrylic yarns are also commonly used, and decorative yarns may be used for wall hangings or other decorative tufting projects. Yarn should be wound onto cones before tufting to ensure it unwinds consistently and without tangles. Either a single strand or multiple strands of yarn can be used, depending on the thickness of the yarn and the gauge of the needle. There are two types of tufting guns: manual and electric. A tufting gun is a handheld machine in which yarn is fed through a needle and punched in rapid succession through a backing fabric, either with or without a cutting mechanism. Electric tufting guns can be cut-pile, loop-pile, or a combination of both, and are able to produce multiple pile heights.[5] A similar effect can be achieved with punch needle embroidery or rug hooking. The choice between a cut pile and a loop pile lies in the distinctive characteristics they offer. Cut pile tufting creates rugs with a loose, hairy texture, while loop pile tufting produces rugs with tight, connected loops, resulting in a trackless surface. Cut pile rugs are softer but require carpet glue for stability; loop pile rugs, in contrast, do not have to be glued.[7] After tufting, the pile can be sheared or cut using electric shearers or scissors to tidy and sculpt the yarn for the finished product. This can be done either before or after the latex glue is applied to the backing. This process also helps to remove any loose fibres that may have come to the surface during tufting.[4] The diverse set of equipment described above plays a crucial role in the rug-making process, with each tool serving a distinct purpose.[9] Tufting guns must be regularly cleaned and maintained to prevent damage. Regularly removing the excess yarn fluff that gathers around the needle and gears helps the mechanism move without excess friction. To avoid wear and ensure the mechanisms function smoothly, lubricating oil should be applied to the machine regularly.[10] Tufted rugs can be cleaned regularly with a vacuum to remove dirt; spills or stains should be spot-cleaned immediately.
[4] Tufting has seen a rise in popularity since 2018, when Tim Eads started an online community for tufting and made electric tufting guns easily accessible.[11][12] Tufting produces both practical and decorative pieces with many uses and effects. The short format of TikTok and Instagram reels lends itself well to the process of tufting, providing a platform for the textile artform to reach a wider audience.[13][14] The increase in popularity online has also seen a rise in copyrighted images being recreated without permission.[12][14] Recycling tufted pieces can be difficult, as they are typically made up of three layers, which can require additional energy to break down into their individual components.[4] Processed waste from tufting can be turned into many things, including cushion stuffing, concrete reinforcement, and modifiers in asphalt mixtures.[4] Tufted pieces, such as rugs or wall hangings, provide acoustic benefits, minimizing noise and absorbing airborne sounds.[6] They also provide thermal comfort when walked on with bare feet, and larger pieces provide insulation that may reduce heating costs.[6] Rugs or wall hangings made from wool fibres have been shown to improve air quality in indoor spaces.[6] Wool acts as a filter through which contaminants such as sulphur dioxide and nitrogen dioxide are absorbed.[6] Wool is also a highly absorbent fibre and can help manage humidity changes indoors.[6] Tufted carpets and rugs provide a safe surface to walk on, offering slip resistance and a more forgiving surface should objects be dropped or falls occur.[1] Wool carpets are also flame-resistant and hide soil and other dirt well.[1] Tufted pieces made from nylon yarn may suffer colour degradation over time if exposed to excess sunlight.[4]
https://en.wikipedia.org/wiki/Tufting
The Tufts Center for the Study of Drug Development is an independent, academic, non-profit research center at Tufts University in Boston, dedicated to researching drug development. It was established in 1976 by the American physician Louis Lasagna.[1] The Center develops and publishes information to help researchers, regulators, and policy makers in areas related to the pharmaceutical and biotechnology industries. In any given year, approximately 55% of Tufts CSDD's operating expenses are supported by grants from the private sector and 45% from the public sector.[2][3] The Center studies trends in the pharmaceutical industry, maintaining databases pertaining to investigational new drugs, approved drugs, biopharmaceuticals, fast-tracked drugs, and orphan drugs.[4] The Center provides this information with the aim of improving the efficiency of drug development, fostering innovation, and increasing patient access to medicines.[5] The center has published numerous studies estimating the cost of developing new pharmaceutical drugs. In 2001, researchers from the Center estimated this cost at $802 million,[6] and in 2014, they released a study estimating that it had risen to nearly $2.6 billion.[3] The 2014 study was criticized by Médecins Sans Frontières, which said it was unreliable because the industry's research and development spending is not made public.[7] Aaron Carroll of the New York Times also criticized the study, saying it "contains a lot of assumptions that tend to favor the pharmaceutical industry."[8] The center's 2016 estimate, published in the Journal of Health Economics, found the cost to have averaged $2.87 billion (in 2013 dollars).[9]
https://en.wikipedia.org/wiki/Tufts_Center_for_the_Study_of_Drug_Development
The Tulip System I is a 16-bit personal computer based on the Intel 8086 and made by Tulip Computers, formerly an import company for the Exidy Sorcerer called Compudata Systems.[1][2][3] The Tulip System I is built around the Intel 8086 microprocessor with a 16-bit architecture, running at 8 MHz, almost twice the speed of the IBM PC XT, which had been launched only a few months earlier, in July 1983.[2] The standard configuration includes 128 KB of RAM, expandable to 896 KB (much more than the 640 KB of the original PC) in 128 KB increments.[1][4] Its Motorola 6845-based video display controller[4] could display 80 × 24 text in 8 different fonts with support for different languages, including a (Videotex-based) font with pseudo-graphic symbols for displaying 160 × 72 pixel graphics in text mode (each character cell providing a 2 × 3 block of mosaic pixels). The video display generator could also display graphics at 384 × 288 or 768 × 288 (color) or 768 × 576 (monochrome) pixel resolution using its built-in NEC 7220 video display coprocessor,[4] which had hardware-supported drawing functions: alongside an advanced set of bit-block transfers, it could generate lines, arcs, circles, ellipses, elliptical arcs, filled arcs, filled circles, filled ellipses, filled elliptical arcs and many other primitives. An Intel 8087 math coprocessor could be added,[4] which increased floating-point speed to more than 200 kFLOPS, approaching mainframe performance at the time. It included a SASI hard disk interface (a predecessor of the SCSI standard) and was optionally delivered with a 5 MB or 10 MB hard disk. The floppy disk capacity was 400 KB (10 sectors per track, instead of the 8 or 9 of the IBM PC) or 800 KB (80 tracks); the arithmetic behind these figures is sketched below. After initially using CP/M-86, it quickly switched to generic MS-DOS 2.00. There was a rudimentary IBM BIOS emulator, which allowed the user to run WordStar and a few other IBM PC programs, and Compudata B.V. shipped WordStar and some other software adapted for this computer. Programming support was provided by Compudata B.V. with MS-Basic, MS-Pascal and MS-Fortran. Privately, TeX and Turbo Pascal were also ported to the Tulip System I.
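One consistent reading of the quoted floppy capacities can be reconstructed arithmetically. The sketch below assumes the standard 512-byte MS-DOS sector size and double-sided disks; these are assumptions for illustration, not details stated in the article:

```python
# Reconstructing the quoted 400 KB / 800 KB floppy capacities.
# Assumptions (not stated in the article): 512-byte sectors, 2 sides,
# 10 sectors per track as described for the Tulip format.
def capacity_kb(tracks, sides=2, sectors_per_track=10, bytes_per_sector=512):
    return tracks * sides * sectors_per_track * bytes_per_sector // 1024

print(capacity_kb(tracks=40))  # 400 KB
print(capacity_kb(tracks=80))  # 800 KB (the "80 tracks" format)
```

Under these assumptions, the extra capacity over the IBM PC's formats comes from the 10-sectors-per-track layout, with the 800 KB format doubling the track count.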
https://en.wikipedia.org/wiki/Tulip_System-1
The Tumalo Irrigation Project was a privately funded corporation begun to provide water to late-19th-century farms in the area of Tumalo Creek, Oregon.[1] The Three Sisters Irrigation Company and its successors owned and managed the project under the provisions of the Carey Act. Controversy arose when corporate investors engaged in land speculation rather than irrigation construction. In 1912, during the administration of Governor Oswald West, the state of Oregon assumed control of the project. By 1913 the work was terminated and the work camp abandoned.[2] Winch, Martin. "Tumalo—Thirsty Land." Oregon Historical Quarterly 84, 85, 86, 1984–1986.
https://en.wikipedia.org/wiki/Tumalo_Irrigation_Project
A tumble flap is a flap housed in the intake tract of many modern automotive gasoline engines that produces a swirling charge motion at right angles to the cylinder axis. This swirling motion improves air-fuel mixing and enhances power and torque, while at the same time lowering fuel consumption and decreasing emissions.[1] The flaps can be actuated pneumatically or electrically, and the flap position can either be controlled continuously with a feedback controller or simply kept fully closed or fully open. Use of a tumble flap improves the lean-burn ability of a spark-ignition engine. The set point of the tumble flap is adjusted by an electrical or vacuum-activated servo mechanism under the control of the engine management system. Tumble flaps are opened or closed depending on engine operating state (engine speed and load), engine temperature, combustion mode (characterized by air-fuel ratio), and whether catalytic converter heating or cold start is active, among other factors. In gasoline direct injection, stratified charge mode is used for light-load running conditions, at constant or reducing road speeds, where no acceleration is required. In this charge mode, the air-fuel mixture is concentrated around the spark plug by means of the specifically produced air flow and a special piston geometry, while pure air is placed near the cylinder walls. Tumble flaps are used to realize this stratified charge and remain closed during stratified charge mode. A switchable tumble system is normally used to direct a targeted air flow: the so-called "tumble plate" divides the air inlet channel into an upper and a lower half, and an upstream flap routes the air flow either only over the upper half or over the entire cross-section.[2] At higher engine speeds and torques, the tumble flap is opened to achieve better cylinder filling. During this homogeneous mode of combustion, the engine functions like a conventional fuel-injection engine, but with higher efficiency due to the higher compression.[3] The tumble flaps are also actuated to improve cold engine idling, and during scavenging the flaps are opened in order to draw more fresh air into the cylinder. A simplified sketch of this mode-dependent actuation logic follows below.
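As an illustration of the mode-dependent actuation just described, here is a minimal sketch in Python. The function name, state names, and all numeric thresholds are hypothetical placeholders for readability, not values from any real engine management system, which would use calibrated maps rather than fixed numbers:

```python
# Simplified tumble-flap actuation logic (illustrative only).
# All thresholds below are invented; real ECUs use calibrated lookup maps.
def tumble_flap_position(rpm: int, load: float, coolant_temp_c: float,
                         charge_mode: str) -> str:
    """Return 'closed' (strong tumble) or 'open' (maximum cylinder filling)."""
    if charge_mode == "stratified":
        return "closed"        # concentrate the mixture around the spark plug
    if coolant_temp_c < 40 and rpm < 1200:
        return "closed"        # stabilize cold-engine idling
    if rpm > 4000 or load > 0.7:
        return "open"          # high speed/torque: prioritize cylinder filling
    return "closed"            # part load: keep tumble for lean burn

print(tumble_flap_position(rpm=900, load=0.1, coolant_temp_c=20,
                           charge_mode="homogeneous"))  # -> closed
```

The sketch captures the article's qualitative rules: closed for stratified charge and cold idle, open for high speed and load.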
https://en.wikipedia.org/wiki/Tumble_flap
Tumor-homing bacteria are facultative or obligate anaerobic bacteria (bacteria that can produce ATP in the absence of oxygen, or that are destroyed at normal oxygen levels) that are able to target cancerous cells in the body, suppress tumor growth, and survive in the body for a long time even after the infection. When bacteria of this type are administered into the body, they migrate to the cancerous tissue, begin to grow, and then deploy distinct mechanisms to destroy solid tumors; each species uses a different process to eliminate the tumor. Common tumor-homing bacteria include Salmonella, Clostridium, Bifidobacterium, Listeria, and Streptococcus.[1] The earliest research on this type of bacteria dates to 1813, when scientists began observing that patients who had gas gangrene, an infection caused by the bacterium Clostridium, could experience tumor regression.[2] Different strains of tumor-homing bacteria in distinct environments use unique or similar processes to inhibit or destroy tumor growth. Bacterial cancer therapy is an emerging field of cancer treatment: although many clinical trials are under way, only a few confirmed treatments are currently administered to patients. Tumor-homing bacteria can be genetically engineered to enhance their anti-tumor activity and to transport therapeutic materials based on medical needs.[7] The bacteria are usually transformed with a plasmid that carries the genes encoding the desired therapeutic proteins. After the bacteria reach the target site, these genes are expressed and the proteins can exert their full biological effect. Currently, there is no approved treatment with genetically engineered bacteria; however, research is being conducted on Listeria and Clostridium as vectors to transport RNAi (which suppresses genes) for colon cancer.[8] Some active tumor-homing bacteria can be harmful to the human body, since they produce toxins that disturb the cell cycle, resulting in altered cell growth and chronic infections. However, many ways to enhance the safety of tumor-homing bacteria in the body have been found. For example, when the virulence genes of the bacteria are removed by gene targeting, a process in which genes are deleted or modified, their pathogenicity[citation needed] (the property of causing disease) can be reduced. The most researched bacteria for cancer therapy are Salmonella, Listeria, and Clostridium. A genetically engineered strain of Salmonella (TAPET-CD) has completed phase 1 clinical trials in patients with stage 4 metastatic cancer.[11] Listeria-based cancer vaccines are currently being produced and are undergoing many clinical trials.[12] Phase I trials of the Clostridium strain Clostridium novyi (C. novyi-NT) in patients with treatment-refractory tumors (tumors that are unresponsive to treatment) are currently under way.[13]
https://en.wikipedia.org/wiki/Tumor-homing_bacteria
Tumor-informed minimal residual disease (MRD) testing, often abbreviated as tiMRD, is a highly sensitive, personalized approach for detecting and monitoring minimal residual disease (MRD) in cancer patients. It primarily analyzes circulating tumor DNA (ctDNA), small fragments of DNA shed from tumor cells into the blood plasma.[1][2][3] This method addresses the limitations of traditional cancer staging in identifying individuals with minimal residual disease after treatment, who are at high risk of relapse.[1] Tumor-informed assays are custom-built for each patient, typically by sequencing the patient's tumor tissue to identify its unique set of somatic mutations and then creating a personalized panel to track these specific markers in subsequent blood tests.[3][4] This personalized approach is primarily applied in solid tumors, including colorectal cancer, lung cancer, breast cancer, and bladder cancer, to assess recurrence risk, monitor treatment response, and potentially guide adjuvant therapy decisions.[5][1] tiMRD testing has many clinical applications in oncology care across the lifecycle of detection, treatment, monitoring, and prevention. Clinical validation studies have demonstrated high performance of tiMRD tests: in colorectal cancer surveillance, for instance, certain tiMRD assays show sensitivity for detecting recurrence of around 90% with serial testing, and specificity exceeding 90%.[7][2] The performance of a tiMRD test relies heavily on the design of a mutation panel relevant to the disease and/or the patient cohort. By tracking multiple (often dozens or hundreds of) confirmed somatic mutations known to originate from the patient's tumor, tiMRD assays can achieve high analytical sensitivity, allowing detection of the very low ctDNA levels typical in the MRD setting.[13][2] Specificity is enhanced because the assay targets variants confirmed absent in the patient's matched normal DNA, effectively filtering out background noise from non-tumor sources such as clonal hematopoiesis of indeterminate potential (CHIP).[2][4] Further, ctDNA analysis, even though based on an initial tumor sample, may provide a more comprehensive snapshot of overall tumor burden and heterogeneity than single-site tissue biopsies, as ctDNA is shed from various tumor sites.[14] The major challenge of tiMRD stems from the requirement for adequate quality and quantity of tumor sample from the initial diagnosis or surgery,[2] which may be unavailable or degraded, and of matched healthy tissue. This results in longer assay design times and higher costs, delaying the start of monitoring.[2][12] Further, detecting the extremely low fraction of ctDNA present in early-stage disease or post-treatment remains challenging, potentially leading to false negatives, especially if tumor shedding is inherently low.[15][11] While serial testing can improve sensitivity over single time points,[2] the mutational landscape of a tumor can evolve over time, reducing the assay's effectiveness through loss of mutations selected for the initial panel; targeting clonal (truncal) mutations can minimize this risk. Finally, there is significant heterogeneity between different commercial and laboratory-developed tiMRD assays regarding the number of genes sequenced, variants tracked, bioinformatics pipelines, and performance characteristics.
This lack of standardization complicates cross-study comparisons and widespread clinical adoption,[11][3] and requires harmonization efforts.[11] Several commercial and research-based tumor-informed MRD assays exist,[3] varying in their specific methodologies (e.g., the number of variants tracked and the sequencing technology used). As an illustration of the tumor-informed principle, a minimal panel-design sketch follows below.
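The following sketch illustrates the core tumor-informed idea described above: keep only variants confirmed in the tumor and absent from matched normal DNA, then prioritize likely clonal (truncal) variants for the personalized panel. The data model, function names, and panel size are hypothetical, not the method of any specific commercial assay:

```python
# Minimal sketch of tumor-informed panel selection (hypothetical data model).
from dataclasses import dataclass

@dataclass(frozen=True)
class Variant:
    chrom: str
    pos: int
    ref: str
    alt: str
    tumor_vaf: float  # variant allele fraction observed in tumor tissue

def design_panel(tumor_variants, normal_variants, panel_size=50):
    """Select top clonal somatic variants for a personalized MRD panel."""
    normal_keys = {(v.chrom, v.pos, v.ref, v.alt) for v in normal_variants}
    # Keep only variants absent from matched normal DNA: this filters out
    # germline variants and clonal hematopoiesis (CHIP) background.
    somatic = [v for v in tumor_variants
               if (v.chrom, v.pos, v.ref, v.alt) not in normal_keys]
    # Prefer high-VAF (likely clonal/truncal) variants, which are less
    # likely to be lost as the tumor evolves under treatment.
    somatic.sort(key=lambda v: v.tumor_vaf, reverse=True)
    return somatic[:panel_size]
```

Real assay design additionally weighs sequencing error profiles, primer or probe design constraints, and clonality estimates derived from tumor purity and copy number, none of which are modeled here.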
https://en.wikipedia.org/wiki/Tumor-informed_minimal_residual_disease
Tumor M2-PK is a synonym for the dimeric form of the pyruvate kinase isoenzyme type M2 (PKM2), a key enzyme within tumor metabolism. Tumor M2-PK can be elevated in many tumor types, rather than being an organ-specific tumor marker such as PSA. Increased stool (fecal) levels are being investigated as a method of screening for colorectal tumors, and EDTA plasma levels are undergoing testing for possible application in the follow-up of various cancers. Sandwich ELISAs based on two monoclonal antibodies that specifically recognize Tumor M2-PK (the dimeric form of M2-PK) are available for the quantification of Tumor M2-PK in stool and EDTA-plasma samples respectively. As a biomarker, the amount of Tumor M2-PK in stool and EDTA-plasma reflects the specific metabolic status of the tumors. M2-PK, as measured in feces, is a potential tumor marker for colorectal cancer. When measured in feces with a cutoff value of 4 U/ml, its sensitivity has been estimated to be 85% (95% confidence interval 65 to 96%) for colon cancer and 56% (confidence interval 41-74%) for rectal cancer.[1] Its specificity is 95%.[2] (For what such figures imply about predictive values in a screening setting, see the worked sketch at the end of this entry.) The M2-PK test is not dependent on occult blood (ELISA method), so it can detect bleeding or non-bleeding bowel cancer, and also polyps, with high sensitivity and specificity and no false negatives, although false positives may occur.[3] Most people are more willing to accept non-invasive preventive medical check-ups. Therefore, the measurement of tumor M2-PK in stool samples, with follow-up by colonoscopy to clarify tumor M2-PK-positive results, may prove to be an advance in the early detection of colorectal carcinomas. The CE-marked M2-PK test is available in the form of an ELISA test for quantitative results, or as a point-of-care test giving results within minutes. Tumor M2-PK is also useful for diagnosing lung cancer, performing better than the SCC and NSE tumor markers.[4] In renal cell carcinoma (RCC), the M2-PK test has a sensitivity of 66.7 percent for metastatic RCC and 27.5 percent for nonmetastatic RCC, but it cannot detect transitional cell carcinoma of the bladder, prostate cancer, or benign prostatic hyperplasia.[5] Studies from various international working groups have revealed a significantly increased amount of Tumor M2-PK in EDTA-plasma samples of patients with renal, lung, breast, cervical and gastrointestinal tumors (oesophagus, stomach, pancreas, colon, rectum), as well as melanoma, which correlated with the tumor stage. The combination of Tumor M2-PK with the appropriate classical tumor marker, such as CEA for bowel cancer, CA 19-9 for pancreatic cancer and CA 72-4 for gastric cancer, significantly increases the sensitivity for detecting various cancers. An important application of the Tumor M2-PK test in EDTA-plasma is for follow-up during tumor therapy, to monitor the success or failure of the chosen treatment, as well as predicting the chances of a "cure" and survival. If Tumor M2-PK levels decrease during therapy and then remain low after therapy, this points towards successful treatment. An increase in Tumor M2-PK values during or after therapy points towards relapse and/or metastasis. Increased Tumor M2-PK values can sometimes also occur in severe inflammatory diseases, which must be excluded by differential diagnosis. Pyruvate kinase catalyzes the last step of the glycolytic sequence, the dephosphorylation of phosphoenolpyruvate to pyruvate, and is responsible for net energy production within the glycolytic pathway.
Depending upon the different metabolic functions of the tissues, different isoenzymes of pyruvate kinase are expressed. M2-PK (PKM2) is the predominant pyruvate kinase isoform in proliferating cells, such as fibroblasts, embryonic cells and adult stem cells, and in most human tissues, including lung, bladder, kidney and thymus; M2-PK is upregulated in many human tumors.[6] M2-PK can occur in two different forms in proliferating cells: a highly active tetrameric form and a nearly inactive dimeric form. The tetrameric form of M2-PK has a high affinity for its substrate, phosphoenolpyruvate (PEP), and is highly active at physiological PEP concentrations. Furthermore, the tetrameric form of M2-PK is associated with several other glycolytic enzymes within the so-called glycolytic enzyme complex; due to the close proximity of the enzymes, this association leads to a highly effective conversion of glucose to lactate. When M2-PK is mainly in the highly active tetrameric form, which is the case in most normal cells, glucose is largely converted to lactate, with the attendant production of energy. In contrast, the dimeric form of M2-PK has a low affinity for phosphoenolpyruvate and is nearly inactive at physiological PEP concentrations. When M2-PK is mainly in the dimeric form, which is the case in tumor cells, all phosphometabolites above pyruvate kinase accumulate and are channelled into the synthetic processes that branch off from glycolytic intermediates, yielding nucleic acids, phospholipids and amino acids, important cell building blocks for highly proliferating cells such as tumor cells. As a consequence of the key position of pyruvate kinase within glycolysis, the tetramer:dimer ratio of M2-PK determines whether glucose carbons are converted to pyruvate and lactate, with the production of energy (tetrameric form), or channelled into synthetic processes (dimeric form). In tumor cells M2-PK is mainly in the dimeric form, which is why the dimeric form of M2-PK has been termed Tumor M2-PK. The dimerization of M2-PK in tumor cells is induced by the direct interaction of M2-PK with different oncoproteins. However, the tetramer:dimer ratio of M2-PK is not constant. Oxygen starvation or highly accumulated glycolytic intermediates, such as fructose 1,6-bisphosphate (fructose 1,6-P2) or the amino acid serine, induce the reassociation of the dimeric form of M2-PK into the tetrameric form. Consequently, owing to the activation of M2-PK, glucose is converted to pyruvate and lactate, with the production of energy, until the fructose 1,6-P2 level drops below a certain threshold value, which allows the dissociation of the tetrameric form of M2-PK back into the dimeric form. The cycle of oscillation then starts again when the fructose 1,6-P2 level reaches the upper threshold value that induces the tetramerization of M2-PK. When M2-PK is mainly in the less active dimeric form, energy is produced by the degradation of the amino acid glutamine to aspartate, pyruvate and lactate, a pathway termed glutaminolysis. In tumor cells, the increased rate of lactate production in the presence of oxygen is termed the Warburg effect. Pyruvate kinase M2 was first reported with two missense mutations, H391Y and K422R, found in cells from Bloom syndrome patients, who are prone to developing cancer.
Results show that despite the presence of mutations in the inter-subunit contact domain, the K422R and H391Y mutant proteins maintained their homotetrameric structure, similar to the wild-type protein, but showed losses of activity of 75% and 20%, respectively. H391Y showed a 6-fold increase in affinity for its substrate phosphoenolpyruvate and behaved like a non-allosteric protein with compromised cooperative binding. In K422R, by contrast, the affinity for phosphoenolpyruvate was significantly reduced. Unlike K422R, H391Y showed enhanced thermal stability, stability over a range of pH values, a lesser effect of the allosteric inhibitor Phe, and resistance toward structural alteration upon binding of the activator (fructose 1,6-bisphosphate) and inhibitor (Phe). Both mutants showed a slight shift in the pH optimum, from 7.4 to 7.0.[7] The co-expression of homotetrameric wild-type and mutant PKM2 in the cellular milieu, resulting in interaction between the two at the monomer level, was substantiated further by in vitro experiments. This cross-monomer interaction significantly altered the oligomeric state of PKM2 by favoring dimerization and heterotetramerization. An in silico study provided added support, showing that hetero-oligomerization was energetically favorable. The hetero-oligomeric populations of PKM2 showed altered activity and affinity, and their expression resulted in an increased growth rate of Escherichia coli as well as mammalian cells, along with an increased rate of polyploidy. These features are known to be essential to tumor progression.[8][9]
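To make the screening figures quoted earlier concrete, the following worked sketch converts the reported sensitivity and specificity of the fecal M2-PK test into predictive values. The 0.5% disease prevalence is an assumed illustrative number for a screening population, not a figure from the cited sources:

```python
# Predictive values implied by the reported fecal M2-PK figures
# (sensitivity 85% for colon cancer, specificity 95%, per the article).
# The 0.5% prevalence is an assumed illustrative value, not from the source.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                # true positive fraction
    fn = (1 - sensitivity) * prevalence          # false negative fraction
    fp = (1 - specificity) * (1 - prevalence)    # false positive fraction
    tn = specificity * (1 - prevalence)          # true negative fraction
    ppv = tp / (tp + fp)  # P(cancer | positive test)
    npv = tn / (tn + fn)  # P(no cancer | negative test)
    return ppv, npv

ppv, npv = predictive_values(sensitivity=0.85, specificity=0.95,
                             prevalence=0.005)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # PPV ≈ 7.9%, NPV ≈ 99.9%
```

The low positive predictive value at screening prevalence illustrates why, as the article notes, positive stool results are clarified by follow-up colonoscopy rather than treated as diagnoses.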
https://en.wikipedia.org/wiki/Tumor_M2-PK
A tumor marker is a biomarker that can be used to indicate the presence of cancer or the behavior of cancers (to measure progression or response to therapy). Tumor markers can be found in bodily fluids or tissue. They can help in assessing prognosis, surveilling patients after surgical removal of tumors, predicting drug response, and monitoring therapy.[1] Tumor markers can be molecules that are produced in higher amounts by cancer cells than by normal cells, but they can also be produced by other cells in reaction to the cancer.[2] Markers cannot be used on their own to give a diagnosis, but they can be compared with the results of other tests, such as biopsy or imaging.[2] Tumor markers can be proteins, carbohydrates, receptors or gene products; the proteins include hormones and enzymes. Enzyme tumor markers are detected by measuring enzyme activity. They were previously widely used, but have largely been replaced by oncofetal antigens and monoclonal antibodies, owing to disadvantages such as a general lack of organ specificity. Carbohydrate markers consist of antigens on, and/or secreted from, tumor cells; these are either high-molecular-weight mucins or blood group antigens. Receptors are used to determine prognosis and to measure how a patient responds to treatment, while genes or gene products can be analyzed to identify mutations in the genome or altered gene expression.[citation needed] Tumor markers may be used for several purposes. When a malignant tumor has been found through the presence of a tumor marker, the marker level can be monitored to determine the state of the tumor and how it responds to treatment. If the level stays the same during treatment, this can indicate that the treatment isn't working and that an alternative treatment should be considered. Rising levels of a tumor marker do not necessarily reflect a growing malignancy, but can result from, for example, unrelated illnesses. By determining the stage of the cancer, it is possible to give a prognosis and a treatment plan.[3] No screening test is wholly specific, and high levels of a tumor marker can still be found in benign tumors. The only tumor marker currently used in screening is PSA (prostate-specific antigen). Tumor markers alone can't be used for diagnostic purposes, due to their lack of sensitivity and specificity;[4] the only approved diagnostic method for cancer is biopsy. Tumor markers can also detect recurrent cancers in patients post-treatment.[3] Tumor markers can be determined in serum or, rarely, in urine or other body fluids, most often by immunoassay, although other techniques such as enzyme activity determination are sometimes used. Tumor marker assays improved significantly after the development of the ELISA and RIA techniques and the advancement of monoclonal antibodies in the 1960s and 1970s.[2] For many analytes, different assay techniques are available. It is important that the same assay is used, as the results from different assays are generally not comparable. For example, mutations of the p53 gene can be detected through immunohistochemistry, sequence analysis of DNA, or single-strand conformational polymorphism screening of DNA, and each assay may give a different assessment of the clinical value of p53 mutations as a prognostic factor.[5] Interlaboratory proficiency testing for tumor marker tests, and for clinical tests more generally, is routine in Europe and an emerging field[6] in the United States.
New York state is prominent in advocating such research.[7] An ideal tumor marker does not exist, and how markers are clinically applied depends on the specific tumor marker. For example, tumor markers like Ki-67 can be used to choose a form of treatment or for prognosis but are not useful for giving a diagnosis, while other tumor markers have the opposite functionality. It is therefore important to follow the guidelines for the specific tumor marker. Tumor markers are mainly used in clinical medicine to support a diagnosis and to monitor the state of a malignancy or the recurrence of cancer.[4] A toy sketch of the trend-based monitoring reasoning described above follows below.
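The following toy sketch illustrates the trend-based reasoning the article describes for monitoring a marker during treatment (falling suggests response; stable suggests the treatment isn't working; rising may indicate relapse or an unrelated illness). The 20% tolerance band and function name are invented for illustration; real interpretation follows the guidelines of the specific marker:

```python
# Toy illustration of marker-trend interpretation during therapy.
# The 20% tolerance band is an invented illustrative value; clinical
# interpretation depends on the specific marker's guidelines.
def interpret_trend(baseline: float, current: float,
                    tolerance: float = 0.20) -> str:
    change = (current - baseline) / baseline
    if change <= -tolerance:
        return "falling: consistent with response to treatment"
    if change >= tolerance:
        return "rising: possible relapse/metastasis, or unrelated illness"
    return "stable: treatment may not be working; consider alternatives"

print(interpret_trend(baseline=120.0, current=40.0))  # -> falling
```

As the article stresses, such a trend is never diagnostic on its own; it is weighed against biopsy, imaging, and differential diagnosis.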
https://en.wikipedia.org/wiki/Tumor_marker
The study of tumor metabolism, also known as the tumor metabolome, describes the characteristic metabolic changes in tumor cells. Characteristic attributes[2] of the tumor metabolome are: high glycolytic enzyme activities; expression of the pyruvate kinase isoenzyme type M2; increased channeling of glucose carbons into synthetic processes, such as nucleic acid, amino acid and phospholipid synthesis; a high rate of pyrimidine and purine de novo synthesis; a low ratio of adenosine triphosphate and guanosine triphosphate to cytidine triphosphate and uridine triphosphate; low adenosine monophosphate levels; high glutaminolytic capacity; release of immunosuppressive substances; and dependency on methionine. Although the link between cancer and metabolism was observed in the early days of cancer research by Otto Heinrich Warburg[3] (hence the Warburg hypothesis), little substantial research was carried out until the late 1990s, because of the lack of in vitro tumor models and the difficulty of creating oxygen-deprived environments. Recent research has revealed that metabolic reprogramming occurs as a consequence of mutations in cancer genes and alterations in cellular signaling; the alteration of cellular and energy metabolism has therefore been suggested as one of the hallmarks of cancer.[4][5] A high rate of aerobic glycolysis (also known as the Warburg effect) distinguishes cancer cells from normal cells. The conversion of glucose to lactate, rather than its metabolization in the mitochondria through oxidative phosphorylation (which can also occur in hypoxic normal cells), persists in malignant tumors despite the presence of oxygen. Normally, the presence of oxygen inhibits glycolysis, a phenomenon known as the Pasteur effect; one proposed reason this inhibition fails in tumors is mitochondrial malfunction. Although ATP production by glycolysis can be more rapid than by oxidative phosphorylation, it is far less efficient in terms of ATP generated per unit of glucose consumed. Rather than oxidizing glucose for ATP production, glucose in cancer cells tends to be used for anabolic processes, such as ribose production, protein glycosylation and serine synthesis. This shift therefore demands that tumor cells implement an abnormally high rate of glucose uptake to meet their increased needs.[5] As neoplastic cells accumulate in three-dimensional multicellular masses, local low nutrient and oxygen levels trigger the growth of new blood vessels into the neoplasm. The imperfect neovasculature in the tumor bed is poorly formed and inefficient, and therefore causes nutrient and hypoxic stress (a state of hypoxia).[6][7] In this regard, cancer cells and stromal cells can symbiotically recycle and maximize the use of nutrients. Hypoxic adaptation by cancer cells is essential for the survival and progression of a tumor.[8][9] In addition to the cell-autonomous changes that drive a cancer cell to proliferate and contribute to tumorigenesis, it has also been observed that alterations in whole-organism metabolism, such as obesity, are associated with heightened risks for a variety of cancers.[10] The protein AKT1 (also known as protein kinase B or PKB) in the PI3K/AKT/mTOR pathway is an important driver of the tumor glycolytic phenotype and stimulates ATP generation.
AKT1 stimulates glycolysis by increasing the expression and membrane translocation of glucose transporters and by phosphorylating key glycolytic enzymes, such as hexokinase and phosphofructokinase 2. It also leads to the inhibition of forkhead box subfamily O transcription factors, further increasing glycolytic capacity. Activated mTOR stimulates protein and lipid biosynthesis and cell growth in response to sufficient nutrient and energy conditions, and is often constitutively activated during tumorigenesis.[5] mTOR directly stimulates mRNA translation and ribosome biogenesis, and indirectly causes other metabolic changes by activating transcription factors such as hypoxia-inducible factor 1 (HIF1A). The subsequent HIF1-dependent metabolic changes are a major determinant of the glycolytic phenotype downstream of PI3K, AKT1 and mTOR.[11] Apart from being a general tumor suppressor, p53 also plays an important part in the regulation of metabolism. p53 activates hexokinase 2 (HK2), which converts glucose to glucose-6-phosphate (G6P); G6P either enters glycolysis to produce ATP or enters the pentose phosphate pathway (PPP), thereby supporting macromolecular biosynthesis by producing reducing potential in the form of reduced nicotinamide adenine dinucleotide phosphate (NADPH) and/or ribose, which is used for nucleotide synthesis.[12] p53 inhibits the glycolytic pathway by upregulating the expression of the TP53-induced glycolysis and apoptosis regulator. Wild-type p53 also supports the expression of PTEN, which inhibits the PI3K pathway, thereby suppressing glycolysis. POU2F1 cooperates with p53 in regulating the balance between oxidative and glycolytic metabolism: it provides resistance to oxidative stress by regulating a set of genes that increase glucose metabolism and reduce mitochondrial respiration, an effect that provides an additive force when p53 is lost.[5] Mutated Ras also enhances glycolysis, partly through increasing the activity of Myc and of hypoxia-inducible factors; although HIF-1 inhibits Myc, HIF-2 activates Myc, driving the proliferation of tumor cells.[9] Mutations in fumarate hydratase are found among patients suffering from kidney cancers, and mutations in succinate dehydrogenase are found in patients with pheochromocytomas and paragangliomas. These mutations disrupt the TCA cycle, with accumulation of fumarate or succinate, both of which can inhibit the dioxygenases (prolyl hydroxylases) that mediate the degradation of HIF proteins. HIF-1 can also be elevated under aerobic conditions downstream of activated PI3K, which stimulates the synthesis of HIF-1. Loss of the tumor suppressor VHL in kidney cancer likewise stabilizes HIF-1, permitting it to activate glycolytic genes that are normally activated by HIF-1 only under hypoxic conditions.[9] HIF1 in turn activates the pyruvate dehydrogenase kinases (PDKs), which inactivate the mitochondrial pyruvate dehydrogenase complex, reducing the flow of glucose-derived pyruvate into the tricarboxylic acid cycle (citric acid cycle or TCA cycle). This reduction in pyruvate flux into the TCA cycle decreases the rate of oxidative phosphorylation and oxygen consumption, reinforcing the glycolytic phenotype and sparing oxygen under hypoxic conditions.[13][14] Pyruvate kinase type M2 (PKM2) is present in embryonic cells and adult stem cells, and is also expressed by many tumor cells. The alterations to metabolism mediated by PKM2 increase ATP resources, stimulate macromolecular biosynthesis and support redox control.
Pyruvate kinase catalyses the ATP-generating step of glycolysis, in which phosphoenolpyruvate (PEP) is converted to pyruvate; this is a rate-limiting step.[15] The less active form of PKM2 decreases glycolytic flux at this step and allows carbohydrate metabolites to enter other pathways, such as the hexosamine pathway, uridine diphosphate glucose synthesis, glycerol synthesis and the pentose phosphate pathway (PPP). This helps generate the macromolecule precursors necessary to support cell proliferation, as well as reducing equivalents such as NADPH.[16][17] It has been observed in some studies that MYC promotes expression of PKM2 over PKM1 by modulating exon splicing.[5] A key molecule produced as a result of the oxidative PPP flux enabled by PKM2 is NADPH. NADPH functions as a cofactor and provides reducing power in many enzymatic reactions crucial for macromolecular biosynthesis. Another mechanism by which NADPH is produced in mammalian cells is the conversion of isocitrate to α-ketoglutarate (αKG), catalysed by the NADP-dependent isocitrate dehydrogenases IDH1 and IDH2; mutations in these enzymes have been linked to tumorigenesis in glioblastoma and acute myeloid leukemia.[18][19] These mutations affect arginine residues required for isocitrate binding in the active sites of IDH1 and IDH2.[5] Fatty acid synthesis is an anabolic process that starts with the conversion of acetyl-CoA to malonyl-CoA by acetyl-CoA carboxylase. Malonyl-CoA feeds fatty acid synthesis (FAS) and is involved in the elongation of fatty acids through fatty acid synthase (FASN). Although aerobic glycolysis is the best-documented metabolic phenotype of tumor cells, it is not a universal feature of all human cancers. Amino acids and fatty acids have been shown to function as fuels for tumor cell proliferation, and the carnitine palmitoyltransferase enzymes that regulate the β-oxidation of fatty acids may have a key role in determining some of these phenotypes.[5] Enhanced fatty acid synthesis provides lipids for membrane biogenesis in tumor cells and hence gives an advantage in both growth and survival of the cell. It has also been seen that the metabolic phenotype of tumor cells changes to adapt to the prevailing local conditions. A convergence between phenotypic and metabolic state transitions that confers a survival advantage on cancer cells against clinically used drug combinations, such as taxanes and anthracyclines, has also been reported; the drug-resistant cancer cells had increased activity of both the glycolytic and oxidative pathways and increased glucose flux through the pentose phosphate pathway (PPP).[20] Some fatty acids have been linked to acquired resistance against some cancer drugs. Fatty acid synthase (FASN), a key complex catalyzing fatty acid synthesis, has been linked to acquired docetaxel, trastuzumab and adriamycin resistance in breast cancer; similar links have been found with intrinsic gemcitabine and radiation resistance in pancreatic cancer. Glutaminolysis is linked to cisplatin resistance via the activation of mTORC1 signaling in gastric cancer.[21] NADPH plays an important role as an antioxidant by decreasing the reactive oxygen species produced during rapid cell proliferation. It has been shown that attenuation of the PPP dampens NADPH production in cancer cells, decreasing macromolecular biosynthesis and rendering the transformed cells vulnerable to free radical-mediated damage.
In this way, the advantage conferred by PKM2 expression would be eliminated. In preclinical studies, drugs such as 6-amino-nicotinamide (6-AN), which inhibits G6P dehydrogenase, the enzyme that initiates the PPP, have shown anti-tumorigenic effects in leukemia, glioblastoma and lung cancer cell lines.[22] Rapamycin inhibits TOR and is used as an effective immunosuppressant; mycophenolic acid inhibits IMPDH and pyrimidine biosynthesis and is clinically used as an immunosuppressant. Both agents also display anti-tumor effects in animal studies.[9] Metabolites such as alanine, saturated lipids, glycine, lactate, myo-inositol, nucleotides, polyunsaturated fatty acids and taurine have been considered potential biomarkers in various studies.[23] The use of the amino acid glutamine as an energy source is facilitated by its multistep catabolism, called glutaminolysis. This energy pathway is upregulated in cancer and may represent a therapeutic target, as cancer cells are thought to be more dependent on glutamine than healthy cells.[24] This especially holds true for specific tumor types that are metabolically dysregulated, such as malignant brain tumors (i.e. glioblastoma) that carry mutations in the IDH1 gene. These tumors use glutamine, or the structurally related amino acid glutamate, as an energy source and as a chemotactic sensor in the brain, which increases their malignancy and may explain why these tumors grow so invasively.[9][10]
https://en.wikipedia.org/wiki/Tumor_metabolome
Tumour mutational burden (abbreviated TMB) is a genetic characteristic of tumorous tissue that can be informative for cancer research and treatment. It is defined as the number of non-inherited (somatic) mutations per million bases (Mb) of investigated genomic sequence,[1] and its measurement has been enabled by next-generation sequencing. High TMB and DNA damage repair mutations were discovered to be associated with superior clinical benefit from immune checkpoint blockade therapy by Timothy Chan and colleagues at the Memorial Sloan Kettering Cancer Center.[2] TMB has been validated as a predictive biomarker with several applications, including associations reported between different TMB levels and patient response to immune checkpoint inhibitor (ICI) therapy in a variety of cancers.[3][4] TMB is also strongly predictive of overall as well as disease-specific survival, independently of cancer type, stage or grade: patients with both low and high TMB fare notably better than those with intermediate burden.[5] While both TMB and mutational signatures provide critical information about cancer behaviour, they have different definitions. TMB is defined as the number of somatic mutations per megabase, whereas mutational signatures are distinct mutational patterns of single-base substitutions, double-base substitutions, or small insertions and deletions in tumors.[6] For instance, COSMIC single-base substitution signature 1 is characterized by the enzymatic deamination of cytosine to thymine and has been associated with the age of an individual.[6] Scientists postulate that high TMB is associated with an increased number of neoantigens, tumour-specific markers displayed by cells.[2][7] An increase in these antigens may then lead to increased detection of cancer cells by the immune system and more robust activation of cytotoxic T-lymphocytes. Activation of T-cells is further regulated by immune checkpoints that can be displayed by cancer cells, so treatment with ICIs can lead to improved patient survival.[8] On June 16, 2020, the U.S. Food and Drug Administration expanded the approval of the immunotherapy drug pembrolizumab to treat any advanced solid-tumor cancer with a TMB greater than 10 mutations per Mb and continued growth following prior treatments.[9] This marked the first time that the FDA approved a drug for use based on TMB measurements.[10] One survival mechanism in tumors is to increase the expression of immune checkpoint molecules that can bind to tumor-specific T-cells and inactivate them, so that the tumor cells cannot be detected and killed.[11] ICIs have been shown to improve patients' response and survival rates, as they help the immune system to target tumor cells.[1][10] However, there is variation in response to ICIs among patients, and it is crucial to know which patients can benefit from ICI therapy.[1] The expression of PD-L1 (programmed death-ligand 1, one of the immune checkpoints) has been demonstrated to be a good biomarker of PD-L1 blockade therapy in some cancers.[10] However, there is a need for better biomarkers, as there are some predictive errors with PD-L1 expression.[10] Studies on TMB have illustrated that there is an association between patients' outcomes of ICI therapy and the TMB value.[1] It has been proposed that TMB can be used as a predictive marker of response to ICI therapy across many cancer types.
[ 10 ] TMB can also help to identify individuals who can benefit from ICI therapy in cancers that generally have low TMB values. [ 10 ] Furthermore, it has been shown that tumors with higher TMB values usually present a higher number of neoantigens, the antigens displayed on the tumor cell surface that usually result from missense mutations. [ 10 ] Thus, TMB can be a good estimator of neoantigen load and can help find the patients who can benefit from ICI therapy by increasing the chance of detecting the neoantigens. [ 10 ] However, it is important to note that different sequencing platforms and bioinformatics pipelines have been used to estimate TMB, and it is important to harmonize TMB quantification protocols and procedures before it can be used as a reliable biomarker . [ 1 ] [ 12 ] There have been some efforts to standardize these methods. [ 1 ] TMB has been found to correlate with patient response to therapies such as immune checkpoint inhibitors (ICIs). An analysis of a large cohort of patients receiving ICI therapy revealed that higher TMB levels (≥ 20 mutations/Mb) corresponded to a 58% response rate to ICIs, while lower TMB levels (<20 mutations/Mb) reduced the response rate to 20%. [ 13 ] Researchers have also shown a significant correlation between treatment response rate and TMB level in patients treated with anti-PD-1 or anti-PD-L1 agents (types of ICIs). [ 14 ] Additionally, it has been reported that when ICIs were the only treatments used by patients, 55% of the differences in the objective response rate across cancer types were explained by TMB. [ 14 ] Associations have been reported between TMB and patient outcome in a variety of cancers. In one study, scientists observed differences in survival rates , with high-TMB individuals having a median progression-free survival of 12.8 months and a median overall survival not reached by the time of publication, compared to 3.3 months and 16.3 months respectively for individuals with lower TMB. [ 13 ] Another study, examining patients who had not received ICI therapy, found that intermediate levels of TMB (>5 and <20 mutations/Mb) correlate with significantly decreased survival , likely as a result of the accumulation of mutations in oncogenes . [ 7 ] This relationship does not appear to be significantly disparate across different tissue types and is only modestly affected by corrections for confounders such as smoking, sex, age, and ethnicity. [ 7 ] This suggests that TMB is both an independent and reliable indicator of poor patient outcomes in the absence of ICI therapy. [ 7 ] Interestingly, very high levels of TMB (≥ 50 mutations/Mb) were reported to correlate with increased survival , giving an overall parabolic shape to the trend. [ 7 ] While this association is still under investigation, it has been hypothesized that the decreased risk of death under very high TMB could result from reduced cell viability due to genetic instability or increased production of neoantigens recognized by the immune system. [ 7 ] There is large variation in TMB values across different cancer types, as the number of somatic mutations can span from 0.01 to 400 mutations per megabase of genome. [ 1 ] [ 10 ] [ 11 ] It has been shown that melanoma , NSCLC and other squamous carcinomas have the highest levels of TMB, in this order, while leukemias and pediatric tumors have the lowest levels of TMB, and other cancers such as breast , kidney , and ovary have intermediate TMB values.
[ 10 ] There is also variation in TMB across different subtypes of the same cancer. [ 10 ] Due to the high variability in TMB across cancer types and subtypes, it is important to define different cut-offs to improve survival prediction and treatment decisions. [ 1 ] [ 10 ] [ 11 ] For example, Fernandez et al. showed that TMB ranges from 0.03 to 14.13 mutations per megabase (mean = 1.23) in the TCGA prostate cancer cohort, while the range is 0.04 to 99.68 mutations per megabase (mean = 6.92) in the TCGA bladder cancer cohort. [ 15 ] A recent study illustrated that different cut-offs are needed for different cancer types to find the patients who can benefit from ICI therapy. [ 1 ] In addition, it is crucial to understand that there are usually different clusters of cells in a tumor, known as tumor heterogeneity , which can affect TMB and consequently the response to ICIs. [ 10 ] Another factor that can affect TMB is whether the source of the sample is primary or metastatic tissue. [ 16 ] Most metastatic samples have been shown to be monoclonal (i.e. there is only one cluster of cells in the tumor), while primary tumors usually consist of a higher number of clusters and have higher overall genetic diversity (they are more heterogeneous). [ 16 ] Scientists have shown that metastatic tumors usually have a higher TMB level than primary tumors, which may be due to the monoclonal nature of metastatic lesions. [ 16 ] There are disparities between how TMB is calculated in clinical and research settings. [ 17 ] Broadly, whole genome sequencing , whole exome sequencing , and panel-based approaches can be used to calculate TMB. [ 17 ] Studies of TMB from a research perspective typically incorporate whole exome sequencing , and occasionally whole genome sequencing , within their workflows, while clinical applications use panel sequencing to estimate TMB, primarily for its comparatively quicker speed and lower cost. [ 17 ] Within panel-based approaches, different strategies to calculate TMB have been adopted. [ 17 ] For instance, consider MSK-IMPACT, developed by the Memorial Sloan Kettering Cancer Center, and F1CDx, developed by Foundation Medicine . [ 18 ] [ 19 ] F1CDx utilizes a tumor-only sequencing strategy, while MSK-IMPACT requires sequencing of both the tumor and its matched normal sample. Additionally, F1CDx counts synonymous mutations while excluding hotspot driver mutations. [ 18 ] MSK-IMPACT calculates TMB with similar filtering criteria to those used in whole exome sequencing , considering both synonymous mutations and hotspot driver mutations . [ 19 ] Ensembles of targeted panels and whole exome sequencing panels have been recommended for optimal results. [ 20 ] As an approach that is potentially more expedient and cost-effective than sequencing, TMB can also be calculated directly from H&E-stained pathology images using deep learning . [ 21 ] Overall, five primary factors have been identified that influence TMB calculations. [ 22 ] Greater tumor cell content and sequencing coverage play a key role in the quality of TMB data. [ 22 ] For instance, targeted panels may enable deeper sequencing than whole exome sequencing , providing higher sensitivity, and have been shown to perform well even when tumor cell content is low (defined as <10%). [ 22 ] Targeted panels have been shown to enable much greater coverage than whole exome sequencing .
[ 22 ] For example, one recent study reached a mean sequencing coverage of 744× across all tumor samples when using the MSK-IMPACT panel, while WES led to a mean target coverage of 232× in tumor sequences. [ 23 ] Typically, tumor tissues are fixed in formalin to preserve tissue and cellular morphology, in formalin-fixed paraffin-embedded (FFPE) protocols. [ 24 ] While FFPE offers a cost-effective method to store tissues for long durations of time, its limitations must be considered as to how it affects TMB calculations. [ 24 ] One limitation of this method is that it induces the formation of various crosslinks, whereby strands of DNA become covalently bound to each other, which may consequently lead to deamination of cytosine bases. [ 22 ] Cytosine deamination is the major cause of baseline noise in Next Generation Sequencing , leading to the most prevalent sequence artifacts in FFPE (C:G > T:A). [ 22 ] This may generate artifacts that must be removed in the downstream pipeline. Different sequencing strategies allow different numbers of genes to be included in the calculation of TMB (with WGS and WES approaches allowing a greater quantity of genes to be analyzed). While panel-based approaches analyze comparatively fewer genes than other strategies, one advantage of panel-based sequencing is that genes of interest can be covered at much greater sequencing depths, and rare variants can possibly be identified. [ 22 ] Panel sizes vary, with 468 genes in the MSK-IMPACT panel, 315 genes in the Foundation Medicine panel, and 409 genes in the Life Technologies panel. [ 22 ] As panel sizes become smaller, the uncertainty associated with TMB estimation becomes greater, with the coefficient of variation increasing rapidly when the size of the targeted panel is less than 1 Mb. [ 24 ] In most calculations of TMB, synonymous variants and germline variants are filtered out, as they are unlikely to be directly involved in creating neoantigens. [ 22 ] However, some pipelines retain synonymous variants . [ 24 ] To account for germline variants, sequencing would ideally be performed on a matched non-tumor sample from each patient. [ 24 ] However, in clinical practice the availability of this matched sample may vary across institutions for diverse organizational reasons, and data unavailability may prevent germline variants from being filtered. [ 24 ] The choice of variant callers and other software in the downstream analyses may also affect how TMB is ultimately calculated. [ 24 ] TMB can also be calculated directly from histopathology images using a multiscale deep learning pipeline, avoiding the need for sequencing and variant calling. [ 21 ] Different studies have assigned different cut-offs to delineate between high and low TMB status. [ 22 ] Across more than 18,000 lung cancer cases, the median TMB was 7.2 mutations/Mb, with approximately 12% of patients showing more than 20 mutations/Mb. [ 24 ] The authors identified a tumor mutational burden greater than or equal to 10 mutations/Mb as the optimal cut-off for benefit from combination immunotherapy . [ 24 ] However, in other cancer types, high TMB status has been classified as >20 mutations/Mb. [ 7 ] One approved biomarker of ICI therapy is PD-L1 expression, but the predictive power of this biomarker is affected by factors such as assay interpretation and a lack of standard methods. [ 10 ] TMB is also affected by these factors, in addition to accessibility issues.
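To make the filtering choices above concrete, here is a minimal sketch of a panel-style TMB estimate. The variant records, field names, and default choices are hypothetical; real pipelines such as MSK-IMPACT and F1CDx differ in exactly which variant classes they count, as described above.

```python
# A minimal sketch of panel-style TMB estimation. Variant fields and
# filtering defaults are hypothetical; real assays differ (see text).
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    is_germline: bool      # ideally established from a matched normal sample
    is_synonymous: bool
    is_hotspot_driver: bool

def estimate_tmb(variants, panel_size_mb, count_synonymous=False,
                 exclude_hotspots=True):
    """Estimate TMB (mutations/Mb) from called variants on a targeted panel."""
    kept = [v for v in variants if not v.is_germline]
    if not count_synonymous:
        kept = [v for v in kept if not v.is_synonymous]
    if exclude_hotspots:
        kept = [v for v in kept if not v.is_hotspot_driver]
    return len(kept) / panel_size_mb

calls = [
    Variant("TP53", False, False, True),    # hotspot driver: excluded here
    Variant("KRAS", False, False, False),   # counted
    Variant("BRCA2", True, False, False),   # germline: filtered out
    Variant("EGFR", False, True, False),    # synonymous: filtered out
]
print(estimate_tmb(calls, panel_size_mb=1.2))  # ~0.83 mutations/Mb
```

Varying count_synonymous and exclude_hotspots shows how the same variant calls can yield different TMB values under different panel conventions.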
[ 10 ] Biological factors such as specimen type and cancer type, as well as technical factors such as sequencing technology, can affect the evaluation of TMB. [ 1 ] Thus, it is necessary to harmonize evaluation methods, although many factors complicate this task. [ 1 ] [ 10 ] For example, gene fusions and post-translational changes in proteins contribute to tumor behaviour, and consequently to response to therapy, yet these factors are not considered in TMB estimation. [ 10 ] In addition, all mutations currently carry the same weight in TMB calculation, although they can have very different effects on protein and pathway activity. [ 10 ] Furthermore, there is still no good answer to the question of how mutations in genes that are known to influence ICI therapy should be treated in TMB evaluation. [ 10 ] It is also important to note that TMB is highly variable across cancer types and subtypes, and different studies are being conducted to find distinct TMB thresholds. [ 10 ] Some studies argue that, for better prediction of response to ICI therapy, TMB should be used as a complementary marker alongside other biomarkers such as PD-L1 . [ 10 ] Other studies have shown that a combination of TMB and neoantigen load can be used as a biomarker to predict survival in patients with melanoma who received adoptive T cell transfer therapy. [ 10 ] Since TMB is a relatively new biomarker, more studies are still needed, and many labs are focusing on different aspects of this biomarker. [ 10 ] [ 11 ]
https://en.wikipedia.org/wiki/Tumor_mutational_burden
Tumor necrosis factor receptor 2 ( TNFR2 ), also known as tumor necrosis factor receptor superfamily member 1B ( TNFRSF1B ) and CD120b , is one of two membrane receptors that bind tumor necrosis factor-alpha (TNFα). [ 5 ] [ 6 ] Like its counterpart, tumor necrosis factor receptor 1 (TNFR1) , the extracellular region of TNFR2 consists of four cysteine-rich domains which allow for binding to TNFα . [ 7 ] [ 8 ] TNFR1 and TNFR2 possess different functions when bound to TNFα due to differences in their intracellular structures, such as TNFR2 lacking a death domain ( DD ). [ 7 ] The protein encoded by this gene is a member of the tumor necrosis factor receptor superfamily, which also contains TNFRSF1A . This protein and TNF-receptor 1 form a heterocomplex that mediates the recruitment of two anti-apoptotic proteins, c-IAP1 and c-IAP2 , which possess E3 ubiquitin ligase activity. The function of IAPs in TNF-receptor signalling is unknown; however, c-IAP1 is thought to potentiate TNF-induced apoptosis by the ubiquitination and degradation of TNF-receptor-associated factor 2 ( TRAF2 ), which mediates anti-apoptotic signals. Knockout studies in mice also suggest a role for this protein in protecting neurons from apoptosis by stimulating antioxidative pathways. [ 9 ] At least partly because TNFR2 has no intracellular death domain, it is neuroprotective . [ 10 ] Patients with schizophrenia have increased levels of soluble tumor necrosis factor receptor 2 ( sTNFR2 ). [ 11 ] Targeting of TNFR2 in tumor cells is associated with increased tumor cell death and decreased progression of tumor cell growth. [ 8 ] Increased expression of TNFR2 is found in breast cancer , cervical cancer , colon cancer , and renal cancer . [ 8 ] A link between the expression of TNFR2 in tumor cells and late-stage cancer has been discovered. [ 8 ] TNFR2 plays a significant role in tumor cell growth, as the loss of TNFR2 expression has been found to be linked with increased death of associated tumor cells and a significant standstill of further growth. [ 8 ] There is therapeutic potential in targeting TNFR2 for cancer treatment through TNFR2 inhibition. [ 12 ] A small-scale study of 289 Japanese patients suggested a minor increased predisposition to systemic lupus erythematosus (SLE) from an amino acid substitution in the 196 allele at exon 6. Genomic testing of 81 SLE patients and 207 healthy patients in a Japanese study showed that 37% of SLE patients had a polymorphism at position 196 of exon 6, compared to 18.8% of healthy patients. These findings suggest that even one 196R allele results in an increased risk for SLE. [ 13 ] TNFRSF1B has been shown to interact with a number of intracellular signalling proteins, including TRAF2 .
https://en.wikipedia.org/wiki/Tumor_necrosis_factor_receptor_2
Tuna Altınel is a Turkish mathematician , born February 12, 1966, in Istanbul , who has worked at the University Lyon 1 in France since 1996. [ 1 ] He is a specialist in group theory and mathematical logic . With Alexandre Borovik and Gregory Cherlin , he proved a major case of the Cherlin–Zilber conjecture . [ 2 ] Altınel is active in the Academics for Peace movement, which supports a peaceful resolution of the conflict in south-eastern Turkey and calls for the human rights of the civilian population to be respected. [ 3 ] Accused by the Turkish authorities of membership in a terrorist organization, Altınel was imprisoned on May 11, 2019, at the Kepsut prison in Turkey, and later released (see below). [ 4 ] After undergraduate studies in mathematics and computer science at Boğaziçi University , Istanbul, Altınel received his doctorate from Rutgers University ( New Jersey , United States) under the direction of Gregory Cherlin. [ 5 ] In 1996 he joined the department of mathematics of the University Lyon 1 as maître de conférences , and completed his French habilitation in 2001. [ 1 ] Altınel has written 26 mathematical articles, principally on the subject of groups in model theory , more particularly groups of finite Morley rank and the Cherlin–Zilber Algebraicity Conjecture , concerning the structure of the simple groups of finite Morley rank. He is joint author, with Alexandre Borovik and Gregory Cherlin, of a book in which this conjecture is proved in the case of infinite 2-rank , after the development of a body of machinery analogous to certain chapters of finite simple group theory. [ 2 ] Altınel's doctoral advisees include Éric Jaligot , winner of the 2000 Sacks Prize , [ 6 ] a prize given annually for an outstanding doctoral thesis in mathematical logic [ 7 ] (the doctoral thesis was supervised jointly by Tuna Altınel and Bruno Poizat [ 8 ] ). He is active in the domain of scientific cooperation with Turkey; in particular, he was an organizer of an international mathematics conference held in Istanbul in 2016 in honor of Alexandre Borovik and Ali Nesin ( Leelavati Prize winner, 2018). [ 9 ] Altınel has been an active supporter of a peaceful resolution of the conflict in southeastern Turkey and of human rights and civil liberties in Turkey. [ 10 ] With regard to the Kurdish conflict in southeastern Turkey, he was one of 116 academics who signed a 2003 letter in support of a peaceful resolution of that conflict, [ 11 ] among the first group of signatories of a similar peace petition in January 2016 that garnered 1128 signatures at the time of its promulgation under the title "We will not be parties to this crime," [ 12 ] among the 132 intellectuals calling for assistance to those wounded in the conflict at Cizre , [ 13 ] and one of 170 academics to sign a letter in 2018 opposing the Afrin operation. [ 14 ] On February 21, 2019, he acted as translator for a former member of parliament of the Peoples' Democratic Party (HDP) at a public meeting in Lyon , France , at which a documentary on the Cizre massacres was shown, followed by a discussion. [ 15 ] With the resumption of active conflict in August 2015 following a period of relative calm, Altınel reached out to the affected community and began to visit the areas involved in September 2015. [ 10 ] His own account of these activities, from subsequent court testimony, is quoted below.
With the trials of the signatories of the January 2016 petition and the broader wave of repression following the attempted coup of July 2016, described in more detail below, questions of academic freedom and freedom of speech became more prominent. Altınel has also been active in this direction. These activities have led to two separate court cases against him in Turkey, and his social media postings have been used to justify the second of these cases. [ 20 ] Altınel was one of the first signatories of the January 2016 peace petition entitled "We will not be parties to this crime!", which was promulgated by the Academics for Peace on January 11, 2016. [ 12 ] The following day, President Erdoğan publicly criticized the signatories, and within a few days 27 had been arrested. [ 21 ] At the same time, foreign reaction was strongly supportive of the signatories. [ 22 ] The peace petition ultimately garnered 2212 signatures of academics, largely in Turkey. [ 3 ] Altınel is one of over 750 signatories from the first group of 1128 who have been prosecuted or sentenced as individuals for that act under Turkish anti-terrorism legislation, through June 2019, [ 23 ] on a charge of "propaganda in support of a terrorist organization." Since 2016 Altınel has been an active and vocal supporter both of the content of this petition and of the civil rights of its signers. In the second hearing in his case, on February 28, 2019, at the 29th Central Criminal Court, Çağlayan Courthouse, Istanbul , Altınel testified that he had aided civilian victims of military operations that took place in the towns placed under military curfew: [ 24 ] Since September 2015, I have traveled several times to a number of provinces, including some of those mentioned in the Peace Petition which I signed. ... I carried bag upon bag of provisions to help the victims of destruction and forced migration, I spoke with those who had lost their homes and relatives. I did all of this on my own initiative, and my principle was as follows: If every Turkish citizen will do what I do, we will come closer to peace. You can find the traces of my efforts where I sojourned in the towns of Sur , Nusaybin , Cizre , Hakkari , and Yüksekova . The Prosecutor may use this as evidence against me. ... I did not simply sign the Peace Petition. I thought about it, felt it, lived it. I wrote that text. [ 25 ] I stand behind every sentence. The sentencing hearing for Altınel's trial for "propaganda on behalf of a terrorist organization" in the context of the Academics for Peace trials is scheduled for July 16, 2019. [ 26 ] On April 12, 2019, on arriving for a visit to Turkey, Altınel's passport was confiscated at the airport. On May 10 he requested a new passport at the Balıkesir prefecture, was taken into custody for interrogation, and was placed in pre-trial detention the following day. It was learned later that a new charge had been filed against him on April 30, 2019, at the prosecutor general's office in Balıkesir. [ 27 ] This new charge is "membership in a terrorist organization", [ 28 ] based on his participation on February 21, 2019, in a public meeting in Villeurbanne , near Lyon, France. This meeting was organized by the local Kurdish Society; a documentary was shown on the subject of the Cizre massacres, and a discussion was held with a former member of the Turkish parliament, Faysal Sarıyıldız (HDP), now in exile. [ 32 ] At that public meeting, Altınel acted as translator for the former MP.
[ 15 ] On May 8, Füsun Üstel was incarcerated and began serving a 15-month sentence for signing the peace petition of January 2016. Altınel was arrested on May 11. [ 33 ] After his first hearing on the new charge was scheduled for July 30, 2019, he was released. [ 34 ] Altınel's May 11 arrest was widely reported in the press, notably in France and in Turkey. Some early reports of the arrest in Turkey, quoting variously Altınel's lawyer or Academics for Peace, put the case in the context of the Academics for Peace trials and the conference held in Lyon, France. [ 35 ] Other reports, originating with the İhlas News Agency and reported on Habertürk and elsewhere, described the case as the capture of a wanted terrorist; one of these reports stated that an anti-terrorist operation had captured five members of the Gülen Movement and the Kurdistan Workers' Party (PKK), listing Altınel's arrest as the fifth. [ 36 ] The first article in France, in Mediapart , [ 37 ] appeared that same day and was followed rapidly by articles in Le Progrès , Le Monde , 20 minutes , Lyon Capitale , Lyon Mag , Le Figaro Étudiant , Le Figaro , Le Canard enchaîné , Libération , and L’Humanité . [ 38 ] Altınel was featured as L’Humanité's Man of the Day on May 16, 2019. Euronews TV reported on the case on May 30, 2019. [ 39 ] Less than two weeks after the confiscation of Altınel's passport, on April 23, 2019, the French Applied Mathematics Society and the French Mathematical Society wrote jointly to President Macron of France . [ 40 ] On May 11, the day of Altınel's arrest, the Turkish Consul General in Lyon, Mehmet Özgür Çakar, stated "Tuna Altınel organized, and moderated, a meeting in Lyon consisting entirely of propaganda in favor of the PKK. ... It is possible that this had a negative effect on his situation." [ 41 ] The consul also noted that the PKK remains classified as a terrorist group by Ankara, the United States , and the European Union . [ 42 ] The French Ministry of Europe and Foreign Affairs expressed its "disquiet" on May 13, 2019. [ 43 ] A support committee formed at Lyon created a website to document the evolution of the affair, and on May 23 the committee launched a petition in favor of the liberation of Altınel, with over 6000 signatories as of June 2019, predominantly academics, along with approximately 60 members of the French National Assembly . [ 44 ] Professional societies from a number of countries, including mathematics societies in the United States, France, Great Britain, Germany, Austria, Italy, and Belgium, as well as the European Mathematical Society , the Association for Symbolic Logic , and the Committee of Concerned Scientists , have issued statements in support of Altınel. [ 45 ] On June 11, 2019, the French mathematician and politician Cédric Villani ( LREM ), Member of Parliament for Essonne's fifth district and Fields medalist , a colleague and outspoken supporter of Altınel, [ 46 ] posed a question on the subject during a session of the National Assembly to the Minister for Europe and Foreign Affairs, Jean-Yves Le Drian , who stated that the government was committed to doing "everything in its power" in favor of his liberation, notably on the occasion of his June 13 visit to Turkey to consult his counterpart there. [ 47 ]
https://en.wikipedia.org/wiki/Tuna_Altınel
A tunable metamaterial is a metamaterial with a variable response to an incident electromagnetic wave. This includes remotely controlling how an incident electromagnetic wave (EM wave) interacts with the metamaterial, which translates into the capability to determine whether the EM wave is transmitted, reflected, or absorbed. In general, the lattice structure of a tunable metamaterial is adjustable in real time, making it possible to reconfigure a metamaterial device during operation. This encompasses developments beyond the bandwidth limitations of left-handed materials by constructing various types of metamaterials. Ongoing research in this domain includes electromagnetic band gap metamaterials (EBG), also known as photonic band gap (PBG) materials, and negative refractive index materials (NIM). [ 1 ] [ 2 ] [ 3 ] Since natural materials exhibit very weak coupling through the magnetic component of the electromagnetic wave , artificial materials that exhibit strong magnetic coupling are being researched and fabricated . These artificial materials are known as metamaterials. The first of these were fabricated (in the lab) with an inherent, limited response to only a narrow frequency band at any given time; their main purpose was the practical demonstration of metamaterials. The resonant nature of metamaterials results in frequency dispersion and narrow-bandwidth operation, where the center frequency is fixed by the geometry and dimensions of the rudimentary elements comprising the metamaterial composite. These were followed by demonstrations of metamaterials that were tunable only by changing the geometry and/or position of their components. These in turn have been followed by metamaterials that are tunable over wider frequency ranges, along with strategies for varying the frequencies of a single medium (metamaterial). This is in contrast to a fixed-frequency metamaterial, whose response is determined by the parameters imbued during fabrication. [ 3 ] [ 4 ] Metamaterial-based devices could come to include filters, modulators, amplifiers, transistors, and resonators, among others. The usefulness of such a device could be extended tremendously if the metamaterial's response characteristics can be dynamically tuned. Control of the effective electromagnetic parameters of a metamaterial is possible through externally tunable components. Studies have examined the ability to control the response of individual particles using tunable devices such as varactor diodes, semiconductor materials, and barium strontium titanate (BST) thin films. [ 5 ] For example, in 2008 H. T. Chen and colleagues were able to fabricate a repeating split-ring resonator (SRR) cell with semiconductor material in the gaps. This initial step in metamaterial research expanded the spectral range of operation for a given, specific metamaterial device, and also opened the door to implementing new device concepts. The incorporation of the semiconductor material in this way is notable because of the higher frequency ranges at which this metamaterial operates. It is suitable at terahertz (THz) and higher frequencies, where the entire metamaterial composite may have more than 10,000 unit cells, along with bulk-vertical integration of the tuning elements. Strategies employed for tuning at lower frequencies would not be possible because of the number of unit cells involved. The semiconductor material, such as silicon, is controlled by photoexcitation. This in turn controls, or alters, the effective size of the capacitor and tunes the capacitance.
The whole structure is not just semiconductor material. It was termed a 'hybrid', because the semiconductor material was fused with dielectric material: a silicon-on-sapphire (SOS) wafer. Wafers were then stacked to fabricate the whole structure. [ 6 ] A. Degiron et al. appear to have used a similar strategy in 2007. [ note 1 ] A multielement tunable magnetic medium was reported by Zhao et al. This structure immersed SRRs in liquid crystals and achieved a 2% tunable range. [ note 2 ] In BST-loaded SRRs comprising a tunable metamaterial, all of the tunability is encapsulated within the SRR circuit. [ 5 ] In a section below, a research team is reported to have built a tunable negative index medium using copper wires and ferrite sheets. The negative permeability behavior appears to be dependent on the location and bandwidth of the ferrimagnetic resonance, a break from wholly non-magnetic materials, which produces a notable negative index band. A coil or permanent magnet is needed to supply the magnetic field bias for tuning. Reported tuning strategies include electrical tuning, [ 6 ] magnetostatic control, [ 6 ] and optical pumping. [ 6 ] Yttrium iron garnet (YIG) films allow for a continuously tunable negative permeability , which results in a tunable frequency range on the higher-frequency side of the ferromagnetic resonance of the YIG. Complementary negative permittivity is achieved using a single periodic array of copper wires. Eight wires were spaced 1 mm apart, and a ferromagnetic film of multi-layered YIG at 400 μm thickness was placed in a K band waveguide. The YIG film was applied to both sides of a gadolinium gallium garnet substrate of 0.5 mm thickness. Ferromagnetic resonance was induced when the external H magnetic field was applied along the X axis. [ 3 ] The external magnetic field was generated with an electromagnet . Pairs of E–H tuners were connected before and after the waveguide containing the NIM composite. The tunability was demonstrated from 18 to 23 GHz . The theoretical analysis which followed closely matched the experimental results. [ 3 ] An air gap was built into the structure between the array of copper wires and the YIG . This reduces coupling with the ferrite (YIG) material. When negative permeability is achieved across a range of frequencies, the interaction of the ferrite with the wires in close proximity reduces the net current flow in the wires. This is the same as moving toward positive permittivity, which would be an undesired result, as the material would no longer be a NIM. The separation also reduces the effective loss of the dielectric , induced by the interaction of the wire's self-field with permeability. Furthermore, there are two sources of conduction in the copper wire . First, the electric field in a ( microwave ) waveguide creates a current in the wire. Second, any arbitrary magnetic field created by the ferrite when it moves into a perpendicular configuration induces a current . Additionally, at frequencies where μ is negative, the induced microwave magnetic field is opposite to the field excited in the TE10 mode of propagation in a waveguide . Hence, the induced current is opposite to the current resulting from the electric field in the waveguide. [ 3 ] In aerospace applications, for example, negative index metamaterials are likely candidates for tunable, compact and lightweight phase shifters .
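Before turning to applications, a rough numeric sketch of why a bias field tunes the YIG medium described above: for a thin film magnetized in-plane, the ferromagnetic resonance follows the Kittel formula, so the negative-permeability band just above the FMR moves with the applied field. The constants below (gyromagnetic ratio, 4πMs for YIG) are textbook approximations, not values taken from the experiment above.

```python
# Kittel-formula sketch of field-tuned FMR for an in-plane-magnetized
# YIG film. GAMMA and FOUR_PI_MS are textbook approximations.
import math

GAMMA = 2.8e6        # gyromagnetic ratio, Hz per oersted (approx.)
FOUR_PI_MS = 1750.0  # saturation magnetization of YIG, gauss (approx.)

def fmr_frequency_hz(h_oe: float) -> float:
    """Kittel FMR frequency for a thin film magnetized in its plane."""
    return GAMMA * math.sqrt(h_oe * (h_oe + FOUR_PI_MS))

for h in (5000, 6000, 7000):  # electromagnet bias field, oersted
    print(f"H = {h} Oe -> FMR ~ {fmr_frequency_hz(h) / 1e9:.1f} GHz")
# ~16.3, 19.1, and 21.9 GHz: sliding the bias field sweeps the
# negative-permeability band across the K band.
```

With these assumed values, bias fields of roughly 5-7 kOe place the FMR near 16-22 GHz, consistent in spirit with the 18 to 23 GHz tuning range reported above.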
Because the designated metamaterials can handle the appropriate power levels, have strong dispersion characteristics, and are tunable in the microwave range, they show potential as desirable phase shifters. [ 7 ] The YIG negative index metamaterial is a composite which actually utilizes ferrite material. As a metamaterial, the ferrite produces a resonant (real) magnetic permeability μ′ that is large enough to be comparable to that of a conventional ferrite phase shifter. The advantage of using a ferrite NIM material for phase shifter applications is that it allows use of a ferrite in the negative magnetic permeability region near the FMR (ferromagnetic resonance) frequency, where μ′ is relatively high, while still maintaining low losses. Near the FMR frequency, the magnitude of μ′ is larger than that at frequencies away from it. Assuming the loss factor to be about the same for the NIM and the conventional ferrite phase shifter, we would expect a much improved performance using the NIM composite, since the phase shifts would be significantly higher owing to the higher differential μ′ . [ 7 ] Tuning in the near-infrared range is accomplished by adjusting the permittivity of an attached nematic liquid crystal . The liquid crystal material appears to be used as both a substrate and a jacket for a negative index metamaterial . The metamaterial can be tuned from negative index values, to zero index, to positive index values. In addition, negative index values can be increased or decreased by this method. [ 8 ] [ 9 ] Sub-wavelength metal arrays, essentially another form of metamaterial, usually operate in the microwave and optical frequencies. A liquid crystal is both transparent and anisotropic at those frequencies. In addition, a liquid crystal has the inherent properties both to be intrinsically tunable and to provide tuning for the metal arrays, which can readily serve as electrodes for applying switching voltages. [ 10 ] Areas of active research in optical materials include metamaterials capable of negative values of the index of refraction (NIMs) and metamaterials capable of a zero index of refraction (ZIMs). The complicated steps required to fabricate these nano-scale metamaterials have led to the desire for fabricated, tunable structures capable of the prescribed spectral ranges or resonances. The most commonly applied scheme to achieve these effects is electro-optical tuning . Here the change in refractive index is proportional either to the applied electric field or to the square modulus of the electric field: the Pockels effect and the Kerr effect , respectively. However, to achieve these effects, electrodes must be built in during the fabrication process. This introduces problematic complexity into material formation techniques. Another alternative is to employ a nonlinear optical material as one of the constituents of the system and depend on the optical field intensity to modify the refractive index or magnetic parameters. [ 11 ] Ring resonators are optical devices designed to show resonance at specific wavelengths. In silicon-on-insulator layered structures, they can be very small, exhibit a high Q factor and have low losses, which make them efficient wavelength filters. The goal is to achieve a tunable refractive index over a larger bandwidth.
[ 12 ] A novel approach has been proposed for efficient tuning of the transmission characteristics of metamaterials through a continuous adjustment of the lattice structure, and has been confirmed experimentally in the microwave range. [ 13 ] Metamaterials were originally researched as passive response materials . The passive response was, and still is, determined by the patterning of the metamaterial elements. In other words, the majority of research has focused on the passive properties of the novel transmission, e.g., the size and shape of the inclusions, the effects of metal film thickness, hole geometry, and periodicity , with passive responses such as a negative electric response, a negative index, or a gradient index. In addition, the resonant response can be significantly affected by depositing a dielectric layer on metal hole arrays and by doping a semiconductor substrate. The result is significant shifting of the resonance frequency. However, even these last two methods are part of passive material research. [ 14 ] Electromagnetic metamaterials can be viewed as structured composites with patterned metallic subwavelength inclusions. As mesoscopic physical systems, these are built starting from the unit cell level. These unit cells are designed to yield prescribed electromagnetic properties. A characteristic of this type of metamaterial is that the individual components have a resonant (coupling) response to the electric, magnetic or both components of the electromagnetic radiation of the source. The EM metamaterial, as an artificially designed transmission medium, has so far delivered desired responses at frequencies from the microwave through to the near visible. [ 6 ] The introduction of a natural semiconductor material within, or as part of, each metamaterial cell results in a new design flexibility. The incorporation, application, and location of the semiconductor material are strategically planned so that it is strongly coupled at the resonance frequency of the metamaterial elements. The hybrid metamaterial composite is still a passive material. However, the coupling with the semiconductor material then allows for external stimulus and control of the hybrid system as a whole, which produces alterations in the passive metamaterial response. External excitation is produced in the form of, for example, photoconductivity, nonlinearity, or gain in the semiconductor material. [ 6 ] Terahertz (THz) metamaterials can show a tunable spectral range in which the magnetic permeability reaches negative values. These values were established both theoretically and experimentally. The demonstrated principle represents a step toward a metamaterial with a negative refractive index capable of continuously covering a broad range of THz frequencies, and opens a path for the active manipulation of millimeter and submillimeter beams. [ 15 ] Frequency selective surfaces ( FSS ) have become an alternative to the fixed-frequency metamaterial, in which static geometries and spacings of unit cells determine the frequency response of a given metamaterial. Because arrayed unit cells maintain static positions throughout operation, a new set of geometrical shapes and spacings would have to be embedded in a newly fabricated material for each different radiated frequency and response . Instead, FSS-based metamaterials allow for optional changes of frequencies in a single medium (metamaterial) rather than a restriction to a fixed frequency response.
[ 4 ] Frequency selective surfaces can be fabricated as planar 2-dimensional periodic arrays of metallic elements with specific geometrical shapes, or can be periodic apertures in a metallic screen. The transmission and reflection coefficients for these surfaces depend on the frequency of operation and may also depend on the polarization and the angle of incidence of the electromagnetic wave striking the material. The versatility of these structures is shown by their having frequency bands at which a given FSS is completely opaque ( stopbands ) and other bands at which the same surface allows wave transmission . [ 16 ] An example of where this alternative is highly advantageous is in deep space, or with a satellite or telescope in orbit . The expense of regular space missions to access a single piece of equipment for tuning and maintenance would be prohibitive. Remote tuning , in this case, is advantageous. [ 4 ] FSS were first developed to control the transmission and reflection characteristics of an incident radiation wave . This has resulted in smaller cell size along with increases in bandwidth and the capability to shift frequencies in real time for artificial materials . [ 4 ] This type of structure can be used to create a metamaterial surface with the intended application of artificial magnetic conductors or applications involving boundary conditions . Another application is as a stopband device for surface wave propagation along the interface. This is because surface waves are created as a consequence of an interface between two media having dissimilar refractive indices . Depending on the application of the system that includes the two media, there may be a need to attenuate surface waves or to utilize them. [ 17 ] An FSS-based metamaterial employs a (miniature) model of equivalent LC circuitry . At low frequencies the physics of the interactions is essentially defined by the LC model analysis and numerical simulation . This is also known as the static LC model. At higher frequencies the static LC concepts no longer apply, due to the dependence on phasing . When the FSS is engineered for electromagnetic band-gap ( EBG ) characteristics, the FSS is designed to enlarge its stopband properties in relation to dispersive , surface wave (SW) frequencies (microwave and radio frequencies). Furthermore, as an EBG it is designed to reduce its dependence on the propagating direction of the surface wave traveling across the surface (interface). [ 17 ] A type of FSS-based metamaterial has the interchangeable nomenclature Artificial Magnetic Conductor (AMC) or High Impedance Surface (HIS). The HIS, or AMC, is an artificial, metallic, electromagnetic structure. The structure is designed to be selective in supporting surface wave currents, unlike conventional metallic conductors. It has applications for microwave circuits and antennas. [ 18 ] [ 19 ] [ 20 ] As an antenna ground plane it suppresses the propagation of surface waves , and it is deployed as an improvement over the flat metal sheet as a ground plane , or reflector. Hence, this strategy tends to upgrade the performance of the selected antenna. [ 18 ] [ 19 ] [ 20 ] Surface waves of sufficient strength, which propagate on a metal ground plane, will reach the edge and propagate into free space , creating multi-path interference . In contrast, the HIS surface suppresses the propagation of surface waves.
Furthermore, control of the radio frequency or microwave radiation pattern is efficiently increased, and mutual coupling between antennas is also reduced. [ 18 ] [ 19 ] [ 20 ] When employing conventional ground planes as the experimental control, the HIS surface exhibits a smoother radiation pattern, an increase in the gain of the main lobe , a decrease in undesirable return radiation, and a decrease in mutual coupling. [ 18 ] An HIS, or AMC, can be described as a type of electromagnetic band gap (EBG) material, or a type of synthetic composite that is intentionally structured with a magnetic conductor surface for an allotted, but defined, range of frequencies . AMC or HIS structures often emerge from an engineered periodic dielectric base along with metallization patterns designed for microwave and radio frequencies . The metallization pattern is usually determined by the intended application of the AMC or HIS structure. Furthermore, two inherent notable properties, which cannot be found in natural materials, have led to a significant number of microwave circuit applications. [ 19 ] [ 20 ] First, AMC or HIS surfaces are designed to have an allotted set of frequencies over which electromagnetic surface waves and currents will not be allowed to propagate . These materials are then both beneficial and practical as antenna ground planes , small flat signal processing filters , or filters as part of waveguide structures. For example, AMC surfaces as antenna ground planes are able to effectively attenuate undesirable wave fluctuations, or undulations, while producing good radiation patterns, because the material can suppress surface wave propagation within the prescribed range of forbidden frequencies. Second, AMC surfaces have very high surface impedance within a specific frequency range , where the tangential magnetic field is small even with a large electric field along the surface. Therefore, an AMC surface can have a reflection coefficient of +1. [ 19 ] [ 20 ] In addition, the reflection phase of incident light is part of the AMC and HIS tool box. [ note 3 ] The reflection phase is the phase of the reflected electric field at normal incidence, relative to the phase of the electric field impinging at the interface of the reflecting surface. The reflection phase varies continuously between +180° and −180° with frequency, crossing zero at one frequency , where resonance occurs. A notable characteristic is that the useful bandwidth of an AMC is generally defined as +90° to −90° on either side of its central frequency. [ 21 ] Thus, due to this unusual boundary condition, in contrast to the case of a conventional metal ground plane , an AMC surface can function as a new type of ground plane for low-profile wire antennas ( wireless communication systems ). For example, even though a horizontal wire antenna is extremely close to an AMC surface, the current on the antenna and its image current in the ground plane are in phase, rather than out of phase, thereby strengthening the radiation. [ 20 ] [ 21 ] [ 22 ] Frequency selective surface (FSS) materials can be utilized as band gap materials in the surface wave domain, at microwave and radio frequency wavelengths. Support of surface waves is a given property of metals . These are propagating electromagnetic waves bound to the interface between the metal surface and the air. Surface plasmons occur at optical frequencies, but at microwave frequencies they are the normal currents that occur on any electrical conductor .
[ 17 ] [ 19 ] At radio frequencies, the fields associated with surface waves can extend thousands of wavelengths into the surrounding space, and they are often best described as surface currents. They can be modeled from the viewpoint of an effective dielectric constant, or an effective surface impedance. [ 19 ] For example, a flat metal sheet always has low surface impedance . However, by incorporating a special texture on a conducting surface, a specially designed geometry , it is possible to engineer a high surface impedance and alter its electromagnetic radio-frequency properties . The protrusions are arranged in a two-dimensional lattice structure, and can be visualized as thumbtacks protruding from the surface. [ 19 ] Because the protrusions are small compared to the operating wavelength , the structure can be described using an effective medium model , and its electromagnetic properties can be described using lumped-circuit elements ( capacitors and inductors ). The protrusions behave as a network of parallel resonant LC circuits , which act as a two-dimensional electric filter to block the flow of currents along the sheet. [ 19 ] This structure can then serve as an artificial magnetic conductor (AMC), because of its high surface impedance within a certain frequency range. In addition, as an artificial magnetic conductor it has a forbidden frequency band, over which surface waves and currents cannot propagate. Therefore, AMC surfaces have good radiation patterns without unwanted ripples, based on the suppression of surface wave propagation within the band gap frequency range. [ 20 ] The surface impedance is derived from the ratio of the electric field at the surface to the magnetic field at the surface. When a texture is applied to the metal surface, the surface impedance is altered, and its surface wave properties are changed. At low frequencies, it is inductive , and supports transverse magnetic (TM) waves. At high frequencies, it is capacitive, and supports transverse electric (TE) waves. Near the LC resonance frequency, the surface impedance is very high. In this region, waves are not bound to the surface; instead, they radiate into the surrounding space . [ 19 ] [ 23 ] A high-impedance surface has been fabricated as a printed circuit board. The structure consists of a triangular lattice of hexagonal metal plates, connected to a solid metal sheet by vertical conducting vias . [ 19 ] The uniplanar compact photonic-bandgap (UC-PBG) structure has been proposed, simulated, and constructed in the lab to overcome limitations of planar circuit technology. Like photonic bandgap structures, it is etched into the ground plane of the microstrip line. The geometry is square metal pads, each with four connecting branches forming a distributed LC circuit. [ 24 ] [ 25 ]
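The parallel resonant LC picture above can be made quantitative with a short sketch: model the textured surface as a sheet impedance Z(ω) = jωL / (1 − ω²LC) and compute the reflection phase at normal incidence. The L and C values here are illustrative assumptions; in a real mushroom-type HIS they follow from the patch geometry, gap capacitance, and via inductance.

```python
# Effective-medium sketch of a high-impedance surface: a parallel
# resonant LC sheet, Z(w) = j*w*L / (1 - w**2 * L * C). L and C are
# assumed illustrative values, not measured parameters.
import cmath, math

ETA0 = 377.0      # impedance of free space, ohms
L = 2e-9          # effective sheet inductance, henries (assumed)
C = 0.05e-12      # effective sheet capacitance, farads (assumed)

def reflection_phase_deg(f_hz: float) -> float:
    w = 2 * math.pi * f_hz
    zs = 1j * w * L / (1 - w * w * L * C)   # sheet impedance
    gamma = (zs - ETA0) / (zs + ETA0)       # normal-incidence reflection
    return math.degrees(cmath.phase(gamma))

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # resonance, ~15.9 GHz here
# (f0 itself is skipped: the ideal lossless model diverges exactly there)
for f in (0.5 * f0, 0.9 * f0, 1.1 * f0, 1.5 * f0):
    print(f"{f / 1e9:5.1f} GHz -> reflection phase {reflection_phase_deg(f):+7.1f} deg")
# The phase falls from near +180 deg at low frequency through 0 deg at
# f0 to near -180 deg above it; the usable AMC band is where it stays
# within +/-90 deg, matching the bandwidth definition in the text.
```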
https://en.wikipedia.org/wiki/Tunable_metamaterial
Tunable resistive pulse sensing ( TRPS ) is a single-particle technique used to measure the size, concentration and zeta potential of particles as they pass through a size-tunable nanopore . [ 1 ] [ 2 ] The technique adapts the principle of resistive pulse sensing , which monitors current flow through an aperture, combining it with tunable nanopore technology, which allows the passage of ionic current and particles to be regulated by adjusting the pore size. [ 3 ] [ 4 ] The addition of the tunable nanopore allows for the measurement of a wider range of particle sizes and improves accuracy. [ 3 ] [ 4 ] Particles crossing a nanopore are detected one at a time as a transient change in the ionic current flow, which is denoted as a blockade event, with its amplitude denoted as the blockade magnitude. As blockade magnitude is proportional to particle size, accurate particle sizing can be achieved after calibration with a known standard, composed of particles of known size and concentration. For TRPS, carboxylated polystyrene particles are often used. [ 5 ] Nanopore-based detection allows particle-by-particle assessment of complex mixtures. [ 5 ] [ 6 ] [ 7 ] By selecting an appropriately sized nanopore and adjusting its stretch, the nanopore size can be optimized for the particle size of interest, improving measurement accuracy. Adjustments to nanopore stretch, in combination with fine control of pressure and voltage, allow TRPS to determine sample concentration [ 8 ] and to accurately derive individual particle zeta potential [ 9 ] in addition to particle size information. TRPS was developed by Izon Science Limited , producer of commercially available nanopore-based particle characterization systems. [ 10 ] Izon Science Limited currently sells one TRPS device, known as the "Exoid". Previous devices include the "qNano", the "qNano Gold" and the "qViron". These systems have been applied to measure a wide range of biological and synthetic particle types, including viruses and nanoparticles. TRPS has been applied in a range of academic and industrial research fields.
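As a hedged sketch of the calibration step described above: if one assumes the blockade magnitude scales with particle volume, an unknown particle's diameter follows from the cube root of the ratio of its blockade to that of a calibration particle of known diameter. The numbers below are illustrative, not instrument specifications.

```python
# Calibration-based sizing sketch: assumes blockade magnitude scales
# with particle volume (d**3). Values are illustrative only.
def particle_diameter_nm(blockade_na: float,
                         cal_blockade_na: float,
                         cal_diameter_nm: float) -> float:
    """Estimate particle diameter from the blockade ratio."""
    return cal_diameter_nm * (blockade_na / cal_blockade_na) ** (1 / 3)

# Suppose 210 nm carboxylated polystyrene calibration beads give a
# mean blockade of 1.0 nA, and a sample particle blocks 0.30 nA:
print(particle_diameter_nm(0.30, 1.0, 210))  # ~140.6 nm
```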
https://en.wikipedia.org/wiki/Tunable_resistive_pulse_sensing
A tuned radio frequency receiver (or TRF receiver ) is a type of radio receiver that is composed of one or more tuned radio frequency (RF) amplifier stages followed by a detector ( demodulator ) circuit to extract the audio signal, and usually an audio frequency amplifier. This type of receiver was popular in the 1920s. Early examples could be tedious to operate, because when tuning in a station each stage had to be individually adjusted to the station's frequency , but later models had ganged tuning, the tuning mechanisms of all stages being linked together and operated by just one control knob. By the mid-1930s, it had been replaced by the superheterodyne receiver patented by Edwin Armstrong . The TRF receiver was patented in 1916 by Ernst Alexanderson . His concept was that each stage would amplify the desired signal while reducing the interfering ones. Multiple stages of RF amplification would make the radio more sensitive to weak stations, and the multiple tuned circuits would give it a narrower bandwidth and more selectivity than the single-stage receivers common at that time. All tuned stages of the radio must track and tune to the desired reception frequency. This is in contrast to the modern superheterodyne receiver, which must only tune the receiver's RF front end and local oscillator to the desired frequencies; all the following stages work at a fixed frequency and do not depend on the desired reception frequency. Antique TRF receivers can often be identified by their cabinets. They typically have a long, low appearance, with a flip-up lid for access to the vacuum tubes and tuned circuits . On their front panels there are typically two or three large dials, each controlling the tuning for one stage. Inside, along with several vacuum tubes, there will be a series of large coils, usually mounted with their axes at right angles to each other to reduce magnetic coupling between them. A problem with the TRF receiver built with triode vacuum tubes is the triode's interelectrode capacitance, which allows energy in the output circuit to feed back into the input. That feedback can cause instability and oscillation that frustrate reception and produce squealing or howling noises in the speaker. In 1922, Louis Alan Hazeltine invented the technique of neutralization , which uses additional circuitry to partially cancel the effect of the interelectrode capacitance. [ 1 ] Neutralization was used in the popular Neutrodyne series of TRF receivers. Under certain conditions, "the neutralization is substantially independent of frequency over a wide frequency band." [ 2 ] "Perfect neutralization cannot be maintained in practice over a wide band of frequencies because leakage inductances and stray capacities" are not completely canceled. [ 3 ] The later development of the tetrode and pentode vacuum tubes minimized the effect of interelectrode capacitances and could make neutralization unnecessary; the additional electrodes in those tubes shield the plate and grid and minimize feedback. [ 4 ] The classic TRF receivers of the 1920s and 30s usually consisted of three sections: one or more tuned RF amplifier stages, a detector, and one or more audio amplifier stages. Each tuned RF stage consists of an amplifying device, a triode (or in later sets a tetrode ) vacuum tube , and a tuned circuit which performs the filtering function. The tuned circuit consisted of an air-core RF coupling transformer which also served to couple the signal from the plate circuit of one tube to the input grid circuit of the next tube.
One of the windings of the transformer had a variable capacitor connected across it to make a tuned circuit . A variable capacitor (or sometimes a variable coupling coil called a variometer ) was used, with a knob on the front panel to tune the receiver. The RF stages usually had identical circuits to simplify design. Each RF stage had to be tuned to the same frequency, so the capacitors had to be tuned in tandem when bringing in a new station. In some later sets the capacitors were "ganged", mounted on the same shaft or otherwise linked mechanically so that the radio could be tuned with a single knob, but in most sets the resonant frequencies of the tuned circuits could not be made to "track" well enough to allow this, and each stage had its own tuning knob. [ 5 ] The detector was usually a grid-leak detector . Some sets used a crystal detector ( semiconductor diode ) instead. Occasionally, a regenerative detector was used, to increase selectivity. Some TRF sets that were listened to with earphones did not need an audio amplifier, but most sets had one to three transformer-coupled or RC-coupled audio amplifier stages to provide enough power to drive a loudspeaker . The schematic diagram shows a typical TRF receiver. This particular example uses six triodes. It has two radio frequency amplifier stages, one grid-leak detector/amplifier and three class ‘A’ audio amplifier stages. There are three tuned circuits, T1-C1, T2-C2, and T3-C3 . The second and third tuning capacitors, C2 and C3 , are ganged together (indicated by the line linking them) and controlled by a single knob, to simplify tuning. Generally, two or three RF amplifiers were required to filter and amplify the received signal enough for good reception. Terman characterizes the TRF's disadvantages as "poor selectivity and low sensitivity in proportion to the number of tubes employed. They are accordingly practically obsolete." [ 6 ] Selectivity requires narrow bandwidth, but the bandwidth of a filter with a given Q factor increases with frequency. So achieving a narrow bandwidth at a high radio frequency required high-Q filters or many filter sections. Constant sensitivity and bandwidth across an entire broadcast band were rarely achieved. In contrast, a superheterodyne receiver translates the incoming high radio frequency to a lower intermediate frequency, which does not change. The problem of achieving constant sensitivity and bandwidth over a range of frequencies arises only in one circuit (the first stage) and is therefore considerably simplified. The major problem with the TRF receiver, particularly as a consumer product, was its complicated tuning. All the tuned circuits need to track to maintain the narrow-bandwidth tuning. Keeping multiple tuned circuits aligned while tuning over a wide frequency range is difficult. In the early TRF sets the operator had to perform that task, as described above. A superheterodyne receiver only needs to track the RF and LO stages; the onerous selectivity requirements are confined to the IF amplifier, which is fixed-tuned. During the 1920s, an advantage of the TRF receiver over the regenerative receiver was that, when properly adjusted, it did not radiate interference . [ 7 ] [ 8 ] The popular regenerative receiver, in particular, used a tube with positive feedback operated very close to its oscillation point, so it often acted as a transmitter, emitting a signal at a frequency near the frequency of the station it was tuned to.
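A small numeric illustration of Terman's point: for a tuned circuit of fixed Q, the bandwidth BW = f/Q widens as the set is tuned up the band, so selectivity achieved at the low end of the AM broadcast band is lost at the high end. Q = 100 is an assumed, plausible value for a 1920s air-cored tuned circuit, not a figure from the sources above.

```python
# Fixed-Q bandwidth across the AM broadcast band. Q = 100 is an
# assumed, plausible value for a 1920s air-cored tuned circuit.
Q = 100.0
for f_khz in (550.0, 1000.0, 1500.0):
    bandwidth_khz = f_khz / Q      # BW = f / Q for a single tuned circuit
    print(f"{f_khz:6.0f} kHz -> bandwidth {bandwidth_khz:4.1f} kHz")
# 5.5 kHz at the bottom of the band but 15 kHz at the top: nearly
# three times wider. A superheterodyne avoids this by doing its
# selective filtering at one fixed intermediate frequency.
```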
[ 7 ] [ 8 ] This produced audible heterodynes , shrieks and howls, in other nearby receivers tuned to the same frequency, bringing criticism from neighbors. [ 7 ] [ 8 ] In an urban setting, when several regenerative sets in the same block or apartment house were tuned to a popular station, the station could be virtually impossible to hear. [ 7 ] [ 8 ] Britain, [ 9 ] and eventually the US, passed regulations that prohibited receivers from radiating spurious signals, which favored the TRF. Although the TRF design has been largely superseded by the superheterodyne receiver, with the advent of semiconductor electronics in the 1960s the design was "resurrected" and used in some simple integrated radio receivers for hobbyist radio projects, kits, and low-end consumer products. One example is the ZN414 TRF radio integrated circuit, introduced by Ferranti in 1972.
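The selectivity problem described above can be made concrete. A single LC tuned circuit resonates at f = 1/(2π√(LC)), and its 3 dB bandwidth at quality factor Q is roughly f/Q, so at fixed Q the passband widens as the set is tuned toward the high end of the band. The short Python sketch below illustrates this; the coil value, capacitor range, and Q are illustrative assumptions, not figures from the article.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency (Hz) of an ideal parallel LC tuned circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def bandwidth(f, Q):
    """Approximate 3 dB bandwidth (Hz) of a single tuned circuit."""
    return f / Q

L = 200e-6  # coil inductance in henries (assumed value)
Q = 100     # coil quality factor (assumed value)

# An assumed 40-420 pF variable capacitor roughly spans the AM broadcast band.
for C in (420e-12, 150e-12, 40e-12):
    f = resonant_frequency(L, C)
    print(f"C = {C * 1e12:5.0f} pF -> f = {f / 1e3:7.1f} kHz, "
          f"3 dB bandwidth = {bandwidth(f, Q) / 1e3:5.1f} kHz")
```

With these assumed values the bandwidth roughly triples from the bottom to the top of the band, which is why keeping several such stages tracking and uniformly selective was the TRF design's central difficulty.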
https://en.wikipedia.org/wiki/Tuned_radio_frequency_receiver
In electronics and radio , a tuner is a type of receiver subsystem that receives RF transmissions, such as AM or FM broadcasts , and converts the selected carrier frequency into a form suitable for further processing or output, such as to an amplifier or loudspeaker . A tuner is also a standalone home audio product, component , or device called an AM/FM tuner or a stereo tuner that is part of a hi-fi or stereo system, or a TV tuner for television broadcasts. In radio contexts, "tuning" means adjusting the receiver to detect the desired radio signal carrier frequency that a particular radio station uses. Tuners were a major consumer electronics product in the 20th century, but today they are usually integrated into other products, such as stereo or AV receivers or portable radios . A tuner is designed to reject noise and to amplify the wanted signal strongly. [ 3 ] Tuners may be monophonic or stereophonic , and generally output left and right channels of sound. [ 4 ] Tuners generally include a tuning knob or keypad to adjust the frequency , i.e. the intended radio station, measured in megahertz (e.g. 101.1 MHz). Mistuning is the greatest source of distortion in FM reception. [ 3 ] Some models realize manual tuning by means of mechanically operated ganged variable capacitors (gangs). Often several sections are provided on a tuning capacitor, to tune several stages of the receiver in tandem, or to allow switching between different frequency bands. A later method used a potentiometer supplying a variable voltage to varactor diodes in the local oscillator and tank circuits of the front-end tuner, for electronic tuning. Modern radio tuners use a superheterodyne receiver with tuning selected by adjustment of the frequency of a local oscillator. This system shifts the radio frequency of interest to a fixed frequency so that it can be tuned with a fixed-frequency band-pass filter . Still later, phase-locked loop methods were used, with microprocessor control. [ citation needed ] The crystal radio receiver is the simplest kind of radio receiver or tuner, and was the basis for the first commercially successful type of radio product design. Inexpensive and reliable, it was sold in millions of units and became popular in kits used by hobbyists, and was a major factor in the popularity of radio broadcasting around 1920. [ 5 ] [ 6 ] The crystal radio consists of an antenna , a variable inductor and a variable capacitor connected in parallel. This creates a tank circuit which responds to one resonant frequency when combined with a detector , also known as a demodulator (diode D1 in the circuit). [ 7 ] [ 6 ] Stereophonic receivers include a decoder as well. [ 8 ] Vacuum tubes made crystal sets obsolete in the 1920s due to their effective amplification. [ 10 ] From the 1920s until the 1960s, most tuners used a vacuum tube -based design. Manufacturing shifted to solid state electronics in the 1960s, but this did not always result in improved sound quality compared to the older tube tuners. [ 11 ] [ 12 ] The radiogram , which combined a gramophone with a radio, was a predecessor of the hi-fi tuner. [ 13 ] The transistor was invented in 1947 and largely replaced tubes. [ 14 ] The MOSFET was used because it is capable of handling larger inputs than bipolar transistors . [ 8 ] Starting in the 1960s, Japanese transistor radios , which were cruder but cheaper than American designs, began to outcompete the American products in the portable radio market.
Eventually, after switching from germanium to silicon transistors, the Japanese consumer electronics companies achieved a dominant market position. Heathkit , an American company which had supplied popular kits for electronic devices since the 1940s, went out of business in 1980. [ 14 ] [ 15 ] FM broadcasting originated in the United States and was adopted as a worldwide standard. [ 16 ] FM broadcasting in stereo in the USA began in 1961 when authorized by the FCC . This led to greater demand for new radio stations and better technology in radios. The growth of hi-fi stereo systems and car radios in turn led to a boost in FM listening. FM surpassed AM radio in 1978. [ 17 ] FM also doubled the number of stations, enabling specialized broadcasts for different genres of music. It also required consumers to purchase new equipment. [ 13 ] The broadcast audio FM band ( 88 – 108 MHz in most countries) is around 100 times higher in frequency than the AM band and provides enough space for a bandwidth of 50 kHz. This bandwidth is sufficient to transmit both stereo channels with almost the full hearing range . [ citation needed ] The Post–World War II economic expansion in the US led to the growth of hi-fi products, increasingly seen as high tech hardware , with requisite jargon , and separated into premium quality components with high-class aesthetics and marketing. [ 20 ] The 1970s and 80s were the peak period for the hi-fi audio market. [ 11 ] Demand increased for stereo products which fueled the growth of the industry as Japan caught up with the US. [ 21 ] Standalone audio stereo FM tuners are still sought after for audiophile and TV/FM DX applications, especially those produced in the 1970s and early 1980s, when performance and manufacturing standards were higher. [ 22 ] The McIntosh MR78 (1972) is known as one of the first FM tuners precise enough to tune into a weaker station broadcast on the same frequency as another stronger signal. [ 23 ] As a result of circuit miniaturization , tuners began to be integrated with other products such as amplifiers and preamps , and other digital electronics , and marketed as AV or stereo receivers for home theater or hi-fi systems. [ 24 ] [ 25 ] The Japanese development of silicon transistor technology led to popular radio products in the 1980s such as the boombox and the Sony Walkman . [ 13 ] Although integrated hi-fi stereo systems and AV or stereo receivers contain integrated tuners, separate components are sometimes preferred for higher quality. [ 26 ] [ 27 ] Separating amplification also often increases overall performance. [ 28 ] A television tuner or TV tuner, also called a TV receiver, is a component or subsystem that converts analog television or digital television transmissions into audio and video signals which can be further processed to produce sound and a picture . [ 29 ] [ 30 ] [ 31 ] A TV tuner must filter out unwanted signals and have a high signal-to-noise ratio. [ 32 ] Television standards supported by TV tuners include PAL , NTSC , SECAM , ATSC , DVB-C , DVB-T , DVB-T2 , ISDB , DTMB , T-DMB , and open cable. VHF / UHF TV tuners are rarely found as a separate component, but are incorporated into television sets . Cable boxes , converter boxes and other set top boxes contain tuners for digital TV services, and send their output via SCART or other connector, or using an RF modulator (typically on channel 36 in Europe and channel 3/4 in North America) to TV receivers that do not natively support the services. 
They provide outputs via composite , S-video , or component video . Many can be used with video monitors that do not have a TV tuner or direct video input. They are often part of a VCR or digital video recorder (DVR, PVR). [ citation needed ] Analog tuners can tune only analog signals . An ATSC tuner is a digital tuner that tunes digital signals only. Some digital tuners provide an analog bypass. An example frequency range is 48.25 MHz – 855.25 MHz (E2-E69) , with a tuning frequency step size of 31.25, 50 or 62.5 kHz. Before the use of solid-state frequency synthesizers, covering the broad range of TV signal frequencies with a single tuned circuit and sufficient precision was uneconomic. Television channel frequencies were non-contiguous, with many non-broadcast services interleaved between VHF channels 6 and 7 in North America, for example. Instead, TV tuners of the era incorporated multiple sets of tuned circuits for the main signal path and local oscillator circuit. These "turret" tuners mechanically switched the receiving circuits by rotating a knob to select the desired channel. Channels were presented in fixed sequence, with no means to skip channels unused in a particular area. When UHF TV broadcasting was made available, often two completely separate tuner stages were used, with separate tuning knobs for selection of VHF band and UHF band channels. To allow for a small amount of drift or misalignment of the tuner with the actual transmitted frequency, tuners of that era included a "fine tuning" knob to allow minor adjustment for best reception. The combination of high frequencies, multiple electrical contacts, and frequent changing of channels in the tuner made it a high-maintenance part of the television receiver, as relatively small electrical or mechanical problems with the tuner would make the set unusable. [ citation needed ] Computers may use an internal TV tuner card or USB-connected external tuner to allow reception of over-the-air broadcasts or cable signals. [ citation needed ]
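The fixed intermediate frequency idea at the heart of the superheterodyne tuner can be sketched numerically. With high-side injection the local oscillator runs one IF above the wanted carrier, and the image frequency lies a further IF above that; only the local oscillator is retuned per station, while the IF band-pass filter stays fixed. A minimal Python sketch, using the 101.1 MHz station figure quoted above and the common 10.7 MHz FM broadcast IF (chosen here for illustration):

```python
def superhet_plan(f_rf, f_if):
    """Local-oscillator and image frequencies for high-side injection (Hz)."""
    f_lo = f_rf + f_if          # LO tuned one IF above the wanted signal
    f_image = f_rf + 2 * f_if   # unwanted frequency that also mixes to the IF
    return f_lo, f_image

f_rf = 101.1e6  # wanted FM station (example from the text)
f_if = 10.7e6   # common FM broadcast intermediate frequency

f_lo, f_image = superhet_plan(f_rf, f_if)
print(f"RF {f_rf / 1e6:.1f} MHz -> LO {f_lo / 1e6:.1f} MHz, "
      f"image {f_image / 1e6:.1f} MHz (must be rejected by the RF front end)")
```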
https://en.wikipedia.org/wiki/Tuner_(radio)
Tungsten disilicide ( WSi 2 ) is an inorganic compound, a silicide of tungsten . It is an electrically conductive ceramic material. Tungsten disilicide can react violently with substances such as strong acids , fluorine , oxidizers , and interhalogens . It is used in microelectronics as a contact material, with a resistivity of 60–80 μΩ·cm; it forms at 1000 °C. It is often used as a shunt over polysilicon lines to increase their conductivity and signal speed. Tungsten disilicide layers can be prepared by chemical vapor deposition , e.g. using monosilane or dichlorosilane with tungsten hexafluoride as source gases. The deposited film is non-stoichiometric , and requires annealing to convert to the more conductive stoichiometric form. Tungsten disilicide is a replacement for earlier tungsten films. [ 2 ] Tungsten disilicide is also used as a barrier layer between silicon and other metals, e.g. tungsten. Tungsten disilicide is also of value in microelectromechanical systems , where it is mostly applied as thin films for fabrication of microscale circuits. For such purposes, films of tungsten disilicide can be plasma-etched using e.g. nitrogen trifluoride gas. WSi 2 performs well as an oxidation-resistant coating. In particular, similarly to molybdenum disilicide ( MoSi 2 ), the high emissivity of tungsten disilicide makes this material attractive for high temperature radiative cooling , with implications in heat shields . [ 3 ]
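As a rough illustration of the shunt role described above, the sheet resistance of a uniform conductive film is its resistivity divided by its thickness. A minimal sketch using the 60–80 μΩ·cm range quoted in the text; the 100 nm film thickness is an assumed, illustrative value, not a figure from the article:

```python
def sheet_resistance(resistivity_ohm_cm, thickness_cm):
    """Sheet resistance (ohms per square) of a uniform conductive film."""
    return resistivity_ohm_cm / thickness_cm

t = 100e-7  # assumed 100 nm film thickness, expressed in cm
for rho in (60e-6, 80e-6):  # resistivity range from the text, in ohm*cm
    print(f"rho = {rho * 1e6:.0f} uOhm*cm -> "
          f"{sheet_resistance(rho, t):.1f} ohm/sq")
```

A few ohms per square is well below the tens of ohms per square or more typical of doped polysilicon alone, which is what makes the silicide shunt worthwhile.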
https://en.wikipedia.org/wiki/Tungsten_disilicide
Tungsten(VI) fluoride , also known as tungsten hexafluoride , is an inorganic compound with the formula W F 6 . It is a toxic, corrosive, colorless gas, with a density of about 13 kg/m 3 (22 lb/cu yd) (roughly 11 times heavier than air). [ 2 ] [ 3 ] It is the densest known gas under standard ambient temperature and pressure (298 K, 1 atm) and the only well-characterized gas under these conditions that contains a transition metal. [ 4 ] [ 5 ] WF 6 is commonly used by the semiconductor industry to form tungsten films, through the process of chemical vapor deposition . This layer is used in a low-resistivity metallic " interconnect ". [ 6 ] It is one of seventeen known binary hexafluorides . The WF 6 molecule is octahedral with the symmetry point group of O h . The W–F bond distances are 183.2 pm . [ 7 ] Between 2.3 and 17 °C , tungsten hexafluoride condenses into a colorless liquid with a density of 3.44 g/cm 3 at 15 °C . [ 8 ] At 2.3 °C it freezes into a white solid with a cubic crystalline structure, a lattice constant of 628 pm and a calculated density of 3.99 g/cm 3 . At −9 °C this structure transforms into an orthorhombic solid with the lattice constants of a = 960.3 pm, b = 871.3 pm, and c = 504.4 pm, and a density of 4.56 g/cm 3 . In this phase, the W–F distance is 181 pm, and the mean closest molecular contacts are 312 pm . Whereas WF 6 gas is one of the densest gases, with a density exceeding that of the heaviest elemental gas radon (9.73 g/L), the density of WF 6 in the liquid and solid state is rather moderate. [ 9 ] The vapor pressure of WF 6 between −70 and 17 °C can be described by an empirical equation relating P , the vapor pressure ( bar ), to T , the temperature (°C). [ 10 ] [ 11 ] Tungsten hexafluoride was first obtained by conversion of tungsten hexachloride with hydrogen fluoride by Otto Ruff and Fritz Eisner in 1905. [ 12 ] [ 13 ] The compound is now commonly produced by the exothermic reaction of fluorine gas with tungsten powder at a temperature between 350 and 400 °C : [ 8 ] W + 3 F 2 → WF 6 . The gaseous product is separated from WOF 4 , a common impurity, by distillation. In a variation on the direct fluorination, the metal is placed in a heated reactor, slightly pressurized to 1.2 to 2.0 psi (8.3 to 13.8 kPa), with a constant flow of WF 6 infused with a small amount of fluorine gas. [ 14 ] The fluorine gas in the above method can be substituted by ClF , ClF 3 or BrF 3 . An alternative procedure for producing tungsten fluoride is to treat tungsten trioxide ( WO 3 ) with HF , BrF 3 or SF 4 . Besides HF, other fluorinating agents can also be used to convert tungsten hexachloride in a way similar to Ruff and Eisner's original method, for example: [ 4 ] WCl 6 + 6 HF → WF 6 + 6 HCl . On contact with water , tungsten hexafluoride gives hydrogen fluoride (HF) and tungsten oxyfluorides, eventually forming tungsten trioxide : [ 4 ] WF 6 + 3 H 2 O → WO 3 + 6 HF . Unlike some other metal fluorides, WF 6 is not a useful fluorinating agent nor is it a powerful oxidant. It can be reduced to the yellow WF 4 . [ 15 ] WF 6 forms a variety of 1:1 and 1:2 adducts with Lewis bases , examples being WF 6 ( S(CH 3 ) 2 ), WF 6 (S(CH 3 ) 2 ) 2 , WF 6 ( P(CH 3 ) 3 ), and WF 6 ( py ) 2 . [ 16 ] The dominant application of tungsten fluoride is in the semiconductor industry, where it is widely used for depositing tungsten metal in a chemical vapor deposition (CVD) process. The expansion of the industry in the 1980s and 1990s resulted in the increase of WF 6 consumption, which remains at around 200 tonnes per year worldwide.
Tungsten metal is attractive because of its relatively high thermal and chemical stability, as well as low resistivity (5.6 μΩ·cm) and very low electromigration . WF 6 is favored over related compounds, such as WCl 6 or WBr 6 , because of its higher vapor pressure resulting in higher deposition rates. Since 1967, two WF 6 deposition routes have been developed and employed, thermal decomposition and hydrogen reduction. [ 17 ] The required WF 6 gas purity is rather high and varies between 99.98% and 99.9995% depending on the application. [ 4 ] WF 6 molecules have to be split up in the CVD process. The decomposition is usually facilitated by mixing WF 6 with hydrogen, silane , germane , diborane , phosphine , and related hydrogen-containing gases. WF 6 reacts upon contact with a silicon substrate. [ 4 ] The WF 6 decomposition on silicon is temperature-dependent: at lower temperatures the reaction is 2 WF 6 + 3 Si → 2 W + 3 SiF 4 , whereas at higher temperatures it proceeds as WF 6 + 3 Si → W + 3 SiF 2 . This dependence is crucial, as twice as much silicon is consumed at higher temperatures. The deposition occurs selectively on pure silicon only, but not on silicon dioxide or silicon nitride , thus the reaction is highly sensitive to contamination or substrate pre-treatment. The decomposition reaction is fast, but saturates when the tungsten layer thickness reaches 10–15 micrometers . The saturation occurs because the tungsten layer blocks diffusion of WF 6 molecules to the Si substrate, which is the only catalyst of molecular decomposition in this process. [ 4 ] If the deposition occurs not in an inert atmosphere but in an oxygen-containing atmosphere (air), then instead of tungsten, a tungsten oxide layer is produced. [ 18 ] The deposition process by hydrogen reduction occurs at temperatures between 300 and 800 °C and results in the formation of hydrogen fluoride vapors: WF 6 + 3 H 2 → W + 6 HF . The crystallinity of the produced tungsten layers can be controlled by altering the WF 6 / H 2 ratio and the substrate temperature: low ratios and temperatures result in (100) oriented tungsten crystallites whereas higher values favor the (111) orientation. Formation of HF is a drawback, as the HF vapor is very aggressive and etches away most materials. Also, the deposited tungsten shows poor adhesion to the silicon dioxide which is the main passivation material in semiconductor electronics. Therefore, SiO 2 has to be covered with an extra buffer layer prior to the tungsten deposition. On the other hand, etching by HF may be beneficial to remove unwanted impurity layers. [ 4 ] The characteristic features of tungsten deposition from WF 6 / SiH 4 are high speed, good adhesion and layer smoothness. The drawbacks are explosion hazard and high sensitivity of the deposition rate and morphology to the process parameters, such as mixing ratio, substrate temperature, etc. Therefore, silane is commonly used to create a thin tungsten nucleation layer; the feed gas is then switched to hydrogen, which slows down the deposition and cleans up the layer. [ 4 ] Deposition from a WF 6 / GeH 4 mixture is similar to that from WF 6 / SiH 4 , but the tungsten layer becomes contaminated with germanium, which is relatively heavy compared to Si, at concentrations of up to 10–15%. This increases the tungsten resistivity from about 5 to 200 μΩ·cm. [ 4 ] WF 6 can be used for the production of tungsten carbide . As a heavy gas, WF 6 can be used as a buffer to control gas reactions. For example, it slows down the chemistry of the Ar / O 2 / H 2 flame and reduces the flame temperature. [ 19 ] Tungsten hexafluoride is an extremely corrosive compound that attacks any tissue.
Because of the formation of hydrofluoric acid upon reaction of WF 6 with humidity, WF 6 storage vessels have Teflon gaskets. [ 20 ]
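The density figures quoted above can be sanity-checked with the ideal-gas law, ρ = PM/RT. A minimal Python sketch follows; the slight shortfall against the quoted ~13 kg/m³ reflects WF 6 's deviation from ideal-gas behaviour.

```python
R = 8.314        # gas constant, J/(mol*K)
M_WF6 = 0.29784  # molar mass of WF6 in kg/mol: W (183.84) + 6 x F (19.00)
M_AIR = 0.02897  # mean molar mass of air in kg/mol, for comparison

def ideal_gas_density(p_pa, t_k, molar_mass):
    """Ideal-gas density (kg/m^3) at pressure p_pa (Pa) and temperature t_k (K)."""
    return p_pa * molar_mass / (R * t_k)

p, t = 101325.0, 298.15  # standard ambient temperature and pressure
rho_wf6 = ideal_gas_density(p, t, M_WF6)
rho_air = ideal_gas_density(p, t, M_AIR)
print(f"WF6 ~ {rho_wf6:.1f} kg/m^3, air ~ {rho_air:.2f} kg/m^3, "
      f"ratio ~ {rho_wf6 / rho_air:.1f}")
```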
https://en.wikipedia.org/wiki/Tungsten_hexafluoride
A tunnel is an underground or undersea passageway. It is dug through surrounding soil, earth or rock, or laid under water, and is usually completely enclosed except for the two portals, one at each end, though there may be access and ventilation openings at various points along the length. A pipeline differs significantly from a tunnel, [ 1 ] [ clarification needed ] though some recent tunnels have used immersed tube construction techniques rather than traditional tunnel boring methods. [ 2 ] A tunnel may be for foot or vehicular road traffic , for rail traffic, or for a canal . The central portions of a rapid transit network are usually in tunnels. Some tunnels are used as sewers or aqueducts to supply water for consumption or for hydroelectric stations. Utility tunnels are used for routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment. [ 3 ] Secret tunnels are built for military purposes, or by civilians for smuggling of weapons , contraband , or people . [ 4 ] Special tunnels, such as wildlife crossings , are built to allow wildlife to cross human-made barriers safely. [ 5 ] Tunnels can be connected together in tunnel networks . A tunnel is relatively long and narrow; the length is often much greater than twice the diameter , although similar shorter excavations can be constructed, such as cross passages between tunnels. The definition of what constitutes a tunnel can vary widely from source to source. For example, in the United Kingdom, a road tunnel is defined as "a subsurface highway structure enclosed for a length of 150 metres (490 ft) or more." [ 6 ] In the United States, the NFPA definition of a tunnel is "An underground structure with a design length greater than 23 m (75 ft) and a diameter greater than 1,800 millimetres (5.9 ft)." [ 7 ] The word "tunnel" comes from the Middle English tonnelle , meaning "a net", derived from Old French tonnel , a diminutive of tonne ("cask"). The modern meaning, referring to an underground passageway, evolved in the 16th century as a metaphor for a narrow, confined space like the inside of a cask. [ 8 ] [ 9 ] [ 10 ] The first tunneling was most likely done by prehistoric people seeking to enlarge their caves. [ 2 ] The first artificial tunnel is believed to have been constructed in Babylon , about 2200 B.C., to join the temple of Belos with the palace; it was built with the aid of the cut-and-cover technique. [ 11 ] In the Mahabharata , the Pandavas built a secret tunnel within their new home, called " Lakshagriha " (House of Lac), which had been constructed by Purochana [ 12 ] under the orders of Duryodhana with the intention of burning them alive inside; the tunnel allowed them to escape when the palace was set on fire, and this act of foresight by the Pandavas saved their lives. [ 13 ] [ 10 ] Some of the earliest tunnels used by humans were paleoburrows excavated by prehistoric mammals. [ 14 ] Much of the early technology of tunnelling evolved from mining and military engineering . The etymology of the terms "mining" (for mineral extraction or for siege attacks ), " military engineering ", and " civil engineering " reveals these deep historic connections. Predecessors of modern tunnels were adits that transported water for irrigation , drinking, or sewerage . The first qanats are known from before 2000 BC.
The earliest tunnel known to have been excavated from both ends is the Siloam Tunnel , built in Jerusalem by the kings of Judah around the 8th century BC. [ 15 ] Another tunnel excavated from both ends, perhaps the second known, is the Tunnel of Eupalinos , a tunnel aqueduct 1,036 m (3,400 ft) long running through Mount Kastro in Samos , Greece. It was built in the 6th century BC to serve as an aqueduct . In Pakistan , a Mughal-era tunnel in Lahore has been restored. [ 16 ] [ 17 ] In Ethiopia , the Siqurto foot tunnel , hand-hewn in the Middle Ages, crosses a mountain ridge. Networks of rock-cut tunnels were used as shelters by Jewish fighters during the Bar Kokhba revolt against Roman rule in the 2nd century AD. A major tunnel project must start with a comprehensive investigation of ground conditions by collecting samples from boreholes and by other geophysical techniques. [ 18 ] An informed choice can then be made of machinery and methods for excavation and ground support, which will reduce the risk of encountering unforeseen ground conditions. In planning the route, the horizontal and vertical alignments can be selected to make use of the best ground and water conditions. It is common practice to locate a tunnel deeper than otherwise would be required, in order to excavate through solid rock or other material that is easier to support during construction. Conventional desk and preliminary site studies may yield insufficient information to assess such factors as the blocky nature of rocks, the exact location of fault zones, or the stand-up times of softer ground. This may be a particular concern in large-diameter tunnels. To give more information, a pilot tunnel (or "drift tunnel") may be driven ahead of the main excavation. This smaller tunnel is less likely to collapse catastrophically should unexpected conditions be met, and it can be incorporated into the final tunnel or used as a backup or emergency escape passage. Alternatively, horizontal boreholes may sometimes be drilled ahead of the advancing tunnel face. Other key geotechnical factors must also be assessed. For water crossings, a tunnel is generally more costly to construct than a bridge. [ 22 ] However, both navigational and traffic considerations may limit the use of high bridges or drawbridges intersecting with shipping channels, necessitating a tunnel. Bridges usually require a larger footprint on each shore than tunnels. In areas with expensive real estate, such as Manhattan and urban Hong Kong , this is a strong factor in favor of a tunnel. Boston's Big Dig project replaced elevated roadways with a tunnel system to increase traffic capacity, hide traffic, reclaim land, redecorate, and reunite the city with the waterfront. [ 23 ] The 1934 Queensway Tunnel under the River Mersey at Liverpool was chosen over a massively high bridge partly for defence reasons; it was feared that aircraft could destroy a bridge in times of war, not merely impairing road traffic but blocking the river to navigation. [ 24 ] The maintenance costs of a bridge high enough for the world's largest ships to navigate under were also considered higher than those of a tunnel. Similar conclusions were reached for the 1971 Kingsway Tunnel under the Mersey. In Hampton Roads, Virginia , tunnels were chosen over bridges for strategic considerations; in the event of damage, bridges might prevent US Navy vessels from leaving Naval Station Norfolk .
Water-crossing tunnels built instead of bridges include the Seikan Tunnel in Japan; the Holland Tunnel and Lincoln Tunnel between New Jersey and Manhattan in New York City ; the Queens-Midtown Tunnel between Manhattan and the borough of Queens on Long Island ; the Detroit-Windsor Tunnel between Michigan and Ontario ; the Elizabeth River tunnels between Norfolk and Portsmouth, Virginia ; the 1934 River Mersey road Queensway Tunnel ; the Western Scheldt Tunnel , Zeeland, Netherlands; and the North Shore Connector tunnel in Pittsburgh, Pennsylvania . The Sydney Harbour Tunnel was constructed to provide a second harbour crossing and to alleviate traffic congestion on the Sydney Harbour Bridge , without spoiling the iconic view. Other reasons for choosing a tunnel instead of a bridge include avoiding difficulties with tides, weather, and shipping during construction (as in the 51.5-kilometre or 32.0-mile Channel Tunnel ), aesthetic reasons (preserving the above-ground view, landscape, and scenery), and also for weight capacity reasons (it may be more feasible to build a tunnel than a sufficiently strong bridge). Some water crossings are a mixture of bridges and tunnels, such as the Denmark to Sweden link and the Chesapeake Bay Bridge-Tunnel in Virginia . There are particular hazards with tunnels, especially from vehicle fires when combustion gases can asphyxiate users, as happened at the Gotthard Road Tunnel in Switzerland in 2001. One of the worst railway disasters ever, the Balvano train disaster , was caused by a train stalling in the Armi tunnel in Italy in 1944, killing 426 passengers. Designers try to reduce these risks by installing emergency ventilation systems or isolated emergency escape tunnels parallel to the main passage. Government funds are often required for the creation of tunnels. [ 25 ] When a tunnel is being planned or constructed, economics and politics play a large part in the decision-making process. Civil engineers usually use project management techniques for developing a major structure. Understanding the amount of time the project requires, and the amount of labor and materials needed, is a crucial part of project planning. The project duration must be identified using a work breakdown structure and the critical path method . The land needed for excavation and construction staging must also be acquired, and the proper machinery selected. Large infrastructure projects require millions or even billions of dollars, involving long-term financing, usually through issuance of bonds . The costs and benefits of an infrastructure project such as a tunnel must be identified. Political disputes can occur, as in 2005 when the US House of Representatives approved a $100 million federal grant to build a tunnel under New York Harbor. However, the Port Authority of New York and New Jersey was not aware of this bill and had not asked for a grant for such a project. [ 26 ] Increased taxes to finance a large project may cause opposition. [ 27 ] Tunnels are dug in types of materials varying from soft clay to hard rock. The method of tunnel construction depends on such factors as the ground conditions, the groundwater conditions, the length and diameter of the tunnel drive, the depth of the tunnel, the logistics of supporting the tunnel excavation, the final use and shape of the tunnel, and appropriate risk management. There are three basic types of tunnel construction in common use. Cut-and-cover tunnels are constructed in a shallow trench and then covered over.
Bored tunnels are constructed in situ, without removing the ground above. Finally, a tube can be sunk into a body of water, which is called an immersed tunnel. Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support system strong enough to carry the load of what is to be built above the tunnel. [ 28 ] There are two basic forms of cut-and-cover tunnelling: bottom-up, in which the tunnel is built in an open trench and then covered over, and top-down, in which the side walls and roof are built first and excavation proceeds beneath them. Shallow tunnels are often of the cut-and-cover type (if under water, of the immersed-tube type), while deep tunnels are excavated, often using a tunnelling shield . For intermediate levels, both methods are possible. Large cut-and-cover boxes are often used for underground metro stations, such as Canary Wharf tube station in London. This construction form generally has two levels, which allows economical arrangements for ticket hall, station platforms, passenger access and emergency egress, ventilation and smoke control, staff rooms, and equipment rooms. The interior of Canary Wharf station has been likened to an underground cathedral, owing to the sheer size of the excavation. This contrasts with many traditional stations on London Underground , where bored tunnels were used for stations and passenger access. Nevertheless, the original parts of the London Underground network, the Metropolitan and District Railways, were constructed using cut-and-cover. These lines pre-dated electric traction and the proximity to the surface was useful to ventilate the inevitable smoke and steam. A major disadvantage of cut-and-cover is the widespread disruption generated at the surface level during construction. [ 29 ] This, and the availability of electric traction, brought about London Underground's switch to bored tunnels at a deeper level towards the end of the 19th century. Prior to the replacement of manual excavation by the use of boring machines, Victorian tunnel excavators developed a specialized method called clay-kicking for digging tunnels in clay-based soils. The clay-kicker lies on a plank at a 45-degree angle away from the working face and, rather than swinging a mattock with his hands, inserts with his feet a tool with a cup-like rounded end, then turns the tool with his hands to extract a section of soil, which is then placed on the waste extractor. Clay-kicking is a specialized method developed in the United Kingdom for digging tunnels in strong clay-based soil structures. This method required relatively little disturbance of property during the renewal of the United Kingdom's then ancient sewerage systems. It was also used during the First World War by Royal Engineer tunnelling companies placing mines beneath German lines, because it was almost silent and so not susceptible to listening methods of detection. [ 30 ] Tunnel boring machines (TBMs) and associated back-up systems are used to highly automate the entire tunnelling process, reducing tunnelling costs. In certain predominantly urban applications, tunnel boring is viewed as a quick and cost-effective alternative to laying surface rails and roads. Expensive compulsory purchase of buildings and land, with potentially lengthy planning inquiries, is eliminated. Disadvantages of TBMs arise from their usually large size – the difficulty of transporting the large TBM to the site of tunnel construction, or (alternatively) the high cost of assembling the TBM on-site, often within the confines of the tunnel being constructed.
There are a variety of TBM designs that can operate in a variety of conditions, from hard rock to soft water-bearing ground. Some TBMs, the bentonite slurry and earth-pressure balance types, have pressurized compartments at the front end, allowing them to be used in difficult conditions below the water table . This pressurizes the ground ahead of the TBM cutter head to balance the water pressure. The operators work in normal air pressure behind the pressurized compartment, but may occasionally have to enter that compartment to renew or repair the cutters. This requires special precautions, such as local ground treatment or halting the TBM at a position free from water. Despite these difficulties, TBMs are now preferred over the older method of tunnelling in compressed air, with an airlock/decompression chamber some way back from the TBM, which required operators to work in high pressure and go through decompression procedures at the end of their shifts, much like deep-sea divers . In February 2010, Aker Wirth delivered a TBM to Switzerland, for the expansion of the Linth–Limmern Power Stations located south of Linthal in the canton of Glarus . The borehole has a diameter of 8.03 metres (26.3 ft). [ 31 ] The four TBMs used for excavating the 57-kilometre (35 mi) Gotthard Base Tunnel , in Switzerland , had a diameter of about 9 metres (30 ft). A larger TBM was built to bore the Green Heart Tunnel (Dutch: Tunnel Groene Hart) as part of the HSL-Zuid in the Netherlands, with a diameter of 14.87 metres (48.8 ft). [ 32 ] This in turn was superseded by the machines used for the Madrid M30 ringroad , Spain, and the Chong Ming tunnels in Shanghai , China. All of these machines were built at least partly by Herrenknecht . As of August 2013, the world's largest TBM was " Big Bertha ", a 17.5-metre (57.5 ft) diameter machine built by Hitachi Zosen Corporation , which dug the Alaskan Way Viaduct replacement tunnel in Seattle, Washington (US). [ 33 ] Temporary access shafts are sometimes necessary during the excavation of a tunnel. They are usually circular and go straight down until they reach the level at which the tunnel is going to be built. A shaft normally has concrete walls and is usually built to be permanent. Once the access shafts are complete, TBMs are lowered to the bottom and excavation can start. Shafts are the main entrance to and exit from the tunnel until the project is completed. If a tunnel is going to be long, multiple shafts at various locations may be bored so that entrance to the tunnel is closer to the unexcavated area. [ 21 ] Once construction is complete, construction access shafts are often used as ventilation shafts , and may also be used as emergency exits. The new Austrian tunnelling method (NATM), also referred to as the Sequential Excavation Method (SEM), [ 34 ] was developed in the 1960s. The main idea of this method is to use the geological stress of the surrounding rock mass to stabilize the tunnel, by allowing a measured relaxation and stress reassignment into the surrounding rock to prevent full loads becoming imposed on the supports. Based on geotechnical measurements, an optimal cross section is computed. The excavation is protected by a layer of sprayed concrete, commonly referred to as shotcrete . Other support measures can include steel arches, rock bolts, and mesh. Technological developments in sprayed concrete technology have resulted in steel and polypropylene fibers being added to the concrete mix to improve lining strength.
This creates a natural load-bearing ring, which minimizes the rock's deformation . [ 34 ] With continuous monitoring, the NATM is flexible, even in the face of unexpected changes in the geomechanical rock consistency during the tunneling work, and the measured rock properties guide the choice of appropriate tools for tunnel strengthening . [ 34 ] In pipe jacking , hydraulic jacks are used to push specially made pipes through the ground behind a TBM or shield. This method is commonly used to create tunnels under existing structures, such as roads or railways. Tunnels constructed by pipe jacking are normally small-diameter bores with a maximum size of around 3.2 metres (10 ft). Box jacking is similar to pipe jacking, but instead of jacking tubes, a box-shaped tunnel is used. Jacked boxes can be of a much larger span than a pipe jack, with the span of some box jacks in excess of 20 metres (66 ft). A cutting head is normally used at the front of the box being jacked, and spoil removal is normally by excavator from within the box. Recent developments of the jacked arch and jacked deck have enabled longer and larger structures to be installed to close accuracy. There are also several approaches to underwater tunnels, the two most common being bored tunnels and immersed tubes ; examples are the Bjørvika Tunnel and Marmaray . Submerged floating tunnels are a novel approach under consideration; however, no such tunnels have been constructed to date. During construction of a tunnel it is often convenient to install a temporary railway, particularly to remove excavated spoil , often narrow gauge so that it can be double track to allow the operation of empty and loaded trains at the same time. The temporary way is replaced by the permanent way at completion, thus explaining the term " Perway ". The vehicles or traffic using a tunnel can outgrow it, requiring replacement or enlargement. An open building pit consists of a horizontal and a vertical boundary that keeps groundwater and soil out of the pit. There are several potential alternatives and combinations for (horizontal and vertical) building pit boundaries. The most important difference from cut-and-cover is that the open building pit is backfilled after tunnel construction; no roof is placed. Some tunnels are double-deck; for example, the two major segments of the San Francisco–Oakland Bay Bridge (completed in 1936) are linked by a 160-metre (540 ft) double-deck tunnel section through Yerba Buena Island , the largest-diameter bored tunnel in the world. [ 41 ] As built, this was a combination bidirectional rail and truck pathway on the lower deck with automobiles above, now converted to one-way road vehicle traffic on each deck. In Turkey, the Eurasia Tunnel under the Bosphorus , opened in 2016, has at its core a 5.4 km (3.4 miles) two-deck road tunnel with two lanes on each deck. [ 42 ] Additionally, in 2015 the Turkish government announced that it will build a three-level tunnel, also under the Bosporus. [ 43 ] The tunnel is intended to carry both the Istanbul metro and a two-level highway, over a length of 6.5 km (4.0 miles). The French A86 Duplex Tunnel [ fr ] in west Paris consists of two bored tunnel tubes, the eastern one of which has two levels for light motorized vehicles, over a length of 10 km (6.2 miles). Although each level offers a physical height of 2.54 m (8.3 ft), only traffic up to 2 m (6.6 ft) tall is allowed in this tunnel tube, and motorcyclists are directed to the other tube.
Each level was built with a three-lane roadway, but only two lanes per level are used – the third serves as a hard shoulder within the tunnel. The A86 Duplex is Europe's longest double-deck tunnel. In Shanghai , China, a 2.8 km (1.7 miles) two-tube double-deck tunnel was built starting in 2002. In each tube of the Fuxing Road Tunnel [ zh ] both decks are for motor vehicles. In each direction, only cars and taxis travel on the 2.6 m (8.5 ft) high two-lane upper deck, and heavier vehicles, like trucks and buses, as well as cars, may use the 4.0 m (13 ft) high single-lane lower level. [ 44 ] In the Netherlands, a 2.3 km (1.4 miles) two-storey, eight-lane, cut-and-cover road tunnel under the city of Maastricht was opened in 2016. [ 45 ] Each level accommodates a full-height, two by two-lane highway. The two lower tubes of the tunnel carry the A2 motorway , which originates in Amsterdam, through the city; and the two upper tubes take the N2 regional highway for local traffic. [ 46 ] The Alaskan Way Viaduct replacement tunnel is a $3.3 billion, 2.83-kilometre (1.76 mi) double-decker bored highway tunnel under Downtown Seattle . Construction began in July 2013 using " Bertha ", at the time the world's largest earth pressure balance tunnel boring machine, with a 17.5-metre (57.5 ft) cutterhead diameter. After several delays, tunnel boring was completed in April 2017, and the tunnel opened to traffic on 4 February 2019. New York City 's 63rd Street Tunnel under the East River , between the boroughs of Manhattan and Queens , was intended to carry subway trains on the upper level and Long Island Rail Road commuter trains on the lower level. Construction started in 1969, [ 47 ] and the two sides of the tunnel were bored through in 1972. [ 48 ] The upper level, used by the IND 63rd Street Line ( F and <F> train) of the New York City Subway, was not opened for passenger service until 1989. [ 49 ] The lower level, intended for commuter rail, saw passenger service after completion of the East Side Access project in late 2022. [ 50 ] In the UK, the 1934 Queensway Tunnel under the River Mersey between Liverpool and Birkenhead was originally to have road vehicles running on the upper deck and trams on the lower. During construction the plans for tram usage were cancelled. The lower section is only used for cables, pipes and emergency accident refuge enclosures. Hong Kong's Lion Rock Tunnel , built in the mid-1960s, connecting New Kowloon and Sha Tin , carries a motorway but also serves as an aqueduct , featuring a gallery containing five water mains with diameters between 1.2 and 1.5 m (4 and 5 ft) below the road section of the tunnel. [ 51 ] Wuhan 's Yangtze River Highway and Railway Tunnel is a 2.59 km (1.61 mi) two-tube double-deck tunnel under the Yangtze River completed in 2018. Each tube carries three lanes of local traffic on the top deck, with one track of Wuhan Metro Line 7 on the lower deck. [ 52 ] [ 53 ] [ 54 ] The Mount Baker Tunnel has three levels. The bottom level is to be used by Sound Transit light rail. The middle level is used by car traffic, and the top level is for bicycle and pedestrian access. Some tunnels have more than one purpose. The SMART Tunnel in Malaysia is the first multipurpose " Stormwater Management And Road Tunnel " in the world, created to convey both traffic and occasional flood waters in Kuala Lumpur . When necessary, floodwater is first diverted into a separate bypass tunnel located underneath the 4.0 km (2.5 miles) double-deck roadway tunnel.
In this scenario, traffic continues normally. Only during heavy, prolonged rains, when the threat of extreme flooding is high, is the upper tunnel tube closed off to vehicles and the automated flood control gates opened so that water can be diverted through both tunnels. [ 55 ] Over-bridges can sometimes be built by covering a road or river or railway with brick or steel arches , and then levelling the surface with earth. In railway parlance, a surface-level track which has been built or covered over is normally called a "covered way". Snow sheds are a kind of artificial tunnel built to protect a railway from avalanches of snow. Similarly the Stanwell Park , New South Wales "steel tunnel", on the Illawarra railway line , protects the line from rockfalls. An underpass is a road or railway or other passageway passing under another road or railway, under an overpass . This is not strictly a tunnel. A utility tunnel is built for the purpose of carrying one or more utilities in the same space; for this reason such tunnels are also referred to as multi-utility tunnels or MUTs. Through co-location of different utilities in one tunnel, organizations are able to reduce the financial and environmental costs of building and maintaining utilities. [ 56 ] These tunnels can be used for many types of utilities, routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment. [ 3 ] Owing to the enclosed space of a tunnel, fires can have very serious effects on users. The main dangers are gas and smoke production, with even low concentrations of carbon monoxide being highly toxic. For example, the Gotthard tunnel fire of 2001 killed 11 people, all of the victims succumbing to smoke and gas inhalation. Over 400 passengers died in the Balvano train disaster in Italy in 1944, when the locomotive halted in a long tunnel. Carbon monoxide poisoning was the main cause of death. In the Caldecott Tunnel fire of 1982, the majority of fatalities were caused by toxic smoke, rather than by the initial crash. Likewise, 84 people were killed in the Paris Métro train fire of 1903. Motor vehicle tunnels usually require ventilation shafts and powered fans to remove toxic exhaust gases during routine operation. [ 57 ] Rail tunnels usually require fewer air changes per hour , but still may require forced-air ventilation . Both types of tunnels often have provisions to increase ventilation under emergency conditions, such as a fire. Although there is a risk of increasing the rate of combustion through increased airflow, the primary focus is on providing breathable air to persons trapped in the tunnel, as well as firefighters . The aerodynamic pressure waves produced by high-speed trains entering a tunnel [ 58 ] reflect at its open ends and change sign (a compression wavefront changes to a rarefaction wavefront and vice versa). When two wavefronts of the same sign meet the train, significant and rapid air pressure changes [ 59 ] may cause ear discomfort [ 60 ] for passengers and crew. When a high-speed train exits a tunnel, a loud " tunnel boom " may occur, which can disturb residents near the mouth of the tunnel, and it is exacerbated in mountain valleys where the sound can echo. When there is a parallel, separate tunnel available, airtight but unlocked emergency doors are usually provided which allow trapped personnel to escape from a smoke-filled tunnel to the parallel tube.
[ 61 ] Larger, heavily used tunnels, such as the Big Dig tunnel in Boston, Massachusetts , may have a dedicated 24-hour staffed operations center which monitors and reports on traffic conditions, and responds to emergencies. [ 62 ] Video surveillance equipment is often used, and real-time pictures of traffic conditions for some highways may be viewable by the general public via the Internet. A database of seismic damage to underground structures, based on 217 case histories, shows that several general observations can be made regarding the seismic performance of underground structures. Earthquakes are one of nature's most formidable threats. A magnitude 6.7 earthquake shook the San Fernando Valley in Los Angeles in 1994. The earthquake caused extensive damage to various structures, including buildings, freeway overpasses and road systems throughout the area. The National Centers for Environmental Information estimates total damages to be 40 billion dollars. [ 64 ] According to an article issued by Steve Hymon of TheSource – Transportation News and Views, there was no serious damage sustained by the LA subway system. Metro, the owner of the LA subway system, issued a statement through their engineering staff about the design and consideration that goes into a tunnel system. Engineers and architects perform extensive analysis as to how hard they expect earthquakes to hit that area. All of this goes into the overall design and flexibility of the tunnel. This same trend of limited subway damage following an earthquake can be seen in many other places. In 1985 a magnitude 8.1 earthquake shook Mexico City; there was no damage to the subway system, and in fact the subway system served as a lifeline for emergency personnel and evacuations. A magnitude 7.2 earthquake ripped through Kobe, Japan in 1995, leaving no damage to the tunnels themselves. Entry portals sustained minor damage; however, this damage was attributed to inadequate earthquake design dating from the original construction in 1965. In 2010 a magnitude 8.8 earthquake, massive by any scale, afflicted Chile. Entrance stations to subway systems suffered minor damage, and the subway system was down for the rest of the day. By the next afternoon, the subway system was operational again. [ 65 ] The history of ancient tunnels and tunneling in the world is reviewed in various sources, which include many examples of these structures that were built for different purposes. [ 66 ] [ 67 ] Some well-known ancient and modern tunnels are briefly introduced below. The use of tunnels for mining is called drift mining . Drift mining can help find coal, gold, iron, and other minerals, just like normal mining. Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits. Some tunnels are not for transport at all but rather are fortifications, for example Mittelwerk and Cheyenne Mountain Complex . Excavation techniques, as well as the construction of underground bunkers and other habitable areas, are often associated with military use during armed conflict , or civilian responses to threat of attack. Another use for tunnels was for the storage of chemical weapons . [ 91 ] [ 92 ] Secret tunnels have given entrance to or escape from an area, such as the Cu Chi Tunnels or the smuggling tunnels in the Gaza Strip which connect it to Egypt . Although the Underground Railroad network used to transport escaped slaves was "underground" mostly in the sense of secrecy, hidden tunnels were occasionally used.
Secret tunnels were also used during the Cold War , under the Berlin Wall and elsewhere, to smuggle refugees, and for espionage . Smugglers use secret tunnels to transport or store contraband , such as illegal drugs and weapons . Elaborately engineered 300-metre (1,000 ft) tunnels built to smuggle drugs across the Mexico-US border were estimated to require up to 9 months to complete, and an expenditure of up to $1 million. [ 93 ] Some of these tunnels were equipped with lighting, ventilation, telephones, drainage pumps, hydraulic elevators, and in at least one instance, an electrified rail transport system. [ 93 ] Secret tunnels have also been used by thieves to break into bank vaults and retail stores after hours. [ 94 ] [ 95 ] Several tunnels have been discovered by the Border Security Force across the Line of Control along the India-Pakistan border , dug mainly to allow terrorists access to the Indian territory of Jammu and Kashmir . [ 96 ] [ 97 ] The actual purpose of erdstall tunnels is unknown, but theories connect them to a rebirth ritual.
https://en.wikipedia.org/wiki/Tunnel
The Tunnel and Reservoir Plan (abbreviated TARP and more commonly known as the Deep Tunnel Project or the Chicago Deep Tunnel ) is a large civil engineering project that aims to reduce flooding in the metropolitan Chicago area, and to reduce the harmful effects of flushing raw sewage into Lake Michigan by diverting storm water and sewage into temporary holding reservoirs . The megaproject is one of the largest civil engineering projects ever undertaken in terms of scope , cost and timeframe. Commissioned in the mid-1970s, the project is managed by the Metropolitan Water Reclamation District of Greater Chicago . Completion of the system is not anticipated until 2029, [ 1 ] but substantial portions of the system have already opened and are currently operational. Across 30 years of construction, over $3 billion has been spent on the project. [ 2 ] The Deep Tunnel Project is the latest in a series of civil engineering projects dating back to 1834. Many of the problems experienced by the city of Chicago are directly related to its low-lying topography and the fact that the city is largely built upon marsh or wet prairie. This, combined with a temperate wet climate and the human development of open land, leads to substantial water runoff. Lake Michigan was ineffective in carrying sewage away from the city, and in the event of a rainstorm, the water pumps that provided drinking water to Chicagoans became contaminated with sewage. Though no epidemics were caused by this system (see Chicago 1885 cholera epidemic myth ), it soon became clear that the sewage system needed to be diverted to flow away from Lake Michigan in order to handle an increasing population's sanitation needs. [ dubious – discuss ] Between 1864 and 1867, under the leadership of Ellis S. Chesbrough , the city built the two-mile Chicago lake tunnel to a new water intake location farther from the shore. Crews began from the intake location and the shore, tunneling in two shifts a day. Clay and earth were drawn away by mule-drawn railcars. Masons lined the five-foot-diameter tunnel with two layers of brick. The lake and shore crews met in November 1866, less than seven inches out of alignment. A second tunnel was added in 1874. [ 3 ] In 1871, the deepening of the Illinois and Michigan Canal was completed to reverse the flow of the Chicago River to drain diluted sewage southwest away from Lake Michigan. However, the canal only had the capacity to drain to the Des Plaines River during dry weather; during heavy rains, the Des Plaines would flood and overflow into the canal, reversing its flow back into the lake. [ 4 ] In 1900, to improve general health standards, the flow of the main branch of the Chicago River was permanently reversed with the construction of the Chicago Sanitary and Ship Canal . This further improved the sanitation of Lake Michigan and helped to prevent further waterborne epidemic scares. The construction of the Sanitary and Ship Canal (1892–1900), enlargements to the North Shore Channel (1907–1910), the construction of the Cal-Sag Channel (1911–1922), and the construction of locks at the mouth of the Chicago River (1933–1938) brought further improvements to the sanitary issues of the time. These projects blocked further amounts of sewage from draining into Lake Michigan. The projects also brought fresh lake water to inland waterways to further dilute sewage that was already in the waterways. Surrounding farmland also engaged in flood control projects.
The Illinois Farm Drainage Act of 1879 established drainage districts . These districts were generally named for the basin they drained, for example the Fox River Drainage District. After World War II , suburban communities began to realize the benefits of separating stormwater from sewage water and began to construct separate sewer and storm drainage lines. The primary benefit of wastestream separation is that storm water requires less treatment than sewage before being returned to the environment. Flood damage grew markedly after 1938, when surrounding natural drainage areas were lost to development and human activity. Serious flooding has occurred in the Chicago metropolitan area in 1849, 1855, 1885, 1938, 1952, 1954 , 1957, 1961, 1973, 1979, 1986, 1987, 1996, 2007, 2008, 2010, 2011, 2022, and 2023, but most record-setting crests occurred after 1948. In the 1960s, the concept of Deep Tunnel was studied and recommended as a solution to continuing flooding issues. Phase 1, the creation of 109.4 miles (176.1 km) of drainage tunnels ranging from 9 to 33 feet (2.7 to 10.1 m) in diameter, up to 350 feet (110 m) underground, was adopted in 1972, commenced in 1975, and was completed and operational by 2006. Phase 2, the creation of reservoirs primarily intended for flood control, remains underway with an expected completion date of 2029. Currently, up to 2.3 billion US gallons (8.7 gigalitres) of sewage can be stored and held in the tunnels themselves while awaiting processing at sewage treatment plants, which release treated water into the Calumet and Des Plaines rivers. [ 5 ] Additional sewage is stored at the 7.9-billion-US-gallon (30-gigalitre) Thornton Composite Reservoir, and the 350-million-US-gallon (1,300-megalitre) Gloria Alitto Majewski Reservoir near O'Hare International Airport . The 3.5-billion-US-gallon (13 GL) McCook Reservoir was completed in 2017 and will be expanded to 10 billion US gallons (38 GL ) by 2029. [ 6 ] [ 7 ] Because the reservoirs are decommissioned quarries, construction has been delayed by decreased demand for the quarried gravel. Upon completion, the TARP system will have a storage capacity of 17.5 billion US gallons (66 GL). Severe weather events have forced water management agencies to pump excess wastewater into the lake and river in order to prevent flooding. These incidents have decreased in frequency as more of the Deep Tunnel system has become operational. Long considered an open sewer, the Chicago River now hosts more than 60 fish species and increased wildlife along its shores. Substantial development is occurring along many portions of the riverfront. Canoeing is once again allowed on the waterway, but swimming is still prohibited due to high pollution levels. On October 3, 1986, a heavy thunderstorm drenched the southern portion of the Deep Tunnel area with several inches of rain in a short period of time. While the Deep Tunnel system performed satisfactorily by absorbing excess water, water within the system itself rushed past the north side of Chicago and near the Bahá'í Temple in Wilmette . Geysers of over 65 feet (20 m) were reported in both locations for up to an hour as the water was redistributed more evenly through the system. A 30-foot (9 m) geyser erupted downtown at the corner of Jefferson and Monroe, trapping a woman inside her car as it filled with water. [ 10 ] A system of watertight bulkheads has since been installed to prevent the event from occurring again.
During the Chicago Flood of 1992, the water from the Chicago River that leaked into the long-disused underground freight tunnel system was eventually drained into the Deep Tunnel network, which itself was still under construction. [ 11 ] Operation of the tunnels reduced sewer overflow events from an average of 100 days per year to 50. Since Thornton Reservoir came online in 2015, combined sewer overflows in that reservoir's service area have been nearly eliminated. [ 12 ]
https://en.wikipedia.org/wiki/Tunnel_and_Reservoir_Plan
A tunnel cluster , more formally tunnel cluster of the cervix and cervical tunnel cluster , is a benign group of dilated endocervical glands in the cervix . It is significant only in that it can be confused with a malignancy, i.e. cancer . [ 1 ]
https://en.wikipedia.org/wiki/Tunnel_cluster
A tunnel finisher is a machine that removes wrinkles from garments and is often used in the textile industry . As with other industrial pressing equipment, this machine is employed to improve the quality and look of a textile product. [ 1 ] It has a chamber called a "tunnel" and includes a conveyor-fed unit through which the garments are steamed and dried. [ 2 ] The machine also features hook systems; an air curtain entrance to eliminate moisture or condensation; cotton care and roller units; exhaust steam; and a preconditioning module. [ 2 ] Most garments are shipped by sea freight from the country of production and arrive very wrinkled because of the box packing used. In the receiving country, they are unpacked and put on clothes hangers . Those hangers are sent via automated transport through the tunnel at speeds of up to 3,000 garments per hour. These garments are then sent to a room to be steamed and dried. [ 3 ] The machine processes each garment through several stages. First, the garment passes through a steam chamber to make the fabric mouldable. Then wrinkles are removed by a strong hot air flow along the garments. Finally, the garment is dried by cooler air before it leaves the tunnel finisher. Smaller areas of a garment, such as collars, require further pressing using other equipment, such as a steam iron , for a better finish. [ 4 ] The tunnel finisher is also used in laundries and dry cleaners to remove wrinkles from garments after washing or dry cleaning . Tunnel finishers can be grouped into two different classifications, "wide body" and "narrow body." "Wide body" machines are designed for high-production finishing of blended garments wet-to-dry, damp-to-dry and/or dry-to-dry. "Narrow body" machines are designed for shoulder-to-shoulder processing and are best suited for the dry-to-dry finishing of garments; however, they are capable of damp-to-dry finishing at slower production speeds. These units are ideal for dry cleaners, hotel laundries, institutional laundries and other on-premises laundry applications. The smaller-capacity version of the tunnel finisher is called a "cabinet tunnel" and is typically capable of automated processing of separate batches of 4 or 5 garments at the same time. [ 2 ] The production capacity of this smaller equipment is 10 percent of that of a full tunnel finisher.
https://en.wikipedia.org/wiki/Tunnel_finisher
Tunnel injection is a field electron emission effect; specifically, a quantum process called Fowler–Nordheim tunneling , whereby charge carriers are injected into an electric conductor through a thin layer of an electric insulator. [ 1 ] It is used to program NAND flash memory . The process used for erasing is called tunnel release . The injection is achieved by creating a large voltage difference between the gate and the body of the MOSFET. When V_GB ≫ 0, electrons are injected into the floating gate. When V_GB ≪ 0, electrons are forced out of the floating gate. An alternative to tunnel injection is spin injection .
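The article states the mechanism but not the current law. As a hedged sketch (the standard textbook Fowler–Nordheim form, not quoted from this article), the injected current density J grows with the electric field F across the oxide as

$$ J = A\,F^{2}\,\exp\!\left(-\frac{B}{F}\right), $$

where A and B are constants set by the insulator's barrier height and the carrier's effective mass. The exponential factor is what makes programming practical: a moderate increase in gate-to-body voltage raises the tunneling current by orders of magnitude, while at normal read voltages the leakage through the same oxide stays negligible.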
https://en.wikipedia.org/wiki/Tunnel_injection
In physics , tunnel ionization is a process in which electrons in an atom (or a molecule ) tunnel through the potential barrier and escape from the atom (or molecule). In an intense electric field , the potential barrier of an atom (molecule) is distorted drastically. Therefore, as the length of the barrier that electrons have to pass decreases, the electrons can escape from the atom's potential more easily. Tunneling ionization is a quantum mechanical phenomenon, since in the classical picture an electron does not have sufficient energy to overcome the potential barrier of the atom. When the atom is in a DC external field, the Coulomb potential barrier is lowered and the electron has an increased, non-zero probability of tunnelling through the potential barrier. In the case of an alternating electric field, the direction of the electric field reverses after each half period of the field. The ionized electron may come back to its parent ion. The electron may recombine with the nucleus (nuclei) and its kinetic energy is released as light ( high harmonic generation ). If recombination does not occur, further ionization may proceed by collision between high-energy electrons and a parent atom (molecule). This process is known as non-sequential ionization . [ 1 ] Tunneling ionization from the ground state of a hydrogen atom in an electrostatic (DC) field was solved schematically by Lev Landau , [ 2 ] using parabolic coordinates. This simplified physical system gives the proper exponential dependence of the ionization rate on the applied external field E. When E ≪ E_a, the ionization rate for this system is given by w = 4 ω_a (E_a/E) exp[−(2/3)(E_a/E)]. Landau expressed this in atomic units , where m_e = e = ℏ = 1. In SI units the parameters are E_a = m_e² e⁵ / ((4πε₀)³ ℏ⁴) and ω_a = m_e e⁴ / ((4πε₀)² ℏ³). The ionization rate is the total probability current through the outer classical turning point. This rate is found using the WKB approximation to match the ground state hydrogen wavefunction through the suppressed Coulomb potential barrier. A more physically meaningful form for the ionization rate above can be obtained by noting that the Bohr radius and the hydrogen atom ionization energy are given by a₀ = 4πε₀ℏ²/(m_e e²) and E_ion = R_H ≈ 13.6 eV, where R_H is the Rydberg energy . Then the parameters E_a and ω_a can be written as E_a = 2E_ion/(e a₀) and ω_a = 2E_ion/ℏ, so that the total ionization rate can be rewritten in terms of the ionization energy and the orbital size. This form for the ionization rate w emphasizes that the characteristic electric field needed for ionization, E_a = 2E_ion/(e a₀), is proportional to the ratio of the ionization energy E_ion to the characteristic size of the electron's orbital a₀. Thus, atoms with low ionization energy (such as alkali metals ) with electrons occupying orbitals with high principal quantum number n (i.e. far down the periodic table) ionize most easily under a DC field. Furthermore, for a hydrogenic atom , the scaling of this characteristic ionization field goes as Z³, where Z is the nuclear charge. This scaling arises because the ionization energy scales as Z² and the orbital radius as Z⁻¹. More accurate and general formulas for the tunneling from hydrogen orbitals can also be obtained.
[ 3 ] As an empirical point of reference, the characteristic electric field E_a for the ordinary hydrogen atom is about 51 V/Å (5.1 × 10³ MV/cm) and the characteristic frequency ω_a is 4.1 × 10⁴ THz. The ionization rate of a hydrogen atom in an alternating electric field, like that of a laser, can be treated, in the appropriate limit, as the DC ionization rate averaged over a single period of the electric field's oscillation. Multiphoton and tunnel ionization of an atom or a molecule describe the same process, by which a bound electron, through the absorption of more than one photon from the laser field, is ionized. The difference between them is a matter of definition under different conditions. They can henceforth be called multiphoton ionization (MPI) whenever the distinction is not necessary. The dynamics of the MPI can be described by finding the time evolution of the state of the atom, which is described by the Schrödinger equation . When the intensity of the laser is strong, lowest-order perturbation theory is not sufficient to describe the MPI process. In this case, the laser field at larger distances from the nucleus is more important than the Coulomb potential, and the dynamics of the electron in the field should be properly taken into account. The first work in this category was published by Leonid Keldysh . [ 4 ] He modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states (the states of a free electron in an electromagnetic field [ 5 ] ). In this model, the perturbation of the ground state by the laser field is neglected, and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of the Coulomb interaction on the final state of the electron. The Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser in regions near the nucleus. A. M. Perelomov, V. S. Popov and M. V. Terent'ev [ 6 ] [ 7 ] included the Coulomb interaction at larger internuclear distances. Their model (called the PPT model after their initials) was derived for a short-range potential and includes the effect of the long-range Coulomb interaction through a first-order correction in the quasi-classical action. In the quasi-static limit, the PPT model approaches the ADK model of M. V. Ammosov, N. B. Delone, and V. P. Krainov. [ 8 ] Many experiments have been carried out on the MPI of rare gas atoms using strong laser pulses, measuring both the total ion yield and the kinetic energy of the electrons. Here, one only considers the experiments designed to measure the total ion yield. Among these experiments are those by S. L. Chin et al., [ 9 ] S. Augst et al. [ 10 ] and T. Auguste et al. [ 11 ] Chin et al. used a 10.6 μm CO₂ laser in their experiment. Due to the very small frequency of the laser, the tunneling is strictly quasi-static, a characteristic that is not easily attainable using pulses in the near-infrared or visible region of frequencies. These findings weakened the suspicion about the applicability of models basically founded on the assumption of a structureless atom. S. Larochelle et al.
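For a feel for how sharply this rate depends on the field, here is a minimal Python sketch of the Landau DC expression given above, using the empirical characteristic values just quoted (the specific test fields are illustrative choices, not from the article):

```python
import math

# Quasi-static tunneling rate for ground-state hydrogen, from the Landau
# form quoted above: w = 4 * omega_a * (E_a / E) * exp(-2 * E_a / (3 * E)).
E_A = 5.14e11      # characteristic field E_a, V/m (about 51 V/Angstrom)
OMEGA_A = 4.13e16  # characteristic frequency omega_a, 1/s (about 4.1e4 THz)

def landau_rate(E: float) -> float:
    """DC ionization rate (1/s) for a field E in V/m, valid for E << E_a."""
    return 4.0 * OMEGA_A * (E_A / E) * math.exp(-2.0 * E_A / (3.0 * E))

for E in (1.0e10, 2.0e10, 5.0e10):  # fields well below E_a
    print(f"E = {E:.1e} V/m -> w ≈ {landau_rate(E):.3e} 1/s")
```

Doubling the field from 1 × 10¹⁰ to 2 × 10¹⁰ V/m raises the rate by roughly seven orders of magnitude, which is the exponential sensitivity the text describes.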
[ 12 ] have compared the theoretically predicted ion-versus-intensity curves of rare gas atoms interacting with a Ti:sapphire laser with experimental measurements. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter. The dynamics of the MPI can be described by finding the time evolution of the state of the atom, which is described by the Schrödinger equation. The form of this equation in the electric field (length) gauge, assuming the single active electron (SAE) approximation and using the dipole approximation, is i ∂Ψ(r, t)/∂t = [ −∇²/2 + V(r) + r · E(t) ] Ψ(r, t) (1), where E(t) is the electric field of the laser and V(r) is the static Coulomb potential of the atomic core at the position of the active electron. By finding the exact solution of equation (1) for a potential √(2E_i) δ(r) (E_i the magnitude of the ionization potential of the atom), the probability current J(r, t) is calculated. Then the total MPI rate from a short-range potential for linear polarization, W(E, ω), is found from this current, where ω is the frequency of the laser, which is assumed to be polarized in the direction of the x axis. The effect of the ionic potential, which behaves like Z/r (Z is the charge of the atomic or ionic core) at a long distance from the nucleus, is calculated through a first-order correction to the semi-classical action. The result is that the effect of the ionic potential is to increase the rate of MPI by a multiplicative Coulomb factor, where n* = Z/√(2E_i) and F is the peak electric field of the laser. Thus, the total rate of MPI from a state with quantum numbers l and m in a laser field for linear polarization is expressed through the coefficients f_lm, g(γ) and C_{n*l*}, together with a coefficient A_m(ω, γ), where γ = ω√(2E_i)/F is Keldysh's adiabaticity parameter and l* = n* − 1. The ADK model is the limit of the PPT model when γ approaches zero (the quasi-static limit). In this case, known as quasi-static tunnelling (QST), the ionization rate reduces to the ADK formula. In practice, the limit of the QST regime is γ < 1/2. This is justified by the following consideration. [ 13 ] The ease or difficulty of tunneling can be expressed as the ratio between the equivalent classical time it takes for the electron to tunnel through the potential barrier and the time during which the potential is bent down. This ratio is indeed γ, since the potential is bent down during half a cycle of the field oscillation, and the ratio can be expressed as a ratio of the tunneling time τ_T (the classical time of flight of an electron through the potential barrier) to the period of the laser field oscillation τ_L.
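The Keldysh parameter γ = ω√(2E_i)/F defined above is straightforward to evaluate in atomic units. A small sketch for ground-state hydrogen in a Ti:sapphire field; the 800 nm wavelength, the 3.51 × 10¹⁶ W/cm² atomic unit of intensity, and the chosen intensities are standard illustrative values, not taken from this article:

```python
import math

# Keldysh adiabaticity parameter gamma = omega * sqrt(2 * E_i) / F (atomic units)
E_I = 0.5               # ionization potential of hydrogen, a.u. (13.6 eV)
OMEGA = 45.563 / 800.0  # photon energy in a.u. for an 800 nm Ti:sapphire laser

def keldysh_gamma(intensity_w_cm2: float) -> float:
    # Peak field in a.u.; one atomic unit of intensity is ~3.51e16 W/cm^2.
    F = math.sqrt(intensity_w_cm2 / 3.51e16)
    return OMEGA * math.sqrt(2.0 * E_I) / F

for I in (1e13, 1e14, 1e15):
    print(f"I = {I:.0e} W/cm^2 -> gamma ≈ {keldysh_gamma(I):.2f}")
```

Only the highest of these intensities pushes γ below the 1/2 threshold quoted above, i.e. into the quasi-static tunnelling (QST) regime where the ADK limit applies.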
Contrary to the abundance of theoretical and experimental work on the MPI of rare gas atoms, research on the prediction of the MPI rate of neutral molecules was scarce until recently. Walsh et al. [ 14 ] measured the MPI rate of some diatomic molecules interacting with a 10.6 μm CO₂ laser. They found that these molecules are tunnel-ionized as if they were structureless atoms with an ionization potential equivalent to that of the molecular ground state. A. Talebpour et al. [ 15 ] [ 16 ] were able to quantitatively fit the ionization yield of diatomic molecules interacting with a Ti:sapphire laser pulse. The conclusion of the work was that the MPI rate of a diatomic molecule can be predicted from the PPT model by assuming that the electron tunnels through a barrier given by Z_eff/r instead of the 1/r barrier used in the calculation of the MPI rate of atoms. The importance of this finding is its practicality: the only parameter needed for predicting the MPI rate of a diatomic molecule is the single parameter Z_eff. A semi-empirical model for the MPI rate of unsaturated hydrocarbons is also feasible. [ 17 ] This simplistic view ignores the dependence of ionization on the orientation of the molecular axis with respect to the polarization of the electric field of the laser, which is determined by the symmetries of the molecular orbitals. This dependence can be used to follow molecular dynamics using strong-field multiphoton ionization. [ 18 ] The question of how long a tunneling particle spends inside the barrier region has remained unresolved since the early days of quantum mechanics. It is sometimes suggested that the tunneling time is instantaneous because both the Keldysh and the closely related Büttiker–Landauer [ 19 ] times are imaginary (corresponding to the decay of the wavefunction under the barrier). In a recent publication, [ 20 ] the main competing theories of tunneling time are compared against experimental measurements using the attoclock in strong-laser-field ionization of helium atoms. Refined attoclock measurements reveal a real and not instantaneous tunneling delay time over a large intensity regime. The experimental results are found to be compatible with the probability distribution of tunneling times constructed using a Feynman path integral (FPI) formulation. [ 21 ] [ 22 ] However, later work on atomic hydrogen has demonstrated that most of the tunneling time measured in such experiments arises purely from the long-range Coulomb force exerted by the ion core on the outgoing electron. [ 23 ]
https://en.wikipedia.org/wiki/Tunnel_ionization
Tunnel magnetoresistance ( TMR ) is a magnetoresistive effect that occurs in a magnetic tunnel junction ( MTJ ), which is a component consisting of two ferromagnets separated by a thin insulator . If the insulating layer is thin enough (typically a few nanometres ), electrons can tunnel from one ferromagnet into the other. Since this process is forbidden in classical physics, the tunnel magnetoresistance is a strictly quantum mechanical phenomenon, and lies in the study of spintronics . Magnetic tunnel junctions are manufactured in thin film technology. On an industrial scale the film deposition is done by magnetron sputter deposition ; on a laboratory scale, molecular beam epitaxy , pulsed laser deposition and electron beam physical vapor deposition are also utilized. The junctions are prepared by photolithography . The direction of the two magnetizations of the ferromagnetic films can be switched individually by an external magnetic field . If the magnetizations are in a parallel orientation, it is more likely that electrons will tunnel through the insulating film than if they are in the oppositional (antiparallel) orientation. Consequently, such a junction can be switched between two states of electrical resistance , one with low and one with very high resistance. The effect was originally discovered in 1975 by Michel Jullière (University of Rennes, France) in Fe / Ge-O / Co junctions at 4.2 K. The relative change of resistance was around 14%, and did not attract much attention. [ 1 ] In 1991 Terunobu Miyazaki ( Tohoku University , Japan) found a change of 2.7% at room temperature. Later, in 1994, Miyazaki found 18% in junctions of iron separated by an amorphous aluminum oxide insulator [ 2 ] and Jagadeesh Moodera found 11.8% in junctions with electrodes of CoFe and Co. [ 3 ] The highest effects observed at this time with aluminum oxide insulators were around 70% at room temperature. Since the year 2000, tunnel barriers of crystalline magnesium oxide (MgO) have been under development. In 2001 Butler and Mathon independently made the theoretical prediction that using iron as the ferromagnet and MgO as the insulator, the tunnel magnetoresistance can reach several thousand percent. [ 4 ] [ 5 ] The same year, Bowen et al. were the first to report experiments showing a significant TMR in an MgO-based magnetic tunnel junction [Fe/MgO/FeCo(001)]. [ 6 ] In 2004, Parkin and Yuasa were able to make Fe/MgO/Fe junctions that reach over 200% TMR at room temperature. [ 7 ] [ 8 ] In 2008, effects of up to 604% at room temperature and more than 1100% at 4.2 K were observed in junctions of CoFeB/MgO/CoFeB by the group of S. Ikeda and H. Ohno at Tohoku University in Japan. [ 9 ] The read heads of modern hard disk drives work on the basis of magnetic tunnel junctions. TMR, or more specifically the magnetic tunnel junction, is also the basis of MRAM , a type of non-volatile memory . First-generation MRAM technologies relied on creating cross-point magnetic fields on each bit to write the data on it, although this approach has a scaling limit at around 90–130 nm. [ 10 ] There are two second-generation techniques currently being developed: thermally assisted switching (TAS) [ 10 ] and spin-transfer torque . Magnetic tunnel junctions are also used for sensing applications. Today they are commonly used as position sensors and current sensors in various automotive, industrial and consumer applications, where they are replacing Hall sensors due to their higher performance.
[ 11 ] The relative resistance change, or effect amplitude, is defined as TMR = (R_ap − R_p)/R_p, where R_ap is the electrical resistance in the anti-parallel state and R_p is the resistance in the parallel state. The TMR effect was explained by Jullière with the spin polarizations of the ferromagnetic electrodes. The spin polarization P is calculated from the spin-dependent density of states (DOS) D at the Fermi energy : P = (D↑(E_F) − D↓(E_F)) / (D↑(E_F) + D↓(E_F)). The spin-up electrons are those with spin orientation parallel to the external magnetic field, whereas the spin-down electrons have anti-parallel alignment with the external field. The relative resistance change is now given by the spin polarizations of the two ferromagnets, P₁ and P₂: TMR = 2P₁P₂ / (1 − P₁P₂). If no voltage is applied to the junction, electrons tunnel in both directions with equal rates. With a bias voltage U, electrons tunnel preferentially to the positive electrode. With the assumption that spin is conserved during tunneling, the current can be described in a two-current model. The total current is split into two partial currents, one for the spin-up electrons and another for the spin-down electrons. These vary depending on the magnetic state of the junction. There are two possibilities to obtain a defined anti-parallel state. First, one can use ferromagnets with different coercivities (by using different materials or different film thicknesses). Second, one of the ferromagnets can be coupled with an antiferromagnet ( exchange bias ); in this case the magnetization of the uncoupled electrode remains "free". The TMR becomes infinite if P₁ and P₂ equal 1, i.e. if both electrodes have 100% spin polarization. In this case the magnetic tunnel junction becomes a switch that toggles magnetically between low resistance and infinite resistance. Materials that come into consideration for this are called ferromagnetic half-metals : their conduction electrons are fully spin-polarized. This property is theoretically predicted for a number of materials (e.g. CrO₂ and various Heusler alloys ), but its experimental confirmation has been the subject of subtle debate. Nevertheless, if one considers only those electrons that enter into transport, measurements by Bowen et al. of up to 99.6% [ 12 ] spin polarization at the interface between La₀.₇Sr₀.₃MnO₃ and SrTiO₃ pragmatically amount to experimental proof of this property. The TMR decreases with both increasing temperature and increasing bias voltage. Both can be understood in principle through magnon excitations and interactions with magnons, as well as through tunnelling with respect to localized states induced by oxygen vacancies (see the symmetry filtering discussion hereafter). [ 13 ] Prior to the introduction of epitaxial magnesium oxide (MgO), amorphous aluminum oxide was used as the tunnel barrier of the MTJ, and typical room-temperature TMR was in the range of tens of percent. MgO barriers increased TMR to hundreds of percent. This large increase reflects a synergetic combination of electrode and barrier electronic structures, which in turn reflects the achievement of structurally ordered junctions.
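Jullière's two formulas above are simple enough to tabulate. A minimal sketch (the sample polarization values are illustrative, not from the text):

```python
# Julliere's model: TMR = 2*P1*P2 / (1 - P1*P2),
# with TMR defined as (R_ap - R_p) / R_p.
def julliere_tmr(p1: float, p2: float) -> float:
    return 2.0 * p1 * p2 / (1.0 - p1 * p2)

for p in (0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"P1 = P2 = {p:.2f} -> TMR ≈ {julliere_tmr(p, p) * 100:.0f}%")
```

The amplitude diverges as the polarizations approach 1, which is the half-metal limit described above: at P = 0.99 the model already gives nearly 10,000%.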
Indeed, MgO filters the tunneling transmission of electrons with a particular symmetry that are fully spin-polarized within the current flowing across body-centered cubic Fe-based electrodes. Thus, in the MTJ's parallel (P) state of electrode magnetization, electrons of this symmetry dominate the junction current. In contrast, in the MTJ's antiparallel (AP) state, this channel is blocked, such that electrons with the next most favorable symmetry to transmit dominate the junction current. Since those electrons tunnel with respect to a larger barrier height, this results in the sizeable TMR. Beyond these large values of TMR across MgO-based MTJs, [ 9 ] this impact of the barrier's electronic structure on tunnelling spintronics has been indirectly confirmed by engineering the junction's potential landscape for electrons of a given symmetry. This was first achieved by examining how the electrons of a lanthanum strontium manganite half-metallic electrode with both full spin (P = +1 [ 12 ] ) and symmetry polarization tunnel across an electrically biased SrTiO₃ tunnel barrier. [ 14 ] The conceptually simpler experiment of inserting an appropriate metal spacer at the junction interface during sample growth was also later demonstrated. [ 15 ] [ 16 ] While theory, first formulated in 2001, [ 4 ] [ 5 ] predicts large TMR values associated with a 4 eV barrier height in the MTJ's P state and 12 eV in the MTJ's AP state, experiments reveal barrier heights as low as 0.4 eV. [ 7 ] This contradiction is lifted if one takes into account the localized states of oxygen vacancies in the MgO tunnel barrier. Extensive solid-state tunnelling spectroscopy experiments across MgO MTJs revealed in 2014 [ 13 ] that the electronic retention on the ground and excited states of an oxygen vacancy, which is temperature-dependent, determines the tunnelling barrier height for electrons of a given symmetry, and thus crafts the effective TMR ratio and its temperature dependence. This low barrier height in turn enables the high current densities required for spin-transfer torque, discussed hereafter. The effect of spin-transfer torque has been studied and applied widely in MTJs, where a tunnelling barrier is sandwiched between two ferromagnetic electrodes such that the right electrode has free magnetization, while the left electrode (with fixed magnetization) acts as a spin-polarizer. The junction may then be pinned to a selecting transistor in a magnetoresistive random-access memory device, or connected to a preamplifier in a hard disk drive application.
The spin-transfer torque vector, driven by the linear response voltage, can be computed from the expectation value of the torque operator: T = Tr[T̂ ρ̂_neq], where ρ̂_neq is the gauge-invariant nonequilibrium density matrix for steady-state transport, in the zero-temperature limit, in the linear-response regime, [ 17 ] and the torque operator T̂ is obtained from the time derivative of the spin operator: T̂ = dŜ/dt = −(i/ℏ)[(ℏ/2)σ, Ĥ]. Using the general form of a 1D tight-binding Hamiltonian, Ĥ = Ĥ₀ − Δ(σ·m)/2, where the total magnetization (as a macrospin) is along the unit vector m, and using the Pauli matrix identities involving arbitrary classical vectors p and q, namely (σ·p)(σ·q) = p·q + i(p×q)·σ, (σ·p)σ = p + iσ×p, and σ(σ·q) = q + iq×σ, it is then possible to obtain an analytical expression for T̂ (which can be expressed in compact form using Δ, m, and the vector of Pauli spin matrices σ = (σ_x, σ_y, σ_z)). The spin-transfer torque vector in general MTJs has two components, a parallel and a perpendicular component: the parallel component T_∥ = √(T_x² + T_z²) and the perpendicular component T_⊥ = T_y. In symmetric MTJs (made of electrodes with the same geometry and exchange splitting), the spin-transfer torque vector has only one active component, as the perpendicular component disappears: T_⊥ ≡ 0. [ 18 ] Therefore, only T_∥ versus θ needs to be plotted at the site of the right electrode to characterise tunnelling in symmetric MTJs, making them appealing for production and characterisation at an industrial scale. Note: in these calculations the active region (for which the retarded Green's function needs to be calculated) should consist of the tunnel barrier plus the right ferromagnetic layer of finite thickness (as in realistic devices). The active region is attached to the left ferromagnetic electrode (modeled as a semi-infinite tight-binding chain with non-zero Zeeman splitting ) and the right normal electrode (a semi-infinite tight-binding chain without any Zeeman splitting), as encoded by the corresponding self-energy terms. Theoretical tunnelling magnetoresistance ratios of 10000% [ 19 ] have been predicted.
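The Pauli-matrix identities quoted above are easy to verify numerically. A short sketch checking the first one, (σ·p)(σ·q) = p·q + i(p×q)·σ, for random real vectors (the random vectors are arbitrary test inputs):

```python
import numpy as np

# Pauli matrices
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def dot_sigma(v: np.ndarray) -> np.ndarray:
    """Return sigma . v for a classical 3-vector v."""
    return v[0] * SX + v[1] * SY + v[2] * SZ

rng = np.random.default_rng(0)
p, q = rng.normal(size=3), rng.normal(size=3)

lhs = dot_sigma(p) @ dot_sigma(q)
rhs = np.dot(p, q) * np.eye(2) + 1j * dot_sigma(np.cross(p, q))
print(np.allclose(lhs, rhs))  # True
```

These are the same identities that allow the torque operator T̂ to be reduced to the compact form in Δ, m and σ mentioned above.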
However, the largest effects that have been observed are only 604%. [ 20 ] One suggestion is that grain boundaries could be affecting the insulating properties of the MgO barrier; however, the structure of films in buried stack structures is difficult to determine. [ 21 ] The grain boundaries may act as short-circuit conduction paths through the material, reducing the resistance of the device. Recently, using new scanning transmission electron microscopy techniques, the grain boundaries within FeCoB/MgO/FeCoB MTJs have been atomically resolved. This has allowed first-principles density functional theory calculations to be performed on structural units that are present in real films. Such calculations have shown that the band gap can be reduced by as much as 45%. [ 22 ] In addition to grain boundaries, point defects such as boron interstitials and oxygen vacancies could be significantly altering the tunnelling magnetoresistance. Recent theoretical calculations have revealed that boron interstitials introduce defect states in the band gap, potentially reducing the TMR further. [ 23 ] These theoretical calculations have also been backed up by experimental evidence showing the nature of boron within the MgO layer in two different systems and how the TMR differs between them. [ 24 ]
https://en.wikipedia.org/wiki/Tunnel_magnetoresistance
The tunnel problem is a philosophical thought experiment first introduced by Jason Millar in 2014. It is a variation on the classic trolley problem designed to focus on the ethics of autonomous vehicles , as well as the question of who gets to decide how they react in life-and-death scenarios. The tunnel problem is intended to draw one's attention to a specific issue in design/engineering ethics, and was first presented as follows: Tunnel Problem: You are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child attempts to run across the road but trips in the center of the lane, effectively blocking the entrance to the tunnel. The car has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. How should the car react? [ 1 ] Similar thought experiments have been brought forth by other philosophers focusing on the topic of autonomous cars. [ 2 ] The premise of these thought experiments is that even with highly sophisticated self-driving-car technologies, the cars will face situations where harm cannot be avoided. The tunnel problem is meant to focus one's attention on two questions that it raises for designers and users of autonomous cars: how should the car react, and who should decide how the car reacts? In its original formulation, the tunnel problem is discussed as an "end-of-life" decision for the passenger of the car: depending on the way the car reacts, the passenger either lives or dies. Because of that feature, Millar argues that the tunnel problem forces us to question whether designers/engineers have the legitimate moral authority to make the decision on behalf of autonomous car users. Indeed, the second question is meant to challenge the standard notion that all design decisions are just technical in nature. Where design features provide "material answers to moral questions" [ 3 ] in the use context, Millar argues that designers must find ways to incorporate user preferences in order to avoid unjustifiable paternalistic relationships between technology and the user. [ 4 ] Because the tunnel problem focuses on ethical design issues in semi-autonomous technologies, it is considered a problem in roboethics . [ 5 ] Roger Crisp featured the tunnel problem on the Oxford University Practical Ethics blog; the entry contains a critique of the problem as presented by Millar. [ 6 ] The tunnel problem was the focus of a poll conducted by the Open Roboethics Initiative (ORi). In response, 64% of participants said the car should continue straight and kill the child, while 36% said it should swerve and kill the passenger. In addition, 48% of respondents reported that the decision was "easy", while 28% and 24% claimed it was "moderately difficult" and "difficult" respectively. When asked who should make the decision, only 12% felt the designer/manufacturer should make it, 44% felt the passenger should make it, and 33% thought it should be left to lawmakers. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Tunnel_problem
Tunnel warfare refers to aspects of warfare relating to tunnels and other underground cavities. It includes the construction of underground facilities in order to attack or defend, and the use of existing natural caves and artificial underground facilities for military purposes. Tunnels can be used to undermine fortifications and slip into enemy territory for a surprise attack, while they can strengthen a defense by creating the possibility of ambush, counterattack and the ability to transfer troops from one portion of the battleground to another unseen and protected. Tunnels can also serve as shelter from enemy attack. Since antiquity, sappers have used mining against walled cities, fortresses, castles or other strongly held and fortified military positions. Defenders have dug counter-mines to attack miners or destroy a mine threatening their fortifications. Since tunnels are commonplace in urban areas, tunnel warfare is often a feature, though usually a minor one, of urban warfare . A good example of this was seen in the Syrian Civil War in Aleppo , where in March 2015 rebels planted a large amount of explosives under the Syrian Air Force Intelligence Directorate headquarters. Tunnels are narrow and restrict fields of fire; thus, troops in a tunnel usually have only a few areas exposed to fire or sight at any one time. They can be part of an extensive labyrinth and have cul-de-sacs and reduced lighting, typically creating a closed-in night combat environment. The Greek historian Polybius , in his Histories , gives a graphic account of mining and counter-mining at the Roman siege of Ambracia : The Aetolians ... offered a gallant resistance to the assault of the siege artillery and [the Romans], therefore, in despair had recourse to mines and tunnels. Having safely secured the central one of their three works, and carefully concealed the shaft with wattle screens, they erected in front of it a covered walk or stoa about two hundred feet long, parallel with the wall; and beginning digging from that, they carried it on unceasingly day and night, working in relays. For a considerable number of days the besieged did not discover them carrying the earth away through the shaft; but when the heap of earth thus brought out became too high to be concealed from those inside the city, the commanders of the besieged garrison set to work vigorously digging a trench inside, parallel to the wall and to the stoa which faced the towers. When the trench was made to the required depth, they next placed in a row along the side of the trench nearest the wall a number of brazen vessels made very thin; and, as they walked along the bottom of the trench past these, they listened for the noise of the digging outside. Having marked the spot indicated by any of these brazen vessels, which were extraordinarily sensitive and vibrated to the sound outside, they began digging from within, at right angles to the trench, another tunnel leading under the wall, so calculated as to exactly hit the enemy's tunnel. This was soon accomplished, for the Romans had not only brought their mine up to the wall, but had under-pinned a considerable length of it on either side of their mine; and thus the two parties found themselves face to face. [ 1 ] The Aetolians then countered the Roman mine with smoke from burning feathers with charcoal, [ 1 ] in essence an early form of chemical warfare .
Another extraordinary use of siege-mining in ancient Greece was during Philip V of Macedon 's siege of the little town of Prinassos . According to Polybius , "the ground around the town was extremely rocky and hard, making any siege-mining virtually impossible. However, Philip ordered his soldiers, under the cover of night, to collect earth from elsewhere and throw it all down at the fake tunnel's entrance, making it look like the Macedonians were almost finished completing the tunnels. Eventually, when Philip V announced that large parts of the town walls were undermined, the citizens surrendered without delay." [ 2 ] Polybius also describes the Seleucids and Parthians employing tunnels and counter-tunnels during the siege of Sirynx. [ 3 ] The oldest known sources on employing tunnels and trenches for guerrilla-like warfare are Roman . After the Revolt of the Batavi , the insurgent tribes soon started to change their defensive practices, from relying only on local strongholds to using the advantage of wider terrain. Hidden trenches for assembling surprise attacks were dug, connected via tunnels for secure fallback. [ 4 ] In action, barriers were often used to prevent the enemy from pursuing. Roman legions entering the country soon learned to fear this warfare, as the ambushing of marching columns caused high casualties. Therefore, they approached possibly fortified areas very carefully, giving time to evaluate, assemble troops and organize them. When the Romans were themselves on the defensive, the large underground aqueduct system was used in the defense of Rome , as well as to evacuate fleeing leaders. The use of tunnels as a means of guerrilla-like warfare against the Roman Empire was also a common practice of the Jewish rebels in Judea during the Bar Kokhba revolt (132–136 AD). With time the Romans understood that efforts should be made to expose these tunnels. Once an entrance was discovered, a fire was lit, either smoking out the rebels or suffocating them to death. Well-preserved evidence of mining and counter-mining operations has been unearthed at the fortress of Dura-Europos , which fell to the Sassanians in 256/7 AD during the Roman–Persian wars . Mining was a siege method used in ancient China from at least the Warring States period (481–221 BC) onward. When enemies attempted to dig tunnels under walls for mining or entry into the city, the defenders used large bellows to pump smoke into the tunnels in order to suffocate the intruders. [ 5 ] In warfare during the Middle Ages , a "mine" was a tunnel dug to bring down castles and other fortifications. Attackers used this technique when the fortification was not built on solid rock, developing it as a response to stone-built castles that could not be burned like earlier-style wooden forts. A tunnel would be excavated under the outer defenses either to provide access into the fortification or to collapse the walls. These tunnels would normally be supported by temporary wooden props as the digging progressed. Once the excavation was complete, the attackers would collapse the wall or tower being undermined by filling the excavation with combustible material that, when lit, would burn away the props, leaving the structure above unsupported and thus liable to collapse. A tactic related to mining is sapping the wall, where engineers would dig at the base of a wall with crowbars and picks.
Peter of les Vaux-de-Cernay recounts how, at the siege of Carcassonne during the Albigensian Crusade, "after the top of the wall had been somewhat weakened by bombardment from petraries, our engineers succeeded with great difficulty in bringing a four-wheeled wagon, covered in oxhides, close to the wall, from which they set to work to sap the wall". [ 6 ] As in the siege of Carcassonne, defenders worked to prevent sapping by dumping anything they had down on attackers who tried to dig under the wall. Successful sapping usually ended the battle, since the defenders would no longer be able to defend their position and would surrender, or the attackers could enter the fortification and engage the defenders in close combat. Several methods could resist or counter undermining. Often the siting of a castle could make mining difficult. The walls of a castle could be constructed either on solid rock or on sandy or water-logged land, making it difficult to dig mines. A very deep ditch or moat could be constructed in front of the walls, as was done at Pembroke Castle , or even artificial lakes, as was done at Kenilworth Castle . This makes it more difficult to dig a mine, and even if a breach is made, the ditch or moat makes exploiting the breach difficult. Defenders could also dig counter-mines, from which they could dig into the attackers' tunnels and sortie into them to either kill the miners or set fire to the pit-props to collapse the attackers' tunnel. Alternatively, they could undermine the attackers' tunnels and create a camouflet to collapse them. Finally, if the walls were breached, the defenders could either place obstacles in the breach, for example a cheval de frise to hinder a forlorn hope , or construct a coupure . The great concentric ringed fortresses, like Beaumaris Castle on Anglesey , were designed so that the inner walls were ready-built coupures: if an attacker succeeded in breaching the outer walls, he would enter a killing field between the lower outer walls and the higher inner walls. A major change took place in the art of tunnel warfare in the 15th century in Italy with the development of gunpowder , since its use reduced the effort required to undermine a wall while also increasing lethality. Ivan the Terrible took Kazan with the use of gunpowder explosions to undermine its walls. Many fortresses built counter-mine galleries, "hearing tunnels" which were used to listen for enemy mines being built; at a distance of about fifty yards they could be used to detect tunneling. The Kremlin had such tunnels. From the 16th century, saps began to be used during assaults on enemy positions. The Austrian general of Italian origin Raimondo Montecuccoli (1609–1680), in his classic work on military affairs, described methods of destroying and countering enemy saps. In his paper on "the assaulting of fortresses", Vauban (1633–1707), the creator of the French school of fortification, gave a theory of mine attack and showed how to calculate various saps and the amount of gunpowder needed for explosions. As early as 1840, Eduard Totleben and Schilder-Schuldner had been engaged on questions of the organisation and conduct of underground attacks. They began to use electric current to detonate charges, and special boring instruments of complex design were developed. In the Siege of Sevastopol (1854–1855), underground fighting reached an immense scale. At first the allies began digging saps without any precautions.
After a series of explosions caused by counter-mine action, the allies increased the depth of their tunnels but began to meet rocky ground, and the underground war had to return to higher levels. During the siege, Russian sappers dug 6.8 kilometres (4.2 mi) of saps and counter-mines; over the same period the allies dug 1.3 kilometres (0.81 mi). The Russians expended 12 tons of gunpowder in the underground war while the allies used 64 tons. These figures show that the Russians created a more extensive network of tunnels and carried out better-targeted attacks with only minimal use of gunpowder. The allies used outdated fuses, so that many charges failed to go off. Conditions in the tunnels were severe: wax candles often went out, sappers fainted due to stale air, and ground water flooded tunnels and counter-mines. The Russians repulsed the underground attacks and started to dig tunnels under the allies' fortifications. The Russian success in the underground war was recognised by the allies; The Times noted that the laurels for this kind of warfare must go to the Russians. In 1864, during the Siege of Petersburg by the Union Army of the Potomac, a mine made of 3,600 kilograms (8,000 lb) of gunpowder was set off approximately 6 metres (20 ft) beneath the Confederate defenses opposite Maj. Gen. Ambrose E. Burnside's IX Corps sector. The explosion blew a gap in the Confederate defenses of Petersburg, Virginia , creating a crater 52 metres (170 ft) long, 30 to 37 metres (100 to 120 ft) wide, and at least 9 metres (30 ft) deep. The combat was accordingly known as the Battle of the Crater . From this propitious beginning, everything deteriorated rapidly for the Union attackers. Unit after unit charged into and around the crater, where soldiers milled in confusion. The Confederates quickly recovered and launched several counterattacks led by Brig. Gen. William Mahone . The breach was sealed off, and Union forces were repulsed with severe casualties. The horror of this engagement was portrayed in the Charles Frazier novel, and subsequent Anthony Minghella movie, Cold Mountain . Earlier, during the Siege of Vicksburg in 1863, Union troops led by General Ulysses S. Grant tunnelled under the Confederate trenches and detonated a mine beneath the 3rd Louisiana Redan on June 25, 1863. The subsequent assault, led by General John A. Logan, gained a foothold in the Confederate trenches where the crater was formed, but the attackers were eventually forced to withdraw. The increased firepower that came with the use of smokeless powder , cordite and dynamite by the end of the 19th century made it very expensive to build above-ground fortifications that could withstand any attack. As a result, fortifications were covered with earth and eventually were built entirely underground to maximize protection. For the purpose of firing artillery and machine guns , emplacements had loopholes . Mining saw a particular resurgence as a military tactic during the First World War , when army engineers attempted to break the stalemate of trench warfare by tunneling under no man's land and laying large quantities of explosives beneath the enemy's trenches. As in siege warfare, tunnel warfare was possible due to the static nature of the fighting. During the First World War, in the Gallipoli campaign and on the Western and Italian Fronts, the military employed specialist miners to dig tunnels. On the Italian Front, the high peaks of the Dolomites range were an area of fierce mountain warfare and mining operations .
In order to protect their soldiers from enemy fire and the hostile alpine environment, both Austro-Hungarian and Italian military engineers constructed fighting tunnels which offered a degree of cover and allowed better logistics support . In addition to building underground shelters and covered supply routes for their soldiers, both sides also attempted to break the stalemate of trench warfare by tunneling under no man's land and placing explosive charges beneath the enemy's positions. Their efforts on high mountain peaks such as Col di Lana , Lagazuoi and Marmolada were portrayed in fiction in Luis Trenker 's Mountains on Fire film of 1931. At Gallipoli and on the Western Front, the main objective of tunnel warfare was to place large quantities of explosives beneath enemy defensive positions. When it was detonated, the explosion would destroy that section of the trench. The infantry would then advance towards the enemy front line, hoping to take advantage of the confusion that followed the explosion of an underground mine. It could take as long as a year to dig a tunnel and place a mine. As well as digging their own tunnels, the military engineers had to listen out for enemy tunnellers. On occasion, miners accidentally dug into the opposing side's tunnel and an underground fight took place. When an enemy's tunnel was found, it was usually destroyed by placing an explosive charge inside. During the height of the underground war on the Western Front in June 1916, British tunnellers fired 101 mines or camouflets, while German tunnellers fired 126 mines or camouflets. This amounts to a total of 227 mine explosions in one month – one detonation every three hours. Large battles, like the Battle of the Somme in 1916 (see mines on the Somme ) and the Battle of Vimy Ridge in 1917, were also supported by mine explosions. Well-known examples are the mines on the Italian Front laid by Austro-Hungarian and Italian miners, where the largest individual mine contained a charge of 50,000 kilograms (110,000 lb) of blasting gelatin , and the activities of the Tunnelling companies of the Royal Engineers on the Western Front. At the beginning of the Somme offensive , the British simultaneously detonated 19 mines of varying sizes beneath the German positions, including two mines that contained 18,000 kilograms (40,000 lb) of explosives. In January 1917, General Plumer gave orders for over 20 mines to be placed under the German lines at Messines . Over the next five months more than 8,000 m (26,000 ft) of tunnel were dug and 450–600 tons of explosive were placed in position. The simultaneous explosion of the mines took place at 3:10 a.m. on 7 June 1917. The blast killed an estimated 10,000 soldiers and was so loud it was heard in London. [ 8 ] The near-simultaneous explosions created 19 large craters and rank among the largest non-nuclear explosions of all time. Two mines were not detonated in 1917 because they had been abandoned before the battle, and four were outside the area of the offensive. On 17 July 1955, a lightning strike set off one of these latter four mines. There were no human casualties, but one cow was killed. Another of the unused mines is believed to have been found in a location beneath a farmhouse, [ 9 ] but no attempt has been made to remove it. [ 10 ] The last mine fired by the British in World War I was near Givenchy on 10 August 1917, [ 11 ] after which the tunnelling companies of the Royal Engineers concentrated on constructing deep dugouts for troop accommodation.
The largest single mines at Messines were at St Eloi , which was charged with 43,400 kilograms (95,600 lb) of ammonal , at Maedelstede Farm, which was charged with 43,000 kg (94,000 lb), and beneath the German lines at Spanbroekmolen, which was charged with 41,000 kg (91,000 lb) of ammonal. The Spanbroekmolen mine created a crater that afterwards measured 130 metres (430 ft) from rim to rim. Now known as the Pool of Peace, it is large enough to house a 12 m (40 ft) deep lake. [ 12 ] On May 10, 1933, Paraguayan troops used a tunnel to attack the Bolivian troops from the rear; the attack was successful. The term tunnel war or tunnel warfare (地道战) was first used for the guerrilla tactic employed by the Chinese in the Second Sino-Japanese War . The tunnel systems were fast and easy to construct and enabled a small force to successfully fight superior enemies. One particular tunnel network, called the "Ranzhuang tunnel", evolved in the course of resisting Japanese counterinsurgency operations in Hebei . In particular, Chinese Communist forces and local peasant resistance used tunnel warfare tactics against the Japanese (and later against the Kuomintang during the Chinese Civil War ). The tunnels were dug beneath the earth to cover the battlefield, with numerous hidden gun holes for surprise attacks. Entrances usually were hidden beneath a straw mat inside a house, or down a well; this allowed for flexible manoeuvres or exits. The main disadvantage of tunnel war was that the Japanese could usually fill the holes or pour water in to suffocate the soldiers inside the tunnels. This proved to be a major problem, but was later solved by installing filters to keep out the water and poisonous gases. It is said that there were even women and children who voluntarily fought in the tunnels. The movie Tunnel War , which is based on the stories about fighting the Japanese in tunnels, made tunnel warfare well known in China. [ 17 ] More films were soon produced and adapted in the same setting. [ 18 ] After the war, the Ranzhuang tunnel site became a key heritage preservation unit promoting patriotism and national defense education. As a famous war tourism site in China, it attracts tens of thousands of visitors each year, and most of the villagers work in the tourism service industry, which is worth US$700,000 each year. [ 19 ] The first to copy tunnel warfare were the Japanese themselves. In the battles of the Western Pacific , they would maximize their capabilities by establishing a strong-point defense, using cave warfare. The first encounter of the US Marines with this new tactic was on the island of Peleliu . The invading marines suffered twice as many casualties as on Tarawa , where the old Japanese tactic of defending the beach had been employed. The pinnacle of this form of defense, however, can be found on Iwo Jima , where the Japanese engineered the whole of Mount Suribachi with many tunnels leading to defensive emplacements, or exits for quick counterattacks. Tunnel warfare by the Japanese forced the US Marines to adopt "blowtorch and corkscrew" tactics to systematically flush out the Japanese defenders, one cave at a time. In Australia, the demand for protection from air attack became more serious in the early 1940s, when there was significant Axis naval activity in Australian waters and three Japanese midget submarines entered and attacked Sydney Harbour in 1942.
[ 20 ] In Sydney in 1941, the Royal Australian Navy excavated a series of tunnels to shelter over 2,500 men working at the naval base from air raids , as well as to transport guns and ammunition within the tunnels, as the Australian government and people expected a Japanese invasion of Australia . [ 21 ] [ 22 ] There are other military fortifications in coastal Sydney that feature a tunnel warfare system, such as the Georges Head Battery (which was constructed in 1801 and was added to the New South Wales State Heritage Register in 1999), [ 23 ] [ 24 ] the Lower Georges Heights Commanding Position (which was built in 1877 and became part of the Sydney Harbour defences , where the underground rooms and tunnels were used to store ammunition), the Henry Head Battery (which was constructed in 1892 and was re-employed during World War II to defend the approaches to Botany Bay ), [ 25 ] the Middle Head Fortifications (a heritage-listed [ 26 ] fort built in 1801), the Malabar Battery (a coastal defense battery built in 1943) and the smaller Steel Point Battery . [ 27 ] In Wollongong , just south of Sydney, there are the Illowra Battery and Drummond Battery . [ 28 ] To the north of Sydney, in Newcastle , the Shepherds Hill military installations , a NSW state heritage-listed site , were built from 1890 to 1940 and consist of a former military gun battery emplacement, a 100 metres (330 ft) long tunnel and an observation post. [ 29 ] As part of the strengthening of Newcastle's defense system, various new projects were undertaken at Shepherds Hill during WWII, such as accommodation for the troops stationed there. [ 29 ] Fort Scratchley , which had close ties to Shepherds Hill, responded to an attack on Newcastle by a Japanese submarine in June 1942; it is the only place on the mainland of Australia known to have returned fire. The batteries at Shepherds Hill formed an integrated system with the batteries at Fort Scratchley, Fort Wallace at Stockton and at Tomaree on Port Stephens . [ 29 ] During the Japanese occupation of the Philippines , the Ilagan Japanese Tunnel was part of a military base built by the Japanese government as headquarters for its soldiers during World War II. [ 30 ] In the Philippines campaign (1941–1942) , Philippine President Manuel L. Quezon , General MacArthur, and other high-ranking military officers, diplomats and their families escaped the bombardment of Manila and were housed in Corregidor 's Malinta Tunnel . Prior to their arrival, Malinta's laterals had served as high command headquarters, a hospital and storage for food and arms. In March 1942, several U.S. Navy submarines arrived on the north side of Corregidor, bringing in mail, orders, and weaponry. During the re-taking of the island by U.S. forces in 1945, Japanese soldiers who had been trapped in the tunnel after the entrance was blocked by gunfire from USS Converse (DD-509) began committing suicide by detonating explosives within the tunnel complex on the night of 23 February 1945. [ 31 ] The collapsed laterals resulting from these explosions have never been excavated. During the Battle of Corregidor , the third lateral on the north side from the Malinta Tunnel's east entrance served as the headquarters of General Douglas MacArthur and the USAFFE . Malinta Tunnel also served as the seat of government of the Commonwealth of the Philippines . At the vicinity of the tunnel's west entrance, in the afternoon of 30 December 1941, Manuel L.
Quezon and Sergio Osmeña took their oaths of office as President and Vice-President of the Philippine Commonwealth in simple ceremonies attended by members of the garrison. [ 32 ] [ 33 ] On the Korean Peninsula, the underground war reached a massive scale. Drawing on its experience in the Second World War, the US relied upon aviation. North Korean forces suffered heavy losses from air strikes, which obliged them to construct underground shelters. Initially, underground fortifications were built independently by individual units and their placement was chaotic; subsequently, they were united into a single large system. The front was 250 kilometres (160 mi) long while the tunnels totalled 500 kilometres (310 mi); for every kilometre of front there were two kilometres of tunnels. A total of 2,000,000 cubic metres (71,000,000 cu ft) of rock was extracted. North Korea developed a theory of underground warfare. Manpower, warehouses and small-calibre guns were housed entirely underground, making them less vulnerable to air strikes and artillery. On the surface, many false targets (bunkers, trenches and decoy entrances to the tunnel system) made it difficult to detect the true targets, forcing US forces to waste ammunition. Directly under the surface, spacious barracks were built, allowing whole units to be brought quickly to the surface for a short time and as quickly returned to shelter underground. North Korea even created underground shelters for artillery: during bombing, the guns were rolled into bunkers located inside mountains; when a lull came, they were rolled back out onto a firing area, fired some shells, and were rolled back into the bunker again. Unlike other examples of underground warfare, North Korean troops did not simply remain in the tunnels. They sheltered there from the bombing and shelling while awaiting US bayonet attacks; when US forces reached the ground in the area of the tunnels, chosen North Korean units would emerge to engage in hand-to-hand combat, taking advantage of their numerical superiority. To this day, the North Korean strategy is to construct as many underground facilities as possible for military use in the event of a US attack. The depth of underground facilities reaches 80 to 100 m (260 to 330 ft), making them difficult to destroy even with tactical nuclear weapons. In the Korean War the tactic of tunnel warfare was also employed by the Chinese forces. "The Chinese resort to tunnel warfare, and the devastating losses to American soldiers, led to the sealing of tunnel entrances by United Nations Command. According to later prisoner of war interrogations, Chinese officers had killed a number of their own soldiers in the tunnels, because the latter had wished to dig their way out and surrender to the United Nations Command." [ 34 ] The Chinese People's Volunteer Army under General Qin Jiwei constructed an intricate series of defensive networks, composed of 9,000 meters (9,800 yd) of tunnels, 50,000 m (55,000 yd) of trenches and 5,000 m (5,500 yd) of obstacles and minefields. [ 35 ] This tunnel network proved its worth in the Battle of Triangle Hill in October and November 1952, where, despite the United States Eighth Army enjoying complete air and artillery superiority, the Chinese managed to keep the hill and inflict heavy casualties on the Americans.
To maintain a full-scale guerrilla war in South Vietnam, camouflaged bases capable of supplying the guerrillas for long periods were used. Throughout South Vietnam there were secret underground bases that operated successfully. There are reports that every villager was obliged to dig 90 cm (35 in) of tunnel a day. The largest underground base was the tunnel network of Cu Chi, with an overall length of 320 km (200 mi). [ citation needed ] To combat the guerrillas in the tunnels, the US used soldiers dubbed tunnel rats. [ 36 ] Part of the Ho Chi Minh trail made use of karst caves. When Vietnam became a French colony again after the Second World War, the Communist Viet Minh started to dig tunnels close to Saigon. After the French army left (having been defeated at Dien Bien Phu), the tunnels were maintained in preparation for a possible war with South Vietnam. Ho Chi Minh, leader of North Vietnam, ordered the expansion of the tunnels after the Americans entered the war between the North and the South; the tunnels would be used by the Viet Cong. The tunnel systems were not occupied temporarily for military purposes, but came to contain whole villages of people living permanently underground. The tunnel system contained a complete world below ground, featuring kitchens, hospitals, workshops, sleeping areas, communications, ammunition storage, and even forms of entertainment. The tunnels eventually became a target for American forces because the enemy not only hid in them, but could strike anywhere within the vast range of the tunnel complex (hundreds of miles) without warning before disappearing again. These tactics were also applied against the Chinese during the Sino-Vietnamese War. The Củ Chi tunnels, a complex of over 200 kilometres (120 mi) of tunnel systems, allowed NLF guerrillas during the Vietnam War to keep a large presence relatively close to Saigon. [ citation needed ] During the Palestinian insurgency in South Lebanon in the 1970s, PLO leader Yasser Arafat instructed his top military commander Khalil al-Wazir (Abu Jihad) to construct a network of underground bunkers and tunnels under Beirut and the Ain al-Hilweh refugee camp to defend against a possible Israeli invasion. Wazir, who had previously travelled to China, Vietnam and North Korea, based this system on the Viet Cong's model, hiding huge quantities of military supplies and linking Beirut with the PLO's strongholds in Southern Lebanon. [ 37 ] A Lebanese Army officer later said: 'We just do not know how many miles of these tunnels there are. Some are new, some are old. We have no maps. They may be booby-trapped. Who knows?' [ 38 ] The tunnels proved useful for the PLO in the 1982 Lebanon War against Israel, being essential in a surprise attack during the Battle of Sultan Yacoub, where Palestinians captured three Israel Defense Forces soldiers, who were later exchanged for over 1,000 imprisoned Palestinians. Perhaps the most effective use of tunnels by the PLO was during the siege of the Ain al-Hilweh refugee camp, when the PLO managed to inflict relatively heavy losses on the IDF and considerably slow its projected advance towards Beirut. Israeli historian Gil'ad Be'eri has written: The refugee camps were heavily fortified, full of bunkers and fire positions. The Palestinian defence at Ein El Hilweh and other refugee camps was based on hand-carried anti-tank weapons such as the RPG (rocket-propelled grenade). (...)
The IDF was not prepared for this kind of fighting, having at hand mainly armoured forces intended for use in open areas. The built-up area inhibited long-range weapons, created an equality between the tank and the RPG (often wielded by 13- or 14-year-old boys), and increased the number of Israeli casualties. (...) Palestinian resistance seriously disrupted the timetable of the planned rapid advance to Beirut. It took eight days before the final crushing of resistance in Ein El Hilweh. The method adopted by the army was to use loud-speakers to call upon the civilian population to move away, search the houses one by one, surround points of remaining active resistance and subdue them by overwhelming fire. [ 39 ] Imad Mughniyeh, one of Abu Jihad's most trusted lieutenants and a member of Fatah's elite Force 17 during the 1982 Lebanon War, would go on to become a senior Hezbollah military commander and was instrumental in the construction of Hezbollah's own tunnel network leading up to the 2006 Lebanon War (see below). [ 40 ] An underground war was also actively pursued during the wars in Afghanistan. Underground water channels extend beneath much of Afghan territory, and in wartime Afghans have used these tunnels both to hide and to appear suddenly behind an enemy force. To clear these tunnels, Soviet troops used explosives and gasoline. The most famous underground base of the Mujahideen, and later the Taliban, was Tora Bora; this tunnel system reached a depth of 400 metres and had a length of 25 kilometres (16 mi). To combat guerrillas in Tora Bora, the United States used special forces. Osama bin Laden had established his base near the Afghan-Pakistani border in 1987, for the Afghan Arab fighters who would later form the core of al-Qaeda. The base was equipped with an extensive tunnel network constructed by al-Qaeda's military chief Mohammed Atef, later one of the masterminds of the September 11 attacks. In May and June 1987 Soviet forces attacked the base with heavy artillery, aerial bombardment and numerous ground assaults in the Battle of Jaji. In the end, the Mujahideen successfully held their complex system of tunnels and caves, named al-Masada, just outside the village of Jaji near the Pakistani border, against Soviet capture. [ 41 ] [ 42 ] Between May 1992 and November 1995, during the Siege of Sarajevo, the Bosnian Army built the Sarajevo Tunnel in order to link the city of Sarajevo, which was entirely cut off by Serbian forces, with Bosnian-held territory on the other side of the Sarajevo Airport, an area controlled by the United Nations. The tunnel linked the Sarajevo neighbourhoods of Dobrinja and Butmir, allowing food, war supplies, and humanitarian aid to come into the city, and people to get out. The tunnel was one of the major ways of bypassing the international arms embargo and providing the city's defenders with weaponry. The Sarajevo Tunnel has since been converted into a war museum, with 20 metres (66 ft) of the original tunnel open to tourists. [ 43 ] Due to the prevalence of bunker-busting munitions and combined-arms maneuver warfare, there has been little need for such operations since the mid-20th century, making tunneling extremely rare outside of insurgencies (which often can use neither of the former). During the Syrian civil war, rebel groups such as the Islamic Front, Al-Nusra and ISIS dug tunnels and used explosives to attack fixed military positions of the Syrian Armed Forces and allied militias.
A notable example is the attack on the Air Force Intelligence Building in Aleppo, where on 4 March 2015 rebel forces detonated a large quantity of explosives in a tunnel dug close to or under the building. The building suffered a partial collapse as a result of the explosion, which was immediately followed by an armed rebel assault. [ 44 ] [ 45 ] In July 2006, a group of Hezbollah operatives crossed from southern Lebanon into northern Israel, killing three Israeli soldiers and abducting two, which started the 2006 Lebanese-Israeli war. Faced with Israel's air attacks, Hezbollah needed a defensive system that would enable its rocket attacks to continue uninterrupted throughout any conflict with Israel. To this end it created an intricate system of tunnels and underground bunkers, anti-tank units, and explosive-ridden areas. [ 46 ] Hezbollah built its sophisticated network of tunnels with North Korean assistance, and it closely resembles North Korea's own network of tunnels in the demilitarized zone separating the two Koreas. [ 47 ] [ 48 ] The underground network included twenty-five kilometres of tunnels, bunkers, fibre-optic communication systems, and storerooms to hold missiles and ammunition. [ 47 ] Its capabilities were extended by Iranian supplies of advanced weaponry and in-depth training of Hezbollah operatives. [ 47 ] In addition to the tunnel network built in southern Lebanon, Hezbollah has constructed tunnels beneath the southern suburbs of Beirut, where its headquarters are located and where it stores missiles. Analysts also suggest that the group maintains tunnels along the Syrian border, facilitating the smuggling of weapons from Iran. [ 49 ] Between December 2018 and January 2019, the Israeli military destroyed six tunnels built by Hezbollah along Israel's border during Operation Northern Shield. [ 50 ] In October 2024, during a ground offensive against Hezbollah, the IDF again targeted tunnels in southern Lebanon, later reporting the discovery and destruction of over 50 tunnel shafts in the area. [ 49 ] The ongoing conflict between the Israeli Army and Islamist militants in the Hamas-governed Gaza Strip is sometimes called a tunnel conflict. In 2017, construction began on a barrier against tunnels along the Israel-Gaza Strip border to prevent the digging of cross-border attack tunnels. On October 30 of that year, one such tunnel was located within Israeli borders and detonated. [ 52 ] Hamas has constructed an extensive network of tunnels under Gaza City and other populated areas of the Gaza Strip, sometimes called the Gaza metro. The network is over 500 km (300 mi) in length according to Hamas claims. According to experts, these tunnels serve multiple purposes for Hamas, including holding kidnapped hostages, smuggling goods, moving militants, storing weapons, and sheltering Hamas members and infrastructure. However, the location and use of military tunnels in densely populated areas have raised concerns about Hamas endangering civilians. [ 53 ] It has been reported during the ongoing Gaza war that Hamas has dug extremely extensive tunnels under Gaza, and that capturing and destroying the tunnels is a "top priority" of the IDF. [ 54 ] A 2024 Royal United Services Institute report details Hamas's use of two types of tunnels: deep, well-equipped ones for high-ranking commanders and shallower ones for lower-level members. Initially, the Israel Defense Forces (IDF) planned to secure territory before searching for tunnels, but this strategy allowed Hamas to launch ambushes from underground.
This experience highlighted the need for counter-tunnel operations that combine surface and subterranean combat, while also addressing the risk of friendly fire. [ 55 ] See Subterranean warfare#Gaza war for more on the subject. In 2022, the Ukrainian resistance managed to hold off the invading Russian army for 80 days by using tunnels underneath the city of Mariupol. [ 56 ] By April 2022, Russian and separatist troops had pushed deep into most of the city, cutting off the last pockets of Ukrainian troops, who retreated into the Azovstal Iron and Steel Works, which contains a complex of bunkers and tunnels built to withstand even a nuclear bombing. [ 57 ] In January 2024, Russian forces used tunneling tactics during the battle of Avdiivka to break through Ukrainian positions in the south of the city. According to Ukrainska Pravda and a separate 5 Kanal report, Russian tunnelers supplied with oxygen tanks entered the local underground drainage network near Spartak and spent "several days" digging tunnels and clearing debris in an abandoned service water pipe, creating exit holes every 100 metres. Beginning around 15 January, reconnaissance teams then used the 1.3–1.4-metre-high passage to infiltrate "about a kilometer" forward and conduct sneak attacks on Ukrainian positions, with varying degrees of success. According to Russian sources, the tunneling operation occurred over several weeks as Russian scouts cleared the flooded 0.5-metre-wide drainage pipe of icy water and cut holes into it using power tools, covering up the noise of the operation with mortar and artillery fire. As many as 150 special operations personnel used the network to infiltrate 2 km and emerge behind Ukrainian positions near the "Tsarska Okhota" park, capturing the fortification, according to Russian sources. [ 58 ] [ 59 ] In July 2024, during the battle of Toretsk, the same Russian units that had carried out the Avdiivka tunnel raid managed to cut into the Ukrainian defences near Pivnichne and Druzhba to a depth of 3–5 kilometres using a multi-kilometre tunnel they had dug under Ukrainian strongholds. [ 60 ] In March 2025, Russian forces traveled through the disused Urengoy–Pomary–Uzhhorod pipeline to infiltrate behind Ukrainian lines near Sudzha during the Kursk offensive. [ 61 ]
https://en.wikipedia.org/wiki/Tunnel_warfare
A tunnel washer, also called a continuous batch washer, is an industrial washing machine designed specifically to handle heavy loads of laundry. Inside the machine, a large helical (Archimedean) screw divides a long, slowly rotating drum into a series of pockets. The screw is made of perforated metal, so items progress through the washer in one direction while water and washing chemicals move through in the opposite direction. Thus, the linen moves through pockets of progressively cleaner water and fresher chemicals. Soiled linen can be continuously fed into one end of the tunnel while clean linen emerges from the other. [ 1 ] Originally, one of the machine's major drawbacks was the necessity of using a single wash formula for all items. Modern computerized tunnel washers can monitor and adjust the chemical levels in individual pockets, effectively overcoming this problem. [ 2 ]
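The counter-current principle can be illustrated with a toy calculation (our numbers and function name; the per-pocket removal fraction is an illustrative assumption, not a manufacturer figure). If each pocket's progressively cleaner water removes a fixed fraction of the soil remaining on the linen, the residual soil falls geometrically with the number of pockets:

```python
# Toy model of counter-current washing: the linen advances one pocket per
# transfer while water flows the other way, so each pocket rinses the linen
# with cleaner water. Assume each rinse removes a fixed fraction f of the
# soil still on the linen (an illustrative assumption).
def residual_soil(pockets, f, soil=1.0):
    for _ in range(pockets):
        soil *= (1.0 - f)   # fraction f of the remaining soil leaves with the water
    return soil

# 10 pockets at 40% removal each leave about 0.6% of the original soil:
print(f"{residual_soil(pockets=10, f=0.40):.4f}")  # 0.0060
```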
https://en.wikipedia.org/wiki/Tunnel_washer
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. Tunneling can, for example, allow private network communications to be sent across a public network (such as the Internet), or allow one network protocol to be carried over an incompatible network, through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel. Tunneling protocols work by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4. Another important use is to provide services that are impractical or unsafe to offer using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network. Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies). Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy; the proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection. [ 1 ] Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method, allowing connections only to specific ports, such as 443 for HTTPS. [ 2 ] Other tunneling methods able to bypass network firewalls make use of different protocols such as DNS, [ 3 ] MQTT [ 4 ] and SMS. [ 5 ] As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets with RFC 1918 private addresses over the Internet, using delivery packets with public IP addresses. In this case, the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network. It is also possible to establish a connection using the data link layer; the Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default: the TCP/IP protocol chosen determines the level of security. SSH uses port 22 to enable data encryption of payloads being transmitted over a public network (such as the Internet) connection, thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway. To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets.
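As an illustration of the HTTP CONNECT mechanism described above, the following minimal sketch (ours; the proxy host, port and target are placeholder values) issues a CONNECT request with Python's standard socket module and then uses the relayed byte stream directly:

```python
# Minimal HTTP CONNECT tunnel client (a sketch, not production code).
# Assumes a CONNECT-capable proxy at proxy_host:proxy_port (placeholders).
import socket

proxy_host, proxy_port = "proxy.example.com", 8080   # hypothetical proxy
target = "example.org:443"                           # server:port to tunnel to

sock = socket.create_connection((proxy_host, proxy_port))
sock.sendall(f"CONNECT {target} HTTP/1.1\r\nHost: {target}\r\n\r\n".encode())

reply = sock.recv(4096).decode(errors="replace")
if " 200 " in reply.splitlines()[0]:
    # From here on, bytes written to `sock` are relayed verbatim to the
    # target; a TLS client could now handshake over this tunnel.
    print("tunnel established")
else:
    print("proxy refused:", reply.splitlines()[0])
sock.close()
```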
Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance known as the TCP meltdown problem, [ 6 ] [ 7 ] which is why virtual private network (VPN) software may instead use a protocol simpler than TCP for the tunnel connection. TCP meltdown occurs when a TCP connection is stacked on top of another: the underlying layer may detect a problem and attempt to compensate, the layer above it then overcompensates because of that, and this overcompensation causes delays and degraded transmission performance. A Secure Shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. It is a software-based approach to network security and the result is transparent encryption. [ 8 ] For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security. Once an SSH connection has been established, the tunnel starts with SSH listening to a port on the remote or local host. Any connections to it are forwarded to the specified address and port originating from the opposing (remote or local, as previously) host. The TCP meltdown problem is often not a problem when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination. [ 9 ] Naturally, this wrapping and unwrapping also occurs in the reverse direction of the bidirectional tunnel. SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services, so long as a site allows outgoing connections. For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/ Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy, in which case users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described.
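The forwarding logic described above can be sketched with plain sockets (ours; hosts and ports are placeholders). This mirrors what an SSH client's local forwarding does, minus the encryption: listen on a local port and relay each accepted connection to a fixed remote address:

```python
# A bare-bones port forwarder: a sketch of the relay logic only. A real
# SSH tunnel would carry this traffic inside the encrypted SSH channel.
import socket
import threading

LISTEN = ("127.0.0.1", 8080)            # local end that users connect to
TARGET = ("fileserver.example", 445)    # hypothetical remote SMB server

def pipe(src, dst):
    # Copy bytes one way until EOF, then close the other end.
    try:
        while (data := src.recv(4096)):
            dst.sendall(data)
    finally:
        dst.close()

def serve():
    with socket.socket() as srv:
        srv.bind(LISTEN)
        srv.listen()
        while True:
            client, _ = srv.accept()
            remote = socket.create_connection(TARGET)
            # One thread per direction of the bidirectional relay.
            threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
            threading.Thread(target=pipe, args=(remote, client), daemon=True).start()

serve()  # blocks; interrupt with Ctrl-C
```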
SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application does not support SOCKS, a proxifier can be used to redirect the application to the local SOCKS proxy server. Some proxifiers, such as Proxycap, support SSH directly, thus avoiding the need for an SSH client. Recent versions of OpenSSH even allow the creation of layer-2 or layer-3 tunnels if both ends have enabled such tunneling capabilities. This creates tun (layer 3, the default) or tap (layer 2) virtual interfaces on both ends of the connection, allowing normal network management and routing to be used; when used on routers, the traffic for an entire subnetwork can be tunneled. A pair of tap virtual interfaces functions like an Ethernet cable connecting both ends of the connection and can join kernel bridges. Over the years, tunneling and data encapsulation in general have frequently been adopted for malicious reasons, in order to communicate covertly from outside a protected network. In this context, known tunnels involve protocols such as HTTP, [ 10 ] SSH, [ 11 ] DNS [ 12 ] [ 13 ] and MQTT. [ 14 ]
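For the dynamic (SOCKS) forwarding described above, applications speak the SOCKS protocol to the local proxy port. A minimal SOCKS4 CONNECT handshake looks like the following sketch (ours; addresses are placeholders, and real clients usually rely on a library or the application's built-in proxy support):

```python
# Minimal SOCKS4 CONNECT request: a sketch assuming a SOCKS-capable proxy
# (such as one created with `ssh -D 1080 user@host`) on localhost:1080.
import socket
import struct

proxy = ("127.0.0.1", 1080)
dest_ip, dest_port = "192.0.2.1", 80   # placeholder destination (TEST-NET-1)

s = socket.create_connection(proxy)
# Request: VN=4, CD=1 (CONNECT), dest port, dest IPv4, empty user id, NUL.
s.sendall(struct.pack("!BBH", 4, 1, dest_port)
          + socket.inet_aton(dest_ip) + b"\x00")
vn, cd, _, _ = struct.unpack("!BBHI", s.recv(8))  # 8-byte reply
if cd == 0x5A:   # 90 = request granted
    print("SOCKS tunnel open; application data can now flow through `s`")
else:
    print("request rejected, code", cd)
s.close()
```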
https://en.wikipedia.org/wiki/Tunneling_protocol
In number theory, Tunnell's theorem gives a partial resolution to the congruent number problem, and under the Birch and Swinnerton-Dyer conjecture, a full resolution. The congruent number problem asks which positive integers can be the area of a right triangle with all three sides rational. Tunnell's theorem relates this to the number of integral solutions of a few fairly simple Diophantine equations. For a given square-free integer n, define

$$A_n = \#\{(x,y,z)\in\mathbb{Z}^3 : n = 2x^2+y^2+32z^2\},$$
$$B_n = \#\{(x,y,z)\in\mathbb{Z}^3 : n = 2x^2+y^2+8z^2\},$$
$$C_n = \#\{(x,y,z)\in\mathbb{Z}^3 : n = 8x^2+2y^2+64z^2\},$$
$$D_n = \#\{(x,y,z)\in\mathbb{Z}^3 : n = 8x^2+2y^2+16z^2\}.$$

Tunnell's theorem states that if n is a congruent number, then 2A_n = B_n when n is odd, and 2C_n = D_n when n is even. Conversely, if the Birch and Swinnerton-Dyer conjecture holds true for elliptic curves of the form $y^2 = x^3 - n^2x$, these equalities are sufficient to conclude that n is a congruent number. The theorem is named for Jerrold B. Tunnell, a number theorist at Rutgers University, who proved it in Tunnell (1983). The importance of Tunnell's theorem is that the criterion it gives is testable by a finite calculation: for a given $n$, the numbers $A_n, B_n, C_n, D_n$ can be calculated by exhaustively searching through $x, y, z$ in the range $-\sqrt{n},\ldots,\sqrt{n}$.
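Since the criterion is a finite computation, it is straightforward to put into code. The following is a minimal sketch (our function names; brute-force bounds as described above) that counts representations by the four ternary forms and applies the parity-dependent test:

```python
# A sketch of Tunnell's criterion. Assumes n is a square-free positive
# integer; counts lattice points on the four forms by brute force.
from itertools import product
from math import isqrt

def tunnell_counts(n):
    r = isqrt(n) + 1  # |x|, |y|, |z| <= sqrt(n) suffices for all four forms
    rng = range(-r, r + 1)
    A = B = C = D = 0
    for x, y, z in product(rng, repeat=3):
        if 2*x*x + y*y + 32*z*z == n: A += 1
        if 2*x*x + y*y + 8*z*z == n: B += 1
        if 8*x*x + 2*y*y + 64*z*z == n: C += 1
        if 8*x*x + 2*y*y + 16*z*z == n: D += 1
    return A, B, C, D

def fails_tunnell(n):
    """True -> n is certainly not congruent; False -> n passes the test
    (and is congruent if BSD holds for y^2 = x^3 - n^2 x)."""
    A, B, C, D = tunnell_counts(n)
    return (2*A != B) if n % 2 == 1 else (2*C != D)

for n in (1, 2, 3, 5, 6, 7):
    print(n, "fails" if fails_tunnell(n) else "passes")
```

For n = 1, 2, 3 the test fails (these are not congruent numbers), while the classical congruent numbers 5, 6 and 7 pass.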
https://en.wikipedia.org/wiki/Tunnell's_theorem
In computer science and information theory, Tunstall coding is a form of entropy coding used for lossless data compression. Tunstall coding was the subject of Brian Parker Tunstall's PhD thesis in 1967, while at Georgia Institute of Technology; the subject of that thesis was "Synthesis of noiseless compression codes". [ 1 ] Its design is a precursor to Lempel–Ziv. Unlike variable-length codes, which include Huffman and Lempel–Ziv coding, Tunstall coding is a code which maps source symbols to a fixed number of bits. [ 2 ] Both Tunstall codes and Lempel–Ziv codes represent variable-length words by fixed-length codes. [ 3 ] Unlike typical set encoding, Tunstall coding parses a stochastic source with codewords of variable length. It can be shown that, for a large enough dictionary, the number of bits per source letter can be arbitrarily close to $H(U)$, the entropy of the source. [ 4 ] The algorithm requires as input an input alphabet $\mathcal{U}$, along with a distribution of probabilities for each word input. It also requires an arbitrary constant $C$, which is an upper bound to the size of the dictionary that it will compute. The dictionary in question, $D$, is constructed as a tree of probabilities, in which each edge is associated with a letter from the input alphabet. The algorithm proceeds as follows: the tree is initialized with one leaf per letter of the alphabet; then, while the dictionary can still grow without exceeding $C$ words, the leaf of highest probability is replaced by a subtree with one child per letter of the alphabet, each child's probability being the product of its parent's probability and that letter's probability; finally, each leaf is assigned a distinct fixed-length codeword. Let's imagine that we wish to encode the string "hello, world". Let's further assume (somewhat unrealistically) that the input alphabet $\mathcal{U}$ contains only the characters occurring in "hello, world": 'h', 'e', 'l', 'o', ',', ' ', 'w', 'r', 'd'. We can therefore compute the probability of each character based on its statistical appearance in the input string. For instance, the letter L appears thrice in a string of 12 characters: its probability is $\tfrac{3}{12}$. We initialize the tree, starting with a tree of $|\mathcal{U}| = 9$ leaves. Each word is therefore directly associated with a letter of the alphabet. The 9 words that we thus obtain can be encoded into a fixed-sized output of $\lceil \log_2(9) \rceil = 4$ bits. We then take the leaf of highest probability (here, $w_1$), and convert it into yet another tree of $|\mathcal{U}| = 9$ leaves, one for each character. We re-compute the probabilities of those leaves. For instance, the sequence of two letters L happens once; given that there are three occurrences of letters followed by an L, the resulting probability is $\tfrac{1}{3} \cdot \tfrac{3}{12} = \tfrac{1}{12}$. We obtain 17 words, which can each be encoded into a fixed-sized output of $\lceil \log_2(17) \rceil = 5$ bits. Note that we could iterate further, increasing the number of words by $|\mathcal{U}| - 1 = 8$ every time. Tunstall coding requires the algorithm to know, prior to the parsing operation, what the distribution of probabilities for each letter of the alphabet is; this issue is shared with Huffman coding. Its fixed-length block output makes it less flexible than Lempel–Ziv, which has a similar dictionary-based design but a variable-sized block output. [ clarification needed ] The following is an example of a Tunstall code being used to read (for transmission) data that is scrambled, e.g. by polynomial scrambling.
This particular example shows how such a code can change the base of a data stream from 2 to 3, avoiding expensive base-conversion routines. With base modification we are bound by the 'efficiency' of reads, where ideally $\log n$ bits are used on average to read each code; this ensures that, upon use of the new base, which at best uses $\log n$ bits per code, our reads do not result in a lower margin of transmission efficiency than that for which we are employing the base modification in the first place. We can then employ this read-to-modify-base mechanism to transmit data efficiently across channels that have a different base, e.g. transmitting binary data across MLT-3 channels with increased efficiency compared to simple mapping codes (which leave a large number of codes unused). We are essentially reading perfectly scrambled binary data, or 'implied data', for the purpose of transmitting it over base-3 channels. Consider the leaf nodes in the ternary Tunstall tree: the read will result in the first digit being 'B' 25% of the time, as it has an implied probability of 25%, being of length 2 when reading from the implied data. A 'B' so read does not read any further, but with 75% probability we read 'A' or 'C', requiring another code. Thus the efficiency of the read is 2.75 (the average length of the size-7 Huffman code) divided by 1.75 (the average length of the 1- or 2-digit base-3 Tunstall code), i.e. $2.75/1.75 = 1.57142857$, which is, as required, very close to $\log_2 3 = 1.5849625$, corresponding to an efficiency of $99.15\%$.
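Returning to the "hello, world" construction described earlier, the following is a concrete sketch of the dictionary-building step (our function names; a max-heap stands in for the tree, since only the set of leaves matters for the dictionary):

```python
# Tunstall dictionary construction: repeatedly expand the most probable
# leaf into one child per alphabet letter, until the word budget is spent.
import heapq
from math import ceil, log2

def tunstall(probs, max_words):
    """probs: dict symbol -> probability; max_words: dictionary bound C."""
    heap = [(-p, sym) for sym, p in probs.items()]  # max-heap via negation
    heapq.heapify(heap)
    # Expand while the dictionary can still grow by |U| - 1 words.
    while len(heap) + len(probs) - 1 <= max_words:
        neg_p, word = heapq.heappop(heap)
        for sym, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + sym))
    words = sorted(word for _, word in heap)
    bits = ceil(log2(len(words)))                   # fixed code length
    return {w: format(i, f"0{bits}b") for i, w in enumerate(words)}

text = "hello, world"
probs = {c: text.count(c) / len(text) for c in set(text)}
code = tunstall(probs, max_words=17)
print(len(code), "words;", code["ll"], "encodes 'll'")
```

Running it reproduces the figures above: one expansion of the most probable leaf 'l' grows the dictionary from 9 to 17 words, each assigned a 5-bit codeword.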
https://en.wikipedia.org/wiki/Tunstall_coding
In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called a singleton and an ordered pair, respectively. The term "infinite tuple" is occasionally used for "infinite sequences". Tuples are usually written by listing the elements within parentheses "( )" and separated by commas; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Other types of brackets are sometimes used, although they may have a different meaning. [ a ] An n-tuple can be formally defined as the image of a function that has the set of the n first natural numbers as its domain. Tuples may also be defined from ordered pairs by a recurrence starting from an ordered pair; indeed, an n-tuple can be identified with the ordered pair of its (n − 1) first elements and its nth element, for example, $(((1,2),3),4) = (1,2,3,4)$. In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, [ 1 ] tightly associated with algebraic data types, pattern matching, and destructuring assignment. [ 2 ] Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. [ 3 ] A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples. Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; [ 4 ] and in philosophy. [ 5 ] The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., n‑tuple, ..., where the prefixes are taken from the Latin names of the numerals. The unique 0-tuple is called the null tuple or empty tuple. A 1‑tuple is called a single (or singleton), a 2‑tuple is called an ordered pair or couple, and a 3‑tuple is called a triple (or triplet). The number n can be any nonnegative integer. For example, a complex number can be represented as a 2‑tuple of reals, a quaternion can be represented as a 4‑tuple, an octonion can be represented as an 8‑tuple, and a sedenion can be represented as a 16‑tuple. Although these uses treat ‑tuple as the suffix, the original suffix was ‑ple as in "triple" (three-fold) or "decuple" (ten‑fold). This originates from medieval Latin plus (meaning "more") related to Greek ‑πλοῦς, which replaced the classical and late antique ‑plex (meaning "folded"), as in "duplex". [ 6 ] [ b ] The general rule for the identity of two n-tuples is $(a_1, \ldots, a_n) = (b_1, \ldots, b_n)$ if and only if $a_1 = b_1, \ldots, a_n = b_n$. Thus a tuple has properties that distinguish it from a set: a tuple may contain multiple instances of the same element, the elements of a tuple are ordered, and a tuple has a finite number of elements. There are several definitions of tuples that give them the properties described in the previous section. The $0$-tuple may be identified as the empty function.
For $n \geq 1$, the $n$-tuple $(a_1, \ldots, a_n)$ may be identified with the (surjective) function $F$ with domain $\{1, \ldots, n\}$ and with codomain $\{a_1, \ldots, a_n\}$ that is defined at $i \in \operatorname{domain} F = \{1, \ldots, n\}$ by $F(i) = a_i$. That is, $F$ is the function defined by $F(1) = a_1, \ldots, F(n) = a_n$, in which case the equality $(a_1, \ldots, a_n) = (F(1), \ldots, F(n))$ necessarily holds. Functions are commonly identified with their graphs, which are certain sets of ordered pairs; indeed, many authors use graphs as the definition of a function. Using this definition of "function", the above function $F$ can be defined as $F = \{(1, a_1), \ldots, (n, a_n)\}$. Another way of modeling tuples in set theory is as nested ordered pairs. This approach assumes that the notion of ordered pair has already been defined: the 0-tuple is taken to be the empty set, and an $n$-tuple is the pair of its first element and the $(n-1)$-tuple of its remaining elements, $(a_1, a_2, \ldots, a_n) = (a_1, (a_2, \ldots, a_n))$. This definition can be applied recursively to the $(n-1)$-tuple; thus, for example, $(1, 2, 3) = (1, (2, (3, \emptyset)))$. A variant of this definition starts "peeling off" elements from the other end, identifying an $n$-tuple with the pair of its first $(n-1)$ elements and its last element, $(a_1, \ldots, a_n) = ((a_1, \ldots, a_{n-1}), a_n)$. This definition can likewise be applied recursively; thus, for example, $(1, 2, 3) = (((\emptyset, 1), 2), 3)$. Using Kuratowski's representation for an ordered pair, $(a, b) = \{\{a\}, \{a, b\}\}$, the second definition above can be reformulated in terms of pure set theory. In discrete mathematics, especially combinatorics and finite probability theory, $n$-tuples arise in the context of various counting problems and are treated more informally as ordered lists of length $n$. [ 7 ] $n$-tuples whose entries come from a set of $m$ elements are also called arrangements with repetition, permutations of a multiset and, in some non-English literature, variations with repetition. The number of $n$-tuples of an $m$-set is $m^n$. This follows from the combinatorial rule of product. [ 8 ] If $S$ is a finite set of cardinality $m$, this number is the cardinality of the $n$-fold Cartesian power $S \times S \times \cdots \times S$. Tuples are elements of this product set. In type theory, commonly used in programming languages, a tuple has a product type; this fixes not only the length, but also the underlying types of each component. Formally, a tuple $(x_1, x_2, \ldots, x_n)$ has type $\mathsf{T}_1 \times \mathsf{T}_2 \times \cdots \times \mathsf{T}_n$, and the projections $\pi_1, \ldots, \pi_n$, with $\pi_i(x_1, \ldots, x_n) = x_i$, are term constructors. The tuple with labeled elements used in the relational model has a record type. Both of these types can be defined as simple extensions of the simply typed lambda calculus. [ 9 ] The notion of a tuple in type theory and that in set theory are related in the following way: if we consider the natural model of a type theory, and use the Scott brackets to indicate the semantic interpretation, then the model consists of some sets $S_1, S_2, \ldots, S_n$ (note: the use of italics here distinguishes sets from types) such that $[\![\mathsf{T}_i]\!] = S_i$, and the $n$-tuple of type theory has the natural interpretation as an $n$-tuple of set theory, $[\![(x_1, x_2, \ldots, x_n)]\!] = ([\![x_1]\!], [\![x_2]\!], \ldots, [\![x_n]\!]) \in S_1 \times S_2 \times \cdots \times S_n$. [ 10 ] The unit type has as semantic interpretation the 0-tuple.
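The nested ordered-pair modelling and the $m^n$ counting rule are easy to make concrete (a small illustration of ours; the function name is arbitrary):

```python
# Encode (a1, ..., an) as nested pairs (((a1, a2), a3), ..., an), the
# "peel from the end" identification used in the article's example.
from itertools import product

def nest(*elements):
    if len(elements) <= 1:
        return elements[0] if elements else ()   # 1-tuple as its element
    return (nest(*elements[:-1]), elements[-1])

print(nest(1, 2, 3, 4))      # (((1, 2), 3), 4)

# An m-element set yields m**n distinct n-tuples (rule of product):
m_set, n = {"a", "b", "c"}, 2
assert len(list(product(m_set, repeat=n))) == len(m_set) ** n  # 3**2 == 9
```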
https://en.wikipedia.org/wiki/Tuple
Tuple-versioning (also called point-in-time) is a mechanism used in a relational database management system to store past states of a relation. Normally, only the current state is captured. Using tuple-versioning techniques, typically two time values are stored along with each tuple: a start time and an end time. These two values indicate the validity of the rest of the values in the tuple. Typically, when tuple-versioning techniques are used, the current tuple has a valid start time but a null value for end time; therefore, it is easy and efficient to obtain the current values for all tuples by querying for the null end time. A single query that searches for tuples with start time less than, and end time greater than, a given time (where a null end time is treated as a value greater than the given time) will give as a result the valid tuples at the given time. For example, if a person's job changes from Engineer to Manager, there would be two tuples in an Employee table, one with the value Engineer for job and the other with the value Manager for job. The end time for the Engineer tuple would be equal to the start time for the Manager tuple. The pattern known as log trigger uses this technique to automatically store historical information of a table in a database.
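A minimal sketch of this pattern, using Python's built-in sqlite3 module (the table and column names here are illustrative, not prescribed by the technique):

```python
# Tuple-versioning sketch: each row carries a validity interval, and the
# current row is the one whose end_time is NULL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE employee (
    name TEXT, job TEXT,
    start_time TEXT NOT NULL,   -- validity interval start
    end_time   TEXT             -- NULL marks the current tuple
)""")

# The job changes from Engineer to Manager: the old row is closed with an
# end time equal to the new row's start time.
db.execute("INSERT INTO employee VALUES ('Ada', 'Engineer', '2020-01-01', '2024-03-01')")
db.execute("INSERT INTO employee VALUES ('Ada', 'Manager',  '2024-03-01', NULL)")

# Current state: query for the NULL end time.
print(db.execute("SELECT job FROM employee WHERE end_time IS NULL").fetchall())

# State valid at time t, treating a NULL end time as 'greater than t':
t = "2022-06-15"
rows = db.execute("""SELECT job FROM employee
    WHERE start_time <= ? AND (end_time IS NULL OR end_time > ?)""", (t, t))
print(rows.fetchall())   # [('Engineer',)]
```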
https://en.wikipedia.org/wiki/Tuple-versioning
Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane. The formula was defined by Jeff Tupper and appears as an example in his 2001 SIGGRAPH paper on reliable two-dimensional computer graphing algorithms. [ 1 ] This paper discusses methods related to the GrafEq formula-graphing program developed by Tupper. [ 2 ] The formula is an inequality defined as:

$$\frac{1}{2} < \left\lfloor \operatorname{mod}\left(\left\lfloor \frac{y}{17} \right\rfloor 2^{-17\lfloor x \rfloor - \operatorname{mod}(\lfloor y \rfloor,\, 17)},\ 2\right) \right\rfloor$$

where $\lfloor \dots \rfloor$ denotes the floor function, and mod is the modulo operation. Let $k$ equal the following 543-digit integer: Graphing the set of points $(x, y)$ with $0 \leq x < 106$ and $k \leq y < k + 17$ which satisfy the formula results in the following plot: [ note 1 ] The formula is a general-purpose method of decoding a bitmap stored in the constant $k$, and it could be used to draw any other image. When applied to the unbounded positive range $0 \leq y$, the formula tiles a vertical swath of the plane with a pattern that contains all possible 17-pixel-tall bitmaps. One horizontal slice of that infinite bitmap depicts the drawing formula, since other slices depict all other possible formulae that might fit in a 17-pixel-tall bitmap. Tupper has created extended versions of his original formula that rule out all but one slice. [ 3 ] The constant $k$ is a simple monochrome bitmap image of the formula, treated as a binary number and multiplied by 17. If $k$ is divided by 17, the least significant bit encodes the upper-right corner $(k, 0)$; the 17 least significant bits encode the rightmost column of pixels; the next 17 least significant bits encode the second-rightmost column, and so on. The formula fundamentally describes a way to plot points on a two-dimensional surface: the value of $k$ is the number whose binary digits form the plot. The following plot demonstrates the addition of different $k$ values. In the fourth subplot, the $k$-values of "AFGP" and "Aesthetic Function Graph" are added to get the resultant graph, in which both texts can be seen with some distortion due to the effects of binary addition. The information regarding the shape of the plot is stored within $k$. [ 4 ]
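The column-by-column bit layout just described is easy to express in code. The following is a minimal sketch (our function names; the sample value of k is a toy, not Tupper's 543-digit constant) of how k // 17 is unpacked into a 17-pixel-tall bitmap:

```python
# Decode a Tupper-style constant into a height-pixel-tall bitmap (a sketch).
# Bit (height*x + dy) of k // height controls the pixel in column x, row dy,
# matching the inequality's value at the integer sample point (x, k + dy).
def tupper_rows(k, width=106, height=17):
    n = k // height
    rows = []
    for dy in range(height):
        rows.append("".join("#" if (n >> (height * x + dy)) & 1 else " "
                            for x in range(width)))
    # Column x = 0 holds the 17 least significant bits (the article's
    # "rightmost column"); how rows/columns are mirrored on screen is a
    # display convention, not part of the encoding.
    return rows

# Toy value: k = 17 * (bitmap as a binary number). Here the first column
# of the bitmap is fully lit, the rest blank.
demo_k = 17 * (2**17 - 1)
for line in tupper_rows(demo_k, width=3):
    print(line)
```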
https://en.wikipedia.org/wiki/Tupper's_self-referential_formula
Turbidimetric inhibition immunoassay (TINIA) is a type of immunoassay that uses turbidimetry as the measurement principle. It is the basis of many commercial immunoassays, e.g. the measurement of HbA1c % [ 1 ] and digoxin in whole-blood samples; several commercial assays employ this principle.
https://en.wikipedia.org/wiki/Turbidimetric_inhibition_immunoassay
Turbidimetry (the name being derived from turbidity) is the process of measuring the loss of intensity of transmitted light due to the scattering effect of particles suspended in it. Light is passed through a filter, creating light of a known wavelength, which is then passed through a cuvette containing a solution. A photoelectric cell collects the light which passes through the cuvette, and a measurement is given for the amount of absorbed light. [ 1 ] Turbidimetry can be used in biology to find the number of cells in a suspension. [ 2 ] Turbidity is an expression of the optical appearance of a suspension, caused by the scattering and absorption of radiation. The scattering of light is elastic, so the incident and scattered radiation have the same wavelength. A turbidimeter measures the amount of radiation that passes through a fluid in the forward direction, analogous to absorption spectrophotometry. The standard for turbidimetry is prepared by dissolving 5 g of hydrazinium(2+) sulfate (N2H4·H2SO4) and 50 g of hexamethylenetetramine in 1 litre of distilled water; the resulting formazin suspension is defined as 4000 Nephelometric Turbidity Units (NTU). Applications include the determination of the clarity of water, pharmaceutical products and drinks, and immunoassays in the laboratory. Turbidimetry offers little advantage over nephelometry in sensitivity for low-level antigen-antibody immunoassays; antigen excess and matrix effects are limitations encountered. Immunoturbidimetry is an important tool in the broad diagnostic field of clinical chemistry. It is used to determine serum proteins not detectable with classical clinical chemistry methods. Immunoturbidimetry uses the classical antigen-antibody reaction: the antigen-antibody complexes aggregate to form particles that can be optically detected by a photometer.
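The quantity a turbidimeter reports can be given a simple quantitative form (our notation, not from the article; valid for dilute, weakly absorbing suspensions). If a beam of incident intensity $I_0$ traverses a path length $b$ through the sample and emerges with transmitted intensity $I_t$, the attenuation follows a Beer–Lambert-type law:

$$I_t = I_0 \, e^{-\tau b}, \qquad \text{so that} \qquad \tau = -\frac{1}{b}\ln\frac{I_t}{I_0},$$

where the turbidity coefficient $\tau$ grows with the concentration of scattering particles. Nephelometry, by contrast, measures the light scattered to the side of the beam rather than the surviving transmitted beam.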
https://en.wikipedia.org/wiki/Turbidimetry
Turbidity is the cloudiness or haziness of a fluid caused by large numbers of individual particles that are generally invisible to the naked eye, similar to smoke in air. The measurement of turbidity is a key test of both water clarity and water quality. Fluids can contain suspended solid matter consisting of particles of many different sizes. While some suspended material will be large enough and heavy enough to settle rapidly to the bottom of the container if a liquid sample is left to stand (the settleable solids), very small particles will settle only very slowly or not at all if the sample is regularly agitated or the particles are colloidal. These small solid particles cause the liquid to appear turbid. Turbidity (or haze) is also applied to transparent solids such as glass or plastic. In plastic production, haze is defined as the percentage of light that is deflected more than 2.5° from the incoming light direction. [ 1 ] Turbidity in open water may be caused by growth of phytoplankton. Human activities that disturb land, such as construction, mining and agriculture, can lead to high sediment levels entering water bodies during rain storms, due to storm water runoff. Areas prone to high bank erosion rates, as well as urbanized areas, also contribute large amounts of turbidity to nearby waters, through stormwater pollution from paved surfaces such as roads, bridges, parking lots and airports. [ 2 ] Some industries, such as quarrying, mining and coal recovery, can generate very high levels of turbidity from colloidal rock particles. In drinking water, the higher the turbidity level, the higher the risk that people may develop gastrointestinal diseases. [ 3 ] This is especially problematic for immunocompromised people, because contaminants like viruses or bacteria can become attached to the suspended solids. The suspended solids interfere with water disinfection with chlorine [ 4 ] because the particles act as shields for viruses and bacteria. Similarly, suspended solids can protect bacteria from ultraviolet (UV) sterilization of water. [ 5 ] In water bodies such as lakes, rivers and reservoirs, high turbidity levels can reduce the amount of light reaching lower depths, which can inhibit growth of submerged aquatic plants and consequently affect species which are dependent on them, such as fish and shellfish. High turbidity levels can also affect the ability of fish gills to absorb dissolved oxygen. This phenomenon has been regularly observed throughout the Chesapeake Bay in the eastern United States. [ 6 ] [ 7 ] For many mangrove areas, high turbidity is needed in order to support certain species, such as to protect juvenile fish from predators. For most mangroves along the eastern coast of Australia, in particular Moreton Bay, turbidity levels as high as 600 Nephelometric Turbidity Units (NTU) are needed for proper ecosystem health. [ citation needed ] There are two standard units for reporting turbidity: Formazin Nephelometric Units (FNU) from ISO 7027 and Nephelometric Turbidity Units (NTU) from USEPA Method 180.1. ISO 7027 and the FNU are most widely used in Europe, whereas the NTU is most widely used in the U.S. ISO 7027 provides the method in water quality for the determination of turbidity. It is used to determine the concentration of suspended particles in a sample of water by measuring the incident light scattered at right angles from the sample.
The scattered light is captured by a photodiode, which produces an electronic signal that is converted to a turbidity value. Open-source hardware has been developed following the ISO 7027 method to measure turbidity reliably using an Arduino microcontroller and inexpensive LEDs. [ 8 ] There are several practical ways of checking water quality, the most direct being some measure of the attenuation (that is, reduction in strength) of light as it passes through a sample column of water. [ 9 ] The alternatively used Jackson Candle method (units: Jackson Turbidity Unit or JTU) is essentially the inverse measure of the length of a column of water needed to completely obscure a candle flame viewed through it. The more water needed (the longer the water column), the clearer the water. Of course, water alone produces some attenuation, and any substances dissolved in the water that produce color can attenuate some wavelengths. Modern instruments do not use candles, but this approach of measuring the attenuation of a light beam through a column of water should be calibrated and reported in JTUs. [ 9 ] The propensity of particles to scatter a light beam focused on them is now considered a more meaningful measure of turbidity in water. Turbidity measured this way uses an instrument called a nephelometer, with the detector set up to the side of the light beam. More light reaches the detector if there are many small particles scattering the source beam than if there are few. The units of turbidity from a calibrated nephelometer can be either NTU or FTU, depending on the standard method used. To some extent, how much light reflects for a given amount of particulates is dependent upon properties of the particles, like their shape, color, and reflectivity. For this reason (and because heavier particles settle quickly and do not contribute to a turbidity reading), the correlation between turbidity and total suspended solids (TSS) is somewhat unique to each location or situation. [ 9 ] Turbidity in lakes, reservoirs, channels, and the ocean can be measured using a Secchi disk. This black and white disk is lowered into the water until it can no longer be seen; the depth (Secchi depth) is then recorded as a measure of the transparency of the water (inversely related to turbidity). The Secchi disk has the advantages of integrating turbidity over depth (where variable turbidity layers are present), being quick and easy to use, and being inexpensive. It can provide a rough indication of the depth of the euphotic zone with a 3-fold division of the Secchi depth; however, this cannot be used in shallow waters where the disk can still be seen on the bottom. [ 10 ] Cameras and computer vision have also been used to measure turbidity. [ 11 ] Such monitoring can make use of machine learning to identify problems in sewage. [ 12 ] An additional device which may help measure turbidity in shallow waters is the turbidity tube. [ 13 ] [ 10 ] The turbidity tube condenses water in a graded tube which allows determination of turbidity based on a contrast disk in its bottom, analogous to the Secchi disk. Turbidity in air, which causes solar attenuation, is used as a measure of pollution. To model the attenuation of beam irradiance, several turbidity parameters have been introduced, including the Linke turbidity factor (T L ). [ 14 ] [ 15 ] Governments have set standards on the allowable turbidity in drinking water.
In the United States, public water systems that use conventional or direct filtration methods must not have turbidity higher than 1.0 NTU at the plant outlet, and all samples for turbidity must be less than or equal to 0.3 NTU for at least 95 percent of the samples in any month. Systems that use filtration other than conventional or direct filtration must follow state limits, which must include turbidity at no time exceeding 5 NTU. Many drinking water utilities strive to achieve levels as low as 0.1 NTU. [ 16 ] The European turbidity standard is 4 NTU. [ 17 ] The US Environmental Protection Agency (EPA) has published water quality criteria for turbidity. [ 18 ] These criteria are scientific assessments of the effects of turbidity, which are used by states to develop water quality standards for water bodies (states may also publish their own criteria), and some states have promulgated water quality standards for turbidity. Published analytical test methods for turbidity include USEPA Method 180.1 and ISO 7027, described above. Turbidity is commonly treated using a settling or filtration process, or both. Depending on the application, flocculants may be dosed into the water stream to increase the effectiveness of the settling or filtration process. [ 25 ] [ 26 ] Potable water treatment and municipal wastewater plants often remove turbidity with a combination of settling tanks, granular media filtration, and clarifiers. In-situ water treatment or direct dosing for the treatment of turbidity is common when the affected water bodies are dispersed (i.e. there are numerous water bodies spread out over a geographical area, such as small drinking water reservoirs), when the problem is not consistent (i.e. when there is turbidity in a water body only during and after the wet season) or when a low-cost solution is required. In-situ treatment of turbidity involves the addition of a reagent, generally a flocculant, evenly dispensed over the surface of the body of water. The flocs then settle at the bottom of the water body, where they remain or are removed when the water body is drained. This method is commonly used at coal mines and coal loading facilities where stormwater collection ponds have seasonal issues with turbidity. A number of companies offer portable treatment systems for in-situ water treatment or direct dosing of reagents. A number of chemical reagents are available for treating turbidity, including aluminium sulfate or alum (Al 2 (SO 4 ) 3 ·nH 2 O), ferric chloride (FeCl 3 ), gypsum (CaSO 4 ·2H 2 O), poly-aluminium chloride, long-chain acrylamide-based polymers and numerous proprietary reagents. [ 27 ] The water chemistry must be carefully considered when dosing chemicals, as some reagents, such as alum, will alter the pH of the water. The dosing process must also be considered, as the flocs may be broken apart by excessive mixing.
https://en.wikipedia.org/wiki/Turbidity
A turbidity current is most typically an underwater current of rapidly moving, sediment-laden water flowing down a slope, although current research (2018) indicates that water-saturated sediment may be the primary actor in the process. [ 1 ] Turbidity currents can also occur in other fluids besides water. Researchers from the Monterey Bay Aquarium Research Institute found that a layer of water-saturated sediment moved rapidly over the seafloor and mobilized the upper few metres of the preexisting seafloor. Plumes of sediment-laden water were observed during turbidity current events, but the researchers believe that these were secondary to the pulse of seafloor sediment moving during the events; in their view, the water flow is the tail end of a process that starts at the seafloor. [ 1 ] In the most typical case of oceanic turbidity currents, sediment-laden waters situated over sloping ground will flow downhill because they have a higher density than the adjacent waters. The driving force behind a turbidity current is gravity acting on the high density of the sediments temporarily suspended within a fluid. These semi-suspended solids make the average density of the sediment-bearing water greater than that of the surrounding, undisturbed water. As such currents flow, they often have a snowballing effect, as they stir up the ground over which they flow and gather even more sedimentary particles in their current. Their passage leaves the ground over which they flow scoured and eroded. Once an oceanic turbidity current reaches the calmer waters of the flatter area of the abyssal plain (the main oceanic floor), the particles borne by the current settle out of the water column. The sedimentary deposit of a turbidity current is called a turbidite. Seafloor turbidity currents are often the result of sediment-laden river outflows, and can sometimes be initiated by earthquakes, slumping and other soil disturbances. They are characterized by a well-defined advance front, also known as the current's head, which is followed by the current's main body. In terms of the more often observed and more familiar above-sea-level phenomena, they somewhat resemble flash floods. Turbidity currents can sometimes result from submarine seismic instability, which is common with steep underwater slopes, and especially with submarine trench slopes of convergent plate margins, continental slopes and submarine canyons of passive margins. With an increasing continental shelf slope, current velocity increases; as the velocity of the flow increases, turbulence increases, and the current draws up more sediment. The increase in sediment also adds to the density of the current, and thus increases its velocity even further. Turbidity currents are traditionally defined as those sediment gravity flows in which sediment is suspended by fluid turbulence. [ 2 ] [ 3 ] [ 4 ] However, the term "turbidity current" was adopted to describe a natural phenomenon whose exact nature is often unclear. The turbulence within a turbidity current is not always the support mechanism that keeps the sediment in suspension; however, it is probable that turbulence is the primary or sole grain-support mechanism in dilute currents (<3%). [ 5 ] Definitions are further complicated by an incomplete understanding of the turbulence structure within turbidity currents, and by the confusion between the terms turbulent (i.e. disturbed by eddies) and turbid (i.e. opaque with sediment).
Kneller & Buckee (2000) [ 6 ] define a suspension current as 'flow induced by the action of gravity upon a turbid mixture of fluid and (suspended) sediment, by virtue of the density difference between the mixture and the ambient fluid'. A turbidity current is a suspension current in which the interstitial fluid is a liquid (generally water); a pyroclastic current is one in which the interstitial fluid is gas. [ 5 ] When the concentration of suspended sediment at the mouth of a river is so large that the density of the river water is greater than the density of sea water, a particular kind of turbidity current called a hyperpycnal plume can form. [ 7 ] The average concentration of suspended sediment for most river water that enters the ocean is much lower than the sediment concentration needed for entry as a hyperpycnal plume, although some rivers carry a continuously high sediment load that can create a continuous hyperpycnal plume, such as the Haile River (China), which has an average suspended concentration of 40.5 kg/m³. [ 7 ] The sediment concentration needed to produce a hyperpycnal plume in marine water is 35 to 45 kg/m³, depending on the water properties within the coastal zone. [ 7 ] Most rivers produce hyperpycnal flows only during exceptional events, such as storms , floods , glacier outbursts, dam breaks, and lahar flows. In fresh water environments, such as lakes , the suspended sediment concentration needed to produce a hyperpycnal plume is quite low (1 kg/m³). [ 7 ] The transport and deposition of sediments in narrow alpine reservoirs is often caused by turbidity currents. They follow the thalweg of the lake to the deepest area near the dam , where the sediments can affect the operation of the bottom outlet and the intake structures. [ 8 ] Controlling this sedimentation within the reservoir can be achieved by using appropriately designed solid and permeable obstacles. [ 8 ] Turbidity currents are often triggered by tectonic disturbances of the sea floor; the displacement of continental crust, in the form of fluidization and physical shaking, contributes to their formation. Earthquakes have been linked to turbidity current deposition in many settings, particularly where physiography favors preservation of the deposits and limits the other sources of turbidity current deposition. [ 9 ] [ 10 ] Since the famous case of breakage of submarine cables by a turbidity current following the 1929 Grand Banks earthquake , [ 11 ] earthquake-triggered turbidites have been investigated and verified along the Cascadia subduction zone, [ 12 ] the northern San Andreas Fault, [ 13 ] a number of European, Chilean and North American lakes, [ 14 ] [ 15 ] [ 16 ] Japanese lacustrine and offshore regions [ 17 ] [ 18 ] and a variety of other settings. [ 19 ] [ 20 ] When large turbidity currents flow into canyons they may become self-sustaining, [ 21 ] and may entrain sediment that has previously been introduced into the canyon by littoral drift , storms or smaller turbidity currents. Canyon-flushing associated with surge-type currents initiated by slope failures may produce currents whose final volume is several times that of the portion of the slope that has failed (e.g. Grand Banks). [ 22 ] Sediment that has piled up at the top of the continental slope , particularly at the heads of submarine canyons , can create turbidity currents due to overloading and the consequent slumping and sliding.
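The hyperpycnal criterion above lends itself to a small worked example: a river plume plunges when the bulk density of its water-plus-sediment mixture exceeds the density of the receiving basin. The grain and water densities below are illustrative assumptions, not values from the cited studies.

```python
# A minimal sketch of the hyperpycnal-plume criterion: river water plunges
# when its bulk (water + suspended sediment) density exceeds that of the
# receiving basin. Densities are illustrative assumptions.

RHO_RIVER_WATER = 1000.0  # kg/m^3, fresh water
RHO_SEDIMENT = 2650.0     # kg/m^3, typical quartz grain density (assumed)

def bulk_density(conc_kg_m3: float, rho_water: float = RHO_RIVER_WATER) -> float:
    """Density of water carrying conc_kg_m3 of suspended sediment."""
    vol_fraction = conc_kg_m3 / RHO_SEDIMENT
    return rho_water * (1.0 - vol_fraction) + conc_kg_m3

def is_hyperpycnal(conc_kg_m3: float, rho_basin: float) -> bool:
    return bulk_density(conc_kg_m3) > rho_basin

print(is_hyperpycnal(40.5, rho_basin=1025.0))  # Haile-like load vs. seawater -> True (barely)
print(is_hyperpycnal(1.5, rho_basin=1000.0))   # modest load vs. a fresh lake -> True
```

With these assumed densities, seawater at 1025 kg/m³ is first exceeded at roughly 40 kg/m³ of suspended sediment, consistent with the 35–45 kg/m³ range quoted above.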
A buoyant sediment-laden river plume can induce a secondary turbidity current on the ocean floor by the process of convective sedimentation. [ 24 ] [ 4 ] Sediment in the initially buoyant hypopycnal flow accumulates at the base of the surface flow, [ 25 ] so that the dense lower boundary becomes unstable. The resulting convective sedimentation leads to a rapid vertical transfer of material to the sloping lake or ocean bed, potentially forming a secondary turbidity current. The vertical speed of the convective plumes can be much greater than the Stokes settling velocity of an individual particle of sediment. [ 26 ] Most observations of this process have been made in the laboratory, [ 24 ] [ 27 ] but possible observational evidence of a secondary turbidity current was obtained in Howe Sound, British Columbia, [ 28 ] where a turbidity current was periodically observed on the delta of the Squamish River. As the vast majority of sediment-laden rivers are less dense than the ocean, [ 7 ] rivers cannot readily form plunging hyperpycnal flows. Convective sedimentation is therefore an important possible initiation mechanism for turbidity currents. [ 4 ] Large and fast-moving turbidity currents can carve gullies and ravines into the ocean floor of continental margins and cause damage to artificial structures such as telecommunication cables on the seafloor . Understanding where turbidity currents flow on the ocean floor can help to decrease the damage to telecommunication cables, by avoiding these areas or reinforcing the cables in vulnerable areas. When turbidity currents interact with regular ocean currents, such as contour currents , they can change direction. This ultimately shifts submarine canyons and sediment deposition locations. One example of this is located in the western part of the Gulf of Cadiz , where the ocean current leaving the Mediterranean Sea (also known as the Mediterranean outflow water) pushes turbidity currents westward. This has changed the shape of submarine valleys and canyons in the region to also curve in that direction. [ 29 ] When the energy of a turbidity current lowers, its ability to keep sediment suspended decreases and deposition occurs. As the material comes to rest, sand and other coarse material settle first, followed by mud and eventually the very fine particulate matter. It is this sequence of deposition that creates the so-called Bouma sequences that characterize turbidite deposits. Because turbidity currents occur underwater and happen suddenly, they are rarely seen as they happen in nature, so turbidites can be used to determine turbidity current characteristics: grain size can give an indication of current velocity, grain lithology and foraminifera can be used to determine origins, grain distribution shows flow dynamics over time, and sediment thickness indicates sediment load and longevity. Turbidites are commonly used in the understanding of past turbidity currents; for example, the Peru–Chile Trench off Southern Central Chile (36°S–39°S) contains numerous turbidite layers that were cored and analysed. [ 30 ] From these turbidites the predicted history of turbidity currents in this area was determined, increasing the overall understanding of these currents. [ 30 ]
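The Stokes settling velocity mentioned above, the single-grain reference speed that convective plumes can greatly exceed, is easy to sketch. The fluid and grain properties below are assumed values for illustration.

```python
# Stokes settling velocity of a single small sphere in the laminar regime:
# w = g * d^2 * (rho_grain - rho_fluid) / (18 * mu). Properties are assumed.

G = 9.81             # m/s^2
RHO_FLUID = 1025.0   # kg/m^3, seawater (assumed)
RHO_GRAIN = 2650.0   # kg/m^3, quartz (assumed)
MU = 1.0e-3          # Pa*s, dynamic viscosity of water (assumed)

def stokes_velocity(diameter_m: float) -> float:
    """Terminal settling speed of an isolated small grain."""
    return G * diameter_m**2 * (RHO_GRAIN - RHO_FLUID) / (18.0 * MU)

# A 100-micron grain settles at under a centimetre per second:
print(f"{stokes_velocity(100e-6) * 1000:.1f} mm/s")  # about 8.9 mm/s
```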
Some of the largest antidunes on Earth are formed by turbidity currents. One observed sediment-wave field is located on the lower continental slope off Guyana , South America. [ 31 ] This sediment-wave field covers an area of at least 29,000 km² at a water depth of 4400–4825 meters. [ 31 ] These antidunes have wavelengths of 110–2600 m and wave heights of 1–15 m. [ 31 ] The turbidity currents responsible for wave generation are interpreted as originating from slope failures on the adjacent Venezuela , Guyana and Suriname continental margins. [ 31 ] Simple numerical modelling has enabled the flow characteristics of the turbidity currents across the sediment waves to be estimated: internal Froude number = 0.7–1.1, flow thickness = 24–645 m, and flow velocity = 31–82 cm·s⁻¹. [ 31 ] Generally, on lower gradients beyond minor breaks of slope, flow thickness increases and flow velocity decreases, leading to an increase in wavelength and a decrease in height. [ 31 ] The behaviour of turbidity currents with buoyant fluid (such as currents with warm, fresh or brackish interstitial water entering the sea) has been investigated, with the finding that the front speed decreases more rapidly than that of currents with the same density as the ambient fluid. [ 32 ] These turbidity currents ultimately come to a halt as sedimentation results in a reversal of buoyancy and the current lifts off, the point of lift-off remaining constant for a constant discharge. [ 32 ] The lofted fluid carries fine sediment with it, forming a plume that rises to a level of neutral buoyancy (if in a stratified environment) or to the water surface, and spreads out. [ 32 ] Sediment falling from the plume produces a widespread fall-out deposit, termed hemiturbidite. [ 33 ] Experimental turbidity currents [ 34 ] and field observations [ 35 ] suggest that the shape of the lobe deposit formed by a lofting plume is narrower than for a similar non-lofting plume. Prediction of erosion by turbidity currents, and of the distribution of turbidite deposits, such as their extent, thickness and grain size distribution, requires an understanding of the mechanisms of sediment transport and deposition , which in turn depends on the fluid dynamics of the currents. The extreme complexity of most turbidite systems and beds has promoted the development of quantitative models of turbidity current behaviour inferred solely from their deposits. Small-scale laboratory experiments therefore offer one of the best means of studying their dynamics. Mathematical models can also provide significant insights into current dynamics. In the long term, numerical techniques are most likely the best hope of understanding and predicting three-dimensional turbidity current processes and deposits. In most cases, there are more variables than governing equations , and the models rely upon simplifying assumptions in order to achieve a result. [ 5 ] The accuracy of the individual models thus depends upon the validity and choice of the assumptions made. Experimental results provide a means of constraining some of these variables as well as providing a test for such models. [ 5 ] Physical data from field observations, or more practically from experiments, are still required to test the simplifying assumptions necessary in mathematical models . Most of what is known about large natural turbidity currents (i.e. those significant in terms of sediment transfer to the deep sea) is inferred from indirect sources, such as submarine cable breaks and heights of deposits above submarine valley floors.
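The internal Froude number quoted above for the Guyana sediment waves compares flow speed with the speed of internal gravity waves on the current. A minimal sketch of its definition follows; the density excess used is an assumption for illustration, not a value from the cited study.

```python
# Internal (densimetric) Froude number: Fr = U / sqrt(g' h), with reduced
# gravity g' = g * (rho_current - rho_ambient) / rho_ambient.
from math import sqrt

G = 9.81  # m/s^2

def internal_froude(velocity: float, thickness: float,
                    rho_current: float, rho_ambient: float) -> float:
    """Ratio of flow speed to internal gravity-wave speed."""
    g_reduced = G * (rho_current - rho_ambient) / rho_ambient
    return velocity / sqrt(g_reduced * thickness)

# A dilute current: 0.5 m/s, 100 m thick, 0.25 kg/m^3 denser than seawater.
print(round(internal_froude(0.5, 100.0, 1025.25, 1025.0), 2))  # ~1.02
```

With this assumed (very small) density excess, the result falls inside the 0.7–1.1 range inferred for the observed waves.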
Direct observations of large natural turbidity currents are rare; during the 2003 Tokachi-oki earthquake, however, a large turbidity current was observed directly by a cabled observatory. [ 36 ] Oil and gas companies are also interested in turbidity currents because the currents deposit organic matter that, over geologic time, gets buried, compressed and transformed into hydrocarbons . Numerical modelling and flumes are commonly used to help understand these questions. [ 37 ] Much of the modelling is used to reproduce the physical processes which govern turbidity current behaviour and deposits. [ 37 ] The so-called depth-averaged or shallow-water models were initially introduced for compositional gravity currents [ 38 ] and later extended to turbidity currents. [ 39 ] [ 40 ] The typical assumptions used with shallow-water models are: a hydrostatic pressure field, clear fluid that is not entrained (or detrained), and a particle concentration that does not depend on the vertical location. Considering their ease of implementation, these models can typically predict flow characteristics such as front location or front speed fairly accurately in simplified geometries, e.g. rectangular channels. With the increase in computational power, depth-resolved models have become a powerful tool to study gravity and turbidity currents. These models are, in general, mainly focused on the solution of the Navier–Stokes equations for the fluid phase. With a dilute suspension of particles, a Eulerian approach has proved accurate at describing the evolution of the particles in terms of a continuum particle concentration field. These models need none of the assumptions made by shallow-water models, and therefore accurate calculations and measurements can be performed to study these currents: the pressure field, energy budgets, vertical particle concentration and accurate deposit heights, to mention a few. Both direct numerical simulation (DNS) [ 41 ] and turbulence modeling [ 42 ] are used to model these currents.
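In the spirit of the depth-averaged approach described above, a gravity current is sometimes idealised even further as a "box" of conserved volume whose front advances at a Froude-number-limited speed. The toy model below is far simpler than the cited shallow-water models, and all parameter values are illustrative assumptions.

```python
# A minimal "box model" sketch: the current is a rectangle of conserved
# volume per unit width; its front advances at Fr * sqrt(g' h). The reduced
# gravity is held fixed (no deposition), which is a strong simplification.
from math import sqrt

G_PRIME = 0.05   # m/s^2, reduced gravity (assumed)
FR = 1.19        # classical front Froude number (assumed)
VOLUME = 500.0   # m^2, volume per unit width (assumed)

x, dt, t_end = 50.0, 1.0, 600.0  # initial length (m), time step (s), duration (s)
t = 0.0
while t < t_end:
    h = VOLUME / x                       # depth from volume conservation
    x += FR * sqrt(G_PRIME * h) * dt     # advance the front
    t += dt
print(f"front at {x:.0f} m after {t_end:.0f} s")
```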
https://en.wikipedia.org/wiki/Turbidity_current
A turbidostat is a continuous microbiological culture device, similar to a chemostat or an auxostat , which has feedback between the turbidity of the culture vessel and the dilution rate. [ 1 ] [ 2 ] The theoretical relationship between growth in a chemostat and growth in a turbidostat is somewhat complex, in part because the two are similar. A chemostat has a fixed volume and flow rate, and thus a fixed dilution rate. A turbidostat dynamically adjusts the flow rate (and therefore the dilution rate) to keep the turbidity constant. At steady state, operation of the chemostat and the turbidostat is identical. It is only when classical chemostat assumptions are violated (for instance, out of equilibrium, or when the cells are mutating) that a turbidostat is functionally different. One such case is when cells are growing at their maximum growth rate, a condition in which it is difficult to set a chemostat to the appropriate constant dilution rate. [ 3 ] While most turbidostats use a spectrophotometer/turbidimeter to measure the optical density for control purposes, other methods exist, such as dielectric permittivity. [ 4 ] The morbidostat is a similar device built to study the evolution of antimicrobial resistance; the aim is also to maintain constant turbidity levels, but this is controlled using the addition of antimicrobials. [ 5 ]
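The feedback loop described above can be sketched as a simple simulation: the culture grows between measurements, and fresh medium is added whenever the measured optical density exceeds the setpoint. The growth rate, setpoint and dilution step below are arbitrary illustrative values.

```python
# A minimal turbidostat control loop: exponential growth punctuated by
# dilution whenever optical density (OD) rises above the setpoint.
# All parameter values are illustrative assumptions.

GROWTH_RATE = 0.6     # 1/h, exponential growth constant
OD_SETPOINT = 0.50    # target optical density
DILUTION_STEP = 0.05  # fraction of vessel volume replaced per control action
DT = 0.05             # h, control-loop interval

od = 0.10
for step in range(200):
    od *= 1 + GROWTH_RATE * DT   # growth between measurements
    if od > OD_SETPOINT:         # turbidity feedback: dilute with fresh medium
        od *= 1 - DILUTION_STEP
print(f"OD held near setpoint: {od:.2f}")
```

Because dilution is triggered by the turbidity reading itself, the average dilution rate settles at whatever value matches the culture's growth rate, which is exactly the behaviour that distinguishes a turbidostat from a fixed-rate chemostat.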
https://en.wikipedia.org/wiki/Turbidostat
A turbine blade is a radial aerofoil mounted in the rim of a turbine disc that produces a tangential force which rotates a turbine rotor. [ 2 ] Each turbine disc has many blades. [ 3 ] They are used in gas turbine engines and steam turbines . The blades are responsible for extracting energy from the high temperature, high pressure gas produced by the combustor . The turbine blades are often the limiting component of gas turbines. [ 4 ] To survive in this difficult environment, turbine blades often use exotic materials like superalloys and many different methods of cooling that can be categorized as internal and external cooling, [ 5 ] [ 6 ] [ 7 ] as well as thermal barrier coatings . [ 8 ] [ 9 ] Blade fatigue is a major source of failure in steam turbines and gas turbines. Fatigue is caused by the stress induced by vibration and resonance within the operating range of machinery. To protect blades from these high dynamic stresses, friction dampers are used. [ 10 ] Blades of wind turbines and water turbines are designed to operate in different conditions, which typically involve lower rotational speeds and temperatures. In a gas turbine engine , a single turbine stage is made up of a rotating disk that holds many turbine blades and a stationary ring of nozzle guide vanes in front of the blades. The turbine is connected to a compressor using a shaft (the complete rotating assembly sometimes being called a "spool"). Air is compressed, raising the pressure and temperature, as it passes through the compressor. The temperature is then increased by combustion of fuel inside the combustor, which is located between the compressor and the turbine. The high-temperature, high-pressure gas then passes through the turbine. The turbine stages extract energy from this flow, lowering the pressure and temperature of the gas, and transfer the energy to the compressor via the shaft. As far as the energy exchange between the gas and the machine is concerned, the turbine works much like the compressor, only in reverse. There is a direct relationship between how much the gas temperature changes (increase in the compressor, decrease in the turbine) and the shaft power input (compressor) or output (turbine). [ 11 ] For a turbofan engine the number of turbine stages required to drive the fan increases with the bypass ratio, [ 12 ] unless the turbine speed can be increased by adding a gearbox between the turbine and fan, in which case fewer stages are required. [ 13 ] The number of turbine stages can have a great effect on how the turbine blades are designed for each stage. Many gas turbine engines are twin-spool designs, meaning that there is a high-pressure spool and a low-pressure spool. Other gas turbines use three spools, adding an intermediate-pressure spool between the high- and low-pressure spools. The high-pressure turbine is exposed to the hottest, highest-pressure air, and the low-pressure turbine is subjected to cooler, lower-pressure air. The difference in conditions leads to high-pressure and low-pressure turbine blade designs that are significantly different in material and cooling choices, even though the aerodynamic and thermodynamic principles are the same. [ 14 ] Under these severe operating conditions inside gas and steam turbines, the blades face high temperatures, high stresses, and potentially high vibrations.
Steam turbine blades are critical components in power plants which convert the linear motion of high-temperature, high-pressure steam flowing down a pressure gradient into a rotary motion of the turbine shaft. [ 15 ] Turbine blades are subjected to very strenuous environments inside a gas turbine. They face high temperatures, high stresses, and a potential environment of high vibration. All three of these factors can lead to blade failures, potentially destroying the engine; therefore, turbine blades are carefully designed to resist these conditions. [ 16 ] Turbine blades are subjected to stress from centrifugal force (turbine stages can rotate at tens of thousands of revolutions per minute (RPM)) and fluid forces that can cause fracture , yielding , or creep [ nb 1 ] failures. Additionally, the first stage (the stage directly following the combustor) of a modern gas turbine faces temperatures around 2,500 °F (1,370 °C), [ 17 ] up from temperatures around 1,500 °F (820 °C) in early gas turbines. [ 18 ] Modern military jet engines, like the Snecma M88 , can see turbine temperatures of 2,900 °F (1,590 °C). [ 19 ] Those high temperatures can weaken the blades and make them more susceptible to creep failures. The high temperatures can also make the blades susceptible to corrosion failures. [ 15 ] Finally, vibrations from the engine and the turbine itself can cause fatigue failures. [ 16 ] A limiting factor in early jet engines was the performance of the materials available for the hot section (combustor and turbine) of the engine. The need for better materials spurred much research in the field of alloys and manufacturing techniques, and that research resulted in a long list of new materials and methods that make modern gas turbines possible. [ 18 ] One of the earliest of these was Nimonic , used in the British Whittle engines. The development of superalloys in the 1940s and new processing methods such as vacuum induction melting in the 1950s greatly increased the temperature capability of turbine blades. Further processing methods like hot isostatic pressing improved the alloys used for turbine blades and increased turbine blade performance. [ 18 ] Modern turbine blades often use nickel -based superalloys that incorporate chromium , cobalt , and rhenium . [ 16 ] [ 20 ] Aside from alloy improvements, a major breakthrough was the development of directional solidification (DS) and single crystal (SC) production methods. These methods greatly increase strength against fatigue and creep by aligning grain boundaries in one direction (DS) or by eliminating grain boundaries altogether (SC). SC research began in the 1960s with Pratt and Whitney and took about 10 years to be implemented. One of the first implementations of DS was with the J58 engines of the SR-71 . [ 18 ] [ 21 ] [ 22 ] Another major improvement to turbine blade material technology was the development of thermal barrier coatings (TBC). Where DS and SC developments improved creep and fatigue resistance, TBCs improved corrosion and oxidation resistance, both of which became greater concerns as temperatures increased. The first TBCs, applied in the 1970s, were aluminide coatings. Improved ceramic coatings became available in the 1980s. These coatings increased turbine blade temperature capability by about 200 °F (90 °C). [ 18 ] The coatings also improve blade life, almost doubling the life of turbine blades in some cases. [ 23 ] Most turbine blades are manufactured by investment casting (or lost-wax processing).
This process involves making a precise negative die of the blade shape, which is filled with wax to form the blade. If the blade is hollow (i.e., it has internal cooling passages), a ceramic core in the shape of the passage is inserted into the middle. The wax blade is coated with a heat-resistant material to make a shell, and then that shell is filled with the blade alloy. This step can be more complicated for DS or SC materials, but the process is similar. If there is a ceramic core in the middle of the blade, it is dissolved in a solution that leaves the blade hollow. The blades are coated with a TBC, and then any cooling holes are machined. [ 24 ] Ceramic matrix composites (CMC), where fibers are embedded in a matrix of polymer-derived ceramics , are being developed for use in turbine blades. [ 25 ] The main advantage of CMCs over conventional superalloys is their light weight and high temperature capability. SiC/SiC composites consisting of a silicon carbide matrix reinforced by silicon carbide fibers have been shown to withstand operating temperatures 200–300 °F higher than nickel superalloys. [ 26 ] GE Aviation successfully demonstrated the use of such SiC/SiC composite blades for the low-pressure turbine of its F414 jet engine. [ 27 ] [ 28 ] Many different alloys are used in turbine blades. [ 29 ] [ 30 ] At a constant pressure ratio, the thermal efficiency of the engine increases as the turbine entry temperature (TET) increases. However, high temperatures can damage the turbine, as the blades are under large centrifugal stresses and materials are weaker at high temperature. So, turbine blade cooling is essential for the first stages, but since the gas temperature drops through each stage, it is not required for later stages such as the low pressure turbine or a power turbine. [ 36 ] Modern turbine designs operate with inlet temperatures higher than 1900 kelvins, which is achieved by actively cooling the turbine components. [ 5 ] Turbine blades are cooled using air, except for limited use of steam cooling in combined cycle power plants. Water cooling has been extensively tested but has never been introduced. [ 37 ] The General Electric "H" class gas turbine has cooled rotating blades and static vanes using steam from a combined cycle steam turbine, although GE was reported in 2012 to be going back to air-cooling for its "FlexEfficiency" units. [ 38 ] Liquid cooling seems more attractive because of the high specific heat capacity and the possibility of evaporative cooling, but there can be leakage, corrosion, choking and other problems which work against this method. [ 36 ] On the other hand, air cooling allows the discharged air to join the main flow without any problem. The quantity of air required for this purpose is 1–3% of the main flow, and the blade temperature can be reduced by 200–300 °C. [ 36 ] There are many techniques of cooling used in gas turbine blades: convection , film, transpiration cooling, cooling effusion, pin fin cooling etc., which fall under the categories of internal and external cooling. While all methods have their differences, they all work by using cooler air taken from the compressor to remove heat from the turbine blades. [ 39 ] Convection cooling works by passing cooling air through passages internal to the blade. [ 40 ] Heat is transferred by conduction through the blade, and then by convection into the air flowing inside of the blade.
A large internal surface area is desirable for this method, so the cooling paths tend to be serpentine and full of small fins. The internal passages in the blade may be circular or elliptical in shape. Cooling is achieved by passing the air through these passages from the hub towards the blade tip. This cooling air comes from an air compressor. In the case of the gas turbine, the fluid outside is relatively hot; the cooling air passes through the cooling passage and mixes with the main stream at the blade tip. [ 39 ] [ 41 ] A variation of convection cooling, impingement cooling, works by hitting the inner surface of the blade with high velocity air. This allows more heat to be transferred by convection than regular convection cooling does. Impingement cooling is used in the regions of greatest heat loads. In turbine blades, the leading edge has the maximum temperature and thus heat load. Impingement cooling is also used in the mid-chord of the vane. Blades are hollow with a core. [ 42 ] There are internal cooling passages. Cooling air enters from the leading edge region and turns towards the trailing edge. [ 41 ] Film cooling (also called thin film cooling), a widely used type, allows for higher cooling effectiveness than either convection or impingement cooling. [ 43 ] This technique consists of pumping the cooling air out of the blade through multiple small holes or slots in the structure. A thin layer (the film) of cooling air is then created on the external surface of the blade, reducing the heat transfer from the main flow, whose temperature (1300–1800 kelvins ) can exceed the melting point of the blade material (1300–1400 kelvins). [ 44 ] [ 45 ] The ability of the film cooling system to cool the surface is typically evaluated using a parameter called cooling effectiveness (illustrated in the sketch below). Higher cooling effectiveness (with a maximum value of one) indicates that the blade material temperature is closer to the coolant temperature. In locations where the blade temperature approaches the hot gas temperature, the cooling effectiveness approaches zero. The cooling effectiveness is mainly affected by the coolant flow parameters and the injection geometry. Coolant flow parameters include the velocity, density, blowing and momentum ratios, which are calculated using the coolant and mainstream flow characteristics. Injection geometry parameters consist of hole or slot geometry (i.e. cylindrical, shaped holes or slots) and injection angle. [ 5 ] [ 6 ] A United States Air Force program in the early 1970s funded the development of a turbine blade that was both film and convection cooled, and that method has become common in modern turbine blades. [ 18 ] Injecting the cooler bleed into the flow reduces turbine isentropic efficiency; the compression of the cooling air (which does not contribute power to the engine) incurs an energetic penalty; and the cooling circuit adds considerable complexity to the engine. [ 46 ] All of these factors have to be compensated by the increase in overall performance (power and efficiency) allowed by the increase in turbine temperature. [ 47 ] In recent years, researchers have suggested using plasma actuators for film cooling. The film cooling of turbine blades using a dielectric barrier discharge plasma actuator was first proposed by Roy and Wang. [ 48 ] A horseshoe-shaped plasma actuator, set in the vicinity of the holes for gas flow, has been shown to improve the film cooling effectiveness significantly.
Following the previous research, recent reports using both experimental and numerical methods demonstrated a cooling enhancement of 15% when using a plasma actuator. [ 49 ] [ 50 ] [ 51 ] In cooling effusion, the blade surface is made of porous material, meaning it has a large number of small orifices on the surface. Cooling air is forced through these porous holes and forms a film or cooler boundary layer. Besides this, uniform cooling is caused by the effusion of the coolant over the entire blade surface. [ 36 ] In pin fin cooling, applied in the narrow trailing edge, film cooling is used to enhance heat transfer from the blade. There is an array of pin fins on the blade surface, and heat transfer takes place from this array and through the side walls. As the coolant flows across the fins with high velocity, the flow separates and wakes are formed. Many factors contribute to the heat transfer rate, among which the type of pin fin and the spacing between fins are the most significant. [ 42 ] Transpiration cooling is similar to film cooling in that it creates a thin film of cooling air on the blade, but it is different in that the air is "leaked" through a porous shell rather than injected through holes. This type of cooling is effective at high temperatures as it uniformly covers the entire blade with cool air. [ 41 ] [ 52 ] Transpiration-cooled blades generally consist of a rigid strut with a porous shell. Air flows through internal channels of the strut and then passes through the porous shell to cool the blade. [ 53 ] As with film cooling, increased cooling air decreases turbine efficiency, so that decrease has to be balanced with improved temperature performance. [ 47 ]
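The cooling effectiveness parameter described earlier is commonly defined from three temperatures: the mainstream gas, the coolant, and the blade surface. A minimal sketch of that definition follows; the sample temperatures are assumed values, not data from the cited studies.

```python
# Cooling effectiveness: 1.0 when the blade surface reaches the coolant
# temperature, 0.0 when it reaches the hot gas temperature.

def cooling_effectiveness(t_gas: float, t_coolant: float, t_surface: float) -> float:
    """Dimensionless measure of how close the surface is to the coolant temperature."""
    return (t_gas - t_surface) / (t_gas - t_coolant)

# Mainstream gas at 1600 K, compressor-bleed coolant at 900 K, blade at 1150 K:
print(round(cooling_effectiveness(1600.0, 900.0, 1150.0), 2))  # 0.64
```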
https://en.wikipedia.org/wiki/Turbine_blade
Each turbine in a gas turbine engine has an operating map. Complete maps are either based on turbine rig test results or are predicted by a special computer program. Alternatively, the map of a similar turbine can be suitably scaled. A turbine map [ 1 ] shows lines of percent corrected speed (based on a reference value) plotted against an x-axis of pressure ratio, although deltaH/T (roughly proportional to the temperature drop across the unit divided by the component entry temperature) is also often used. The y-axis is some measure of flow, usually non-dimensional flow or corrected flow, but not actual flow. Sometimes the axes of a turbine map are transposed, to be consistent with those of a compressor map . As is the case with compressor maps, a companion plot showing the variation of isentropic (i.e. adiabatic ) or polytropic efficiency is often also included. The turbine may be a transonic unit, where the throat Mach number reaches sonic conditions and the turbine becomes truly choked . Consequently, there is virtually no variation in flow between the corrected speed lines at high pressure ratios. Most turbines, however, are subsonic devices, the highest Mach number at the NGV throat being about 0.85. Under these conditions, there is a slight scatter in flow between the percent corrected speed lines in the 'choked' region of the map, where the flow for a given speed reaches a plateau. Unlike in a compressor or fan, surge or stall does not occur in a turbine. This is because the gas flows through the turbine in its natural direction, from high to low pressure. As a result, there is no surge line marked on a turbine map. Working lines are difficult to see on a conventional turbine map because the speed lines bunch up. The map may be replotted with the y-axis being the product of flow and corrected speed. This separates the speed lines, enabling working lines (and efficiency contours) to be cross-plotted and clearly seen. The following discussion relates to the expansion system of a 2-spool, high bypass ratio, unmixed turbofan. On the RHS is a typical primary (i.e. hot) nozzle map (or characteristic). Its appearance is similar to that of a turbine map, but it lacks any (rotational) speed lines. Note that at high flight speeds (ignoring the change in altitude), the hot nozzle is usually in, or close to, a choking condition. This is because the ram rise in the air intake raises the nozzle pressure ratio. At static (e.g. SLS) conditions there is no ram rise, so the nozzle tends to operate unchoked (LHS of plot). The low pressure turbine 'sees' the variation in flow capacity of the primary nozzle. A falling nozzle flow capacity tends to reduce the LP turbine pressure ratio (and deltaH/T). As the left hand map shows, initially the reduction in LP turbine deltaH/T has little effect upon the entry flow of the unit. Eventually, however, the LP turbine unchokes, causing the flow capacity of the LP turbine to start to decrease. As long as the LP turbine remains choked, there is no significant change in HP turbine pressure ratio (or deltaH/T) and flow. Once the LP turbine unchokes, however, the HP turbine deltaH/T starts to decrease. Eventually the HP turbine unchokes, causing its flow capacity to start to fall. Ground idle is often reached shortly after the HP turbine unchokes.
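The "corrected" quantities used on such maps refer actual flow and shaft speed to standard reference conditions, so that one map covers many inlet states. A sketch of the usual corrections follows; the reference values are standard sea-level assumptions.

```python
# Corrected (referred) flow and speed, as commonly used on turbine and
# compressor maps. Reference conditions are standard sea-level assumptions.

T_REF = 288.15   # K
P_REF = 101.325  # kPa

def corrected_flow(mass_flow: float, t_in: float, p_in: float) -> float:
    """W * sqrt(theta) / delta, with theta = T/T_ref and delta = P/P_ref."""
    return mass_flow * (t_in / T_REF) ** 0.5 / (p_in / P_REF)

def corrected_speed(rpm: float, t_in: float) -> float:
    """N / sqrt(theta): shaft speed referred to the reference temperature."""
    return rpm / (t_in / T_REF) ** 0.5

print(corrected_flow(20.0, 1100.0, 1200.0))  # kg/s, referred to sea-level conditions
print(corrected_speed(15000.0, 1100.0))      # rpm, referred to sea-level temperature
```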
https://en.wikipedia.org/wiki/Turbine_map
Turboswing is a type of grease filter used in kitchen ventilation to remove grease particles from the air. It is typically installed inside the extractor hoods of restaurant kitchens. Its operation is based on a rotating filtering medium. [ 1 ] [ 2 ] The main difference between turboswing and most common filters is that in turboswing filters the filtering medium is not static: a perforated disk rotates at high speed. [ 1 ] When the grease particles pass through the rotating disk they are separated from the air. After separation, the centrifugal force of the rotating disk throws the particles against the inner walls of the filter. [ 3 ] The particles then drip down the walls of the chamber onto the lower collection basin, where they stay until they are removed through the tap at the bottom of the filter dome. [ 4 ] Turboswing filters can remove grease particles as small as 4 μm, as opposed to 8 μm for common filters. This is because the filtering medium is moving, which increases the probability of collision between the filter and the particle. [ 1 ] [ 5 ] [ 6 ] [ 7 ] Turboswing filters can work with varying airflows , and the grease extraction level is not affected by the airflow. This means that this kind of filter can be used in restaurants that turn down the air volumes at non-peak times in order to save energy . The explanation is that if the airflow is lower, the particles pass through the rotating disk at a slower speed, thereby increasing the collision probability. Other filters, like cyclonic filters, require the airflow to be high on a permanent basis, or else the performance of the filter drops. Therefore, the use of filters like turboswing makes it possible to save large amounts of energy in restaurant kitchen ventilation. [ 8 ] [ 9 ] Turboswing grease filters also make it possible to recover heat from kitchen exhaust air. [ 10 ] Unlike common filters, turboswing filters extract the small particles responsible for fouling the heat exchanger. [ 5 ] Heat recovery makes it possible to save energy in the ventilation of a building. In particular, kitchen air is hotter than the air in most other rooms, and therefore a large amount of energy can potentially be saved. However, when it comes to the ventilation of a kitchen, if the correct kind of filter is not used, heat recovery can be very difficult or even impossible, because of the presence of grease particles in the air. Grease particles accumulate on the heat exchanger, rendering it useless very quickly. [ 11 ] In order to have heat recovery in a kitchen, the air must be completely clear of grease; in other words, both large and small grease particles must be removed from the air. Static filters cannot adequately deal with small particles, making it impossible to recover heat. Turboswing filters exhibit high performance in dealing with small particles, and this is why they enable heat recovery to be done with kitchen air. [ 1 ] [ 11 ]
https://en.wikipedia.org/wiki/TurboSwing
A turboexpander , also referred to as a turbo-expander or an expansion turbine , is a centrifugal or axial-flow turbine , through which a high- pressure gas is expanded to produce work that is often used to drive a compressor or generator . [ 1 ] [ 2 ] [ 3 ] Because work is extracted from the expanding high-pressure gas, the expansion is approximated by an isentropic process (i.e., a constant- entropy process), and the low-pressure exhaust gas from the turbine is at a very low temperature , −150 °C or less, depending upon the operating pressure and gas properties. Partial liquefaction of the expanded gas is not uncommon. Turboexpanders are widely used as sources of refrigeration in industrial processes such as the extraction of ethane and natural-gas liquids (NGLs) from natural gas , [ 4 ] the liquefaction of gases (such as oxygen , nitrogen , helium , argon and krypton ) [ 5 ] [ 6 ] and other low-temperature processes. Turboexpanders currently in operation range in size from about 750 W to about 7.5 MW (1 hp to about 10,000 hp). Although turboexpanders are commonly used in low-temperature processes, they are used in many other applications. This section discusses one of the low-temperature processes, as well as some of the other applications. Raw natural gas consists primarily of methane (CH4), the shortest and lightest hydrocarbon molecule, along with various amounts of heavier hydrocarbon gases such as ethane (C2H6), propane (C3H8), normal butane (n-C4H10), isobutane (i-C4H10), pentanes and even higher- molecular-mass hydrocarbons. The raw gas also contains various amounts of acid gases such as carbon dioxide (CO2), hydrogen sulfide (H2S) and mercaptans such as methanethiol (CH3SH) and ethanethiol (C2H5SH). When processed into finished by-products (see Natural-gas processing ), these heavier hydrocarbons are collectively referred to as NGL (natural-gas liquids). The extraction of the NGL often involves a turboexpander [ 7 ] and a low-temperature distillation column (called a demethanizer ) as shown in the figure. The inlet gas to the demethanizer is first cooled to about −51 °C in a heat exchanger (referred to as a cold box ), which partially condenses the inlet gas. The resultant gas–liquid mixture is then separated into a gas stream and a liquid stream. The liquid stream from the gas–liquid separator flows through a valve and undergoes a throttling expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa), which is an isenthalpic process (i.e., a constant-enthalpy process) that results in lowering the temperature of the stream from about −51 °C to about −81 °C as the stream enters the demethanizer. The gas stream from the gas–liquid separator enters the turboexpander, where it undergoes an isentropic expansion from an absolute pressure of 62 bar to 21 bar (6.2 to 2.1 MPa) that lowers the gas stream temperature from about −51 °C to about −91 °C as it enters the demethanizer to serve as distillation reflux . Liquid from the top tray of the demethanizer (at about −90 °C) is routed through the cold box, where it is warmed to about 0 °C as it cools the inlet gas, and is then returned to the lower section of the demethanizer. Another liquid stream from the lower section of the demethanizer (at about 2 °C) is routed through the cold box and returned to the demethanizer at about 12 °C.
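The near-isentropic expansion just described can be approximated for an ideal gas by T₂ = T₁·(P₂/P₁)^((γ−1)/γ). Real demethanizer feed gas deviates from ideal behaviour and partially liquefies, so the sketch below is only a rough bound; the heat-capacity ratio chosen for methane-rich gas is an assumption.

```python
# Ideal-gas isentropic expansion: T2 = T1 * (P2/P1)^((gamma - 1)/gamma).
# gamma for methane-rich gas is an assumed value.

def isentropic_exit_temp(t1_k: float, p1: float, p2: float, gamma: float = 1.3) -> float:
    """Exit temperature after an ideal-gas isentropic expansion from p1 to p2."""
    return t1_k * (p2 / p1) ** ((gamma - 1.0) / gamma)

# Expand from 62 bar at -51 degC (222 K) to 21 bar, as in the example above:
t2 = isentropic_exit_temp(222.15, 62.0, 21.0)
print(f"{t2 - 273.15:.0f} degC")  # about -100 degC
```

The ideal-gas estimate of about −100 °C overstates the drop; the −91 °C quoted above reflects real-gas behaviour and a turbine efficiency below 100%.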
In effect, the inlet gas provides the heat required to "reboil" the bottom of the demethanizer, and the turboexpander removes the heat required to provide reflux in the top of the demethanizer. The overhead gas product from the demethanizer at about −90 °C is processed natural gas that is of suitable quality for distribution to end-use consumers by pipeline . It is routed through the cold box, where it is warmed as it cools the inlet gas. It is then compressed in the gas compressor driven by the turboexpander and further compressed in a second-stage gas compressor driven by an electric motor before entering the distribution pipeline. The bottom product from the demethanizer is also warmed in the cold box, as it cools the inlet gas, before it leaves the system as NGL. Typical operating conditions of an offshore gas-conditioning turbo-expander/recompressor have been published. [ 8 ] The figure depicts an electric power-generation system that uses a heat source, a cooling medium (air, water or other), a circulating working fluid and a turboexpander; such a system can accommodate a wide variety of heat sources. The circulating working fluid (usually an organic compound such as R-134a) is pumped to a high pressure and then vaporized in the evaporator by heat exchange with the available heat source. The resulting high-pressure vapor flows to the turboexpander, where it undergoes an isentropic expansion and exits as a vapor–liquid mixture, which is then condensed into a liquid by heat exchange with the available cooling medium. The condensed liquid is pumped back to the evaporator to complete the cycle. The system in the figure implements a Rankine cycle as it is used in fossil-fuel power plants , where water is the working fluid and the heat source is derived from the combustion of natural gas, fuel oil or coal used to generate high-pressure steam. The high-pressure steam then undergoes an isentropic expansion in a conventional steam turbine . The steam turbine exhaust steam is next condensed into liquid water, which is then pumped back to the steam generator to complete the cycle. When an organic working fluid such as R-134a is used in the Rankine cycle, the cycle is sometimes referred to as an organic Rankine cycle (ORC). [ 9 ] [ 10 ] [ 11 ] A refrigeration system utilizes a compressor, a turboexpander and an electric motor. Depending on the operating conditions, the turboexpander reduces the load on the electric motor by 6–15% compared to a conventional vapor-compression refrigeration system that uses a throttling expansion valve rather than a turboexpander. [ 12 ] Basically, this can be seen as a form of turbo compounding . Such a system employs a high-pressure refrigerant, i.e., one with a low normal boiling point . [ 12 ] As shown in the figure, refrigerant vapor is compressed to a higher pressure, resulting in a higher temperature as well. The hot, compressed vapor is then condensed into a liquid. The condenser is where heat is expelled from the circulating refrigerant and is carried away by whatever cooling medium is used in the condenser (air, water, etc.). The refrigerant liquid flows through the turboexpander, where it is vaporized, and the vapor undergoes an isentropic expansion, which results in a low-temperature mixture of vapor and liquid. The vapor–liquid mixture is then routed through the evaporator, where it is vaporized by heat absorbed from the space being cooled. The vaporized refrigerant flows to the compressor inlet to complete the cycle.
In the case where the working fluid remains gaseous through the heat exchangers without undergoing phase changes, this cycle is also referred to as a reverse Brayton cycle or "refrigerating Brayton cycle". The combustion flue gas from the catalyst regenerator of a fluid catalytic cracker is at a temperature of about 715 °C and at a pressure of about 2.4 barg (240 kPa gauge). Its gaseous components are mostly carbon monoxide (CO), carbon dioxide (CO2) and nitrogen (N2). Although the flue gas has been through two stages of cyclones (located within the regenerator) to remove entrained catalyst fines, it still contains some residual catalyst fines. The figure depicts how power is recovered and utilized by routing the regenerator flue gas through a turboexpander. After the flue gas exits the regenerator, it is routed through a secondary catalyst separator containing swirl tubes designed to remove 70–90% of the residual catalyst fines. [ 13 ] This is required to prevent erosion damage to the turboexpander. As shown in the figure, expansion of the flue gas through a turboexpander provides sufficient power to drive the regenerator's combustion air compressor. The electrical motor-generator in the power-recovery system can consume or produce electrical power. If the expansion of the flue gas does not provide enough power to drive the air compressor, the electric motor-generator provides the needed additional power. If the flue gas expansion provides more power than needed to drive the air compressor, then the electric motor-generator converts the excess power into electric power and exports it to the refinery's electrical system. [ 14 ] The steam turbine is used to drive the regenerator's combustion air compressor during start-ups of the fluid catalytic cracker until there is sufficient combustion flue gas to take over that task. The expanded flue gas is then routed through a steam-generating boiler (referred to as a CO boiler ), where the carbon monoxide in the flue gas is burned as fuel to provide steam for use in the refinery. [ 14 ] The flue gas from the CO boiler is processed through an electrostatic precipitator (ESP) to remove residual particulate matter . The ESP removes particulates in the size range of 2 to 20 micrometers from the flue gas. [ 14 ] The possible use of an expansion machine for isentropically creating low temperatures was suggested by Carl Wilhelm Siemens ( Siemens cycle ), a German engineer, in 1857. About three decades later, in 1885, Ernest Solvay of Belgium attempted to use a reciprocating expander machine, but could not attain any temperatures lower than −98 °C because of problems with lubrication of the machine at such temperatures. [ 2 ] In 1902, Georges Claude , a French engineer, successfully used a reciprocating expansion machine to liquefy air. He used a degreased, burnt leather packing as a piston seal without any lubrication. With an air pressure of only 40 bar (4 MPa), Claude achieved an almost isentropic expansion resulting in a lower temperature than had before been possible. [ 2 ] The first turboexpanders seem to have been designed in about 1934 or 1935 by Guido Zerkowitz, an Italian engineer working for the German firm of Linde AG . [ 15 ] [ 16 ] In 1939, the Russian physicist Pyotr Kapitsa perfected the design of centrifugal turboexpanders. His first practical prototype was made of Monel metal, had an outside diameter of only 8 cm (3.1 in), operated at 40,000 revolutions per minute and expanded 1,000 cubic metres of air per hour.
It used a water pump as a brake and had an efficiency of 79–83%. [ 2 ] [ 16 ] Most turboexpanders in industrial use since then have been based on Kapitsa's design, and centrifugal turboexpanders have taken over almost 100% of the industrial gas liquefaction and low-temperature process requirements. [ 2 ] [ 16 ] The availability of liquid oxygen revolutionized the production of steel using the basic oxygen steelmaking process. In 1978, Pyotr Kapitsa was awarded the Nobel Prize in Physics for his body of work in the area of low-temperature physics. [ 17 ] In 1983, San Diego Gas and Electric was among the first to install a turboexpander in a natural-gas letdown station for energy recovery . [ 18 ] Turboexpanders can be classified by loading device or by bearings. The three main loading devices used in turboexpanders are centrifugal compressors , electrical generators and hydraulic brakes. With centrifugal compressors and electrical generators, the shaft power from the turboexpander is recouped either to recompress the process gas or to generate electrical energy, lowering utility bills. Hydraulic brakes are used when the turboexpander is very small and harvesting the shaft power is not economically justifiable. The bearings used are either oil bearings or magnetic bearings .
https://en.wikipedia.org/wiki/Turboexpander
Turbomachinery , in mechanical engineering , describes machines that transfer energy between a rotor and a fluid , including both turbines and compressors . While a turbine transfers energy from a fluid to a rotor, a compressor transfers energy from a rotor to a fluid. [ 1 ] [ 2 ] It is an important application of fluid mechanics . [ 3 ] These two types of machines are governed by the same basic relationships, including Newton's second law of motion and Euler's pump and turbine equation for compressible fluids . Centrifugal pumps are also turbomachines that transfer energy from a rotor to a fluid, usually a liquid, while turbines and compressors usually work with a gas. [ 1 ] The first turbomachines could be identified as water wheels , which appeared between the 3rd and 1st centuries BCE in the Mediterranean region. These were used throughout the medieval period and began the first Industrial Revolution . When steam power started to be used, as the first power source driven by the combustion of a fuel rather than renewable natural power sources, it was in reciprocating engines . Primitive turbines and conceptual designs for them, such as the smoke jack , appeared intermittently, but the temperatures and pressures required for a practically efficient turbine exceeded the manufacturing technology of the time. The first patent for a gas turbine was filed in 1791 by John Barber . Practical hydroelectric water turbines and steam turbines did not appear until the 1880s. Gas turbines appeared in the 1930s. The first impulse type turbine was created by Carl Gustaf de Laval in 1883. This was closely followed by the first practical reaction type turbine in 1884, built by Charles Parsons . Parsons' first design was a multi-stage axial-flow unit, which George Westinghouse acquired and began manufacturing in 1895, while General Electric acquired de Laval's designs in 1897. Since then, development has skyrocketed from Parsons' early design, producing 0.746 kW, to modern nuclear steam turbines producing upwards of 1500 MW. Furthermore, steam turbines accounted for roughly 45% of electrical power generated in the United States in 2021. [ 4 ] The first functioning industrial gas turbines were used in the late 1890s to power street lights (Meher-Homji, 2000). In general, the two kinds of turbomachines encountered in practice are open and closed turbomachines. Open machines such as propellers , windmills , and unshrouded fans act on an infinite extent of fluid, whereas closed machines operate on a finite quantity of fluid as it passes through a housing or casing. [ 2 ] Turbomachines are also categorized according to the type of flow. When the flow is parallel to the axis of rotation , they are called axial flow machines, and when flow is perpendicular to the axis of rotation, they are referred to as radial (or centrifugal) flow machines. There is also a third category, called mixed flow machines, where both radial and axial flow velocity components are present. [ 2 ] Turbomachines may be further classified into two additional categories: those that absorb energy to increase the fluid pressure , i.e. pumps , fans , and compressors , and those that produce energy, such as turbines, by expanding flow to lower pressures. Of particular interest are applications which contain pumps, fans, compressors and turbines. These components are essential in almost all mechanical equipment systems, such as power and refrigeration cycles . [ 2 ] [ 5 ]
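Euler's pump and turbine equation, mentioned above, relates the specific work exchanged with the rotor to the change in the product of blade speed and tangential flow velocity across it. A minimal sketch follows; the velocity-triangle values are made up for illustration.

```python
# Euler's pump and turbine equation: w = u2*c_theta2 - u1*c_theta1 (J/kg).
# Positive w here means work input to the fluid (pump/compressor);
# negative w means work extracted from the fluid (turbine).

def euler_specific_work(u1: float, c_theta1: float, u2: float, c_theta2: float) -> float:
    """Specific work from blade speeds (u) and tangential flow velocities (c_theta)."""
    return u2 * c_theta2 - u1 * c_theta1

# Centrifugal pump: flow enters with no swirl and leaves the larger-radius
# exit with a tangential component (illustrative values).
w = euler_specific_work(u1=15.0, c_theta1=0.0, u2=40.0, c_theta2=25.0)
print(f"{w:.0f} J/kg of work input")  # 1000 J/kg
```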
Any device that extracts energy from or imparts energy to a continuously moving stream of fluid can be called a turbomachine. Elaborating, a turbomachine is a power- or heat-generating machine which employs the dynamic action of a rotating element, the rotor; the action of the rotor changes the energy level of the fluid flowing continuously through the machine. Turbines, compressors and fans are all members of this family of machines. [ 6 ] In contrast to positive displacement machines (particularly of the reciprocating type, which are low-speed machines based on mechanical and volumetric efficiency considerations), the majority of turbomachines run at comparatively high speeds without any mechanical problems and with volumetric efficiency close to one hundred percent. [ 7 ] Turbomachines can be categorized on the basis of the direction of energy conversion, [ 1 ] [ 2 ] and on the basis of the nature of the flow path through the passage of the rotor: [ 8 ] Axial flow turbomachines - When the path of the through-flow is wholly or mainly parallel to the axis of rotation, the device is termed an axial flow turbomachine. [ 9 ] The radial component of the fluid velocity is negligible. Since there is no change in the direction of the fluid, several axial stages can be used to increase power output. A Kaplan turbine is an example of an axial flow turbine. Radial flow turbomachines - When the path of the through-flow is wholly or mainly in a plane perpendicular to the rotation axis, the device is termed a radial flow turbomachine. [ 9 ] Therefore, the change of radius between the entry and the exit is finite. A radial turbomachine can be of the inward or outward flow type, depending on the purpose that needs to be served. The outward flow type increases the energy level of the fluid, and vice versa. Due to the continuous change in direction, several radial stages are generally not used. A centrifugal pump is an example of a radial flow turbomachine. Mixed flow turbomachines - When axial and radial flow are both present and neither is negligible, the device is termed a mixed flow turbomachine. [ 9 ] It combines flow and force components of both radial and axial types. A Francis turbine is an example of a mixed-flow turbine. Turbomachines can finally be classified on the basis of the relative magnitude of the pressure changes that take place across a stage: [ 2 ] [ 5 ] Impulse turbomachines operate by accelerating and changing the flow direction of the fluid through a stationary nozzle (the stator blade) onto the rotor blade. The nozzle serves to change the incoming pressure into velocity; the enthalpy of the fluid decreases as the velocity increases. The pressure and enthalpy drop over the rotor blades is minimal, and velocity decreases over the rotor. [ 1 ] [ 9 ] Newton's second law describes the transfer of energy. Impulse turbomachines do not require a pressure casement around the rotor, since the fluid jet is created by the nozzle prior to reaching the blading on the rotor. A Pelton wheel is an impulse design. Reaction turbomachines operate by reacting to the flow of fluid through aerofoil-shaped rotor and stator blades. The velocity of the fluid through the sets of blades increases slightly (as with a nozzle) as it passes from rotor to stator and vice versa. The velocity of the fluid then decreases again once it has passed between the gap. Pressure and enthalpy consistently decrease through the sets of blades.
[ 1 ] Newton's third law describes the transfer of energy for reaction turbines. A pressure casement is needed to contain the working fluid. For compressible working fluids, multiple turbine stages are usually used to harness the expanding gas efficiently. Most turbomachines use a combination of impulse and reaction in their design, often with impulse and reaction parts on the same blade. Dimensionless ratios are often used for the characterisation of fluid machines, since they allow a comparison of flow machines with different dimensions and boundary conditions. Hydro electric - Hydro-electric turbomachinery uses the potential energy stored in water, which flows over an open impeller to turn a generator that creates electricity. Steam turbines - Steam turbines used in power generation come in many different variations. The overall principle is that high pressure steam is forced over blades attached to a shaft, which turns a generator. As the steam travels through the turbine, it passes through smaller blades, causing the shaft to spin faster and create more electricity. Gas turbines - Gas turbines work much like steam turbines. Air is forced in through a series of blades that turn a shaft. Then fuel is mixed with the air and causes a combustion reaction, increasing the power. This then causes the shaft to spin faster, creating more electricity. Windmills - Also known as wind turbines , windmills are increasing in popularity for their ability to efficiently use the wind to generate electricity. Although they come in many shapes and sizes, the most common one is the large three-blade design. The blades work on the same principle as an airplane wing . As wind passes over the blades, it creates areas of low and high pressure, causing the blade to move, spinning a shaft and creating electricity. A wind turbine is most like a steam turbine, but works with an effectively unlimited supply of wind. Steam turbine - Steam turbines in marine applications are very similar to those in power generation. The few differences between them are size and power output. Steam turbines on ships are much smaller because they do not need to power a whole town. They are not very common because of their high initial cost, high specific fuel consumption, and the expensive machinery that goes with them. Gas turbines - Gas turbines in marine applications are becoming more popular due to their smaller size, increased efficiency, and ability to burn cleaner fuels. They run just like gas turbines for power generation, but are also much smaller and require more machinery for propulsion. They are most popular in naval ships, as they can go from a dead stop to full power in minutes (Kayadelen, 2013), and are much smaller for a given amount of power. Water jet - Essentially, a water jet drive is like an aircraft turbojet, with the difference that the operating fluid is water instead of air. [ 10 ] Water jets are best suited to fast vessels and are thus often used by the military. Water jet propulsion has many advantages over other forms of marine propulsion, such as stern drives , outboard motors , shafted propellers and surface drives . [ 11 ] Turbochargers - Turbochargers are one of the most popular turbomachines. They are used mainly for adding power to engines by adding more air. A turbocharger combines both forms of turbomachine: exhaust gases from the engine spin a bladed wheel, much like a turbine, and that wheel then spins another bladed wheel, sucking in and compressing outside air into the engine.
Superchargers - Superchargers are also used for engine-power enhancement, but work only on the principle of compression. They use mechanical power from the engine to spin a screw, vane, or similar element that draws in and compresses air into the engine. Pumps - Pumps are another very popular turbomachine. Although there are many different types of pumps, they all do the same thing: they move fluids around using some form of mechanical power, from electric motors to full-size diesel engines. Pumps have thousands of uses and are the true basis of turbomachinery (Škorpík, 2017). Air compressors - Air compressors are another very popular turbomachine. They work on the principle of compression by drawing in and compressing air into a holding tank. Air compressors are among the most basic turbomachines. Fans - Fans are the most general type of turbomachine. Gas turbines - Aerospace gas turbines, more commonly known as jet engines, are the most common gas turbines. Turbopumps - Rocket engines require very high propellant pressures and mass flow rates, meaning their pumps require a lot of power. One of the most common solutions to this problem is to use a turbopump that extracts energy from an energetic fluid flow. The source of this energetic fluid flow can be one or a combination of many things, including the decomposition of hydrogen peroxide, the combustion of a portion of the propellants, or the heating of cryogenic propellants run through coolant jackets in the combustion chamber's walls. Many types of dynamic continuous-flow turbomachinery exist; below is a partial list of these types. What is notable about these turbomachines is that the same fundamentals apply to all of them. There are certainly significant differences between these machines and between the types of analysis typically applied to specific cases, but they are unified by the same underlying physics of fluid dynamics, gas dynamics, aerodynamics, hydrodynamics, and thermodynamics.
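As a concrete illustration of the dimensionless ratios mentioned earlier, the following minimal Python sketch computes three widely used coefficients for a pump-like machine: the flow coefficient, the head coefficient, and the dimensionless specific speed. The formulas are the standard textbook definitions; the operating-point numbers are hypothetical and chosen only for illustration.

```python
import math

def flow_coefficient(Q, omega, D):
    """Flow coefficient phi = Q / (omega * D^3)."""
    return Q / (omega * D**3)

def head_coefficient(g, H, omega, D):
    """Head coefficient psi = g * H / (omega^2 * D^2)."""
    return g * H / (omega**2 * D**2)

def specific_speed(omega, Q, g, H):
    """Dimensionless specific speed N_s = omega * sqrt(Q) / (g*H)^(3/4)."""
    return omega * math.sqrt(Q) / (g * H) ** 0.75

# Hypothetical pump operating point
Q = 0.25                          # volumetric flow rate, m^3/s
H = 30.0                          # head, m
D = 0.3                           # rotor diameter, m
omega = 1450 * 2 * math.pi / 60   # shaft speed, rad/s (1450 rpm)
g = 9.81                          # gravitational acceleration, m/s^2

print(f"flow coefficient phi = {flow_coefficient(Q, omega, D):.3f}")
print(f"head coefficient psi = {head_coefficient(g, H, omega, D):.3f}")
print(f"specific speed   N_s = {specific_speed(omega, Q, g, H):.2f}")
```

Because these ratios are dimensionless, machines with similar specific speed tend to share geometry: low values favour radial designs such as centrifugal pumps, while high values favour axial designs.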
https://en.wikipedia.org/wiki/Turbomachinery
A turbo mixer , also known as a high-speed mixer or a tank mixer , is a type of industrial mixer used for blending raw materials (most commonly PVC compounds) into a free-flowing powder blend. It consists of a cylindrical tank with a mixing tool mounted at the bottom that typically operates at a peripheral speed of between 20 and 50 m/s, depending on the material to be blended. The material is heated inside the mixer by the mechanical energy transferred between the mixing tools and the material, which generates mutual impacts of the particles. [ 1 ] [ 2 ] During the mixing phase, the turbo mixer creates an axial vortex. The structure and position of the blades inside the mixer ensure homogeneous dispersion of the material. To avoid thermal degradation, the mixer is usually combined with a cooler that cools the dry blend down to a temperature of around 45-55 °C. Because heat transfer in the cooler is poor and the cooling time is proportional to the contact surface, the cooler is usually about three times larger than the mixer. Typical uses of the turbo mixer are the production of PVC (rigid or plasticized dry blend) and of other kinds of thermoplastic compounds (such as masterbatch, wood-plastic composites, additives and thermoplastic polymers). The largest high-speed mixer known on the market has a tank volume of 2500 litres, which corresponds to a PVC batch size of about 1160 kg, and is combined with an 8600 L horizontal cooler. Depending on the products being mixed, around 500 kg can additionally be introduced directly into the cooler mixer, giving a throughput of around 14 tonnes per hour. It was manufactured by the Italian company PROMIXON S.r.L. in 2014. [ 3 ] [ 4 ]
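The peripheral speed quoted above is simply the tip speed of the mixing tool, v = π · D · n, so the required shaft speed follows directly from the tool diameter. A minimal Python sketch of the arithmetic, using a hypothetical tool diameter:

```python
import math

def shaft_speed_rpm(tip_speed_m_s, tool_diameter_m):
    """Shaft speed (rpm) needed for a given tip (peripheral) speed.

    Tip speed v = pi * D * n, so n = v / (pi * D); converted to rpm.
    """
    n_rev_per_s = tip_speed_m_s / (math.pi * tool_diameter_m)
    return n_rev_per_s * 60.0

# Hypothetical mixing tool of 0.8 m diameter, swept over the 20-50 m/s range
D = 0.8
for v in (20.0, 35.0, 50.0):
    print(f"tip speed {v:4.1f} m/s -> {shaft_speed_rpm(v, D):7.1f} rpm")
```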
https://en.wikipedia.org/wiki/Turbomixer
In fluid dynamics , turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity . It contrasts with laminar flow , which occurs when a fluid flows in parallel layers with no disruption between those layers. [ 2 ] Turbulence is commonly observed in everyday phenomena such as surf , fast-flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. [ 3 ] [ 4 ] : 2 Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is commonly realized in low-viscosity fluids. In general terms, in turbulent flow, unsteady vortices of many sizes appear and interact with each other; consequently, drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number , the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. The physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. [ 5 ] The turbulence intensity affects many fields, for example fish ecology, [ 6 ] air pollution, [ 7 ] precipitation, [ 8 ] and climate change. [ 9 ] Turbulence is characterized by features such as irregularity, diffusivity, rotationality, and dissipation. Turbulent diffusion is usually described by a turbulent diffusion coefficient . This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson 's four-thirds power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Via the energy cascade , turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow . The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales, and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale ( wavenumber ). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. The integral time scale for a Lagrangian flow can be defined as

$T = \frac{1}{\langle u'^{2} \rangle} \int_0^{\infty} \langle u'(t)\, u'(t + \tau) \rangle \, d\tau$

where u ′ is the velocity fluctuation, and $\tau$ is the time lag between measurements.
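As a sketch of how the integral time scale just defined can be estimated in practice, the following Python snippet computes the autocorrelation of a velocity-fluctuation record and integrates it up to its first zero crossing. The signal here is synthetic noise with a built-in correlation time, used purely for illustration; with measured data the same estimator applies.

```python
import numpy as np

def integral_time_scale(u, dt, max_lag=None):
    """Estimate the integral time scale of a sampled velocity record.

    T = (1 / <u'^2>) * integral_0^inf <u'(t) u'(t+tau)> dtau,
    integrated here up to the first zero crossing of the autocorrelation.
    """
    u = u - u.mean()                    # velocity fluctuation u'
    n = len(u)
    if max_lag is None:
        max_lag = n // 10
    acov = np.array([np.mean(u[:n - k] * u[k:]) for k in range(max_lag)])
    rho = acov / acov[0]                # autocorrelation coefficient
    zero = np.argmax(rho <= 0.0)        # index of first zero crossing (0 if none)
    if zero == 0:
        zero = len(rho)
    return np.trapz(rho[:zero], dx=dt)

# Synthetic fluctuation with ~0.5 s correlation time (first-order AR process)
dt, tau_true = 0.01, 0.5
rng = np.random.default_rng(0)
a = np.exp(-dt / tau_true)
u = np.zeros(10_000)
for i in range(1, len(u)):
    u[i] = a * u[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()

print(f"estimated integral time scale: {integral_time_scale(u, dt):.3f} s")
```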
[ 18 ] Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space, so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson ) and the concept of self-similarity . As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken, so the statistical description has since been modified. [ 19 ] A complete description of turbulence is one of the unsolved problems in physics . According to an apocryphal story, Werner Heisenberg was asked what he would ask God , given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity ? And why turbulence? I really believe he will have an answer for the first." [ 20 ] [ a ] A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science : "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." [ 21 ] [ 22 ] The onset of turbulence can be, to some extent, predicted by the Reynolds number , which is the ratio of inertial forces to viscous forces within a fluid that is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher-velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which, as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. [ 23 ] This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in the scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft and its full-size version. Such scaling is not always linear, and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this, the dimensionless Reynolds number ( Re ) is used as a guide. With respect to laminar and turbulent flow regimes, the Reynolds number is defined as [ 24 ]

$\mathrm{Re} = \frac{\rho\, u\, L}{\mu}$

where ρ is the density of the fluid, u is a characteristic flow velocity, L is a characteristic linear dimension, and μ is the dynamic viscosity of the fluid. While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar.
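A minimal Python sketch of the definition just given: it computes Re = ρuL/μ and applies the rough pipe-flow thresholds discussed in the text below. The thresholds are indicative only, since the transition depends on the geometry and on disturbances in the flow.

```python
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * u * L / mu (dimensionless)."""
    return density * velocity * length / dynamic_viscosity

def classify(re, laminar_limit=2040.0, turbulent_limit=4000.0):
    """Rough pipe-flow classification using the critical values cited in the text."""
    if re < laminar_limit:
        return "likely laminar"
    if re < turbulent_limit:
        return "transitional (intermittently turbulent)"
    return "likely turbulent"

# Water at ~20 degC flowing at 1 m/s through a 25 mm pipe
re = reynolds_number(density=998.0, velocity=1.0, length=0.025,
                     dynamic_viscosity=1.0e-3)
print(f"Re = {re:.0f}: {classify(re)}")   # Re ~ 25000: likely turbulent
```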
In Poiseuille flow , for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; [ 25 ] moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them, thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one were able to locate a specific point in the fluid and measure the actual flow velocity v = ( v x , v y ) of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value:

$v_x = \bar{v}_x + v'_x \qquad \text{and} \qquad v_y = \bar{v}_y + v'_y$

and similarly for temperature ( T = T̄ + T′ ) and pressure ( P = P̄ + P′ ), where the primed quantities denote fluctuations superposed on the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress τ ) in the direction normal to the flow for a given time are

$q = \overline{v'_y\, \rho\, c_P\, T'} = -k_{\text{turb}} \frac{\partial \bar{T}}{\partial y}, \qquad \tau = -\rho\, \overline{v'_y\, v'_x} = \mu_{\text{turb}} \frac{\partial \bar{v}_x}{\partial y}$

where c P is the heat capacity at constant pressure, ρ is the density of the fluid, μ turb is the coefficient of turbulent viscosity and k turb is the turbulent thermal conductivity . [ 4 ] Richardson's notion of turbulence was that a turbulent flow is composed of "eddies" of different sizes. The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up, giving rise to smaller eddies, and the kinetic energy of the initial large eddy is divided among the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers , the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction can be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as L ). Kolmogorov's idea was that in Richardson's energy cascade this geometrical and directional information is lost as the scale is reduced, so that the statistics of the small scales have a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high.
Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity ν and the rate of energy dissipation ε . With only these two parameters, the unique length that can be formed by dimensional analysis is

$\eta = \left( \frac{\nu^{3}}{\varepsilon} \right)^{1/4}.$

This is today known as the Kolmogorov length scale (see Kolmogorov microscales ). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of the Kolmogorov length η , while the input of energy into the cascade comes from the decay of the large scales, of order L . These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length r ) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. η ≪ r ≪ L ). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called the "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds numbers the statistics of scales in the range η ≪ r ≪ L are universally and uniquely determined by the scale r and the rate of energy dissipation ε . The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function E ( k ) , where k is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field u ( x ) , where û ( k ) is the Fourier transform of the flow velocity field. Thus, E ( k ) d k represents the contribution to the kinetic energy from all the Fourier modes with k < | k | < k + d k , and therefore

$\tfrac{1}{2} \langle u_i u_i \rangle = \int_0^{\infty} E(k)\, dk$

where ⁠ 1 / 2 ⁠ ⟨ u i u i ⟩ is the mean turbulent kinetic energy of the flow. The wavenumber k corresponding to length scale r is k = ⁠ 2π / r ⁠ . Therefore, by dimensional analysis, the only possible form for the energy spectrum function according to the third Kolmogorov hypothesis is

$E(k) = K_0\, \varepsilon^{2/3}\, k^{-5/3}$

where $K_0 \approx 1.5$ would be a universal constant. This is one of the most famous results of the Kolmogorov 1941 theory, [ 26 ] describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, [ 27 ] and considerable experimental evidence has since accumulated that supports it. [ 28 ] Outside of the inertial range, other formulas apply. [ 29 ] In spite of this success, Kolmogorov theory is at present under revision. The theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range.
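A minimal Python sketch of the dimensional relations above: it computes the Kolmogorov length scale η = (ν³/ε)^(1/4) and evaluates the model spectrum E(k) = K₀ ε^(2/3) k^(−5/3) at a few wavenumbers in the inertial range. The input values for ν and ε are hypothetical.

```python
import numpy as np

def kolmogorov_length(nu, eps):
    """Kolmogorov length scale eta = (nu^3 / eps)^(1/4)."""
    return (nu**3 / eps) ** 0.25

def kolmogorov_spectrum(k, eps, K0=1.5):
    """Inertial-range energy spectrum E(k) = K0 * eps^(2/3) * k^(-5/3)."""
    return K0 * eps ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

nu = 1.0e-6    # kinematic viscosity of water, m^2/s
eps = 1.0e-3   # energy dissipation rate, m^2/s^3 (hypothetical)

eta = kolmogorov_length(nu, eps)
print(f"Kolmogorov length scale: {eta * 1e6:.0f} micrometres")   # ~178 um

# Evaluate E(k) across a slice of the inertial range (1/L << k << 1/eta)
for k in np.logspace(1, 3, 3):
    print(f"E(k = {k:7.1f} 1/m) = {kolmogorov_spectrum(k, eps):.3e} m^3/s^2")
```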
A usual way of studying turbulent flow velocity fields is by means of flow velocity increments:

$\delta \mathbf{u}(r) = \mathbf{u}(\mathbf{x} + \mathbf{r}) - \mathbf{u}(\mathbf{x}),$

that is, the difference in flow velocity between points separated by a vector r (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of r ). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation r when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent β , so that when r is scaled by a factor λ , the increment $\delta \mathbf{u}(\lambda r)$ should have the same statistical distribution as $\lambda^{\beta}\, \delta \mathbf{u}(r)$, with β independent of the scale r . From this fact, and other results of the Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as

$\langle (\delta \mathbf{u}(r))^{n} \rangle = C_n\, (\varepsilon r)^{n/3}$

where the brackets denote the statistical average, and the C n would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the ⁠ n / 3 ⁠ value predicted by the theory, becoming a non-linear function of the order n of the structure function. The universality of the constants has also been questioned. For low orders the discrepancy with the Kolmogorov ⁠ n / 3 ⁠ value is very small, which explains the success of Kolmogorov theory with regard to low-order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law $E(k) \propto k^{-p}$ with 1 < p < 3 , the second-order structure function also follows a power law, of the form

$\langle (\delta \mathbf{u}(r))^{2} \rangle \propto r^{\,p-1}.$

Since the experimental values obtained for the second-order structure function only deviate slightly from the ⁠ 2 / 3 ⁠ value predicted by Kolmogorov theory, the value for p is very near to ⁠ 5 / 3 ⁠ (differences are about 2% [ 30 ] ). Thus the "Kolmogorov − ⁠ 5 / 3 ⁠ spectrum" is generally observed in turbulence. However, for high-order structure functions, the difference from the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. This behavior, and the lack of universality of the C n constants, are related to the phenomenon of intermittency in turbulence and can be related to the non-trivial scaling behavior of the dissipation rate averaged over scale r . [ 31 ] This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is universal in the inertial range, and how to deduce intermittency properties from the Navier-Stokes equations, i.e. from first principles.
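The following Python sketch shows, on synthetic data, how the second-order structure function described above is estimated from a sampled one-dimensional velocity record: compute increments at a range of separations and average their squares. With real turbulence data, the log-log slope in the inertial range would be compared against the Kolmogorov value of 2/3; the Brownian-like test signal used here is an illustration only and has slope near 1.

```python
import numpy as np

def second_order_structure_function(u, dx, separations):
    """S2(r) = <(u(x + r) - u(x))^2> for a 1-D record sampled at spacing dx."""
    s2 = []
    for r in separations:
        lag = int(round(r / dx))
        du = u[lag:] - u[:-lag]          # velocity increments at separation r
        s2.append(np.mean(du**2))
    return np.array(s2)

# Synthetic record standing in for a measured velocity signal
rng = np.random.default_rng(1)
dx = 1.0e-3
u = np.cumsum(rng.standard_normal(200_000)) * np.sqrt(dx)  # Brownian-like signal

rs = np.array([0.002, 0.004, 0.008, 0.016, 0.032])
s2 = second_order_structure_function(u, dx, rs)

# Estimate the scaling exponent zeta_2 from the log-log slope
zeta2 = np.polyfit(np.log(rs), np.log(s2), 1)[0]
print(f"estimated zeta_2 = {zeta2:.2f} (Brownian signal ~1; K41 predicts 2/3)")
```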
https://en.wikipedia.org/wiki/Turbulence
The turbulent Prandtl number ( Pr t ) is a non-dimensional term defined as the ratio between the momentum eddy diffusivity and the heat transfer eddy diffusivity. It is useful for solving the heat transfer problem of turbulent boundary layer flows. The simplest model for Pr t is the Reynolds analogy , which yields a turbulent Prandtl number of 1. From experimental data, Pr t has an average value of 0.85, but ranges from 0.7 to 0.9 depending on the Prandtl number of the fluid in question. The introduction of eddy diffusivity, and subsequently the turbulent Prandtl number, provides a simple relationship between the extra shear stress and heat flux that are present in turbulent flow. If the momentum and thermal eddy diffusivities are zero (no apparent turbulent shear stress and heat flux), then the turbulent flow equations reduce to the laminar equations. We can define the eddy diffusivities for momentum transfer $\varepsilon_M$ and heat transfer $\varepsilon_H$ as

$-\overline{u'v'} = \varepsilon_M \frac{\partial \bar{u}}{\partial y} \qquad \text{and} \qquad -\overline{v'T'} = \varepsilon_H \frac{\partial \bar{T}}{\partial y}$

where $-\overline{u'v'}$ is the apparent turbulent shear stress and $-\overline{v'T'}$ is the apparent turbulent heat flux. The turbulent Prandtl number is then defined as

$\mathrm{Pr}_{t} = \frac{\varepsilon_M}{\varepsilon_H}.$

The turbulent Prandtl number has been shown to not generally equal unity (e.g. Malhotra and Kang, 1984; Kays, 1994; McEligot and Taylor, 1996; and Churchill, 2002). It is a strong function of the molecular Prandtl number, amongst other parameters, and the Reynolds analogy is not applicable when the molecular Prandtl number differs significantly from unity, as determined by Malhotra and Kang [ 1 ] and elaborated by McEligot and Taylor [ 2 ] and Churchill. [ 3 ] The turbulent momentum boundary layer equation is

$\bar{u} \frac{\partial \bar{u}}{\partial x} + \bar{v} \frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho} \frac{d\bar{P}}{dx} + \frac{\partial}{\partial y} \left( \nu \frac{\partial \bar{u}}{\partial y} - \overline{u'v'} \right)$

and the turbulent thermal boundary layer equation is

$\bar{u} \frac{\partial \bar{T}}{\partial x} + \bar{v} \frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y} \left( \alpha \frac{\partial \bar{T}}{\partial y} - \overline{v'T'} \right).$

Substituting the eddy diffusivities into the momentum and thermal equations yields

$\bar{u} \frac{\partial \bar{u}}{\partial x} + \bar{v} \frac{\partial \bar{u}}{\partial y} = -\frac{1}{\rho} \frac{d\bar{P}}{dx} + \frac{\partial}{\partial y} \left[ (\nu + \varepsilon_M) \frac{\partial \bar{u}}{\partial y} \right]$

and

$\bar{u} \frac{\partial \bar{T}}{\partial x} + \bar{v} \frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y} \left[ (\alpha + \varepsilon_H) \frac{\partial \bar{T}}{\partial y} \right].$

Substituting into the thermal equation using the definition of the turbulent Prandtl number gives

$\bar{u} \frac{\partial \bar{T}}{\partial x} + \bar{v} \frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y} \left[ \left( \alpha + \frac{\varepsilon_M}{\mathrm{Pr}_{t}} \right) \frac{\partial \bar{T}}{\partial y} \right].$

In the special case where the Prandtl number and turbulent Prandtl number both equal unity (as in the Reynolds analogy ), the velocity profile and temperature profile are identical. This greatly simplifies the solution of the heat transfer problem. If the Prandtl number and turbulent Prandtl number are different from unity, then a solution is possible by knowing the turbulent Prandtl number, so that one can still solve the momentum and thermal equations. In the general case of three-dimensional turbulence, the concepts of eddy viscosity and eddy diffusivity are not valid. Consequently, the turbulent Prandtl number has no meaning. [ 4 ]
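A minimal Python sketch of how the definition above is used in practice: given a modelled momentum eddy diffusivity ε_M and an assumed Pr t, the effective diffusivity appearing in the mean temperature equation is α + ε_M/Pr t. All numerical values are hypothetical.

```python
def effective_thermal_diffusivity(alpha, eps_m, pr_t=0.85):
    """Effective diffusivity alpha + eps_M / Pr_t used in the mean-T equation."""
    return alpha + eps_m / pr_t

# Hypothetical values: air-like molecular diffusivity, modelled eddy diffusivity
alpha = 2.2e-5   # molecular thermal diffusivity, m^2/s
eps_m = 1.0e-3   # momentum eddy diffusivity from a turbulence model, m^2/s

for pr_t in (0.7, 0.85, 0.9, 1.0):
    a_eff = effective_thermal_diffusivity(alpha, eps_m, pr_t)
    print(f"Pr_t = {pr_t:4.2f} -> effective diffusivity {a_eff:.3e} m^2/s")
```

Note that far from a wall the eddy contribution dominates the molecular one, so the assumed value of Pr t directly scales the predicted turbulent heat flux.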
https://en.wikipedia.org/wiki/Turbulent_Prandtl_number
Turbulent diffusion is the transport of mass, heat, or momentum within a system due to random and chaotic time-dependent motions. [ 1 ] It occurs when turbulent fluid systems reach critical conditions in response to shear flow , which results from a combination of steep concentration gradients, density gradients, and high velocities. It occurs much more rapidly than molecular diffusion and is therefore extremely important for problems concerning mixing and transport in systems dealing with combustion , contaminants , dissolved oxygen, and solutions in industry. In these fields, turbulent diffusion acts as an excellent process for quickly reducing the concentrations of a species in a fluid or environment, in cases where this is needed for rapid mixing during processing, or rapid pollutant or contaminant reduction for safety. However, it has been extremely difficult to develop a concrete and fully functional model that can be applied to the diffusion of a species in all turbulent systems, due to the inability to characterize both an instantaneous and a predicted fluid velocity simultaneously. In turbulent flow, this is a result of several characteristics, such as unpredictability, rapid diffusivity, high levels of fluctuating vorticity, and dissipation of kinetic energy. [ 2 ] Atmospheric dispersion, [ 3 ] or diffusion, studies how pollutants are mixed in the environment. There are many factors included in this modeling process, such as the level of the atmosphere in which the mixing takes place, the stability of the environment, and the type of contaminant and source being mixed. The Eulerian and Lagrangian models (discussed below) have both been used to simulate atmospheric diffusion, and are important for a proper understanding of how pollutants react and mix in different environments. Both of these models take into account both vertical and horizontal wind, and additionally integrate Fickian diffusion theory to account for turbulence. While these methods have to use ideal conditions and make numerous assumptions, at this point in time it is difficult to calculate the effects of turbulent diffusion on pollutants any more accurately. Fickian diffusion theory and further advancements in research on atmospheric diffusion can be applied to model the effects that current emission rates of pollutants from various sources have on the atmosphere. [ 4 ] Using planar laser-induced fluorescence (PLIF) and particle image velocimetry (PIV) processes, there has been ongoing research on the effects of turbulent diffusion in flames. Main areas of study include combustion systems in gas burners used for power generation and chemical reactions in jet diffusion flames involving methane (CH 4 ), hydrogen (H 2 ) and nitrogen (N 2 ). [ 5 ] Additionally, double-pulse Rayleigh temperature imaging has been used to correlate extinction and ignition sites with changes in temperature and the mixing of chemicals in flames. [ 6 ] The Eulerian approach to turbulent diffusion focuses on an infinitesimal volume at a specific point in space and time in a fixed frame of reference, at which physical properties such as mass, momentum, and temperature are measured. [ 7 ] The model is useful because Eulerian statistics are consistently measurable and offer great application to chemical reactions.
Similarly to molecular models, it must satisfy the same principles as the continuity equation below (where the advection of an element or species is balanced by its diffusion, generation by reaction, and addition from other sources or points) and the Navier–Stokes equations:

$\frac{\partial c_i}{\partial t} + \frac{\partial}{\partial x_j}(u_j c_i) = D_i \frac{\partial^2 c_i}{\partial x_j \partial x_j} + R_i(c_1, \ldots, c_N, T) + S_i(\mathbf{x}, t), \qquad i = 1, 2, \ldots, N$

where $c_i$ is the species concentration of interest, $u_j$ is the velocity, t is time, $x_j$ is the direction, $D_i$ is the molecular diffusion constant, $R_i$ is the rate of $c_i$ generated by reaction, and $S_i$ is the rate of $c_i$ generated by sources. [ 8 ] Note that $c_i$ is concentration per unit volume, not a mixing ratio ( kg/kg ) in a background fluid. If we consider an inert species (no reaction) with no sources and assume molecular diffusion to be negligible, only the advection terms on the left-hand side of the equation survive. The solution to this model seems trivial at first; however, we have ignored the fact that the velocity is composed of an average plus a random component, $u_j = \bar{u}_j + u_j'$, as is typical of turbulent behavior. In turn, the concentration solution for the Eulerian model must also have a random component, $c_j = \bar{c} + c_j'$. This results in a closure problem of infinite variables and equations and makes it impossible to solve for a definite $c_i$ under the stated assumptions. [ 9 ] Fortunately, a closure approximation exists in introducing the concept of eddy diffusivity and its statistical approximations for the random concentration and velocity components produced by turbulent mixing:

$\langle u_j' c' \rangle = -K_{jj} \frac{\partial \langle c \rangle}{\partial x_j}$

where $K_{jj}$ is the eddy diffusivity. [ 8 ] Substituting into the first continuity equation and ignoring reactions, sources, and molecular diffusion results in the following differential equation, considering only the turbulent diffusion approximation in eddy diffusion:

$\frac{\partial \langle c \rangle}{\partial t} + \bar{u}_j \frac{\partial \langle c \rangle}{\partial x_j} = \frac{\partial}{\partial x_j} \left( K_{jj} \frac{\partial \langle c \rangle}{\partial x_j} \right)$

Unlike the molecular diffusion constant D, the eddy diffusivity is a matrix expression that may vary in space, and thus may not be taken outside the outer derivative.
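As an illustrative sketch (not from the article), the eddy-diffusion equation above can be integrated numerically. The snippet below advances a one-dimensional version with constant mean velocity and constant eddy diffusivity, using an explicit upwind/central finite-difference scheme on a periodic grid; grid sizes and coefficients are hypothetical.

```python
import numpy as np

def step(c, u_bar, K, dx, dt):
    """One explicit time step of dc/dt + u_bar * dc/dx = K * d2c/dx2 (1-D, periodic)."""
    adv = -u_bar * (c - np.roll(c, 1)) / dx                      # upwind advection (u_bar > 0)
    dif = K * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # central diffusion
    return c + dt * (adv + dif)

# Hypothetical setup: periodic domain with an initial concentration blob
nx, dx = 200, 1.0            # grid
u_bar, K = 1.0, 5.0          # mean wind (m/s) and eddy diffusivity (m^2/s)
dt = 0.4 * min(dx / u_bar, dx**2 / (2 * K))   # stable step for both terms

x = np.arange(nx) * dx
c = np.exp(-((x - 50.0) / 5.0) ** 2)          # initial Gaussian blob

for _ in range(500):
    c = step(c, u_bar, K, dx, dt)

print(f"peak concentration after transport: {c.max():.3f} at x = {x[c.argmax()]:.0f} m")
```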
The Lagrangian model of turbulent diffusion uses a moving frame of reference to follow the trajectories and displacements of the species as they move, and follows the statistics of each particle individually. [ 7 ] Initially, the particle sits at a location $\mathbf{x}' = (x_1, x_2, x_3)$ at time t ′. The motion of the particle is described by its probability of existing in a specific volume element at time t , described by $\psi(x_1, x_2, x_3, t)\, dx_1\, dx_2\, dx_3 = \psi(\mathbf{x}, t)\, d\mathbf{x}$, which follows the probability density function (pdf) such that

$\psi(\mathbf{x}, t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q(\mathbf{x}, t \mid \mathbf{x}', t')\, \psi(\mathbf{x}', t')\, d\mathbf{x}'$

where Q is the probability density for particle transition. The concentration of particles at a location x and time t can then be calculated by summing the probabilities of the number of particles observed, as follows:

$\langle c(\mathbf{x}, t) \rangle = \sum_{i=1}^{m} \psi_i(\mathbf{x}, t)$

which is then evaluated by returning to the pdf integral [ 8 ]

$\langle c(\mathbf{x}, t) \rangle = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Q(\mathbf{x}, t \mid \mathbf{x}_0, t_0)\, \langle c(\mathbf{x}_0, t_0) \rangle\, d\mathbf{x}_0 + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{t_0}^{t} Q(\mathbf{x}, t \mid \mathbf{x}', t')\, S(\mathbf{x}', t')\, dt'\, d\mathbf{x}'.$

Thus, this approach is used to evaluate the position and velocity of particles relative to their neighbors and environment, and approximates the random concentrations and velocities associated with turbulent diffusion in the statistics of their motion. Solving the final equations listed above for both the Eulerian and Lagrangian models for analyzing the statistics of species in turbulent flow results in very similar expressions for calculating the average concentration at a location from a continuous source. Both solutions develop a Gaussian plume and are virtually identical under the assumption that the variances in the x, y, z directions are related to the eddy diffusivity:

$\langle c(x, y, z) \rangle = \frac{q}{2\pi \sigma_y \sigma_z \bar{u}} \exp \left[ -\left( \frac{y^2}{2\sigma_y^2} + \frac{z^2}{2\sigma_z^2} \right) \right]$

where

$\sigma_y^2 = \frac{2 K_{yy}\, x}{\bar{u}}, \qquad \sigma_z^2 = \frac{2 K_{zz}\, x}{\bar{u}},$

q is the species emission rate, $\bar{u}$ is the wind speed, and $\sigma_i^2$ is the variance in the i direction. [ 8 ] Under various external conditions such as directional flow speed (wind) and environmental conditions, the variances and diffusivities of turbulent diffusion are measured and used to calculate a good estimate of concentrations at a specific point from a source. This model is very useful in atmospheric sciences, especially when dealing with concentrations of contaminants in air pollution that emanate from sources such as combustion stacks, rivers, or strings of automobiles on a road.
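A minimal Python sketch of the Gaussian plume expression above, using the eddy-diffusivity-based variances σ² = 2Kx/ū. Source strength, wind speed, and diffusivities are hypothetical.

```python
import math

def gaussian_plume(x, y, z, q, u_bar, k_yy, k_zz):
    """Mean concentration downwind of a continuous point source.

    <c> = q / (2 pi sigma_y sigma_z u_bar)
          * exp(-(y^2 / (2 sigma_y^2) + z^2 / (2 sigma_z^2))),
    with sigma_i^2 = 2 K_ii x / u_bar.
    """
    sigma_y = math.sqrt(2.0 * k_yy * x / u_bar)
    sigma_z = math.sqrt(2.0 * k_zz * x / u_bar)
    norm = q / (2.0 * math.pi * sigma_y * sigma_z * u_bar)
    return norm * math.exp(-(y**2 / (2 * sigma_y**2) + z**2 / (2 * sigma_z**2)))

# Hypothetical stack emitting 1 kg/s into a 5 m/s wind
q, u_bar = 1.0, 5.0        # kg/s, m/s
k_yy, k_zz = 10.0, 2.0     # eddy diffusivities, m^2/s

for x in (100.0, 500.0, 1000.0):   # receptors on the plume centerline
    c = gaussian_plume(x, y=0.0, z=0.0, q=q, u_bar=u_bar, k_yy=k_yy, k_zz=k_zz)
    print(f"x = {x:6.0f} m: <c> = {c:.2e} kg/m^3")
```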
[ 2 ] Because applying mathematical equations to turbulent flow and diffusion is so difficult, research in this area was lacking until recently. In the past, laboratory efforts used data from steady flow in streams, or from fluids with a high Reynolds number flowing through pipes, but it is difficult to obtain accurate data from these methods, because they involve ideal flow, which cannot simulate the conditions of turbulent flow necessary for developing turbulent diffusion models. With the advancement of computer-aided modeling and programming, scientists have been able to simulate turbulent flow in order to better understand turbulent diffusion in the atmosphere and in fluids. Two main non-intrusive techniques are currently in use in research efforts. The first is planar laser-induced fluorescence (PLIF), which is used to detect instantaneous concentrations at up to one million points per second. This technology can be paired with particle image velocimetry (PIV), which detects instantaneous velocity data. In addition to finding concentration and velocity data, these techniques can be used to deduce spatial correlations and changes in the environment. As technology and computing abilities rapidly expand, these methods will also improve greatly and will more than likely be at the forefront of future research on modeling turbulent diffusion. [ 10 ] Aside from these efforts, there were also advances in the fieldwork used before computers became available. Real-time monitoring of turbulence, velocity and currents for fluid mixing is now possible. This research has proved important for studying the mixing cycles of contaminants in turbulent flows, especially for drinking water supplies. As research techniques and their availability increase, many new areas are showing interest in utilizing these methods. Studying how robots or computers can detect odors and contaminants in a turbulent flow is one area likely to attract considerable research interest. Such studies could support recent research on placing sensors in aircraft cabins to effectively detect biological weapons and viruses.
https://en.wikipedia.org/wiki/Turbulent_diffusion
Turdus Solitarius ( Latin for solitary thrush ) was a constellation created by the French astronomer Pierre Charles Le Monnier in 1776 from stars of Hydra 's tail. It was named after the Rodrigues solitaire , an extinct flightless bird that was endemic to the island of Rodrigues, east of Madagascar in the Indian Ocean. [ 1 ] It was replaced by another constellation, Noctua (the Owl), in A Celestial Atlas (1822) by the British amateur astronomer Alexander Jamieson , but neither was adopted by the International Astronomical Union among its 88 recognized constellations . The IAU Working Group on Star Names approved the name Solitaire for the star E Hydrae in 2024, after the obsolete constellation. [ 2 ]
https://en.wikipedia.org/wiki/Turdus_Solitarius
Ahmet Turgay Uzer is a Turkish-born American theoretical physicist and nature photographer . He is Regents' Professor Emeritus at the Georgia Institute of Technology , succeeding Joseph Ford . He has contributed significantly to the fields of atomic and molecular physics, nonlinear dynamics and chaos. [ 1 ] His research on the interplay between quantum dynamics and classical mechanics , in the context of chaos, is considered novel in molecular and theoretical physics and chemistry. Turgay Uzer completed his bachelor's degree at Turkey's Middle East Technical University . According to the Harvard University Library, [ 2 ] his doctoral thesis was entitled "Photon and electron interactions with diatomic molecules." He defended his dissertation and graduated from Harvard University in 1979. Before joining Georgia Tech in 1985 as an associate professor, he worked as a research fellow at the University of Oxford (1979-81) and Caltech (1982-83), and as a research associate at the University of Colorado (1983-85). Currently, he is a faculty member of the Center for Nonlinear Science and a full professor of physics at Georgia Tech . His research areas are broad, with a focus on the dynamics of intramolecular energy transfer, reaction dynamics, quantal manifestations of classical mechanics, quantization of nonlinear systems, computational physics, molecular physics, and applied mathematics. Uzer was an Alexander von Humboldt Foundation Fellow in 1993-1994 at the Max Planck Institute , Munich. Uzer is of Turkish origin and was awarded the Science Award for his contributions to physics from the Scientific and Technological Research Council (TÜBİTAK) [1] in 1998. Uzer has published more than 80 refereed journal articles in a number of highly respected scientific journals.
https://en.wikipedia.org/wiki/Turgay_Uzer
Turgor pressure is the force within the cell that pushes the plasma membrane against the cell wall . [ 1 ] It is also called hydrostatic pressure , and is defined as the pressure in a fluid measured at a certain point within itself when at equilibrium. [ 2 ] Generally, turgor pressure is caused by the osmotic flow of water and occurs in plants , fungi , and bacteria . The phenomenon is also observed in protists that have cell walls. [ 3 ] This system is not seen in animal cells, as the absence of a cell wall would cause the cell to lyse when under too much pressure. [ 4 ] The pressure exerted by the osmotic flow of water is called turgidity. It is caused by the osmotic flow of water through a selectively permeable membrane . Movement of water through a semipermeable membrane from a volume with a low solute concentration to one with a higher solute concentration is called osmotic flow. In plants, this entails the water moving from the low-concentration solute outside the cell into the cell's vacuole . [ citation needed ] Osmosis is the process in which water flows from a volume with a low solute concentration (osmolarity), [ 5 ] to an adjacent region with a higher solute concentration, until equilibrium between the two areas is reached. [ 6 ] It is usually accompanied by a favorable increase in the entropy of the solvent. All cells are surrounded by a lipid bilayer cell membrane which permits the flow of water into and out of the cell while limiting the flow of solutes. When the cell is in a hypertonic solution, water flows out of the cell, which decreases the cell's volume. When in a hypotonic solution, water flows into the cell and increases its volume, while in an isotonic solution, water flows in and out of the cell at an equal rate. [ 4 ] Turgidity is the state in which the cell's membrane is pushed against the cell wall, that is, when turgor pressure is high. When the cell has low turgor pressure, it is flaccid; in plants, this is shown as wilted anatomical structures. This is more specifically known as plasmolysis. [ 7 ] The volume and geometry of the cell affect the value of turgor pressure and how it can affect the cell wall's plasticity. Studies have shown that smaller cells experience a stronger elastic change when compared to larger cells. [ 3 ] Turgor pressure also plays a key role in plant cell growth, when the cell wall undergoes irreversible expansion due to the force of turgor pressure as well as structural changes in the cell wall that alter its extensibility. [ 8 ] Turgor pressure within cells is regulated by osmosis, and this also causes the cell wall to expand during growth. Along with size, rigidity of the cell is also caused by turgor pressure; a lower pressure results in a wilted cell or plant structure (i.e. leaf, stalk). One mechanism in plants that regulates turgor pressure is the cell's semipermeable membrane, which allows only some solutes to travel in and out of the cell, maintaining a minimum pressure. Other mechanisms include transpiration , which results in water loss and decreases turgidity in cells. [ 9 ] Turgor pressure is also a large factor in nutrient transport throughout the plant. Cells of the same organism can have differing turgor pressures throughout the organism's structure. In vascular plants , turgor pressure is responsible for apical growth of features such as root tips [ 10 ] and pollen tubes . [ 11 ] Transport proteins that pump solutes into the cell can be regulated by cell turgor pressure.
Lower values allow for an increase in the pumping of solutes, which in turn increases osmotic pressure. This function is important as a plant response under drought conditions [ 12 ] (seeing as turgor pressure is maintained), and for cells which need to accumulate solutes (i.e. developing fruits ). [ 13 ] It has been recorded that the petals of Gentiana kochiana and Kalanchoe blossfeldiana bloom via volatile turgor pressure of cells on the plant's adaxial surface. [ 11 ] During processes like anther dehiscence , it has been observed that drying endothecium cells cause an outward bending force which leads to the release of pollen. This means that lower turgor pressures are observed in these structures because they are dehydrated. Pollen tubes are cells which elongate when pollen lands on the stigma , at the carpal tip. These cells undergo tip growth rather quickly due to increases in turgor pressure. Pollen tubes of lilies have a mean turgor pressure of 0.21 MPa when growing during this process. [ 14 ] In fruits such as Impatiens parviflora , Oxalis acetosella and Ecballium elaterium , turgor pressure is the method by which seeds are dispersed. [ 15 ] In Ecballium elaterium , or squirting cucumber, turgor pressure builds up in the fruit to the point that it aggressively detaches from the stalk, and seeds and water are squirted everywhere as the fruit falls to the ground. Turgor pressure within the fruit ranges from 0.003 to 1.0 MPa. [ 16 ] The action of turgor pressure on extensible cell walls is usually said to be the driving force of growth within the cell. [ 17 ] An increase of turgor pressure causes expansion of cells and extension of apical cells, pollen tubes, and other plant structures such as root tips. Cell expansion and an increase in turgor pressure are due to inward diffusion of water into the cell, and turgor pressure increases due to the increasing volume of vacuolar sap . A growing root cell's turgor pressure can be up to 0.6 MPa, which is over three times that of a car tire. Epidermal cells in a leaf can have pressures ranging from 1.5 to 2.0 MPa. [ 18 ] These high pressures can explain why plants can grow through asphalt and other hard surfaces. [ 17 ] Turgidity is observed in a cell where the cell membrane is pushed against the cell wall. In some plants, cell walls loosen at a faster rate than water can cross the membrane, which results in cells with lower turgor pressure. [ 3 ] Turgor pressure within the stomata regulates when the stomata open and close, which plays a role in transpiration rates of the plant. This is also important because this function regulates water loss within the plant. Lower turgor pressure can mean that the cell has a low water concentration, and closing the stomata helps to preserve water. High turgor pressure keeps the stomata open for the gas exchange necessary for photosynthesis. [ 9 ] It has been concluded that loss of turgor pressure within the leaves of Mimosa pudica is responsible for the plant's reaction when touched. Other factors such as changes in osmotic pressure, protoplasmic contraction and increases in cellular permeability have been observed to affect this response. It has also been recorded that turgor pressure is different in the upper and lower pulvinar cells of the plant, and that the movement of potassium and calcium ions throughout the cells causes the increase in turgor pressure.
When touched, the pulvinus is activated and exudes contractile proteins, which in turn increases turgor pressure and closes the leaves of the plant. [ 19 ] As stated earlier, turgor pressure can be found in organisms besides plants and can play a large role in the development, movement, and nature of said organisms. In fungi, turgor pressure has been observed as a large factor in substrate penetration. In species such as Saprolegnia ferax , Magnaporthe grisea and Aspergillus oryzae , immense turgor pressures have been observed in their hyphae . The study showed that they could penetrate substances like plant cells , and synthetic materials such as polyvinyl chloride . [ 20 ] In observations of this phenomenon, it is noted that invasive hyphal growth is due to turgor pressure, along with the coenzymes secreted by the fungi to invade said substrates. [ 21 ] Hyphal growth is directly related to turgor pressure, and growth slows as turgor pressure decreases. In Magnaporthe grisea , pressures of up to 8 MPa have been observed. [ 22 ] Some protists do not have cell walls and cannot experience turgor pressure. These few protists use their contractile vacuole to regulate the quantity of water within the cell. Protist cells avoid lysing in hypotonic solutions by utilizing a vacuole which pumps water out of the cell to maintain osmotic equilibrium. [ 23 ] Turgor pressure is not observed in animal cells because they lack a cell wall. In organisms with cell walls, the cell wall prevents the cell from being lysed by high turgor pressure. [ 1 ] In diatoms, the Heterokontophyta have polyphyletic turgor-resistant cell walls. Throughout these organisms' life cycle, carefully controlled turgor pressure is responsible for cell expansion and for the release of sperm, but not for processes such as seta growth. [ 24 ] Gas-vacuolate cyanobacteria are generally responsible for water blooms . They have the ability to float due to the accumulation of gases within their vacuoles, and the role of turgor pressure and its effect on the capacity of these vacuoles has been reported in various scientific papers. [ 25 ] [ 26 ] It is noted that the higher the turgor pressure, the lower the capacity of the gas vacuoles in different cyanobacteria. Experiments correlating osmosis and turgor pressure in prokaryotes have been used to show how diffusion of solutes into the cell affects turgor pressure within the cell. [ 27 ] When measuring turgor pressure in plants, many factors have to be taken into account. It is generally stated that fully turgid cells have a turgor pressure value equal to that of the cell, and that flaccid cells have a value at or near zero. Other cellular mechanisms to be taken into consideration include the protoplast , solutes within the protoplast (solute potential), transpiration rates of the cell and the tension of cell walls. Measurement is limited depending on the method used, some of which are explored and explained below. Not all methods can be used for all organisms, due to size or other properties; for example, a diatom does not have the same properties as a plant, which places limitations on the methods that can be used to infer turgor pressure. Units used to measure turgor pressure are independent of the methods used to infer its values. [ 28 ] Common units include bars , MPa , and newtons per square meter; 1 bar is equal to 0.1 MPa.
[ 29 ] Turgor pressure can be deduced when the total water potential , Ψ w , and the osmotic potential , Ψ s , are known in a water potential equation. [ 30 ] These equations are used to measure the total water potential of a plant by using variables such as matric potential, osmotic potential, pressure potential, gravitational effects and turgor pressure. [ 31 ] Taking the difference between Ψ s and Ψ w gives the value for turgor pressure: Ψ p = Ψ w − Ψ s . When using this method, gravity and matric potential are considered negligible, since their values are generally either negative or close to zero. [ 30 ] The pressure bomb technique was developed by Scholander et al., and reviewed by Tyree and Hammel in their 1972 publication, in order to test water movement through plants. The instrument is used to measure turgor pressure by placing a leaf (with stem attached) into a closed chamber where pressurized gas is added in increments. Measurements are taken when xylem sap appears out of the cut surface, and at the point at which it does not accumulate or retreat back into the cut surface. [ 32 ] Atomic force microscopes use a type of scanning probe microscopy (SPM). Small probes are introduced to the area of interest, and a spring within the probe measures values via displacement. [ 33 ] This method can be used to measure the turgor pressure of organisms. When using this method, supplemental information such as continuum mechanics equations , single force-depth curves and cell geometries can be used to quantify turgor pressures within a given area (usually a cell). This technique was originally used to measure individual algal cells, but can now be used on larger-celled specimens. It is usually used on higher plant tissues, but was not used to measure turgor pressure until Hüsken and Zimmerman improved the method. [ 34 ] Pressure probes measure turgor pressure via displacement: a glass micro-capillary tube is inserted into the cell, and whatever the cell exudes into the tube is observed through a microscope. An attached device then measures how much pressure is required to push the emission back into the cell. [ 32 ] These are used to accurately quantify measurements of smaller cells. In an experiment by Weber, Smith and colleagues, single tomato cells were compressed between a micro-manipulation probe and glass to allow the pressure probe's micro-capillary to find the cell's turgor pressure. [ 35 ] It has been observed that the value of Ψ w decreases as the cell becomes more dehydrated, [ 30 ] but scientists have speculated whether this value will continue to decrease but never fall to zero, or whether the value can be less than zero. There have been studies [ 36 ] [ 37 ] showing that negative cell pressures can exist in xerophytic plants, but a paper by M. T. Tyree explores whether this is possible or is a conclusion based on misinterpreted data. He concludes that claims of negative turgor pressure values were incorrect and resulted from mis-categorization of "bound" and "free" water in a cell. By analyzing the isotherms of apoplastic and symplastic water, he shows that negative turgor pressures cannot be present within arid plants due to net water loss of the specimen during droughts. Despite this analysis and interpretation of the data, negative turgor pressure values are still used within the scientific community. [ 38 ]
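A minimal Python sketch of the water-potential arithmetic described above, assuming (as stated) that gravitational and matric potentials are negligible: turgor (the pressure potential) is the difference between total water potential and osmotic potential. The numerical values are hypothetical.

```python
def turgor_pressure(psi_w, psi_s):
    """Pressure potential Psi_p = Psi_w - Psi_s (MPa).

    Assumes gravitational and matric potentials are negligible.
    """
    return psi_w - psi_s

# Hypothetical leaf cell: total water potential -0.4 MPa, osmotic potential -1.1 MPa
psi_w, psi_s = -0.4, -1.1
print(f"turgor pressure: {turgor_pressure(psi_w, psi_s):.2f} MPa")  # 0.70 MPa, turgid

# A flaccid cell: total water potential equals osmotic potential
print(f"flaccid cell:    {turgor_pressure(-1.1, -1.1):.2f} MPa")    # 0.00 MPa
```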
A hypothesis presented by M. Harold and colleagues suggests that tip growth in higher plants is amoebic in nature and is not caused by turgor pressure, as is widely believed, meaning that extension is instead caused by the actin cytoskeleton in these plant cells. Regulation of cell growth is implied to be effected by cytoplasmic microtubules, which control the orientation of cellulose fibrils that are deposited into the adjacent cell wall, resulting in growth. In plants, the cells are surrounded by cell walls and filamentous proteins which retain and adjust the plant cell's growth and shape. It is concluded that lower plants grow through apical growth, which differs in that the cell wall expands on only one end of the cell. [ 39 ]
https://en.wikipedia.org/wiki/Turgor_pressure
Turin Polytechnic University in Tashkent ( Uzbek : Toshkent shahridagi Turin politexnika universiteti (TTPU) ) is a non-profit public higher education institution in Uzbekistan. [ 1 ] Turin Polytechnic University in Tashkent was established in 2009 in partnership with the Politecnico di Torino , Italy. TTPU's main objective is to prepare specialists for the automotive, mechanical engineering and electrical industries, and for companies in the fields of civil engineering, construction and the power industry, in accordance with educational programs adopted in collaboration with the Politecnico di Torino, Italy. [ 2 ] TTPU has five departments: the Department of Natural-Mathematical Sciences, the Department of Humanitarian-Economy Sciences, the Department of Control and Computer Engineering , the Department of Civil Engineering and Architecture , and the Department of Mechanical and Aerospace Engineering. TTPU is a teaching and research university. The official foundation date of the university is April 27, 2009, when the decree of the President of the Republic of Uzbekistan No. PP-1106 "On the organization of Turin Polytechnic University in Tashkent" was issued; from that date the university began its activity as a higher educational institution in accordance with the Educational Standards of the Republic of Uzbekistan. [ 3 ] In the summer of 2009, the first 200 students were admitted to the bachelor's degree program [ 4 ] and the new university building, with academic and administrative buildings and a modern campus, was commissioned. [ 5 ] TTPU was established through collaboration among the Polytechnic University of Turin , UZAVTOSANOAT (the leading car manufacturer in Uzbekistan), and the Uzbek Ministry of Higher Education. The Cooperation Agreement and Double Degree Agreement were signed in 2009 with the Politecnico di Torino (Italy), which developed three higher education curricula in engineering (BS and MS) in Uzbekistan in accordance with the Italian higher education system, recognized both by the Ministry of Higher and Secondary Education of the Republic of Uzbekistan and under Italian legislation . [ 6 ] In May 2010, an academic lyceum was established under the university to prepare students in the hard sciences, [ 7 ] and the building of the academic lyceum, with a capacity of 450 students, was constructed and commissioned by September 2011. [ 8 ] In the same year, a Metrology Center (in cooperation with the Italian company Hexagon Metrology S.P.A.), [ 9 ] a Mechatronics Center (with the support of General Motors Powertrain JSC and the German company Festo) and a CAD/CAM/CAE Center were established at the university. [ 10 ] In 2014, the MAN training center was organized in cooperation with MAN Truck & Bus and JV MAN Auto-UZBEKISTAN LLC. [ 11 ] In 2015, admission to the master's degree program in the specialty of "Mechatronics" was organized at the university. [ 12 ] In 2016, the university became one of the first higher education institutions in the field of technology to receive a certificate of the ISO 9001 :2008 International Quality Standard for services in the field of education. [ 13 ] In 2019, an undergraduate "2+2" double degree program was organized in cooperation with the Andijan Machine-Building Institute. [ 14 ] In November 2020, an undergraduate "2+2" double degree program was developed in cooperation between Turin Polytechnic University in Tashkent and Pittsburg State University , Kansas, United States of America .
[ 15 ] The campus is located in Tashkent , Uzbekistan , with modern educational and administrative buildings, conference halls, a library, a sports complex, research centers, a residence hall, dormitories for professors and a large soccer stadium. The campus is under 24/7 security watch and has an Information Resource Center and a cafeteria. The university territory includes academic and administrative buildings, a specialized laboratory, a technopark and a metrology center. The Academic Lyceum, Mechatronics Center, MAN Academy, CAD/CAM/CAE Center and CLAAS Center also operate under the authority of the university. [ 16 ] The period of study is 4 years for a bachelor's degree and 2 years for a master's degree. Students are taught in English with the involvement of professors and teachers of Turin Polytechnic University (Politecnico di Torino, Italy). [ 17 ] Turin Polytechnic University in Tashkent offers a range of courses. The core engineering courses are mainly taught by Italian professors and by local professors who were educated in Italy , Japan , South Korea and the United States . TTPU's bachelor's and master's degree programs are based on the POLITO academic program and are offered at Turin Polytechnic University in Tashkent with a "mixed" approach: some courses are delivered by POLITO faculty members, and others by TTPU faculty members previously trained by teachers from the Polytechnic University of Turin . In accordance with the agreement signed between the universities on the awarding of diplomas, graduates receive an Italian diploma of Turin Polytechnic University (Politecnico di Torino). [ 18 ] TTPU organizes many activities and cooperates closely with major local and foreign companies across Uzbekistan , and it runs several international projects on education and development. [ 19 ] TTPU runs many fundamental, innovative and practical projects and conducts educational, methodological and research work under foreign grants. Students actively participate in international science competitions. The number and quality of scientific articles have increased, and great attention is paid to the publication of scientific collections and monographs, as well as the patenting and implementation of scientific developments. In particular, the publication of articles by doctoral students working on dissertations, including in prestigious foreign journals indexed in Web of Science and Scopus , is gaining momentum. [ 20 ] Turin Polytechnic University in Tashkent was awarded the Scopus Award 2018 in the nomination "The best scientists of the year" (Dilshod Tulaganov) [ 21 ] and the Scopus Award 2019 in the nomination "The impact of the year." [ 22 ] TTPU's sports teams regularly participate in soccer, basketball, volleyball, table tennis, wrestling, chess, athletics and swimming competitions, competing against teams from other universities. [ 23 ] Some competitions take place in the university's sports complex and stadium. [ 24 ] [ 25 ] [ 26 ] A football academy, "Juventus Academy in Tashkent," the official branch of the Juventus football academy of Italy, was established on TTPU's campus in 2019. [ 27 ] TTPU cooperates closely with European, American and Asian higher education institutions and companies [ 28 ] and with more than 40 universities from more than 19 countries.
[ 29 ] Moreover, the university has developed many international projects funded by the European Union's Erasmus+ capacity-building program. [ 30 ]
https://en.wikipedia.org/wiki/Turin_Polytechnic_University_in_Tashkent
Turing's proof is a proof by Alan Turing , first published in November 1936 [ 1 ] with the title " On Computable Numbers, with an Application to the Entscheidungsproblem ". It was the second proof (after Church's theorem ) of the negation of Hilbert 's Entscheidungsproblem ; that is, of the conjecture that some purely mathematical yes–no questions can never be answered by computation ; more technically, that some decision problems are " undecidable " in the sense that there is no single algorithm that infallibly gives a correct "yes" or "no" answer to each instance of the problem. In Turing's own words: "what I shall prove is quite different from the well-known results of Gödel ... I shall now show that there is no general method which tells whether a given formula U is provable in K [ Principia Mathematica ]". [ 2 ] Turing followed this proof with two others. The second and third both rely on the first. All rely on his development of typewriter -like " computing machines " that obey a simple set of rules and his subsequent development of a " universal computing machine ". Under UK copyright law , the work entered the public domain on 1 January 2025, 70 full calendar years after Turing's death on 7 June 1954. In his proof that the Entscheidungsproblem can have no solution, Turing proceeded from two proofs that were to lead to his final proof. His first theorem is most relevant to the halting problem ; the second is more relevant to Rice's theorem . First proof : that no "computing machine" exists that can decide whether or not an arbitrary "computing machine" (as represented by an integer 1, 2, 3, . . .) is "circle-free" (i.e. goes on printing its number in binary ad infinitum): "...we have no general process for doing this in a finite number of steps" (p. 132, ibid .). Turing's proof, although it seems to use the "diagonal process", in fact shows that his machine (called H) cannot calculate its own number, let alone the entire diagonal number ( Cantor's diagonal argument ): "The fallacy in the argument lies in the assumption that B [the diagonal number] is computable". [ 3 ] The proof does not require much mathematics. Second proof : This one is perhaps more familiar to readers as Rice's theorem : "We can show further that there can be no machine E which, when supplied with the S.D ["program"] of an arbitrary machine M, will determine whether M ever prints a given symbol (0 say)". [ a ] Third proof : "Corresponding to each computing machine M we construct a formula Un(M) and we show that, if there is a general method for determining whether Un(M) is provable, then there is a general method for determining whether M ever prints 0". [ 2 ] The third proof requires the use of formal logic to prove a first lemma, followed by a brief word-proof of the second: Lemma 1: If S1 [symbol "0"] appears on the tape in some complete configuration of M, then Un(M) is provable. [ 4 ] Lemma 2: [The converse] If Un(M) is provable then S1 [symbol "0"] appears on the tape in some complete configuration of M. [ 5 ] Finally, in only 64 words and symbols Turing proves by reductio ad absurdum that "the Hilbert Entscheidungsproblem can have no solution". [ 2 ] Turing created a thicket of abbreviations; see the glossary at the end of the article for definitions. Some key clarifications: Turing's machine H is attempting to print a diagonal number of 0s and 1s.
This diagonal number is created when H actually "simulates" each "successful" machine under evaluation and prints the R-th "figure" (1 or 0) of the R-th "successful" machine. Turing spent much of his paper actually "constructing" his machines to convince us of their truth. This was required by his use of the reductio ad absurdum form of proof. We must emphasize the "constructive" nature of this proof: Turing describes what could be a real machine, really buildable. The only questionable element is the existence of machine D, which this proof will eventually show to be impossible. Turing begins the proof with the assertion of the existence of a "decision/determination" machine D. When fed any S.D (string of symbols A, C, D, L, R, N, semicolon ";"), it will determine if this S.D (symbol string) represents a "computing machine" that is either "circular" (and therefore "unsatisfactory", u) or "circle-free" (and therefore "satisfactory", s). Turing has previously demonstrated in his commentary that all "computing machines" (machines that compute a number as 1s and 0s forever) can be written as an S.D on the tape of the "universal machine" U. Most of his work leading up to his first proof is spent demonstrating that a universal machine truly exists, i.e. that: there truly exists a universal machine U; for each number N, there truly exists a unique S.D; every Turing machine has an S.D; and every S.D on U's tape can be "run" by U and will produce the same "output" (figures 1, 0) as the original machine. Turing makes no comment about how machine D goes about its work. For the sake of argument, we suppose that D would first look to see if the string of symbols is "well-formed" (i.e. in the form of an algorithm and not just a scramble of symbols), and if not then discard it. Then it would go "circle-hunting". To do this perhaps it would use "heuristics" (tricks: taught or learned). For purposes of the proof, these details are not important. Turing then describes (rather loosely) the algorithm (method) to be followed by a machine he calls H. Machine H contains within it the decision-machine D (thus D is a "subroutine" of H). Machine H's algorithm is expressed in H's table of instructions, or perhaps in H's Standard Description on tape united with the universal machine U; Turing does not specify this. In the course of describing universal machine U, Turing has demonstrated that a machine's S.D (string of letters similar to a "program") can be converted to an integer (base 8) and vice versa. Any number N (in base 8) can be converted to an S.D with the following replacements: 1 by A, 2 by C, 3 by D, 4 by L, 5 by R, 6 by N, 7 by semicolon ";" (this conversion is sketched in code below). As it turns out, machine H's unique number (D.N) is the number "K". We can infer that K is some hugely long number, maybe tens-of-thousands of digits long, but this is not important to what follows. Machine H is responsible for converting any number N into an equivalent S.D symbol string for sub-machine D to test. (In programming parlance: H passes an arbitrary "S.D" to D, and D returns "satisfactory" or "unsatisfactory".) Machine H is also responsible for keeping a tally R ("Record"?) of successful numbers (we suppose that the number of "successful" S.D's, i.e. R, is much less than the number of S.D's tested, i.e. N). Finally, H prints on a section of its tape a diagonal number "beta-primed", B'.
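This digit-for-letter substitution is mechanical; the short sketch below (function names are ours, not Turing's) treats a D.N simply as its string of digits, which for a valid D.N are drawn only from 1 through 7:

```python
# Turing's substitution between a Description Number (D.N) and a
# Standard Description (S.D): 1=A, 2=C, 3=D, 4=L, 5=R, 6=N, 7=";".
# Illustrative sketch only; valid D.N's contain only the digits 1-7.

DIGIT_TO_LETTER = {"1": "A", "2": "C", "3": "D",
                   "4": "L", "5": "R", "6": "N", "7": ";"}
LETTER_TO_DIGIT = {letter: digit for digit, letter in DIGIT_TO_LETTER.items()}

def dn_to_sd(n: int) -> str:
    """Convert a Description Number to its Standard Description."""
    return "".join(DIGIT_TO_LETTER[d] for d in str(n))

def sd_to_dn(sd: str) -> int:
    """Convert a Standard Description back to its Description Number."""
    return int("".join(LETTER_TO_DIGIT[c] for c in sd))

# Round trip: the S.D ";DAA" corresponds to the D.N 7311.
assert dn_to_sd(7311) == ";DAA"
assert sd_to_dn(";DAA") == 7311
```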
H creates this B' by "simulating" (in the computer sense) the "motions" of each "satisfactory" machine/number; eventually this machine/number under test will arrive at its R-th "figure" (1 or 0), and H will print it. H is then responsible for "cleaning up the mess" left by the simulation, incrementing N and proceeding onward with its tests, ad infinitum . Note: all these machines that H is hunting for are what Turing called "computing machines". These compute binary-decimal numbers in an endless stream of what Turing called "figures": only the symbols 1 and 0. An example: suppose machine H has tested 13472 numbers and produced 5 satisfactory numbers, i.e. H has converted the numbers 1 through 13472 into S.D's (symbol strings) and passed them to D for test. As a consequence H has tallied 5 satisfactory numbers and run the first one to its 1st "figure", the second to its 2nd figure, the third to its 3rd figure, the fourth to its 4th figure, and the fifth to its 5th figure. The count now stands at N = 13472, R = 5, and B' = ".10011" (for example). H cleans up the mess on its tape, and proceeds: H increments N to 13473 and converts "13473" to the symbol string ADRLD. If sub-machine D deems ADRLD unsatisfactory, then H leaves the tally-record R at 5, increments the number N to 13474 and proceeds onward. On the other hand, if D deems ADRLD satisfactory, then H will increment R to 6. H will convert N (again) into ADRLD [this is just an example; ADRLD is probably useless] and "run" it using the universal machine U until this machine-under-test (U "running" ADRLD) prints its 6th "figure", i.e. 1 or 0. H will print this 6th number (e.g. "0") in the "output" region of its tape (e.g. B' = ".100110"). H cleans up the mess, and then increments the number N to 13474. The whole process unravels when H arrives at its own number K. We will proceed with our example. Suppose the successful-tally/record R stands at 12. H finally arrives at its own number minus 1, i.e. N = K-1 = 4355...3214, and this number is unsuccessful. Then H increments N to produce K = 4355...3215, i.e. its own number. H converts this to "LDRR...DCAR" and passes it to decision-machine D. Decision-machine D must return "satisfactory" (that is: H must by definition go on and on testing, ad infinitum , because it is "circle-free"). So H now increments tally R from 12 to 13, and then re-converts the number-under-test K into its S.D and uses U to simulate it. But this means that H will be simulating its own motions. What is the first thing the simulation will do? This simulation, K-aka-H, either creates a new N or "resets" the "old" N to 1. This K-aka-H either creates a new R or "resets" the "old" R to 0. Old-H "runs" new K-aka-H until it arrives at its 12th figure. But it never makes it to the 13th figure; K-aka-H eventually arrives at 4355...3215, again, and K-aka-H must repeat the test. K-aka-H will never reach the 13th figure. The H-machine probably just prints copies of itself ad infinitum across blank tape. But this contradicts the premise that H is a satisfactory, non-circular computing machine that goes on printing the diagonal number's 1's and 0's forever. (We will see the same thing if N is reset to 1 and R is reset to 0.) If the reader does not believe this, they can write a "stub" for decision-machine D (stub "D" will return "satisfactory") and then see for themselves what happens at the instant machine H encounters its own number; the sketch below does exactly this.
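Taking up that suggestion, here is a toy Python rendition of the regress; everything in it (the names, the tiny stand-in value of K, the dummy figures returned for other machines) is illustrative scaffolding rather than Turing's construction:

```python
# Stub decision machine D plus a loop playing the role of H.  When H
# reaches its own number K, "simulating" machine K means running H
# again from the start, so the tally R never advances past that point.
import sys

K = 5  # stand-in for H's own (hugely long) Description Number

def D(n: int) -> str:
    """Stub: deems every machine circle-free ("satisfactory")."""
    return "satisfactory"

def run_machine(n: int, r: int) -> int:
    """'Run' machine n to its r-th figure.  Simulating machine K is
    simulating H itself, which restarts the whole testing loop."""
    if n == K:
        return H()
    return n % 2          # an arbitrary dummy figure for other machines

def H() -> None:
    n, r = 0, 0
    while True:
        n += 1
        if D(n) == "satisfactory":
            r += 1
            run_machine(n, r)   # never returns once n reaches K

sys.setrecursionlimit(50)       # fail fast instead of spinning for long
try:
    H()
except RecursionError:
    print("H stalls forever at its own number K: the promised figure never appears")
```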
Less than one page long, the passage from premises to conclusion is obscure. Turing proceeds by reductio ad absurdum . He asserts the existence of a machine E which, when given the S.D (Standard Description, i.e. "program") of an arbitrary machine M, will determine whether M ever prints a given symbol (0 say). He does not assert that this M is a "computing machine". Given the existence of machine E, Turing proceeds by constructing further machines from it; the difficulty in the proof lies in the first step of this construction. The reader will be helped by realizing that Turing is not explaining his subtle handiwork. (In a nutshell: he is using certain equivalencies between the "existential" and "universal" operators, together with their equivalent expressions written with logical operators.) Here's an example: suppose we see before us a parking lot full of hundreds of cars. We decide to go around the entire lot looking for "cars with flat (bad) tires". After an hour or so we have found two "cars with bad tires". We can now say with certainty that "some cars have bad tires". Or we could say: "It's not true that 'all the cars have good tires'". Or: "It is true that 'not all the cars have good tires'". Let us go to another lot. Here we discover that "all the cars have good tires". We might say, "There's not a single instance of a car having a bad tire." Thus we see that, if we can say something about each car separately, then we can say something about ALL of them collectively. This is what Turing does: from M he creates a collection of machines { M1, M2, M3, M4, ..., Mn } and about each he writes a sentence, " X prints at least one 0", and allows only two " truth values ": True = blank or False = :0:. One by one he determines the truth value of the sentence for each machine and makes a string of blanks or :0:, or some combination of these. We might get something like this: "M1 prints a 0" = True AND "M2 prints a 0" = True AND "M3 prints a 0" = True AND "M4 prints a 0" = False, ... AND "Mn prints a 0" = False. He gets the string BBB:0::0::0: ... :0: ... ad infinitum if there are an infinite number of machines Mn. If, on the other hand, every machine had produced "True", then the expression on the tape would be BBBBB....BBBB ... ad infinitum . Thus Turing has converted statements about each machine considered separately into a single "statement" (string) about all of them.
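In modern notation (not Turing's), the equivalences at work are the usual duality between the quantifiers:

\[ \neg \forall x\, P(x) \;\Longleftrightarrow\; \exists x\, \neg P(x), \qquad \forall x\, P(x) \;\Longleftrightarrow\; \neg \exists x\, \neg P(x). \]

In the parking-lot example, P(x) is "car x has good tires": exhibiting one bad tire witnesses \( \exists x\, \neg P(x) \) and thereby refutes \( \forall x\, P(x) \), while checking every car without finding a bad tire establishes \( \forall x\, P(x) \).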
Given the machine (he calls it G) that created this expression, Turing can test it with his machine E and determine if it ever produces a 0. In our first example above we see that indeed it does, so we know that not all the M's in our sequence print 0s. But the second example shows that, since the string is all blanks, every Mn in our sequence has produced a 0. All that remains for Turing to do is create a process to create the sequence of Mn's from a single M. Suppose M prints some pattern of figures containing 0s. Turing creates another machine F that takes M and crunches out a sequence of Mn's that successively convert the first n 0's to "0-bar" (a marked 0). He states, without showing details, that this machine F is truly buildable. We can see that one of a couple of things could happen: F may run out of machines that have 0's, or it may have to go on ad infinitum creating machines to "cancel the zeros". Turing now combines machines E and F into a composite machine G. G starts with the original M, then uses F to create all the successor-machines M1, M2, ..., Mn. Then G uses E to test each machine starting with M. If E detects that a machine never prints a zero, G prints :0: for that machine. If E detects that a machine does print a 0 (we assume; Turing doesn't say), then G prints :: or just skips this entry, leaving the squares blank. We can see that a couple of things can happen: G will print no 0's, ever, if all the Mn's print 0's; OR G will print 0's ad infinitum if none of the M's prints a 0; OR G will print 0's for a while and then stop. Now, what happens when we apply E to G itself? If E(G) determines that G never prints a 0, then we know that all the Mn's have printed 0's; and this means that, because all the Mn came from M, M itself prints 0's ad infinitum . If E(G) determines that G does print a 0, then we know that not all the Mn's print 0's; therefore M does not print 0's ad infinitum . The same process can be applied to determine whether M prints 1 infinitely often. When we combine these processes, we can determine whether M does, or does not, go on printing 1's and 0's ad infinitum . Thus we have a method for determining if M is circle-free. By Proof 1 this is impossible. So the initial assertion, that E exists, is wrong: E does not exist. Here Turing proves "that the Hilbert Entscheidungsproblem can have no solution". [ 2 ] Here he "…show(s) that there can be no general process for determining whether a given formula U of the functional calculus K is provable" ( ibid .). Both Lemmas #1 and #2 are required to form the necessary "IF AND ONLY IF" (i.e. logical equivalence ) required by the proof: A set E is computably decidable if and only if both E and its complement are computably enumerable (Franzén, p. 67). Turing demonstrates the existence of a formula Un(M) which says, in effect, that "in some complete configuration of M, 0 appears on the tape" (p. 146). This formula is TRUE, that is, it is "constructible", and he shows how to go about this. Then Turing proves two Lemmas, the first requiring all the hard work. (The second is the converse of the first.) Then he uses reductio ad absurdum to prove his final result. [If readers intend to study the proof in detail they should correct their copies of the pages of the third proof with the corrections that Turing supplied. Readers should also come equipped with a solid background in (i) logic and (ii) the paper of Kurt Gödel : " On Formally Undecidable Propositions of Principia Mathematica and Related Systems ". [ b ] For assistance with Gödel's paper they may consult e.g. Ernest Nagel and James R. Newman , Gödel's Proof , New York University Press, 1958.] To follow the technical details, the reader will need to understand the definition of "provable" and be aware of important "clues". "Provable" means, in the sense of Gödel, that (i) the axiom system itself is powerful enough to produce (express) the sentence "This sentence is provable", and (ii) in any arbitrary "well-formed" proof the symbols lead, by axioms, definitions, and substitution, to the symbols of the conclusion. First clue: "Let us put the description of M into the first standard form of §6". Section 6 describes the very specific "encoding" of machine M on the tape of a "universal machine" U. This requires the reader to know some idiosyncrasies of Turing's universal machine U and the encoding scheme. (i) The universal machine is a set of "universal" instructions that reside in an "instruction table". Separate from this, on U's tape, a "computing machine" M will reside as "M-code". The universal table of instructions can print on the tape the symbols A, C, D, 0, 1, u, v, w, x, y, z, : .
The various machines M can print these symbols only indirectly, by commanding U to print them. (ii) The "machine code" of M consists of only a few letters and the semicolon, i.e. D, C, A, R, L, N, ; . Nowhere within the "code" of M will the numerical "figures" (symbols) 1 and 0 ever appear. If M wants U to print a symbol from the collection blank, 0, 1, then it uses one of the following codes to tell U to print it. To make things more confusing, Turing calls these symbols S0, S1, and S2: blank is S0 (code D), 0 is S1 (code DC), and 1 is S2 (code DCC). (iii) A "computing machine", whether it is built directly into a table (as his first examples show), or as machine-code M on universal-machine U's tape, prints its number on blank tape (to the right of the M-code, if there is one) as 1 s and 0 s, forever proceeding to the right. (iv) If a "computing machine" is U+"M-code", then the "M-code" appears first on the tape; the tape has a left end and the "M-code" starts there and proceeds to the right on alternate squares. When the M-code comes to an end (and it must, because of the assumption that these M-codes are finite algorithms), the "figures" will begin as 1 s and 0 s on alternate squares, proceeding to the right forever. Turing uses the (blank) alternate squares (called "E", "eraseable", squares) to help U+"M-code" keep track of where the calculations are, both in the M-code and in the "figures" that the machine is printing. (v) A "complete configuration" is a printing of all symbols on the tape, including M-code and "figures" up to that point, together with the figure currently being scanned (with a pointer-character printed to the left of the scanned symbol?). If we have interpreted Turing's meaning correctly, this will be a hugely long set of symbols. But whether the entire M-code must be repeated is unclear; only a printing of the current M-code instruction is necessary, plus the printing of all figures with a figure-marker. (vi) Turing reduced the vast possible number of instructions in "M-code" (again: the code of M to appear on the tape) to a small canonical set, one of three similar to this: {qi Sj Sk R ql}, e.g. if the machine is executing instruction #qi and symbol Sj is on the square being scanned, then print symbol Sk, go Right, and then go to instruction ql. The other instructions are similar, encoding for "Left" L and "No motion" N. It is this set that is encoded by the string of symbols qi = DA...A, Sj = DC...C, Sk = DC...C, R, ql = DA...A. Each instruction is separated from the next by a semicolon. For example, {q5, S1 S0 L q3} means: Instruction #5: if the scanned symbol is 0 then print blank , go Left, then go to instruction #3. It is encoded as follows: ; D A A A A A D C D L D A A A . Second clue: Turing is using ideas introduced in Gödel's paper, that is, the "Gödelization" of (at least part of) the formula for Un(M). This clue appears only as a footnote on page 138 ( Davis (1965) , p. 138): "A sequence of r primes is denoted by ^ (r)" ( ibid .) [Here, r inside parentheses is "raised".] This "sequence of primes" appears in a formula called F^(n). Third clue: This reinforces the second clue. Turing's original attempt at the proof uses the expression: (Eu)N(u) & (x)(... etc. ...) [ 6 ] Earlier in the paper (p. 138) Turing had used this expression and defined N(u) to mean "u is a non-negative integer" ( ibid .) (i.e. a Gödel number). But, with the Bernays corrections, Turing abandoned this approach (i.e. the use of N(u)), and the only place where "the Gödel number" appears explicitly is where he uses F^(n).
What does this mean for the proof? The first clue means that a simple examination of the M-code on the tape will not reveal if a symbol 0 is ever printed by U+"M-code". A testing-machine might look for the appearance of DC in one of the strings of symbols that represent an instruction. But will this instruction ever be "executed"? Something has to "run the code" to find out. This something can be a machine, or it can be lines in a formal proof, i.e. Lemma #1. The second and third clues mean that, as its foundation is Gödel's paper, the proof is difficult. In the example below we will actually construct a simple "theorem" (a little Post–Turing machine program) and "run it". We will see just how mechanical a properly designed theorem can be. A proof, we will see, is just that: a "test" of the theorem, which we perform by inserting a "proof example" at the beginning and seeing what pops out at the end. Both Lemmas #1 and #2 are required to form the necessary "IF AND ONLY IF" (i.e. logical equivalence) required by the proof: A set E is computably decidable if and only if both E and its complement are computably enumerable. (Franzén, p. 67) To quote Franzén: A sentence A is said to be decidable in a formal system S if either A or its negation is provable in S. (Franzén, p. 65) Franzén has defined "provable" earlier in his book: A formal system is a system of axioms (expressed in some formally defined language) and rules of reasoning (also called inference rules), used to derive the theorems of the system. A theorem is any statement in the language of the system obtainable by a series of applications of the rules of reasoning, starting from the axioms. A proof is a finite sequence of such applications, leading to a theorem as its conclusion. ( ibid. p. 17) Thus a "sentence" is a string of symbols, and a theorem is a string of strings of symbols. Turing is confronted with the following task: to convert a Universal Turing machine "program", and the numerical symbols on the tape (Turing's "figures", symbols "1" and "0"), into a "theorem", that is, a (monstrously long) string of sentences that define the successive actions of the machine, (all) the figures of the tape, and the location of the "tape head". Thus the "string of sentences" will be strings of strings of symbols. The only allowed individual symbols will come from Gödel's symbols defined in his paper. (In the following example we use parentheses "(" and ")" around a "figure" to indicate that the "figure" is the symbol being scanned by the machine.) In the following, we have to remind ourselves that every one of Turing's "computing machines" is a binary-number generator/creator that begins work on "blank tape". Properly constructed, it always cranks away ad infinitum, but its instructions are always finite. In Turing's proofs, the tape had a "left end" but extended rightward ad infinitum. For the sake of the example below, we will assume that the "machine" is not a Universal machine, but rather the simpler "dedicated machine" with the instructions given below. Our example is based on a modified Post–Turing machine model of a Turing Machine. This model prints only the symbols 0 and 1. The blank tape is considered to be all b's. Our modified model requires us to add two more instructions to the 7 Post–Turing instructions. The abbreviations that we will use are:
R, RIGHT: look to the right and move the tape left, i.e. move the tape head right
L, LEFT: look to the left and move the tape right, i.e. move the tape head left
E, ERASE: make the scanned square blank
P0: PRINT 0 in the scanned square
P1: PRINT 1 in the scanned square
Jb_n: JUMP-IF-blank to instruction #n
J0_n: JUMP-IF-0 to instruction #n
J1_n: JUMP-IF-1 to instruction #n
H: HALT
In the cases of R, L, E, P0, and P1, after doing its task the machine continues on to the next instruction in numerical sequence; ditto for the jumps if their tests fail. But, for brevity, our examples will only use three squares, and these will always start as three blanks with the scanned square on the left, i.e. bbb. With the two symbols 1, 0 and blank we can have 27 distinct configurations: bbb, bb0, bb1, b0b, b00, b01, b1b, b10, b11, 0bb, 0b0, 0b1, 00b, 000, 001, 01b, 010, 011, 1bb, 1b0, 1b1, 10b, 100, 101, 11b, 110, 111. We must be careful here, because it is quite possible that an algorithm will (temporarily) leave blanks in between figures, then come back and fill something in; indeed, an algorithm may do this intentionally. In fact, Turing's machine does this: it prints on alternate squares, leaving blanks between figures so it can print locator symbols. Turing always left alternate squares blank so his machine could place a symbol to the left of a figure (or a letter, if the machine is the universal machine and the scanned square is actually in the "program"). In our little example we will forgo this and just put parentheses ( ) around the scanned symbol, as follows: b(b)0 means "blanks to the left, but the left blank is 'in play'; the scanned square is blank; then '0'; then blanks to the right"; 1(0)1 means "blanks to the left, then 1; the scanned square is '0'; then 1; then blanks to the right". Let us write a simple program: start: P1, R, P1, R, P1, H. Remember that we always start with blank tape. The complete configuration prints the symbols on the tape followed by the next instruction: start config: (b) P1, config #1: (1) R, config #2: 1(b) P1, config #3: 1(1) R, config #4: 11(b) P1, config #5: 11(1) H. Let us add "jump" into the formula. When we do this we discover why the complete configuration must include the tape symbols. (Actually, we see this better below.) This little program prints three "1"s to the right, overwrites the rightmost of them with "0", then reverses direction and moves left over the 1s until it hits a blank. We will print all the symbols that our machine uses: start: P1, R, P1, R, P1, P0, L, J1_7, H. The run: (b)bb P1, (1)bb R, 1(b)b P1, 1(1)b R, 11(b) P1, 11(1) P0, 11(0) L, 1(1)0 J1_7, 1(1)0 L, (1)10 J1_7, (1)10 L, (b)110 J1_7, (b)110 H. Here at the end we find that a blank on the left has "come into play", so we leave it as part of the total configuration. Given that we have done our job correctly, we add the starting conditions and see "where the theorem goes". The resulting configuration, the number 110, is the PROOF.
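For readers who want to check such traces mechanically, here is a minimal interpreter for this modified Post–Turing instruction set. It is our sketch, not part of the formal model: the tape is a Python dict (unwritten squares read as blank "b"), the program counter is zero-based internally, and a step cap guards against non-halting programs.

```python
# Minimal interpreter for the modified Post-Turing instructions
# R, L, E, P0, P1, Jb_n, J0_n, J1_n, H used in the example above.

def run(program, max_steps=10_000):
    tape, head, pc = {}, 0, 0
    for _ in range(max_steps):
        op = program[pc]
        scanned = tape.get(head, "b")
        if op == "H":                      # HALT
            break
        elif op == "R":                    # move head right
            head += 1
        elif op == "L":                    # move head left
            head -= 1
        elif op == "E":                    # erase the scanned square
            tape[head] = "b"
        elif op in ("P0", "P1"):           # print 0 or 1
            tape[head] = op[1]
        elif op.startswith("J"):           # Jx_n: jump to instruction n if scanned == x
            test, target = op[1:].split("_")
            if scanned == test:
                pc = int(target) - 1       # the text numbers instructions from 1
                continue
        pc += 1
    if not tape:
        return ""
    return "".join(tape.get(i, "b") for i in range(min(tape), max(tape) + 1))

prog = ["P1", "R", "P1", "R", "P1", "P0", "L", "J1_7", "H"]
print(run(prog))   # -> 110, matching the trace above
```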
Turing's proof is complicated by a large number of definitions, and confounded with what Martin Davis called "petty technical details" and "...technical details [that] are incorrect as given". [ c ] Turing himself published "A Correction" in 1938: "The author is indebted to P. Bernays for pointing out these errors". [ 7 ] Specifically, in its original form the third proof is badly marred by technical errors, and even after Bernays' suggestions and Turing's corrections, errors remained in the description of the universal machine . Confusingly, since Turing was unable to correct his original paper, some text within the body harks back to Turing's flawed first effort. Bernays' corrections may be found in Davis (1965) , pp. 152–154; the original is to be found as "On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction," Proceedings of the London Mathematical Society (2), 43 (1938), 544–546. The on-line version of Turing's paper has these corrections in an addendum; however, corrections to the Universal Machine must be found in an analysis provided by Emil Post . At first, the only mathematician to pay close attention to the details of the proof was Post (cf. Hodges p. 125), mainly because he had arrived simultaneously at a similar reduction of "algorithm" to primitive machine-like actions, so he took a personal interest in the proof. Strangely (perhaps World War II intervened), it took Post some ten years to dissect it, in the Appendix to his paper Recursive Unsolvability of a Problem of Thue , 1947. [ d ] Other problems present themselves: in his Appendix Post commented indirectly on the paper's difficulty and directly on its "outline nature" [ e ] and the "intuitive form" of the proofs. [ e ] Post had to infer various points: If our critique is correct, a machine is said to be circle-free if it is a Turing computing ... machine which prints an infinite number of 0s and 1s. And the two theorems of Turing's in question are really the following. There is no Turing ... machine which, when supplied with an arbitrary positive integer n, will determine whether n is the D.N of a Turing computing ... machine that is circle-free. [Secondly], There is no Turing convention-machine which, when supplied with an arbitrary positive integer n, will determine whether n is the D.N of a Turing computing ... machine that ever prints a given symbol (0 say). [ f ] Anyone who has ever tried to read the paper will understand Hodges' complaint: The paper started attractively, but soon plunged (in typical Turing manner) into a thicket of obscure German Gothic type in order to develop his instruction table for the universal machine. The last people to give it a glance would be the applied mathematicians who had to resort to practical computation... (Hodges p. 124)
1 computable number — a number whose decimal is computable by a machine (i.e., by finite means such as an algorithm)
2 M — a machine with a finite instruction table and a scanning/printing head. M moves an infinite tape divided into squares, each "capable of bearing a symbol". The machine-instructions are only the following: move one square left, move one square right, on the scanned square print symbol p, erase the scanned square, if the symbol is p then do instruction aaa, if the scanned symbol is not p then do instruction aaa, if the scanned symbol is none then do instruction aaa, if the scanned symbol is any then do instruction aaa [where "aaa" is an instruction-identifier]
3 computing machine — an M that prints two kinds of symbols; symbols of the first type are called "figures" and are only the binary symbols 1 and 0; symbols of the second type are any other symbols
4 figures — symbols 1 and 0 , a.k.a. "symbols of the first kind"
5 m-configuration — the instruction-identifier, either a symbol in the instruction table, or a string of symbols representing the instruction-number on the tape of the universal machine (e.g. "DAAAAA = instruction #5"). In Turing's S.D the m-configuration appears twice in each instruction: the left-most string is the "current instruction", the right-most string is the next instruction
6 symbols of the second kind — any symbols other than 1 and 0
7 circular — an unsuccessful computing machine. It fails to print, ad infinitum, the figures 0 or 1 that represent in binary the number it computes
8 circle-free — a successful computing machine. It prints, ad infinitum, the figures 0 or 1 that represent in binary the number it computes
9 sequence — as in "sequence computed by the machine": symbols of the first kind, a.k.a. figures, a.k.a. symbols 0 and 1
10 computable sequence — a sequence that can be computed by a circle-free machine
11 S.D — Standard Description: a sequence of symbols A, C, D, L, R, N, ";" on a Turing machine tape
12 D.N — Description Number : an S.D converted to a number: 1=A, 2=C, 3=D, 4=L, 5=R, 6=N, 7=;
13 M(n) — a machine whose D.N is the number "n"
14 satisfactory — an S.D or D.N that represents a circle-free machine
15 U — a machine equipped with a "universal" table of instructions. If U is "supplied with a tape on the beginning of which is written the S.D of some computing machine M, U will compute the same sequence as M."
16 β' — "beta-primed": a so-called "diagonal number" made up of the n-th figure (i.e. 0 or 1) of the n-th computable sequence [also: the computable number of H, see below]
17 u — an unsatisfactory, i.e. circular, S.D
18 s — a satisfactory, i.e. circle-free, S.D
19 D — a machine contained in H (see below). When supplied with the S.D of any computing machine M, D will test M's S.D and, if circular, mark it with "u", and, if circle-free, mark it with "s"
20 H — a computing machine. H computes B', maintains R and N. H contains D and U and an unspecified machine (or process) E that maintains N and R and provides D with the equivalent S.D of N. E also computes the figures of B' and assembles the figures of B'.
21 R — a record, or tally, of the quantity of successful (circle-free) S.D's tested by D
22 N — a number, starting with 1, to be converted into an S.D by machine E. E maintains N.
23 K — a number: the D.N of H
24 complete configuration — the number (figure 1 or 0 ) of the scanned square, the complete sequence of all symbols on the tape, and the m-configuration (the instruction-identifier, either a symbol or a string of symbols representing a number, e.g. "instruction DAAAA = #5")
25 RSi(x, y) — "in the complete configuration x of M the symbol on square y is Si" ("complete configuration" is defined at entry 24)
26 I(x, y) — "in the complete configuration x of M the square y is scanned"
27 Kqm(x) — "in the complete configuration x of M the machine-configuration (instruction number) is qm"
28 F(x, y) — "y is the immediate successor of x" (follows Gödel's use of "f" as the successor-function)
29 G(x, y) — "x precedes y", not necessarily immediately
30 Inst{qi, Sj Sk L ql} — an abbreviation, as are Inst{qi, Sj Sk R ql} and Inst{qi, Sj Sk N ql} ; see below
Turing reduces his instruction set to three "canonical forms", one each for Left, Right, and No-movement; Sj and Sk are symbols on the tape. For example, the operations in the first line are PSk = PRINT symbol Sk from the collection A, C, D, 0, 1, u, v, w, x, y, z, : , then move the tape LEFT. These he further abbreviated as:
(N1) qi Sj Sk L qm
(N2) qi Sj Sk R qm
(N3) qi Sj Sk N qm
In Proof #3 he calls the first of these "Inst{qi Sj Sk L ql}", and he shows how to write the entire machine's S.D as a logical conjunction (logical AND): this string is called "Des(M)", as in "Description-of-M"; i.e.
if the machine prints 0 and then 1's and 0's on alternate squares, to the right, ad infinitum , it might have the following table (a similar example appears on page 119):
q1, blank, P0, R, q2
q2, blank, P-blank, R, q3
q3, blank, P1, R, q4
q4, blank, P-blank, R, q1
(This has been reduced to canonical form with the "P-blank" instructions, so it differs a bit from Turing's example.) If we put them into the "Inst( )" form, the instructions will be the following (remembering: S0 is blank, S1 = 0, S2 = 1):
Inst {q1 S0 S1 R q2}
Inst {q2 S0 S0 R q3}
Inst {q3 S0 S2 R q4}
Inst {q4 S0 S0 R q1}
The reduction to the Standard Description (S.D) will be:
; D A D D C R D A A ; D A A D D R D A A A ; D A A A D D C C R D A A A A ; D A A A A D D R D A ;
This agrees with his example in the book (there will be a blank between each letter and number). Universal machine U uses the alternate blank squares as places to put "pointers".
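The reduction from an Inst{...} table to the S.D is likewise mechanical. The sketch below (helper names are ours) reproduces the S.D string above exactly, up to the spacing:

```python
# Encode a machine table into Turing's Standard Description:
# q_i -> "D" + "A"*i, S_j -> "D" + "C"*j, keep the move letter,
# and prefix each instruction with ";".

def encode_inst(qi: int, sj: int, sk: int, move: str, ql: int) -> str:
    q = lambda i: "D" + "A" * i
    s = lambda j: "D" + "C" * j
    return ";" + q(qi) + s(sj) + s(sk) + move + q(ql)

table = [(1, 0, 1, "R", 2),   # Inst{q1 S0 S1 R q2}
         (2, 0, 0, "R", 3),   # Inst{q2 S0 S0 R q3}
         (3, 0, 2, "R", 4),   # Inst{q3 S0 S2 R q4}
         (4, 0, 0, "R", 1)]   # Inst{q4 S0 S0 R q1}

sd = "".join(encode_inst(*row) for row in table) + ";"
print(" ".join(sd))
# ; D A D D C R D A A ; D A A D D R D A A A ; D A A A D D C C R D A A A A ; D A A A A D D R D A ;
```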
https://en.wikipedia.org/wiki/Turing's_proof
The Turing Talk , previously known as the Turing Lecture , [ 1 ] is an annual award lecture delivered by a noted speaker on the subject of computer science . Sponsored and co-hosted by the Institution of Engineering and Technology (IET) [ 2 ] and the British Computer Society , [ 3 ] the talk has been delivered at different locations in the United Kingdom annually since 1999. Venues for the talk have included Savoy Place , the Royal Institution in London, Cardiff University , The University of Manchester , Belfast City Hall and the University of Glasgow . [ 3 ] [ 1 ] The main talk is preceded by an opening speaker, who performs a short opening act for the main event. The talk is named in honour of Alan Turing and should not be confused with the Turing Award lecture organised by the Association for Computing Machinery (ACM). [ 4 ] Recent Turing talks are available as a live webcast and are archived online. [ 5 ] Previous speakers have included:
https://en.wikipedia.org/wiki/Turing_Talk
The Turing pattern is a concept introduced by English mathematician Alan Turing in a 1952 paper titled " The Chemical Basis of Morphogenesis ", which describes how patterns in nature , such as stripes and spots, can arise naturally and autonomously from a homogeneous, uniform state. [ 1 ] [ 2 ] The pattern arises due to Turing instability, which in turn arises from the interplay between the differential diffusion of chemical species and chemical reaction. The instability mechanism is surprising, because pure diffusion, such as molecular diffusion , would be expected to have a stabilizing influence on the system (i.e., to mix it completely). In his paper, [ 1 ] Turing examined the behaviour of a system in which two diffusible substances interact with each other, and found that such a system is able to generate a spatially periodic pattern even from a random or almost uniform initial condition. [ 3 ] Prior to the discovery of this instability mechanism, which arises from the unequal diffusion coefficients of the two substances, diffusional effects were always presumed to have a stabilizing influence on the system. Turing hypothesized that the resulting wavelike patterns are the chemical basis of morphogenesis . [ 3 ] Turing patterning is often found in combination with other patterns: vertebrate limb development is one of the many phenotypes exhibiting Turing patterning overlapped with a complementary pattern (in this case a French flag model ). [ 4 ] Before Turing, Yakov Zeldovich in 1944 discovered this instability mechanism in connection with the cellular structures observed in lean hydrogen flames. [ 5 ] Zeldovich explained the cellular structure as a consequence of hydrogen's diffusion coefficient being larger than the thermal diffusion coefficient . In the combustion literature, Turing instability is referred to as diffusive–thermal instability . The original theory, a reaction–diffusion theory of morphogenesis, has served as an important model in theoretical biology . [ 6 ] Reaction–diffusion systems have attracted much interest as a prototype model for pattern formation . Patterns such as fronts , hexagons , spirals , stripes and dissipative solitons are found as solutions of Turing-like reaction–diffusion equations . [ 7 ] Turing proposed a model wherein two homogeneously distributed substances (P and S) interact to produce stable patterns during morphogenesis. These patterns represent regional differences in the concentrations of the two substances, and their interactions produce an ordered structure out of random chaos. [ 8 ] In Turing's model, substance P promotes the production of more substance P as well as of substance S; substance S, however, inhibits the production of substance P. If S diffuses more readily than P, sharp waves of concentration differences will be generated for substance P. An important feature of Turing's model is that particular wavelengths in the substances' distribution will be amplified while other wavelengths will be suppressed. [ 8 ] The parameters depend on the physical system under consideration. In the context of fish skin pigmentation, the associated equation is a three-field reaction–diffusion equation in which the linear parameters are associated with pigmentation cell concentration and the diffusion parameters are not the same for all fields. [ 9 ]
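The condition that "S diffuses more readily than P" can be made precise by the standard linear analysis of a two-species reaction–diffusion system (this is the textbook formulation in modern notation, not the notation of Turing's paper). Writing the activator concentration as u and the inhibitor concentration as v,

\[ \partial_t u = f(u,v) + D_u \nabla^2 u, \qquad \partial_t v = g(u,v) + D_v \nabla^2 v, \]

a homogeneous steady state that is stable without diffusion, i.e. whose Jacobian entries satisfy \( f_u + g_v < 0 \) and \( f_u g_v - f_v g_u > 0 \), becomes unstable to a finite band of spatial wavenumbers when

\[ D_v f_u + D_u g_v > 2 \sqrt{ D_u D_v \left( f_u g_v - f_v g_u \right) } , \]

which, for a self-enhancing activator ( \( f_u > 0 \) ), can hold only if \( D_v \) sufficiently exceeds \( D_u \). The wavenumbers inside the unstable band are precisely the amplified wavelengths just mentioned.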
In dye-doped liquid crystals , a photoisomerization process in the liquid crystal matrix is described as a reaction–diffusion equation of two fields (the liquid crystal order parameter and the concentration of the cis-isomer of the azo-dye). [ 10 ] The two systems have very different physical mechanisms underlying the chemical reactions and diffusive processes, but on a phenomenological level both have the same ingredients. Turing-like patterns have also been demonstrated to arise in developing organisms without the classical requirement of diffusible morphogens. Studies in chick and mouse embryonic development suggest that the patterns of feather and hair-follicle precursors can be formed without a morphogen pre-pattern, and instead are generated through self-aggregation of mesenchymal cells underlying the skin. [ 11 ] [ 12 ] In these cases, a uniform population of cells can form regularly patterned aggregates that depend on the mechanical properties of the cells themselves and the rigidity of the surrounding extra-cellular environment. Regular patterns of cell aggregates of this sort were originally proposed in a theoretical model formulated by George Oster, which postulated that alterations in cellular motility and stiffness could give rise to different self-emergent patterns from a uniform field of cells. [ 13 ] This mode of pattern formation may act in tandem with classical reaction-diffusion systems, or independently, to generate patterns in biological development. Turing patterns may also be responsible for the formation of human fingerprints . [ 14 ] As well as in biological organisms, Turing patterns occur in other natural systems, for example the wind patterns formed in sand, the atomic-scale repetitive ripples that can form during the growth of bismuth crystals, and the uneven distribution of matter in galactic discs . [ 15 ] [ 16 ] Although Turing's ideas on morphogenesis and Turing patterns remained dormant for many years, they are now inspirational for much research in mathematical biology . [ 17 ] It is a major theory in developmental biology ; the importance of the Turing model is obvious, as it provides an answer to the fundamental question of morphogenesis: "how is spatial information generated in organisms?". [ 3 ] Turing patterns can also be created in nonlinear optics , as demonstrated by the Lugiato–Lefever equation . A mechanism that has gained increasing attention as a generator of spot- and stripe-like patterns in developmental systems is related to the chemical reaction-diffusion process described by Turing in 1952. This has been schematized in a biological "local autoactivation-lateral inhibition" (LALI) framework by Meinhardt and Gierer. [ 19 ] LALI systems, while formally similar to reaction-diffusion systems, are more suitable for biological applications, since they include cases where the activator and inhibitor terms are mediated by cellular "reactors" rather than simple chemical reactions, [ 20 ] and spatial transport can be mediated by mechanisms in addition to simple diffusion. [ 21 ] These models can be applied to limb formation and teeth development, among other examples. Reaction-diffusion models can be used to forecast the exact location of the tooth cusps in mice and voles based on differences in gene expression patterns. [ 8 ] The model can be used to explain the differences in gene expression between mouse and vole teeth. The signaling center of the tooth, the enamel knot, secretes BMPs, FGFs and Shh.
Shh and FGF inhibit BMP production, while BMP stimulates both the production of more BMPs and the synthesis of their own inhibitors. BMPs also induce epithelial differentiation, while FGFs induce epithelial growth. [ 22 ] The result is a pattern of gene activity that changes as the shape of the tooth changes, and vice versa. Under this model, the large differences between mouse and vole molars can be generated by small changes in the binding constants and diffusion rates of the BMP and Shh proteins. A small increase in the diffusion rate of BMP4 and a stronger binding constant of its inhibitor are sufficient to change the vole pattern of tooth growth into that of the mouse. [ 22 ] [ 23 ] Experiments with the sprouting of chia seeds planted in trays have confirmed Turing's mathematical model. [ 24 ]
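Pattern formation of this kind is easy to reproduce numerically. The sketch below uses the Gray–Scott model, a standard two-species reaction–diffusion system often used as a stand-in for Turing's scheme; it is not the system from the 1952 paper, and the grid size, parameters and step count are illustrative values commonly quoted as producing spot patterns:

```python
# Gray-Scott reaction-diffusion on a periodic grid (explicit Euler,
# time step folded into the coefficients).  u is the substrate, v the
# self-activating species; v diffuses more slowly than u.
import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065   # classic spot-forming values

u = np.ones((n, n))
v = np.zeros((n, n))
mid = slice(n // 2 - 5, n // 2 + 5)       # seed a small square so spots nucleate
u[mid, mid], v[mid, mid] = 0.50, 0.25
u += 0.02 * np.random.random((n, n))      # a little noise breaks the symmetry

def laplacian(a):
    # five-point stencil with wrap-around (periodic) boundaries
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# v now holds a spotted, Turing-like pattern; view it with e.g.
# matplotlib: import matplotlib.pyplot as plt; plt.imshow(v); plt.show()
```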
https://en.wikipedia.org/wiki/Turing_pattern
The greater Turkana Basin in East Africa (mainly northwestern Kenya and southern Ethiopia , with smaller parts in eastern Uganda and southeastern South Sudan ) forms a large endorheic basin , a drainage basin with no outflow, centered on the north–south trending Gregory Rift system in Kenya and southern Ethiopia. The deepest point of the basin is the endorheic Lake Turkana , a brackish soda lake with very high ecological productivity in the Gregory Rift. A narrower definition of the term Turkana Basin is also in widespread use and refers to Lake Turkana and its environs within the confines of the Gregory Rift in Kenya and Ethiopia, including the lower Omo River valley in Ethiopia. The basin in the narrower definition is a site of geological subsidence containing one of the most continuous and temporally well-controlled fossil records of the Plio-Pleistocene , [ 1 ] [ 2 ] with some fossils as old as the Cretaceous . [ 3 ] Among the basin's critical fossiliferous sites are Lothagam , Allia Bay , and Koobi Fora . Lake Turkana sits at the center of the Turkana Basin and is flanked by the Chalbi Desert to the east, the Lotakipi Plains to the north, Karasuk to the west and Samburu to the south. [ 4 ] Included within these regions are desert scrub, desert grass and shrubland, and scattered acacia or open grasslands. [ 4 ] The only true perennial river is the Omo River in Ethiopia, in the northern part of the basin, which discharges into the lake on its northern shore and supplies the lake with more than 98% of its annual water inflow. The two intermittent rivers, which almost alone contribute the remaining 2% of water inflow, are the Turkwel River and the Kerio River in Kenya, in the western part of the basin. [ 5 ] Much of the Turkana Basin today can be described as arid scrubland or even desert; the exception is the Omo- Gibe River valley to the north. Important towns within the Turkana Basin include Lokitaung, Kakuma, Lodwar, Lorogumu, Ileret and Kargi. The Turkana people inhabit the west of the basin, the Samburu and Pokot people the south, and the Nyangatom , Daasanach and Borana Oromo peoples the north and east. [ 4 ] The oldest sedimentary records go back to the Cretaceous , including units previously informally referred to as the Turkana grits, such as the Lapurr Sandstone, and are dominated by eastward-flowing fluvial sequences draining into the Indian Ocean; [ 3 ] later formations from the Oligocene and Miocene are characterised by similar fluvial regimes that are not, however, unified under a single geological group or system . [ 6 ] [ 7 ] Approximately 4.2 million years ago (Ma), the region experienced widespread and significant volcanism , associated with the Gombe basalts in the Koobi Fora formation to the east and with the Lothagam basalts further south; this event created a lake in the center of the basin and apparently established the modern, continuous depositional system of the Turkana Basin. [ 1 ] Deposition in the Turkana Basin overall is driven primarily by subsidence , a result of rifting between the Somali and Nubian plates that has created a series of horst and graben structures and has led to approximately 1 km of sedimentary deposits at the center of the basin every 1 million years.
Sedimentary records , which become sparser and more discontinuous at greater distance from the basin center, suggest that the basin has alternated between fluvial and lacustrine regimes throughout the Plio-Pleistocene , primarily as a result of continued volcanic activity first to the east, and later to the south, of the basin. [ 8 ] Fossil records in the basin help establish much of what is known about African faunal evolution in the Neogene and Quaternary . [ 9 ] As in other regions, the end-Miocene Messinian aridification crisis and the global cooling trend seem to have influenced fossil assemblages in the Turkana Basin, either through migrations or de novo evolutionary events . [ 10 ] Fossilized leaves characteristic of more mesic landscapes, faunal community compositions, and an increased " C4 " (arid-adapted) plant contribution to herbivore carbon intake all suggest that the Miocene world was more lush than the Pliocene . [ 11 ] Some herbivores, like horses , responded rapidly to the spread of C4 grasslands , while other herbivores evolved more slowly, or developed a number of different responses to an increasingly arid landscape. [ 12 ] Evolutionary studies of the Turkana Basin have found what may be major intervals of faunal turnover after the Miocene as well, most notably in the late Pliocene and early Pleistocene, [ 13 ] [ 14 ] though later studies have suggested more gradual changes in herbivore community composition throughout this interval. [ 15 ] One reason for the focus on the late Pliocene and early Pleistocene is the large literature on hominin fossil remains showing an apparent " adaptive radiation " across this boundary. While earlier hominin species are considered to be part of a single, continuously evolving " anagenetic " lineage, [ 16 ] hominin fossil remains become extraordinarily diverse in East Africa 2.5 million years ago, with numerous species of robust australopithecines and early human ancestors found first in the Turkana Basin, and ultimately in South Africa as well. The earliest putative evidence for stone tool use among human ancestors is found within the Turkana Basin. [ 17 ]
https://en.wikipedia.org/wiki/Turkana_Basin
The Turkish Space Agency ( Turkish : Türkiye Uzay Ajansı , TUA ) is a government agency for national aerospace research, part of the space program of Turkey . It was formally established by a presidential decree on 13 December 2018. [ 3 ] [ 4 ] Headquartered in Ankara , [ 5 ] the agency is subordinate to the Ministry of Industry and Technology . With the establishment of TUA, the Department for Aviation and Space Technologies at the Ministry of Transportation and Infrastructure was abolished. TUA prepares strategic plans that include medium- and long-term goals, basic principles and approaches, objectives and priorities, performance measures, methods to be followed, and resource allocation for aerospace science and technologies. [ 6 ] TUA works in close collaboration with the TÜBİTAK Space Technologies Research Institute (TÜBİTAK UZAY). It is administered by an executive board of seven members; the tenure of board members, the chairperson excluded, is three years. [ 7 ] [ 6 ] Internationally, TUA currently has agreements with the space programs of Ukraine , Hungary and Kazakhstan , and states that it has been conducting extensive nationwide assessments regarding membership in ESA since 2020, as part of Turkey's 2004 cooperation agreement with that agency. [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Turkish_Space_Agency
In aviation , the turn and slip indicator ( T/S , a.k.a. turn indicator and turn and bank indicator ) and the turn coordinator (TC) variant are essentially two aircraft flight instruments in one device. One part indicates the rate of turn, or the rate of change in the aircraft's heading; the other part indicates whether the aircraft is in coordinated flight , showing the slip or skid of the turn. The slip indicator is actually an inclinometer that at rest displays the angle of the aircraft's transverse axis with respect to horizontal, and in motion displays this angle as modified by the acceleration of the aircraft. [ 1 ] The most commonly used units are degrees per second (deg/s) or minutes per turn (min/tr). The turn and slip indicator can be referred to as the turn and bank indicator, although the instrument does not respond directly to bank angle. Neither does the turn coordinator, but it does respond to roll rate, which enables it to respond more quickly to the start of a turn. [ 2 ] The turn indicator is a gyroscopic instrument that works on the principle of precession . The gyro is mounted in a gimbal . The gyro's rotational axis is in line with the lateral (pitch) axis of the aircraft, while the gimbal has limited freedom around the longitudinal (roll) axis of the aircraft. As the aircraft yaws , a torque is applied to the gyro around the vertical axis, which causes gyro precession around the roll axis; the gyro spins on an axis that is 90 degrees relative to the direction of the applied yaw torque. The gyro and gimbal rotate (around the roll axis) with limited freedom against a calibrated spring. The torque against the spring reaches an equilibrium, and the angle at which the gimbal and gyro settle is coupled directly to the display needle, thereby indicating the rate of turn. [ 3 ] In the turn coordinator , the gyro is canted 30 degrees from the horizontal so it responds to roll as well as yaw. The display contains hash marks for the pilot's reference during a turn. When the needle is lined up with a hash mark, the aircraft is performing a "standard rate turn", which is defined as three degrees per second and known in some countries as "rate one". This translates to two minutes per 360 degrees of turn (a complete circle). Indicators are marked as to their sensitivity, [ 4 ] with "2 min turn" for those whose hash marks correspond to a standard rate or two-minute turn, and "4 min turn" for those, used in faster aircraft, that show a half standard rate or four-minute turn. The supersonic Concorde jet aircraft and many military jets are examples of aircraft that use 4 min. turn indicators. The hash marks are sometimes called "dog houses", because of their distinct shape on various makes of turn indicators. Under instrument flight rules , using these figures allows a pilot to perform timed turns in order to conform with the required air traffic patterns. For a change of heading of 90 degrees, a turn lasting 30 seconds would be required to perform a standard rate or "rate one" turn (see the short worked sketch below). Coordinated flight indication is obtained by using an inclinometer , which is recognized as the "ball in a tube". An inclinometer contains a ball sealed inside a curved glass tube, which also contains a liquid to act as a damping medium. The original form of the indicator is in effect a spirit level with the tube curved in the opposite direction and a bubble standing in for the ball. [ 5 ]
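The timed-turn arithmetic above reduces to a single division; the helper below is a purely illustrative sketch of the rule of thumb:

```python
# Time needed for a heading change at a given turn rate.
# Standard ("rate one") is 3 deg/s; half standard rate is 1.5 deg/s.

def turn_time_seconds(heading_change_deg: float, rate_deg_per_s: float = 3.0) -> float:
    return heading_change_deg / rate_deg_per_s

print(turn_time_seconds(90))        # 30.0 s: a 90-degree standard rate turn
print(turn_time_seconds(360))       # 120.0 s: the full "2 minute turn"
print(turn_time_seconds(90, 1.5))   # 60.0 s on a "4 min turn" indicator
```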
In some early aircraft the indicator was merely a pendulum with a dashpot for damping. The ball gives an indication of whether the aircraft is slipping, skidding or in coordinated flight. The ball's movement is caused by the force of gravity and the aircraft's centripetal acceleration. When the ball is centered in the middle of the tube, the aircraft is said to be in coordinated flight. If the ball is on the inside (wing-down side) of a turn, the aircraft is slipping. When the ball is on the outside (wing-up side) of the turn, the aircraft is skidding. A simple alternative to the balance indicator used on gliders is a yaw string , which allows the pilot to simply view the string's movements as a rudimentary indication of aircraft balance. The turn coordinator (TC) is a further development of the turn and slip indicator (T/S), with the major differences being the display and the axis upon which the gimbal is mounted. The display is that of a miniature airplane as seen from behind, which looks similar to that of an attitude indicator. [ 6 ] "NO PITCH INFORMATION" is usually written on the instrument to avoid confusion regarding the aircraft's pitch, which can be obtained from the attitude indicator . In contrast to the T/S, the TC's gimbal is pitched up 30 degrees from the transverse axis. This causes the instrument to respond to roll as well as yaw [ 6 ] and allows the instrument to display a change more quickly, as it will react to the change in roll before the aircraft has even begun to yaw. Although this instrument reacts to changes in the aircraft's roll, it does not display the roll attitude. The turn coordinator may be used as a performance instrument when the attitude indicator has failed. This is called "partial panel" operations. It can be unnecessarily difficult or even impossible if the pilot does not understand that the instrument is showing roll rates as well as turn rates. The usefulness is also impaired if the internal dashpot is worn out. In the latter case, the instrument is underdamped and in turbulence will indicate large full-scale deflections to the left and right, all of which are actually roll rate responses. Slipping and skidding within a turn is sometimes referred to as a sloppy turn, due to the perceptible discomfort it can cause to the pilot and passengers. When the aircraft is in a balanced turn (ball centered), passengers experience the apparent gravity directly in line with their seat (force perpendicular to the seat). In a well-balanced turn, passengers may not even realize the aircraft is turning unless they are viewing objects outside the aircraft. While slipping and skidding are usually undesired in a turn that maintains altitude, slipping the aircraft can be used for practical purposes. The two intentional slip maneuvers are the forward slip and the sideslip . These slips are performed by applying opposite inputs of the aileron and rudder controls. A forward slip allows a pilot to quickly drop altitude without gaining unnecessary speed, while a sideslip is one method utilized to perform a crosswind landing . Although the turn and slip indicator (and later the turn coordinator) was long considered a necessary and required instrument for flight under instrument flight rules , the Federal Aviation Administration (FAA) has more recently decided that these instruments are obsolete in today's flight environment. Advisory Circular No.
91-75, issued on 6/25/2003, states the following: [section 5 b] "...in today's air traffic control system, there is little need for precisely measured standard rate turns or timed turns based on standard rate." The Advisory Circular further states: "...the FAA believes, and all other commenters apparently agree...the rate-of-turn indicator is no longer as useful as an instrument which gives both horizontal and vertical attitude information." Thus one can now legally replace a turn-and-slip or turn coordinator instrument with a second attitude indicator, preferably driven by a system different from the primary flight display. So if the aircraft's primary display is vacuum powered, the second attitude indicator should be electric, and vice versa. This gives more flight information than the rate-of-turn indicator and provides a safety measure of redundancy of systems. The slip indicator (the "ball") is still required. The slip indicator may be mounted separately in the panel, or some attitude indicators now include a slip indicator in their display.
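The timed-turn arithmetic described above reduces to a single division. As a minimal illustration (not any avionics standard), the Python sketch below encodes the 3 deg/s standard rate and the half-rate marking of "4 min turn" instruments:

def turn_time_seconds(heading_change_deg, rate_deg_per_s=3.0):
    """Seconds needed to change heading by the given amount at a constant turn rate."""
    return heading_change_deg / rate_deg_per_s

# A 90-degree heading change at standard rate ("rate one") takes 30 seconds:
assert turn_time_seconds(90) == 30.0
# On a "4 min turn" instrument the hash marks show half standard rate (1.5 deg/s),
# so the same 90-degree change takes a full minute:
assert turn_time_seconds(90, rate_deg_per_s=1.5) == 60.0

At standard rate a full 360-degree circle takes 120 seconds, matching the "two minutes per turn" sensitivity marking.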
https://en.wikipedia.org/wiki/Turn_and_slip_indicator
A routing algorithm decides the path followed by a packet from the source to destination routers in a network . An important aspect to be considered while designing a routing algorithm is avoiding a deadlock . Turn restriction routing [ 1 ] is a routing algorithm for the mesh family of topologies which avoids deadlocks by restricting the types of turns that are allowed while determining the route from source node to destination node in a network. A deadlock (shown in fig 1) is a situation in which no further transportation of packets can take place due to the saturation of network resources like buffers or links . The main reason for a deadlock is the cyclic acquisition of channels in the network. [ 2 ] For example, consider a network with four channels. Four packets have filled up the input buffers of these four channels and need to be forwarded to the next channel. Now assume that the output buffers of all these channels are also filled with packets that need to be transmitted to the next channel. If these four channels form a cycle, it is impossible to transmit packets any further because the output buffers and input buffers of all channels are already full. This is known as cyclic acquisition of channels, and it results in a deadlock. Deadlocks can either be detected and broken, or avoided from happening altogether. Detecting and breaking deadlocks in the network is expensive in terms of latency and resources, so an easy and inexpensive solution is to avoid deadlocks by choosing routing techniques that prevent cyclic acquisition of channels. [ 3 ] The logic behind turn restriction routing derives from a key observation: a cyclic acquisition of channels can take place only if all four possible clockwise (or anti-clockwise) turns have occurred. This means deadlocks can be avoided by prohibiting at least one of the clockwise turns and one of the anti-clockwise turns. All the clockwise and anti-clockwise turns that are possible in an unrestricted routing algorithm are shown in fig 2. A turn restriction routing can be obtained by prohibiting at least one of the four possible clockwise turns and at least one of the four possible anti-clockwise turns in the routing algorithm. This means there are at least 16 (4×4) possible turn restriction routing techniques, since there are 4 clockwise turns and 4 anti-clockwise turns to choose from. Some of these techniques are listed below. Dimension-ordered (X-Y) routing [ 1 ] (shown in fig 3) restricts all turns from the y-dimension to the x-dimension. This prohibits two anti-clockwise and two clockwise turns, which is more than what is actually required; even so, since it restricts the set of allowed turns, it is an example of turn restriction routing. West-first routing [ 1 ] (shown in fig 4) restricts all turns to the west direction. This means any hops to the west must be taken first, if needed, in the proposed route. North-last routing [ 1 ] (shown in fig 5) restricts turning to any other direction if the current direction is north. This means the north direction should be taken last, if needed, in the proposed route. Negative-first routing [ 1 ] (shown in fig 6) restricts turning to a negative direction while the current direction is positive. West is considered the negative direction in the X-dimension and south the negative direction in the Y-dimension. This means any hop in one of the negative directions should be taken before taking any other turn. For example, consider figure 7 below.
Say there are multiple routers, F1, F2 etc., that feed packets to a congested but low-cost link from source router S to destination router D. Implementing turn restriction routing means that some of the turns from any of the feeder routers to the congested router S may now be restricted. Those feeder routers may have to use a longer path to get to destination D, thereby decongesting the link from S to D to an extent.
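To make the dimension-ordered (X-Y) technique concrete, here is a minimal Python sketch; the 2-D mesh model and the coordinate naming are assumptions of this example, not details from the cited papers. Because all x-dimension hops are completed before any y-dimension hop, a turn from the y-dimension back into the x-dimension can never occur, which is exactly the restriction that makes the algorithm deadlock-free.

def xy_route(src, dst):
    """Dimension-ordered route on a 2-D mesh: resolve X first, then Y.
    Returns the list of (x, y) nodes visited from src to dst, inclusive."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                    # all X-dimension hops first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                    # then all Y-dimension hops
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]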
https://en.wikipedia.org/wiki/Turn_restriction_routing
A turnaround ( TAR ) is a scheduled event wherein an entire process unit of an industrial plant, such as a refinery , petrochemical plant, power plant , or paper mill , is taken offstream for an extended period for work to be carried out. Turnaround is a blanket term that encompasses more specific terms such as I&T (inspection and testing) and maintenance . Turnaround can also be used as a synonym of downtime . Related terms are shutdowns and outages, [ 1 ] sometimes written together as Turnarounds, Shutdowns, and Outages (TSO). [ 2 ] Turnarounds are expensive, both in terms of lost production while the process unit is offline and in terms of the direct costs of the labour , tools, heavy equipment and materials used to execute the project. They are the most significant portion of an industrial plant's yearly maintenance budget and can affect the company's bottom line if mismanaged. [ 3 ] Turnarounds have unique project management characteristics. [ 4 ]
https://en.wikipedia.org/wiki/Turnaround_(refining)
Turndown ratio refers to the width of the operational range of a device, and is defined as the ratio of the maximum capacity to minimum capacity. For example, a device with a maximum output of 10 units and a minimum output of 2 units has a turndown ratio of 5. The term is commonly used with measurement devices and combustion plant such as boilers and gasifiers . In flow measurement , the turndown ratio indicates the range of flow that a flow meter is able to measure with acceptable accuracy. It is also known as rangeability, and it is important when choosing a flow meter technology for a specific application. If a gas flow to be measured is expected to vary between 100,000 m³ per day and 1,000,000 m³ per day, the application requires a turndown ratio of at least 10:1, so the meter requires a turndown ratio of at least 10:1. For example, if the meter had an advertised maximum flow of 2,000,000 m³ per day, then the required turndown ratio would be 20:1. [ citation needed ] The turndown ratio of each type of meter is limited by theoretical and practical considerations. For example, orifice meters create a pressure drop in the measured fluid proportional to the square of the velocity; the range of differential pressure can therefore become too large and compromise accuracy. It can also create process problems such as hydrate formation, and in the case of measuring the discharge of a compressor, there is a limit to how much pressure loss is acceptable. The examples here are for gas flow, but the same meter types can be used on liquids as well, with similar turndown ratios. Note that meter manufacturers state their products' turndown ratios; a specific product may have a turndown ratio that varies from the list below. [ citation needed ] A thermal mass flow meter has a turndown ratio of 1000:1. An orifice plate meter has a practical turndown ratio of 3:1. A turbine meter has a turndown ratio of 10:1. Rotary positive displacement meters have a turndown ratio of between 10:1 and 80:1, depending on the manufacturer and the application. Diaphragm meters are considered to have a turndown ratio of 80:1. Multipath ultrasonic meters often have a stated turndown ratio of 50:1. Boiler turndown ratio is the ratio of maximum heat output to the minimum level of heat output at which the boiler will operate efficiently or controllably. Many boilers are designed to operate at a variety of output levels. As the desired temperature/pressure point is approached, the heat source is progressively turned down; if pressure/temperature falls, the heat source is progressively turned up. If a boiler application requires it to operate at a low proportion of its maximum output, a high turndown ratio is required. Conversely, in applications where the operational conditions are not expected to vary significantly (for example, a large power plant), a low turndown ratio will be sufficient. If the heating plant is only working at a small fraction of its maximum and the turndown ratio is too low, at some point the burner will still need to be shut off when the desired pressure/temperature is achieved. This in turn leads to a rapid reduction in temperature/pressure, requiring the boiler to restart. Cycling frequency can be as high as 12 times per hour. [ 1 ] This is undesirable, as flue gases are purged during both the shut-down and start-up phases, leading to energy losses and therefore inefficiency.
Additionally, typical startup times for boilers are on the order of one to two minutes, leading to an inability to respond to sudden load demands. [ 1 ] Electricity: As there are no combustion losses associated with electricity, nor delays in system startup, it is unusual to have any means of modulating down the energy supply (i.e., the turndown ratio is 1). [ citation needed ] Gas: Gas boilers can be designed for turndown ratios of 10–12 with little to no loss in combustion efficiency, while some gas burners may achieve a ratio of 35. [ 2 ] [ unreliable source? ] However, the typical turndown ratio is 5. [ 3 ] In the search for increased efficiency, even very small gas boilers have modulating burners these days. In practice only boilers with fan-assisted fuel/air circulation will have the modulating feature. The fan also mixes gas and air more thoroughly, achieving more efficient combustion. If the boiler is of the high-efficiency condensing type, high turndown ratios are feasible, and the higher the turndown ratio, the more efficient it will be. Every time a gas/oil boiler stops, it has to be "purged" with cold air to remove any combustible gases that may have accumulated in the boiler before restarting (this is to prevent a possible explosion). This cold air takes heat from the boiler every time this happens. Higher turndown ratios mean fewer stops and starts and hence fewer losses. Oil: Oil-burning boilers can achieve turndown ratios as high as 20, [ 2 ] but typically only 2 to 4 with conventional burner designs. [ 3 ] Small domestic "vaporising" (i.e. burning kerosene or 28-second oil) burners do not modulate at all and are relatively inefficient. Boilers using the pressure jet type of burner, i.e. with a fan (usually with 35-second oil), can achieve a turndown ratio of 2, while the rotary cup type of burner can achieve 4. [ 3 ] Condensing oil boilers are fairly unusual; the condensate from the combustion of oil is far more aggressive than that from gas, mainly due to sulphur content. These days oil companies are reducing the sulphur content of oil on environmental grounds, so this may change. However, due to the problem of mixing the oil and air, turndown ratios of greater than four are uncommon. Coal: These days mechanised coal boilers only occur in large industrial plant, due to the convenience and easy availability of gas. Theoretically coal-burning plant can have quite a high turndown ratio, and in the days of hand-fired coal boilers this was common. On systems where coal is burned on a grate, the turndown ratio must be greater than 1, because a sudden reduction or cessation of the load can leave many tons of burning fuel on the grate. [ citation needed ]
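The meter-sizing arithmetic above is a single ratio; the short Python sketch below merely restates the article's worked example.

def required_turndown(meter_max_flow, min_expected_flow):
    """Turndown ratio a meter must cover: maximum capacity divided by the
    smallest flow that must still be measured with acceptable accuracy."""
    return meter_max_flow / min_expected_flow

# Flow varying between 100,000 and 1,000,000 m3/day needs at least 10:1 ...
print(required_turndown(1_000_000, 100_000))   # 10.0
# ... but a meter advertised at 2,000,000 m3/day must then cover 20:1:
print(required_turndown(2_000_000, 100_000))   # 20.0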
https://en.wikipedia.org/wiki/Turndown_ratio
The versine or versed sine is a trigonometric function found in some of the earliest ( Sanskrit Aryabhatiya , [ 1 ] Section I) trigonometric tables . The versine of an angle is 1 minus its cosine . There are several related functions, most notably the coversine and haversine . The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] or versed sine [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin , sinver , [ 13 ] [ 14 ] vers , or siv . [ 15 ] [ 16 ] In Latin , it is known as the sinus versus (flipped sine), versinus , versus , or sagitta (arrow). [ 17 ] Expressed in terms of the common trigonometric functions sine, cosine, and tangent, the versine is equal to versin θ = 1 − cos θ = 2 sin²(θ/2) = sin θ tan(θ/2). There are several related functions corresponding to the versine. Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation : hav θ = sin²(θ/2) = (1 − cos θ)/2. The ordinary sine function ( see note on etymology ) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine ( sinus versus ). [ 31 ] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle : For a vertical chord AB of the unit circle, the sine of the angle θ (representing half of the subtended angle Δ ) is the distance AC (half of the chord). On the other hand, the versed sine of θ is the distance CD from the center of the chord to the center of the arc. Thus, the sum of cos( θ ) (equal to the length of line OC ) and versin( θ ) (equal to the length of line CD ) is the radius OD (with length 1). Illustrated this way, the sine is vertical ( rectus , literally "straight") while the versine is horizontal ( versus , literally "turned against, out-of-place"); both are distances from C to the circle. This figure also illustrates the reason why the versine was sometimes called the sagitta , Latin for arrow . [ 17 ] [ 30 ] If the arc ADB of the double-angle Δ = 2 θ is viewed as a " bow " and the chord AB as its "string", then the versine CD is clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal", sagitta is also an obsolete synonym for the abscissa (the horizontal axis of a graph). [ 30 ] In 1821, Cauchy used the terms sinus versus ( siv ) for the versine and cosinus versus ( cosiv ) for the coversine. [ 15 ] [ 16 ] [ nb 1 ] As θ goes to zero, versin( θ ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need a very high accuracy to obtain the versine in order to avoid catastrophic cancellation , making separate tables for the latter convenient. [ 12 ] Even with a calculator or computer, round-off errors make it advisable to use the sin² formula for small θ .
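The cancellation effect is easy to demonstrate in double-precision floating-point arithmetic; the Python sketch below compares the naive 1 − cos θ with the mathematically identical 2 sin²(θ/2) form for a very small angle.

import math

def versine_naive(theta):
    return 1.0 - math.cos(theta)             # cancels catastrophically for small theta

def versine_stable(theta):
    return 2.0 * math.sin(theta / 2.0) ** 2  # the sin-squared form recommended above

theta = 1e-8
print(versine_naive(theta))    # 0.0 -- every significant digit was cancelled
print(versine_stable(theta))   # 5e-17 -- correct to full double precision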
Another historical advantage of the versine is that it is always non-negative, so its logarithm is defined everywhere except at the single angle ( θ = 0, 2 π , …) where it is zero; thus, one could use logarithmic tables for multiplications in formulas involving versines. In fact, the earliest surviving table of sine (half- chord ) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhanta of India, dated to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°). [ 31 ] The versine appears as an intermediate step in the application of the half-angle formula sin²(θ/2) = ½ versin(θ), derived by Ptolemy , that was used to construct such tables. The haversine, in particular, was important in navigation because it appears in the haversine formula , which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere ) given angular positions (e.g., longitude and latitude ). One could also use sin²(θ/2) directly, but having a table of the haversine removed the need to compute squares and square roots. [ 12 ] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801. [ 14 ] [ 32 ] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords". [ 33 ] [ 34 ] [ 17 ] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers. ) was coined [ 35 ] by James Inman [ 14 ] [ 36 ] [ 37 ] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation. [ 3 ] [ 35 ] Inman also used the terms nat. versine and nat. vers. for versines. [ 3 ] Other highly regarded tables of haversines were those of Richard Farley in 1856 [ 33 ] [ 38 ] and John Caulfield Hannyngton in 1876. [ 33 ] [ 39 ] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995 [ 40 ] [ 41 ] or in a more compact method for sight reduction since 2014. [ 29 ] While the usage of the versine, coversine and haversine as well as their inverse functions can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin. One period (0 < θ < 2 π ) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including Hann , Hann–Poisson and Tukey windows ), because it smoothly ( continuous in value and slope ) "turns on" from zero to one (for haversine) and back to zero. [ nb 2 ] In these applications, it is named the Hann function or raised-cosine filter . The functions are circular rotations of each other.
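As an illustration of the haversine formula in its modern computational form, here is a minimal Python sketch; the 6371 km mean Earth radius is the usual spherical approximation, which, as noted above, ignores the true spheroid shape.

import math

def hav(theta):
    """hav(theta) = sin^2(theta / 2) = (1 - cos(theta)) / 2."""
    return math.sin(theta / 2.0) ** 2

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (latitude, longitude) points in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = hav(dphi) + math.cos(phi1) * math.cos(phi2) * hav(dlam)
    return 2.0 * radius_km * math.asin(math.sqrt(h))

# Paris to New York, roughly 5,840 km on the spherical model:
print(round(great_circle_km(48.85, 2.35, 40.71, -74.01)))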
Inverse functions like arcversine (arcversin, arcvers, [ 8 ] avers, [ 43 ] [ 44 ] aver), arcvercosine (arcvercosin, arcvercos, avercos, avcs), arccoversine (arccoversin, arccovers, [ 8 ] acovers, [ 43 ] [ 44 ] acvs), arccovercosine (arccovercosin, arccovercos, acovercos, acvc), archaversine (archaversin, archav, haversin⁻¹, [ 45 ] invhav, [ 46 ] [ 47 ] [ 48 ] ahav, [ 43 ] [ 44 ] ahvs, ahv, hav⁻¹ [ 49 ] [ 50 ] ), archavercosine (archavercosin, archavercos, ahvc), archacoversine (archacoversin, ahcv) or archacovercosine (archacovercosin, archacovercos, ahcc) exist as well. These functions can be extended into the complex plane , [ 42 ] [ 19 ] [ 24 ] and Maclaurin series for them are available. [ 24 ] When the versine v is small in comparison to the radius r , it may be approximated from the half-chord length L (the distance AC shown above) by the formula v ≈ L²/(2r). [ 51 ] Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc length s ( AD in the figure above) by the formula s ≈ L + v²/r. This formula was known to the Chinese mathematician Shen Kuo , and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing . [ 52 ] A more accurate approximation used in engineering [ 53 ] is v ≈ s^(3/2) L^(1/2) / (8r). The term versine is also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case. Given a chord between two points in a curve, the perpendicular distance v from the chord to the curve (usually at the chord midpoint) is called a versine measurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In the limit as the chord length L goes to zero, the ratio 8v/L² goes to the instantaneous curvature . This usage is especially common in rail transport , where it describes measurements of the straightness of the rail tracks [ 54 ] and it is the basis of the Hallade method for rail surveying . The term sagitta (often abbreviated sag ) is used similarly in optics , for describing the surfaces of lenses and mirrors .
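The small-versine approximation v ≈ L²/(2r) is simple to check against exact circle geometry; in the sketch below, the 1000 m radius and 10 m half-chord are arbitrary illustrative values of the kind met in rail surveying.

import math

def versine_approx(half_chord, radius):
    """Sagitta/versine from the half-chord length L: v ~ L^2 / (2r)."""
    return half_chord ** 2 / (2.0 * radius)

def versine_exact(half_chord, radius):
    """Exact sagitta of a circular arc: r - sqrt(r^2 - L^2)."""
    return radius - math.sqrt(radius ** 2 - half_chord ** 2)

print(versine_approx(10.0, 1000.0))   # 0.05 m
print(versine_exact(10.0, 1000.0))    # 0.0500012... m -- close agreement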
https://en.wikipedia.org/wiki/Turned_chord
The sclerometer , also known as the Turner-sclerometer (from Ancient Greek : σκληρός meaning "hard"), is an instrument used by metallurgists , material scientists and mineralogists to measure the scratch hardness of materials. It was invented in 1896 by Thomas Turner (1861–1951), the first Professor of Metallurgy in Britain, at the University of Birmingham . The Turner-sclerometer test consists of measuring the load required to make a scratch. [ 1 ] [ 2 ] In the test, a weighted diamond point is drawn, once forward and once backward, over the smooth surface of the material to be tested. The hardness number is the weight in grams required to produce a standard scratch. The scratch selected is one which is just visible to the naked eye as a dark line on a bright reflecting surface. It is also the scratch which can just be felt with the edge of a quill when the latter is drawn over the smooth surface at right angles to a series of such scratches produced by regularly increasing weights.
https://en.wikipedia.org/wiki/Turner-sclerometer
The Turner angle Tu , introduced by Ruddick (1983) [ 2 ] and named after J. Stewart Turner , is a parameter used to describe the local stability of an inviscid water column as it undergoes double-diffusive convection . The temperature and salinity attributes, which generally determine the water density , both respond to the water's vertical structure. By putting these two variables in orthogonal coordinates , the angle with the axis can indicate the relative importance of the two for stability. The Turner angle is defined as [ 1 ] Tu = tan⁻¹( (αθ_z + βS_z), (αθ_z − βS_z) ), where tan⁻¹ is the four-quadrant arctangent; α is the coefficient of thermal expansion ; β is the equivalent coefficient for the addition of salinity, sometimes referred to as the "coefficient of saline contraction"; θ is potential temperature ; S is salinity ; and the subscript z denotes the vertical (upward) gradient. The relation between Tu and stability is as shown. [ 3 ] The Turner angle is related to the density ratio Rρ = αθ_z / (βS_z) by Rρ = −tan(Tu + 45°). Meanwhile, the Turner angle has advantages over the density ratio in several respects. [ 2 ] Nevertheless, the Turner angle is not as directly readable as the density ratio when assessing the separate contributions of thermal and haline stratification ; its strength lies mainly in classification. The Turner angle is usually discussed when researching ocean stratification and double diffusion . The Turner angle assesses vertical stability, indicating how the density of the water column changes with depth. The density is generally related to the potential temperature and salinity profile: the cooler and saltier the water is, the denser it is. When light water overlies dense water, the water column is stably stratified , and the buoyancy force preserves the stable stratification. The Brunt–Väisälä frequency (N) is a measure of stability: if N² > 0, the fluid is stably stratified. A stably-stratified fluid may be doubly stable. For instance, in the ocean, if the temperature decreases with depth (∂θ/∂z > 0) and salinity increases with depth (∂S/∂z < 0), then that part of the ocean is stably stratified with respect to both θ and S. In this state, the Turner angle is between -45° and 45°. However, the fluid column can maintain static stability even if the two attributes have opposite effects on the stability; the effect of one just has to predominate, overwhelming the smaller effect. In this sort of stable stratification, double diffusion occurs. Both attributes diffuse in opposite directions, reducing stability and causing mixing and turbulence . If the slower-diffusing component is the one that is stably stratified, then the vertical gradient will stay smooth. If the faster-diffusing component is the one providing stability, then the interface will develop long "fingers", as diffusion will create pockets of fluid with intermediate attributes, but not intermediate density. In the ocean, heat diffuses faster than salt. If colder, fresher water overlies warmer, saltier water, the salinity structure is stable while the temperature structure is unstable (∂θ/∂z < 0, ∂S/∂z < 0). In these diffusive cases, the Turner angle is -45° to -90°. If warmer, saltier water overlies colder, fresher water (∂θ/∂z > 0, ∂S/∂z > 0), salt fingering can be expected. This is because patchy mixing will create pockets of cold, salty water and pockets of warm, fresh water, and these pockets will sink and rise. In these fingering cases, the Turner angle is 45° to 90°. Since the Turner angle can indicate the thermal and haline properties of the water column, it is used to discuss thermohaline water structures.
For instance, it can be used to define the boundaries of the subarctic front. [ 4 ] The global meridional Turner angle distributions at the surface and at 300 m depth in different seasons were investigated by Tippins & Tomczak (2003), [ 5 ] indicating the overall stability of the ocean over a long time scale. It is worth noting that 300 m depth is deep enough to be beneath the mixed layer during all seasons over most of the subtropics, yet shallow enough to be located entirely in the permanent thermocline , even in the tropics. At the surface, as the temperature and salinity increase from the Subpolar Front towards the subtropics, the Turner angle is positive, while it becomes negative where the meridional salinity gradient is reversed on the equatorial side of the subtropical surface salinity maximum. Tu becomes positive again in the Pacific and Atlantic Oceans near the equator. A band of negative Tu in the South Pacific extends westward along 45°S, produced by the low salinities resulting from heavy rainfall off the southern coast of Chile. At 300 m depth, the distribution is dominated by positive Tu nearly everywhere, with only narrow bands of negative Turner angles. This reflects the shape of the permanent thermocline , which sinks to its greatest depth in the center of the oceanic gyres and then rises again towards the equator, and it also indicates a vertical structure in which both temperature and salinity decrease with depth. Implementations of the Turner angle are available: for Python , in the GSW Oceanographic Toolbox as the function gsw_Turner_Rsubrho ; for R , in the CRAN gsw package as gsw_Turner_Rsubrho ( Home/CRAN/gsw/gsw_Turner_Rsubrho: Turner Angle and Density Ratio ); and for MATLAB , as GSW-Matlab/gsw_Turner_Rsubrho.m .
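A minimal sketch of the Turner angle computation, following the four-quadrant arctangent convention of Ruddick (1983) that the GSW gsw_Turner_Rsubrho routines implement; the gradient values below are illustrative, not real profile data.

import math

def turner_angle_deg(alpha_dthetadz, beta_dSdz):
    """Turner angle (degrees) from the thermal term alpha * dtheta/dz and the
    haline term beta * dS/dz, with z taken positive upward."""
    return math.degrees(math.atan2(alpha_dthetadz + beta_dSdz,
                                   alpha_dthetadz - beta_dSdz))

# Illustrative gradients only:
print(turner_angle_deg( 1e-5, -1e-5))  #   0.0 -> doubly stable (-45 to 45)
print(turner_angle_deg( 3e-5,  1e-5))  #  63.4 -> salt-fingering regime (45 to 90)
print(turner_angle_deg(-1e-5, -3e-5))  # -63.4 -> diffusive regime (-90 to -45)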
https://en.wikipedia.org/wiki/Turner_angle
Turning assistant is an advanced driver-assistance system introduced in 2015. The system monitors oncoming traffic when the car turns across traffic at low speeds, and in critical situations it brakes the car. This is a common scenario at busy city crossings as well as on highways, where the speed limits are higher. [ citation needed ]
https://en.wikipedia.org/wiki/Turning_assistant
The turning radius (alternatively, turning diameter or turning circle ) of a vehicle defines the minimum dimension (typically the radius or diameter ) of available space required for that vehicle to make a semi-circular U-turn without skidding . The Oxford English Dictionary describes the turning circle as "the smallest circle within which a ship, motor vehicle, etc., can be turned round completely". [ 1 ] The term thus refers to a theoretical minimal circle in which, for example, an aeroplane , a ground vehicle or a watercraft can be turned around. The terms ( radius , diameter , or circle ) can have different meanings; refer to the § Alternative nomenclature section. On wheeled vehicles with the common type of front wheel steering (i.e. one, two or even four wheels at the front capable of steering), the vehicle's turning diameter measures the minimum space needed to turn the vehicle around while the steering is set to its maximum displacement from the central 'straight ahead' position, i.e. either extreme left or right. If a marker pen were placed at the point of the vehicle furthest from the center of the turn, the diameter of the circle traced during the turn would define the value of that vehicle's turning diameter. Mathematically, the turning radius is half of the turning diameter. The curb-to-curb turning radius, which considers the chassis and wheels only without body protrusions, can be expressed as a simplified function of the wheelbase , tire width, and steering angle. [ 2 ] Aircraft have a similar minimum turning circle concept, [ 3 ] generally associated with a standard rate turn , in which an aircraft enters a coordinated turn which changes its heading at a rate of 3° per second, or 180° in one minute. In this case, the turning radius depends on the true airspeed v_t: at the fixed 3°-per-second rate a full circle takes two minutes, so the radius grows in proportion to the airspeed (approximately v_t/(60π) nautical miles when v_t is in knots). Turning diameter is sometimes used in everyday language as a generalized term rather than with numerical figures . [ citation needed ] For example, a wheeled vehicle with a very small turning circle may be described as having a "tight turning radius", meaning that it is easier to turn around very tight corners. Wheeled vehicles with four-wheel steering will have a smaller turning radius than vehicles that steer wheels on one axle. Technically, the minimum possible turning circle for a vehicle would be where it does not move either forwards or backwards while turning and simply pivots on its central axis . For a rectangular vehicle capable of doing this, the smallest turning circle would be equal to the diagonal length of the vehicle. As an example, some boats can be turned in this way, generally by using azimuth thrusters . Some wheeled vehicles are designed to spin around their central axis by making all wheels steerable, such as certain lawnmowers and wheelchairs, as they do not follow a circular path as they turn. In this case the vehicle is referred to as a " zero turning radius" vehicle. Some camera dollies used in the film industry have a "round" mode which allows them to spin around their z axis by allowing synchronized inverse rotation of their left and right wheel sets, effectively giving them "zero" turning radius. Many conventionally steerable vehicles (only one axle with steerable wheels) can reverse the direction of travel in a space smaller than the stated turning radius by executing a specialized maneuver, such as a J-turn or similar skid, or in a discontinuous motion such as a three-point turn .
Other terms are sometimes used synonymously for turning diameter, which can lead to confusion. The automotive term turning radius has been used as equivalent and interchangeable with the turning diameter . [ citation needed ] For example, the 2017 Audi A4 is specified by the manufacturer as having a turning diameter (curb-to-curb) of 11.6 m (38 ft). [ 4 ] Mathematically, the radius of a circle is half the diameter, so the correct turning radius in this example would be 11.6 m / 2 = 5.8 m. However, another source lists the turning radius of the same vehicle as also being 11.6 m, [ 5 ] which is the turning diameter. In practice, values of turning diameter tend to be listed more frequently in vehicle specifications, [ citation needed ] so the term turning diameter will therefore be more correct in most cases. The turning diameter will always give a higher number for a given vehicle, and the turning diameter measurement is usually preferred by automotive manufacturers. [ citation needed ] Such mixing of terms can lead to confusion among consumers. The term turning circle is another term also sometimes used synonymously for the turning diameter. Some argue that turning circle is less ambiguous than turning radius, but "turning circle" may introduce its own ambiguities, since the same circle can be defined by multiple measurements, including the radius r , the diameter (d = 2r, twice as big), or the circumference (2πr, about 6.28 times as big). For example, Motor Trend refers to a "curb-to-curb turning circle" of a 2008 Cadillac CTS as 10.82 metres (35.5 ft), but the terminology is not yet settled; AutoChannel.com refers to the "turning radius" of the same car as 10.82 metres (35.5 ft). Turning circle is also sometimes used to refer to the path swept in the manoeuvre, [ citation needed ] i.e. the arc , or the circle's circumference in the case when the manoeuvre makes a complete turn . There are two methods for measuring the vehicle turning diameter, which give slightly different results. These two methods are called wall-to-wall and curb-to-curb (US spelling), or alternatively kerb-to-kerb (UK spelling). The wall-to-wall turning circle is the minimum distance between two walls, both of which exceed the height of the vehicle, in which the vehicle can make a U-turn. The kerb-to-kerb turning circle is the minimum distance between two raised curbs, both of which are lower than the lowest body protrusions, in which the vehicle can make a U-turn. The wall-to-wall turning circle is greater than the kerb-to-kerb measure for the same vehicle because of the front and rear body overhangs. [ 2 ] One can find these two ways of measuring the turning circle used in auto specifications; for example, a van might be listed as having a turning circle (in meters) of 12.1 (C) / 12.4 (W). A curb or curb-to-curb turning circle shows the straight-line distance from one side of the circle to the other, through the center. The name "curb-to-curb" indicates that a street would have to be this wide before the car can make a U-turn without a wheel hitting the street curb. If the curb were built higher, as high as the car, parts of the car (such as the bumper) would hit that wall during the same U-turn. The kerb-to-kerb turning circle can be smaller than the turning circle as it refers to only a partial circle (~180°) with the vehicle alongside one kerb to start with.
To perform a U-turn in a forward direction only, the centre of the turn is not coincident with the centre of the road; thus a complete circle would not be possible (without driving onto the pavement to complete the manoeuvre). It also does not take into account the parts of the vehicle that overhang the wheels, whereas 'turning circle' does. The name wall or wall-to-wall turning circle denotes how far apart two walls would have to be to allow a U-turn without scraping them. Road vehicles must be able to carry out a 360-degree turn on an annulus with an outer radius of 12.5 metres (41 ft) and an inner radius of 5.3 metres (17 ft), measured wall-to-wall. In addition, when entering this annulus, no part of the vehicle may overreach a tangent by more than 80 centimetres (31 in); this tangent is drawn at the outer, 12.5 m limit of the annulus. [ 6 ] [ 7 ] [ 8 ] New Zealand requires that road vehicles be able to perform a 360-degree turn within a circle with a 25 metres (82 ft) diameter, measured wall-to-wall. The only part of the vehicle that may reach over this limitation is collapsible mirrors. [ 9 ]
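The simplified curb-to-curb relation mentioned above can be sketched with a single-track ("bicycle") model. The formula r ≈ wheelbase / sin(steering angle) + tire width / 2 and the numbers below are illustrative assumptions of this sketch, not manufacturer data; real vehicles deviate from it (Ackermann geometry, per-wheel steering angles, body overhangs).

import math

def curb_to_curb_radius_m(wheelbase_m, tire_width_m, steer_angle_deg):
    """Single-track estimate of curb-to-curb turning radius (an assumption of
    this sketch): r ~ wheelbase / sin(steering angle) + tire width / 2."""
    return wheelbase_m / math.sin(math.radians(steer_angle_deg)) + tire_width_m / 2.0

# Illustrative: 2.8 m wheelbase, 0.225 m tires, 35-degree steering lock:
r = curb_to_curb_radius_m(2.8, 0.225, 35.0)
print(round(r, 2), "m radius =", round(2 * r, 2), "m curb-to-curb diameter")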
https://en.wikipedia.org/wiki/Turning_radius
A turnkey , [ 1 ] a turnkey project , or a turnkey operation (also spelled turn-key ) is a type of project that is constructed so that it can be sold to any buyer as a completed product. This is contrasted with build to order , where the constructor builds an item to the buyer's exact specifications, or when an incomplete product is sold with the assumption that the buyer will complete it. A turnkey project or contract as described by Duncan Wallace (1984) is [ 2 ] …. a contract where the essential design emanates from, or is supplied by, the Contractor and not the owner, so that the legal responsibility for the design, suitability and performance of the work after completion will be made to rest … with the contractor …. 'Turnkey' is treated as merely signifying the design responsibility as the contractor's. A turnkey contract is typically a construction contract under which a contractor is employed to plan, design and build a project or an infrastructure and do any other necessary development to make it functional or 'ready to use' at an agreed price and by a fixed date. [ 3 ] In turnkey contracts, most of the time the employer provides the primary design, which the contractor must follow. A turnkey computer system is a complete computer including hardware, operating system and application(s) designed and sold to satisfy specific business requirements. Turnkey refers to something that is ready for immediate use, generally used in the sale or supply of goods or services. The word is a reference to the fact that the customer, upon receiving the product, just needs to turn the ignition key to make it operational, or that the key just needs to be turned over to the customer. [ 4 ] Turnkey is commonly used in the construction industry, for instance, in which it refers to the bundling of materials and labour by the home builder or general contractor to complete the home without owner involvement. The word is often used to describe a home built on the developer's land with the developer's financing, ready for the customer to move in. If a contractor builds a "turnkey home", it frames the structure and finishes the interior; everything is completed down to the cabinets and carpet. Turnkey is also commonly used in motorsports to describe a car being sold with a powertrain (engine, transmission, etc.), in contrast with a vehicle sold without one so that other components may be re-used. Similarly, this term may be used to advertise the sale of an established business, including all the equipment necessary to run it, or by a business-to-business supplier providing complete packages for business start-up. [ 4 ] An example would be the creation of a "turnkey hospital", i.e. building a complete medical facility ready for operation. In manufacturing, the turnkey manufacturing contractor (the business that takes on the turnkey project) normally provides help from the initial design process, through machining and tooling and quality assurance, to production, packaging and delivery. Turnkey manufacturing has advantages in production time savings, a single point of contact, cost savings and price certainty, and quality assurance. [ 1 ] The term turnkey is also often used in the technology industry, most commonly to describe pre-built computer "packages" in which everything needed to perform a certain type of task (e.g. audio editing) is put together by the supplier and sold as a bundle. [ citation needed ] This often includes a computer with pre-installed software, various types of hardware, and accessories.
Such packages are commonly called appliances . A website with ready-made solutions and some configuration options is called a turnkey website. In real estate, turnkey is defined as a home or property that is ready for occupation for its intended purpose, i.e., a home that is fully functional, needs no upgrading or repairs (move-in ready). In commercial use, a building set up to do auto repairs would be defined as turnkey if it came fully stocked with all the machinery and tools needed for that particular trade. [ citation needed ] The turnkey process includes all of the steps involved in opening a location, including site selection, negotiations, space planning, construction coordination and complete installation. "Turnkey real estate" also refers to a type of investment . This process includes the purchase, construction or rehabilitation (of an existing site), the leasing out to tenants, and then the sale of the property to a buyer. The buyer is purchasing an investment property which is producing a stream of income. In drilling , the term indicates an arrangement where a contractor must fully complete a well up to some milestone to receive any payment (in exchange for greater compensation upon completion). [ 5 ]
https://en.wikipedia.org/wiki/Turnkey
Turntable stretch wrappers are a type of automatic and semi-automatic stretch wrapping system. A load is placed on a turntable , which rotates relative to the film roll ; the roll is housed in a carriage attached to a vertical "mast", along which it may move up and down. In the simplest turntable systems, stretch is achieved by rotating the load at a speed at which the take-up demand on the load surface exceeds the rate at which the film is allowed to be fed, the feed rate being limited by a brake system. More sophisticated systems also pre-stretch the film before wrapping by means of fixed or variable gear ratios or other, even more sophisticated means, such as hydraulic ratios, which can better take advantage of film "sweet spots", improving performance and film savings.
https://en.wikipedia.org/wiki/Turntable_stretch_wrapper
A turret lathe is a form of metalworking lathe that is used for repetitive production of duplicate parts, which by the nature of their cutting process are usually interchangeable . It evolved from earlier lathes with the addition of the turret , which is an indexable toolholder that allows multiple cutting operations to be performed, each with a different cutting tool , in easy, rapid succession, with no need for the operator to perform set-up tasks in between (such as installing or uninstalling tools) or to control the toolpath. The latter is due to the toolpath's being controlled by the machine, either in jig -like fashion, via the mechanical limits placed on it by the turret's slide and stops, or via digitally -directed servomechanisms for computer numerical control lathes. The name derives from the way early turrets took the general form of a flattened cylindrical block mounted to the lathe's cross-slide, capable of rotating about the vertical axis and with toolholders projecting out to all sides, and thus vaguely resembled a swiveling gun turret . Capstan lathe is the usual name in the UK and Commonwealth, though the two terms are also used in contrast: see below, Capstan versus turret . Turret lathes became indispensable to the production of interchangeable parts and for mass production. The first turret lathe was built by Stephen Fitch in 1845 to manufacture screws for pistol percussion parts. [ 2 ] In the mid-nineteenth century, the need for interchangeable parts for Colt revolvers enhanced the role of turret lathes in achieving this goal as part of the " American system " of manufacturing arms. Clock-making and bicycle manufacturing had similar requirements. [ 3 ] Christopher Spencer invented the first fully automated turret lathe in 1873, which led to designs using cam action or hydraulic mechanisms. [ 2 ] From the late-19th through mid-20th centuries, turret lathes, both manual and automatic (i.e., screw machines and chuckers), were one of the most important classes of machine tools for mass production . They were used extensively in the mass production for the war effort in World War II. [ 4 ] There are many variants of the turret lathe. They can be most generally classified by size (small, medium, or large); method of control (manual, automated mechanically, or automated via computer (numerical control (NC) or computer numerical control (CNC)); and bed orientation (horizontal or vertical). In the late 1830s a "capstan lathe" with a turret was patented in Britain. [ 5 ] The first American turret lathe was invented by Stephen Fitch in 1845. [ 6 ] The archetypical turret lathe, and the first in order of historical appearance, is the horizontal-bed, manual turret lathe. The term "turret lathe" without further qualification is still understood to refer to this type. The formative decades for this class of machine were the 1840s through 1860s, when the basic idea of mounting an indexable turret on a bench lathe or engine lathe was born, developed, and disseminated from the originating shops to many other factories. Some important tool-builders in this development were Stephen Fitch; Gay, Silver & Co.; Elisha K. Root of Colt ; J.D. Alvord of the Sharps Armory ; Frederick W. Howe, Richard S. Lawrence, and Henry D. Stone of Robbins & Lawrence; J.R. Brown of Brown & Sharpe ; and Francis A. Pratt of Pratt & Whitney . [ 7 ] Various designers at these and other firms later made further refinements. 
Sometimes machines similar to those above, but with power feeds and automatic turret-indexing at the end of the return stroke, are called "semi-automatic turret lathes". This nomenclature distinction is blurry and not consistently observed. The term "turret lathe" encompasses them all. During the 1860s, when semi-automatic turret lathes were developed, [ 6 ] they were sometimes called "automatic". What we today would call "automatics", that is, fully automatic machines, had not been developed yet. During that era both manual and semi-automatic turret lathes were sometimes called "screw machines", although we today reserve that term for fully automatic machines. [ 8 ] During the 1870s through 1890s, the mechanically automated "automatic" turret lathe was developed and disseminated. These machines can execute many part-cutting cycles without human intervention. Thus the duties of the operator, which were already greatly reduced by the manual turret lathe, were even further reduced, and productivity increased. These machines use cams to automate the sliding and indexing of the turret and the opening and closing of the chuck . Thus, they execute the part-cutting cycle somewhat analogously to the way in which an elaborate cuckoo clock performs an automated theater show. Small- to medium-sized automatic turret lathes are usually called " screw machines " or "automatic screw machines", while larger ones are usually called "automatic chucking lathes", "automatic chuckers", or "chuckers". [ citation needed ] Machine tools of the "automatic" variety, which in the pre-computer era meant mechanically automated, had already reached a highly advanced state by World War I . [ citation needed ] When World War II ended, the digital computer was poised to develop from a colossal laboratory curiosity into a practical technology that could begin to disseminate into business and industry. The advent of computer-based automation in machine tools via numerical control (NC) and then computer numerical control (CNC) displaced to a large extent, but not at all completely, the previously existing manual and mechanically automated machines. Numerically controlled turrets allow automated selection of tools on a turret. [ 9 ] CNC lathes may be horizontal or vertical in orientation and mount six separate tools on one or more turrets. [ 10 ] Such machine tools can work in two axes per turret, with up to six axes being feasible for complex work. [ 10 ] Vertical turret lathes have the workpiece held vertically, which allows the headstock to sit on the floor and the faceplate to become a horizontal rotating table, analogous to a huge potter's wheel . This is useful for the handling of very large, heavy, short workpieces. Vertical lathes in general are also called "vertical boring mills" or often simply "boring mills"; therefore a vertical turret lathe is a vertical boring mill equipped with a turret. [ 9 ] The term "capstan lathe" overlaps in sense with the term "turret lathe" to a large extent. In many times and places, it has been understood to be synonymous with "turret lathe". In other times and places it has been held in technical contradistinction to "turret lathe", with the difference being in whether the turret's slide is fixed to the bed (ram-type turret) or slides on the bed's ways (saddle-type turret). [ 11 ] [ 12 ] The difference in terminology is mostly a matter of United Kingdom and Commonwealth usage versus United States usage. [ 7 ] A subtype of horizontal turret lathe is the flat-turret lathe. 
Its turret is flat (and analogous to a rotary table ), allowing the turret to pass beneath the part. Patented by James Hartness of Jones & Lamson, and first disseminated in the 1890s, it was developed to provide more rigidity via requiring less overhang in the tool setup, especially when the part is relatively long. [ 13 ] Hollow-hexagon turret lathes competed with flat-turret lathes by taking the conventional hexagon turret and making it hollow, allowing the part to pass into it during the cut, analogously to how the part would pass over the flat turret. In both cases, the main idea is to increase rigidity by allowing a relatively long part to be turned without the tool overhang that would be needed with a conventional turret, which is not flat or hollow. [ 14 ] The term "monitor lathe" formerly (1860s–1940s) referred to the class of small- to medium-sized manual turret lathes used on relatively small work. The name was inspired by the monitor-class warships , which the monitor lathe's turret resembled. Today, lathes of such appearance, such as the Hardinge DSM-59 and its many clones, are still common, but the name "monitor lathe" is no longer current in the industry. [ 8 ] Turrets can be added to non-turret lathes (bench lathes, engine lathes, toolroom lathes, etc.) by mounting them on the toolpost, tailstock, or both. Often these turrets are not as large as a turret lathe's, and they usually do not offer the sliding and stopping that a turret lathe's turret does; but they do offer the ability to index through successive tool settings.
https://en.wikipedia.org/wiki/Turret_lathe
In mathematics, Turán's inequalities are some inequalities for Legendre polynomials found by Pál Turán ( 1950 ) (and first published by Szegö (1948) ). There are many generalizations to other polynomials, often called Turán's inequalities, given by (E. F. Beckenbach, W. Seidel & Otto Szász 1951 ) and other authors. If P_n is the n th Legendre polynomial , Turán's inequalities state that P_n(x)² > P_{n−1}(x) P_{n+1}(x) for −1 < x < 1. For H_n, the n th Hermite polynomial , Turán's inequalities are H_n(x)² − H_{n−1}(x) H_{n+1}(x) > 0 for all real x, whilst for the Chebyshev polynomials T_n they are T_n(x)² − T_{n−1}(x) T_{n+1}(x) > 0 for −1 < x < 1.
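The Legendre case is easy to check numerically; the Python sketch below evaluates P_n(x)² − P_{n−1}(x) P_{n+1}(x) on a grid inside (−1, 1) using numpy.polynomial (equality holds only at the endpoints x = ±1, which are excluded here).

import numpy as np
from numpy.polynomial import legendre

def turan_gap(n, x):
    """P_n(x)^2 - P_{n-1}(x) * P_{n+1}(x), asserted positive on (-1, 1)."""
    P = lambda k, t: legendre.legval(t, [0] * k + [1])  # evaluate P_k at t
    return P(n, x) ** 2 - P(n - 1, x) * P(n + 1, x)

x = np.linspace(-0.999, 0.999, 2001)
for n in range(1, 8):
    assert np.all(turan_gap(n, x) > 0)
print("Turán's inequality verified numerically for n = 1..7")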
https://en.wikipedia.org/wiki/Turán's_inequalities
In graph theory , Turán's theorem bounds the number of edges that can be included in an undirected graph that does not have a complete subgraph of a given size. It is one of the central results of extremal graph theory , an area studying the largest or smallest graphs with given properties, and is a special case of the forbidden subgraph problem on the maximum number of edges in a graph that does not have a given subgraph. An example of an n-vertex graph that does not contain any (r + 1)-vertex clique K_{r+1} may be formed by partitioning the set of n vertices into r parts of equal or nearly equal size, and connecting two vertices by an edge whenever they belong to two different parts. The resulting graph is the Turán graph T(n, r). Turán's theorem states that the Turán graph has the largest number of edges among all K_{r+1}-free n-vertex graphs. Turán's theorem, and the Turán graphs giving its extreme case, were first described and studied by Hungarian mathematician Pál Turán in 1941. [ 1 ] The special case of the theorem for triangle-free graphs is known as Mantel's theorem ; it was stated in 1907 by Willem Mantel, a Dutch mathematician. [ 2 ] Turán's theorem states that every graph G with n vertices that does not contain K_{r+1} as a subgraph has at most as many edges as the Turán graph T(n, r). For a fixed value of r, this graph has (1 − 1/r + o(1)) n²/2 edges, using little-o notation . Intuitively, this means that as n gets larger, the fraction of edges included in T(n, r) gets closer and closer to 1 − 1/r. Many of the following proofs only give the upper bound of (1 − 1/r) n²/2. [ 3 ] Aigner & Ziegler (2018) list five different proofs of Turán's theorem. [ 3 ] Many of the proofs involve reducing to the case where the graph is a complete multipartite graph , and showing that the number of edges is maximized when there are r parts of size as close as possible to equal. The first was Turán's original proof: take a K_{r+1}-free graph on n vertices with the maximal number of edges, find a K_r (which exists by maximality), and partition the vertices into the set A of the r vertices in the K_r and the set B of the n − r other vertices. Now, one can bound from above the edges within A, the edges within B, and the edges between A and B; adding these bounds gives the result. [ 1 ] [ 3 ] The second proof is due to Paul Erdős . Take the vertex v of largest degree. Consider the set A of vertices not adjacent to v and the set B of vertices adjacent to v. Now, delete all edges within A and draw all edges between A and B. This does not decrease the number of edges, by our maximality assumption, and keeps the graph K_{r+1}-free. Now, B is K_r-free, so the same argument can be repeated on B.
Repeating this argument eventually produces a graph in the same form as a Turán graph, which is a collection of independent sets, with edges between each two vertices from different independent sets. A simple calculation shows that the number of edges of this graph is maximized when all independent set sizes are as close to equal as possible. [3][4] This proof, as well as the Zykov symmetrization proof, involves reducing to the case where the graph is a complete multipartite graph, and showing that the number of edges is maximized when there are r independent sets of size as close as possible to equal. This step can be done as follows: Let $S_{1},S_{2},\ldots ,S_{r}$ be the independent sets of the multipartite graph. Since two vertices have an edge between them if and only if they are not in the same independent set, the number of edges is $\sum_{i\neq j}|S_{i}||S_{j}|=\tfrac{1}{2}\left(n^{2}-\sum_{i}|S_{i}|^{2}\right),$ where the left-hand side follows from direct counting and the right-hand side from complementary counting. To show the $\left(1-\tfrac{1}{r}\right)\tfrac{n^{2}}{2}$ bound, it suffices to apply the Cauchy–Schwarz inequality to the $\sum_{i}|S_{i}|^{2}$ term on the right-hand side, since $\sum_{i}|S_{i}|=n$. To prove that the Turán graph is optimal, one can argue that no two $S_{i}$ differ by more than one in size. In particular, supposing that $|S_{i}|\geq |S_{j}|+2$ for some $i\neq j$, moving one vertex from $S_{i}$ to $S_{j}$ (and adjusting edges accordingly) would increase the number of edges. This can be seen by examining the changes to either side of the above expression for the number of edges, or by noting that the degree of the moved vertex increases. The next proof is due to Motzkin & Straus (1965). They begin by considering a $K_{r+1}$-free graph with vertices labelled $1,2,\ldots ,n$, and considering maximizing the function $f(x_{1},x_{2},\ldots ,x_{n})=\sum_{i,j\ \text{adjacent}}x_{i}x_{j}$ over all nonnegative $x_{1},x_{2},\ldots ,x_{n}$ with sum 1. This function is known as the Lagrangian of the graph and its edges. The idea behind their proof is that if $x_{i},x_{j}$ are both nonzero while i, j are not adjacent in the graph, the function $f(x_{1},\ldots ,x_{i}-t,\ldots ,x_{j}+t,\ldots ,x_{n})$ is linear in t. Hence, one can replace $(x_{i},x_{j})$ with either $(x_{i}+x_{j},0)$ or $(0,x_{i}+x_{j})$ without decreasing the value of the function. Hence, there is a point with at most r nonzero variables where the function is maximized. Now, the Cauchy–Schwarz inequality gives that the maximal value is at most $\tfrac{1}{2}\left(1-\tfrac{1}{r}\right)$.
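The identity $|E|=\tfrac{1}{2}(n^{2}-\sum_{i}|S_{i}|^{2})$ makes the balancing argument easy to check numerically; in this illustrative sketch, rebalancing part sizes toward equality only ever gains edges.

```python
def edges_from_parts(sizes):
    """Complete multipartite edge count via |E| = (n^2 - sum s_i^2) / 2."""
    n = sum(sizes)
    return (n * n - sum(s * s for s in sizes)) // 2

# Moving a vertex from a larger part to a smaller one gains edges:
print(edges_from_parts([12, 9, 9]))    # 297
print(edges_from_parts([11, 10, 9]))   # 299
print(edges_from_parts([10, 10, 10]))  # 300 -- balanced: the Turan graph
```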
Plugging in $x_{i}=\tfrac{1}{n}$ for all i gives that the maximal value is at least $\tfrac{|E|}{n^{2}}$, giving the desired bound. [3][5] The key claim in the next proof was independently found by Caro and Wei. This proof is due to Noga Alon and Joel Spencer, from their book The Probabilistic Method. The proof shows that every graph with degrees $d_{1},d_{2},\ldots ,d_{n}$ has an independent set of size at least $S=\frac{1}{d_{1}+1}+\frac{1}{d_{2}+1}+\cdots +\frac{1}{d_{n}+1}.$ The proof finds such an independent set as follows: order the vertices uniformly at random, and include a vertex whenever it precedes all of its neighbours in the ordering; the chosen vertices then form an independent set. A vertex of degree d is included with probability $\tfrac{1}{d+1}$, so this process gives an average of S vertices in the chosen set. Applying this fact to the complement graph and bounding the size of the chosen set using the Cauchy–Schwarz inequality proves Turán's theorem. [3] See Method of conditional probabilities § Turán's theorem for more. Aigner and Ziegler call the final one of their five proofs "the most beautiful of them all". Its origins are unclear, but the approach is often referred to as Zykov symmetrization, as it was used in Zykov's proof of a generalization of Turán's theorem. [6] This proof goes by taking a $K_{r+1}$-free graph and applying steps to make it more similar to the Turán graph while increasing its edge count; in outline, whenever two non-adjacent vertices have unequal degrees, the vertex of smaller degree is replaced by a copy of the vertex of larger degree. All of these steps keep the graph $K_{r+1}$-free while never decreasing the number of edges. Once no step applies, non-adjacency forms an equivalence relation, and the equivalence classes show that any maximal graph has the same form as a Turán graph. As in the maximal degree vertex proof, a simple calculation shows that the number of edges is maximized when all independent set sizes are as close to equal as possible. [3] The special case of Turán's theorem for $r=2$ is Mantel's theorem: the maximum number of edges in an n-vertex triangle-free graph is $\lfloor n^{2}/4\rfloor$. [2] In other words, one must delete nearly half of the edges in $K_{n}$ to obtain a triangle-free graph. A strengthened form of Mantel's theorem states that any Hamiltonian graph with at least $n^{2}/4$ edges must either be the complete bipartite graph $K_{n/2,n/2}$ or it must be pancyclic: not only does it contain a triangle, it must also contain cycles of all other possible lengths up to the number of vertices in the graph. [7] Another strengthening of Mantel's theorem states that the edges of every n-vertex graph may be covered by at most $\lfloor n^{2}/4\rfloor$ cliques which are either edges or triangles. As a corollary, the graph's intersection number (the minimum number of cliques needed to cover all its edges) is at most $\lfloor n^{2}/4\rfloor$. [8] No direct analogue of Turán's theorem is known for k-uniform hypergraphs.
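The random-ordering step of the Caro–Wei argument can be simulated directly. The following illustrative Python sketch computes the bound $\sum_{i}1/(d_{i}+1)$ and draws one random independent set by the rule described above:

```python
import random

def caro_wei_bound(adj):
    """Lower bound sum 1/(d_i + 1) on the independence number."""
    return sum(1 / (len(nbrs) + 1) for nbrs in adj.values())

def random_greedy_independent_set(adj):
    """Include a vertex iff it precedes all its neighbours in a uniformly
    random ordering; the chosen vertices form an independent set."""
    order = {v: i for i, v in enumerate(random.sample(list(adj), len(adj)))}
    return {v for v in adj if all(order[v] < order[u] for u in adj[v])}

# 5-cycle: the bound is 5/3, and the true independence number is 2.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(caro_wei_bound(adj))                    # 1.666...
print(len(random_greedy_independent_set(adj)))  # 1 or 2, average 5/3
```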
In fact, in Turán's original paper [1], he asked for the maximum number of hyperedges an n-vertex 3-uniform hypergraph can have without containing the complete 3-uniform hypergraph on 4 vertices, $K_{4}^{(3)}$; this question is still unsolved. The maximum number of hyperedges avoiding a given configuration is known as the extremal number. More precisely and more generally, for a k-uniform hypergraph F, the extremal number of F for n vertices, $\operatorname{ex}(n,F)$, is the maximum number of hyperedges an n-vertex k-uniform hypergraph can have without containing a copy of F. To obtain a cleaner parameter, the Turán density of F is defined by the following limit: $\pi(F)=\lim_{n\to \infty }\frac{\operatorname{ex}(n,F)}{\binom{n}{k}}.$ It is easy to see that $\operatorname{ex}(n,F)/\tbinom{n}{k}$ is a non-increasing sequence in n, and therefore the limit above always converges. With this definition, an approximate answer to Turán's question would determine $\pi(K_{4}^{(3)})$. Turán's theorem shows that the largest number of edges in a $K_{r+1}$-free graph is $\left(1-\tfrac{1}{r}+o(1)\right)\tfrac{n^{2}}{2}$. The Erdős–Stone theorem finds the number of edges up to an $o(n^{2})$ error for all other forbidden graphs: (Erdős–Stone) Suppose H is a graph with chromatic number $\chi(H)$. The largest possible number of edges in an n-vertex graph in which H does not appear as a subgraph is $\left(1-\frac{1}{\chi(H)-1}+o(1)\right)\frac{n^{2}}{2},$ where the $o(1)$ term depends only on H. One can see that the Turán graph $T(n,\chi(H)-1)$ cannot contain any copies of H, so the Turán graph establishes the lower bound. As $K_{r+1}$ has chromatic number $r+1$, Turán's theorem is the special case in which H is $K_{r+1}$. The general question of how many edges can be included in a graph without a copy of some H is the forbidden subgraph problem. Another natural extension of Turán's theorem is the following question: if a graph has no $K_{r+1}$s, how many copies of $K_{a}$ can it have? Turán's theorem is the case $a=2$. Zykov's theorem answers this question: (Zykov's theorem) The graph on n vertices with no $K_{r+1}$s and the largest possible number of $K_{a}$s is the Turán graph $T(n,r)$. This was first shown by Zykov (1949) using Zykov symmetrization. [1][3] Since the Turán graph contains r parts with size around $\tfrac{n}{r}$, the number of $K_{a}$s in $T(n,r)$ is around $\binom{r}{a}\left(\tfrac{n}{r}\right)^{a}$.
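The count of $K_{a}$s in the Turán graph is exactly a sum over choices of a parts; the following illustrative sketch compares the exact count with the $\binom{r}{a}(n/r)^{a}$ approximation quoted above.

```python
from itertools import combinations
from math import comb, prod

def cliques_in_turan(n, r, a):
    """Number of K_a subgraphs of T(n, r): pick a of the r parts,
    then one vertex from each chosen part."""
    sizes = [n // r + (1 if i < n % r else 0) for i in range(r)]
    return sum(prod(c) for c in combinations(sizes, a))

n, r, a = 12, 4, 3
print(cliques_in_turan(n, r, a))   # 108 exactly
print(comb(r, a) * (n / r) ** a)   # 108.0: the C(r,a)(n/r)^a estimate
```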
A paper by Alon and Shikhelman in 2016 gives the following generalization, which is similar to the Erdős–Stone generalization of Turán's theorem: (Alon–Shikhelman, 2016) Let H be a graph with chromatic number $\chi(H)>a$. The largest possible number of $K_{a}$s in a graph with no copy of H is $(1+o(1))\binom{\chi(H)-1}{a}\left(\frac{n}{\chi(H)-1}\right)^{a}.$ [9] As in Erdős–Stone, the Turán graph $T(n,\chi(H)-1)$ attains the desired number of copies of $K_{a}$. Turán's theorem states that if a graph has edge homomorphism density strictly above $1-\tfrac{1}{r-1}$, it contains a nonzero number of $K_{r}$s. One could ask the far more general question: given the edge density of a graph, what can be said about the density of $K_{r}$s? An issue with answering this question is that for a given density, the extremal bound may not be attained by any finite graph, only approached by some infinite sequence of graphs. To deal with this, weighted graphs or graphons are often considered. In particular, graphons contain the limit of any infinite sequence of graphs. For a given edge density d, the construction for the largest $K_{r}$ density is as follows: take a number of vertices N approaching infinity, pick a set of $\sqrt{d}\,N$ of the vertices, and connect two vertices if and only if they both lie in the chosen set. This gives a $K_{r}$ density of $d^{r/2}$. The construction for the smallest $K_{r}$ density is as follows: take a number of vertices approaching infinity, let t be the integer such that $1-\tfrac{1}{t-1}<d\leq 1-\tfrac{1}{t}$, and take a t-partite graph where all parts but the unique smallest part have the same size, with the sizes of the parts chosen such that the total edge density is d. For $d\leq 1-\tfrac{1}{r-1}$, this gives a graph that is $(r-1)$-partite and hence contains no $K_{r}$s. The lower bound was proven by Razborov (2008) [10] for the case of triangles and was later generalized to all cliques by Reiher (2016) [11]. The upper bound is a consequence of the Kruskal–Katona theorem [12].
https://en.wikipedia.org/wiki/Turán's_theorem
The Turán–Kubilius inequality is a mathematical theorem in probabilistic number theory. It is useful for proving results about the normal order of an arithmetic function. [1]:305–308 The theorem was proved in a special case in 1934 by Pál Turán and generalized in 1956 and 1964 by Jonas Kubilius. [1]:316 This formulation is from Tenenbaum. [1]:302 Other formulations are in Narkiewicz [2]:243 and in Cojocaru & Murty. [3]:45–46 Suppose f is an additive complex-valued arithmetic function, and write p for an arbitrary prime and ν for an arbitrary positive integer. Write $A(x)=\sum_{p^{\nu}\leq x}f(p^{\nu})\,p^{-\nu}\left(1-\frac{1}{p}\right)$ and $B(x)^{2}=\sum_{p^{\nu}\leq x}\left|f(p^{\nu})\right|^{2}p^{-\nu}.$ Then there is a function ε(x) that goes to zero when x goes to infinity, and such that for x ≥ 2 we have $\sum_{n\leq x}\left|f(n)-A(x)\right|^{2}\leq (2+\varepsilon(x))\,x\,B(x)^{2}.$ Turán developed the inequality to create a simpler proof of the Hardy–Ramanujan theorem about the normal order of the number ω(n) of distinct prime divisors of an integer n. [1]:316 There is an exposition of Turán's proof in Hardy & Wright, §22.11. [4] Tenenbaum [1]:305–308 gives a proof of the Hardy–Ramanujan theorem using the Turán–Kubilius inequality and states without proof several other applications.
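A numeric sanity check of the inequality as reconstructed above, taking $f=\omega$ (so $f(p^{\nu})=1$ for every prime power): the Python sketch below uses SymPy for the prime enumeration, and the constant 2.5 merely stands in for $2+\varepsilon(x)$, since $\varepsilon(x)$ is not computed explicitly.

```python
from sympy import primerange, primefactors

def omega(n):
    """Number of distinct prime factors of n."""
    return len(primefactors(n))

def turan_kubilius_check(x):
    """Compare both sides of the Turan-Kubilius inequality for f = omega,
    using the formulation given above; 2.5 is an illustrative slack."""
    A = B2 = 0.0
    for p in primerange(2, x + 1):
        pv = p
        while pv <= x:
            A += (1 / pv) * (1 - 1 / p)   # f(p^nu) = 1
            B2 += 1 / pv                  # |f(p^nu)|^2 = 1
            pv *= p
    lhs = sum((omega(n) - A) ** 2 for n in range(1, x + 1))
    return lhs, 2.5 * x * B2              # lhs should be the smaller one

print(turan_kubilius_check(10_000))
```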
https://en.wikipedia.org/wiki/Turán–Kubilius_inequality
Tussock grasses or bunch grasses are a group of grass species in the family Poaceae. They usually grow as singular plants in clumps, tufts, hummocks, or bunches, rather than forming a sod or lawn, in meadows, grasslands, and prairies. As perennial plants, most species live more than one season. Tussock grasses are often found as forage in pastures and ornamental grasses in gardens. [1][2][3] Many species have long roots that may reach two meters (6½ feet) or more into the soil, which can aid slope stabilization, erosion control, and soil porosity for precipitation absorption. Also, their roots can reach moisture more deeply than other grasses and annual plants during seasonal or climatic droughts. The plants provide habitat and food for insects (including Lepidoptera), birds, small animals and larger herbivores, and support beneficial soil mycorrhiza. The leaves supply material, such as for basket weaving, for indigenous peoples and contemporary artists. Tussock and bunch grasses occur in almost any habitat where other grasses are found, including: grasslands, savannas and prairies, wetlands and estuaries, riparian zones, shrublands and scrublands, woodlands and forests, montane and alpine zones, tundra and dunes, and deserts. In western North American wildfires, bunch grasses tend to smolder and not ignite into flames, unlike invasive species of annual grasses that contribute to a fire's spreading. [4]
https://en.wikipedia.org/wiki/Tussock_grass
In mathematics, the Tutte homotopy theorem, introduced by Tutte (1958), generalises the concept of "path" from graphs to matroids, and states roughly that closed paths can be written as compositions of elementary closed paths, so that in some sense they are homotopic to the trivial closed path. A matroid on a set Q is specified by a class M of non-empty subsets of Q, called circuits, such that no element of M contains another, and if X and Y are in M, a is in both X and Y, and b is in X but not in Y, then there is some Z in M containing b but not a and contained in X ∪ Y. The subsets of Q that are unions of circuits are called flats (this is the language used in Tutte's original paper; however, in modern usage the flats of a matroid mean something different). The elements of M are called 0-flats, the minimal non-empty flats that are not 0-flats are called 1-flats, the minimal non-empty flats that are not 0-flats or 1-flats are called 2-flats, and so on. A path is a finite sequence of 0-flats such that any two consecutive elements of the path lie in some 1-flat. An elementary path is one of the form (X, Y, X), or (X, Y, Z, X) with X, Y, Z all lying in some 2-flat. Two paths P and Q such that the last 0-flat of P is the same as the first 0-flat of Q can be composed in the obvious way to give a path PQ. Two paths are called homotopic if one can be obtained from the other by the operations of adding or removing elementary paths inside a path, in other words changing a path PR to PQR or vice versa, where Q is elementary. A weak form of Tutte's homotopy theorem states that any closed path is homotopic to the trivial path. A stronger form states a similar result for paths not meeting certain "convex" subsets.
https://en.wikipedia.org/wiki/Tutte_homotopy_theorem
In graph theory, the Tutte matrix A of a graph G = (V, E) is a matrix used to determine the existence of a perfect matching: that is, a set of edges which is incident with each vertex exactly once. If the set of vertices is $V=\{1,2,\dots ,n\}$ then the Tutte matrix is an n-by-n matrix A with entries $A_{ij}={\begin{cases}x_{ij}&{\text{if }}(i,j)\in E{\text{ and }}i<j\\-x_{ji}&{\text{if }}(i,j)\in E{\text{ and }}i>j\\0&{\text{otherwise,}}\end{cases}}$ where the $x_{ij}$ are indeterminates. The determinant of this skew-symmetric matrix is then a polynomial (in the variables $x_{ij}$, i < j): this coincides with the square of the Pfaffian of the matrix A and is non-zero (as a polynomial) if and only if a perfect matching exists. (This polynomial is not the Tutte polynomial of G.) The Tutte matrix is named after W. T. Tutte, and is a generalisation of the Edmonds matrix for a balanced bipartite graph.
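A minimal sketch of this test in Python with SymPy (function names are our own; vertices are 0-indexed): build the skew-symmetric matrix of indeterminates and check whether its determinant is the zero polynomial.

```python
import sympy

def tutte_matrix(n, edges):
    """Tutte matrix of a graph on vertices 0..n-1."""
    A = sympy.zeros(n, n)
    for (i, j) in edges:
        i, j = min(i, j), max(i, j)
        xij = sympy.Symbol(f"x_{i}_{j}")
        A[i, j] = xij    # indeterminate for edge {i, j}
        A[j, i] = -xij   # skew-symmetry
    return A

def has_perfect_matching(n, edges):
    """G has a perfect matching iff det(Tutte matrix) != 0 as a polynomial."""
    return tutte_matrix(n, edges).det() != 0

# The 4-cycle has a perfect matching; the path on 3 vertices does not.
print(has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(has_perfect_matching(3, [(0, 1), (1, 2)]))                  # False
```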
https://en.wikipedia.org/wiki/Tutte_matrix
The Tutte polynomial, also called the dichromate or the Tutte–Whitney polynomial, is a graph polynomial. It is a polynomial in two variables which plays an important role in graph theory. It is defined for every undirected graph G and contains information about how the graph is connected. It is denoted by $T_{G}$. The importance of this polynomial stems from the information it contains about G. Though originally studied in algebraic graph theory as a generalization of counting problems related to graph coloring and nowhere-zero flow, it contains several other famous specializations from other sciences, such as the Jones polynomial from knot theory and the partition functions of the Potts model from statistical physics. It is also the source of several central computational problems in theoretical computer science. The Tutte polynomial has several equivalent definitions. It is essentially equivalent to Whitney's rank polynomial, Tutte's own dichromatic polynomial and Fortuin–Kasteleyn's random cluster model under simple transformations. It is essentially a generating function for the number of edge sets of a given size and number of connected components, with immediate generalizations to matroids. It is also the most general graph invariant that can be defined by a deletion–contraction recurrence. Several textbooks about graph theory and matroid theory devote entire chapters to it. [1][2][3] Definition. For an undirected graph $G=(V,E)$ one may define the Tutte polynomial as $T_{G}(x,y)=\sum_{A\subseteq E}(x-1)^{k(A)-k(E)}(y-1)^{k(A)+|A|-|V|},$ where $k(A)$ denotes the number of connected components of the graph $(V,A)$. In this definition it is clear that $T_{G}$ is well-defined and a polynomial in x and y. The same definition can be given using slightly different notation by letting $r(A)=|V|-k(A)$ denote the rank of the graph $(V,A)$. Then the Whitney rank generating function is defined as $R_{G}(u,v)=\sum_{A\subseteq E}u^{r(E)-r(A)}v^{|A|-r(A)}.$ The two functions are equivalent under a simple change of variables: $T_{G}(x,y)=R_{G}(x-1,y-1).$ Tutte's dichromatic polynomial $Q_{G}$ is the result of another simple transformation of the same sum. Tutte's original definition of $T_{G}$ is equivalent but less easily stated. For connected G we set $T_{G}(x,y)=\sum_{i,j}t_{ij}x^{i}y^{j},$ where $t_{ij}$ denotes the number of spanning trees of internal activity i and external activity j. A third definition uses a deletion–contraction recurrence. The edge contraction $G/uv$ of graph G is the graph obtained by merging the vertices u and v and removing the edge uv. We write $G-uv$ for the graph where the edge uv is merely removed. Then the Tutte polynomial is defined by the recurrence relation $T_{G}=T_{G-e}+T_{G/e}$ if e is neither a loop nor a bridge, with base case $T_{G}=x^{i}y^{j}$ if G contains i bridges and j loops and no other edges. Especially, $T_{G}=1$ if G contains no edges. The random cluster model from statistical mechanics due to Fortuin & Kasteleyn (1972) provides yet another equivalent definition.
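The deletion–contraction recurrence translates directly into code. The following Python sketch (exponential-time, suitable only for small graphs; names are our own) represents a graph as a list of edge pairs, with a loop written as (u, u), and uses SymPy for the symbolic variables:

```python
import sympy

x, y = sympy.symbols("x y")

def is_bridge(edges, idx):
    """Edge edges[idx] is a bridge iff removing it disconnects its ends."""
    u, v = edges[idx]
    rest = edges[:idx] + edges[idx + 1:]
    seen, stack = {u}, [u]
    while stack:
        w = stack.pop()
        for (a, b) in rest:
            if a == w and b not in seen:
                seen.add(b); stack.append(b)
            elif b == w and a not in seen:
                seen.add(a); stack.append(a)
    return v not in seen

def tutte(edges):
    """Tutte polynomial T_G(x, y) by deletion-contraction."""
    for idx, (u, v) in enumerate(edges):
        if u != v and not is_bridge(edges, idx):
            deleted = edges[:idx] + edges[idx + 1:]
            # Contract e: merge v into u (parallel edges may become loops).
            contracted = [(u if a == v else a, u if b == v else b)
                          for (a, b) in deleted]
            return sympy.expand(tutte(deleted) + tutte(contracted))
    # Base case: only bridges and loops remain.
    bridges = sum(1 for (u, v) in edges if u != v)
    loops = len(edges) - bridges
    return x**bridges * y**loops

print(tutte([(0, 1), (1, 2), (0, 2)]))  # triangle: x**2 + x + y
```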
[4] Its partition sum is equivalent to $T_{G}$ under a simple transformation of variables. [5] The Tutte polynomial factors into connected components: if G is the union of disjoint graphs H and H′ then $T_{G}=T_{H}\cdot T_{H'}.$ If G is planar and $G^{*}$ denotes its dual graph, then $T_{G}(x,y)=T_{G^{*}}(y,x).$ Especially, the chromatic polynomial of a planar graph is the flow polynomial of its dual. Tutte refers to such functions as V-functions. [6] Isomorphic graphs have the same Tutte polynomial, but the converse is not true. For example, the Tutte polynomial of every tree on m edges is $x^{m}$. Tutte polynomials are often given in tabular form by listing the coefficients $t_{ij}$ of $x^{i}y^{j}$ in row i and column j; the Tutte polynomials of the Petersen graph and of the octahedron graph, for example, are commonly presented this way. W. T. Tutte's interest in the deletion–contraction formula started in his undergraduate days at Trinity College, Cambridge, originally motivated by perfect rectangles and spanning trees. He often applied the formula in his research and "wondered if there were other interesting functions of graphs, invariant under isomorphism, with similar recursion formulae." [6] R. M. Foster had already observed that the chromatic polynomial is one such function, and Tutte began to discover more. His original terminology for graph invariants that satisfy the deletion–contraction recursion was W-function, and V-function if multiplicative over components. Tutte writes, "Playing with my W-functions I obtained a two-variable polynomial from which either the chromatic polynomial or the flow-polynomial could be obtained by setting one of the variables equal to zero, and adjusting signs." [6] Tutte called this function the dichromate, as he saw it as a generalization of the chromatic polynomial to two variables, but it is usually referred to as the Tutte polynomial. In Tutte's words, "This may be unfair to Hassler Whitney who knew and used analogous coefficients without bothering to affix them to two variables." (There is "notable confusion" [7] about the terms dichromate and dichromatic polynomial, introduced by Tutte in a different paper, and which differ only slightly.) The generalisation of the Tutte polynomial to matroids was first published by Crapo, though it appears already in Tutte's thesis. [8] Independently of the work in algebraic graph theory, Potts began studying the partition function of certain models in statistical mechanics in 1952. The work by Fortuin and Kasteleyn [9] on the random cluster model, a generalisation of the Potts model, provided a unifying expression that showed the relation to the Tutte polynomial. [8] At various points and lines of the (x, y)-plane, the Tutte polynomial evaluates to quantities that have been studied in their own right in diverse fields of mathematics and physics. Part of the appeal of the Tutte polynomial comes from the unifying framework it provides for analysing these quantities. At $y=0$, the Tutte polynomial specialises to the chromatic polynomial: $\chi_{G}(\lambda )=(-1)^{|V|-k(G)}\,\lambda^{k(G)}\,T_{G}(1-\lambda ,0),$ where $k(G)$ denotes the number of connected components of G.
For integer λ the value of the chromatic polynomial $\chi_{G}(\lambda )$ equals the number of vertex colourings of G using a set of λ colours. It is clear that $\chi_{G}(\lambda )$ does not depend on the set of colours. What is less clear is that it is the evaluation at λ of a polynomial with integer coefficients. To see this, we observe that the chromatic polynomial of an edgeless graph on |V| vertices is $\lambda^{|V|}$, and that for any edge e it satisfies the deletion–contraction recurrence $\chi_{G}(\lambda )=\chi_{G-e}(\lambda )-\chi_{G/e}(\lambda )$. These observations enable us to calculate $\chi_{G}(\lambda )$ by applying a sequence of edge deletions and contractions; but they give no guarantee that a different sequence of deletions and contractions will lead to the same value. The guarantee comes from the fact that $\chi_{G}(\lambda )$ counts something, independently of the recurrence. In particular, $(-1)^{|V|}\chi_{G}(-1)$ gives the number of acyclic orientations. Along the hyperbola $xy=1$, the Tutte polynomial of a planar graph specialises to the Jones polynomial of an associated alternating knot. $T_{G}(2,1)$ counts the number of forests, i.e., the number of acyclic edge subsets. $T_{G}(1,1)$ counts the number of spanning forests (edge subsets without cycles and with the same number of connected components as G). If the graph is connected, $T_{G}(1,1)$ counts the number of spanning trees. $T_{G}(1,2)$ counts the number of spanning subgraphs (edge subsets with the same number of connected components as G). $T_{G}(2,0)$ counts the number of acyclic orientations of G. [10] $T_{G}(0,2)$ counts the number of strongly connected orientations of G. [11] $T_{G}(2,2)$ is the number $2^{|E|}$, where $|E|$ is the number of edges of graph G. If G is a 4-regular graph, then an evaluation of $T_{G}$ at $(0,-2)$, up to a sign and a factor depending on $k(G)$, the number of connected components of G, counts the number of Eulerian orientations of G. [10] If G is the m × n grid graph, then $2T_{G}(3,3)$ counts the number of ways to tile a rectangle of width 4m and height 4n with T-tetrominoes. [12][13] If G is a planar graph, then $2T_{G}(3,3)$ equals the sum over weighted Eulerian orientations in a medial graph of G, where the weight of an orientation is 2 to the number of saddle vertices of the orientation (that is, the number of vertices with incident edges cyclically ordered "in, out, in, out"). [14] Define the hyperbola in the xy-plane $H_{2}:\ (x-1)(y-1)=2$; along it the Tutte polynomial specialises to the partition function, $Z(\cdot )$, of the Ising model studied in statistical physics, the two being related by a simple change of variables. [15] More generally, if for any positive integer q we define the hyperbola $H_{q}:\ (x-1)(y-1)=q,$ then the Tutte polynomial specialises along $H_{q}$ to the partition function of the q-state Potts model. Various physical quantities analysed in the framework of the Potts model translate to specific parts of $H_{q}$. At $x=0$, the Tutte polynomial specialises to the flow polynomial studied in combinatorics.
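Assuming the deletion–contraction `tutte()` sketch given earlier, several of these evaluations are easy to spot-check on the complete graph $K_{4}$:

```python
# Spot-checking evaluations with the tutte() sketch defined earlier.
K4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
T = tutte(K4)
print(T.subs({x: 1, y: 1}))  # 16 spanning trees of K4
print(T.subs({x: 2, y: 2}))  # 64 = 2^6 edge subsets
print(T.subs({x: 2, y: 0}))  # 24 acyclic orientations (one per vertex order)
```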
For a connected and undirected graph G and integer k, a nowhere-zero k-flow is an assignment of "flow" values $1,2,\dots ,k-1$ to the edges of an arbitrary orientation of G such that the total flow entering and leaving each vertex is congruent modulo k. The flow polynomial $C_{G}(k)$ denotes the number of nowhere-zero k-flows. This value is intimately connected with the chromatic polynomial: in fact, if G is a planar graph, the chromatic polynomial of G is equivalent to the flow polynomial of its dual graph $G^{*}$, in the sense that (Theorem, Tutte) $\chi_{G}(\lambda )=\lambda^{k(G)}\,C_{G^{*}}(\lambda ).$ The connection to the Tutte polynomial is given by $C_{G}(k)=(-1)^{|E|-|V|+k(G)}\,T_{G}(0,1-k).$ At $x=1$, the Tutte polynomial specialises to the all-terminal reliability polynomial studied in network theory. For a connected graph G, remove every edge with probability p; this models a network subject to random edge failures. Then the reliability polynomial is a function $R_{G}(p)$, a polynomial in p, that gives the probability that every pair of vertices in G remains connected after the edges fail. The connection to the Tutte polynomial is given by $R_{G}(p)=(1-p)^{|V|-1}\,p^{|E|-|V|+1}\,T_{G}(1,1/p).$ Tutte also defined a closer 2-variable generalization of the chromatic polynomial, the dichromatic polynomial of a graph, $\sum_{A\subseteq E}u^{k(A)}v^{|A|},$ where $k(A)$ is the number of connected components of the spanning subgraph (V, A). This is related to the corank–nullity polynomial by a simple change of variables. The dichromatic polynomial does not generalize to matroids because k(A) is not a matroid property: different graphs with the same matroid can have different numbers of connected components. The Martin polynomial $m_{\vec {G}}(x)$ of an oriented 4-regular graph ${\vec {G}}$ was defined by Pierre Martin in 1977. [17] He showed that if G is a plane graph and ${\vec {G}}_{m}$ is its directed medial graph, then the Martin polynomial of ${\vec {G}}_{m}$ coincides with the diagonal evaluation $T_{G}(x,x)$. The deletion–contraction recurrence for the Tutte polynomial, $T_{G}=T_{G-e}+T_{G/e},$ immediately yields a recursive algorithm for computing it for a given graph: as long as one can find an edge e that is not a loop or bridge, recursively compute the Tutte polynomial for when that edge is deleted and for when that edge is contracted, then add the two sub-results together to get the overall Tutte polynomial for the graph. The base case is a monomial $x^{m}y^{n}$, where m is the number of bridges and n is the number of loops. Within a polynomial factor, the running time t of this algorithm can be expressed in terms of the number of vertices n and the number of edges m of the graph via a recurrence relation that scales as the Fibonacci numbers, with solution $t=O(\varphi^{\,n+m})$, where $\varphi =(1+\sqrt{5})/2\approx 1.618$ is the golden ratio. [18] The analysis can be improved to within a polynomial factor of the number $\tau (G)$ of spanning trees of the input graph. [19] For sparse graphs with $m=O(n)$ this running time is $\exp(O(n))$. For regular graphs of degree k, the number of spanning trees can be bounded by an exponential whose base depends only on k, so the deletion–contraction algorithm runs within a polynomial factor of this bound. [20] In practice, graph isomorphism testing is used to avoid some recursive calls. This approach works well for graphs that are quite sparse and exhibit many symmetries; the performance of the algorithm depends on the heuristic used to pick the edge e.
[19][21][22] In some restricted instances, the Tutte polynomial can be computed in polynomial time, ultimately because Gaussian elimination efficiently computes the matrix operations determinant and Pfaffian. These algorithms are themselves important results from algebraic graph theory and statistical mechanics. $T_{G}(1,1)$ equals the number $\tau (G)$ of spanning trees of a connected graph. This is computable in polynomial time as the determinant of a maximal principal submatrix of the Laplacian matrix of G, an early result in algebraic graph theory known as Kirchhoff's matrix–tree theorem. Likewise, $T_{G}(-1,-1)$, which is determined by the dimension of the bicycle space, can be computed in polynomial time by Gaussian elimination. For planar graphs, the partition function of the Ising model, i.e., the Tutte polynomial at the hyperbola $H_{2}$, can be expressed as a Pfaffian and computed efficiently via the FKT algorithm. This idea was developed by Fisher, Kasteleyn, and Temperley to compute the number of dimer covers of a planar lattice model. Using a Markov chain Monte Carlo method, the Tutte polynomial can be arbitrarily well approximated along the positive branch of $H_{2}$, equivalently, the partition function of the ferromagnetic Ising model. This exploits the close connection between the Ising model and the problem of counting matchings in a graph. The idea behind this celebrated result of Jerrum and Sinclair [23] is to set up a Markov chain whose states are the matchings of the input graph. The transitions are defined by choosing edges at random and modifying the matching accordingly. The resulting Markov chain is rapidly mixing and leads to "sufficiently random" matchings, which can be used to recover the partition function using random sampling. The resulting algorithm is a fully polynomial-time randomized approximation scheme (fpras). Several computational problems are associated with the Tutte polynomial. The most straightforward one is to compute, for a given graph G, the coefficients of $T_{G}$. In particular, the output allows evaluating $T_{G}(-2,0)$, which is equivalent to counting the number of 3-colourings of G. This latter question is #P-complete, even when restricted to the family of planar graphs, so the problem of computing the coefficients of the Tutte polynomial for a given graph is #P-hard even for planar graphs. Much more attention has been given to the family of problems called Tutte(x, y), defined for every complex pair (x, y): given a graph G, evaluate $T_{G}(x,y)$. The hardness of these problems varies with the coordinates (x, y). If both x and y are non-negative integers, the problem $T_{G}(x,y)$ belongs to #P. For general integer pairs, the Tutte polynomial contains negative terms, which places the problem in the complexity class GapP, the closure of #P under subtraction. To accommodate rational coordinates (x, y), one can define a rational analogue of #P. [24] The computational complexity of exactly computing $T_{G}(x,y)$ falls into one of two classes for any $x,y\in \mathbb {C}$. The problem is #P-hard unless (x, y) lies on the hyperbola $H_{1}$ or is one of the points $(1,1),\ (-1,-1),\ (0,-1),\ (-1,0),\ (i,-i),\ (-i,i),\ (j,j^{2}),\ (j^{2},j),$ where $j=e^{2\pi i/3}$, in which cases it is computable in polynomial time.
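The matrix–tree computation of $T_{G}(1,1)$ takes only a few lines. A minimal NumPy sketch (names are our own): build the Laplacian, delete one row and column, and take the determinant.

```python
import numpy as np

def spanning_trees(n, edges):
    """Count spanning trees of a connected graph on vertices 0..n-1
    via Kirchhoff's matrix-tree theorem (any cofactor of the Laplacian)."""
    L = np.zeros((n, n))
    for (u, v) in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    minor = L[1:, 1:]  # delete row 0 and column 0
    return round(np.linalg.det(minor))

# K4 has 4^{4-2} = 16 spanning trees, by Cayley's formula.
print(spanning_trees(4, [(i, j) for i in range(4) for j in range(i + 1, 4)]))
```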
[25] If the problem is restricted to the class of planar graphs, the points on the hyperbola $H_{2}$ become polynomial-time computable as well. All other points remain #P-hard, even for bipartite planar graphs. [26] In his paper on the dichotomy for planar graphs, Vertigan claims (in his conclusion) that the same result holds when further restricted to graphs with vertex degree at most three, save for the point $T_{G}(0,-2)$, which counts nowhere-zero $\mathbb{Z}_{3}$-flows and is computable in polynomial time. [27] These results contain several notable special cases. For example, the problem of computing the partition function of the Ising model is #P-hard in general, even though celebrated algorithms of Onsager and Fisher solve it for planar lattices. Also, the Jones polynomial is #P-hard to compute. Finally, computing the number of four-colourings of a planar graph is #P-complete, even though the decision problem is trivial by the four colour theorem. In contrast, it is easy to see that counting the number of three-colourings for planar graphs is #P-complete, because the decision problem is known to be NP-complete via a parsimonious reduction. The question of which points admit a good approximation algorithm has been very well studied. Apart from the points that can be computed exactly in polynomial time, the only approximation algorithm known for $T_{G}(x,y)$ is Jerrum and Sinclair's FPRAS, which works for points on the "Ising" hyperbola $H_{2}$ with y > 0. If the input graphs are restricted to dense instances with degree $\Omega (n)$, there is an FPRAS if x ≥ 1, y ≥ 1. [28] Even though the situation is not as well understood as for exact computation, large areas of the plane are known to be hard to approximate. [24]
https://en.wikipedia.org/wiki/Tutte_polynomial
In the mathematical discipline of graph theory, the Tutte theorem, named after William Thomas Tutte, is a characterization of finite undirected graphs with perfect matchings. It is a special case of the Tutte–Berge formula. The goal is to characterize all graphs that do not have a perfect matching. Start with the most obvious case of a graph without a perfect matching: a graph with an odd number of vertices. In such a graph, any matching leaves at least one unmatched vertex, so it cannot be perfect. A slightly more general case is a disconnected graph in which one or more components have an odd number of vertices (even if the total number of vertices is even). Let us call such components odd components. In any matching, each vertex can only be matched to vertices in the same component. Therefore, any matching leaves at least one unmatched vertex in every odd component, so it cannot be perfect. Next, consider a graph G with a vertex u such that, if we remove from G the vertex u and its adjacent edges, the remaining graph (denoted G − u) has two or more odd components. As above, any matching leaves, in every odd component, at least one vertex that is unmatched to other vertices in the same component. Such a vertex can only be matched to u. But since there are two or more such unmatched vertices, and only one of them can be matched to u, at least one other vertex remains unmatched, so the matching is not perfect. Finally, consider a graph G with a set of vertices U such that, if we remove from G the vertices in U and all edges adjacent to them, the remaining graph (denoted G − U) has more than |U| odd components. As explained above, any matching leaves at least one unmatched vertex in every odd component, and these can be matched only to vertices of U, but there are not enough vertices in U for all these unmatched vertices, so the matching is not perfect. We have arrived at a necessary condition: if G has a perfect matching, then for every vertex subset U in G, the graph G − U has at most |U| odd components. Tutte's theorem says that this condition is both necessary and sufficient for the existence of a perfect matching. A graph, G = (V, E), has a perfect matching if and only if for every subset U of V, the subgraph G − U has at most |U| odd components (connected components having an odd number of vertices). [1] First we write the condition: (∗) $\mathrm{odd}(G-U)\leq |U|$ for every $U\subseteq V$, where $\mathrm{odd}(H)$ denotes the number of odd components of the graph H. Necessity of (∗): This direction was already discussed in the section Intuition above, but let us sum up the proof here. Consider a graph G with a perfect matching. Let U be an arbitrary subset of V. Delete U. Let C be an arbitrary odd component in G − U. Since G had a perfect matching, at least one vertex in C must be matched to a vertex in U. Hence, each odd component has at least one vertex matched with a vertex in U. Since each vertex in U can be in this relation with at most one connected component (because it is matched at most once in a perfect matching), odd(G − U) ≤ |U|. [2] Sufficiency of (∗): Let G be an arbitrary graph with no perfect matching. We will find a so-called Tutte violator, that is, a subset S of V such that |S| < odd(G − S). We can suppose that G is edge-maximal, i.e., that G + e has a perfect matching for every edge e not present in G already.
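For small graphs, Tutte's condition can be checked by brute force and compared against a maximum matching. An illustrative Python sketch using NetworkX (exponential in the number of vertices, so only for small examples):

```python
from itertools import combinations
import networkx as nx

def tutte_condition(G):
    """Check (*): for every U, odd components of G - U number at most |U|."""
    for k in range(len(G) + 1):
        for U in combinations(G.nodes, k):
            H = G.copy()
            H.remove_nodes_from(U)
            odd = sum(1 for c in nx.connected_components(H) if len(c) % 2)
            if odd > k:
                return False
    return True

G = nx.petersen_graph()
matching = nx.max_weight_matching(G, maxcardinality=True)
print(tutte_condition(G), 2 * len(matching) == len(G))  # True True
```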
Indeed, if we find a Tutte violator S in an edge-maximal graph G, then S is also a Tutte violator in every spanning subgraph of G, as every odd component of G − S will be split into possibly more components, at least one of which will again be odd. We define S to be the set of vertices with degree |V| − 1. First we consider the case where all components of G − S are complete graphs. Then S has to be a Tutte violator, since if odd(G − S) ≤ |S|, then we could find a perfect matching by matching one vertex from every odd component with a vertex from S and pairing up all other vertices (this will work unless |V| is odd, but then ∅ is a Tutte violator). Now suppose that K is a component of G − S and x, y ∈ K are vertices such that xy ∉ E. Let x, a, b ∈ K be the first vertices on a shortest x,y-path in K. This ensures that xa, ab ∈ E and xb ∉ E. Since a ∉ S, there exists a vertex c such that ac ∉ E. From the edge-maximality of G, we define M₁ as a perfect matching in G + xb and M₂ as a perfect matching in G + ac. Observe that surely xb ∈ M₁ and ac ∈ M₂. Let P be the maximal path in G that starts from c with an edge from M₁ and whose edges alternate between M₁ and M₂. How can P end? Unless we arrive at a 'special' vertex such as x, a or b, we can always continue: c is M₂-matched by ca, so the first edge of P is not in M₂, therefore the second vertex is M₂-matched by a different edge and we continue in this manner. Let v denote the last vertex of P. If the last edge of P is in M₁, then v has to be a, since otherwise we could continue with an edge from M₂ (even to arrive at x or b). In this case we define C := P + ac. If the last edge of P is in M₂, then surely v ∈ {x, b} for an analogous reason, and we define C := P + va + ac. Now C is a cycle in G + ac of even length with every other edge in M₂. We can now define M := M₂ Δ C (where Δ is symmetric difference) and we obtain a perfect matching in G, a contradiction. The Tutte–Berge formula says that the size of a maximum matching of a graph $G=(V,E)$ equals $\min_{U\subseteq V}\bigl(|U|-\operatorname{odd}(G-U)+|V|\bigr)/2$. Equivalently, the number of unmatched vertices in a maximum matching equals $\max_{U\subseteq V}\bigl(\operatorname{odd}(G-U)-|U|\bigr)$. This formula follows from Tutte's theorem, together with the observation that G has a matching of size k if and only if the graph $G^{(k)}$ obtained by adding $|V|-2k$ new vertices, each joined to every original vertex of G, has a perfect matching. Since any set X which separates $G^{(k)}$ into more than |X| components must contain all the new vertices, (∗) is satisfied for $G^{(k)}$ if and only if $k\leq \min_{U\subseteq V}\bigl(|U|-\operatorname{odd}(G-U)+|V|\bigr)/2$. For connected infinite graphs that are locally finite (every vertex has finite degree), a generalization of Tutte's condition holds: such graphs have perfect matchings if and only if there is no finite subset whose removal creates a number of finite odd components larger than the size of the subset. [3]
https://en.wikipedia.org/wiki/Tutte_theorem
In the mathematical discipline of graph theory, the Tutte–Berge formula is a characterization of the size of a maximum matching in a graph. It is a generalization of Tutte's theorem on perfect matchings, and is named after W. T. Tutte (who proved Tutte's theorem) and Claude Berge (who proved its generalization). The theorem states that the size of a maximum matching of a graph $G=(V,E)$ equals $\frac{1}{2}\min_{U\subseteq V}\bigl(|U|-\operatorname{odd}(G-U)+|V|\bigr),$ where $\operatorname{odd}(H)$ counts how many of the connected components of the graph H have an odd number of vertices. Equivalently, the number of unmatched vertices in a maximum matching equals $\max_{U\subseteq V}\bigl(\operatorname{odd}(G-U)-|U|\bigr)$. Intuitively, for any subset U of the vertices, the only way to completely cover an odd component of G − U by a matching is for one of the matched edges covering the component to be incident to U. If, instead, some odd component had no matched edge connecting it to U, then the part of the matching that covered the component would cover its vertices in pairs, but since the component has an odd number of vertices it would necessarily include at least one leftover and unmatched vertex. Therefore, if some choice of U has few vertices but its removal creates a large number of odd components, then there will be many unmatched vertices, implying that the matching itself will be small. This reasoning can be made precise by stating that the size of a maximum matching is at most equal to the value given by the Tutte–Berge formula. The characterization of Tutte and Berge proves that this is the only obstacle to creating a large matching: the size of the optimal matching will be determined by the subset U with the biggest difference between its number of odd components outside U and vertices inside U. That is, there always exists a subset U such that deleting U creates the correct number of odd components needed to make the formula true. One way to find such a set U is to choose any maximum matching M, and to let X be the set of vertices that are either unmatched in M, or that can be reached from an unmatched vertex by an alternating path that ends with a matched edge. Then, let U be the set of vertices that are matched by M to vertices in X. No two vertices in X can be adjacent, for if they were then their alternating paths could be concatenated to give a path by which the matching could be increased, contradicting the maximality of M. Every neighbour of a vertex x in X must belong to U, for otherwise we could extend an alternating path to x by one more pair of edges, through the neighbour, causing the neighbour to become part of U. Therefore, in G − U, every vertex of X forms a single-vertex component, which is odd.
There can be no other odd components, because all other vertices remain matched after deleting U. So with this construction the size of U and the number of odd components created by deleting U are what they need to be to make the formula true. Tutte's theorem characterizes the graphs with perfect matchings as being the ones for which deleting any subset U of vertices creates at most |U| odd components. (A subset U for which $\operatorname{odd}(G-U)\geq |U|$ always exists: the empty set.) In this case, by the Tutte–Berge formula, the size of the matching is |V|/2; that is, the maximum matching is a perfect matching. Thus, Tutte's theorem can be derived as a corollary of the Tutte–Berge formula, and the formula can be seen as a generalization of Tutte's theorem.
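The deficiency version of the formula is also easy to verify by brute force on small graphs. An illustrative Python sketch with NetworkX, comparing the maximum of $\operatorname{odd}(G-U)-|U|$ with the number of unmatched vertices in a maximum matching:

```python
from itertools import combinations
import networkx as nx

def tutte_berge_deficiency(G):
    """max over U of odd(G - U) - |U|: the number of vertices that any
    maximum matching leaves unmatched (exponential-time check)."""
    best = 0
    for k in range(len(G) + 1):
        for U in combinations(G.nodes, k):
            H = G.copy()
            H.remove_nodes_from(U)
            odd = sum(1 for c in nx.connected_components(H) if len(c) % 2)
            best = max(best, odd - k)
    return best

G = nx.path_graph(5)  # a path on 5 vertices leaves one vertex unmatched
matching = nx.max_weight_matching(G, maxcardinality=True)
print(tutte_berge_deficiency(G), len(G) - 2 * len(matching))  # 1 1
```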
https://en.wikipedia.org/wiki/Tutte–Berge_formula
Tweakers (formerly called Tweakers.net) is a Dutch technology website featuring news and information about hardware, software, games and the Internet. The name is derived from the verb "tweaking", which refers to the optimisation of hardware. Tweakers has grown substantially since its founding in 1998. [1] As of 2023, it publishes news, reviews, features and video reports about technology subjects, with a strong focus on the Netherlands and Belgium. Tweakers also offers features for computer enthusiasts, such as reviews, bi-monthly Best Buy guides, a classifieds section for jobs and used hardware, the Pricewatch and the Shop Survey, among others. While the majority of the reviews are written in Dutch, several articles covering non-standard topics are translated into English. [2] Tweakers has more than 1,000,000 members and its forum has over 29 million posts. [3] It has won several awards, including "(Dutch) website of the year" in 2009, 2010, 2011, 2012, 2013 and 2014. [4] The website was founded in 1998 by Femme Taken, a computer enthusiast, under the name World of Tweaking, as a student's hobby project to offer a Dutch alternative to hardware review sites such as Tom's Hardware Guide. [5] In March 2006, Tweakers was taken over by the Dutch media conglomerate VNU, now known as The Nielsen Company. [6][7] After The Nielsen Company sold its Business Media division to private equity firm 3i, VNU Media became owner of the Tweakers website. [8] In October 2012, Tweakers launched a new design, [9] designed by Femme Taken, the founder; the community was critical of this change. [10] A couple of days later, Tweakers launched a new feature that enabled the community to change the appearance of the website. [11] A number of changes and improvements were made in 2014. The current responsive design was introduced on 6 January that year. [12] The Android and iOS apps were removed from their respective stores on 7 October 2014. [13] In February 2014 Tweakers significantly improved the price comparison tool for SIM-only contracts. [14] In June 2014 Tweakers added prices of Belgian webshops to the Pricewatch, which now shows prices from Dutch and Belgian webshops. [15] Changes continued: in April 2015 Tweakers launched a completely new logo to be used uniformly throughout the website. [16] An important area of the site is the forum. Tweakers's forum is known as the Gathering of Tweakers (often abbreviated as GoT). The forum is focused mainly on technical subjects and tries to maintain a relatively high standard of posting. It has a fairly strict moderation policy, especially when compared to other online discussion forums. Users of the forum are strongly urged to read the FAQ and to have invested a fair amount of their own time into finding an answer, either on GoT using the search tools or anywhere else on the Internet, before asking a question. [17] The Tweakers Awards (known as the "Tweakers Gouden Steeksleutel Awards" from 2007 to 2011 [18]) is a contest in which members choose the best products in different technology categories.
https://en.wikipedia.org/wiki/Tweakers
A twilight switch is an electronic component that automatically activates a lighting circuit when natural light in a given environment drops below a set level. Among its many uses, the most common is the automatic lighting of streets, roads, highways, gardens, courtyards, etc., when sunlight falls below a certain level (for example, at dusk). [1] A circuit built with a twilight switch sometimes requires other components, such as relays or contactors, when a higher electrical power (many lamps, electrical appliances, etc.) must be controlled. A light intensity sensor (photoresistor, photodiode, phototransistor, etc.) detects the amount of light illuminating the environment and triggers an electrical circuit that opens or closes the contacts of a mechanical relay or of a solid-state relay (power transistor, thyristor, triac, etc.), which in turn activates the lighting system. Generally, natural light falls directly on the photoresistor, with the effect that the lamp turns on automatically at dusk and goes out at the first light of dawn. This system supports a wide range of uses, from lighting both public and private spaces to presence simulation, in which the twilight switch provides the intermittent operation of a lighting circuit to simulate the presence of people who are not physically there. [2] More and more models offer finer sensitivity to sunlight: the threshold at which the switch fires can be adjusted to a chosen level of darkness, and switch-on and switch-off delays can be set relative to the ambient light level. Some models even distinguish artificial light from natural light and ignore the former, although in some cases it may be convenient to combine them with time-scheduling systems. [3] The most commonly used twilight switches are of the electromechanical type, which differ from purely electronic ones by using pilot relays already integrated in the circuit itself, allowing small loads (for example, a single lamp) to be connected directly. Twilight switches are available in various shapes to suit every need, ranging from the shape of a lamp holder to that of a separate box (cylindrical, square, etc.). [4] Attention must be paid to the number and wattage of the lamps the device powers, following the instructions in the user manual, in order to avoid a dangerous current overload on the contacts of the drive relay; if higher loads are required, a contactor must be inserted. For street lighting, for example, either an individual twilight switch can be used per lamp, or a central switch that activates remote relays to turn on many lamps, so that the load the central twilight switch must carry is only that of the individual relay coils in parallel. [5] The main benefit of using a twilight switch is the considerable energy saving it brings, combined with the convenience of not needing time scheduling, since switching effectively follows the available sunlight. The major downside concerns installation: if artificial light falls on the photosensitive detector, the switch may not turn on correctly. Care must therefore be taken with where the light bulbs are located with respect to the photodetectors.
Also, keep in mind that switching is based solely on the amount of natural light present and not on the height of the sun, so unwanted switching may occur, for example in the presence of compact clouds causing a significant drop in the ambient light reaching the sensor (e.g., during a storm). To avoid this, twilight switches can be combined with time-scheduling systems. [3]
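The adjustable threshold, hysteresis (different on/off levels), and switching delay described above can be summarised in a small control-loop sketch. The Python below is illustrative only: the lux thresholds, delay, and the `read_lux`/`set_lamp` hardware callbacks are assumptions, not a real device's API.

```python
import time

ON_BELOW = 15   # lux: switch the lamp on below this level (assumed value)
OFF_ABOVE = 35  # lux: switch off only above this higher level (hysteresis)
DELAY_S = 60    # the new level must persist this long (debounces clouds)

def twilight_controller(read_lux, set_lamp):
    """Threshold-with-hysteresis loop for a twilight switch.
    `read_lux` and `set_lamp` are hypothetical hardware callbacks."""
    lamp_on = False
    pending_since = None
    while True:
        lux = read_lux()
        want_on = lux < (OFF_ABOVE if lamp_on else ON_BELOW)
        if want_on != lamp_on:
            pending_since = pending_since or time.monotonic()
            if time.monotonic() - pending_since >= DELAY_S:
                lamp_on = want_on
                set_lamp(lamp_on)   # drive the relay or contactor
                pending_since = None
        else:
            pending_since = None    # light level returned: cancel pending
        time.sleep(1)
```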
https://en.wikipedia.org/wiki/Twilight_switch
Twimight was an open-source Android client for the social networking site Twitter. The client let users view in real time "tweets", or micro-blog posts, on the Twitter website as well as publish their own. In addition to being a fully functional, ad-free and open-source [1] Twitter client, Twimight allowed communication when the cellular network was unavailable (for example, in case of a natural disaster). Twimight was equipped with a feature called the "disaster mode", [2][3][4] which users could enable or disable at will. When the disaster mode was enabled and the cellular network was down, Twimight used peer-to-peer communication to let users tweet in any circumstance. Enabling the disaster mode turned on the phone's Bluetooth transceiver and connected the user to other nearby phones. This created a mobile ad hoc network, or MANET, which could be used, for example, to locate missing persons even when the communication infrastructure had failed. Twimight started out as a project for a Master's thesis at ETH Zurich in the spring of 2011.
https://en.wikipedia.org/wiki/Twimight
Twins are two offspring produced by the same pregnancy . [ 1 ] Twins can be either monozygotic ('identical'), meaning that they develop from one zygote , which splits and forms two embryos , or dizygotic ('non-identical' or 'fraternal'), meaning that each twin develops from a separate egg and each egg is fertilized by its own sperm cell. [ 2 ] Since identical twins develop from one zygote, they will share the same sex, while fraternal twins may or may not. In very rare cases, fraternal or (semi-) identical twins can have the same mother and different fathers ( heteropaternal superfecundation ). In contrast, a fetus that develops alone in the womb (the much more common case in humans) is called a singleton , and the general term for one offspring of a multiple birth is a multiple . [ 3 ] Unrelated look-alikes whose resemblance parallels that of twins are referred to as doppelgänger . [ 4 ] The human twin birth rate in the United States rose 76% from 1980 through 2009, from 9.4 to 16.7 twin sets (18.8 to 33.3 twins) per 1,000 births. [ 5 ] The Yoruba people have the highest rate of twinning in the world, at 45–50 twin sets (90–100 twins) per 1,000 live births, [ 6 ] [ 7 ] [ 8 ] possibly because of high consumption of a specific type of yam containing a natural phytoestrogen which may stimulate the ovaries to release an egg from each side. [ 9 ] [ 10 ] In Central Africa , there are 18–30 twin sets (or 36–60 twins) per 1,000 live births. [ 11 ] In South America , South Asia , and Southeast Asia , the lowest rates are found; only 6 to 9 twin sets per 1,000 live births. North America and Europe have intermediate rates of 9 to 16 twin sets per 1,000 live births. [ 11 ] Multiple pregnancies are much less likely to carry to full term than single births, with twin pregnancies lasting on average 37 weeks, three weeks less than full term. [ 12 ] Women who have a family history of fraternal twins have a higher chance of producing fraternal twins themselves, as there is a genetically linked tendency to hyper- ovulate . There is no known genetic link for identical twinning. [ 13 ] Other factors that increase the odds of having fraternal twins include maternal age, fertility drugs and other fertility treatments, nutrition, and prior births. [ 14 ] Some women intentionally turn to fertility drugs in order to conceive twins. [ 15 ] [ 16 ] The vast majority of twins are either dizygotic (fraternal) or monozygotic (identical). In humans, dizygotic twins occur more often than monozygotic twins. [ 17 ] Less common variants are discussed further down the article. Fraternal twins can be any of the following: Among non-twin births, male singletons are slightly (about five percent) more common than female singletons. The rates for singletons vary slightly by country. For example, the sex ratio of birth in the US is 1.05 males/female, [ 18 ] while it is 1.07 males/female in Italy. [ 19 ] However, males are also more susceptible than females to die in utero , and since the death rate in utero is higher for twins, it leads to female twins being more common than male twins. [ 20 ] Zygosity is the degree of identity in the genome of twins. Dizygotic ( DZ ) or fraternal twins (also referred to as "non-identical twins", "dissimilar twins", "biovular twins", and, informally in the case of females, "sororal twins") usually occur when two fertilized eggs are implanted in the uterus wall at the same time. When two eggs are independently fertilized by two different sperm cells , fraternal twins result. 
The two eggs, or ova , form two zygotes , hence the terms dizygotic and biovular . Fraternal twins are, essentially, two ordinary siblings who happen to develop in the womb together and who are born at the same time, since they arise from two separate eggs fertilized by two separate sperm , just like ordinary siblings. This is the most common type of twin. [ 21 ] Dizygotic twins, like any other siblings, will practically always have different sequences on each chromosome, due to chromosomal crossover during meiosis . Dizygotic twins share on average 50 percent of each other's genes, the same as siblings that are conceived and born at different times. Like any other siblings , dizygotic twins may look similar , particularly as they are the same age. However, dizygotic twins may also look very different from each other (for example, be of opposite sexes). Studies show that there is a genetic proclivity for dizygotic twinning. However, it is only the mother who has any effect on the chances of having such twins; there is no known mechanism for a father to cause the release of more than one ovum . Dizygotic twinning ranges from six per thousand births in Japan (similar to the rate of monozygotic twins) to 14 and more per thousand in some African countries. [ 22 ] Dizygotic twins are also more common for older mothers, with twinning rates doubling in mothers over the age of 35. [ 23 ] With the advent of technologies and techniques to assist women in getting pregnant, the rate of fraternal twins has increased markedly. [ citation needed ] Monozygotic ( MZ ) or identical twins occur when a single egg is fertilized to form one zygote (hence, "monozygotic") which then divides into two separate embryos . Regarding spontaneous or natural monozygotic twinning, a 2007 theory related to in vitro fertilization (IVF) proposes that monozygotic twins may be formed when a blastocyst contains two inner cell masses (ICM), each of which will lead to a separate fetus, rather than by the embryo splitting while hatching from the zona pellucida (the gelatinous protective coating around the blastocyst). [ 24 ] Monozygotic twins may also be created artificially by embryo splitting. This can be used as an extension of in vitro fertilization (IVF) to increase the number of available embryos for embryo transfer . [ 25 ] The chance of identical twins is approximately 3 to 4 in every 1,000 births. [ 26 ] The likelihood of a single fertilization resulting in monozygotic twins is uniformly distributed in all populations around the world. [ 23 ] This is in marked contrast to dizygotic twinning, which ranges from about six per thousand births in Japan (similar to the rate of identical twins, which is around 4–5) to 15 and more per thousand in some parts of India [ 27 ] and up to over 20 in some Central African countries. [ 11 ] The exact cause of the splitting of a zygote or embryo is unknown. IVF techniques are more likely to create dizygotic twins. For IVF deliveries, there are nearly 21 pairs of twins for every 1,000. [ 28 ] Monozygotic twins are genetically nearly identical and they are the same chromosomal sex unless there has been a mutation during development. The children of monozygotic twins test genetically as half-siblings (or full siblings, if a pair of monozygotic twins reproduces with another pair or with the same person), rather than first cousins.
Identical twins do not have the same fingerprints, however, because even within the confines of the womb, the fetuses touch different parts of their environment, giving rise to small variations in their corresponding prints and thus making them unique. [ 29 ] Monozygotic twins always have the same genotype . Due to an environmental factor, the deactivation of different X chromosomes in female monozygotic twins, or, in some extremely rare cases, aneuploidy , twins may express different sexual phenotypes , normally arising from an XXY Klinefelter syndrome zygote splitting unevenly. [ 30 ] [ 31 ] [ 32 ] Monozygotic twins, although genetically very similar, are not genetically exactly the same. The DNA in white blood cells of 66 pairs of monozygotic twins was analyzed for 506,786 single-nucleotide polymorphisms known to occur in human populations. Polymorphisms appeared in 2 of the 33 million comparisons, leading the researchers to extrapolate that the blood cells of monozygotic twins may have on the order of one DNA-sequence difference for every 12 million nucleotides, which would imply hundreds of differences across the entire genome. [ 33 ] The mutations producing the differences detected in this study would have occurred during embryonic cell division (after the point of fertilization). If they occur early in fetal development, they will be present in a very large proportion of body cells. [ citation needed ] Another cause of difference between monozygotic twins is epigenetic modification , caused by differing environmental influences throughout their lives. Epigenetics refers to the level of activity of any particular gene. A gene may become switched on, switched off, or could become partially switched on or off in an individual. This epigenetic modification is triggered by environmental events. Monozygotic twins can have markedly different epigenetic profiles. A study of 80 pairs of monozygotic twins ranging in age from three to 74 showed that the youngest twins have relatively few epigenetic differences. The number of epigenetic differences increases with age. Fifty-year-old twins had over three times the epigenetic difference of three-year-old twins. Twins who had spent their lives apart (such as those adopted by two different sets of parents at birth) had the greatest difference. [ 34 ] However, certain characteristics become more alike as twins age, such as IQ and personality. [ 35 ] [ 36 ] [ 37 ] In January 2021, new research from a team of researchers in Iceland, published in the journal Nature Genetics , suggested that identical twins may not be quite as identical as previously thought. [ 38 ] The four-year study of monozygotic (identical) twins and their extended families revealed that these twins have genetic differences that begin in the early stages of embryonic development. [ 39 ] A 1981 study of a deceased XXX twin fetus without a heart showed that although its fetal development suggested that it was an identical twin, as it shared a placenta with its healthy twin, tests revealed that it was probably a polar body twin. The authors were unable to predict whether a healthy fetus could result from a polar body twinning. [ 40 ] However, a study in 2012 found that it is possible for a polar body to result in a healthy fetus. [ 41 ] In 2003, a study argued that many cases of triploidy arise from sesquizygotic (semi-identical) twinning, which happens when a single egg is fertilized by two sperm and splits the three sets of chromosomes into two separate cell sets.
[ 42 ] [ 43 ] The degree of separation of the twins in utero depends on if and when they split into two embryos. Dizygotic twins were always two zygotes. Monozygotic twins split into two embryos at some time very early in the pregnancy. The timing of this separation determines the chorionicity (the number of placentae) and amniocity (the number of sacs) of the pregnancy. Dichorionic twins either never divided (i.e., were dizygotic) or divided within the first 4 days. Monochorionic–diamniotic twins divided between roughly days 4 and 8, and monoamniotic twins divide after the first week. [ citation needed ] In very rare cases, twins become conjoined twins . Non-conjoined monozygotic twins form up to day 14 of embryonic development, but when twinning occurs after 14 days, the twins will likely be conjoined. [ 44 ] Furthermore, there can be various degrees of shared environment of twins in the womb, potentially leading to pregnancy complications . [ citation needed ] It is a common misconception that two placentas automatically imply dizygotic twins, but if monozygotic twins separate early enough, the arrangement of sacs and placentas in utero is in fact indistinguishable from that of dizygotic twins. DiDi twins have the lowest mortality risk at about 9 percent, although that is still significantly higher than that of singletons. [ 47 ] Monochorionic twins generally have two amniotic sacs (called Monochorionic–Diamniotic "MoDi"), which occurs in 60–70% of the pregnancies with monozygotic twins, [ 46 ] and in 0.3% of all pregnancies. [ 48 ] Monochorionic–Diamniotic twins are almost always monozygotic, with a few exceptions where the blastocysts have fused. [ 45 ] Monochorionic twins share the same placenta , and thus have a risk of twin-to-twin transfusion syndrome . Monoamniotic twins are always monozygotic . [ 49 ] The survival rate for monoamniotic twins is somewhere between 50% [ 49 ] and 60%. [ 50 ] Monoamniotic twins, as with diamniotic monochorionic twins, have a risk of twin-to-twin transfusion syndrome . Also, the two umbilical cords have an increased chance of being tangled around the babies. Because of this, there is an increased chance that the newborns may be miscarried or suffer from cerebral palsy due to lack of oxygen. When the division of the developing zygote into 2 embryos occurs, 99% of the time it is within 8 days of fertilization. Mortality is highest for conjoined twins due to the many complications resulting from shared organs. A 2006 study has found that insulin-like growth factor present in dairy products may increase the chance of dizygotic twinning. Specifically, the study found that vegan mothers (who exclude dairy from their diets) are one-fifth as likely to have twins as vegetarian or omnivore mothers, and concluded that "Genotypes favoring elevated IGF and diets including dairy products, especially in areas where growth hormone is given to cattle, appear to enhance the chances of multiple pregnancies due to ovarian stimulation." [ 51 ] From 1980 to 1997, the number of twin births in the United States rose 52%. [ 52 ] This rise can at least partly be attributed to the increasing popularity of fertility drugs and procedures such as IVF, which result in multiple births more frequently than unassisted fertilizations do. It may also be linked to the increase of growth hormones in food. [ 51 ] About 1 in 90 human births (1.1%) results from a twin pregnancy.
[ 53 ] The rate of dizygotic twinning varies greatly among ethnic groups , ranging from about 45 per 1000 births (4.5%) for the Yoruba to 10% for Linha São Pedro, a tiny Brazilian settlement which belongs to the city of Cândido Godói . [ 54 ] In Cândido Godói, one in five pregnancies has resulted in twins. [ 55 ] The Argentine historian Jorge Camarasa has put forward the theory that experiments of the Nazi doctor Josef Mengele could be responsible for the high ratio of twins in the area. His theory was rejected by Brazilian scientists who had studied twins living in Linha São Pedro; they suggested genetic factors within that community as a more likely explanation. [ 56 ] A high twinning rate has also been observed in other places of the world. In a study on the maternity records of 5750 Hausa women living in the Savannah zone of Nigeria , there were 40 twins and 2 triplets per 1000 births. Twenty-six percent of twins were monozygotic. The incidence of multiple births, which was about five times higher than that observed in any western population, was nevertheless significantly lower than that of other ethnic groups who live in the hot and humid climate of the southern part of the country. The incidence of multiple births was related to maternal age but did not bear any association to the climate or prevalence of malaria . [ 62 ] [ 63 ] Twins are more common in people of African descent. [ 64 ] The predisposing factors of monozygotic twinning are unknown. Dizygotic twin pregnancies are slightly more likely when the following factors are present in the woman: Women undergoing certain fertility treatments may have a greater chance of dizygotic multiple births. In the United States it has been estimated that by 2011 36% of twin births resulted from conception by assisted reproductive technology . [ 65 ] The risk of twin birth can vary depending on what types of fertility treatments are used. With in vitro fertilisation (IVF), this is primarily due to the insertion of multiple embryos into the uterus. Ovarian hyperstimulation without IVF has a very high risk of multiple birth. Reversal of anovulation with clomifene (trade names including Clomid ) has a relatively lower but still significant risk of multiple pregnancy. A 15-year German study [ 66 ] of 8,220 vaginally delivered twins (that is, 4,110 pregnancies) in Hesse yielded a mean delivery time interval of 13.5 minutes. [ 67 ] The delivery interval between the twins was measured as follows: The study stated that the occurrence of complications "was found to be more likely with increasing twin-to-twin delivery time interval" and suggested that the interval be kept short, though it noted that the study did not examine causes of complications and did not control for factors such as the level of experience of the obstetrician, the wish of the women giving birth, or the "management strategies" of the procedure of delivering the second twin. There have also been cases in which twins are born a number of days apart. Possibly the worldwide record for the duration of the time gap between the first and the second delivery was the birth of twins 97 days apart in Cologne, Germany, the first of which was born on November 17, 2018. [ 68 ] Researchers suspect that as many as 1 in 8 pregnancies start out as multiples, but only a single fetus is brought to full term, because the other fetus has died very early in the pregnancy and has not been detected or recorded.
[ 69 ] Early obstetric ultrasonography exams sometimes reveal an "extra" fetus, which fails to develop and instead disintegrates and vanishes in the uterus. There are several explanations for the "vanishing" fetus, including its being absorbed by the other fetus, the placenta or the mother. This is known as vanishing twin syndrome. Also, in an unknown proportion of cases, two zygotes may fuse soon after fertilization, resulting in a single chimeric embryo, and, later, fetus. Conjoined twins (once commonly called "Siamese twins") are monozygotic twins whose bodies are joined during pregnancy. This occurs when the zygote starts to split after day 12 [ 45 ] following fertilization and fails to separate completely. This condition occurs in about 1 in 50,000 human pregnancies. Most conjoined twins are now evaluated for surgery to attempt to separate them into separate functional bodies. The degree of difficulty rises if a vital organ or structure is shared between twins, such as the brain , heart , liver or lungs . A chimera is an ordinary person or animal except that some of their parts actually came from their twin or from the mother. A chimera may arise either from monozygotic twin fetuses (where it would be impossible to detect), or from dizygotic fetuses, which can be identified by chromosomal comparisons from various parts of the body. The number of cells derived from each fetus can vary from one part of the body to another, and often leads to characteristic mosaicism skin coloration in human chimeras. A chimera may be intersex , composed of cells from a male twin and a female twin. In one case DNA tests determined that a woman, Lydia Fairchild , mystifyingly, was not the mother of two of her three children; she was found to be a chimera, and the two children were conceived from eggs derived from cells of their mother's twin. [ 70 ] Sometimes one twin fetus will fail to develop completely and continue to cause problems for its surviving twin. One fetus acts as a parasite towards the other. Sometimes the parasitic twin becomes an almost indistinguishable part of the other, and sometimes this needs to be treated medically. A very rare type of parasitic twinning is one where a single viable twin is endangered when the other zygote becomes cancerous, or "molar". This means that the molar zygote's cellular division continues unchecked, resulting in a cancerous growth that overtakes the viable fetus. Typically, this results when one twin has either triploidy or complete paternal uniparental disomy , resulting in little or no fetus and a cancerous, overgrown placenta resembling a bunch of grapes . Occasionally, a woman will suffer a miscarriage early in pregnancy, yet the pregnancy will continue; one twin was miscarried but the other was able to be carried to term. This occurrence is similar to the vanishing twin syndrome, but typically occurs later, as the twin is not reabsorbed. It is very common for twins to be born at a low birth weight . More than half of twins are born weighing less than 5.5 pounds (2.5 kg), while the average birth weight of a healthy baby should be around 6–8 pounds (3–4 kg). [ 71 ] This is largely due to the fact that twins are typically born premature . Premature birth and low birth weights, especially when under 3.5 pounds (1.6 kg), can increase the risk of several health-related issues, such as vision and hearing loss, mental disabilities, and cerebral palsy .
[ 72 ] There is an increased possibility of potential complications as the birth weight of the baby decreases. Monozygotic twins who share a placenta can develop twin-to-twin transfusion syndrome. This condition means that blood from one twin is being diverted into the other twin. One twin, the 'donor' twin, is small and anemic , the other, the 'recipient' twin, is large and polycythemic . The lives of both twins are endangered by this condition. Stillbirth occurs when a fetus dies after 20 weeks of gestation. There are two types of stillbirth: intrauterine death and intrapartum death. Intrauterine death occurs when a baby dies during late pregnancy. Intrapartum death, which is more common, occurs when a baby dies while the mother is giving birth. The cause of stillbirth is often unknown, but the rate of babies who are stillborn is higher in twins and multiple births. Caesareans or inductions are advised after 38 weeks of pregnancy for twins, because the risk of stillbirth increases after this time. [ 73 ] Heterotopic pregnancy is an exceedingly rare type of dizygotic twinning in which one twin implants in the uterus as normal and the other remains in the fallopian tube as an ectopic pregnancy . Ectopic pregnancies must be resolved because they can be life-threatening to the mother. However, in most cases, the intrauterine pregnancy can be salvaged. [ citation needed ] For otherwise healthy twin pregnancies where both twins are head down, a trial of vaginal delivery is recommended at between 37 and 38 weeks. [ 74 ] [ 75 ] Vaginal delivery in this case does not worsen the outcome for the infant as compared with Caesarean section . [ 74 ] There is controversy on the best method of delivery where the first twin is head first and the second is not. [ 74 ] When the first twin is not head down a caesarean section is often recommended. [ 74 ] It is estimated that 75% of twin pregnancies in the United States were delivered by caesarean section in 2008. [ 76 ] In comparison, the rate of caesarean section for all pregnancies in the general population varies between 14% and 40%. [ 77 ] In twins that share the same placenta, delivery may be considered at 36 weeks. [ 78 ] For twins who are born early, there is insufficient evidence for or against placing preterm stable twins in the same cot or incubator (co-bedding). [ 79 ] Twin studies are utilized in an attempt to determine how much of a particular trait is attributable to either genetics or environmental influence. These studies compare monozygotic and dizygotic twins for medical , genetic , or psychological characteristics to try to isolate genetic influence from epigenetic and environmental influence. Twins that have been separated early in life and raised in separate households are especially sought-after for these studies, which have been used widely in the exploration of human nature . Classical twin studies are now being supplemented with molecular genetic studies which identify individual genes. In rare cases, dizygotic twins can have different biological fathers, when two eggs released in the same cycle are fertilized by sperm from two different men; this phenomenon is known as heteropaternal superfecundation . One 1992 study estimates that the frequency of heteropaternal superfecundation among dizygotic twins, whose parents were involved in paternity suits, was approximately 2.4%. [ citation needed ] Dizygotic twins from biracial couples can sometimes be mixed twins , which exhibit differing ethnic and racial features. One such pairing was born in London in 1993 to a white mother and Caribbean father.
[ 80 ] Among monozygotic twins, in extremely rare cases, twins have been born with different sexes (one male, one female). [ 81 ] When monozygotic twins are born with different sexes it is because of chromosomal defects. The probability of this is so small that multiples having different sexes is universally accepted as a sound basis for in utero clinical determination that the multiples are not monozygotic. Another abnormality that can result in monozygotic twins of different sexes is if the egg is fertilized by a male sperm but during cell division only the X chromosome is duplicated. This results in one normal male (XY) and one female with Turner syndrome (45,X). [ 82 ] In these cases, although the twins did form from the same fertilized egg, it is incorrect to refer to them as genetically identical, since they have different karyotypes . Monozygotic twins can develop differently, due to their genes being differently activated. [ 83 ] More unusual are "semi-identical twins", also known as "sesquizygotic". As of 2019, only two cases have been reported. [ 84 ] [ 85 ] These "half-identical twins" are hypothesized to occur when an ovum is fertilized by two sperm . The cell assorts the chromosomes by heterogonesis and the cell divides into two, with each daughter cell now containing the correct number of chromosomes. The cells continue to develop into a morula . If the morula then undergoes a twinning event, two embryos will be formed, with different paternal genes but identical maternal genes. [ 86 ] In 2007, a study reported a case of a pair of living twins which shared an identical set of maternal chromosomes, while each had a distinct set of paternal chromosomes, albeit from the same man, and thus they most likely share half of their father's genetic makeup. The twins were both found to be chimeras . One was an intersex XX, and one an XY male. The exact mechanism of fertilization could not be determined, but the study stated that it was unlikely to be a case of polar body twinning. [ 87 ] [ 88 ] The likely genetic basis of semi-identical twins was reported in 2019 by Michael Gabbett and Nicholas Fisk . In their publication, Gabbett, Fisk and colleagues documented a second case of sesquizygosis and presented molecular evidence of the phenomenon. [ 84 ] The reported twins shared 100% of their maternal chromosomes and 78% of their paternal genomic information. The authors presented evidence that two sperm from the same man fertilized an ovum simultaneously. The chromosomes assorted themselves through heterogonesis to form three cell lines. The purely paternal cell line died out due to genomic imprinting lethality, while the other two cell lines, each consisting of the same maternal DNA but only 50% identical paternal DNA, formed a morula which subsequently split into twins. [ 84 ] [ 89 ] Mirror image twins result when a fertilized egg splits later in the embryonic stage than the normal timing, around day 9–12. This type of twinning can exhibit characteristics of reversed asymmetry, such as opposite dominant handedness, dental structure, or even organs ( situs inversus ). [ 90 ] If the split occurs later than this time period, the twins risk being conjoined. There is no DNA-based zygosity test that can determine whether twins are indeed mirror image. [ 91 ] The term "mirror image" is used because the twins, when facing each other, appear as matching reflections.
[ 92 ] There have been many studies highlighting the development of language in twins compared to single-born children. These studies have converged on the notion that there is a greater rate of delay in language development in twins compared to their single-born counterparts. [ 93 ] The reasons for this phenomenon are still in question; however, cryptophasia was thought to be the major cause. [ 94 ] Idioglossia is defined as a private language that is usually invented by young children, specifically twins. Another term to describe what some people call "twin talk" is cryptophasia, where a language is developed by twins that only they can understand. The increased focused communication between two twins may isolate them from the social environment surrounding them. Idioglossia has been found to be a rare occurrence and the attention of scientists has shifted away from this idea. However, there are researchers and scientists who say cryptophasia or idioglossia is not a rare phenomenon. Current research is looking into the impacts of a richer social environment for these twins to stimulate their development of language. [ 95 ] Non-human dizygotic twinning is a common phenomenon in multiple animal species, including cats, dogs, cattle, bats, chimpanzees, and deer. This should not be confused with an animal's ability to produce a litter , because while litters are caused by the release of multiple eggs during an ovulation cycle, just as dizygotic twins are, they produce more than two offspring. Species such as sheep, goats, and deer have a higher propensity for dizygotic twinning, meaning that they carry a higher frequency of the allele responsible for the likelihood of twins, rather than the likelihood of litters (Whitcomb, 2021). Cases of monozygotic twinning in the animal kingdom are rare but have been recorded on a number of occasions. In 2016, a C-section of an Irish Wolfhound revealed identical twin puppies sharing a single placenta. The South African scientists who were called in to study the identical twins wrote that "To the best of our knowledge, this is the first report of monozygotic twinning in the dog confirmed using DNA profiling " (Horton, 2016). Additionally, armadillos have also been known to produce monozygotic twins, sometimes birthing two sets of identical twins during one reproductive cycle. Monozygotic twinning in armadillos functions as an evolutionary adaptation preventing inbreeding. Once an armadillo offspring enters its reproductive stage, the organism is forced to leave the nest in search of its mate, rather than mating with its siblings. Not only does monozygotic twinning dissuade armadillo siblings from inbreeding, but by forcing migration from the nest, this adaptation ensures the increased genetic variation and geographical population diffusion of armadillo species. Due to the increased parental investment provided for their offspring, larger mammals with longer life spans have slower reproductive cycles and tend to birth only one offspring at a time. This commonly repeated behavior in larger mammals evolved as a fixed, naturally selected adaptation, resulting in a decreased twinning propensity in species such as giraffes, elephants, and hippopotami. Despite this adaptation, a case of rare monozygotic twinning has been documented in two elephant calves at the Bandipur Tiger Reserve in Karnataka, India.
Chief Veterinarian of the Wildlife Trust of India, NVK Ashraf, in response to the twinning event, wrote that "in species that invest longer time in producing a baby, taking care of two twin calves will be difficult. Therefore, the incidence of twinning will be comparatively less." Ashraf's insight not only illuminates the rarity of twinning among large mammals in the natural world, but directs our attention to the increased twinning propensity of animals under human care. This increased twinning propensity is thought to be caused either by random mutation facilitated by genetic drift, or by the positive selection of the "twinning" trait in human-controlled conditions. With the removal of natural predators and unpredictable environmental conditions, and the increase of human-provided food and medical care, species residing in nature reserves, zoos, etc., carry an increased likelihood of reversing naturally selected traits that have been passed on for generations. When considering this phenomenon in relation to twinning, larger mammals not commonly associated with high twinning propensities can perhaps produce twins as an adaptive response to their human-controlled environment. Additionally, the high twinning propensity of a species is thought to be positively correlated with the infant mortality rate of the reproducing organism's environment (Rickard, 2022, p. 2). Thus if a species lives in a controlled environment with a low infant mortality rate, the frequency of the "twinning trait" could increase, leading to a higher likelihood of producing twin offspring. In the case of the monozygotic twin calves in India, their existence could be connected to a new, positively selected adaptation of twinning attributed to species living under human care (Ward, 2014, p. 7-11). Species with small physiques and quick reproductive cycles carry high twinning propensities as a result of increased predation and high mortality rates. As scientists continue to study the origin of dizygotic twinning in the animal kingdom, many have turned to species that demonstrated an increased output of twins during periods of evolutionary distress and natural selection. Through their studies on Vespertilionidae and Cebidae species, the scientists Guilherme Siniciato Terra Garbino (2021) and Marco Varella (2018) have argued that smaller species experiencing infertility in old age and/or unstable habitats as a result of increased predation or human interference have undergone natural selection toward even higher twinning propensities. In his study on the evolution of litter size in bats, Garbino discovered that the family Vespertilionidae has higher twinning propensities as a result of their high roosting habitats. When tracked phylogenetically, scientists determined that the common ancestor of bats carried a higher twinning propensity which was then lost, and picked up again, eighteen times in evolutionary history. While other bat subfamilies such as Myotinae and Murinae eventually lost the twinning trait, the family Vespertilionidae retained a high trait frequency due to mutation and environmental conditions that triggered natural selection. The height and exposed nature of Vespertilionidae's roosting locations resulted in a sharp increase in species mortality rate. Natural selection offsets these dangers by positively selecting for high twinning propensity, resulting not only in Vespertilionidae's increased ability to produce twins but in the increased likelihood of the family's reproductive survival.
This means that despite the family's high exposure to factors that would seemingly increase mortality rates, Vespertilionidae counteracts its environmental conditions through the evolutionary adaptation of dizygotic twinning. The prevalence of dizygotic twinning in monkeys is thought to be an "insurance adaptation" for mothers reproducing at the end of their fertile years. While dizygotic twinning has been observed in species such as gorillas and chimpanzees, monkeys in the family Cebidae are found to be more likely to produce twins because of their small size and insect-based diet (Varella, 2018). Their small size entails shorter gestation periods and the rapid maturation of offspring, resulting in a shorter lifespan in which organisms are rapidly replaced by newer generations. The smaller size of cebid monkeys also makes these species more susceptible to predators, thus accelerating the pace of birth, maturation, reproduction, and death. Meanwhile, the insectivorous diet of the Cebidae can be correlated with their heightened ability to reproduce: as more resources become available, more organisms can take advantage of them. Thus, monkeys that are smaller and have more access to food, such as the cebids, have the ability to produce more offspring at a quicker pace. In terms of dizygotic twinning, it has been observed that older mothers within the Cebidae have a higher chance of producing twins than those at the beginning stages of their fertility. Despite their access to resources, the Cebidae have a high mortality rate attributed to their size, meaning that in order to keep up their quickened life cycle, they must produce an excess of offspring to ensure generational survival. The positively selected adaptation of twinning counteracts the family's high mortality rate by giving older mothers the chance to produce more than one offspring. This not only increases the likelihood that one or more of these offspring will reach reproductive maturity, but gives the mother a chance to birth at least one viable offspring despite her age. Due to their short life cycles, the Cebidae are more inclined to produce dizygotic twins in their older reproductive years, signaling that high twinning propensity is a trait passed down in service of the family's survival.
https://en.wikipedia.org/wiki/Twin
Twin bridges are a set of two bridges running parallel to each other. A pair of twin bridges is often referred to collectively as a twin-span or dual-span bridge. Twin bridges are independent structures and each bridge has its own superstructure , substructure , and foundation . [ 1 ] Bridges of this type are often created by building a new bridge parallel to an existing one in order to increase the traffic capacity of the crossing. While most twin-span bridges consist of two identical bridges, this is not always the case. For a bridge owner, twin bridges can improve the maintenance and management of the structures. For motorists, twin bridges can limit the risk that both directions of traffic will be disrupted by an accident. [ 1 ]
https://en.wikipedia.org/wiki/Twin_bridges
In data compression , twin vector quantization is related to vector quantization , but the speed of the quantization is doubled by the secondary vector analyzer. By using a subdimensional vector space, useless hyperspace is destroyed in the process. The formula for calculating the amount of destroyed hyperspace is:
https://en.wikipedia.org/wiki/Twin_vector_quantization
Twine is a stand-alone device that uses sensors to detect parts of its environment and that connects to a Wi-Fi network to communicate. Rules loaded into the Twine can test for sensor conditions and, based on logic, send messages through email or SMS , make an HTTP request , or light an LED . [ 1 ] It can act as a data logger . The device was created by Supermechanical in the US from funding raised on Kickstarter . The original funding goal was $35,000, yet the project raised $556,541 from 3,966 backers by January 3, 2012. [ 2 ] The product successfully shipped in November 2012. As of April 5, 2016, Supermechanical no longer manufactures Twine. [ 3 ] [ 4 ]
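Twine's rules pair a sensor condition with an action, in the style of "when moisture is detected, send me a text". The Python sketch below is a generic illustration of such a condition/action rule engine, with invented sensor names and actions; it is not based on Twine's actual firmware or API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # tests the latest sensor readings
    action: Callable[[], None]          # fired when the condition holds

def evaluate(rules, readings):
    for rule in rules:
        if rule.condition(readings):
            rule.action()

rules = [
    Rule("basement flood",
         lambda r: r.get("moisture", 0) > 0.8,
         lambda: print("send SMS: water detected!")),
    Rule("door opened",
         lambda r: r["orientation"] != "flat",
         lambda: print("make HTTP request to notify server")),
]

evaluate(rules, {"moisture": 0.9, "orientation": "flat"})
# -> send SMS: water detected!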
https://en.wikipedia.org/wiki/Twine_(device)
In molecular biology, a twintron is an intron -within-intron excised by sequential splicing reactions. A twintron is presumably formed by the insertion of a mobile intron into an existing intron. Twintrons were discovered by Donald W. Copertino and Richard B. Hallick as a group II intron within another group II intron in the Euglena chloroplast genome. [ 1 ] They found that splicing of both the internal and external introns occurs via lariat intermediates. Additionally, twintron splicing was found to proceed by a sequential pathway, the internal intron being removed prior to the excision of the external intron. Since the original discovery, there have been other reports of group III twintrons and group II/III twintrons in the chloroplast of Euglena gracilis . In 1993 a new type of complex twintron composed of four individual group III introns was characterized. [ 2 ] The external intron was interrupted by an internal intron containing two additional introns. In 1995 scientists discovered the first non- Euglena twintron in the cryptomonad alga Pyrenomonas salina . [ 3 ] In 2004, several twintrons were discovered in Drosophila . [ 4 ] The majority of these twintrons have been characterized within the Euglena chloroplast genome, but these elements have also been found in cryptomonad algae ( Pyrenomonas salina ), [ 5 ] and group I intron based twintrons (a group I intron inserted within a group I intron) have been described in Didymium iridis . [ 6 ] Since the discovery of the psbF twintron, several categories of twintrons have been characterized. A twintron can be simple (external intron interrupted by one internal intron) or complex (external intron interrupted by multiple internal introns). [ 7 ] Most often, the internal and external introns comprising the twintron element are from the same category: group I internal to group I, [ 8 ] group II internal to group II, [ 9 ] and group III internal to group III. [ 10 ] Mixed twintrons (consisting of introns belonging to different categories) were characterized from the Euglena gracilis rps3 gene, in which an internal group II intron is found to interrupt an external group III intron. [ 11 ] In Rhodomonas salina (= Pyrenomonas salina ) twintrons (nested group II/group III introns) were identified where the internal intron lost its splicing capacity, essentially merging with the outer intron to form one splicing unit. [ 12 ] Recently, two novel twintrons have been uncovered within the fungal mitochondrial genome: one at position mS917 of the Cryphonectria parasitica mt-rns gene, where a group ID intron encoding a LAGLIDADG ORF invaded another ORF-less group ID intron, and another at position mS1247 of the Chaetomium thermophilum mt-rns gene, where a group IIA1 intron invaded the open reading frame embedded within a group IC2 intron. [ 13 ] The mS1247 twintron represents the first recorded fungal mitochondrial mixed twintron, consisting of a group II intron as an internal intron and a group I intron as an external intron. In the mS1247 twintron, splicing of the internal group IIA1 intron reconstitutes the open reading frame encoded within the group IC2 intron and thus facilitates the expression of the encoded homing endonuclease. The ORF encoded by the mS1247 twintron has been biochemically characterized, and the results showed that it is an active homing endonuclease that could potentially mobilize the twintron to rns genes that have not yet been invaded by this mobile composite element. [ 14 ]
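Sequential splicing of a twintron can be pictured as removing the internal intron first, which restores a contiguous external intron that is then excised in turn. The following Python toy model (coordinates and sequences are invented for illustration, not taken from any real gene) makes that ordering concrete:

def excise(seq, start, end):
    """Remove seq[start:end] and return the re-joined sequence."""
    return seq[:start] + seq[end:]

# Invented example: exons in upper case, the external intron in lower case,
# and the internal intron marked with 'X', interrupting the external one.
pre_mrna = "AAAA" + "gggg" + "XXXX" + "gggg" + "TTTT"

# Step 1: the internal intron is removed first (positions 8-12),
# reconstituting the uninterrupted external intron "gggggggg".
step1 = excise(pre_mrna, 8, 12)        # -> "AAAAggggggggTTTT"

# Step 2: only now can the external intron (positions 4-12) be excised,
# joining the flanking exons.
mature = excise(step1, 4, 12)          # -> "AAAATTTT"

print(step1, mature)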
https://en.wikipedia.org/wiki/Twintron
In differential geometry , the twist of a ribbon is its rate of axial rotation . Let a ribbon ( X , U ) {\displaystyle (X,U)} be composed of a space curve , X = X ( s ) {\displaystyle X=X(s)} , where s {\displaystyle s} is the arc length of X {\displaystyle X} , and U = U ( s ) {\displaystyle U=U(s)} a unit normal vector , perpendicular at each point to ∂ X / ∂ s {\displaystyle \partial X/\partial s} . Since the ribbon ( X , U ) {\displaystyle (X,U)} has edges X {\displaystyle X} and X ′ = X + ε U {\displaystyle X'=X+\varepsilon U} , the twist (or total twist number ) T w {\displaystyle Tw} measures the average winding of the edge curve X ′ {\displaystyle X'} around and along the axial curve X {\displaystyle X} . According to Love (1944) twist is defined by T w = 1 2 π ∫ ( d X d s × U ) ⋅ d U d s d s {\displaystyle Tw={\frac {1}{2\pi }}\int \left({\frac {dX}{ds}}\times U\right)\cdot {\frac {dU}{ds}}\,ds} , where d X / d s {\displaystyle dX/ds} is the unit tangent vector to X {\displaystyle X} . The total twist number T w {\displaystyle Tw} can be decomposed (Moffatt & Ricca 1992) into normalized total torsion T ∈ [ 0 , 1 ) {\displaystyle T\in [0,1)} and intrinsic twist N ∈ Z {\displaystyle N\in \mathbb {Z} } as T w = 1 2 π ∫ τ d s + [ Θ ] X 2 π = T + N {\displaystyle Tw={\frac {1}{2\pi }}\int \tau \,ds+{\frac {\left[\Theta \right]_{X}}{2\pi }}=T+N} , where τ = τ ( s ) {\displaystyle \tau =\tau (s)} is the torsion of the space curve X {\displaystyle X} , and [ Θ ] X {\displaystyle \left[\Theta \right]_{X}} denotes the total rotation angle of U {\displaystyle U} along X {\displaystyle X} . Neither N {\displaystyle N} nor T w {\displaystyle Tw} is independent of the ribbon field U {\displaystyle U} . Instead, only the normalized torsion T {\displaystyle T} is an invariant of the curve X {\displaystyle X} (Banchoff & White 1975). When the ribbon is deformed so as to pass through an inflectional state (i.e. X {\displaystyle X} has a point of inflection ), the torsion τ {\displaystyle \tau } becomes singular. The total torsion T {\displaystyle T} jumps by ± 1 {\displaystyle \pm 1} and the total angle N {\displaystyle N} simultaneously makes an equal and opposite jump of ∓ 1 {\displaystyle \mp 1} (Moffatt & Ricca 1992) and T w {\displaystyle Tw} remains continuous. This behavior has many important consequences for energy considerations in many fields of science (Ricca 1997, 2005; Goriely 2006). Together with the writhe W r {\displaystyle Wr} of X {\displaystyle X} , twist is a geometric quantity that plays an important role in the application of the Călugăreanu–White–Fuller formula L k = W r + T w {\displaystyle Lk=Wr+Tw} in topological fluid dynamics (for its close relation to kinetic and magnetic helicity of a vector field), physical knot theory , and structural complexity analysis.
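As a sanity check on the definition above, the twist can be computed numerically by discretizing the integral Tw = (1/2π) ∫ (dX/ds × U) · (dU/ds) ds. The Python sketch below does this for a straight axis with a uniformly rotating normal vector, where the exact answer is simply the number of turns of U; the setup is purely illustrative and is not drawn from the cited literature.

import numpy as np

def twist(X, U):
    """Approximate Tw = (1/2pi) * integral of (dX/ds x U) . dU/ds ds
    for discretized curves X(s), U(s) given as (n, 3) arrays."""
    ds = np.linalg.norm(np.diff(X, axis=0), axis=1)   # segment lengths
    s = np.concatenate([[0.0], np.cumsum(ds)])        # arc-length parameter
    dXds = np.gradient(X, s, axis=0)                  # unit tangent
    dUds = np.gradient(U, s, axis=0)
    integrand = np.einsum('ij,ij->i', np.cross(dXds, U), dUds)
    return np.trapz(integrand, s) / (2.0 * np.pi)

# Straight ribbon axis along z with the normal making 3 full turns:
n = 2000
s = np.linspace(0.0, 1.0, n)
X = np.stack([np.zeros(n), np.zeros(n), s], axis=1)
theta = 2.0 * np.pi * 3.0 * s
U = np.stack([np.cos(theta), np.sin(theta), np.zeros(n)], axis=1)

print(twist(X, U))   # approximately 3.0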
https://en.wikipedia.org/wiki/Twist_(differential_geometry)
The Twist Compression Tester ("TCT") is a hydraulically operated bench-top apparatus used to evaluate the level of friction and/or wear between two materials under lubricated or non-lubricated conditions. Under controlled conditions, a rotating annular specimen is brought into contact with a non-rotating flat specimen. Specimens can be prepared from die materials, sheet or plate materials, metals or plastics. The applied normal force and the torque are measured, and the coefficient of friction is calculated. Although the twist-compression test does not simulate an actual process, it has been demonstrated to correlate well with processes where boundary lubrication predominates and lubricant depletion occurs. The Twist Compression Tester is a useful diagnostic tool for evaluating lubricants, materials and coatings; screening products for production; understanding the effect of additives in a lubricant; and many other testing scenarios. [ 1 ] The Twist Compression Tester was developed by Professor John Schey, formerly of the University of Waterloo , and is manufactured, according to his design, under exclusive license by the Industrial Research + Development Institute ( Midland , Ontario , Canada ). Tribsys also manufactures a twist compression machine, and other approaches have been used with the Falex MCTT MultiContactTriboTester in thrust-washer configurations.
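For an annular specimen, the measured torque relates to the friction force acting at an effective mean radius of the contact ring, so the coefficient of friction can be estimated as μ = T / (F · r_m). The short Python sketch below illustrates this standard reduction; the geometry and readings are invented, and the exact formula used by any given TCT's data-acquisition software may differ.

def friction_coefficient(torque_nm, normal_force_n, r_outer_m, r_inner_m):
    """Estimate the coefficient of friction from twist-compression data.

    Uses the effective mean radius of the annular contact,
    r_m = (2/3) * (ro**3 - ri**3) / (ro**2 - ri**2),
    which assumes uniform contact pressure over the ring.
    """
    r_m = (2.0 / 3.0) * (r_outer_m**3 - r_inner_m**3) / (r_outer_m**2 - r_inner_m**2)
    return torque_nm / (normal_force_n * r_m)

# Invented example readings: 12 N*m of torque under a 10 kN load
# on an annulus with 25 mm outer and 20 mm inner radius.
mu = friction_coefficient(12.0, 10_000.0, 0.025, 0.020)
print(f"coefficient of friction = {mu:.3f}")   # about 0.053

In a lubricant-screening run, this coefficient is typically tracked over time at constant load: a sudden rise signals lubricant depletion and the onset of metal-to-metal contact.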
https://en.wikipedia.org/wiki/Twist_compression_tester
In mathematics, the twisted Poincaré duality is a theorem removing the restriction of Poincaré duality to oriented manifolds . The existence of a global orientation is replaced by carrying along local information, by means of a local coefficient system . Another version of the theorem with real coefficients features de Rham cohomology with values in the orientation bundle . This is the flat real line bundle denoted o ( M ) {\displaystyle o(M)} , that is trivialized by coordinate charts of the manifold M {\displaystyle M} , with transition maps the sign of the Jacobian determinant of the charts transition maps. As a flat line bundle , it has a de Rham cohomology, denoted by H d R ∗ ( M ; o ( M ) ) {\displaystyle H_{dR}^{*}(M;o(M))} . For M a compact manifold, the top degree cohomology is equipped with a so-called trace morphism ∫ M : H d R n ( M ; o ( M ) ) → R {\displaystyle \int _{M}:H_{dR}^{n}(M;o(M))\to \mathbf {R} } , that is to be interpreted as integration on M , i.e. , evaluating against the fundamental class . Poincaré duality for differential forms is then the conjunction, for M connected, of the following two statements: the trace morphism is a linear isomorphism, and the pairing given by the exterior product of differential forms, H d R k ( M ) ⊗ H d R n − k ( M ; o ( M ) ) → H d R n ( M ; o ( M ) ) ≅ R {\displaystyle H_{dR}^{k}(M)\otimes H_{dR}^{n-k}(M;o(M))\to H_{dR}^{n}(M;o(M))\cong \mathbf {R} } , is non-degenerate. The oriented Poincaré duality is contained in this statement, as understood from the fact that the orientation bundle o(M) is trivial if the manifold is oriented, an orientation being a global trivialization, i.e. , a nowhere vanishing parallel section.
https://en.wikipedia.org/wiki/Twisted_Poincaré_duality
In mathematics, a twisted sheaf is a variant of a coherent sheaf . Precisely, it is specified by: an open covering in the étale topology U i , coherent sheaves F i over U i , a Čech 2-cocycle θ for G m {\displaystyle \mathbb {G} _{m}} on the covering U i , as well as isomorphisms g i j : F j | U i × X U j → F i | U i × X U j {\displaystyle g_{ij}:F_{j}|_{U_{i}\times _{X}U_{j}}\to F_{i}|_{U_{i}\times _{X}U_{j}}} satisfying g i i = id {\displaystyle g_{ii}=\operatorname {id} } , g i j = g j i − 1 {\displaystyle g_{ij}=g_{ji}^{-1}} , and g i j ∘ g j k ∘ g k i = θ i j k {\displaystyle g_{ij}\circ g_{jk}\circ g_{ki}=\theta _{ijk}} (multiplication by the cocycle). The notion of twisted sheaves was introduced by Jean Giraud . The above definition due to Căldăraru is down-to-earth but is equivalent to a more sophisticated definition in terms of gerbes ; see § 2.1.3 of ( Lieblich 2007 ).
https://en.wikipedia.org/wiki/Twisted_sheaf
Twister OS (Twister for short) is a 32-bit operating system created by Pi Labs, originally for the Raspberry Pi single-board computer , with an x86_64 PC version released a few months later. [ 1 ] [ 2 ] Twister is meant to be a general-purpose OS that is familiar or nostalgic to users. Twister is based on Raspberry Pi OS Lite and uses the XFCE desktop environment. [ 3 ] Twister OS also has a version called "Twister OS Armbian" designed for ARM SBCs with the RK3399 CPU. [ 1 ] There are four versions of the operating system: Twister OS Full (for the Raspberry Pi 4), Twister OS Lite (a stripped-down version with only the themes), Twister UI (for x86_64 PCs running Linux Mint or Xubuntu) and Twister OS Armbian (for RK3399 CPUs). [ 1 ] Twister OS has seven main desktop themes, five of which have dark modes. [ 4 ] Twister OS has its own theme called "Twister OS theme". The Twister 95, XP, 7, 10, and 11 themes are similar to the themes of the Windows 95, XP, 7, 10 and 11 operating systems. The iTwister and iTwister Sur desktop themes are similar to the themes of macOS . [ 3 ] Box86 is an emulator used to run x86 software and games on ARM systems. [ 5 ] [ 6 ] Wine is a compatibility layer that lets the user run Windows applications on non-Windows systems. [ 1 ] [ 3 ] CommanderPi is a system monitoring and configuration tool designed to check system information and overclock the CPU. [ 7 ] Twister OS Lite is for the Raspberry Pi as well. The Lite version comes only with the themes of Twister OS, as well as Box86 and Wine. [ 1 ] Twister UI is very similar to Twister OS; the only difference is that Twister UI is used for non-single-board computers. [ 1 ] [ 8 ] Twister UI is designed to be installed by running a setup script on an already running installation of Linux Mint (XFCE) or Xubuntu. [ 1 ] [ 8 ] Twister OS Armbian is a version of Twister OS that can run on SBCs with RK3399 CPUs, like the Rock Pi 4B. [ 9 ] Twister OS Armbian also comes preinstalled on eMMC chips inside the Rock Pi 4 Plus models. [ 10 ] Twister OS Armbian is based on the Armbian Linux operating system.
https://en.wikipedia.org/wiki/Twister_OS
The Twister supersonic separator is a compact tubular device used for water removal and/or hydrocarbon dewpointing of natural gas . [ 1 ] [ 2 ] The principle of operation is similar to the near-isentropic Brayton cycle of a turboexpander. The gas is accelerated to supersonic velocities within the tube using a De Laval nozzle , and inlet guide vanes spin the gas around an inner body, which creates the "ballerina effect" and centrifugally separates the water and liquids in the tube. Hydrates do not form in the Twister tube due to the very short residence time of the gas in the tube (around 2 milliseconds). A secondary separator treats the liquids and slip gas and also acts as a hydrate control vessel. Twister is able to dehydrate to typical pipeline dewpoint specifications and relies on a pressure drop from the inlet of about 25%, dependent on the performance required. The fundamental mathematics behind supersonic separation can be found in the Society of Petroleum Engineers paper (number 100442) entitled "Selective Removal of Water from Supercritical Natural Gas". [ 3 ] The closed Twister system enables gas treatment subsea . [ 4 ] It is a product of Twister BV , a Dutch firm acquired by WAEP Coöperatief U.A.
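The cooling that condenses water and heavy hydrocarbons comes from the near-isentropic expansion mentioned above: for an ideal gas, an isentropic pressure drop implies T2/T1 = (p2/p1)^((γ-1)/γ). The Python sketch below applies this textbook relation to illustrate the order of magnitude of the temperature drop for the quoted 25% overall pressure loss; the inlet conditions and the heat-capacity ratio for natural gas are assumed values, and real performance depends on non-ideal gas behaviour and on the much deeper local expansion inside the nozzle.

# Ideal-gas isentropic relation: T2 / T1 = (p2 / p1) ** ((gamma - 1) / gamma)
GAMMA = 1.3            # assumed heat-capacity ratio for a typical natural gas

def isentropic_outlet_temp(t1_k, p1_bar, p2_bar, gamma=GAMMA):
    return t1_k * (p2_bar / p1_bar) ** ((gamma - 1.0) / gamma)

t1 = 300.0             # assumed inlet temperature, K
p1 = 100.0             # assumed inlet pressure, bar
p2 = 0.75 * p1         # ~25% overall pressure drop, as quoted for Twister

t2 = isentropic_outlet_temp(t1, p1, p2)
print(f"outlet temperature ~ {t2:.0f} K (drop of ~{t1 - t2:.0f} K)")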
https://en.wikipedia.org/wiki/Twister_supersonic_separator
Twistronics (from twist and electronics ) is the study of how the angle (the twist) between layers of two-dimensional materials can change their electrical properties. [ 1 ] [ 2 ] Materials such as bilayer graphene have been shown to have vastly different electronic behavior, ranging from non-conductive to superconductive , that depends sensitively on the angle between the layers. [ 3 ] [ 4 ] The term was first introduced by the research group of Efthimios Kaxiras at Harvard University in their theoretical treatment of graphene superlattices. [ 1 ] [ 5 ] Pablo Jarillo-Herrero , Allan H. MacDonald and Rafi Bistritzer were awarded the 2020 Wolf Prize in Physics for their theoretical and experimental work on twisted bilayer graphene. [ 6 ] In 2007, National University of Singapore physicist Antonio H. Castro Neto hypothesized that pressing two misaligned graphene sheets together might yield new electrical properties, and separately proposed that graphene might offer a route to superconductivity, but he did not combine the two ideas. [ 4 ] In 2010, researchers in Eva Andrei 's laboratory at Rutgers University in Piscataway, New Jersey discovered twisted bilayer graphene through its defining moiré pattern and demonstrated that the twist angle has a strong effect on the band structure by measuring greatly renormalized van Hove singularities . [ 7 ] Also in 2010, researchers from Federico Santa María Technical University in Chile found that for a certain angle close to 1 degree a band of the electronic structure of twisted bilayer graphene becomes completely flat, [ 8 ] and because of that theoretical property, they suggested that collective behavior might be possible. In 2011, Allan H. MacDonald (of the University of Texas at Austin ) and Rafi Bistritzer, using a simple theoretical model, found that for the previously found "magic angle" the amount of energy a free electron would require to tunnel between two graphene sheets radically changes. [ 9 ] In 2017, the research group of Efthimios Kaxiras at Harvard University used detailed quantum mechanics calculations to reduce uncertainty in the twist angle between two graphene layers that can induce extraordinary behavior of electrons in this two-dimensional system. [ 1 ] In 2018, Pablo Jarillo-Herrero , an experimentalist at the Massachusetts Institute of Technology , found that the magic angle resulted in the unusual electrical properties that MacDonald and Bistritzer had predicted. [ 10 ] At a rotation of 1.1 degrees and sufficiently low temperatures, electrons move from one layer to the other, creating a lattice and the phenomenon of superconductivity. [ 11 ] Publication of these discoveries has generated a host of theoretical papers seeking to understand and explain the phenomena, [ 12 ] as well as numerous experiments [ 3 ] using varying numbers of layers, twist angles and other materials. [ 4 ] [ 13 ] Subsequent works showed that the electronic properties of the stack can also depend strongly on heterostrain, especially near the magic angle, [ 14 ] [ 15 ] allowing potential applications in straintronics . The theoretical predictions of superconductivity were confirmed by Pablo Jarillo-Herrero and his student Yuan Cao of MIT and colleagues from Harvard University and the National Institute for Materials Science in Tsukuba , Japan. In 2018 they verified that superconductivity existed in bilayer graphene where one layer was rotated by an angle of 1.1° relative to the other, forming a moiré pattern , at a temperature of 1.7 K (−271.45 °C; −456.61 °F).
[ 2 ] [ 16 ] [ 17 ] They created two bilayer devices that acted as an insulator instead of a conductor without a magnetic field. Increasing the field strength turned the second device into a superconductor. A further advance in twistronics is the discovery of a method of turning the superconductive paths on and off by application of a small voltage differential. [ 18 ] Experiments have also been done using combinations of graphene layers with other materials that form heterostructures in the form of atomically thin sheets that are held together by the weak Van der Waals force . [ 19 ] For example, a study published in Science in July 2019 found that with the addition of a boron nitride lattice between two graphene sheets, unique orbital ferromagnetic effects were produced at a 1.17° angle, which could be used to implement memory in quantum computers . [ 20 ] Further spectroscopic studies of twisted bilayer graphene revealed strong electron-electron correlations at the magic angle. [ 21 ] Between 2-D layers of bismuth selenide and a dichalcogenide, researchers at Northeastern University in Boston discovered that at specific degrees of twist a new lattice layer, consisting only of pure electrons, would develop between the two 2-D elemental layers. [ 22 ] The quantum and physical effects of the alignment between the two layers appear to create "puddle" regions which trap electrons into a stable lattice. Because this stable lattice consists only of electrons, it is the first non-atomic lattice observed and suggests new opportunities to confine, control, measure, and transport electrons. A three-layer construction, consisting of two layers of graphene with a 2-D layer of boron nitride, has been shown to exhibit superconductivity, insulation and ferromagnetism. [ 23 ] In 2021, this was achieved on a single graphene flake. [ 24 ]
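The moiré pattern central to these experiments has a superlattice period that grows rapidly as the twist angle shrinks: for two identical lattices of constant a rotated by θ, the moiré wavelength is λ = a / (2 sin(θ/2)). The short Python sketch below evaluates this standard geometric relation near the 1.1° magic angle, using graphene's lattice constant; it illustrates the geometry only, not the band-structure calculations discussed above.

import math

A_GRAPHENE = 0.246   # graphene lattice constant in nanometres

def moire_wavelength(theta_deg, a=A_GRAPHENE):
    """Moire superlattice period for two identical lattices twisted by theta."""
    theta = math.radians(theta_deg)
    return a / (2.0 * math.sin(theta / 2.0))

for theta in (5.0, 2.0, 1.1, 0.5):
    print(f"twist {theta:4.1f} deg -> moire period {moire_wavelength(theta):6.1f} nm")
# At the 1.1 deg magic angle the period is roughly 13 nm,
# hundreds of times the atomic spacing.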
https://en.wikipedia.org/wiki/Twistronics
Twitching motility is a form of crawling bacterial motility used to move over surfaces. Twitching is mediated by the activity of hair-like filaments called type IV pili which extend from the cell's exterior, bind to surrounding solid substrates, and retract, pulling the cell forwards in a manner similar to the action of a grappling hook . [ 1 ] [ 2 ] [ 3 ] The name twitching motility is derived from the characteristic jerky and irregular motions of individual cells when viewed under the microscope. [ 4 ] It has been observed in many bacterial species, but is best studied in Pseudomonas aeruginosa , Neisseria gonorrhoeae and Myxococcus xanthus . Active movement mediated by the twitching system has been shown to be an important component of the pathogenic mechanisms of several species. [ 2 ] The type IV pilus complex consists of both the pilus itself and the machinery required for its construction and motor activity. The pilus filament is largely composed of the PilA protein, with rarer minor pilins at the tip. These are thought to play a role in the initiation of pilus construction. [ 5 ] Under normal conditions, the pilin subunits are arranged as a helix with five subunits in each turn, [ 5 ] [ 6 ] but pili under tension are able to stretch and rearrange their subunits into a second configuration with around 1⅔ subunits per turn. [ 7 ] Three subcomplexes form the apparatus responsible for assembling and retracting the type IV pili. [ 8 ] The core of this machinery is the motor subcomplex, consisting of the PilC protein and the cytosolic ATPases PilB and PilT. These ATPases drive pilus extension and retraction respectively, depending on which of the two is currently bound to the pilus complex. Surrounding the motor complex is the alignment subcomplex, formed from the PilM, PilN, PilO and PilP proteins. These proteins form a bridge between the inner and outer membranes and create a link between the inner membrane motor subcomplex and the outer membrane secretion subcomplex. The latter consists of a pore formed from the PilQ protein, through which the assembled pilus can exit the cell. [ 9 ] Regulatory proteins associated with the twitching motility system have strong sequence and structural similarity to those that regulate bacterial chemotaxis using flagella. [ 2 ] [ 10 ] In P. aeruginosa , for example, a total of four homologous chemosensory pathways are present, three regulating swimming motility and one regulating twitching motility. [ 11 ] These chemotactic systems allow cells to regulate twitching so as to move towards chemoattractants such as phospholipids and fatty acids . [ 12 ] In contrast to the run-and-tumble model of chemotaxis associated with flagellated cells, however, movement towards chemoattractants in twitching cells appears to be mediated via regulation of the timing of directional reversals. [ 13 ] Twitching motility is capable of driving the movement of individual cells. [ 1 ] [ 13 ] The resulting pattern of motility is highly dependent upon cell shape and the distribution of pili over the cell surface. [ 14 ] In N. gonorrhoeae , for example, the roughly spherical cell shape and uniform distribution of pili result in cells adopting a 2D random walk over the surface they are attached to. [ 15 ] In contrast, species such as P. aeruginosa and M. xanthus exist as elongated rods with pili localised at their poles, and show much greater directional persistence during crawling due to the resulting bias in the direction of force generation. [ 16 ] P. aeruginosa and M. 
xanthus are also able to reverse direction during crawling by switching the pole of pilus localization. [ 13 ] [ 14 ] Type IV pili also mediate a form of walking motility in P. aeruginosa , in which pili are used to pull the cell rod into a vertical orientation and move it at much higher speeds than during horizontal crawling motility. [ 16 ] [ 17 ] The existence of many pili pulling simultaneously on the cell body results in a balance of forces that determines the movement of the cell body. This is known as the tug-of-war model of twitching motility. [ 14 ] [ 15 ] Sudden changes in the balance of forces, caused by the detachment or release of individual pili, result in a fast jerk (or 'slingshot') that combines fast rotational and lateral movements, in contrast to the slower lateral movements seen during the longer periods between slingshots. [ 18 ] Both the presence of type IV pili and active pilar movement appear to be important contributors to the pathogenicity of several species. [ 8 ] In P. aeruginosa , loss of pilus retraction results in a reduction of bacterial virulence in pneumonia [ 19 ] and reduces colonisation of the cornea. [ 20 ] Some bacteria are also able to twitch along vessel walls against the direction of fluid flow within them, [ 21 ] which is thought to permit colonisation of otherwise inaccessible sites in the vasculatures of plants and animals. Bacterial cells can also be targeted by twitching: during the cell invasion phase of the lifecycle of Bdellovibrio , type IV pili are used by cells to pull themselves through gaps formed in the cell wall of prey bacteria. [ 22 ] Once inside, the Bdellovibrio are able to use the host cell's resources to grow and reproduce, eventually lysing the cell wall of the prey bacterium and escaping to invade other cells. Twitching motility is also important during the formation of biofilms . [ 8 ] During biofilm establishment and growth, motile bacteria are able to interact with secreted extracellular polymeric substances (EPSs) such as Psl, alginate and extracellular DNA. [ 23 ] As they encounter sites of high EPS deposition, P. aeruginosa cells slow down, accumulate and deposit further EPS components. This positive feedback is an important initiating factor for the establishment of microcolonies , the precursors to fully fledged biofilms. [ 24 ] In addition, once biofilms have become established, their twitching-mediated spread is facilitated and organised by components of the EPS. [ 25 ] Twitching can also influence the structure of biofilms. During their establishment, twitching-capable cells are able to crawl on top of cells lacking twitching motility and dominate the fast-growing external surface of the biofilm. [ 23 ] [ 26 ] Type IV pili and related structures can be found across almost all phyla of Bacteria and Archaea ; [ 27 ] however, definitive twitching motility has been shown in a more limited range of prokaryotes. The best studied and most widespread are the twitching Pseudomonadota , such as Neisseria gonorrhoeae , Myxococcus xanthus and Pseudomonas aeruginosa . [ 14 ] [ 8 ] Nevertheless, twitching has been observed in other phyla as well. For example, twitching motility has been observed in the cyanobacterium Synechocystis , [ 28 ] as well as the gram-positive Bacillota Streptococcus sanguinis . [ 29 ] Other structures and systems closely related to type IV pili have also been observed in prokaryotes. 
In Archaea , for example, bundles of type IV-like filaments have been observed to form helical structures similar in both form and function to the bacterial flagellum . These swimming-associated structures have been termed archaella . [ 30 ] Also closely related to the type IV pilus is the type II secretion system , [ 31 ] itself widely distributed amongst gram-negative bacteria . In this secretion system, cargo destined for export is associated with the tips of type IV-like pseudopili in the periplasm. Extension of the pseudopili through secretin proteins similar to PilQ permits these cargo proteins to cross the outer membrane and enter the extracellular environment. Because of this wide but patchy distribution of type IV pilus-like machinery, it has been suggested that the genetic material encoding it has been transferred between species via horizontal gene transfer following its initial development in a single species of Pseudomonadota. [ 6 ]
https://en.wikipedia.org/wiki/Twitching_motility
Two's complement is the most common method of representing signed (positive, negative, and zero) integers on computers, [ 1 ] and more generally, fixed point binary values. Two's complement uses the binary digit with the greatest value as the sign to indicate whether the binary number is positive or negative; when the most significant bit is 1 the number is negative, and when the most significant bit is 0 the number is positive. As a result, non-negative numbers are represented as themselves: 6 is 0110, zero is 0000, and −6 is 1010 (the result of applying the bitwise NOT operator to 6 and adding 1). The number of binary bits is fixed throughout a computation, but it is otherwise arbitrary. Unlike the ones' complement scheme, the two's complement scheme has only one representation for zero. Furthermore, the same arithmetic implementations can be used on signed as well as unsigned integers [ 2 ] and differ only in the integer overflow situations. The procedure for obtaining the two's complement representation of a given negative number in binary digits is: (1) write the binary representation of the number's absolute value, (2) invert all the bits, and (3) add 1. For example, to calculate the decimal number −6 in binary from the number 6 : the binary representation of 6 is 0110, inverting gives 1001, and adding 1 gives 1010. To verify that 1010 indeed has a value of −6 , add the place values together, but subtract the sign value from the final calculation. Because the most significant value is the sign value, it must be subtracted to produce the correct result: 1010 = −(1×2^3) + (0×2^2) + (1×2^1) + (0×2^0) = −8 + 0 + 2 + 0 = −6. Note that steps 2 and 3 together are a valid method to compute the additive inverse −n of any (positive or negative) integer n where both input and output are in two's complement format. An alternative way to compute −n is to use the subtraction 0 − n. See below for subtraction of integers in two's complement format. Two's complement is an example of a radix complement . The 'two' in the name refers to the number 2^N, "two to the power of N", which is the value with respect to which the complement is calculated in an N-bit system (the only case where exactly 'two' is produced by this term is N = 1, i.e. a 1-bit system, but such a system has no capacity for both a sign and a zero). As such, the precise definition of the two's complement of an N-bit number is the complement of that number with respect to 2^N. The defining property of being a complement to a number with respect to 2^N is simply that the sum of this number with the original produces 2^N. For example, using binary with numbers up to three bits (so N = 3 and 2^N = 2^3 = 8 = 1000₂, where '₂' indicates a binary representation), the two's complement of the number 3 (011₂) is 5 (101₂), because summed with the original it gives 2^3 = 1000₂ = 011₂ + 101₂. Where this correspondence is employed to represent negative numbers, it effectively means, by analogy with decimal digits and a number space allowing only the eight non-negative numbers 0 through 7, dividing the number space into two sets: the first four numbers, 0, 1, 2, 3, remain the same, while the remaining four encode negative numbers in the same increasing order, so that 4 encodes −4, 5 encodes −3, 6 encodes −2 and 7 encodes −1. A binary representation has an additional utility, however, because the most significant bit also indicates the group (and the sign): it is 0 for the first group of non-negatives, and 1 for the second group of negatives. 
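The invert-and-add-one procedure translates directly into code. A minimal Python sketch (the function name is illustrative): masking with 2^N − 1 performs exactly the wrap-around that the following paragraphs explain.

```python
def twos_complement(value: int, bits: int) -> str:
    """Return the `bits`-wide two's complement bit string of `value`.

    For a negative value this is equivalent to inverting the bits of
    abs(value) and adding 1; masking with 2**bits - 1 does the same thing.
    """
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value does not fit in the given width")
    return format(value & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(6, 4))   # 0110
print(twos_complement(-6, 4))  # 1010
print(twos_complement(0, 4))   # 0000
```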
Calculation of the binary two's complement of a positive number essentially means subtracting the number from 2^N. But as can be seen for the three-bit example and the four-bit 1000₂ (2^3), the number 2^N itself is not representable in a system limited to N bits, as it lies just outside the N-bit space (the number is nevertheless the reference point of the "two's complement" in an N-bit system). Because of this, systems limited to N bits must break the subtraction into two operations: first subtract from the maximum number in the N-bit system, that is 2^N − 1 (in binary this term is simply 'all 1s', and subtracting from it can be done by inverting every bit in the number, also known as the bitwise NOT operation ), and then add one. Coincidentally, the intermediate number obtained before adding the one is itself used in computer science as another method of signed number representation, called the ones' complement (so named because summing such a number with the original gives 'all 1s'). Compared to other systems for representing signed numbers (e.g., ones' complement ), two's complement has the advantage that the fundamental arithmetic operations of addition , subtraction , and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits as the output, and any overflow beyond those bits is discarded from the result). This property makes the system simpler to implement, especially for higher-precision arithmetic. Additionally, unlike ones' complement systems, two's complement has no representation for negative zero , and thus does not suffer from its associated difficulties. Otherwise, both schemes have the desired property that the sign of an integer can be reversed by taking the complement of its binary representation, but two's complement has one exception, the most negative number, as discussed below. [ 4 ] The method of complements had long been used to perform subtraction in decimal adding machines and mechanical calculators . John von Neumann suggested use of two's complement binary representation in his 1945 First Draft of a Report on the EDVAC proposal for an electronic stored-program digital computer. [ 5 ] The 1949 EDSAC , which was inspired by the First Draft , used two's complement representation of negative binary integers. Many early computers, including the CDC 6600 , the LINC , the PDP-1 , and the UNIVAC 1107, use ones' complement notation; the descendants of the UNIVAC 1107, the UNIVAC 1100/2200 series , continued to do so. The IBM 700/7000 series scientific machines use sign/magnitude notation, except for the index registers, which are two's complement. Early commercial computers storing negative values in two's complement form include the English Electric DEUCE (1955) and the Digital Equipment Corporation PDP-5 (1963) and PDP-6 (1964). The System/360 , introduced in 1964 by IBM , then the dominant player in the computer industry, made two's complement the most widely used binary representation in the computer industry. The first minicomputer, the PDP-8 , introduced in 1965, uses two's complement arithmetic, as do the 1969 Data General Nova , the 1970 PDP-11 , and almost all subsequent minicomputers and microcomputers. A two's-complement number system encodes positive and negative numbers in a binary number representation. 
The weight of each bit is a power of two, except for the most significant bit , whose weight is the negative of the corresponding power of two. The value w of an N-bit integer {\displaystyle a_{N-1}a_{N-2}\dots a_{0}} is given by the following formula: {\displaystyle w=-a_{N-1}2^{N-1}+\sum _{i=0}^{N-2}a_{i}2^{i}} The most significant bit determines the sign of the number and is sometimes called the sign bit . Unlike in sign-and-magnitude representation, the sign bit also has the weight −(2^(N−1)) shown above. Using N bits, all integers from −(2^(N−1)) to 2^(N−1) − 1 can be represented. In two's complement notation, a non-negative number is represented by its ordinary binary representation ; in this case, the most significant bit is 0. However, the range of numbers represented is not the same as with unsigned binary numbers. For example, an 8-bit unsigned number can represent the values 0 to 255 (11111111), whereas a two's complement 8-bit number can only represent non-negative integers from 0 to 127 (01111111), because the remaining bit combinations with the most significant bit as '1' represent the negative integers −1 to −128. The two's complement operation is the additive inverse operation, so negative numbers are represented by the two's complement of the absolute value . To get the two's complement of a binary number, all bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value, ignoring the overflow which occurs when taking the two's complement of 0. For example, using 1 byte (= 8 bits), the decimal number 5 is represented by 0000 0101. The most significant bit (the leftmost bit in this case) is 0, so the pattern represents a non-negative value. To convert to −5 in two's-complement notation, first all bits are inverted, that is, 0 becomes 1 and 1 becomes 0, giving 1111 1010. At this point, the representation is the ones' complement of the decimal value −5. To obtain the two's complement, 1 is added to the result, giving 1111 1011. The result is a signed binary number representing the decimal value −5 in two's-complement form. The most significant bit is 1, signifying that the value represented is negative. Alternatively, instead of adding 1 after inverting a positive binary number, 1 can be subtracted from the number before it is inverted. The two methods are easily shown to be equivalent. The inversion (ones' complement) of x equals {\displaystyle (2^{N}-1)-x}, so the sum of the inversion and 1 equals {\displaystyle (2^{N}-1)-x+1=2^{N}-x-1+1=2^{N}-x}, which is the two's complement of x as expected. The inversion of x − 1 equals {\displaystyle (2^{N}-1)-(x-1)=(2^{N}-1)-x+1=2^{N}-x}, identical to the previous result. Essentially, the subtraction inherent in the inversion operation changes the −1 added to x before the inversion into +1 added after the inversion. This alternative subtract-and-invert algorithm to form a two's complement can sometimes be advantageous in computer programming or hardware design, for example where the subtraction of 1 can be obtained for free by incorporating it into an earlier operation. 
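The weight formula above can be applied directly. A short, illustrative Python sketch that reads a bit string as an N-bit two's-complement integer:

```python
def decode_twos_complement(bits: str) -> int:
    """Interpret `bits` as an N-bit two's complement integer using
    w = -a_{N-1} * 2^(N-1) + sum of a_i * 2^i over the remaining bits."""
    n = len(bits)
    digits = [int(b) for b in bits]              # a_{N-1} ... a_0
    w = -digits[0] * 2 ** (n - 1)                # sign bit has negative weight
    w += sum(d * 2 ** (n - 2 - i) for i, d in enumerate(digits[1:]))
    return w

print(decode_twos_complement("11111011"))  # -5
print(decode_twos_complement("01111111"))  # 127
print(decode_twos_complement("10000000"))  # -128
```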
[ 6 ] The two's complement of a negative number is the corresponding positive value, except in the special case of the most negative number . For example, inverting the bits of −5 (1111 1011 above) gives 0000 0100, and adding one gives the final value 0000 0101 = 5. The two's complement of the most negative number representable (e.g. a one as the most-significant bit and all other bits zero) is itself. Hence, there is an 'extra' negative number for which two's complement does not give the negation; see § Most negative number below. The case of the most negative number is one of only two special cases. The other special case is zero, the two's complement of which is zero: inverting gives all ones, and adding one changes the ones back to zeros (since the overflow is ignored). Mathematically, in the two's complement system of signed integers (which represents the negative of each number as its two's complement), this is obviously correct: the negative of 0 is in fact 0 ({\displaystyle -0=0}). This zero case also makes sense by the definition of two's complement: by that definition, the two's complement of zero would be {\displaystyle 2^{N}-0=2^{N}}, but in N bits all values are taken modulo {\displaystyle 2^{N}}, and {\displaystyle 2^{N}} mod {\displaystyle 2^{N}=0}. In other words, the two's complement of 0 in N bits is (by definition) a single 1 bit followed by N zeros, but the 1 gets truncated, leaving 0. [ 7 ] In summary, the two's complement of any number, whether positive, negative, or zero, can be computed in the same way. In two's complement signed integer representation, the two's complement of any integer is equal to −1 times that integer, i.e. its negation, except for the most negative integer representable in the given number of bits N, i.e. the integer {\displaystyle -2^{N-1}}, the two's complement of which is itself (still negative). The sum of a number and its ones' complement is an N-bit word with all 1 bits, which is (read as an unsigned binary number) 2^N − 1. Adding a number to its two's complement then results in the N lowest bits set to 0 and the carry bit 1, where the latter has the weight (read as an unsigned binary number) of 2^N. Hence, in unsigned binary arithmetic, the value of the two's-complement negative number x* of a positive x satisfies the equality x* = 2^N − x. [ a ] For example, to find the four-bit representation of −5 (subscripts denote the base of the representation ): x = 5₁₀ = 0101₂. Hence, with N = 4: x* = 2^4 − 5 = 16 − 5 = 11₁₀ = 1011₂. The calculation can be done entirely in base 10, converting to base 2 at the end: x* = 16 − 5 = 11₁₀ = 1011₂. A shortcut to manually convert a binary number into its two's complement is to start at the least significant bit (LSB), and copy all the zeros, working from LSB toward the most significant bit (MSB), until the first 1 is reached; then copy that 1, and flip all the remaining bits (leave the MSB as a 1 if the initial number was in sign-and-magnitude representation). This shortcut allows a person to convert a number to its two's complement without first forming its ones' complement. For example, in two's complement representation, the negation of 0011 1100 is 1100 0100, where the final three digits, 100, are unchanged by the copying operation, while the remaining digits are flipped. In computer circuitry, this method is no faster than the "complement and add one" method; both methods require working sequentially from right to left, propagating logic changes. 
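The LSB-to-MSB shortcut can be checked against the invert-and-add-one definition. A small illustrative Python sketch:

```python
def negate_shortcut(bits: str) -> str:
    """Negate a two's-complement bit string by copying the trailing bits up
    to and including the lowest 1, then flipping all higher bits."""
    i = bits.rfind("1")          # position of the lowest set bit
    if i == -1:
        return bits              # zero is its own negation
    flipped = "".join("1" if b == "0" else "0" for b in bits[:i])
    return flipped + bits[i:]    # bits[i:] is the copied '1 0...0' tail

def negate_invert_add(bits: str) -> str:
    """Reference implementation: invert all bits, add one, keep N bits."""
    n = len(bits)
    value = int(bits, 2)
    return format((~value + 1) & ((1 << n) - 1), f"0{n}b")

for b in ("00111100", "00000101", "10000000", "00000000"):
    assert negate_shortcut(b) == negate_invert_add(b)
print(negate_shortcut("00111100"))  # 11000100
```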
The method of complementing and adding one can be sped up by a standard carry look-ahead adder circuit; the LSB-towards-MSB method can be sped up by a similar logic transformation. When turning a two's-complement number with a certain number of bits into one with more bits (e.g., when copying from a one-byte variable to a two-byte variable), the most-significant bit must be repeated in all the extra bits. Some processors do this in a single instruction; on other processors, a conditional must be used, followed by code that sets the relevant bits or bytes. Similarly, when a number is shifted to the right, the most-significant bit, which contains the sign information, must be maintained. However, when shifted to the left, a bit is shifted out. These rules preserve the common semantics that left shifts multiply the number by two and right shifts divide the number by two. However, if the most-significant bit changes from 0 to 1 (or vice versa), overflow is said to occur in the case that the value represents a signed integer. Both shifting and doubling the precision are important for some multiplication algorithms. Note that unlike addition and subtraction, width extension and right shifting are done differently for signed and unsigned numbers. With only one exception, starting with any number in two's-complement representation, if all the bits are flipped and 1 added, the two's-complement representation of the negative of that number is obtained. Positive 12 becomes negative 12, positive 5 becomes negative 5, zero becomes zero (the overflow is ignored), and so on. Taking the two's complement (negation) of the minimum number in the range will not have the desired effect of negating the number. For example, the two's complement of −128 in an eight-bit system is −128. Although the expected result from negating −128 is +128, there is no representation of +128 in an eight-bit two's complement system, and thus it is in fact impossible to represent the negation. Note that the two's complement being the same number is detected as an overflow condition, since there was a carry into but not out of the most-significant bit. Having a nonzero number equal to its own negation is forced by the fact that zero is its own negation and that the total number of numbers is even. Proof: there are 2^N − 1 nonzero numbers, an odd count. If negation paired every nonzero number with a distinct negation, it would partition the nonzero numbers into sets of size 2, giving the set of nonzero numbers even cardinality. So at least one of the sets has size 1, i.e., a nonzero number is its own negation. The presence of the most negative number can lead to unexpected programming bugs where the result has an unexpected sign, leads to an unexpected overflow exception, or leads to completely strange behaviors. For example, negating it returns the same negative value, taking its absolute value gives a negative result, and dividing it by −1 overflows. In the C and C++ programming languages, these behaviours are undefined, and not only may they return strange results, but the compiler is free to assume that the programmer has ensured that undefined numerical operations never happen, and to make inferences from that assumption. [ 10 ] This enables a number of optimizations, but also leads to a number of strange bugs in programs with these undefined calculations. This most negative number in two's complement is sometimes called "the weird number", because it is the only exception. [ 11 ] [ 12 ] Although the number is an exception, it is a valid number in regular two's complement systems. 
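The most-negative-number exception is easy to observe in code. A brief Python sketch (8-bit width; names illustrative):

```python
BITS = 8
MASK = (1 << BITS) - 1
MIN_INT = -(1 << (BITS - 1))    # -128, "the weird number"

def negate(pattern: int) -> int:
    """Two's-complement negation of an 8-bit pattern: invert and add one."""
    return (~pattern + 1) & MASK

for value in (12, 5, 0, MIN_INT):
    pattern = value & MASK
    neg = negate(pattern)
    signed = neg - (1 << BITS) if neg >> (BITS - 1) else neg
    print(f"-({value:4}) -> {neg:08b} = {signed}")
# -(  12) -> 11110100 = -12
# -(   5) -> 11111011 = -5
# -(   0) -> 00000000 = 0
# -(-128) -> 10000000 = -128   <- negation overflows: +128 is unrepresentable
```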
All arithmetic operations work with this most negative number both as an operand and (unless there was an overflow) as a result. Given the set of all possible N-bit values, we can assign the lower (by the binary value) half to be the integers from 0 to 2^(N−1) − 1 inclusive and the upper half to be −2^(N−1) to −1 inclusive. The upper half (again, by the binary value) can be used to represent negative integers from −2^(N−1) to −1 because, under addition modulo 2^N, they behave the same way as those negative integers. That is to say that, because (i + j) mod 2^N = (i + j + 2^N) mod 2^N, any value in the set { j + k·2^N | k is an integer } can be used in place of j. [ 13 ] For example, with eight bits, the unsigned bytes are 0 to 255. Subtracting 256 from the top half (128 to 255) yields the signed bytes −128 to −1. The relationship to two's complement is realised by noting that 256 = 255 + 1, and (255 − x) is the ones' complement of x. For example, an 8-bit number can represent every integer from −128 to 127, inclusive, since 2^(8−1) = 128. And −95 modulo 256 is equivalent to 161, since −95 + 256 = 161. Fundamentally, the system represents negative integers by counting backward and wrapping around . The boundary between positive and negative numbers is arbitrary, but by convention all negative numbers have a left-most bit ( most significant bit ) of one. Therefore, the most positive four-bit number is 0111 (7) and the most negative is 1000 (−8). Because of the use of the left-most bit as the sign bit, the absolute value of the most negative number (|−8| = 8) is too large to represent. Negating a two's complement number is simple: invert all the bits and add one to the result. For example, negating 1111, we get 0000 + 1 = 0001. Therefore, 1111 in binary must represent −1 in decimal. [ 14 ] The system is useful in simplifying the implementation of arithmetic on computer hardware. Adding 0011 (3) to 1111 (−1) at first seems to give the incorrect answer of 10010. However, the hardware can simply ignore the left-most bit to give the correct answer of 0010 (2). Overflow checks still must exist to catch operations such as summing 0100 and 0100. The system therefore allows addition of negative operands without a subtraction circuit or a circuit that detects the sign of a number. Moreover, that addition circuit can also perform subtraction by taking the two's complement of a number (see below), which only requires an additional cycle or its own adder circuit. To perform this, the circuit merely operates as if there were an extra left-most bit of 1. Adding two's complement numbers requires no special processing even if the operands have opposite signs; the sign of the result is determined automatically. For example, adding 15 and −5: 0000 1111 + 1111 1011 = 1 0000 1010. This process depends upon restricting to 8 bits of precision; the carry to the (nonexistent) 9th most significant bit is ignored, resulting in the arithmetically correct result of 10₁₀ = 0000 1010. Similarly, the computation of 5 − 15 = 5 + (−15) gives 0000 0101 + 1111 0001 = 1111 0110, that is, −10. The last two bits of the carry row (reading right-to-left) contain vital information: whether the calculation resulted in an arithmetic overflow , a number too large for the binary system to represent (in this case requiring more than 8 bits). An overflow condition exists when these last two bits are different from one another. As mentioned above, the sign of the number is encoded in the MSB of the result. 
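The wrap-around behaviour of these examples can be reproduced in a few lines of Python (an illustrative sketch; Python integers are unbounded, so the fixed width is simulated with a mask):

```python
BITS = 8
MASK = (1 << BITS) - 1

def add_wrap(a: int, b: int) -> int:
    """Add two 8-bit patterns modulo 2^8; any carry out of the top bit
    is simply discarded, which is what makes the scheme work."""
    return (a + b) & MASK

def as_signed(pattern: int) -> int:
    """Read an 8-bit pattern as a two's-complement integer."""
    return pattern - (1 << BITS) if pattern & (1 << (BITS - 1)) else pattern

fifteen, minus5 = 0b00001111, 0b11111011
r = add_wrap(fifteen, minus5)
print(format(r, "08b"), as_signed(r))      # 00001010 10

five, minus15 = 0b00000101, 0b11110001
r = add_wrap(five, minus15)
print(format(r, "08b"), as_signed(r))      # 11110110 -10
```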
In other terms, if the left two carry bits (the ones on the far left of the top row in these examples) are both 1s or both 0s, the result is valid; if the left two carry bits are "1 0" or "0 1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can quickly determine if an overflow condition exists. As an example, consider the signed 4-bit addition of 7 and 3: 0111 + 0011 = 1010, with carry row 0111 (the carry into the MSB is 1, the carry out of it is 0). In this case, the far left two (MSB) carry bits are "01", which means there was a two's-complement addition overflow. That is, 1010₂ = 10₁₀ is outside the permitted range of −8 to 7; the result would be correct only if treated as an unsigned integer. In general, any two N-bit numbers may be added without overflow by first sign-extending both of them to N + 1 bits and then adding as above. The N + 1 bit result is large enough to represent any possible sum (N = 5 two's complement, for example, can represent values in the range −16 to 15), so overflow will never occur. It is then possible, if desired, to 'truncate' the result back to N bits while preserving the value if and only if the discarded bit is a proper sign extension of the retained result bits. This provides another method of detecting overflow, which is equivalent to the method of comparing the carry bits but which may be easier to implement in some situations, because it does not require access to the internals of the addition. Computers usually use the method of complements to implement subtraction. Using complements for subtraction is closely related to using complements for representing negative numbers, since the combination allows all signs of operands and results; direct subtraction works with two's-complement numbers as well. Like addition, the advantage of using two's complement is the elimination of examining the signs of the operands to determine whether addition or subtraction is needed. For example, subtracting −5 from 15 is really adding 5 to 15, but this is hidden by the two's-complement representation: 0000 1111 − 1111 1011 = 0001 0100 (20₁₀). Overflow is detected the same way as for addition, by examining the two leftmost (most significant) bits of the borrows; overflow has occurred if they are different. Another example is a subtraction operation where the result is negative, 15 − 35 = −20: 0000 1111 − 0010 0011 = 1110 1100 (−20₁₀). As for addition, overflow in subtraction may be avoided (or detected after the operation) by first sign-extending both inputs by an extra bit. The product of two N-bit numbers requires 2N bits to contain all possible values. [ 15 ] If the precision of the two operands using two's complement is doubled before the multiplication, direct multiplication (discarding any excess bits beyond that precision) will provide the correct result. [ 16 ] For example, take 6 × (−5) = −30. First, the precision is extended from four bits to eight: 0000 0110 and 1111 1011. Then the numbers are multiplied, discarding the bits beyond the eighth bit: the low eight bits of the product are 1110 0010 = −30. This is very inefficient; by doubling the precision ahead of time, all additions must be double-precision, and at least twice as many partial products are needed than for the more efficient algorithms actually implemented in computers. Some multiplication algorithms are designed for two's complement, notably Booth's multiplication algorithm . Methods for multiplying sign-magnitude numbers do not work with two's-complement numbers without adaptation. There is not usually a problem when the multiplicand (the one being repeatedly added to form the product) is negative; the issue is setting the initial bits of the product correctly when the multiplier is negative. 
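The doubled-precision multiplication just described can be sketched as follows (an illustrative Python fragment, not a production routine):

```python
BITS = 4

def mul_twos_complement(a: int, b: int) -> int:
    """Multiply two BITS-wide two's-complement patterns by first
    sign-extending both to 2*BITS bits; the low 2*BITS bits of the
    product are then the exact signed result."""
    def sign_extend(p: int) -> int:
        # Replicate the sign bit into the higher positions.
        return p | (~0 << BITS) if p & (1 << (BITS - 1)) else p
    wide_mask = (1 << (2 * BITS)) - 1
    return (sign_extend(a) * sign_extend(b)) & wide_mask

six, minus5 = 0b0110, 0b1011
product = mul_twos_complement(six, minus5)
print(format(product, "08b"))                            # 11100010
print(product - (1 << 8) if product >> 7 else product)   # -30
```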
Two methods for adapting algorithms to handle two's-complement numbers are common. As an example of the second method, take the common add-and-shift algorithm for multiplication. Instead of shifting partial products to the left as is done with pencil and paper, the accumulated product is shifted right, into a second register that will eventually hold the least significant half of the product. Since the least significant bits are not changed once they are calculated, the additions can be single precision, accumulating in the register that will eventually hold the most significant half of the product. A worked example of this procedure, again multiplying 6 by −5, would track the two registers and the extended sign bit separately. Comparison is often implemented with a dummy subtraction, where the flags in the computer's status register are checked but the main result is ignored. The zero flag indicates whether the two values compared are equal. If the exclusive-or of the sign and overflow flags is 1, the subtraction result was less than zero; otherwise the result was zero or greater. These checks are often implemented in computers in conditional branch instructions. Unsigned binary numbers can be ordered by a simple lexicographic ordering , where the bit value 0 is defined as less than the bit value 1. For two's complement values, the meaning of the most significant bit is reversed (i.e. 1 is less than 0). An algorithm along these lines (for an n-bit two's complement architecture) sets the result register R to −1 if A < B, to +1 if A > B, and to 0 if A and B are equal. In a classic HAKMEM published by the MIT AI Lab in 1972, Bill Gosper noted that whether or not a machine's internal representation was two's-complement could be determined by summing the successive powers of two. In a flight of fancy, he noted that the result of doing this algebraically indicated that "algebra is run on a machine (the universe) which is two's-complement." [ 18 ] Gosper's end conclusion is not necessarily meant to be taken seriously, and it is akin to a mathematical joke . The critical step is "...110 = ...111 − 1", i.e., "2X = X − 1", and thus X = ...111 = −1. This presupposes a method by which an infinite string of 1s is considered a number, which requires an extension of the finite place-value concepts in elementary arithmetic. It is meaningful either as part of a two's-complement notation for all integers, as a typical 2-adic number , or even as one of the generalized sums defined for the divergent series of real numbers 1 + 2 + 4 + 8 + ⋯ . [ 19 ] Digital arithmetic circuits, idealized to operate with infinite (extending to positive powers of 2) bit strings, produce 2-adic addition and multiplication compatible with two's complement representation. [ 20 ] Continuity of binary arithmetical and bitwise operations in the 2-adic metric also has some use in cryptography. [ 21 ] To convert a number with a fractional part, such as .0101, one first converts the bits after the binary point to a decimal integer in the usual way; in this example, 0101 is equal to 5 in decimal. Each digit after the binary point represents a fraction whose denominator is a power of 2: the first is 1/2, the second is 1/4, and so on. The place value of the least significant bit (counting from the right) then supplies the denominator, here 1/16, so the final result of this conversion is 5/16. For instance, for the value .0110, for this method to work one should not consider the last 0 from the right. 
Hence, instead of calculating the decimal value of 0110, we calculate the value of 011, which is 3 in decimal; the denominator, taken from the place value of the last bit considered, is 8, giving a final result of 3/8. (Keeping the trailing 0 would have given 6 together with the denominator 2^4 = 16, and 6/16 also reduces to 3/8.)
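This fixed-point reading can be expressed with Python's standard fractions module, which also shows why dropping the trailing zero is just fraction reduction (illustrative sketch):

```python
from fractions import Fraction

def fixed_point_value(bits: str) -> Fraction:
    """Value of a fractional bit string such as '0101' read after the
    binary point: the numerator is the bits as an integer, the denominator
    is 2 ** len(bits); Fraction reduces the result automatically."""
    return Fraction(int(bits, 2), 2 ** len(bits))

print(fixed_point_value("0101"))   # 5/16
print(fixed_point_value("0110"))   # 3/8  (6/16, reduced)
```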
https://en.wikipedia.org/wiki/Two's_complement
In quantum field theory , and in the significant subfields of quantum electrodynamics (QED) and quantum chromodynamics (QCD), the two-body Dirac equations (TBDE) of constraint dynamics provide a three-dimensional yet manifestly covariant reformulation of the Bethe–Salpeter equation [ 1 ] for two spin-1/2 particles. Such a reformulation is necessary since without it, as shown by Nakanishi, [ 2 ] the Bethe–Salpeter equation possesses negative-norm solutions arising from the presence of an essentially relativistic degree of freedom, the relative time. These "ghost" states have spoiled the naive interpretation of the Bethe–Salpeter equation as a quantum mechanical wave equation. The two-body Dirac equations of constraint dynamics rectify this flaw. The forms of these equations can not only be derived from quantum field theory, [ 3 ] [ 4 ] but can also be derived purely in the context of Dirac's constraint dynamics [ 5 ] [ 6 ] and relativistic mechanics and quantum mechanics. [ 7 ] [ 8 ] [ 9 ] [ 10 ] Their structures, unlike the more familiar two-body Dirac equation of Breit , [ 11 ] [ 12 ] [ 13 ] which is a single equation, are that of two simultaneous quantum relativistic wave equations . A single two-body Dirac equation similar to the Breit equation can be derived from the TBDE. [ 14 ] Unlike the Breit equation, it is manifestly covariant and free from the types of singularities that prevent a strictly nonperturbative treatment of the Breit equation. [ 15 ] In applications of the TBDE to QED, the two particles interact by way of four-vector potentials derived from the field theoretic electromagnetic interactions between the two particles. In applications to QCD, the two particles interact by way of four-vector potentials and Lorentz invariant scalar interactions, derived in part from the field theoretic chromomagnetic interactions between the quarks and in part from phenomenological considerations. As with the Breit equation, a sixteen-component spinor Ψ is used. For QED, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external electromagnetic field , given by the 4-potential {\displaystyle A_{\mu }}. For QCD, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external field similar to the electromagnetic field, together with an additional external field given in terms of a Lorentz invariant scalar {\displaystyle S}. In natural units , [ 16 ] those two-body equations have the form: {\displaystyle {\begin{aligned}\left[(\gamma _{1})_{\mu }(p_{1}-{\tilde {A}}_{1})^{\mu }+m_{1}+{\tilde {S}}_{1}\right]\Psi &=0,\\[1ex]\left[(\gamma _{2})_{\mu }(p_{2}-{\tilde {A}}_{2})^{\mu }+m_{2}+{\tilde {S}}_{2}\right]\Psi &=0,\end{aligned}}} where, in coordinate space, p^μ is the 4-momentum , related to the 4-gradient by {\displaystyle p^{\mu }=-i{\frac {\partial }{\partial x_{\mu }}}} (the metric used here is {\displaystyle \eta _{\mu \nu }=(-1,1,1,1)}), and the γ^μ are the gamma matrices . The two-body Dirac equations (TBDE) have the property that if one of the masses becomes very large, say {\displaystyle m_{2}\rightarrow \infty }, then the 16-component Dirac equation reduces to the 4-component one-body Dirac equation for particle one in an external potential. 
In SI units : {\displaystyle {\begin{aligned}\left[(\gamma _{1})_{\mu }(p_{1}-{\tilde {A}}_{1})^{\mu }+m_{1}c+{\tilde {S}}_{1}\right]\Psi &=0,\\[1ex]\left[(\gamma _{2})_{\mu }(p_{2}-{\tilde {A}}_{2})^{\mu }+m_{2}c+{\tilde {S}}_{2}\right]\Psi &=0,\end{aligned}}} where c is the speed of light and {\displaystyle p^{\mu }=-i\hbar {\frac {\partial }{\partial x_{\mu }}}}. Natural units will be used below. A tilde is used over the two sets of potentials to indicate that they may have additional gamma matrix dependencies not present in the one-body Dirac equation. Any coupling constants, such as the electron charge, are embodied in the vector potentials. Constraint dynamics applied to the TBDE requires a particular form of mathematical consistency: the two Dirac operators must commute with each other. This is plausible if one views the two equations as two compatible constraints on the wave function. (See the discussion below on constraint dynamics.) If the two operators did not commute (as, e.g., the coordinate and momentum operators x and p do not), then the constraints would not be compatible (one could not, e.g., have a wave function that satisfied both {\displaystyle x\Psi =0} and {\displaystyle p\Psi =0}). This mathematical consistency or compatibility leads to three important properties of the TBDE. The first is a condition that eliminates the dependence on the relative time in the center of momentum (c.m.) frame defined by {\displaystyle P=p_{1}+p_{2}=(w,{\vec {0}})}. (The variable w is the total energy in the c.m. frame.) Stated another way, the relative time is eliminated in a covariant way. In particular, for the two operators to commute, the scalar and four-vector potentials can depend on the relative coordinate {\displaystyle x=x_{1}-x_{2}} only through its component {\displaystyle x_{\perp }} orthogonal to P, in which {\displaystyle x_{\perp }^{\mu }=(\eta ^{\mu \nu }-P^{\mu }P^{\nu }/P^{2})x_{\nu },} {\displaystyle P_{\mu }x_{\perp }^{\mu }=0.} This implies that in the c.m. frame {\displaystyle x_{\perp }=(0,{\vec {x}}={\vec {x}}_{1}-{\vec {x}}_{2})}, which has zero time component. Secondly, the mathematical consistency condition also eliminates the relative energy in the c.m. frame. It does this by imposing on each Dirac operator a structure such that, in a particular combination, they lead to the interaction-independent form {\displaystyle P\cdot p\,\Psi =(-P^{0}p^{0}+{\vec {P}}\cdot {\vec {p}})\Psi =0,} eliminating the relative energy in a covariant way. In this expression p is the relative momentum, having the form {\displaystyle (p_{1}-p_{2})/2} for equal masses. In the c.m. frame ({\displaystyle P^{0}=w,{\vec {P}}={\vec {0}}}), the time component p^0 of the relative momentum, that is the relative energy, is thus eliminated, in the sense that {\displaystyle p^{0}\Psi =0}. 
A third consequence of the mathematical consistency is that each of the world scalar {\displaystyle {\tilde {S}}_{i}} and four-vector {\displaystyle {\tilde {A}}_{i}^{\mu }} potentials has a term with a fixed dependence on {\displaystyle \gamma _{1}} and {\displaystyle \gamma _{2}}, in addition to the gamma-matrix-independent forms of {\displaystyle S_{i}} and {\displaystyle A_{i}^{\mu }} which appear in the ordinary one-body Dirac equation for scalar and vector potentials. These extra terms correspond to additional recoil spin-dependence not present in the one-body Dirac equation and vanish when one of the particles becomes very heavy (the so-called static limit). Constraint dynamics arose from the work of Dirac [ 6 ] and Bergmann. [ 17 ] This section shows how the elimination of relative time and energy takes place in the c.m. system for the simple system of two relativistic spinless particles. Constraint dynamics was first applied to the classical relativistic two-particle system by Todorov, [ 18 ] [ 19 ] Kalb and Van Alstine, [ 20 ] [ 21 ] Komar, [ 22 ] [ 23 ] and Droz-Vincent. [ 24 ] With constraint dynamics, these authors found a consistent and covariant approach to relativistic canonical Hamiltonian mechanics that also evades the Currie–Jordan–Sudarshan "no interaction" theorem. [ 25 ] [ 26 ] That theorem states that without fields one cannot have relativistic Hamiltonian dynamics . Thus, the same covariant three-dimensional approach that allows the quantized version of constraint dynamics to remove quantum ghosts simultaneously circumvents the C.J.S. theorem at the classical level. Consider a constraint on the otherwise independent coordinate and momentum four-vectors, written in the form {\displaystyle \phi _{i}(p,x)\approx 0}. The symbol {\displaystyle \approx 0} is called a weak equality and implies that the constraint is to be imposed only after any needed Poisson brackets are performed. In the presence of such constraints, the total Hamiltonian {\displaystyle {\mathcal {H}}} is obtained from the Lagrangian {\displaystyle {\mathcal {L}}} by adding to the Legendre Hamiltonian {\displaystyle (p{\dot {x}}-{\mathcal {L}})} the sum of the constraints times an appropriate set of Lagrange multipliers {\displaystyle (\lambda _{i})}: {\displaystyle {\mathcal {H}}=p{\dot {x}}-{\mathcal {L}}+\lambda _{i}\phi _{i}.} This total Hamiltonian is traditionally called the Dirac Hamiltonian. Constraints arise naturally from parameter-invariant actions of the form {\displaystyle I=\int d\tau {\mathcal {L}}(\tau )=\int d\tau '{\frac {d\tau }{d\tau '}}{\mathcal {L}}(\tau )=\int d\tau '{\mathcal {L}}(\tau ').} In the case of four-vector and Lorentz scalar interactions for a single particle, the Lagrangian is {\displaystyle {\mathcal {L}}(\tau )=-(m+S(x)){\sqrt {-{\dot {x}}^{2}}}+{\dot {x}}\cdot A(x).} The canonical momentum is {\displaystyle p={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}}}={\frac {(m+S(x)){\dot {x}}}{\sqrt {-{\dot {x}}^{2}}}}+A(x),} and squaring leads to the generalized mass shell condition, or generalized mass shell constraint, 
{\displaystyle (p-A)^{2}+(m+S)^{2}=0.} Since, in this case, the Legendre Hamiltonian vanishes, {\displaystyle p\cdot {\dot {x}}-{\mathcal {L}}=0,} the Dirac Hamiltonian is simply the generalized mass constraint (with no interactions it would simply be the ordinary mass shell constraint): {\displaystyle {\mathcal {H}}=\lambda \left[\left(p-A\right)^{2}+(m+S)^{2}\right]\equiv \lambda (p^{2}+m^{2}+\Phi (x,p)).} One then postulates that for two bodies the Dirac Hamiltonian is the sum of two such mass shell constraints, {\displaystyle {\mathcal {H}}_{i}=p_{i}^{2}+m_{i}^{2}+\Phi _{i}(x_{1},x_{2},p_{1},p_{2})\approx 0,} that is, {\displaystyle {\begin{aligned}{\mathcal {H}}&=\lambda _{1}[p_{1}^{2}+m_{1}^{2}+\Phi _{1}(x_{1},x_{2},p_{1},p_{2})]+\lambda _{2}[p_{2}^{2}+m_{2}^{2}+\Phi _{2}(x_{1},x_{2},p_{1},p_{2})]\\[1ex]&=\lambda _{1}{\mathcal {H}}_{1}+\lambda _{2}{\mathcal {H}}_{2},\end{aligned}}} and that each constraint {\displaystyle {\mathcal {H}}_{i}} be constant in the proper time associated with {\displaystyle {\mathcal {H}}}: {\displaystyle {\dot {\mathcal {H}}}_{i}=\{{\mathcal {H}}_{i},{\mathcal {H}}\}\approx 0.} Here the weak equality means that the Poisson bracket could result in terms proportional to one of the constraints, the classical Poisson brackets for the relativistic two-body system being defined by {\displaystyle \left\{O_{1},O_{2}\right\}={\frac {\partial O_{1}}{\partial x_{1}^{\mu }}}{\frac {\partial O_{2}}{\partial p_{1\mu }}}-{\frac {\partial O_{1}}{\partial p_{1}^{\mu }}}{\frac {\partial O_{2}}{\partial x_{1\mu }}}+{\frac {\partial O_{1}}{\partial x_{2}^{\mu }}}{\frac {\partial O_{2}}{\partial p_{2\mu }}}-{\frac {\partial O_{1}}{\partial p_{2}^{\mu }}}{\frac {\partial O_{2}}{\partial x_{2\mu }}}.} To see the consequences of having each constraint be a constant of the motion, take, for example, {\displaystyle {\dot {\mathcal {H}}}_{1}=\{{\mathcal {H}}_{1},{\mathcal {H}}\}=\lambda _{1}\{{\mathcal {H}}_{1},{\mathcal {H}}_{1}\}+\{{\mathcal {H}}_{1},\lambda _{1}\}{\mathcal {H}}_{1}+\lambda _{2}\{{\mathcal {H}}_{2},{\mathcal {H}}_{1}\}+\{\lambda _{2},{\mathcal {H}}_{1}\}{\mathcal {H}}_{2}.} Since {\displaystyle \{{\mathcal {H}}_{1},{\mathcal {H}}_{1}\}=0} and {\displaystyle {\mathcal {H}}_{1}\approx 0} and {\displaystyle {\mathcal {H}}_{2}\approx 0}, one has {\displaystyle {\dot {\mathcal {H}}}_{1}\approx \lambda _{2}\{{\mathcal {H}}_{2},{\mathcal {H}}_{1}\}\approx 0.} The simplest solution to this is {\displaystyle \Phi _{1}=\Phi _{2}\equiv \Phi (x_{\perp })}, with the same {\displaystyle x_{\perp }} defined above, which leads to {\displaystyle \{{\mathcal {H}}_{2},{\mathcal {H}}_{1}\}=0} (note that the equality in this case is not a weak one, in that no constraint need be imposed after the Poisson bracket is worked out; see Todorov, [ 19 ] and Wong and Crater [ 27 ]). 
In addition to replacing classical dynamical variables by their quantum counterparts, quantization of the constraint mechanics takes place by replacing the constraints on the dynamical variables with restrictions on the wave function: {\displaystyle {\mathcal {H}}_{i}\approx 0\rightarrow {\mathcal {H}}_{i}\Psi =0,} {\displaystyle {\mathcal {H}}\approx 0\rightarrow {\mathcal {H}}\Psi =0.} The first set of equations, for i = 1, 2, play the role for spinless particles that the two Dirac equations play for spin-one-half particles. The classical Poisson brackets are replaced by commutators: {\displaystyle \{O_{1},O_{2}\}\rightarrow {\frac {1}{i}}[O_{1},O_{2}].} Thus {\displaystyle [{\mathcal {H}}_{2},{\mathcal {H}}_{1}]=0,} and we see in this case that the constraint formalism leads to the vanishing commutator of the wave operators for the two particles. This is the analogue of the claim stated earlier that the two Dirac operators commute with one another. The vanishing of the above commutator ensures that the dynamics is independent of the relative time in the c.m. frame. In order to covariantly eliminate the relative energy, introduce the relative momentum {\displaystyle p} defined by {\displaystyle p_{1}=\varepsilon _{1}{\hat {P}}+p,\qquad (1)} {\displaystyle p_{2}=\varepsilon _{2}{\hat {P}}-p,\qquad (2)} where {\displaystyle {\hat {P}}=P/{\sqrt {-P^{2}}}}. The above definition of the relative momentum forces the orthogonality of the total momentum and the relative momentum, {\displaystyle P\cdot p=0,} which follows from taking the scalar product of either equation with {\displaystyle P}. From Eqs. ( 1 ) and ( 2 ), this relative momentum can be written in terms of {\displaystyle p_{1}} and {\displaystyle p_{2}} as {\displaystyle p={\frac {\varepsilon _{2}}{\sqrt {-P^{2}}}}p_{1}-{\frac {\varepsilon _{1}}{\sqrt {-P^{2}}}}p_{2},} where {\displaystyle \varepsilon _{1}=-{\frac {p_{1}\cdot P}{\sqrt {-P^{2}}}}=-{\frac {P^{2}+p_{1}^{2}-p_{2}^{2}}{2{\sqrt {-P^{2}}}}}} and {\displaystyle \varepsilon _{2}=-{\frac {p_{2}\cdot P}{\sqrt {-P^{2}}}}=-{\frac {P^{2}+p_{2}^{2}-p_{1}^{2}}{2{\sqrt {-P^{2}}}}}} are the projections of the momenta {\displaystyle p_{1}} and {\displaystyle p_{2}} along the direction of the total momentum {\displaystyle P}. Subtracting the two constraints {\displaystyle {\mathcal {H}}_{1}\Psi =0} and {\displaystyle {\mathcal {H}}_{2}\Psi =0} gives {\displaystyle (p_{1}^{2}-p_{2}^{2}+m_{1}^{2}-m_{2}^{2})\Psi =0.\qquad (3)} Thus on these states {\displaystyle \Psi }, {\displaystyle \varepsilon _{1}\Psi ={\frac {-P^{2}+m_{1}^{2}-m_{2}^{2}}{2{\sqrt {-P^{2}}}}}\Psi } and {\displaystyle \varepsilon _{2}\Psi ={\frac {-P^{2}+m_{2}^{2}-m_{1}^{2}}{2{\sqrt {-P^{2}}}}}\Psi .} The equation {\displaystyle {\mathcal {H}}\Psi =0} describes both the c.m. motion and the internal relative motion. To characterize the former motion, observe that since the potential {\displaystyle \Phi } depends only on the difference of the two coordinates, {\displaystyle [P,{\mathcal {H}}]\Psi =0.} (This does not require that {\displaystyle [P,\lambda _{i}]=0} since the {\displaystyle {\mathcal {H}}_{i}\Psi =0}.) Thus, the total momentum {\displaystyle P} is a constant of motion and {\displaystyle \Psi } is an eigenstate characterized by a total momentum {\displaystyle P'}. In the c.m. 
system {\displaystyle P'=(w,{\vec {0}}),} with {\displaystyle w} the invariant center-of-momentum (c.m.) energy. Thus {\displaystyle P^{2}\Psi =-w^{2}\Psi ,} and so {\displaystyle \Psi } is also an eigenstate of the c.m. energy operators for each of the two particles: {\displaystyle \varepsilon _{1}\Psi ={\frac {w^{2}+m_{1}^{2}-m_{2}^{2}}{2w}}\Psi } and {\displaystyle \varepsilon _{2}\Psi ={\frac {w^{2}+m_{2}^{2}-m_{1}^{2}}{2w}}\Psi .} The relative momentum then satisfies {\displaystyle p\Psi ={\frac {\varepsilon _{2}p_{1}-\varepsilon _{1}p_{2}}{w}}\Psi ,} so that {\displaystyle p_{1}\Psi =\left({\frac {\varepsilon _{1}}{w}}P+p\right)\Psi ,} {\displaystyle p_{2}\Psi =\left({\frac {\varepsilon _{2}}{w}}P-p\right)\Psi .} The above set of equations follows from the constraints {\displaystyle {\mathcal {H}}_{i}\Psi =0} and the definition of the relative momenta given in Eqs. ( 1 ) and ( 2 ). If instead one chooses to define (for a more general choice see Horwitz) [ 28 ] {\displaystyle \varepsilon _{1}={\frac {w^{2}+m_{1}^{2}-m_{2}^{2}}{2w}},} {\displaystyle \varepsilon _{2}={\frac {w^{2}+m_{2}^{2}-m_{1}^{2}}{2w}},} {\displaystyle p={\frac {\varepsilon _{2}p_{1}-\varepsilon _{1}p_{2}}{w}},} independent of the wave function, then it is straightforward to show that the constraint Eq. ( 3 ) leads directly to {\displaystyle P\cdot p\,\Psi =0} in place of the identity {\displaystyle P\cdot p=0}. This conforms with the earlier claim on the vanishing of the relative energy in the c.m. frame made in conjunction with the TBDE. In this second choice the c.m. value of the relative energy is not defined as zero but comes from the original generalized mass shell constraints. The above equations for the relative and constituent four-momenta are the relativistic analogues of the non-relativistic equations {\displaystyle {\begin{aligned}{\vec {p}}&={\frac {m_{2}{\vec {p}}_{1}-m_{1}{\vec {p}}_{2}}{M}},\\[1ex]{\vec {p}}_{1}&={\frac {m_{1}}{M}}{\vec {P}}+{\vec {p}},\\[1ex]{\vec {p}}_{2}&={\frac {m_{2}}{M}}{\vec {P}}-{\vec {p}}.\end{aligned}}} Using Eqs. ( 5 ), ( 6 ), ( 7 ), one can write {\displaystyle {\mathcal {H}}} in terms of {\displaystyle P} and {\displaystyle p}: {\displaystyle {\mathcal {H}}\Psi =\{\lambda _{1}[-\varepsilon _{1}^{2}+m_{1}^{2}+p^{2}+\Phi (x_{\perp })]+\lambda _{2}[-\varepsilon _{2}^{2}+m_{2}^{2}+p^{2}+\Phi (x_{\perp })]\}\Psi ,} where {\displaystyle b^{2}(-P^{2},m_{1}^{2},m_{2}^{2})=\varepsilon _{1}^{2}-m_{1}^{2}=\varepsilon _{2}^{2}-m_{2}^{2}=-{\frac {1}{4P^{2}}}(P^{4}+2P^{2}(m_{1}^{2}+m_{2}^{2})+(m_{1}^{2}-m_{2}^{2})^{2}).} Eq. ( 8 ) contains both the total momentum {\displaystyle P} [through the {\displaystyle b^{2}(-P^{2},m_{1}^{2},m_{2}^{2})}] and the relative momentum {\displaystyle p}. Using Eq. 
( 4 ), one obtains the eigenvalue equation {\displaystyle \left\{p^{2}+\Phi (x_{\perp })\right\}\Psi =b^{2}(w^{2},m_{1}^{2},m_{2}^{2})\Psi ,} so that {\displaystyle b^{2}(w^{2},m_{1}^{2},m_{2}^{2})} becomes the standard triangle function displaying exact relativistic two-body kinematics: {\displaystyle b^{2}(w^{2},m_{1}^{2},m_{2}^{2})={\frac {1}{4w^{2}}}\left\{w^{4}-2w^{2}(m_{1}^{2}+m_{2}^{2})+(m_{1}^{2}-m_{2}^{2})^{2}\right\}.} With the above constraint Eqs. ( 7 ) on {\displaystyle \Psi }, {\displaystyle p^{2}\Psi =p_{\perp }^{2}\Psi }, where {\displaystyle p_{\perp }=p-p\cdot P\,P/P^{2}}. This allows writing Eq. ( 9 ) in the form of an eigenvalue equation {\displaystyle \{p_{\perp }^{2}+\Phi (x_{\perp })\}\Psi =b^{2}(w^{2},m_{1}^{2},m_{2}^{2})\Psi ,} having a structure very similar to that of the ordinary three-dimensional nonrelativistic Schrödinger equation. It is a manifestly covariant equation, but at the same time its three-dimensional structure is evident. The four-vectors {\displaystyle p_{\perp }^{\mu }} and {\displaystyle x_{\perp }^{\mu }} have only three independent components, since {\displaystyle P\cdot p_{\perp }=P\cdot x_{\perp }=0.} The similarity to the three-dimensional structure of the nonrelativistic Schrödinger equation can be made more explicit by writing the equation in the c.m. frame, in which {\displaystyle P=(w,{\vec {0}}),} {\displaystyle p_{\perp }=(0,{\vec {p}}),} {\displaystyle x_{\perp }=(0,{\vec {x}}).} Comparison of the resultant form with the time-independent Schrödinger equation makes this similarity explicit. A plausible structure for the quasipotential {\displaystyle \Phi } can be found by observing that the one-body Klein–Gordon equation {\displaystyle (p^{2}+m^{2})\psi =({\vec {p}}^{2}-\varepsilon ^{2}+m^{2})\psi =0} takes the form {\displaystyle ({\vec {p}}^{2}-\varepsilon ^{2}+m^{2}+2mS+S^{2}+2\varepsilon A-A^{2})\psi =0} when one introduces a scalar interaction and a timelike vector interaction via {\displaystyle m\rightarrow m+S} and {\displaystyle \varepsilon \rightarrow \varepsilon -A}. In the two-body case, separate classical [ 29 ] [ 30 ] and quantum field theory [ 4 ] arguments show that when one includes world scalar and vector interactions, {\displaystyle \Phi } depends on two underlying invariant functions {\displaystyle S(r)} and {\displaystyle A(r)} through the two-body Klein–Gordon-like potential form with the same general structure, that is, {\displaystyle \Phi =2m_{w}S+S^{2}+2\varepsilon _{w}A-A^{2}.} Those field theories further yield the c.m. energy dependent forms {\displaystyle m_{w}=m_{1}m_{2}/w} and {\displaystyle \varepsilon _{w}=(w^{2}-m_{1}^{2}-m_{2}^{2})/2w,} which Todorov introduced as the relativistic reduced mass and effective particle energy for a two-body system. Similar to what happens in the nonrelativistic two-body problem, in the relativistic case the motion of this effective particle takes place as if it were in an external field (here generated by {\displaystyle S} and {\displaystyle A}). 
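These kinematic relations can be checked numerically. Below is a minimal Python sketch (illustrative, with arbitrary sample values in units where ħ = c = 1) that computes the constituent c.m. energies, the triangle function b², and Todorov's effective variables, and verifies that ε₁² − m₁² = ε₂² − m₂² = ε_w² − m_w² = b², the last equality being the Einstein condition quoted below.

```python
def two_body_kinematics(w: float, m1: float, m2: float) -> dict:
    """Exact relativistic two-body kinematics in the c.m. frame (hbar = c = 1)."""
    eps1 = (w**2 + m1**2 - m2**2) / (2 * w)   # constituent c.m. energies
    eps2 = (w**2 + m2**2 - m1**2) / (2 * w)
    b2 = (w**4 - 2 * w**2 * (m1**2 + m2**2) + (m1**2 - m2**2)**2) / (4 * w**2)
    m_w = m1 * m2 / w                          # Todorov's relativistic reduced mass
    eps_w = (w**2 - m1**2 - m2**2) / (2 * w)   # effective particle energy
    return dict(eps1=eps1, eps2=eps2, b2=b2, m_w=m_w, eps_w=eps_w)

k = two_body_kinematics(w=5.0, m1=1.0, m2=2.0)   # sample values, illustrative only
assert abs(k["eps1"] + k["eps2"] - 5.0) < 1e-12            # energies sum to w
assert abs(k["eps1"]**2 - 1.0**2 - k["b2"]) < 1e-12        # eps1^2 - m1^2 = b^2
assert abs(k["eps2"]**2 - 2.0**2 - k["b2"]) < 1e-12        # eps2^2 - m2^2 = b^2
assert abs(k["eps_w"]**2 - k["m_w"]**2 - k["b2"]) < 1e-12  # Einstein condition
print(k)
```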
The two kinematical variables {\displaystyle m_{w}} and {\displaystyle \varepsilon _{w}} are related to one another by the Einstein condition {\displaystyle \varepsilon _{w}^{2}-m_{w}^{2}=b^{2}(w).} If one introduces the four-vectors, including a vector interaction {\displaystyle A^{\mu }} , {\displaystyle {\mathfrak {p}}=\varepsilon _{w}{\hat {P}}+p,} {\displaystyle A^{\mu }={\hat {P}}^{\mu }A(r),} {\displaystyle r={\sqrt {x_{\perp }^{2}}}\,,} and a scalar interaction {\displaystyle S(r)} , then the following classical minimal constraint form {\displaystyle {\mathcal {H}}=\left({\mathfrak {p}}-A\right)^{2}+(m_{w}+S)^{2}\approx 0\,,} reproduces the eigenvalue equation given above. Notice that the interaction in this "reduced particle" constraint depends on two invariant scalars, {\displaystyle A(r)} and {\displaystyle S(r)} , one guiding the time-like vector interaction and one the scalar interaction. Is there a set of two-body Klein–Gordon equations analogous to the two-body Dirac equations? The classical relativistic constraints that are analogous to the quantum two-body Dirac equations (discussed in the introduction) and that have the same structure as the above Klein–Gordon one-body form are {\displaystyle {\mathcal {H}}_{1}=(p_{1}-A_{1})^{2}+(m_{1}+S_{1})^{2}=p_{1}^{2}+m_{1}^{2}+\Phi _{1}\approx 0,} {\displaystyle {\mathcal {H}}_{2}=(p_{2}-A_{2})^{2}+(m_{2}+S_{2})^{2}=p_{2}^{2}+m_{2}^{2}+\Phi _{2}\approx 0,} {\displaystyle p_{1}=\varepsilon _{1}{\hat {P}}+p;~~p_{2}=\varepsilon _{2}{\hat {P}}-p~.} Defining structures that display time-like vector and scalar interactions, {\displaystyle \pi _{1}=p_{1}-A_{1}=[{\hat {P}}(\varepsilon _{1}-{\mathcal {A}}_{1})+p],} {\displaystyle \pi _{2}=p_{2}-A_{2}=[{\hat {P}}(\varepsilon _{2}-{\mathcal {A}}_{2})-p],} {\displaystyle M_{1}=m_{1}+S_{1},} {\displaystyle M_{2}=m_{2}+S_{2},} gives {\displaystyle {\mathcal {H}}_{1}=\pi _{1}^{2}+M_{1}^{2},} {\displaystyle {\mathcal {H}}_{2}=\pi _{2}^{2}+M_{2}^{2}.} Imposing {\displaystyle {\begin{aligned}\Phi _{1}&=\Phi _{2}\equiv \Phi (x_{\perp })\\&=-2p_{1}\cdot A_{1}+A_{1}^{2}+2m_{1}S_{1}+S_{1}^{2}\\&=-2p_{2}\cdot A_{2}+A_{2}^{2}+2m_{2}S_{2}+S_{2}^{2}\\&=2\varepsilon _{w}A-A^{2}+2m_{w}S+S^{2},\end{aligned}}} and using the constraint {\displaystyle P\cdot p\approx 0} reproduces Eqs.( 12 ) provided {\displaystyle \pi _{1}^{2}-p^{2}=-\left(\varepsilon _{1}-{\mathcal {A}}_{1}\right)^{2}=-\varepsilon _{1}^{2}+2\varepsilon _{w}A-A^{2},} {\displaystyle \pi _{2}^{2}-p^{2}=-\left(\varepsilon _{2}-{\mathcal {A}}_{2}\right)^{2}=-\varepsilon _{2}^{2}+2\varepsilon _{w}A-A^{2},} {\displaystyle M_{1}{}^{2}=m_{1}^{2}+2m_{w}S+S^{2},}
{\displaystyle M_{2}^{2}=m_{2}^{2}+2m_{w}S+S^{2}.} The corresponding Klein–Gordon equations are {\displaystyle \left(\pi _{1}^{2}+M_{1}^{2}\right)\psi =0,} {\displaystyle \left(\pi _{2}^{2}+M_{2}^{2}\right)\psi =0,} and each, due to the constraint {\displaystyle P\cdot p\approx 0,} is equivalent to {\displaystyle {\mathcal {H}}\psi =\left(p_{\perp }^{2}+\Phi -b^{2}\right)\psi =0.} For the two-body system there are numerous covariant forms of interaction. The simplest way of looking at these is from the point of view of the gamma matrix structures of the corresponding interaction vertices of the single particle exchange diagrams. For scalar, pseudoscalar, vector, pseudovector, and tensor exchanges those matrix structures are respectively {\displaystyle 1_{1}1_{2};\gamma _{51}\gamma _{52};\gamma _{1}^{\mu }\gamma _{2\mu };\gamma _{51}\gamma _{1}^{\mu }\gamma _{52}\gamma _{2\mu };\sigma _{1\mu \nu }\sigma _{2}^{\mu \nu },} in which {\displaystyle \sigma _{i\mu \nu }={\frac {1}{2i}}[\gamma _{i\mu },\gamma _{i\nu }];i=1,2.} The form of the Two-Body Dirac equations which most readily incorporates each or any number of these interactions in concert is the so-called hyperbolic form of the TBDE. [ 31 ] For combined scalar and vector interactions those forms ultimately reduce to the ones given in the first set of equations of this article. Those equations are called the external field-like forms because their appearances are individually the same as those for the usual one-body Dirac equation in the presence of external vector and scalar fields. The most general hyperbolic form for compatible TBDE is {\displaystyle {\mathcal {S}}_{1}\psi =(\cosh(\Delta )\mathbf {S} _{1}+\sinh(\Delta )\mathbf {S} _{2})\psi =0,} where Δ represents any invariant interaction, singly or in combination. It has a matrix structure in addition to coordinate dependence. Depending on what that matrix structure is, one has either scalar, pseudoscalar, vector, pseudovector, or tensor interactions. The operators {\displaystyle \mathbf {S} _{1}} and {\displaystyle \mathbf {S} _{2}} are auxiliary constraints satisfying {\displaystyle \mathbf {S} _{1}\psi \equiv ({\mathcal {S}}_{10}\cosh(\Delta )+{\mathcal {S}}_{20}\sinh(\Delta )~)\psi =0,} in which the {\displaystyle {\mathcal {S}}_{i0}} are the free Dirac operators. This, in turn, leads to the two compatibility conditions {\displaystyle \lbrack {\mathcal {S}}_{1},{\mathcal {S}}_{2}]\psi =0} and {\displaystyle \lbrack \mathbf {S} _{1},\mathbf {S} _{2}]\psi =0,} provided that {\displaystyle \Delta =\Delta (x_{\perp }).} These compatibility conditions do not restrict the gamma matrix structure of Δ. That matrix structure is determined by the type of vertex-vertex structure incorporated in the interaction.
For the two types of invariant interactions Δ emphasized in this article they are {\displaystyle \Delta _{\mathcal {L}}(x_{\perp })=-1_{1}1_{2}{\frac {{\mathcal {L}}(x_{\perp })}{2}}{\mathcal {O}}_{1},{\text{scalar}},} {\displaystyle \Delta _{\mathcal {G}}(x_{\perp })=\gamma _{1}\cdot \gamma _{2}{\frac {{\mathcal {G}}(x_{\perp })}{2}}{\mathcal {O}}_{1},{\text{vector}},} {\displaystyle {\mathcal {O}}_{1}=-\gamma _{51}\gamma _{52}.} For general independent scalar and vector interactions, {\displaystyle \Delta (x_{\perp })=\Delta _{\mathcal {L}}+\Delta _{\mathcal {G}}.} The vector interaction specified by the above matrix structure for an electromagnetic-like interaction would correspond to the Feynman gauge. If one inserts Eq.( 14 ) into ( 13 ), brings the free Dirac operator ( 15 ) to the right of the matrix hyperbolic functions, and uses standard gamma matrix commutators and anticommutators together with {\displaystyle \cosh ^{2}\Delta -\sinh ^{2}\Delta =1} , one arrives at {\displaystyle \left(\partial _{\mu }=\partial /\partial x^{\mu }\right)} {\displaystyle {\big (}G\gamma _{1}\cdot {\mathcal {P}}_{2}-E_{1}\beta _{1}+M_{1}-G{\frac {i}{2}}\Sigma _{2}\cdot \partial ({\mathcal {L}}\beta _{2}-{\mathcal {G}}\beta _{1})\gamma _{52}{\big )}\psi =0,} in which {\displaystyle G=\exp {\mathcal {G}},} {\displaystyle \beta _{i}=-\gamma _{i}\cdot {\hat {P}},} {\displaystyle \gamma _{i\perp }^{\mu }=(\eta ^{\mu \nu }+{\hat {P}}^{\mu }{\hat {P}}^{\nu })\gamma _{\nu i},} {\displaystyle \Sigma _{i}=\gamma _{5i}\beta _{i}\gamma _{\perp i},} {\displaystyle {\mathcal {P}}_{i}\equiv p_{\perp }-{\frac {i}{2}}\Sigma _{i}\cdot \partial {\mathcal {G}}\Sigma _{i}\,,\quad i=1,2.} The (covariant) structure of these equations is analogous to that of a Dirac equation for each of the two particles, with {\displaystyle M_{i}} and {\displaystyle E_{i}} playing the roles that {\displaystyle m+S} and {\displaystyle \varepsilon -A} do in the single particle Dirac equation {\displaystyle (\mathbf {\gamma } \cdot \mathbf {p-} \beta (\varepsilon -A)+m+S)\psi =0.} Over and above the usual kinetic part {\displaystyle \gamma _{1}\cdot p_{\perp }} and the time-like vector and scalar potential portions, the spin-dependent modifications involving {\displaystyle \Sigma _{i}\cdot \partial {\mathcal {G}}\Sigma _{i}} and the last set of derivative terms are two-body recoil effects, absent for the one-body Dirac equation but essential for the compatibility (consistency) of the two-body equations.
The connections between what are designated as the vertex invariants {\displaystyle {\mathcal {L}},{\mathcal {G}}} and the mass and energy potentials {\displaystyle M_{i},E_{i}} are {\displaystyle M_{1}=m_{1}\cosh {\mathcal {L}}+m_{2}\sinh {\mathcal {L}},} {\displaystyle M_{2}=m_{2}\cosh {\mathcal {L}}+m_{1}\sinh {\mathcal {L}},} {\displaystyle E_{1}=\varepsilon _{1}\cosh {\mathcal {G}}-\varepsilon _{2}\sinh {\mathcal {G}},} {\displaystyle E_{2}=\varepsilon _{2}\cosh {\mathcal {G}}-\varepsilon _{1}\sinh {\mathcal {G}}.} Comparing Eq.( 16 ) with the first equation of this article, one finds that the spin-dependent vector interactions are {\displaystyle {\tilde {A}}_{1}^{\mu }=(\varepsilon _{1}-E_{1}){\hat {P}}^{\mu }+(1-G)p_{\perp }^{\mu }-{\frac {i}{2}}\partial G\cdot \gamma _{2}\gamma _{2}^{\mu },} {\displaystyle {\tilde {A}}_{2}^{\mu }=(\varepsilon _{2}-E_{2}){\hat {P}}^{\mu }-(1-G)p_{\perp }^{\mu }+{\frac {i}{2}}\partial G\cdot \gamma _{1}\gamma _{1}^{\mu }.} Note that the first portion of the vector potentials is timelike (parallel to {\displaystyle {\hat {P}}^{\mu }} ) while the next portion is spacelike (perpendicular to {\displaystyle {\hat {P}}^{\mu }} ). The spin-dependent scalar potentials {\displaystyle {\tilde {S}}_{i}} are {\displaystyle {\tilde {S}}_{1}=M_{1}-m_{1}-{\frac {i}{2}}G\gamma _{2}\cdot \partial {\mathcal {L}},} {\displaystyle {\tilde {S}}_{2}=M_{2}-m_{2}+{\frac {i}{2}}G\gamma _{1}\cdot \partial {\mathcal {L}}.} The parametrization for {\displaystyle {\mathcal {L}}} and {\displaystyle {\mathcal {G}}} takes advantage of the Todorov effective external potential forms (as seen in the above section on the two-body Klein–Gordon equations) and at the same time displays the correct static limit form for the Pauli reduction to Schrödinger-like form. The choice of these parametrizations (as with the two-body Klein–Gordon equations) is closely tied to classical or quantum field theories for separate scalar and vector interactions. This amounts to working in the Feynman gauge with the simplest relation between space- and timelike parts of the vector interaction. The mass and energy potentials are respectively {\displaystyle M_{i}^{2}=m_{i}^{2}+\exp(2{\mathcal {G}})(2m_{w}S+S^{2}),} {\displaystyle E_{i}^{2}=\exp(2{\mathcal {G}}(A))\left(\varepsilon _{i}-A\right)^{2},} so that {\displaystyle \exp {\mathcal {L}}=\exp({\mathcal {L}}(S,A))={\frac {M_{1}+M_{2}}{m_{1}+m_{2}}},} {\displaystyle G=\exp {\mathcal {G}}=\exp({\mathcal {G}}(A))={\sqrt {\frac {1}{(1-2A/w)}}}.} The TBDE can be readily applied to two-body systems such as positronium , muonium , hydrogen -like atoms, quarkonium , and the two-nucleon system. [ 32 ] [ 33 ] [ 34 ] These applications involve two particles only and do not involve creation or annihilation of particles beyond the two. They involve only elastic processes.
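The internal consistency of this parametrization can be checked numerically. The Python sketch below assumes a Coulomb-like vector invariant A(r) = −α/r and an arbitrary sample value for S(r); all parameter values are hypothetical illustrations, not fits:

import math

m1, m2, w = 1.0, 0.5, 1.6        # illustrative masses and c.m. energy (c = 1)
alpha, r = 0.3, 2.0              # hypothetical coupling and separation

mw = m1 * m2 / w                 # Todorov reduced mass
eps1 = (w**2 + m1**2 - m2**2) / (2 * w)
eps2 = (w**2 + m2**2 - m1**2) / (2 * w)

A = -alpha / r                   # assumed vector invariant A(r)
S = 0.1                          # assumed scalar invariant S(r) at this r

calG = math.log(1.0 / math.sqrt(1.0 - 2.0 * A / w))   # G = exp(calG)

# Mass and energy potentials from the stated parametrizations.
M1 = math.sqrt(m1**2 + math.exp(2 * calG) * (2 * mw * S + S**2))
M2 = math.sqrt(m2**2 + math.exp(2 * calG) * (2 * mw * S + S**2))
E1 = math.exp(calG) * (eps1 - A)
E2 = math.exp(calG) * (eps2 - A)

# The hyperbolic connections between (L, G) and (M_i, E_i) then hold exactly.
calL = math.log((M1 + M2) / (m1 + m2))
assert math.isclose(M1, m1 * math.cosh(calL) + m2 * math.sinh(calL))
assert math.isclose(M2, m2 * math.cosh(calL) + m1 * math.sinh(calL))
assert math.isclose(E1, eps1 * math.cosh(calG) - eps2 * math.sinh(calG))
assert math.isclose(E2, eps2 * math.cosh(calG) - eps1 * math.sinh(calG))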
Because of the connection between the potentials used in the TBDE and the corresponding quantum field theory, any radiative correction to the lowest-order interaction can be incorporated into those potentials. To see how this comes about, consider by contrast how one computes scattering amplitudes without quantum field theory. Without quantum field theory one must arrive at potentials by classical arguments or phenomenological considerations. Once one has the potential V between two particles, one can compute the scattering amplitude T from the Lippmann–Schwinger equation [ 35 ] {\displaystyle T+V+VGT=0,} in which G is a Green function determined from the Schrödinger equation. Because of the similarity between the Schrödinger equation Eq.( 11 ) and the relativistic constraint equation ( 10 ), one can derive the same type of equation as the above, {\displaystyle {\mathcal {T}}+\Phi +\Phi {\mathcal {G}}{\mathcal {T}}=0,} called the quasipotential equation, with a {\displaystyle {\mathcal {G}}} very similar to that given in the Lippmann–Schwinger equation. The difference is that with the quasipotential equation, one starts with the scattering amplitudes {\displaystyle {\mathcal {T}}} of quantum field theory, as determined from Feynman diagrams, and deduces the quasipotential Φ perturbatively. Then one can use that Φ in ( 10 ) to compute the energy levels of two-particle systems that are implied by the field theory. Constraint dynamics provides one of many, in fact an infinite number of, different types of quasipotential equations (three-dimensional truncations of the Bethe–Salpeter equation ), differing from one another by the choice of {\displaystyle {\mathcal {G}}} . [ 36 ] The relatively simple solution of the problem of relative time and energy from the generalized mass shell constraint for two particles has no simple extension, such as presented here with the {\displaystyle x_{\perp }} variable, to either two particles in an external field [ 37 ] or to three or more particles. Sazdjian has presented a recipe for this extension when the particles are confined and cannot split into clusters of a smaller number of particles with no inter-cluster interactions. [ 38 ] Lusanna has developed an approach that does not involve generalized mass shell constraints and has no such restrictions; it extends to N bodies with or without fields. It is formulated on spacelike hypersurfaces and, when restricted to the family of hyperplanes orthogonal to the total timelike momentum, gives rise to a covariant intrinsic one-time formulation (with no relative time variables) called the "rest-frame instant form" of dynamics. [ 39 ] [ 40 ]
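As a schematic illustration of how such an equation is solved in practice, the Python sketch below discretizes a toy version of T + V + V G T = 0 on a momentum grid and compares a direct matrix solution with a Born-like iteration. The grid, coupling, potential matrix, and Green function are all hypothetical stand-ins chosen so the iteration converges; they are not the actual TBDE quasipotential ingredients:

import numpy as np

# Toy discretization: V and G become matrices on an N-point momentum grid.
N = 50
k = np.linspace(0.1, 5.0, N)

V = -0.02 / (1.0 + np.subtract.outer(k, k) ** 2)   # sample potential matrix
G = np.diag(1.0 / (1.0 + k ** 2))                  # sample nonsingular Green function

# The document's sign convention T + V + V G T = 0 gives (1 + V G) T = -V.
T_exact = np.linalg.solve(np.eye(N) + V @ G, -V)

# The same T built order by order, mirroring how the quasipotential is
# deduced perturbatively from field-theoretic amplitudes.
T = -V.copy()
for _ in range(40):
    T = -V - V @ G @ T

print(np.max(np.abs(T - T_exact)))   # tiny: the Born-like series has converged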
https://en.wikipedia.org/wiki/Two-body_Dirac_equations
In classical mechanics , the two-body problem is to calculate and predict the motion of two massive bodies that are orbiting each other in space. The problem assumes that the two bodies are point particles that interact only with one another; the only force affecting each object arises from the other one, and all other objects are ignored. The most prominent example of the classical two-body problem is the gravitational case (see also Kepler problem ), arising in astronomy for predicting the orbits (or escapes from orbit) of objects such as satellites , planets , and stars . A two-point-particle model of such a system nearly always describes its behavior well enough to provide useful insights and predictions. A simpler "one body" model, the " central-force problem ", treats one object as the immobile source of a force acting on the other. One then seeks to predict the motion of the single remaining mobile object. Such an approximation can give useful results when one object is much more massive than the other (as with a light planet orbiting a heavy star, where the star can be treated as essentially stationary). However, the one-body approximation is usually unnecessary except as a stepping stone. For many forces, including gravitational ones, the general version of the two-body problem can be reduced to a pair of one-body problems , allowing it to be solved completely, and giving a solution simple enough to be used effectively. By contrast, the three-body problem (and, more generally, the n -body problem for n ≥ 3) cannot be solved in terms of first integrals, except in special cases. The two-body problem is interesting in astronomy because pairs of astronomical objects are often moving rapidly in arbitrary directions (so their motions become interesting), widely separated from one another (so they will not collide) and even more widely separated from other objects (so outside influences will be small enough to be ignored safely). Under the force of gravity , each member of a pair of such objects will orbit their mutual center of mass in an elliptical pattern, unless they are moving fast enough to escape one another entirely, in which case their paths will diverge along other planar conic sections . If one object is very much heavier than the other, it will move far less than the other with reference to the shared center of mass. The mutual center of mass may even be inside the larger object. For the derivation of the solutions to the problem, see Classical central-force problem or Kepler problem . In principle, the same solutions apply to macroscopic problems involving objects interacting not only through gravity, but through any other attractive scalar force field obeying an inverse-square law , with electrostatic attraction being the obvious physical example. In practice, such problems rarely arise. Except perhaps in experimental apparatus or other specialized equipment, we rarely encounter electrostatically interacting objects which are moving fast enough, and in such a direction, as to avoid colliding, and/or which are isolated enough from their surroundings. The dynamical system of a two-body system under the influence of torque turns out to be a Sturm–Liouville equation . [ 1 ] Although the two-body model treats the objects as point particles, classical mechanics applies only to systems of macroscopic scale. Most behavior of subatomic particles cannot be predicted under the classical assumptions underlying this article or using the mathematics here.
Electrons in an atom are sometimes described as "orbiting" its nucleus , following an early conjecture of Niels Bohr (this is the source of the term " orbital "). However, electrons do not actually orbit nuclei in any meaningful sense, and quantum mechanics is necessary for any useful understanding of the electron's real behavior. Solving the classical two-body problem for an electron orbiting an atomic nucleus is misleading and does not produce many useful insights. The complete two-body problem can be solved by re-formulating it as two one-body problems: a trivial one and one that involves solving for the motion of one particle in an external potential . Since many one-body problems can be solved exactly, the corresponding two-body problem can also be solved. Let x 1 and x 2 be the vector positions of the two bodies, and m 1 and m 2 be their masses. The goal is to determine the trajectories x 1 ( t ) and x 2 ( t ) for all times t , given the initial positions x 1 ( t = 0) and x 2 ( t = 0) and the initial velocities v 1 ( t = 0) and v 2 ( t = 0) . When applied to the two masses, Newton's second law states that {\displaystyle \mathbf {F} _{12}(\mathbf {x} _{1},\mathbf {x} _{2})=m_{1}{\ddot {\mathbf {x} }}_{1}\quad (1)} {\displaystyle \mathbf {F} _{21}(\mathbf {x} _{1},\mathbf {x} _{2})=m_{2}{\ddot {\mathbf {x} }}_{2}\quad (2)} where F 12 is the force on mass 1 due to its interactions with mass 2, and F 21 is the force on mass 2 due to its interactions with mass 1. The two dots on top of the x position vectors denote their second derivative with respect to time, or their acceleration vectors. Adding and subtracting these two equations decouples them into two one-body problems, which can be solved independently. Adding equations (1) and ( 2 ) results in an equation describing the center of mass ( barycenter ) motion. By contrast, subtracting equation (2) from equation (1) results in an equation that describes how the vector r = x 1 − x 2 between the masses changes with time. The solutions of these independent one-body problems can be combined to obtain the solutions for the trajectories x 1 ( t ) and x 2 ( t ) . Let R {\displaystyle \mathbf {R} } be the position of the center of mass ( barycenter ) of the system. Addition of the force equations (1) and (2) yields {\displaystyle m_{1}{\ddot {\mathbf {x} }}_{1}+m_{2}{\ddot {\mathbf {x} }}_{2}=(m_{1}+m_{2}){\ddot {\mathbf {R} }}=\mathbf {F} _{12}+\mathbf {F} _{21}=0} where we have used Newton's third law F 12 = − F 21 and where {\displaystyle {\ddot {\mathbf {R} }}\equiv {\frac {m_{1}{\ddot {\mathbf {x} }}_{1}+m_{2}{\ddot {\mathbf {x} }}_{2}}{m_{1}+m_{2}}}.} The resulting equation {\displaystyle {\ddot {\mathbf {R} }}=0} shows that the velocity {\displaystyle \mathbf {v} ={\frac {d\mathbf {R} }{dt}}} of the center of mass is constant, from which it follows that the total momentum m 1 v 1 + m 2 v 2 is also constant ( conservation of momentum ). Hence, the position R ( t ) of the center of mass can be determined at all times from the initial positions and velocities. Dividing both force equations by the respective masses, subtracting the second equation from the first, and rearranging gives the equation {\displaystyle {\ddot {\mathbf {r} }}={\ddot {\mathbf {x} }}_{1}-{\ddot {\mathbf {x} }}_{2}=\left({\frac {\mathbf {F} _{12}}{m_{1}}}-{\frac {\mathbf {F} _{21}}{m_{2}}}\right)=\left({\frac {1}{m_{1}}}+{\frac {1}{m_{2}}}\right)\mathbf {F} _{12}} where we have again used Newton's third law F 12 = − F 21 and where r is the displacement vector from mass 2 to mass 1, as defined above.
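A short numerical sketch can make the decoupling concrete. The Python fragment below, with hypothetical masses and initial conditions in dimensionless units, integrates the coupled equations directly and confirms that the barycenter R moves in a straight line at constant velocity, as the equation R̈ = 0 requires:

import numpy as np

# Illustrative two-body setup (dimensionless units, arbitrary values).
Gconst, m1, m2 = 1.0, 3.0, 1.0
x1, v1 = np.array([1.0, 0.0]), np.array([0.0, 0.3])
x2, v2 = np.array([-1.0, 0.0]), np.array([0.0, -0.5])

def F12(x1, x2):
    # Force on mass 1 from mass 2 (Newtonian gravity); F21 = -F12.
    r = x1 - x2
    return -Gconst * m1 * m2 * r / np.linalg.norm(r) ** 3

R0 = (m1 * x1 + m2 * x2) / (m1 + m2)   # initial barycenter position
V0 = (m1 * v1 + m2 * v2) / (m1 + m2)   # barycenter velocity (should stay fixed)

dt, steps = 1e-3, 20000
for _ in range(steps):
    f = F12(x1, x2)
    v1 += f / m1 * dt
    v2 += -f / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

R = (m1 * x1 + m2 * x2) / (m1 + m2)
print(np.abs(R - (R0 + V0 * dt * steps)).max())   # ~0: uniform barycenter motion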
The force between the two objects, which originates in the two objects, should only be a function of their separation r and not of their absolute positions x 1 and x 2 ; otherwise, there would not be translational symmetry , and the laws of physics would have to change from place to place. The subtracted equation can therefore be written: μ r ¨ = F 12 ( x 1 , x 2 ) = F ( r ) {\displaystyle \mu {\ddot {\mathbf {r} }}=\mathbf {F} _{12}(\mathbf {x} _{1},\mathbf {x} _{2})=\mathbf {F} (\mathbf {r} )} where μ {\displaystyle \mu } is the reduced mass μ = 1 1 m 1 + 1 m 2 = m 1 m 2 m 1 + m 2 . {\displaystyle \mu ={\frac {1}{{\frac {1}{m_{1}}}+{\frac {1}{m_{2}}}}}={\frac {m_{1}m_{2}}{m_{1}+m_{2}}}.} Solving the equation for r ( t ) is the key to the two-body problem. The solution depends on the specific force between the bodies, which is defined by F ( r ) {\displaystyle \mathbf {F} (\mathbf {r} )} . For the case where F ( r ) {\displaystyle \mathbf {F} (\mathbf {r} )} follows an inverse-square law , see the Kepler problem . Once R ( t ) and r ( t ) have been determined, the original trajectories may be obtained x 1 ( t ) = R ( t ) + m 2 m 1 + m 2 r ( t ) {\displaystyle \mathbf {x} _{1}(t)=\mathbf {R} (t)+{\frac {m_{2}}{m_{1}+m_{2}}}\mathbf {r} (t)} x 2 ( t ) = R ( t ) − m 1 m 1 + m 2 r ( t ) {\displaystyle \mathbf {x} _{2}(t)=\mathbf {R} (t)-{\frac {m_{1}}{m_{1}+m_{2}}}\mathbf {r} (t)} as may be verified by substituting the definitions of R and r into the right-hand sides of these two equations. The motion of two bodies with respect to each other always lies in a plane (in the center of mass frame ). Proof: Defining the linear momentum p and the angular momentum L of the system, with respect to the center of mass, by the equations L = r × p = r × μ d r d t , {\displaystyle \mathbf {L} =\mathbf {r} \times \mathbf {p} =\mathbf {r} \times \mu {\frac {d\mathbf {r} }{dt}},} where μ is the reduced mass and r is the relative position r 2 − r 1 (with these written taking the center of mass as the origin, and thus both parallel to r ) the rate of change of the angular momentum L equals the net torque N N = d L d t = r ˙ × μ r ˙ + r × μ r ¨ , {\displaystyle \mathbf {N} ={\frac {d\mathbf {L} }{dt}}={\dot {\mathbf {r} }}\times \mu {\dot {\mathbf {r} }}+\mathbf {r} \times \mu {\ddot {\mathbf {r} }}\ ,} and using the property of the vector cross product that v × w = 0 for any vectors v and w pointing in the same direction, N = d L d t = r × F , {\displaystyle \mathbf {N} \ =\ {\frac {d\mathbf {L} }{dt}}=\mathbf {r} \times \mathbf {F} \ ,} with F = μ d 2 r / dt 2 . Introducing the assumption (true of most physical forces, as they obey Newton's strong third law of motion ) that the force between two particles acts along the line between their positions, it follows that r × F = 0 and the angular momentum vector L is constant (conserved). Therefore, the displacement vector r and its velocity v are always in the plane perpendicular to the constant vector L . 
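The reduced-mass equation and the planarity argument can be illustrated the same way. In the Python sketch below (same illustrative units; the initial conditions are arbitrary), the relative vector r is integrated from μ r̈ = F(r) and the angular momentum L = μ r × ṙ is monitored:

import numpy as np

Gconst, m1, m2 = 1.0, 3.0, 1.0
M, mu = m1 + m2, m1 * m2 / (m1 + m2)   # total and reduced mass

r = np.array([2.0, 0.0, 0.0])          # r = x1 - x2
rdot = np.array([0.0, 1.2, 0.0])

L = mu * np.cross(r, rdot)             # angular momentum in the c.m. frame

dt = 1e-3
for _ in range(20000):
    F = -Gconst * m1 * m2 * r / np.linalg.norm(r) ** 3   # central force along r
    rdot += F / mu * dt
    r += rdot * dt

print(np.cross(r, mu * rdot) - L)   # ~ (0, 0, 0): L is conserved
print(np.dot(r, L))                 # ~ 0: r stays in the plane perpendicular to L

# The individual trajectories follow from R(t) and r(t) via
# x1 = R + (m2 / M) * r  and  x2 = R - (m1 / M) * r.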
If the force F ( r ) is conservative then the system has a potential energy U ( r ) , so the total energy can be written as {\displaystyle E_{\text{tot}}={\frac {1}{2}}m_{1}{\dot {\mathbf {x} }}_{1}^{2}+{\frac {1}{2}}m_{2}{\dot {\mathbf {x} }}_{2}^{2}+U(\mathbf {r} )={\frac {1}{2}}(m_{1}+m_{2}){\dot {\mathbf {R} }}^{2}+{1 \over 2}\mu {\dot {\mathbf {r} }}^{2}+U(\mathbf {r} )} In the center of mass frame the kinetic energy is at its minimum and the total energy becomes {\displaystyle E={\frac {1}{2}}\mu {\dot {\mathbf {r} }}^{2}+U(\mathbf {r} )} The coordinates x 1 and x 2 can be expressed as {\displaystyle \mathbf {x} _{1}={\frac {\mu }{m_{1}}}\mathbf {r} } {\displaystyle \mathbf {x} _{2}=-{\frac {\mu }{m_{2}}}\mathbf {r} } and in a similar way the energy E is related to the energies E 1 and E 2 that separately contain the kinetic energy of each body: {\displaystyle {\begin{aligned}E_{1}&={\frac {\mu }{m_{1}}}E={\frac {1}{2}}m_{1}{\dot {\mathbf {x} }}_{1}^{2}+{\frac {\mu }{m_{1}}}U(\mathbf {r} )\\[4pt]E_{2}&={\frac {\mu }{m_{2}}}E={\frac {1}{2}}m_{2}{\dot {\mathbf {x} }}_{2}^{2}+{\frac {\mu }{m_{2}}}U(\mathbf {r} )\\[4pt]E_{\text{tot}}&=E_{1}+E_{2}\end{aligned}}} For many physical problems, the force F ( r ) is a central force , i.e., it is of the form {\displaystyle \mathbf {F} (\mathbf {r} )=F(r){\hat {\mathbf {r} }}} where r = | r | and r̂ = r / r is the corresponding unit vector . We now have: {\displaystyle \mu {\ddot {\mathbf {r} }}={F}(r){\hat {\mathbf {r} }}\ ,} where F ( r ) is negative in the case of an attractive force.
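A quick numerical check of this energy bookkeeping, again with hypothetical sample values and a sample gravitational U(r), might look as follows in Python:

import numpy as np

Gconst, m1, m2 = 1.0, 3.0, 1.0
M, mu = m1 + m2, m1 * m2 / (m1 + m2)

r = np.array([2.0, 0.0])
rdot = np.array([0.0, 1.2])

def U(r_vec):
    return -Gconst * m1 * m2 / np.linalg.norm(r_vec)   # sample potential U(r)

# Centre-of-mass-frame coordinates x1 = (mu/m1) r, x2 = -(mu/m2) r.
x1dot, x2dot = (mu / m1) * rdot, -(mu / m2) * rdot

E = 0.5 * mu * rdot @ rdot + U(r)
E1 = 0.5 * m1 * x1dot @ x1dot + (mu / m1) * U(r)
E2 = 0.5 * m2 * x2dot @ x2dot + (mu / m2) * U(r)

assert np.isclose(E1, (mu / m1) * E)
assert np.isclose(E2, (mu / m2) * E)
assert np.isclose(E1 + E2, E)      # since mu/m1 + mu/m2 = 1
print(E, E1, E2)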
https://en.wikipedia.org/wiki/Two-body_problem
The two-body problem in general relativity (or relativistic two-body problem ) is the determination of the motion and gravitational field of two bodies as described by the field equations of general relativity . Solving the Kepler problem is essential to calculate the bending of light by gravity and the motion of a planet orbiting its sun. Solutions are also used to describe the motion of binary stars around each other, and estimate their gradual loss of energy through gravitational radiation . General relativity describes the gravitational field by curved space-time; the field equations governing this curvature are nonlinear and therefore difficult to solve in a closed form . No exact solutions of the Kepler problem have been found, but an approximate solution has: the Schwarzschild solution . This solution pertains when the mass M of one body is overwhelmingly greater than the mass m of the other. If so, the larger mass may be taken as stationary and the sole contributor to the gravitational field. This is a good approximation for a photon passing a star and for a planet orbiting its sun. The motion of the lighter body (called the "particle" below) can then be determined from the Schwarzschild solution; the motion is a geodesic ("shortest path between two points") in the curved space-time. Such geodesic solutions account for the anomalous precession of the planet Mercury , which is a key piece of evidence supporting the theory of general relativity. They also describe the bending of light in a gravitational field, another prediction famously used as evidence for general relativity. If both masses are considered to contribute to the gravitational field, as in binary stars, the Kepler problem can be solved only approximately. The earliest approximation method to be developed was the post-Newtonian expansion , an iterative method in which an initial solution is gradually corrected. More recently, it has become possible to solve Einstein's field equation using a computer [ 1 ] [ 2 ] [ 3 ] instead of mathematical formulae. As the two bodies orbit each other, they will emit gravitational radiation ; this causes them to lose energy and angular momentum gradually, as illustrated by the binary pulsar PSR B1913+16 . For binary black holes , the numerical solution of the two-body problem was achieved in 2005 after four decades of research when three groups devised breakthrough techniques. [ 1 ] [ 2 ] [ 3 ] The Kepler problem derives its name from Johannes Kepler , who worked as an assistant to the Danish astronomer Tycho Brahe . Brahe took extraordinarily accurate measurements of the motion of the planets of the Solar System. From these measurements, Kepler was able to formulate Kepler's laws , the first modern description of planetary motion: Kepler published the first two laws in 1609 and the third law in 1619. They supplanted earlier models of the Solar System, such as those of Ptolemy and Copernicus . Kepler's laws apply only in the limited case of the two-body problem. Voltaire and Émilie du Châtelet were the first to call them "Kepler's laws". Nearly a century later, Isaac Newton had formulated his three laws of motion . In particular, Newton's second law states that a force F applied to a mass m produces an acceleration a given by the equation F = ma . Newton then posed the question: what must the force be that produces the elliptical orbits seen by Kepler? 
His answer came in his law of universal gravitation , which states that the force between a mass M and another mass m is given by the formula {\displaystyle F=G{\frac {Mm}{r^{2}}},} where r is the distance between the masses and G is the gravitational constant . Given this force law and his equations of motion, Newton was able to show that two point masses attracting each other would each follow perfectly elliptical orbits. The ratio of sizes of these ellipses is m / M , with the larger mass moving on a smaller ellipse. If M is much larger than m , then the larger mass will appear to be stationary at the focus of the elliptical orbit of the lighter mass m . This model can be applied approximately to the Solar System. Since the mass of the Sun is much larger than those of the planets, the force acting on each planet is principally due to the Sun; the mutual gravitational attractions of the planets can be neglected to first approximation. If the potential energy between the two bodies is not exactly the 1/ r potential of Newton's gravitational law but differs only slightly, then the ellipse of the orbit gradually rotates (among other possible effects). This apsidal precession is observed for all the planets orbiting the Sun, primarily due to the oblateness of the Sun (it is not perfectly spherical) and the attractions of the other planets to one another. The apsides are the two points of closest and furthest distance of the orbit (the periapsis and apoapsis, respectively); apsidal precession corresponds to the rotation of the line joining the apsides. It also corresponds to the rotation of the Laplace–Runge–Lenz vector , which points along the line of apsides. Newton's law of gravitation soon became accepted because it gave very accurate predictions of the motion of all the planets. These calculations were carried out initially by Pierre-Simon Laplace in the late 18th century, and refined by Félix Tisserand in the later 19th century. Conversely, if Newton's law of gravitation did not predict the apsidal precessions of the planets accurately, it would have to be discarded as a theory of gravitation. Such an anomalous precession was observed in the second half of the 19th century. In 1859, Urbain Le Verrier discovered that the orbital precession of the planet Mercury was not quite what it should be; the ellipse of its orbit was rotating (precessing) slightly faster than predicted by the traditional theory of Newtonian gravity, even after all the effects of the other planets had been accounted for. [ 4 ] The effect is small (roughly 43 arcseconds of rotation per century), but well above the measurement error (roughly 0.1 arcseconds per century). Le Verrier realized the importance of his discovery immediately, and challenged astronomers and physicists alike to account for it. Several classical explanations were proposed, such as interplanetary dust, unobserved oblateness of the Sun , an undetected moon of Mercury, or a new planet named Vulcan . [ 5 ] After these explanations were discounted, some physicists were driven to the more radical hypothesis that Newton's inverse-square law of gravitation was incorrect. For example, some physicists proposed a power law with an exponent that was slightly different from 2. [ 6 ] Others argued that Newton's law should be supplemented with a velocity-dependent potential. However, this implied a conflict with Newtonian celestial dynamics.
In his treatise on celestial mechanics, Laplace had shown that if the gravitational influence does not act instantaneously, then the motions of the planets themselves will not exactly conserve momentum (and consequently some of the momentum would have to be ascribed to the mediator of the gravitational interaction, analogous to ascribing momentum to the mediator of the electromagnetic interaction). As seen from a Newtonian point of view, if gravitational influence does propagate at a finite speed, then at all points in time a planet is attracted to a point where the Sun was some time before, and not towards the instantaneous position of the Sun. On the assumption of the classical fundamentals, Laplace had shown that if gravity propagated at a velocity on the order of the speed of light then the solar system would be unstable, and would not exist for a long time. The observation that the solar system is old enough allowed him to put a lower limit on the speed of gravity that turned out to be many orders of magnitude faster than the speed of light. [ 5 ] [ 7 ] Laplace's estimate for the speed of gravity is not correct in a field theory which respects the principle of relativity. Since electric and magnetic fields combine, the attraction of a point charge which is moving at a constant velocity is towards the extrapolated instantaneous position, not to the apparent position it seems to occupy when looked at. [ note 1 ] To avoid those problems, between 1870 and 1900 many scientists used the electrodynamic laws of Wilhelm Eduard Weber , Carl Friedrich Gauss , and Bernhard Riemann to produce stable orbits and to explain the perihelion shift of Mercury's orbit. In 1890, Maurice Lévy succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light in his theory. In another attempt, Paul Gerber (1898) even succeeded in deriving the correct formula for the perihelion shift (which was identical to the formula later used by Einstein). However, because the basic laws of Weber and others were wrong (for example, Weber's law was superseded by Maxwell's theory), those hypotheses were rejected. [ 8 ] Another attempt by Hendrik Lorentz (1900), who already used Maxwell's theory, produced a perihelion shift which was too low. [ 5 ] Around 1904–1905, the works of Hendrik Lorentz , Henri Poincaré and finally Albert Einstein 's special theory of relativity excluded the possibility of propagation of any effects faster than the speed of light . It followed that Newton's law of gravitation would have to be replaced with another law, compatible with the principle of relativity, while still obtaining the Newtonian limit for circumstances where relativistic effects are negligible. Such attempts were made by Henri Poincaré (1905), Hermann Minkowski (1907) and Arnold Sommerfeld (1910). [ 9 ] In 1907 Einstein came to the conclusion that to achieve this a successor to special relativity was needed. From 1907 to 1915, Einstein worked towards a new theory, using his equivalence principle as a key concept to guide his way. According to this principle, a uniform gravitational field acts equally on everything within it and, therefore, cannot be detected by a free-falling observer. Conversely, all local gravitational effects should be reproducible in a linearly accelerating reference frame, and vice versa.
Thus, gravity acts like a fictitious force such as the centrifugal force or the Coriolis force , which result from being in an accelerated reference frame; all fictitious forces are proportional to the inertial mass , just as gravity is. To effect the reconciliation of gravity and special relativity and to incorporate the equivalence principle, something had to be sacrificed; that something was the long-held classical assumption that our space obeys the laws of Euclidean geometry , e.g., that the Pythagorean theorem is true experimentally. Einstein used a more general geometry, pseudo-Riemannian geometry , to allow for the curvature of space and time that was necessary for the reconciliation; after eight years of work (1907–1915), he succeeded in discovering the precise way in which space-time should be curved in order to reproduce the physical laws observed in Nature, particularly gravitation. Gravity is distinct from the fictitious centrifugal and Coriolis forces in the sense that the curvature of spacetime is regarded as physically real, whereas the fictitious forces are not regarded as forces. The very first solutions of his field equations explained the anomalous precession of Mercury and predicted an unusual bending of light, which was confirmed after his theory was published. These solutions are explained below. In normal Euclidean geometry , triangles obey the Pythagorean theorem , which states that the square distance ds 2 between two points in space is the sum of the squares of its perpendicular components {\displaystyle ds^{2}=dx^{2}+dy^{2}+dz^{2}} where dx , dy and dz represent the infinitesimal differences between the x , y and z coordinates of two points in a Cartesian coordinate system . Now imagine a world in which this is not quite true; a world where the distance is instead given by {\displaystyle ds^{2}=F(x,y,z)\,dx^{2}+G(x,y,z)\,dy^{2}+H(x,y,z)\,dz^{2}} where F , G and H are arbitrary functions of position. It is not hard to imagine such a world; we live on one. The surface of the earth is curved, which is why it is impossible to make a perfectly accurate flat map of the earth. Non-Cartesian coordinate systems illustrate this well; for example, in the spherical coordinates ( r , θ , φ ), the Euclidean distance can be written {\displaystyle ds^{2}=dr^{2}+r^{2}\,d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\varphi ^{2}} Another illustration would be a world in which the rulers used to measure length were untrustworthy, rulers that changed their length with their position and even their orientation. In the most general case, one must allow for cross-terms when calculating the distance ds {\displaystyle ds^{2}=g_{xx}\,dx^{2}+g_{xy}\,dx\,dy+g_{xz}\,dx\,dz+\cdots +g_{zy}\,dz\,dy+g_{zz}\,dz^{2}} where the nine functions g xx , g xy , ..., g zz constitute the metric tensor , which defines the geometry of the space in Riemannian geometry . In the spherical-coordinates example above, there are no cross-terms; the only nonzero metric tensor components are g rr = 1, g θθ = r 2 and g φφ = r 2 sin 2 θ. In his special theory of relativity , Albert Einstein showed that the distance ds between two spatial points is not constant, but depends on the motion of the observer.
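The spherical-coordinate form of the Euclidean metric can be verified directly by comparing it with the Cartesian distance for a small displacement; the point and step sizes in this Python sketch are arbitrary:

import numpy as np

def cart(r, th, ph):
    # Convert spherical coordinates to Cartesian.
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

r, th, ph = 2.0, 0.7, 1.1            # an arbitrary sample point
dr, dth, dph = 1e-6, 2e-6, -1e-6     # a small sample displacement

ds2_cart = np.sum((cart(r + dr, th + dth, ph + dph) - cart(r, th, ph)) ** 2)
ds2_metric = dr**2 + r**2 * dth**2 + r**2 * np.sin(th)**2 * dph**2

print(ds2_cart, ds2_metric)   # agree to leading order in the displacements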
However, there is a measure of separation between two points in space-time — called "proper time" and denoted with the symbol dτ — that is invariant; in other words, it does not depend on the motion of the observer. c 2 d τ 2 = c 2 d t 2 − d x 2 − d y 2 − d z 2 {\displaystyle c^{2}\,d\tau ^{2}=c^{2}\,dt^{2}-dx^{2}-dy^{2}-dz^{2}} which may be written in spherical coordinates as c 2 d τ 2 = c 2 d t 2 − d r 2 − r 2 d θ 2 − r 2 sin 2 ⁡ θ d φ 2 {\displaystyle c^{2}\,d\tau ^{2}=c^{2}\,dt^{2}-dr^{2}-r^{2}\,d\theta ^{2}-r^{2}\sin ^{2}\theta \,d\varphi ^{2}} This formula is the natural extension of the Pythagorean theorem and similarly holds only when there is no curvature in space-time. In general relativity , however, space and time may have curvature, so this distance formula must be modified to a more general form c 2 d τ 2 = g μ ν d x μ d x ν {\displaystyle c^{2}\,d\tau ^{2}=g_{\mu \nu }dx^{\mu }\,dx^{\nu }} just as we generalized the formula to measure distance on the surface of the Earth. The exact form of the metric g μν depends on the gravitating mass, momentum and energy, as described by the Einstein field equations . Einstein developed those field equations to match the then known laws of Nature; however, they predicted never-before-seen phenomena (such as the bending of light by gravity) that were confirmed later. According to Einstein's theory of general relativity, particles of negligible mass travel along geodesics in the space-time. In uncurved space-time, far from a source of gravity, these geodesics correspond to straight lines; however, they may deviate from straight lines when the space-time is curved. The equation for the geodesic lines is [ 10 ] d 2 x μ d q 2 + Γ ν λ μ d x ν d q d x λ d q = 0 {\displaystyle {\frac {d^{2}x^{\mu }}{dq^{2}}}+\Gamma _{\nu \lambda }^{\mu }{\frac {dx^{\nu }}{dq}}{\frac {dx^{\lambda }}{dq}}=0} where Γ represents the Christoffel symbol and the variable q parametrizes the particle's path through space-time , its so-called world line . The Christoffel symbol depends only on the metric tensor g μν , or rather on how it changes with position. The variable q is a constant multiple of the proper time τ for timelike orbits (which are traveled by massive particles), and is usually taken to be equal to it. For lightlike (or null) orbits (which are traveled by massless particles such as the photon ), the proper time is zero and, strictly speaking, cannot be used as the variable q . Nevertheless, lightlike orbits can be derived as the ultrarelativistic limit of timelike orbits, that is, the limit as the particle mass m goes to zero while holding its total energy fixed. An exact solution to the Einstein field equations is the Schwarzschild metric , which corresponds to the external gravitational field of a stationary, uncharged, non-rotating, spherically symmetric body of mass M . It is characterized by a length scale r s , known as the Schwarzschild radius , which is defined by the formula r s = 2 G M c 2 {\displaystyle r_{\text{s}}={\frac {2GM}{c^{2}}}} where G is the gravitational constant . The classical Newtonian theory of gravity is recovered in the limit as the ratio r s / r goes to zero. In that limit, the metric returns to that defined by special relativity . In practice, this ratio is almost always extremely small. For example, the Schwarzschild radius r s of the Earth is roughly 9 mm ; at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. 
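These orders of magnitude are easy to reproduce. A small Python calculation with standard (rounded) values of G, c, and the masses and radii of the Earth and Sun gives the figures quoted in the surrounding text:

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def schwarzschild_radius(M):
    return 2 * G * M / c**2

M_earth, R_earth = 5.972e24, 6.371e6
M_sun, R_sun = 1.989e30, 6.957e8

rs_earth = schwarzschild_radius(M_earth)
rs_sun = schwarzschild_radius(M_sun)

print(rs_earth)              # ~9 mm
print(rs_earth / R_earth)    # ~1.4e-9: about one part in a billion
print(rs_sun)                # ~2953 m
print(rs_sun / R_sun)        # ~4e-6: a few parts in a million at the surface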
The Schwarzschild radius of the Sun is much larger, roughly 2953 meters, but at its surface, the ratio r s / r is roughly 4 parts in a million. A white dwarf star is much denser, but even here the ratio at its surface is roughly 250 parts in a million. The ratio only becomes large close to ultra-dense objects such as neutron stars (where the ratio is roughly 50%) and black holes . The orbit of a test particle of infinitesimal mass m about the central mass M is given by the equation of motion {\displaystyle \left({\frac {dr}{d\tau }}\right)^{2}={\frac {E^{2}}{m^{2}c^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left(c^{2}+{\frac {h^{2}}{r^{2}}}\right),} where h is the specific relative angular momentum , {\displaystyle h=r\times v={L \over \mu }} , and μ is the reduced mass . This can be converted into an equation for the orbit {\displaystyle \left({\frac {dr}{d\varphi }}\right)^{2}={\frac {r^{4}}{b^{2}}}-\left(1-{\frac {r_{\text{s}}}{r}}\right)\left({\frac {r^{4}}{a^{2}}}+r^{2}\right),} where, for brevity, two length-scales, {\displaystyle a={\frac {h}{c}}} and {\displaystyle b={\frac {Lc}{E}}} , have been introduced. They are constants of the motion and depend on the initial conditions (position and velocity) of the test particle. Hence, the solution of the orbit equation is {\displaystyle \varphi =\int {\frac {1}{r^{2}}}\left[{\frac {1}{b^{2}}}-\left(1-{\frac {r_{\mathrm {s} }}{r}}\right)\left({\frac {1}{a^{2}}}+{\frac {1}{r^{2}}}\right)\right]^{-1/2}\,dr.} The equation of motion for the particle derived above, {\displaystyle \left({\frac {dr}{d\tau }}\right)^{2}={\frac {E^{2}}{m^{2}c^{2}}}-c^{2}+{\frac {r_{\text{s}}c^{2}}{r}}-{\frac {h^{2}}{r^{2}}}+{\frac {r_{\text{s}}h^{2}}{r^{3}}},} can be rewritten using the definition of the Schwarzschild radius r s as {\displaystyle {\frac {1}{2}}m\left({\frac {dr}{d\tau }}\right)^{2}=\left[{\frac {E^{2}}{2mc^{2}}}-{\frac {1}{2}}mc^{2}\right]+{\frac {GMm}{r}}-{\frac {L^{2}}{2\mu r^{2}}}+{\frac {G(M+m)L^{2}}{c^{2}\mu r^{3}}},} which is equivalent to a particle moving in a one-dimensional effective potential {\displaystyle V(r)=-{\frac {GMm}{r}}+{\frac {L^{2}}{2\mu r^{2}}}-{\frac {G(M+m)L^{2}}{c^{2}\mu r^{3}}}} The first two terms are well-known classical energies, the first being the attractive Newtonian gravitational potential energy and the second corresponding to the repulsive "centrifugal" potential energy ; however, the third term is an attractive energy unique to general relativity . As shown below and elsewhere , this inverse-cubic energy causes elliptical orbits to precess gradually by an angle δφ per revolution {\displaystyle \delta \varphi \approx {\frac {6\pi G(M+m)}{c^{2}A\left(1-e^{2}\right)}}} where A is the semi-major axis and e is the eccentricity. Here δφ is not the change in the φ -coordinate in ( t , r , θ , φ ) coordinates but the change in the argument of periapsis of the classical closed orbit.
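Applied to Mercury, this formula reproduces the famous 43 arcseconds per century. The Python sketch below uses rounded orbital elements for Mercury and neglects m relative to the solar mass:

import math

G = 6.674e-11                  # m^3 kg^-1 s^-2
c = 2.998e8                    # m/s
M_sun = 1.989e30               # kg (the planetary mass m is neglected)

A = 5.791e10                   # Mercury's semi-major axis, m
e = 0.2056                     # Mercury's orbital eccentricity
T_orbit = 87.969 * 86400.0     # Mercury's orbital period, s

dphi = 6 * math.pi * G * M_sun / (c**2 * A * (1 - e**2))   # radians per revolution

orbits_per_century = 100 * 365.25 * 86400.0 / T_orbit
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600

print(arcsec)   # ~43 arcseconds per century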
The third term is attractive and dominates at small r values, giving a critical inner radius r inner at which a particle is drawn inexorably inwards to r = 0; this inner radius is a function of the particle's angular momentum per unit mass or, equivalently, of the length-scale a defined above. The effective potential V can be rewritten in terms of the length a = h / c : {\displaystyle V(r)={\frac {mc^{2}}{2}}\left[-{\frac {r_{\text{s}}}{r}}+{\frac {a^{2}}{r^{2}}}-{\frac {r_{\text{s}}a^{2}}{r^{3}}}\right].} Circular orbits are possible when the effective force is zero: {\displaystyle F=-{\frac {dV}{dr}}=-{\frac {mc^{2}}{2r^{4}}}\left[r_{\text{s}}r^{2}-2a^{2}r+3r_{\text{s}}a^{2}\right]=0;} i.e., when the two attractive forces, Newtonian gravity (first term) and the attraction unique to general relativity (third term), are exactly balanced by the repulsive centrifugal force (second term). There are two radii at which this balancing can occur, denoted here as r inner and r outer : {\displaystyle {\begin{aligned}r_{\mathrm {outer} }&={\frac {a^{2}}{r_{\text{s}}}}\left(1+{\sqrt {1-{\frac {3{r_{\text{s}}}^{2}}{a^{2}}}}}\right)\\r_{\mathrm {inner} }&={\frac {a^{2}}{r_{\text{s}}}}\left(1-{\sqrt {1-{\frac {3{r_{\text{s}}}^{2}}{a^{2}}}}}\right)={\frac {3a^{2}}{r_{\mathrm {outer} }}},\end{aligned}}} which are obtained using the quadratic formula . The inner radius r inner is unstable, because the attractive third force strengthens much faster than the other two forces when r becomes small; if the particle slips slightly inwards from r inner (where all three forces are in balance), the third force dominates the other two and draws the particle inexorably inwards to r = 0. At the outer radius, however, the circular orbits are stable; the third term is less important and the system behaves more like the non-relativistic Kepler problem . When a is much greater than r s (the classical case), these formulae become approximately {\displaystyle {\begin{aligned}r_{\mathrm {outer} }&\approx {\frac {2a^{2}}{r_{\text{s}}}}\\r_{\mathrm {inner} }&\approx {\frac {3}{2}}r_{\text{s}}\end{aligned}}} Substituting the definitions of a and r s into r outer yields the classical formula for a particle of mass m orbiting a body of mass M . The following equation {\displaystyle r_{\text{outer}}^{3}={\frac {G(M+m)}{\omega _{\varphi }^{2}}}} where ω φ is the orbital angular speed of the particle, is obtained in non-relativistic mechanics by setting the centrifugal force equal to the Newtonian gravitational force: {\displaystyle {\frac {GMm}{r^{2}}}=\mu \omega _{\varphi }^{2}r} where μ is the reduced mass .
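The two circular-orbit radii can be checked against the quoted quadratic. A minimal Python sketch, in units where r_s = 1 and with an arbitrary illustrative choice of a greater than sqrt(3) r_s:

import math

rs, a = 1.0, 4.0

disc = math.sqrt(1 - 3 * rs**2 / a**2)
r_outer = (a**2 / rs) * (1 + disc)
r_inner = (a**2 / rs) * (1 - disc)

# Both radii must null the effective force, i.e. solve
# rs r^2 - 2 a^2 r + 3 rs a^2 = 0.
for r in (r_outer, r_inner):
    print(rs * r**2 - 2 * a**2 * r + 3 * rs * a**2)   # ~0 up to rounding

print(math.isclose(r_inner, 3 * a**2 / r_outer))      # the product relation holds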
In our notation, the classical orbital angular speed equals ω φ 2 ≈ G M r o u t e r 3 = ( r s c 2 2 r o u t e r 3 ) = ( r s c 2 2 ) ( r s 3 8 a 6 ) = c 2 r s 4 16 a 6 {\displaystyle \omega _{\varphi }^{2}\approx {\frac {GM}{r_{\mathrm {outer} }^{3}}}=\left({\frac {r_{\text{s}}c^{2}}{2r_{\mathrm {outer} }^{3}}}\right)=\left({\frac {r_{\text{s}}c^{2}}{2}}\right)\left({\frac {{r_{\text{s}}}^{3}}{8a^{6}}}\right)={\frac {c^{2}{r_{\text{s}}}^{4}}{16a^{6}}}} At the other extreme, when a 2 approaches 3 r s 2 from above, the two radii converge to a single value r outer ≈ r inner ≈ 3 r s {\displaystyle r_{\text{outer}}\approx r_{\text{inner}}\approx 3r_{\text{s}}} The quadratic solutions above ensure that r outer is always greater than 3 r s , whereas r inner lies between ⁠ 3 / 2 ⁠ r s and 3 r s . Circular orbits smaller than ⁠ 3 / 2 ⁠ r s are not possible. For massless particles, a goes to infinity, implying that there is a circular orbit for photons at r inner = ⁠ 3 / 2 ⁠ r s . The sphere of this radius is sometimes known as the photon sphere . The orbital precession rate may be derived using this radial effective potential V . A small radial deviation from a circular orbit of radius r outer will oscillate in a stable manner with an angular frequency ω r 2 = 1 m [ d 2 V d r 2 ] r = r outer {\displaystyle \omega _{r}^{2}={\frac {1}{m}}\left[{\frac {d^{2}V}{dr^{2}}}\right]_{r=r_{\text{outer}}}} which equals ω r 2 = ( c 2 r s 2 r outer 4 ) ( r outer − r inner ) = ω φ 2 1 − 3 r s 2 a 2 {\displaystyle \omega _{r}^{2}=\left({\frac {c^{2}r_{\text{s}}}{2r_{\text{outer}}^{4}}}\right)\left(r_{\text{outer}}-r_{\text{inner}}\right)=\omega _{\varphi }^{2}{\sqrt {1-{\frac {3r_{\text{s}}^{2}}{a^{2}}}}}} Taking the square root of both sides and expanding using the binomial theorem yields the formula ω r = ω φ ( 1 − 3 r s 2 4 a 2 + ⋯ ) {\displaystyle \omega _{r}=\omega _{\varphi }\left(1-{\frac {3r_{\text{s}}^{2}}{4a^{2}}}+\cdots \right)} Multiplying by the period T of one revolution gives the precession of the orbit per revolution δ φ = T ( ω φ − ω r ) ≈ 2 π ( 3 r s 2 4 a 2 ) = 3 π m 2 c 2 2 L 2 r s 2 {\displaystyle \delta \varphi =T(\omega _{\varphi }-\omega _{r})\approx 2\pi \left({\frac {3r_{\text{s}}^{2}}{4a^{2}}}\right)={\frac {3\pi m^{2}c^{2}}{2L^{2}}}r_{\text{s}}^{2}} where we have used ω φ T = 2 π and the definition of the length-scale a . Substituting the definition of the Schwarzschild radius r s gives δ φ ≈ 3 π m 2 c 2 2 L 2 ( 4 G 2 M 2 c 4 ) = 6 π G 2 M 2 m 2 c 2 L 2 {\displaystyle \delta \varphi \approx {\frac {3\pi m^{2}c^{2}}{2L^{2}}}\left({\frac {4G^{2}M^{2}}{c^{4}}}\right)={\frac {6\pi G^{2}M^{2}m^{2}}{c^{2}L^{2}}}} This may be simplified using the elliptical orbit's semi-major axis A and eccentricity e related by the formula h 2 G ( M + m ) = A ( 1 − e 2 ) {\displaystyle {\frac {h^{2}}{G(M+m)}}=A\left(1-e^{2}\right)} to give the precession angle δ φ ≈ 6 π G ( M + m ) c 2 A ( 1 − e 2 ) {\displaystyle \delta \varphi \approx {\frac {6\pi G(M+m)}{c^{2}A\left(1-e^{2}\right)}}} Since the closed classical orbit is an ellipse in general, the quantity A (1 − e 2 ) is the semi- latus rectum l of the ellipse. 
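Before the final substitution below, the relation between the radial and azimuthal frequencies can be spot-checked numerically from the effective potential itself. In the Python sketch that follows, the units (m = c = r_s = 1) and the value of a are illustrative, and the squared azimuthal frequency is taken to be a²/r_outer⁴ (that is, h²/r⁴ with c = 1), an identification consistent with the identity quoted above but introduced here as an assumption:

import math

rs, a, m = 1.0, 4.0, 1.0

def V(r):
    # Effective potential in units with m = c = 1.
    return (m / 2) * (-rs / r + a**2 / r**2 - rs * a**2 / r**3)

disc = math.sqrt(1 - 3 * rs**2 / a**2)
r_outer = (a**2 / rs) * (1 + disc)

# Second derivative of V at r_outer by central finite differences.
h = 1e-4
omega_r2 = (V(r_outer + h) - 2 * V(r_outer) + V(r_outer - h)) / (h**2 * m)

omega_phi2 = a**2 / r_outer**4       # assumed (d phi / d tau)^2 = h^2 / r^4, c = 1
print(omega_r2, omega_phi2 * disc)   # the two agree to the accuracy of the stencil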
Hence, the final formula for the angular apsidal precession per complete revolution is {\displaystyle \delta \varphi \approx {\frac {6\pi G(M+m)}{c^{2}l}}} In the Schwarzschild solution, it is assumed that the larger mass M is stationary and it alone determines the gravitational field (i.e., the geometry of space-time) and, hence, the lesser mass m follows a geodesic path through that fixed space-time. This is a reasonable approximation for photons and the orbit of Mercury, which is roughly 6 million times lighter than the Sun. However, it is inadequate for binary stars , in which the masses may be of similar magnitude. The metric for the case of two comparable masses cannot be solved in closed form and therefore one has to resort to approximation techniques such as the post-Newtonian approximation or numerical approximations. In passing, we mention one particular exception in lower dimensions (see R = T model for details). In (1+1) dimensions, i.e. a space made of one spatial dimension and one time dimension, the metric for two bodies of equal masses can be solved analytically in terms of the Lambert W function . [ 11 ] However, the gravitational energy between the two bodies is exchanged via dilatons rather than gravitons which require three-space in which to propagate. The post-Newtonian expansion is a calculational method that provides a series of ever more accurate solutions to a given problem. [ 12 ] The method is iterative; an initial solution for particle motions is used to calculate the gravitational fields; from these derived fields, new particle motions can be calculated, from which even more accurate estimates of the fields can be computed, and so on. This approach is called "post-Newtonian" because the Newtonian solution for the particle orbits is often used as the initial solution. The theory can be divided into two parts: first, one finds the two-body effective potential that captures the GR corrections to the Newtonian potential; second, one solves the resulting equations of motion. Einstein's equations can also be solved on a computer using sophisticated numerical methods. [ 1 ] [ 2 ] [ 3 ] Given sufficient computer power, such solutions can be more accurate than post-Newtonian solutions. However, such calculations are demanding because the equations must generally be solved in a four-dimensional space. Nevertheless, beginning in the late 1990s, it became possible to solve difficult problems such as the merger of two black holes, which is a very difficult version of the Kepler problem in general relativity. According to general relativity , if there is no incoming gravitational radiation, two bodies orbiting one another will emit gravitational radiation , causing the orbits to gradually lose energy. The formulae describing the loss of energy and angular momentum due to gravitational radiation from the two bodies of the Kepler problem have been calculated. [ 13 ] The rate of losing energy (averaged over a complete orbit) is given by [ 14 ] {\displaystyle -\left\langle {\frac {dE}{dt}}\right\rangle ={\frac {32G^{4}m_{1}^{2}m_{2}^{2}(m_{1}+m_{2})}{5c^{5}a^{5}\left(1-e^{2}\right)^{7/2}}}\left(1+{\frac {73}{24}}e^{2}+{\frac {37}{96}}e^{4}\right)} where e is the orbital eccentricity and a is the semimajor axis of the elliptical orbit. The angular brackets on the left-hand side of the equation represent the averaging over a single orbit.
Similarly, the average rate of losing angular momentum equals − ⟨ d L z d t ⟩ = 32 G 7 / 2 m 1 2 m 2 2 m 1 + m 2 5 c 5 a 7 / 2 ( 1 − e 2 ) 2 ( 1 + 7 8 e 2 ) {\displaystyle -\left\langle {\frac {dL_{z}}{dt}}\right\rangle ={\frac {32G^{7/2}m_{1}^{2}m_{2}^{2}{\sqrt {m_{1}+m_{2}}}}{5c^{5}a^{7/2}\left(1-e^{2}\right)^{2}}}\left(1+{\frac {7}{8}}e^{2}\right)} The rate of period decrease is given by [ 13 ] [ 15 ] − ⟨ d P b d t ⟩ = 192 π G 5 / 3 m 1 m 2 ( m 1 + m 2 ) − 1 / 3 5 c 5 ( 1 − e 2 ) 7 / 2 ( 1 + 73 24 e 2 + 37 96 e 4 ) ( P b 2 π ) − 5 / 3 {\displaystyle -\left\langle {\frac {dP_{b}}{dt}}\right\rangle ={\frac {192\pi G^{5/3}m_{1}m_{2}(m_{1}+m_{2})^{-1/3}}{5c^{5}\left(1-e^{2}\right)^{7/2}}}\left(1+{\frac {73}{24}}e^{2}+{\frac {37}{96}}e^{4}\right)\left({\frac {P_{b}}{2\pi }}\right)^{-{5/3}}} where P b is orbital period. The losses in energy and angular momentum increase significantly as the eccentricity approaches one, i.e., as the ellipse of the orbit becomes ever more elongated. The radiation losses also increase significantly with a decreasing size a of the orbit.
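For a concrete sense of scale, the Python sketch below evaluates the energy-loss and period-decay formulas with rounded, Hulse–Taylor-like binary-pulsar parameters; the values are illustrative approximations, not a fit to the pulsar data:

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
Msun = 1.989e30        # kg

# Approximate parameters of a Hulse-Taylor-like binary pulsar.
m1, m2 = 1.44 * Msun, 1.39 * Msun
a = 1.95e9             # semimajor axis, m
e = 0.617              # orbital eccentricity
Pb = 27907.0           # orbital period, s

enh = 1 + (73 / 24) * e**2 + (37 / 96) * e**4   # eccentricity enhancement factor

dEdt = (32 * G**4 * m1**2 * m2**2 * (m1 + m2)
        / (5 * c**5 * a**5 * (1 - e**2) ** 3.5)) * enh

dPdt = (192 * math.pi * G ** (5 / 3) * m1 * m2 * (m1 + m2) ** (-1 / 3)
        / (5 * c**5 * (1 - e**2) ** 3.5)) * enh * (Pb / (2 * math.pi)) ** (-5 / 3)

print(dEdt)   # ~8e24 W radiated on average with these rounded inputs
print(dPdt)   # ~2.4e-12 s/s of period decrease, the observed order of magnitude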
https://en.wikipedia.org/wiki/Two-body_problem_in_general_relativity